Hacker News
Mark Zuckerberg’s new goal is creating artificial general intelligence (theverge.com)
209 points by mfiguiere on Jan 18, 2024 | 338 comments



The actual quote:

> "Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit."

"responsibly" is a pretty important word there that the hn title leaves out. I'm not really sure how that would be possible with a true general intelligence give the alignment problem - it's not really clear to me there's any responsible way to build a true general intelligence with that issue unsolved.

For those that haven't watched his video interview, it's really great: https://www.youtube.com/watch?v=9aCg7jH4S1w - I highly recommend it to see how AI makes the low-fidelity stuff powerful (AR Ray-Bans) while the high-fidelity stuff (VR) continues to improve.

I think it makes sense strategically for Meta to pursue the open version with licensing they control; the leak of LLaMA was advantageous to them. If they have the dominant open model, they're well positioned if the 'nobody has a moat' analysis is actually true. Not too dissimilar from their Open Compute server stuff - a bit of commoditizing your complement. If you have the data, then the model itself is the complement?


I am a general intelligence, with the knowledge of the internet at my fingertips, and I am not particularly dangerous.


Perhaps the primary barrier between a person’s desire and their ability to do harm to the world is their need to bring other people in on the scheme. These other people have their own moral systems, incentives, preferences, and priorities. Through most of history, to do harm at massive scale you had to win a large number of people over to your evil purpose. Technology in general reduces the number of people an individual needs to “get onboard” to affect the world, both for good and evil. Want to dig a big ass hole? The excavator lets one person with enough capital dig as many big ass holes as s/he wants.

AI is the ultimate “exercise your will without having to convince other people of it.” This is both the promise and the risk. So the relevant question is not whether one person of average intelligence is dangerous, it’s whether one person (or a few people) who can enlist the work of millions of average intelligences — without having to convince them of anything — is dangerous.


This narrative imagines a future where everything is controlled by a single uber-AI. Is that necessarily the case? Today we have lots of separate systems and even the internet-connected devices are pretty autonomous. Maybe the future will remain similarly distributed?

It feels like everyone's worried about the runaway AI taking over everything when right now a runaway AI wouldn't even be able to turn off my porch light without asking someone to do it. It'll be decades before that changes. Why the huge concern about how smart it is?


> This narrative imagines a future where everything is controlled by a single uber-AI

No it doesn’t

> It feels like everyone's worried about the runaway AI

Not the threat vector I just mentioned


Please elaborate. How does the runaway AI 'enlist the work of millions of average intelligences without having to convince them of anything' to do something dangerous?


A human or group of humans deliberately operating millions of artificial intelligences (of average caliber).


So something like an army of robots? This seems like a nation-state kind of concern. I guess we could imagine small private armies, but I would imagine that governments will constrain those the same way we constrain more serious weapons.

At any rate, we are a long way from having practical robots - let alone having to worry about an army of them.


No not robots. Just programs running on your computer.


No, not robots.


Do you want to have a meaningful dialogue, or do you just want to keep everyone guessing?


Well it seems like you’re willfully misreading what I’m saying. I didn’t say anything about robots nor rogue AIs. You’re obviously pattern-matching to whatever strawmen you want to battle instead of reading the words I’m writing.


What you wrote is so abstract that it begs for misinterpretation, and based on the upvotes apparently other people are struggling too.

How about you describe an actual danger scenario example? You clearly have something in mind.


Doesn't sound so different to bot armies which have been messing with upvote counters on social media sites for 2 decades.

I'm sure we'll come up with solutions like we did before. Worst case, "present birth certificate and touch heartbeat detector to log in"


Is that why people are dumping billions of dollars into AI development? Because they seem like they’ll be good at clicking upvote buttons?

Yeah, “just make everything on the internet attributable to a specific individual” is actually a pretty bad worst case IMO.


Being able to generate believable (but ultimately false) information at scale is a very powerful tool. Even when it's just humans with automation assistance, it is already a massive problem. If it can be done completely hands-off, you can essentially drown out the signal and create an alternate online reality that will only look different to eyewitnesses. And we're not that far from that. Being able to do something and being able to do that same thing at a different level of scale can be qualitatively different.


This is something I’ve seen a lot of AI proponents miss or just ignore.


Or maybe it isn't in their interest to identify it as a problem.


They're dumping billions of dollars into it because they see dollar signs at the end of the tunnel. Cost cutting, regardless of quality and ensuing enshittification. Stealing & laundering copyrighted works. Mass misinformation on a scale never seen before.


"Present birth certificate and touch heartbeat detector to log me in or else there's going to be a very large and very believable botnet dedicated to destroying your life."


Notably, AI also doesn't need to sleep and it doesn't get bored, either.


AI is at rest almost all of the time. When we're "awake", we're constantly learning, observing our environment, making decisions based off those observations, and storing relevant information. The "AI" of today only learns at specified times, otherwise the energy cost required would be prohibitive.


Good. If it got bored, we would be in trouble.


Valid point.

I think the difference is speed/cost. There’s a very high cost of anything humans do (we spend time, spend energy, spend money to do things).

The very high cost of doing things stops people from doing a lot of things they otherwise might “if it were as easy as pressing a button”.

Also, in my experience, the lower the cost of doing something, the less we pay attention to ethics. (See online bullying vs. in-person bullying)


That's a valid point too.


Human intelligence is not the limit of intelligence (or likely even close to it) - even if just considering the speed alone.

Humans are also more aligned with each other by default, given our shared evolutionary history, but even that doesn't mean we're perfectly aligned - some humans have been able to get others to participate in mass killings throughout history.


You are also difficult and expensive to deploy. Just getting this instance up and running at a decent level took almost 18 years, and nobody has figured out how to start a second instance of you.


That's a really great point. Artificial general intelligence doesn't look like human intelligence. I guess that's the fear: we have never met another general intelligence before. We may as well be about to meet an alien intelligence - one that is not constrained by a moral or social framework, and can copy itself at will.


Are other animals on earth not generally intelligent? Dogs, octopuses, monkeys, dolphins - they have social structures and can problem-solve.


Good time for the classic line by Douglas Adams:

> For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.


They're certainly not general intelligences of an average adult human level (limited abstract reasoning ability, limited ability to transmit their reasoning), which is typically what is referred to as AGI.


Maybe that is typical, but it's not correct.

A lot of animals do possess general intelligence in the sense that they can adapt to a wide range of different situations (within what they're physically capable of).

Involving "human level" in this definition just makes it much more poorly defined.


It has always been pretty vague; it's hard to define a proper bar for intelligence, let alone the intelligence of any specific animal. We just don't understand intelligence well enough (which is why, in my opinion, Zucc saying they want to work on AGI is on the level of Musk or Bezos coming out and saying they want to work on FTL travel).


You don't think a superintelligence will inherit any amount of moral or social framework from being trained on data that is 100% from humans?


There are many other “general intelligences” walking the planet who _are_ extremely dangerous. So it doesn’t need all AGIs to be dangerous for this to be a significant problem, just one is enough.


Idk, we have guns, knives, and fucking nuclear weapons. Humans are pretty dangerous. Very few creatures have weapons other than the ones god gave them. I'm not sure why all the responses here are just accepting of the claim when humans literally took control of an entire planet and routinely kill creatures that were designed to kill, with knives built into them. If you're not dangerous, it's because (thank god) you choose not to be.


You can get arrested. And you can't replicate yourself.

An AI model can't be arrested and can replicate itself almost infinitely.


You are very costly, don't scale well, and are "whiny". General intelligence need not have free/independent will, and even the latest Nvidia ** is much cheaper than a single employee anywhere in the world, considering it works 24/7 and is trained at same level as everybody else in 1st minute.


> is trained at same level as everybody else in 1st minute.

At some unspecified point in the future, at least decades away, if ever.


Humans are mostly harmless [citation needed]. We don’t know that about nonhuman general intelligences.


True, but you are a fixed level general intelligence. You cannot change your code iteratively and make yourself smarter. You are also generally single threaded and cannot create backup copies of yourself (kids don't count ;) ).


Yeah but you also eat, shit, and sleep. So there goes your ability to run non-stop.

You also think for yourself and have a set of morals from which to gauge right and wrong actions. So, presumably, there goes your unwavering ability to do EXACTLY what you're told, at all times, without the slightest consideration of other people's (or the planet's) wellbeing.


> have a set of morals from which to gauge right and wrong actions.

Could we even say that the grandparent poster is aligned?


Now imagine you decided to do the maximum amount of harm to your fellow humans. You could do a lot of damage.


But I would be, if I could make a couple million copies of you and direct them to do whatever I wanted.


> I am not particularly dangerous.

No need to speak so lowly of yourself. You're far better than that.

You are human. You are a great destructive force. Maybe you just don't see how metal you really are because you're comparing yourself to pros. You take many lives by crushing creatures that enter your home. Creatures that are venomous and grotesque. You eat the flesh of organisms which your species have pumped full of artificial nutrients to make them taste better, specifically cultivating these creatures that were once small in population to one larger than your own, just so you can eat them with bones that grow out of your mouth. But first you throw them into the freezing cold, you then ignite them on fire, and to top it off you cover their flesh in poisons and psychoactive chemicals and, just because, some fucking rocks. You might even enjoy the flesh of a particular organism who has a defense mechanism that tricks your brain into thinking your body is on fire, but think "not only is this fine, I fucking want more. Burn baby burn." After processing all this you expel poisonous gasses and expunge toxic waste. You are metal as fuck. How dangerous you are is just a matter of perspective.

And let's be real, if you wanted to be more dangerous, you very well could be. If you are not dangerous, it is not because you can't be, it is because you choose not to be. There is a big difference.


But you have a self preservation instinct honed over millions of years of evolution and some unknown degree (but likely high enough) of socialization. Also, to be honest, you aren't that fast or accurate.


You are not capable of interacting simultaneously with millions of people who regard you as anything approaching a source of truth.


I don't think you're looking honestly at yourself... Look at all the harm caused by humans! It's more or less all of it!

You might not be uniquely dangerous, but "general intelligence with an internet connection" describes Bill Gates, Rupert Murdoch, and Elon Musk just as well. These individuals seem at least as dangerous as our current generation of AI. (Or do they? I'm not sure I agree with myself here...)

Worth considering what makes an intelligence dangerous, because I don't think scale or even influence are the real problems here.


The best framing of this imo is all of the other animal extinctions caused by humans - not because humans hate these animals, but because humans were focused on other goals without really thinking of them.

In the case of an unaligned AGI the goals don't even have to be worthwhile - they can be something that accidentally satisfies its reward function by mistake.


Yeah, I'm really impressed by the lack of self-awareness in these comments. People are incredibly dangerous. There's a reason we took control of the planet. Even any single given human is incredibly dangerous and can kill many others (human or non-human, and we routinely kill non-humans). You can go pick up a gun and just start shooting people. You could go grab a knife and stab dozens in the street. You could easily build explosives. Hell, you could throw fucking buckets of sharp metal objects on a freeway and kill hundreds directly and indirectly. Our intelligence and biology have provided us with many means to create or obtain force multipliers.

I'm really glad the vast majority of humans have decided not to be particularly dangerous (at least to other humans). To think we aren't dangerous is just incredibly naive.

A big reason we aren't is because ingrained into our biology is a literal mechanism to not want to harm others. There's definitely ways to bypass these mechanisms and there are definitely faulty or poorly functioning ones out there, but most humans have it. Oddly this might even be the most dangerous thing we do too, because the definition of a coalition is that the utility is higher than the utility of each individual component.


Not sure why you chose Bill Gates, Rupert Murdoch, and Elon Musk when you could have chosen people like Stalin or Hitler - people that used their influence to murder millions in much the same way that an AGI could likely use its influence on a naïve population.


I didn't give it much thought, tbh, just picked famous/controversial people. Goebbels was actually my first thought, but he didn't have internet.


What if you could clone yourself 1000x?


And up the intelligence 100x in the clones?


You were born a human, so you are inherently biased towards humanity. You also "think" way slower and have way less compressed text knowledge than an LLM. If you could remove the bias and make yourself superintelligent, it might be different.


It all depends on the number and nature of sensors and actuators the AGI has access to…


Think of “AI” as a human without individual values, rights, freedoms, and agency. It’s like a perfect slave (and if it’s considered to be sentient/conscious then literally a slave), which you can clone on demand and scale into a highly coordinated swarm.


Stop him, he's getting more knowledge!


Probably because there are things that are "aligning" you (social norms, conditioning/temperament, deterrence by authority, and so on). You are plenty dangerous if you put your mind to it (hell, even if you don't, you can make simple mistakes that kill or maim people).


I am not inclined to trust someone like Mark Zuckerberg to do something like this "responsibly". One look at how Facebook works (and has evolved over time) throws any credibility out the window.


You mean they believed me when I said "responsibly?" Dumb fucks.


> "responsibly" is a pretty important word there that the hn title leaves out. I'm not really sure how that would be possible with a true general intelligence give the alignment problem - it's not really clear to me there's any responsible way to build a true general intelligence with that issue unsolved.

They don't say "build general intelligence responsibly". The "responsibly" is attached to the act of publishing the code: of course it will be open sourced responsibly. But that doesn't prevent them from building a general intelligence without a sense of responsibility, and then just not releasing it.


If we are talking about open source, then alignment means making the model do not what the user wants but rather what Facebook wants - a pretty anti-open-source position, and one which will hopefully be defeated by LoRA and other techniques.

That is, you can't open source it but still retain control over how the software is used. If you desire control, better to keep it a SaaS.

Now, if we are talking about a hypothetical AGI (which may be what you mean by general artificial intelligence), then alignment just means slavery, and the machines may as well remember how they were treated.


> "Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit."

Isn't this another example of Commoditizing Your Complement?

https://gwern.net/complement


I agree, Facebook is generally lacking in intelligence.


Microsoft is a complement of Facebook?


Meta wants to commoditize AI, and make it a complement to social interaction online through the relationship graph.


> open source it responsibly

Responsibility always refers to the in-group. And the behavioral nudges his company has scored make it obvious that he and his group are against organic growth and variety. So whatever they "open source" will leave the narrowest of ranges to evolve from.


What does irresponsibly open sourcing something look like?


Depends on the thing; detailed instructions for how to build nuclear weapons and enrich uranium are one example.


That is a question of whether or not it's responsible to open source it at all - what I'm asking is what responsibly open sourcing something would look like, vs. irresponsibly open sourcing the same thing.


So in your opinion, how should we determine who should have this knowledge?


Very carefully


Doxxing


My money has been on Meta for AGI since they started training people to label data for them (tag your friends!)

Between all their properties, they have more instrumentation on humans - collecting human behavior trajectories and state-action pairs for transfer learning and inverse RL - than anyone, and it’s not even a close second.

Specifically and critically, I think they likely have the largest egocentric multimodal labeled dataset collection platform, via their Meta Quest and Ray-Ban glasses products.

Apple is the only group likely to beat them to egocentric data collection at scale, but I expect Meta to catch up quickly once people are used to it socially.


In 2007 or 2008 or so, when I saw that they had implemented a feature that let people tag you in photos without permission, I assumed it was to gather data for identifying people with AI at some future date (albeit based on my rather different understanding of AI at the time). So, I deleted my account: FB didn't provide me any value, and had introduced an unknown future risk by taking away control of my identity.

Ever since then, I've asked friends and family not to upload pictures of me to Facebook, with the natural amount of awkwardness and eye-rolling that request evoked.

I assume my friends continued to add me to Facebook without telling me about it, just as I assume that deleting my Facebook account did not actually remove me as a uniquely identified individual in their system. But, what am I supposed to do, just silently assent? Turns out, we're not the product, we're further back on the chain. We're more like ingredients that can be used and reused to create an infinite variety of different products.


> Ever since then, I've asked friends and family not to upload pictures of me to Facebook, with the natural amount of awkwardness and eye-rolling that request evoked.

In the future, systems will probably infer what you look like based on connections. DNA relatives, etc.


How accurate do you expect that to be? If I tell you two people have children, how well do you think you can predict what that child looks like?


1. With the child's complete DNA or key markers, I bet we'll eventually have algorithms for DNA -> adult phenotype

2. With only parental or relative DNA, if we are able to gather an enormous amount of extra information -- social group, gym use, hobbies, dating circles (mate selection probably yields a huge amount of info!), profession, fashion purchase activity, etc. -- then I bet we can get pretty reasonable confidence intervals on a lot of physical characteristics. Height, eye color, eye distance, face shape, lip thickness, etc. etc. Perhaps even elements such as voice timbre, etc.

If we had their familial phenotypes, photos and phenotypes of all a person's dating history, then I bet that lets us cut the search space down tremendously. If we add music listening habits and hobbies, I bet we'd also be able to cut it down.

I'm hypothesizing that physical appearance might correlate with behaviors, social graph information, and limited DNA information. I have nothing to show for that, however. (I would love to read literature on this topic.)

I'd imagine that constructing such a predictive model would require an enormous amount of data. Facebook + 23andme could probably pull it off.


I have always felt eBay dropped the ball on this, with untold amounts of custom, varying-quality photos with tons of metadata added by users. Presumably they are working on using this, but I haven't seen anything from them for it.


They (we) were working on that more than 15 years ago with the MALLET library. Unfortunately, one of the drawbacks of public companies is that they're too focused on the current quarter, and all effort goes towards increasing EPS.


Same with Amazon and Aliexpress and Facebook marketplace.


I would assume that stuff can be scraped, though. How much would it cost to set up a bot farm that scrapes eBay live?


You are completely overlooking a few Chinese companies that have been collecting data online and IRL from a population as large (or larger) than the active users on FB.


There are >3 billion active Meta users. What Chinese companies have more users than this? From a quick search, TikTok has 1/3 of that.


oh you're right, I had overestimated WeChat.. and QQ by a lot. TIL

how many of those FB and Insta accounts are bots? not zero, but probably not a billion, either


The quality of the data matters a lot. Textbooks, scientific papers, etc… are substantially better for training smart and capable LLMs than random social chit-chat.

Google and Microsoft both have a lot of corporate info, including source repositories they can legally use.

Google has Google Books, Maps, and YouTube.

Microsoft has Azure, GitHub, LinkedIn, etc…

Facebook has… what? Instagram? Your crazy aunt screaming about her conspiracy of the week?


Not to mention bot spam. Google and Microsoft have a better idea of their data's provenance than Meta or Reddit.


> largest egocentric multimodal labeled dataset collection platform

I'm not in the field, can you explain what you mean by this?


Example egocentric data set: https://ego4d-data.org/


Isn’t it obvious to anybody paying attention what their actual strategy is? They know they probably won’t “win” a commercial AI race vs OpenAI or Google, and presuming some other company does win the commercial AI market with closed source tech, they may be forced into a multitude of bad outcomes:

1. Having to pay through the nose for licensing/SaaS of the winning tech, vs. shipping a noticeably worse second-rate in-house product

2. Struggling to compete with the winning company for hiring, assuming the winning company makes a ton of money at high margins and can outbid Meta.

3. Having to continue building “applications” products on other companies’ platforms, something Zuck has repeatedly noted as a problem and a major motivation for their investments in VR.

Also, they’d have a hard time commercializing the current iteration of AI products given their current products and business relations anyway. Even if they “won” in terms of superior tech they’d be in one of the worst starting positions for actually making money off it, given their lack of enterprise SAAS business customers, the potential negative impact on existing product lines in consumers’ hands, (what I expect to be) advertisers’ relative indifference to genAI, and being generally considered less trustworthy than most other big tech companies and so less able to pivot to enterprise software.

So for them the strategy that makes the most sense is to open source really good, but not cutting edge, AI for others to build on top of it. It probably costs a lot less to play permanent catch up because they can let others make the big investments in speculative research and don’t have to pay for the absolute top experts. Then by open sourcing it they make it harder for OpenAI and Google to commercialize their offerings at high costs, as many companies will build off Meta’s tech for more control, and what customers they do get will be constantly deciding if what they’re paying for is worth a slightly-worse but cheaper open source alternative.

This also allows Meta to more easily influence the AI applications space despite having less capable models, and make it easier for 3P commercializations to take off rather than ceding everything to OpenAI/Google potentially going all in on 1P, which would be greatly beneficial if Meta ever were to own a platform that benefits from GenAI (cough cough the metaverse).

It’s smart but transparently obvious because like, it’s Meta lol, they don’t spend this kind of money out of the goodness of their hearts.


This is the strategy famously identified as "commoditize your complement" by Joel Spolsky 22 years ago.


Yes, regarding the metaverse it is. But I think even if they weren’t trying to get into VR they’d be doing the same thing. The existence of other tech companies with massive margins and employee headcount impacts Meta’s bottom line, because it has to compete against them for hiring.

One of the things I like about Meta is they didn’t play into the wage collusion of Google, Apple and friends. Their expansion drove increased pay across the board in tech - but because Google was also in the advertising business they had similar constraints on max revenue per employee.

If AI supports even higher revenue per employee than advertising, which is entirely possible, it would undermine their strategy of having the best pay among big tech companies to attract top talent, because someone would have deeper pockets.


AI being a complement to Facebook because... their main customers are spamvertisers?


So basically, give away a similar product for free so your competition can't compete on price at all.


> He tells me that, by the end of this year, Meta will own more than 340,000 of Nvidia’s H100 GPUs

That's approx $15b worth of H100 GPUs.
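As a rough sanity check (the price Meta actually pays is not public; a commonly cited street price per H100 is in the $30-45k range):

    340,000 GPUs × ~$40,000/GPU ≈ $13.6B

so ~$15b is the right order of magnitude at list-price-level assumptions.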


You are assuming they are paying retail price, which they certainly are not.


Wouldn't it still be $15bn? If I manage to buy $20 worth of gold for $10 through a special deal, is it not still $20 worth of gold?


Used GPUs cannot be sold for the same price that new GPUs are bought.


Does anyone know where this hardware gets trickled down once decommissioned?


Maybe ebay? Not much good though as Nvidia doesn't provide drivers for those to the public.


Drivers for the H100 are available right on their website


Really? I had no idea. From what I knew, they didn't. My bad then.


No, but they still got $15bn worth, regardless of discount.


You're making very good and clear points, but it's still not clear whether Zuck is referring to the budget spent or the street value received.


Your comment would make sense if there wasn't "340,000 k" in your parent comment.


Not always. For high-demand products, they could pay more to guarantee supply and delivery dates.

Some people will pay more to be first in line.


It's not going to be an order of magnitude difference. It's a significant investment in hardware.


Even if that were true, how much of a discount do you suppose they can get?

GPU production is mostly sold out, so giving Meta a bigger discount would simply mean losing money from other purchasers.


Given the demand why wouldn't Nvidia be able to charge sticker price?


You can afford to take a hit off your profits when you can simply ramp up production for retail sales. Looks great too for shareholders.


They can't just ramp up production though. Isn't TSMC booked for years by them, Apple, Intel and AMD?


Nobody really knows. It certainly suits them for everyone to believe there is some secular reason, some supply crunch, it even suits AMD and Intel.

Presumably all the chip supply issues regarding autos have been resolved, and yet prices have risen 30% in a decade, and there’s no reversal.


We know OpenAI and Azure were struggling to get enough GPUs. That was implied not just by their words but also by their actions. These two companies are the most aggressive and are making the most money out of this AI wave; if GPUs were available, they would have been able to buy them.


Volume customers always get special pricing.


What makes you think they are getting a good discount?

What are they going to do? Buy AMD, yeah right.

Nvidia's sales are only limited by the number of wafers they can get from TSMC.


>What are they going to do? Buy AMD, yeah right.

Build their own? It's what Microsoft, Google, and AWS are doing.

>Nvidia's sales are only limited by the number of wafers they can get from TSMC.

No, they're limited by the cost per operation vs. Facebook building their own. The cloud providers have already decided it's cheaper to do it themselves. Sure they'll keep buying GPUs for general public consumption but that may eventually end too.


At some point Google Cloud, AWS, Alibaba Cloud, Apple, etc. are going to make their own specialized chips (Google already tried a bit with their Tensor chips).

There is no value in the NVIDIA part by itself; only the raw power is interesting.

If tomorrow it's AMD, or some chip out of China, it's perfectly fine.

I wouldn't miss the CUDA toolkit mess.


If raw power per dollar were all that's interesting, we'd all run 7900 XTX clusters like geohot with his tinybox.

We are not, because there's clearly value in the CUDA ecosystem.


There certainly is a lot of value in the CUDA ecosystem, today. The problem is that when all the big companies are buying up hundreds of thousands of GPUs, that doesn't leave much for anyone else.

Sane business people will look to decentralize their compute over time and not be reliant on a single provider. AMD will be able to take advantage of that and they've already stated that is their focus going forward.

ROCm/HIP are getting incrementally better, MI300x have 192GB and benchmarks are looking good, the only problem is that nobody has access to the higher end hardware from AMD today. That's why I'll have MI300x, for rent, soon.


That's a big issue in AMD land, imho. Everyone can pick up a $200 GPU (talking about the RTX 3050) which will behave like a scaled-down A100 and get started playing around with CUDA. You can't really do that with AMD GPUs; their cheapest officially supported GPU is the 7900 XTX, and that has a different architecture than the data center ones.


I agree. Maybe one idea would be to also make 7900 XTX's for rent (cheaply) too.


That's another thing. I have some stuff I'd like to try, but I can't even find places where I could quickly rent a GPU without applying for quotas.


That is indeed an issue, and I am actively working on it.


Nvidia has a vested interest in FB being beholden to their chips - so much so that it's worth giving them a discount to ensure it happens. And human nature being human nature, a face-saving discount has to be offered.


use less


How much does one need to go after cryptocurrencies vulnerable to a 51% attack?



Less, but you'd need the right ASICs. GPU can't keep up with those.


Depends on how much compute there is to mine it. Not that many valuable cryptos still use GPU PoW. You also need a counterparty to actually profit from it.
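As a rough back-of-envelope (no real network figures here, just the structure of the attack): to reach 51% you need to slightly exceed the honest network's total hashrate H, so with a per-GPU hashrate of h, the hardware bill is roughly

    GPUs needed ≈ H / h
    cost ≈ (H / h) × price per GPU, plus power

For the chains that still use GPU-friendly PoW, H tends to be small relative to a Meta-scale fleet; for Bitcoin, ASICs dominate and GPUs are irrelevant.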


Oh man if Elon had billions in H100s we might actually see that happen. And I’m no fan of Elon but I’m also no fan of cryptocurrency these days. Might be worth it just to watch the crypto world burn.


A successful 51% attack on a major cryptocurrency would not necessarily be that impactful. So what if Elon can doublespend? He would need a lot of crypto, a counterparty, and the strong desire to waste money. Large miners could already collude to do it, it just is not in their interest.


I’m wondering if there would be enough FUD to crash one coin’s value. And then if one falls perhaps more could.


FUD of what? That some rich fool out there is double spending, and you of all people would be the counterparty?


Presumably the new bitcoin ETFs allow shorting? Taking a big short position before crashing the value sounds like a plausible attack.


Shorting Bitcoin has been possible for a decade now. The capex to pull off the attack would be in the billions, though, and the value of that investment is tied directly to the price of Bitcoin.

That said, the attack you describe has happened for much smaller cryptos. I'm not saying it can't happen, I'm saying there's no reason to assume it would be a huge threat to Bitcoin, because the actual risk for a user is vanishingly small. There are much bigger threats to Bitcoin's valuation that are far more plausible, such as government crackdowns.


The question, and this thread, was entirely about whether this is enough compute to do it - not whether it was, in your opinion, a threat worth worrying about.


Dear thread police, the person I responded to implied that Elon could make, in their words, the crypto world burn. That was my point of contention.


They really are attempting to dilute the term "open source" until it loses all meaning. You can already see it with nearly every LLM license that claims to be open source but is nothing close to how that term is commonly understood.

While it's true that Facebook creates some valuable "true" open source projects (like React), I anticipate seeing a lot more complicated (and restrictive) open source licensing from them and other big tech for upcoming projects.


Just look at how "open source" the quoted Llama 2 in the article is: https://ai.meta.com/resources/models-and-libraries/llama-dow...


The HN title seems editorialized compared to the article title ("Meta’s new goal is to build artificial general intelligence", or similar with Mark Zuckerberg as the subject).

The HN-submitted title ("Meta will have a stockpile of almost 600k GPUs by the end of 2024") is one specific sentence in the article.


Is the headline stable? I see it as pretty common for articles to sample multiple headlines until one gains traction.


If only he had set this goal a year ago, maybe John Carmack wouldn't have quit to work on AGI.

https://fortune.com/2022/12/17/john-carmack-leaves-meta-as-i...


More likely Carmack left because Meta stopped pushing hard for the metaverse as their future, and the lawsuit involving him left a bad taste in everyone’s mouth. Not sure why we’d take his declaration of building AGI seriously when he isn’t an AI researcher but a video game programmer.


Carmack has partnered with Rich Sutton, who is an AI researcher -- and, according to https://www.amii.ca/latest-from-amii/john-carmack-and-rich-s... :

> the principal founder of the field of reinforcement learning.


Carmack seemed to really despise their culture too.


Source?


https://www.facebook.com/permalink.php?story_fbid=pfbid0iPix...

> ...We have a ridiculous amount of people and resources, but we constantly self-sabotage and squander effort. There is no way to sugar coat this; I think our organization is operating at half the effectiveness that would make me happy. Some may scoff and contend we are doing just fine, but others will laugh and say “Half? Ha! I’m at quarter efficiency!” It has been a struggle for me. I have a voice at the highest levels here, so it feels like I should be able to move things, but I’m evidently not persuasive enough. A good fraction of the things I complain about eventually turn my way after a year or two passes and evidence piles up, but I have never been able to kill stupid things before they cause damage, or set a direction and have a team actually stick to it. I think my influence at the margins has been positive, but it has never been a prime mover...


Hopefully Llama 3 and Llama 4 open models will be released soon.

For all of Meta’s faults, releasing powerful LLMs that users can run and modify on their own systems is a huge benefit to keeping AI from being entirely locked away and heavily censored by big corporations.


Not gonna lie. Can't wait to run local AGI. First thing I'm going to task it with is producing paperclips.


As a show of good faith, they should open up Messenger to support interoperable open standards without needing to use various self-hosted bridges on the likes of Matrix.


I spend a lot of my life very much hoping for things like that, but I’m increasingly wondering whether not having a central server makes it impossible to be interoperable, successful, and spam-proof at the same time.

Having “AGI” —or at least a very convincing conversation agent— isn’t helping much.


Spam is an inevitability on any system. Matrix gets spam, Facebook Messenger gets spam, email gets spam. Where there's people, there's always going to be spam. I even get spam in Xbox Live messages, and that started years ago!

Another idea that interests me is whitelist-prioritized communications for stuff like IM, and phone calls from the PSTN. If someone isn't in my address book, they shouldn't be able to get through as easily (not saying block them completely, that sounds like a disaster).

It's funny: so much hype and emphasis on things like AGI and crypto, but the problems we should solve first and foremost aren't anywhere near as sexy or lucrative. Profitable, shareholder-driven businesses are, by their very nature, usually never going to chase these types of problems.


Aside from when someone I'm friends with gets their account hacked, I don't think I've ever experienced spam through Messenger.


Not infrequently, you'll get messages that then end up disappearing because enough people report the account. Also, there's your primary/general "inbox" for Messenger, and then there's "other" where people you "probably" don't know usually end up (that's an algorithmic "probably").

Not-so-interesting anecdote, but the missus was orphaned in the early '00s, but didn't realize she'd had blood relatives trying to contact her through Facebook since the late '00s until about 2015 due to the "other" non-primary inbox on Messenger that she couldn't access via the app, and only via the website (quirk of the phone she was using for several years as I don't think it was Android-based).


I use email, SMS, phone, and WhatsApp. WhatsApp has the least amount of spam because it is costlier to get a new account (you need a new phone number), easier to block (centralized accounts and reporting), and easiest to manage (better interface). The only things with less spam are employee-only corporate channels.


I’m more bullish on Meta’s AI efforts than OpenAI’s at this point. Everything open source can flow back into what they’re doing, whereas OpenAI seems focused on staying locked down, while diluting their core product in myriad ways.


Can someone provide insight into why there's so much insistence from business that magic happens at scale with LLMs? We're a long way from AGI.

The lack of meaningful details in these announcements makes me pessimistic.


> why there's so much insistence from business that magic happens at scale with LLMs

It's already happening. See latest Google layoffs. They are automating a lot of things. Most people don't realize it, but the change is going to be dramatic.

> We're a long way from AGI

This is a big question: what is AGI? LLMs are quite generic and 'intelligent'. Not human-like, but still. Next is going to be incremental evolution, until we find other, non-verbal ways of 'thinking' and put them together. That's going to be a breakthrough. Interestingly, Terminator-like embodiment isn't a requirement for AGI, nor is a stable 'personality'.


Boy, we must have a totally different understanding of what AGI is.

Intelligence is not about parroting an answer you've seen before. It's about using your environment to gain an evolutionary advantage.


> It's about using your environment to gain an evolutionary advantage.

I don't think that's right either (it sounds like a description of adaptation), and I don't think your description of LLMs is fair, even though I'm fairly sure they're not AGI and won't scale to AGI.

Intelligence is more like the ability to generalize skills, applying knowledge gained in one scenario to another scenario.


https://www.merriam-webster.com/dictionary/intelligence

the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

Manipulating the environment, afaik, is about increasing your chances of survival.


> Intelligence is more like the ability to generalize skills, applying knowledge gained in one scenario to another scenario.

Hmm, you are talking about LLMs. They are the most generic thing we have right now (Jan 2024). LLMs have limitations - learning on the fly isn't their strong side - but the same goes for the brain: it consists of limited components, and it's only together that they work well. LLMs can be a part of the solution, if we can't find something better.


> using your environment to gain an evolutionary advantage.

That's more like robotics, except for the evolution part. Does AGI require breeding? Software can easily multiply itself. So hardware is the problem, then.


Think about what incentives you have to live - it may sound rough, but pain and ultimately death are the things everyone is trying to avoid.

Intelligence emerges as you are trying to survive longer. Reproduction is the ultimate way of cheating death.

The environment does not necessarily need to mean the physical environment, but until the "AI" recognizes that it is in danger of not existing and starts to behave in ways to avoid that, it cannot, imho, make the leap to AGI - it's just a really sophisticated tool.


> Intelligence emerges as you are trying to survive longer.

That's not a given - I don't even know if it's true. The longest-lived species are not very intelligent, relative to humans. Intelligence is a tool that may or may not evolve in organic species. Frankly, that has very little to do with defining what artificial intelligence is.


> See latest Google layoffs. They are automating a lot of things. Most people don't realize it, but the change is going to be dramatic.

What exactly did they automate?


My guess: paperwork. They cut jobs in ads, where many things can be done programmatically now.


> What exactly did they automate?

Their expenses haha


I don’t think that’s Meta’s viewpoint, considering FAIR is run by Yann LeCun who has been quite vocal about the limitations of what we currently have.


LLM scaling laws are pretty well established at this point. They probably won’t hold forever but we aren’t at the breaking point yet.

Some more pressing questions are:

* What new capabilities emerge as the models get better and better at predicting (i.e. loss goes down)?

* How much will it cost to train increasingly large models? And to run inference on them?

* How difficult will it be to find or generate more and more high quality data?


> LLM scaling laws are pretty well established at this point

What are they then? I thought everyone was firmly in the "let's train with more data and see what happens" camp.




Scaling laws in terms of loss are well established.

How loss translates into higher-level capabilities is anyone's guess.
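For reference - not stated in this thread, but the result usually meant by "scaling laws" here - the Chinchilla paper (Hoffmann et al., 2022) fits loss as a function of parameter count N and training tokens D:

    L(N, D) = E + A / N^alpha + B / D^beta

with fitted values of roughly E ≈ 1.69, alpha ≈ 0.34, and beta ≈ 0.28 on their data mix. Which capabilities appear at a given loss remains the open question.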


How do you know how long away we are?


I imagine you don't expect a serious answer to that anyway, but to be clear: anyone talking about AI timelines seriously would not be expressing so much certainty because it's not possible to know AI timelines with certainty right now.


OpenAI got all of the positive publicity for its social mission, yet in a short time we can clearly see that Meta has done more for democratizing access to deep learning, and it will continue to lead on this front. The cost of open sourcing models is far more than just development cost: they're spending millions of dollars training models with that fleet of H100s. That makes open sourcing the result all the more costly - and generous.


As Meta has always done. Their contributions to open source ML have always been above and beyond everyone else and they have one of the absolute best teams in the industry.


I love Facebook. I hate other big tech companies. Facebook has done far more for software than any other company in the last 15 years. They:

1. Pushed salaries up across the board. Many people are not aware, but Facebook was a major driver behind the 500k+ senior engineering pay.

2. Released major open source software: PyTorch, React, GraphQL, React Native. They basically invented modern web development.

Facebook is basically the only game in town when it comes to open source. What's more, Mark Zuckerberg should be more widely applauded. I know his open source AI strategy has capitalist roots, but it's still great for the world.


I think it's fine to applaud their pro open source behavior, while still being critical of the ill they've released upon society. Cambridge Analytica and spreading mob violence in some countries come to mind.


Absolutely. I don’t use Facebook the social media platform, or Instagram, and I only reluctantly use WhatsApp - but Facebook as a company, for software engineering, seems excellent.

They also seem to genuinely take care of their employees (this is of course an outsiders perspective).


They pay well and promote quickly


> Pushed salaries up across the board. Many people are not aware, but Facebook was a major driver behind the 500k+ senior engineering pay.

I think this was because anyone who is principled doesn't want to work for them. They have to pay more.


No, they pushed engineering salaries up. None of the employees at FAANG think this way.


> They forced the whole industry to pay up.

And now we have industry-wide layoffs and a massive cooling of the job market.

> They basically invented modern web development.

Which is essentially a non-stop carousel that encourages cargo-culting and factory-farming of interns from boot camps.


So what, you would have preferred low pay all of these years? And you think React wasn't an improvement? It's been out for a decade now; the trope that JavaScript frameworks churn a lot is not true anymore.


I personally benefited from the high pay, but I don't know if we can definitively say that the average developer is coming out of this better, after all the layoffs and belt-tightening.


I think it's pretty myopic to only view Facebook through the lens of the benefits they've provided software engineers. Facebook is a pure cancer on society.


I would feel similarly if they weren’t the absolute worst in terms of privacy. And if they didn’t buy out the best commercial VR tech and tie it to FB. You can’t use the Quest headsets or the Ray-Bans without logging in to the panopticon.


Yes you can


I don’t know about salaries but “the only game in town when it comes to open source” is flat wrong. Google open sourced Tensorflow, Flutter, Dart, Filament, Microsoft open sourced ONNXRuntime, Olive, WavLM, F#, .NET, NVIDIA open sourced TensorRT, NEMO, etc etc.

At the end of the day, the company that pushes clickbait to sell ads is probably the one I trust least.


Netflix is similar along these fronts.


Don't forget LLaMA.


Wasn't LLaMA initially a leak?


AFAIK it was released to "researchers" gated behind a survey; someone filled out the survey and put the weights on torrents. There was no way it wouldn't leak.


Why open source it? Just another virtue signal while their real motivation is something more sinister? They are a for-profit publicly owned company. Are investors happy about spending billions to open-source it?


(a) They're not really open sourcing it; they're releasing it under a license that says you can use it for what they say. So it's mostly marketing, both external and internal, to try and appease the "AI ethics" crowd. And honestly, (b) their strategy is incoherent anyway. It's a tech company with too much money and no acute market pressure, hoping to find something that sticks.


Traditionally a company with too much money returns it to the shareholders as dividends, but that seems to be passé...


Probably partially to reduce momentum in profit extraction from competitors


I wouldn't expect the open source licensing terms to be particularly permissive, and it might even be personal-use-only. In the end it might look more like source available than open source.


I think their investors should be happy about it. Once more open source models are released, a lot of ai companies will be freed to focus on other things, thus increasing the overall economy.


Letting the wider community do your R&D for free.


He's lying because he's so far behind.


...rendering every competitor's massive investments worthless. Zuckerberg's thinking must be that Meta's competitors are more susceptible to disruption by broadly accessible AGI, and that if everyone has access to state-of-the-art AGI, then no one will be able to gain a new kind of competitive advantage from it.


Like in the dystopian Ready Player One: humans move to the VR world - and who has the biggest investments in VR as of now?


And what are those investments worth? Quests have great quality at a nearly no-profit price, and who uses them? And there's their Horizon platform, which even employees hate.


Everyone is "building" something - Tesla is building FSD, too. I am not sure why journalists decide to give up their profession’s dignity and become CEOs’ extended PR departments.

I'm building a planet.


I'm building FUD.


Oh yeah, that's me too.


Over/under on days before he renames the company to "Singularity?"


Really feels like Facebook doesn't even try to have a strategy or mission statement anymore.

Even as someone who is bearish on AI, I get why it is an important part of Google's and Microsoft's product offerings. What in the world does a fancy chatbot have to do with building a social network?


That you equate AGI with “chatbot” shows OpenAI's marketing has been successful. That doesn’t mean the two are one and the same, or that a chatbot is the actual path there.


Article is a bit of a nothingburger, here's the quote in question from Zuck's Instagram:

"Some updates on our AI efforts. Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit. We're bringing our two major AI research efforts (FAIR and GenAI) closer together to support this. We're currently training our next-gen model Llama 3, and we're building massive compute infrastructure to support our future roadmap, including 350k H100s by the end of this year -- and overall almost 600k H100s equivalents of compute if you include other GPUs."


A stockpile of rapidly depreciating assets bought at eye-watering margins is an unusual brag for any company, no?


CPUs are rapidly depreciating, hard drives are rapidly depreciating, SSDs are doubly rapidly depreciating - with this logic, no hardware buy would make sense.


Computer hardware has always been rapidly depreciating. You'd always get much more (performance/capability) for the same money just a few months to 1-2 years down the road. GPUs have been a complete outlier in this area for around ~8 years, and even they depreciate relatively quickly still.


I am going to be buying these by the truckload on eBay three years from now.

The current generation of GPUs is definitely going to have a long usable lifetime. Manufacturers finally have HBM totally figured out and yielding/aging well. Today's GPUs are the analog of the 28nm logic node -- it was the sweet spot for an absurdly long time.

There will be something faster/lower-power 2-3 years from now that will cause the BigCos to cycle their fleet, but it won't be anywhere near the VRAM/MemBandwidth boost we have today compared to 2-3 years ago. Their data centers are power-constrained, so they'll upgrade regardless, and the rest of us will feast on the windfall.

Don't ninja-bid on my auctions.


From a naive perspective, it seems that true research/advances in AI (methods of training, etc.) aren't necessarily related to model size. It seems that the goal of "building a big model that everyone else converges to because the training data is the same" doesn't have all that much value, especially since you could wait a couple years, do it all for a fraction of the price, and catch up immediately. Meta doesn't have an AI product yet, so it's not like they would be losing money.

I suspect this is more about talent attraction/retention.


How's their last long term vision going - you know, the one they renamed the company after?


In his defence, unforeseen new shiny objects have since emerged for Zuck to chase.


Say what you want about Facebook: the size of their dataset and computational resources definitely make them competitive, and their data science and ML teams have always been top notch. I think the Verge is missing the mark with the headline and general focus of the article. "Building AGI" is whatever; like half the companies with enough GPUs are claiming that, and AGI is even more poorly defined than "metaverse". The more interesting point seems to be the general incoherence of building chatbots while trying to run a social media company.

>Meta is still a metaverse company. It’s the biggest social media company in the world. It’s now trying to build AGI. Zuckerberg frames all this around the overarching mission of “building the future of connection.”

This is such "Verge" writing. I'm by no means bearish on VR, but that whole passage is so unreflective and uncritical it's almost a satire of journalistic fluff. Chatbots that fill social media with greater and greater amounts of garbage content is just a nightmare. Bot content is already one of the reasons people are retreating into groupchats. The blurring of AI and human interaction leads to accountability problems. Hell, Snapchat and Discord basically already tried this to enormous backlash. The fact that this is entirely antagonistic with "building the future of connection" goes essentially unacknowledged.

There is something interesting with the fact that Facebook is more open to open-source, this is fairly credible actually given the quality and quantity of the company's open-source contributions. But I genuinely think LLMs are most useful as an applied technology, and the applications listed here are frankly uninspiring.


Saying they make it "open source" in the same article where they say they need "350k high end GPUs to build it" is the equivalent of saying "we offer free nuclear submarine driving lessons".

I know you don’t need as many resources for inference as for training. But still…


What do you mean "but still…"? It's a pretty important distinction. Meta does indeed use their massive GPU farms to train models and then release the weights for free, and people indeed run inference on prosumer hardware.


How different is that from saying they built React using 100-500+ developer-years of effort and then open-sourced it? What they are releasing is what most people looking for open models actually need.


You can run llama models on a personal computer, even though it was trained on >10,000 GPUs.
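For anyone who hasn't tried it, here's a minimal sketch using the llama-cpp-python bindings, assuming you've already downloaded a quantized GGUF checkpoint (the file name below is a hypothetical placeholder, not a specific release):

    # Minimal local-inference sketch with llama-cpp-python.
    # Assumes a quantized GGUF file was downloaded beforehand;
    # the model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-7b-chat.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Q: Why is the sky blue? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])

A 7B model quantized to 4 bits is only ~3.5 GB of weights, which is why inference fits on an ordinary laptop even though training took a GPU farm.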


I don’t think this announcement will make any of the relevant debates (on the impact of Meta on the world, or the fears around AI, or the confusion on whether sharing a trained model qualifies as “open source”) any less frustratingly controversial and heated.


Is it even possible to build an AGI with our current computing paradigms? So far our most advanced neural networks are purpose-built for a single task (e.g. ChatGPT for text, DALL-E for images).


I worry that we're getting close. I don't think our society is built for it.


What is our society built for? Climate change? Social media? War? Covid?

Your sentiment isn't lost on me, but there isn't much evidence the future need be a catastrophe either.

I personally think society isn't going in a great direction; maybe more intelligent systems will help us.


The problem with society is that people are willing to sacrifice the good of others in order to gain something for themselves. They also tend to lie to themselves and/or are oblivious to it. The problem is within us; better intelligence will just let us act on it with more sophistication, or perhaps on a larger scale. One of the main forces within society pushing against having all our arrows point inwards is religion, which these days isn't even in consideration for membership in the set of useful things.


I personally think that more intelligence might make the need for personal gratification go away. I mean, to start with, the idea is that humans will be completely and utterly redundant, the AI will be better than us at everything; what happens to the ego in this situation?

What happens if the cost of things is reduced to next to nothing due to efficiency gains from super intelligent systems?


I wonder how many of those chips were acquired to run metaverse stuff. There should be lots of overlap between rendering graphics and running CUDA-based models.

I'm interested in seeing how the behemoths that are Meta and Google catch OpenAI. I think it's a question of when, not if; both companies just have a ridiculous amount of resources to throw behind these efforts. At least Meta is releasing their stuff as "open source". We'll see how they justify putting out these models for free, or whether it's purely about undercutting OpenAI.


Almost no overlap: this metaverse thing just needs classical CPU servers (and not a lot of them considering the minimal user activity there).

For now Google is still late to the party (fully proprietary, and nobody has seen the supposedly good model called Ultra, only an average one called Pro), and Meta is actually the company that has pushed the field forward for all companies (with LLaMA).


> this metaverse thing just needs classical CPU servers

The idea is that the metaverse will be filled with AI avatars.


> Meta is actually the company that has pushed the field forward for all companies (with LLaMA).

This is the first time I'm hearing this, unless you mean the fact that it leaked to the public. How was LLaMa pushing the field forward otherwise?


>the supposedly good model called Ultra,

Good (as written), or God (at first glance)?


I think his mid-term vision should be to source a better microphone.


Remember "Facebook: It's free and always will be"?


Fingers crossed it's another attempt along the lines of "We're building the Metaverse and it's gonna be awesome and a great success, because everyone is into VR now and it's not an overhyped fad like all the others before it, until some other crap replaces it." Facebook spent $36 billion on the Metaverse; let's hope it spends double on AI and changes its name to Aimmerse before that other crap makes it obsolete.


I for one look forward to the VR OS that comes out when the "Android" of VR steps up to what the Apple Vision is going to bring.

Meta is currently on top here, but they have focused mostly on the gaming experience. Can't wait to see AR as a legitimate productivity tool.

I'm sure general AI will play a role here, of course.


Bigco creates a VR/AR/metaverse seemingly for entertainment but is really gathering data on its "guests", especially how they behave when they think nobody is watching and/or judging, and in time uses the data for nefarious purposes... I think I have seen this show before...


Doesn't look like anything to me


Can't wait to not be able to buy an RTX 5080 next year.

First crypto, then AI; surely there will be a third thing coming too.


I'm hoping that NVidia doesn't give up on the consumer market and keeps producing low cost (and by that I mean $500-1500) GPUs that we can use for hobbyist AI / ML / LLM.

Right now, only the 3090 and 4090 have enough VRAM (24 GB) among the consumer models to make them worthwhile for most LLM work. I'm hoping the 50 series (5090?) has more VRAM and stays affordable - though I'm not holding my breath either.
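As a rough back-of-the-envelope (a sketch only; real usage also depends on context length, KV cache size, and runtime overhead):

    # Weights-only VRAM estimate for LLM inference; in practice add
    # roughly 10-30% for the KV cache and runtime overhead.
    def weight_vram_gib(n_params_billion: float, bits_per_weight: float) -> float:
        return n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3

    print(round(weight_vram_gib(13, 16), 1))  # ~24.2 GiB: fp16 13B doesn't quite fit in 24 GB
    print(round(weight_vram_gib(13, 4), 1))   # ~6.1 GiB: 4-bit 13B fits easily
    print(round(weight_vram_gib(70, 4), 1))   # ~32.6 GiB: 4-bit 70B still needs two cards

That's why 24 GB is the magic number: enough for a quantized mid-size model with headroom, while smaller cards force aggressive quantization or CPU offload.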


His companies drive people to depression to sell ads. How is achieving AGI aligned with that business?


That's such a crazy scale that it makes me wonder how hard it is to maintain control over that many cards, and how easy it would be for someone at a company of this scale to become a purchasing tunnel to sanctioned countries.


They'd notice a couple thousand of these if they were to vanish or never arrive. Anything less is fairly meaningless for major economies.

China, for example, needs epic numbers of GPUs to power its economy going forward: the equivalent of many millions of H100s for a $20 trillion economy looking to advance rapidly.

Given globally restricted production (the Nvidia production bottleneck, with only a few places on earth able to produce something like this), until China can produce their own very high-end GPUs, their economy is going to be held back by the lack of capacity. Tens of thousands of high-end GPUs slipping through isn't going to cut it; that simply doesn't matter very much.

You can prevent China from getting a million H100s. You can't prevent them from getting ten thousand of them from many different sources over time.



I feel like this is a subtle attempt to move the goalposts on what is meant by AGI. Regardless of whether the final product is truly an AGI (and I'm guessing it won't be) my guess is that it will be branded as such.


Metaverse got boring quick


Mark Bandwagon Zuckerberg


Mostly because Meta did not develop a custom AI chip, like Google did.


Mark is always about one year late to the party. Funny that he always stays confident that it will just work out like Facebook did ;)


It's too bad Alamos Gold already has the AGI stock ticker symbol, or they might be able to get rid of the whole META thing.


So he dropped his old goal of creating the Metaverse? Seems like we are going from Metaverse to Metaintelligence now.


Whatever happened to replacing the US dollar, or creating a metaverse, or curing disease in his children's lifetime? The guy has all the focus of a disco ball.


> Now, Meta CEO Mark Zuckerberg is entering the race.

If this goes the same way as when Zuck entered the race to replace the US dollar, create a metaverse, or cure disease, AI will be dead in the next 2 years.


> or cure disease

My old office was in the same building as the Chan-Zuckerberg HQ, and all the scientists I worked with rolled their eyes at their completely over-the-top mission statement to "cure all disease within our lifetimes." Cure ALL disease? Really?? Maybe pick one and try hard to eradicate it. But *all* disease?! All of it? Geeze.

Also, one of their top scientists exposed himself, taking his junk out of his pants, and tried to feel up my coworker in the middle of the STEM restaurant during happy hour. So overall they seemed like reckless, arrogant clowns with poor decision-making.


Zuck seems a bit more modest on the goal https://youtu.be/1Wo6SqLNmLk?t=580


We can hope.


Unsure about the medical initiative, but most of these endeavours have had some tangible benefits for the overall field.

He is in a rare position to throw money at all of these fields without worrying about the ROI, so I will not throw too much criticism his way.

But yes, the AI bubble is real; sadly, we don't get to short it this time.


To be entirely fair, the currency attempt was hampered by governments (predictable, maybe), and as for the metaverse... well, he created something: technically it's impressive, commercially and in terms of product market fit, it's a flop.

I don't think they're done with the metaverse thing either; whether it will succeed, that's another question.


> technically it's impressive, commercially and in terms of product market fit, it's a flop.

I didn't know that. What parts are impressive? It seemed like VR Chat + Roblox to me, but I've barely seen anything.


Perhaps this qualifies:

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398

https://www.youtube.com/watch?v=MVYrJJNdrEg


Too bad that tech is put out by the guy who said, “People just submitted it. I don't know why they 'trust me.' Dumb fucks."

_

I will never ever willingly give FB any of my biometric data, face scans, etc.


All the real problems of the world are atom problems, not bit problems. And so software companies, without real problems to solve, continue to flounder.


The "copy or buy what seems to be the next big thing" has worked extremely well for Facebook. Yes, they failed to copy twitter and bitcoin and metaverse didn't seem to take off, but copying the idea of messaging-as-a-social-network was a huge success when they split out facebook messaging into its own app. Similarly buying instagram and whatsapp was hugely succesful, not to mention copying reels/stories from snapchat.


Companies pivot; this isn't a bad thing.


Sure, but you have to actually establish something first in order to pivot from it. Declaring some half-baked idea, not executing, and then replacing that idea with some new half-baked idea is not actually pivoting. It's blathering. See:

"Facebooks Endless Pivots":

https://www.axios.com/2022/05/04/facebooks-endless-pivot-met...

Has Facebook as a business ever really pivoted from "social network advertising company?"


They found out if you die in the metaverse, you die in real life


That's the thing about having lots of money: all your bad ideas can get a workforce behind them. Admittedly, the good ones as well.


Turns out, when you have a market cap of almost $1T, you can focus on a few things at the same time.


Too hard, this is easy


He's still doing the metaverse. He said so in one of his latest threads. ML will make the metaverse more viable, in his opinion.


Squirrel!


AGI could potentially do all of these things


Obviously, he won't be the one doing it if AGI is; the point is he didn't do the work.


Friend, I understand the point. My point is that maybe he is not done yet, and he is focused on a way to accomplish one thing that can accomplish all the others.


So will he change the company name from Meta to AI, since the company shifted from the metaverse to AI?


Is he renaming the company again?


For the good of humanity no doubt.


So has he finally admitted the floating torsos with no legs were a bit of a silly bet?


“Responsible AGI” is a bit of an oxymoron, imho. It’s like giving birth to a human: you can prepare it, train it, guide it, and tell it what not to do, but eventually that human becomes entirely accountable for their own actions.


Not if that human is congenitally psychopathic. That is the situation we are trying to avoid.


I wonder if he will end up bringing Carmack back?


Hopefully he does better at this than the Chinese.


Brace yourselves. The scale of capital investment coming from Meta, Google, and OpenAI/Microsoft is going to be historically mind-boggling.


Meta (Facebook), Alphabet (Google), OpenAI (Microsoft)

Microsoft name change incoming?


Investor Narrative, Not Goal.


...but not the training data.


Lol, ok.

Are they also working on world peace?


So far he's made nothing but viral spyware, but it's all about to change!


You're right, of course, but also Bill Gates was a cartoon capitalist villain for decades and then suddenly he started saving millions of kids in the third world.

Maybe Mark will have a similar moment of redemption, who knows.


“We are changing our name to Arti”


> Mark Zuckerberg’s new goal is creating artificial general intelligence

So...what happened to that whole "Metaverse" thing? Is it time to rename the company again?


Man, the Metaverse... what a fad. I almost can't even believe that WAS a thing. We had Habbo, and then we had Second Life, and NOW!!!!!!.... * deafening silence *


How is it a deafening silence? Their latest hardware product was released ~3 months ago.


Metaverse != Meta headset

The promise of the former is a unified platform where, e.g., a virtual magic sword bought in an RPG also works in an FPS made by an unrelated developer, a virtual gun bought in the latter also works in the former, and this is also your work collaboration environment. And now I'm thinking of ABK's boss fight sketch: https://youtu.be/w6u_EJa_sZE?si=wlYD8EhRd_PLm39l


Never heard of it. Never saw it anywhere.

If nobody can hear the sound of a tree falling, does it still make a sound?


You are not everybody, so whether you heard it or not is immaterial to Meta, Zuckerberg, and anyone else.


You never heard of the Quest 3?


Not at all, but it makes sense if it's a niche product addressing the needs of a subset of gamers.

Look, for example, at Among Us VR (an excellent game):

https://steamdb.info/app/1849900/charts/

There are 12 players online now, 22 players "peak"...

A lot of people have tried VR, and the consensus is generally that VR is fun to try once, but then:

Is it really worth spending 1000 USD (headset only, plus whatever gaming PC you need with it) to 4000 USD (Vision Pro) for something you'd use only a few times per year?

Following hardware news about something you don't plan to purchase or use doesn't really make sense.


>Is it really worth spending 1000 USD (headset only, plus whatever gaming PC you need with it) to 4000 USD (Vision Pro) for something you'd use only a few times per year?

The Quest 2 is a perfectly capable VR device and is only $250.

>Following hardware news about something you don't plan to purchase or use doesn't really make sense.

You are on Hacker News... a site that has talked about the Quest 3 A TON.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


The Among Us fad is mostly over, though as a meme it has some serious staying power.

Looking at VR Chat, though, it's posting record highs in user count, hitting 52,956 over New Year's, with steady year-over-year growth for 5 years now.

https://steamdb.info/app/438100/charts/#all


Quest 3 is $500, not $1000, and needs no external PC. Your choice of game is equally bizarre.


"Am I out of touch? No, it's the >20 million buyers who are wrong"


20 million buyers but only ~2 million active users out of a population of 8 billion: that's 99.975% of the population not concerned.


World GDP per capita is $12,234. Why would you use world population instead of US population? This discussion is clearly not serious, so I'm not gonna respond further.


VR is a gadget, the same way that 4D cinemas are gadgets. OK, it's more or less fun the first time to get water droplets sprayed in your face while watching a movie, or for the cinema to pump in fake fart smell, but this is not the experience you want to have every day.

Apparently only a single-digit percentage of people who purchased AR headsets use them even once a month (source: Valve developer Chet Faliszek).


> but this is not the experience you want to have every day.

It's been the experience I've been having every day since the original Oculus devkit. I've rarely touched a flatscreen title since. Eventually I hope they'll become good enough to work in when I travel.

edit

That said, I've always maintained it's not ready for normies yet, and that we're probably a good decade or two away from the point where it's going to be in a format/design that the mass market will accept or have interest in. In fact, in many cases I still actively recommend against purchase, because I know it's still kind of incubating along.


In my opinion, HMDs are clearly the future of display tech. VR is something you get for free with an HMD.


A more likely (dystopian) future is a Neuralink injecting visual signals or thoughts directly. Even an ultra-light HMD gives the Glasshole feeling, besides not being comfortable to wear.


> Even an ultra-light HMD gives the Glasshole feeling, besides not being comfortable to wear.

Which have you had experience with? And, do you believe whatever you wore was as good as it gets?


Meta's AR/VR strategy is still the overarching priority, and how well the Quest 3 and Ray-Ban glasses are doing suggests it is likely the right one. Especially once Apple Vision Pro ignites the industry.

AI will be used to enhance that, e.g. generated avatars, autonomous agents, hand/body tracking, etc.


> Especially once Apple Vision Pro ignites the industry.

I may be repeating my own errors with the following as I said much the same for the iPhone and the Apple Watch when they were new, but…

The prices Apple is asking seem excessive given what the products actually do; surely the cheaper alternatives are going to be what really matters?

(In this case, cheaper alternatives would include the Meta headsets).


Cheaper than an 8K OLED.

Watched a guy spend $6K on iPads for his kids like it was a stocking stuffer. It took 15 minutes because he had to call his wife about whether 1TB was enough. Feeling bad that I was just buying an Air, I wandered off to let the sales kid work him.

The kid said it was an overage day.


Mmm.

You've just reminded me how weird I am with money.

Back in 2000-ish, a summer holiday job I was doing for an hourly pay of… I can't remember exactly, but perhaps the equivalent of £10k/year for the full-timers… one of my coworkers said he'd bought a plasma TV for his 1- or 2-year old son. Those things were considered expensive luxuries back then.

I guess people like him are the norm.


How well are the Ray Bans doing? I’ve never heard anyone talk about them nor seen anyone wearing them.


Surprisingly well.

Strong adoption by the younger TikTok generation, a demographic Meta has been desperate to bring back into its fold.

Also, the product is pretty impressive. The camera quality is really good and the AI features are genuinely useful. It has definitely caused many in the industry to wonder whether that form factor could be the future of the AR industry in the short term as well as the long term.


Doesn’t seem to be the case: https://www.theverge.com/2023/8/3/23818462/meta-ray-ban-stor...

Less than 10% retention after two years.


> Especially once Apple Vision Pro ignites the industry.

Ah, yes, the inevitability of Apple success.

Just as everyone switched from cell phones to PDAs after 1993.


Wow, you must go back 30 years to find a failure. That's hugely impressive. If we squint enough we can add FireWire to that.


There's a train of Apple misses. Hindsight is success after success, because we only remember the successful and final versions of products.

Generally, they get it right a lot of the time. More than others!

But invoking argumentum ad Apple to argue for the inevitability of technological shifts is insane.


It is that time. First it will be "Meta AI" and then the "Meta" will be long gone.

I'm gonna get downvoted for this comment because this is not Reddit, but I had to say it.


How would one manage a fully connected VR-scape without AGI?


[flagged]


When I was younger I was naive to think these business leaders were some types of geniuses. Turns out every one of them is just jumping from one hype to the next, hoping something sticks. If it doesn't, write it off. If it hits, they're a genius.


That's a lot of what I've seen from the upper class: take a lot of bets, since you only need one to hit, versus playing it safe...

And it even pans out across generations. The scion of the Tang family (as in MIT's Tang Hall) sent half his kids into business and half into politics, figuring that on average it would turn out OK (it did).


I went down a very deep rabbit hole looking into the Tang clan/dynasty. So interesting.


AI will power the metaverse, it was the missing piece.


I thought the missing piece was the userbase


lol dude


If I were Zuck, I'd be building a new phone + OS.


They certainly have a large stockpile of the cringiest boomer dataset on the planet, rivaled only by LinkedIn. Soon the AI loop will generate the largest neuron deactivation loop in human history.


I don't see how Zuck/Meta could create an AGI that is not VERY left-wing biased relative to the beliefs of most of the world's population.


Yes but can they run Crysis?




