Elon Musk wanted an OpenAI for-profit (openai.com)
582 points by arvindh-manian 39 days ago | 640 comments



I guess it's not news but it is pretty wild to see the level of millenarianism espoused by all of these guys.

The board of OpenAI is supposedly going to "determine the fate of the world", robotics to be "completely solved" by 2020, the goal of OpenAI is to "avoid an AGI dictatorship".

Is nobody in these very rich guys' spheres pushing back on their thought process? So far we are multiple years in with much investment and little return, and no obvious large-scale product-market fit, much less a superintelligence.

As a bonus, they lay out the OpenAI business model:

> Our fundraising conversations show that:

> * Ilya and I are able to convince reputable people that AGI can really happen in the next ≤10 years

> * There’s appetite for donations from those people

> * There’s very large appetite for investments from those people


> Is nobody in these very rich guys' spheres pushing back on their thought process?

Yes, frequently and loudly.

When Altman was collecting the award at Cambridge the other year, protesters dropped in on the after-award public talk/Q&A session, and he actively empathised with the protesters.

> So far we are multiple years in with much investment and little return, and no obvious large-scale product-market fit, much less a superintelligence.

I just got back from an Indian restaurant in the middle of Berlin, and the table next to me I overheard a daughter talking to her mother about ChatGPT and KI (Künstliche Intelligenz, the German for AI).

The product market fit is fantastic. This isn't the first time I've heard random strangers discussing it in public.

What's not obvious is how to monetise it. The old meme parroted around was that OpenAI "has no moat", which IMO is like saying Microsoft has no moat for spreadsheets: sure, anyone can make the core tech, and sure, we don't know who is Microsoft vs StarOffice vs ClarisWorks vs Google Docs, but there's more than zero moat. From what I've seen, if OpenAI didn't develop new products, they'd be making enough to be profitable, but it's a Red Queen race to remain worth paying for.

As for "much less a superintelligence": even the current models meet every definition of "very smart" I had while growing up, despite their errors. As an adult, I'd still call them book-smart if not abstractly smart. Students or recent graduates, but not wise enough to know their limits and be cautious.

For current standards of what intelligence means, we'd better hope we don't get ASI in the next decade or two, because if and when that happens then "humans need not apply" — and by extension, foundational assumptions of economics may just stop holding true.


> When Altman was collecting the award at Cambridge the other year, protesters dropped in on the after-award public talk/Q&A session, and he actively empathised with the protesters.

He always does that to give himself cover, but he has clearly shown that his words mean very little in this regard. He always dodges criticism. He used to talk about the importance of being accountable to the OpenAI board, and of them being able to fire him if necessary, when people questioned the dangers of one person having this much control over something as big as bleeding-edge AI. He also used to mention how he had no direct financial interest in the company since he had no equity.

Then the board did fire him. What happened next? He came back, the board is gone, he now openly has complete control over OpenAI, and they have given him a potentially huge equity package. I really don't think Sam Altman is particularly trustworthy. He will say whatever he needs to say to get what he wants.


Wasn't he fired for questionable reasons? I thought everyone wanted him back, and that's why he was able to return. It was, as I remember, just the board that wanted him out.

I imagine if he was doing something truly nefarious, opinions might have been different, but I have no idea what kind of cult of personality he has at that company, so I might be wrong here.


> I thought everyone wanted him back, and that's why he was able to return.

Everyone working at OpenAI wanted him back. Which only includes people who have a significant motivation to see OpenAI succeed financially.

Also, there are rumours he can be vindictive. For all I know, that might be a smear campaign. But if that were the case, and half the people at OpenAI wanted him back, the other half would have a motivation to follow so as not to get whatever punishment from Sam.


It sounds to me like people working in AI these days have a lot more options than being afraid of a particularly vindictive man.


Between just working on ML tech and working on ML tech with US$XXX billion of Microsoft’s money—salaries aside—having access to nuclear plants' worth of energy, compute power and cutting-edge GPUs, a legal department for defending against intellectual-property claims, etc., imaginably makes for a very different value proposition compared to some other startup in the space. (The self-image of working at a company allegedly “advancing humanity” likely helps, too.)


True to some extent. But if you have OpenAI on your resume, I have to imagine there’s no necessity for your next job to be much different (e.g. go to any FAANG).


I think (based on very shallow research) among FAANG behemoths Meta is the closest in terms of resources thrown at ML, but it’s an ad company that hasn’t got such a “we advance humanity” image. Equity vesting and that sort of stuff can additionally make moving companies a problematic prospect, even if the new place offered a competitive salary…


Yeah, Meta pays well, but they probably wouldn't match the $10M equity pile that some generic engineer or researcher who joined 1 year before ChatGPT dropped could be sitting on. Ofc, the equity->cash conversion rate for OpenAI pseudo stocks isn't clear, but those paper gains would be tough to abandon.


> Wasn't he fired for questionable reasons?

There were a number of concerns, including for-profit projects he was pursuing despite his public insistence on OpenAI being non-profit, as well as generally deceptive behaviour on his part. The latter at least is consistent with what others have said about Altman previously, including what allegedly led to his exit from YC, although they have kept those stories pretty quiet. It seems PG himself no longer has a lot of trust in Altman, after basically making him his heir apparent, and he has known him for a while now.

What's more, the driving force behind the board's move to remove him from OpenAI was reportedly Ilya Sutskever who was one of their key people and one of the handful of original founders. So it wasn't just a bunch of external people who wanted Altman gone, but at least one high level insider who has been around for the entire thing.

Altman himself was even once asked by journalists whether we could trust him, to which he outright answered "no". But then he pointed to the fact that the board could remove him for any reason. Clearly, he was suggesting that there were strong safeguards in place to keep him in check, but it is now apparent that those safeguards never really existed, while his answer still stands.


I don’t pay much particular attention to AI, but I’m not seeing the knock on him for saying “no.”

What human in the world could be trusted with AI? Only delusional people could say yes to that question.


> What human in the world could be trusted with AI? Only delusional people could say yes to that question.

"AI" is too broad a topic to draw that conclusion.

Almost everyone can be trusted with Stockfish. Almost: https://github.com/RonSijm/ButtFish

Most of us can be trusted with current LLMs, despite the nefarious purposes that they can be put to by a minority. Spam and fraud are still minority actors, even though these tools increase the capabilities of those actors; and they're still "only" at the student/recent graduate level for many tasks, so using them for hazardous chemical synthesis will likely literally explode in the face of the person attempting it.

Face recognition AI (and robot control attached to it) is already more than capable enough to violate the 1995 Protocol on Blinding Laser Weapons, and we're just lucky random lone-wolf malicious actors have not yet tried to exploit this: https://en.wikipedia.org/wiki/Protocol_on_Blinding_Laser_Wea...

We don't yet, it seems, have an AI capable enough that a random person wishing for world peace will get a world with the peace of the grave, nor will someone asking for "as many paperclips as possible" accidentally von-Neumann the universe into nanoscale paperclip-shaped molecules. Indeed, I would argue that if natural-language understanding continues on its current path, we won't ever get that kind of "it did what I said but not what I meant" scenario. What I currently think most likely is an analogy to the industrial revolution: we knew CO2 was a greenhouse gas around the turn of the 1900s, and we still have not stopped emitting it, because most people prefer the immediate results of the things that emit it over the distant benefits of not emitting it.

But even that kind of AI doesn't seem to be here just yet. It may turn out that GenAI videos, images, text, and voice are in fact as much of a risk as CO2, that they collapse any possibility of large-scale cooperation, but for the moment that's not clear.


A smartphone can be used to guide an ICBM.

What human in the world could be trusted with it?


A road can be used for abductions, acts of violence, and other harmful activities.

What human in the world could be trusted with it [roads]?


Most humans are frequently harming themselves (and sometimes harming others) with their smartphones, so...


> I thought everyone wanted him back,

Ilya Sutskever, who was the chief scientist of the company and honestly irreplaceable in terms of AI knowledge, left after Altman returned.


Only 1 of the 6 board members is still at OAI.


He also apologized and said he was wrong.


That was only after it was apparent that a majority of employees would back Altman's return, I believe. A majority who, in all likelihood, had spent less time with him than Ilya had.


Define everyone. I was delighted when they fired him. I don't believe he has humanity's best interest at heart.


745 of 770 employees responded on a weekend to call for his return, threatened mass resignations if the board did not resign; among the signatories was board member Sutskever, who defected from the board and publicly apologized for his participation in the board's previous actions.


>745 of 700 employees responded on a weekend to call for his return, threatened mass resignations if the board did not resign

I would think the final count doesn’t really matter. Self-serving cowards, like me, would sign it once they saw the way the wind was blowing. How many signed it before Satya at Microsoft indicated support for Altman?


I followed the drama. The point I was (somewhat unsuccessfully) trying to make was that while, sure, there were groups who wanted him back (mainly the groups with vested financial interests and associated leverage), my sense was that the way it played out was not necessarily in line with wider humanity's best interest, i.e. as would have been hoped based on OpenAI's publicly stated goals.


Oh, in that case sure.

The statements in the whole popcorn drama still don't add up in my mind, so from the point of view of "humanity's best interest", I'd say it's still bad.

I thought you meant at the time, not with the benefit of hindsight.


>my sense was that the way it played out was not necessarily in line with wider humanity's best interest,

Sure. But you make the foolish assumption here that humanity even has humanity's best interests at heart. Sentiment may be negative on current LLM-based generative AI, but there are still plenty of people with either potential vested interests or who are simply missing the forest for the trees. It's pretty hard to say "everyone hates/loves AI" at this current time.


I'd do that too if I held stock, and I think the guy is a borderline-vampire.


745 of 700?


Whoops, typo for 770, edited to correct. Thanks!



At the time, was it possible for people working at OpenAI to, er, "cash out"?

I don't actually know the answer to that, but I'm suggesting that perhaps people had additional motives for the organization to preserve its current hype/investment growth strategies.


He was fired because he was leading the for-profit in a direction that was contrary to the mission of the non-profit parent that was supposed to be controlling the for-profit.

Those working for the for-profit got $$$ in their eyes once MSFT started throwing billions at them, and said fuck the mission, bring back Altman, we’re going full Gordon Gekko now.


This guy is outright scary. He gives me the chills.


The whole group of these techbro billionaires comes off as barely human ghouls when you hear them speak, imo.

See also: Peter Thiel on Piers Morgan recently.


They probably know that nobody would vote for them which is why they fund Trump and other politicians who do have social skills.


Indeed. Words are very very inexpensive and fool a lot of people. Never pay attention to any words. Judge people by their actions.


> The product market fit is fantastic. This isn't the first time I've heard random strangers discussing it in public.

Hardly evidence of PMF. There is always something new in the zeitgeist that everyone is talking about, some more so than others.

Two years before it was VR, a few years before that NFTs and blockchain-everything, before that self-driving cars, and before that personal voice assistants like Siri, and so on.

- Self-driving has not transformed us into Minority Report, and despite how far it has come it cannot become ubiquitous in the next 30 years: even if the L5 magic tech existed today in every new car sold, it would take 15 years for the current fleet to cycle out.

- Crypto has not replaced fiat currency. Even on the most generous reading you can see it as a store of value, like gold or whatever useless baubles people assign arbitrary value to, but it has no traction for 3 of the other 4 key functions of money.

- VR is not transformative to everyday life and is 5 fundamental breakthroughs away.

- Voice assistants are useless beyond setting alarms and selecting music, 10 years in.

There has been meaningful and measurable progress in each of these fields, but none of them have met the high bar of world-transforming.

AI is aiming for the much higher bar of singularity and consciousness. Just as in every hype cycle, we are at the peak of inflated expectations; we will reach a plateau of productivity where it will be useful in specific areas (as it already is) and people will move on to the next fad.


> Two years before it was VR, a few years before that NFTs and blockchain-everything, before that self-driving cars, and before that personal voice assistants like Siri, and so on.

I never saw people talking about VR in public, nor NFTs, and the closest I got to seeing blockchain in public were adverts, not hearing random people around me chatting about it. The only people I ever saw in real life talking about self-driving cars were the ones I was talking to myself, and everyone else was dismissive of them. Voice assistants were mainly mocked from day 1, with the Alexa advert being re-dubbed as a dystopian nightmare.

> AI is aiming for much higher bar of singularity and consciousness.

No, it's aiming to be economically useful.

"The singularity" is what a lot of people think is an automatic consequence of being able to solve tasks related to AI; me, I think that's how we sustained Moore's Law so long (computers designing computers, you can't place a billion transistors by hand, but even if you could the scale is now well into the zone where quantum tunnelling has to be accounted for in the design), and that "singularities" are a sign something is wrong with the model.

"Consciousness" has 40 definitions, and is therefore not even a meaningful target.

> Just in every hype cycles we are in peak of inflated expectations, we will reach a plateau of productivity where it is will be useful in specific areas (as it already is) and people will move on to the next fad.

In that at least, we agree.


> I never saw people talking about VR in public, nor NFTs, and the closest I got to seeing blockchain in public were adverts, not hearing random people around me chatting about it. The only people I ever saw in real life talking about self-driving cars were the ones I was talking to myself, and everyone else was dismissive of them. Voice assistants were mainly mocked from day 1, with the Alexa advert being re-dubbed as a dystopian nightmare.

Isn't that funny? It speaks exactly to the GP's point about how frothy and uninformative the "zeitgeist" can be, as my experience happens to have been the opposite of yours. The people I happened to be casually brushing shoulders with during those earlier fads happened to be hip to them, or maybe my ears were more attentive, and so I heard them mentioned more often and with more enthusiasm.

In contrast, most of what I happen to hear about generative AI, outside of HN, tends to be idle laughter at its foibles and misrepresentations, if it's mentioned at all.

I don't expect you to have the same experience as me, but I'm careful not to assume too much based on either of ours.


I mean, nobody needs to bother with anecdotes here. chatgpt.com hit 3.7B visits in October and was #8 in worldwide internet traffic. OpenAI says they have 300M weekly active users and 1B messages per day.

The idea that it's some fad with no utility or that the general public knows little about it is at this point laughable. It's the software product with by far the fastest adoption any software product has ever seen.

And if people want to lump that in with bitcoin or VR or whatever, shrug, i just don't know what to say.


Yes, because it is free.

How many paid users? Is a paid user generating profit right now? Google had to inject advertising in its search results to start earning real money. What will OpenAI do? That's the PMF.


There are millions of free apps.

They had some estimated 10M paid users a few months ago.

>Is a paid user generating profit right now?

For themselves? I don't know, and neither does the person saying they have no PMF.

There is nothing stopping OpenAI from inserting ads into GPT's responses, either explicitly with the search tool or woven into predicted responses.

Their revenue nearly quadrupled year on year, and model training and inference costs have dropped by several orders of magnitude in the last few years.

I'm not saying that OpenAI is foolproof. But the money being pumped into them is more or less expected.


90% of my use of OpenAI is to avoid the crap of internet search (Google or DDG; even when I find the right page, finding the specific part is a pain).

At some point chatgpt will become crap and the cycle will repeat


That actually isn't evidence at all that the general public knows a lot about it, or even uses it much. It could be a small group of very active users.


To put it frankly, not at this scale.

I don't know what kind of "small group of active users" you think can generate nearly 4B visits per month (and a top-ten spot in worldwide traffic). 300M active users a week is not a small group, and you do not get there without general public awareness.


> I mean nobody needs to bother with anecdotes here. chatgpt.com hit 3.7B visits in October and was #8 in worldwide internet traffic. Open AI say they have 300M weekly active users and 1B messages per day.

As far as the "zeitgeist" conversation goes, though, there's one story in the non-tech news about AI right now, and it isn't how it's making everyone's life easier. It's, yet again, a story about how someone trusted it and got burned because the Markov chain landed on something not true.


> ... and that "singularities" are a sign something is wrong with the model.

Einstein thought that way too, because his model predicted singularities in the space-time fabric. Turns out he was wrong. Nowadays we call them black holes and have images of them too.


He wasn’t exactly wrong, in that the model still breaks down for black holes. Even though we know they exist, we don’t know what goes on inside them.


Black holes have had two different singularities in general relativity:

(1) the event horizon was the first singularity, but that turned out to be a free choice of the coordinate system, i.e. a mistake in the maths — and we can't see past the event horizons anyway, so we don't know what happens on the other side of them even though they're not "real" singularities

(2) the point in the centre where the geodesics stop, which everyone in the field knows is incompatible with (a) the assumption in the maths of relativity that spacetime is smooth and can be differentiated everywhere, and (b) quantum mechanics


The image is not of the singularity itself but the accretion disc.

This is not to discount that amazing accomplishment but to point out that, in addition to misrepresenting what the singularity is in a black hole, you also misrepresented what the image of a black hole is.


Self-driving won't take over by just being available in the new cars people are buying anyway.

It'll take over when people find it cheaper to ride robotaxis than to own a vehicle at all. That's potentially a much quicker transition, requiring significantly fewer new vehicles.


Nobody is going to sit and wait for a robo uber in the suburbs for ten minutes to go to the grocery store ten miles down the road. This is the main problem, America has suburbanised itself to hell and no amount of robo taxis will create the world you are suggesting. And elsewhere in the world there’s no need because most places have adequate public transit which solves the problem way more efficiently than robo taxis ever will.


People wait for things all the time, and if it can take away the hassle of driving your kids somewhere, maintaining multiple vehicles for multiple family members, etc., I'd bet that it does take off. Public transport doesn't solve groceries or other types of shopping even in highly connected cities, especially for people with health problems or just aging ones.


Imagine a parent today driving to work and dropping off their two kids at school on the way. Instead of one car making three stops, you're suggesting three robo taxis would accomplish the same thing. What do you think already clogged roads would look like in this future? The only scalable solution is public transit, walking, and cycling.


I used to live in Germany, which had lots of excellent public transit, walking, and cycling. There was still quite a bit of car traffic. If even Germany's public transport isn't enough to eliminate cars, then we should put some effort into improving car transport, in addition to whatever we do with buses and bike lanes.

To that end, I'm not convinced automation will make things worse.

If the kid's school is near the parent's route to work, there's nothing stopping the parent from saving money by taking the kid in the same taxi.

If the kid's school is in the opposite direction, then a separate robotaxi can be more efficient. What matters is total system mileage. If a robotaxi takes the kid, then picks up a commuter starting near the school, then we save the parent's trip from the school back to their starting point.
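
To make the mileage claim concrete, here's a toy comparison (a sketch in Python; all distances are invented for illustration):

    # Toy numbers for the argument above; all distances invented.
    home_to_school = 3.0   # km, opposite direction from work
    home_to_work = 10.0    # km

    # One car, three stops: parent drives to school, backtracks home-ward,
    # then on to work.
    parent_does_both = home_to_school + home_to_school + home_to_work  # 16 km

    # Split trips: a robotaxi takes the kid (3 km) and then serves a rider
    # who starts near the school, while the parent drives straight to work
    # (10 km). The empty backtracking leg drops out of the system total.
    split_trips = home_to_school + home_to_work  # 13 km

    print(f"one car, three stops: {parent_does_both} km")
    print(f"split robotaxi trips: {split_trips} km")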


> Nobody is going to sit and wait for a robo uber in the suburbs for ten minutes to go to the grocery store ten miles down the road.

I absolutely would do that. Sometimes I already do wait that kind of time for a taxi or bus. Having the robo taxi turn up in ten minutes from when I decided I wanted to go to the shop would be fantastic.


Public transport doesn't compare to robotaxis. If robotaxis become a reality I expect a significant drop in car ownership and public transport use. The price will be competitive with the price of public transport, at least for single fare prices.


I can catch a bus every five minutes a one minute walk from my house in Utrecht. Those buses go anywhere. In Oslo I could ride the T-bane to the forest or to downtown. In Tokyo it was obvious public transit is the only way to go. All the places I lived outside the USA I would never take a taxi much less a robot one in lieu of Public transit.


Because taxis are not cheap enough. Once they become cheap enough, by not having a driver, the equation changes. Travel becomes more convenient.


In a city I take public transport over Uber not due to the cost but due to the convenience and speed.

The only exception is the US, and even then that doesn’t apply in Manhattan.


That's what I'm saying. When price goes down, robotaxis become a more viable option due to convenience and speed. You're shopping? Just put it in the trunk. Have fun with 4 grocery bags on public transport.

Public transport can be faster for going from A to B, but most people live a couple of kms from A and they're not going to exactly B but somewhere in the vicinity. The "last mile" will more likely be by robotaxi.


> When price goes down, robotaxis become a more viable option due to convenience and speed.

Lower prices don't make things more convenient or fast.

> You're shopping? Just put it in the trunk. Have fun with 4 grocery bags on public transport.

Here is "The Shops at South Town" mall in Salt Lake City: https://www.google.com/maps/@40.5617162,-111.8944801,171m/da...

Note that some parts of that car park are 300m from some of the shops in that building. (There are shop-and-parking-space pairings further apart than that, but it looks like I can easily justify 300m without resorting to maxima.)

Here is my old apartment: https://www.google.com/maps/place/Prenzlauer+Promenade+1,+13...

Note that within 300m of that address — i.e. the bounds of the example American shopping mall car park — there are 6 [super]markets, a pharmacy, at least five cafe/bakeries, two cinemas, several takeaways, at least two restaurants, a dentist, a car repair shop, a car dealership, a pet-goods shop, a kitchenware shop, a newsagent/post office, two sets of tram stops, four sets of bus stops, and I'm not counting any corner shops ("Späti" in the local language).

Carrying a few bags of groceries around inside well-planned cities is pretty trivial. Anything connected to that area by a single mode of transport is no harder to take shopping through than the transit systems within a mall — escalator, elevator, travelator. That means one of the main commercial areas of the city (Alexanderplatz), and the road to it (which itself is basically a 3km-long half-density strip mall with a tram the whole length, a railway crossing it in the middle, and another at the south end), are both trivial to reach even with shopping. I've even seen someone taking an actual, literal kitchen sink on one of Berlin's trams.

Even in my current place, still in Berlin but close to the outskirts without the advantages of density, a mere four grocery bags is easier to get through public transport than it is from one end of an American car park to the other.


> Lower prices don't make things more convenient or fast.

You are replying to a straw man, I never said anything like that. It makes for dull discussions.

> Even in my current place, still in Berlin but close to the outskirts without the advantages of density, a mere four grocery bags is easier to get through public transport than it is from one end of an American car park to the other.

It's unclear to me why you compare it to a geographically unrelated area. You should compare the public transport with a robotaxi alternative. At similar pricing, how is convenience and speed impacted.


>> Lower prices don't make things more convenient or fast.

> You are replying to a straw man, I never said anything like that. It makes for dull discussions.

If you didn't mean that, why did you say "When price goes down, robotaxis become a more viable option due to convenience and speed"?

> You should compare the public transport with a robotaxi alternative.

The alternative being "do I put these four bags in a trunk, or on the seat next to me?"


Not the op. If you reread what he wrote I think he’s trying to say the price going down changes the viability of the option—not the convenience and speed which he treats as inherent to the car.

Said another way: When price goes down, robotaxis (which are convenient and speedy) become a more viable option.


Thanks, that reading looks plausible :)


Yes, that is what I meant.


That's because people are behind the wheel. Once car ownership drops, the computerized traffic control can get much more efficient.


Computerised public transport is even more efficient.

Replacing public transport with robotaxis takes up far more space on the surface than there is available in a normal city, as shown in this photo:

https://danielbowen.com/2012/09/19/road-space-photo/


Not sure what you think my public transport is. My ideal computerized public transport combines vehicles of all sizes and types, and an Uber-like app lets me choose what I prefer based on cost and other attributes.


Why would you wait for 10 minutes? Robo taxis would be driving all around to be optimally available to the market.


The optimal route means picking up multiple people on the same route, and dropping them off elsewhere on the route, and then making the vehicle large enough so that they can do this comfortably, i.e. a bus.

It ends up being another form of public transport — perhaps one with no predictable timetable, or perhaps timetables are themselves a useful Schelling point, I wouldn't want to guess.

But as fixed bus routes make it possible to plan things like bus shelters, I think I'd prefer a bus over the logical progression of self-driving "taxis" that constantly move rather than finding parking spots.


That's optimal for the vehicles, but less optimal for the people, who have to walk to stops. I used to live in Germany which had fantastic public transport, but walking a mile to the nearest stop was something we took in stride. That's great for health, but didn't prevent Germans from being quite fond of their personal automobiles, and there didn't seem to be noticeably less traffic than in the US.

Meanwhile I think plenty of people find Uber perfectly convenient, just a bit expensive.

As far as physical efficiency goes, Tesla's plan is more efficient than their competitors'. Their dedicated robotaxis are either small two-seaters, or decent-size buses.


So people are not willing to wait for a robotaxi, but they are willing to wait for buses and trains?


Assuming we get to your level of value prop, it will still only be a choice for new first-time buyers.

If I already own a car, for which I am paying an EMI, with no hope of selling it for good value (because the market is dropping as new buyers disappear), it won't make economic sense for me to switch over until I've extracted the LTV from the car.

Either way it will take 1-2 decades after ubiquitous L5 availability, even if that were possible at all, and soon.


there are already cheaper ways to ride from point A to point B without owning a car (especially in urban areas) and yet car ownership has never been higher… ideal robotaxis will take at least one full generation to actually materialize.

just as a silly personal example, I took a car to the shop and got Uber credits. my wife was heading into the city for an event (had to deal with both traffic and parking) and I was like “take a free Uber” and she was like “no way, driving my own car…” my daughter on the other hand…


> despite how far it has come it cannot become ubiquitous in the next 30 years: even if the L5 magic tech existed today in every new car sold, it would take 15 years for the current fleet to cycle out.

We must have a different meaning of ubiquitous. Ubiquitous means "found everywhere" not "has eliminated everything else."

You don't even need to cycle the fleet once to meet the definition of ubiquitous. If you can get a self driving car in any city and many towns, you see them all over the place, and a third of trips are done with them, that'd be ubiquitous in my book.

I don't see why you couldn't get there in 3 decades. I don't think it's likely in 12 years but it seems possible in that timeframe.


I considered fleet change to be the transformation that eliminates driving as a needed skill.

Getting a slightly better version of Waymo in SF is just augmenting poor public transit with a very American car-centric solution.

It scales pretty poorly and cannot replace good public transit; at best it can augment it as last-mile connectivity.

There is value in that of course, and it is disruptive to a sector, but it is still a specific use case and not world-changing. You are only disrupting the taxi industry, not driving itself.

The claim at the time was not that we were going to replace taxis with robotaxis, but that driving itself would be a thing of the past. That is the nature of hype cycles.


>people will move on to the next fad

AI isn't really a fad. It's going to be something more like electricity, say.


My favorite comparison right now is to the microwave oven.

It's a novel thing. It's not going anywhere. It makes some things easier, sometimes, in both personal and professional settings. But you're not at a big loss without it and in many cases your choice to do without is a choice that favors quality, control, nuance, and skill refinement.


AI probably has more capacity to be incorporated into many things than microwaves.


I will bite, going with your analogy comparing to electricity on the other end of the spectrum.

Electricity is not ubiquitous in many parts of the global south, a full 130 years on.

The common definition is: access to an electricity source that can provide very basic lighting and charge a phone or power a radio for 4 hours per day. Not even grid-connected.

Even with this basic definition, there are still a full 660 million people who don't have access to even that. It only dropped below a billion in 2015.

https://ourworldindata.org/energy-access

You need to go beyond the basics for it to be meaningfully transformative; clean cooking fuel access is one proxy used, and that is still 2 billion people without access, for example.

Nobody is denying the transformative nature of electricity or, say, the internet, but even these things take a long, long time, more than one lifetime, to propagate and be world-changing, and not because people choose a lifestyle of living disconnected.

If electricity, clean water, sanitation, nutrition, and basic healthcare are not available to half the population, what is AI going to do in the next 3-5 years, or whatever ridiculous timeframe Sam Altman keeps talking about?

And this is not a productivity problem; there is more than enough food production capacity to feed the world, yet starvation and malnutrition are not solved.

AI may rock your world; it is not going to rock the world anytime soon, even if it were like electricity.


Something can be transformative long (long) before it's "completely available to everyone". Clearly electricity is transformative despite 660 million people not having it.

So I guess it all depends on your definition of "transformative".

If a billion people are affected? 2 billion? One continent? All of them?

If something is big in Australia, Asia, Europe and North America, is that enough to describe it as "rocking the world"?


People have been saying that for a few years now. I've been using it as a very advanced autocomplete while coding for that time and not much else.


I use it for cooking, proofreading, learning German, skimming long documents, shopping advice (by uploading photos), and asking dumb questions about physics that would get [closed] if I were to post them on Physics Stack Exchange, in addition to using it as a coding assistant.


Most of those existed before LLMs. Also, still like a microwave: not very essential.


Farming existed before the combine harvester, and like the combine harvester LLMs are very useful tools in these tasks.

You can't get a useful response from German for Dummies if you ask it to rate the email you're writing in German and let you know what needs improvement, human teachers who can are expensive, and friends may not wish to be constantly bothered by a 40 year old asking the kinds of questions a native speaker might have asked their teacher when they were 13.


As a new language learner you're, by definition, not going to be in a place to judge the quality of the "feedback" you get from it.


Course not. I can make the judgement call "this is good at German" by virtue of all the Germans around me using it.

(Conversely, I'm told it's quite bad at, say, Chinese — though mainly by people online, as there aren't many native Chinese speakers in my life right now).


Electricity probably had niche uses for the first few years but as electrical stuff got better it got used in all sorts of things. I imagine AI will go a bit like that if quicker.

Looking back at the history of electricity, there is early stuff from the Greeks and then:

Faraday's homopolar disc generator 1831

Light bulb 1870

Transistor 1947

etc. So quite leisurely.


Considering the exponential increases in invention and adoption of new technologies for the past 200 years, I'm not sure that timeline is very close to what I'd expect from a new technology in this era.

I'd argue that we've already passed peak LLM: after having been burned by it a few times, its usage as a professional or reliable technology has receded, rather than accelerated, over the past 6 months. Gone are the days of doomsayers talking about AGI taking over the world and LLMs replacing every single job at the call center and in software teams. Now everyone has scaled back to the things you can actually let it do, which is assist, with a lot of coddling/verifying.


Come on, VR, NFTs and blockchain were always abysses of void looking for a use case. Self-driving cars maybe, but development has been stalling for 15 years.


Not to their proponents particularly at the time.

For the rest of us on HN it was easier to judge how much mileage they would get, but for a long time in each cycle we were in the minority, just like the AI skeptics, including me and many here, are today on how much value we will get.

Don’t get me wrong, AI is more real and more useful than the things before it, but everything is on a spectrum of success, some more and some less, and none of it has been or will be world-changing or truly global.

The true breakthroughs rarely get public or even tech-community attention until late in the cycle, although their progress is public and can be tracked, as with EUV or mRNA or GLP-1. Incremental changes like 5G or cheaper smartphones, meanwhile, do not get as much attention as their impact should warrant, especially compared to the fads.

Hype cycles don’t correlate well with real tech cycles


I'd say VR is just another form factor for games rather than being a void looking for a use-case, but I agree about NFTs and cryptocurrency*.

* technically not all blockchains because git and BitTorrent are both also hash trees and therefore non-currency block-chains, but definitely the cryptocurrency uses of them.


The point is not that they are snake oil and have zero value. Every one of these things has some value in some area of application.

However, the hype cycle, just as now with AI, claimed it would change everything, not "hey, here is an interesting new thing and it is going to be used in select applications". Nobody is investing billions and billions on that pitch.


Consciousness is a stupid and unreasonable goal, it is basically impossible to confirm that a machine isn’t just faking it really well.

The singularity is at least definable… although I think it is not really the bar required to be really impactful. If we get an AI system that can do the work of, say, 60% of hardcore knowledge workers, 80% of office workers, and 95% of CEOs/politicians and other pablumists, it could really change how the economy works without actually being a singularity.


Those numbers are approximately zero, because demand will immediately 10x as soon as AI makes that affordable, as computers have always done. So, profitable (if the AI can cost less than a human for the work), but not changing how the economy works.


If white collar jobs get automated away to the extent that blue collar jobs were in the industrial revolution, that will be, by definition, a massive upheaval of the economy.


Comparing the site that was #8 in Internet worldwide traffic last month, has 300M weekly active users, 1B messages a day and is basically the fastest adopted software product in history to NFTs, VR and Blockchain does not make any sense.


Did you really just compare the thing that makes me able to code and ship 5x faster to... NFTs?


> code and ship

I notice that doesn't say what happened to the throughput of "diagnose, fix, and reduce technical debt." :p


It's a matter of perspective. An AI tool's value depends on your existing skill level, and on whether it is only doing what you already can do, so that you can correct it well.

However, that kind of user is rare in my experience; the vast majority are junior developers who do not understand the output well enough, now generate 5x of it that I have to review, and no longer make the effort to understand any foundational concept.

This is not new; the rise in popularity of StackOverflow created a class of developers who didn’t want to spend the time to learn properly by reading the manual.

In itself, reducing the skill level needed to be productive is not inherently bad; in the 70s and 80s only PhDs in CS could program. Each generation of tech and abstraction opened up software development to a new class of employees.

However, the difference is that the industry desperately needed expansion each time, because PCs became popular, or the desktop, or mobile devices, etc. There was enormous demand for new applications, both personal and professional, which couldn’t be satisfied with the kind of volume CS research programs could produce.

There is no such transformation currently; there isn’t a new class of users or major applications coming up in such high volume that the quality drop is compensated by quantity.

There is a lot more need for higher-quality products as the software market plateaus from the highs mobile adoption gave.


It really depends on the kind of AI. If I said I shipped an AI-genned game 5x faster, then people would indeed compare it to NFTs.


> 2 years before it was VR, few years before that NFTs and blockchain everything

Are you so deep in the SV tech echo chamber that you can’t distinguish between tech bro / snake oil salesman hype and something that actual normal people are engaged by?

The only people talking about NFTs in a non-disparaging way were delusional, in large part because they themselves had bought into it and wanted to egg others on. NFTs were just a stupid, faddish, speculative ‘commodity’ market. It’s a completely different thing.

Ditto with “blockchain”. That well was poisoned by people trying to make money from cryptocurrencies; the mechanics allowed for that. Again, it’s a completely different thing. Civilians with no vested interest weren’t sitting in restaurants singing the praises of decentralised ledgers or monkey pictures on someone’s Google Drive. I assure you. I also assure you that this IS, however, happening with ChatGPT. Out here in the real world, we see it.

This doesn’t mean that there aren’t issues with LLMs, both philosophically and in terms of their actual output. This doesn’t mean there aren’t the same old SV snake oil salespeople and YouTube wannabes trying to push some BS products and courses. But let’s not pretend for a second that this is usefully comparable to “blockchain” and “NFTs”.


I think that underestimates public engagement in these hype cycles. Billions were wasted by "normal" people worldwide on VR headsets, in-game NFTs, and shitcoins. Real-world money, not some inflated VC capital projections. And while the current LLMs are still an "advanced Eliza", regular people are quite happy to pay for their services.


I am sorry, but every A-list celebrity was shilling an NFT project; from the Super Bowl to other public spheres of discourse, it seemed all crypto. The future president of the country has a crypto project to sell; I don’t see how much more mainstream it can get.

Yes, there is a close connection between the SV hype cycle and what happens in the mainstream; most mainstream fads originate in SV, but the ones I mentioned specifically impacted the larger public, not just the SV tech sphere.


Blockchain and NFT took over the Super Bowl. That's pretty mainstream. There are Bitcoin ATMs.


Can you expand on your spreadsheet analogy?

I think Joel Spolsky explained the main Office moat well here: https://www.joelonsoftware.com/2008/02/19/why-are-the-micros...

> ... it might take you weeks to change your page layout algorithm to accommodate it. If you don’t, customers will open their Word files in your clone and all the pages will be messed up.

Basically, people who use Office have extremely specific expectations. (I've seen people try a single keyboard shortcut, see that it doesn't work in a web-based application, and declare that whole thing "doesn't work".) Reimplementing all that stuff is really time consuming. There's also a strong network effect - if your company uses Office, you'll probably use it too.

On the other hand, people don't have extremely specific expectations for LLMs because 1) they're fairly new and 2) they're almost always nondeterministic anyway. They don't care so much about using the same one as everyone they know or work with, because there's no network aspect of the product.

I don't think the moats are similar.


"Basically, people who use Office have extremely specific expectations."

Interesting point, but to OP's point: this wasn't true when Office was first introduced, and Office still built a dominant market share. In fact, I'd argue these moat-by-idiosyncrasy features are a result of that market share. There is nothing stopping OpenAI from developing their own over time.


Does Office actually have a moat? I thought the kids liked Google Docs nowadays. (No opinion as to which is actually better; the actual thing people should do is write LaTeX in vim. You can even share documents! Just have everybody attach to the same tmux session and take turns.)


If you're writing a spreadsheet in LaTeX, I suspect something has gone very wrong somewhere along the line.

Spreadsheets are as much a calculation environment as they are a table of figures, and if you're technical enough to be writing docs in LaTeX you should be doing the calculations in a more rigorous environment where you don't copy the formulas between cells.
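
As a sketch of what "more rigorous" can mean here (minimal Python, with made-up names and numbers): one named rate and one function definition, instead of a formula copy-pasted into every cell:

    VAT_RATE = 0.19  # single source of truth, not repeated per cell

    line_items = [  # (description, net price, quantity) -- all invented
        ("widgets", 4.99, 12),
        ("gadgets", 17.50, 3),
    ]

    def gross(net, qty, vat=VAT_RATE):
        # One definition of the calculation, reused for every row.
        return net * qty * (1 + vat)

    total = sum(gross(net, qty) for _, net, qty in line_items)
    print(f"Total incl. VAT: {total:.2f}")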


Excel, the world's most popular functional programming language.


Excel, the only IDE universally allowed on corporate devices


I'd argue the web browser falls into that category as well.


In more modern times, although I'd say the barrier to entry is higher there.

Plus most analogs would require additional procurement, to which the response for most business users would be "No, you're not a developer."


> This wasn't true when Office was first introduced and Office still created a domineering market share

It was absolutely true at that time, but Microsoft was already a behemoth monopoly with deep customer connections throughout enterprise and was uniquely positioned to overcome the moats of their otherwise fairly secure competitors, whose users were just as loyal then as the GP is describing about users now.

Even if OpenAI could establish some kind of moat, office applications make for very poor analogy as to how they might or whether they need to.


It totally was. A bit different, though: Excel has Lotus keybindings available to this day, the spreadsheet was the home computer’s first killer app, and Microsoft killed it and took the market share.


Sure, there's nothing stopping any business from developing a moat. The Excel example doesn't make the case of OpenAI any clearer to me.


> Can you expand on your spreadsheet analogy?

Sure.

(I've been coding long enough that what Joel writes about there just seems obvious to me: of course it happened like that, how else would it have?)

So, a spreadsheet in the general sense — not necessarily compatible with Microsoft's, but one that works — is quite simple to code. Precisely because it's easy, that's not something you can sell directly, because anyone else can compete easily.

And yet, Microsoft Office exists, and the Office suite is basically a cost of doing businesses. Microsoft got to be market-dominant enough to build all that complexity that became a moat, that made it hard to build a clone. Not the core tech of a spreadsheet, but everything else surrounding that core tech.

OpenAI has a little bit of that, but not much. It's only a little because, while their API is cool, it's so easy to work with that you can ask (I have asked) the original 3.5 chat model to write its own web UI. As it happens, mine is already out of date, because the real one can better handle markdown etc., so the same sorts of principles apply, even on a smaller scale like this, where it's more of "keeping up in real time" rather than "a 349-page PDF file just to get started".
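
For a sense of how little code "working with the API" involves, a minimal sketch using the openai Python package (assumes an OPENAI_API_KEY in the environment; the prompt is illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the "original 3.5 chat model" era
        messages=[{
            "role": "user",
            "content": "Write a minimal single-file web UI for this chat API.",
        }],
    )
    print(response.choices[0].message.content)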

OpenAI is iterating very effectively and very quickly with all the stuff around the LLM itself, the app, the ecosystem. But so is Anthropic, so is Apple, so is everyone. The buzz across the business world is "how are you going to integrate AI into your business?", which I suspect will go about the same as when it was "integrate the information superhighway" or "integrate apps"; what we have now in the business world is to the future of LLMs as Geocities was to the web: a glorious chaotic mess, upon which people cut their teeth in order to create the real value a decade later.

In the meantime, OpenAI is one of several companies that has a good chance of building up enough complexity over time to become an incumbent by a combination of inertia and years of cruft.

But also only a good chance. They may yet fail.

> On the other hand, people don't have extremely specific expectations for LLMs because 1) they're fairly new and 2) they're almost always nondeterministic anyway. They don't care so much about using the same one as everyone they know or work with, because there's no network aspect of the product.

For #1, I agree. That's why I don't want to bet if OpenAI is going to be to LLMs what Microsoft is to spreadsheets, or if they'll be as much a footnote to the future history of LLMs as Star Division was to spreadsheets.

For #2, network effects… I'm not sure I agree with you, but this is just anecdotal, so YMMV: in my experience, OpenAI has the public eye, much more so than the others. It's ChatGPT, not Claude, certainly not grok, that people talk about. I've installed and used Phi-3 locally, but it's not a name I hear in public. Even in business settings, it's ChatGPT first, with GitHub Copilot and Claude limited to "and also", and the other LLMs don't even get named.


> ...like saying Microsoft has no moat for spreadsheets

Which would be very inaccurate, as network effects are Excel's (and Word's) moat. Excel being bundled with Office and Windows helped, but it beat Lotus 1-2-3 by being a superior product at a time when the computing landscape was changing. OpenAI has no such advantage yet: a text-based API is about as commoditized as a technology can get, and OpenAI is furiously launching interfaces with lower interoperability (where one can't replace GPT-4o with Claude 3.5 via a drop-down).
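
To illustrate the commoditization point: many vendors expose OpenAI-compatible endpoints, so "switching providers" can amount to one base URL and one model string (both hypothetical below):

    from openai import OpenAI

    # Hypothetical drop-in replacement: same client, different vendor.
    client = OpenAI(
        base_url="https://api.other-llm-vendor.example/v1",  # hypothetical
        api_key="THEIR_KEY",
    )
    reply = client.chat.completions.create(
        model="their-model-name",  # hypothetical
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(reply.choices[0].message.content)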


> OpenAI has no such advantage yet: a text-based API is about as commoditized as a technology

It has branding; for most people, AI is ChatGPT. Once you reach critical mass, getting people to switch becomes difficult, if your product is good enough and most people are happy.


Branding is a very weak moat outside fashion industry and similar places - outside visible status symbols. It's a relatively strong moat for clothes, watches, even things like the iPhone.

But for things you do alone at home, this quickly goes away. Uber is a strong brand, everyone associated non-taxi taxis with Uber. When Bolt came around in much of Europe, and offered the same service with better wait times (they were paying drivers more, so lots of drivers switched), people moved over to Bolt in droves.


> Branding is a very weak moat outside fashion industry and similar places - outside visible status symbols.

Have you tried getting someone to switch from Chrome to Firefox?

UI is basically the same, the product is more performant (faster load, easy to disable all ads, etc). But the moat is how resistant a normal user is to switch from the thing they've used without having to think about it, to the new thing.

I can definitely see the ChatGPT app becoming as "sticky" as Chrome.


First of all, that's not branding, it's bias for a known, existing solution.

Second of all, people have switched browsers en masse before, it just requires a good enough reason to do so. People switched from IE6 to Firefox in a relatively large number, and then they all switched to Chrome. Even today, people on Windows tend to install Chrome, while people on Macs tend to stick to Safari, so some browser choice is happening.

Third of all, and perhaps most importantly really, Chrome is free, so the only reason to switch away would be finding some major features you want in another browser. ChatGPT is not free, and it's losing money fast at its current price point. But if they make it more expensive, as they need to unless they can significantly reduce their costs, people will look for alternatives. Other than fashion brands, I don't think you'll find almost any example of people in general using a more expensive service when a cheaper equivalent one exists and is being actively marketed.


In my opinion the difference is that a recent graduate knows to say “I don’t know” to questions they’re not sure on, whereas LLMs will extremely confidently and convincingly lie to your face and tell you dangerous nonsense.


My experience is that intellectual humility is a variable, not a universal.

Seen some students very willing to recognise their weaknesses, others who are hamstrung by their hubris. (And not just students, the worst code I've seen in my career generally came from those most certain they're right).

https://phys.org/news/2023-12-teens-dont-acknowledge-fact-ea...

And yes, this is a problem with some LLMs that are trained to always have an answer rather than to acknowledge their uncertainty.


> For current standards of what intelligence means, we'd better hope we don't get ASI in the next decade or two, because if and when that happens then "humans need not apply" — and by extension, foundational assumptions of economics may just stop holding true.

I'm not sure that we need superintelligence for that to be the case - it may depend on whether you include physical ability in the definition of intelligence.

At the point that we have an AI that's capable of every task that say a 110 IQ human is, including manipulating objects in the physical world, then basically everyone is unemployed unless they're cheaper than the AI.


While I would certainly expect a radical change to economics even from a middling IQ AI — or indeed a low IQ, as I have previously used the example of IQ 85 because that's 15.9% of the population that would become permanently unable to be economically useful — I don't think it's quite as you say.

Increasing IQ scores seem to allow increasingly difficult tasks to be performed competently — not just the same tasks faster, and also not just "increasingly difficult" in the big-O-notation sense, but it seems like below certain IQ thresholds (or above them but with certain pathologies), some thoughts just aren't "thinkable" even with unbounded time.

While this might simply be an illusion that breaks with computers because silicon outpaces synapses by literally the degree to which jogging outpaces continental drift, I don't see strong evidence at this time for the idea that this is an illusion. We may get that evidence in a very short window, but I don't see it yet.

Therefore, in the absence of full brain uploads, I suspect that higher IQ people may well be able to perform useful work even as lower IQ people are outclassed by AI.

If we do get full brain uploads, then it's the other way around: a few super-geniuses will get their brains scanned, but say it takes a billion dollars a year to run the sim in real time; then Moore's and Koomey's laws will take n years to lower that to $10 million a year, 2n years to lower it to $100k a year, and 3n years to lower it to $1k/year.
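
That decay arithmetic, as a quick sketch (the period n is deliberately left unspecified, as above):

    def sim_cost_per_year(k, initial=1e9):
        # Cost drops 100x every n years: $1B -> $10M -> $100k -> $1k.
        return initial * 100 ** (-k)

    for k in range(4):  # k = elapsed time in multiples of n years
        print(f"after {k}n years: ${sim_cost_per_year(k):,.0f}/year")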


I think this trend of using IQ as a primary measuring stick is flawed.

Human minds and AI minds have radically different architectures, and therefore have different strengths and weaknesses. IQ is only one component of what allows a typical worker to do their job.

Even just comparing humans, the fact that one person with, say, a 120 IQ can do a particular job—say they are an excellent doctor—obviously does not mean that any other human with an equal or greater IQ can also do that job effectively.


> I think this trend of using IQ as a primary measuring stick is flawed.

I agree, but it's difficult to fully describe the impact of a tech where $5 gets you the ability to do arithmetic faster than literally every human combined, even if we were all working at the speed of the current world record holder, yet it simultaneously took a billion-dollar research team 14 years to figure out how to get a robot to reliably tie its own shoelaces (this September: https://deepmind.google/discover/blog/advances-in-robot-dext...).

So, IQ it is: a rough guide to a human-equivalent intelligence scale, mapped onto a wildly different architecture so that the architecture can be reasoned about.

> Even just comparing humans, the fact that one person with, say, a 120 IQ can do a particular job—say they are an excellent doctor—obviously does not mean that any other human with an equal or greater IQ can also do that job effectively.

Indeed, though in this case I would expect that such a human could learn any of those skills.


It's difficult to fully describe, so let's just give up and use a deeply flawed benchmark? Why not try to develop benchmarks that actually work and tell us something useful instead?

The key issue is that there is no result an AI can achieve on a standard IQ test which guarantees that same AI can do any task at a superhuman level, apart from taking IQ tests. Can an LLM that scores 250 replace a human driver? Who knows? Can it replace a senior software engineer? Who knows? Can it replace a manual laborer? Again, who can say? We know a human with a 250 IQ can do all those things, but with an AI we have no idea, because those tasks have many more inputs than IQ.

Rather than IQ, which tells us almost nothing concrete, I think we should focus on what tasks it can actually achieve. What's a Waymo's IQ? Who cares?! I don't care about its IQ. I care about its ability to drive me safely across the city. Similarly, I don't care whether a coding AI can drive a car or write great novels.

Of course it's interesting to measure and speculate about IQ as it relates to AGI, but I think it gives people the very mistaken impression that we are on some kind of linear path where all we need to do is keep pushing up a single all-important number.


> It's difficult to fully describe, so let's just give up and use a deeply flawed benchmark? Why not try to develop benchmarks that actually work and tell us something useful instead?

Two reasons. First: in this sub-thread I'm focusing on employment issues due to AI, so consider the quote from above:

> At the point that we have an AI that's capable of every task that say a 110 IQ human is, including manipulating objects in the physical world, then basically everyone is unemployed unless they're cheaper than the AI.

IQ doesn't capture what machines do, but it does seem to capture a rough approximation of what humans do, so when the question is "can this thing cause humans to become economically irrelevant?", that's still a close approximation of the target to beat.

You just have to remember that AI don't match human thinking, so an AI which is wildly superhuman at arithmetic or chess isn't necessarily able to tie its own shoelaces. The AI has to beat humans at everything (at least, everything economically relevant) at that IQ level for this result.

Second: Lots of people are in fact trying to develop new benchmarks.

This is a major research topic all by itself (as in "I could do a PhD in this"), and also a fast-moving topic (as in "…but if I tried to do a PhD, I'd be out of date before I've finished"). I'm not going to go down that rabbit hole in the space of a comment about the exact performance thresholds an AI has to reach to be economically disruptive.

For a concrete example of quite how fast-moving the topic is, here's a graph of how fast AI is now beating new benchmarks: https://ourworldindata.org/grapher/test-scores-ai-capabiliti...


More importantly, basically all of the IQ tests are in the training sets, so it's hard to know how the models would perform on similar tests not in the training set.


Indeed. People do try to overcome this, for example see the difference in results between "Show Offline Test" and "Show Mensa Norway" on https://trackingai.org/IQ

Even the lower value is still only an upper-bound on human-equivalent IQ, as we can't really be sure the extent to which the training data is (despite efforts) enough to "train for the test", nor can we really be sure that these tests are merely a proxy for what we think we mean by intelligence rather than what we actually mean by intelligence (a problem which is why IQ tests have been changed over the years).

My focus in this sub-thread is more on the economic issues than the technical ones, which complicates things further: if you have an AI architecture where spending a few tens of millions on compute — either from scratch or as fine-tuning — gets you superhuman performance (performance specifically, regardless of whether you count it as "intelligent"), then any sector which employs merely a couple of hundred workers (in rich economies) will still have an economic incentive to train an AI on those workers to replace them within a year.

This is still humans having jobs, and still being economically relevant, but makes basically everyone into a contractor that has to be ready and able to change jobs suddenly, which is also economically disruptive because we're not set up for that.


This is it. Another prime example is that I can put an 80 IQ person to work on a farm or let them build something under supervision. There is no way for an AI to do either at this time, even if the AI is much smarter.


Ah, AI have been doing both farm work and "building things under supervision" work for a while now.

Robots are a thing, AI controlling robots is a thing.


Robots at this point are specialized for a highly specific task. None of them are as dynamically deployable as an 80 IQ person. You can let such a person walk around a construction site doing whatever; no such robot is available right now.

If I'm wrong please post where I can buy such a general robot.


Robots are hardware, AI is software, so in principle you can put any AI in charge of any robot.

A quick google got me this list of current uses, I'd suggest you can skim the first part as it will be clear when you get to the actual examples: https://builtin.com/robotics/humanoid-robots

(But disregard Atlas, IIRC Boston Dynamics don't sell that, only the robot dogs? But you didn't actually ask for a humanoid, so you can have the dogs as a replacement).

I've seen demos from Agility before this: https://agilityrobotics.com/

Likewise Figure being tested in BMW: https://www.figure.ai/

And are you really sure about what it means to be an IQ 80 human? Because that's where people start having problems reading plans, safety instructions, timesheets etc., following multi-step instructions, and handling unexpected situations.


A robot is not only hardware. A robot is hardware and software.

The robots in the links you posted are either not commercially available or not capable of working in an unrestricted construction environment. Such an environment does require a humanoid frame at this time because the entire world is geared to that. I agree that a few show promising demos, but that's not what I would call "available". Perhaps I should have been more clear: unless I (as in some rando corporation) can go out and buy it, I don't consider it available.


I completely agree with everything that you're saying here - my use of the term "basically everyone" was lazy in that I'm implying that at the 110 IQ level the majority (approx 70%) of people are economically obsolete outside of niche areas (e.g. care/butler style work in which people desire the "human touch" for emotional reasons).

I think that far below the 70% level we've already broken economics. I can't see a society functioning in which most people actually _know_ that they can't break out of whatever box they're currently in - I think that things like UBI are a distraction in that they don't account for things like status jockeying that I think we're pretty hardwired to do.


> At the point that we have an AI that's capable of every task that say a 110 IQ human is, including manipulating objects in the physical world, then basically everyone is unemployed unless they're cheaper than the AI.

Until a problem space is "solved", you will still need AIs that are more capable than those with 110 IQ to review the other AIs' work. All evidence points to "mistakes will be made" with any AI system I have used and any human I have worked with.


> he actively empathised with the protestors.

I have significant doubt that Sam is capable of empathy, period. It seems like what he's capable of is an extremely convincing caricature of it which he has practiced for many years.


I mean, one could say the same about basically every human (and indeed, some AI systems).


> foundational assumptions of economics may just stop holding true

Those assumptions are already failing billions of people, some people might still be benefiting from those "assumptions of economics" so they don't see the magnitude of the problem. But just as the billions who suffer now have no power, so will you have no power once those assumptions fail for you too.


> I overheard a daughter talking to her mother about ChatGPT and KI (Künstliche Intelligenz, the German for AI).

Given how much ALL of our media have been beating that drum since GPT-3, I am hoping that my cat will start talking about it without understanding what it is. Even Draghi's report has immense hidden lobbying dedicated to KI.


> The product market fit is fantastic.

> [...]

> What's not obvious is how to monetise it.

This is an interesting new use of 'product market fit'. I would have thought that in the absence of a path to monetisation, there is no market. Or we could talk about the 'market' for selling $10 bills priced at $5.


I phrased that badly, so that's fair.

If you follow the rest of that paragraph, my meaning may be clearer: there's a market for what they do right now — the OpenAI financial reports that have been bandied around say that if they were only offering this as a paid service, not also developing new models, they'd be making a profit — but it's not obvious how this will change going forward.

The first spreadsheets were easy to market: here's a solution to your problem, give us money and you can have this solution. But as time goes on, you need more than the original solution, because that's too easy to replicate.

ChatGPT was easy to replicate, and has been replicated.

The "needing more" to stay in the game is what I meant by the Red Queen race. What exists now is fine, it's a good fit, but the question of the long-term is still open.


> protesters dropped in on the after-award public talk

I’m going to guess that GP does not consider random protestors to be in Sam Altman’s ‘sphere’

> The product market fit is fantastic.

This is true insofar as you define "product market fit" as "somebody mentioning it in an Indian restaurant in Berlin"

> every definition of "very smart"

Every definition you say?

https://bsky.app/profile/edzitron.com/post/3lclo77koj22y


> Every definition you say?

When I was growing up, "very smart" people had a law degree or a medical degree or understood quantum physics and/or relativity (but all three would be silly), or spoke perhaps three or four languages (but thirty would be a joke). They might perhaps also be above average at games of skill like chess, or perhaps could improvise poetry in some specific style (but every style would be a bit of a Mary Sue).

I may not trust ChatGPT's medical diagnosis or legal analysis, but it does pass those exams; I may not expect many novel insights about unifying QM and GR, but it seems to "know" the topics well enough for people to worry it may lead to cheating in those subjects and most others besides.

And the chess score may not be the absolute best, but it's way above the average human player.

And it seems entirely adequate in quite a lot of languages (though not native), and most styles of poetry (again, not what anyone would call the Nobel committee for, but way above what most people can dream of).

> https://bsky.app/profile/edzitron.com/post/3lclo77koj22y

We also didn't have spellcheck running all the time when I was growing up, and when we did there was quite a lot of red, but even that misses things like "I'm applying for collage" and "what lovely wether*" which were also real mistakes made by actual humans.

* spelled like that it means "castrated ram"


> I may not trust ChatGPT's medical diagnosis or legal analysis, but it does pass those exams;

It's really, really, really important to note that this could easily just be memorisation (and indeed, the GSM Symbolic paper from Apple suggests that it most likely is).

We don't have a good way of assessing the ability of models literally trained on all free text in existence, but comparing their performance based on text they've seen in training is definitely not it.
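
To give a flavour of the GSM-Symbolic approach: re-instantiate benchmark items with fresh names and numbers, so that memorising the canonical wording stops helping. A toy sketch, where `ask_model` is a hypothetical stand-in for a real API call:

    import random

    TEMPLATE = "{name} buys {n} apples at {p} euros each. How much does {name} spend?"

    def make_variant(rng):
        # Fresh surface form, same underlying reasoning task.
        name = rng.choice(["Ada", "Bram", "Chen", "Dara"])
        n, p = rng.randint(2, 12), rng.randint(1, 9)
        return TEMPLATE.format(name=name, n=n, p=p), n * p

    def variant_accuracy(ask_model, trials=100, seed=0):
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            question, answer = make_variant(rng)
            if ask_model(question) == answer:
                hits += 1
        # A large drop versus accuracy on the published benchmark items
        # suggests memorisation rather than reasoning.
        return hits / trials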


What I've always disliked about calling it "memorisation" is that we already have machines with much better memorisation than what LLMs do: the hard drives with the source documents.

To the extent that it's able to respond to and with natural language, this is already an improvement over, say, grep or PageRank.

To the extent that it is memorisation in the way we mean when we say a human has memorised something, it's an example of another aspect of what was meant by "intelligence" when I was growing up.

It's definitely weird and alien as an intelligence, but it still ticks the boxes we used to have even a decade ago, 2014, let alone the late 80s to 1999 when I turned 16.


That’s an odd way to define “smart”. The current crop of LLMs have absolutely no ability to query the raw input or output. I can look at the words I’m typing and think about them. Transformers don’t even get the raw characters, and they also don’t have any feedback from later layers into the queries generated at earlier layers. So the fact that they have any ability at all to reflect on their own input is IMO both impressive and somewhat surprising.

I occasionally wonder whether one could get better results by allowing later layers to query against the K/V data from earlier layers. This might make inference, and possibly even training, of models that don’t fit in a single node’s memory rather more complicated.
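
For the curious, a minimal sketch of the idea, assuming a toy PyTorch setup (the module and variable names are mine, not from any real transformer implementation, and I've ignored masking and residuals):

    import torch
    import torch.nn as nn

    class CrossLayerAttention(nn.Module):
        # A toy layer whose queries can attend over hidden states cached
        # from earlier layers, not just over its own input.
        def __init__(self, dim, n_heads):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

        def forward(self, x, earlier_states):
            # earlier_states: list of (batch, seq, dim) tensors saved from
            # earlier layers; concatenating them widens the attendable K/V.
            memory = torch.cat([x] + earlier_states, dim=1)
            out, _ = self.attn(x, memory, memory, need_weights=False)
            return out

The catch is exactly the plumbing mentioned above: every cached state has to live somewhere until the later layers run, which is what would complicate sharding across nodes.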


> When Altman was collecting the award at Cambridge the other year, protesters dropped in on the after-award public talk/Q&A session, and he actively empathised with the protestors.

I've known many CEOs who will stare you right in the eye and say they hear your concerns, then immediately disregard you and continue with business as usual. Being a psychopath is part of the job description.


> The product market fit is fantastic. This isn't the first time I've heard random strangers discussing it in public.

People were constantly discussing crypto in public, it even regularly made it into Tagesschau. Yet it remains an (influential) niche product.

People discussing a topic in public is not a good proxy for product market fit. It does not imply those people are using said product, nor does it imply people would be willing to pay for said product.

> even the current models meet every definition of "very smart" I had while growing up, despite their errors.

I recently fed a grocery store receipt into GPT-4o and asked it what the per-kg price for the steak item was (it was listed as €/kg right next to the item name). It came up with some elaborate math, but not with the correct answer.

So yeah. These models might be able to confidently answer questions, but I found them to be (partially) incorrect more often than not.


> and he actively empathised with the protestors.

https://www.simplypsychology.org/narcissistic-mirroring.html


Even if so, that's still showing awareness of what his critics are critical of.

Now, can you make a falsifiable prediction: what would a narcissist do that a normal person would not, such that you can tell if he's engaging in a narcissist process rather than what your own link also says is a perfectly normal and healthy behaviour?


> Even if so, that's still showing awareness of what his critics are critical of.

Mere awareness isn't really that meaningful.

>>> and he actively empathised with the protestors.

>> https://www.simplypsychology.org/narcissistic-mirroring.html

> Now, can you make a falsifiable prediction: what would a narcissist do that a normal person would not, such that you can tell if he's engaging in a narcissist process rather than what your own link also says is a perfectly normal and healthy behaviour?

He doesn't have to, because I think the point was to raise doubts about your interpretation.

But if you're looking for more evidence, there are all the stories (from many different people) about Altman being dishonest and manipulative and being very effective at it. That's the context that a lot of people are going use to interpret your claim that Altman "actively empathised with the protestors."


I think an assumption that a lot of people make about people with power is that they say what they actually believe. In my experience, they do not. Public speech is a means to an end, and they will say whatever is the strongest possible argument that will lead them to what they actually want.

In this case, OpenAI wants to look like they're going to save the world and do it in a noble way. It's Google's "don't be evil" all over again.


I'll sleep easier tonight after reading this! I know other people know what you say there, but at times reading HN one would suspect that it's not that commonly known around these parts.

Maybe public figures were "saying what they meant" in, I don't know, the mid-1800s. People who grew up with "modern" communications and media infrastructure (smartphones, brain rot, streaming garbage 24/7, ads everywhere, etc) do not have the capacity to act in a non-mediatic fashion in public space anymore.

So that's the reality, I think. Not only is Sam Altman "fake" in public, so is everyone else (more or less), including you and me.

Nonetheless, it's a national sport at least in massive chunks of the English-speaking world now to endlessly speculate about the real intentions of these pharaonic figures. I've said it before, but I'll say it again: what a very peculiar timeline.


It's human nature to assume honesty and good will. Societies couldn't function if that wasn't the case. Lies and deception are only common in interactions outside of our immediate tribe and community. They're the primary tools of scammers, politicians, and general psychopaths, who seek to exploit this assumption of honesty that most people have, and they're very successful at it. The problem is that technology has blurred the line between close communities and the rest of the world, and forced everyone to accept living in a global village. So while some of us would want the world to function with honesty first, the reality is that humans are genetically programmed to be tribal and deceitful, and that can't change in a few generations.

It's hilariously easy to be "successful" in the modern world. It's so easy that a dumb person following a playbook of "deny, divert, discredit" can become president.


This. All this bullshit about "AI will kill us all" is purely a device to ensure regulatory capture for the largest players, ie the people saying said bullshit.

Pretend your meme pic generator is actually a weapon of mass destruction and you can eliminate most of your competition by handwaving about "safety".


Nobody seems to remember how the Segway was going to change our world, backed by many of the VC power figures at the time + Steve Jobs.


The hype cycle for Segway was insane. Ginger (its code name) wasn't just going to change the world, it was going to make us rethink how cities were laid out and designed. No one would get around in the same way again.

The engineering behind it was really quite nice, but the hype set it up to fail. If it hadn't been talked up so much in the media, the launch wouldn't have fallen so flat. There was no way for them to live up to the hype.


I guess it depends on what media you follow. As a Brit my recollection was hearing it was a novelty gadget that about a dozen American eccentrics were using, and then there was the story that a guy called Jimi Heselden bought the company and killed himself by driving one off a cliff and then that was about it. Not the same category as AI at all.


I was in the tech space in the UK at the time and the hype behind this was off the charts. I seem to remember a meeting with Bezos where he was super hyped about it, and we had no idea what "it" was still. The speculation was crazy.


The "Code Name: Ginger" book, by a writer embedded with the team, is excellent btw.


The Segway was a bit early, and too expensive, but I would defend it... sort of.

Electric micromobility is a pretty huge driver of how people negotiate the modern city. Self-balancing segways, e-skate, e-bike and scooters are all pretty big changes that we are seeing in many modern cityscapes.

Hell, a shared electric bike was just used as a getaway vehicle for an assassination in NYC.


e-/bikes and e-/scooters are big changes to city navigation.

e-skate and segways are non-factors. And that's the difference between a good product (ebike or even just plain old bikeshare) and a bad one (segway).


Segway is just an electric pizza delivery bike _that doesn't look like one_. That's it. The Segway City is just electric Ho Chi Minh City 1995 in the style of 2000 Chicago.

People want to die in style so much that they turn a blind eye to plasticky scooters and Nissan Leafs. To them, the ugly attempts don't exist, and the reskinned clones are the first-evers in history. But reality prevails. Brits aren't flying EE Lightning jets anymore.

Segways were kind of cool to me too, to be fair. To a lesser extent Lime scooters too. Sort of. But they're still just sideways pizza bikes.


In Segway's defense, that self-balancing tech has made and will continue to make an impact, just not a world-changing amount (at least not yet), and not via their particular company but via the companies they influenced - the same may end up true of OpenAI.


I remember serious discussions about how we'd probably need to repave all of our sidewalks in the US to accommodate the Segway


I think we all remember, and if we forget, we're reminded every time we see them at airports or doing city tours.


I don't think I've seen a Segway in close to ten years. Also I suspect most people under 25 have never even heard of Segway.


Onewheels (Segway's evolutionary grandchild) are almost as popular as electric skateboards.


So… not much? If anything, people drive electric scooters around here. Those seem to hit the sweet spot.


It reappeared as e-bikes and e-scooters - Lime, Bird, etc.


Reappeared as Electric unicycles, which look hilarious, dangerous, and like a lot of fun.


Apparently the first e-bike was invented in 1895, so I don't think it is accurate to give Segway too much credit for their creation. Anyway, the innovation of Segway was the balance system, which e-bikes don't need.

(I’m not familiar with the site in general, but I think there’s no reason for them to lie about the date, and electric vehicles always show up surprisingly early).

https://reallygoodebikes.com/blogs/electric-bike-blog/histor...


None of which is doing great.


E-bikes are everywhere.


There are about 70 lime bikes that commute to the square by my flat. There's definitely some ebike stuff ongoing.


More as hoverboards, onewheels, etc. The e-bikes and e-scooters don't really have similar balancing mechanisms.


yeah, didn't even remember those, but that's even more on point


No it didn't


You mean Steve Wozniak.

Close but no LSD


https://www.theguardian.com/world/2001/dec/04/engineering.hi...

"Steve Jobs, Apple's co-founder, predicted that in future cities would be designed around the device, while Jeff Bezos, founder of Amazon, also backed the project publicly and financially."


Steve Jobs said that Segways had the potential to be "as big a deal as the PC."


What makes you so sure about the LSD?


>and no obvious large-scale product-market fit

I'm afraid you are in as much of an echo chamber as anyone. 200 million+ weekly active users is large-scale PMF.


Exactly. There's plenty of return on investment. Knowledge workers around the world are paying nice subscription fees for access to the best models just to do their work. There are hundreds of millions of daily active users already. And those are just the early adopters. Lots of people are still in denial about the fact that they'll need this to do their work very soon. Countless software companies pay for API access to these models as well. As they add capabilities, the market only becomes larger.

OpenAI is one of a handful of companies that is raking in lots of cash here. And they've barely scratched the surface. It was only a few years ago that ChatGPT was first released. OpenAI is well funded, has lots of revenue, and has lots of technology coming up that looks like it's going to increase demand for their services.

There's a very obvious product market fit.


It’s the fastest product to 100m users ever. Even if they never update their models from here on out, they have an insanely popular and useful product. It’s better at search than Google. Students use it universally. And programmers are dependent on it. Inference is cheap — only training is expensive.

To say they don’t have PMF is nuts.


> And programmers are dependent on it.

that is clearly not the case


>It’s better at search than Google

in what world? what it's good at is suggesting things to search, because half of what it outputs is incorrect, so you have to verify everything anyway

it does, slightly, improve search, but it's an addition, not a replacement.


Two years ago half of what it output was incorrect. One year ago, maybe 30% of what it output was incorrect. Currently, maybe 20% of what it tells you is incorrect.

It's getting better. Google, on the other hand, is unequivocally getting worse.


Rubbish - there's no data that shows accuracy has improved by that much.


I brought every bit as much data to the conversation as you did.


> It’s better at search than Google.

That’s hardly a high bar now.

> And programmers are dependent on it.

Entry level ones, perhaps.


I mean, these models are super useful for small defined tasks where you can check their output.

They're also useful for new libraries or things that you're not an expert in (which obviously varies by domain and person, but is generally a lot of stuff).

I'm a data person and have been using them to generate scraping code and web front-ends and have found them very useful.

But I wouldn't trust them to fit and interpret a statistical model (they make really stupid beginner mistakes all the time), though I don't need help with that.

Like, in a bunch of cases (particularly the scraping one) the code was bad, but it did what I needed it to do (after a bunch of tedious re-prompting). So it definitely impacts my productivity on side projects at least. If I was doing more consulting then it would be even more useful, but for longer term engagements it's basically just a better Google.

So yeah, definitely helpful but I wouldn't say I'm dependent on it (but I'd definitely have made less progress on a bunch of side projects without it).

Note: it's good for python but much, much less good at SQL or R, and hallucinates wildly for Emacs Lisp.


s/programmers/front\ end\ html\ authors/


but they're not making money and there are plenty of substitutes. I bet they have practically zero paid customer retention. People say they love it, so what.


>I bet they have practically zero paid customer retention rate

Why do you think that?

I know a few people with a subscription but I don't think I know anyone who has cancelled. Even people who have mainly moved to Claude kept the plus subscription because it's so cheap and o1 is sometimes useful for stuff Claude can't do.


This is my first weekend with no subscription to a model since the day chatGPT4 came out.

I feel like I got to a point of increasingly diminishing returns from the models. I know what they are good at and what they are not good at so I started avoiding what they are not good at.

It reminds me of Jaron Lanier saying years ago that we have to be on guard to not start acting stupid to make AI seem smarter. I had surely started doing that.

This is quite a nice feeling. I can still learn things the model sucks at. I could feel a creep coming into my head that there was no point in learning something the model wasn't good at.

The models are amazing at reinventing the wheel in React. Mapping this as if it scales to all human activity though is completely delusional.


> [people who moved kept the subscription] because it's so cheap and

...


I'm not sure what your point is? Are you arguing that no company can sustain itself on $20 per month subscriptions? Or just that OpenAI can't?

Or that $20 a month isn't cheap?


These guys didn't get to where they are now by admitting mistakes and making themselves accountable. In power play terms, that would be weak.

And once you are way up there and you have definitely left earth, there is no right or wrong anymore, just strong and weak.


>So far we are multiple years in with much investment and little return, and no obvious large-scale product-market fit

Literally every market has been disrupted and some are being optimized into nonexistence.

You don't know anyone who's been laid off by a giant corporation that's also using an AI process that people did 3 years ago?


I know companies that have had layoffs - but those would have happened anyways - regular layoffs are practically demanded by the market at this point.

I know companies that have (or rather are in the process of) adopting AI into business workflows. The only companies I know of that aren't using more labor to correct their AI tools are the ones that used it pre-ChatGPT/AI Bubble. Plenty of companies have rolled out "talk to our AI" chat bubbles on their websites and users either exploit and jailbreak them to run prompts on the company's dime or generally detest them.

AI is an extremely useful tool that has been improving our lives for a long time - but we're in the middle of an absolutely bonkers level bubble that is devouring millions of dollars for projects that often lack a clear monetization plan. Even code gen seems pretty underwhelming to most of the developers I've heard from that have used it - it may very well be extremely impactful to the next generation of developers - but most current developers have already honed their skills to out-compete code gen in the low complexity problems it can competently perform.

Lots of money is entering markets - but I haven't seen real disruption.


> Even code gen seems pretty underwhelming to most of the developers I've heard from that have used it - it may very well be extremely impactful to the next generation of developers - but most current developers have already honed their skills to out-compete code gen in the low complexity problems it can competently perform.

I'm in academia, and LLMs have completely revolutionized the process of data analysis for scientists and grad students. What used to be "page through the documentation to find useful primitives" or "read all the methods sections of the related literature" is now "ask an assistant what good solutions exist for this problem" or "ask LLMs to solve this problem using my existing framework." What used to be days of coding is now hours of conversation.

And it's also above-average at talking through scientific problems.


I've heard something similar from the legal field as well. When you're dealing with massive amounts of unstructured documentation that needs to be searched LLMs seem to be doing a better job of indexing that information than conventional search indexers. I agree that it is having a pretty big impact in those fields where large amounts of unstructured data needs to be combed through.


Altman appears to say AGI is far away when he doesn't want to be regulated, right around the corner when he's raising funds, or about to happen tomorrow and be mundane when he's trying to break a Microsoft contract.


Millenarianism or millenarism (from Latin millenarius 'containing a thousand' and -ism) is the belief by a religious, social, or political group or movement in a coming fundamental transformation of society, after which "all things will be changed". - From Wiki

Correct me if this is the wrong meaning in the context. I will admit this is the first time I have seen this word. When I first read it I thought it had something to do with "millennials", also known as Gen Y.

>Robotics to be "completely solved" by 2020,

And we still don't have Level 4 / Level 5 autonomous vehicles. Not close, and likely still not in 2030. And with all the regulatory hurdles in place, even if we achieve it in the lab by 2030 it won't be widespread until 2035 or later.


++1

me too


> Is nobody in these very rich guys' spheres pushing back on their thought process?

The moment someone does that, they're no longer in the very rich guy's sphere.


Claude is great; I think Claude 3.5 Sonnet is way better than OpenAI's GPT-4. It understands and follows instructions better, and the context length is larger too. Despite GPT-4 having some context length too (duh), it often acts as if I started over. It fails at programming for me, and at basic stuff, like "giving movie recommendations without duplicates": if I send the request twice or thrice, there will be duplicates despite having instructed it to omit them.

As long as Anthropic does not dumb down their own models, they are going to be better than OpenAI, at least for what I use them for.

So, in the present, Claude is way more useful to me and I am subscribed. Right now, they do not support text-to-speech and image generation, but once they do, I will just completely abandon OpenAI.

> The board of OpenAI is supposedly going to "determine the fate of the world", robotics to be "completely solved" by 2020, the goal of OpenAI is to "avoid an AGI dictatorship".

Given the above, I doubt it is going to be OpenAI at this rate.

It seems better for educational purposes given that it shows you the code with modifications in real-time and you can run Python scripts for example, but that said, I have not tried the "Educational" style for Claude.


I tried Claude.

If hardware continues its evolution in speed over the next 10 years, I could have Claude but local and running constantly, and yeah, that would change certain things fundamentally


Try llama 3.3 70B. On groq or something. Runs on a 64GB macbook (4bit quantized, which seems to not impact quality much). Things have come a long way. Compare to llama 2 70b. It's wild


Llama 3.3 70B 8-bit MLX runs on a 128GB MacBook at 7+ tokens per second while running a full suite of other tools, even at a 130k-token context size, and behaves with surprising coherence. Reminded me of this time last year, first trying Mixtral 8x22 — which still offers a distinctive je ne sais quoi!
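
For anyone wanting to reproduce this, a minimal sketch using the mlx-lm package (pip install mlx-lm); the repo name below is illustrative, so point it at whichever quantised community build you actually have:

    from mlx_lm import load, generate

    # Model path is illustrative -- any MLX-format quantised build works.
    model, tokenizer = load("mlx-community/Llama-3.3-70B-Instruct-8bit")
    print(generate(model, tokenizer, prompt="Hello there", max_tokens=100))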


Qwen 2.5 32B Coder is actually a viable locally-hosted alternative to Claude 3.5 Sonnet.

It's not better, but if I couldn't access Claude for some reason, I would definitely use it.


When ChatGPT was down a few days back, I locally booted up Codestral. It was decent and usable.


It's been clear for a while now Elon has no one in his life that's willing to push back on his inane ideas


[flagged]


That same space company said that they'd do point-to-point commercial travel using rockets. As for significantly outperforming: ask NASA, the price per launch has gone UP, not DOWN. What's the point of reusability if it doesn't affect the bottom line in a positive way?


> The development of commercial launch systems has substantially reduced the cost of space launch. NASA’s space shuttle had a cost of about $1.5 billion to launch 27,500 kg to Low Earth Orbit (LEO), $54,500/kg. SpaceX’s Falcon 9 now advertises a cost of $62 million to launch 22,800 kg to LEO, $2,720/kg.

https://ntrs.nasa.gov/citations/20200001093

Why do you lie?


It is required to assume good faith in your HN replies… which imo makes it a nicer place to discuss things. People can be wrong about things or be referring to different information without intentionally lying.


This is pretty egregious, but I agree that assuming noble intent is the way to maintain civil discussion. No need to call him a liar - just downvote.


Even if true no need to attack a poster


The price for astronaut launches to the ISS.

I'd wager mainly because there is no competition, and there is no motivation for SpaceX to charge less.

The cost for regular launches has gone down a lot. But, to be fair, it has not gone down nearly as far as it could; SpaceX still has big margins there. I assume because, again, there is too little competition, and because going from 60 to 25 million doesn't increase the potential market by all that much, so there's no huge motivation to drop prices.

Starship might allow them to go so much lower that entirely new markets can emerge, but I'm skeptical.

It's a bit of a different story for rideshare missions; those do make it way, way cheaper to launch small satellites.


> SpaceX still has big margins there

Probably to fuel the R&D cost


> That same space company said that they'd do point-to-point commercial travel using rockets.

I think they still hope to achieve that in the future. It is just there are a lot of more critical milestones they need to hit first before they try to tackle that.


The Department of Defense is already very interested in that capability


NASA contracted two companies to perform space launches, Boeing and SpaceX.

Due to massively inflating costs and a recent catastrophic failure, Boeing has a very uncertain future in the program.

I hope this helps!


I mean once Boeing is out of the picture SpaceX is just going to raise its rates.

Tesla sells EVs much cheaper in China than the US due to actual competition.


That's Boeing's problem, not SpaceX's. Boeing used to be a great American company.


Well, SpaceX's price increases are also going to be a NASA problem, and by extension a problem for the US taxpayer.


Gonna need a source on that one.

As much as I despise Elon Musk, the cost per kg to LEO has gone down a lot in recent years thanks to spacex.


Just wanted to point out for the record that the person you're replying to said "inane," not "insane." Might not have changed your reply all that much though.

IMO SpaceX is not an inane idea, though it's arguably insane from the perspective of the era when it was founded (2002-ish), but it definitely seems to have succeeded. Tesla might be in the same bucket. But everything else he's been involved with has either been a non-starter or turned to trash (see: Boring Company, Twitter, xAI, whatever the hell the flamethrower thing was, etc). My sense is that he's getting worse over time, as mental illness, delusions of grandeur, and/or yes-men he's been surrounding himself with increasingly cloud his judgement.


SpaceX succeeded because they basically played Elon. There are enough smart people there to realize that they needed to have Elon handlers to prevent him from tanking progress.


Thank you. I used the word inane for a reason.


Even a broken clock is right twice a day


Your comment is out to lunch.


I use AI in my work as a machinist/CNC programmer on a weekly if not a daily basis.

Its primary advantage for a while has been its crazy breadth of knowledge, which it can present in a human, flexible form. But perhaps we excuse this as Generally Knowledgeable, not Generally Intelligent.

But with the most recent models I'm finding the intelligence harder to deny. They are now catching errors and false assumptions unprompted and with regularity and providing reliable feedback and criticism. Gemini 2.0 can now read prints usefully, if not perfectly and has caught multiple print errors missed by the engineers and experienced machinists alike.

It provides valuable feedback and criticism and frequently interjects when it thinks I'm barking up the wrong tree. The newest models have gotten quite good at asking for clarification rather than leaning into assumptions.

Now it's certainly not the smartest. Its creativity is mediocre and its lack of practical experience is apparent. But that's understandable, as _it's never machined a day in its life_. It's merely studied what's been written.

Sure, hallucinations and the like are still a thing (though they've been cut down drastically). But it seems a lot more intelligent than some of the burnouts coming out of trade schools.


Would you have disagreed with Ilya about AGI? Or do you just disagree with the cost and timeline?

Ilya reiterated on July 12, 2017, “Each year, we'll need to exponentially increase our hardware spend, but we have reason to believe AGI can ultimately be built with less than $10B in hardware.”


I remember seeing OpenAI like 10 years ago in GiveWell's list of charities, along with water.org, Deworm the World, 80,000 Hours, and that kind of thing.

It's a wild take to say that they have gotten nowhere and that they haven't found product-market fit.


Anyone familiar with the past 10 years of "unicorn" startups or Sam Altman's business history is perfectly content saying that OpenAI has gone nowhere and cannot find a market fit for their paid products.


OpenAI is an NFP; this is a non-trivial difference from unicorn startups.

Also, 123 million daily users. I'd say they found a market.


Are you sure?

Good Ventures / Open Philanthropy (Dustin Moskovitz) funded GiveWell and OpenAI, and GiveWell's leadership floated to OpenAI, but I'm not convinced that GiveWell funded OpenAI.

https://www.givewell.org/about/gw-op-relationship

They are certainly both entangled with the Effective Altruism community, but GiveWell came from the "humanities/finance" side, not the tech bro technoutopia side.


Didn't say they funded it. What GiveWell did was compile lists of charities and recommend one or another.


> Is nobody in these very rich guys' spheres pushing back on their thought process?

I will take a wild guess and say a qualified no, in the sense that nobody who reports directly to these people said anything against it. My conspiracy theory is that they were not idiots without their own misgivings, but that they prized their own personal gain / "professional growth" by being yes-men over doing what they had a professional responsibility to do.

My favorite example is the Amazon Fire Phone

> Jeff Bezos reportedly "...envisioned a list of whiz-bang features... NFC for contactless payments, hands-free interactions to allow users to navigate the interface through mid-air gestures and a force-sensitive grip that could respond in different ways to various degrees of physical pressure", most of which ultimately did not end up in the final product. He also "obsessively monitored the product", requiring "even the smallest decisions needed to go by him".

Did nobody question whether an expensive phone made sense for Amazon.com's value-conscious audience? If nobody (who directly reports to the CEO) dares question even a relatively minor thing like this, how can we expect them to say anything about major/existential issues at a company such as "Open" AI?

https://en.wikipedia.org/wiki/Fire_Phone


There is a steelman argument for not pushing back on plans that don't make sense. You have to remember that folks like this are not just random mad men spouting crazy ideas. This is someone that has had crazy ideas in the past AND made them happen. In some cases, more than once. If you had a front-row seat to watching someone deliver stuff that you thought couldn't be done, what would you do when they came up with the next crazy idea? It is not unreasonable to subjugate your own judgement and just see where it goes.


Echo chambers are very effective at drowning out dissenting voices.


> much investment and little return, and no obvious large-scale product-market fit, much less a superintelligence.

I'm always blown away when I see comments like this.

It just makes me think the people that say this either work for a competitor or simply haven't used their products.

The things that OpenAI and similar companies have created are literally revolutionary. It's insane. Pretending it's "little return" is a very strange opinion.


> no obvious large-scale product-market fit

I mean, I know pessimistically "ackshually"-ing yourself into the wrong side of history is kind of Hacker News's thing (e.g. that famous Dropbox comment).

But if you don't think OpenAI found product-market-fit with ChatGPT, then I don't think you understand what product-market-fit is...


That’s because it sparked a virtuous feedback loop at a time when it just so happens people’s credulity levels are off the charts. The creators of AI tools have a benefit in hyping it, as do software vendors as do hardware companies as do hyper scalers as does meta as does musk as do systems integrators like Accenture and on and on.

I think OP is saying despite all that there is little evidence that end users are actually paying significant sums of money to use it. To this point it’s a great marketing tool for companies that are all eager to be viewed as innovative and you have lots of very wealthy smart people with clout like Bezos and Zuckerberg talking it up. Like any good bubble you have to have at the core a real asset.

So of course there are people who use it daily as many anecdotes here in the comments point out. It’s a genuinely interesting and useful technology. That doesn’t mean though that it’s going to result in AGI or become profitable while liquidity conditions are still easy. I promise you Mark Zuckerberg would be singing a very different tune regarding chip investment if he were having to compete with bonds yielding 6-7%+.


It’s popular but not making money. So maybe product-audience fit so far.


There is no killer app (yet). Chat is nice and there's various tools that are useful to devs. But broadly, there is no killer app that makes _everyone_ adopt it and gladly pay for it.

https://en.m.wikipedia.org/wiki/Killer_application


ChatGPT is the killer app… one of the killest apps in the history of mankind by any measure you want to use :)


There's no app that has been adopted faster than ChatGPT. Today, just 2 years off launch, it's one of the top ten most visited sites in the world. And they had some ~10M paid users a few months ago, nearly quadruple what they had the same time last year.

How is ChatGPT not a killer app ?


Yea same with Google. I mean search is nice. But broadly there’s no killer app that makes _everyone_ gladly pay for it.

Jury is still out on whether they’ll be able to subsidize those massive server costs of indexing the entire web.


Let's ignore that Google is wildly profitable, their product was so good they immediately captured the entire market and have held the dominant position for decades, and that search was beyond "nice"...

You offer some counter example, but do not address my actual point - there is no killer app.


The cost to Google per search is way lower, though.


Plus they actually make money from it!


If you push back on them, you get pushed out. If you suck up to them and their delusional belief systems, you might get paid. It's a very self-selecting, self-reinforcing process.

In other words, it's also an affective death spiral.


> multiple years in with much investment and little return

Hmm.. where do you live? It's already quite transformative in many areas and not going to stop any time soon. I'll call it 'The Last Explosion', as opposed to 'AI Winter'. By that I mean this explosion will result in AGI. Likely sub-human first, then super-human.


> So far we are multiple years in with much investment and little return

Copilot? Recall? "Your privacy is _very important_ to us"


> So far we are multiple years in with much investment and little return

Modern (2024) LLMs are "little return"? Seriously? For me, they've mostly replaced Google. I have no idea how well the scaling will continue, and I'm generally unimpressed with AI-doomer narratives, but this technology is real.


They're not talking about your anecdotes, they're talking about the capex and returns. Which are overwhelmingly negative.


Yea, I mean, if it can't show a return in the very short term, is it even worth doing? How could something possibly develop into being profitable after years of not being so?

All you have to do is point to bankrupt bookseller Amazon, whom that dumb Jeff Bezos ran into the ground with his money-losing strategy for years. Clearly it doesn't work!


There's growth focus, and then there's doing some of the largest fundraising rounds in the history of Silicon Valley, during a time when debt is the most expensive it's been in decades, and still barely keeping the lights on because your service is so unprofitable.


Interest rates are still well below their historical average right now.

And yes, venture capital investing involves high risk, of which all participants are fully aware, and intentionally looking for.

We're talking <2 years since ChatGPT went viral for the first time. Generally these stories take at least 10 years to play out. Uber was unprofitable for 15 years. Now they're profitable. Every VC-backed startup is unprofitable and barely keeping the lights on for many years. None have reached 100M active users in a matter of weeks like ChatGPT.

They absolutely might fail. But the level of pessimism here is just plain funny. How much money are you willing to bet against them ever getting profitable?


I think there's a big difference between companies like Google and Amazon that were operated unprofitably for years but did have the option to pivot in to profitability for a lot of that time. I would also consider Uber more of an outlier than the rule in terms of blitzscaling. Maybe OpenAI will be kept alive on life support for a while because there is so much in the industry riding on AI being the next big thing, but their problems are myriad and include terrible unit economics and a complete lack of moat in an environment full of bigger organizations that can pour nigh unlimited money into trying to outcompete them. Maybe it makes me a pessimist to point out basic business fundamentals in the face of unquestioning optimism.


>I think there's a big difference between companies like Google and Amazon that were operated unprofitably for years

How many years before they reached this point ?

chatgpt.com had 3.8B visits last month and was #8 in worldwide Internet traffic. It is by far the fastest-adopted software product in history. "Some of the largest fundraising in Silicon Valley" you say ? Well, that sounds about right to me.


What part of poor unit economics do you not understand?


What part of unreasonable time frames do you not understand ?

What's so poor about the unit economics in the frame we're talking about ?

OpenAI is one of the fastest growing in revenue and adoption. It's not like you even know to what degree free subs are currently being subsidized. Do you even realize how much language model costs have gone down in just 2 years ?

You're the one who said there's a difference between the likes of Uber and Google/Amazon, who apparently could have pivoted to profitability much sooner, but then you conveniently ignore the question of when this pivot became possible.

Newsflash: almost everyone's unit economics look poor this soon out.


I mean, inference costs have decreased like 1000x in a few years. OpenAI is the fastest growing startup by revenue, ever.

How foolish do you have to be to be worrying about ROI right now? The companies that are building out the datacenters produce billions per year in free cash flow. Maybe OP would prefer a dividend?


Given how close the tech is to running on consumer hardware — by which I mean normal consumers not top-end MacBooks — there's a real chance that the direct ROI is going to be exactly zero within 5 years.

I say direct, because Chrome is free to users, yet clearly has a benefit to Google worth spending on both development and advertising the browser to users, and analogous profit sources may be coming to LLMs for similar reasons — you can use it locally so long as you don't mind every third paragraph being a political message sponsored by the Turquoise Party of Tunbridge Wells.


> OpenAI is the fastest growing startup by revenue, ever.

No it's not. Facebook hit $2B in Revenue in late 2010 - early 2011, ~5 years after its founding.

https://dazeinfo.com/2018/11/14/facebook-revenue-and-net-inc...


Finding one example of him being wrong still kinda supports his point, don't you think?


No, it makes me think there are more. Saying ", ever" suggests you actually know the space of things you're talking about.


Especially when that example is Facebook!


MySpace had $1.5B in sales in 2009.


Facebook sold $2B of ads this week.


Coinbase was founded in 2013 and hit $1B in revenue in 2019 iirc


I would definitely be worried about ROI if my main product was something my big tech competition could copy easily because they did the research that led to my product.


What do you think the capex returns were for ARPANET?

These reflexive humanist anti-AI takes are going to age like peeled bananas.


Likewise for the breathless praise.

The early internet was a public project; it didn't have to worry about its own return because the point was to build a robust communications system that could survive a nuclear strike. Seeing as OpenAI is trying to convert to a for-profit corporation, it's hardly an apt comparison.


> For me, they've mostly replaced Google.

That probably says more about you than about the tech.


I agree with your analysis. The nice thing I've realized is that this means I can just stop paying attention to it. The product sucks, and is useless. The people are all idiots with god complexes. All the money are fake funny money they get from their rich friends in silicon valley.

It will literally have no impact on anything. It will be like NFTs however long ago that was. Everybody will talk about how important it is, then they won't. Life will go on as it always has, with people and work, and the slow march of progress. In 30 years nobody is going to remember who "sam altman" was.


User name checks out


> Is nobody in these very rich guys' spheres pushing back on their thought process?

It's simpler than that. They've found that the grander the vision and the bigger the lie, the more people will believe it. So they lie.

Take Tesla's supposed full self-driving as an example. Tesla doesn't have full self-driving. Musk has been lying about it for a decade. Musk tells the same lie year after year, like clockwork.

And yet there are still plenty of true believers who ardently defend Tesla's lies and buy more Tesla stock.

The lies work.


The goal to "avoid an AGI dictatorship" may sound silly but it doesn't seem very far fetched that someone like Putin would replace themselves with a Putinbot which could then scheme to take over the world with their nukes much like now. One good thing about human dictators is at least they die eventually.


It sounds like they were worrying about Demis Hassabis rather than Putin.


> I guess it's not news but it is pretty wild to see the level of millenarianism espoused by all of these guys.

Unprecedented change has already happened with LLMs. So this is expected.

> So far we are multiple years in with much investment and little return

...because it's expensive to build what they're building.


What unprecedented change? No changes induced by LLMs are 'novel'. We've had much larger layoffs due to smaller technological improvements, for example. Productivity hasn't gone up nearly as much (a few percent?) as it did with electrification. What metric are you specifically thinking of?


Regarding the Silicon Valley mindset, Douglas Rushkoff wrote a quite good book on the topic: https://bookshop.org/p/books/survival-of-the-richest-escape-...


lol it’s funny coming in here and seeing the HN echo chamber where AI never gets better and is just a fad. They will insist AI is useless even as it begins automating entire industries and surpassing humans in every benchmark. Claiming there’s no product-market fit when ChatGPT has 200M users and companies are saving billions replacing workers with GenAI is laughable.

What keeps people from accepting this new reality? Is it ego, a fear of irrelevance due to AI inevitably eclipsing them in their most prized aptitudes?


> and no obvious large-scale product-market fit,

Really?

I use, and pay for, OpenAI every day


I'm going to favorite this thread and come back with a comment in 10 years. I think it will be fun to revisit this conversation.

If you really don't think that this line of research and development is leading to AGI then I think you are being hopelessly myopic.

>robotics to be "completely solved" by 2020

There are some incredible advances happening _right now_ in robotics, largely due to advances in AI. Obviously 2020 was not exactly correct, but we also had COVID, which kind of messed up everything in the business world. And arguing that something didn't happen in 2020 but instead happened in 2025 or 2030 is sort of pedantic, isn't it?

Being a pessimist makes you sound smart and world-weary, but you are just so wrong.


Being an optimist makes you sound naive and a dreamer. There is no scientific agreement that LLMs are going to lead to AGI in the slightest—we cannot even define what consciousness is, so even if the technology would lead to actual intelligence, we lack the tools to prove that.

In terms of robotics, the progress sure is neat, but for the foreseeable future a human bricklayer will outcompete any robot; if not on the performance dimension, then on cost or flexibility. We're just not there yet, not by a long stretch. And that won't change just by deceiving yourself.


> line of research and development is leading to AGI

What do you mean by AGI, exactly? If you want to come back in 10 years to see who's right, you should at least provide some objective criteria so we can decide if the goal has been attained.


I'm talking about this:

https://en.wikipedia.org/wiki/Artificial_general_intelligenc...

>Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks.

Most (if not all) of the tests listed on the wikipedia page will be passed:

>Several tests meant to confirm human-level AGI have been considered, including:

>The Turing Test (Turing)

This test is of course already passed by all the existing models.

>The Robot College Student Test (Goertzel)

>The Employment Test (Nilsson)

>The Ikea test (Marcus)

>The Coffee Test (Wozniak)

>The Modern Turing Test (Suleyman)


This debate is tiring from both sides. The best LLMs can already beat me in “most cognitive tasks” if you think that means sampling over possible questions.


For-profit isn't the problem. Lying about being non-profit to raise funds, and _then_ become for-profit, that's the underlying concern.


Too bad not disclosing that you always intended to convert the non-profit into a for-profit during your testimony while numerous senators congratulate you about your non-profit values isn't problematic.

https://www.techpolicy.press/transcript-senate-judiciary-sub...


Can this be considered perjury?


corporate puffery?


Some might characterize OpenAI leadership as not 'consistently candid'.


Yes, which means his post is an attempt to smear the credibility of Musk, not make a legal defense.

If this were a legal defense, this would be heard in court.


It's true. Also, it looks like Musk's original statement was correct that they should have gone with a C-corp instead of an NFP.


That should be top comment on all OpenAI threads.

Just like “open source and forever free” - until of course it starts to make sense charging money.


>For-profit isn't the problem

It was a problem around 2014–2022. Even the lying part isn't new. It has always been there. But somehow most were blind to it during that period.

Something changed and I don't know what it is. But the pendulum is definitely swinging back.


Also, the missing message here is "we want to become for-profit"


Yes, but it's easy to hate Musk while thinking you're siding with the good guys...


There are no good guys at the executive level of the AI world: it's money and power dressed up in saving-humanity language.


Ignoring all the drama, this part is interesting:

"On one call, Elon told us he didn’t care about equity personally but just needed to accumulate $80B for a city on Mars."


One of the best things I've ever read. I'm going to use this in my next salary negotiation.

Oh you know I don't really care about the number, it's just that I'm working on this plan to desalinate all the water in the oceans.



This made me laugh but also made me think, "...that's it? That's all it takes?"


Land on Mars is cheap...

But no, I really doubt that's all it takes. Unless you discount all of the R&D costs as SpaceX operational expenses.


I imagine that's what he's doing. He's willing to put a lot of company money into getting the city on Mars started, because if he's first there, he's gonna set himself (or his dynasty?) up to make hundreds of billions of dollars. Being effectively in control of your own planet? Neat. Scary too.


Doing what exactly? What industry could Mars possibly support profitably?


> What industry could Mars possibly support profitably?

Government crew and supply launch contracts probably, though these don't need to be "profitable" in the conventional sense for the entity footing the bill.

All that's needed to be profitable for the launch provider is to convince Congress that it's in America's interest to establish a public-private project for a moon base and a Mars base. Now more than ever, when framed against a background of an ascendant Chinese Space program that will soon be sole operator of a space station.

Once the "Race to Mars" is won, NASA can grow soy-beans, do bone density studies, and any other science shit the NASA nerds can come up with post-hoc, but the main point will just be being there, and having the Stars and Stripes on the first flagpole planted in Martian soil.


Why would I need to choose just one? He’s the only person with a serious presence on the planet. He builds a landing pad that works exclusively with his rockets, and ANYTHING that happens on the planet will be going through him. Hell, he can probably convince China or the US to pay to build the habitats if he wants to.


But nothing will be happening on the planet, except a few rovers, because there is nothing of value whatsoever on Mars that is not literally a thousand times cheaper to find/do on Earth.


Except Mars itself. Tourism can make a lot of money.


Not this type of tourism. It might make a small amount of money, but there is no huge market for exploration of barren wastelands that is extremely expensive (think a hundred thousand dollars per trip per person) yet zero-comfort, where you can't even see the sights for more than maybe an hour without risking radiation poisoning. Will there be a trickle of extremely wealthy daredevils? Sure. But this is neither a Greek island nor Monaco.


I agree it's not as straightforward, and I accept your version of the future as a very real possibility as well, but I'm focusing more on the aspect of showing off something that differentiates you. Already on Instagram plenty of posts gather views and social capital from one great picture of a specific view, even though the actual experience of getting there, and everything around it, is miserable. People pay a lot of money for something that makes them feel special, and going to Mars would have that property maxed out.


Even his cheapest estimates are still in the $100,000 USD range; I don't think you can support enough tourists to colonise Mars when the ticket alone costs that much.

I can see plenty of people going to Mars — religious groups, cults, anyone who wants to explore the final frontier, the most extreme preppers (which Musk arguably is given his "multi-planetary" arguments), and anyone who would like to build their own mountain-sized space castle.

What I can't figure out is what any of those people might sell to Earth such that people on Earth are willing to trade for it, except for information, which people on Earth can also produce without paying so much for a ticket.


I kinda see him as the Andrew Ryan of Mars' Rapture :-P

(if you haven't played BioShock this probably won't make much sense - but if you have, and listened to all the audio logs, I'm sure you'll find a bunch of parallels)


I don’t know how deeply they analysed it, but Kurzgesagt seem to think the Martian moons have particular value as central gravity wells, to then reach further out to mine asteroids

https://youtu.be/dqwpQarrDwk?feature=shared


Yeah and Mars is a shitty place to live. And will always be a shitty place to live. No amount of fantastical "terraforming" is going to create a magnetosphere.


Well actually... you can create a magnetic shield in front of the planet by putting a large nuclear-reactor-powered superconducting magnet at the Sun-Mars L1 point, which would fully shield Mars from charged particles from the sun. No new technology is required besides making the thing relatively maintenance-free. https://phys.org/news/2017-03-nasa-magnetic-shield-mars-atmo...


Well, maybe you can.


Making a magnetosphere should be easier than terraforming an entire planet. It's just some hundreds of thousands of kilometers of wires, and you can even use the wires while you are at it.

He obviously has plans to do neither of those, which makes me question why he even wants to go there... But a magnetosphere strikes me as "earlier infrastructure", while terraforming is "is that even possible?"


That sounds bad until you consider all the other alternatives.


Solar powered non-urgent batch compute. Mars has a shitton of land to build on and no people whining about you cutting down some rainforest or something to make room for your projects.


Space is a lot bigger and a lot nearer


Lifting materials into space is very expensive. If you build on a planet (or a moon) then you can mine it for materials reducing the expense.


Our moon is much closer to both us and, importantly for energy sources, the sun; and getting things off our moon can be done with things like SpinLaunch.

SpinLaunch on the moon wouldn't even need the vacuum chamber they use on Earth, but it would still need that on Mars.

The biggest down-side of the moon is the long night, but the solution to that is the same as one possible solution to the lack of magnetosphere on Mars: a very, *very* thick wire.

(I've not done precise calculations for this, but I have considered the scale needed for a "thick wire" on an Earth-scale global power grid; for that I was getting Earth's magnetic field at a distance of 11 km from a 1.36e6 A current, which in turn meant a 1 m^2 cross-section. Naturally this is only going to happen if you have some serious automation for processing lunar/martian regolith into aluminium.)
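For anyone who wants to check that figure, here's the arithmetic as a quick sketch (assuming the long-straight-wire approximation B = mu0*I/(2*pi*r) is fair at that distance; the ~25 microtesla "Earth-like" target is my reading of it):

    from math import pi

    mu0 = 4e-7 * pi               # vacuum permeability, T*m/A
    current = 1.36e6              # amperes, the figure from above
    distance = 11e3               # 11 km, in metres
    B = mu0 * current / (2 * pi * distance)
    print(B)  # ~2.5e-5 T, i.e. ~25 microtesla: the low end of
              # Earth's surface field (roughly 25-65 microtesla)
    # 1.36e6 A through a 1 m^2 cross-section is ~1.4 A/mm^2,
    # a modest current density for a bulk aluminium conductor.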


Tourism?


Perhaps the "not dying on Earth" industry, after climate catastrophe hits.


There's no climate scenario in which Mars is more habitable than Earth. Even if a Texas-sized asteroid crashed into Earth, Earth would still be more habitable than Mars.


I think this is true in almost any scenario. A Mars base is a second chance in the same way that a single potted orchid, suspended by a gossamer thread over a roiling vat of acid, is a second chance for your lawn. If humanity completely wiped itself out on easy mode, it will not last long on Mars after that.


This is strangely comforting


While I’m convinced we’re going to screw this planet up, the gap between “as bad as we can make Earth” and “as good as we can make Mars” is pretty huge, right? And not in a way that is optimistic for Mars.


True, it's probably easier to survive on Earth in some luxury bunker than on Mars, no matter how much we destroy Earth. Alternative theories: billionaire space tourism; it was never really about Mars but about asteroid mining; it was never about Mars, he just wants the government subsidies.


> True, it's probably easier to survive on Earth in some luxury bunker than on Mars, no matter how much we destroy Earth.

Definitely, and by a large margin.

If I said Mars combines the warmth of Antarctica with the moisture of the Sahara, the air pressure of the peak of Mount Everest, the environmental quality of a superfund cleanup site, the surface illumination after the 1883 Krakatoa explosion and subsequent volcanic winter, and the UV radiation from a completely destroyed ozone layer…

…if I said all that, I'd be understating how harsh Mars is.

The most positive thing I can say about colonising Mars is that the mere ability to actually do so will mean being able to recover from any possible catastrophe that comes our way.


Looking at his ventures, I think he is after power and leverage.


He's already made hundreds of billions of dollars though? A permanent Martian colony is an incredibly ambitious endeavor and even with a trillion dollars there's a reasonable chance the colony fails.


I don't think Elon is driven by money per se. I think he just wants to see expensive cool shit happen in his lifetime.


I wish he'd choose to do expensive cool shit that helped people or cured diseases and stuff.


He's working on letting the blind see, and on stuff like EVs and communications. (https://www.mobihealthnews.com/news/elon-musk-s-neuralink-de...)


There are a lot of people doing that expensive cool shit. Not everyone’s expensive cool shit needs to be the same.


Hum... You (or he, or whoever) can't practically be in control of an entire planet.


Being first to market matters a lot, is my main point.


He's high on ketamine.


Lol, Apollo took a few percent of the entire US federal budget.

Also, astronauts were willing to risk their lives putting the Stars and Stripes on the moon. I doubt Musk can inspire the same zeal...


Of course not, but Musk habitually underestimates the difficulty of things by about an order of magnitude.


SpaceX started with a small fraction of that, just $100 million.


With a reusable launch vehicle yeah that's ballpark. Depends how you define "city" though.


I will always love Kim Stanley Robinson, but I don't care: please, Musk, go to Mars, you can have it.


I wonder if Elon Musk is of the Red or the Green mindset.


He is a Green. Has talked about detonating nuclear weapons over the poles to melt ice.


$80 trillion would not be enough.


Measured another way, just a bit under 2 Twitters.


When he bought it, sure? Probably more like 20 X's


Offset by the value of toppling a hostile administration that had him in its crosshairs. Is that worth $XXB? Maybe not, but it's worth something.


I'm not sure if X actually caused that, given that even 4 years ago people were suggesting dementia in both Biden and Trump, and that Harris was kinda just parachuted in rapidly and without sufficient preparation when the fear about Biden having dementia got too loud to ignore.

And also that X was never as popular as Facebook.

On the other hand, the polls seemed pretty close, so perhaps it made enough of a difference on the margin.


The fact that owning Twitter was worth half a Mars colony to him should give you an idea of how seriously he's taking this whole thing. It's up there next to "Full Self Driving" and "$25,000 EV" in the Big Jar of Promises Used To Raise Capital and Nothing Else.


He bought Twitter for $44B.

His Tesla stock was flat YTD until the election.

Post-election it is up roughly 70% YTD and has paid for Twitter and the Mars colony multiple times.

Hard to say if that happens without him owning Twitter.


Are we really giving Twitter that much credit these days? I feel like we gave it less credit when it was actually popular. I would give Elon jumping around on stage more credit than what Twitter/X did for this election.


[flagged]


> I know it is hard to imagine, but 1/2 the United States voters see it as the last free speech area of the internet

It is very easy to imagine that. I can also imagine shuffling a bunch of cold cuts and slices of bread and making a sandwich like on Scooby-Doo, but that also has no bearing on reality.


In no way do the election results justify that claim.


> Hard to say if that happens without him owning Twitter.

It's fairly easy.

In almost every Western country the incumbent administration has been punished by voters due to inflation; this has been the case in the UK, Germany, Romania, France, Mexico... the list goes on. So Trump could have won without Elon buying Twitter.

Similarly, he could have donated to Trump without buying Twitter, been on stage, and spent all day on Twitter saying nonsense without the purchase. So being close to Trump was possible without buying Twitter.

The market would have reacted the same way, because the market is reacting to the fact that Trump is a corrupt leader, and being close to him means the market will be skewed to benefit his cronies (in this case Elon). If I'm not wrong, Trump has already mentioned creating "self-driving car initiatives", which probably means boosting Tesla's dangerous self-driving mode, and they have also alluded to investigating rival companies to Tesla and SpaceX, or at least "reviewing their government contracts". Other industries without social-media owners, like private prisons, also skyrocketed after Trump won, and those paid Trump as much as Elon did while not being on social media. The market would have reacted to Trump being corrupt regardless of Elon buying Twitter.

So it's easy to say that his stock would be up 70% without buying Twitter, as long as he kept the $250 million investment in the Trump campaign and the market assessed the Trump admin as affecting market fairness, both of which would have happened without his purchase.


The ironic part is that Elon's own attorney, Alex Spiro, represented $100 million in BHR (China) shares for Hunter Biden. Alex also tried to save Jim Baker's job (the FBI official) at Twitter, which is why Alex was removed. Those BHR shares are now controlled by Hunter's Malibu attorney.

Alex's law firm was able to get this story removed from The Sun in one day. But it's true; Alex's passport is online due to this deal-

http://archive.today/InaZQ


Twitter famously deplatformed Trump, as a sitting president. They were a captured institution and it was shown that corrupt intelligence agencies were directly influencing Twitter's censorship. There was no other distribution channel available. Messaging is important. Counter narratives are important.


> In a June 2023 court filing, Twitter attorneys strongly denied that the Files showed the government had coerced the company to censor content, as Musk and many Republicans claimed.[8] Former Twitter employees asserted that Republican officials also made takedown requests so often that Twitter had to keep a database tracking them.[9]

https://en.wikipedia.org/wiki/Twitter_Files


> shown that corrupt intelligence agencies were directly influencing Twitter's censorship

That is a hyperbolic way of saying the FBI used the report feature everyone has access to when worried about domestic terrorism along the lines of the Oklahoma City bombing. Not exactly government censorship - not a potato-potahto thing.


It doesn't sound exactly ridiculous when Zuckerberg also admitted those same intelligence agencies pressured them into censoring the Hunter Biden laptop story.


That's so absurdly far from what Zuck said it's an outright lie.

"Zuckerberg told Rogan: "The background here is that the FBI came to us - some folks on our team - and was like 'hey, just so you know, you should be on high alert. We thought there was a lot of Russian propaganda in the 2016 election, we have it on notice that basically there's about to be some kind of dump that's similar to that'." He said the FBI did not warn Facebook about the Biden story in particular - only that Facebook thought it "fit that pattern".


Your rebuttal is corroborating my "lies".


So to you "intelligence agencies convinced Facebook that future big stories around leaks might be Russian propaganda" is the same as "FBI pressured Facebook to take down the Biden laptop story"?


Facebook didn't censor themselves.

Mark admitted the source of the censorship on Rogan's podcast and again doubled down on this fact with a letter to Congress.

https://x.com/JudiciaryGOP/status/1828201780544504064/photo/...


This. In hindsight, buying Twitter at a loss was well worth the long- (though more like medium-) term results. As much as it disgusts me, I'm impressed.


Note he didn’t just buy Twitter. He was forced to buy Twitter after trying to pump and dump the stock.


That's a mischaracterization. He tried to get out of the deal because the stock market overall had crashed significantly between 2021 and 2022 and Twitter was no longer worth $44B.


Honestly, I'm not convinced it was about that. I'm sure he likes the stock going up, but I think he is somewhat earnest about why he bought Twitter, or at least about what he wanted to do with it once he got stuck with it.

As per usual, he bet the house and won. The vibes are shifting, Trump won, etc


[flagged]


I've been hearing "nearly feature complete" for over a decade now: https://en.wikipedia.org/wiki/List_of_Predictions_for_Autono...


> v13 is way better and safer than a great Uber driver, let alone average.

Will need to see a source for this.

Especially since NHTSA crash reporting data is not made public.


I’m sure soon enough we won’t have to worry about NHTSA keeping that sort of data private (because that agency will simply be found inefficient and eliminated).



Wake me up when it stops having a hard time recognizing that it's raining so it can turn on the wipers reliably.

Then we can talk about FSD


Town squares go a lot further than I imagined


Elon Musk is nothing if not famous for astronomically over-promising and under-delivering.


You can see from the emails that he outright does it just to "create a sense of urgency", and he wants others to do the same. It does get results, but it churns through employees a lot as well. It's a good recipe for achieving great things; the problem is random middle managers of random SaaS B2B products thinking they need to do the same.


He is overly optimistic about timelines, but he usually delivers. Or did I imagine his company catching a fucking rocket out of the air with chopsticks? Guess that was under-delivering.


Red Dragon? The Falcon 9 booster 24-hour turnaround? The plethora of missed milestones required to land on the moon's surface?


Just about the only open part about OpenAI is how their dirty laundry is constantly out in the open.


I think one issue this highlights is how despicable all these individuals are.

The way I read this: Elon Musk wanted OpenAI to commit fraud. They refused. He went away. They decided to commit the same fraud. He sued.

That makes a strong case in both directions. There's a legal principle (estoppel) that you can't take opposing positions on the same legal issue.


Don't come here with your nuance! It'll confuse people and they won't know what side to take! The horror!


Is this one of the 12 days of OpenAI?


If this is a GPT-generated joke, I'd say they cracked AGI.


It seems the humans pursuing AGI lack sufficient natural intelligence. I'm sad that humans with such narrow and misguided perspectives have so much power, money, and influence. I worry this won't end well.


then surely you can create your own company and do it much better than them


Great idea, I’ll just drop everything in my life and do that


Sorry I triggered you.


I'm amazed OpenAI made these disclosures. My main takeaway was how wrong all the predictions of Sutskever and Brockman turned out to be, and how all their promises have been broken. Also interesting that up to 2019 OpenAI was focused on RL rather than LLMs.


What were their predictions?


"we have reason to believe AGI can ultimately be built with less than $10B in hardware"

"Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots" (2017)

"We will completely solve the problem of adversarial examples by the end of August"


> "we have reason to believe AGI can ultimately be built with less than $10B in hardware"

As a person who actually builds this infrastructure for data centers: Bwahaha!!!

This guy should have been laughed out of the room, and probably out of a job, if ANYONE took him seriously. There are Elon levels of delusion, and then there is this!


Elon built xAI's infrastructure for less than $6B, so obviously Elon thinks so too.


Was he there to make predictions?


What would robotics completely solved even mean?


Humanoid robots operating in the world built for humans performing useful tasks


LLMs never took off until they were combined with RL via RLHF. RLHF was discovered in their RL research on game playing. GPT-3 was out for quite a while with much lower impact than the ChatGPT release; it finished training in something like December 2019 (I read somewhere) and was released mid-2020. There were later, better checkpoints, but it still didn't have much impact except for code completion.

With just a raw language model, instructions and chat didn't work to nearly the same degree.

Both elements are important, and they were early in both. Ilya's first email here talks about needing progress on language:

2016

Musk: Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn’t sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach.

Ilya: It is not the case that once we solve “concepts,” we get AI. Other problems that will have to be solved include unsupervised learning, transfer learning, and lifetime learning. We’re also doing pretty badly with language right now.


> You can’t sue your way to AGI. We have great respect for Elon’s accomplishments and gratitude for his early contributions to OpenAI, but he should be competing in the marketplace rather than the courtroom.

Isn't that exactly what he's doing with x.ai? Grok and all that? IIRC Elon has the biggest GPU compute cluster in the world right now, and is currently training the next major version of his "competing in the marketplace" product. It will be interesting to see how this blog post ages.

I'm not dismissing the rest of the post (and indeed I think they make a good case on Elon's hypocrisy!) but the above seems at best like a pretty massive blindspot which (if I were invested in OpenAI) would cause me some concern.


> biggest GPU compute cluster in the world right now

This is wildly untrue, and most in industry know that. Unfortunately you won't have a source just like I won't, but just wanted to voice that you're way off here.


> This is wildly untrue, and most in industry know that. Unfortunately you won't have a source just like I won't, but just wanted to voice that you're way off here.

Sure, we probably can't know for sure who has the biggest, as they try to keep that under wraps for competitive reasons, but it's definitely not "wildly untrue." A simple search will show that they have, if not the biggest, then damn near one of the biggest. Just a quick sample:

https://nvidianews.nvidia.com/news/spectrum-x-ethernet-netwo...

https://www.yahoo.com/tech/worlds-fastest-supercomputer-plea...

https://www.tomshardware.com/pc-components/gpus/elon-musk-to...

https://www.capacitymedia.com/article/musks-xais-colossus-cl...


I've physically visited a larger one; it is not even a well-kept secret. We all see each other at the same airports and hotels.


Because they are all located in the same small town?


Technically, it may be the world's biggest single AI supercomputer.

But that ignores Amazon, Google, and Microsoft/OpenAI being able to run training workloads across their entire clouds.


I don't think you've been paying attention to the industry, even though you're posturing like an insider.


The distinction is that larger installations cannot form a single network. Before xAI's new network architecture, only around 30k GPUs could train a model simultaneously. It's not clear how many can train together with xAI's new approach, but apparently it is >100k.



Really? Meta looks to be running larger clusters of Nvidia GPUs already

https://engineering.fb.com/2024/03/12/data-center-engineerin...

This doesn't account for inhouse silicon like Google where the comparison becomes less direct (different devices, multiple subgroups like DeepMind)


Even just Meta dwarfs xAI's cluster, with an estimated 350k H100s by now.


Two months ago Jensen Huang did an interview where he said xAI built the fastest cluster, with 100k GPUs. He said "what they achieved is singular, never been done before": https://youtu.be/bUrCR4jQQg8?si=i0MpcIawMVHmHS2e

Meta said they would expand their infrastructure to include 350k GPUs by the end of this year. But my guess is they meant a collection of AI clusters, not a single large cluster. In the post where they mentioned this, they shared details on two clusters with 24k GPUs each: https://engineering.fb.com/2024/03/12/data-center-engineerin...


What's singular is putting 100k H100s in a single machine. Which, yay, cool supercomputer, but the distributed supercomputer with five times the machines runs just as fast anyway.

Huang is still a CEO trying to prop up his product. He'd tell you that putting an RTX 4090 in your bathroom to drive an LED screen mirror is unprecedented if it meant more sales and more clout.


It's rich coming from Sam Altman, the guy who famously tried to use regulatory capture to block everyone else.


Game recognizes game.


> IIRC Elon has the biggest CPU compute cluster in the world right now

Do you have a source for this? I don’t buy this when compared to Google, Amazon, Lawrence Livermore National Lab…


The claim seems to mostly be coming from NVIDIA marketing [0].

[0] https://nvidianews.nvidia.com/news/spectrum-x-ethernet-netwo...


I first heard it on the All-In podcast, but I do see many articles/blogs about it as well. Quick note: I mistyped CPU (and rapidly caught and fixed it, but not fast enough!) when I meant GPU.

[1]: https://www.yahoo.com/tech/worlds-fastest-supercomputer-plea...


Surely Meta has the biggest compute in that category, no? I wouldn't be surprised if Elon went around saying that to raise money though.


Maybe Elon is doing both: competing in the marketplace and in the courtroom. And in advising the president to regulate non-profit AI.


Agree, he is doing both. But if he's competing in the marketplace, it seems pretty off base for OpenAI to tell him he should be competing in the marketplace. So I think my criticism stands.


I don’t think their suggestion ever implies that he isn’t.


If they believed he was already competing in the marketplace, then what would be the point of saying "he should be competing in the marketplace rather than the courtroom"? I'm genuinely trying to understand what I'm missing here, because it seems illogical to tell someone they should do something they are already doing.


What you are missing is the context of this phrase. It is almost exclusively used when the two entities are already competitors.

Some of your confusion may come from the fact that this phrase is intended neither as a statement of fact nor as a suggested course of action. It is instead a rhetorical flourish intended to imply that one company has a subpar product and is using the legal system to make up for that.


>Isn't that exactly what he's doing with x.ai? Grok and all that?

They aren't saying he isn't. But he is trying to handicap OpenAI, while his own offering at this point is farcical.

>It will be interesting to see how this blog post ages.

Whether Elon's "dump billions to try to get attention for The Latest Thing" attempt succeeds or not -- the guy has an outrageous appetite to be the center of attention, and sadly people play along -- has zero bearing on the aging of this blog post. Elon could simply be fighting them in the marketplace, instead he's waging a public and legal campaign that honestly makes him look like a pathetic bitch. And that's regardless of my negative feelings regarding OpenAI's bait and switch.


Eh, Grok is bad, but I wouldn't call it farcical. It's terrible at multimodal, but in terms of up-to-date cultural knowledge, sentiment, etc., it's much better than the stale GPT models (even with search added).


> biggest GPU compute cluster in the world right now

Really? I'm really surprised by that. I thought Meta was the one who got the jump on everyone by hoarding H100s. Or did you mean strictly GPUs and not any of the AI specific chips?


Good point, I don't know if it's strictly GPUs or also includes some other AI specific chips.

Nvidia wrote about it: https://nvidianews.nvidia.com/news/spectrum-x-ethernet-netwo...


oh wow. I think your original assertion is correct. Wow. What a crazy arms race.


People change their minds all the time. What someone wanted in 2017 could be the same or different in 2024.


Sure, but the nuance is Elon only wants what benefits him most at the time. There was no philosophical change, other than that now he's competing.

He's allowed these opinions, we're allowed to ignore them, and lawyers are allowed to use this against him.


That is true of most people and is the most common reason people change their minds.


[citation needed]


> but the nuance is Elon only wants what benefits him most at the time.

Isn't that almost everyone? The people who left OpenAI could have joined forces, but everyone went ahead and created their own company "for AGI".

It's like the wild west where everyone dreams of digging up gold.


> Isn't that almost everyone?

Sure. That's why we have contracts and laws to restrict how much one can change without paying or going to jail. Not all changes are equal.


I am used to their articles being sterile and formal; this reads like some teenager spilling their tea on social media.


so true, it's cringey af


Google/Gemini have none of this baggage.


Google/Gemini are also the only ones who are not entirely dependent on Nvidia. They are now several generations into their in-house designed and TSMC manufactured TPUs.


Broadcom is now the second biggest AI chip producer thanks to Google. Apple recently announced they will also work with Broadcom on something similar.


Google is its own very special kind of baggage, though.


Neither does Anthropic/Claude.


It's amusing how Sutskever kept musking Musk over the years (overpromising with crazy deadlines and underdelivering):

In 2017 he wrote

"Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots (though no one should pass the Turing test)."

"We will completely solve the problem of adversarial examples by the end of August."

Very clever to take a page from Musk's own playbook of confidently promising self-driving by next year for a decade.


That’s embarrassing, and it should be noted when he’s treated as a guru (as today, when I guess he gave a talk at the NeurIPS conference). Of course, he should be listened to and treated as a true expert. But it’s becoming clearer from watching public people that extreme success can warp their perspective.


I mean, he wasn't that far off. The Turing test is well and truly beaten, regardless of how you define it, and I sure wouldn't want to go up against o1-pro in a programming or math contest.

Robotics being "solved" was indeed a stupid thing to assert because that's a hornet's nest of wicked problems in material science, mechanical engineering, and half a dozen other fields. Given a suitable robotic platform, though, 2020-era AI would have done a credible job driving its central nervous system, and it certainly wouldn't be a stumbling block now.

It's been a while since I heard any revealing anecdotes about adversarial examples in leading-edge GPT models, but I don't know if we can say it's a solved problem or not.


> The Turing test is well and truly beaten, regardless of how you define it

Unless the question the human asks is 'How many l's in llama'


Yeah, snark really settles the question, right up until the model gets better. Go try to fool o1-pro with that schtick.


This month, a computer solved the first Advent of Code challenge in eight seconds.

Everyone on Hacker News was saying "well of course, you can't just feed it to a chatbot, that's cheating! the leaderboard is a human competition!" because we've normalized that. It's not surprising, it's just obvious, oh yeah you can't have an Advent of Code competition if the computers get to play as well.

Granted it took seven years. Not three.


I think the achievements in the past couple of years are astonishing, bordering on magic.

Yet confidently promising AGI/self-driving/Mars landings in the next couple of years, over and over, when the confidence is not justified makes you a conman by definition.

If the number 3 means nothing and can become 7 or 17 or 170 why keep pulling these timelines out of their overconfident asses?

Did we completely solve robotics or prove a longstanding theorem in 2020? No. So we should lose confidence in their baseless predictions.


Self-driving is not so much a technological problem as it is a political problem. We have built a network of roads that (self-evidently) can't be safely navigated by humans, so it's not fair to demand better performance of machines. At least, not as long as they have to share the road with us.

'AI' landings on Mars are the only kind of landings possible, due to latency. JPL indisputably pwned that problem long before anyone ever heard of OpenAI.

Theorem-proving seems to require a different toolset, so I don't know what made him promise that. Same with robotics, which is more an engineering problem than a comp-sci one.


The cars are still worse than humans.


On an uneven playing field, yes. If we'd designed our roads for the robots, the robots would do better.

In any case the robots are getting better. Are we?


Pre-Transformer-paper emails. Fun to read.


Neither they nor Elon can or should be trusted to tell the truth. The only utility this statement should have is to illustrate whatever public narrative OAI wishes to affect.


True. But after Elon's twitter lies and world domination ambitions he showed during the past 5 years, I just can't support his narrative.


> September 2017: We rejected Elon's terms because giving him unilateral control of OpenAI and its technology would be contrary to the mission

Why the rejection in 2017 when in 2024 the company moved towards a similar goal?


Because Sam.


> Here is a cool video of our bot doing something rather clever: https://www.youtube.com/watch?v=Y-vxbREX5ck&feature=youtu.be....

> The HER algorithm (https://www.youtube.com/watch?v=Dz_HuzgMzxo) can learn to solve many low-dimensional robotics tasks that were previously unsolvable very rapidly. It is non-obvious, simple, and effective.

> In 6 months, we will accomplish at least one of: single-handed Rubik’s cube, pen spinning (https://www.youtube.com/watch?v=dDavyRnEPrI), Chinese balls spinning (https://www.youtube.com/watch?v=M9N1duIl4Fc) using the HER algorithm

Taken down now. Anyone have it?

> Lock down an overwhelming hardware advantage. The 4-chip card that <redacted> says he can build in 2 years is effectively TPU 3.0 and (given enough quantity) would allow us to be on an almost equal footing with Google on compute.

Who is this? It isn't Cerebras. SambaNova?


I like this practice of publicly airing out dirty laundry.


It’s certainly in response to Elon getting more involved in the government.

Recently the CFO basically challenged him to try and use his influence against competition “I trust that Elon would act completely appropriately… and not un-American [by abusing his influence for personal gain]”.

The best thing they can do is shine as much light on his behavior in the hope that he backs down to avoid further scrutiny. Now that Elon is competing, and involved with the government, he’ll be under a lot more scrutiny.


It's naive for anyone to think that Elon won't use his influence in government to empower his companies and weaken his competitors.


I think he is less worried about competitors than the government itself. Specifically, unelected bureaucrats with lots of power and few responsibilities.


Elon now IS an unelected bureaucrat with lots of power and few responsibilities. If he's really afraid of that kind of thing, he should throw himself out.


That's what he is doing. He designed the department to have an expiration date.


He will actually use his position in the government to take down his competitors (by, for example, ending the EV tax credit). Elon has grown into one of the most corrupt guys around.


That would be out of character for someone that opened Tesla's patents and lets competitors use the supercharger network.


He spent $250m+ on helping get Trump elected.

Of course he is going to try and recoup some of that money back.


He already did. Look at Tesla’s stock since the election.


> Now that Elon is competing, and involved with the government, he’ll be under a lot more scrutiny.

That's the cutest fucking thing I've heard this year. In what world is anyone going to scrutinize Elon Musk? He's the right-hand man of the most powerful person in the world. The time for scrutiny was 8 years ago.


Scrutiny, noun: critical observation or examination


He'll be under scrutiny, just not by anyone with any power whatsoever to stop or even meaningfully influence his behaviors.


Scrutiny without the ability to exert oversight, control or any modicum of restraint is... utterly useless to everyone except the historians.


No immediate ability to exert oversight, but scrutiny can be a major factor in how things go in 4 years, when the voters get to exert control.


>... when the voters get to exert control.

If*. We hope we can.


Well Congress could meaningfully influence behaviors, but it seems very unlikely unless something drastic happens.


I didn't even realize he was American until this year. I thought he was African.


Agreed! (assuming that wasn't sarcasm)

Has a nice "nailing my grievances to the town square bulletin board" feel. Doesn't result in any real legal evidence either way, but it's fun to read along.


This is not airing out dirty laundry. This is calling bullshit on a bullshitter making bullshit claims.


Sounds like Elon's "fourth attempt [..] to reframe his claims" might actually be close to the target.

Otherwise, why would they engage in a publicity battle to sway public sentiment precisely now, if their legal case wasn't weak?


I'm generally in the camp of "I wouldn't miss anyone or anything involved in this story if they suddenly stopped existing", but I don't understand how engaging in a publicity battle is considered proof of anything. If their case was weak, what use is it to get the public on "their side" and they lose? If their case is strong, why wouldn't they want the public to be on their side?

I hope they all spend all of their money in court and go bankrupt.


So much public drama around an AI company. Just curious how it will impact their brand and relationships with enterprise customers, who usually seek stability when it comes to their service providers.


A few weeks ago my OpenAI credits expired and I was billed to replace them. I had no idea this was the business model. Fine, you got me with your auto-renew scam because you decided my tokens were spoiled.

At some point, OpenAI became who they said they weren't. You can't even get ChatGPT to do anything fun anymore, as the lawyers have hobbled its creative expression.

And now they want to fight with Elon over what they supposedly used to believe about money back in 2017.

Who actually deserves our support going forward?


And now they want to fight with Elon

Elon sued OpenAI, not the other way around


The free credit or the tokens? Because that's a very different story.


One year ago I gave OpenAI $100 to have credits for small hobby projects that use their API.

They expired and disappeared, and then OpenAI charged me $100 to reload my lost money.

I am sure this is what I agreed to, but I personally thought I had been hacked or something, because I didn't expect this behavior from them.

They lost me as an API user and monthly ChatGPT Plus user. I hope it was worth it. They want enterprise, business, and pro users anyway, not my small potatoes.


[flagged]


> They expired and disappeared

There's no button for that.


Expiring tokens is a pretty standard business model. Are they lying and saying the tokens never expire?


Nope, they aren't lying, but they lost me as a customer. My Arby's gift card lasted longer than my OpenAI credits. It's a horrible business model; I wish them luck, but I won't be part of supporting bad models.


Several states banned expiring gift cards.

The gift card companies put up signs saying "never expires!" as advertisements in those states.

Ethics aren't exactly abundant with this stuff.


Don’t give them any ideas. Next time Starbucks will be popping up in the ChatGPT output, offering to convert your unused tokens to rewards there.


Standard for whom? That's an awful model that probably shouldn't be legal. They're already getting a free loan for a service that gets cheaper to provide over time.

At most, tokens should expire after years of inactivity and multiple warnings to reset the timer.


"I don't like this thing, so rather than not support that business model the government should make it illegal."


What's your argument? Surely you know that whether that's a reasonable position depends on what "this thing" is.

If "this thing" is a business model that puts money in an account and then makes the money expire, I think it should be illegal just like gift card expiration often is.

There's no reason to prefer consumer pressure against predatory practices. And their business would run just fine without that aspect. It's not important to their overall business model.


ChatGPT is way less hobbled than Gemini or Claude.


regardless of what you think of it, the drama is at least entertaining!


I wonder if there are PR people out there who watched the WordPress vs. WP Engine disaster from the sidelines and took notes.

For me this rhymes with recent history...


What, you can't feel the Christmas spirit?


As an aside, it's weird (and annoying) that they call him "Elon" rather than "Musk".


Off-content comment: I think this is the first time I've seen Cloudflare's "are you a bot" protection on a corporate blog post.

I think it's somehow related to AI companies viewing the web as valuable data: to be stolen if you don't have it, and protected property if you do.


Maybe a weird question, but how does a capitalist economy work where AGI-performed-labor is categorically less expensive than human-performed labor? Do most people who labor for wages just go unemployed, and then bankrupt, and then die?


There are essentially two answers to this question, and neither is in the interests of capital, so there will be no progress toward them until it is forced.

The first is the Bill Gates "income tax for robots" idea, which does a kind of hand-wavy thing about what counts as a robot, how the government will distribute the taxes, etc. That one is a mess: impossible to get support for and nearly impossible to transition to.

The other idea, put forth by the Democracy in Europe Movement 2025 (DiEM25), is called a universal basic dividend. It essentially says that to make AI owned by humanity, the utility of automation should be calculated and a dividend paid out (just like to any other stockholder) as a percentage of each company's profit derived from automation. It becomes part of the corporate structure rather than a government structure, so I think this one kinda has merit on paper, but there's likewise zero motivation to implement it until it's virtually too late.
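As a toy sketch of the dividend arithmetic (the rate, profit figure, and population here are all invented; as far as I know the proposal doesn't pin down a formula):

    def ubd_per_person(automation_profit, dividend_rate, population):
        # A fixed share of automation-derived profit is paid out
        # equally, like a dividend where every citizen holds one share.
        return automation_profit * dividend_rate / population

    # e.g. $500B of automation-derived profit, a 10% dividend, 300M people:
    print(ubd_per_person(500e9, 0.10, 300e6))  # ~$167 per person per year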


I don't quite follow the logic of either of these proposals. If (a big if) AGI displaces human labor, then most people will not have any income from employment. If that happens, who has the income to purchase the output of the AGI? If the addressable market served by the AGI declines drastically, then where do the profits come from? If the AGI has low/no profits, where does the tax or dividend come from?

The only solution I can come up with to this conundrum is if the output of the AGI is provided to people for free, so the value is captured directly without the intermediating commercial steps. However, there will still be friction in this process (in the form of physical inputs), so I don't see how this is a long-term sustainable approach.

Good thing I don't believe in AGI so this is, at best, a philosophical debate.


A wealth tax, with the value declared by the owner as a requirement for the government defending the property rights.

Valuations deemed too low (attempted tax evasion) can be challenged by the government calling the owner's bluff and forcing a sale to the highest bidder at the price set by the second-highest bidder.
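This is roughly what economists call a Harberger tax: self-assessment kept honest by a standing forced-sale threat. A minimal sketch of the mechanism as described above (the tax rate and the exact challenge rule are my assumptions):

    TAX_RATE = 0.02  # assumed annual rate on the self-declared value

    def annual_tax(declared_value):
        # Owners pay tax on whatever value they declared themselves.
        return TAX_RATE * declared_value

    def challenge(declared_value, bids):
        # The government calls the owner's bluff: the highest bidder
        # wins, paying the price set by the second-highest bid, if that
        # beats the owner's declaration. (Assumes at least two bids.)
        top, second = sorted(bids, reverse=True)[:2]
        if second > declared_value:
            return ("sold to top bidder", second)
        return ("owner keeps it", declared_value)

Declare low and you save on tax but risk a forced sale below true value; declare high and you pay more tax. Honest valuation becomes the equilibrium.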


No one has a strong answer for this. This question was the origin of a lot of universal basic income chatter and research beginning around 2014. The idea being: whoever creates superintelligence will capture substantially all future profits. If a non-profit (or capped-profit) company does it, they could fund UBI for society. But the research into UBI has not been very decisive. Do people flourish or flounder when receiving UBI? And what does that, in aggregate, do to a society? We still don't really know. Oh, and also, OAI is no longer a non-profit and has eliminated the profit cap.


I remember watching an episode of the 1960s police drama, Dragnet.

In one of the episodes, Detective Joe Friday spoke with some computer technicians in a building full of computers (giant, at the time). Friday asked the computer technician,

> "One more thing. Do you think that computers will take all our jobs one day?"

> "No. There will always be jobs for humans. Those jobs will change, maybe include working on and maintaining computers, but there will still be important jobs for humans."

That bit of TV stuck with me. Here we are 60 years later and that has proven true. I suspect it will still be true in 60 years, regardless of how well AI advances.

Dario Amodei, former VP of research at OpenAI and current CEO of Anthropic, notes[0] a similar sentiment:

> "First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the “10%” expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AI’s are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach “a country of geniuses in a datacenter”.

Amodei does think that we may eventually need to organize economic structures with powerful AI in mind, but this need not imply humans will never have jobs.

[0]: https://darioamodei.com/machines-of-loving-grace


To summarize: fears of AI outcompeting humans for labor rely on the "lump of labor" fallacy, the same fallacy employed to justify anti-immigrant sentiment in the form of "they took our jobs"?


Comparative advantage. Even if AGI is better at absolutely all work the amount of work it is capable of doing is finite. So AGI will end up mostly being used for the highest value, most differentiated work it’s capable of, and it will “trade” with humans for work where humans have a lower opportunity cost, even if it would technically be better at those tasks too.

Basically the same dynamic that you see in e.g. customer support. A company’s founder is going to be better than your typical tier 1 support person at every tier 1 support task, but the founder’s time is limited and the opportunity cost of spending their time on first-contact support is high.


I think you assume that there will only be one model. However, as we can see today, there can also be a variety of prices/qualities. Why wouldn't the tier 1 support people be also replaced, but by cheaper models?


I am pretty convinced that's the outcome. It's too much credit to say that is their plan; I think they frankly don't care, as long as they get to win, control, and increase their power within their circles.

None of these people live with empathy. They are all on their own narrative, which, insofar as it accounts for other people at all, has them trying to speed-run the story path where everyone hero-worships them.


The way comparative advantage works, even if an AGI is better than me at both Task A and Task B, if it is relatively better at Task A than at Task B, it'll do Task A and I'll do Task B.

I think a lot of people confuse AGI with infinite AGI.
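A toy numeric version of that (all the numbers are made up for illustration):

    # Output per hour:
    agi_a, agi_b = 100, 50        # the AGI is absolutely better at both tasks
    human_a, human_b = 1, 2

    # Opportunity cost of one unit of B, measured in forgone A:
    print(agi_a / agi_b)          # 2.0 -- the AGI gives up 2 A per B
    print(human_a / human_b)      # 0.5 -- the human gives up 0.5 A per B

    # B is cheaper (in forgone A) when humans make it. With finite
    # AGI-hours, total output is maximised by the AGI doing A and
    # humans doing B, even though the AGI is better at both.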


… what

No, you’re confusing finite AGI for an AGI that can only compete with a single person at a time.

In a world of AGI better-than-humans at both A and B, it’s a whole lot cheaper to replicate AGI that’s better at A and B and do both A and B than it is to employ human beings who do B badly.


Again, you're just assuming you can infinitely replicate AGI, and there's no reason to think that's the case. It's only always cheaper to replicate AGI if there is no resource constraint on doing so.


An AGI that is competent enough to perform any task better than a human can perform the tasks of controlling robots, mining equipment, construction equipment, factories, and compute foundries, to create more robots, more mining equipment, and so on.

Look up into the sky tonight. That glowing whitish disk is made of the resources you're worried about being constrained, and it is a tiny fraction of the resources available.

It may be a finite quantity of resources, but with regards to the economic impact it's enough that it may as well be infinite compared to what humans can do… and what we can experience as a consumer. I mean, sure, a McKendree cylinder is bigger than an O'Neill cylinder, but the scale of what's being suggested by AGI is literally everyone getting their own personal fleet of 60 or so O'Neill cylinders — enough to spend a lifetime exploring, to be a playground for anything any of us want. Demand gets saturated.

And even that's not enough.

From the other side of the equation, we humans have a basic energy cost to staying alive.

There are already domains where AI can do things for energy costs so low that it isn't "economical" to pay for the calories needed to keep a human brain conscious long enough to notice that it has seen the result, much less to do that task.

Should that become true across all domains, there's no comparative advantage to keeping us alive. Hide the farm where your food grew behind PV, disassemble the sand and clay in the soil into constituent atoms and use the silicon to make more compute or more power.


If things are bad enough, some people will think Mars sounds better. It won’t be, but people will want to believe it will and marketing will suggest it.


And also, if everyone is out of a job and has no money, who is buying the products made by robots?


The actual money doesn't evaporate; someone somewhere always has some in an account and/or in notes and coins.

If all the money ends up in one place, whether with one single person or an entire group, and they have no need to spend any of it ever because the robots they control will make everything for them, then the rest invent a replacement currency and continue their own disconnected economy without the results of the automation.

But the very fact that some person or group is capable of hoarding all the money because they no longer need to spend it, means they have no real reason to care if it's taken away or diluted by government orders for "inflationary" (in this case more like "anti-deflationary") money printing.

How well that new economy works for those without automation is difficult to guess and depends on what real resources (if any, currency in the abstract doesn't count but precious metals might) get monopolised by those with automation, though the transition to it would likely be disruptive.

But that disconnect only happens when the robots can, amongst other things, make more robots (if robots can't do that, then humans get paid to make robots), at which point money as we use it today stops being relevant: if I had such automation, it would stop mattering to me whether my tenants pay rent or not, regardless of whether they have that automation themselves, and I could tell a robot to make another robot and give it to my tenant at no meaningful loss to myself.


The whole plan for most of the AI sycophants is to earn a trillion dollars off of selling AI, and whatever happens to the plebs is an afterthought (really, not a thought at all).

I suspect and hope there are going to be a lot more Luigi Mangiones coming about soon, though.


By artificially recreating scarcity in virtual space and reproducing the same kind of economic dynamic there, what Antonio Negri called 'symbolic production'. Think fashion brands, video game currency, IP, crypto, Clubhouse: effectively the world we're already in. There's an older essay by Zizek somewhere where he points out that this has already played out. Marx was convinced the 'general intellect', what we now call the 'knowledge economy', would render traditional economics obsolete, but we just privatized knowledge and reimposed the same logic on top of it.


Most people have nothing to contribute in the virtual/knowledge realm. Certainly not enough to live on.


Careful, we are not supposed to discuss how many human jobs will remain. The script clearly states the approved lines are "there will still be human jobs" and "focus on being adaptable". When it seems normal that Fortune 500 execs are living at sea on yachts, that's when you'll know OpenAI realized their vision for humanity.


I would think certain recent events in New York will probably be a bigger impetus for Fortune 500 execs living at sea on yachts


We might have fewer future vigilante executive assassins if our government, and the FTC in particular, would deign to enact consumer protections against things like AI-based medical-claims recommendations that go beyond initial triage/routing to human handlers.

We're going to continue watching the rapid AI-driven annihilation of the demand for human customer support specialists. They are just the first major rung of the ladder to fall.

Without protections, or even UBI, it's hard to imagine things getting safer for anyone.


The question shows a total disconnect from reality. People who are unemployed with no money today don't die. Even street people don't die.

If AGI labor is cheaper and as effective as human labor, prices will drop to the point where the necessities cost very little. People will be able to support themselves working part-time jobs on whatever hasn't been automated. The tax system will become more redistributive and maybe we'll adopt a negative income tax. People will still whine and bitch about capitalism even as it negates the need for them to lift a single finger.
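
For what it's worth, a negative income tax has a simple mechanical form (Friedman's version). The threshold and rate below are placeholders for illustration, not a policy proposal:

    # Negative income tax sketch. THRESHOLD and NIT_RATE are made up.
    THRESHOLD = 30_000   # income at which net tax is zero
    NIT_RATE = 0.50      # fraction of the shortfall paid out as subsidy

    def net_transfer(income: float) -> float:
        """Positive = government pays you. The subsidy shrinks as
        earnings rise, so extra work always nets extra money."""
        if income < THRESHOLD:
            return NIT_RATE * (THRESHOLD - income)
        return 0.0  # above the threshold, ordinary positive tax applies

    print(net_transfer(0))       # 15000.0 -- the income floor
    print(net_transfer(20_000))  # 5000.0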


>People who are unemployed with no money today don't die. Even street people don't die.

Small point of contention here, but 100% of people die, and 'street people' exceptionally quickly.

>The tax system will become more redistributive and maybe we'll adopt a negative income tax.

And when the productive economy profits from automation while the government tax revenue decreases from lost income tax, what do you think this arbitrary money printing will do to the USD?


Until now, I didn't think that poverty impacting mortality was a controversial statement. Poverty is the fourth greatest cause of US deaths[1].

Why would I hire a part time worker when I could hire an AI for cheaper? When I said categorically, I meant it.

1. https://medicalxpress.com/news/2023-04-poverty-fourth-greate...


Quite a goal post move from "just go unemployed, and then bankrupt, and then die" to "poverty impacts health"


I didn't say health; you did. I said mortality, which is synonymous with death.

Misquoting someone to prove a point is intellectually bankrupt. Take a breath and calm down. Why don't you try steelmanning instead?


"Negatively impacts mortality" doesn't mean "kills" or "causes to die". "Bad for health" is a better interpretation. Do you really believe that unemployment + bankruptcy = death? If you don't believe that, then why are you writing sentences like this:

> Do most people who labor for wages just go unemployed, and then bankrupt, and then die?


Yes. The all-cause mortality hazard ratio for unemployment is 1.85[1]. Nearly doubling mortality is an incredible result. Reducing my argument to a guarantee of death in order to prove it wrong is absurd.

1. https://pmc.ncbi.nlm.nih.gov/articles/PMC4677456/
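
For anyone unfamiliar with hazard ratios, here's roughly what HR = 1.85 means under a simple proportional-hazards model (the baseline rate below is made up for illustration):

    import math

    HR = 1.85         # all-cause mortality hazard ratio from the cited study
    BASELINE = 0.01   # assume a 1%/year baseline mortality hazard (made up)
    YEARS = 10

    surv_employed = math.exp(-BASELINE * YEARS)         # ~0.905
    surv_unemployed = math.exp(-BASELINE * HR * YEARS)  # ~0.831

    print(surv_employed, surv_unemployed)
    # The instantaneous death rate is 1.85x at every moment, and it
    # compounds into a meaningfully lower survival curve over time.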


> People will be able to support themselves working part-time jobs on whatever hasn't been automated

That assumes anything remains that hasn't been automated.

The "G" in "AGI" means different things to different people, because generality isn't really a boolean that you have or don't have, but if the AGI has a G at least as general as human intelligence, then everything we can do will be automated.


Why are they airing their dirty laundry like this? None of what they've said makes either side look good. In fact it makes OpenAI look like the money-obsessed hypocrites we always knew them to be.


Because Elon sued OpenAI


I've vented in previous comments this year about how weird it is that we've collectively become okay with corporations, politicians, public figures, yadda yadda, putting out statements that everyone knows to be bullshit. It's the norm, and it's fucking weird.

So, strictly in that sense, I'm appreciating this level of "openness".


From Musk's email:

"Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn’t sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach."

Genuine question I've always had is, are these charlatans conscious of how full of shit they are, or are they really high on their own stuff?

Also, it grinds my gears when they pull probabilities out of their asses:

"The probability of DeepMind creating a deep mind increases every year. Maybe it doesn’t get past 50% in 2 to 3 years, but it likely moves past 10%. That doesn’t sound crazy to me, given their resources."


You should read what he says about software engineering. He's clearly clueless


I'm interested, can you point me to some interviews or posts with him talking about it?


Amongst people who think probabilistically, this isn't a weird statement. It's a very low-precision guesstimate. There is a qualitative difference between 50-50, 90-10, 99-1, etc., and it's only their best guess anyway.


Just because you can generate numbers between 0 and 1 doesn't make them meaningful probabilities.

Are these based on data? No, they're rhetorical tools used to sound quantitative and scientific.

Nobody will be applying the calculus of probability to these meaningless numbers coming out of someone's ass.

And most importantly, is he himself willing to bet a significant fraction of his fortune based on these so-called probabilities? I don't think so. So they're not probabilities.


A conversation like this happens every minute between investors, people who work at hedge funds, trader types, bookies, people who work in catastrophe insurance, etc. They just think this way. These are "priors" in the Bayesian sense, based on intuition. Notice the lack of precision. Nobody says "50%" or "10%" to sound scientific. I'm 99.9% certain it's better than using ambiguous terms like "likely", "probably", "certainly possible" and so on.
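
One way to see why 50-50, 90-10, and 99-1 are meaningfully different buckets rather than fake precision: on the log-odds scale, where Bayesian evidence adds up, each step requires roughly an order of magnitude more evidence.

    import math

    def log_odds(p: float) -> float:
        """Log-odds: the scale on which independent evidence adds."""
        return math.log(p / (1 - p))

    for p in (0.50, 0.90, 0.99):
        print(f"{p:.2f} -> odds {p / (1 - p):5.1f}:1, log-odds {log_odds(p):.2f}")
    # 0.50 -> odds   1.0:1, log-odds 0.00
    # 0.90 -> odds   9.0:1, log-odds 2.20
    # 0.99 -> odds  99.0:1, log-odds 4.60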


I like to use the 0.1% CYA too, but that makes you sound 99.9% less sure.


1 in 1000 is probably the best way to put it. 99.9% and 0.1% sound like fake precision, even though they aren't.


The 50-50 probability is way overused. It's as if people subconsciously think that two outcomes make the odds 1:1.


> Frankly, what surprises me is that the AI community

I came here thinking about this exact part. Well, many of them, but this one in particular.

What surprises me about Elon is how much he can talk about other peoples' work without doing any of it himself. And yet each time I hear him talk about something I'm well-versed in, he sounds fairly oblivious yet totally unaware of that fact.

His go-to strategy seems to be hand-waving with a bit of "how hard could it be?"

He's very fortunate he has competent staff.


At his level of wealth and power he can unfortunately peddle all sorts of highfalutin technical-sounding nonsense with zero accountability.

Few people besides the likes of Yann LeCun can afford to call bullshit on him and survive the reality distortion field.


When people kiss your ass all day and view wealth as a sign of intelligence (which would make Musk the smartest man in history by a wide margin), it's more understandable.


This is supposedly common in models of grandiose narcissism, which is a subtype associated with leadership traits (agentic extraversion). I'm not saying anyone has it, or that it's necessarily a bad thing, but it might lead you to some insights into the traits behind this type of behavior.


The throwaway diss of Blue Origin was pretty funny.

The biggest miss from Elon seems to be that he overestimated Google's lead. Did Google really drop the ball on staying in first place, and if so, why?


This has been discussed.

Yes, Google, and particularly DeepMind, were far ahead when OAI started. They even published the first paper on Transformers, I think. But the people who ended up productising the entire thing were OAI, because the incentives weren't there for Google to make ChatGPT.


They are baiting for a lawsuit. Posting this right now isn't very beneficial for OpenAI, since they themselves want to loosen the self-imposed restrictions.


I’m neither an Elon fanboy nor a hater. But I do wonder both what he possesses and is possessed by to have created not just a successful company, but era-defining companies plural.


SpaceX is era defining. The rest are pretty run of the mill.

One could also say that SpaceX happened because the US desperately needed a launch vehicle of its own.


Hmm.

I would argue that Tesla became era-defining by the efforts of Musk; and I get the impression that within the USA Tesla is still leading with regard to electric personal transport.

Worldwide is a different matter: other brands have learned the lessons, replicated what he did right, and improved on what he did wrong. So while he still defined the beginning of the era of electric cars, he's no longer important to their ongoing success.


Well, it seems Elon is playing to win at all costs. Win what, though? "The Culture" from those Iain Banks books?


Whole series of books. Banks would have detested where Musk has ended up, though - he was a committed socialist.


I found all the discussions about Dota pretty amusing. I had no idea it was such a big thing for them early on.


If I’m replaced by a DOTA bot I'm going to be pissed. It could at least be a bot for a good game, like StarCraft or something.


I'm curious why they ended up focusing on DOTA. Prize money? Did Valve make the game particularly amenable to a machine interface?


Not sure. Off the cuff, as someone who only played MOBAs when they were Warcraft 3 maps:

* limited choices for actual commands to issue, but still lots of decisions to make

* a very active competitive community

* easy-to-evaluate win conditions and (I think?) somewhat easy-to-evaluate "am I doing well" signals (e.g., falling behind on experience means you're doing poorly)

So maybe it is a convenient genre.


As opposed to the non-profit it currently is? There's absolutely nothing open about OAI.


I'm not 100% up on the facts, but from a cursory read this seems deeply dishonest on OpenAI's part to me.

Musk appears to be objecting to a structure that places profit-driven players ("YC stock along with a salary") directly in the nonprofit... and is suggesting moving them to a parallel structure.

That seems like a perfectly valid and frankly ethically/governance-sound point to raise. The fact that he mentions incentives specifically suggests to me he was going down the line of reasoning outlined above.

Framing that as "Elon Musk wanted an OpenAI for-profit"... I don't know, maybe I'm missing something here, but "dishonest framing" is definitely the phrase that comes to mind.


This seems quite unprofessional.


I would love to know the identity of their GC, the team that does it all


dumb question - in one of the emails they mention ICO. What is that?

> I have considered the ICO approach and will not support it.

...

> I respect your decision on the ICO idea

Pretty sure they aren't talking about Initial Coin Offerings. Any clue what they mean?


Altman created Worldcoin so maybe he did mean Initial Coin Offering.


It’s in an email from January 2018, between Silicon Valley tech bros, next to discussion of fundraising. Of course they’re talking about an initial coin offering.


Why not?


From Brockman's email:

"Our biggest tool is the moral high ground. To retain this, we must: Try our best to remain a non-profit. AI is going to shake up the fabric of society, and our fiduciary duty should be to humanity."

Well, reading this in 2024 with (so-called) "Open"AI going for-profit, it aged like milk.

Also a few lines later, he writes:

"We don’t encourage paper writing, and so paper acceptance isn’t a measure we optimize."

So much for openness and moral high ground!

This whole thread is a masterpiece in dishonesty, hypocrisy and narcissistic power plays for any wannabe villains.

It's amusing to see they keep their masks on even in internal communications though. I'd have thought the messiah complex and benevolence parade is only for the public, but I was wrong.


These people talk like 20th century communists in a Vienna coffeehouse.



> You can’t sue your way to AGI


I wonder who wrote this missive?


GPT-4 :)


Sam Altman said this around a year ago: "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes". I'm wondering whether AI had any input into this piece, which tries to persuade the public that the person making the accusations isn't good either.


> each board member has a deep understanding of technology, at least a basic understanding of AI and strong & sensible morals.

> Put increasing effort into the safety/control problem

... and meanwhile they are working to get defense contracts, which are used to kill human beings in other countries, or to fund organizations that kill humans.


Important context: Elon Musk's close friend from the PayPal days and fellow libertarian tech billionaire David Sacks has been selected as the Trump admin's czar for AI and crypto.

This is why OpenAI and Sam Altman are understandably concerned.


For additional context: These PayPal guys are very much contra Google and YC (Sam/pg).


I know Elon fucked Brin's wife and is currently fighting with Altman over the OpenAI fortune, but is there actually any broader beef there?


Sacks and PG don't see eye to eye. There is some beef there. PG is a big Altman supporter. To be fair he also thinks highly of Musk as a founder, but does not agree with his politics.

It would seem to me there is a larger rift developing in SV between the YC clique and the All-In podcast gang.


Yes ... and?

The part that raises eyebrows is how a non-profit suddenly becomes a for-profit, from a legal standpoint.


I think it's supposed to be juxtaposed to this tweet of his:

> I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?


Well, apparently that's a very bald-faced lie, if the timeline claimed by Sam Altman is even close to plausible.

Elon Musk's image is, for some reason beyond me, one of a "common man" who doesn't know all that much about business (and he's trying to pander to Twitter fans with that).

Even more fascinating is that people are apparently buying into it. As if you could just stumble into that much business success (short of inheriting it) without having a very firm grasp on how the corporate structures behind it work.


He is suing to try to stop them becoming a for-profit. This post is showing that he originally supported the idea.


>This post is showing that he originally supported the idea.

Yes ... and?

That still wouldn't make it legal. The lawsuit will be decided based on jurisprudence, not based on "what Elon thinks is right".

This is a very amateurish move, tbh. Did they hire Matt Mullenweg's legal team?


A useless argument. If Elon does, in fact, have the legal authority to prevent OpenAI from becoming a for-profit due to his initial investment (and these emails seem to indicate that the other founders think he does), then it doesn't matter whether he wanted it then, only whether he wants it now, when the decision is being made. The fact that this is being presented to the public and not to the court is further evidence of its weakness.


A non-profit with a for-profit subsidiary...

Legal, despite the stench.


(Could still be) illegal.

Private inurement is a thing. There are laws written explicitly to prevent this.


> Wrong

?

OpenAI's non-profit+for-profit structure is detailed at their own site: https://openai.com/our-structure/

It's a non-profit with a for-profit subsidiary. Please elaborate on "wrong."


(I'm @moralestapia, but on my phone)

Non-profits have a mission that has to be aligned with a public/social benefit.

No amount of structuring would help you if it turns out that your activities benefit a private individual/organization instead of whatever public benefit you wrote when setting up the non-profit.

All it takes is a judge ruling that [1] is happening, and then it's over for you and all your derived entities, subsidiaries, and whatever-you-set-up-thinking-you-would-outsmart-the-IRS. Judges can see through your bullshit; they do this hundreds of times, year after year.

And also, "oh, but we wanted to do this since the beginning" only digs you a deeper hole, lmao. Do they not have common sense?

I'm surprised that @sama, whose major talent is manipulat... sorry "social engineering", greenlit this approach.

1. https://www.irs.gov/charities-non-profits/charitable-organiz...


Did Elon Musk really take only 9 minutes to read, consider, and respond to Ilya and Greg's thoughtful "Honest Thoughts" email? That seems remarkably brief. Crazy.

I'm surprised by OpenAI's resilience through all this drama. It's impressive to see how far they've come from 2017 to 2024. This journey has given me a whole new appreciation for startups and the individuals behind them. Now I better understand why my own past mediocre attempts didn't succeed.

Thank you for sharing this information!


elon lost the bet. now he wants to take revenge. that's all there is.


https://www.youtube.com/watch?v=FBoUHay7XBI&t=345s

one of the few youtube links on this page that is still up


If even OpenAI, who could benefit greatly from his money, warns you about this person, you might want to take it seriously.


> You can’t sue your way to AGI.

Neither of you have anything even _approaching_ AGI. You're two spoiled rich kids babbling about corporate structures and a vision of the future that no one else wants.

> Our mission is to ensure AGI benefits all of humanity, and we have been and will remain a mission-driven organization.

Your mission is entirely ungrounded and you're using this as a defense of changing from a non profit to a commercial structure?

These are some of the least serious companies and CEOs I've seen operate in my lifetime.


Ok, but please don't fulminate on HN*. Comments like this degrade the community for everyone.

You may not owe people who you feel are spoiled rich kids better, but you owe this community better if you're participating in it.

* this is in the site guidelines: https://news.ycombinator.com/newsguidelines.html


Ok, but I think it's clear that what I wrote does not convey violence or vehemence but simple disrespect. A disrespect not born out of their early life and personal history but out of their actions here _today_.

Which I think I'm entitled to convey, as these are two CEOs attempting to try a case in public while playing fast and loose with the truth to bolster their cases. You may feel that I, as a simple anonymous commenter, "owe this community better," but do you spare none of these same sentiments for the author himself?


Vehement name-calling amounts to fulmination in the sense that we use the word in the guidelines; especially when a comment lacks any other content, as yours did. It's basically just angry yelling, and that is the opposite of what we're looking for.

This isn't a borderline call; if you were under the impression that a comment like that is ok on HN, it would be good to review https://news.ycombinator.com/newsguidelines.html and recalibrate.

Edit: it looks like you've been breaking the site guidelines badly in other places too, such as https://news.ycombinator.com/item?id=42393698. We eventually have to ban accounts that do that repeatedly, so please don't.


> Neither of you have anything even _approaching_ AGI.

There are so many conflicting definitions of what "AGI" means. Not even OpenAI or Microsoft even knows what it means.

"AGI" is a scam.


Of course it's clear: AGI is achieved when a machine is completely capable of simulating a human being.

At that point remote / office work is 100% over.


That's the funniest part about all of this—the continued posturing that AGI is totally a thing that is just beyond our grasp.


just give us another $100 billion to spend, it's all we need wink


23 trillion.


One of these days if AGI ever does actually come about, we might very well have to go to war with them.

There might be a day where billionaires employ zero humans and themselves merge with the AGI in a way that makes them not quite human any more.

The amount of data being collected about everyone and what machine learning can already do with it is frightening.

I'm afraid the reaction to AI, when it actually becomes a threat, is going to look more like a peasant revolt than a Skynet situation.


Yes, when one group of people, a small minority of the population, controls the ability to produce food and violence, then we have a serious problem.


> One of these days if AGI ever does actually come about, we might very well have to go to war with them.

They arguably already exist in the form of very large corporations. Their lingering dependency on low-level human logic units is an implementation detail.


> One of these days if AGI ever does actually come about, we might very well have to go to war with them.

And the same conditions of material wealth that dictate traditional warfare will not be changed by the ChatGPT for Warlords subscription. This entire conversation is silly and predicated on beliefs that cannot be substantiated or delineated logically. You (and the rest of the AI preppers) are no different than the pious wasting their lives in fear awaiting the promised rapture.

Life goes on. One day I might have to fistfight that super-dolphin from Johnny Mnemonic but I don't spend much time worrying about it in a relative sense.


Robot soldiers are a today problem. You want a gun or a bomb on a drone with facial recognition that can roam the skies until it finds and destroys its target?

That's a weekend project for a lot of people around here.

You don't need AGI for a lot of these things.

We are not far away from an entire AI corporation.


The rules of traditional warfare will still exist; wars will just be fought by advanced, hyper-intelligent AIs instead of humans. Hunter-killer humanoids like Optimus and drones like Anduril's will replace humans in war.

War will be the same, but the rich are preparing to unleash a "new arsenal of democracy" against us in an AI takeover. We must be prepared.


> Hunter-killer humanoids like Optimus and drones like Anduril's will replace humans in war.

You do not understand how war is fought if you sincerely believe this. Battles aren't won with price tags and marketing videos; they're won with strategic planning and tactical effect. The reason the US military is so powerful is not that we field so much materiel, but that each piece of materiel is so effective. Many standoff-range weapons are automated and precise to within feet or even inches of the target; failure rates are below 2% in most cases. These are weapons that won't get replaced by drones, and it's why Anduril also produces cruise missiles and glide bombs, in recognition that their drones aren't enough.

Serious analysts aren't taking drones seriously; that's the consensus among everyone who isn't Elon Musk. Drones in Ukraine are used in extremely short-range combat (often less than 5 km from their targets), and often require expending several units before landing a good hit. These are improvised munitions of last resort, not a serious replacement for anti-tank guided weaponry. It's a fallacy on the level of comparing an IED to a shaped-charge landmine.

> but the rich are preparing to unleash a "new arsenal of democracy" against us in an AI takeover

The rich have already taken over, via the IMF. You don't need AI to rule the world if you can get countries addicted to a dollar standard and then make them indebted to your infinite private capital. China does it, Russia does it... the playbook hasn't changed. Even if you make a super-AI as powerful as a nuke, you run into the same problem: capitalism is a more devastating weapon.


>These are weapons that won't get replaced by drones

Those weapons are drones. They're just rockets instead of quadcopters. They're also several orders of magnitude more expensive, but they really could be driven by the same off-the-shelf kind of technology if someone bothered to make it.

And they will get replaced. Location-based targeting is in many cases less interesting than targeting something that can move and be recognized by the weapon in flight. Load up a profile of a tank, a license plate, or images of a person, to be recognized and targeted independently in flight.

>Battles aren't won with price tags and marketing videos, they're won with strategic planning and tactical effect.

Big wars tend to get won by resources more than tactics. Japan and Germany couldn't keep up with US industrial output. Germany couldn't keep up with USSR manpower.

Replacing soldiers with drones means it's more of a contest of output than strategy.


I am not talking about drones like DJI quadcopters with grenades duct-taped to them, or even large fixed-wing aircraft; I am talking about small personal humanoid drones.

Civilization is going through a birth-rate collapse. The labor shortage will become more acute in the coming years, first in lower-skill, lower-wage jobs, and then everywhere else.

Humanoid robots change the economics of war. No longer does the military or the police need humans. Morale will no longer be an issue. The infantry will become materiel.


Like Johnny Depp in that movie (Transcendence).


> Neither of you have anything even _approaching_ AGI.

On that note, is there a term for, er... Negative hype? Inverse hype? I'm talking about where folks clutch their pearls and say: "Oh no, our product/industry might be too awesome and doom mankind with its strength and guaranteed growth and profitability!"

These days it's hard to tell what portion is cynical marketing ploy versus falling for their own propaganda.


“We founded OpenAI as a non-profit, but to create OpenAGI, profit was necessary!!! However, Elon Musk also wanted profit!!!!111 And now, he also created profits!!! So, everything is OK”


Is it even possible for Sam Altman to stop being dishonest? This isn't a method to redress concerns, it's a smear that has nothing to do with the lawsuit.


How could he! thank god that did not happen.


From TFA:

> Summer 2017: We and Elon agreed that a for-profit was the next step for OpenAI to advance the mission

So basically Elon had the same idea as Sam Altman.


Sam too?!! how could he! thank god that did not happen either.


Okay. Is OpenAI now deflecting? Deflecting and reframing.


If I read anything from this, it's that OpenAI is looking weak and worried if they are trying to use this to garner support or, at least, to generate negative publicity for x.AI / Musk.

Altman is the regulatory-capture man, muscling out competitors by pushing the White House and Washington to move on "safety"; then there's the whole board debacle, and the conversion from non-profit to for-profit.

I don't think anyone sees Musks efforts as altruistic.


It's an aside, but these sorts of timelines are very American-centric.

I don't know when your autumn ("fall") or summer are in relation to September. Don't mix reference systems here: either use months or quarters, not a mix of things that includes seasons relative to a specific hemisphere.


Following HN's guidelines, I'm going to assume that you're a pretty smart person who knows that our northern/southern hemispheres mean that there are only two options for when summer occurs, and when winter occurs, and that they're essentially just opposites of one another.

If you're reading it from a southern hemisphere viewpoint and find that it doesn't make sense, isn't it quite simple to just go, "Oh, perhaps it's the opposite?" and apply that?

This is hardly the first time I've ever encountered this sort of thing. In fact, as an American, I've had to employ this sort of thinking myself when interacting with people on forums from places like South America and Australia.

It's an easy fix, and it's not that big of a deal, honestly - an innocent little side effect of the fact that we're able to communicate at such a global level[1]. :)

[1] https://youtu.be/PdFB7q89_3U?si=-amoG7EtEpS0xkEW


Since you knew it was American centric, and you knew when autumn or "fall" or summer are in the Northern Hemisphere, where all† of the United States of America‡ is located,

You did, in fact, know what period was being referred to. It's even possible to realize that "over the summer" can include Christmas in Sydney or Buenos Aires! Most people realize we live on a sphere, and what the consequences of that are.

† With apologies to American Samoa.

‡ Country name spelled in full to avoid the predictable bellyaching about the existence of a continent of the same name, or two, depending on where you hail from. Another thing you are not in fact confused about.


OpenAI is an American company


And they intend to "determine the fate of the world". As such, communication shouldn't be American centric.


You're several decades late for that. Almost a century now.


I'd give at least even odds that the next century won't be American-centric, despite all this effort. China has a good chance of replacing the USA as the global hegemon, with the EU as a lesser alternative, but one with enough strengths to be at least plausible, if not the most likely.



