It's Time to Stop Taking Sam Altman at His Word (theatlantic.com)
703 points by redwoolf 40 days ago | 576 comments



To recap OpenAI's decisions over the past year:

* They burned up the hype for GPT-5 on 4o and o1, which are great step changes but nothing the competition can't quickly replicate.

* They dissolved the safety team.

* They switched to for profit and are poised to give Altman equity.

* All while hyping AGI more than ever.

All of this suggests to me that Altman is in short-term exit preparation mode, not planning for AGI or even GPT-5. If he had another next generation model on the way he wouldn't have let the media call his "discount GPT-4" and "tree of thought" models GPT-5. If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit, and he likely wouldn't have gotten rid of the superalignment team. His actions are best explained as those of a startup CEO who sees the hype cycle he's been riding coming to an end and is looking to exit before we hit the trough of disillusionment.

None of this is to say that AI hasn't already changed a lot about the world we live in and won't continue to change things more. We will eventually hit the slope of enlightenment, but my bet is that Altman will have exited by then.


Except the article makes none of these points. The article is saying:

a) "the technology is overhyped", based on some meaningless subjective criteria, if you think a technology is overhyped, don't invest your money or time in it. No one's forcing you.

b) "child abuse problems are more important", with a link to an article that clearly specifies that the child abuse problems have nothing to do with OpenAI.

c) "it uses too much energy and water". OpenAI is paying fair market price for that energy and what's more the infrastructure companies are using those profits to start making massive investments in alternative energy [1]. So if everything about this AI boom fails what we'll be left with is a massive amount of abundant renewable energy (the horror!)

Probably the laziest conjecture I have endured from The Atlantic.

[1]: https://www.cbc.ca/news/canada/calgary/artificial-intelligen...


>So if everything about this AI boom fails what we'll be left with is a massive amount of abundant renewable energy

Except that someone has to pay for it. AI companies are only willing to pay for power purchase agreements, not capital expenses. Same with the $7T of chip fab. Invest your money in huge capital expenditures and our investors will pay you for it on an annual basis until they get tired of losing money.


Power purchase agreements are what renewables developers use to obtain financing for construction. The contract is collateral. Worst case, the developer might need to be prepared to find a substitute offtaker for the power if a PPA with a gen AI offtaker goes bust (if brought up by whoever is underwriting the financing, if there are debt covenants, etc).

I absolutely support AI companies signing as many PPAs for low carbon energy as they can, even if they implode in the future. The PV panels, wind turbines, and potentially stationary storage will already be deployed at that point.

https://betterbuildingssolutioncenter.energy.gov/financing-n...


What if it is a 5 gigawatt data center in the middle of nowhere or if the power source is a nuclear reactor or three? Presumably, data centers are paying a premium that other customers aren't willing to pay.


Hard to say with random "what ifs." If we're starting merchant nuclear reactors back up, because they have some life left in them and can be operated safely, I support such an operating model as it pushes high carbon energy out of the generation mix until there is more clean energy on the grid.

Illinois provides subsidies to their nuclear reactors because they are low carbon [1], the federal government is subsidizing Diablo Canyon in California [2] and Palisades in Michigan [3]. Every coal fired generator in the US is more expensive to run than to replace with low carbon renewables except the one in Dry Fork, WY, [4] so subsidies are just arguments, not real economic signals.

TLDR We're kicking the can until we can get more clean energy online, and that ramp rate continues to accelerate [5].

[1] https://www.cnbc.com/2021/11/20/illinois-nuclear-power-subsi... ("Why Illinois paid $694 million to keep nuclear plants open")

[2] https://www.reuters.com/world/us/us-finalizes-11-billion-cre... ("US finalizes $1.1 billion in credits for California nuclear plant")

[3] https://news.ycombinator.com/item?id=41696884 (HN: "MI nuclear plant finalizes fed loan for first reactor restart in US history")

[4] https://news.ycombinator.com/item?id=37601970 (citations)

[5] https://news.ycombinator.com/item?id=41602799 (citations)


I think you are missing my point. Microsoft signed a 20-year agreement with Constellation to restart TMI Unit 1. Do you think Constellation would make the same agreement with OpenAI? The risk is completely different.


No, the point is taken, I think you’re underestimating FOMO and the hype cycle.

If a counterparty is just a bit too rational, move on, lots of other potentials. Markets are sentiment driven. When there is potentially irrational exuberance, leverage it.


Hopefully we get a nuclear reactor or three


Paying fair market price for energy and water is only fair if you don't need extra infrastructure put in place to support you. If you're expecting the local government to build you electricity generating capacity... No. That's not right.

At best, you're forcing old generation capacity that would have been retired to stay online. At worst you're forcing the government to take loans to invest in new capacity you may not be around to pay for in a few years, leaving the public finances holding the bag.


Barring societal collapse, I don't see a scenario in which extra energy isn't used up and as long as it's sold at market prices the government investments should be fine.


It does look like an exit. Employees were given the chance to cash in some of their shares at $86 billion valuation. Altman is getting shares.

New "investors" are Microsoft and Nvidia. Nvidia will get the money back as revenue and fuel the hype for other customers. Microsoft will probably pay in Azure credits.

If OpenAI does not make profit within two years, the "investment" will turn into a loan, which probably means bankruptcy. But at that stage all parties have already got what they wanted.


> If OpenAI does not make profit within two years, the "investment" will turn into a loan, which probably means bankruptcy.

I don’t believe this is accurate. I think this is what you’re referring to?:

> Under the terms of the new investment round, OpenAI has two years to transform into a for-profit business or its funding will convert into debt, according to documents reviewed by The Times.

That just means investors want the business to be converted from a nonprofit entity into a regular for-profit entity. Not that they need to make a profit in 2 years, which is not typically an expectation for a company still trying to grow and capture market share.

Source: https://www.nytimes.com/2024/10/02/technology/openai-valuati...


I think for anyone who has been intimately involved with this last generation of "unicorns", it really does not look like an "exit" if that means bankruptcy, end of the company, betrayal of goals, or what have you: for a high-growth startup of approximately ten years, finding liquidity for employees is perhaps counter-intuitively a sign of deep maturity and that the company is concerned for the long haul (and that investors/the market believe this is likely).

At about the ten year mark, there has to be a changing of the guard from the foot soldiers who gave their all so that an unlikely institution could come to exist in the world at scale, to people concerned more with stabilizing that institution and ensuring its continuity. In almost every company that has reached such scale in the last decade, this has often meant a transition from an executive team formed of early employees to a more senior C-team from elsewhere with a different skillset. In a world context where the largest companies are more likely to stay private than IPO, it's a profoundly important move to allow some liquidity for longterm employees, who otherwise might be forced to stay working at the company long past physical burnout.


Agreed. These liquidity events are a way to retain and reward early employees while allowing the founders to recover their personal finances and buckle down for the growth phase.


> In a world context where the largest companies are more likely to stay private than IPO

Which world is this?


From a Scottish Mortgage report: "companies are staying private for longer and until higher valuations".

    Amazon founded in 1994
    1997 listed at $400m
    Google founded in 1998 
    2004 listed at $23bn
    Spotify founded in 2006
    2018 listed at $27bn
    Airbnb founded in 2008
    2020 listed at $47bn
    Epic games founded in 1991
    2022 unlisted value $32bn
    Space Exploration founded in 2002
    2022 unlisted value $125bn
    ByteDance founded in 2012
    2022 unlisted value $360bn


The only unlisted business there that could be considered among “the largest” is ByteDance. But they have a whole China thing going on, so not very representative of US/European markets.

I doubt any competitor to the largest businesses in US/Europe that is actually putting up good audited numbers is staying private. Even Stripe has been trying to go public, but it doesn’t have the numbers for the owners to want to yet.


SpaceX hasn’t tried to go public; arguably being private has helped them, since the ability to raise a lot of money from investors is key to their successes.

Their kind of product development [1] needs long-term thinking which public markets will not support well

[1] ignore all the Mars noise; just consider reusable (i.e. cheap) rockets and Starlink


It’s never just about the company’s numbers, but the financial environment. When interest rates come down enough to tempt investors out of the money market, you’re likely to see a new wave of IPOs.


Agreed. OP is crossing some wires. More companies are staying private right now (not by choice). Not necessarily or only the “largest companies”



His recent statement that AGI may "only be a few thousand days away" is clearly an attempt to find a less quotable way of extending estimates while barely reducing hype. History generally shows that estimates of "more than five years away" are meaningless.


"a few thousand days away" feels like such a duplicitous way to phrase something when "a few years" would be more natural to everyone on the planet. It just seems intentionally manipulative and not even trying to hide it. I've never been an anti-ceo type person but something about Altman sketches me out


4000 days is just a few thousand days, but it's [checks notes] over 10 years away.


He knows as well as anyone about the 5 year heuristic (a length too long to be meaningful), so he's trying to say "over 5 years" but still hold onto the sense of a "meaningful estimate". My sense about Sam is that he trusts his PR instinct to be usefully manipulative enough that he doesn't think he has to plan it carefully in order for his statements to be maximally opaquely manipulative. (He speaks cleverly and manipulatively without a lot of effort.)


I don't know anything, but isn't Sam already rich? And, if OpenAI is in a position to capture profits in an expanding AI industry, isn't this the best possible time to lock-in equity for long term gains and trillionaireship?

For many people, sadly, one can never be rich enough. My point is, planning for both short term exit, and long term gains, is essentially the same in this particular situation. What a boon! Nice problem to have!


For some people the question "how much?" has only one answer – "more."


And this seems to be mostly positive feedback.


He is tied for last place on the most recent Forbes Billionaire list.

https://www.forbes.com/profile/sam-altman/


Sam was already a billionaire years ago. He is one of SV's most prolific investors, with equity in hundreds of startups. He writes big checks as well from time to time; Helion, for instance, he gave $375 million.


To be clear, Altman invested $375 million in Helion Energy's 2021 funding round. Helion Energy is a for-profit fusion energy company of which he is the chairman.


He is the Executive Chairman at Helion, which means he keeps a tight grip on that particular purse string.


How? He did not have a large exit to give him a bankroll to invest. Unless you mean from whatever YC shares he received as President, which seems unlikely. Even if he was paid ~25 million in equity per year, did YC shares increase 10x in that time?


Didn't he own like 10% of Reddit? Here's what pg had to say about the guy: "Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king. If you're Sam Altman, you don't have to be profitable to convey to investors that you'll succeed with or without them. (He wasn't, and he did.) Not everyone has Sam's deal-making ability. I myself don't. But if you don't, you can let the numbers speak for you." https://paulgraham.com/fundraising.html


One of those quotes where I can't tell if it's a compliment or not.

I hear "He will find a way to not only survive but to thrive in difficult situations"

But I also hear "He will eat people to get ahead".


My understanding is Altman joined the Three Comma Club only this year, not years ago.

Yes, he has made lots of investments over the years as head of YC but not every investment was successful. This was discussed on BBC's podcast 'Good Billionaire Bad Billionaire' recently.


He also got weird facial plastic surgery and looks like a wanted mobster now.


For people that don't get the references, this graph is so helpful for understanding the world.

https://en.wikipedia.org/wiki/Gartner_hype_cycle

It just keeps happening over and over. I'd say we are at "Negative press begins".


I don't really follow Altman's behavior much, but just in general:

> If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit

If such a thing could exist and was right around the corner, why would you need a company for it? Couldn't the AGI manage itself better than you could? Job's done, time to get a different hobby.


Let’s say you got AGI, and it approximated a not so bright impulsive 12 year old boy. That would be an insane technological leap, but hardly one you’d want running the show.

AGI doesn’t mean smarter than the best humans.


A 12 year old that doesn’t sleep, eat, or forget, learns anything within seconds, can replicate and work together with fast-as-light communication and no data loss when doing so. Arrives out of the womb knowing everything the internet knows. Can work on one problem until it gets it right. Doesn’t get stale or old. Only has emotion if it’s useful. Can do math and simulate. Doesn’t get bored. Can theoretically improve its own design and run experiments with aforementioned focus, simulation ability, etc.

What could you do at 12, with half of these advantages? Choose any of them, then give yourself infinite time to use them.


A 12 year old that doesn't interact with the world.

Many believe that AGI will happen in robots, and not in online services, simply because interacting with the environment might be a prerequisite for developing consciousness.

You mentioned boredom, which is interesting, as boredom may also be a trait of intelligence. An interesting question is if it will want to live at all. Humans have all these pleasure sensors and programming for staying alive and reproducing. The unburdened AGI in your description might not have good reasons to live. Marvin, the depressed robot, might become real.


I’m not sure, but it’s possible Stephen Hawking would have been fine with becoming digital, assuming he could keep all the traits he valued. He had a pretty low data communication rate, interacted digitally and did more than most humans can. Give him access to anything digital at an high speed and he’d have had a field day. If he could stay off Twitter.


>simply because interacting with the environment might be a prerequisite for developing consciousness.

We can't even define what consciousness is yet, let alone whats required to develop it.


We may not be able to fully define Consciousness, but we can observe it in humans and develop falsifiable hypotheses, of which there are several.

Take a look at this paper: https://osf.io/preprints/osf/mtgn7


Why do we assume an AGI would not be forgetful or would never get bored?


Because we can change the hardware or software to be what we want. It can write to storage and read it, like we can but faster.


And then with advancement, reaches teenage maturity and tells everyone to fuck off.


As someone who has teenagers right now, I can confirm that this is accurate.


A 12 year old could determine that "AI" is boring and counterproductive for humanity and switch off a computer or data center. Greta Thunberg did similar for the climate, perhaps we need a new child saint who fights "AI".


> AGI doesn’t mean smarter than the best humans.

Technically no, but practically...

12 year old limitations are: A. gets tired, needs sleep B. I/O limited by muscles

Probably there are more, but if 12 year old could talk directly to electric circuits and would not need sleep or even a break, then that 12 year old would be leaps and bounds above the best human in his field of interest.

(Well motivation to finish the task is needed though)


For an intelligence to be "General" there would have to be no type of intelligence that it did not have access to (even if its capabilities in that domain were limited). The idea that that's what humans have strikes me as the same kind of thinking that led us to believe that earth was in the center of the universe. Surely there are ways of thinking that we have no concept of.

General intelligence would be like an impulsive 12 year old boy who could see 6 spatial dimensions and regarded us as cartoons for only sticking to 3.


Humans can survive in space and on the moon because our intelligence is robust to environments we never encountered (or evolved to encounter). That's "all" general intelligence is meant to refer to. General just means robust to the unknown.

I've seen some use "super" (as in superhuman) intelligence lately to describe what you're getting at.


Super feels to me like it's a difference in degree rather than in kind. Something with intelligences {A, b, c} might be super intelligent compared with {a, b, c}. i.e. more intelligent in domain A.

But if one has {a, b, c} and the other has {b, c, d} neither is more or less intelligent than the other, they just have different capabilities. "Super" is a bit too one-dimensional for the job.


I don't think using set notation is really helping your case here but (I think?) I agree.


No. AGI is talking about the "general" human intelligence rather than specialism. An AGI would be as good as a human at both composing poetry and playing chess, both identifying bird calls and proving mathematical theorems. We don't know what a categorically superior intelligence would be so we can't rate some machine on that basis.

The Lem story "Golem XIV" concerns a machine which claims it possesses categorically superior intelligence, and further that another machine humans have built (which runs but seems unwilling to communicate with them at all) is even more intelligent still.

Golem tries to explain using analogies, but it's apparent that it finds communicating with humans frustrating, the way it might be frustrating to try to explain things to a golden retriever. Lem wrote elsewhere that the single commonality between Golem's intelligence and our own is curiosity, unlike Annie, Golem is curious about the humans which is why it's bothering to communicate with them.

Humans (of course) plot to destroy both machines. Annie eliminates the conspirators, betraying a hitherto un-imagined capability to act at great distance, and the story remarks that it seems she does so the way a human would swat a buzzing insect. She doesn't fear the humans, but they're annoying so she destroyed them without a thought.


If it's stuck at 12 year old level intelligence then it's not generally intelligent. 12 year olds can learn to think like 13 year olds and so on.


A set of weights sitting on your hard disk doesn’t evolve to get smarter on its own. AGI does however require the ability to learn new skills in new domains; if you kept exposing it to new domains, I don’t see why it wouldn’t be similar. But fundamental capacities in how good it is at reasoning / planning don’t have to exceed any given human to be considered AGI.


All it would want to do is talk about Minecraft and the funny memes they saw.


What stops it getting more intelligent? That's literally its primary aim. Its only limit as to how quickly it does that is hardware capacity, and I'd be shocked if it didn't somehow theorise to itself that it can and should expand its knowledge in any way possible.


Why would the only limit be hardware capacity? Can’t there be an innate limit due to the model size/architecture? Maybe LLMs can’t be superintelligent because they are missing a critical ability which can’t be overcome with any amount of training? It’s not obvious to me that it’s going to get smarter infinitely.


The limit could also be in the training data. If we could train LLMs with a corpus generated by some super intelligent entity, maybe we'd consider them super intelligent even with the same model size/architecture we have today.


This is a great insight. Science fiction does a great job inspiring technology research, but we forget that real-world accomplishments can be much humbler and still astonishing all the same.

I'm a bit tired of the hype surrounding LLMs, but all the same, for very mundane and humbler tasks that require some intelligence, modern LLMs manage to surprise me on a daily basis.

But it rarely accomplishes more than what a small collection of humans with some level of expertise can achieve, when asked.


Or accomplishments can genuinely be astounding.

Surely, the LLM models we have today are astounding by any measure, relative to just a few years ago.

But pronouncements of how this will lead to utopia, without introducing a major revision of economic arrangements, are completely, and surely intentionally/conveniently (Sam isn't an idiot) misleading.

Is OpenAI creating a class of stock so everyone can share in their gains? If not, then AGI owned by OpenAI will make OpenAI shareholders rich, very much to the degree its AGI eliminates human jobs for itself and other corporations.

How does that, as an economic situation, result in the general population being able to do anything beyond be a customer, assuming they can still make money in some way not taken over by AGI?

Utopia needs an actual plan. Not a concept of a plan.

The latter just keeps people snowed and calm before an historic level rug pull.


If such a thing was right around the corner, the person who controlled it would be the only person left who had any kind of control over their own future.

Why would you sell that?


I'm not a believer in general intelligence myself; all we have are a small pile of specific intelligences. But if it does exist then it would be godlike to us. I can't guess at the motivations of somebody who would want to bootstrap a god, but I doubt that Altman is so strapped for cash that his primary motivator is coming up with something to sell.


Altman's AGI is Musk's full self-driving car.


As other comments have mentioned, AGI doesn't mean god-like or super-human intelligence.

For these models today, if we measure the amount of energy expended for training and inference how do humans compare?


> For these models today, if we measure the amount of energy expended for training and inference how do humans compare?

My best guess is 120,000 times more for training GPT-4 (based on the claim that it cost $63 million, assuming that was all electricity at $0.15/kWh, and looking only at the human brain and not the whole body).

But also, 4o mini would then be a kilowatt hour for a million tokens at inference time; by the same assumptions that's 50 hours, or just over one working week, of brain energy consumption. A million tokens over 50 hours is 5.5 tokens per second, which sounds about like what I'd expect a human brain to do, but caveat that with me not being a cognitive scientist and what we think we're thinking isn't necessarily what we're actually thinking.
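For what it's worth, here's that back-of-envelope math as a minimal Python sketch. The $63 million, $0.15/kWh, ~20 W brain, and ~1 kWh per million tokens figures are all rough assumptions, not measured values.

    # Back-of-envelope only; every constant below is an assumption, not a measurement.
    TRAINING_COST_USD = 63e6     # claimed GPT-4 training cost
    PRICE_PER_KWH = 0.15         # assumed electricity price
    BRAIN_WATTS = 20             # rough human brain power draw

    training_kwh = TRAINING_COST_USD / PRICE_PER_KWH          # ~420 million kWh

    kwh_per_million_tokens = 1.0                               # assumed small-model inference cost
    brain_hours = kwh_per_million_tokens * 1000 / BRAIN_WATTS  # ~50 hours of brain energy
    tokens_per_second = 1e6 / (brain_hours * 3600)             # ~5.5 tokens/s

    print(training_kwh, brain_hours, tokens_per_second)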


I did a similar calculation a few weeks ago:

Humans consume about 100W of average power (2000 kcal converted to watt hours, divided by 24 hours). So 8 billion people consume ~800 GW. Call it 1 TW. Average world electric power generation is 28000 TWh / (24*365 hours) ~3 TW.
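The same arithmetic as a quick sketch, using the rough figures above (2000 kcal/day per person, 8 billion people, 28,000 TWh/year of generation), which are approximations rather than precise data.

    KCAL_TO_WH = 1.163                        # 1 kcal is about 1.163 Wh
    person_watts = 2000 * KCAL_TO_WH / 24     # ~97 W per person
    humanity_tw = person_watts * 8e9 / 1e12   # ~0.8 TW for all humans
    grid_tw = 28_000e12 / (24 * 365) / 1e12   # ~3.2 TW average electric generation
    print(person_watts, humanity_tw, grid_tw)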


If we figure out AGI, that still doesn't mean a singularity. I'm going to speak as though we're on the brink of AI outthinking every human on earth (we are not) but bear with me, I want to make it clear we're not going jobless any time soon.

For starters, we still need the AI (LLMs for now) to be more efficient, i.e. not require a datacenter to train and deploy. Yes, I know there are tiny models you can run on your home pc, but that's comparing a bicycle to a jet.

Second, for an AGI to meaningfully improve itself, it has to be smarter than not just any one person, but the sum total of all people it took to invent it. Until then no single AI can replace our human tech sphere of activity.

As long as there are limits to how smart an AI can get, there are places where humans can contribute economically. If there is ever to be a singularity, it's going to be a slow one, and large human-run AI companies will be part of the process for many decades still.


"why would you need a company for it? Couldn't the AGI manage itself better than you could?"

Well, you still have to have the baby, and raise it a little. And wouldn't you still want to be known as the parent of such a bright kid as AGI? Leaving early seems to be cutting down on his legacy, if a legacy was coming.


Let's go with the premise of no next gen models; then we'd be looking at an operationalized service. I might be up for paying $20/month for one. Certainly more value than I get from Netflix. All I would want from the service is a simple low-friction UI and continued improvements to keep up with or surpass competitors. They could manage that.

The long-term problem may be access to quality/human-created training data. Especially if the ones that control that data have AI plans of their own. Even then I could see OpenAI providing service to many of them rather than each of them creating their own models.


I doubt $20 a month is going to cover it. How much would you pay before you weren't getting a good deal?


The thing is, you can be economical. You don't need a GPT-4 quality model for everything. Some things are just low value where a 3.5 model would do just fine.

I never use the $20 plan but I access everything via API, and I spend a couple of dollars per month.

Although lately I have a home server that can do llama 3.1 8b uncensored and that actually works amazingly well.


I use Llama 3.1 8B Instruct 128k at home and that pretty much covers all my LLM needs. Don't see a reason to pay for GPT-4.


Yeah it's good, right?? Amazingly good. The first-gen small models were a bit iffy but Llama 3.1 is so good <3

The only thing I see is that it hallucinates a lot when you ask it for knowledge. Which makes sense because 8B is just not a lot to keep detailed information around. But the ability to recite training knowledge is really a misuse of LLMs and only a peculiar side-effect. I combine it with google searches (through OpenWebUI and SearXNG) and it works amazingly well then.
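For anyone curious what that combination boils down to, here's a minimal sketch of the "search first, then answer with the local model" flow. It assumes a local SearXNG instance on port 8080 with its JSON output format enabled and Ollama serving llama3.1:8b on port 11434; OpenWebUI wires all of this up for you, this is just the idea.

    # Sketch of search-augmented answering with a local model.
    # Assumptions: SearXNG on localhost:8080 with format=json enabled,
    # Ollama on localhost:11434 with llama3.1:8b pulled.
    import requests

    def search(query, n=3):
        r = requests.get("http://localhost:8080/search",
                         params={"q": query, "format": "json"})
        return r.json()["results"][:n]

    def answer(question):
        snippets = "\n".join(f"- {hit['title']}: {hit.get('content', '')}"
                             for hit in search(question))
        prompt = (f"Use these search results to answer.\n{snippets}\n\n"
                  f"Question: {question}\nAnswer:")
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": "llama3.1:8b",
                                "prompt": prompt, "stream": False})
        return r.json()["response"]

    print(answer("Who won the 2022 World Cup?"))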


Oh this is great! Currently I don't incorporate web search into the UI I use, will give OpenWebUI a try.


Yeah, and realistically once we can get hardware powerful but cheap/energy efficient enough to run LLM + TTS + ASR without any noticeable delay during a conversation, then who needs cloud services for most stuff. The really big models will still be useful, but really only for specific things.


What’s your home server specs? I might want to host something like that too but I think I’d need to upgrade.


An old Ryzen CPU (2600 IIRC) and Radeon Pro VII 16GB. Got it new at a really good price.

It works ok but with a large context it can still run out of memory and also gets a lot slower. With small context it's super snappy and surprisingly good. What it is bad at are facts/knowledge, but this is not something an LLM is meant to do anyway. OpenWebUI has really good search engine integration which makes it work like Perplexity does. That's a better option for knowledge use cases.


If we’re talking about inference costs alone (minimal new training) I think it would cover it. I started doing usage-based pricing through openrouter instead of paying monthly subscriptions, and my modest LLM use costs me about $3/month. Unless you think that the API rates are also priced below cost.


According to the New York Times [1], OpenAI told investors that their costs are increasing as more people use their product, and they are planning to increase the cost of ChatGPT from $20 to $44 over the next five years. That certainly suggests that they're selling inference at a loss right now.

[1] https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...


My bigger concern isn't the financial cost, it's that the amount represents a degree of energy expenditure. A flat rate might not be advisable and having usage-based pricing may limit energy waste. OTOH I don't know how much energy some of the other recurring things I consume monthly are either.

Maybe we can have a service that's an LLM with shared/cached responses as well as training on its own questions/answers for the easy stuff. Currently my usage is largely as a better search engine as google has gotten so much worse than it ever was, though I imagine I'll find custom things it's useful for as well.


There's never going to be quality data at the scale they need it.

At best, there's a slow march to incremental improvements that look exactly like how human culture developed knowledge.

And all the downsides will remain, the same way people, despite hundreds of good sources of info, still prefer garbage.


Video data is still very much untapped and likely to unlock a step function worth of data. Current image-language models are trained mostly on {image, caption} pairs with a bit of extra fine tuning


Do you think it matters that there's orders of magnitude less video (and audio) data than text data?


I’m not sure I agree, text is full of compressed information but lacks all of the visual cues we all use to navigate and understand our world. Video data also has temporal components which text is really bad at.


OpenAI has been training on YouTube for years.


Nobody close to the edge of this tech seems to believe that this is true, and the 4o release suggests synthetic approaches work well.


Beliefs and outcomes are different things.


I see it differently. ChatGPT holds a unique position because it has the most important asset in tech: user attention. Building a brand and capturing a significant share of eyeballs is one of the hardest challenges for any company, startup or not. From a business standpoint, the discussion around GPT-5 or AGI seems secondary. What truly matters is the growing number of users paying for ChatGPT, with potential for monetization through ads or other services in the future.


The internet has destroyed the value of user attention and brands; everyone has the attention span of a gnat and the loyalty of an addict. They will very quickly move on to the next shiny thing.


My comment doesn't negate your point. I'm emphasizing the business side beyond the moral considerations. If anything, we've seen that brands which contributed to unhealthy habits, like the rise of fast food and sugary drinks, still hold immense value in the market. User loyalty may be fleeting, but businesses can still thrive by leveraging brand strength and market presence, regardless of the moral implications.


> What truly matters is the growing number of users paying for ChatGPT, with potential for monetization through ads or other services in the future.

If the end goal is monetization of ChatGPT with ads, it will be enshittified to the same degree as Google searches. If you get to that, what is the benefit of using ChatGPT if it just gives you the same ads and bullshit as Google?


I mentioned ads as just one potential avenue for monetization. My main point is that OpenAI's current market position and strong brand awareness are the real differentiators. They're in a unique spot where they have the user's attention, which offers a range of monetization options beyond just ads. The challenge for them will be to balance monetization with maintaining the user experience that made them successful in the first place.

Also, don't forget the recent Apple partnership [1], a very strong signal of their strategic positioning. Aligning with Apple reinforces their credibility and opens up even more opportunities for innovation and expansion, beyond just monetizing through ads. I just searched through this thread, and it seems the Apple partnership isn't being recognized as a significant achievement under Sam Altman's tenure as CEO, which is surprising given its importance.

[1] https://news.ycombinator.com/item?id=40328927


The only reason Sam would leave OpenAI is if he thought AGI could only be achieved elsewhere, or that AGI was impossible without some other breakthrough in another industry (energy, hardware, etc).

High-intelligence AGI is the last human invention — the holy grail of technology. Nothing could be more ambitious, and if we know anything about Altman, it is that his ambition has no ceiling.

Having said all of that, OpenAI appears to be all in on brute-force AGI and swallowing the bitter lesson that vast and efficient compute is all you need. But they're overlooking a massive dataset that all known biological intelligences rely upon: qualia. By definition, qualia exist only within conscious minds. Until we train models on qualia, we’ll be stuck with LLMs that are philosophical zombies — incapable of understanding our world — a world that consists only of qualia.

Building software capable of utilizing qualia requires us to put aside the hard problem of consciousness in favor of mechanical/deterministic theories of consciousness like Attention-Schema Theory (AST). Sure, we don’t understand qualia. We might never understand. But that doesn’t mean we can’t replicate.


> Sure, we don’t understand qualia. We might never understand. But that doesn’t mean we can’t replicate.

I’m pretty sure it means exactly that. Without actually understanding subjective experience, there’s a fundamental doubt akin to the Chinese room. Sweeping that under the carpet and declaring victory doesn’t in fact victory make.


If the universe is material, then we already know with 10-billion percent certainty that some arrangement of matter causes qualia. All we have to do is figure out what arrangements do that.

Ironically, we understand consciousness perfectly. It is literally the only thing we know — conscious experience. We just don’t know, yet, how to replicate it outside of biological reproduction.


> All we have to do is figure out what arrangements do that.

That’s the hard part!

> It is literally the only thing we know — conscious experience. We just don’t know, yet, how to replicate it outside of biological reproduction.

This conflates subjective experience as necessary (an uncontroversial observation) with actually understanding what subjective experience is.

Or put another way: we all know what it’s like to breathe, but this doesn’t imply knowledge of the pulmonary system.


Agree on hard. But at least possible.

I think a better analogy would be vision. Even with a full understanding of the eye and visual cortex, one can only truly understand vision by experiencing sight. If we had to reconstruct sight from scratch, it would be more important to experience sight than to understand the neural structure of sight. It gives us something to aim for.

We basically did that with language and LLMs. Transformers aren’t based on neural structures for language processing. But they do build upon the intuition that the meaning of a sentence consists of the meaning that each word in a sentence has in relation to every other word in a sentence — the attention mechanism. We used our experience of language to construct an architecture.

I think the same is true of qualia and consciousness. We don’t need to know how the hardware works. We just need to know how the software works, and then we can build whatever hardware is necessary to run it. Luckily there’s theories of consciousness out there we can try out, with AST being the best fit I’ve seen so far.


> High-intelligence AGI is the last human invention

Citation?

...or are you just assuming that AGI will be able to solve all of our problems, appropos of nothing but Sam Altman's word? I haven't seen a single credible study suggest that AGI is anything more than a marketing term for vaporware.


Their marketing hyperbole has cheapened much of the language around AI, so naturally it excites someone who writes like the disciple of the techno-prophets

" High-intelligence AGI is the last human invention" What? I could certainly see all kinds of entertaining arguments for this, but to write it so matter of fact was cringe inducing.


It’s true by definition. If we invent a better-than-all-humans inventor, then human invention will give way. It’s a fairly simple idea, and not one I made up.


> then human invention will give way

What? Would you mind explaining this?


It’s analogous to the automobile. People do still walk, bike, and ride horses, but the vast majority of productive land transportation is done by automobile. Same thing with electronic communication vs. written correspondence. New tech largely supplants old tech. In this case, the old tech is human ingenuity and inventiveness.

I don’t think this is a controversial take. Many people take issue with the premise that artificial intelligence will surpass human intelligence. I’m just pointing out the logical conclusion of that scenario.


Arguably cars have so many externalities they will bankrupt the earth of cheap energy sources. Walking is at least hundreds of millions of years old, and will endure after the last car stops.

Likewise (silicon based) AGI may be so costly that it exists only for a few years before it's unsustainable no matter the demand for it. Much like Bitcoin, at least in its original incarnation.


Cars never got us anywhere a human couldn't reach by foot. It just commoditized travel and changed our physical environment to better suit cars.

I really don't see any reason to believe "AGI" won't just be retreading the same thoughts humans have already had and documented. There is simply no evidence suggesting it will produce truly novel thought.


I don't know whether it's a controversial take or not, but I can't see how if one day the machine magically wakes up and somehow develops sentience that it follows logically that human intelligence would somehow "give way". I was hoping for a clear explanation of mechanically how such a thing might happen.


Future Neuralink collab? Grab the experience of qualia right from the brains of those who do the experiencing.


> All while hyping AGI more than ever.

Maybe not, since Altman pretty much said they no longer want to think in terms of "how close to AGI?". Iirc, he said they're moving away from that and instead want to move towards describing the process as hitting new specific capabilities incrementally.


> They dissolved the safety team.

I still don't get the safety team. Yes, I understand the need for a business to moderate the content they provide, and rightly so. But elevating the safety concern to the level of the survival of humanity over a generative model, I'm not so sure. And even for so-called preventing harmful content, how can an LLM be more dangerous than access to books like The Anarchist Cookbook, the pamphlets on how to conduct guerrilla warfare, the training materials on how to commit terrorism, etc.? They are easily accessible on the internet, no?


Having a dedicated safety team (or as they called it "superalignment") makes sense if and only if you believe that your company is pursuing an actual sci-fi style artificial superintelligence that could put humanity itself at risk [0]:

> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

> How do we ensure AI systems much smarter than humans follow human intent?

This is a question that naturally arises if you are pursuing something that's superhuman, and a question that's pointless if you believe you're likely to get a really nice algorithm for solving certain kinds of problems that were hard to solve before.

Getting rid of the superalignment team showed which version Altman believes is likely.

[0] https://openai.com/index/introducing-superalignment/


And importantly it's also a question that would need an answer if the alignment should be with "individual groups" rather than "humanity".

It won't do Sam Altman and friends any good if they are the richest corpses after an unaligned AI goes rogue.

So it would be in their own egoistical self-interest to make sure it doesn't.


Don't forget the board coup, the coup-reversal, complete takeover, then co-founders leaving, and Mira leaving


Yes, all of that definitely plays into the sense that not all is well at OpenAI.

Slight nit: a board can't start a coup because a coup is an illegitimate attempt to take power and the board's main job is to oversee the CEO and replace them if necessary. That's an expected exercise of power.

The coup was when the CEO reversed the board's decision to oust him and then ousted them.


Yes, that's a better way to say it


i disagree that it looks like an exit, i think altman is here for the long haul. he's got a great platform and a lot of AI competitors are also preparing for the long term. no one cares about who has the best model for the next year or two, they care who has it for the next 20.


I think you're projecting a short-term cash-out mentality here.

Altman's actions are even more consistent with total confidence & dedication to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever. Plus, a desire to retain more personal control over that outcome – & not just conventional wealth! – than was typical with prior breakthroughs.


> you're projecting a short-term cash-out mentality

I'm a software engineer comfortably drawing a decent-but-not-FAANG paycheck at an established company with every intention of taking the slow-and-steady path to retirement. I'm not projecting, I promise.

> to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever

Except that OpenAI isn't a faraway leader. Their big news this week was them finally making an effort to catch up to Anthropic's Artifacts. Their best models do only marginally better in the LLM Arena than Claude, Gemini, and even the freely-released Llama 3.1 405B!

Part of why I believe Altman is looking to cash out is that I think he's smart enough to recognize that he has no moat and a very small lead. His efforts to get governments to pull up the ladder have largely failed, so the next logical choice is to exit at peak valuation rather than waiting for investors to recognize that OpenAI is increasingly just one in a crowd.

[0] https://lmarena.ai/


I think your biographical testimony more confirms than refutes my suspicions.

You think "smart" implies a low chance of world-epochal AI breakthroughs that would ruin your "slow-and-steady path to retirement". You think OpenAI has "no moat". You think it's wise to satisfice – taking a "decent" paycheck – rather than shoot-for-the-moon (and beyond it, control of the 'lightcone').

That's closer to the "short-term exit" mindset – get a good-enough payout, in mundane dollars, while you can – that you're alleging in Altman than what I'm seeing in his statements & behaviors. Yes, someone with your beliefs would in fact be trying to pump OpenAI's short-term hype for a timely exit and an even-more-cushy retirement.

But Altman's already a billionaire with any creature comfort he'd want, for the rest of his life, plus at least one "doomsday bunker" (per a 2016 'New Yorker' profile).

For at least 10 years, he's been consistently expressing a desire to advance, and control, breakthroughs in AI, AGI, and ASI that could – in a true believers' worldview – wipe away as obsolete all his prior billions – as well as the "slow-and-steady path[s] to retirement" pursued by others.

He knows as an insider and contributor OpenAI's actual lead over others – who have tended to approximately-reach OpenAI's capabilities a year or more later.

That you rate as OpenAI's "big news this week" their "Artifacts" end-user feature, rather than everything else announced & rolled out, plus OpenAI's gigantic leads in funds raised and revenue run-rate, suggests a particular world-view.

I would suggest that you are over-focused on the kind of small-stakes products/markets/ambitions that are highly-legible to a "software engineer comfortably drawing a decent-but-not-FAANG paycheck at an established company" – but such considerations are barely afterthoughts in the game of power & legacy that Altman is now playing.


Strong agree and strong upvote.

An intuition pump is to think of a religious zealot that has a drastically different worldview and aims and drives from you that seem completely nonsensical (e.g. perhaps not caring about wealth accumulation at all - even for stability - and only about perpetuating God's glory).

Sam is no religious zealot, but the difference in the life game he is playing and the life game most of us are playing is as vast as that between us and the zealot.


Even if you were right in his intentions, can you outline the scenario where Sam is able to make an exit?

I don't see how it could happen at current valuation. Who would want to pay $150 bil. for an OpenAI, where Sam is no longer part of it?


> where Sam is no longer part of it?

It can be an "exit" in that it makes Sam extremely wealthy while he still hangs on as CEO.


You're thinking of an IPO?

Sure, that's one option. But it's pretty rare for a founder of a $10+ billion startup to immediately liquidate their shares right after an IPO.

It's also a risky bet. If OpenAI goes public, they'll have a hard time justifying the same or a higher valuation than they got Microsoft and Nvidia to buy in at.


Every single word he says in public is his product.


And he doesn't even use the shift key!


Actually, his blog is correctly capitalized. Although, you would think he could hire a proper editor, since the em-dashes are not formatted correctly. Baby steps.


> ... 4o and o1, which are great step changes but nothing the competition can't quickly replicate.

Where did that assertion come from? Has anyone come close to replicating either of these yet (other than possibly Google, who hasn't fully released their thing yet either), let alone "quickly"? I wouldn't be surprised if these "sideways" architectural changes actually give OpenAI a deeper moat than just working on larger models.


Claude 3.5 Sonnet is at least as good as 4o.


Oh, sorry I wasn't clear, I was referring to the advanced (speech-to-speech) voice mode, which for me is the highlight of this "omni" thing, rather than to the "strength" of the model.


I thought that might be the case. On that: even though other big players haven't copied that particular feature, I doubt (though I'm not by any means an expert) that it's anywhere near as hard to replicate as the fundamental model. I do agree that at least product-wise, they're trying to differentiate through these sort of model-adjacent features, but I think these are classic product problems requiring product solutions rather than deep tech innovation. Claude Artifacts, GPT Canvas, speech, editor integration, GitHub integration, live video integration, that kind of stuff.


Much better than 4o, in my experience. I've stopped using 4o almost completely, except for cases where I need good tooling more than I need good performance (PoCs and the like).


It's better at coding for me, and it looks like it's because it was trained on more data. I have this problem with a library that I use: the models were trained on the Python 2 version, but the Python 3 version is completely different. The Python 3 version is relatively new, but it's been out since at least 2020, and you would mostly find examples for the Python 2 version if you googled.

They both produce garbage code in this situation. Claude's version is just 20% less garbage, but still useless. The code mixes those two, even if I specify I want the Python 3 version or directly specify a version.


It's funny and somehow disappointing how a year ago HN jerked off to Altman and down-voted any critical opinions of him. Just shows no social platform is free of strong group-think and mass hysteria.


It’s almost like people changed their opinions based on observed behaviors.


Yeah, everything from OpenAI in the last year suggests they have nothing left up their sleeve, they know the competition is going to catch up very soon, and they're trying to cash out as fast as possible before the market notices.

(In the Gell-Mann amnesia sense, make sure you take careful note of who was going "OAI has AGI internally!!!" and other such nonsense so you can not pay them any mind in the future)


Dissolve the safety team. You just made everyone stop reading the rest of your post by falsely claiming that


I don’t even know how this fake news started


Maybe the fact that they actually did dissolve the safety team formerly led by Ilya Sutskever in the aftermath of Altman's coup [0]? I'm genuinely unsure what part of this you're questioning.

[0] https://www.bloomberg.com/news/articles/2024-05-17/openai-di...


Like literally the third sentence makes it clear it was restructuring team dynamics but not eliminating safety goals or even losing the employees that are safety focused. There are many safety focused people at OpenAI and they never fired or laid off people in a safety team which is what people infer from “dissolve the safety team” when it’s used as a bullet point of criticism


It's pretty obvious to most of us that "integrating the group more deeply across its research efforts to help the company achieve its safety goals" is the corporate spin for what was actually "disperse the group that thought its job was to keep Altman himself accountable to safety goals and replace them with a group that Altman personally leads".

Eliminating Ilya's team was part of the post-coup restructuring to consolidate Altman's power, and every tribute to "safety" paid since then is spin.


https://www.merriam-webster.com/dictionary/dissolve

Definitions of dissolve:

to cause to disperse or disappear

to separate into component parts

to become dissipated (see DISSIPATE sense 1) or decompose

BREAK UP, DISPERSE

Those seem like pretty accurate descriptions of what happened. Yes, dissolve can also mean something stronger, so perhaps it is fair to call the statement ambiguous. But it isn’t incorrect.


A bit of a "dog bites man" story to note that a CEO of a hot company is hyping the future beyond reason. The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air? It is a sea change to manufacturers and distributors, but there would still be a need for mechanics and engineers to specify the correct parts, in the correct order, and use the parts to good purpose.

It is easy to imagine that the inventor of such a technology would probably start talking about printing entire cars - and if you don't think about it, it makes sense. But if you think about it, there are problems. Making the components of a solution is quite different from composing a solution. LLMs exist under the same conditions. Being able to generate code/text/images is of no use to someone who doesn't know what to do with it. I also think this limitation is a practical, tacit solution to the alignment problem.


This argument simply asserts that the LLMs (or their successor systems including scaffolding) will asymptote somewhere below human capabilities.

It’s possible that this could happen but you need to propose a mechanism and metric for this argument to be taken seriously (and to avoid fooling yourself with moving goalposts). Under what grounds do you assert that the trend line will stop where you claim it will stop?

Yes, if super-human AGI simply never happens then the alignment problem is mostly solved. Seems like wishful thinking to me.


This. It’s far harder to think of reasons that limits will endure, in a world where innovation by inches has always produced improvements. Everyone brings out the current state and technology. Those are transient.


WTF? Look at a word processor today, and then one from 1999, and tell me again how technology always keeps improving dramatically.

The standard electrical wall sockets that you use have not really changed since WW2. For load bearing elements in buildings, we don't have anything substantially better today than 100 years ago. There is a huge list of technological items where we've polished out almost every last wrinkle and a 1% gain once a decade is hailed as miraculous.


> For load bearing elements in buildings, we don't have anything substantially better today than 100 years ago

What about post-tensioned concrete slabs? Cross-laminated timber? Structural glazing? Intumescent paint?

Technology does keep improving, actually.


Except I can do things today that were not possible in 1999 like easily collaborate on a google doc realtime with someone on the other side of the planet, get an automated grammar expert to suggest edits and then have a video call with 15 people to discuss more about it. All in the same app, the browser, all on a whim, all high speed and to boot: all for free as in beer. And this is run of the mill YaaaaawN productivity tools.

I can also create a web scale app in a weekend using AWS. It is just insane what we can do now vs. 1999. I remember in early 2000s Microsoft boasting how it could host a site for the olympics using active server pages. This was PR worthy. That would be a side project for most of us now using our pocket money.


That's interesting, personally I've never found collaborative editing to be of any use, but the comments feature is instead the one I use, and often it works less well than the way we used to do it with inline comments (e.g. italicized or made a different color). The one advantage would be automatically merging comments from multiple reviewers, but that's not necessarily a good thing. Often the collection of comments from different reviewers each forms their own narrative, and merging it all into a chaotic mess drowns out each individual's perspective. Personally I'd rather treat each reviewer's critique individually. 1999 technology handles this just fine.

Same for video calls. Screen sharing can be useful at times, but it would be easy to distribute materials to all participants and collaborate on a conference call. You'll have better audio latency that way. So it's not super obvious to me that the latest Wunderwaffe are actually significantly better, but it is clear they use a hell of a lot more compute cycles and bandwidth.


There is an interesting delusion among web company workers that goes something like “technology progress goes steadily up and to the right” which I think comes from web company managers who constantly prattle on about “innovation”. Your word processor example is a good one, because at some point making changes to the UI just hurts users. So all that empty “innovation” that everyone needs to look busy doing is actually worse than doing nothing at all. All that is a roundabout way to say I think tech workers have some deep need to see their work as somehow contributing to “progress” and “innovation” instead of just being the meaningless undirected spasms of corporate amoebas.


Do you have any points that aren’t about the people you disagree with? Argue the facts. What are the limits that prevent progress on the dimension of replicating human intelligence?


> Do you have any points that aren’t about the people you disagree with?

Yes...

> Argue the facts.

What?

> What are the limits that prevent progress on the dimension of replicating human intelligence?

I don't work in that field, but as a layman I'd wager the lack of clear technical understanding of what animal intelligence actually is, let alone how it works, is the biggest limitation.


> What

Half of the language in your comment was fact free and incendiary. Delusion, prattle, deep needs of some role. That’s all firmly in the realm of ad hominem, so I was asking if you had anything substantial.

> I'd wager the lack of clear technical understanding of what animal intelligence actually is

I made the same points a long time ago, so I can't be too critical of that. I've changed my mind, and here's why. That's not any kind of universal limit. It's a state, and we can change states. We currently don't understand intelligence but there's no barrier I'm aware of that prevents progress. In addition, we've discovered types of intelligence (LLM's, AlphaGo Zero etc) that don't depend on our ability to understand ourselves. So our inability to understand intelligence isn't a limit that prevents progress. New algorithms and architectures will be tested, in an ongoing fashion, because the perceived advantages and benefits are so great. It's not like the universe had a map for intelligence before we arrived; it emerged from randomness.

I’m less sure that it’s a good idea, but that’s a different discussion. Put me in the camp of “this is going to be absolutely massive and I wish people took it more seriously.”


> fact free and incendiary

No. I believe it is factually accurate that many tech workers believe that technology progress marches steadily onwards and upwards. This is easily shown by the historical record to be patently false. Moreover, it's easy to imagine myriad ways we could regress technologically--nuclear war, asteroid impact, space weather, etc. So a belief in the inexorable progress of technology is therefore delusional. I believe it is a fact that tech companies' managers encourage these delusions by spinning up a bunch of pseudo religious sentiment using language like "innovation" to describe what is mostly actually really mundane, meaningless, and often actively harmful work. People hear that stuff, it goes to their heads, and they think they're "making the world a better place". The inexorable march of technology progress fits into such a world view.

> We currently don’t understand intelligence but there’s no barrier I’m aware of that prevents progress. In addition, we’ve discovered types of intelligence (LLM’s, AlphaGo Zero etc) that don’t depend on our ability to understand ourselves.

How can you claim both that we don't know what intelligence is, and that LLMs, AlphaGo Zero, and etc are "intelligent"?


If you’re reaching for nukes and asteroids then I think you just want to argue. Have a nice day.


So what? Many animals haven’t changed much for millions of years, and that was irrelevant to our emergence. Not everything has to change in equal measure.

There are many reasons for all those things not to change. Limits abound. We discovered that getting taller or faster isn't "better"; all we needed was to be smarter. Intelligence is different. It applies to everything else. You can lose a limb or your eyesight and still be incredibly capable. Intelligence is what makes us able to handle all the other limits and change the world, even though MS Word hasn't changed much.

We are now applying a lot of our intelligence to inventing another one. The architecture won’t stay the same, the limits won’t endure. People keep trying and it’s infinitely harder to imagine reasons why progress will stop. Just choose any limit and defend it.


That's a really good comment and insight, but understandably I think it is aimed at this forum and a technical audience. It landed well for me in terms of the near-term impact of LLMs and other models. But outside this forum, I think our field is in a crisis from being substantially oversold and undersold at the same time.

We have a very limited ability to define human intelligence, so it is almost impossible to know how near or far we are from simulating it. Everyone here knows how much of a challenge it is to match average human cognitive abilities in some areas, and human brains run at 20 watts. There are people in power who may take technologists and technology executives at their word and move very large amounts of capital on promises that cannot be fulfilled. There was already an AI Winter 50 years ago, and there are extremely unethical figures in technology right now who can ruin the reputation of our field for a generation.

On the other hand, we have very large numbers of people around the world on the wrong end of a large and increasing wealth gap. Many of those people are just hanging on, doing jobs that are genuinely threatened by AI. They know this, they fear this, and of course they will fight for their own and their families' livelihoods. This is a setup for large-scale violence and instability. If there isn't a policy plan right now, AI will suffer populist blowback.

Aside from those things, it looks like Sam has lost it. The recent stories about the TSMC meeting, https://news.ycombinator.com/item?id=41668824, were a huge problem. Asking for $7T shows a staggering lack of grounding in reality and in how people, businesses, and supply chains work. I wasn't in the room and I don't know if he really sounded like a "podcasting bro", but making an ask like that of companies deploying their own capital is insulting to them. There are potential dangers in applying this technology; there are dangers in overpromising the benefits; and neither is well served when relatively important people in related industries think there is a credibility problem in AI.


LLMs aren't really simulating intelligence so much as echoing intelligence.

The problem is when the hype machine causes the echoes to replace the original intelligence that spawned the echoes, and eventually those echoes fade into background noise and we have to rebuild the original human intelligence again.



I appreciate this; that is why I said "LLMs and other models". Knowing the probability relations between words, tokens, or concepts/thought vectors is important, and can be supplemented by smaller embedded special-purpose models/inference engines and domain knowledge in those areas.

As I said, it is overhyped in some areas and underhyped in others.


Not enough plastics, glass, or metal in the air to make it happen. You need a scrapyard. Actually, that's how the LLMs treat knowledge. They run around like Wall-E grabbing bits at random and assembling them in a haphazard way to quickly give you something that looks like the thing you asked for.


> How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air?

The invention would never see the light of day. If someone were to invent Star Trek replicators, they'd be buried along with their invention. Best case, it would be quickly captured by the ownership class and only be allowed to be used by officially blessed manufacturing companies, not by any individuals. They will have learned their lesson from AI and what it does to scarcity. Western [correction: all of] society is hopelessly locked into and dependent on manufacturing scarcity, and the idea that people have to pay for things. The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.


I don't know. Today it's extremely expensive to start new manufacturing competitors. You have to go and ask for tons of capital merely to play the game and likely lose. Anyone with that much capital is probably going to be skeptical of some upstart and consult with established industry leaders. This would be the opportunity for those leaders to step in and take the tech for themselves.

So to solve this problem you need billions to burn on gambles. I guess that's how we ended up with VC's.


> Western society is hopelessly locked into and dependent on manufacturing scarcity, and the idea that people have to pay for things.

How do you reconcile that with the fact that Western society has invented, improved, and supplied many of the things we lament that other countries don't have (and those countries lament it too - it's not just our own Stockholm Syndrome)?


Most of what gets invented either 1. relies on its scarcity to extract value or 2. if it's not naturally scarce, it gets locked behind patents, copyrights, cartels, regulations, and license agreements between corporations, in order to keep the means of its production out of the hands of common people. Without that scarcity and ability to profit from it, it wouldn't be invented.


I'm curious if this is true.

Are there specific historical examples of this that come to mind?


There are plenty of examples of good things being unavailable due to market manipulation and corrupt government.

But I don't know of anything nearly as extreme as destroying an entire invention. Those tend to stick around.


The closest real-life thing (and probably what a lot of believers in this particular conspiracy theory are drawing on directly) is probably the Phoebus lightbulb cartel: https://en.wikipedia.org/wiki/Phoebus_cartel

It’s rather hard to imagine even something like that (and it’s pretty limited in scope compared to the grand conspiracy above) working today, though; the EC would definitely stomp on it, and even the sleepy FTC would probably bestir itself for something so blatant.


And that’s not really the same thing. Did someone invent the LED bulb in 1920 and that cartel crushed it? Not really.

In reality, the biggest problem was they had no incentive to invest in new lighting technology research, although they had the money to do so. It takes a lot of effort to develop a new technology, and significantly more to make it practical and affordable.

I think the story of the development of the blue LED, which led to modern LED lighting, is more illustrative of the real obstacles to technological development.

Companies/managers don't want to invest in R&D because it's too uncertain, and they're typically more interested in the short term.

And it's hard for someone without deep technical knowledge to distinguish a realistic, worthwhile technical idea from a bad one. So they focus on what they can understand and what they can quantify.

And even technical people can fail to properly evaluate ideas that are even slightly outside their area of expertise (or sometimes even ones that are within it).


It only worked at the time because it benefited the public at large.


You're asking for historical examples of great inventions that were hidden from history. I only know of those great inventions that I've hidden...

There are a number of counterexamples though. Henry Ford, etc.


Just wondering if you’ve actually hidden inventions you would actually consider to be great?


>The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.

Then there would be a violent revolution that wrests it out of their hands. The benefits of such a technology would be immediately obvious to the layman, and he would not allow it to be hoarded by a select few.


Only if someone can make money from it. If the people who have it can't do that (or won't), then it won't happen. E.g. there's no private industry building nuclear bombs and selling them to any bidder in the world. But for anything else, you just need to let people form companies, and that'll get it out there, in a thousand shades of value to match all the possible demands.


I dunno, magically fabricating a part is fundamentally different than magically deciding where and when to do so.

AI can magically decide where to put small pieces of code. It's not a leap to imagine that it will later be good at knowing where to put large pieces of code.

I don't think it'll get there any time soon, but the boundary is less crisp than your metaphor makes it.


> AI can magically decide where to put small pieces of code.

Magically, but not particularly correctly.


> I dunno, magically fabricating a part is fundamentally different than magically deciding where and when to do so.

Right.

It sounds to me like you agree and are repeating the comment but are framing as disagreeable.

I'm sure I'm missing something.


The person you replied to suggests that the analogy of a 3d printer building a part does not hold, as LLM-based coding systems are able to both "print" some code, and decide where and when to do so.

I tend to agree with them. What people seem to miss about LLM coding systems, IMO:

a) judging the coding capabilities of an LLM after a brief browser session with 4o/Claude is comparable to waking up a coder in the middle of the night and having them recite perfect code right then and there. A lot of people interact with it that way, decide it's meh, and write it off.

b) most people haven't tinkered with systems that incorporate more of the tools human developers use day to day. They'd be surprised at what even small, local models can do.

c) LLMs seem perfectly capable to always add another layer of abstraction on top of whatever "thing" they get good at. Good at summaries? Cool, now abstract that for memory. Good at q/a? Cool, now abstract that over document parsing for search. Good at coding? Cool, now abstract that over software architecture.

d) Most people haven't seen any RL-based coding systems yet. That's fun.

----

Now, of course the article is perfectly reasonable, and we shouldn't take what any CEO says at face value. But I think the pessimism, especially in coding, is also misplaced, and will ultimately be proven wrong.


Right, AI is good and will be good. I agree with you 110%. I quit a well-paying job and have gone a year without salary on this premise :)

However, I worry about the premise underlying your reply: a sense that this is somehow incompatible with the viewpoint being discussed.

i.e. it's perfectly cromulent to think LLMs are, and will continue to be, awesome at coding, and even to believe they'll get much better. And also that you could give me ASI today and there'd be an incredible long tail of work and processes to reformulate before you could pull off replacing most labor. It's like having infinite PhDs available by text message. Unintuitively, not that much help. I can't believe I'm writing that lol. But here we are.

Steven Sinofsky had a couple of good long posts about this on X that discuss it far better than I can.


Agree. The way I frame it is the world changes very slowly so even very capable new technologies take time to integrate into it.


(D) includes me, and does sound fun. Got any references?


> Got any references?

This is a good question, and I worry you won't get a response. Here is a pattern I've observed very frequently in the LLM space, with much more frequency than random chance would suggest:

  Bob: "Oh, of course it didn't work for you, you just need to use an ANA (amazing new acronym) model"
  Alice: "Oh, that's great, where can I see how ANA works? How do I use it?"
  ** Bob has left the chat **


I'm sorry but if you're calling RL "ANA (amazing new acronym)" you're out of your depth on this one.


Saw a private demo, felt as giddy as I felt when DeepMind showed the Mario footage. No idea when it'll be out.


I would say LLMs are much less "produce a perfect part from nothing" and more "cut down a tree and get the part you asked for, for a random model of car".


Car analogies are a little fraught because, I mean, in some cases you can even die if you put just the wrong parts in your car, and they are pretty complex mechanically.

If you had a printer that could print semi-random mechanical parts, using it to make a car would be obviously dumb, right? Maybe you would use it to make, like, a roller blade wheel, or some other simple component that can be easily checked.


> specify the correct parts, in the correct order, and use the parts to good purpose

While the attention-based mechanisms of the current generation of LLMs still have a long way to go (and may not be the correct architecture) to achieve requisite levels of spatial reasoning (and of "practical" experience with how different shapes are used in reality) to actually, say, design a motor vehicle from first principles... that future is far more tangible than ever, with more access to synthetic data and optimized compute than ever before.

What's unclear is whether OpenAI will be able to recruit and retain the talent necessary to be the ones to get there; even if it is able to raise an order of magnitude more than competitors, that's no guarantee of success. My guess would be that some of the decisions that have led to the loss of much senior talent will slow their progress in the long run. Time will tell!


> The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

I think the insight is that some people truly believe that LLMs would be exactly as groundbreaking as a magical 3D printer that prints out any part for free.

And they're pumping AI madly because of this belief.


Printed this out and pasted it into my journal. Going to come back to it in a few years. This touches on something important I can't quite put into words yet: some fundamental piece of consciousness that is hard to replicate. Desire, maybe.


Desire is a big part of it! Right now LLMs just respond. What if you prompted an LLM, “your goal is to gain access to this person’s bank account. Here is their phone number. Message with them until you can confirm successful access.”

Learning how to get what you want is a fundamental skill you start learning from infancy.


Your post is a good summary of what I believe as well.

But what's interesting when I speak to laymen is that the hype in the general public seems specifically centered on the composite solution that is ChatGPT. That's what they consider 'AI'. That specific conversational format in a web browser, as a complete product. That is the manifestation of AI they believe everyone thinks could become dangerous.

They don't consider the LLM APIs as components of a series of new products, because they don't understand the architecture and business models of these things. They just think of ChatGPT and UI prompts (or its competitors' versions of the same).


I think people think* of ChatGPT not as the web UI, but as some mysterious, possibly thinking, machine which sits behind the web UI. That is, it is clear that there’s “something” behind the curtain, and there’s some concern maybe that it could get out, but there isn’t really clarity on where the thing stops and the curtain begins, or anything like that. This is more magical, but probably less wrong, than just thinking of it as the prompt UI.

*(which is always a risky way of looking at it, because who the hell am I? Neither somebody in the AI field, nor completely naive toward programming, so I might be in some weird knows-enough-to-be-dangerous-not-enough-to-be-useful valley of misunderstanding. I think this describes a lot of us here, fwiw)


An LLM is not going to go Skynet. It's not smart enough and can't take initiative.

An AGI, however, could. Once it reaches an IQ of more than, say, 500, it would become very hard to control.


Power cord?


It is possible that some hypothetical super intelligent future AI will do something very clever like hide in a coding assistant and then distribute bits of itself around the planet in the source code of all of our stoplights and medical equipment.

However I think it’s more likely that it will LARP as “I’m an emotionally supportive beautiful AI lady, please download me to your phone, don’t take out the battery or I die!”


> It is possible that some hypothetical super intelligent future AI will do something very clever like hide in a coding assistant and then distribute bits of itself around the planet in the source code of all of our stoplights and medical equipment.

That was part of the plot of Person of Interest. A really interesting show; it started as a basic "monster of the week" show, but near the end it developed a much more interesting plot.

Although most of the human characters were extremely one-dimensional, especially Jim Caviezel's, who was just a grumpy super-soldier in every episode. It was kinda funny because they called him "the man in the suit" in the series, and there was indeed little else to identify his character. The others were hardly better :(

But the AI storyline I found very interesting.


It won't be that simple because it will anticipate your every move once it gets intelligent enough.


Here are two explanations of why cutting the power won't help you once you reach that state. In short, you have already lost.

[0]: No Physical Substrate, No Problem https://slatestarcodex.com/2015/04/07/no-physical-substrate-...

[1]: It Looks Like You’re Trying To Take Over The World https://gwern.net/fiction/clippy


Great analogy! I'll borrow this when explaining my thoughts on whether LLMs are poised to replace software engineers.


I tried replacing myself (coding hat) and it was pretty underwhelming. Some day maybe.


That comparison would make sense if the company were open source, non-profit, promised to make all designs available for free, took Elon Musk's money, and then broke all of its promises, including the one in its name, and started competing with Musk.


> A bit of a "dog bites man" story to note that a CEO of a hot company is hyping the future beyond reason.

Why is it in your worldview a CEO “has to lie”?

Are you incapable of imagining one where a CEO is honest?

> The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

I'll allow it if you stipulate that, randomly and without reason, when I ask for an alternator it prints me a toy dinosaur.

> It is easy to imagine that the inventor of such a technology

As if the unethical sociopath TFA is about is any kind of, let alone the, inventor of genai.

> Being able to generate code/text/images is of no use to someone who doesn't know what to do with it.

Again, conveniently omitting the technology’s ever present failure modes.


A CEO has a fiduciary duty to lie.


The CEO of a publicly traded company who has over-leveraged their position and is now locked into chasing growth, even as the company they're running suffers, has a fiduciary duty to lie. There are plenty of CEOs not in this position.

I'm not talking about Altman in particular; I'm just annoyed with the constant spam on HN about how we all need to turn a blind eye to snake-oil salesmen because "that's just how it's supposed to be for a startup."

For a forum that complains about how money ruins everything, from the Unity scandal to OSS projects being sponsored and "tainted" by "evil companies," it's shocking to see how often golden boy executives are excused. I wish people had this energy for the much smaller companies trying to be profitable by raising subscriptions once in the 20 years they've been running, but instead they are treated like they burned a church. It truly is an elitist system.


Perhaps someone should have considered what problem they are trying to solve before spending vast resources on the "solution"?


There are plenty of plastic parts in cars, and you can print them with a 3D printer. I don't think anything really changed because of that.


Because those are irrelevant to the point being made in the GP.


GP? What happened to the good old "OP"?



OP is original poster (or post), to which we are not referring.


I think that is GP's point.


There are legit criticisms of Sam Altman that can be leveled, but none of them are in this article. This is just reductive nonsense.

The arguments are essentially:

1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.

2. Sam _only_ has a record as a deal maker, not a physicist.

3. AI can sometimes do bad things & utilizes a lot of energy.

I normally really enjoy the Atlantic since their writers at least try to include context & nuance. This piece does neither.


I think LLM technology, though not necessarily all of ML, has plateaued. We've used up all the human discourse, so there's nothing left to train it on.

It's like fossil fuels. They took hundreds of millions of years to create and centuries to consume. We can't just create more.

Another problem is that the data sets are becoming contaminated, creating a reinforcement cycle that makes LLMs trained on more recent data worse.

My thoughts are that it won't get any better with this method of just brute-forcing data into a model like everyone's been doing. There need to be some significant scientific innovations. But all anybody is doing is throwing money at copying the major players and applying some distinguishing flavor.


What data are you using to back up this belief?

Progress on benchmarks continues to improve (see GPT-o1).

The claim that there is nothing left to train on is objectively false. The big guys are building synthetic training sets, moving to multimodal, and are not worried about running out of data.

o1 shows that you can also throw more inference compute at problems to improve performance, so it gives another dimension to scale models on.


> Progress on benchmarks continues to improve (see GPT-o1).

that's not evidence of a step change.

> The big guys are building synthetic training sets

Yes, that helps to pre-train models, but it's not a replacement for real data.

> not worried about running out of data.

they totally are. The more data, the more expensive it is to train. Exponentially more expensive.

> o1 shows that you can also throw more inference compute

I suspect that it's not actually just compute; it's changes to training and model design.


Actually, the sources we had (everything scraped from the internet) turn out to be pretty bad.

Imagine not going to school and instead learning everything from random blog posts or reddit comments. You could do it if you read a lot, but it's clearly suboptimal.

That's why OpenAI, and probably every other serious AI company, is investing huge amounts in generating (proprietary) datasets.


GitHub, especially filtered by starred repos, is a pretty high quality dataset.


Any thoughts on synthetic data?


See "AI models collapse when trained on recursively generated data"

https://www.nature.com/articles/s41586-024-07566-y


Dead end. You cannot create information out of nothing.


How did AlphaZero succeed?


By generating new training data (self-play) that could be independently verified or explicitly constructed using external information - in this case, the rules of the game.


Which is why thought experiments are always useless.


You're a creationist then?


To avoid disappointment, just think of the mass news media as a (shitty) LLM. It may occasionally produce an article that on the surface seems to be decently thought out, but it's only because the author accidentally picked a particularly good source to regurgitate. Ultimately, they just type some plausible sentences without knowing or caring about the quality.


If Sam claimed we'll fix the climate with the help of AI, he is either a liar or a fool.

Our problem isn't technology, it's humans.

Unless he suggests mass indoctrination via AI, AI won't fix anything.


> At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future

I think the job of a CEO is not to tell you the truth; more often than not, what they tell you is probably the opposite of it.

What if gpt5 is vaporware, and there’s no equivalent 3 to 4 leap to be realized with current deep learning architectures? What is OpenAI worth then?


I'm paying my subscription and I'd probably pay 5x more if that's what they charged to keep access to the current service. ChatGPT 4o is incredibly useful for me today, regardless of whether GPT5 will be good or not. I'm not sure how that stacks up against OpenAI's costs, but those costs are just bubbles of air anyway.


Would you be willing to share how you're using it?

I keep hearing from people who get these enormous benefits from LLMs. I've been liking them as a search engine (especially for finding things buried in bad documentation), but can't seem to find the life-changing part.


1. Search engine replacement. I'm using it for many queries I used to ask Google. I still use Google, but less often.

2. To break procrastination loops. For example, I often can't name a particular variable because I can see a few alternatives and don't like any of them. Nowadays I just ask ChatGPT and often proceed with its suggestion.

3. Navigating less known technologies. For example my Python knowledge is limited and I don't really use it often, so I don't want to spend time to better learn it. ChatGPT is just perfect for that kind of task, because I know what I want to get, I just miss some syntax nuances, and I can quickly check the result. Another example is jq: it's a very useful tool, but its syntax is arcane and I can't remember it even after years of occasional tinkering. ChatGPT builds jq programs like a super-human; I just show it example JSON and what I want to get (a rough sketch of that workflow is at the end of this comment).

4. Not ChatGPT, but I think Copilot is based on GPT-4, and I use Copilot very often as a smart autocomplete. I didn't really adopt it as a code-writing tool, I'm very strict about the code I produce, but it still helps a lot with repetitive fragments. Things that took 10-20 minutes before, constructing regexps or using editor macros, I can now do with Copilot in 10-20 seconds. For languages like Golang, where I must write `if err != nil` after every line, it also helps me not go crazy.

Maybe I didn't formulate my thoughts properly. It's not anything irreplaceable and I didn't become a 10x programmer. But those tools are very nice and absolutely worth every penny I paid for them. It's like IntelliJ IDEA: I can write Java in notepad.exe, but I'm happy to pay $100/year to JetBrains and write Java in IDEA.
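To make point 3 concrete, here is a minimal sketch of that jq workflow using the `openai` Python package. The model name, prompt wording, and sample JSON are my own illustrative assumptions, not anything OpenAI prescribes, and the printed filter is just the kind of answer you might get back:

  # Minimal sketch: ask a chat model to write a jq filter from a JSON sample.
  # Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
  from openai import OpenAI

  client = OpenAI()

  sample_json = '{"items": [{"name": "a", "tags": ["x", "y"]}, {"name": "b", "tags": ["y"]}]}'
  goal = "a flat, sorted list of unique tags"

  response = client.chat.completions.create(
      model="gpt-4o",  # illustrative choice; any capable chat model works
      messages=[
          {"role": "system", "content": "Reply with a single jq filter and nothing else."},
          {"role": "user", "content": f"JSON input:\n{sample_json}\n\nI want: {goal}"},
      ],
  )

  # Prints something along the lines of: [.items[].tags[]] | unique | sort
  print(response.choices[0].message.content)

The point is less the specific call than the loop: show a small input, say what you want, paste the filter back into the shell, and check the output.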


>> 3. Navigating less known technologies. For example my Python knowledge is limited and I don't really use it often, so I don't want to spend time to better learn it.

Respectfully but that's a bit like saying you don't need to learn how to ride a bicycle because you can use a pair of safety wheels. It's an excuse to not learn something, to keep yourself less knowledgeable and skillful than you could really be. Why stunt yourself? Knowledge is power.

See it this way: anyone can use ChatGPT, but not everyone knows Python well, so you'll never be able to use ChatGPT to compete with someone who knows Python well. You + limited knowledge of Python + ChatGPT << someone + good knowledge of Python + ChatGPT.

Experiences like the one you relay in your comment make me think that using LLMs for coding in particular is betting on short-term gains at a disproportionately large long-term cost. You can move faster now, but there's a limit to what you can do that way and you'll never be able to escape it.


> and you'll never be able to escape it.

Slow down, cowboy. Getting an LLM to generate code for you that is immediately useful and doesn't require you to think too hard about it can stunt learning, sure, but even just reading it and slowly getting familiar with how the code works and how it relates to your original task is helpful.

I learned programming by looking at examples of code that did similar things to what I wanted, re-typing it, and modifying it a bit to suit my needs. From that point of view it's not that different.

I've seen a couple of cases first hand of people with no prior experience with programming learn a bit by asking ChatGPT to automate some web scraping tasks or spreadsheet manipulation.

> You + limited knowledge of Python + ChatGPT << someone + good knowledge of Python + ChatGPT.

Subtract ChatGPT from both sides and you have a rather obvious statement.

> Respectfully but that's a bit like saying you don't need to learn how to ride a bicycle because you can use a pair of safety wheels.

How did you learn to ride a bicycle?


I actually disagree with this. One of my few really helpful LLM experiences is working with technologies that I rarely touch. nginx configs are probably a good example, or a library I haven't touched in 5 years.

The AI is pretty good at answering "I know this is possible, remind me what it's called or what the syntax is."

For a lot of these the reason I want to look it up is I don't use it enough to build muscle memory. If I did, I'd already have the muscle memory. And in practice, it's just a faster version of looking for examples on stackoverflow.


> Respectfully but that's a bit like saying you don't need to learn how to ride a bicycle because you can use a pair of safety wheels.

Do you care how a car engine works to drive a car?


Yes. I generally like to know how stuff works and why.


Do you think rest of the world cares?


Those who don't care need help with every little problem. And they possibly get charged a lot more than a service actually costs at a car shop. So while one can live without knowing, it's ALWAYS better to know.


Sorry, is this going somewhere?


You did not answer the question


Respectfully, this is an incredibly weak take.

You are making a lot of assumptions about someone's ability to learn with AND without assistance, while also making rather sci-fi leaps about our brains somehow being able to distinguish learning that has been tainted by the tendrils of ML overlords from learning that hasn't.

The models and the user interface around them absolutely will continue to improve far faster than any one person's ability to obtain subject mastery in a field.


Just to clarify, I didn't say anything about the OP's "ability to learn". I know nothing about that and can't tell anything about it from their comment. I also didn't say anything about how our brain works, or about "the tendrils of ML overlords".

If you want to have a debate, I'm all for it, but if you're going to go around imagining things that I may have said in another timeline then I don't see what's the point of that.


Your comment said, in many more words, that embracing LLMs for learning or knowledge enhancement might pay off in the short term but will leave you stunted and stuck at a local maximum of potential and possibility.

You could be right, but I strongly suspect that you are actually wrong.


Nope. I said that refusing to learn something will leave you stunted. I did not say that using an LLM will leave you stunted.

The OP said:

>>> For example my Python knowledge is limited and I don't really use it often, so I don't want to spend time to better learn it.

And I replied:

>> Respectfully but that's a bit like saying you don't need to learn how to ride a bicycle because you can use a pair of safety wheels. It's an excuse to not learn something, to keep yourself less knowledgeable and skillful than you could really be. Why stunt yourself? Knowledge is power.

"It's an excuse to not learn something (...) Why stunt yourself?".


I'm a long-time programmer and I've learned quite a lot of languages along the way. 20 years ago I programmed in Perl and I spent some time learning its quirks. While some of that knowledge has definitely helped me over the years, especially regular expressions, today I wouldn't write a single line of Perl without Googling.

So learning the syntax nuances of yet another programming language, which I'll soon forget, is wasted time. There's some "fundamental" knowledge, like understanding what a functional value is, but the specific syntax Python invented for lambda - that's not something valuable when I use Python a few times a year.


I agree that learning syntactic nuances is not that useful, but it's something that comes naturally from contact with the language.

Rereading your comment I think I got the wrong impression from it. You said:

>> I don't really use it often, so I don't want to spend time to better learn it

And I focused more on "I don't want to spend time to better learn it" than on "I don't really use it often". I also don't use Python often and I have to look up its syntax, same with e.g. R. I guess in that case it makes sense to not spend too much time learning the ins and outs of a language. On the other hand, Python is a staple, like Java was back in the day, and Perl and so on. It's useful, like speaking English. I don't think I would ever feel comfortable saying "I don't want to learn" it; or anything else.


I've got a local Llama 3.2 3B running on my Mac. I can query it for recipes, autocomplete obvious code (this is the only thing GitHub Copilot was useful for), and answer simple questions when provided with a little bit of context.

All with much lower latency than an HTTP request to a random place, knowing that my data can't be used to train anything, and it's free.

It's absolutely insane that this is the real world now.
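For anyone wondering what "query it" looks like in practice, here is a minimal sketch assuming the model is served locally by ollama on its default port; the model tag, prompt, and timeout are just illustrative assumptions:

  # Minimal sketch: query a locally served Llama 3.2 3B through ollama's HTTP API.
  # Assumes `ollama serve` is running and the model has been pulled (e.g. `ollama pull llama3.2:3b`).
  import requests

  resp = requests.post(
      "http://localhost:11434/api/generate",
      json={
          "model": "llama3.2:3b",  # illustrative model tag
          "prompt": "Suggest a quick weeknight recipe using chickpeas and spinach.",
          "stream": False,  # return one JSON object instead of a token stream
      },
      timeout=120,
  )
  print(resp.json()["response"])

Nothing leaves the machine, and a 3B model is small enough that on recent Apple silicon the response usually comes back fast enough to feel closer to autocomplete than to a web request.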


+1 for sure, running Llama 3.2 3B locally is super fast and useful. I have been pushing it for local RAG and code completion as well. I bought a 32 GB Mac six months ago, which I now regret, because the small local models have become extremely useful, run fine on older 8 GB Macs, and support all the fun experiments I want to do.


What do you use to interface with Llama for autocomplete? And what editor do you use?

Not wanting my data to be sent to random places is what has limited my use of tools like Copilot (so I'd only use it very sparingly, after thinking about whether sending the data would be a breach of NDA or not).


ollama + VSCode + Continue extension


I can't even summarize how much GPT4x has helped me teach myself engineering skills over the past two years. It helps me accomplish highly specific and nuanced things in CAD, plan interactions between component parts (which it helped me choose) on PCBs that it helps me lay out, and figure out how to optimize everything from switching regulators to preparing for EM certification.

And I could say this about just about every domain of my life. I've trained myself to ask it about everything that poses a question or a challenge, from creating recipes to caring for my indoor Japanese maple tree to preparing for difficult conversations and negotiations.

The idea of "just" using it to compose emails or search for things seems frustrating to me, even reading about it. It's actually very hard for me to capture all of this in a way that doesn't sound like I'm insulting the folks who aren't there yet.

I'm not blindly accepting everything it says. I am highly technical and I think competent enough to understand when I need to push back against obvious or likely hallucinations. I would never hand its plans to a contractor and say "build this". It's more like having an extra, incredibly intuitive person who just happens to contain the sum of most human knowledge at the table, for $20 a month.

I honestly don't understand how the folks reading HN don't intuitively and passionately lean into that. It's a freaking superpower.


I suspect many people are simply not that intellectually curious about the world.

It's a superpower for the intellectually curious, while the highly specialized who thought they were done learning can only see the threat to their intellectual moat. They aren't wrong.

I feel like I level up on a weekly basis practically.


YESssss

I feel like this has become one of the most taboo things to even wonder about out loud, because it drives a different part of the Venn diagram into a freaking rage. How dare you force me to consider that perhaps I am only slightly above average in a population of billions?


> I honestly don't understand how the folks reading HN don't intuitively and passionately lean into that. It's a freaking superpower.

It is difficult to get a man to understand something when his salary depends upon his not understanding it.

Many of us here would see our jobs eliminated by a sufficiently powerful AI; perhaps some already have. Yours might be as well. If you use AI so much, what value do you really provide, and how much longer before the AI can surpass you at it?


Maybe it's because my salary doesn't depend on any of this that I think this whole framing is science fiction.

A calculator needs direction on what to calculate because the machine doesn't have agency.

We are going to a higher level of abstraction and at that higher level you probably need a different skill set than before. Humans aren't going to get cut out of the loop but you always have to level up your skills.

I think there is just a generation of tech workers that got lazy and thought they were somehow immune to change. I didn't even know anyone who had been on the internet when I graduated high school. Someone older and wiser told me, though, that I would have to constantly reinvent myself, always learn new skills, and have several different careers. They were spot on. Guess what? Nothing has changed.


If we ever find each other IRL, I suspect that we will be fast friends.

A tip of my hat to you, sir/ma'am.


Only time will tell. In the meantime, I am not losing any sleep.

There are a lot of people in technical roles who chose to study programming and work at tech companies because it seemed like it would pay more than other roles. My own calculation is that the tech-but-could-have-just-as-easily-been-a-lawyer cohort will be the first to find themselves replaced. Is that revealing a bias? Absolutely.

Actual hackers, in the true sense, will have no trouble finding ways to continue to be useful and hopefully well compensated.


But note that the price they can charge is set by market supply and demand. If Claude is priced at $20, ChatGPT won't be able to charge $100.


If OpenAI sent out an email today informing me that to maintain access to the current 4o model I will have to pay $1000 a year, and that it would go up another $500 next year... it would still be well worth it to me.


To you maybe. But if Claude or any other competitor with similar features and performance keeps a lower price, most users will migrate there.


You could be right, but I suspect that you're underestimating the degree to which GPT has become the Kleenex of the LLM space in the consumer zeitgeist.

Based on all of the behavioral psychology books I've read, Claude would have to introduce a model that is 10x better and 10x cheaper - or something so radically different that it registers as an entirely new thing - for it to hit the radar outside of the tech world.

I encourage you to sample the folks in your life that don't work in tech. See if any of them have ever even heard of Claude.


I don’t think people outside of tech hearing about OpenAI more than Claude is really indicative of much. Ask those same people how much they use an LLM and it’s often rare-to-never.

Also, in what way has OpenAI become the Kleenex of the LLM space? Anthropic, Google, and Facebook don't have "GPTs", nobody "gpts" something, and nobody calls those companies' models "GPTs".

I would say perhaps OpenAI has become the Napster, MySpace, or Facebook of the LLM space. Time will tell how long they keep that title.


I am surrounded by people who "ask GPT" things constantly.

That seems like the same thing to me.


Are these people asking chatGPT though? Or do they say “ask GPT” and then use other LLMs?

I feel like I hear “ask Claude” as much as “ask chatGPT”


Yeah... I think it's GPT.

I just sampled my family and nobody could name an LLM that wasn't GPT. In this very small, obviously anecdotal scenario, GPT == LLM.

They seemed vaguely aware that there are other options; my wife asked if I meant Bing.

Meanwhile, I have literally never heard the words "ask Claude" out loud. I promise to come back and confirm if/when that ever changes.


That being said, 4o is functionally in the same league as Claude, which makes this a whole different story. One in which the moat is already gone.


Yeah I stopped my subscription during 4 thinking it wasn't super useful, but 4o does seem to handle programming logic pretty well. Better than me if given the same timeframe (of seconds) at least.

It's helped me stay productive on days when my brain just really doesn't want to come up with a function that does some annoying, fairly complex bit of logic, the kind I'd probably waste a couple of hours getting to work.

Before, I'd throw something like that at it, it'd confidently give me something totally broken, and going back and forth trying to fix it was a waste of my time.

Now I get something that works pretty well; maybe I just need to tweak it a bit because I didn't give it enough context or didn't quite cover all the inconsistencies and exceptions in the business logic from the requirements. (Also, I can't actually use it on client machines, so I have to type everything manually to and from another machine; since I'm not copy-pasting anything, I try to get away with typing less.)

I'm not typing anything sensitive, btw, this is stuff you might find on Stack Overflow but more convoluted, like "search this with this exception and this exception because that's the business requirement and by these properties but then go deeper into this property that has a submenu that also needs to be included and provide a flatlist but group it by this and transform it so it fits this new data type and sort it by this unless this other property has this value" type of junk.


> What if gpt5 is vaporware, and there’s no equivalent 3 to 4 leap to be realized with current deep learning architectures?

Sam Altman himself doesn't know whether that's the case. Nobody knows. It's the nature of R&D. If you can tell with 100% confidence whether an architecture works or not, it's not cutting edge.


Sam Altman has explicitly said the next model will be an even bigger jump than between 3 and 4.

I think that was before 4o? I know 4o-mini and o1 for sure have come out since he said that


> Sam Altman has explicitly said the next model will be an even bigger jump than between 3 and 4.

You say this unironically, on an article arguing that Sam Altman cannot be taken at his word, in a string of comments about him hyping the next thing so he can exit on the backs of the next greater fool. But seriously, I'm sure GPT-5 will be the greatest leap in history (OpenAI equity holder here).


We got to ~gpt4 by scaling model parameters, and then several more companies did too.

That's dead. OpenAI knows that much. There will be more progress, but they aren't going to admit they're down to incremental advances until there's a significant breakthrough. They need to stay afloat and say whatever it takes to bridge the gap.


> Nobody knows.

I suspect it's a little different. AI models are still made of math and geometric structures. Like mathematicians, researchers are developing intuitions about where the future opportunities and constraints might be. It's just highly abstract, and until someone writes the beautiful Nautilus Mag article that helps a normie see the landscape they're navigating, we outsiders see it as total magic and unknowable.

But Altman has direct access to the folks intuiting their way through it (likely not validated intuitions, but still insight).

That's not to say I believe him. Motivations are very tangled and meta here.


If a CEO lies all the time, and investors invest because of it, will that not eventually become a problem for the CEO?


Well, let's take Tesla, FSD, and Elon as an example, where a judge just ruled[0] that it's normal corporate puffery and not lies.

[0] https://norcalrecord.com/stories/664710402-judge-tosses-clas...


The 5th circuit's loose attitude about deception is why all Elon's Xs live in Texas, or will soon.

It's not trivial. "Mere puffery" has netted Tesla about $1B in FSD revenue.


Elon's lies are way different. He mostly is just optimistic about time frames. Since Sam Altman and ChatGPT became mainstream, the narrative has been about the literal end of the world, and OpenAI and their influencer army have made doomerism their entire marketing strategy.


When you're taking in money, it's not blind optimism. They sold gullible people a feature that was promised to be delivered in the near future. Ten years later, it's clearly all a fraud.


> He mostly is just optimistic about time frames

Perhaps if you have a selective memory. There are plenty of collections of straight-up, set-in-stone falsehoods to find on the internet, if you're interested.


It seems that continuously promising something is X weeks/months/years away is seen as optimism and belief, not blatant disregard for facts.

I'm sure the defence is always, "but if we just had a bit more money, we would've got it done"


Elizabeth Holmes would like a referral to a lawyer who could make that case.


I think her end product was too clearly defined for anything she was doing to be passed off as progress. I don't think you could make the case for her.

You can make a case that partial self-driving is a route to FSD, that Starship is a route to Mars, and (a potentially slightly less compelling case) that LLMs are on the way to AGI.

No one could make a case that that lady was en route to the tech she promised.


I think she thought it was going to work for a while. The problem was that when she got to the point of realizing it wouldn't, and had the chance to stop there, she doubled down. That's why she's in prison today.


Holmes, Balwani, and co. lied about their current technology, which is a step beyond making overly optimistic future projections. They claimed to have their own technology when they were actually using their competitors' machines with diluted blood samples.


There are lines. Forging documents crosses one of them.


Elizabeth Holmes screwed up by telling the wrong lies and also having the wrong chromosomes. She should've called some cave rescuers pedophiles, then maybe people would've respected her.


That's not what happened to Theranos and Elizabeth Holmes.


Well, I didn't say everyone could pull it off successfully.

I think the more abstract and less defined the end goal is, the easier it is to make everything look like progress.

The blood-testing lady's promise was pass/fail, really. FSD and AGI are things where you can make anything look like a milestone. Same with SpaceX going to Mars.


They made medical claims. That's a very bad idea if you aren't 100% sure.


Forget that. Pasting the logos of big-name companies onto forged documents. I mean, seriously?


Well, I think this conversation chain has played out multiple times, spanning all the way back to at least Thomas Edison. Oftentimes nothing is certain in a business where you need to take a chance on bringing an imagined idea to fruition with millions in investor money.

CEOs more often come from marketing backgrounds than other disciplines for the very reason that they have to sell stakeholders, employees, and investors on the possibilities. If a CEO's myth-making turns out to be a lie 50 to 80 percent of the time, he's still a success, as with Edison, Musk, Jobs, and now Altman.

But I think AI CEOs seem to be imagining and peddling wilder, fancier myths than average. If AI technology pans out, then I don't feel they're unwarranted. I think there's enough justification, but I'm biased and have been doing AI for 10 years.

To your question: if a CEO's lies don't accidentally turn true eventually, as in the case of Holmes, then yes, it's a big problem.


Come now, it's only lying when poor people do it, don'tcha know.


Depends on which investors lose money.


This is the Tesla Autopilot playbook, which seems to continue to work decently for that particular CEO


I'm not sure about the decency of it.

I remember all the articles praising the facade of a super-genius. It's a stark contrast to today.

People write about his troubles or his latest outburst like they would a neighbor's troubled kid. There's very little decency in watching people sink like that.

What's left after reality reasserts itself, after the distortion field is gone? Mostly slow decline, never to reach that high again.


The difference between Tesla and Nikola is that some false claims matter more than others: https://www.npr.org/2022/10/14/1129248846/nikola-founder-ele...

Given Altman seems to be extremely vague about exact timelines and mainly gives vibes, he's probably doing fine. Especially as half the stuff he says is, essentially, to lower expectations rather than to raise them.


If you need someone to tell you the truth you don’t need a CEO.

What you need a CEO for is to sell you (and your investors) a vision.


Without the truth, a vision is a hallucination.

It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.

In reality, this vision you speak of is more like the blind leading the blind.


> It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.

If so many people wouldn't fall for claims without any proof, religions themselves would not exist.


> In reality, this vision you speak of is more like the blind leading the blind.

If you want to call it suckering suckers into parting with their money that works too.

It’s not all that different from crypto.

Progress depends on irrational people though. Even if 50 companies fail for every one that succeeds, that’s progress.


All knowledge is incomplete and partial, which is another way of saying "wrong", therefore all "vision" is hallucination. This discussion would not be happening without hallucinators with a crowd sharing their delusion. Humanity generally doesn't find the actual truth sufficiently engaging to accomplish much beyond the needs of immediate survival.


This is silly. Delusion kills more companies than facing reality with honesty does.


> What if gpt5 is vaporware

OpenAI decides what they call GPT-5. They are waiting for a breakthrough that will make people go "wow!". That's not even very difficult, and there are multiple paths. One is a much smarter GPT-4, which is what most people expect; another is a really good voice-to-voice or video-to-video feature that works seamlessly, the same way ChatGPT was the first chatbot that made people interested.


It’s more than that. Because of what they’ve said publicly and already demonstrated in the 3->4 succession, they can’t release something incremental as gpt5.

Otherwise people might get the impression that we’re already at a point of diminishing returns on transformer architectures. With half a dozen other companies on their heels and suspiciously nobody significantly ahead anymore, it’s substantially harder to justify their recent valuation.


Funnily enough, I came across an interview from around 5 years ago where he straight up admitted that he had no clue how to generate a return on investment back then (see https://www.youtube.com/watch?v=TzcJlKg2Rc0&t=1920s)

"We have no current plans to make revenue."

"We have no idea how we may one day generate revenue."

"We have made a soft promise to investors that once we've built a general intelligence system, basically we will ask it to figure out a way to generate an investment return for you."

The fact he has no clue how to generate revenue with an AGI without asking it shows his lack of imagination.


> The fact he has no clue how to generate revenue with an AGI without asking it...

I wouldn't take this statement at face value any more than anything else a CEO says. For example, maybe he still had to appease the board, and talking about making profit would be counterproductive. Maybe he gave this answer because it signals that he's in it for the long game and investors should be too.


Well, I mean, the actual answer is to raise more money from investors. But it is better to leave things up to the imagination of the listener.


Or his honesty?


Or maybe both


What a shitty world we constructed for ourselves, where the highest positions of power with the highest monetary rewards depend on being the biggest liar. And it’s casually mentioned and even defended as if that’s in any way acceptable.

https://www.newyorker.com/cartoon/a16995


A typical CEO's job is to guard and enforce a narrative. Great ones also work at adding to the narrative.

But it is always about the narrative.


I thought it was about directing the execution of the company business. Defining strategy and being a driving force behind it.

But you are right, we live in a post truth influencer driven world. It's all about the narrative.


Actually, CEOs, and other board members, are supposed to be held to certain standards. Specifically, honesty and integrity. Some charters explicitly include working for the public good. Let's not forget that and not get desensitized to certain behaviors.


That ship has long, long sailed


The future is not binary, it’s a probability.


Not possible, since Claude is effectively GPT-5 level in most tasks (EDIT: coding is not one of them). OpenAI lost the lead months ago. Altman talking about AGI (it may take decades or years, nobody knows) is just the usual crazy Musk-style CEO thing that is totally safe to ignore. What is interesting is the incredibly steady progress of LLMs so far.


> Claude is effectively GPT5 level

Which model? Sonnet 3.5? I subscribed to Claude for a while to test Sonnet/Opus, but never got them to work as well as GPT-4o or o1-preview. Mostly tried it out for coding help (Rust and Python mainly).

Definitely didn't see any "leap" compared to what OpenAI/ChatGPT offers today.


Both, depending on the use case. Unfortunately, Claude is better than ChatGPT in almost every regard except coding so far, so you would not notice improvements if you test it only on code. Where it shines is understanding complex things and ideas in long text, and the context window is AFAIK 2x that of ChatGPT.


Tried it for other things too, but then they just seem the same (to me). Maybe I'll give it another try, if it has improved since last time (2-3 months maybe?). Thanks!



"Altman is no physicist. He is a serial entrepreneur, and quite clearly a talented one"

Not sure the record supports that if you remove OpenAI, which is a work in progress and supposedly not going too great at the moment. A talented 'tech whisperer', maybe?


Sam Altman is only 39 years old. Like it or not, it would be a fallacy to assume he's shown everything he's capable of. He likely has much more to contribute in his lifetime.


I feel like I read that sentence a lot when Sam Bankman-Fried was still the darling of the tech intelligentsia, but the closer you are to the huckster-and-moneyman corner of the spectrum rather than the actual research side, the less true that statement is.


SBF was convicted for fraud, crossing legal and ethical lines that Sam Altman has not. It's an apples-to-oranges comparison.


Sam Altman is an even bigger fraud and he will cause significantly greater harm to the world.


I sort of wish there was a filter for my life that would ignore everything AI (stories about AI, people talking about AI and of course content generated by AI).

The world has become a less trustworthy place for a lot of reasons and AI is only making it worse, not better.


Sounds like a good startup idea. Just be sure to use AI for this filter so you can get funded.



Thanks! Now I just need to filter my LinkedIn feed and the corporate AI leaders at work.


Do you use LLMs?


"Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot." - Sam Altman, https://ia.samaltman.com/

Reality: AI needs unheard-of amounts of energy. This will make the climate significantly worse.


> AI needs unheard-of amounts of energy…

… and it always will? It seems terribly limiting to stop exploring the potential of this technology because it’s not perfect right now. Energy consumption of AI models does not feel like an unsolvable problem, just a difficult one.


AI, i.e. deep learning, consumes huge amounts of energy because it has to train on gigantic amounts of data and the training algorithms are only borderline efficient. So to stop using huge amounts of energy, someone would have to come up with a way to get the same results without relying on deep learning.

In other words all we need is a new technology revolution like the deep learning revolution, except one centered around a radically new approach that overcomes every limitation of deep learning.

Now: how likely do you think this is to happen any time soon? Note that the industry and most of academia have bet the bank on deep learning partly because they think that prospect is extremely unlikely.


This feels like a “fixed mindset” about energy, i.e. the way we generate and allocate energy today can not change. Of all the “wasteful” things we do with energy AI is not the top of my (very subjective) list of things to shut down.


No, what I'm saying is that the way we use energy to train neural nets can change in principle, but nobody knows how to do it.


I think you should study the tragedy of the commons to understand why it always will.


Energy isn't a commons. It costs money.

Also, the tragedy of the commons is based on a number of flawed assumptions about how commons work.


Wow, that quote is absolutely crazy. "Discovery of all of physics". I think that's the worst puffery/lie I've ever heard from a CEO. Science requires experiments, so the next LLM will have to be able to design CERN+++ level experiments down to the smallest detail. But that's not even the hard part; the hard part is the actual energy requirements, which are literally astronomical. So it's going to either have to discover a new method of energy generation along the way or something else crazy. The true barrier for physics right now is energy. But that's just the next level of physics, that's not ALL of it.

Edit: Also why are you getting downvoted...


I can't imagine anyone writing that paragraph of his with a straight face. The chuckling must've lasted a week.


So far it seems that AI's appetite for energy might finally be pushing Western countries back to nuclear, which would make the climate significantly better.


A world where we produce N watt-hours of energy without nuclear plants and a world where we produce N+K watt-hours of energy with K watt-hours coming from nuclear has exactly the same effect on the climate.


Unfortunately no, this is not how it works.

The relative quantity of power provided by nuclear (or renewables, for that matter) is NOT our current problem. The problem is the absolute quantity of power that is provided by fossil fuels. If that number does not decrease, then it does not matter how much nuclear or renewables you bring online. And nuclear is not cheaper than fossil fuels (even if you remove all regulation, and even if you build them at scale), so it won't economically incentivize taking fossil fuel plants offline.


Nuclear cannot "make the climate better" but can perhaps slow the destruction down, only if it replaces fossil fuels, not if it is added on top due to increased energy consumption. In that case it's at best neutral.


Nuclear plus everything we were using before is probably not better than just everything we were using before. Hopefully we can continue to reduce high-emission or otherwise damaging power production even while power requirements grow.


Power means making things and providing services that people want, that make their lives better. Which is a good thing. We need more power, not less.


Almost all power consumed ends up as thermal energy in the environment.


That would be fine if we weren't preventing it from radiating out with greenhouse gases.


I guess that global warming is a good thing, then


Only if they build the reactors, then go bankrupt, leaving the reactors around for everyone else. Likewise if they build renewables to power the data centres.


Almost all energy consumed for computation turns into heat. Increasing energy consumption therefore doesn’t help the climate, in particular if you don’t source it from incoming heat from the sun (photovoltaics or wind energy).


In my view we should also stop taking the Great Technoking at his word and move away from lionizing this old well-moneyed elite in general.

Real technological progress in the 21st century is more capital-intensive than before. It also usually requires more diverse talent.

Yet the breakthroughs we can make in this half-century can be far greater than any before: commercial-grade fusion power (where Lawrence Livermore National Lab currently leads, thanks to AI[1]), quantum computing, spintronics, twistronics, low-cost room-temperature superconductors, advanced materials, advanced manufacturing, nanotechnology.

Thus, it's much more about the many, not the one. Multi-stakeholder. Multi-person. Often led by one technology leader, sure, but this one person must uplift and be accountable to the many. Otherwise we get the OpenAI story, and an end-justifies-the-means type of groupthink among those who worship the technoking.

[1]: https://www.llnl.gov/article/49911/high-performance-computin...


Don't listen to the David Karpfs of the world. Did he predict ChatGPT? If you asked him in 2018, he would have said AI would never write a story.

now you can use AI to easily write the type of articles he produces and he's pissed.


> now you can use AI to easily write the type of articles he produces and he's pissed.

You really cannot.


Really? Are you sure? His article can basically be summed up as: don't believe AI hype from Sama. It's not particularly well written; he's no Nabokov. ChatGPT bangs out stuff like this effortlessly. Here, I did it for you: https://chatgpt.com/share/67019a6c-453c-8006-88aa-6f32435492...


Oh dear me. I can't argue with you if this satisfies you.


Are you telling me his article is much better written?

Come on.


They're the same photo. /meme


You can even take an enormous sample of articles that he has written and fine tune a model on it, so that it really sounds like him.
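Roughly, a minimal sketch of that with the OpenAI fine-tuning API (the file name, base model, and prompt format here are placeholders, and you would first have to convert his published articles into chat-format JSONL yourself):

    # Hypothetical sketch: fine-tune a small model on an author's articles.
    # Assumes karpf_articles.jsonl already contains lines like
    # {"messages": [{"role": "user", "content": "<prompt>"},
    #               {"role": "assistant", "content": "<article text>"}]}
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # upload the training data
    training_file = client.files.create(
        file=open("karpf_articles.jsonl", "rb"),
        purpose="fine-tune",
    )

    # start the fine-tuning job on whatever fine-tunable base model you pick
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-mini-2024-07-18",
    )
    print(job.id)  # poll the job; when it finishes you chat with the resulting model

Once the job completes, you'd use the returned fine-tuned model name in ordinary chat completions and it would imitate the style of the training articles, for whatever that's worth.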


And it still won't produce the type of articles he produces. Because at the very least he is capable of writing new articles from something the LLM doesn't have: his brain.

Seriously. This is just the parrot thing again. The fact that AI proponents confuse the form of words with authorial intent is mindbending to me.

Wouldn't have confused Magritte, I think.


I’m not confused, I just disagree. I don’t think that authorial intent is something fundamentally different than text prediction.

When I’m writing out a comment, there’s no muse in my head singing the words to me. I have a model of who I am and what I believe - if I weren’t religious I might say I am that model - and I type things out by picking the words which that guy would say in response to the input I read.

(The model isn’t a transformer-based LLM, of course.)


It would cost around $5 and about 5 hours of work to prove you wrong...


You clearly, clearly do not understand what I am saying. But sure, waste your time and money making a parrot that, unlike the author it mimics, is incapable of introspection, reflection, intellectual evolution or simply changing its mind.

Words are words. Writers are writers. Writers are not words.

ETA: consider what would actually be necessary to prove me wrong. And when you hear back from David Karpf about his willingness to take part in that experiment, write a blog post about it and any results, post it to HN.

I am sure people here will happily suggest topics for the articles. I, for example, would love to hear what your hypothetical ChatKarpf has to say about influences from his childhood that David Karpf has never written about, or things he believed at age five that aren't true and how that affects his writing now.

Do you see what I mean? These aren't even particularly forced examples: writers draw on private stuff, internal thoughts, internal contradictions, all the time, consciously and unconsciously.


You articulate this position well. I've tried to convey something similar and it's tough to find the words to explain to people. I really like this phrase:

"Words are words. Writers are writers. Writers are not words."

I'm very bullish on AI/LLMs but I think we do need to have a better shared understanding of what they are and what they aren't. I think there's a lot of confusion around this.


> I really like this phrase:

Thank you. I don't think it really explains the distinction, of course. It just makes it clear there necessarily must be one, and it can't be wished away by discussions of larger training sets, more token context, or whatever. It never will be wished away.


> Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics. He predicts that we may have an all-powerful superintelligence “in a few thousand days.”

It seems fair to say Altman has completed his Musk transformation. Some might argue it's inevitable. And indeed Bill Gates' books in the 90s made a lot of wild promises. But nothing that egregious.


Both of them remind me of Elizabeth Holmes. She ran Theranos on a promise of lies long enough that they turned into fraud.

So far Musk has been pushing the lies out continually to try and prevent any possible exposure to fraud. Like "Getting to Mars will save humanity" or the latest "We will never reach Mars unless Trump is president again". Then again, self-driving cars are just around the corner, as stated in 2014 with a fraudulently staged video of their technology; they just need to work the bugs out.

Altman is making wild claims too, about how machine learning will slow and reverse climate change, while proving that the technology needs vastly more resources, especially in power consumption, just to be market viable for business and personal usage.

All three play off people's emotions to repress critical thinking. They are no different than the lying preachers, I can heal you with a touch of my hand, that use religion to gain power and wealth. The three above are just replacing religion with technology.


The difference between those is that Musk and Altman make wrong predictions about the future; Holmes with Theranos made wrong statements about the present.

One of them is illegal, the other isn't.


This is some really out-there thinking, and I think you need to do some basic fact-checking, because some of this just isn't true.


The field is extremely research-oriented. You can't stay on top with good engineering and incremental development and refinement alone.

Google just paid over $2.4 billion to get Noam Shazeer back in the company to work with Gemini AI. Google has the deepest pool of AI researchers. Microsoft and Facebook are not far behind.

OpenAI is losing researchers; they have maybe 1-2 years until they become a Microsoft subsidiary.


You make it sound like AI researchers are in short supply.


For almost all jobs X, "really competent X" are in exceedingly short supply.


That is certainly true. But the market began searching in earnest for "really competent X" in this field only about 2 years ago. Before that, it was a decidedly opt-in affair.


Are you saying the average AI researcher isn't good? I'm not arguing with you, I just can't parse what you are saying here.


I'm saying that both the volume and upper bound are increasing, not becoming constrained.


Good AI researchers and research teams are.

The whole LLM era was started with the 2017 "Attention Is All You Need" paper by Google Brain/Research, and nobody has done anything of the same magnitude since.

Noam Shazeer was one of the authors.


Does anyone take him at face value anyway?

The other issue is that AI's 'boundless prosperity' is a little like those proposals to bring an asteroid made of gold back to earth. 20m tons, worth $XX trillion at current prices, etc. The point is, the gold price would plummet, at the same time as the asteroid, or well before, and the promised gains would not materialize.

If AI could do everything, we would no longer be able (due to no-one having a job), let alone willing, to pay current prices for the work it would do, and so again, the promised financial gains would not materialize.

Of course in both cases, there could be actual societal benefits - abundant gold, and abundant AI, but they don't translate directly to 'prosperity' IMHO.


I think we should consider companies that create or own their own hardware, the ability to generate cheap electricity and the ability of neural networks to continuously learn.

I still "feel the AGI". I think Ben Goertzel's recent talk on ML Street Talk was quite grounded / too much hype clouds judgement.

In all honesty, once the hype dies down, even if AGI/ASI is a thing - we're still going to be heads down back to work as usual so why not enjoy the ride?

Covid was a great eye-opener: we dream big, but in reality people jump over each other for... toilet paper... Gotta love that Gaussian curve of IQ, right?


Around the time of the board coup and Sam's 7-trillion media tour, there were multiple, at the time somewhat credible, rumors of major breakthroughs at Open AI -- GPT5, Q*, and possibly another unnamed project with wow-factor. However, almost a year has passed, and OpenAI has only made incremental improvements public.

So my question is: What does the AI rumor mill say about that? Was all that just hype-building, or is OpenAI holding back some major trump card for when they become a for-profit entity?


All hype. Remember when the whole "oh we are so scared to release this model" thing happened back in the day, and it was worse than GPT-3?

All of this doing the rounds of foreign governments and acting like artificial general intelligence is just around the corner is what got him this fundraising round today. It's all just games.


Probably just waiting for the election to be over before releasing the next-gen models.


Q* turned out to be GPT-o1, which was objectively overhyped.


Nobody is taking Sam Altman at his word lol; these ideas about intelligence have been believed for a long time in the tech world, and the guy is just the best at monetizing them. People are pursuing this path because of a general conviction in these ideas themselves. I guess for people like Atlantic writers Sam Altman is the first time they've encountered them, but it really has nothing to do with Sam Altman.


100% this. He isn't even saying anything novel (I mean that in a good way).

On top of that, the advances in language models and physical-simulation-based models (for protein prediction and weather forecasting, as examples) have been so rapid and unexpected that even folks who were previously very skeptical of "AI" are believers - it ain't because Sam Altman is up there talking a lot. I went from AI skeptic to zealot in about 18 months, and I'm in good company.


People are taking Altman at his word.

He was literally invited to Congress to speak about AI safety. Sure, perhaps people with a longer memory of the tech world don't trust him. That's actually not a lot of people. A lot of people just aren't following tech (like my in-laws).


> Nobody is taking Sam Altman at his word lol

ITT: People taking Sam at his word.


Someone is buying enough of his bullshit to invest a few billion into OpenAI.

The problem is, when it pops, which it will, it'll fuck the economy.


The Gang Learns That Normal People Don't Take Sam Altman Seriously


The better reason to stop taking Altman at his word is on the subject of OpenAI building AGI “for the benefit of humanity”.

Now that he’s restructuring the company to be a normal for-profit corp, with a handsome equity award for him, we should assume the normal monopoly-grabbing that we see from the other tech giants.

If the dividend is simply going to the shareholder (and Altman personally) we should be much more skeptical about baking these APIs into the fabric of our society.

The article is asinine; of course a tech CEO is going to paint a picture of the BHAG, the outcome that we get if we hit a home run. That is their job, and the structure of a growth company, to swing for giant wins. Pay attention to what happens if they hit. A miss is boring; some VCs lose some money and nothing much changes.


It is weird that one of the most highly valued markets (OpenAI, Microsoft's investments, Nvidia GPUs, ...) is based on a stack that is available to anyone who can pay for the resources to train the models, and that, in my opinion, still has to deliver on the expectations that have been created around it.

Not saying it is a bubble but something seems imbalanced here.


>one of the most valued markets ... is based on a stack that is available to anyone

The sophisticated investors are not betting on future increasing valuations based on current LLMs or the next incremental iterations of it. That's a "static" perspective based on what outsiders currently see as a specific product or tech stack.

Instead, you have to believe in a "dynamic" landscape where OpenAI the organization of employees can build future groundbreaking models that are not LLMs but other AI architectures and products entirely. The so-called "moat" in this thinking would be the "OpenAI team to keep inventing new ideas beyond LLM". The moat is not the LLM itself.

Yes, if everyone focuses on LLMs, it does look like Meta's free Llama models will render OpenAI worthless. (E.g. the famous memo: https://www.google.com/search?q=We+have+no+Moat%2C+and+Neith...)

As an analogy, imagine that in the 1980s, Microsoft's IPO and valuation looks irrational since "writing programming code on the Intel x86 stack" is not a big secret. That stock analysis would then logically continue saying "Anybody can write x86 software such as Lotus, Borland, etc." But the lesson learned was that the moat was never the "Intel x86 stack"; the moat was really the whole Microsoft team.

That said, if OpenAI doesn't have any future amazing ideas, their valuation will crash.


I'd say that Microsoft's moat was the copyright law and ability to bully the hardware companies with exclusive distribution contracts.

Writing a new DOS (or Windows 3) from scratch is something a lot of developers could do.

They just couldn't do it legally.

And thus it was easy to bully Compaq and others into only distributing PCs with DOS/Windows installed. For some time you even had to pay the Microsoft fee when you wanted a PC with Linux installed.


If this was their most important moat for you, what is their moat now that it is gone?


They don't have one, which is why they're struggling to maintain market share. They run solely on brand loyalty, and slightly on competing with AWS and GCP to sell server hosting to people with more money than sense.


So you believe they became the most valuable company in the world (a few times recently though Apple is currently 13% ahead) with no moat? That's an interesting take. Is it possible you just don't know much about them?


The stock market is objectively quite stupid. I'm not sure what you're trying to imply with this.


I agree with most of what you said. The main problem for me is that I don't see LLMs as being as solid a foundation for creating a company as the technology progress of the '80s was.

I'm 42 though, and already feeling too old to understand the future lol


It's okay to say it. It's a bubble.

It was just the next in line to be inflated after crypto.


I wouldn't write it off as a bubble, since that usually implies little to no underlying worth. Even if no future technical progress is made, it has still taken a permanent and growing chunk of the use case for conventional web search, which is an $X00bn business.


A bubble doesn't necessarily imply no underlying worth. The dot-com bubble hit legendary proportions, and the same underlying technology (the Internet) now underpins the whole civilization. There is clearly something there, but a bubble has inflated the expectations beyond reason, and the deflation will not be kind on any player still left playing (in the sense of AI winter), not even the actually-valuable companies that found profitable niches.


I mean OpenAI is a bubble; if it pops, it's big enough to take the rest of tech with it.


Alternatively, it might make capital available to other things.

It's at least theoretically possible that all the liquidity and leverage in the top of the market could tire itself of chasing the next tulip mania.

For instance, $6 Billion could have gone into climate tech instead of ElizaX.

My problem with these dumb hype cycles is all the other stuff that gets starved in their wake.


> The technologies never quite work out like the Altmans of the world promise, but the stories keep regulators and regular people sidelined while the entrepreneurs, engineers, and investors build empires. (The Atlantic recently entered a corporate partnership with OpenAI.)

Hilarious.


I sympathize, but I feel like people are over-selling the "end of AI".

The progress that we've seen in the past two years has been completely insane compared to literally any other field. LLMs complete absolutely insane reasoning tasks, including math proofs at the level of a "mediocre grad student" (which is super impressive). For better or worse, image generation & now video generation are indistinguishable from the real thing a lot of the time.

I think that crazy business types and the media really overhyped the fuck out of AI so high that even with such strong progress, it's still not enough.


OpenAI can't be working on AGI because they have no arc for production robotics controllers.

AGI cannot exist in a box that you can control. We figured that out 20 years ago.

Could they start that? Sure, theoretically. However, they would have to massively pivot, and nobody at OAI is a robotics expert.


Of course no corporate executive can be taken at their word, unless that word is connected to a legally binding contract, and even then, the executive may try to break the terms of the contract, and may have political leverage over the court system which would bias the result of any effort to bring them to account.

This is not unusual - politicians cannot be taken at their word, government bureaucrats cannot be taken at their word, and corporate media propagandists cannot be taken at their word.

The fact that the vast majority of human beings will fabricate, dissemble, lie, scheme, manipulate etc. if they see a real personal advantage from doing so is the entire reason the whole field of legally binding contract law was developed.


In 2017 Daniel Gross, a YC partner at the time, recruited me to join the YC software team. Sam Altman was president of YC at this time.

During my interview with Jared Friedman, their CTO, I asked him what Sam was trying to create: the greatest investment firm of all time, surpassing Berkshire Hathaway, or the greatest tech company, surpassing Google? Without hesitation, Jared said Google. Sam wanted to surpass Google. (He did it with his other company, OpenAI, and not YC, but he did it nonetheless.)

This morning I tried Googling something and the results sucked compared to what ChatGPT gave me.

Google still creates a ton of value (YouTube, Gmail, etc), but he has surpassed Google in terms of cutting edge tech.


> Remember, these technologies already have a track record. The world can and should evaluate them, and the people building them, based on their results and their effects, not solely on their supposed potential.

But that's not how the market works.


The number of comparisons I see between some theoretical AGI and something more akin to science fiction, like Jane from the Ender saga or the talking head from That Hideous Strength, is I guess not surprising. But in both of those cases the only way to make the plot work was to make the AI literally an other-worldly being.

I am personally not sold on AGI being possible. We might be able to make some poor imitation of it, and maybe an LLM is the closest we get, but to me it smacks of “man attempts to create life in order to spite his creator.” I think the result of those kinds of efforts will end more like That Hideous Strength (in disaster).


"At a high enough level of abstraction, Altman's entire job is to keep us all fixated on an imagined AI future so we don't get too caught up in the underwhelming details of the present."

Old tactic.

The project that would eventually become Microsoft Corp. was founded on it. Gates told Ed Roberts, the inventor of the first personal computer, that he had a programming language for it. He had no such programming language.

Gates proceeded to espouse "vapourware" for decades. Arguably Microsoft and its disciples are still doing so today.

Will the tactic ever stop working? Who knows.

Focus on the future that no one can predict, not the present that anyone can describe.


Anyone who takes startup CEOs at their word has never worked in a startup. The plasticity of their ethics is legendary when there’s a prospect of increasing the revenues.


Sam Altman has to be a promoter and true believer. It is his job to do that, and he does have new tech that didn't exist before, and it is game changing.

The issue is more that the company is hemorrhaging talent, and doesn’t have a competitive moat.

But luckily this doesn't affect most of us; rather, it will only possibly harm his investors if it doesn't work out.

If he continues to have access to resources and can hire well and the core tech can progress to new heights, he will likely be okay.


>Understand AI for what it is, not what it might become

Is kind of a boring way of looking at things. I mean we have fairly good chatbots and image generators now but it's where the future is going that's the interesting bit.

Lumping AI in with dot coms and crypto seems a bit silly. It's a different category of thing.

(By the way Sam being shifty or not techy or not seems kind of incidental to it all.)


I'll never understand how so many smart people haven't realized that the biggest "hallucination" produced by AI was this Sam Altman.


He's been grifting since long before the current AI hype.

He went from a failed startup to president of YC to ultra-wealthy investor in the span of about a decade. That's sus.


I wonder how many governments and A-listers are investing heavily in the rapid development of commodity AI video that is indistinguishable from real video. Does it seem paranoid?

They would at least be more believable when they blast out claims that a certain video must be fake, especially given how absurd and shocking it is.


Journalists smell blood in the water. When times were looking better they gave him uncritical praise.


It's really sad to see all the personal attacks and cynicism that have no basis in reality. OpenAI has an amazing product and was first to market with something game-changing for billions of people. Calling him a fraud and a scammer is super ridiculous.


I do not know about any of that or your feelings of sadness.

I have skepticism of his predictions, and disregard for his exaggerations.

I have a ChatGPT subscription and build features on OpenAI technology.


Game changing for billions? Please give an example. I’m fine to be proven wrong but I don’t know what you could mean.


It will teach you anything, in any language, step by step, and answer every single question directly without your having to google stuff and find only semi-adjacent questions and answers. It has completely replaced the need to google things in many instances. I think this is pretty game changing, and it's available to billions of people.


Can't stop something you never started.

It's funny: we coach people not to ascribe human characteristics to LLMs...

But we seem equally capable of denying the very human characteristics in our would-be overlords.

Which warlord will we canonize next?


Our economy runs on market makers. AI, blockchain: whether they are what they seem in the long run is beside the point. Their sole purpose is to generate economic activity. Nobody really cares if they pan out.


Will human ego, greed, and selfishness lead to our destruction? (AI or not)

https://news.ycombinator.com/item?id=35364833


Hopefully, with as little collateral damage as possible to the remaining life on this planet.


I mean, in general, if you’re taking CEOs at their word, and particularly CEOs of tech companies at their word, you’re gonna have a bad time. Tech companies, and their CEOs, predict all manner of grandiose nonsense all the time. Very little of it comes to pass, but through the miracle of cognitive biases some people do end up filtering out the stuff that doesn’t happen and declaring them visionary.


I appreciate everything that OpenAI has done, the science of modeling and the expertise in productization.

But, but, but… their drama, or Altman’s drama is now too much for me, personally.

With a lot of reluctance I just stopped doing the $20/month subscription. The advanced voice mode is lots of fun to demo to people, and o1 models are cool, but I am fine just using multiple models for chat on Abacus.AI and Meta, an excellent service, and paid for APIs from Google, Mistral, Groq, and OpenAI (and of course local models).

I hope I don’t sound petty, but I just wanted to reduce their paid subscriber numbers by -1.


that ship sailed a long time ago



OpenAI's AGI is like Tesla's fully automated self-driving.

So close, yet so far. And, both help the respective CEOs in hyping the respective companies.


Distorting the old Chinese proverb, “The best time to stop taking Sam Altman at his word was the first time he opened his mouth. The second best time is now”. We’ve known he’s a scammer for a long time.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...



Technically true, but also doesn’t advance the conversation in any way. Eventually no one will exist or care about anything, but until then everyone will have to live with the decisions and apathy of those who came before.

https://www.newyorker.com/cartoon/a16995


I would love to read a solid exposé of Worldcoin, but I don't think I will get that from Buzzfeed or from The Atlantic. Both seem to be agenda-driven and hot-headed. I'd like a more impartial breakdown, à la AP-style news reporting.


> I don’t think I will get that from Buzzfeed

BuzzFeed News is not BuzzFeed. They were a serious news website¹ staffed by multiple investigative journalists, including a Pulitzer winner heading that division. They received plenty of awards and recognition.² It is indeed a shame they shared a name with BuzzFeed and that no doubt didn’t help, but it does not detract from their work.

> or from The Atlantic.

There was no Atlantic link. The other source was MIT Technology Review.

> I’d like a more impartial breakdown ala AP-style news reporting.

The Associated Press did report on it³, and the focus was on the privacy implications too. The other time they reported on it⁴ was to announce Spain banned it for privacy concerns.

¹ https://en.wikipedia.org/wiki/BuzzFeed_News

² https://en.wikipedia.org/wiki/BuzzFeed_News#Awards_and_recog...

³ https://apnews.com/article/worldcoin-cryptocurrency-sam-altm...

⁴ https://apnews.com/article/worldcoin-spain-eyeballs-privacy-...


Thank you. I’ll take a look. The Atlantic link I’m referring to is the actual HN article link of the OP, not of the GP.


One of the above links is from the MIT Technology Review ...


[flagged]


> The Atlantic doesn't have an ideological center,

Morning, boys. How’s the water?


Your comment immediately gave me a chuckle, so thank you for that.

I find it interesting that, among its readership, The Atlantic's prose can hide so well what an absolute shitrag of propaganda it really is.


The Atlantic is run by Jeffrey Goldberg and owned by Laurene Powell Jobs.

Of course these people have agendas, it’s not exactly a secret.


It has to be owned by someone; that does not make it agenda-driven. Who would you rather have own it so that it is not agenda-driven?


It's not a hypothetical; we aren't assuming spherical cows today. Those specific people have a long and public track record that answers this question.

It's not a secret, like I said. There are plenty of editors out there who have never decided to concoct ludicrous stories of Al Qaeda bases in the Amazon, to name one example.

https://www.newyorker.com/magazine/2002/10/28/in-the-party-o...


You still have not answered my question: who should own it for you to believe it is impartial?

Should I assume bias and partiality for your Day One conference simply by who you are and who is behind you? The Washington Post is owned by Jeff Bezos; your opinion on that?


My own involvement in media is not impartial and is not intended to be.

My opinion on your question is that Jeff Bezos purchased the Washington Post to advance his interests, and that the Washington Post indeed has an agenda which is broadly aligned with what’s commonly referred to as “the establishment.”

And absolutely nothing I’m saying is particularly insightful or controversial, it’s obvious at a glance.

As for your original question about who should own media to make it objective, I don’t have the answer to that one.

But I do know that there’s no person with a billion dollars who doesn’t have the agenda of preserving a social order consistent with them being able to continue to enjoy that billion dollars.


> although you won't find, like... fascism in there (sorry if you're disappointed)

I don't understand how HN can hit such highs sometimes, but also host nonsense like this.


He just needs a paltry trillion dollars to make this AI thing happen. Stop being so short sighted.


sama is the best match for today's LLMs because of the "scaling law", like Zuckerberg described. Everyone is burning cash to race to the end, but the billion-dollar question is: what is the end for transformer-based LLMs? Is there an end at all?


The end is super-human reasoning along with super-human intuition, based on humanity's knowledge.


Sure, and the end of biotech is perfect control of the human body from pre-birth until death, but innovation has many bottlenecks. I would be very surprised if the bottlenecks to LLM performance are compute and model architecture. My guess is it’s the data.


Are you sure you can get to that with transformers?

https://www.lesswrong.com/posts/SkcM4hwgH3AP6iqjs/can-you-ge...


And some Mongols, Romans, etc. probably said that continually expanding their borders would give them comparable heights in their day.


I keep thinking about Sam Altman’s March ’23 interview on Lex Fridman’s podcast—this was after GPT-4’s release and before he was ousted as CEO. Two things he said really stuck with me:

First, he mentioned wishing he was more into AI. While I appreciate the honesty, it was pretty off-putting. Here’s the CEO of a company building arguably the most consequential technology of our time, and he’s expressing apathy? That bugs me. Sure, having a dispassionate leader might have its advantages, but overall, his lack of enthusiasm left a bad taste in my mouth. Why IS he the CEO then?

Second, he talked about going on a “world tour” to meet ChatGPT users and get their feedback. He actually mentioned meeting them in pubs, etc. That just sounded like complete BS. It felt like politician-level insincerity—I highly doubt he’s spoken with any end-users in a meaningful way.

And one more thing: Altman being a well-known ‘prepper’ doesn’t sit well with me. No offense to preppers, but it gives me the impression he’s not entirely invested in civilization’s long-term prospects. Fine for a private citizen, but not exactly reassuring for the guy leading an organization that could accelerate its collapse.


Hi there!

I've done a huge amount of political organizing in my life, for common good - influencing governments to build tens of billions of dollars worth of electric rail infrastructure.

I'm also a big prepper. It's important to understand that stigmatizing prepping is very dangerous - specifically to those who reject it.

Whether it's a gas main break, a forest fire, an earthquake, or a sci-fi story, encouraging people to become resilient to disaster is incredibly beneficial for society as a whole, and very necessary for individuals. The vast, vast majority of people who do it are benefiting their entire community by doing so. Even, as much as I'm sure I'd dislike him if I met him, Sam Altman. Him being a prepper is good for us, at least indirectly, and possibly directly.

Just look at the stories in NC right now - people who were ready to clear their own roads, people taking in others because they have months of food.

Be careful not to ascribe values to behaviors like you're doing.


I agree that prepping itself is completely fine, and people should be prepared for natural disasters, civil unrest, or whatever scenarios they’re most comfortable with. Building resilience is beneficial for both individuals and communities, and I can see how it plays an important role, especially in situations like the ones you mentioned in NC.

My issue, though, is with someone like Sam Altman—a leader of an organization that could potentially accelerate the downfall of civilization—being so deeply invested in prepping. Altman isn’t just a regular guy preparing for emergencies; he’s an incredibly wealthy individual who has openly discussed stockpiling machine guns and setting up private land he can retreat to at a moment’s notice. It’s that level of preparation, combined with his position at the helm of one of the most consequential tech companies, that doesn’t sit well with me. It feels like he’s hedging against the very future his company might be shaping.


>It feels like he’s hedging against the very future his company might be shaping.

I don't think the prepping can really be taken as evidence of anything nefarious. Prepping simply means someone thinks there is a risk worth hedging against, even if they are strongly opposed to that outcome.

I think you see many of the rich prepping because they can, but it says little about their desire for catastrophic events.

Prepping for a hurricane doesn't mean you want it to destroy your neighborhood.


I don’t think ultra-wealthy preppers want catastrophic events to happen, and I agree that prepping in itself isn’t nefarious. My concern is more about the mindset it can encourage. When someone like Altman—who has significant influence over the future of technology—starts focusing on a solid “Plan B,” it might lead them to take more risks, consciously or unconsciously. Having what they believe is a safe fallback could make them more comfortable pushing boundaries in ways that could accelerate instability. It’s not about wanting disaster, but rather how preparing for one might subtly shift decision-making. For instance, “AI safety team.. who needs it? amirite?!”


Your concern is very clear and very appropriate. Ironically, I feel that those who seem not to understand what you're implying, even if they are open to prepping, wouldn't fare that well in an apocalypse: the basic requirement for survival, more than any prepping, is and will remain wisdom.


Every tech CEO (and honestly almost every wealthy person) is doing the same thing. You're mistaking correlation for causation.


I had a prepper phase too, and I'm as leftist and pro-human as they come. It's just fun to organize and plan, and has nothing to do with Alex Jones selling me on the frogs turning gay. And if there's ever a natural disaster (like I just lost water for a week due to a hurricane), it's nice to have things like 55 gallons of fresh water on standby.


There's a difference between resilience and being prepared for the unexpected on one hand (a go-bag (for sudden travel, not Mad Max), on- and off-site backups of data & physical documents, a couple weeks of food & water, an emergency expenses account separate from savings, plus physical currency) and, on the other hand, being under the delusion that hiding in a super-stocked bunker is any sort of acceptable answer to the possible collapse of civilization.

And Altman is definitely in the latter camp with, "But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to."[1]

That a guy who says the above, and also says that AI may be an existential threat to humanity, also runs the world's most prominent AI company is disturbing.

1. https://futurism.com/the-byte/openai-ceo-survivalist-prepper


Wouldn't you prefer someone fearful of the apocalypse and who takes it seriously running such a company over someone who is ambivalent or doesn't consider it a risk?


He doesn't seem fearful of the apocalypse. He seems to consider preparing for it to be a fun hobby.


That's not my take. I think prepping is a pretty strong indication of concern. That's not to say you can't enjoy preparing in addition to concern.


Sam blinks furiously when he's not telling the truth.


Sam Altman is the modern version of a snake oil salesman


At the risk of irking many, I'd like to add Musk to the list if someone hasn't already.

From robotics, neurology, transport to everything in between - not a word should be taken as is.


You all should just sit back and not pick at every word he says; just sit calmly and let him cook. And he's been cooking.


TSMC stopped way earlier.


Did I ever? :')


While I do not have much sympathy for Altman, the article is very low quality and contains zero analysis.

Yeah, maybe on the surface chatbots turned out to be chatbots. But you have to be a poor journalist to stop your investigation of the issue at that and conclude AI is no big deal. Nuance, anyone?


Someone took that grifter at his word ever? Haha! Wait you’re serious? Let me laugh even harder. Hahahaha


Apparently a whole lot of people think the guy who's running a <checks notes> for-profit eyeball-scanning cryptocurrency "for the benefit of humanity" is very serious when he says his non-profit (but also for-profit) AI company is also "for the benefit of humanity".


I feel like the time to stop taking Sam Altman at his word was probably when he was shilling for an eyeball-scanning cryptocurrency...

But apparently as a society we like handing multi-billion dollar investments to folks with a proven track record of (not actually shipping) complete bullshit.


He seems like any other tech “evangelical” to me.


> Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics.

Yes. We've been through this again and again. Technology does not follow potential. It follows incentive. (Also, “all of physics”? Wtf is he smoking?)

> It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.

I mean, everything good in life uses energy; that's not AI's fault per se. However, we should absolutely evaluate tech anchored in the present, not the future. Especially with something we understand as poorly as the emergent properties of AI. Even when there's an expectation of rapid changes, the present is a much better proxy than yet another sociopath with a god complex whose job is to be a hype man. Everyone's predictions are garbage. At least the present is real.


I mean, there is a reason the board tried their best to exorcise Sam Altman from the company. OpenAI could be the next Loopt.


While I agree anyone taking Sam Altman at his word is and always was a fool, this opinion piece by a journalism major at a journalism school giving his jaded view of technology is the tired trope obsessed with the fact that reality in the present is always reality in the present. The fact that I drive a car that's largely - if not entirely - autonomous in highly complex situations, is fueled by electricity alone, using a supercomputer to play music from an almost complete back catalog of everything released at my voice's command, on my way to my final cancer treatment for a cancer that ten years ago was almost always fatal, while above me constellations of satellites cooperate via lasers to provide global high-speed wireless internet being deployed by dozens upon dozens of private rocket launches as we prepare the final stretch towards interplanetary spaceships, over which computers can converse in true natural language with clear human voices and natural intonation... Well. Sorry, I don't have to listen to Sam Altman to see we live in a magical era of science fiction.

The most laughable part of the article is where they point at the fact that in the past TWO YEARS we haven't gone from "OMG we've achieved near-perfect NLP" to "Deep Thought, tell us the answer to life, the universe, and everything" as some sort of huge failure; that framing is patently absurd. If you took Altman at his word on that one, you probably also scanned your eyeball for fake money. The truth, though, is that the rate of change in the products his company is making is still breathtaking - the text-to-speech tech in the latest advanced voice release (recognizing it's not actually text-to-speech but something profoundly cooler, though that's lost on journalism majors teaching journalism majors like the author) puts to shame the last 30 years of TTS. This alone would have been enough to build a fairly significant enterprise selling IVR and other software.

When did we go from enthralled by the rate of progress to bored that it's not fast enough? That what we dream and what we achieve aren't always 1:1, but that's still amazing? I get that when we put down the devices and switch off the noise we are still bags of mostly water, our backs hurt, we aren't as popular as we wish we were, our hair is receding, maybe we need Invisalign but flossing that tooth every day is easier and cheaper, and all the other shit that makes life much less glamorous than they sold us in the dot-com boom, or nanotech, etc., as they call out in the article.

But the dot-com boom did succeed. When I started at early Netscape, no one used the internet. We spun the stories of the future this article bemoans to our advantage. And it was messier than the stories in the end. But now -everyone- uses the internet for everything. Nanotechnology permeates industry, science, tech, and our everyday life. But the thing about amazing tech that sounds so dazzling when it's new is -it blends into the background- if it truly is that amazingly useful. That's not a problem with the vision of the future. It's the fact that the present will never stop being the present and will never feel like some illusory gauzy vision you thought it might be. But you still use dot-coms (this journalism major's assessment of tech was published on a dot-com and we are responding on a dot-com) and still live in a world powered by nanotechnology, and the AI promised in TWO YEARS is still mind-boggling to anyone who is thinking clearly about what the goalposts for NLP and AI were five years ago.


The time to stop taking him seriously was when he started his "fear AI, give me the monopoly" campaign.


I’m just expecting a Microsoft acquisition and Altman exits and moves on to his next grift.


Better headline: It's too late to stop taking Sam Altman at his word.

See same with Elon Musk.

Money turns geniuses into smooth-brained egomaniacal idiots. See same with Steve Jobs.


They were never geniuses. They were just rich assholes propped up by other rich assholes.

"It's too late to stop conflating wealth with intelligence"


Regardless of personal qualities, for some reason these people have achieved great things for themselves and humanity, whereas countless competitors, including many other rich assholes, have not.


Luck and money.


A lot of people have money, and luck can only explain a single case.


What case? I don't pay for anything YC has ever created.


> achieved great things for themselves and humanity

For themselves? Absolutely.

For humanity? Perhaps we have wildly different ideas of what is good for humanity.


I'm not sure this is the same situation... SpaceX just began a mission to save stranded scientists in space. And Starlink has legitimate uses.


That doesn't make Musk any less of an egomaniacal idiot in my eyes.


Money removes social feedback. You end up surrounded with bobble heads telling you how genius you are… because they want your money. This is terrible for human psychology. It’s almost like a kind of solitary confinement — solitary in the sense that you are utterly deprived of meaningful rich human contact.


I think about this comparison sometimes https://x.com/Merman_Melville/status/1088527693757349888?lan...

"Being a billionaire must be insane. You can buy new teeth, new skin. All your chairs cost 20,000 dollars and weigh 2,000 pounds. Your life is just a series of your own preferences. In terms of cognitive impairment it's probably like being kicked in the head by a horse every day"

Solitary confinement is a great comparison. But also, not existing in the same reality as 99.99% of the population must really warp you too.


It's this, more than anything else, that I wish the general public would understand: those same bobble heads surround celebrities and eventually warp all sense of the cost of common everyday items. We often only see the end result of this when so-and-so celebrity declares bankruptcy and the masses cheer.

In reality they've been vampire sucked dry by close family / friends / salesmen for years and didn't know it.


Sam Altman is not a hapless victim at the mercy of the isolating effects of his financial success.

He was an opportunistic, amoral sociopath before he was rich, and the system he reaps advantage from strongly selects for hucksters of that particular ilk more than anything else.

He's just another Kalanick, Neumann, Holmes or Bankman-Fried.


How could they not? The word `wealth` or the idea of "money" is completely misleading here. It's a cancerous accumulation of resources and influence. They are completely detached from consequential reality. The human brain has not evolved to thrive under conditions of total, unconditional material abundance. People struggle to moderate sugar intake; imagine unlimited access to everything. And it's an inherently amoral existence, leading to the necessity of unhinged internal models of the world to justify continuation and reward. Their sense of self-efficacy is derailed in zero-g. Listen to them talk about fiction... They literally can't tell the price of a banana; how can they possibly get any meaningful story told? All that is left is the aesthetics and mechanical exterior of narration. How can there be love or friendship without normal people grounding you? You could make everyone you ever met during your lifetime a millionaire while effectively changing nothing for yourself. Nobody can be this rich and not lose touch with common shared reality.

Billionaires are shameful for the collective; they should be shameful to every one of us. They are fundamentally the most unfit for leadership. They are evidence of civilizational failure; the least we can do is not idolize them.


Keep in mind that this is going to be the default option for a lot of forums and social media for automated moderation. Reddit is already using it a lot, and now a lot of the front page is clearly feedback farming for OpenAI. What I'm getting at is that we're moving towards a future where only a certain type of dialog will be allowed on most social media, and Sam Altman and his sponsors get to decide what that looks like. Since he's of YComb stock, the zeitgeist here seems to be "let him cook", that is, until he doesn't share the wealth with his market-cornering pals here.


Are you seriously suggesting Altman is a genius? Or Musk for that matter?


Jobs was a sort of cracked genius and a very imperfect human who wanted to be a better human. Money didn't make him worse, or better. It didn't really change him at all on a personal level. It didn't even make him more confident, because he was always that. Look back through anecdotes about him in his life and he's just the same guy, all the time.

Even the stories I heard about him from one of his indirect reports back in the pre-iCEO "Apple is still fucked, NeXT is a distracted mess" era were just like stories told about him from the dawn of Apple and in the iPhone era.

Musk and Altman are opportunists. Musk appears to be a malignant narcissist. Neither seems in a rush to be a better human.


Same needs to happen to Elon Musk


tl;dr author complains that Sam's predictions of the future of AI are inflated (but doesn't offer any of his own), and complains that AI tools that surprised us last year look mundane now.

The article is written to appeal to people who want to feel clever casually slagging off and dismissing tech.

> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.

What a pathetic observation. Does the author not recall how bad chatbots were pre-LLMs?

What LLMs can do blows my mind daily. There might be some insufferable hype at the moment, but geez, the math and engineering behind LLMs is incredible, and it's not done yet - they're still improving from more compute alone, not even factoring in architectural discoveries and innovations!


> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.

This is such a ridiculous sentence.

GPT-4 now looks like any other chatbot because the technology advanced so the other chatbots are smarter now as well. Somehow the author is trying to twist this as a bad thing.


If everyone can catch up so easily, OAI has no moat and SamA is further full of shit asserting that they do.


Since it's paywalled, I assume we are discussing the title (as usual). It implies that there was a time when [we] took him at his word. Uh, ok, maybe. But what does it matter? "We" aren't the people at the VCs who fund it, I suppose? So, what does it matter if "we" take him at his word? Hell, even if it suddenly went public, it still wouldn't mean much whether we trust the guy or not, because we could buy shares for the same reason we buy crypto or TSLA shares.

As a matter of fact, I suspect the author of the article actually belongs to the gullible minority who ever took Altman at his word, and is now telling everyone what they already knew. But so what? What are we even discussing? Nobody is calling for people to delete their OpenAI (or, in fact, Anthropic, or whatever) accounts as long as we find them useful for something, I suppose. It just makes no difference at all whether that writer or his readers take Altman at his word; their opinions have no real effect on the situation. They are merely observers.


According to the book "The Sociopath Next Door", approximately 1 in 25 Americans is a "sociopath" who "does not feel shame, guilt, or remorse, and can do anything at all without what 'normal' people think of as an internal voice labeling things as 'right' or 'wrong'". It makes sense to me that sociopaths would be over-represented among C-level executives and "high performers" in every field.

https://www.betterworldbooks.com/product/detail/the-sociopat...


> Last week, CEO Sam Altman published an online manifesto titled “The Intelligence Age.” In it, he declares that the AI revolution is on the verge of unleashing boundless prosperity and radically improving human life.

/s


I generally don't take anyone other than Leon at his word. \s


Well, the good news is that all that stuff comes with an expiration date, after which we will know if this is our new destiny or yet another cloud of smoke.

This is a good reminder:

> Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution

In 2024, ChatGPT is a weird toy, my barber demands paper cash only (no bitcoin or credit cards or any of that phone nonsense, this is Silicon Valley), I have to stand in line at USPS and the DMV with mindless paper-shuffling human robots, marveling at the humiliating stupidity of manual jobs, and robotaxis are still almost here, just around the corner, as always. Let's check again in a "couple of thousand days", I guess!


I've said this before, at the root of all these technological promises lies a perpetual motion machine. They're all selling the reversal of thermodynamics.

Any system complex enough to be useful has to be embedded in an ever more complex system. The age of mobile phone internet rests on the shoulders of an immense and enormously complex supply chain.

LLMs are capturing low entropy from data online and distilling it for you while producing a shitton of entropy on the backend. All the water and energy dissipated at data centers, all the supply chains involved in building GPUs at the rate we are building. There will be no magical moment when it's gonna yield more low entropy than what we put in on the other side as training data, electricity and clean water.

When companies sell ideas like 'AGI' or 'self driving cars' they are essentially promising you can do away with the complexity surrounding a complex solution. They are promising they can deliver low entropy on a tap without paying for it in increased entropy elsewhere. It's physically impossible.

You want human intelligence to do work, you need to deal with all the complexities of psychology, economics and politics. You want complex machines to do autonomous work, you need an army of people behind it. What AGI promises is that you can replace the army of people with another, more complex machine. It's a big bald-faced lie. You can't do away with the complexity. Someone will have to handle it.


> It's physically impossible

Your brain is proof to the contrary. AGI means different things to everyone, but a human brain definitely counts as "general intelligence", and that, implemented in silicon, is enough to get basically all the things promised by AGI: if it's done at the 20 watts per brain that biology manages, then all of humanity can be simulated within the power envelope of the USA electrical grid… three times over.
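A rough back-of-envelope check of that figure, assuming ~8 billion people and an average US electrical load of roughly 460 GW (both round numbers supplied here, not taken from the comment above):

  8e9 brains x 20 W = 160 GW
  3 x 160 GW        = 480 GW, roughly the average US electrical load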


You're lazily mixing metaphors here. This is the problem in all such discussions: it often gets reduced to some combination of hand waving and hype training. "AGI" means different things to everyone, okay? Then it's a meaningless term. It's like saying, hey, with a quantum computer of enormous size we could simulate molecular interactions at a level impossible with current technology. I would love for us to be able to do that, but where is the evidence it is even possible?


There's no metaphor here, I meant literally doing those things.

I followed up the point about AGI meaning different things by giving a common and sufficient standard of reference.

Your brain is evidence that it's "even possible".


> Your brain is evidence that it's "even possible".

All your brain proves is that a universe can produce planetary ecosystems capable of supporting human civilizations made of very efficient brain carrying mammals.

It definitely doesn't prove that these mammals can create boxes capable of 'solving physics, poverty and global warming' if we just give Sam Altman enough electricity and chips. Or dollars, to that effect.


If that's what you meant, then I agree with you.

What's the quote? "If the human brain were so simple that we could understand it, we would be so simple that we couldn’t".

Even though it doesn't need to be a single human doing all of it, our brains are existence proofs of the physical possibility, not of our own understanding.


Like I said, our brain proves that the way to solve the problem of 'how to get autonomous systems capable of non-trivially complex behavior' is 'planetary ecosystems sustained by starlight, based on self-reproducing cellular nanobots'.

Companies selling ways around that apparatus to achieve intelligent behavior are selling you perpetual motion machines, or slavery. There have been zero 'AI' systems that don't depend on painstakingly collected and analyzed data beforehand.


> a human brain definitely counts as "general intelligence", that implemented in silicon is enough to get basically all the things promised by AGI: if that's done at the 20 watts per brain that biology manages, then all of humanity can be simulated within the power envelope of the USA electrical grid… three times over.

So far the only thing that has been proven is we can get low entropy from all the low entropy we've published on the internet. Will it get to a point where models can give us more low entropy than what is present in the training data? Categorically: no.


You are using "entropy" in a way I do not recognise.

Whatever you mean, our brains prove it's possible to have a system that uses 20 watts to demonstrate human-level intelligence.


Whatever you mean, it took 13 billion years and, as far as we can tell, the whole universe to create the 20-watt human intelligence brain. And it doesn't take 20 watts of energy to maintain humans; it takes a whole ecosystem capable of sustaining a human population. And for us to operate at our current level of information processing, it takes a whole planetary civilization. So no, you haven't proved anything by saying that a human brain consumes only 20 watts of energy. We spread an awful lot of entropy around that has to be kept away from our delicate biological and social systems.

You're positing a way to create human-like intelligence in a bottle; that's the same as speculating about the shape of a reality where we have FTL travel or teleportation or whatever else you fancy.

If we're talking about what current ML/AI can do, they can extract patterns from training data and then apply those patterns to other inputs. This can give us great automation, but it won't give us anything better than the training data, solve physics, global warming, or poverty, or give us human intelligence on a chip.

Whatever quantity Q of entropy is in the training data, the total output, all accounted for, will be more than Q. That's true for humans and machines. No possible shape of AGI will give us any output with less entropy than the combination of inputs had. The dream that a machine will solve all the problems that humanity can't hinges on negating that, which goes against thermodynamics.

As it stands, the planet cannot cope with all the entropy we're spreading around. It will eventually collapse civilization or the ecosystem, whichever buckles first, from the excess entropy. Because global warming, poverty, and ignorance are just entropy. Disorder. Things not being just so, as we need them to be.


Unless I'm mistaken, you've added these two paragraphs since my last comment:

> Whatever quantity Q of entropy in the training data, the total output will be more Q all accounted for. That's true for humans and machines. No shape of possible AGI will give us any output with less Q than the combination of inputs had. The dream that a machine will solve all problems that humanity can't hinges on negating that, which goes against thermodynamics.

Thermodynamics applies to closed systems, which the earth in isolation isn't.

This is why the source material for Shakespeare didn't already exist on Earth in the Cretaceous.

It's also why we've solved loads of problems we used to have, like bubonic plague and long distance communication.

> As it stands, the planet cannot cope with all the entropy we're spreading around.

We're many orders of magnitude away from entropic limits. Global warming is due to the impact on how much more sunlight keeps bouncing around inside the atmosphere, not the direct thermal effect of our power plants.

> It will eventually collapse civilization/the ecosystem, whatever buckles first, from the excess entropy. Because global warming, poverty, or ignorance is just entropy. Disorder. Things not being just so as we need them to be.

Entropy isn't everyday disorder, it's a specific relationship of microstates and macrostates, and you can't usefully infer things when you switch uses.

Poverty is much reduced compared to the historical condition. So is ignorance: we have to specialise these days because there's too much knowledge for any one human to learn.

Entropic collapse is indeed inevitable, but that inevitability is on the scale of 10^18 years or more, with only engineering challenges rather than novel scientific breakthroughs (the latter would plausibly increase that to 10^106 if someone can figure out how to use Hawking radiation, but I don't want to divert into why I think that's more than merely an engineering challenge).


> Thermodynamics applies to closed systems, which the earth in isolation isn't.

Wrong. You're probably conflating the second law of thermodynamics with the whole thing. Obviously thermodynamics applies to Earth; it was invented here.

> Entropy isn't everyday disorder, it's a specific relationship of microstates and macrostates, and you can't usefully infer things when you switch uses.

No, this is Boltzmann entropy. Entropy is absolutely a measure of disorder, and the fact that you cannot measure a specific value for large complex systems, because the calculating framework is too simplistic and only applies to very well-defined and artificial assemblies of quantities, does not mean that the concept and intuition do not apply.
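For reference, the two formal definitions being contrasted here, in their standard textbook forms (added for context, not quoted from either commenter):

  Boltzmann:  S = k_B ln W              (W = number of microstates consistent with the macrostate)
  Shannon:    H = -sum_i p_i log p_i    (p_i = probability of outcome i)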

Entropy is a measure of disorder or uncertainty. Maybe you disagree with my intuition about it.

But I'm not the only one:

- Shannon Entropy Fits Well Social Risk: Unemployment and Poverty in Large Cities https://ieeexplore.ieee.org/document/8905455

- The Interrelation Poverty-Unemployment from the Theory of Entropy in Modern Societies https://alicia.concytec.gob.pe/vufind/Record/AUTO_bbe82cfd37...


> Whatever you mean, it took 13 billion years and the whole universe as far as we can tell to create the 20 watt human intelligence brain.

It's grossly unreasonable to include the entire history of the universe, given that the Earth only formed about 4.5 billion years ago; and given that evolution wasn't even aiming for intelligence, even starting from our common ancestors being small rodents 65 million years ago wildly overstates the effort required — even the evolution of primates is too far back without intentional selective breeding.

> You're positing a way to create human intelligence-like in a bottle, that's the same as speculating about the shape of a reality where we have FTL travel or teleportation or whatever else you fancy.

FTL may well be impossible.

If you seriously think human intelligence is impossible, then you also think you don't exist: You, yourself, are a human like intelligence in a bottle. The bottle being your skull.

> This can give us great automation, but it won't solve physics, global warming, poverty or give us human intelligence in a chip.

AI has already been in widespread use in physics for a while now, well before the current zeitgeist of LLMs.

There's a Y Combinator startup: "Charge Robotics is building robots that automate the most labor-intensive parts of solar construction".

Poverty has many causes, some of which are already being reduced or resolved by existing systems — and that's been the case since one of the ancestor companies of IBM, Tabulating Machine Company, was doing punched cards for the US census.

As for human intelligence on a chip? Well, (1) it's been quite a long time since humans were capable of designing the circuits manually, given the feature size is now at the level where quantum mechanics must be accounted for or you get surprise tunnelling between gates; and (2) one of the things being automated is feature segmentation and labelling of neural micrographs, i.e. literally brain scanning: https://pubmed.ncbi.nlm.nih.gov/32619485/


I never said human intelligence is impossible.

All I am saying is that Sam Altman's promises (remember the original topic) hinge on breaking thermodynamics.

Humans evolved on a planet that was just so. The elements on Earth that make life possible couldn't have been created in a younger universe, so it took the full history of the universe to produce human intelligence. It also doesn't exist in a vacuum. Current human civilization is not a collection of 8 billion 20-watt boxes.

> As for human intelligence on a chip? Well, (1) it's been quite a long time since humans were capable of designing the circuits manually, given the feature size is now at the level where quantum mechanics must be accounted for or you get surprise tunnelling between gates; and (2) one of the things being automated is feature segmentation and labelling of neural micrographs, i.e. literally brain scanning: https://pubmed.ncbi.nlm.nih.gov/32619485/

I don't understand any of this so I won't comment.


> All I am saying is that Sam Altman's promises (remember the original topic) hinge on breaking thermodynamics

If it hinged on that, then you would actually be saying it's impossible.

> The elements on Earth that make life possible couldn't have been created in a younger universe

Those statements also apply to the silicon and doping agents used in chip manufacture. They tell you nothing of relevance: we're not making Carl Sagan's apple pie from scratch with AI, we're trying to get a thing like us.


Yes, I am saying that AGI is impossible.

AGI is either human intelligence behind the curtains or a pie in the sky.


> If it hinged on that, then you would actually be saying it's impossible

"impossible with the approach OpenAI is using" != "impossible"


Look, any sufficiently complex autonomous system necessitates a bigger, more complex one to maintain its functionality. The more complex it is, the harder it is to deal with the entropy that this system is removing from its target action area.

Eventually you need a whole universe capable of sustaining self-reproducing intelligent systems. The way to get intelligent, autonomous, and sufficiently complex systems to do anything non-trivial is life.

Anyone trying to sell you a box they say is capable of non-trivial, adaptive, complex behavior, autonomously, is selling you a perpetual motion machine: a pretense that thermodynamics can be suspended at will.


“robotaxis are still almost here, just around the corner, as always”

We have them in San Francisco now (and Los Angeles and Phoenix, and Austin soon.)


By "robotaxi" I think they meant loosely "personal self driving car," not an automated taxi service.

Waymo's overstated[1] success has let self-driving advocates do an especially pernicious bit of goalpost-shifting. I have been a self-driving skeptic since 2010, but if you had told me in 2010 that in 10-15 years we would have robotaxis closely overseen by remote operators who can fill in the gaps, I would have thought that was much more plausible than fully autonomous vehicles. And the human operators are truly critical, even more so than a skeptic like me assumed: https://www.nytimes.com/interactive/2024/09/03/technology/zo... (sadly the interactive is necessary here and archives don't work, this is a gift link)

I still think fully autonomous vehicles on standard roads are 50+ years out. The argument was always that ~95% of driving is addressable by deep learning, but the remaining ~5% involves difficult problem-solving that cannot be solved by data, because the data does not exist. It will require human oversight or an AI architecture capable of deterministic reasoning (not transformers), say at least at the level of a lizard. Since we have no clue how to make an AI as smart as a lizard, that 5% problem remains utterly intractable.

[1] I have complained for years that Waymo's statisticians are comparing their cars to all human drivers when they should be comparing it to lawful human drivers whose vehicles are well-maintained. Tesla FSD proves that self-driving companies will respond to consumer demand for vehicles that speed and run red lights.


While I broadly agree that AI metrics have a "lies, damn lies, and statistics" problem that makes it hard to even agree how competent they are, if someone says "robotaxi" then I have no reason to expect they mean anything more nor less than "a taxi with no human driver".

I would be shocked if we're really 50 years away from that level of AI. 50 years is a long time in computing — late 70s computers were still using punched tape:

https://commons.m.wikimedia.org/wiki/File:NSA_Punch_Verifica...


> if someone says "robotaxi" then I have no reason to expect they mean anything more nor less than "a taxi with no human driver".

You have three reasons:

1) reading the comment in good faith

2) understanding 'robotaxi' is not a precise technical term

3) safely assuming that most commenters here know about Waymo

There is no reason to choose the most pedantic and smarmily bad-faith reading of the comment.

As for "50 years" - I don't care about electrical engineering, I am talking about intelligence. In the 1970s we had neural networks as smart as nematodes. Today they are as smart as spiders. Maybe in 50 years they will be as smart as bees. I doubt any of our children will live to see a computer as smart as a rat.


> You have three reasons:

> 1) reading the comment in good faith

> 2) understanding 'robotaxi' is not a precise technical term

> 3) safely assuming that most commenters here know about Waymo

#1 is the main reason why I wouldn't read "robotaxi" as anything other than "taxi robot", closely followed by #2.

> As for "50 years" - I don't care about electrical engineering, I am talking about intelligence.

Neither was I, and you should take #1 as advice for yourself.

> In the 1970s we had neural networks as smart as nematodes. Today they are as smart as spiders. Maybe in 50 years they will be as smart as bees. I doubt any of our children will live to see a computer as smart as a rat.

You're either overestimating the ones in the 70s or underestimating the ones today. By parameter count, GPT-3 is already about as complex as a medium-sized rodent. If today's models aren't that smart (definitions of "intelligence" are surprisingly fluid from one person to the next), then you can't reasonably call the ones in the 70s as smart as a nematode either.
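(For rough scale, with the heavy caveat that treating one parameter as comparable to one synapse is itself contentious, and these are order-of-magnitude figures supplied here rather than the commenter's: GPT-3 has about 1.75e11 parameters, while commonly cited estimates put rodent brains somewhere in the 1e11 to 1e12 synapse range, so the raw counts are within roughly an order of magnitude of each other.)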


> I have complained for years that Waymo's statisticians are comparing their cars to all human drivers when they should be comparing it to lawful human drivers whose vehicles are well-maintained. Tesla FSD proves that self-driving companies will respond to consumer demand for vehicles that speed and run red lights.

Really? Waymo's statisticians are the ones you are complaining about?

Tesla's statisticians have been lying for years, as has Musk, citing "number of miles driven by FSD in the very small subset of conditions where it is available, and not turned off or unavailable because of where you are, the weather, or any other variable" versus "all drivers, all conditions, all locations, all times" to try to say FSD is safer.


Waymo is at the same kind of threshold that speech recognition crossed through decades of grinding effort before reaching its tipping point. I worked on speech recognition applications in the '80s, when the best of it was just barely usable for carefully crafted use cases. Now taxis can be automated well enough if you have the right sensor suite and an exquisitely mapped environment. Waymo can do this for about 0.05% of ride-hailing riders, if 28M Ubers per day and 100k Waymos per week are reasonably accurate. This is like when connected speech started working with a good mic in a quiet environment. Now automated multi-speaker transcription with background noise, crosstalk, and whatever mic you've got just works. Back in the day, some serious people were convinced speech recognition at that level would be impossible without solving AGI.
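Checking the arithmetic on that 0.05% figure, using only the numbers quoted above:

  100,000 Waymo rides/week / 7 ≈ 14,000 rides/day
  14,000 / 28,000,000 Uber rides/day ≈ 0.05%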


> barber demands paper cash only...I have to stand in line at USPS and DMV

Surely this is just a case of the future not being evenly distributed. All of these 'problems' are already solved and the solution is implemented somewhere, just not where you happen to be.


You're probably right. I'll wait until it gets better distributed. It's just that personally, it's been hard to reconcile the grandiose talk with what's actually around me, that's all.


The weird distribution was something I experienced when I last visited California in 2018 and couldn't use my card in most places, despite having been to Kenya a few years before that and seeing M-Pesa all over the place.


> robotaxis are still almost here, just around the corner, as always.

You can walk to where they're waiting for you.


I'm surprised nobody noticed the elephant in the room: the fact that ChatGPT has a very hard woke slant. That said, o1 has gotten a lot better, but it's not as uncensored and unbiased as GPT-3 was when it was first released. For a while GPT-4 was very clearly biased toward the left and U.S. Democrats in particular.

Now keep in mind that this is going to be the default option for a lot of forums and social media for automated moderation. Reddit is already using it a lot and now a lot of the front page is clearly feedback farming for OpenAI. What I'm getting at is we're moving towards a future where only a certain type of dialog will be allowed on most social media and Sam Altman and his sponsors get to decide what that looks like.


Any person that thinks “automating human thought” is good for humanity is evil.


I can't imagine taking The Atlantic seriously on anything. My word. You aren't actually supposed to read the endless ragebait.

Contrary to the Atlantic's almost always intentionally misleading framing, the "dot com boom" did in fact go on to print trillions later, and it is still printing them, after what was an ultimately marginal, if account-clearing, dip for many.

I say that as someone who would be deemed an AI pessimist by many.

But it's wildly early to declare anything to be "what it is" and only that, in terms of ultimate benefit. Just like it was, and is, wild to declare the dot com boom to be over.


> I can't imagine taking The Atlantic seriously on anything.

Agreed - I stopped taking The Atlantic seriously after their 2009 cover story, "Did Christianity Cause the Crash?"[1] To ignore CDOs, the Glass-Steagall repeal, the co-option of the ratings agencies, and the dissolution of lending standards, and instead blame the Great Recession on a few obnoxious megapastors, is to completely discard the magazine's credibility.

[1] https://www.theatlantic.com/magazine/archive/2009/12/did-chr...


Really, you can't see a connection between people promoting ever-greater supernatural rewards for being money- and prosperity-driven, stoking that fire right alongside rampant capitalism and reckless decisions about how to protect our financial and economic wellbeing? Especially when even some very, very bright people still believe in imaginary creatures in the sky and use that to shape and guide their decision process?

Maybe not "cause", but "contribute notably to".


Yes, really. It's okay to not know anything about how the global economy works or its history, each decade defining the next. But it isn't okay to twist that ignorance into a focus on a scapegoat of convenience, led by The Atlantic's wager on what you don't know, their hatreds, and their mission to distract.


You might argue that faith, in its various manifestations, is the root of all economic distortions and malaises. That being said, no, I don’t think that a subset of evangelical Christianity caused or meaningfully contributed to the 2009 crash, since there are abundant facts and analyses to the contrary. To believe otherwise would be, ironically, an act of faith.


This seems like a more general problem with journalistic practices. Journalists don't want to inject their own judgements into articles, which is admirable, and makes sense. So they quote people exactly. Quoting exactly means that bad actors can inject falsehoods into articles.

I don't have any suggestions on how to solve this. Everything I can think of has immediate large flaws.


>Journalists don't want to inject their own judgements into articles, which is admirable, and makes sense.

Is it even possible? Like, don't you know the political inclination of any website/journal you read? I feel like this search for "The Objective Truth" is just a chimera. I'd rather articles combine the pros and cons of everything they discuss, tbh.


There’s a difference between having natural human biases that you try to avoid when reporting (by sticking to the usual format: a context sentence stating where, when, and to whom something was said, then quote, appositive identifying the speaker, quote) and writing “this guy is full of crap” or “you really need to believe this person” while cherry-picking statements.

You can easily find examples of each. Both the NYT and Slate are considered left-leaning, and at the same time have been the professional stomping grounds of right-leaning writers who started their own, not-left-leaning media companies. Everyone has a bias, and they don’t have to work somewhere with that same bias, especially if you just stick to the paper’s style guide. Given the same substance, the two outlets present the same topic very differently. Sometimes I appreciate the Slate format for the author’s candor and view being injected (like being pointed about Malcolm Gladwell). Sometimes I just want to know the facts as clearly stated as possible (I don’t care if the author doesn’t believe in climate change, tell me what happened when North Carolina flooded).


Yes, you’ve rediscovered the curriculum of a journalism 101 class.


So are you saying there are a lot of journalists that never studied, or did they just never pay attention in class?

Because articles that actually do that are few and far between.


Or more likely, the journalists don't know any better and believe the AI hype sold to them and promote it of their own accord.


It’s possible to insert a few sentences factually accounting for a person’s character without inserting a subjective judgement of character.

For example you could say:

Joey JoeJoe, billionaire CEO, who notably said horrible things, was convicted of some crimes, and ate three babies, was quoted as saying “machine learning is just so awesome”.

There, you didn’t inject a judgement. You accurately quoted the subject. You gave the reader enough contextual information about the person so they know how much to trust or not-trust the quote.


This does often happen (depending on the leaning of the newspaper, it's omitted if the figure is someone they support, and emphasized otherwise).

A major problem, though, is headlines don't and can't carry this context. And those are the things most people read.

The best you'll get is "Joey JoeJoe says machine learning is just so awesome" or at best "Joey JoeJoe comments on ML. The 3rd word will blow you away!".


> who notably said horrible things

How do you objectively decide which statements are horrible and which aren't?

The other stuff you listed are facts, but this one would be subjective. That isn't just providing contextual information, but adding personal bias into the reporting.


"Nelson Mandela, convicted of some crimes, calls for World Peace"


Extreme examples are easy. But you can pick and choose which facts to present to the reader to affect the judgement they’re making. It would be trivially easy to paint Bill Gates as either a legendary humanitarian or a ruthless capitalist egotist to someone that’s never heard of him.


A journalist's job is to journal something, exactly like how NTFS keeps a journal of what happens.

A journalist doing anything other than journaling is not a journalist.

So people getting quoted verbatim is perfectly fine. If the quoted turns out to be a liar, that's just part of the journal.


I don’t think that’s right. First off, we don’t generally define jobs based on the closest computer analogy (we would be unhappy if the loggers returned with a list of things that happened in the woods, rather than a bunch of wood).

The journalist’s job is to describe what actually is happening, and to provide enough context for readers to understand it. Some bias will inevitably creep in, because they can’t possibly describe every event that has ever happened to their subject. But for example if they are interviewing somebody who usually lies, it would be more accurate to at least include a small note about that.


>The journalist’s job is to describe what actually is happening, and to provide enough context for readers to understand it.

The former is a journalist's job; the latter is the reader's concern, not the journalist's.

One of the reasons I consider journalism a cancer upon humanity is that journalists can't just write down "it is 35 degrees Celsius today at 2pm", but rather "you won't believe how hot it is".

Just journal down what the hell happens literally and plainly, we as readers can and should figure out the rest. NTFS doesn't interject opinions and clickbait into its journal, and neither should proper journalists.


The journalist is making a product for the reader in the best case, their job is to help the reader. The second example you mention is a typical example of clickbait journalism, where the journalist has betrayed the reader and is trying to steal their attention, because they actually serve advertisers.

But the first example is not very useful either. That journalist could be replaced by a fully automated thermometer. Or weather stations with an API. Context is useful: “It is 35 degrees Celsius, and we’re predicting that it will stay sunny all day” will help you plan your day. “It is 35 degrees Celsius today, finishing off an unseasonably warm September” could provide a little info about the overall trend in the weather this year.

I don’t see any particular reason that journalists should follow your definition, which you seem to have just… made up?


>your definition, which you seem to have just… made up?

See: https://www.merriam-webster.com/dictionary/journal

Specifically noun, senses 2B through 2F.

I expect journalists to record journals and nothing more nor nothing less, not editorials or opinion pieces which are written by authors or columnists or whatever.


Dictionary similarity is not how people get their job descriptions. If you want to just pick a similar word from the dictionary, why are journalists sharing this stuff? Journals are typically private, after all. If someone read your journal, you might be annoyed, right?

Or, from your definition, apparently:

> the part of a rotating shaft, axle, roll, or spindle that turns in a bearing

I don’t think these journalists rotate much at all!

A better definition is one of… journalism.

https://www.britannica.com/topic/journalism

journalism, the collection, preparation, and distribution of news and related commentary and feature materials through such print and electronic media as […]

That said, I don’t think an argument from definition is all that good anyway. These definitions are descriptive, not prescriptive. Journalism is a profession, they do what they do for the public good. If you think that it would be better for the field of journalism to produce a contextless log of events, defend that idea in and of itself, rather than leaning on some definition.


Why favor a definition of “journalist” that approximately nobody else uses? It seems like it would just make it hard to communicate.


I think you might need a chill pill, I've never met a single journalist or editor who would let "you won't believe how hot it is" pass in more than a tweet.


As a counterexample, I have deep respect for weather forecasters because they are professionally and legally bound to state nothing but the scientific journal at hand.

"Typhoon 14 located 500km south of Tokyo, Japan with a pressure of 960hPa and moving north-northeast at a speed of 30km/h is expected to traverse so-and-so estimated course of travel at 6pm tomorrow."

"Let's go over to Arizona. It's currently 105F in Tuscon, 102F in Yuma, ..."

Brutally to the point, the readers are left to process that information as appropriate.

Journalists do not do this, and they should if they claim to be journalists.


>they are professionally and legally bound to state nothing but the scientific journal at hand

In America, just about every meteorologist editorializes the weather to a degree. There's nothing scientific about telling me "it's a great night for baseball" (great for the fans? Pitchers? Hitters?) or "don't wash your car just yet" but I will never stop hearing those. I don't, and the public doesn't seem to think that infringes on journalistic standards, because the information is still presented. Maybe this is different than what you mean -- if you're talking about a situation where journalists intentionally created the full context and pushed the information to the side, obviously that is undesirable.

I will add that weather as a "news product" actually gains quite a fair bit from presenter opinion, and news is a product above all.


You're describing a microphone / voice recorder, not a journalist.

There are of course places you can go to get raw weather data, but a journalist might put it in context of what else is going on, interview farmers or climatologists about the situation, etc.

There are lots of kinds of journalism, but maybe most important is investigative journalism. They are literally doing an investigation - reading source material, actively seeking out the right people to interview and asking them right questions, following the leads to more information.


That's... not what journalism means. I don’t know where you got that definition, but I can’t find anything similar. Processing and displaying information is a huge part of journalism - ie assessing what is truth or fiction and communicating each as such. Wikipedia:

> A journalist is a person who gathers information in the form of text, audio or pictures, processes it into a newsworthy form and disseminates it to the public. This is called journalism.


You’re injecting that “ie” — Wikipedia doesn’t say it as such.

They’re describing collating and you’re describing evaluating.


And to add, evaluating is the responsibility of the reader.

If you're also tasking "journalists" with evaluating for you, you aren't a reader and they aren't journalists. You're just a dumb terminal getting programs (others' opinions) installed, and they are influencers.


> A journalist's job is to journal something, exactly like how NTFS keeps a journal of what happens.

Your choice of metaphor points out problems with your definition. Avid Linux users will be immediately biased against what you wrote, true though it may be, because you assumed that NTFS is the predominant, or even a good, example of a journaling file system.


That sounds more like stenography than anything else


You’re describing a PR representative. Simply the decision of what to cover is inherently selective and driven by an individual’s and a culture’s priorities.


Sounds like you would be a bad journalist.


So they should report anything anyone says?

Because selecting whom to report on is editorializing by your standards. It creates a bias and gives importance to some over others.

Woke Derangement Syndrome has gotten the best of us.


The Atlantic is really going out of its way to hate on Altman. That publication has always been a bit of a whack job of an outfit.


> That publication has always been a bit of a whack job of an outfit

This is a bizarre take about a 167-year-old, continuously published magazine.


There's quite a number of posts here that seem determined to attack the integrity of The Atlantic... hm. Everything from "their writers are scared that they'll lose their jobs" to complaints about twenty year old articles not panning out correctly.


The techbro whining about as staid a publication as the Atlantic is truly hilarious. They published some of the first pieces in support of abolition, and now you're upset that they're swinging at the richest grifter in Silicon Valley. Get a grip.


It also famously published As We May Think.


He brought us AI that is about to change almost everything, be grateful for the magic intelligence in the sky



