Three Observations (samaltman.com)
181 points by davidbarker 1 day ago | 222 comments





> The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. ... Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

Moore waited at least five years [1] before deriving his law. On top of that, I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.

[1] http://cva.stanford.edu/classes/cs99s/papers/moore-crammingm...


> Moore waited at least five years [1] before deriving his law.

OpenAI has been around since 2015. Even if we give them four years to ramp up, that's still five years worth of data. If you're referring to the example he gave of token cost, that could just be him pulling two points off his data set to serve as an example. I don't know that's the case, of course, but I don't see anything in his text that contradicts the point.

> I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.

How about Kurzweil's plot [1]?

[1] https://scx2.b-cdn.net/gfx/news/hires/2011/kurzweilfig.1.jpg


That Kurzweil plot is a bit ancient, up to 1998 or something

There's a better one that goes to 2023 https://www.bvp.com/assets/uploads/2024/03/Price_Computation...

The rate of progress is more like the Moore's law 18 month doubling. That's compute per dollar rather than Moore's transistor density.

I think 10x per year is a bit questionable - it's way out of line with the Kurzweil trend.
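A quick back-of-envelope to put the various rates on a common per-year footing (my own arithmetic, using the figures quoted above and in the article; a sketch, not anyone's official numbers):

    # Convert each "Nx every M months" claim into an equivalent per-12-month factor.
    def annualized(factor, months):
        return factor ** (12 / months)

    print(annualized(2, 18))    # Moore's law, 2x / 18 months        -> ~1.6x per year
    print(annualized(2, 27.6))  # doubling every ~2.3 years (plot)   -> ~1.35x per year
    print(annualized(10, 12))   # Altman's stated rule, 10x / year   -> 10x per year
    print(annualized(150, 18))  # his GPT-4 -> 4o example, ~150x in ~18 months -> ~28x per year

If anything, his own GPT-4 to 4o example (~28x per year annualized) outruns his stated 10x rule, while both are far above either hardware trend.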


Yeah, price performance definitely seems to be the more important metric here. Anyone can get more compute by building a bigger and more expensive chip, but per-dollar metrics can't be gamed so easily. Though even in that plot, it's only doubled every ~2.3 years since 2008.

(I'd be especially interested in amortized price performance, i.e., the number of useful computations from a system over its lifetime, divided by the total cost to build, maintain, and operate it. That's going to be the ultimate constraint on what you can do with a given amount of funding.)


The "10x every 12 months" is pure salesman from sama. An engineer wouldn't do these extrapolations from little data in good faith.

He is talking about cost. Are you saying the price didn't go down 10x over the last 12 months? How much data is too little?

1 year of data is indeed too little if you are trying to forecast one year ahead. Also the pricing is set by OpenAI. We don't know their actual costs decreased by that factor. Only that they cut their prices.

We don't know OpenAI's actual cost, but presumably Sam does know.

I'm sure he does, but why would he ever tell the truth about it except by coincidence?

Cost and price are two different things. Sam said cost, but really he meant price, because even by his own admission, ChatGPT's services are running at a loss.

https://twitter.com/sama/status/1876104315296968813?lang=en


The retail price, or the actual cost to deliver? Those are not the same thing. Cost to deliver could actually mean something. Retail pricing is approximately meaningless.

In context, retail pricing is very meaningful. The next sentence is "lower prices lead to much more use". That is, price elasticity of demand is large, and here price is retail price.

I know this is subjective but he is comparing GPT-4 to 4o. The new model definitely felt lighter and faster, so probably cheaper for them to maintain, but at the same time very often gave worse answers than GPT-4.

Compute costs are pretty much the same for high-VRAM cards?

Yeah that was my first thought, don’t sully the name of Gordon Moore with this.

This sounds more like an insight into how things are working at OpenAI than anything else. And I'm not sure if DeepSeek and others are going to follow his nice rules.

More generally, transistors are a technical phenomenon…they are either smaller (and work) or don’t. The thing I really don’t feel enough folks appreciate about AGI is it’s a social phenomenon - not in the making of it but in the pragmatic reality of it.

To a sufficient number of folks the current version is AGI; I see students every day trust it more than themselves. To bosses it also might be: if it's more intelligent than your average employee, then that's sufficiently general intelligence to replace them. So far, I've really tried, but beyond generating the most basic outline I have yet to find a model that helps with my work, so it's not intelligent for me.

I'm aware of the benchmarks but they don't matter outside of places like HN. Intelligence is and, likely, always will be social before it is technical, and that makes these laws... not useful?


>Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.

I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.

See the "OpenAI has a history of broken promises" section of this webpage: https://www.safetyabandoned.org/

In my view, state AGs should not allow them to complete their transition to a for-profit.


On what grounds could state AGs stop the move?

> I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.

I don't see what "our" trust has to do with anything. Perhaps you're an investor in OpenAI and your trust matters to OpenAI and its plans? But for the rest of us, our trust doesn't matter. It would be like me saying, "I don't see why we should trust Saudi Aramco."


> It would be like me saying, "I don't see why we should trust Saudi Aramco."

It's a completely fair response to say that if the CEO of Saudi Aramco performatively pens an article on how to mitigate the effects of global warming, while also profiting from it, and taking no tangible action to fix the problem.


> It's a completely fair response

My question, rephrased, is "so what"? What is my or our trust worth? What does us claiming we no longer trust Saudi Aramco achieve unless we are investors or perhaps other significant stakeholders?


> "so what"?

I totally understand your disenchantment, but if you feel that the mere opinions of the plebs are inconsequential^ and hence pointless, why participate in a public forum at all?

^Demonstrably not true if you look at the history of popular movements that garnered real and durable change, all of which gathered momentum from the disgruntled mumblings of the plebs


> but if you feel that the mere opinions of the plebs are inconsequential^ and hence pointless, why participate in a public forum at all?

That gets to the heart of the matter, actually. Personally I participate in order to get new information and learn new ideas. But yeah, being human and flawed, I do end up giving opinions and I notice most people just want to talk about opinions.

But I digress. My question was specifically about the value of saying "I don't trust OpenAI".


Perhaps it was a warning to the naive who might take the article at face value to reconsider? What is obvious to you might not be to another. I'd say a sizable number of viewers of this forum are either on the fence or view OpenAI favorably.

And also back to part two of what I said, there's network effects to grumbling. I'd also add, there's a chilling effect to apathy.


Opinions are information?

> Opinions are information?

Not really. That's why there's a saying, "Opinions are like assholes, everybody has one."

Information has to correspond to reality, the only arbiter of truth. When I give you my opinion I can say pretty much anything, usually they correspond to feelings e.g. "I think if you ask her out she'll say yes. I think this because you're my friend and I like you so surely she will."

But I think you already know that opinions are not information and perhaps you're asking rhetorically and hopefully not trolling me?


They are, at minimum, information about a particular person's values and perspective. Collectively, those individual opinions are what shape elections and foment uprisings.

Ask Musk how Tesla sales in EU are doing once enough people started distrusting him directly as a person.

Economic votes are based partly on trust, and with enough swinging in the distrust direction, it would have a direct repercussion on OpenAI's financials.


> Ask Musk how Tesla sales in EU are doing once enough people started distrusting him directly as a person.

Have you looked at $TSLA recently?


Sales decreasing, subsidies getting eliminated, PE ratio unreasonable... All set to go to the moon ;)

This time it might be fake tho.


Nonprofits make a social contract, purporting to operate for the public good, not profit.

Trust that their operations are indeed benefiting the public and they are acting truthfully is important for making that social contract work.

Shady companies doing shady things and keeping shady records doesn't incentivize any type of market participants -- investors, consumers, philanthropists.


> Nonprofits make a social contract, purporting to operate for the public good, not profit.

This is obvious (though I disagree that there is a social contract, and if there is, it's worth the paper it's printed on) and everybody is aware what a nonprofit is. But your reply still doesn't answer my question. Another way of asking it is: how many other non-profits have you audited for trustworthiness before this conversation? What was the impact of your audit?

Or is saying "we can no longer trust Sam Altman" just us twiddling our thumbs so we can signal our virtue to others or comfort ourselves in our own powerlessness? In less than a decade he'll have an army of humanoid sentient robots and probably be the wealthiest person on the planet, and we'll still be yelling "we can no longer trust him"?


That you disagree there is a social contract is unsurprising. You strike me as the type to be unaware of social norms.

I've evaluated the veracity and sincerity of many nonprofits and for profit corporations alike.

I have no idea what you're on about regarding sentient robots.


> You strike me as the type to be unaware of social norms

You have a weird way of talking to strangers. But, you know what they say about assumptions.

> I have no idea what you're on about regarding sentient robots.

So you're ignorant about both the state and purpose of OpenAI's research as well as the state of the art in robotics. So why am I even talking to an ignorant person? smh.

Thanks for your effort though.


You quoted something I didn't write?

Regarding the robots and ignorance. You honestly sound a bit unhinged. You ought to touch grass.


> You honestly sound a bit unhinged.

smh. Is that the best you've got or do you have a useful answer to my initial question?

Do you always walk around insulting strangers? It seems like you need the grass you love to recommend so much. It will help you more than anyone else.


What initial question? It seems like you are confusing threads (again).

Is that a pun about cannabis?

I just don't think Sam Altman is gonna be the guy to command a droid army. I also don't think they'll look humanoid, and I think us saying he is a dipshit in public helps undermine his efforts to waste vital resources pursuing a dystopia that he may or may not want and almost certainly won't meaningfully achieve.

Maybe we're just operating on different assumptions. And maybe we have different goals. Perhaps I'm just replying to weigh down the conversation and thread, dilute Altman's profiteering propaganda.


2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

First, if the cost is coming down so fast, why the need for "exponentially increasing investment"? One could make the same exponential growth claim for, say, the electric power industry, which had a growth period around a century ago and eventually stabilized near 5% of GDP. The "tech sector" in total is around 9% of US GDP, and relatively stable.

Second, only about half the people with college degrees in the US have jobs that need college degrees. The demand for educated people is finite, as is painfully obvious to those paying off college loans.

This screed comes across as a desperate attempt to justify OpenAI's bloated valuation.


First, it requires exponential investment because

> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

Incremental improvements in intelligence require exponentially more resources. Only once that step is achieved can costs be reduced.

Second, intelligence is not the same as education. Education is specialized, intelligence is general. Every modern convenience around you is the result of intelligence.


> The intelligence of an AI model roughly equals the log of the resources used to train and run it.

Didn't DeepSeek disprove this? They trained a roughly equal model on an order of magnitude less compute.


No, advances in efficiency were always expected (and indeed required because of this 'law')

The principle is that if DeepSeek had spent 10 or 100 times as much as they did, their model would have been a few times better.

This rule is intended to be applied on top of all of the other advances.


> The principle is that if DeepSeek had spent 10 or 100 times as much as they did, their model would have been a few times better.

If this is the case then OpenAI must have a model that is a few times better. Where is it?


"cost to use" != "cost to train"

Elsewhere he says that model intelligence is determined by the log of the resources used to train it, and this relationship has been constant for many orders of magnitude.

The implication is that it takes exponentially increasing investment to achieve a linear increase in intelligence.

Exponentially increasing costs sound like a bad thing.

But then he says the linear increase in intelligence generates exponentially increasing economic benefits.

Those exponentially increasing benefits sound like they might justify the exponentially increasing costs.
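A toy sketch of that trade-off, taking the claimed scaling literally (the constants are invented for illustration; nothing here is OpenAI's actual data):

    # Assumed toy scaling: intelligence ~ log(training resources) (point 1),
    # economic value ~ exponential in intelligence (point 3, read conservatively).
    # Each extra "unit" of intelligence then multiplies cost by COST_BASE and
    # value by VALUE_BASE; the bet only pays off if VALUE_BASE > COST_BASE.
    COST_BASE = 10   # assumption: 10x resources per extra unit of intelligence
    VALUE_BASE = 20  # assumption: 20x value per extra unit of intelligence

    for level in range(1, 6):
        cost = COST_BASE ** level
        value = VALUE_BASE ** level
        print(f"intelligence {level}: cost ~{cost:,}, value ~{value:,}")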


Ok, but according to this, linear investments generate linear increase in economic benefits.

So why the need to go exponential?


Because increasing investments and profits linearly would mean sub-linear growth in intelligence.

Since Sam's only shot at immortality is reaching ASI in the next few years, sub-linear isn't fast enough.

The purpose of ASI isn't to generate economic benefits for everyone else.

The purpose of everyone else's economic benefits is to generate the ASI so Sam can live forever.


> Elsewhere he says that model intelligence is determined by the log of the resources used to train it, and this relationship has been constant for many orders of magnitude.

If this relationship is constant then you must have a way to quantify "intelligence" in such a way that it can be compared like this. Care to share it?


The costs are coming down fast because of the investment and would not happen independently of the investment. The same reasoning behind Moore's law can be generalized as the learning/experience curve. You could make the argument that the demand for AI has a higher growth rate than the learning rate, e.g., if AI becomes 10x more cost-efficient and demand grows 100x as a result, you would also need exponential growth in inference investment.

You could also make an argument that the bulk of college degrees are not economically useful. The demand for certain kinds of education and knowledge is certainly finite - but the demand ceiling for general intelligence seems much much higher, and appears to be limited only by the cost of hiring ever smarter people.

We've just gotten through a big second generation of available models, but at least for the projects I'm on, I still feel just as far away as ever from being able to trust responses. I spent a few hours this weekend working on a side project that involved setting up a graphql server with a few basic query resolvers and field resolvers, and my experience with 4o, R1, and o3-mini-high was akin to arguing with too-confident junior engineers who were confused, without realizing it, about what they thought they knew. And this was basic stuff about simple graphql resolvers. I did have my first experience of o3-mini-high (finally, after much arguing) being able to answer something that R1 couldn't, though.

It's weird, because it's still wildly useful and my ideas for side projects are definitely more expansive than they used to be. And yet, I'm really far from having any fear of replacement. Almost none of the answers I'm getting are truly nailing the experience of teaching me something new, while also having perfect accuracy. (I'm on ChatGPT+, not pro.)


The other day o1 hallucinated on a question of some Salesforce object properties that are clearly laid out in the docs. It took me a while to disentangle it because it was very persuasive. I concur with you though, I still find the tech tremendously useful.

Same. I think the tools have gotten a little better with agent mode and all that, but it still doesn't get anything even slightly complicated for an API it hasn't seen before. Like it hasn't gotten more intelligent since maybe GPT-3.5.

I quite like them as somebody who has a huge problem with procrastination at the start of a task. I immediately ask them to write something, criticize it, and write it myself, because whatever they come up with is so damn stupid.


Have you tried putting the API in context? Gemini has 2M context, whereas the original 3.5 had like 4K and wasn't great with it.

While some large APIs may not fit, not many that it hasn't seen before fit that bill. For updated large APIs you can put in the changelog, but it's more work to gather all the detailed API changes and updated documentation on your own.


Even though I have a reasonable intuition and understanding of how a LLM works, I am still awe-struck each and every time I use one. The fact that I have a junior developer at my convenience is a huge efficiency gain. I’ve been able to automate the rudimentary elements and focus my time on the stuff that counts.

At first AI models/systems/agents help employees be more productive.

But I can't imagine a future where this doesn't lead to mass layoffs, or hiring freezes because these systems can replace tens or hundreds of employees, the end result being more and more unemployed people.

Sure, there's been the industrial revolution and the argument usually is: some people will lose their jobs but many other jobs will be created. I'm not sure this argument is going to hold this time given the magnitude of the change.

Is there any serious study of the impact of AI on society and employment, and most importantly, is there any solution to this problem?


In terms of solutions: UBI is frequently proposed. Depending on the degree to which AI and automation obsolete human labor and increase output, UBI could become arbitrarily redistributive. Increased output would keep inflation at bay, and the reduced incentive to work would be mitigated by the fact that there almost wouldn't need to be any incentives remaining for wealth creation.

Such an AGI scenario might also raise some old questions about the degree to which wealth disparity-as-an-incentive-for-success remains useful and justifiable. One wonders how Altman would receive this proposal.


I wonder what will happen to millions of people with $3k+/mo mortgages when they’re all unemployed and UBI is only $2k/mo.

They'll probably go bankrupt and have to move to flyover country. But if you've got a steady income and don't need to work then living in flyover country isn't so bad.

Sounds like the mortgage market would go through a correction? Or UBI would be bumped up to $4k. Lots of possible outcomes.

There's ~127m households in the US. At even $2k/mo (read: roughly the poverty line) that's over $3 trillion, which is half the US annual budget. UBI as a justification for throwing millions of people out of work is a joke. But this is obvious: far and away the markets for these companies are other companies looking to replace humans, not humans looking to kick back and let AI make their money for them. No one running any of these companies seriously thinks there's a policy to ameliorate the effects of what they're building, let alone a policy that's actually achievable. The entire stance is "progress gonna progress, figure it out suckers". Not great!
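The arithmetic behind that figure, for anyone who wants to check it (rounded inputs, not official budget numbers):

    households = 127_000_000   # ~127m US households
    ubi_per_month = 2_000      # roughly the poverty line
    annual_cost = households * ubi_per_month * 12
    print(f"${annual_cost / 1e12:.2f} trillion per year")  # ~$3.05 trillion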

> UBI as a justification for throwing millions of people out of work is a joke

AI is the justification; UBI is just the tourniquet.


Not without pain. Housing correction would lead to foreclosures. Bumping UBI would lead to mortgage rates rising due to the ensuing inflation

30% of people in the US pay a mortgage; the average payment is $2,200. I have no idea where even $2k/mo of UBI will come from – when no one works, no one pays taxes. To avoid social unrest some new methods of wealth redistribution will have to appear. Why is nobody talking about this?

If no one works no one pays income taxes; this is one reason why I mentioned sales taxes in another comment. Income taxes are complicated to administer and a bad incentive structure anyway.

Traditionally, sales taxes have been considered regressive because they are linear; you can't bracket them like income taxes, so they heavily burden the most poor. But a UBI is progressive enough that a flat sales tax to fund it should still leave society more equal.


OK, so everyone gets a UBI check today. Let's say 10% of that check will be spent on sales tax. Where is the next month's check coming from?

The UBI check goes out every month; the sales tax percentage is adjusted until the revenue matches the amount sent out as UBI (or more optimistically, adjusted until inflation is kept to a stable target).

Let's say that the sales tax reaches as high as 40%.

This means that if you are unemployed and destitute and the UBI represents your entire spending power, ~40% of your UBI check goes back to the government when you spend it. If you are a rich guy living the good life on your property portfolio rental income and MSFT dividends, 40% of your spending also goes back to the government, which is many multiples of your UBI check (which constitutes a trivial amount of your income).

Remember, the UBI doesn't amount to all gross income, even if nobody is employed. When you spend money to buy furniture, somebody gets it as their income, even if it's the shareholders of the IKEA robofactory. The people on the top will end up paying more into the system than they get out.
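A toy version of that mechanism, just to show the direction of the transfer (all numbers invented for illustration):

    # Flat sales tax on everyone's spending, revenue redistributed evenly as UBI.
    TAX_RATE = 0.40
    monthly_spending = {"destitute": 2_000, "median": 5_000, "wealthy": 50_000}

    total_tax = sum(spend * TAX_RATE for spend in monthly_spending.values())
    ubi = total_tax / len(monthly_spending)  # even split of the revenue

    for person, spend in monthly_spending.items():
        net = ubi - spend * TAX_RATE
        print(f"{person}: pays {spend * TAX_RATE:,.0f}, receives {ubi:,.0f}, net {net:+,.0f}")

People who spend less than the average come out ahead; people who spend more pay in more than they get back, without anyone needing to file an income tax return.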


So you’re assuming rich guys will spend so much that 40% of their spent money will cover 60% of the entire UBI budget? Or do you also want rich guys to pay income tax? No matter how I look at this I don’t see it working without a massive forced wealth redistribution from rich to poor.

It's a mathematical certainty. Let me introduce it another way: the government imposes a 40% sales tax and then distributes the resulting revenue evenly as UBI, however much it is.

Do you still have concerns about the program not covering the UBI budget?

Aside from that, wealth follows a power law, so the rich guys do spend a lot. They have some very fancy champagne. I also elsewhere mentioned that the consumption value of land ownership (the equivalent rent) should be included in the sales tax, so that will be quite a substantial redistributionary force.


Yes, UBI is the solution usually proposed, but I guess there are a few problems.

The first problem I see is that UBI is gonna be paid by governments when it's (mega-)corporations making (disproportionate) profits thanks to AI and not employing humans anymore.

Something would need to compensate for this. I guess a new tax scheme based on the number of AI agents a company would employ?


If you ask me? A very large consumption tax would probably handle most of it, as well as some form of asset taxation (eg. on land and voting-rights shares, in terms of their equivalent rental value).

This would basically amount to a leveling function across society: people consuming less than the mean earn (much) more in UBI than they pay in sales taxes. Wealthier people who consume more than average would pay in more than they get out. With wealth being power-law distributed, the wealthy would end up paying a lot more.

Then you just tune the UBI value to achieve any desired balance of equality vs incentive to work, while adjusting the tax take to control inflation.

The megacorp wealth flows back to the shareholders, who eventually will want to spend it on something.


You think voting and taxation is gonna work if Altman and co. achieves and monopolizes AGI? They can have autonomous soldiers and weapons while precisely controlling information intake of approximately everyone. You only need be diplomatic when the power imbalance isn’t too huge.

I assume in that case they will pay everyone a small allowance so that people aren’t too much of a nuisance and resistance is limited. As predicted by sci-fi.


The core problem is uneven power distribution. Shaving off some percentages of corp revenue will not compensate enough for the rapidly growing power concentrations.

Giving everyone the ability to run open-source AGI might make a bit of a difference, but if the big corps can run 1000x more of it the power differential will keep growing.

There's maybe an opportunity to make (A)I crowdsourced and find better ways to make our shared brain power and GPU power more powerful than the cognitive power of data centres.


I think there are a lot of plausible dystopian AI outcomes, up to and including the extinction of the human species. But that won't stop me from providing my preferred policy recommendations for dealing with the impacts of unemployment caused by automation.

>The megacorp wealth flows back to the shareholders, who eventually will want to spend it on something.

This is called trickle down economics.

>Wealthier people who consume more than average would pay in more than they get out. With wealth being power-law distributed, the wealthy would end up paying a lot more.

The wealthy consume basically the same amount as everybody else in their day to day. They might purchase better quality goods. So consumption taxes are regressive. It is better to tax wealth progressively.


It would be interesting to consider where humans would find fulfillment when what they have done every day for years is rendered worthless.

I like to think about what I did before I had to work. Young me would never worry about finding things to do and none of those things were economically valuable.

Some would pursue less lucrative passions. Some would spiral into lazy Sunday behavior every day.

UBI is nonsensical. Give everyone 5k a month, and everyone is not 5k richer. All you’ve done is decrease the value of the dollar.

Would love to be proven wrong.


For a reductio ad absurdum that might illustrate where your analysis falls down, consider how someone with net worth of $0 and someone with net worth of $100k would each be differently affected by getting a $5k handout, accompanied by a 10% decrease in the purchasing power of a dollar. The end result is not the same as the status quo.
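Worked out with concrete numbers (the 10% devaluation is just the figure from the example above):

    HANDOUT = 5_000
    DEVALUATION = 0.10  # assumed loss of purchasing power per dollar

    for label, net_worth in [("person A", 0), ("person B", 100_000)]:
        nominal = net_worth + HANDOUT
        real = nominal * (1 - DEVALUATION)  # value in pre-handout purchasing power
        print(f"{label}: ${net_worth:,} -> ${real:,.0f}")

Person A ends up with ~$4,500 of real purchasing power where they had $0; person B drops from $100,000 to ~$94,500. The gap shrinks from $100k to $90k, so the distribution is more equal even though the dollar lost value.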

The intent is not to make everyone richer, but to redistribute wealth in an equalizing direction. (In fact this might also make everyone slightly richer on average in real terms, because of second-order effects related to economies of scale that make it slightly easier to meet the growth in demand when wealth is distributed more evenly).

Of course, the decreasing value of the dollar is an undesired side-effect of any government spending; this is why you counterbalance it with taxes to pull money back out of the system and control price growth. These taxes don't exactly cancel the UBI if they disproportionately fall on the wealthy; this is why I like consumption taxes, which fall more heavily on people who have more money to spend.


Your assumption holds in exactly one scenario: one where everybody is currently earning exactly the same amount of money and has the same net worth.

Otherwise, such a UBI would indeed cause substantial redistribution, even after accounting for inflation and prices recalibrating.


To clarify. If everyone in the US is given an extra 5k a month do you believe that the price of goods and services remain the same as they are at present?

If my landlord is charging me 2k for rent, does she not increase it knowing I now have an extra 5k of UBI monthly?

Or, if there’s a bidding war on a home that might typically sell as high as 300k, does that still hold true now that everyone has an extra 5k, or does the house sell for more?

Will a plumber still charge $30 an hour when he has a guaranteed 5k on top of what he’s making? Or will he start to bump his price up to make it worth his while?

I really don’t see how prices don’t just climb and re-equalize so that we’re back at square one. And your response didn’t address that concern.


There's a lot to unpack here, so I hope you don't mind a long and nuanced reply.

UBI on its own is obviously inflationary, like any other government spending. But it can be combined with taxes to reduce the growth in the money supply, as necessary to keep prices stable. Those taxes will tend to be aimed at the wealthy, and will take more than $5k/month away from them, so people bidding for a $300k home might have less purchasing power, not more.

But also, AGI-fueled unemployment is likely very deflationary, and that is the premise of this conversation. Let's talk about your landlord's situation. Sure, your tenant is now getting $5k/month UBI, but they've also lost their job, and so has most of the population. There might not actually be more juice to squeeze out of that lemon than there used to be. Plus, workers won't be bidding up rents in (formerly) high-wage cities for jobs that don't exist anymore. Your unemployed tenant is only staying in town because they like the weather. They could move elsewhere if you charge too much rent; it won't affect their commute.

The effect on prices will really vary a lot by type of good. For things that get effectively automated by AGI (let's say, for example... legal services and architectural drawings), the price may drop precipitously due to increased productivity. For other things (plumbing), the price would go up a lot. It would become quite rewarding to be a plumber.

Lastly, depending on the types of goods demanded, prices can be quite stable. If you give everyone $1k and they use it to buy an iPhone, the price of iPhones might not even go up that much, because it gets cheaper to produce in large quantity. In fact, they may even get cheaper. However, if you give one person $100M and they use it to buy a yacht, you don't get the same efficiency benefits from economies of scale. A more even wealth distribution favors mass production of goods, which keeps prices low.

We don't end up back at square one because:

1) The goal of UBI isn't to make everyone $5k richer, but to shrink the gap between rich and poor in a world where a few people are very rich from owning capital and many are poor due to a decrease in demand for labor.

2) We don't end up back at square one because at the end of the day, an unemployed person getting $5k/month has more purchasing power than if they got $0/month, even if prices do increase.

3) But aggregate prices don't even necessarily have to increase, because of changes to taxes and productivity, and also because of economies of scale.


Prices would climb, but not "re-equalize." Let's say everything got 2x more expensive. People currently making very little could still afford much more. People currently making a lot could afford less.

Right now you are correct that giving everyone 5k would not produce more stuff so the dollars would devalue.

The idea however is that at the same time as giving everyone 5k, AI workers would produce more than an extra 5k/head of stuff.

As an alternative, having the AIs produce all the cars and houses we need but giving all the money to Sam Altman wouldn't make sense, as no one would have the money to buy them apart from Sam, who couldn't use them all.


High interest rates, maybe a wealth tax, ought to do it. Cause deflation while giving UBI.

96% of us are already given ~5k a month for 40hr/week of work.

If UBI is just printed, then sure there would be economic problems; but I think the idea is you redistribute it via taxation.


Oh there have been LOADS over the years. The old ones say this is going to happen, the new ones say this is happening, the new new ones say oh btw our measurement systems for labour market and society during this change period are probably not accurate so we need new ones ASAP.

https://www.mckinsey.com/~/media/mckinsey/industries/public%...

https://www.oecd.org/en/publications/the-risk-of-automation-...

https://news.mit.edu/2019/work-future-report-technology-jobs...

https://arxiv.org/pdf/2306.12001

https://institute.global/insights/economic-prosperity/the-im...

https://ipc.mit.edu/research/work-of-the-future/


None of these links provide any guesses about the impact of AGI on unemployment rates. By AGI I mean AI that is as smart and as capable as an intelligent and highly educated human. What will happen when in a few years companies realize that AI systems do a better job than their human employees?

Probably a similar transition to the Industrial Revolution, but much faster. Laborers had to learn skills that machines weren’t better at.

Any work that is information based is probably in for a dramatic shift.

But maybe some areas that are less vulnerable:

- Experiences (food, travel, accommodations, events, sports)
- Manual labor (carpenters, plumbers, roofers)
- Human connection (caregivers, therapists, teachers, coaches)
- Public service (government, police, fire, healthcare)
- Executives (CEOs, entrepreneurs)

The Industrial Revolution completely changed the world, but there are still many tasks where a human is better/faster/cheaper than a physical machine, so it didn’t replace everyone. My guess is that there will be some niche domains where humans are preferred to AGI.


Sure. But transition to new professions will be a lot slower than layoffs. We are likely to face unprecedented unemployment rates for at least several years before people learn new skills, and meanwhile AI keeps improving and automates more and more professions (manual labor, public service, etc). Are you sure new professions will appear faster than old ones disappear?

You might be right. I don’t know if new professions will appear though. My guess is that people will transition to those existing fields that are less likely to be dominated by AGI.

Tech has already disrupted industries like travel agencies, print journalism, video stores, bookstores, stock brokers, movies, etc.

And you can see it coming for truck drivers, cab drivers, narrators and voice actors, etc.

So we’ve already lived through a taste of it, but this round will be much broader and much faster IMO.


ALL of these links mention it; maybe you didn't read all the material. The Blair Institute link has a whole in-depth section just on it; in fact, if you ctrl-F "unemp" you'll find half the site turns yellow. The whole McKinsey report starting at page 7 is about nothing but job market shifts for 10+ pages, and if you look at the OECD report, you'll find lots of references to it, AND the ability to find further research (Mokyr et al., 2015).

I searched for AGI and I didn’t find it, perhaps I missed it? Could you please point me to any discussion of the impact of AGI on unemployment rate?

You're right, I am incorrect I didn't account for your nuance. My bad.

I'll try to dig it up later; that was what I'd been reading for the final part of my comment. I'll try to find the report because it was good (I'll reply to this comment later tonight if I find it). But the reports I've read that kinda allude to that are more messaged as "AI automation is deeper than we expect" (I don't think these labour market economists are using the term AGI), and the most recent stuff (late 2024) is basically saying our measurement criteria are wrong, so we don't know what is going to happen at all. Too bad, so sad.

Telephone operators is the one most people go to because it happened quickly and it was a lot of people, but there was also a lot of manual work generally available back then.


The report from the Tony Blair Institute is pretty good; unfortunately they write:

AI capabilities, however, are fast evolving. The next wave is expected to be “strong” or “general” AI (AGI). AGI applications will exhibit more autonomy, be able to adapt to different contexts and not be limited to specific tasks – similar to human capability. Beyond this, AI systems smarter than any individual human (“artificial super intelligence”) and systems smarter than all humans (“the singularity”) are possible further in the future.

For the purposes of this project, we exclude these more advanced versions of AI because their disruptive potential is so great and they are still thought to be some decades away.

Anthropic released some related economic analysis today: https://www.anthropic.com/news/the-anthropic-economic-index - looks interesting.


"AI will replace many workers" feels eerily similar to the message of the early 2000s in the US: "Tech/Knowledge workers from low cost locations will replace many workers from high cost locations." In truth, the high cost locations continued to grow, side-by-side, with low cost locations. To be clear: I am not speculating on the why, rather only sharing what I see.

I firmly believe AI will become heavily taxed and regulated.

It would be foolish to ban it, but equally foolish to allow it to replace a large percentage of your country’s workforce. It would completely destabilize the economy and society as a whole.

AI can do a lot of great things for humanity. Putting vast amounts of people out of a job is not one of those things.

There needs to be more smart people talking about and studying what the future realistically looks like.


Taxing and regulating the AI itself wouldn't work very well, as competitors abroad or those who skirt the regulations would be able to use it. You might be able to require human supervision, so rather than AI replacing a programmer who was laid off, the human would become the AI's supervisor.

The world’s richest man has been empowered by the US government to gleefully lay off thousands of federal workers with no published plan beyond “delete government agencies” and his team is using AI as part of the execution. I’m skeptical that there are people in charge who think about anything other than consolidating their own power and lining their own pockets. I think the only AI taxes that might come into play are those that steer people towards services run by those already in power and disincentivize setting up your own AI infrastructure.

The world’s richest man got handed this mandate by someone that was elected by millions of US citizens and he explicitly told them he will do this. Enough people in the US want to see this happen.

I don't know if enough people would not agree to highly tax the corporations if they're themselves out of work and need the money to survive.


Let's look 100 years ahead. With natural selection/evolution replaced by technology, it's not hard to imagine that the human species can survive with a much smaller population. So mass population decline plus space colonization will probably create a steady equilibrium, where the intrinsic value of a human equals the cost of keeping them alive.

Won't there be more jobs for people maintaining and servicing the data centers? Amongst all the other AI infrastructure?

No, data centers require very minimal staffing. Like a few dozen people to staff a data center serving hundreds of millions of users

Plenty of jobs that are decades away from automation. Will a robot climb on your roof to fix your A/C or repair your plumbing soon? Probably not. Now I'm sure some enterprising startup is trying, but seriously, it's not close.

> But muh singularity!

Sure, any day now Optimus will become sentient and replace home builders... or not.

So yeah, probably fewer lawyers (yay), software techs, accountants, etc. But those dollars will flow to people taking vacations, renovating their homes, eating fancy food, etc. Who knows what the net effect will be in the long term?


Forget the singularity, do you think the robotics problem is genuinely hard enough that if we devoted a significant amount of intelligence to it, it would not get solved?

>any day now Optimus will become sentient and replace home builders.

I think you're kidding about being "sentient" but it feels like they just have to get somewhat good at a very few tasks and we would be able to automate some large swath of manual labour. We don't need that many fancy tricks to get there. A lot of people are already reporting significant speed ups in Bio research, why wouldn't we see that in robotics?


Reading between the lines, I get the feeling that OpenAI may be starting to feel desperate if they feel the need to drive the hype like this.

The article is written for journalists (and investors). See the footnote.

Totally. "AGI = $100bn of profit" lol

It feels like every major lab is saying the same thing:

https://darioamodei.com/machines-of-loving-grace
https://www.wsj.com/video/events/the-race-for-true-ai-at-goo...

Even folks _leaving_ OpenAI, who have no incentive to drive hype, are saying that we're very close to AGI. https://x.com/sjgadler/status/1883928200029602236

Even folks like Yoshua Bengio and Hinton are saying we're close to it. The models keep getting better at an exponential.

How much evidence does one need to dispel the "this is OpenAI/sama hype" argument?


> The models keep getting better at an exponential.

Isn't it the opposite? Marginal improvements require exponentially more investment, if we believe Altman. AI is expanding into different areas, and lots of improvements have been made in less saturated fields, but performance on older benchmarks has plateaued, especially relative to compute costs.

Even if you focus on areas where growth is rapid, the history of technology shows many, many examples of rapid growth hitting different bottlenecks and stopping. Futurists have predicted common flying cars for decades and decades, but it'll be a long, long time before helicopters are how people commute to work. There are fundamental physical limitations to the concept that technological advancement does not trivialize.

Maybe the problems facing potential AGI have relatively straightforward technological solutions. Maybe, like neural networks already have shown, it will take decades of hardware advancements before advancements conceived of today can see practice. Maybe replicating human-level intelligence requires hardware much closer to the scale of the human brain than we're capable of making right now, with a hundred trillion individual synapses each more complex than any neuron in an artificial neural network.


Sam plainly wrote that intelligence is the log of training resources, but that's presumably written in the context of GPT-4 style LLMs. The intelligence gains we're seeing right now are not a result of a 100x increase in traditional training resources but rather new ways of training and agentic processes.

These are all people running AI labs! They want investment, and what better way to get investment than to tell people you're going to create Terminator? The people leaving OpenAI are joining other labs – their livelihoods depend on AI companies receiving investment: "it is difficult to get a man to understand something, when his salary depends on his not understanding it".

> The models keep getting better at an exponential [sic].

We don't know if this is true. A lot of growth that appears exponential is often quadratic (https://longform.asmartbear.com/exponential-growth/) or follows a logistic function (e.g. Moore's law).

Additionally there's a LOT of benchmark gaming going on, and a lot of the benchmark solving is not down to having a process that actually solves the problems; it just turns out that the problems already kind of lie in the span of text on the internet.


I feel insane cognitive dissonance when I read a comment like this. I know/hope what you're saying is in good faith and you aren't trolling. Yet my own experience on how good these models have become and how rapidly they're improving makes me feel like we're talking about 2 completely different things.

Screw the benchmarks, it feels insane how much utility these models already provide in my life and that they keep getting better. I guess all my problems are simple and "lie in the span of text on the internet", but they're still extremely valuable to me.


I like AI, and I use it every day. I'm not a software dev, but I do write software professionally, and I also use AI for designing and architecting solutions. I feel the technology is eminently useful, and it's made me maybe 20-30% more productive, which is massive. I can reconcile all this without believing a lot of the hype and outlandish claims, and without feeling it's a cognitive disonance.

People are capable of having really deep conversations with ELIZA (program which just asked questions about things you said). I think there's a kind of "ooh it's language it must be clever" which I think is mistaking the symptom for the effect.

I'm not denying that the large language models may have some (marginal) utility, I'm saying that they're not going to magically turn into Skynet, no matter how much people dream.

I suspect they are not going to have as many applications as people claim: we are a few years in now and we see that the applications are trailing WAY behind the huge level of investment into building datacenter infrastructure. There's some good stuff in FT Alphaville about this.


> How much evidence does one need to dispel the "this is OpenAI/sama hype" argument?

For AGI hype, we need exactly one piece of evidence that machine AGI exists, or that we know how to build it. Otherwise, it's an exaggeration that AGI is imminent - otherwise known as hype. Or maybe it's hope, but sama suggests it should be an expectation.


> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

> 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.

> 3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

My own editorial:

Point 1 is pretty interesting as if P != NP then this is essentially "we can't do better than random search". So, in order to find progressively better solutions as we increase the (linear) input, we need exponentially more resources to find the answer. While I believe P != NP it's interesting to see this play out in the context of learning and AI.

Point 2 is semi well-known. I'm having trouble finding it, but there was an article a while back about how algorithmic improvements to the DFT (or DCT?) were outpacing efficiencies that could be attributed to Moore's law alone, meaning the DFT was improving a few orders of magnitude faster than Moore's law would imply. I assume this is essentially a Wright's law but for attention, in some sense, where more attention to problems leads to better optimizations that dovetail with Moore's law.

Point 3 seems like it's almost a corollary, at least in the short term. If intelligence is capturing the exponential search and it can be re-used to find further efficiency, as in point 2, you get super-exponential growth. I think Kurzweil mentioned something about this as well.

I haven't read the whole article but this jumped out at me:

> Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.

A bald faced lie. Their mission is to capture value from developing AGI. Any benefit to humanity is incidental.


The chief hype officer is back at it again. Altman is defending exponential progress when everything points to incremental progress with signs of diminishing returns. Altman thinks benefits will naturally trickle down when everything points to corporation replacing employees with AI to boost their profit margins. I now understand why some people say Sama might be wrong: https://www.lycee.ai/blog/why-sam-altman-is-wrong

"assume the sale" is the marketing tactic. ie, in every pitch, project that agi (whatever that means) is here, and is a persistent threat.

unfortunately, ai is not search or social. there are no network-effects here. so, get-big-fast is not going to work. and slowly the masses start waking up and asking what the hell is this good for except as a fancy auto-complete.


Unfortunately there is essentially a loose federation of quasi-religious "cults" that have emerged around the topic of AI. For an otherwise mostly secular group of people, the lure of reinventing Abrahamic religion in their own image was inescapable. I feel like most sensible people took the off ramp by the time Roko's Basilisk popped up, but as we saw with the Zizians that's not always the case.

So there's this thread of taking two assumptions at face value I see a lot here and elsewhere:

1. What we call "AI" now is actually some kind of AI, and the rest is just scaling up.

2. It's inevitable that AGI would conform to sci-fi tropes.

Meanwhile we've been watching BILLIONS spent on data centers, power for data centers, water for data centers... all of that going in one end, and LLM's coming out the other.

As long as the future AI overlord requires enough power and water to run a city, and the best it can manage amounts to a fun show, I'll keep my alarmism in check.

But Altman, man he really knows his audience, and he's going to sell sell sell, to an audience that's been primed on fiction and religion to believe in him like some kind of blank-faced prophet.


The article you linked is from Sept '24 and points to the ARC-AGI test as "evidence" that we're not getting close.

We're in Feb '25. ARC-AGI (at least the version they're referencing) already has been solved by AI at above average human level.

>everything points to incremental progress with signs of diminishing returns.

Seems like everything in just Dec '24/ Jan '25 points the other way. These models are already helping PhDs in novel research, they're already getting super human at coding (yes yes, they're not perfect and I'm sure someone on HN has this weird coding job that AI can't replace yet and they're very excited to shit over AI), but they've already replaced a lot of real software dev jobs.

Also aren't you contradicting yourself?

> everything points to incremental progress with signs of diminishing returns

> corporation replacing employees with AI

If we have incremental progress, how are corporations going to replace employees with AI?


To get the score they had on ARC-AGI, they had to fine-tune o3 on ARC-AGI. That is hardly a sign of general intelligence or emergent capability.

PhD novel research? What novel research has an LLM discovered since the emergence of ChatGPT? None. Despite all the knowledge these models accumulate in their weights, they haven't been able to connect the dots and autonomously discover things humans haven't discovered.

Replace which software engineering job? They are useful, sure; good at benchmarks, yes; but not a drop-in replacement for any software engineer.


> getting super human at coding

They're getting super-human at _competitive coding_, which is essentially identifying and writing algorithms. They _are not_ good at general coding, as demonstrated by their subpar scores at benchmarks like SWE-bench, and even those aren't particularly representative of what a real coding job is.


>subpar scores at benchmarks like SWE-bench

The last few models have remarkably improved on SWE-bench too. o3 scores 73%, this number was in the low teens 16 months ago. Willing to wager that SWE benchmark gets saturated before the end of 2025.

> aren't particularly representative of what a real coding job

I don't know about that, a large swath of "real world" coding is writing plumbing and UIs for CRUD apps, and they're getting really good at that as well. Anecdotally, engineers I know have gotten insanely productive with tools like Cursor.


> Altman thinks benefits will naturally trickle down when everything points to corporation replacing employees with AI to boost their profit margins.

That's not what he is saying. He is saying that this investment in AI will yield incredible returns and power. The investor will dominate the next decade(s) and thus you should invest. Of course, he has to say it in a careful way not to alarm politically correct people. But, in essence, he is trying to create investor FOMO to drive his next round.


yeah that's mostly hype to get the next funding round

He explicitly stated progress is logarithmic.

> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.


I’m not sure anyone is convinced this will empower individuals. On the contrary: if we get this tech “right” enough, the inequality gap will become an inequality chasm… There is no financial incentive to pay humans when a machine is a fraction of the cost.

If each human body needs 0.2 acre of land to grow the food necessary for subsistence, what happens when the price of intelligence keeps dropping and one person's intelligence (even when directed towards the highest-value use!) is not enough to afford the use of that land? In other words, what happens when humans are no longer economically viable?

Jevons paradox means that the demand for intelligence will keep rising as the cost drops, so I can't help but expect a steady increase in the economic value of land _when used for AI_. It'll take a long time before it exceeds the economic value of land when used for human subsistence, but the growth curves are not pointing in encouraging directions.


AGI / human overlords that cater to AGI won't need earth resources for more than a few decades. There will be a bootstrapping period, reliant on earth resources but soon it will be much cleaner to have solar/nuclear in space and have robotic mining of raw resources from asteroids.

The next fifty years are going to be quite interesting.

> Over time, in fits and starts, the steady march of human innovation [alongside monumental efforts of risk mitigation of each new innovation] has brought previously unimaginable levels of prosperity and improvements to almost every aspect of people’s lives.

Anyway, AI/AGI will not yield economic liberation for the masses. We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened. Why? Snuck in here:

> the price of... a few inherently limited resources like land may rise even more dramatically.

This is really the crux of it. The price of land will skyrocket, driving the "cost of living" (cost of land) wedge further between the haves and have-nots.


This is a great point. Life is tough because we are all competing in a game. Tweaking the rules of the game so that each basket is worth more points doesn’t make the game easier for any player.

From Henry George:

> Now, to produce wealth, two things are required: labor and land. Therefore, the effect of labor-saving improvements will be to extend the demand for land. So the primary effect of labor-saving improvements is to increase the power of labor. But the secondary effect is to extend the margin of production. And the end result is to increase rent.

> This shows that effects attributed to population are really due to technological progress. It also explains the otherwise perplexing fact that laborsaving machinery fails to benefit workers


>explains the otherwise perplexing fact that laborsaving machinery fails to benefit workers

I disagree; the reason workers don't benefit is that they are mostly paid to put hours in. Owners claim the gains of better machinery because they reason it is a capital investment at the business level.

Really, I don't see why this is perplexing. What is really perplexing is that some economists thought that productivity gains would somehow accrue to workers.


You say this isn't perplexing while commenting on an article by one of the most important people in industry repeating exactly this fallacy?

HN is full of people who happily and earnestly propagate this "obvious" falsehood.


> Anyway, AI/AGI will not yield economic liberation for the masses. We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened.

In Marshall Sahlins's Stone Age Economics, he studies the work time of hunter-gatherer tribes in Africa, Papua New Guinea, the Amazon, etc. They often work less than 40 hours a week. The hunter-gatherers who painted the caves at Chauvet seem to have had leisure time. Fewer hours than some fresh college grad pounding out C++ or C# for Electronic Arts, anyhow.

The past 50 years has seen the hourly wage stay flat while the profit workers create is sucked up by the heirs.

Paying for four years of college to get a CS degree, then studying Leetcode, interning, working cheap as an associate/junior used to be seen as a good path, but obviously since late 2022 this has stagnated for most.


> The past 50 years has seen the hourly wage stay flat while the profit workers create is sucked up by the heirs.

Are you sure?

https://fred.stlouisfed.org/series/MEFAINUSA672N

The reality is that so much wealth has been created that the US has seen rising wages AND rising inequality, with an increasing proportion of growth ending up benefiting capital, not labour.


Wait, that chart shows a 30% growth in family income. For a per-worker income comparison you'd need to divide by the increase in dual-income families, which has also increased by about 30% since 1975.

Actually, real median personal income has increased 50% since 1975: https://fred.stlouisfed.org/series/MEPAINUSA672N

I believe you may have seen some number somewhere; it would be useful to have that source to silence detractors.


> Paying for four years of college to get a CS degree, then studying Leetcode, interning, working cheap as an associate/junior used to be seen as a good path, but obviously since late 2022 this has stagnated for most.

What?

Go to a state school; the median cost is ~$11k a year. Then study Leetcode for a month (free or $35). Buy Cracking the Coding Interview ($33 new). Get a job as a software engineer making a median wage of $140k.

Only on HN could someone see this opportunity set and think it's stagnated.

https://www.bankrate.com/loans/student-loans/average-cost-of...

https://builtin.com/salaries/us/software-engineer


Can you define what you mean by economic liberation? For most of economic history, I think the definition would have been freedom from famine and slavery and I think on both counts we've been wildly successful. Both basically only occur in failed states, where there's either no government at all (Somalia, Sudan, parts of Iraq and Syria) or the government is an authoritarian dictatorship (North Korea, Venezuela)

I ask, because I think you might be overestimating the ability of the current global system to produce "economic liberation", depending upon what you mean. A rough estimate of GDP per capita would put it at ~$20K if you adjust for purchasing power, less than $15K if you don't. That's well below what most people in the developed world would consider to be free of burden or worry and it assumes a completely equal distribution of all production, which is obviously unreasonable.
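For anyone wanting to check that ballpark, here's a quick sketch; the world GDP and population figures below are rough, assumed 2023-era values, not authoritative data:

    # Rough sanity check of global output per person; GDP and population
    # values are approximate assumptions, not precise figures.
    world_gdp_nominal = 105e12   # ~$105 trillion, nominal (assumed)
    world_gdp_ppp = 165e12       # ~$165 trillion, PPP (assumed)
    world_population = 8.1e9     # ~8.1 billion people (assumed)

    print(f"nominal per capita: ~${world_gdp_nominal / world_population:,.0f}")  # ~ $13k
    print(f"PPP per capita:     ~${world_gdp_ppp / world_population:,.0f}")      # ~ $20k

Either way, the per-person output lands in the low tens of thousands of dollars, consistent with the estimate above.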

We need to keep pushing the ball forward on growing wealth by improving productivity, which is going to require continued advancements in technology like mass adoption of AI.


Oh sure, I'm not one of those "the world is so awful" people. We've made immense progress on a lot of very important dimensions.

Fair point on the current output not producing enough to really give people their time back.

And yes, I agree, we should continue pushing productivity forward. My point is only that productivity growth by itself does not necessarily yield anything close to the optimal distribution of its benefits. In fact, we have good reason to believe that higher tiers of technology, which concentrate more technological leverage in fewer and fewer hands, are naturally antagonistic to an optimal distribution of their benefits.

I'll take a fairly expansive definition of "optimal distribution" here to just say we should shoot at least for a distribution of wealth that is socially and politically stable for a free society in the long run.


a) to be free from the enforcement of someone's desperate desire to hard-code envy into children via products, ads and media

b) to not have schools and teachers filter and then reinforce pupils for jobs that serve class construction

c) to make sure that every kid can get their near-infinite and non-hazardous (before creative construction) LEGOs that they can play with in peaceful environments, where fathers and mothers have enough on their accounts to provide peaceful environments free of unhealthy stress, plus food and water

d) so that anyone can at least try long enough, if they so wish, to become a polymath scientist, artist and craftsman, and if it doesn't work out, to never have to be angry at some pre-emptively envious people who create and abuse crises to drive prices pointlessly higher and higher, prices that are NOT supported by any logic or otherwise reasonable justification


>A rough estimate of GDP per capita would put it at ~$20K if you adjust for purchasing power, less than $15K if you don't.

Is that spread across only working adults or does it include children? A better metric to examine might be per family, if you have it.


> Can you define what you mean by economic liberation?

Not having to work outside of providing basic necessities for yourself and the rest of humanity.

As for GDP - economists have been embarrassing and discrediting themselves for decades - please don't cite their propaganda in discussions about the real world.

You need to measure real world things. Can we grow enough food, can we build enough shelter, can we transport goods, can we make clothes, basic medicine? Without working 40 hours/week? The answer should be obvious.

This insane talk of productivity is more economist propaganda. We've been plenty productive for a long time. The problem is an incompetent elite and a populace that continues to not believe their lying eyes, citing 'experts'.


This is a key point. Given current overall levels of economic productivity, we should all be working 3-4 day weeks, at most.

To whom do the benefits of all this newfound efficiency accrue?


To the 1%? Just look at the historic wealth distribution chart. It’s wild.

https://en.m.wikipedia.org/wiki/Wealth_inequality_in_the_Uni...


> The price of land will skyrocket

Is that really true?

Let's look at some data from Google, the World Bank, etc.:

     US land area: 3,532,316 square miles

     US population: 334.9 million

     640 acres per square mile

     ( 640 * 3,532,316 ) / 334,900,000 =
     6.750 acres per person

     Fertility rate: 1.66 births per woman
     (2022) World Bank

     GDP growth rate: 2.5% annual change
     (2023) World Bank
So, per family of four, that would be

     ( 4 * 640 * 3,532,316 ) / 334,900,000
     = 27 acres per family of four
"Skyrocket?" With the fertility rate of 1.66, the US population is falling so that the acres per person is increasing.

A guess is that people are crowded into tall buildings in dense cities to reduce costs of communications. So, yes, in dense cities, the cost of land per acre is comparatively high.

But the Internet is reducing the costs of communications meaning that people can move to land that is less densely populated and cheaper.

So, for the near future, tough to believe:

> The price of land will skyrocket


If this logic worked, all land and therefore rent would currently be ~free since there's ample open space all over the continent.

The price of land is set by the productivity of it. Productivity goes up (by increased density, public infrastructure, private investment, or technological advancement) -> price of land goes up.


Actually, the part that that person was focusing on was that the fertility rate is below the replacement rate. Yeah, if we also decrease our immigration rate, our population could peak out even in a decade or two: https://www.axios.com/2023/11/09/us-population-decline-down-...

"Price of land goes up": In simple terms, the Internet is providing huge areas of land suddenly now feasible for use; the larger supply will lower US average costs per acre.

Empirically, that hasn't happened. It turns out people enjoy being near other people, so the increase in productivity has outstripped the preference for less-dense living.

> We already produce more than enough for economic liberation all over the world, many times over. It hasn't happened. Why?

Because directing material resources towards that end requires a certain aggregate amount of willing/aligned general intelligence, which we simply don't have as a society. Failure after failure of socialist states demonstrates that human general intelligence is, on average, uninterested in working towards the greater good. AGI won't have such limitations.

The blog post even addresses this: "the cost of intelligence and the cost of energy constrain a lot of things".

However, I actually do agree with you, but for a different reason. AGI is highly unlikely to yield economic liberation for the masses. Not because we won't have the intelligence capacity, but because the handful of people who will have their hands on the levers of power will in all likelihood be uninterested in tending to the masses of now economically useless people.


> The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

I'm still stuck thinking about this point. I don't know that it's obviously true. Maybe a more bounded claim would make more sense, something like: increasing intelligence in the short-term has big compounding effects. But there's also a cap as society and infrastructure has to adapt. And I don't know how this plays out within an adversarial system where people might be competing for scarce resources like human attention.

Taken to the extreme, one could imagine a fantasy/sci-fi scenario where each person is empowered like a god in their own universe, allowing them to experiment, learn, and create endlessly.


30 years ago, no one in my native place had seen a telephone. Now every single person, young and old, is connected to the internet, conducts banking and other transactions via phone, and does many, many other things.

Many unimaginable things happened in the last 30 years from the vantage point of my native place. Assuming the same level of transformation over the next 30 years, that would still be massive progress. Given current tech and AI, the rate of progress over the next 30 years will be far greater than over the last 30. So I believe AI, and tech in general, will make massive progress in the next decade.


> 2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
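A quick back-of-the-envelope check on those two figures (a sketch only; it assumes roughly 18 months between the "early 2023" and "mid-2024" price points quoted above):

    # Rough annualization of the quoted ~150x price drop; the 1.5-year
    # window is an assumption based on "early 2023" to "mid-2024".
    total_drop = 150
    years = 1.5

    annualized = total_drop ** (1 / years)
    print(f"implied price drop: ~{annualized:.0f}x per year")         # ~28x

    # By comparison, "10x every 12 months" over the same window:
    print(f"10x/year over {years} years: ~{10 ** years:.0f}x total")  # ~32x

So, taken at face value, the GPT-4 to GPT-4o example actually implies a somewhat faster drop than the headline 10x-per-year figure over that particular window.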

This is so exciting. I guess NVIDIA's Project DIGITS [0] will be the starting point for a bit more serious home lab usage of local LLMs, while still being a bit like what Quadro used to be in the pro/prosumer market in the 00s/10s.

Now it's all RTX, and while differences still exist between pro gamer cards and workstation cards, most of what workstation GPUs were used for back then is easily doable by pro gamer cards nowadays.

Let's just hope that the quoted values are also valid for these prosumer devices like Project DIGITS.

Also, let's hope that companies start targeting that user base specifically, like a Groq SBC.

[0] https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwe...


“The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute.”

Haven't the latest big improvements in LLMs been due to changes in approach/algorithms? Reasoning models and augmenting LLMs with external tools and internet access (agents and deep research).

As far as I can tell, classical pure LLMs are only improving modestly at this point.


The very term "AGI" is a confession that the I in AI doesn't stand for intelligence.

What is even the opposite of "general intelligence"? Specialized intelligence?

But AI already has a large spectrum. It's not an expert system in this or that. It's quite general. But it's not actually intelligent.

We should replace "AGI" with "AAI": Actual Artificial Intelligence.


"AGI" because computers beat humans in many specific areas of intelligence. Chess playing used to be considered AI, for example. It wasn't too long ago that top human-level Go play was considered out of reach for computers, now computers exceed humans. But there isn't a system that matches or exceeds human intelligence in all aspects. That's what the G is for.

We can't even simulate a C. elegans worm with 1k cells and 302 neurons. But, sure, a few more GPUs and we'll have AGI!

This seems to be assuming that AGI would be similarly difficult to full brain emulation? This doesn’t seem obviously true.

> This doesn’t seem obviously true.

Is this kind of "casting doubt" some kind of cheat code that suspends all disbelief?

Anyway, yes it does seem absolutely obviously true that if you want to reproduce intelligence you must (at least at first) simulate it.


It is easier to make a flying machine than it is to make a machine that flies by the exact principles that birds use to fly (with flapping wings and such).

Do you have an argument for why general intelligence couldn’t be like flight in this way? You just said that it is obvious. I do not share this intuition. Could you give an explanation for why you expect it to be unlike flight in this respect?


> It is easier to make a flying machine than it is to make a machine that flies by the exact principles that birds use to fly

do you think that "flying machines" aren't simulated before manufacturing?


I believe airplanes were made before the first good simulations of bird flight.


The first aircraft? Ok.

And how far did aircraft get before we were able to make functioning wings that work by flapping in the way of birds?

(Actually, have we even done the latter? I imagine we have, but I’m not sure.)


This claim is already falsified by existing AI capabilities, which neither required simulating how humans solve similar problems, nor required understanding how to hand-code the relevant algorithms (and indeed we don't even have the interpretability tools to know what those algorithms are yet!).

Reasoning models are more intelligent than C. elegans.

Biological neurons != artificial neurons. You don't need to simulate biology to simulate intelligence. In the same way you don't need to flap wings to fly, or muscle fibers to move around.

> the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential.

I think this is extremely myopic to assume things like "have much more time to enjoy with our families" unless that time is because you're unemployed. Every major technology over the past couple hundred years has been paired with such promises, and it has never materialized. Only through unions did we get 8-hour days and weekends. Electricity, factory farming, etc. have not made me work any less, even if I do different things than I would have 200 years ago.

I think it's also odd to assume the only things preventing curing all disease are the lack of intelligence and scale. There are so many more factors that go into this, and into an already competitive landscape (biology) which is constantly evolving and changing. With every new technique innovated (e.g. CRISPR) and every new discovery (e.g. immunotherapy) proven, the directions of what's possible change. If AGI comes through LLMs as we know them (color me skeptical), they do not have the ability to absorb such new possibilities and change on a dime.

I could go on and on, but this is just a random comment on the internet. I understand the original post is meant to achieve certain goals at a specific word length, but not diving into all of the possibilities (including failure modes in his extraordinarily optimistic assumptions) is quite irresponsible if he is truly meant to be a leader for a bold new future.


The line you quoted caught my eye too, and I was confused why it was in there without an explicit or implicit mention of "in ten years we'll be so productive we'll have 3-day work weeks".

I think you make a great point about LLMs being able to absorb new possibilities. I like to cite the invention of the MRI machine as an incredible innovation that took three independent discoveries over forty years to realize, not to mention commercialize. Maybe if (big if) and when AGI comes, it will be able to spit out dozens of inventions that are some culmination of similarly independent discoveries, but I'm not going to hold my breath. We have to remember they are just extremely good regurgitation machines, which, in my experience, is why they're nearly useless in niche programming tasks.


> we can now imagine a world where we cure all diseases

Sure we can imagine it. However to make it happen, it's not enough to imagine the end goal, you need to understand and execute every single step.

I suspect his lack of knowledge about what's actually involved allows him to imagine it.

i.e., I notice he hasn't declared a world free of software bugs and failures (a much easier task) before declaring a world free of bugs in human biology.


The crux of the challenges AGI presents is barely mentioned, relegated to a footnote in this blog post:

> In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

The primary challenge in my opinion is that access to AGI will dramatically accelerate wealth inequality. Driving costs lower will not magically enable the less educated to be better able to educate themselves using AGI, particularly if they're already at risk or on the edge of economic uncertainty.

I want to know how people like sama are thinking about the economics of access to AGI in broader terms, more than just a footnote in a utopian fluff piece.

edit: I am an optimist when it comes to the applications of AI, but I have no doubt that we're in for a rough time as the world copes with the economic implications of its applications. Globally, the highest paying jobs are knowledge work, and we're on the verge (relatively speaking) of making that work go the way that blue-collar work did in the post-war United States. There are a lot of hard problems ahead, and it bothers me when people sweep them under the rug in the name of progress.


> Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

I really don't want a future in which I have to supervise 1 million real-but-relatively-junior virtual coworkers. I would prefer 1 senior over 1 million juniors. I'm in the programming industry, and I don't think coding can be scaled that way.


"as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy."

The AI being controlled by megacorps scenario conveniently left out.


Currently, OpenAI is the only real big corporation that controls it, but DeepSeek and the myriad academic researchers working on making the models even more efficient are smaller scale and fairly disruptive.

> even more efficient are smaller scale and fairly disruptive.

your tiny powerful PC "against" data centers running tickytackytock style brainwashing, priming rhythms over days, weeks, months, personalized and perfectly timed with all that IoT you pass every day everywhere, just for the sale, the vote, disrupting your cognitive processes, watching you react and act, testing and quantifying your interactions and their (crawling:) psycho-social envelopment of your development... very undisruptive, if you ask me, all while you think you're so very normal, thank you very much.

not much to hope for except "uuuuh you can makes animes and animations for bluestergram and threats" and ADverse content for le Tube and run automated companies whose supply chains will be just that and maybe maybe maybe train a wAIfu to post on XXX to hopefully find an accelerating ( or exhilarating) and adequately perverted one made of flesh and so on ...


At this stage, it's looking entirely possible that large closed models aren't viable as a moat, because, by virtue of making the model available for use, you open yourself up to having your weights distilled. AI, like anything else, will hit diminishing returns, because there's only so far ahead you can look without error bars exploding, and beyond a certain point, writing "better" code is an academic exercise. Under these circumstances, it would be reasonable to expect that the average person could have a decent amount of useful intelligence at their fingertips.

The cynic in me suspects that the result will be zero public access, and a b2b AI service model only from the big players. The optimist in me hopes that the genie has already been unbottled, and open weights (if not necessarily open source) has enough momentum to ensure that people can host this stuff on their own if they so choose. In both cases, training data will be so locked down it'll make breaking into the Pentagon look like an excursion to the sand pit.


Yep. There's a big difference between personal truth and statistical truth.

In the short term, an individual can avoid the megacorp manipulations. But society will change and those changes will affect the individual.


Meta, Alphabet, Amazon, and Microsoft have considerable control over state of the art models.

Even pointing that out is kind of a distraction from the real issue: DOGE is currently an excellent demonstration that the distinction between authoritarian governments and megacorps is a mostly-artificial one, which can and will be blurred to irrelevance if doing so is convenient for the powers that be.

Sovereignty is the ultimate business, because you get paid just for existing. You become the ultimate rentier, because you own all land by default, and have a monopoly on violence.

The British East India Company outright ruled vast areas of India; it was eventually nationalized, and all territories it controlled were ceded to the crown because it became too powerful in its own right.


"Fascism should more properly be called corporatism because it is the merger of state and corporate power." — Benito Mussolini


Yeah, I've seen the two sites that claim that it's a fake quote. The link you posted contains a bunch of doublespeak in an attempt to avoid the fact that fascism leans very heavily into corporatism.

So just say that and don't post a fake quote, jeez. Wikiquote has it under "misattributed":

https://en.wikiquote.org/wiki/Benito_Mussolini#Misattributed

"You won't win anyone over with fake quotes" -- Joan of Arc


I didn't post a fake quote. I pasted a quote that two sites -- one of which is nearly a direct copy of the other -- claim is fake.

Do you disagree that fascism leans heavily into corporatism? Surely you realize that's the point here, right? If your entire argument is "I don't believe that Mussolini actually wrote that" then we have nothing to discuss.


Whether it's fake or not is missing the point. Either way, Italian fascism was corporatist. But "corporatism" doesn't mean what people think it means. It refers to dealing with society in terms of different fields, professions, and industries, with "corporation" referring to groups that represented the common interests of those. It's not used here in the common sense of "legally recognized business" and it doesn't refer to rule by said businesses.

> AI will seep into all areas of the economy and society; we will expect everything to be smart.

I fear this is correct, but with "smart" in the sense of smart TVs. In economic terms, TVs are amazing compared to just a few years ago - more pixels, much cheaper, more functionality - but in practice they spy on you and take ten seconds to turn on and show unskippable ads. This is purely a social (legal, economic, etc) problem as opposed to a technical one, and its solution (if we ever find one) would be likewise. So it's frightening to see someone with as much power over the outcome say something like this:

> In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

When capital has control of an industry, but voluntarily gives little pieces of it out to labor so that they can "share" the profit, I think we all know how that turns out. It does seem possible that AGI really will get built and really will seep into everything and really will create a ton of economic value. But getting from there to the part where it improves everyone's lives is a social problem, akin to the problem of smart TVs, and I can't imagine a worse plan to solve that problem than leaving it up to the capitalist that owns the AGIs.


Trying to regulate AI so that it is used predominantly for good and not for evil is as futile as trying to regulate steam in the early 19th century to be used for good and not for evil. It turned out steam was mainly used for good, but nothing could stop the navies of the great powers from building steam-powered battleships.

Nothing will stop the CCP from directing AI towards more state surveillance. Or any number of actors from using AI to create extremely lethal swarms of drones.


I wonder who wrote this? Doesn't sound like Altman's voice.

I wonder who theorized this? Altman isn't known for having models about AGI.

To the actual theorist: Claiming in one paragraph that AI goes as log resources, and in the next paragraph that the resource costs drop by 10x per year, is a contradiction; the latter paragraph shows a dependence on algorithms that is nothing like "it's just the compute silly".


> Altman isn't known for having models about AGI.

His blog posts about AGI predate OpenAI.

https://blog.samaltman.com/machine-intelligence-part-1

https://blog.samaltman.com/machine-intelligence-part-2


Hadn't seen that before, and despite being bog-standard Bostrom it's still more of an attempt to hold a theory than I'd previously seen associated with him. Note the non-overlap in writing style and theory with the present post.

> one of our reasons for launching products early and often is to give society and the technology time to co-evolve

Is this really true? o3 (not mini) is still being held back for "safety testing", and Sora was announced far ahead of its release.


"Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025"

There's a lot to unpack there. Maintain an internal 10-year technological lead compared to what's public with OpenAI?


"as we get closer to achieving AGI"

AGI as defined by OpenAI as "AI systems that can generate at least $100 billion in profits", right? Because what they are doing has very little to do with actual AGI.


> "AI systems that can generate at least $100 billion in profits"

Like... a calculator? or Python?


Linear Regression could be the world's first AGI /sarcasm

> In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together. In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential

Yeah, right. What world is this guy living in? An idealistic one? Will AI equally spread the profits of that economic growth he is talking about? I only see companies getting by on less manpower and doing fine, while poor people stay poor. Bravo. Well thought through, guy who now sells "AI".


> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

We are burning cash so fast and getting very little in return for it. This is a death spiral and we refuse to admit it.

> The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.

We are entirely unconcerned with accuracy and refuse to see how the limitations of our product will not allow us to follow this simple economic aphorism into success.

> The socioeconomic value of linearly increasing intelligence is super-exponential in nature.

You see, even though we're burning money at an exponentially increasing rate, somehow this /linear/ increase in output is secretly "super-exponential" in nature. I have nothing to back this up, but you just have to believe me.

At least Steve Jobs built something worth having before bullshitting about it. This is just embarrassing.


The day your neural network has 1 quintillion connections, wake me up, because that is when we're getting anywhere close to AGI. Billions and trillions are rookie numbers.

>one quintillion connections

What a load of nonsense.


TLDR:

1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.

3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature.


The only true part is 2.

1 is plainly false. Enormous resources have been poured into models since the end of 2023, and the "intelligence" (for lack of a better term) has stayed roughly the same, around the level of GPT-4. Nothing has happened since then.

3 is a philosophical opinion, not based on any falsifiable evidence.

All in all: mostly FUD, as per the usual MO.


> 1 is plainly false. Enormous resources have been poured into models since the end of 2023, and the "intelligence" (for lack of a better term) has stayed roughly the same, around the level of GPT-4. Nothing has happened since then.

this would be true only for people who have used the same model since 2023 :) Jesus!


How is 1 false? Log improvement means for 10x the cost the model is 2x as good. For 100x the cost, the model is 3x as good.

Not a curve to be happy about TBH. You need to simultaneously find big efficiency wins and drive up costs substantially to get 4-5x improvements, and it is probably impossible to maintain good year on year improvements after the first 2-3 years when you get all the low hanging fruit.
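To make that concrete, here's a purely illustrative sketch that takes the "intelligence roughly equals the log of the resources" claim at face value; the units are made up for illustration only:

    import math

    # Illustrative only: if "intelligence" grows with log10(resources),
    # each 10x in spend adds one unit over an arbitrary baseline of 1.
    for cost_multiplier in [1, 10, 100, 1_000]:
        intelligence = 1 + math.log10(cost_multiplier)
        print(f"{cost_multiplier:>5}x the cost -> ~{intelligence:.0f}x the baseline 'intelligence'")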


> 1 is plainly false. Enormous resources have been poured into models since the end of 2023, and the "intelligence" (for lack of a better term) has stayed roughly the same, around the level of GPT-4. Nothing has happened since then.

You need to spend some quality time with o1-pro and/or Gemini Pro 2.0 Experimental. It is not the case that there has been no progress since GPT4. CoT reasoning is a BFD.


I bet he just prompted the LLM with these bullet points and generated most of the article from it.

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


With all due respect, I wouldn’t be surprised if Sam Altman uses his own software to help him write his blog.

AI has to be the solution to everything to justify the kind of investments going into it right now.

Sam's a savvy businessman, so he obviously understands that and goes a few steps further. He promises exponential returns and addresses any regulatory and societal concerns. This piece is strategically crafted, not for us, but for investors.


Only two of those are observations.

lol, lmao even

1) give me more money and i will make you rich

2) don't look at deepseek

3) I repeat: there is no reason not to keep giving me EXPONENTIALLY more money to boil all of the oceans


> Systems that start to point to AGI* are coming into view

And my coffee machine is plotting the Singularity... Let it be on the record: in the future, AGI will be known as A Grand Illusion.


I refuse to listen to anyone who has done nothing in life but maximize their wealth and power, play corporate political games, and grift investors for all they're worth. Shame the actual innovators don't blog much, usually because they're too busy doing the actual work.

> AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.

Is it just me, or is that an incredibly weak/vague definition for AGI? Feels like you could make the claim that AI is at this level already if you stretch the terms he used enough.


It's funny, there used to be a pretty solid definition of AGI: a system that can pass the Turing Test. Then we got there, and it turned out that passing the Turing Test is actually pretty specialized and doesn't mean the system is as smart as a human in all aspects.

So much negativity all the time. The fact remains that AI models are out in the public domain. If you’re unable to see how that can improve the lives of an average person, that’s a failure of imagination.

Don't let megacorp dominance prevent individual action. The best way to be hopeful is to effect change in your immediate surroundings. Teach people how to leverage AI; then they won't be held hostage to the tyranny of the bureaucrats: doctors, lawyers, accountants, politicians, software engineers, project managers, bankers, investment advisors, etc.

Yes, AI makes mistakes .. so what? Humans do too.

Credit where credit is due - Sam may be no saint, but OpenAI deserves credit for launching this revolution. Directly or indirectly that led to the release of open models. Would the results have been the same without Sam? Nobody knows, not a point worth anybody’s time debating.

Given most of us here are software engineers, it's natural to feel threatened; there will be those of us whose skills will be made obsolete. And some of us will make the jump to the land of ideas, and these tools will empower us to build solutions that previously required large companies to build. Perhaps that might mean we focus less on monetary rewards and more on change, as it becomes ever easier to effect that change.

To those whose skills will be made obsolete - you have a choice on whether you want to let that happen. Some amount of fear is healthy - keeps our mind alert and allows us to grow.

There will be growing pains as our species evolves. We'll have to navigate that with empathy.

Change starts from you. You are more powerful than you can imagine, at any given point.



rolls eyes

bro is like the second biggest conman of the ‘20s

“AGI” mhm, sure


Each subsequent post is worse than the one before. Who is the audience for this drivel?

> *By using the term AGI here, we aim to communicate clearly, and we do not intend to alter or interpret the definitions and processes that define our relationship with Microsoft. We fully expect to be partnered with Microsoft for the long term. This footnote seems silly, but on the other hand we know some journalists will try to get clicks by writing something silly so here we are pre-empting the silliness…

OpenAI and Microsoft signed a profoundly silly contract with vague terms about AGI! It's petty to snipe about journalists asking silly questions: they are responding to the fact that Sam Altman and Satya Nadella are not serious people, despite all their money and power.


> In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention.

Marxist analysis from Sam Altman?

> We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

I can't imagine a world in which billionaires (trillionaires?) would happily share their wealth to people whose work they don't need anymore. Honestly, I have more faith in a rogue ASI taking over the world to install a fair post-scarcity society than on current politicians and tech-oligarchs to give up a single cent to the masses.



