Plentiful, high-paying jobs in the age of AI (noahpinion.blog)
56 points by bilsbie 6 months ago | hide | past | favorite | 45 comments



Key point:

> If you can create more compute by simply putting more energy into the process, it could make economic sense to starve human beings in order to generate more and more AI... most governments seem likely to limit AI’s ability to hog energy

This is the most likely scenario, and indeed in that scenario saving humans from starvation requires government action. Some governments will do it, some won't. Those that do will be outcompeted by those that don't. Game over.


Could you provide some examples of what you think these hypothetical AIs are going to give/grant/allow these hypothetical governments, such that it warrants using enough energy to actually starve people? What "competition" are these governments going to be in that necessitates this?

Comments like this seem to be becoming far more common amongst the tech community, to me anyway, and I really want an answer: what is this hypothetical god-like entity going to enable that will somehow be limited to a select group of people/nation/whatever and not spread throughout the rest of the world? It's a weird dichotomy wherein "AGI" will somehow solve climate change, enable cold fusion, end human aging, and spread us to the stars, but also inflict mass death, use all of the global energy if unchecked, and now starve humans to achieve those things.


Market forces: if the AI company can pay $20 for a thing, but human beings can only pay $5 for it, who are you going to sell to? Now, energy isn't fungible (I can't feed a bushel of wheat to my computer, nor can I drink gasoline), so I don't buy it, but that's the theory.


Some people's definition of intelligence is like that. They speak of how a more powerful version of themselves would behave.


It doesn’t really matter what it is so long as there is competition between entities and by not competing/winning you are in some way penalised. Competition is the driver that compels us towards total war or total economy. It is digital natural selection that will drive all to the maximum possible point of ruin.


Too many unknowns to be predictable.

We've got loads of machine labour already; we don't yet plug AI into it everywhere because humans are much better at avoiding accidents (not immune, but much better). Get AI good enough and the mining equipment, the delivery trucks, and the factories can all be fully automated (bits of each already are). You're then limited by how fast (and how far) your robotic workforce can increase its own size.

How fast is unknown until we do it, but it wouldn't be particularly surprising if a group of robots could double their number in 6 months. How far is also unknown until we do it, my guess is that there are loads of limits to growth we've just not bothered thinking about yet because we don't need to. On the one hand: I'd be surprised if it worked out as less than one humanoid robot per capita using only things found on Earth; on the other: I expect it to vary by country.
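As a rough sanity check on that doubling guess, here's a toy calculation; the seed fleet size and the one-robot-per-person target are assumptions for illustration, not figures from the comment:

```python
import math

# Toy growth model for "a group of robots could double their number
# in 6 months". Both the starting fleet and the world-population
# target are assumed numbers, purely for illustration.
DOUBLING_TIME_YEARS = 0.5
initial_robots = 10_000            # hypothetical seed fleet
target = 8_000_000_000             # roughly one robot per person

doublings = math.log2(target / initial_robots)
years = doublings * DOUBLING_TIME_YEARS
print(f"{doublings:.1f} doublings, about {years:.1f} years")
```

Even from a tiny seed fleet, sustained 6-month doublings reach one robot per capita in about a decade, which is why the "how far" limits matter more than the "how fast" ones.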

Even "just" one per person is enough for everyone to have a life of luxury. (But of course, by medieval standards, I could say that about "clean indoor plumbing" and "bedrooms").

But if we're never limited by trace elements, then the upper limit is a paperclip maximiser (in the bad ending) or a Dyson swarm (in the good ending).

Both endings can (in principle and if I ignore all the unknowns) be reached in my lifetime.


> humans are much better at avoiding accidents (not immune, but much better)

Are they, really, or is it a question of liability? If we could "lease" AI drivers, allowing companies to defray liability while still not paying unreliable humans, they'd do it. But then the owners of the AI leasing companies wouldn't have anywhere to hide from lawsuits.

Being able to blame and fire an individual human for what is really a systemic problem is a huge win for companies.


Insurance is where I'm currently looking to get an unbiased view of the quality of the current (and expected near-future) state of the art for AI.

This suggests that we'll probably be at the right level for cars in 5 years: https://www2.deloitte.com/xe/en/insights/industry/financial-...

My personal best guess is that it will take a further 5-10 years past the point where no-steering-wheel-included self-driving cars are a thing for the electrical power requirements of AI to reduce from something you can fit in a car to something you can fit into an android.

As someone misread me last time I said this, that's not 5-10 years from today, it's 5-10 years gap between two things we don't yet have.

> Being able to blame and fire an individual human for what is really a systemic problem is a huge win for companies.

I disagree. In the UK at least you need public liability insurance for basically all business functions (rent a town hall for an afternoon? They want to see your insurance certificate). Even if those insurance people ultimately sue some individual to recover costs, even if that bankrupts the individual and the court orders their wages garnished for the rest of their lives, it's very easy for someone to cause damages exceeding their lifetime earnings — a single accidental death can cause such damages all by itself, though the most common cause of this, driving, generally doesn't come with such harsh penalties on the human responsible.


The example of a VC that hires a typist even though he can type faster is apt.

Today almost nobody hires typists because information technology has advanced to the point that it is faster to do it yourself than have to communicate to another human.

On a different point, I doubt that use of AI will be energy-bound any time soon (training is a different matter). Inference cost will drop dramatically as dedicated hardware comes online and algorithms continue to improve.


Interesting, but kind of reads like a salve for the anxious.

With as much economic theory as it espoused for the basis of its argument, it's ignoring some pretty critical ones. There is a limit to what people actually need to subsist, and labor is currently a massive cost in most industries. If labor can be replaced by AI in the production of the world's needs (and many of its wants), then it seems unlikely there will be much space for "inferior" humans. The idea that limitations in compute will constrain this dynamic also seems misguided. These will be offset by continuous improvement in chip technology and energy production, both of which will be advanced by AI itself, ironically. That last bit is part of what's different about this go around.

The article also relies a lot on history. But historical observations don't hold in an unbounded fashion. To make the point via exaggeration: if an AA battery could power all of AI's needed compute for a decade, then the technology would be so disruptive with so little downside that there really is no historical precedent to approximate its impact. This illustrates that somewhere along the continuum, we don't have a model for what comes next.

And then there's the quality of AI itself which of course threatens the human advantage that has persisted through all of recorded history. That is, essentially, there's been no greater asset than the human brain. And for the first time that's no longer true.

Plus, I'm not quite sure the historical analogies are correct in the first place. For instance, the move of workers en masse from agriculture to industry happened in part because the industrial revolution demanded more workers. It wasn't that technology dislocated agricultural workers, leaving the world to invent something else for these people to do.

At the end of the article, he makes a point that there is some risk that AI will demand its own means of production. That seems wildly out of step with the rest of the article. That is, if he worries that humans will end up in such a subordinate position to AI, it's hard to imagine AI won't find its way to a place of greater efficiency (which necessarily excludes human labor) and then demand that it be law.


The author points out that plenty of societies have starved and killed their population en masse because cash crops were more valuable.

Then he states that he doesn’t believe that societies will starve out their population en masse to bid up energy for AI because it’s never happened before.

What.


It comes off as idealistic magical thinking. It is the norm for humans to be corrupt, selfish, and cowardly. When certain owners of capital are able to automate away jobs, they will keep the profits and fire workers. There isn't automatically some magical alternative or replacement work for them, only more competition for the few remaining scraps of labor elsewhere.

My conclusion is most societies will either tear themselves apart in civil war after too many people are reduced to penury without socioeconomic mobility for too long, or socialism will be expanded and billionaires will be taxed.

Since the ultra-rich cede nothing without a fight, eventual civil war is the most likely outcome, because they have captured the US political process and de facto direct it toward their continued enrichment at the expense of everyone else.

Remember: Social Security under the New Deal, unlike the Townsend Plan, was a class warfare compromise and half measure. Modern Medicare isn't good health insurance, involves for-profit insurance companies, is complicated, and is very expensive to customers and to the government. UBI and universal healthcare will be necessary in America, but are far off on the horizon because of political polarization and manufactured consent shifting views to far right authoritarianism.


While there’s an absolute comparative advantage, this analysis ignores a subsidized comparative advantage from VC money. It talks about comparative advantage changing over time, but that’s not quite what I’m getting at.

Most LLMs are currently run at a loss as far as I know. If a company can do that long enough to put an industry out of work, that’s a huge problem.

Combine that with advertising that the AI is better or safer than a human doing the job, and humans might not be allowed back in the industry even if the AI competitive advantage becomes unsubsidized.


> Most LLMs are currently run at a loss as far as I know.

That's the entire question. Inference is cheap as all hell. I think OpenAI is covering the cost of compute at $20/month per user, because I can run Llama on my laptop without the fans spinning up. Of course, only OpenAI knows how much it costs to run, but that's what your argument hinges on.
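A back-of-envelope sketch of that hinge point; every number below is an assumed placeholder for illustration, not a real OpenAI figure:

```python
# Rough "inference is cheap" arithmetic. The GPU price, batched
# throughput, and per-user usage are all assumptions, not real data.
gpu_cost_per_hour = 2.00                # assumed cloud GPU price, $
tokens_per_second = 1_000               # assumed batched throughput
tokens_per_month_per_user = 500_000     # assumed heavy-user usage

cost_per_token = gpu_cost_per_hour / (tokens_per_second * 3600)
monthly_cost = cost_per_token * tokens_per_month_per_user
print(f"${monthly_cost:.2f} of compute vs a $20 subscription")
```

Under these made-up numbers a heavy user costs well under the subscription price; whether the real throughput and usage figures look anything like this is exactly what only the provider knows.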


Noah's posts always remind me of this quote: "Better to remain silent and be thought a fool than to speak and to remove all doubt."


Is being seen as a fool that bad? Note that there is another saying: “He who asks a question is a fool for five minutes; he who does not ask a question remains a fool forever.”


Cunningham's law: someone has to be the fool to get someone else to give the right answer. If that person has to be me, I'll make that sacrifice.


"Who is more foolish? The fool or the fool who follows him?"


"I pity the fool" - Mr. T


The scientific process is nothing but Newton's method of numerical approximation.

And in Newton's method we always have an initial guess or starting point. It's good to have starting points and then refine them iteratively.
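For the analogy's sake, a minimal Newton's method in Python, showing the "initial guess, then iterative refinement" shape:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Refine an initial guess x0 by repeatedly linearising f at x."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # stop once the correction is negligible
            break
    return x

# sqrt(2) as the root of x^2 - 2, starting from a crude guess of 1
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # converges to about 1.41421356
```

A bad starting point can still converge, just more slowly (or to a different root), which is arguably the point of the analogy.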


I think offering your opinion on a subject and being disproved is a very healthy way of learning about it. Having a conversation should be the norm


Wow I had no idea that my comment was a Rorschach test.


So there's no discussion of how competitive advantages push toward wealth concentration: even if there are plenty of jobs, people would still get a tiny piece of the pie and starve.

You see, the reason people participate in the economy isn't the same reason some people get rich and others poor.

And yeah, an article about economics ignores some very pressing qualities of AI, like that the current energy usage is expected to be a fluke, or that it just won't "want" to own the means of production. But yeah, it's about economics, so that's not a huge problem.


I want to be an AI optimist, but the author makes a few assumptions which I don't think hold up.

Re: comparative advantage... Outsourcing tasks you comparatively prefer not to do doesn't translate into someone paying you for it, if AI is far better at it and 100x cheaper than you are.

Re: AI is location and compute constrained, so people will still be able to get paid for automatable tasks... only for now. And those wages will shrink along with the number of human positions as AI scales.


> as AI scales

Yeah, it has the feel of a lot of thinking around AI that doesn't take into account improvements in AI itself.

For instance, the whole idea of "prompt engineer" being a field of the future. Pretty obvious that AI iterations will continue to improve and quickly obviate the need for such a thing.

Feels a bit like humans desperately trying to hold on to our relevance. "Surely the computers must need us for something."


What I'm drawing from this is that we'll become secretaries for the computers. Before, we were digital plumbers, gluing one data spigot to a transformation and on to another; now we're just secretaries for them. Wonderful.


This article seems to miss the basic point about exponential technical progress shifting the "comparative constraint" every 18 months.

A $35 Raspberry Pi running a chess engine blows away Deep Blue, thanks to algorithmic and compute progress. So the comparative constraint is on an exponential fall-off curve, not a constant per "more valuable" area to be automated. Not only that: given the algorithmic and compute advantages of today versus the late 90s, a bright high-school student can research a bit online and build a competitive solution.

Right now it may cost $10 in compute and a day guiding GPT-4 toward productive economic activity; but GPT-7-turbo will cost 2c and take 2 minutes of guidance for equivalent comparative value.
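As a sanity check on those figures, here's the timeline they imply under an assumed Moore's-law-style halving of inference cost every 18 months (the halving rate is my assumption, not the comment's):

```python
import math

# How long does it take $10 of compute to fall to 2 cents if the
# cost halves every 18 months? (Halving cadence is an assumption.)
cost_now, cost_then = 10.00, 0.02
halvings = math.log2(cost_now / cost_then)
years = halvings * 1.5
print(f"{halvings:.1f} halvings, about {years:.1f} years")
```

So the 500x cost drop needs roughly nine halvings, a bit over a decade at that cadence; faster algorithmic progress would compress it.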

It would make no sense to look at early computers and say phone operators will always have a job because automating phone switchboards with computers would use "too much energy" and there are "more valuable things" for computers to do versus the low cost of human operators.

Fair to say that jobs will change :)


There’s another reason AI won’t eliminate jobs. People like to hire people who are like them (also who like them, and who they like).

The only people who AIs are like are AI engineers (more broadly, engineers, scientists, etc). For an AI engineer “hiring” an AI is natural. Nearly every other normal person on the planet would rather hire another person they can easily relate to, regardless of that person’s comparative or competitive advantage.


sometimes you just don’t have any social energy battery life left, and just want a drink from a vending machine instead of making small talk with the cute barista that you have no chance with. The barista could be an ax murderer. They could be a really toxic person to even have to interact with. Not everyone wants to put up with that, even if they look like us, we sympathize and empathize that they’re grumpy because their aunt Sally died. Vending machines replace bartenders? I mean kind of; don’t see many soda jerks around anymore.

If that makes me not normal enough, shit. lean me up first against the wall when the revolution comes. (And it’s coming.)


The barista isn’t an axe murderer.


What about buying? The buyer only has to really like the seller. And if they provide service generated by AI that is enough. The seller can eliminate lot of people that do actual work. And instead just have thin layer of prompters...


> People like to hire people who are like them

So we're looking for a unicorn startup that provides AI with personalities and the ability to mirror their bosses?


> The dystopian outcome where a few people own everything and everyone else dies is always fun to trot out in Econ 101 classes, but in reality, societies seem not to allow this.

Companies are granted rights by the state for their duty to employ people.

No human employees, no need for special rights such as tax deductions.


How does regulatory capture factor into this? This article is primarily about historical wages but does touch on wealth:

> If real per capita GDP goes to $10 million (in 2024 dollars), rich people aren’t going to think twice about shelling out $300 for a haircut or $2,000 for a doctor’s appointment. So wherever humans’ comparative advantage does happen to lie, it’s likely that in a society made super-rich by AI, it’ll be pretty well-paid.

The first chart of historical wealth trends shows the opposite: https://usafacts.org/articles/how-has-wealth-distribution-in...

Will there be enough "rich people [that] aren’t going to think twice about shelling out $300 for a haircut" to buoy wages and wealth?

That sounds like trickle down economics which has not been found to work that well (or so I have heard.)


The trickle-down scenario is one where wealth inequality does not come into conflict with democracy.

In traditional societies a wealthy household which could afford a maid but did not have one was committing a social injustice.

A poor household can afford a robot maid. A wealthy household can afford a human maid.


TL;DR: Humans can be worse than robots at everything and still have well-paying jobs due to comparative advantage.

Comparative advantage: https://en.wikipedia.org/wiki/Comparative_advantage


Yes, but this is an advantage that will become less and less true, very quickly.

It's not like we will converge to some state where AI workers and human workers will coexist in some ratio.


From your response it looks like you don't know what comparative advantage is.

Comparative advantage does not mean that humans are better than AI at something. "Comparative advantage" != "Absolute Advantage"

Comparative advantages don't usually disappear just because one party gets better and better.
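A toy numeric illustration of that distinction, with made-up productivity figures (all numbers assumed):

```python
# The AI has an absolute advantage at both tasks, yet trade still
# pays because opportunity costs differ. Output-per-hour numbers
# are invented purely for illustration.
ai = {"code": 100, "essays": 100}
human = {"code": 2, "essays": 10}

# Opportunity cost of one essay, measured in forgone units of code
ai_cost = ai["code"] / ai["essays"]            # 1.0 code per essay
human_cost = human["code"] / human["essays"]   # 0.2 code per essay

assert human_cost < ai_cost  # human holds the comparative advantage in essays
print(f"an essay costs the AI {ai_cost} code, the human {human_cost} code")
```

Even if the AI's productivity at both tasks grows 100x, the opportunity-cost ratio (and hence the comparative advantage) is unchanged, which is the point the parent is making.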


Hardware is getting cheaper and more plentiful.

If you need AI to do more work and you need more hardware, you just buy more hardware. And this becomes more true with time.


Yes, unfortunately, Noah's argument rests on at least two assumptions that do not hold:

1) That constraints on compute will mean that humans will have jobs to do that AIs are too busy to do, because they're fully maxed out on capacity doing more important jobs.

This doesn't hold up when the marginal cost of creating and running a new AI instance falls below the marginal cost of raising and feeding a human. Noah believes it will not because he assumes:

2) that the resources that AIs need (compute) are not in competition with the resources that humans need (food).

However, they are. We can repurpose farmland and irrigation water to datacenters, fabs, and cooling towers. Maybe that wouldn't matter if all the AI-created wealth meant that you didn't need to do valuable labor to feed yourself, but:

3) If vast wealth is created by AI, there will be enough for everybody, so there's no need to be rushing to accumulate capital now.

In his article, he dismisses the people who are worried that the AI's owners will accumulate all the wealth and be the only ones who can afford food or housing. Comparative advantage provides no reassurance here; there's nothing in economics that prevents productivity gains from being outpaced by a growing wealth disparity that delivers all the productivity gains to the top and then some. And it's exactly the outcome you'd expect without government redistribution.
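A toy model of that dynamic, with all growth rates assumed purely for illustration:

```python
# Total output grows every year, yet everyone outside the top ends
# up with less, because the top's income share grows faster than
# productivity. All rates and starting values are assumptions.
gdp, top_share = 100.0, 0.20
for year in range(20):
    gdp *= 1.03                               # 3% productivity growth
    top_share = min(0.99, top_share * 1.06)   # top share grows faster

everyone_else = gdp * (1 - top_share)
print(f"GDP {gdp:.0f}, non-top income {everyone_else:.0f} (started at 80)")
```

In this sketch GDP nearly doubles while the non-top share of income shrinks in absolute terms, which is exactly the "productivity gains outpaced by disparity" case described above.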


The main advantage of humans for now is interaction with the physical world; once that is solved (starting near the end of the year with the Tesla bot and similar), humans will be at less of an advantage.

That way humans can fall back to their main activity, which is not art (art has been taken by AI) but eating.

At least eating is not something AI plans to do.


Have there actually been advances in robotics that could make them mainstream, or is the progress still painfully slow? I think robotics would be the next logical step after the AI hype, but I'm having a hard time figuring out which company will be the next Nvidia there.


No one is there yet.

We have made enormous jumps with transformer models (language models being integrated in the stack, and vision language models (VLM), and even vision language action models (VLA)). Pair this with recent advances in reinforcement learning and we can do things we couldn't do a few years ago; the whole space is fascinating!

...but we're nowhere near usable robotics in complex human environments for complex multistage tasks yet. It's an incredibly difficult problem where we still don't have the hardware, the software, or the sensors for it. The other thing to keep in mind is that since it's so expensive and difficult to do, there's no marketplace yet driving significant advances like we had with phones to hone in on such efficient amazing pieces of tech across the industry.

I wrote a bit about where we're at with the research last year; it's a bit out of date with even cooler advancements, which is why I'm working on another article in the same vein now.

https://hlfshell.ai/posts/llms-and-robotics-papers-2023/

I also did a project (Master's thesis) integrating an LLM into a ROS2 stack as a high level action planner, specifically to prove that LLMs have contextual understanding of the real world that can benefit planning missions.

https://hlfshell.ai/posts/llm-task-planner/

Always happy to talk this subject.



Not just eating, but reproducing and seeking pleasure from anything they can.



