Cynical view: without new regulation in the area of AI, it will reduce the value of labor in many domains and completely eliminate the need for it in many others. Profits will go to companies like OpenAI, unemployment will rise, people will be left to fend for themselves, and that's exactly what's going to happen.
Realistic view: Unemployment is actually at an all-time low despite centuries of industrialization, automation, etc.
AI technology is becoming a commodity at a rapid rate. OpenAI has a nice data moat, but their tech is being copied left, right, and center, and much of what they do has been replicated successfully by others, including some open source projects. I don't see OpenAI ending up with all the profit here.
AI is a transformative technology for sure. But just like previous introductions of transformative technology, it won't play out the way the doomsayers predict.
Most of the goods we buy and consume are actually produced in parts of the world where workers are exploited just fine without the help of AI. The dystopia already exists; just not in our little bubble. And a lot of those places have leveled up quite a bit economically in recent decades. So things aren't that bad anymore.
Our own past is actually built on the dystopia of the industrial revolution, where people had no rights and worked until they dropped dead. Most of us on this forum have jobs that most of those people would not have considered real work. Hence us procrastinating on Hacker News instead of doing real work.
AI will cause more of that to happen everywhere. But we'll find ways to keep ourselves busy. And more free time means that we can do things that are valuable to us. And what are economies other than just the accumulation of things we value? It used to be that we mostly valued not starving to death, and most of our economies were basically about food production. Now food production is only a tiny part of our economies. We found more valuable things to do. Whatever AIs do or don't do, we'll find new things that are valuable to us. AIs simply expand our economy to include more such things. That's what transformative technology does. It grows our economies.
> Unemployment is actually at an all-time low despite centuries of industrialization, automation, etc.
You've got an interesting point there. But I'm wondering, isn't this mostly true for places like the US? Looking at it globally, it's a bit of a mixed bag: global unemployment is higher now than it was 30 years ago, for example.
And about the industrialisation bit – I mostly agree with you, but let's not forget the hard-fought battles for fair working conditions. We got to where we are because people stood up for their rights, not just because machines started doing the heavy lifting. The original post seems to nudge towards more rules or better safeguards for AI, kind of like what happened with the rise of factories. Are you not in favour of that?
Small sidenote: calling your own view 'realistic' might put some people off. It sort of implies other opinions are not, you know?
> Small sidenote: calling your own view 'realistic' might put some people off. It sort of implies other opinions are not, you know?
I'm just countering the cynical view here, which puts me off, at least. Anyway, there's always somebody who is going to disagree. To me, the cynical view has always been there historically, and it's usually wrong.
I don't actually agree that AI is causing any perceived worker injustice. The US is a bit special because it generally seems to be different from the rest of the world in terms of its lack of worker protections: getting decent health insurance, job protections, and not being forced to work crazy hours just to get slightly over the poverty line. Whether you agree with that or not, a lot of that predates the whole "AI is bad" debate and is simply the result of decades-long policy. Rolling back or changing those policies is a different topic. IMHO that would be a good thing regardless of AI or the transformative economic effects of any other innovation.
And to counter that, I've mostly lived in places where things arguably are not that bad. People get decent insurance. They don't work crazy hours. And they mostly get paid fair wages for what they do. There are some exceptions to that of course. But people are doing pretty OK and I don't think that will change because of AI. I just don't see the need for a lot of pre-emptive measures here.
Globally, we have more people than ever, and they are wealthier and healthier than ever. Sure, there are some pretty grim outliers, but that's mostly in places with despotic regimes and really crappy economies. That too is not caused or made much worse by AI.
The opposite of a cynical view is an optimistic one, not a realistic one. Optimism like yours has been as wrong as cynicism throughout history, and it's unrealistic to believe otherwise.
I wholeheartedly agree (at least for the near future of a decade or so). IMO the only thing HN needs to worry about is that this round it could be programming that's among the careers obsoleted, and all the really well-paid jobs will be things that require being good with your hands. So maybe people who are paid to think still have jobs, but they're paid more like retail work.
Which, y'know, fair enough if that does happen. There are worse problems to have. Mankind will be in a great spot. I always wanted to learn to weld.
> much of what they do has been replicated successfully by others, including some open source projects
This is true, except for GPT-4, which no one has come close to matching in actual usage.
As long as OpenAI keeps making the very best model available via a consumer-friendly interface and via API, I'd wager they remain in the lead.
That could be a temporary thing and is mostly just a side effect of them having a lot of money and infrastructure. I don't think most of the world is ready to just defer to them and give them all their data. There's a lot of incentive to come up with alternative solutions. The more OpenAI earns, the bigger the incentive to work around them.
As for the UI and UX: ChatGPT looks a bit like a rush job. Midjourney and others figured out that discourse wasn't half bad as a UI, and it kind of snowballed from there. ChatGPT basically took that and did not add very much to it. It's very middle-of-the-road as a chat UI, actually. Completely unremarkable. Without knowing what they did exactly, it looks to me like someone spun up a JavaScript project, found some chat-related libraries and components, and knocked together a prototype in a few weeks. I've been there and done that on a project a few years ago. It's not that hard, and a very sensible thing for them to do.
The value of ChatGPT is of course in the quality of the conversation, not the UX. I expect a lot of innovation around this topic during this year, and it might not be OpenAI that leads it. UX is so far not their core strength. I know they are hiring pretty aggressively to fix that. But it's not a given that throwing money at the problem will ensure an easy victory here. World + dog is focusing on outdoing them on this front. They'll have a lot of competition and investment.
> Realistic view: Unemployment is actually at an all-time low despite centuries of industrialization, automation, etc.
A key word is "centuries" - previously, industrialisation was not such a rapid process (it did indeed take centuries) - people had time to reskill, and new generations didn't pursue their parents' professions if those jobs got automated.
This time it's more plausible that someone who goes to college this year will graduate in 5 years, work for maybe 5 more, and then find their profession completely automated 10 years from now.
That 'unemployment is at an all-time low' statistic is disingenuous; one needs to include the sibling statistic that 'underemployment is currently at the highest it has ever been.' People are working crap jobs that merely keep them afloat; they are not in the careers they were educated for, they are in those "bullshit jobs" that abuse them.
> Realistic view: Unemployment is actually at an all-time low despite centuries of industrialization, automation, etc.
As a whole, yes... the problem is something else: quality of employment. Well-paying, unionized jobs - the backbone of all Western economies in the "boom age" between WW2 and the fall of the USSR - in farming, mining, manufacturing and industry, which employed lots of people in the past, have either been lost to technological progress (farming) or gone off to China, India, Taiwan, Vietnam and Thailand - mostly because of massively lower wages, but also (especially in the silicon industry) due to massively more permissive environmental protection laws.
What's left for people to make a living is mostly low-skill and extremely low-pay stuff that reasonably can't be automated (cleaning!), a bit of medium-skill stuff like the trades, and high-skill intellectual jobs (STEM). Now that a lot of the high-end jobs are being threatened by AI as well, high-skilled people will be pushed down from there too, intensifying the competition for the lower rungs of society even more.
And let's face it: this will be dangerous, particularly as an ever larger share of global wealth concentrates in the hands of very few people. Simply in terms of wealth relative to the common people, Elon Musk, Jeff Bezos, Larry Ellison, Warren Buffett and Bill Gates are each richer than actual medieval emperors were. This is not sustainable, and eventually (re)distribution fights will break out.
ETA: This just came in - in the last three years, despite a global pandemic wrecking entire economies, followed by the first land-grab war by an imperialist power since WW2 and the resulting economic consequences, the top five of the uber-rich actually more than doubled their wealth [1], at the expense of everyone else. Clearly, this cannot go on for much longer.
> AI will cause more of that to happen everywhere. But we'll find ways to keep ourselves busy. And more free time means that we can do things that are valuable to us.
As if. Any free time we get is immediately usurped by the need to take up a second job just to make rent, not to mention that there hasn't been a significant reduction in hours worked for decades (to the contrary, "expected" aka unpaid overtime has become the norm). Women didn't enter the workforce because of feminism; women entered the workforce because capitalism needed more workers to exploit - with the nasty side effect, now becoming evident, that young people don't become parents at all or only significantly later in their careers, worsening the demographic collapse.
> Our own past is actually built on the dystopia of the industrial revolution, where people had no rights and worked until they dropped dead. Most of us on this forum have jobs that most of those people would not have considered real work. Hence us procrastinating on Hacker News instead of doing real work.
Part of that fight for workers' rights led directly to Karl Marx and Friedrich Engels inventing Communism. My history on that topic is a little vague, so it may be mere ignorance, but I have no reason to think either of them foresaw Stalin.
Likewise for capitalism, given what (little) I know of Adam Smith[0], I don't think he would've foreseen the Irish potato famine.
Smith and Marx both saw the world changing, the era of feudalism passing and fading, and the need for a new system to replace it. What we have now is neither what Smith nor what Marx advocated, though bits of each are still popular.
So… what's the AI version of the February Revolution? What's the AI version of the Great Depression (as in 1929–1939, I didn't mistype "Great Recession")?
I can very easily see ways that AI can bring about surveillance that would make the Stasi blush. Those amateurs were drilling holes in walls and putting bugs in watering cans; today we carry trackers and bugs in our pockets voluntarily, and even when those are restricted, laser microphones are simple enough to be high-school student projects, and WiFi can be modified to run as wall-penetrating radar that can do pose detection with enough resolution to give pulse and breathing rates.
The Paperclip Optimiser is basically the reductio ad absurdum of capitalism's disregard for environmental impact and externalities, except that software generally has bugs and pre-LLM AI generally hasn't shown the slightest sign of what people would consider "common sense", which makes it… my Latin is almost non-existent, "reductio non absurdum"? For what AI may do.
Even between those two examples, while it's certainly possible on paper for AI to give us all lives of luxury with minimal to no work required… from the point of view of the pre-industrial age, so did machine tools, so did the transition from alchemy to chemistry (despite chemical weapons), so did electricity and the internal combustion engine (despite the integrated CO2 emissions), so did atomic theory (despite the cold war)… and despite that, we still have 40 hour weeks.
So perhaps we'll all end up like aristocrats, or perhaps rents (literal and metaphorical) will go up to take the full value of whatever UBI[1] we are given.
[0] While I doubt politicians who quote him know any better than me, this cynicism may be born of the last decade of British Prime Ministers…
[1] IMO, UBI is the only possible way for a sustainable society where AI is at the level of the smartest human, and in practice it's necessary well before AI is that capable — if a suitably embodied AI can do every task that an IQ 85 human can do, for a TCO/time less than your local minimum living wage[2], you've already got 15% of your population in a permanent economic trap.
I also think that UBI can only avoid a hyperinflation loop when the government distributing the UBI owns the means of production, because if they don't then the people who do own the AI will be tempted to raise prices to match the supply of money.
But there's always the temptation for a government to exclude some group, for one reason or another — "Oh, not them, they're foreign. Not them, they're criminals. Not them, they're too young. Not them, they're not smart enough. Not them, they're…", and it's very hard to make those exclusion lists smaller, as those on the less-money list wield less power, and also everyone else would have to lower their expenses if they undid it and shared their wealth more equally.
That is already an issue with technological progress: people protesting because machines are more efficient and are removing jobs. The issue is that the profits of improved technology are not being shared correctly among humanity.
This issue has not been solved. I am glad other people are becoming aware of it with the rise of AI.
If we have slow takeoff, AI will be like any other automation. It will increase economic productivity. Capital owners will benefit the most, but everyone else will also benefit, because it's not zero-sum. People will lose their jobs but new jobs will be created and everyone will get richer.
If we get ASI, then that's a paradigm shift and all bets are off.
Then you will simply see those jobs being moved/outsourced to the third world without regulations, just like virtually all manufacturing was.
China, Russia, India and a ton of other countries won't give a shit about a 'global moratorium on AI research' or an 'assault GPU ban'. The US+EU are a fraction of the world's population.
In some industries, prices will follow wealthier people, as more profit will come from gouging those on the lucky side of the now much larger wealth divide, and the middle class will vanish.
You saw an example of this in ecommerce during the pandemic's economic upheaval. Luxury goods recorded increased sales, as did bottom-of-the-barrel retailers. Services and items aimed at the comfortable middle class we all deserve saw decreases.
I was not talking about scarcity, I was talking about the perversions of markets under an increasing wealth gap. There is no scarcity of brand-name clothes, for example. They're only expensive because people with excess money choose to pay for them, but they cost cents to make, there is no mismatch between supply and demand, and they are often no better than cheaper clothing.
Why would you need a monopoly? They aren't competing on price. It's an irrational market.
It depends what business you're in. If you're a company making yachts, supercars, or private jets, then owners consume much more than workers.
As wealth shifts to fewer hands, companies making mass-market goods are forced to drop prices, squeezing their margins and forcing consolidation and further automation, as the buying power of the customer base disappears. Investment capital shifts into the luxury sector where demand is growing, and prices and production quantities increase.
More and more of the economy gets dedicated to serving the needs of the wealthy (which is essentially what it means for the rich to get richer).
The short answer is because markets are not monolithic.
You can have all kinds of price and market distortion as long as it's limited to a small group of people, or to a group of people that has a significant wealth effect over those who do not have it.
Just like you have services and products specifically for multimillionaires and billionaires, where you can get things that nobody else has, and in many cases isn't even aware exist, you can see that happen for broader parts of society without having a deleterious effect on the overall survival of the human species.
It’s just market specialization, and we haven’t seen it with commodities yet, but we could see it with commodities in the future, and that would really change the game if commodities become available, only to those who are inside of a small pool of people that are making all the money with artificial intelligence, and the rest of us have no access to it. That is a very real possibility.
When sufficient profit motive exists to serve commodity markets for small numbers of people instead of the broad collective, we could get a wage/price acceleration we haven't seen before, where more and more resources are locked up by a smaller and smaller but wealthier and wealthier part of the population - essentially a wage/price spiral, but inside a wealth effect. Think of it as the impact of capital, and you begin to see why wealth concentration could lead to commodity concentration in ways that we've never seen before. We have some examples similar to this happening right now, with the consolidation of industry after industry by private equity, which is unprecedented.
AI has the possibility of accelerating these trends.
We have seen this happen many times: broad and cheap food groups become speciality food groups and product categories, rise in price as they become popular with smaller, wealthier elite groups (think of oxtails, bluefin tuna, certain purebred dogs, fine wines and cheeses, etc.), and become out of reach for general society.
We haven’t seen this for commodities but there is little reason that it couldn’t also come into effect in broad commodity markets with automation and AI remove the need for wage classes.
AI is the automation of intellectual work, but it is also the ground technology for advanced robotics and automation powered by AI. The impact will be more profound than anyone here applying 20th-century economic theory and colloquialisms really understands. The potential for disruption is profound, and AGI and its ongoing improvements are very likely something like a new nuclear threat.
People like to apply the logic of the wave theory of the Internet, mobile, radio, television, etc. You can go all the way back to the weaving machine, the printing press, and the cotton gin.
The real question is whether or not this time is different. Everything is not different until the moment it actually is, and when you start thinking about building a digital brain, you're not just talking about creating another transformational technology revolution; you're talking about replicating the creature that actually creates those technology waves. That's slightly different, and it deserves a little more consideration than many of the people talking about this AI revolution tend to give it. Instead of creating something new that changes society, we're creating something to replace the thing that creates those technology tools and waves in the first place. Something like us.
Like all technology revolutions, the resulting profits and productivity can be used for the uplift of humanity and for social good, or they can be used for selfish motivations. And to be fair, they could be used for selfish motivations and still end up benefiting all of humanity, if the system allows those selfish motivations to benefit the larger society. But that comes down to wise governance and careful shepherding by those in power. Gene Roddenberry is prescient as always.
So, in short, you either end up with amplification, like you're talking about, which creates a stratification between the classes that benefit from AI technology and the classes of those who are left behind, or you see a general transformation of the overall society that brings everyone into a new era of productivity and changes everything on earth as a paradigm-style transformation. Which model applies? The question is this: is it different to build a new machine, or a machine that dreams up and makes new machines?
Look up what the business model for in-app purchases is. In short, it's a few rich "whales" that sustain those economies; everyone else is just providing free labor to amuse the "whales".
Optimistic (slightly dystopian) view: governments will step in and build national artificial intelligence machines. OpenAI will be competing with open source LLMs. We'll all re-create society structured around a priesthood / ceremonial worship of a big AI that symbolizes how great your "tribe" is, and compete with the followers of the other gods.
I agree, except for the OpenAI part. These subscriptions are currently much cheaper than labor - about 100x cheaper even for poorly paid jobs - so there is simply not that much value for OpenAI to capture relative to the total size of the economy. In other words, if the employee costs $2000 and OpenAI charges $20, then $1980 is captured by the company using the tech, not by OpenAI (see the quick sketch below). So there would be a problem of course, but it's not like the value would necessarily go just to the tech industry. Instead it might go increasingly to those who own "the means of production", to use a Marxist term for analysis purposes.
And if the prices they charge go up, it should be possible to compete with open products.
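A quick back-of-the-envelope sketch of that split. This is purely illustrative, using the hypothetical $2000 wage and $20 subscription from the comment above, not real wages or real OpenAI pricing:

```python
# Toy illustration of who captures the value when an AI subscription
# replaces (or augments) a worker. The numbers are the hypothetical
# ones from the comment above, not real figures.

monthly_labor_cost = 2000.0   # what the employee costs per month (assumed)
monthly_subscription = 20.0   # what the AI vendor charges per month (assumed)

cost_ratio = monthly_labor_cost / monthly_subscription          # ~100x cheaper
employer_capture = monthly_labor_cost - monthly_subscription    # $1980
vendor_share = monthly_subscription / monthly_labor_cost        # ~1%

print(f"subscription is {cost_ratio:.0f}x cheaper than labor")
print(f"vendor captures ${monthly_subscription:.0f} ({vendor_share:.0%})")
print(f"employer captures ${employer_capture:.0f} ({1 - vendor_share:.0%})")
```

Under these assumptions, roughly 99% of the saving lands with whoever deploys the tech rather than with the vendor, which is what makes the "means of production" framing fit.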