Hacker News
AI won’t replace humans, but humans with AI will replace humans without AI (hbr.org)
239 points by sahin on Aug 5, 2023 | 232 comments



I'll use analogies with previous technologies.

Agriculture shaped humans to create buildings, social classes with different levels of power, armies, and governments and rulers to protect themselves from other tribes. Gradually these grew into empires and created monotheistic religions to substitute for animist religions.

The Industrial Revolution shaped humans and culture to move to cities, work in factories/offices instead of home/fields and to think in more abstract terms (math, written words, etc).

Cars shaped human society, governments and urban plans to be heavily dependent on roads and all the car assistance economy. Think Houston/Los Angeles vs. Copenhagen/Amsterdam.

Radio, television and publicity shaped our daily lives to consume a lot of crap we don't really need, from cigarettes to diamond rings.

Social media shaped our political discourse into tribal stupidity and paranoia.

So just use basic Marshall McLuhan: the medium is the message. Technology shapes humans and culture. AI will shape humans and culture. We just don't know how.


> Cars shaped human society, governments and urban plans to be heavily dependent on roads and all the car assistance economy. Think Houston/Los Angeles vs. Copenhagen/Amsterdam.

Copenhagen and Amsterdam were well on their way to becoming as car dependent as any other American city. It's not just that those cities were built before the car, it's that at some point they decided to stop shaping their city for the car.


> Amsterdam were well on their way to becoming as car dependent as any other American city.

This claim gets trotted out quite often, but it is only valid if you are specifically referring to the development of Amsterdam’s bicycle infrastructure. Even during those years when Amsterdam saw a rise in car traffic, the city never stopped offering public transit of the sort that most Americans could only dream of. So, the average resident was not forced to own a car.


Also, it's worth noting that American cities demolished good infrastructure (for bikes, pedestrians, and transit) to make way for the veiled promises of Big Auto and Big Oil. The transition to car dependence in America was a willful and deliberate act, albeit heavily influenced by lobbying and propaganda from the auto and oil industries.


I don't think that occurred through an act of God though. The point was that those cities didn't follow the American model because of the will of their people.


Agriculture did not shape monotheism. Monotheism arose among both nomadic and non-nomadic nations, and agriculture also produced the polytheistic religions of Asia. No correlation here.

Meanwhile, current AIs are just empty linguistic models that have no proper knowledge and cannot tell what's true from what's fully BS; they will defend both, or ask forgiveness for telling both: truth and rubbish hallucination alike.


Indeed, the oldest known surviving buildings are thought to be religious and monumental in nature. [1] For all we know, it could have been religion that made people settle and take up agriculture. [2]

[1] https://en.wikipedia.org/wiki/List_of_oldest_known_surviving... [2] https://www.nationalgeographic.com/magazine/article/gobeki-t...


AI will change society by getting rid of the necessity of vast numbers of people to be involved in production. Factories will become like farms, which need few people to produce goods for billions thanks to combine harvesters and so forth that were 20th century inventions. You'll have auto plants out in the middle of nowhere overseen by a few technicians. Cities will be for the poor. People will be stacked and warehoused on UBI. This is "Ze Sustainable awwnd incluzive fewcha [said in a thick german accent]."

Here, let me pull up Klaus Schwab's "Covid-19: The Great Reset."

"As far back as the Black Death that ravaged Europe from 1347 to 1351 (and that suppressed 40% of Europe’s population in just a few years), workers discovered for the first time in their life that the power to change things was in their hands. Barely a year after the epidemic had subsided, textile workers in Saint-Omer (a small city in northern France) demanded and received successive wage rises. Two years later, many workers’ guilds negotiated shorter hours and higher pay, sometimes as much as a third more than their pre-plague level. Similar but less extreme examples of other pandemics point to the same conclusion: labour gains in power to the detriment of capital. Nowadays, this phenomenon may be exacerbated by the ageing of much of the population around the world (Africa and India are notable exceptions), but such a scenario today risks being radically altered by the rise of automation, an issue to which we will return in section 1.6. Unlike previous pandemics, it is far from certain that the COVID-19 crisis will tip the balance in favour of labour and against capital. For political and social reasons, it could, but technology changes the mix."

Schwab, Klaus; Malleret, Thierry. COVID-19: The Great Reset (p. 39). Forum Publishing. Kindle Edition.


> and created monotheist religions to substitute animist religions

I often see this interpretation that monotheism is somehow more “advanced” than polytheism (even Age of Empires suggested so), but why exactly? Because Christianity and Islām became so popular?


Arguably the key point for Christianity and Islam wasn't that they were monotheistic, but that they were _proselytising_; their adherents had a strong incentive to spread them. Granted, a polytheistic religion is probably less likely to be proselytising; "those people over there have our gods and we have our gods, that's fine" is an easier proposition if you're okay with multiple gods in the first place. But it's not universal; notably the Romans tended to actively try to spread aspects of their religion to other polytheists (while also adopting foreign polytheistic concepts; Roman religion was messy). Conversely, Judaism does not proselytise.


> Conversely, Judaism does not proselytise.

That is a later development. In the first millennium AD there were active attempts to export Judaism, the most famous result of which was the conversion of the Khazar kingdom to Judaism. And the very etymology of the word “proselytising” refers to Judaism making some inroads among other inhabitants of the Mediterranean.


Fair, it happened occasionally. In most cases where it’s well-documented, though, it seems like it was more realpolitik than the sort of “spreading the good word” approach of the other monotheistic faiths; AIUI the details of the Khazar case are a bit of a mystery.


I agree with your point that proselytizing is key. That and birth rates.

Slight tangent: Even under the frame of animist religions, there is sometimes the notion that the more worshippers a god has (and the more devout), the more powerful that god becomes. So using this frame, monotheistic religions have a more powerful god to use to "conquer" other gods.


This makes a lot of sense from a perspective where the "gods" are essentially ancient AI running on distributed (neural) hardware and communicating with itself via our standard human sensory channels that are able to hijack our wetware in proportion to our devotion. Monotheism then becomes a form of centralization of processing, restricting societies to a (more) unitary direction of intent [at the cost of much waste at the level of individual processing units (people)].

A slight tangent to your slight tangent: Monastic regimens look an awful lot like proof-of-work... Is the next step in facilitating the human-machine meta-minds minting AI crypto-religions?

Also, taking into account the earlier thread on Sci-Fi vs Sci-Fantasy, with a comment mentioning that science fiction tends to be current social commentary in the guise of the future: self-aware (and self-minting) currency a la Accelerando is an interesting analogy for religious, spiritual, cultural, and economic movements.


Birth rates wouldn't historically have been a big differentiator. There are edge-cases, I suppose (some early Christian sects encouraged complete celibacy, for everyone!) but in general effective contraception wasn't available, and even where it _did_ exist (Romans _may_ have had some sort of somewhat effective contraceptive/abortifacient drug, say), it didn't attract all that much religious interest until the modern era.


It’s that way in the Civilization tech tree, so it must be true!


Maybe by creating more unity in their societies?


Alternatively, follow the pre-Christian Ancient Roman approach and maintain unity by assimilating foreign religious practices. I think polytheism makes this method easier.


At certain points in its history, Christianity was fairly eager to ingest aspects of existing religions if it meant that they got the account (conversion). This eventually became controversial (eg see https://en.wikipedia.org/wiki/Chinese_Rites_controversy), but a lot of early regional Christian saints, say, are, well, suspiciously similar to pre-existing deities or other pagan figures.


Christianity has assimilated other religions and their practices, though.

As a prime example, many elements of Christmas such as the yule log originate from so-called pagan religions such as Norse.


> Christianity has assimilated other religions and their practices, though.

And over the years Christians have persecuted many when their beliefs didn't agree with the dominant interpretation. The Romans were relatively tolerant of indigenous beliefs, as long as they didn't lead to rebellion.


> I often see this interpretation that monotheism is somehow more “advanced” than polytheism, even Age of Empires suggested so but why exactly?

In your life of plenty, poor people with nothing were being ritually humiliated several times over, damning their mental health in hell. One god is less harmful to mental health.

Considering that religion is psychological warfare in the absence of today's world of personal surveillance technology, you would have thought some religions could be more forgiving of people's misfortune; but is polytheism a classic example of power unchecked?

Hinduism was designed to normalise human deformities in a region struggling with environmental toxicity and poor health.

Christianity is best for a European climate, Islam is best for a Middle Eastern climate.

Religion was an attempt, in the absence of simple items like pen and paper or infrastructure like schools and roads, to educate people through storytelling and place names in order to protect them from harm.

That's why Christian churches are typically built on natural springs, whose water is considered holy.

Christian churches are typically in the shape of a cross, Knights Templar churches are round.

And don't look a gift horse in the mouth when it's smuggling gold to and from the Crusades! Has anyone tried that in their computer games, or is that a knight mare?


> The Industrial Revolution shaped humans and culture to move to cities, work in factories/offices instead of home/fields and to think in more abstract terms (math, written words, etc).

Surely you aren't suggesting that mathematics, language, philosophy, or abstract thought in general is only 300 years old. Surely not.


These ideas becoming mainstream: that's what is just 300 years old. Before that, few people could read.


> governments and rulers to protect themselves from other tribes.

More like it facilitated small groups taking over larger communities or communally-usable resources, creating governments to manage their wealth and underlings and armies to attack and loot other tribes.


I see all this as the evolution of language, because that's how ideas are preserved and replicated. Ideas evolve over time. They form an evolutionary system, moving much faster than biology. Humans and AI both rely on the corpus of language to learn and apply ideas.

We're no longer the only way ideas can replicate - LLMs have joined the party. Language gained a new mechanism for self-replication. We should focus less on humans vs AIs, and more on the language corpus itself, the true source of intelligence.

This corpus was built through experience, slowly and collaboratively, by billions of people in parallel. No single human could have done it alone - it's smarter than any one of us, the product of our collective work and luck.

When you train an LLM on it, the model becomes generally competent. But did the competence come from the model or the data? Should we obsess over architectures, or look to our datasets?


I've heard that before, mostly from Ray Kurzweil talking about the singularity.

But I have a problem with this usage of the "evolution" and "evolutionary system" expressions. I believe that it entails "improvement", that it is judgmental.

I think it is self-centred and presumptuous to imply that the world improved toward us.

I don't think Darwin used the expression "evolution" with a judgmental meaning. He only meant that it was an adaptation to existing environmental conditions.


And failure to adapt leads to extinction.

AGI has the potential to be an improvement on humans because it has the potential to be relentlessly rational, reality-aware, and self-correcting.

We don't - most likely can't - operate like that. Although we like to flatter ourselves that we're a rational and intelligent species all of us live in an unaware haze of myths, factoids, media-seeded fake narratives, self-serving fantasies, and outright reality denial.

We have a rational process called "science", but politically it lives in a special box. If it starts to interfere with established power hierarchies it's shouted down.

And hardly any humans do science at all.

Current AI is just a warm-up for what happens next. Metaphorically it's the equivalent of 8-bit computing. 32-bit AI will be entirely different, very possibly incomprehensible, and potentially far more threatening.


If AGI is something that becomes real, I don't think it would be recognizable to us for long. I don't think it will reach a steady state like humans do, reading books to its children in the cloud, getting really good at coding and similar tasks, and improving itself infinitely (into what?). I think life has a special property in that it is constrained, and so it exists. Imagine being able to become pretty much anything: what would you choose?

We can't even comprehend what a system like you describe would evolve into; I don't think we can even fathom it. It's not going to be some ultimate taskmaster who gets 18 holes-in-one at golf on Friday and takes all our jobs, for some reason.

What I think we're doing is applying our problems and the things we'd want (infinite self-improvement) because they're things we feel would be an advantage to us. A robotic system though? Not sure it would care.

It seems like AI is the ultimate endgame, but I'd imagine if we survive the AI era, we'll be past it pretty farking fast.


It's just self replication. Ideas that are useful in some way get replicated, other ideas forgotten. They travel a lot and mutate. They all depend on human interests, but exist beyond individual humans.


Nice to see that the idea of Richard Dawkins' meme lives on, fittingly enough.


> Agriculture shaped humans to create buildings, create social classes with different levels of power, create armies, governments and rulers to protect themselves from other tribes

When looking at Europe and all the different languages spoken, along with the overarching religion of the region if not the world, Catholicism, headquartered in Europe, are you not left wondering why they didn't put more effort into getting different tribes to communicate in a common dialect to break down barriers?

Or if Latin was their common dialect, does it show that royalty and governments are the divisive ones, pitting people against each other and continuing to get a free pass whilst their atrocities are airbrushed out of history?


Controversially, agricultural societies that were late to industrialize have larger populations now and are replacing early-industrial societies, owing to lower birth rates among industrial workers.

So I doubt humans with AI will replace humans without.


Now compare quality of life between both.


Quality of life in a very narrow materialistic definition for an individual? Or overall and sustainable society quality of life?


Enjoyment from life.


What is enjoyment though? Suicides, mental health issues, obesity, various substances abuse…

Let alone that such enjoyment comes at the expense of the next generations, whether by wasting resources and abusing the environment, or through generations that will never be born thanks to a crappy birth rate.

So much enjoyment and such a high quality of life that people ain't willing to procreate. Yay. Usually nature reacts that way in the opposite environment.


> Suicides, mental health issues, obesity, various substances abuse…

Yeah, because those don’t exist in third world countries.

> Let alone that such enjoyment is on next generations. Wether by wasting resources, abusing environment.

Oh boy, here we go.

> Or generation that will never be born thanks to crappy birth rate. So much enjoyment and such a high quality of life that people ain’t willing to procreate. Yay. Usually nature reacts in such a way in opposite environment.

You base your assumption on a belief of good birth rate=good life=what nature thinks is good, which is completely incorrect.

Give me at least one reason why me and my girlfriend should sacrifice her health, career, mental well-being, money and time to do something we both don’t want to?


> Yeah, because those don’t exist in third world countries.

Of course they do. That's just a natural part of human existence. But how those issues are dealt with is vastly different. Nowadays in the "developed" world, people are pretty much left to their own devices. And some asshats go as far as saying that trying to help those who are suffering is bad. "Fat acceptance" is one of the worst.

> You base your assumption on a belief of good birth rate=good life=what nature thinks is good, which is completely incorrect.

That's how it goes in nature, doesn't it? A life form in a suitable environment starts replicating until it meets natural boundaries. Once the environment is no longer suitable, it starts to shrink. I wouldn't call it a "good life" or a "bad life". That's just how the world rolls.

> Give me at least one reason why me and my girlfriend should sacrifice her health, career, mental well-being, money and time to do something we both don’t want to?

I don't have to give you a reason; your environment was supposed to. But this is very interesting. You're basically saying that a good part of your quality of life relies on not having kids, not on your high-quality-of-life environment. The reverse holds too, IMO: people in poor countries would have a higher quality of life if they stopped having kids. Fewer resources toward kids/education/whatever, more toward nice stuff. But this is a bit like maxing out credit cards and then calling that a high quality of life, just on a societal level. Then again, some people deep in debt do seem to enjoy a high-quality life, don't they? cough US federal debt cough.


Points for originality, at least you don't even pretend to bother to do a denial spiel.


> Controversially, agricultural societies late to industrialization, have more population now

If you pointed the above out 123 years ago, would the British Empire change your mind?

Population is just one metric of success.

>So I doubt humans with AI will replace humans without.

But they might outlive them, whilst man-made chemicals that affect health, like per- and polyfluoroalkyl substances (PFAS), took decades for the law to recognise and legislate against; in much the same way, understanding dialects of RegEx becomes more commonplace in everyday conversation, in order to understand the evolving world we live in.

Has survival of the fittest evolved into survival of the intelligent?


What does the population have to do with the parent's point?


Population advantage translates into victory in simple Darwinian analysis.

Not that I particularly agree in this case but that’s what I learned in my human nature college course.


> So just use basic Marshall McLuhan: the medium is the message.

Also McLuhan: every communication medium is an extension of man.

AI is a turbo extension of man's ability to string noises together into meaningful phrases: phrases which, like language in general, project the speaker's thought into the mind of the recipient.

AI magnifies this into a stream of babble that can be generated at zero marginal cost and at light speed, although with the caveat that the stream may have self-reinforcing perturbations (internal feedback loops?) that may or may not refer back to "reality".


Analogies are treacherous. There is absolutely no guarantee that just because all these technologies shaped humans, this new one will shape them in similar ways and scope.



These are all bad analogies, because they are just better tools. They help with manual labor or the speed of transmission.

Computers actually did replace human computers (literally the origin of the word, people who sat all day and did math).

But now we're creating artificial minds. And I haven't heard a good argument why they can't replace humans pretty much everywhere.

Human minds are limited by the size of the skull. Digital minds don't have that limitation. They can, seemingly, be arbitrarily large (and maybe arbitrarily smart, as a result).


Everyone who says AI won't replace humans conveniently ignores the fact that Homo sapiens is the only member of the Homo genus left. Because we out-smarted, out-bred, and out-killed all the others.


> Everyone who says AI won't replace humans conveniently ignores the fact that Homo sapiens is the only member of the Homo genus left. Because we out-smarted, out-bred, and out-killed all the others.

Everyone who says AI will replace humans conveniently ignores the fact that Homo sapiens is the only member of the Homo genus left because we out-smarted, out-bred, and, especially, out-killed all the others.

And we (collectively, at least, not individually) know where the AI’s power hookups are.


Until it's running off a heavily-shielded fusion battery with a lifetime of a century, I assume? :)


Yeah, don't do that. Just in case.


Monotheism is far from a universal consequence of agriculture. It's really limited to the Abrahamic religions and religions influenced by them.

The pre-Christian Roman Empire, Ancient Greece, China, and India are all examples of advanced ancient civilizations that were happily polytheistic.


I think that Homo sapiens was happiest in the hunter-gatherer era, right before we started to think in increasingly abstract and conscious ways. Sure, it's cool that we can fly humans to the Moon and robots to Mars, but everybody is miserable in the capitalist hamster wheel and an increasing number of people are getting depressed and committing suicide.

The other day I was at the lake and looking at ducks, contemplating this. Is a duck really less happy than an average human? Sure, they can't buy fancy iPhones and might get eaten tomorrow, but it's all relative anyways, right?

And before you start suggesting that I'm depressed: I'm not, I have been lucky. I have a happy life, a great job, great friends and family. But still, one can't help but stop and wonder about these things...


> homo sapiens was happiest in the hunter gatherer era

I get the point you're making (progress is an illusion etc.) and have some sympathy with it, but starvation and the constant threat of being killed or dying slowly of a toothache is surely not a 'happy' life.

Personally I think a fulfilling life doesn't need much happiness at all, just contentment with your situation. There isn't much contentment these days, it seems.


> but starvation and the constant threat of being killed or dying slowly of a toothache is surely not a 'happy' life.

But how do you know whether a duck is thinking about this? This reasoning only makes sense from the perspective of a 21st-century Homo sapiens. I think that happiness is relative; I don't believe a duck contemplates what life would be like without thinking about where to get the next bite of food.

In the same way, people 200 years ago led happy lives. However, a person who lives today but only has access to the technology and medicine of that era, while still knowing and seeing that others have access to today's technology and medicine, would be super unhappy.


There’s nothing odd about that thinking. You’re just scratching the surface of philosophy and you should continue.

Nothing is an inevitability and tomorrow will look different than anyone here or anywhere else tries to predict.

Don’t stop wondering. But I definitely recommend exploring others’ wondering. Especially when it has nothing to do with tech and all to do with your own existence.


Do you have any good book recommendations?


Beware dogma; it's mostly trappings. Study the classics, eastern and western and everything besides. Then reading Carl Jung and modern theoretical physicists will blow the doors off of what you think you knew. Feynman sees ways around everything. Bohm sees behind everything. Witten's trying to connect it all together. They will all point you to other readings worth some time.

After that, reading Robert Anton Wilson, Terence McKenna, Ram Dass, and Tim Leary will probably leave you going back to the start to find all of the things you've missed. Don't take them at their word, either. They use a lot of tricks, but it's just to add perspective.

Then if you’re still feeling wild take a journey through comparative religion and it will add another dimension to it all yet again.

…then you will go all the way out and back again. Don’t take anything for granted. It’s your life you’re living, and all of these writers and more could recognize at least that. Just don’t forget your good humour. Wilson will remind you of that.


Thanks a lot!


With a field moving as quickly as AI right now, you should probably take any predictions of what it will and won't be able to do with a big grain of salt, especially if they are made for any timespan greater than the very near-term future (months? weeks? That term will itself get shorter and shorter as time goes on).

Very few people, including the author of this study most likely, predicted a few years ago where AI would be today. That should tell you something.


The model of how AI would influence work, including splitting work into tasks, classifying tasks by what type of AI could complete it, the complementarity of prediction with other aspects in an organization, etc., all have been studied for decades. The people working on this, including the author of this study, have not changed their conclusion dramatically recently.

What has changed is the apparent capacity of LLMs: they got convincing earlier than other specialists thought possible. However, that timing isn’t the most relevant aspect of industrial economics and transaction cost theory. Many people make that mistake, but Oliver Williamson himself has corrected that misunderstanding in front of me several times.

What matters is that, when the technology is available, people will reorganize functions.


This may be the case with AI today, but I’m doubtful it will be in the future.

Many of the problems with AI that the author uses as premises to justify his thesis are already being addressed, with people working toward solutions daily.

Nonetheless, even if humans are still required, it would be at 1/50th or 1/100th the numbers, a sufficiently large reduction that the outcome would remain the same.


I often see this sentiment regarding AI or automation and it seems to be predicated upon the belief that the amount of work is fixed.

I disagree with this premise; there is still a lot of work to do. We don't even have space elevators, nanobots, immortality drugs, or all other manner of technologies yet, and just as with every other technological innovation in history, I assume that we will reallocate people to do said work. Work expands to fill the time and space (and workforce) available.


> the belief that the amount of work is fixed.

You would have to define work for this to be meaningful. Generally, demand for "work" is unlimited if the price is zero, but falls to zero as the price increases. Humans are willing to work for at least what buys them the bare necessities: food and maybe shelter (in a Mad Max-style world).

In these discussions, the suspicion is that AI will "underbid" larger groups of workers than previous waves of automation did.
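The price-vs-demand intuition above can be sketched with made-up numbers: a linear demand curve in which the hours of work demanded are enormous near a zero wage and fall to zero once the wage exceeds what the work is worth to buyers. The function and all its parameters are invented purely for illustration, not taken from any economic model in the thread.

```python
# Minimal sketch of the demand-for-labor intuition above, with
# made-up numbers: demand is huge near a zero wage and falls to
# zero as the wage rises past what the work is worth to buyers.
def labor_demanded(wage, max_wage=100.0, demand_at_zero=1_000_000):
    """Linear demand curve: hours of work demanded at a given wage."""
    if wage >= max_wage:
        return 0
    return int(demand_at_zero * (1 - wage / max_wage))

print(labor_demanded(0))    # 1000000 hours demanded at a zero wage
print(labor_demanded(50))   # 500000 at the halfway wage
print(labor_demanded(100))  # 0 once the wage exceeds the work's value
```

The "underbidding" worry is then just a shift along this curve: if AI supplies equivalent work at a lower effective wage, human labor is only demanded below that price.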


> it seems to be predicated upon the belief that the amount of work is fixed.

I've mentioned this before in other AI threads, but humanity has an uncanny ability to use all resources available. As you say, the amount of work will only grow with the increased productivity from AI.


The amount of work isn't fixed, but the definitions of useful work have changed.

AI threatens to make all human-sourced work redundant. Because AI is really an automated culture machine - in the widest sense. It will eventually be able to innovate, rather than replicate, culturally ground-breaking work in art, science, engineering, business, politics, and media.

What jobs are humans going to do then? All that's left is manual labour - and eventually that will be automated too.

This isn't exaggeration. It's already happening in a very limited way with today's super-crude AI 1.0. AI 5.0 won't be a step change, it will be something entirely new.


Of course, if something was actually intelligent just as we are, but didn't need food etc, then the 'means of production' would now be wholly and completely owned by capital, and there would be no place for workers of any sort.

But with LLMs we are as close to that as climbing Everest takes us to the Moon. No small feat, but not particularly relevant. After all, all they do is return statistically likely text responses from text inputs, based on a very large corpus of text previously written by humans. Any claim above that is technical hubris and/or funding strategy.
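To make the "statistically likely text responses" point concrete, here is a toy sketch, with a corpus and function names invented for illustration: a bigram model that counts which token follows which and greedily emits the most common continuation. Real LLMs do this at vastly larger scale with learned representations, but the core objective is the same next-token prediction.

```python
from collections import Counter, defaultdict

# Toy "statistically likely text" generator: count which token follows
# which in a tiny corpus, then always emit the most common continuation.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(token, length=3):
    """Greedily extend `token` with the most frequent next token."""
    out = [token]
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(continue_text("the"))  # "the cat sat on"
```

The output is fluent-looking only because the corpus was; nothing in the counts distinguishes true statements from rubbish, which is the parent's point.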


Robotics is much more difficult than software. It will be quite a while until an AI can autonomously build a space elevator. Until then, people will still work. And after that, perhaps we can finally sit back and relax.


It's not more difficult, it's simply that it cannot be done using the same approach.


I guess it depends on the amount of talent as well. Lots of people are stuck in low-end jobs while demand for high-skilled labor is also high. One argument is that if low-skilled workers can do high-skilled labor with the help of intelligence-augmenting machines, that demand can finally be met.


Yeah, people generally gloss over the fact that automation doesn't have to be anywhere close to 100% effective to cause massive disruption (unless we drastically modify our economy, which seems incredibly unlikely to happen in any proactive, soften-the-landing way).

The unemployment rate at the height of the Great Depression was "only" about 25%. If 3 people doing the work of 4 is already a huge problem, 1 person doing the work of 50 or 100 is massive.

The obvious and oft-repeated response to this would be to point out how automation has been going on for many decades (arguably centuries) and so far people have just evolved into different jobs, which is true, but it seems like a risky bet to me to assume that trend will continue if/when a non-task-specific automation technology has the potential to very quickly cut wide and deep across an awful lot of the job sectors that currently have reasonably desirable jobs.

The speed of disruption, I think, is likely to matter quite a lot.


> The unemployment rate at the height of the Great Depression was "only" about 25%. If 3 people doing the work of 4 is already a huge problem, 1 person doing the work of 50 or 100 is massive.

So your hypothesis is that the Great Depression was caused by a sudden 25% increase in productivity?

If so, what data do you have to back this up?


High unemployment means less consumption spending, which according to Adam Smith is the invisible hand fueling the economy. Remove this hand, so to speak, and the economic cycle is at risk of being disrupted. Think of an economy like an ecological environment, entangled with interdependence.

I think it's generally hard to grasp that modern economics has only really existed for the past couple hundred years. The last major shift that created it? The cotton gin kicking off the Industrial Revolution. AI is akin to the cotton gin.


To be clear: I'm not rejecting the hypothesis, simply asking for supporting data, i.e. data that links the supposedly higher productivity to the higher unemployment at precisely that point in time.


> So your hypothesis is that the Great Depression was caused by a sudden 25% increase in productivity?

No, that isn't the hypothesis nor is it relevant to the point I am making.

The cause of the 25% unemployment rate in the Great Depression is actually completely insignificant to why I mentioned it. The point being made with that is to highlight that 1 in 4 people no longer being in a position to make a steady living was enough to cause an economic situation so bad that nearly 100 years later in the USA it's still treated as the common example of how bad things can get when things are Really Really Bad.

The oncoming rush of automation across virtually every aspect of the "knowledge worker" economy easily has the ability to create a situation in which the percentage of people made redundant is an order of magnitude more than that benchmark 1 in 4, and since it is likely to cut across so much of the economy so fast, all at once, there is likely to be no time for the economy to reasonably deal with it.

The go-to advice for people who are automated out of jobs is "retrain and find another one", how do you do that when the disruption is "everything everywhere all at once" and your competition for the new jobs is likewise coming from everywhere all at once and the job contraction impacts nearly every company?

How scary this all sounds is probably colored to some degree by where you live and all of my concern is admittedly biased towards the USA where our current social safety net systems are woefully inadequate for dealing with this sort of disruption (even in the short term) and still constantly under attack.


> The cause of the 25% unemployment rate in the Great Depression is actually completely insignificant to why I mentioned it. The point being made with that is to highlight that 1 in 4 people no longer being in a position to make a steady living was enough to cause an economic situation so bad that nearly 100 years later in the USA it's still treated as the common example of how bad things can get when things are Really Really Bad.

The cause isn't irrelevant; in fact, you've reversed cause and effect.

> The oncoming rush of automation across virtually every aspect of the "knowledge worker" economy easily has the ability to create a situation in which the percentage of people made redundant is an order of magnitude more than that benchmark 1 in 4,

So 10 in 4? Seems... unlikely.


> So 10 in 4? Seems... unlikely.

See the word "percentage", which you quoted, and which would suggest I meant 1 in 40?

Or are you doing The Internet Thing of not actually reading/understanding and just trying to look for things to clumsily attempt to dunk on without being able to elucidate whatever it is your own point is supposed to be?


> See the word "percentage" which you quoted and would suggest I meant 1 in 40?

Mixing fractions and percentages doesn't change that an order of magnitude more than 1 in 4 (25%) is 10 in 4 (250%).

If you meant 1 in 40, you should have either said "less" instead of "more", or written "the denominator in the fraction of people made redundant is an order of magnitude more than in that benchmark 1 in 4" in place of "the percentage of people made redundant is an order of magnitude more than that benchmark 1 in 4". In either case that's 2.5%, which, while not insignificant if intended as a delta on top of normal unemployment, is less than the delta in most recessions (the Great Recession peaked at about +5% from the pre-recession trough, which wasn't itself a long-term steady baseline). It's also not significant enough to warrant the kind of hyperventilating you are doing, and it's very much not the obvious intent of the words you wrote.


Assuming centaur-dominance in the economy rather than pure-AI dominance, I can easily believe that we can work just as hard to produce 16x as much stuff — that's the income disparity between normal Americans and the recent $900k Netflix job posting, implying normal people can be motivated by it, implying there's stuff they care about buying with it.

Steps up from that are plausible, but less certain. Do we all want personal pleasure yachts? Flights on a Dragon capsule or a Starship?

-

But there are too many paths the future might yet take for me to be confident of anything much.

If the AI are always narrow, but can be made superhuman for any given skill in 6 months, that has one set of issues; if they are fully human-like general but we never make them more than IQ 85 (plus access to whatever narrow AI exist by then that actual humans also use), that's a different set of issues; and that's just off the top of my head.


The 900K salary isn't related to "producing more stuff", but is instead a way to acquire a very rare skill that would unblock their development (in their mind, obviously).

It's basically trying to ensure they get this person away from whatever place or job they happen to be working in currently. If AI were able to empower many people with this skill, they wouldn't be paying anywhere near 900K, by simple supply and demand.


That's separate to my reason to use it as an example.

So, I'm a very unusual person in that I can't be bothered to spend more than about €12k/year. The idea of owning most stuff just doesn't motivate me. I earn significantly more than €12k, because I think the immigration office wants me to, but I'm not motivated by it myself.

If everyone was like me, most advanced economies would just collapse as everyone went down to 5-20 hours per week.

The observation that the economies have not collapsed, demonstrates that most people do actually want to acquire stuff.

The observation of a job offer for $900k strongly implies that there are people who are motivated to acquire $900k (less taxes) worth of stuff each year.

From that it follows that at least some people will be motivated to work just as much as they currently do, in order to earn about 16x as much money, in order to spend it on 16x as much stuff.

(16x is here an observation, not an upper limit).


I think you're missing a few things:

1. Not everything is stuff. You can book a flight for 5000 Euros to the other end of the world.

2. Not all prices are the best. You might overpay significantly for said flight in order to not have to think about finding a cheaper one. Same for everything else, such as grocery shopping.

3. Housing. It's easy to imagine someone buying a house for 5 million. Depending on the location this may not even buy an _extremely_ fancy house. If you don't like debt and want to pay it down in 10 years, then 900k (and what really remains of that? 400k?) might be useful. Even if you don't care about "stuff".

Maybe all you care about is not having to care about it.


None of those matter, IMO.

A flight ticket itself may be ephemeral, but the plane and the fuel are not. Overpaying for convenience becomes oversupply for convenience, which at the scale of "everyone gets it" requires massive overproduction.

Houses will indeed always have a "location location location" premium; but the dynamics of this also changes if everyone has far more economic power — $900k is currently enough to commute by private plane, opening up many small places that are currently too out-of-the-way to get such attention, though I don't have any idea if that can actually scale given airport capacities and densities.


Produce 10x what, exactly? Earth at current consumption levels is not sustainable. Where the fuck do you get the resources to produce 10x?


Which bits are and aren't sustainable depends mostly on tech.

10x solar power and we could meet all current electricity demand in a climate-friendly way; 10x lithium etc. mining and we could roll out electric vehicles much faster; 10x bioreactors makes… well, very little difference to ethical eco-meat, mainly because it's still new and the kinks have yet to be worked out, but the principle still applies.

(Unfortunately, my point would still stand even if higher production only sped up disaster; the tragedy of the commons exists because everyone is motivated to consume more than is sustainable given the size of the group.)


In theory, the more productive we are, the higher the "upper bound of sustainability" is. Cheap energy can be used to recycle garbage from the ocean, recapture carbon from the atmosphere, etc. We just choose not to.


For a few decades already, growth of consumption/GDP has been decoupled from growth of energy and resource usage.

Roughly speaking, we consume more by consuming "better" things rather than "larger" things. A $1000 iPhone is much "more consumption" than a 1990s cell phone plus portable CD player plus a calculator, but takes less resources than those things. Even more so when consumption of digital entertainment replaces live entertainment.


Why is it not sustainable? Are you talking about fossil fuel usage? Because that could be solved if we had the technology, via carbon capture, geoengineering, etc., even with the help of AI.

The Earth is a massive planet relative to human scales, there are more abundant resources in it than we can use in a thousand lifetimes.


Did humans with computer replace humans without computer?

Did programmers with google search and stack overflow replace programmers without these?

It didn't happen, and it won't any time soon.

How could humans with probabilistic LLMs that cough up hallucinations replace humans without them, when the answers to my previous two questions are NO?


> Did humans with computer replace humans without computer?

Yes, they did. Or rather: everyone was forced to catch up and those that didn't just got left behind.

Those of us who saw the PC revolution still remember the time when typewriters were an everyday artifact. We also saw how everyone was forced to choose between updating or being left behind - no matter how fast your secretary was, a human with carbon paper is no match for a human that slightly tweaks a template and sends 10 copies to the printer. And a human with a calculator will always be slower than a human with Excel.


The only important caveat here is that businesses and organizations tend to become inflexible over time. For example, there are plenty of businesses that still use fax machines today. Obviously that is inefficient, and those businesses should be at risk of competitors undercutting them.

But in reality you'll see that poorly run businesses can end up continuing for decades.

So we have entered into this weird situation where Hollywood actors and writers are demanding huge compensation for work that is becoming relatively cheap to produce with new technology. It is going to be interesting to see this play out.


> So we have entered into this weird situation where Hollywood actors and writers are demanding huge compensation for work that is becoming relatively cheap to produce with new technology. It is going to be interesting to see this play out.

Thing is, while actors can be replaced or massively augmented by AI, writing by definition cannot be until we have actually solved AGI. No matter what you ask ChatGPT, its outputs are as finite as its training material.

And even for actors, people are already beginning to loathe too much CGI, see the decline of the Marvel Cinematic Universe. As long as there are enough people actually preferring live humans, Hollywood execs can get f.cked with their dreams of collecting the hundreds of millions of dollars they pay actors for themselves.


>As long as there are enough people actually preferring live humans

I don't think that's a high bar. Maybe they prefer A class live humans, but there are certainly plenty of "filler" holes that most people won't ever perceive as missing.


> Yes, they did. Or rather: everyone was forced to catch up and those that didn't just got left behind.

Do a plumber, a baker, a policeman need a computer to do their job? That's right, they don't. So men with computers did not "replace" those without.


The last plumber I hired had a website, spotted a leak with a flir camera (and had a pile of interesting niche electronics), and sent me an invoice he generated through an app on his phone via email... Maybe there are places where you can hire some analog types on the cheap, but where I am the computerized ones are way more efficient.


Even if those people aren't using computers at their job (and I even disagree with that), they still use computers. A baker with a computer and internet access has access to more recipes than a baker without a computer. A baker with a website and an online presence is finding more customers than a baker who only exists in the real world.

As a matter of fact, the new boiler my plumber (who has a webpage and I found online, mind you) recently installed has computer chips and circuits. My plumber seems very technically literate and knowledgeable about the tech as well.

Police officers are now using AI to catch people. Even before that, they were using computers in criminology for DNA samples.

So all of the professions you listed use computers all the time for their jobs, both directly and indirectly. A baker without internet, a plumber with no computer knowledge, and a police officer stuck using analog tools are all worse off than their tech-savvy counterparts.


> A baker with a website and an online presence is finding more customers than a baker who only exists in the real world.

I can say for sure that this is simply not true. There are bakeries without websites that make a killing, while at the same time there are smaller ones in the same areas trying to drum up business online with less success.


I view the parent's comment as a statistical statement (e.g. better off on average), not a literal absolute statement. That's usually what people mean when they make statements about large sets of people/things.


Yeah, that’s how I viewed it too. I figured it was just an assumption on their part, rather than an accurate statistical statement.


You're picking exceptions to the rule and claiming that those exceptions invalidate the whole rule. Sure some bakeries are just better than others, a good bakery with no website will do better than a bad bakery with a website.

But all things considered, having an online presence always helps your business. A fantastic bakery with a website will get more customers than a fantastic bakery with no website.


I think you picked the worst example ever. The success of a bakery is based on 3 things:

1) location

2) location

3) location

Everything else doesn't matter. Nobody goes online every day to see "Gee, I wonder which artisanal bakery I'm going to drive to today, and I'm going to make my decision based on which bakery has the flashiest website".

If websites mattered, then small businesses wouldn't have the crappiest websites ever.


Wow, only location, huh? Doesn't matter if their product is actually good? And I guess if there are two bakeries in the same location (happens every day in these places called "cities"), I will just pick randomly then.

If you really think bakers don't benefit from using computers, then you're just not thinking very hard. I always look up the reviews of the bakeries I go to if I'm buying a cake or something. I wouldn't risk buying something at a bakery without checking online if it's worth it or not.

>If websites mattered, then small businesses wouldn't have the crappiest websites ever.

If websites DIDN'T matter, these small businesses would not have websites at all. I'm not saying you need a good website. You just need any website for discovery. Your example actually supports my argument: why would small businesses waste time with websites unless it helped them?


> I wouldn't risk buying something at a bakery without checking online if it's worth it or not.

You wouldn't risk spending $7 on a loaf of bread to find out whether a bakery is good or not?

I’m sorry but that just sounds ridiculous on its face. The average reviewer has no taste to begin with.

> You just need any website for discovery.

No, what they need is to claim ownership of a Google Maps entry and a Yelp page. I don’t know if I’ve ever visited a bakery’s website in my life, but I've found plenty by searching Google Maps.


The last thing I bought from a bakery was a $50 cheesecake for a birthday party. And you can bet I looked up reviews from at least 5-10 different bakeries. And also visited their websites. But sure, whatever you think.

You're just doubling down on your own stupid argument and not providing any interesting rebuttals to what I'm saying. The reviewers are wrong, Yelp is wrong, the internet is wrong, bakers CANNOT benefit from using the internet! Go get your birthday cake from the first bakery you see, I don't care. You're not really raising good points. If anything, you're just highlighting how little you know about using the internet to find new businesses. The fact that you've never visited a baker's website just means you either don't care about or don't know how to pick high-quality products when it matters. It's a you problem, not the baker's problem.


Fair enough, I wasn't thinking about expensive one-off purchases.

Still don't get why they need a website. An out of date menu showing prices from the 2010s, I guess? Just look them up along with reviews on Google Maps or Yelp and call them to order the cake. But hey what do I know, I've only done it dozens of times... this past year alone.


Instead of me trying to explain it to you, maybe just try it yourself.

You saying "I've never needed this and I never tried it, so I don't understand why anyone else would" isn't exactly a robust argument. In fact, why would I bother listening to your opinion on a feature you've never even tried? Most websites have info about hours, prices, products, FAQs that are up to date. Your mental model of what's in a website is wildly off, especially in 2023 when everyone's on social media.


This is just obviously not true. Any location which is good enough that a bakery could survive without any repeat business would have such high rents that no bakery would be able to survive (or at least, it would be very difficult).


> A baker with a website and an online presence is finding more customers than a baker who only exists in the real world.

Oh yeah there's a nice bakery 400km from here… I'll just go there rather than the usual one down the street! /s


You've only got two bakeries where you live? One 400km away and one down your block? You've never found a new restaurant online? You've never looked at reviews on yelp?

Idk what to tell you. Based on your example (only search results for bakeries are 400km away), the problem is where you live, not the internet.


I'm sure there are bakeries between here and the one 400km away. But even if one is 5km away, why bother? Unless the nearby one is so bad that it can't be considered at all.


If you live in an area where there's only one bakery in town, then there's no pressure at all for that bakery to improve their process. What is your point? I live in a city where it's common to see multiple bakeries on the same block and where I could visit a new bakery every week for the rest of my life.

Again, it sounds like your little town or wherever you live is just lame. If there is no competition, there's no reason to innovate. I'm sure every business in your (real or hypothetical) town is years behind the curve in every metric if your only factor for patronizing a business is "well, it's the only one close by".


Ah, a firm believer in the bullshit of the invisible hand! I find horoscopes to be much more science based to be honest.

> your little town or wherever you live is just lame

I'm sure you're aware that the term "lame" is offensive, discriminatory, and should really be avoided?


>I'm sure you're aware that the term "lame" is offensive, discriminatory, and should really be avoided?

I get a strong feeling that you were bullied in school when you were a kid.


I get a strong feeling that you're an ableist and generally an asshole.


Well at least I have more than one bakery in my neighborhood, so it could be worse.


Yes I'm sure the invisible hand will magically make them decent, lol.

Enjoy your fantasies mr internet bully :D


> Do a plumber, a baker, a policeman need a computer to do their job?

Plumbers (and for that matter, all tradespeople) can get far more efficient at their jobs if they use computers to their advantage. Modern heating systems, i.e. anything above "a gas boiler that circulates water and fires up if the water gets < 70 °C and shuts off > 90 °C", can't be designed these days without the aid of simulation software that accounts for the effects of insulation or the variety of air-conditioning systems. The more energy efficient, the more complex it gets to design and to operate. Carpenters these days don't do much measurement by hand - they design something in CAD and get exactly fitting wood parts out of their saw machines. Electricians, where do I even start with these, as modern homes are filled to the brim with smart tech. The only thing AI can't do for now is actually run cables and pipes, but give it 10-20 years and construction labor will be done by Boston Dynamics robots - we're already seeing giant 3D printers for concrete and brick-laying robots.

Bakers are much the same. Most baking, with the exception of artisanal crafters who take pride and charge appropriate pricing, is automated these days - including the supply chain. There aren't that many humans involved in the production of staple foods any more, which is why everything has gotten so cheap and plentiful over the last decades. Some stuff, like picking asparagus, can't be done by robots yet, but that's bound to change.

And policemen... just look at how much they're using computers already. No more "send a chopper and a dozen cars to follow a suspect", no more stakeouts - ANPR cameras and AI are enough (there was an HN submission a week-ish ago about that).

Yes, all of these jobs can be done without computers, but far, far less efficiently - and declining birth rates will put more and more pressure on all kinds of jobs to either be eliminated entirely ("paper pushers" and similar bullshit jobs) or be replaced by computers.


The police thing, have a link?


Unfortunately not - I can't remember the exact title of the article, but there are a number of articles on the topic:

[1] https://www.wral.com/story/license-plate-reading-cameras-hel...

[2] https://news.ycombinator.com/item?id=34300713

[3] https://news.ycombinator.com/item?id=36881133


Oh, found it. The pigs used AI to detect a "drug trafficker": https://news.ycombinator.com/item?id=36772253


Police definitely need all the tech they have, from auto-detecting license plates to the countless other tools they use.


> Do a plumber, a baker, a policeman need a computer to do their job?

Yes, mostly. Very much so for the police officer, which is why they have generally had, for decades, a computer terminal built into their patrol vehicles.

But, yes, modern workers (and especially business owners) in each of the other fields often use computers - if not for what you might consider the core defining part of their job, then for parts that are practically important to doing it in the modern environment, even if it's just the computers almost everyone carries in their pocket.


Let's take an example about plumbers:

Plumber A has a van and a phone number attached to it. They're also in the yellow pages.

Plumber B has a website with a real-time calendar of available dates and estimated rates for different jobs. They also buy relevant ads on Google and Facebook.

A is pretty much relying on word of mouth; they might be excellent at their job but won't be getting much new business - young people (people under 40) don't especially enjoy cold-calling strangers for availability and rates.

I know that I'll pick B in an instant. I've called way too many tradesmen who have answered while clearly driving in traffic and then start digging up their physical appointment notebook at the same time.


Yes, they do. Plumbers rely on the internet to get jobs. Bakers advertise and are found by people through their phones. Policemen increasingly rely on computers for data verification and all sorts of communications.


I would argue that nowadays they do. Maybe not for the actual physical parts, but the running of a business (pretty much all online now), advertising and marketing, communicating with customers - email is essential. If you aren't able to use a computer, you will be left behind by the people who are.


How do you think plumbers and bakers order their products and promote their services? The ones with computers beat the ones without.

And the police are already heavy users of computers.


Policemen literally have computers in their patrol vehicles and a lot of police work is computer work.

Plumbers need to order stuff online, manage invoices, have an online presence to find and communicate with customers, and have digital instruments that might feed into their field laptop.

Bakers use digitalized machines and have their recipes down to an almost exact science. They use computers all the time to calibrate stuff.


> Did humans with computer replace humans without computer?

Certainly for a whole host of tasks.

> Did programmers with google search and stack overflow replace programmers without these?

Also true for many tasks: a programmer who refuses to use these resources for certain types of work will be an order of magnitude slower in many cases and will be replaced by programmers who aren't.

LLMs are just beginning to become useful, but there will be a point where they're indispensable; those who refuse to use them will eventually find themselves falling behind. It will eventually be equivalent to a programmer who refuses to move off punch cards or refuses to use modern version control.


We're peak hype cycle for LLMs right now so we have an order of magnitude more ridiculous shit said about them than would otherwise be normal. The trough of disillusionment has yet to set in.

They will be useful for a set of tasks but all of the people hyperventilating over this being the new industrial revolution might consider calming down a little.

The high priests of capitalism have been blaming lost jobs on automation since forever - largely because this narrative makes a convenient, impersonal scapegoat.


I'm curious what the "hype cycle" looked like for the discovery of electricity, the invention of the combustion engine, or the invention of the transistor. Who knows whether the LLM paradigm will be more like those moments or like the failed hype of something such as the hydrogen fuel cell.


I'm old enough to remember how it worked for the Internet. It was treated as a curiosity for nerds for a long while. The whole time I thought it was completely underrated and would change everything - that's why I decided to get into tech.

Then there was a short, sharp "it's going to change everything" hype cycle that lasted no more than a few years followed by a trough of disillusionment that coincided with the recession that was around 2003 or so.

I thought almost all parts of the cycle were exaggerated from the part where it was a curiosity for nerds to the "pets.com craze" where "the high street would disappear completely".

With LLMs it's very similar, but rather than a meaningful shift in technological development underpinning it, it's something more like a parlor trick with a few practical applications.


Then sadly big business and government figured out how to turn that promise into the dystopia of today.


A parlor trick, lol. Here we are seeing statistics applied to abstract concepts like human knowledge; how is this something with few practical applications?


Because the results and actual efficiency gains are flaky, inconsistent and mostly limited to boilerplate?

GPT and friends are very impressive, yes; if you’re also impressed by your intern blindly copying stack overflow.


Maybe I am a fan of fuzzy logic, but I find GPT less flaky than most of the people I read about, and I think it will evolve. I feel GPT will even allow us to create a new branch of mathematics applied to psychology.

I can't understand why you compare it with an intern copy-pasting; the comparison is simply absurd. It just looks like a way of reducing the argument and drawing parallels where there are none. Yes, some people use it to program; no, not everyone uses it for programming.


> With LLMs it's very similar but rather than a meaningful shift in technological development underpinning it it's something more like a parlor trick with a few practical applications.

What makes you think this? I'm assuming you've tried GPT-4. I've personally gotten a lot of productivity out of it. Assuming it gets a lot better, I don't see how it won't minimally become an indispensable tool, and at maximum completely transform the future of humanity.


That would be a safe assumption, yes. I'm clearly not a complete moron.

It reduced my productivity when I used it.

Between having a slightly faster and slightly customized version of google vs. being sent down misleading rabbit holes, the latter won out.

I found a few niche use cases where it proved useful but it was all out of proportion to the hype.

I've paired with other people who were bullish about LLMs using them and witnessed them falling down rabbit holes. I found the whole experience bizarre - it was like seeing a new "social reality" take precedence over what was happening in front of our eyes. Computer says yes.


LLMs are neural networks. If you understand their strengths and weaknesses, they are extremely useful. Use LLMs to explain abstract concepts, discuss ideas, analyze text, or as an interactive tutor - not as a datastore of facts to be recited verbatim (like a search engine). That is the worst possible way to use LLMs and will result in hallucinated facts; if that's how you're doing it, then you're doing it wrong. Neural networks such as LLMs are fundamentally not made for factual recall. LLMs are designed for solving natural language understanding tasks.

I have found GPT-4 very useful for understanding concepts and solving specific problems in programming, science, maths, psychology, relationships, and producing creative writing, by having conversations with it, going back and forth deeper into these topics. But I would never use it as an API reference. Raise it up to the conceptual level and you will be surprised at what it is capable of.


That’s a ridiculous statement. Hydrogen fuel cells will eventually power all cars.



As someone who somewhat works with AI: it's good if your data is good and terrible if it's not. My job is literally to clean up AI failures, mostly caused by bad data or by questions of interpretation. My work helps retrain the model, but sometimes, because of bad management, we have unclear answers on how to interpret the data; this leads some coworkers to train it wrong, and you'll see that linger between projects. If the people were better trained, the data would be too, but oh well - someone wants to cut costs. I think this kind of issue will always exist no matter the model. You can't really make up data you don't have (well, you certainly can, but then it all has to be fixed once the model starts guessing poorly), and in some edge cases there are no answers but bad answers.


Text editing and version control solved very specific problems: the former allows quick iteration, and the latter allows analyzing changes and working in parallel.

Which problems do programmers have that only AI could help with?


> Did humans with computer replace humans without computer?

You're obviously not aware that "computer" was a profession, a job, before becoming a machine - just watch the "Hidden Figures" movie. Have you seen a human computer lately?



What? The answer is obviously "yes", especially to your first question. Your whole premise is wrong.


Quite easily. Everyone is fixated on LLMs delivering a complete product or providing 100% accurate information. Too much of a focus on hallucinations.

First, humans don't deliver 100% accurate information, so let's keep the bar at something reasonable. Secondly, complete products are not necessarily the only value of LLMs. LLMs are pretty solid when it comes to helping break out of some sort of creative block. Think about it this way - when you're trying to creatively solve a hard problem, what is one of the best mechanisms to help? Perspective changes. LLMs are extremely helpful when assisting with perspective changes (driving/helping iterate on different creativity tools like combination, association, etc).

I can't get the full article to load for some reason - but so far I seem to see people mostly arguing that LLMs will do the work for us and it will be amazing or subpar, etc - but neglecting, which I assume is the point of the article, that LLMs will be a co-pilot for humans and assist in ways like perspective shifting to find faster/better/more innovative solutions.


If you exchange the word "replace" with "displace" then it works better, I think.

Humans with AI won't cause the extinction of humans without it... but there will be an elevation of "class" for those with it.


Fundamentally what do we need to survive?

Water, food, clothing, a place to call home, dignity.

Replacing is a strong word. There are many people who live Tech agnostic lives.

When I leave Seattle and SF, most strangers I talk to haven’t even heard of GPT.

This is HN so we’re biased towards specific problems.

However access to quality food, water, electricity, internet, shelter, sanitation, opportunity still remains a problem for a large chunk of the population.

And LLMs, which are fundamentally next-word prediction algorithms, don’t make a huge difference to someone’s survival ability.


The problem isn't whether humans get replaced, some will, but that the value of the single human drops.

Humans become the expendable operators of the tools that do most of the work.


If anything, the opposite is the case. As tools get more sophisticated and potent, the value of human judgement goes up given the increased leverage of tool use. Tools are force multipliers. Humans today are much less expendable, and much more valued, than they ever were historically. (on display in just about every profession, even on battlefields)


A lot of people who are bullish on LLMs and AI are quite clearly excited to get other humans out of the loop. I think movie execs would absolutely love to get the writers and actors out of the picture, ideally not paid a dime for anything going forward.


>Humans today are much less expendable, and much more valued

These people don't sound valued to me

https://www.boredpanda.com/toxic-baby-boomer-advice/


My grandparents can’t fill in their taxes because they don’t know how to use the internet. That’s the only way you can fill in taxes as far as I know (in The Netherlands). They rely on me to do it.

Some people don’t have that luxury.


Too soon to answer, but LLM provide advantages, yes


At a ratio of 10:1. This is bullshit framing designed to refocus people on human competition (which everyone gets trained for from school), just don’t look over at the robots who never sleep, never get worse, rapidly expand their capabilities, don’t need vacation, etc. Once people understand the robots will never stop replacing them, shit will get much more real.

Proof is in the earnings calls - see IBM dangling firing 30% of backoffice because of AI and begging shareholders to stick around for the payoff.


It's amazing how many people don't see that they're confusing whipped-up shareholder FOMO with reality.

I couldn't believe it when I read the Economist gushed over Copilot replacing programmers last year.

I can see why their investor audience is into the concept of swapping programmer salaries for margin, and they've clearly watched movies like The Terminator or whatever. But their "journalists" were clearly oblivious to the reality that Copilot offers incremental gains in productivity at best.


The caveat being for now. No one knows how much better the LLM paradigm can get.


Or a completely novel AI algorithm.


What AI will do is facilitate the creative process. Instead of painstakingly creating variations, AI will present us with ALL the variations. That, in a nutshell, is the power of generative AI.


In theory, for the studios who care about good products, yes. I really do look forward to making a good base (say, a rigged 3d model) and having AI fix the seams in a skin or add some extra little flair to an animation. Those will be very nice accompaniments to what are currently very tedious tasks to fix by hand.

But I think we both know how and where large corporate entities are going to use these tools. They made it very clear given their whole refusal to deal with unions thing.


I think Jensen Huang (or his sales team) came up with this quote first:

"While some worry that AI may take their jobs, someone who is an expert with AI will."

He said this at the NTU Commencement Speech in May 2023.


The problem is that this expert will not take one job but ten, so a thousand experts kill 9,000 jobs.


Kind of a vapid piece overall but I do agree with the thesis. Whatever the limits of generative AI may be, or what adjunct and different technologies may come, I think the future will see something similar to the steam shovel or drill when it comes to AI and other “knowledge” enhancing tech. John Henry managed to beat the steam drill at a great expense, but it didn’t reduce the number of people laboring on tunnels - it just changed the skill set from a strong back to a strong engineering ability.


> it just changed the skill set from a strong back to a strong engineering ability.

When you have a machine that can do it all, there will be no skill to change to.


I’m a pretty big general purpose hyper intelligent AGI skeptic. I think for any practical foreseeable future AI will serve as an adjunct to the human mind, vastly improving our abilities, but the overall agency and broad ability to synthesize goals and instruments will still be a human task for some time. To wit, a lot of discussion of AGI essentially boils down to how to keep agency in the human domain permanently by lobotomizing AIs enough, à la LLM safety finetuning today. I’m more a fan of broad general AI open models, mostly because I think they’ll stay within the bounds of adjunctive tools for some time, and any threat they may pose to humanity and its reliance is best obviated by everyone everywhere having access all the time. Similar to how we keep order by pitting human intelligence against human intelligence. Etc.


You never know. We could get AGI next month … The world would change overnight. The thought of how it’s all going to play out scares me.

That said I think LLMs are overhyped.* If we are to get AGI, it would probably be a novel model.

* It’s basically advanced autocomplete that statistically guesses its way to an answer, rather than reasoning its way to one; it feels... “unsafe”, and it often produces complete BS.


While at some level that’s true (autocomplete), at another level it unlocks an abductive reasoning ability for machines that prior AI failed at. While it’s not reasoning per se, it absolutely makes probabilistic inference over an abstract semantic space that’s remarkable. For instance you can use a multimodal generative AI to take a photograph of a saloon and ask it how to make money, and it’ll describe playing poker at the poker table and working at the bar for money, then when prompted describe how to navigate to the table given the objects in the room. This is a remarkable extension to current AI - which could actually perform the navigation and plan routes, even use goal-based agents to instruct the generative model to plan a way to make money. I actually am not that worried about the risks of a wandering mind or hallucinations; I’ve found ways to detect when it wanders (for instance, creating an API to call, including an echo() for the textual response, validating the output and regenerating responses until it conforms, then doing domain verification on the API parameters).

But that gets to one of my core beliefs - LLMs and other generative models are tools that require constraint enforcement, agents to direct them, verification and validation, deference to inductive and deductive systems, optimizers, solvers, etc. The fact they can’t compute primes or solve quadratic equations doesn’t impress me - because we have those tools already. Focusing on what they’re weak at and ignoring what they’re amazing at is foolish and really small-minded. It’s interesting that you can train them to do some of these tasks, but trying to use them as a calculator when we have calculators is absurd, trying to use them as information retrieval systems is doomed to fail, and trying to use them as a complete solution to literally anything is simplistic.
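The echo-and-validate loop the parent describes can be sketched roughly like this (everything here - `call_llm`, the probe token, the JSON shape - is a hypothetical illustration, not the commenter's actual code):

```python
import json

def generate_validated(call_llm, prompt, probe="probe-123", max_tries=3):
    """Ask the model to answer as JSON carrying an echo token; regenerate until it conforms."""
    schema_prompt = (
        f'{prompt}\n'
        f'Respond only as JSON: {{"echo": "{probe}", "answer": "<your answer>"}}'
    )
    for _ in range(max_tries):
        raw = call_llm(schema_prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # the model wandered off the format; regenerate
        # Echo check: a drifting or hallucinating response tends to drop the probe token.
        if data.get("echo") == probe and "answer" in data:
            return data["answer"]  # domain verification of the parameters would go here
    raise ValueError("no conforming response after retries")
```

In practice `call_llm` would wrap a real model API, and the returned parameters would still need domain checks before being trusted.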


By that point, choosing what the machines should do becomes the skill.


Machines will be able to do that too. Hell, non-AI algorithms are already doing it - e.g. social media algorithms pick what should be promoted to individuals; of course they're working for the social media company rather than for those individuals, but my point still stands.


"What to do" has lots of different layers. AI might be able to decide which ads to promote to individuals, but can it decide the weightings of the factors it takes into account to make that decision? Can it decide to promote ads to individuals in the first place?


> but can it decide the weightings of the factors it takes into account to make that decision?

Is there a reason why you think it wouldn’t be able to do that?


Because I'm not sure that AI is 'socially' or 'emotionally' intelligent enough to make those kinds of judgements.


This is only true in markets in which increased efficiency = more success. Many markets do not function this way; e.g., the art market. Making more paintings more efficiently does not make you a better-known artist. In many cases, it actually hurts you.

I would bet that more markets become like the art market: dependent on "intangibles", nepotism, personality, and other qualities that have nothing to do with efficiency, and therefore, won't benefit that much from AI tools.


Very good. There will be an increasing demand for pure human beings. Those who can practice divine creativity, due to their pure personality, body and spirit. The "intangibles".


>Many markets do not function this way; e.g., the art market. Making more paintings more efficiently does not make you a better-known artist. In many cases, it actually hurts you.

Art is also a good example of where this argument does apply. More art =/= more success, but the ability for an artist to quickly make good art == more success, be it for iteration or production (especially 3d models). You still can't guarantee success, but if you can use fewer artists to make more art, it tips the scales.


No, I don’t think that is true at all in the traditional fine arts market, which is what my comment was referencing. Scarcity is a major factor and putting out tons of AI-generated art is basically the opposite work process of the highest-priced artists.


I see. I was referring more to media arts: animation, games, movies. These could get huge boons from letting AI polish up the bases or fill in some gaps.


Yes for those types of media I definitely agree that AI will have an effect. But for the traditional “fine art” market, the rules are very different.


Serious question: has anyone ever read an original/insightful thought in an HBR article?


HBR is the IBM of business publications. You won't get fired for sharing their articles.


Many interesting headlines.


For the most part I think it’ll mainly further Microsoft’s dominance in the office space. You can see this with the recent EU anti-trust case over Teams, where it makes little sense for enterprise organisations not to choose Teams when they already have Office365 licenses. You see the same to some degree in RPA (robotic process automation) already. Why would your organisation shell out a couple of hundred thousand dollars for UIPath licensing when Power Automate is $15 a month and $250 for the big VM bots?

I think it’ll be the same with AI. GPT writes all our non secret documentation and it has a lot of options for use in non-programming automation as well. I know Microsoft is a major investor in it, but once those tools become basic Office365 tools then every other AI seller is going to become obsolete in many cases. It’ll be interesting to see how the world handles that dominance in the office space, because so far, we really haven’t.

So I think this is going to be much more a question of how we’re all going to be paying a Microsoft “tax” to use AI efficiently than it’ll be about non-AI vs AI. I mean, you could probably use iSheets (or whatever the Apple Excel is called), Google Sheets, LibreOffice or similar, but I’ve never heard of an EU enterprise org that doesn’t use Microsoft Excel.


> GPT writes all our non secret documentation

Who reads it? LLMs may be great at coughing up seemingly good long chunks of text, but humans are not capable of reading all of this verbose text, for the simple reason that GPT can generate text faster than humans can read it.

So I bet what will happen in that case is that humans use LLMs to summarize this LLM-generated verbose documentation. What a wonderful bureaucratic world that will be.


That was Google's demo for the AI in Gmail. You write 3 words explaining what you want to say to an AI, which inflates it; and on the other side another AI condenses it back down to, hopefully, the same 3 words that were originally written.

Tons of pollution for this shit.


It writes better JSDoc than any of us. Anyone who consumes a TypeScript method is likely to read it; it’s the tiny text that pops up if you code in VSC.

If you go through my history here on HN (you don’t have to, I’ll sum it up), you’ll see that I’m not a fan of LLMs for code generation. In fact I’m not convinced they will ever be good enough to be useful for it. For documentation, however, at least for JSDoc, GPT is incredible. I really mean it when I say that it writes it better than we do. It’s able to figure things out and describe them in ways that still amaze me from time to time.

It does require you to be good at naming things.


This is an opportunity to mention that LibreOffice constitutes a solid and worthy alternative to MS Office, which does not require you to be chained to Microsoft or any sort of licensing. It will also _not_ send your data silently to MS or the US government or any other third party.

For online versions, Collabora (a commercial company) offers a FOSS online office suite based on LibreOffice: https://en.wikipedia.org/wiki/Collabora_Online

and if anyone wants to integrate AI use in an office suite, those are the most interesting avenues. Perhaps together with free LLMs.

Caveat: I ain't saying that the LO code is very pretty, mind you (they like ABI stability, so many things aren't allowed to break.)


I wish we’d see more adoption of alternatives, but I work in non-tech enterprise where it’s not even called “spreadsheets” but rather “excels”.

I’m not sure the value is really there in the office platform if it wasn’t because it’s what every new hire is used to. This is very anecdotal but I’m really not a very good consumer of IT, to the point where I often stun people with my profession because I really suck at using digital devices for most things, and I never had a hard time using one of the office alternatives. The tie in with Teams and OneDrive/Sharepoint is nice and all, but it’s also often sort of terrible.

I don’t expect things to change though.


> because it’s what every new hire is used to.

LibreOffice and MS Office are quite similar. Unless you're a VBA programmer, you will get used to it very quickly. In fact, you can even change the UI to be similar to MS Office's ribbons:

https://hexus.net/tech/news/software/127301-libreoffice-62-n...

But therein lies the rub, because MS Office's ribbon mechanism has, in my experience, reduced typical users' proficiency with the application, and they are less likely to be aware of more advanced functionality.


Let's assume for a moment that AI will replace humans _in the economy_. Then what? Well, either it fully does that, in which case it won't matter: we will allocate resources in a different way or there will be a revolution. Or it does so only partially, leaving some (physical) labor. Then those jobs will pay well and people will go there.

For everything else AI will replace nothing. The economy is just a part of life and if AI really takes over it will be an insignificant part.


I laugh every time someone posts an article with a definitive title in a field where nobody knows shit about what's coming. Hear me, Mr. Krugman?


Yeah, that's spooky, because the next step is humans with neurochips will replace humans with AI, and then AI will replace everything.


People keep saying things like "humans always adapt to new technology"

Is that not the point here? Start at the basics - Agriculture has created a world where I cannot live off the land in my area. I would have to go to a location that is not dominated by an Agriculture focused society (condensed living with farms on the edges supplying food to the population via trade).

Look what happens to people who have nothing to offer for food. They become dependents of the state.

There is no reason to dismiss the very real possibility that after another breakthrough or three in AI, society is going to fundamentally shift in the impacted areas, in a way where if you are not one of the AI people then you will have nothing to offer and thus you too will become a dependent of the state.

Hopefully non-ai society will remain able to function independently - with non-ai-boosted farms continuing to trade with non-ai-boosted workers. Maybe technological middle grounds get eroded and we see an explosion in Amish-style communities.

It is very possible however that the power of ai-based systems takes off and all the people involved with it simply completely ignore the rest of society, and the rest of society will be boxed out of the resources they need to live independently.

For centuries the ultimate reality check for ignoring the needs of the many has been revolution. For all that time, surprise attacks, rebellion, physical power in numbers, etc, all existed.

Revolution is not going to be on the table with advanced AI. It's arguably already off the table, but if one side has advanced AI (not even AGI) and the other doesn't - it's over. Automatic early threat detection, autonomous kill machines, control over all strategic resources and power generation, etc.

The only revolution still on the table is going to be political. We can absolutely succeed in preventing dystopia if we leverage the power of the state to actually be able to sustain 90% of the population becoming dependents.


An insightful comment, and one that the majority of people here choose to be blind to. There are already signs of a bifurcation in technology between the 2020s and the Amish. It's really hard to live at the technological level of the 1950s or even the 1990s, because if you don't have a smartphone you're fucked. And you are also right about the loss of revolution. So many starry-eyed hopefuls here think AI will be a check on power, an equalizer, that reforms will keep us from going down a path of global authoritarianism, but the truth is we will only be free if the power goes out for the last time, or we have a Butlerian Jihad - and although I use the term semi-ironically, I'm also serious.


People mostly don't get replaced, they adapt. LLMs are a faster but less reliable Google search. They will add to your work load because now you do more, faster. Technology increases the work load. Life with technology only gets busier and more complicated.


Most people act fairly mechanical at this point. Everyone is so busy out-competing each other, they might as well be sentient bots doing errands from their massive todo lists. AI is just going to make everything more obscure and absurd.


Humans that are just as skilled as the average ones with AI will continue to be in demand. AI levels the playing field for a lot of people but the stars will still shine


The stars will be used to train AI and then will be replaced by cheaper employees and the trained AI


Which means that the stars will have to keep pushing to stay up to date on the latest and greatest tech that hasn't yet made its way into the AI training data.


Useless race. As soon as they reach that level, AI learns it too.


There will always be a need for the exceptional


For the most part maybe true, but at the low end of the job market I disagree.

My current robo vac/mop does a better job more frequently than any human I've paid to sweep the floor. The robot costs less than a month's wages, and no paperwork.

One of the large employment sectors in our area is landscaping, and I could see 75% of that labor going away.

The amount of manual repetitive labor available for humans is probably already dropping.


A lot of landscaping is conspicuous consumption. Just like a 50k Rolex, people buy it because it makes them seem rich.


I guess that is a good argument that the people paying for landscaping always find things to pay for.


This take is so 3 months ago.


That's really generous.

The top comments in here are taking this article at face value and non-ironically. We love to occasionally question the quality of minds on HN "these days", but this time it just seems... quite severe.


And a human with AI will be more efficient than a human without AI. So fewer humans will be needed for a given task. Therefore AI will replace humans, just not all of them. And that's not new. The same thing happened with software and with machines. The solution: produce more goods or services. Until we have an energy crisis and we need to rationalize all this.


These MBA thought pieces are so behind the curve it's embarrassing, if not humiliating.

When your only real job (conditioned over a near-century of industrial capitalism) is to prepare warm bodies to populate stable corporate power hierarchies, but society is reeling from challenge and disruption to strife and malfunction, what exactly is your reason to exist?

Just take the very first sentence:

> Just as the internet has drastically lowered the cost of information transmission, AI will lower the cost of cognition

Now investigate how the "internet" has worked out for individuals and corporates and apply the same to "AI". What do the authors think about digital oligopolies, surveillance capitalism [1] and all that.

The digital (=information) incompetence of the corporate world is what has ushered the current dystopia in the first place. Business schools have profitably underwritten this gross mismanagement for ages.

Then this:

> The best place to learn [...] is YouTube. YouTube has, oh my God, so many tutorials in so many domains.

Can somebody please pull the plug on this joke? It's too painful.

[1] Zuboff joined Harvard Business School in 1981 where she became the Charles Edward Wilson Professor of Business Administration and one of the first tenured women on the HBS faculty


>Even your spam killers. Remember how bad spam used to be for a while, and then overnight it went away? Because people deployed machine learning systems.

Didn't Bayes filters mostly kill spam? I wouldn't call that AI.


The definition of AI is very vague these days.

But Bayes filters are trained on good and bad data, and then learn over time automatically by what people mark and unmark as spam.

This is exactly what things you would call AI do.

A person or team manually creating filters to block spam would have a hard time keeping up with the effectiveness of a learning Bayes model.
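As a toy sketch of that learning loop (deliberately minimal - whitespace tokenization, Laplace smoothing - and not any real spam system), the filter retrains itself from whatever users mark as spam or ham:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Tiny naive Bayes filter that keeps learning from user marks."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def mark(self, text, label):
        # Called whenever a user marks a message as "spam" or "ham";
        # unmarking would just mean re-marking with the other label.
        for word in text.lower().split():
            self.counts[label][word] += 1
        self.totals[label] += 1

    def spam_probability(self, text):
        vocab = len(self.counts["spam"] | self.counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            n = sum(self.counts[label].values())
            # Log prior (smoothed) plus Laplace-smoothed log likelihoods.
            score = math.log((self.totals[label] + 1) / (sum(self.totals.values()) + 2))
            for word in text.lower().split():
                score += math.log((self.counts[label][word] + 1) / (n + vocab))
            scores[label] = score
        # Convert log scores back to a normalized probability.
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])
```

Every new mark shifts the word counts, which is exactly the "learns over time automatically" behavior described above - no hand-written filter rules to maintain.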


People call anything past a single if statement "AI" nowadays =)

It's like everything was "fuzzy logic" in the 90s.


Wow! Not sure I've even heard the phrase "fuzzy logic" since December 31, 1999. What did happen to it?


I used to get unbelievable loads of spam, and then during covid I sort of forgot to look at what was going on. Once I finally checked, I could not believe the drop in spam attempts. This was after the war in Ukraine started, so I was never sure if that was part of the drop-off.

Currently what I have is a lot of persistent attempts at email delivery or login.

If spam does get through, it is usually 2 spams from the same source, sent on a schedule so as to seem inconspicuous.

Has been my experience.


I'm still full of spam anyway. Especially in my work email.


There's not much of a distinction, multiple people will still be replaced by one because AI exists. I'm not saying it shouldn't exist, but this sounds like word play to hide the impact.


Wait till people start getting cancelled and impoverished by having their AI accounts closed. Sure, they're still free, but they won't be able to get a job.


Llama for the win.


Am I the only one who finds these rambling interviews hard to read? Many articles already have too much fluff around the core information, but interviews are even worse.


I mean, "humans with AI" replaced "humans without AI" the moment AI was invented. Is that what passes for "insights" at HBR these days?


I thought for a second going into this that it might be an interview from at least 6 months ago. But no, it's from yesterday.

As much as I prefer not to criticize the individual, Mr. Lakhani here is clearly not the most insightful person to be speaking about this. The advice he dishes out is objectively no better than the advice I heard a friend's grandfather give a month ago (for context, he is 78 and was a mechanic his entire life & only started using a smartphone 3 years ago).

Going into this I also expected the title itself to be an ironic nod -- but no, seems like they meant this 100% literally. We're only a few days away from having this sentence printed on coffee mugs -- and not because of HBR.

Shame on HBR. Every dog has its day -- and decline.


The net result, as we see with basic skills such as growing your own food, cooking, purifying water, building a shelter, fixing the tools you need, etc. is a bunch of adult babies that are completely dependent on others for their wellbeing.

I'm not saying we should all grow our own food etc. but it does seem as though we are headed to something where we make such a great system that we only breed weakness and helplessness.

Maybe that is the price of progress.


I’m concerned about these ideas of weakness and helplessness. I mean, if we look at history, we can see that these “resourceful” people were often malnourished and victims of natural calamities, although we have a survivorship bias, as we can only see those who were lucky enough to survive.

I wonder if this romanticization of struggle is related to the statistic where 6 percent of Americans believe they can defeat a bear with their bare hands.


Maybe we have skin cells that look out at the freedom and autonomy of some single-cell organisms and think to themselves that something has been lost participating in this human project.


AGI will replace humans when it eventually arrives. The ability to copy paste and backup minds is too strong.


I don't really know why people think AGI would be some kind of static event. Like there will just be this thing called AGI, that is some type of mind, that mindlessly lives forever backing itself up to the cloud? It truly seems absurd. "Minds" are useful for humans and other biological creatures to support and protect their bodies. It remains to be seen how much need there is for a mind outside of this context.


Someone has been reading Iain Banks.


Seems like people who are taller are going to replace me. And post about it on their socials


This is such a catchy term. Copium usually sounds very catchy.


It's true in 2023.


"Humans with AI will heavily profit from Humans without AI"

FTFY


AI will replace humans. At what rate and when is the only question. The idea we don't want that is only pitched by control hungry authoritarians who need subjects to define their existence by.


So we will become robots in a Ship of Theseus way.


keep telling yourself that




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: