Ask HN: What do you expect will be the real impact of AI on society in 10 years?
16 points by doubtfuluser 6 days ago | 26 comments
I've been getting more curious about this, especially beyond the obligatory "genAI will do everything." What are your thoughts on the societal impact? As I see it currently: knowledge workers are (surprisingly?) the first group seeing massive job losses and replacement. In the past, "knowledge" was a scarce resource; now the AI delivering knowledge will become the scarce resource. So the people owning / hosting it can sell it and make lots of money -> in the current setup, this means the few who are rich in AI will get richer. In addition, letting your money work for you (investment) will also stay, so again rich people will become richer.

The question about physical labor is interesting. The economics of pushing atoms in the physical world is nowhere near the economics of pushing electrons (bytes), so if you are not part of group 1 (entrepreneurs) or group 2 (investors), doing physical work is something that will still earn you some money (I also expect care work to stay, since people will probably prefer, for a long time, to have humans care for them). But this means that groups 1 and 2 will still be the big winners, paying some money to group 3.

Where do you disagree? Where do you see a different outcome? I'm curious to learn your thoughts.


Students in school, even post-ChatGPT, let alone with better "AI", will find their growth limited. They will never learn how to solve complex math or logic problems, or how to write.

You will also see long-term effects in the industry as the pre-AI generation leaves the market.

It was already hard for entry-level developers to break the can't-get-a-job <-> don't-have-experience cycle. It is even harder now.

Before, there was always some simple busy work that senior developers didn't have time to do, so you would hire a junior developer who needed to be told exactly what to do. LLMs are already as competent as a junior developer. Why hire them?

I see the next level of hollowing out hitting mid-level experienced "ticket takers" who just take well-defined business use cases off the board and do the work. For non-software companies, a lot of that work has already been outsourced to SaaS offerings, where businesses hire a consulting company to do the implementation (various ERPs, EHR/EMR systems, Salesforce, ServiceNow, etc.).


More generally, a large segment of the population now outsources their thinking to ChatGPT. The outcome of that will not be good, especially given the current mismatch between what ChatGPT actually is and what the public thinks it is. People will put too much trust in it, and it will ruin lives. Even for those it doesn't ruin, outsourcing your thinking is a poor way to get smarter.

Socrates: I heard, then, that at Naucratis, in Egypt, was one of the ancient gods of that country, the one whose sacred bird is called the ibis, and the name of the god himself was Theuth. He it was who invented numbers and arithmetic and geometry and astronomy, also draughts and dice, and, most important of all, letters.

Now the king of all Egypt at that time was the god Thamus, who lived in the great city of the upper region, which the Greeks call the Egyptian Thebes, and they call the god himself Ammon. To him came Theuth to show his inventions, saying that they ought to be imparted to the other Egyptians. But Thamus asked what use there was in each, and as Theuth enumerated their uses, expressed praise or blame, according as he approved or disapproved.

"The story goes that Thamus said many things to Theuth in praise or blame of the various arts, which it would take too long to repeat; but when they came to the letters, "This invention, O king," said Theuth, "will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered." But Thamus replied, "Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

- Plato's dialogue Phaedrus, 274c-275b


Well, you know, that's just, like, Plato's opinion, man. (Or maybe Socrates's.)

Plato's opinion, which we know accurately 2300 years later because he wrote it down for us.


> Even for those it doesn't ruin, outsourcing your thinking is a poor way to get smarter.

How many phone numbers do you know by heart versus when you were younger? I know three from memory: my wife's, my mom's (she has had the same phone number since I was born), and an aunt's, who has lived at my grandparents' house since they passed and kept their phone number.

When you go to a new city, do you try to find your way around like you used to, or do you use GPS?


> More generally, a large segment of the population outsources their thinking

Absolutely. I sometimes find myself tempted to just ask Claude the most banal things. It's going to get more and more tempting.


Sam will still be promising in 2035 that AGI is very close and will probably arrive by the end of the year, same as every year (Elon also still promises that FSD is close and will probably ship EOY, though FSD might actually be realistic by then).

People will use AI a little here and there, but not much will have changed because of it. Mostly there will be more work for the people who have to correct AI's mistakes.


I know Sam says AGI is close, but has he ever actually said that it will "probably happen by the end of the year"? I don't follow his pronouncements very closely, but I don't recall noticing him actually going that far.

I strongly suspect that after a few false steps, which always happen when people fall for hype, AI will just be a labor-saving device, like power tools, etc. Certain categories of work won't be necessary anymore, just as we don't have file clerks in the age of databases, but other new jobs will take their place.

The general level of productivity will rise, and most of the benefits of that increase won't be seen by the workers. 8(


We consider the human brain and its thought structure to be close to perfect. In fact, we know it can easily be deceived or misled; it can make wrong decisions because it falls into cognitive errors very easily. We are trying to copy this structure in developing artificial intelligence, which is why it seems so common for it to hallucinate or give wrong answers. If the causes that lead people to make wrong decisions were eliminated, maybe what artificial intelligence produces could be much more realistic and accurate. Maybe then we could talk about Artificial General Intelligence. That is why it does not seem possible to predict whether humans will fix artificial intelligence or artificial intelligence will change our thought structure.

I think kids will have a hard time learning and becoming smart with AI pre-chewing everything for them.

I've read a few stories about parents questioning their children's overuse of AI, and beyond that I've seen my fair share of adults who cannot do anything without asking ChatGPT first.


It will accelerate and deepen the alienation that is already epidemic, and will lead to a great deal of societal, economic, and personal trouble.

Will it lead to an increase in human fertility? Less/no work pressure, more free time, more financial security.

I feel like it's the opposite. These labour-saving technologies are used to concentrate wealth and make workers' lives more precarious.

I have a different view, but for that I need to PM/DM you, any way that is suitable/comfortable for you. I'm not a scammer and I don't have malicious intentions.

It's not the same old boring response; it's one that brings more uncertainty.

It's obvious, but not so obvious that you would likely get it from current AI reasoning models.


Now I'm intrigued. Why not post it publicly?

It has shaken up my core beliefs and assumptions.

Reading these has validated some of my points. https://pastebin.com/y1i6KiY1

I don't want to mislead anyone. Neither do I want to cause trouble to myself or to anyone else. Things can be troubling.

I am open to posting it but I would like to discuss it with someone on HN before doing so.


Consolidation of power.

What would that look like? And what would be the societal implications?

The current version of it is that senior engineers (the ones who can talk to stakeholders and manage complexity, not "senior" as in "I codez real gud") are not having too much trouble finding jobs that pay well. But non-senior "ticket takers" are struggling.

That will get worse when you need fewer non-seniors.


everything

Well the most pressing question is whether it will kill us all. There are good reasons to suspect that; Nick Bostrom's Superintelligence: Paths, Dangers, Strategies (2014) remains my favorite introduction to this thorny problem, especially the chapter called "Is the Default Outcome Doom?" Whether LLMs are sufficient for artificial superintelligence (ASI) is of course also an open question; I'm actually inclined to say no, but there probably isn't much left to get to yes.

A lot of smart people, myself included, find the argument convincing, and have tried all manner of approaches to avoid this outcome. My own small contribution to this literature is an essay I wrote in 2022, which proposes privately paid bounties to induce a chilling effect around this technology. I sometimes describe this kind of market-first policy as "capitalism's judo throw". Unfortunately it hasn't gotten much attention, even though we've seen this class of mechanisms work in fields as different as dog littering and catching international terrorists. I keep it up mostly as a curiosity these days. [1]

Suppose instead the answer is no. That future is boring: our current models basically stagnate at their current ability, we learn to use them as best we can, and life goes on. If we take the answer to "Does non-aligned ASI kill us all?" to be no, and the answer to "Do we keep developing AI, S or non-S?" to be yes, then I guess you could assume it would all work out in the end for the better one way or another and stop worrying about it. But we'd do well to remember Keynes: in the long run, we're all dead. What about the short term?

Knowledge workers will likely specialize much harder, until they cross a threshold beyond which they are the only person in the world who can even properly vet whether a given LLM is spewing bullshit or not. But I'm not convinced that means knowledge work will actually go away, or even recede. There's an awful lot of profitable knowledge in the world, especially if we take the local knowledge problem seriously. You might well make a career out of being the best informed person on some niche topic that only affects your own neighborhood.

How about physical labor? Probably a long, slow decline as robotics supplants most trades, but even then you'll probably see a human in the loop for a long time. Expertise in old knob-and-tube wiring, for example, is hard enough to find, let alone to distill into a model, and the kinds of people who currently excel at that work probably won't be handing over the keys too quickly. Heck, half of them don't run their businesses on computers at all (it's much easier to get paid under the table that way).

Businesses that are already big have enormous economic advantages in scaling up AI, and we should probably expect them to continue to grow market share. So my current answer, which is a little boring, is simply: work hard now, pile money into index funds, and wait for the day when we see the S&P 500 start to double every week or so. Even if it never gets to that point, this has been pretty solid advice for the last 50 years. You could call this the a16z approach: assume there is no crisis, assume things will just keep getting more profitable faster, and ride the wave. And the good news is that if you have any disposable capital at all, it's easy to get a first toehold by buying e.g. Vanguard ETFs. Your retirement accounts likely already hold a lot of this anyway. Congrats! You're already a very small part of the investor class.
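
To make the "pile money into index funds" arithmetic concrete, here is a minimal Python sketch. The 7% real annual return and the $10,000 starting sum are illustrative assumptions of mine, not figures from this thread:

  # Compound-growth arithmetic. The 7% real annual return is an assumed
  # long-run figure for broad index funds, not a prediction.
  principal = 10_000  # hypothetical one-time investment
  rate = 0.07         # assumed real annual return
  for years in (10, 20, 30):
      value = principal * (1 + rate) ** years
      print(f"After {years} years: ${value:,.0f}")
  # -> After 10 years: $19,672
  # -> After 20 years: $38,697
  # -> After 30 years: $76,123

No weekly doubling required; plain compounding already does the boring work.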

[1]: https://andrew-quinn.me/ai-bounties/


It's not going to kill us all; it's only going to make us all very unhappy.

Prove it! Dr. Bostrom would be very happy to be wrong, to say nothing of myself.

I can no more prove my speculation than Dr. Bostrom can prove his, and I have no more need to do so than he does. All any of us can do on this topic is speculate. We'll only know for certain what's going to happen when it happens.

If you can't prove it, then debate it! Dr. Bostrom's argument is not mere speculation: he's a trained philosopher, with a comprehensive argument and plenty of his own assumptions laid bare. If there is any thought to your logic beyond "it is what it is, what will be will be," surely you can find something concrete in there to challenge.


