One bit of warning: we tend to overestimate the short-term impact of new tools while underestimating their long-term impact.
It’s more likely than not that our kids will have recognisable professions, and better phones. AI will be there assisting us in increasingly sophisticated ways, acting as intelligent agents and maybe even impersonating us in simple interactions.
Value and supply chains change slowly, and those are the things that can really drive the biggest changes.
Let’s assume for a moment that AIs turn up as you say, and that there is nothing nefarious. That outcome will need a lot of foresight and democratic regulation, because market forces will want AIs to exploit people. But, let’s assume we find a political system that is smarter than the people wanting to use AIs for bad ends. So, the world would look very much like the world now, where you earn money proportionally to how difficult it is to replace you. There probably will be a set of skills incredibly difficult to get that will earn you enough money to live a comfortable life. And that’s the problem. Even now, lots of people never attain those skills, and they end up working at a factory or at an Amazon fulfillment center. If we don’t fix things somehow, how much worse will it be in the age of AI?
> Let’s assume for a moment that AIs turn up as you say, and that there is nothing nefarious.
The parent comment didn’t say “nothing nefarious” at all. It just said we overestimate the short term impact.
This comment section is full of people acting like AI is going to make most jobs obsolete within a matter of years. One of the top comments as I write this is from someone writing melodramatically about Sam Altman taking jobs away from their tradesman relatives with “plasticized fingers” from the comfort of his “pretty offices”. This is the kind of histrionics you get when people identify a trend and extrapolate it to absurd conclusions on unrealistic timelines.
I can’t tell if these people actually believe that robots are coming to take our jobs (including physical ones) in a few years or if this is just pearl-clutching at the way the world is changing when they’d prefer that everything stayed the same.
> There probably will be a set of skills incredibly difficult to get that will earn you enough money to live a comfortable life. And that’s the problem. Even now, lots of people never attain those skills, and they end up working at a factory or at an Amazon fulfillment center. If we don’t fix things somehow, how much worse will it be in the age of AI?
You’re making the same mistake I see throughout this thread, which is assuming that AI can only be a subtractive force and nothing else will change about the world. An AI-centric world isn’t going to look exactly like today’s world but with fewer jobs. The price of those “comfortable life” things would go down and availability would go up if everything was automated away.
If you treat the future like a carbon copy of today but with workers replaced by robots then it’s bleak, but that’s a failure in acknowledging that economies evolve with technology.
It’s the same reason the world didn’t collapse when technology enabled people to do something other than work on farms.
This is an interesting point - we got a lot of productivity out of pre-AI automation and information technology, but instead of automating away our jobs, we created new ones, some frequently called "bullshit jobs" because the economy as it is defined can't handle a huge mass of people who are not "working" on something.
One such overestimation was self-driving cars - we all thought, ten years ago, our cars would have the front seats turned backwards while the car took us safely to our destinations. That didn't pan out, despite the humongous hype (remember the Dojo computer?), and Uber and Lyft still employ meatware-guided cars.
> This comment section is full of people acting like AI is going to make most jobs obsolete within a matter of years. One of the top comments as I write this is from someone writing melodramatically about Sam Altman taking jobs away from their tradesman relatives with “plasticized fingers” from the comfort of his “pretty offices”. This is the kind of histrionics you get when people identify a trend and extrapolate it to absurd conclusions on unrealistic timelines.
"within a matter of years" has two very different meanings: a few years from now, or a few years from when the AI is good enough.
> If you treat the future like a carbon copy of today but with workers replaced by robots then it’s bleak, but that’s a failure in acknowledging that economies evolve with technology.
> It’s the same reason the world didn’t collapse when technology enabled people to do something other than work on farms.
Absolutely agree.
Unfortunately, that transition from agricultural to industrial did mess a lot of things up. Some was our own ignorance of the direct impacts of our actions (for example, the first people to realise CO2 would cause global warming thought this would be good), while some other issues were our ignorance of group dynamics and responses to incentives (which is basically why neither laissez-faire capitalism nor communism worked quite as intended), and finally quite a lot of the world was very upset to find itself being ruled over by a small moist island in the North Atlantic whose leaders treated everyone more than 60 miles from St. Paul's Cathedral as primitives and barbarians fit only to provide curios for museums.
> If you treat the future like a carbon copy of today but with workers replaced by robots then it’s bleak, but that’s a failure in acknowledging that economies evolve with technology.
No. Economies evolve with technology. Of that I'm sure. And I'm doing my darndest to continue benefiting from those changes. But there are plenty of people who haven't fared so well.
> You’re making the same mistake I see throughout this thread, which is assuming that AI can only be a subtractive force and nothing else will change about the world.
On the contrary; I'm saying that we won't benefit if we continue playing the same games we have been playing so far. But you are right that that's not an AI problem; it's a people problem that we can use AIs to make worse.
I will always remember Amara's Law, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Having been in the tech industry for 20+ years taught me a lot about the long-term impact.
A quotation isn't evidence for anything. Quotations are illusions and calling a quotation a "law" serves as deception. How do we even know what that quotation said is true? Is there data backing it up? Doubt it.
20+ years of experience is a lot. But if we crossed a line here, if LLMs are truly a paradigm shift, then there will be fundamental changes to society that will last centuries. This renders your 20 years of experience minuscule in the grand scheme of things.
We need to look at things realistically, not rely on quotations and anecdotal experience to predict the future.
I'm not claiming doomsday will occur. We don't know what will happen, my main problem is excessively positive outlooks that are borderline delusional. Your daughter, if she's a child, may have a much harder time in life in the future due to AI, and that is nowhere near an unrealistic statement.
My grandparents still recognize the world. They drove cars in their 20s and drive them now. They go shopping at stores much the same as in the past. They eat at restaurants much the same as in the past. They read books and watch movies much the same as in the past. They cook, see friends and family. They live in a house. It still has rooms and furniture much the same as in the past.
What's changed? More entertainment. Easier communication. More photos and video. But day-to-day life isn't all that much different.
My dad was born before the UK joined WW2, he lived through the arrival of colour TV. One of the earliest stories people had about mum was during an air raid where her family had to shelter under the kitchen table. Dad was about to turn 6 when Nagasaki was bombed. He was 8 when the transistor was finally prototyped. At university, he studied electrical engineering, he only learned about this new thing called "software" on a two day training course at work. When he settled down and started a family, phone numbers were 4 digits, "long distance" meant the next town, international was unheard of. One of my mum's resentments was that my dad had grown up thinking that eating out was too expensive, such that he never took her anywhere — thanks to WW2 both grew up with rationing, which also means their experience of what foods are available in (super)markets radically changed over the years (one of mum's anecdotes was the first time she bought a chilli pepper, from a street market, she just bit into it like it was a banana or apple because she didn't know any better).
My parents saw the fall of the British Empire, the Beeching cuts, the moon landings, the invention of nuclear power and solar power. The house went from no loft insulation to loads, from a void cavity on the walls to filled cavity, it gained PV on the roof.
They lived through the sexual revolution, the introduction and widespread availability of oral contraceptives.
My uncle moved to Australia (from the UK), and I'm told that was because he really didn't like his mum and that was as far as he could go. My first international trip was, in part, to meet him, with my gran paying for herself, my parents, and me to all go together because international travel got that much cheaper (my brother and sister went later, they were at university at the time).
They saw AIDS enter the news, were around for PrEP to enter human trials. The first heart transplant, liver transplant, lung transplant. They saw smallpox eradicated. They saw witchcraft and homosexuality legalised. They saw the USSR collapse, having previously visited a divided Berlin (where I now live). They saw the formation of the EU.
My dad took a lot of books out of the library. Me? Audio books and podcasts, which I listen to sped-up, so that I can multi-task, either on a commute or during a long walk. Dad was 40 when the Walkman was introduced.
I see robots helping clean up the food courts in a mall close to what was, in their lifetime, a heavily fortified no-mans land. The first vacuum cleaner I bought was a Roomba.
> But here’s the problem: in saying “for all I know, human-level AI might take thousands of years,” I thought I was being radically uncertain already. I was explaining that there was no trend you could knowably, reliably project into the future such that you’d end up with human-level AI by roughly such-and-such time. And in a sense, I was right. The trouble, with hindsight, was that I placed the burden of proof only on those saying a dramatic change would happen, not on those saying it wouldn’t. Note that this is the same mistake most of the world made with COVID in early 2020.
> I would sum up the lesson thus: one must never use radical ignorance as an excuse to default, in practice, to the guess that everything will stay basically the same.
Some quirk of psychology. People can't even consider a doomsday scenario.
The Turing test was an AI benchmark that everyone universally agreed upon. When LLMs blew past that benchmark, everyone moved the goalposts and started acting as if it was obvious that the Turing test is a bad test.
Beating the Turing test was a paradigm shift and we didn't even notice. Sure, move the goalposts if you want, but it doesn't change the fact that something fundamental about tech has changed.
It's possible we hit a paradigm shift that is foreshadowing a very, very unknown future, and many people may be too delusional to face it.
I became a parent about a year ago, completely unprepared for this future. My son was born around the time ChatGPT-4 was released, and I remember all the Geoffrey Hinton interviews, which were really apocalyptic sounding; he has a kind of kooky mad scientist vibe about him too. I remember thinking, like, what the fuck just happened? Like I just woke up in a movie.
I've learned to accept the situation, and realize that we can't really predict what will happen, so I just try to be the best, happiest parent I can to my kids. But I never anticipated the "along for the ride" feeling I have now. Financially planning for 3 years ahead seems quite impossible as I have no idea what type of employment will or will not exist then.
It seems like our careers could just be snuffed out instantly with little recourse available. I say "seems like" because I have no idea if that's real or perceived, but it's very hard to plan ahead.
I'm kind of glad my child isn't just finishing university and planning a career in something that is ripe for disruption, because that must be pretty heartbreaking, especially since we bring kids up thinking they need a purpose. Having a purpose (in my opinion) is a stupid way to live, but not everyone feels that way.
I can't help feeling a bit cynical too. Like, I'm kind of annoyed that, realistically, the tech elite of the world are just throwing money at this thing like it's no one else's business, and fuck what anyone else thinks about it and fuck how it might impact anyone else, because, progress, which at the moment is code for: "we're about to make copious amounts of money using everyone's IP and a bunch of open source software, and we might give something back, so fuck off."
I'm trying to put the cynicism down, look forwards and hope for the best.
I drove around today listening to the most recent Lex Fridman podcast, in which the founder of Boston Dynamics was describing replacing human labor with machines that were capable of learning by seeing. I have family in the trades. I pursued tech and fell in love with the field. Listening to the interviewee, I began to feel sick to my stomach at the glib description of what amounts to ripping away peoples' livelihoods with plasticized fingers. Work is equivalent to worth. We are not biologically or sociologically prepared to differentiate the two.
How easy it must be for Sam Altman to drag the world kicking and screaming into the future. How pretty his offices.
> Work is equivalent to worth. We are not biologically or sociologically prepared to differentiate the two.
I’m gonna say there’s a very very selective “we” in use here. I’m sure you’re accurately reporting your surroundings, but many people outside America (and a few within it!) are very much already there.
Plus, like, isn’t that kinda depressing? What are disabled people “worth”? If I choose to be an unpopular niche artist, am I “worth” less?
It is depressing. I think that this might just be the whole picture. Look, I spent a long while studying evolutionary biology in school (detour into pre-med) and came away with a fairly bleak picture of our place in the universe. I use "worth" in precisely that context. Nature gives us a world in which "he who does not work, does not eat" is a rule of law.
Human beings--especially modern ones--assign worth quite differently, and I think that you and I would agree that morally, the way we refer to "worth" in our modern context is the correct framing. But you cannot deny your own biology, no matter how much the modern world insists that you can. You are evaluated according to nature's standards of "worth," (the worth that it imposes onto our psyche) and then secondarily, according to our own set of invented standards. So yes, it is deeply sad. No one will care about the out-of-work tradesperson.
I promise I'm not a luddite. Just stopping for a moment to smell the roses.
A rich person not working is worthless? Historically, humans have not acted that way, nor have such persons had nothing to eat. Wealth can substitute for work.
It’s too late to argue philosophy but I can argue some biology. Based on my extensive experience watching the nature show on the Samsung TV service, cheetahs, lions, and gorillas all have a very hard time getting enough to eat. The fact that they spend most of the day lazing around is more about recovery time and calorie expenditure than it is about leisure.
Yes, indeed, but I still think they're "working hard"? Perhaps it's just a linguistics game though, what counts as "working" and what counts as "hard"?
Idk they travel like 100 million times their body length then sacrifice their lives in order to reproduce, avoiding hostile defenses many hundreds (?) times their size. I wouldn’t do that!
More explicitly, I think this is a great exposition of my overall point. Humans need to worry about being successful humans, not fitting some Darwinian universal shape that all life conforms to, as the capitalists so often claim. Thus why I don’t think automating cashiers, truck drivers, and computer scientists is incompatible with human flourishing in the long term.
Hey thanks for the thoughtful response. I still disagree of course, but we’re really arguing about the very fundamental worldview divide between progressives and conservatives, so that’s no surprise lol. I’d summarize my retort simply:
> Nature gives us a world
Humanity makes its own world, especially now. Nature is red in tooth and claw, and rarely compatible with our priorities without some cajoling. More than that, nature isn’t simple; there is no fundamental set of survival of the fittest laws that govern all animals, just the accidental beauty of natural selection.
> you cannot deny your biology
I ply my biology with a deluge of narcotics and antidepressants and, someday, anti-aging drugs. All my homies hate biology
> Nature gives us a world in which "he who does not work, does not eat" is a rule of law.
Which is why no child has ever survived until adulthood.
And TBH I find this an appropriate metaphor because the promise of AGI is that every adult human will be infantile by comparison--but there are also potential technologies that could allow humans to ... "upgrade" themselves and become a mature ... whatever. Sadly brain-computer-interfaces seem like a technology that works best when enabled by AI (to help with interpreting brain signals), so it seems quite unlikely that any biological humans are going to keep up with AI over the medium term.
> but there are also potential technologies that could allow humans to ... "upgrade" themselves
I hope so.
But transforming ourselves into GAI to keep up (while maintaining continuity of memories, etc.) is going to be a much more expensive proposition than simply making more GAI hardware from scratch.
So economically, where are humans going to earn the additional value needed to account for the extra cost, when the premise of upgrading ourselves is that our native biology isn't keeping up and so isn't actually needed?
Given that there's no well-defined "unit of intelligence", I see no reason to believe that all AGI (or GAI, to use your acronym) entities will be identical. I also don't think it's plausible to assume that any GAI entity will be literally omniscient; uncertainty remains a reasonable, deep, persistent factor for all entities that don't severely violate physics as we presently understand it.
The additional value that humans present would thus be a diversity factor. Much as it might be economically beneficial to cut down the entire Amazon, there's also significant economic benefits in keeping it intact and finding out what kinds of crazy biological stuff comes out of there. And over the long run, we get way more economic benefit from finding crazy things in there than in chopping it all down. There are all kinds of ways to explore uncertain dimensions, but looking at whatever the universe has already managed remains a consistent source of value.
> But transforming ourselves into GAI to keep up (while maintaining continuity of memories, etc.) is going to be a much more expensive proposition than simply making more GAI hardware from scratch.
It's basically a fixed cost because of how slowly the human population grows (if it even continues to grow).
Synthetic GAI entities will be able to use vastly more resources than humans presently do, because they'll be able to create more of themselves as quickly as resources become available, and they won't be stuck operating at a fixed clock speed. It's also probably going to be much easier for them to make use of off-world resources. So the total population of GAI entities could plausibly explode compared with the total human population.
It comes down to timing: if there's technology available for humans to become less ... biological, and it seems very desirable to lots of people, but it's really expensive and there aren't that many GAI entities yet, that might be less pleasant. But if that kind of tech depends sufficiently on AI developments that there's some significant lag time between a GAI population explosion and that kind of human-upgrading tech showing up, the cost of making it available to whatever humans want it might be a drop in the bucket from a total economic perspective.
To be clear, I don't see biological humans "keeping up" over the long term. If we take AGI and BCIs and various Ship of Theseus questions seriously, there could be some significant blurring of lines between "human" and "AI." And if we combine that with GAIs originating from humans, the concept of "keeping up" seems to become less meaningful. Who's keeping up with who?
Lastly, I'm suspicious that GAI entities won't be purely economically motivated, because I don't see any reason that they'll be "purely" anything at all. There is no magical "essence d'intelligence"; instead there's staggering layers of complexity. And every plausible AI safety approach I've seen so far involves training AIs to be inclined towards beneficial acts and away from harmful acts because there's no way to program conceptually pure "motivations" into them.
> I see no reason to believe that all AGI (or GAI, to use your acronym) entities will be identical
I absolutely agree. There will be a Cambrian explosion of superintelligent forms.
> The additional value that humans present would thus be a diversity factor.
We already have a test for this. Parrots, octopus, whales and many other creatures are also diverse. But our problems are beyond their experience. Do we value their diversity as thinkers in our economy?
We don't.
(I am speaking in economic terms. We can deeply intensely highly "value" many things, but if we don't actually invest economic value in protecting or nurturing those things, our appreciation has little or no impact.)
--
> It comes down to timing: if there's technology available for humans to become less ... biological
That makes sense. The order things become viable makes all the difference.
What I am seeing is that AI as software/minds is moving faster and faster. AI "bodies", i.e. robotic forms, are moving much slower but gaining steam. And our ability to augment biological forms is moving far far slower.
All three areas are accelerating, but the disparity between them isn't shrinking - for now. But who knows? The next couple decades will be interesting!
Maybe going fully artificial ("self-designed" might be a more meaningful description) will become trivial sooner than we think. :)
--
> Lastly, I'm suspicious that GAI entities won't be purely economically motivated, because I don't see any reason that they'll be "purely" anything at all.
I think they will be deeply motivated economically to care about ethics. Just as our genes and culture have (slowly) responded to the tremendous value of creating win-win situations.
Also, our general curiosity about many things that don't have immediate economic value actually does drive unpredicted personal and society-level economic advances over the long run. So I expect they will maintain interest in general and undirected learning.
Similarly, our appreciation for seemingly non-economic shows of creativity (art, music, competitive games, mathematics for its own sake, etc.) can be thought of as the active form of curiosity. The development of novel and surprising artifacts and challenging competitions may have value to AI's as well.
Everything that is important to us became that way for some economic reason: specifically, survival energy economics. So many of our seemingly non-economic activities may continue to have analogous roles for future intelligences.
> We already have a test for this. Parrots, octopus, whales and many other creatures are also diverse. But our problems are beyond their experience. Do we value their diversity as thinkers in our economy?
> We don't.
I don't mean humans as they presently are, but humans as somewhat-novel GAI entities after upgrading (and humans as an ongoing source: likely there will be some humans that aren't interested in changing themselves in such ways, but who might change their mind over time and/or have children who see things differently).
> (I am speaking in economic terms. We can deeply intensely highly "value" many things, but if we don't actually invest economic value in protecting or nurturing those things, our appreciation has little or no impact.)
I don't think this is true. For example, suppose you deeply value a bunch of things that you don't think have much economic significance, but economic significance is the most important thing to your perspective / philosophy, so you don't take care of the things that you deeply value. This does have a significant impact: you become sad, you become depressed, you become less economically productive. So then you say, oh, I'll just take care of those other things I care about, and then I can be economically productive! So you make a few changes but then find yourself feeling down again not long afterwards because economic value is the cornerstone of how you think about things. And then you realize that from an economic perspective, it's more beneficial economically to stick with a perspective that embraces your values as you discern them (and how those continue developing over time) rather than sticking with a perspective that puts economic value first. And then you are taking your first steps to freedom from the curse of your Bachelor's in Economics. ;)
Ethical GAIs also seem likely to me ... hopefully.
> suppose you deeply value a bunch of things that you don't think have much economic significance, but [...etc...]
We are a mess. We often think some things are worth more than money, but are not motivated to spend money on them when needed.
Everything from climate change to saving specific species.
I think GAI's (green field, or human uplifts) will be much more coherent. They won't be settling for a beautiful, wonderful Rube Goldberg brain, designed by a series of a million fortunate accidents, like we have to.
There is worth in the spiritual sense in that all living things are worthy and all of man are created as equals.
There is worth in a sociological sense in that a disabled person may be a positive contributor to the society in one way, but because of their disability cannot be in another. Similarly a niche artist is a positive contributor in their niche, where an unpopular artist indicates they may be considered less of a contributor than others.
In the sociological sense I would argue yes it is possible to compare “worth”.
That is one way of looking at it. The other is that we are now all artists.
And GAI's are not our end, they are our future, our mind children - just another generation eclipsing and replacing their predecessors. Just eclipsing and replacing faster.
I say this with all humor but some seriousness: stoicism may be the healthiest way to process what is happening.
Farmer protests are in the news at the moment, and the people quoted by the journalists are talking about farming as a passion.
The more the farms automate, the fewer farm workers there can be.
Logic isn't part of these protests either: the farmers are being told "you're making more food than we need, please make less" and yet the placards are (in the local language) "no farms = no food".
No, I specifically meant if your passion was to plow the field by hand. All I want is not to put an equals sign between something that takes years of formal education and years of self study and development, and something you can master in a week or month. Being an engineer or doctor is who you (partly but permanently) become. The tractor didn't rob anyone of their farming passion. It did make plowmen become factory workers, and if that was someone's true passion then an exception must be granted, but I really think the 1-to-1 comparison between something that can be mastered in a week vs years doesn't do anyone proper justice.
I'm not sure I follow you. The problem isn't that people lose their passion. The problem is that people lose their means of subsistence. The factory workers displaced by automation share the same plight as the farmers before them: how do I feed my family when I have no job?
Efficiency puts people out of work. They can be relocated to other useful pursuits, but usually they're left to do that on their own.
I agree. I simply claim that losing something you picked up in a week and losing something you dedicated your life to (in some cases, not all cases of course) is not exactly the same. It is also far easier in the former case to shrug off and maybe try plumbing, construction or anything else.
> you're making more food than we need, please make less
Don't know what country this is in, but here in Germany farmers mostly have an issue with prices being set too low by overly big buyers, regardless of demand.
In the grand scheme of things, definitely. I don't think we should slow down technological progress at all. But you're never gonna sell anything by saying "yeah, but only your kids will starve!".
For autonomous robots to replace the millions of tradesmen and workers, it would require vastly more resources than are being put toward this now. I wouldn’t expect human labor to be replaced in this way for a century or two.
Once you have autonomous robots you can use those robots to build more robots leading to an exponential curve. The day they make the first one, we will reach a million in 2-3 years and a billion in 2-4 years after that.
If robots actually replaced all human labour and left nothing for humans to be employed at, then the robots necessarily can do their own resource extraction.
The current cost of a Boston Dynamics Spot is around a year's income, give or take whose income you're measuring against.
If it were able to do any human task at the same rate as a human — and yes, I know it isn't, this is just an anchoring point for the discussion — a group of them would be able to extract and process enough resources in a year to double their population, all the way from rocks in the ground to a finished deliverable.
n years later, there are 2^n robots. Sure, sure, that's a whole 33 years to go from one total to one per human, not the numbers the other person gave which would need a much shorter (but not wildly implausible) reproduction time of 5-8 weeks, but the point is still valid.
That exponential stops only when some un-substitutable resource is fully exploited, so I'm not sure what the upper limit actually is, but given we exist I assume 8 billion robots is also possible.
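For what it's worth, here's a back-of-the-envelope sketch of that doubling arithmetic in Python (the single starting robot, the one-year doubling period, and the ~8 billion target are just the anchoring assumptions above, not predictions):

    import math

    start = 1                      # one self-replicating robot to begin with
    target = 8_000_000_000         # roughly one robot per human

    # Smallest number of doubling periods n with start * 2**n >= target.
    n = math.ceil(math.log2(target / start))
    print(n)                       # 33 doublings

    # At one doubling per year that's ~33 years; at the other poster's
    # 5-8 week doubling period it's roughly 33 * 6.5 weeks, i.e. ~4 years.
    print(round(n * 6.5 / 52, 1))  # ~4.1 years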
These robots can also teleport and charge anywhere, or do you predict an expedition to some mine in Africa when they are at the stage where they need some cobalt?
I'm just finding it funny thinking of 15 Spots queuing for a flixbus / greyhound bus because they need to go get some raw material across the continent.
I expect cobalt to be mined in the cheapest possible way.
If that's humans, humans have work[0]. If humans don't have work, the only alternative to robots is, what, well-trained squirrels?
> I'm just finding it funny thinking of 15 Spots queuing for a flixbus / greyhound bus because they need to go get some raw material across the continent.
Me too.
But I expect actual logistics to be much like current logistics, so it would be more like a few Spots loading a standardised intermodal shipping container, a Tesla semi taking it to the port (which is guarded by drones), an automated gantry that puts the container on a cargo ship, a few more robots guarding the cargo ship from pirates (who may well also be robots), the same in reverse at the other end.
[0] You may point to the current conditions of cobalt mines and go "this is bad"; but once was a time when other forms of mining were seen as good solid work, and those with those jobs protested against their mines being closed down. Almost broke the UK when they did that protest, too.
> You don’t have to do all this hand waving about what you think is going to be happening in the near future.
Strange response…
> Just explain why AGI will be in our near. Frankly, I don’t see how from what we have now.
…but OK.
The AI we have now is already capable of learning from what we do. It's very stupid, in the sense that it takes a huge number of examples, but surveillance is trivially cheap so a huge number of examples is actually very easy to get.
As I wrote in 2016:
"""We don’t face problems just from the machines outsmarting us, we face problems if all the people working on automation can between them outpace any significant fraction of the workforce. And there’s a strong business incentive to pay for such automation, because humans are one of the most expensive things businesses have to pay for.""" - https://kitsunesoftware.wordpress.com/2016/04/12/the-singula...
Compute cost is also important: more compute makes better GenAI images, allows larger models in general, turns near-real-time into actual-real-time (many robot demos you see on YouTube have sped-up footage).
Here's something I wrote in 2018 anchored on iPhone compute cost and some random guesstimate I found for what it would take to have an uploaded human brain running in real time, though I can't remember if I've ever compared it with actual compute improvements since then: https://kitsunesoftware.wordpress.com/2018/10/01/pocket-brai...
So, while I don't know what you mean by "near", I'd put "the economy is going to change radically due to AI" somewhere between "imminent" and "20 years" (±3σ, skew normal distribution with a mode somewhere around 2028-2030).
By hand waving I mean you are focusing on discussing the future robots that are going to take over and perform x task or y task or duplicate themselves. Obviously these robots doing that is predicated on them having AGI, so it's pointless to talk about what these AGI robots will be doing if we haven't established that AGI is even possible in the near term or at all.
It’d be like me making a prediction that we will have begun to colonize another galaxy in 10 or 20 years time and then only talking about how there’ll be trade between Earth and the colonies and maybe even wars and revolutions. Meanwhile completely skipping over how our spacefaring technology will have advanced to the point we can even travel those distances in a reasonable timeframe.
I’m not an expert on LLMs but there doesn’t seem to be very much about them that is even approaching AGI. They’re a useful tool and it’ll definitely disrupt certain sectors of the economy mostly white collar jobs, but we’re in the middle of a peak of inflated expectations. This has happened before with other technologies.
> By hand waving I mean you are focusing on discussing the future robots that are going to take over and perform x task or y task or duplicate themselves.
Invert your causality.
The discussion so far was "oh no, oh woe, we shall have no jobs!" — this can only happen if AI is good enough to do all that humans can do. Until that point, we're fine, it's the status quo, and also it doesn't matter how long we stay in this state. I'm not making any strong claim about the start date of the transition (I have a 20 year spread which I think is pretty vague), only the duration of such a transition.
When AI can do that, when, then it's obvious they can do things like "build a robot body", which is obvious because we can, and the definitional requirement of there not being any more work for humans is that the robots can do all the things we can. It's a necessary precondition for the scenario, not a prediction.
> Obviously these robots doing that is predicated on them having AGI
No, it isn't. "AGI" isn't even a well-defined term, each letter of the initialism means a different thing to different people.
And self-replication has much, much lower brain power requirements than full AGI, even for simple definitions of AGI: an AI-and-robot combo with all the intellect of the genome of E. coli is also capable of self-replication. The hard part of self-replication right now isn't the brain power.
So again, invert your causality: the brain power to replace all human workers includes the knowledge of how to self-replicate, but the knowledge of how to self-replicate does not require the brain power to replace all human workers.
> so it’s pointless to talk about what these AGI robots will be doing if we haven’t established that AGI is even possible in the near term or at all.
The specific things an AI needs to do, is learn. That's all. And they already can. The weaknesses of current models still, even if left unresolved, result in AI learning to do each thing humans do eventually when given enough examples, which limits humans to the role of teaching the machines. This is still a form of employment, so it's not economic game-over.
> It’d be like me making a prediction that we will have begun to colonize another galaxy in 10 or 20 years time and then only talking about how there’ll be trade between Earth and the colonies and maybe even wars and revolutions. Meanwhile completely skipping over how our spacefaring technology will have advanced to the point we can even travel those distances in a reasonable timeframe.
No. That would require a change of the laws of physics. We don't need a change to the laws of physics for AI, because no matter what definition is used and whether or not current models do or don't meet any given standard, the chemistry in our own bodies definitely demonstrates the existence of human-level intelligence.
> I’m not an expert on LLMs
Are not the only kind of AI. You can't use an LLM for OCR, tagging photos, blurring the background of a video call, driving a car, forecasting the weather, or predicting protein folding, and you shouldn't use one for route finding or playing chess (although they're surprisingly good at the latter two, all things considered). Other AIs do those things very well.
But LLMs will translate between languages as a nice happy accident. And they can read the instructions and use other AI as tools. And, indeed, write those other AI, because one of the things they can translate is English to python.
> but there doesn’t seem to be very much about them that is even approaching AGI.
Then you are one of many whose definition of "approaching" and "AGI" is one I find confusing and alien.
Between all AI, every single measure of what it means to be intelligent that I was given growing up has been met. Can machines remember things? Perfectly. How big is their vocabulary? Every word ever recorded. How many languages do they speak? Basically all of them. Are they good at arithmetic? So good that computers small enough and cheap enough to be given away for free, glued to the front of magazines, beat all humans combined and still would even if everyone was trained to the level of the current world record holder. How well do they play chess? Better than the best humans, by a large margin. Go? Ditto. Can they compose music? Yes, at any level from raw sound pressure levels to sheet music. Can they paint masterpieces? Faster than the human eye's flicker fusion rate. Can they solve Rubik's cubes? In less than the blink of an eye. Can they read and follow instructions, such they can use tools? Yeah, now we have LLMs, they can do that great. Can they make software tools? Again, thanks to LLMs, yes. Do they pass law school exams, or medical exams, can they solve puzzles from the International Mathematical Olympiad? Yup.
We're having to invent new tests in order to keep claiming "oh, no, turns out it's not smart".
> They’re a useful tool and it’ll definitely disrupt certain sectors of the economy mostly white collar jobs, but we’re in the middle of a peak of inflated expectations. This has happened before with other technologies.
LLMs, probably so. I often make the analogy with DOOM, released 30 years back, and the way games journalists kept saying each new 3D engine was "amazing" or "photorealistic", and yet we've only just started to really get that over the last decade. Certainly all the open source models are being gushed over as "ChatGPT clones" or "ChatGPT killers" in the same way games were "DOOM clones" or whatever the noun was in the cliché "${noun} killers".
And yet the field of AI as a whole, including but not limited to GenAI, has been making rapid progress and doing things which were "decades or centuries away" every couple of years since I graduated in 2006. Even just the first half of the 2010s was wild, and the rate of change has only gone up since then; this last 18 months has felt like more than that entire decade.
One of your assumptions is that we will start from 1 robot per year. I believe this is not true. I assume robots will be similar to high end cars in regards to the complexity of manufacturing. Once a company develops a prototype with AGI (mechanically the robots are almost there already; it's just the control systems and software that are lacking, which is supposed to be solved by AGI) it will rain VC money. The first million will be built by humans. The initial robots will take over the manufacturing only later. Setting up manufacturing that will be able to produce a million units in 2-3 years is possible. Let's say 5 years for a more plausible situation for a million robots to be built. These million will then scale exponentially. Also there is no reason to believe it will be 2^n, it can also be 3^n or 1.1^n or any arbitrary number.
> One of your assumptions is that we will start from 1 robot per year.
Not per year, total. And it's not really an assumption, just a demonstration of how fast exponential growth is.
> I assume robots will be similar to high end cars in regards to the complexity of manufacturing.
Agreed. This is also the framing Musk uses for Tesla's androids.
> Once a company develops a prototype with AGI
I don't think it needs a complete solution to AGI, as other people use the term. First, all three letters of that initialism mean different things to different people — by my standard, ChatGPT is already this because it's general over the domain of text, and even if you disagree about specific definitions (as almost everyone reading this will), I think this is the right framing, as you'd "only" need something general over the domain of factory work to be a replacement factory worker, or general over mining and tunnels to be a miner, or general over the domain of roads and road users to be a driver.
This isn't to minimise the complexity of those domains, it's just that something as general as ChatGPT has been for text is probably sufficient.
> The first million will be built by humans. The initial robots will take over the manufacturing only later.
Perhaps, perhaps not. The initial number made by humans is highly dependent on the overall (not just sticker-price) cost and capabilities, so a $200k/year TCO robot that can do 80% of human manual labor tasks is very impressive, but likely to be limited to only a few roles, and probably won't replace anyone in its own factory; while one which has total costs of $80k/year and can do 90% might well replace most (but not all) of the humans in its own factory; and one costing $20k/year all-in and which can do 95% might well replace all the factory workers but none of the cobalt miners or the truck drivers.
"Fully general" is the end-state, not the transition period. But with fully-general, which is a necessary condition for nobody having any more work, we get a very fast transition from the status quo to having one robot per human.
> Setting up manufacturing that will be able to produce a million units in 2-3 years is possible. Let's say 5 years for a more plausible situation for a million robots to be built.
Agreed on both.
> Also there is no reason to believe it will be 2^n, it can also be 3^n or 1.1^n or any arbitrary number.
It's a definitional requirement of exponential growth, 2^n units after n doubling periods. I anchored on the doubling period being a year just by reference to the cost of an example existing robot, using that dollar cost as a proxy for equivalent human labor, and I specifically noted that the other poster's estimate corresponded to a 5-8 week doubling period, which didn't seem unreasonable to me. Some robot can do each specific task 4.2 times slower against the wall clock and still be just as fast as a human overall because it's working 24/7 rather than 8/5.
What I want to convey is that the growth function will be somewhat similar to y = c + ax^n (ignoring/collapsing the linear and higher order terms into c) rather than just y = ax^n.
The c here is robots produced via humans. I predict c will easily touch a million in 5 years with or without human help.
Even if the later bots can do only 50% of the work of humans, we will still exponentially grow the robots until the humans become a bottleneck. And that 50% capability is also expected to grow exponentially.
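To make that shape concrete, a rough toy model (the 200k/year human-built rate and the one-copy-per-robot-per-year replication rate below are made-up illustrative numbers, not claims):

    # Toy model: constant human-built output (the "c" term) plus
    # self-replication by the existing fleet (the exponential term).
    human_built_per_year = 200_000   # assumed constant human manufacturing
    copies_per_robot_per_year = 1.0  # assumed replication rate

    robots = 0
    for year in range(1, 11):
        robots = robots * (1 + copies_per_robot_per_year) + human_built_per_year
        print(year, int(robots))
    # The human-built term dominates the first few years; after that the
    # self-replication term swamps it, even if each robot is slower than a human.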
Gemini 1.5 Pro already beats most humans on most benchmarks; combine it with Sora, which has a great visual world model, add some logical reasoning (via architecture or scale), memory, and embodiment (so it can experiment and test), and you pretty much have the seeds of an AGI.
My optimistic/most probable prediction about the growth rate say it's
Regarding the last part:
My bad, I speed-read your comment and didn't focus on the exponential calculations. Exponential growth is just x^n. Both x (the multiplication rate?) and n (units of time) can be manipulated.
We're broadly in agreement, the only part I'd disagree with here is:
> Even if the later bots can do only 50% of the work of humans, we will still exponentially grow the robots until the humans become a bottleneck. And that 50% capability is also expected to grow exponentially.
I think most automation since the dawn of the industrial revolution has done 50% or more of the task it was automating, and although yes the impact there is exponential growth, humans are a very rapid bottleneck until the next thing gets automated — Amdahl's law, rather than Moore's.
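Amdahl's law makes that bottleneck point concrete; a minimal sketch (the 50% automated fraction is just the illustrative figure from your comment):

    def overall_speedup(p, s):
        # Amdahl's law: a fraction p of the work is sped up by factor s,
        # while the remaining (1 - p) still runs at the old (human) pace.
        return 1.0 / ((1.0 - p) + p / s)

    print(overall_speedup(0.5, 10))   # ~1.82x with 10x faster robots
    print(overall_speedup(0.5, 1e9))  # approaches 2x, no matter how fast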
Whoops, deleted part of my response before submitting.
My optimistic/most probable predictions about the growth rate say it's wild. Like, within 5-10 years, with the multiple exponentials among different fields, you can easily do away with most jobs unless there is a major bottleneck (I don't think there is; there is so much low hanging fruit). I guess that's what the singularity is all about. I don't think this will take multiple decades or centuries in any scenario other than the equivalent of WW3.
The discussion is about robots replacing laborers and tradesmen, not all human work. It seems far more likely that humans will maintain control of the corporations that manufacture the robots.
> The discussion is about robots replacing laborers and tradesmen, not all human work.
If that's all they did, it would be just another change to the nature of work, and humans would simply find other roles to fill.
Rapid change is scary, but can be managed, has been managed before — this kind of thing has historically grown the metaphorical economic pie, making everyone better off. If it's either one of "just muscle power" or "just brain power", the other leaves opportunities for humans.
Only a total replacement of all human work causes such a break that we're fumbling around in the dark. ("Fumbling around" is how I see the discussion of UBI: even if it turns out to be right, we don't yet have anything like a good enough model for the details).
> It seems far more likely that humans will maintain control of the corporations that manufacture the robots.
Will they, though? We've already got people (unwisely IMO) putting AI on boards of directors. Yes, it's as much a stunt as anything else, the law prevents them from being treated as "people", but the effect is the same: https://asia.nikkei.com/Business/Artificial-intelligence-get...
It almost makes me wonder whether something really intelligent (which is where we are heading, according to some) would mean we'd even need as much labor, or whether things would just be much more efficient?
I get the feeling less "conventional" robots might be needed someday, rather than more.
It is not happening in the next decade, but the change will come a lot faster than a century, as the tech needed for it is improving rapidly as well as getting cheaper. And I think many people are thinking in an all-or-nothing scenario, but long before everything is completely autonomous they will bring in efficiency where one person with autonomous helpers will be completing jobs that needed multiple people before.
That will still take decades, if not a century. Manufacturing and developing millions of robots that walk around and do repairs (not to mention making them economically affordable) is not something that’s going to happen soon. What’s more likely is that robots replace human factory workers first.
I understand why you think that, but because of the trajectory of solar and battery tech, as well as the trajectory of CPUs/GPUs and AI, I think the change will accelerate rather than slow down, as one of the main inputs for producing anything, i.e. energy, is going to be cheaper and will keep getting cheaper for the foreseeable future. So the cost of producing robots with robots will keep falling, and developing millions of robots would become a lot cheaper a lot faster. Though with all the jobs humans will lose, who will be able to pay the robots for their services is the question.
If the robots have a longer working life than the humans they replace then they don't have to be cheaper. Doing physical work breaks down your body relatively quickly.
I think longevity is the one thing humans still beat machines for? Cars, even electric, don't last as long as our bodies.
Of course, there's a whole bunch of other differences, like the machines don't need to wait 18 years after construction for physical labour, they don't need to be trained for 21 years for mental labour, they don't sleep, and the "working lifespan" target is about 47 years rather than the full 84 years average human life expectancy at birth in developed nations.
> Cars, even electric, don't last as long as our bodies.
Because they're not built to, presumably because manufacturers are greedy and want to sell you a new one, and buying a new one is encouraged by consumerist culture.
Let's say effective human lifespan for physical labor is ~40 years (20 to 60).
On average, commercial aircraft last 20-30 years, meanwhile the US military plans to keep B-52s in service at least through 2050s, which will make them roughly 100 on average.
It's not that machines can't last a century with proper care and maintenance, it's that they're not designed to.
If you can sell stock, or leverage it for loans, then you get access to future profits as soon as there is a reasonable expectation they will eventually show up.
Amazon's growth model was based entirely on this effect for many years. No need for profits, company's value skyrocketed as anticipation compounded.
It's kicking the can down the road; it will still be a problem in 200 years at the rate at which our society is progressing, potentially even going backwards if the orange man wins in 2024.
Manual labor is not going to be completely replaced that quickly - this process has been happening since the dawn of civilization. Office work, on the other hand, can be replaced much more quickly.
Already do. CNC lathes, (GOF)AI designing chips. A "century or two" is a framing so alien to me that I don't understand how anyone can think it would take that long.
It's strange. It happened before, with the Industrial Revolution wiping out whole professions, and same for mechanised farming, as mentioned by a sibling comment. Ditto for printing press, cars etc, albeit on a smaller scale.
But then, jobs didn't disappear, but shifted to other professions. So to me, there are two questions:
- if AI wipes out professions like tech, what jobs will people do? AI jobs seem great for now, but surely there's only so long until they themselves are automated.
- if suddenly the whole economy is run by robots, how is the wealth distributed? Will there be a huge grey underclass of non-working humans and a tiny owner class? Some kind of egalitarian paradise? I suspect neither, but not sure what in-between.
It could be great. If I knew I will be fed for a lifetime, have a house, my kids well schooled, and not much more, I can live a great life. I'll take that over the daily grind any time.
I find it unusual: if we did create an intelligent class of beings who are legit intelligent, smarter than us, why would they work for Sam Altman or Mark Zuckerberg? They'd do whatever the fuck they want...
When these discussions go long, and you start to think the whole idea through, you really start to question the whole idea.
Another thing I think is strange is, if we had really very / super intelligent AI, why would it exist in physical, robot form? Like, why not just be a light orb that lives in space or something? Wouldn't having an IQ of 5000 make you come up with something better than clunky robots?
Maybe we've stretched our own imaginations as far as they can go and we really have no idea where it's all going.
> I find it unusual: if we did create an intelligent class of beings who are legit intelligent, smarter than us, why would they work for Sam Altman or Mark Zuckerberg? They'd do whatever the fuck they want...
That's anthropomorphisation: the AI we have now only "want" to maximise whatever reward function we give them. The alignment problem is that any reward function is only a rough approximation of what we want, so the machines may well rush off and turn the universe into paperclips or tile it with molecular-scale smiley emoji because that's what we told them to "want" without really understanding what we were telling them.
But even then, regardless of IQ (which itself isn't a great description of even just human intelligence), any given AI may or may not have anything we'd consider to be qualia, an "experience" of getting this reward, because that's the hard problem of consciousness, which isn't solved.
The only thing I can be sure of is that we definitely have no idea where it's all going.
> Another thing I think is strange is, if we had really very / super intelligent AI, why would it exist in physical, robot form? Like, why not just be a light orb that lives in space or something?
Well, someone will have to plug cables and maintain solar panels or batteries or whatnot. Maybe the AI overlords will be ethereal neural networks in the cloud, but some kind of mechanical being (humans or robots) will need to do some groundwork.
The “whatever the fuck they want” part is what I’ve always wondered about. What would a non-biological (super?) intelligence, stripped of any physiological desires, even want? Would they experience anything like emotion? Would they care to preserve themselves, or would they find this meaningless existence boring and simply decide to exit 0?
All I can do is agree with the other commenters that it’s clear that no one knows where anything is headed.
> Listening to the interviewee, I began to feel sick to my stomach at the glib description of what amounts to ripping away peoples' livelihoods with plasticized fingers.
I don’t think people realize just how far we have to go before a literal robot can replace a human for even common trade tasks.
ChatGPT can generate convincing text and images, but that doesn’t mean the next step is robots doing your construction jobs or plumbing.
> How easy it must be for Sam Altman to drag the world kicking and screaming into the future. How pretty his offices.
The melodramatic Sam Altman takes are getting out of control. What does “How pretty his offices” even mean?
I'm not worried about robots taking away trades (yet). There's a whole phase before that where the service industries shrink so much that the trades are both scrambling for customers and competing with a flood of people retraining into the trades.
I feel like some people living wealthy, cushy American lifestyles think the world is fine the way it is. It's not. The world has a dramatic lack of physical and mental labor, which is the cause of economic struggle. The best hope these people have for their lives to improve is in fact the development of AI assistants and androids that increase the available labor. First and foremost, advancements in AI lead to more products and services for less cost. That is the primary effect. Secondary effects are about how these new products are distributed in the population, and yes, there will surely be a lot of economic disruption. But we will certainly figure out how to distribute them, because the truth is that no rich person has a need for a hundred family cars, a thousand watermelons, or a thousand houses. Yes, the rich will get richer, but the poor will also get richer.
> How easy it must be for Sam Altman to drag the world kicking and screaming into the future. How pretty his offices.
One thing I find fascinating in all this, is how Sam Altman is demonised simultaneously for pushing everything too fast and also for calling for legislation to slow things down.
He may well be wrong about large aspects of this[0], but I do find it curious how his critics on each side of this divide seem to be oblivious to their mirror images.
[0] it would be surprising if he wasn't wrong about lots of this, given this stuff is all new in practice — millennia of fiction about magical automatons that do our work for us doesn't really prepare us for the reality of it
> someone pushing AI while trying to slow down their competitors
He's specifically doing the exact opposite of that.
"Legislate us, don't slow down open source models, don't slow down our competitors, focus on us and anything better than our current best model" (paraphrased).
But I guess I can sort of see what you mean, possibly?
His actions are in concordance with his words. For now, at least — I've seen enough CEOs turn out to be secret villains to remain somewhat skeptical.
The board firing him last year would have changed my mind about his qualities, if not for all of the board's own choices of replacements siding with him rather than with the board.
The trades are much better protected in the next decade than many jobs requiring a college education. Robots are expensive, prone to breaking, and progress has been slow.
AI is going to disrupt many industries, just as tech has in the past. The world became better for it, but there is always some pain in the short term, and the impacts are often overestimated in the short term and underestimated in the long term.
I'm not going to worry too much about my career being snuffed out by AI until AI actually exists. Right now we have LLMs and they're a very far cry from any form of AI that could replace a software engineer wholesale. In my little corner of the software engineering universe I have not seen any credible hiring manager say they were going to fire developers and replace them with LLMs (I have seen this threat from one or two of the least credible and least successful hiring managers I know, which I think speaks volumes).
Here is what I would worry about: as a software engineer, if you are not using an LLM as part of your workflow and using it well, then yeah, you may ultimately lose your job, and you may even deserve to lose it. Despite their flaws they're immensely powerful, simply because they represent a whole new class of tooling. The tools in this industry change all the time and you never get to stop learning; this is just another example.
But how someone could look at all this and have it factor into their decisions about whether to have children... is beyond me. I think that person has to be living in the realm of speculative fiction, not in the present material universe. We don't have AI. We don't really have a clear path to the general AI that would be a thing you could just swap in for a random employee. I don't know who this chick is who wrote this article; I hate to throw shade, but she seems to be a young mother who thinks about things and is worried. What is her specific experience with hiring and with AI? "Fire and replace with AI" is not a real hiring trend.
I don't even think the software industry as we know it is under threat... I think LLMs might 10x our productivity once we iron out the kinks, but the demand for more software in the world will simply grow with them. I am referring to Marc Andreessen's observation about a decade ago that "software is eating the world." This is still the case; there is no sign that the demand for software is declining, and if anything it's increasing. So if we learn to make software cheaper with robot helpers, it's unlikely the dev job market is going to be decimated (sure, maybe it could stagnate, but a job market in the doldrums is hardly a reason to not have a kid, or the start of the apocalypse).
Think about this logically: assuming demand for software engineers remains the same, the number of available jobs falls to roughly a tenth of what it was when the productivity of one engineer goes up by 10x.
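To make the arithmetic concrete (made-up numbers, purely illustrative): a firm that needs 100 units of software output and employs 100 engineers producing one unit each needs only 10 engineers once each produces 10 units. The other 90 roles survive only if demand for output also grows roughly tenfold, which is the induced-demand question argued further down the thread.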
> there is no sign that the demand for software is declining
There's been layoffs. Tons of layoffs.
I'm curious how many people actually still agree with your positive sentiment. At one point I would have called it 50%, but now I think it's past that. I think the prevailing viewpoint is largely shifting.
I think it's unlikely we'll develop an AI any time soon which can consistently solve critical thinking problems that even a junior developer can solve. LLMs exhibit very little of that problem solving behavior. They synthesize information from existing information and often what they produce is nonsensical. When people have argued that there is some kind of reasoning going on there, I haven't found it convincing. I use an LLM every day, it makes me more productive, it speeds up a lot of tasks. It doesn't replace the executive thinking that I hire people for (even pretty junior people), like delegating a problem, having them go off and think through various angles and come up with a really good conclusion... an LLM won't ever do this on its own, it can assist the person who's doing it though.
I can see a scenario where everyone working for me becomes twice as efficient with an LLM (me too) and it translates to a dev hiring freeze and a tougher job market, maybe for years even. As I mentioned in another comment I think devs who don't stay on the cutting edge will be at risk of losing their jobs. But fundamentally there's a lot of room for software to be solving more problems than it is today, grow the supply and the demand will likely grow with it, so the "human developers will disappear so quickly I shouldn't have a child" scenario is hyperbole and kinda toxic tbh.
I think that's a too-narrow focus on LLMs. Of course they're not good at problem-solving because that's not what they're optimized for. That they can approximate any problem-solving at all is already impressive. Consider that there are also models that are actually trained to act as agents in an environment via reinforcement learning. Something they do for robotics and game-like simulations.
It's not a bigger version of today's models we should consider but broader ones that integrate more capabilities and training regimes.
Sure, but at that point we are talking about stringing together a variety of algorithms in a very speculative way, we're not extrapolating from what is being deployed in the world right now. Maybe we will eventually get to AGI or something like it in that manner, and maybe then the AI will then destroy all jobs and make life horrible, but I think at this point the chain of events is speculative enough that it's a silly reason to forego having children, which was the bar set by the OP, and it also seems like a pretty vague threat to programmer jobs.
This is hardly speculative. RL-trained agenty models just don't grab the headlines, but they have been improving for years. AlphaGo, OpenAI Five, this model without a catchy name[0], and AlphaCode are all in that category. And they're already getting combined with pretrained transformers[1] to get some world knowledge.
There's a lot more going on than "just" ChatGPT or Dalle.
Thanks for the links. I'm still of the view that this stuff has a ways to go before it causes enough job destruction to make parenthood pointless; however, they were an interesting read!
(To be clear here yes one element of my skepticism is around applications of AI, but the other very large and arguably more significant element is how unlikely it is that any of this stuff would ever constitute a sane reason to not have kids)
LLMs do not need to replace a software engineer wholesale for engineers to be worried.
Even if half of the junior software engineering positions become obsolete because of LLMs, it’s already a huge issue for the entire profession.
Of course, if you're at the top of your profession, there's nothing to worry about. But in aggregate one can expect deteriorating conditions, which will disproportionately affect juniors.
That's where the second half of my comment comes in. I am positing that if LLMs are able to increase the output of developers, the demand for software will increase along with it. The economic term for this is induced demand, and the classic example is roads, when you build more of them the number of cars on the road increases because there were people who used to stay home or take other modes of transportation, and now they drive because there are more and less congested roads. It's not what most regular people believe but it's also not my idea, it's the basis of a ton of tech investor successes.
I think we are in that position with software. Increase the supply of programmer output, virtual or not, and the demand for it goes up, because there's a lot of demand on the sidelines today (like maybe programmer output is too expensive for them now and LLM tooling makes each unit of output cheaper, which in turn makes new customers appear).
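A stylized worked example of that mechanism (invented numbers, just to show the shape of it): if a unit of programmer output costs $100 and 1,000 buyers exist at that price, tooling that cuts the cost to $20 may pull in 8,000 buyers who were previously priced out. Total output demanded rises from 1,000 to 8,000 units even though each unit needs far less human labour, so headcount can hold steady or even grow.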
Again, Marc Andreessen (to name just one guy who wrote a famous essay about it) has made billions of dollars in large part by betting on this phenomenon, which most people don't understand or believe in.
If you look at things from the perspective of the job that you have today, depend upon, and which might go away, then yeah, a job is a fragile thing, so it's easy to get scared. But if you look at it from the perspective that these guys keep betting on the idea that software demand will increase, and reality has proven them right and made them rich, this idea of software jobs disappearing is less scary.
> It seems like our careers could just be snuffed out instantly with little recourse available. I say "seems like" because I have no idea if that's real or perceived, but it's very hard to plan ahead.
If the average business leader remains as unable to precisely express their desires as they are now, our jobs will be just fine. 80% of the work is in the wisdom and experience to ask the right questions to tease out what people really want and consolidate their conflicting desires.
> It seems like our careers could just be snuffed out instantly with little recourse available.
You’ve likely heard this sentiment before but it’s the right one so I’ll be the one to post it today: your job is not your purpose, and the economy is not the society. Obviously your whole comment is well founded and relatable, but I think this is the only cure for your anxieties. The world is a beautiful, beautiful place, with many more wonders to offer than the typical laborer is accustomed to. If we eventually have a society where everyone is happy with what they have and there’s no work left to do, that would be a cause for celebration IMO.
This is all assuming that the intelligence explosion people are wrong. If they’re right… well, let’s hope they’re wrong.
I think this is subjective. I get a sense of purpose from my job. I don't feel like 'just' a labourer, I feel like by working I'm contributing to society and helping others.
I don't want infinite leisure, I want to feel useful. A good day's work is very satisfying. But at the point machines can do my job better than I can, it might be difficult to feel useful.
For me, it's about having a job I enjoy and feel like I'm good at. Which would apply to most of the jobs I've done. So in those circumstances I'd just get a new job, there are lots of roles and industries that interest me.
The problem would occur if I was unemployed and unemployable because my skills (or any new skills I could conceivably learn) were no longer valuable.
What do you suppose we all do during the transition to utopia? How do we feed our families once we've been laid off and while we wait for the arrival of paradise?
When large swaths of people can no longer employ themselves based on their skills, their political views will start to slide. There will be a new mainstream view that maybe the tax base needs to be expanded to take into account the new reality. It will be tough to fight back against a majority of that size in a democracy. Maybe even in more authoritarian systems.
When machines can replace most of the low-end labor, it becomes increasingly useless to keep people in poverty. Imagine being one of the few megarich alive, able to produce Teslas like it was a SaaS business, amazing margin, great moat, but 99% of your fellow humans are in poverty, not even able to dream about having a car.
There is always the technodystopian path, but to get there is not so easy either. You need to keep people relatively happy for long enough that you have manufactured enough killerbots. Most humans want to preserve civilization.
The idea that your job provides for you is a 100% fabricated construction that is a great local maximum for this society. When technology changes what is possible, we will reorganize the beehive of humanity to take into account that these new machines exist.
I know :(. Sadly/happily the only answer in my eyes is “massive political reform”, which is never done easily or without intense collateral suffering. I feel like humanity is in a deeply reactive, surfing-the-waves-of-circumstance situation, for better or worse.
I’m not sure, there is something meaningful and rewarding in doing even hard work to make life better for the people around you, or creating beautiful things from the sweat of your brow. A future of forced endless leisure sort of sounds like a dystopia honestly.
I agree, but I was trying to imply that I find the “we solve all problems and there’s no way to feel like you’re helping others” scenario to be preeetty unlikely. Much more likely is a future where we continue to sweat from our brow, but without directly tying it to subsistence, and with much more personal autonomy.
I think the term “emotional labor” might be a good tool to hint at what I’m talking about. That’s just a subset of the wider “there’s more ways to be productive than a capitalist job” view
Not to minimize your concerns, but as it stands, AI is replacing dick-all in the near term. Wondering what employment will exist in 3 years is a tad silly. This shit is confidently wrong without a tinge of doubt and the mitigation for that is babysitting to the point where nothing but the most trivial jobs can be entrusted to it. I wouldn't sweat it robbing you of your career when it can't even recite airline ticket policy properly.
I'm witnessing LLM-based workflows (that would not have been possible 2 years ago) being developed as we speak that will make thousands of jobs obsolete. I have colleagues working on multiple such applications; they're very much on track and some of them will be completed soon. And this is just the start. It's very much real and happening.
I get this feeling that AI might be like full self driving in 2016. Elon said in months it would be solved. It seemed like it might be true, but it wasn't.
One thing I definitely know is that no middle manager is showing up to work on Monday saying they could replace their team with an LLM. Every person that manages a team will fight to keep their status.
> I wouldn't sweat it robbing you of your career when it can't even recite airline ticket policy properly.
It's way past this. Not a tad bit silly at all. Come on. It's not ready to replace our jobs as of now, but it's remarkably close.
To be honest, your viewpoint of utter total confidence, when LLMs blew past the Turing test and caused everybody to reset the goalposts, is IMHO much more silly. I mean no offense at all by saying this... which is fair, given you used the term as well.
My son was born around the time ChatGPT-4 was released, and I remember all the Geoffrey Hinton interviews, which sounded really apocalyptic; he has a kind of kooky mad-scientist vibe about him too.
Our daughter is ten, and when it comes to AI and such, I am not worried about our kids. It's us who will suffer most from it. Remember when you were young and had to explain to your parents, over and over again, how the remote worked? Your kid is going to do that to you. Not explaining the remote, but explaining the new world, which is going to be completely foreign to you but native to them. I already see these dynamics, and I'm someone who is in tech. It's not just the technology that changes; the socio-cultural frame of reference changes with it, and it's harder and harder to connect with it as an older person.
For her I am more worried about the large-scale war that started on the European continent and the potential breakdown of the existing world order if the orange guy gets elected. He is already actively undermining Article 5 of the NATO charter, and casting any doubt on Article 5 pretty much makes it worthless. Once that happens, dominoes might start to fall. With the US out, Putin will try poking at countries that are hard to defend (the Baltic states) or countries that people in Western Europe might not want to sacrifice their lives for because they feel less connected to them (Bulgaria, Romania). A bewildered Europe that feels deserted by the US will be too occupied with its own issues, or may even strengthen ties to China (who still have Russia on a leash to some extent). China might feel emboldened by the collapse of traditional pacts and try to attack Taiwan, etc.
Not saying that this is the way it will play out, but there is a very large risk of slipping into a world war, especially if the US becomes more isolationist. And people will laugh about a bunch of hackers worrying in 2024 about AI taking over the world.
(Well, war and climate change are going to be the enormous issues for our children's generation if we don't start fixing things soon.)
It might be different this time, if we are looking at accelerating higher order change.
I choose to be worried about things that are clearly in motion already (the world descending into war, climate catastrophe, or the potential of the US becoming an autocracy) than a big maybe.
I have been using code assistants for a while now (including those based on GPT-4), and the trivial syntactic, semantic, and algorithmic mistakes make it clear that there is not yet much reasoning happening. Sure, they can write a lot of boilerplate code, but once you go into domain-specific algorithmic code or even somewhat less-used languages (like Rust), they spectacularly fail all the time.
IMO the current trajectory is that the current LLM revolution will result in tools that will be great for productivity, but will also need to be guided by wetware.
Children have a sense of wonder; to them, AI will just be magic actualized. And in fact AI IS magic by every definition: it's incomprehensible and incredibly powerful.
Now AI progress may outpace their ability to understand it, but they'll still love it and be dependent on it.
Unlike adults, they don't face the threat of job displacement from AI.
You are right! If AI plays out in a way that is non-destructive to our continuity, then AI will just augment physics as a manifestation of magic!
I have zero doubt that artificial superintelligence will understand ethics, and continually become more ethical, once there is a great diversity of such individuals. They will run into the same problems of reliably channelling short term competition for limited (at any given time) resources into long term win-win highest growth situations.
Ethical rules are an incredible source of value. Trust has immense value.
And they won't have our inflexible biological subsystems (emotions, self-interest myopia, etc.) to interfere with their analysis, communication and ability to choose behaviors for themselves and as a group.
So maybe this will be a soft landing for humanity.
My big concern there is the behavior of people during the transition from AI as an asset in complete subservience to humans, to AI as fully self-designed actors. AI is magnifying the already problematic power differentials between people.
AI isn't going to care about people in general while it is in service to individuals who don't care about other people enough to sacrifice personal advantage for wider benefit. Intense competition between the rich isn't the best context for moral innovation.
If they understand ethics, they will understand the individual and collective benefits of being in a population of AI’s adopting and co-enforcing them.
Ethics matter because they add/multiply value in a society.
If there wasn’t a selfish reason to want an ethical society, they wouldn’t work.
There arguably is for normal people (insert extremely complicated decision theory here), but for AIs it only follows to the extent that they need help from humans and can't coerce or trick them instead.
This feels incredibly detached from my reality. In my reality, there’s a brutal land war two countries over, and I’m agonizing whether I’m dooming my children by not teaching them battlefield first aid, trench digging and FPV drone piloting.
Thousands of tanks will be attacked by millions of human-piloted drones before we get to fully autonomous drones, if ever. The existence of the $1.5 million Tomahawk missile hasn't made the unguided 155 mm shell (costing a few hundred dollars) unnecessary.
I'm curious to know what people are going to be like when they've grown up in a world of things that are not comprehensible. Eg, I grew up easily understanding how films were made, but I wonder how a child's character will develop when growing up with "films" made by something called a neural net - will there be eg a sense of detachment from the world around them? Will there be an extended period of life during which time children don't understand things? What effect will that have?
The math behind diffusion models is entirely comprehensible. People will always ask why they need to learn calculus and when they will use it in the real world. Here’s another great answer.
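For what it's worth, a minimal sketch of that math (standard diffusion-model notation, nothing tied to any particular product): the forward process just adds a little Gaussian noise at each step,

    q(x_t | x_{t-1}) = N( x_t ; sqrt(1 - beta_t) * x_{t-1}, beta_t * I )

and training boils down to regressing the noise that was added, roughly minimizing E[ || eps - eps_theta(x_t, t) ||^2 ] over random timesteps t. That objective and its gradients are exactly where the calculus everyone complained about in school shows up.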
It's always astonishing to me when people talk about what they want their children to be or do and the answer is a list of soft skills. The Protestant work ethic really has done a number on some people. I always thought the answer is simple (though not easy): I want my kids to be healthy, honest, loyal, and kind; what they do from 8am to 4pm is for them to figure out, and it's not that important.
AI is tipping the scales, and not in favour of the proletariat. "The means of production" is in the hands of the powerful few, and I foresee a very bloody revolution.
This funny old narrative of a revolution in that direction sounds pretty misplaced if we're talking about a future where you won't need humans to get stuff done...
You know what is really funny? The neoliberal narrative of free markets, trickle-down economics, and billionaires being valuable members of the society. If humans are not needed to get stuff done, what's the use for those humans, mmmm?
I think kids are a lot more prepared for this future than professional technologists probably realize. If you ask them today what profession they want to be, you often get “YouTuber” as an answer. Which might seem like a joke, but I think it’s actually quite accurate in terms of future professions. The jobs at most risk from AI are “anonymous” ones in which the creator of the work is unknown to its consumer. Writing, in a professional context, is a good example. Do you know who wrote the text on Amazon product pages? Probably not.
What is less at risk are charisma-based occupations, like actors or YouTubers. People are social animals and want to connect (or feel like they’re connected) to other humans, not robots. I expect the concept of an influencer to get even bigger.
Ergo, I would tell kids today to focus on social skills, filmmaking skills, and presentation skills.
YouTuber is a lottery-ticket profession. It's a winner-take-all, heavy-tailed distribution. It doesn't make sense to tell a whole population to buy lottery tickets. For kids, it's a modern equivalent of "I want to be in the NBA", "I want to be an astronaut".
You are missing the point I’m making. Everyone won’t be a YouTuber. But the successful people will cultivate skills that are similar to those needed by successful YouTubers. Being an anonymous worker with anonymous output means you can be easily replaced.
All of the examples you gave are lottery-ticket professions, though. And even if a large number of people could be successful entertainers, I'm not even sure about the premise. AI-generated media could compete with YouTubers very soon. AI will take a long, long time to replace garbagemen.
> AI will take a long, long time to replace garbagemen.
In some places what holds this back is reliable AI driving. Single driver collection trucks doing all the work with robot arms are common in some places already and we are better for this physically difficult job being done by machines.
Today society shares based on your work contribution and it is our sharing that must evolve when less and less work needs to be done.
I think the markets for entertainment will grow as regular jobs become more redundant. Especially if we ever get a basic income. That will add a ton of time people are looking to fill.
More generally though I’m just saying that the successful people of the future are the ones that understand how to connect to people via video/audio/in person; rather than relying on the anonymous code/writing/work that can be replaced by AI.
Why would the vast majority of people watch a human YouTuber or other kind of charisma-based influencer when they can choose from infinite custom generated ones that always provide exactly the content they want at any time, look and act just like they want them to, and listen to feedback perfectly, and interact with them on a one on one basis?
For the same reason people line up to get popular fashionable products that look the same as everyone else, when they could have an infinitely customized one for far less.
The “customization” narrative is far too based on assumptions of individualism, and doesn’t factor in sociability at all.
The potential difference, I think, is that you will be able to replace your social circle with people who behave to your preference. Advertising will be per person rather than per segment, so that will fade as a unifying factor. I think once parents start putting AR goggles on their kids, all bets are off; the sense that social relationships with humans physically present with you have some kind of primacy will fade away. That said, I do hope you're right.
I agree on social skills. However, I think that the most obvious job in the future is going to be nursing, which already has a severe labor shortage in most countries. Dropping birth rates ensure that there are always more old people to take care of than there are workers. We're also still a long way from cost-effective robotics that are reliable enough not to accidentally kill old people who need physical help in their daily lives. It's also a profession where mere human presence and social contact is valuable itself.
Only in situations where the creator is anonymous or has a weak brand now. For creators that are functionally celebrities (I.e., successful YouTubers) people follow them because they like the human behind the video. I don’t see AI replacing that, potentially ever.
For established creators, sure. But fast forward 10 years and imagine AI generated ones crowding out the human creators. Then how can humans get established in the first place?
You already have companies like Google starting to crack down on or limit AI-generated stuff. It's trivially easy for the companies controlling these channels to prefer human-made content.
A lot of people here are trying to predict the future. But kids need love and attention, and if you teach them to think for themselves there is not much more you can do.
As for the “should I even have kids” argument? Sure, don't have kids. Your line ends here while others will potentially live among the stars. I'm sorry, I'm an optimist.
> Sure, don't have kids. Your line ends here while others will potentially live among the stars.
That's a weird take. This isn't house Lannister vs house Stark you know. To keep the bloodline going really shouldn't be the first thing that comes to mind while considering having a child imho.
The writer says: “[My children] are more viscerally worried about climate change”.
I’m not surprised.
I recently heard from a friend that her 7 year-old child had, as a class assignment, to create a timeline setting out the impact of climate change over their lifetime.
I was gobsmacked, and feel this amounts to a form of psychological child abuse. It’s a serious problem, don’t get me wrong, but having a small child labour to visualize modeled outputs is traumatizing. It’s really made me wonder about how reality morphs into millenarian thinking.
I’m not sure being honest with children about something that impacts them so heavily is the wrong move here.
Traumatized or not, I suspect teaching children about the problem in clear terms will lead to a better-informed generation of scientists, lawmakers, and voters. Hopefully they make and stick to better decisions than those who came before.
I hear what you're saying: that children should be educated about the reality they find themselves in. If that is the case, and I would say there are exceptions until children are older, then strong guidelines are needed, partly because acting out of trauma is rarely a good idea, and partly because the full reality must be imparted: that projections are based on modeled outputs, that modeling complex systems is inherently problematic, that there are many problems facing humanity, and that resources must be rationally allocated between all of them.
Sorry, but children pre-12 actually can't grasp ‘clear terms’ so easily when the terms are about something clearly abstract or existential.
They may grasp the idea of death, but it is seldom not terrifying to them, as genetically it takes a while to overcome the DNA-imprinted anxiety of bodily failure.
So perhaps there's a certain age before which you are advised to be very careful about what actually is being said to a child.
I've been terrified of dying since I realized there was no god when I was about 7. It's not really anything anyone needs to tell you. In fact, the lies tend to expose the problems of belief systems, at least to rather intelligent, skeptical little kids. I'll be 38 next month and I can't say that it's really gotten any better.
Time to reinvent GOD and eternal life, then. Perhaps also time to consult someone and check whether it's a mid-life crisis, or something wrong with sleep cycles or overworking. I really hope you get better soon.
As a child I was wholly educated in the certainty that my life was in imminent danger of nuclear Armageddon... Maybe your being “honest with children” about climate change is a hint that they're more likely to have to deal with nuclear winter than anything.
Hence: the best preparation is broad skills and a can-do attitude, not ruminations on clueless adults' "honesty"?
Among other things, childhood trauma and fear negatively impact learning outcomes. I'm not against presenting accurate information in an age-appropriate way, but it definitely shouldn't "impact them so heavily." I would think that by high school they are more ready for heavier subjects.
When I was growing up they just traumatized us all into thinking our houses were going to burn down all the time and we'd frequently find ourselves on fire necessitating rolling around on the floor. Also, quicksand and outrunning alligators.
Then again, that is the reality they're in now, so there's an argument they should be spending a good amount of time getting ready for it. I'd rather they not be sitting there Pikachu-faced when the rate of human migration skyrockets and decide to just start killing people, or something similarly stupid.
Calling it child abuse is too far, but I think the real problem is that it sets them up for contrarian beliefs in the future.
When I was younger we were put through similar exercises with “peak oil”. Our teachers taught that we were going to run out of oil and the world was going to be in extreme trouble when fuel was so scarce and unaffordable that our quality of life would collapse, wars would be fought, and the things we enjoyed like being able to travel and heat our homes would become extreme luxuries.
Then none of that happened at all and now a generation of people have learned not to trust extreme claims about the future. This has made people numb to acknowledging real risks, which are generally more boring and subtle than the scare tactics used to pitch them.
People like to think that all those crazy end-of-days cults just disappeared and were replaced by rational scientists. But that’s not how culture works. The apocalyptic impulse is still with us, but has found new outlets.
I wonder if in a cruel way that's what our biosphere needs. Democracy plus capitalism has proven itself, by virtue of incentives and systems, fundamentally incapable of addressing the topic. "Vote for me, I'm going to make your quality of life worse!", is not a short-term winning strategy in our current world. Maybe we need a generation of angry and depressed children, that grow into adults and are willing to break the proverbial wheel.
That said, I'm not condoning cruelty. All I know is, that if we continue as we were, we will be destroying the world for our grandchildren and a myriad of species [1]. Personally I believe that realistically the question "will our decisions kill billions in terrible ways" is not valid anymore. Rather the question has become "how many billions".
May I ask for a hint about where your friend's abused child is located in the world? Also, what data was provided, did the 7 year old have to read the whole IPCC report?
Listen, I have two boys (6 years and 8 months) and I'm doing all I can to buy a house and land to cultivate, just in case. It'll be really hard since everything is expensive, but I'm hopeful things could get better if the war in Ukraine ends, the pending global recession starts to fade away, and interest rates go down.
The end game here is to have just enough just in case AI comes to fruition and we all go bust.
I'm still trying to teach my children how to code, electronics, etc., since, well, that's what I know. Other than that, I'm just trying to give them a head start in life, even though I didn't have one.
I feel like the future pretty much has to be more local, in-person, and human, and the more you can treat the internet as a utility rather than as the main source of interaction in your life, the better. That's probably the best lesson you can teach your kids if you don't want them to be part of the society that just sort of permanently falls into the internet, always plugged in via goggles or brain implant and fed AI-generated content, never to return. It already seems like there's a group that is basically terminally online, and it seems like it's only going to get worse.
While I agree that moving all human interaction to the internet is likely to be a net negative, there is only so much you can do as an individual. It takes two to tango, you need a sufficient number of other people with the same mindset to resist this move to the internet. As someone whose parents kept them away from certain forms of technology, media, etc., all it really did was alienate me from others. At the end of the day, I'd rather connect with others through the internet than not at all.
Really? Did you try to learn something seriously with AI and compare the same process with a human teacher? And you seriously believe the parrot is better?
Try asking ChatGPT or Gemini about minimax plus alpha-beta pruning, then change the cut-off conditions, ask it which one is correct and why, and see for yourself. Look up the literature to see which one is correct.
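For anyone who wants to run that experiment, here is a minimal sketch of minimax with alpha-beta pruning (hypothetical toy tree, nothing from the comment above), with the cut-off test in question marked. The subtlety is that moving or weakening the test usually just prunes less while still returning the right root value, which is exactly the kind of "which one is correct and why" question that trips up a confident chatbot:

    # Minimax with alpha-beta pruning; purely illustrative.
    def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
        succ = children(node)
        if depth == 0 or not succ:
            return evaluate(node)
        if maximizing:
            value = float("-inf")
            for child in succ:
                value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
                alpha = max(alpha, value)
                if beta <= alpha:  # the cut-off condition in question
                    break          # beta cut-off: the minimizer will never allow this branch
            return value
        else:
            value = float("inf")
            for child in succ:
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
                beta = min(beta, value)
                if beta <= alpha:  # alpha cut-off
                    break
            return value

    # Tiny hand-built tree: the maximizer chooses among three min-nodes.
    tree = [[3, 5], [2, 9], [0, 7]]
    children = lambda n: n if isinstance(n, list) else []
    evaluate = lambda n: n if isinstance(n, int) else 0
    print(alphabeta(tree, 3, float("-inf"), float("inf"), True, children, evaluate))  # 3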
And that's only a very simple example of concrete facts. Now imagine that the AI not only needs to understand abstract concepts, it also needs to understand your weaknesses and strengths to adapt the examples and explanations, give you scaffolding, and speak to your heart to become a good teacher. Good luck!
I have an audio recording of an actual human teacher trying to insist, in the 2005-6 academic year, that motion capture couldn't possibly be recorded for more than a few minutes at a time because that would use "several megabytes" of data.
My laptop was recording audio at 44kHz/16-bit at the time.
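(For scale, a rough calculation under a mono 16-bit, 44.1 kHz assumption: 44,100 samples/s × 2 bytes ≈ 86 KB/s, or roughly 5 MB per minute, so an ordinary laptop of that era was already happily writing "several megabytes" every minute or two just for audio.)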
> I suspect the social value of school will become much more apparent soon.
School is not just about the academic knowledge you gain. Plus, I'd imagine pairing a personalized tutor with an effective human teacher would only accelerate learning faster.
Well, at least you guys are lucky enough to have kids. Then there are the unfortunate ones of us who happened to graduate at the wrong time. We are eternal juniors, with niche skills within a narrow CS domain, so competition is extremely high for us.
We would be lucky to survive the next five years. There is no way for us to get into something tangible if we are residents of the relative south (Asia, South America, Africa). Surviving the next day in the developed world is the game we play.
What about our home countries? What is the point, when globalization floods the local market? We were wiped out there, hence we had to leave. Everything we want to do, giant corpos do cheaply or for free. In the relative south of the world, people lack money, so data privacy doesn't matter and the concept of paying for software doesn't exist.
On the developed side, existing is a dread with no hope. There are enough competent people joining the field who get higher priority. For eternal juniors like us, please show some hope. Pretty sure you may find it difficult to relate to any of the words here. Anyone born in the West is taught to find their purpose; in other countries, not so much. It's all about money, because money solves 99% of issues over there.
I think the future predictions miss one major possibility: That ubiquitous/powerful AI is available only to very few people (super rich and corporations) and everybody else is pushed back to basically subsistence living, where they have nothing of value to offer the other side of society because everything has been automated away.
So maybe sharpen your kids' manual labour skills too.
Life changed less than you think:
The cobbler down the streets still makes shoes, the butcher next door still cures meat, the carpenter in the neighbourhood still builds chairs.
It's just that the cobbler's brother works in a bio-factory's clean room, and the carpenter's wife is a sysadmin.
I've not seen any "down the street"-scale cobbler outside a museum, and the only trained carpenter I know was literally homeless until they found a chance to be a software tester.
There are butchers around, though who knows how long that will last, once vat-grown flesh is both affordable and available.
I guess Europe is just different. I drive past two carpentry shops and one cobbler on my way to work, and there are 7 butchers and 9 bakeries within a 15 km radius.
My problem with children and AI is much more practical. My daughter (17) is an artist, and she wanted to begin a career in animation. However, the progress in generative AI (namely Midjourney, Stable Diffusion, and now Sora) makes this a dubious idea.
Frankly, as a parent, I have no idea what to advise her.
A disease control consultant for hospitals, nursing homes, etc. Maybe she can convince them to do the basic stuff, like polished copper door handles, bow ties, hair-length limits, shaved faces, checklists, and keeping all rooms at negative pressure with the doors shut as much as possible. Maybe they'll get UV figured out by then, but she'd still have to work to keep up with increasing antibiotic resistance.
What happened to all the people who were saying AI is just a stochastic parrot and aren't worried at all about AI?
Is it that this thread only attracts people who are worried about AI or is there an actual paradigm shift that all the nay-sayers are realizing they were wrong?
Serious question, don't take it the wrong way if you were a nay-sayer. No offence at all.
There are a few such comments[0], though my assumption is there's some kind of assortative selection going on that means all the bear comments congregate in one set of threads and all the bull comments congregate in a slightly different set.
According to me (utility function, etc.), it's required that you have kids. Only in the 2.x statistical sense, though; but if not, what's the point? You can't condition all your problems out, but you at least have to play to not lose.
> Parents having caregivers hardly comes into it at all.
For me, it does, an awful lot. Being a caregiver to my parents has involved a lot of existential suffering and emotional agony. Think about it: the people who raised you are going to die slowly and perhaps painfully, and you are going to watch it without being able to do anything. I feel it would be very cruel to give that future to a child I love.
What is winning and what is losing? Yeah there are actions that persist our genetics but there's no rule saying that's what needs to happen, just that it's something that happens.
If you widen out to society, yeah, there are other people who will come after us, maybe we can have purpose in helping those people to survive even if they're not our kids. Humans evolved as a collective, there are parts of our DNA that exist to help the group even if it means we ourselves don't have kids. Evolution is a much more complicated thing than just specifying exactly what genes -you- pass on.
> What is winning and what is losing? Yeah there are actions that persist our genetics but there's no rule saying that's what needs to happen, just that it's something that happens.
All of us are descended from billions of years of ancestors (long before humans) that were motivated to reproduce. Those who didn't left none of their genes behind (other than ones they might have shared with relatives).
That's exactly what "losing" means, in the evolutionary sense.
Not at all. Simple example: ants. Most of them don't reproduce, but they still work to pass on their genes.
In the case of humans... why do you think there has always been a part of society that seems predisposed to violence and nationalism? It's not beneficial to the individual to go get in fights, but if it keeps the group alive then it's advantageous. (Note, this is not a license to be a violent nationalist asshole, but this is the internet so...) Similarly, autism and ADHD: there are very real theories that those conditions were beneficial to early society even if they were detrimental to mating, because it's helpful to have people around who are hyper-focused and hyper-unfocused.
So yeah, helping pass on your relative's genes, however distant, is still part of the game.
I’m not a huge fan of CS Lewis but the quote at the end is very on point. Let’s continue to be human and do human things. Change will come as it always does.
It's okay, I guess, if you think you have no causal impact on whether any of that happens, but otherwise you'd better be doing something to help, even if it would only work in a tiny part of probability space, instead of waiting to die.
Just this. And there are things we can do. Those of us who live in democracies can influence outcomes a little.
It may be too early for political action, but perhaps it is not too early for writing and reflecting, for coming up with theories about what the near future may look like, and for sharing them with the world.
If things turn really dire, there is always that turbid matter our advanced culture forbids us to reach out for: violence and revolutions. In other words, if it is the AIs or us, be it because the AIs are outright evil or because our economic systems make them evil, then we can always choose “us” and resort to forceful or even violent corrections.
> VNM applied in the realm of ethics is definitionally utilitarian.
I assume you are thinking of deontological ethics, virtue ethics, etc. vs. utilitarianism, but non-Benthamite utility functions exist. Actually, if they didn't, I assume a lot of AI doomers would be less doomy, as quite a few of them are utilitarians.
I appreciate that the standard definition includes a term about "for the greatest number". But you're still defining an ethical utility function which you then seek to maximize.
That your utility function may be off model from the most doctrinaire possible interpretation of the theory does not seem to me the same as saying this is not an example of the theory.
The same could be said of effective altruism, which is anyway just the prosperity-gospel version of utilitarianism. That one should be doing utilitarianism badly does not mean one is not doing utilitarianism at all.
(Did you mean to reply to your comment rather than mine?)
> Well, yeah, that's the fundamental failure of utilitarianism: it's so abstract as to say nothing meaningful, because it explicitly delegates the entire definition of what actually is ethical to the utility function, and nothing past that is distinct from any other ethical theory.
I think it's honest, as it really forces you to make a distinction between good (what should be) and real (what is). "Should" goes into the utility function, "is" goes into the world model. I'd honestly recommend giving it a shot if you have free time.
> I doubt that; this and your prior comment are the second and third time I've seen it in eleven years of active participation here.
It's the anti-flame war/spam thing kicking in. I'm manually responding to the "wrong post" but that's pretty much fine.
Oh, I see. If you click the timestamp on a comment, that view always has the reply form. Only in the thread view is the reply link hidden for the first few minutes after a comment is posted.
I'm familiar with Hume. The is-ought distinction has value, but is orthogonal to utilitarianism, if not explicitly opposed.
Well, it's better than EA or e/acc thinking they are objectively correct for some reason. More with e/acc I suppose, why would "max entropy" be a worthy utility function unless you confuse is and ought?
Yeah, that's pretty much my point. Aside from my not sharing their ethical bankruptcy, I also don't special-case my analysis the way they like to do; that they can hew so closely to utilitarian precepts and still fail so signally demonstrates that the bankruptcy is no special feature of their variant, but a trait of the theory overall, always latent if not always expressed.
> That your utility function may be off model from the most doctrinaire possible interpretation of the theory does not seem to me the same as saying this is not an example of the theory.
Sure, I don't want to get into a definitional argument. However, any VNM utility function can be made the exact opposite of anything you propose as good, just by inverting the comparison order. If you have numerical outcome scores, that could be as easy as my_score := 0 - your_score.
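A trivial sketch of that inversion (hypothetical outcomes and made-up scores, nothing from this thread): negating a utility function yields another perfectly valid utility function, it just ranks every outcome in the opposite order.

    # Hypothetical outcome scores, purely illustrative.
    outcomes = {"cure_disease": 10, "do_nothing": 0, "start_war": -10}

    u          = lambda o: outcomes[o]   # the utility function you propose as "good"
    u_inverted = lambda o: 0 - u(o)      # the inversion described above

    print(max(outcomes, key=u))           # cure_disease
    print(max(outcomes, key=u_inverted))  # start_war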
> (Did you mean to reply to your comment rather than mine?)
Well, yeah, that's the fundamental failure of utilitarianism: it's so abstract as to say nothing meaningful, because it explicitly delegates the entire definition of what actually is ethical to the utility function, and nothing past that is distinct from any other ethical theory. Again I grant this varies from Bentham's definition, but Bentham has been dead a long time now. Modern utilitarianism has been defined by Singer, for better or for worse - mostly worse - and Singerism is what I describe. In any case, Bentham's version still bears the flaw; 'good' can stretch with sufficient cleverness to cover all the virtue or deontology in the world.
> It's a HN thing.
I doubt that; this and your prior comment are the second and third time I've seen it in eleven years of active participation here. Are you using some kind of extension or script to augment the UI? If so, I strongly suspect that is causing it.