I think we need to consider what the end goal of technology is at a very broad level.
Asimov says in this that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be. We are not far even now from a prompt accepting a request such as "Write another volume of the Foundation series, in the style of Isaac Asimov", and getting a complete novel that does not need editing, does not need review, and is equal to or better than the quality of the original novels.
When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
> When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
Read some philosophy. People have been wrestling with this question forever.
It depends on what you are trying to get out of a novel. If you merely want variations on a theme in a comfortable format, Lester Dent-style 'crank it out' writing has dominated the marketplace for >100 years already (https://myweb.uiowa.edu/jwolcott/Doc/pulp_plot.htm).
Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.
You could have said the same thing when we invented the steam engine, mechanized looms, &c. As long as the driving force of the economy/technology is "make numbers bigger" there is no end in sight, there will never be enough, there is no goal to achieve.
We already live lives which are artificial in almost every way. People used to die of physical exhaustion and malnutrition; now they die of lack of exercise and gluttony. Surely we could have stopped somewhere in the middle. It's not a resource or technology problem at that point, it's societal/political.
It's the human scaling problem. What systems can be used to scale humans to billions while providing the best possible outcomes for everyone? Capitalism? Communism?
Another possibility is to not let us scale. I thought Logan's Run was a very interesting take on this.
Evolution is not about being better / winning but about adapting. People will adapt and co-exist. Some better than others.
AIs aren't really part of the whole evolutionary race for survival so far. We create them. And we allow them to run. And then we shut them down. Maybe there will be some AI-enhanced people that start doing better. And maybe the people bit becomes optional at some point. At that point you might argue we've just morphed/evolved into whatever that is.
> I think we need to consider what the end goal of technology is at a very broad level.
"we" don't control ourselves. If humans can't find enough energy sources in 2200 it doesn't mean they won't do it in 1950.
It would be pretty bad to lose access to energy after having it, worse than never having it IMO.
The number of new technologies discovered in the past 100 years (a tiny amount of time) is insane, and we haven't adapted to them, not in a stable way.
This is undeniably true. The consequences of a technological collapse at this scale would be far greater than never having had the technology in the first place. For this reason, the people in power (in both industry and government) have more destructive potential than at any time in human history by far. And they act as if they have little to no awareness of the enormous responsibility they shoulder.
> But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be.
Comparative advantage. Even if that's true, AI can't possibly do _everything_. China is better at manufacturing pretty much anything than most countries on earth, but that doesn't mean China is the only country in the world that does manufacturing.
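If it helps, here's a toy sketch of that arithmetic in Python; the tasks and productivity numbers are entirely made up for illustration, but they show how specialization follows opportunity cost rather than raw output:

    # Comparative advantage with invented numbers (tasks per day).
    # The AI is absolutely better at both tasks, yet the human still
    # has the lower opportunity cost for one of them.
    producers = {
        "AI":    {"novels": 10.0, "audits": 20.0},
        "human": {"novels": 1.0,  "audits": 0.5},
    }

    for name, out in producers.items():
        # Opportunity cost of one novel = audits forgone to write it.
        print(f"{name}: 1 novel costs {out['audits'] / out['novels']:.2f} audits")

    # AI:    1 novel costs 2.00 audits
    # human: 1 novel costs 0.50 audits
    # The human is the cheaper novelist in opportunity-cost terms, so
    # both sides gain if the AI specializes in audits and trades.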
Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.
Why not? IMO you perhaps underestimate human complexity. There was a Guardian article where researchers created a map of one cubic millimeter of a mouse's brain. It contains 45 km worth of neurons and billions of synapses. IMO the AGI crowd is suffering from expert-beginner syndrome.
Humans are one solution to the problem of intelligence, but they are not the only solution, nor are they the most efficient. Today's LLMs are capable of outperforming your average human in a variety (not all, obviously!) of fields, despite being of wholly different origin and complexity.
I don't think I agree. I'm trying to point out the 'expert-beginner' problem: we don't realize how much is involved in human intelligence, to the extent that we think it's easy and that AGI will be here in a couple of years. It's the same reason that in software "90% done is 90% left to go." We are way under-estimating what is involved in human intelligence.
An analogy: it's like cryptographic problems that would take a billion years to brute-force. Even if we find a way to make that 100x more efficient, we're still not coming up with a solution anywhere near our lifetimes.
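Back-of-envelope, using the hypothetical billion-year figure from above:

    # A 100x speedup barely dents a billion-year brute-force search.
    brute_force_years = 1e9
    speedup = 100
    print(f"{brute_force_years / speedup:.0e} years")  # 1e+07: ten million years, still hopeless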
> Today's LLMs are capable of outperforming your average human in a variety (not all, obviously!) of fields
My impression is many of those are benchmarks that are chosen by companies to look good for VCs. For example, the video showing off Devin was almost completely faked (time gaps were cut out, tasks were actually simpler and more tailor made than they were implied to be).
Something I was trying to convey to a non-technical stakeholder is that some tasks are stupid easy for humans but insanely hard for computers, and vice versa. A big trick was therefore to delegate some things to humans and some things to computers. For example, computers are excellent at recollection and numerical computation, while humans can taste salt easily and trivially tell you when something is too salty or undersalted. In my opinion, AGI is an attempt to have computers do those things that are trivial for humans but insanely tough for computers. There is a long, long way to go; getting that first 50% is the easy part, and the last 50% (particularly the last 30% and the last 5%) is IMO hundreds (if not thousands) of orders of magnitude harder.
- Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.
- The world already hosts millions of organic AI (Actual Intelligence), many of them statistically at genius-level IQ. Does their existence make you obsolete?
> Despite the flood of benchmark-tuned LLMs, we remain nowhere close to engineering a machine intelligence rivaling that of a cat or a dog, let alone within the next 5 to 10 years.
Depends on your definition of "intelligence." No, they can't reliably navigate the physical world or have long-term memories like cats or dogs do. Yes, they can outperform them on intellectual work in the written domain.
> Does their existence make you obsolete?
Imagine if for everything you tried to do, there was someone else who could do it better, no matter what domain, no matter where you were, and no matter how hard you tried. You are not an economically viable member of society. Some could deal with that level of demoralisation, but many won't.
Here's a passage from a children's book I've been carrying around in my heart for a few decades:
"I don't like cleaning or dusting or cooking or doing dishes, or any of those things," I explained to her. "And I don't usually do it. I find it boring, you see."
"Everyone has to do those things," she said.
"Rich people don't," I pointed out.
Juniper laughed, as she often did at things I said in those early days, but at once became quite serious.
"They miss a lot of fun," she said. "But quite apart from that--keeping yourself clean, preparing the food you are going to eat, clearing it away afterward--that's what life's about, Wise Child. When people forget that, or lose touch with it, then they lose touch with other important things as well."
"Men don't do those things."
"Exactly. Also, as you clean the house up, it gives you time to tidy yourself up inside--you'll see.”
Let me paint a purpose for you which could take millions of years: how about building an atomic force microscope equivalent that can probe Calabi-Yau manifolds to send messages to other universes?
Suno is pretty good at going from a 3 or 4 word concept to make a complete song with lyrics, melody, vocals, structure and internal consistency. I've been thoroughly impressed. The songs still suck but they are arguably no worse than 99% of what the commercial music business has been pumping out for years. I'm not sure AI is ready to invent those concepts from nothing yet but it may not be far off.
No, it's not normal. The output is almost always song lyrics annotated with markup like [Bridge], [Chorus] etc. I think they're using something from OpenAI with a system prompt and/or domain-specific training on top.
It's not a pure AI output - I generated a bunch of lyrics in text (which doesn't use credits), selected the best one (obviously), padded them out with some repetition, entered a style, generated the audio a few times, selected my favourite audio, and edited the audio (poorly) by repeating a few bars of the intro to make it longer. You don't see the times it generated lyrics about X.509 certificates (even though the prompt was for them to be a valid X.509 certificate) or the times the vocals were unintelligible.
I think generative AI does work as a toy. You can ask for all sorts of insane nonsense and laugh at what the program spits out to fulfil your request. I was a paying customer of AI Dungeon 2 (before the incident where OpenAI and/or the Mormons broke it in a poor attempt to impose safety rules).
And while I'm looking at my Suno outputs list, the reason I ever bothered to use it was to see if it could render these lyrics as a ripoff of "Pure Imagination" from Willy Wonka (it cannot because it only makes actual music): https://suno.com/song/19d1a90d-9ed6-4087-94e5-89e41363726e?s...
(I'm assuming that you can open these pages just by having the links. Some of them are set to public visibility.)
Meaning is in the eye of the beholder. Just look at how many people enjoyed this and said it was "just what they needed", despite it being composed of entirely AI-generated music: https://www.youtube.com/watch?v=OgU_UDYd9lY
There's a "Altered or synthetic content" notice in the description. You can also look at the rest of the channel's output and draw some conclusions about their output rate.
(To be clear, I have no problem with AI-generated music. I think a lot of the commenters would be surprised to hear of its origin, though.)
> By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
This complementarity already exists in our brains. We have evolutionarily older parts of the brain that deal with our basic needs through emotions, and the evolutionarily younger neocortex that deals with rational thought. They have a complicated relationship; both can influence our actions through mutual interaction. Morality is managed by both, and neither is necessarily more "humane" than the other.
In my view, AI will be just another layer, an additional neocortex. Our biological neocortex is capable of tracking the un/cooperative behavior of around 100 people in the tribe, and allows us to learn a couple of useful skills for life.
The "personal AI neocortex" will track behavior of 8 billion people on the planet, and will have mastery of all known skills. It is gonna change humans for the better, I have little doubt about it.