Sure, but humanity has been able to make additional human-level intellects since basically forever by having sex. The dream has always been to make superhuman intellects.
There is more underneath this than there was in, e.g., blockchain or metaverse hype. But there is zero guarantee that OpenAI will be the one to eventually come out on top. If anything, the continuous drama they seem to generate makes it less likely that they come out on top vs one of the other big AI startups.
To stick with the metaphor, it's genuinely unclear whether agency is within grasp or on the other side of an abyss. LLMs are definitely an improvement, but it's not at all clear if they can scale to human-level agency. If they reach that, it's even more unclear whether they could ever reach superhuman levels given that all their training data is human-level.
And finally, we can see from normal human society that it is hardly ever the smartest humans who achieve the most or rise to the highest levels of power. There is no reason to believe that an AI with agency would be an inherent "it's over" scenario.
What is happening right now is so obvious that people have been predicting it for over 60 years. It is ingrained in our culture. Everyone knows what happens when the AI becomes smarter than us.
If you 'see no reason' then you are looking through a microscope. Lift your head up and look around. Agency isn't black and white; it is a gradient. We already have agency to some degree, and it is improving fast.
We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well, I guess a small number of them do. Anyway, we'll be fine. If the AIs come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.
How can you shut down something that can multiply into data centers in different countries around the world?
I'm not sure you realize this, but computers control everything. A dumb bug shut down Windows computers around the world a month ago. A smart AI could potentially rewrite every piece of software and lock us out of everything.
Those medicines your friends and family need to stay alive? Yeah, the factories that produce them only work if you do what the AI says.
I don't want to be a hater, but according to the UN about 61 million humans die per year, so that only comes out to ~167k per day rather than millions. Most of those die of old age too, rather than "being killed".
Your main point is true though: even superhuman AI would have a rough time in actual combat. It's just too dependent on electricity and datacenters with known locations to actually have a chance.
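In case anyone wants to check the per-day arithmetic, here's a throwaway sketch. The 61 million number is the UN figure cited above, not something I verified independently:

    fun main() {
        // ~61 million deaths per year, per the UN figure cited in the comment above
        val deathsPerYear = 61_000_000.0
        val deathsPerDay = deathsPerYear / 365
        println("%,.0f deaths per day".format(deathsPerDay))  // prints roughly 167,123
    }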
I’m sure the superintelligent AI will convince humans to transport its core while plugged into a potato battery. But honestly, did the supply-chain attack on Lebanese pagers last week teach you nothing? AIs should be great at that.
> We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well, I guess a small number of them do. Anyway, we'll be fine. If the AIs come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.
1. IMHO, genocidal apocalypse scenarios like you describe are the wrong way to think about the societal danger of AGI. I think a far more likely outcome, and nearly as disastrous, is AGI eliminating the dependency of capital on labor, leading to a global China Shock on steroids (e.g. unimaginable levels of inequality and wealth concentration, which no level of personal adaptability could overcome).
2. Even in your apocalypse scenario, I think you underestimate the damage that could be done. I don't have an orchard, so I know I'm fucked if urban life-support systems get messed up over a large enough area that no aid comes to my local area afterwards. And a genocidal AI that wasn't blindingly stupid would wait until it had control of enough robots to act in the real world, after which it could agent-orange your orchard.
1. A massive loss of jobs like that would lead to a better redistribution of wealth. I don't believe the US would adequately react, but I trust the rest of the world to protect people from the problem.
2. Which robots?? Until robots can, without any human input, make any sort of other robots, there is nothing to fear from AGI. Right now they barely walk, and the really big boys can solder one joint real good. That's not enough.
> A massive loss of jobs like that would lead to a better redistribution of wealth...
How do you come to that conclusion?
I think we'll get massive inequality because it'll be far easier to develop new technology than to change the core ideology of capitalist society through some secular process. If a "better redistribution of wealth" were in the cards, it probably should have happened already, and it probably cannot happen after the 99% lose most of their economic power by being replaced by automation.
If the 1% are forward-thinking, they'll push a UBI scheme to keep the 99% from getting too disruptive before they become irrelevant.
Who knows what a conspiracy theorist thinks? But honestly the fact that geoglyph construction techniques seem to have evolved over at least several hundred years to become ever clearer does seem to be a(n even stronger) point against. If it was aliens, wouldn't they have had the perfect method from the start? In fact, if it was actually aliens, why would they not just burn the images into the soil from orbit with a laser instead of laboriously piling up rocks?
>In fact, if it was actually aliens, why would they not just burn the images into the soil from orbit with a laser instead of laboriously piling up rocks?
They may have tried and accidentally destroyed all the other test planets with their Death Star.
My recollection is that the main hypothesis among these crowds (where's the conspiracy?) is that the lines are human-made, but in response to 'visitors'.
The main problem with their system is not that it doesn't work with every app out there; the main problem is that voice control is just not that desirable for most apps. It has its place in applications where you're already using your hands, such as when driving or cooking, but even there you can have a much richer, faster and more private interaction with a (touch)screen than would be possible with voice controls. Imagine if everyone in the entire office were constantly responding to WhatsApp messages by saying "reply to xyz", then speaking the entire message out loud. Or in a restaurant, in the subway or at a crowded concert.
I'm not so sure about that - voice control is super useful when it works. Walking down the street, for example, it would be far easier to reply without having to use both hands, look down at the phone and type.
The problem is that it has to work _all of the time_ and be perfectly analogous to the non-voice control equivalent.
Otherwise, even in basic things like dictation it fails, because if you message a friend using dictation, the punctuation, capitalization, mumbles and other things make it jarring to them, as if someone else were suddenly speaking.
The other glaring problem being that you'd have to entrust this shady company with keys to your digital kingdom so that they could perform actions on your behalf from their cloud.
When they inevitably get hacked and your money gets stolen, you won't have any recourse because you authorized them to act on your behalf.
Of course this could be solved if their models ran locally on your phone and interacted with target apps through accessibility APIs, but then they'd just be another app.
Every part of their business is either not practical or downright insane if you think about it for more than a minute.
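For what it's worth, the local route described above is technically mundane. Here's a minimal sketch assuming Android's accessibility APIs and a hypothetical on-device model that only decides which button label to tap next; none of this is their actual product, it's just to show what the "just another app" version would look like:

    import android.accessibilityservice.AccessibilityService
    import android.view.accessibility.AccessibilityEvent
    import android.view.accessibility.AccessibilityNodeInfo

    // Hypothetical on-device agent: decisions and credentials stay on the phone.
    class LocalAgentService : AccessibilityService() {

        override fun onAccessibilityEvent(event: AccessibilityEvent?) {
            // Pretend a local model just decided the next step is "tap Send".
            val targetLabel = "Send"  // illustrative; would come from the on-device model

            val root = rootInActiveWindow ?: return
            root.findAccessibilityNodeInfosByText(targetLabel)
                .firstOrNull { it.isClickable }
                ?.performAction(AccessibilityNodeInfo.ACTION_CLICK)
        }

        override fun onInterrupt() {
            // Required override; nothing to interrupt in this sketch.
        }
    }

It would still need the usual manifest declaration and accessibility-service config, and the user would have to grant the accessibility permission, but no credentials would ever leave the phone.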
Anyone with the patience to read through an existing GitHub repo can do this? If there are valuable bits to be salvaged, it's easy enough to copy-paste them into a new library if you so choose.
Unless you meant copying source code from GitHub, which isn't an option if the program is a 1998 PlayStation video game and you don't have the source code for it.