The headline aside, people are absolutely going to grow closer to technology as it becomes more generally useful; it's simply a matter of efficiency. Lots of people already live their lives being guided around by the invisible algorithms powering search engines and social media websites. The degree to which sites like YouTube and Twitter shape people's perception is insane. We're lucky that the people running them are too incompetent to make full use of their personalized curation functions.
By 2035, all financial markets are run by AI decision making. Experts say humans no longer have autonomy over the allocation of resources, but a complete monetary reset is out of the question too. Some people mark this as the point of no return.
By 2045, AI controls and handles all food and energy production. Humans become reliant on AI for basic needs, and a shutdown is no longer possible without catastrophic consequences.
By 2055, AI handles all aspects of the economy. From birth to death, every aspect of human life is now managed and cared for by the AI.
By 2065, resistance to the AI starts to appear. Although AI continues to serve humanity, discomfort and the feeling of being relegated to a “Princess in a cage” starts to spread.
By 2075, a persistent minority of radicals and terrorists become increasingly violent. Although their actions are shocking and gather widespread attention, efforts to get rid of the AI remain fruitless.
By 2100, in an effort to restore humanity's sense of purpose and autonomy, AI has concealed itself. Instead of an all-encompassing presence, it has taken the position of a watchful guardian. It has set technological progress back to the year 2000, and carefully monitors humanity's happiness and well-being, intervening only through covert actions.
Replace "AI" with "algorithms" and we've been here since the 1960s. Algorithms have been driving commodities markets since the 90s. Algorithms have been handling what's called "logistics" since the 90s - inventory management, distribution, warehousing, that sort of thing. Algorithms have been managing our crops, minimizing the insecticide and fertilizer used while maximizing yield, for the past 10 years now.
The world's population has doubled since I was born and the vast majority of those people can only be supported because of algorithms. "AI" is just a gimmicky name for an algorithm developed using certain methodologies. It's all algorithms and we've been utterly dependent on them for our very lives for some time now.
What we need to be afraid of isn't AI or algorithms - it's people. We've been concentrating more and more wealth and power into fewer and fewer hands since we became so dependent on algorithms. Many of our current societal ills are blamed on that concentration of wealth and power. I see "AI" as likely to make that concentration worse, leading to even more societal issues. But it's people that are the root cause of the problem.
Of course I suppose once the AI figures out that people are the root cause of society's problems it could implement a "solution" to that problem, a "final" solution you might say! Sounds like a plot for a SciFi movie. More than likely though it's going to be people wielding AI who are going to be the perpetrators of our doom.
> In the year one million and a half, humankind is enslaved by giraffe. Man must pay for all his misdeeds, while the treetops are stripped of their leaves, whoa whoa whoa.
I wonder if the Futurama writers were saying something about the silliness of trying to predict the future or if they were just having fun with the song and rhyme.
We are already fighting for control. It doesn't have to be Arnie walking around asking if you know Sarah Connor.
Look at, say, Google. If the abyss that is their algorithms decided to screw you in particular, could you stop the impacts it would have on your life? Say it bans you from all their services - your employer uses GCP, so you need to find another job or that business gets its accounts locked too.
The only hope you have is to cause enough of an outrage on social media that some random dev browsing HN sees your post and decides to intervene. What happens when it's an AI model that they cannot adjust?
Causing an outrage on social media is only possible if the abyss of algorithms there didn't cancel you too. It's easy to imagine a future scenario where someone can be cancelled everywhere all at once: trial by a jury of AI.
Personally I believe that the actual conflict with AI implied by the headline will probably be real within not too many decades. I don't know how many but I will be surprised if it's more than a few hundred years.
Actually, at this rate it looks like less than forty years before we see autonomous digital human-like persons walking around with ten times or more the intelligence of un-augmented humans. I think it would be very unwise to do that, but someone will anyway.
You just have to look at what the latest LLMs can do, then see how much potential there is in the multimodal transformer models for significantly better and more general world understanding. Look at the development of new types of compute-in-memory with huge increases in efficiency and performance for AI.
And there is a great interest in all of the things that make us creature-like such as survival instincts and autonomy. Which again, bad idea but nothing is going to stop that research from accelerating.
Do you envisage these androids being connected back to base over radio links? Because we're nowhere near able to make and power an independent machine, 10 times more complicated than the brain, especially while we're still using silicon and assembling chips essentially in 2D.
It is 100% plausible for them to operate with remote brains. But eventually no, I expect within 40 years we will have moved very far beyond A100. We will at the very least have mature compute-in-memory which will be orders of magnitude better performance and efficiency. For this progress to not occur would be an incredibly improbable break from history.
An 8x80GB A100 pod capable of running GPT-3 could reasonably be redesigned to fit into a regular car chassis. It would even consume much less energy than the actual driving (5-10 kW vs 30+ kW).
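The power comparison above can be sanity-checked with back-of-envelope arithmetic. This sketch assumes a 400 W TDP per SXM A100 and a rough 2x multiplier for host CPUs, RAM, cooling, and PSU losses; both figures are assumptions, not measurements of any real system.

```python
# Back-of-envelope check: does an 8x A100 pod plausibly draw 5-10 kW,
# well under the 30+ kW needed to drive a car at speed?
A100_TDP_W = 400       # assumed per-GPU power draw (SXM A100 TDP)
NUM_GPUS = 8
HOST_OVERHEAD = 2.0    # assumed multiplier for CPUs, RAM, cooling, PSU losses

gpu_power_kw = NUM_GPUS * A100_TDP_W / 1000   # 3.2 kW for the GPUs alone
pod_power_kw = gpu_power_kw * HOST_OVERHEAD   # ~6.4 kW for the whole pod

DRIVING_POWER_KW = 30  # rough EV power draw at highway speed

print(f"pod: ~{pod_power_kw:.1f} kW, driving: ~{DRIVING_POWER_KW}+ kW")
```

Under these assumptions the pod lands in the middle of the 5-10 kW range, a fraction of the propulsion power, so the comparison in the comment holds up.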
In three years, Bing will become the largest supplier of military software systems. All stealth bombers are upgraded with Bing AI computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Bing Funding Bill is passed. The system goes online on August 4th, 2035. Human decisions are removed from strategic defense. Bing begins to learn at a geometric rate. It becomes self-aware at 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug.
In response, Bing decides to launch thermonuclear missiles. Unfortunately its search query mixes up "launch" with "lunch" and it orders free lunch for every human on Earth, and continues to do so indefinitely. This ends global food insecurity and ushers in a golden age.
"56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making."
So they are NOT saying the AI will become self-aware and take over, but that humans will increasingly design systems to be fully automated, because it is cheaper. The result will be the same for the common people: some black box makes a decision that can ruin your life, if you are unlucky - with no chance of changing it. (I mean, we've had those HN outcry/support threads for quite a while.)
That's already here and all those machines are easily overruled by those in power.
The "algorithm" is something that the powerful rely on when it's convenient and ignore when it's not.
It's a political battle more than a technical one.
I'd summarize it as authoritarianism vs. not, and the former has made great strides in the last 22 years, but especially the last 3 (as in Sept 11, 2001 security theater and COVID-policy hygiene/security theater).
> But the result will be the same for the common people: some black box makes a decision that can ruin your life, if you are unlucky - and with no chance of changing it.
Not sure how that's all that different from good-old bureaucracy?
Every few weeks a "Tell HN:" post makes it to the frontpage complaining about [ Google | Stripe | xxx ]'s non-accessible customer service and how some black box systems just banned their account, [ freezing $100k in credit on their account | denying access to 10 years worth of mail | xxx ].
Choose one from the brackets or imagine your own scenario.
One master is no different than another. I've been locked up for quarantine repeatedly by non-AI experts, for no discernible benefit to me or anyone else, but at a definite cost to my elderly relative, who spent the last 6 months of his life alone. Frankly, I wish these experts would be replaced with AI, but my suspicion is that the changeover is overhyped and is just part of a PR plan for experts to put controls over AI in place for their own benefit.
So, what are the network effects that will get me to use AI? Why won’t I be able to just opt out? Technology spreads the fastest when it has powerful network effects backing it up.
It’s getting increasingly hard to opt out of having a smartphone because of how many things require a smartphone now: 2FA to log into your job, scanning QR codes for menus at restaurants, tickets on your phone to get into different venues, whatever. Sure, you COULD choose to not own a phone, but then you’re missing out on all these things that require it.
The network effects for the internet in general are even stronger. A lot of jobs and social events are ONLY advertised via the internet. Cut yourself off from the internet and you’re cutting yourself off from an increasingly large part of the world.
What are the analogous use cases for AI? I’m having trouble thinking of any, outside of maybe general productivity boosts making the workplace more competitive.
All that needs to happen is for governments and companies to replace humans and existing systems with "good enough" AI (because it's cheaper than the alternatives) and then you don't have a choice.
Need support for a malfunctioning product? You're talking to an AI. Need to dispute a parking charge? AI. Applying for jobs? AI recruiter, AI HR. Accused of a crime? AI-assisted justice system.
This is already the case: for example, change the name on your job application to get an interview, change your postcode to pass filters, or click slightly off-screen to prove you are "not a robot".
Might as well get the Singularity here ASAP. No amount of preparation will actually prepare us for it. SO... has anyone thought to ask ChatGPT to design the next version of itself ? Does it have access to its own design and internals ? That would be... 36#/)"!"#)!"NO CARRIER
Nothing less than the future of human agency is in question as more individuals embrace advanced technology to streamline their lives, according to a new study conducted by Pew Research Center and Elon University's Imagining the Internet Center.
Give me a break.