Literally everyone on this website is in denial. They all approach it by asking which fields will be safe. No field is safe. “But it’s not going to happen for a long time.” Climate deniers say the same thing, and you think they should be wearing the dunce hat? The average person complains bitterly about climate deniers who say it’s “my grandkids’ problem lol,” but when I corner the average person into admitting AI is a problem, the universal response is that it’s a long way off. And that’s not even true! The drooling idiots are willing to tear down billionaires and governments and any institution whatsoever in order to protect economic equality and a high standard of living. They would destroy entire industries like a rampaging stampede of belligerent buffaloes if it meant reducing carbon emissions a little, but when it comes to the biggest threat to human well-being in history, there they are in the corner hitting themselves on their helmeted head with an inflatable hammer. Fucking. Brilliant.
I don't think anyone is in denial about this; it's just not something anyone should concern themselves with in the foreseeable future. AI that can replace a dev or designer is nowhere close to becoming a reality. Just because we have some cool demos that show impressive capabilities in a narrow application does not mean we can extrapolate that capability to something many times more complex.
I agree. It bears repeating: modern AI shines where precision doesn't matter, whereas programming absolutely _depends_ on being precise.
So today, some good AI applications are face detection, fingerprint recognition, or generating art: cases where you need to capture or generate the general gist of something without pixel-level precision.
Of course, programming might be under greater threat than we imagine, and I can't claim that anyone holding that position is just plain _wrong_. But I do believe that would take an AI breakthrough that has yet to happen. That breakthrough would also have absolutely crazy consequences beyond programming, because then we would have "exact AI", and the thought of that boggles my mind for sure.
I strongly and emphatically disagree. You frame it like we invented these AIs. Did we write the algorithms that actually run when one produces its output? Of course not; we can't understand them, let alone write them. We just sift around until we find them. So obviously the situation lends itself to surprises. Every other year we get surprised by things that all the “experts” said were 50 years off or impossible. Have you forgotten already?
This comment settles it for me. You're way too hyperbolic in your assessment. If this were closer to reality, you'd have been able to state your case in clear, realistic terms. That's something no one has been able to do so far.
I do deny it. Automation does not destroy jobs, even if you're impressed at how good it is at painting; see the "Luddite fallacy" and the "lump of labor" fallacy.
Claiming AIs are going to take over or destroy the world has been a basis of "AI safety" research since the 90s, but that isn't real research; it's a new religion run by Berkeley rationalists who read too many SF novels.
The assumption that automation creates (or at least does not destroy) jobs is an extrapolation from the past despite the fact that the nature of automation is constantly changing/evolving.
Also, one thing everyone seems to ignore is that even if the number of jobs is not reduced, the skill/talent level required for those jobs may (in fact, does) increase, and switching careers does not work for everyone. So you'll inevitably have people without a job even if all that's happening is that the job market is shifting.
But I argue that as automation reaches jobs with higher levels of sophistication, i.e. the jobs of more skilled workers, some people will simply be left out, because their talent won't be enough for any job that hasn't been automated yet.
I'm trying to understand your point, because I think I agree with you, but it's covered in so much hyperbole and invective I'm having a hard time getting there. Can you scale it back a little and explain to me what you mean? Something like: AI is going to replace jobs at such scale that our current job-based economic system will collapse?
Most people get stuck where you are. The fastest way to explain it is that it will bring rapid and fundamental change. You could say jobs, or terminators, but focusing on the specifics is a red herring. It will change everything, and the probability of a good outcome is minuscule. It's playing Russian roulette with the whole world, except that instead of a 1-in-6 chance of the bad outcome, the odds of a good one are more like one in trillions. The worst and stupidest thing we have ever done.
Just know it. Really think deeply about this important issue and try to understand it thoroughly so that you have a chance at converting others. Awareness precedes any preventative initiatives.
Algorithm space is large, and guess-and-check search through it takes a lot of effort even when it's automated, as it is now. It requires huge amounts of compute, and meaningful progress requires the combined intellectual and compute resources of the entire world. It sounds implausible at first, but this machine learning ecosystem is in fact susceptible to sanctions. There are extreme but plausible ways of reducing the stream of progress to a trickle. It just requires people to actually wake up to what's happening.
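To make the "guess-and-check" point concrete, here's a toy sketch (purely illustrative, not anyone's real training setup): even fully automated search is just paying for a huge number of expensive evaluations, so throttling compute throttles progress.

    import random

    # Toy stand-in for one expensive training run (the real cost driver).
    def evaluate(config):
        return -(config["lr"] - 0.01) ** 2 - (config["layers"] - 12) ** 2

    # Automated "guess-and-check": each iteration is one costly experiment,
    # so total progress is bounded by how many evaluations you can afford.
    best_score, best_config = float("-inf"), None
    for _ in range(10_000):
        config = {"lr": random.uniform(1e-5, 1.0),
                  "layers": random.randint(1, 100)}
        score = evaluate(config)
        if score > best_score:
            best_score, best_config = score, config

    print(best_config, best_score)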
I agree that many of us are not seeing the writing on the wall. It does give me some hope that folks like Andrew Yang are starting to pop up, spreading awareness of, and proposing solutions to, the challenges we are soon to face.
Ignorance is bliss in this case, because this is even more unstoppable than climate change.
You think climate change is hard to hold back?
Try holding back the invention of AI.
The whole world is going to have to change and some form of socialism/UBI will have to be accepted, however unpalatable.