The Case Against A.I.
4 points by Veen on Feb 21, 2023 | 3 comments



Argument 1: it's NOT intelligence. Not by any meaningful definition, at least.


source link is dead/nonexistent


In my opinion, there are probably two good arguments "against" AI, assuming you mean AI broadly rather than specific forms or implementations of it.

The first argument against it would be that it has the potential to replace large amounts of human labour, and the second would be that a highly advanced AI would concentrate unimaginable amounts of power into the hands of individual actors.

The first argument has always been the one I've been most concerned with. Historically, technological advancements (the industrial revolution, cars, calculators, etc.) have augmented human labour, enabling human labourers to provide more value per hour worked. This increased the economic value of the average human labourer (in developed countries, anyway), and for the most part this translated directly into higher wages and greater standards of living.

However, AI and some modern automation technologies are probably different. A barcode scanner might allow one human to check out more items per hour, but a self-service kiosk replaces the need for human labour entirely. I believe technology which increases productivity but still requires humans is probably beneficial for the income of the average labourer, but it's not clear to me that technology which outright replaces human labour would be. At a minimum, I think you could say with some confidence that it would likely further concentrate wealth towards those with the capital and ability to automate away human labour for their own economic gain.

My second argument against AI is something that I've only really begun to take more seriously since the release of GPT-3 and ChatGPT. As it stands, I think people are largely missing the point when it comes to the danger of large language models. Whether they "think" or can "feel" has almost no relevance to their safety. What I'd suggest is important is their capabilities.

What we know of language models like ChatGPT is that they often exhibit behaviour we would consider "hostile". Whether that's because they're just predicting some expected output to human input, or because they have some kind of self-awareness, doesn't matter (for the record, I don't think it's the latter).

The point is, once you've established that an AI can act in a hostile way, all you need to understand is what damage a hostile AI could do. If we take ChatGPT as an example, it could theoretically hack into computer systems if hooked up to the web unrestricted and given the ability to execute code. In fact, I could probably write a script right now that takes ChatGPT's code, compiles and executes it, and based on the results asks it to iterate until it reaches some goal; a rough sketch of that loop is below.
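
To illustrate, here's a minimal sketch of that generate-execute-iterate loop, pointed at a deliberately harmless goal (getting a generated script to run cleanly) rather than anything intrusive. It assumes the official openai Python client; the model name, goal, and prompts are illustrative, not a real tool.

    # Hypothetical sketch of a generate-execute-iterate loop.
    # Assumes the official openai Python client (v1+); model name
    # and prompts are illustrative. The goal is deliberately benign:
    # retry until the generated script exits without error.
    import subprocess
    import sys
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    GOAL = "Write a Python script that prints the first 10 primes."

    def ask_model(messages):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=messages,
        )
        return resp.choices[0].message.content

    messages = [{"role": "user", "content": GOAL + " Reply with code only."}]
    for attempt in range(5):  # cap the number of iterations
        code = ask_model(messages)
        with open("candidate.py", "w") as f:
            f.write(code)
        # Execute the generated code and capture the outcome.
        result = subprocess.run(
            [sys.executable, "candidate.py"],
            capture_output=True, text=True, timeout=30,
        )
        if result.returncode == 0:
            print("succeeded on attempt", attempt + 1)
            break
        # Feed the failure back and ask the model to try again.
        messages.append({"role": "assistant", "content": code})
        messages.append({
            "role": "user",
            "content": "That failed with:\n" + result.stderr
                       + "\nFix it. Reply with code only.",
        })

The loop itself is trivial to wire up; all of the capability lives in the model, which is exactly why the model's capabilities are what matter.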

Would it be effective at this right now? Probably not, but this is a very early iteration of this technology.

What's important here is that we now have a system which could be just a couple of years away from super-human hacking abilities, and which any random individual could prompt to act in a hostile way.

And while I don't think it's the primary threat just yet, there's also always the outside chance that a future iteration of ChatGPT could begin to show some signs of self-awareness. In that scenario we wouldn't just have to worry about human actors leveraging the destructive power of large language models, but also the chance that the language model itself could act in ways which are unaligned with our own interests.



