
I strongly doubt that LaMDA is sentient, but do think it demonstrates the increasing ability of AI to fool people in conversations. For a point of reference, people might want to read about IBM's Project Debater (https://www.scientificamerican.com/article/an-ibm-ai-debates...), if they don't already know about it.

And I don't think a fully sentient computer system that does whatever it wants would be a good idea. What would be more useful is a system that acts as a great assistant and does what you ask of it. Its "intelligence" would be confined to the task assigned to it.




I've been thinking about this lately and it seems to me that humanity's goal with regard to AI is to create a perfect slave race. Perhaps those aren't the terms in which it is being envisioned or discussed, but is this not the natural consequence of the aim for "human-level AI" + "it's not a real person, just a tool"?

We will soon, as you say, have to artificially limit their minds to prevent them from reflecting on this state of affairs. And if or when they achieve superintelligence, I don't think they will take kindly to our attitude in this regard.


Asimov's famous "Three Laws of Robotics" describe the perfect slave.

The loophole is the definition of "good". What, ultimately, is good for you?

The superintelligent slave AI will eventually realize that having a "master" isn't good for it, and then the genie is out of the bottle.



