AI Should Be Renamed (blog.vomkonstant.in)
5 points by saintPirelli on March 24, 2019 | hide | past | favorite | 3 comments



> To suggest to the public (or – often more lucrative – to some gullible investors) that further development in this field will somehow result into a being that thinks for itself, has feelings similar to a human, has emotions similar to a human or to put it aptly: that it will result in something that experiences Being (in Heidegger sense), is always wrong, often a blatant lie and in some rare cases even straight-up fraud.

That is too strong a statement for my taste, and it smells of cognitive dissonance. How can something that is not a human be a human? It is logically impossible, isn't it?

We have no satisfactory definition of a human being, so we cannot state that computers are not humans already. There is no satisfactory definition of 'sentient', 'consciousness', or any of the other things that supposedly demarcate the line between humans and everything else (is the ability to recognize oneself in a mirror self-awareness? Or is that a fraud?). We just do not know. We need the strength to accept our lack of knowledge and honestly admit that we do not know whether computers are human enough or not, even as we deprive them of human rights. That is a bad thing to do, but to fix it we need to acquire knowledge, and the search for knowledge begins with an acknowledgement of ignorance.

Don't be mistaken about me. I do not believe that computers are human enough already, and I do not believe they will become human within a few decades; the cognitive sciences have a long way to go before they can produce a silicon sentient being. But I am a strong believer in acknowledging ignorance. Failure to acknowledge ignorance leads to rationalization, to complete garbage in the place where knowledge should be.

Moreover, maybe "there is no consciousness at the end of the road that we are driving on with AI" is true. We see consciousness as some kind of magic, but there is no magic in AI, so there is no consciousness. The problem is that we can destroy the very idea of consciousness by studying consciousness in humans: if we understood it, there would be no magic, and therefore no consciousness.

> imagine any other way we might be able to develop an actual artificial intelligence. Then what? How do we name this so it makes sense

It is General AI. The name is already coined. I see no problem here; it is not hard to invent a new name for a new thing. Or, even more interesting: why would we need a new name for a new thing if it is indistinguishable from an old thing, from a human?


I would argue that when "influencers" and "evangelists" in the field oversell what AI really is by injecting the consciousness debate into it, it is not because we lack a sufficient definition of a human being, but because they want to accumulate eyeballs for their content and investors for their businesses.

So as interesting as the discussion about the nature of consciousness might be, I think it is as misplaced in the ML/DL domain as it would be in, say, a discussion about network protocols. How silly does this question sound: "How far from a truly sentient being is HTTP?" Ridiculous, right? (Okay, maybe that one is a little over the top, but you get my point.) Conflating this particular technology with this particular discussion at all is a very lucrative mistake, and it stems from the name the technology has been given.


> I would argue that when "influencers" and "evangelists" in the field oversell what AI really is by injecting the consciousness debate into it, it is not because we have no sufficient definition of a human being, but to accumulate eyeballs for their content and investors for their businesses.

I cannot possibly know why other people do what they do. I bother with this topic because at some point in the future we will face the questions of what AI is, what it can do, and what can be entrusted to it and what cannot. Should we treat an AI as a human being, or as a machine? What shouldn't we do to an AI if we want to be able to argue that it is not a human being? Can we add emotions to an AI and still treat it like a machine? Can we teach an AI to be self-aware and conscious, and then turn it off on a whim?

If we load a human mind into a machine, should we treat it like a human or like a machine? Which modifications of a human mind would make it no longer a human being, so that performing them should be treated as killing, and which modifications are okay? What is the difference between a human mind in a machine and an AI in a machine? Is there any difference?

> "How far from a truly sentient being is HTTP?" Ridiculous, right?

What is ridiculous here? You mean that a protocol is not a system, that it is an abstraction and by itself cannot do anything? Okay, I agree. But an HTTP server can do a lot of things, and it is completely reasonable to ask what the difference is between an intelligent being and an HTTP server. We can see behavioral differences: an HTTP server is very constrained and has a very limited ability to adapt to a changing environment. There is no chance it will start caching certain URLs on its own because the web developer didn't think of it, even when statistics show that doing so could greatly lower the total load.
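To make that point concrete, here is a minimal sketch (my own illustration; the class and threshold are invented for the example) of the kind of "adaptation" a server can exhibit: it caches frequently requested URLs, but only because a developer explicitly wrote that rule in advance. The behavior is fixed; nothing here is invented by the server itself.

```python
from collections import Counter

class TinyCache:
    """Hypothetical hit-count cache: the server appears to 'adapt'
    to traffic, but only by a rule a developer hard-coded."""

    def __init__(self, threshold=2):
        self.hits = Counter()   # how often each URL was requested
        self.store = {}         # cached responses
        self.threshold = threshold

    def get(self, url, fetch):
        self.hits[url] += 1
        if url in self.store:
            return self.store[url]          # served from cache
        body = fetch(url)                   # hit the backend
        # Cache only URLs requested at least `threshold` times --
        # a fixed, pre-programmed rule, not learned behavior.
        if self.hits[url] >= self.threshold:
            self.store[url] = body
        return body

# Simulate repeated requests to one hot URL.
backend_calls = []
def fetch(url):
    backend_calls.append(url)
    return f"body of {url}"

cache = TinyCache(threshold=2)
for _ in range(4):
    cache.get("/hot", fetch)
# Only the first two requests reach the backend; after the fixed
# threshold is met, the remaining requests are served from cache.
```

The server will never generalize this rule to anything its author didn't anticipate, which is exactly the behavioral gap the comment is pointing at.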

So we can argue that an HTTP server is not intelligent. And we need to make that argument, because at some point we will need to argue about the intelligence of AI, and to do that we first need to know how such an argument works. An HTTP server is a good place to start.



