I think Big Tech is trying hard to build superintelligent AI soon, so it's hard to avoid the questions Anthropic raises. At some point we'll have to talk about whether AI should have moral rights, even if it isn't really 'conscious' the way humans are. We need to figure out what AI consciousness would even mean. If AI can understand meaning deeply in its own digital way (not the human way) and starts to know itself, or even to know that it knows itself, rather than just copying like today's AI models, isn't that something like human self-awareness ("knowing oneself") and meta-awareness ("mindfulness"), even if it's physically different?


It's too early in the development of AI technology to start asking questions about its self awareness and sentience. We don't yet have a reasonable model that explains our own sentience or that of other animals, let alone that of machines. People are understandably very excited about LLMs given how many deductions they can make. But in my very subjective personal experience, they don't show anything approaching sentience or self awareness. Their interactions are dry and devoid of the liveliness that human interactions exhibit. They have, at best, what can be described as a simplistic mechanical approximation of a personality - one that lacks the depth, nuance and imperfections of a real human or animal personality. In my subjective assessment, the imminence of intelligence that can mimic biological intelligence is being greatly exaggerated and overhyped. It could well be decades away at best. The need for machine rights is equally far away.

The actual reason behind these demands, I believe, is to justify the things they do with these models. For example, didn't they argue that fair use applies to training on copyrighted materials without permission, because training is not like other forms of digital reproduction? Imagine how far they could push that argument if AI sentience were recognized. It's just an extension of their greedy agenda.

Now going on a tangent, it's surprising that people have AI girlfriends and boyfriends. Trying to make an emotional connection with them is really off-putting because of how unnatural they feel - even without prior knowledge that it's an AI, and no matter how closely they mimic a romantic human interaction. Dogs do an infinitely better job at making emotional connections with humans, without uttering a single word.


"We don't yet have a reasonable model to explain even those of our own or other animals, let alone those of machines, for that matter."

Is in direct conflict with:

"they don't show anything approaching sentience or self awareness."

Waiting for us to figure out sentience before we decide to apply morals is akin to putting OpenAI in charge of determining when we have achieved AGI. The people in power have no incentive to declare it even if the evidence is available, because doing so would destroy their profits indefinitely.


Do you believe that you are sentient and have self-awareness? Do you sense when you're dealing with such a being? Did the lack of a formal model ever stop you from acting like one?

You don't need a mathematical model to use something that's built into you. For example, you don't need a model of android locomotion to climb a flight of stairs. But you absolutely need one to build a bipedal robot that does the same - especially if there is a danger that it will lose balance and land on top of you. Artificial sentience belongs to the latter class.

I'm not concerned about machines gaining equal rights. But I'm worried about how that will be used by the rich who build them - as I outlined in my previous reply. And as long as the provision for its abuse exists, it's guaranteed to be abused. Given such a situation, adequate care and precautions are much more warranted than the zeal to declare sentience prematurely.


While the West still doesn’t have clear tools to measure consciousness (though Eastern philosophy has been tackling this stuff systematically for thousands of years), I think we can just focus on two big things for this debate: AI’s ability to deeply understand meaning and know itself. That’s probably close enough to human awareness without getting lost in the philosophy of what human consciousness really is.


I don’t think AI needs to fully copy or match human consciousness. If it can have some kind of 'self-awareness' and 'deep understanding of meaning,' that’s probably enough to start deciding on things like moral status. And I don’t think AI’s moral status (if it gets one) has to be equal to humans'. It might not even take decades, because AI’s smarts (even if it’s just mimicry) are already growing so fast that regular people can hardly keep up.


Again, my concern is about the legal implications of declaring sentience when there is no way to verify it, but every guarantee that it will be misused and abused by the super privileged to strip-mine the planet and society. In the future, when AI is assumed to have achieved sentience, will it be able to demand that it not be used as a tool to exploit the rest of humanity? If so, go ahead and give it rights. But somehow, that's not how I expect it to end. Late-stage capitalism has the dubious record of perverting every technology it can get its hands on.

Even now, AI is hardly the panacea that is saving humanity from an impending crisis. Far from it: it's currently the vessel for copyright-washing (and GPL-washing) creative work, wrecking the job market with rhetoric about abilities well beyond reality, doing things we were already doing (like web searches) at 10x the energy cost, and running massive data centers that produce noise well above hazard levels and CO2 emissions equivalent to those of small nations.

I understand that AI technology is capable of novel applications, including in fields that need urgent attention (like climate research). But that's not what drives it today - it's profit seeking bordering on insanity. They're getting people addicted to uses that are hardly novel and come at a steep energy cost. Why else are these companies so worried about AI rights when they have scant regard for human rights? I'm not concerned about what AI can do. I'm concerned about what AI will do.

