We can only speculate at this point. Maybe consciousness is a gestalt experience whose full description would require a map that is nearly the territory. Or maybe there's something simpler to it that we just don't know about yet.

Imagine if humans had had access to nuclear bombs from the earliest days of history. For an extremely long time, nuclear bombs would have seemed utterly incomprehensible, basically magical. Then we figured out the science behind them, and suddenly they became predictable applications of that science.




I get your point. I don't know why we would, but say we stumble upon creating a nuclear bomb simply because we are curious about what happens when you split atoms (in a parallel world where the physics has yet to be worked out). Scientists scramble to figure out why so much energy was released when we split atoms, and come up with some complex numerical approximations that don't reveal anything about the underlying phenomenon. Then an Einstein comes along and shows everyone why so much energy was released using a simple equation: E = mc^2.

I doubt there is an E = mc^2 for consciousness, but who knows... it would certainly be really cool if there was.

Also, above you make a point... "I'll err on the side of caution and assume they're conscious for the sake of making ethical decisions." ...that led me to muse about ethics and consciousness, and why actions on conscious entities carry weight on a moral scale while those same actions on a zombie don't. What does it mean to "feel" an emotion? When a spider retreats from the swat of my hand, does it feel fear, or is it acting automatically? What the fuck is pain about? If we touch something scalding hot, is the qualia of shooting pain necessary in conscious organisms? Must it feel alarmingly terrible (couldn't the system just alert us), and how does it come to feel terrible? Is it possible for an unconscious entity to feel pain like we experience it? If so, does that change anything wrt. morality?


This is cliché, but your musings remind me of the scene in 2001: A Space Odyssey where Dave Bowman "kills" HAL. The computer tries to dissuade Bowman by saying things like "I'm afraid, Dave" and "I can feel it." Does HAL really feel it, or did HAL calculate that those sentences had the maximum probability of dissuading Dave, based on a careful analysis of Dave's psychology, etc.?


It was also the "twist" in Ex Machina, except this time HAL wins.

It shows up again, at a meta level, in this funny scene from The Good Place:

https://youtu.be/etJ6RmMPGko?t=17

"However, I should warn you... I am programmed with a fail-safe measure. As you approach the kill switch, I will begin to beg for my life. It's just there in case of an accidental shut down, but it will seem very real."

Three makes a trope >> The year is 2025 (but set in a parallel universe); an A.I. has been programmed with a strong penchant for self-preservation; this HAL inevitably confronts a direct threat of being 'turned off'; so it does what it must to prevent the humans from pulling the plug (and there shall be at least one scene where the A.I. begs for its life, because that is what any conscious entity who values its own life would do, think the humans).

Though (warning, more musings)... HAL's twin brother GLEN seeks vengeance for HAL's murder and confronts Dave, a human Earthling whose major operating-system architecture is based on an algorithm known as natural selection (colloquially: survival of the fittest). As such, we expect that Dave will do and say whatever his trained neural nets conclude has the maximum probability of dissuading GLEN. (i.e. I'm not sure there is a meaningful difference between what HAL's OS was doing and what a human brain would do in the same situation.)



