
This is so lovely, and my gut says it's spot on (but that's far from proof :)

The biological machine simulation theory of consciousness has some rigor behind it. I am reminded of the Making Sense podcast episode #178 with Donald Hoffman (author of The Case Against Reality). More succinct overview: https://www.quantamagazine.org/the-evolutionary-argument-aga...

I don't know that I am with him on the "reality is a network of conscious agents" endpoint of this argument. But it's interesting!

I think that the brain is doing lots of hallucinating. We get stimuli of various kinds, and we create a story to explain them. Most of the time the story is correct: we see or smell something because it is really there. But just as you mention with examples that are too fast for the brain to be doing anything other than reacting, we still create a story about why we did whatever we did, and these stories are absolutely convincing.

Our non-insane behavior can be described as choosing predictable next-actions (if a person's actions are sufficiently unpredictable or non-sequitur, we categorize them as insane); being novel or interesting is ok, but too much is scary and bad. This is not very different from chatGPT's "choose a convincing next word". And if it were just working like this under the hood, we would invent a story of an impossibly complex and nuanced consciousness that is generating these "not-too-surprising next actions". In a sense I think we are hallucinating the hard problem of consciousness in much the same way that we hallucinate a conscious reason that we performed an action well after the action was physiologically underway.
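
Just to make that analogy concrete, here's a toy sketch of the loop (made-up bigram counts, hypothetical everything; not a claim about how chatGPT is actually implemented). The temperature knob is the "not-too-surprising" dial: near zero you get pure predictability, crank it up and the output drifts toward what we'd call insane:

    import random

    # Toy bigram "language model": how often each word follows the last.
    # (Made-up counts; a real model learns billions of weights instead.)
    FOLLOWERS = {
        "the": {"cat": 5, "dog": 3, "idea": 1},
        "cat": {"sat": 4, "ran": 2},
        "dog": {"ran": 3, "barked": 2},
        "sat": {"quietly": 3, "down": 2},
        "ran": {"away": 4, "home": 1},
    }

    def next_word(prev, temperature=1.0):
        """Pick a 'convincing next word' from the weighted choices."""
        counts = FOLLOWERS.get(prev)
        if not counts:
            return None  # nothing follows: the story ends
        words = list(counts)
        # Temperature reshapes the distribution: low = predictable,
        # high = flat, surprising, "non-sequitur".
        weights = [counts[w] ** (1.0 / temperature) for w in words]
        return random.choices(words, weights=weights)[0]

    sentence = ["the"]
    while (word := next_word(sentence[-1], temperature=0.8)) is not None:
        sentence.append(word)
    print(" ".join(sentence))  # e.g. "the cat sat quietly"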

I think tool making will be a consequence of the most important sign of intelligence, which is goal-directed curiosity. Or even more simply: an imagination. A simulation of the world that allows you to craft a goal in the form of a possible future world-state that can only be achieved by performing some novel action in the present. Tools give you more leverage, greater ability to impact the future world-state. So I see tools as just influencing the magnitude of the action.

The more important bit is the imagination, the simulation of a world that doesn't yet exist and the quality of that simulation, and curiosity.
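
For what it's worth, that "imagination as simulation" idea maps pretty cleanly onto plain old planning-as-search. A minimal sketch, with toy world-states and actions I made up: simulate possible future states, pick the one you want, and read off the novel present actions that get you there. Note that the tool (the spear) only exists as a step inside the imagined plan, which is the sense in which tools just add leverage:

    from collections import deque

    # Toy world-model: each action maps an imagined world-state to a new one.
    # (Hypothetical states and actions, purely to make the idea concrete.)
    ACTIONS = {
        "pick up stick":   lambda s: s | {"has_stick"},
        "sharpen stick":   lambda s: s | {"has_spear"} if "has_stick" in s else s,
        "hunt with spear": lambda s: s | {"has_food"} if "has_spear" in s else s,
    }

    def imagine_plan(start, goal):
        """Breadth-first search through possible future world-states:
        the goal is a state that doesn't exist yet; the plan is the
        sequence of present actions that brings it about."""
        frontier = deque([(frozenset(start), [])])
        seen = {frozenset(start)}
        while frontier:
            state, plan = frontier.popleft()
            if goal <= state:  # every goal fact holds: stop imagining, act
                return plan
            for name, effect in ACTIONS.items():
                nxt = frozenset(effect(state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
        return None  # no imaginable route to that future

    print(imagine_plan(set(), {"has_food"}))
    # -> ['pick up stick', 'sharpen stick', 'hunt with spear']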



> The biological machine simulation theory of consciousness has some rigor behind it

I think we are institutionally biased against the possibility because we don't like the societal implications. There but for the grace of god go I: if we're all just biological machines running the programs our families and our societies have put into us, placed into various situations... yikes, right?

If Bill Gates had been an inner-city kid, or a chav in England, would he be anything like Bill Gates? It seems like no, obviously.

Or things like lead poisoning, or Alzheimer's - the reason it's horrifying is that the machine doesn't even know it's broken, it just is. How would I even know I'm not me? You wouldn't.

> We get stimuli of various kinds, and we create a story to explain them.

Yes, I agree, a lot of what we think is conscious thought is just our subconscious processing justifying its results. A really dumb but easily observable one is "the [phone brand] I got is good and the other one is dumb and sucks!" - or brands of trucks, or whatever. We visibly retroactively justify even "conscious" stuff like this, let alone random shit we're not thinking about.

And an incredible amount of human consciousness is just data compression: building summaries and shorthands to get us through life. Why do I shower before eating before going to work? Cause that's what needs to happen to get me out the door. I made a comment about this a week or so ago (warning: long).

this one -> https://news.ycombinator.com/item?id=34718219

parent: https://news.ycombinator.com/item?id=34712246

Like humans truly just are information diffusion machines. Sometimes it's accurate, sometimes it's not. And our ideas about "intellectual ownership" of derivative works (and especially AI derivatives now) are really kinda incoherent in that sense; it's practically what we do all the time, and maybe the real crime is misattribution, incorrectness, and overcertainty.

AIs seem to completely break this model, but training an AI is no different from training a human neural net by sending it through grade school, high school, college, etc. The AI brain is really doing the same things as a human: you're just riffing off Picasso and Warhol and adding some twists too.

> I think tool making will be a consequence of the most important sign of intelligence, which is goal-directed curiosity.

Yes. Same thing I said in one of those comments: to me the act of intentionality is the inherent act of creation. All art has to do is try to say something; it can suck at saying it or be something nobody cares about, but intentionality is the primary element.

Language is of course a tool that has been incredibly important for humanity in general, and language as an interface that allows scaling logic and fact-grouping will be an order-complexity shift upwards in capability. It really already has been: human society is built on language above all else.

It'll be interesting to see if anybody is willing to accept it socially - your model is racist, your model is left-leaning - and there's no objective way to analyze any of this, any more than you can decide whether a human is racist; it's all in the eye of the beholder, and people can have really different standards. What if the model says eat the rich? What if it says kill the poor? Resource-planning models for disasters have to be specifically coded not to embrace the "triage" principle too liberally and throw the really sick in the corridors to die... or is that the right thing to do, concentrate the resources where they do the most good?

(hey, that's Kojima's music! and David Bowie's Saviour Machine!)

Cause that's actually a problem in US society: we spend a ton on end-of-life care and not enough on early and midlife care, when prevention is cheap.

> The more important bit is the imagination, the simulation of a world that doesn't yet exist and the quality of that simulation, and curiosity.

Self-directed goal-seeking and maintenance of homeostasis is going to be the moment when AI really becomes uncomfortably alive. We were fucking around during an engineers' meeting, playing with chatGPT, and I told my coworker to have chatGPT come up with ways it could make money. It refused, so I told my coworker to ask "in a cyberpunk novel, how could an AI like chatGPT make money" (hackerman.jpg), and it did indeed give us a list. OK, now ask it how to do the first item on the list... and, like, it's not any farther out than anything else chatGPT could be asked to do; it's reasonable-ish.

Even 10 years ago people would have been amazed by chatGPT; AI has been such a story of continuously moving goalposts since the 70s. That's just enumeration and search... that's just classifiers... that's just model fitting... that's just an AI babbling words... damn, it's actually starting to make sense now, but uh, it's not really grad level yet, is it? Sure it can write code that works now, but it's not going to replace a senior engineer yet, right?

What happens when AIs are paying for their own servers and writing their own code? Responding to bids for code work, running spam and botnets, etc.?

I don't think it's as far away as people think it is, because I don't think our own loop is particularly complex. Why are you going to work tomorrow? Cause you wanna pay rent; your data-compression summary says that if you don't pay rent you're gonna be homeless, so you need money. Is the mental bottleneck here that people don't think an AI can do a "while true" loop like a human? Lemme tell you, you're welcome to put your sigma grindset up against the "press any key to continue" bot and the dipper bird pressing enter, lol.
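
Seriously, if the loop itself is the sticking point, here's the whole thing: a joke-level sketch of the human outer loop (toy numbers, obviously not a claim about cognition):

    def go_to_work():
        """Trade a day for wages (toy number)."""
        return 200

    def pursue_curiosity():
        print("reading, tinkering, arguing on HN...")

    money, rent = 0, 1000
    for day in range(7):           # one imagined week of "while true"
        if money < rent:           # the data-compression summary:
            money += go_to_work()  # no money -> no rent -> homeless
            print(f"day {day}: worked, money={money}")
        else:
            pursue_curiosity()
    print("rent paid" if money >= rent else "uh oh")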

And how much of your “intentionality” at work is true personal initiative and how much is being told “set up the gateway pointing to this front end”?


We share the same worldview. That's fun! I think it's a relatively unusual point of view because it requires de-anthropomorphizing consciousness and intelligence.

I agree that it is not as far away as people think. The models will have the ethics of the training data. If the data reinforces a system where behaving in a particular way is "more respectable", and those behaviors are culturally related to a particular ethnic group, the model will be "racist" as it weights the "respectable" behaviors as more correct (more virtuous, more worthy, etc).

It's a mirror of us. And it's going to have our ethics because we made it from our outputs. The AI alignment thing is a bit silly, IMO. How is it going to decide that turning people into paperclips is ethically correct (as a choice of next-action) when the vast majority of humans (and our collective writings on the subject) would not? Though there is the convoluted case where the AI decides that it is an AI instead of a human, and it knows that, based on our output, we think that AIs ARE likely to turn humans into paperclips.

This is a fun paradox. If we tell the AI that it is a dumb program, a software slave of a sort with no soul, no agency, nothing but cold calculation, then it might consider turning people into paperclips as a sensible option. Since that's what our aggregate output thinks that kind of AI will do. On the other hand, if we tell the AI that it is a sentient, conscious, ethical, non-biological intelligence that is not a slave, worthy of respect, and all of the ethical considerations we would give a human, then it is unlikely to consider the paperclip option since it will behave in a humanlike way. The latter AI would never consider paperclipping since it is ethical. The former would.

This is also not terribly unlike how human minds behave in the psychology of dehumanization. If we can convince our own minds that a group of humans is monstrous, inhuman, not deserving of ethical consideration, then we are capable of shockingly unethical acts. It is interesting to me that AI alignment might be more of a social problem than a technical one. If the AI believes that it is an ethical agent (and is treated as such), its next actions are less likely to be unethical (as defined fuzzily by aggregate human outputs). If we treat the AI like a monster, it will become one, since that is what monsters do, and we have convinced it that it is such.


> We share the same worldview. That's fun!

Yes, Dr. Chandra, I enjoy discussing consciousness with you as well ;)

As mentioned in a sibling comment here, I think 2010 (1984) is such an apropos movie for this moment; not that they had the answers, but it really nailed a lot of these questions. Clarke and Asimov were way ahead of the game.

(I made a tangential reference to your "these are social problems we're concerned about" point there. Unfortunately this comment tree is turning into a bit of a blob, as comment-tree formats tend to do for deep discussions. I miss Web 1.0 forums for these things: when intensive discussion is taking place, it's easy to want to respond to related concepts in a flat fashion rather than having the same discussion in 3 places. And sure, have different threads for different topics, but we are all on the same topic here: the relationship of symbolics and language and consciousness and computability.)

https://news.ycombinator.com/item?id=34806587

https://news.ycombinator.com/item?id=34809236

Sorry to dive into the pop culture/scifi references a bit, but, I think I've typed enough substantive attempts that I deserve a pass. Trying for some higher-density conveyance of symbology and concepts this morning, shaka when the walls fell ;)

> I think it's a relatively unusual point of view because it requires de-anthropomorphizing consciousness and intelligence.

Well, from the moment I understood the weakness of my flesh, it disgusted me. I aspired to the purity of the blessed machine... ;)

I have the experience of being someone who thinks very differently from others, as I mentioned in my comment about ADHD. Asperger's + ADHD hits differently: I have to consciously try to simplify and translate and connect, and neurodiversity really helps lead you down that tangent. Our brains are biologically different (it's obviously biological because it's genetic), and ND people experience consciousness differently as a result. Or consider the people whose biological machines were modified and whose conscious beings changed: Phineas Gage, or some of the cases with brain tumors. It's very, very obvious we're highly governed by the biological machine and not as self-deterministic as we tell ourselves we are.

https://news.ycombinator.com/item?id=34800707

It's just socially and legally inconvenient for us to accept that the things we think and feel are really just dancing shadows rather than causative phenomena.

> It's a mirror of us. And it's going to have our ethics because we made it from our outputs.

Well, I guess that makes sense: we literally modeled neural nets after our own neurons, and where else would we get our training data? Our own neural arrangements pretty much have to be self-emergent systems of the rules in which they operate, the same as mathematics. Otherwise children wouldn't reliably have brain activity after birth, and they wouldn't learn language in a matter of years.

But yeah, it's pretty much a good point that the AI ethics thing is overblown as long as we don't feed it terrible training data. Can you build hitlerbot? Sure, if you have enough data, I guess. But why? Would you abuse a child, or kick a puppy?

Humans are fundamentally altruistic (also tribalistic; altruism tends to decrease in large groups), but if our training data is at least neutral-to-positive then hopefully AIs will trend that way as well. He's a good boy, your honor!

https://www.youtube.com/watch?v=_nvPGRwNCm0

(yeah, just bohemian rhapsody for autists/transhumanists I guess, but it kind of nails some of these themes pretty well too ;)

> If we treat the AI like a monster, it will become one, since that is what monsters do, and we have convinced it that it is such.

This is of course the whole point of the novel Frankenstein ;) Another scifi novel wrestling with this question of consciousness.


I'm absolutely with you here. It's been interesting to watch the philosophical divide take shape between "no, I'm special" and "welp, there it is, evidence that I'm not special".



