Since when is a study needed to confirm that enabling a dopamine addiction, especially in developing minds, is a bad idea? Isn't our own direct experience as adults/parents struggling with said addictions enough?
Since always: a study is needed to determine whether anything is true. Sometimes the study is simple: look out the window and see if it's raining. But this is not one of those cases.
The idea of replicating a consciousness/intelligence in a computer seems to fall apart even under materialist/atheist assumptions: what we experience as consciousness is a product of a vast number of biological systems, not just neurons firing or words spoken/thought.
Even considering something as basic as how fundamental bodily movement is to mental development, or how hormones influence mood and ultimately thought, how could anyone ever hope to replicate such things in software in a way that "clicks" to add up to consciousness?
Conflating consciousness and intelligence is going to hopelessly confuse any attempt to understand if or when a machine might achieve either.
(I think there's no reasonable definition of intelligence under which LLMs don't possess some, setting aside arguments about quantity. Whether they have or in principle could have any form of consciousness is much more mysterious -- how would we tell?)
Defining machine consciousness is indeed mysterious; at the end of the day it depends on how much faith one puts in science fiction rather than in any objective measure.
Seems like a philosophy question, with maybe some input from neuroscience and ML interpretability. I'm not sure what faith in science fiction has to do with it.
I don't see a strong argument here. Are you saying there is a level of complexity in biological systems that cannot be simulated? And if so, who says sufficient approximations and abstractions aren't enough to reproduce the emergent behavior of those systems?
We can simulate weather (poorly) without modeling every hydrogen atom interaction.
The argument is about causation or generation, not simulation. Of course we can simulate just about anything, I could write a program that just prints "Hello, I'm a conscious being!" instead of "Hello, World!".
The weather example is a good one: you can run a program that simulates the weather in the same way my program above (and LLMs in general) simulate consciousness, but no one would say the program is _causing_ weather in any sense.
Of course, it's entirely possible that more and more people will be convinced AI is generating consciousness, especially when tricks like voice or video chat with the models are employed, but that doesn't mean that the machine is actually conscious in the same way a human body empirically already is.
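The trivial "program that just prints a claim" described above can be made concrete. A minimal sketch in Python (the choice of language and the function name are mine, purely for illustration):

```python
# A program that merely *reports* consciousness, the way a weather
# simulation reports rain: the claim is output, but nothing is caused.
def simulated_consciousness() -> str:
    # The string below is this program's entire "mind".
    return "Hello, I'm a conscious being!"

print(simulated_consciousness())
```

The point of the sketch is that the output alone is indistinguishable in kind from any more elaborate text generator's claim; the disagreement in this thread is over whether scaling this up ever crosses from simulating to causing.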
>but that doesn't mean that the machine is actually conscious in the same way a human body empirically already is
Does it matter? Is a dog/cow/bird/lizard conscious in the same way a human is? We're built from the same basic parts, and yet humans seem to have a higher state of consciousness than other animals around us.
For example, the dictionary definition of "conscious" is
>aware of and responding to one's surroundings; awake.
I'll grant that we likely mean this in a more general sense, but I'd say we're pretty close to it with machines. They can observe the real world with sensors of various types, then either compute directly or use neural nets to make generalized decisions about what is occurring around them, and then act on those observations.
You have weather readouts. One set is from a weather simulation: a simulated planet with a simulated climate. The other is a set of real recordings from the same place on the real planet, taken by real weather-monitoring probes. They have the same starting point, but diverge over time.
They're not asking about telling the difference between collected data sets; data sets aren't weather.
The question is whether you can tell the difference between the rain you see outside your window and some representation of a simulated environment where the computer says "It's raining here in this simulated environment." The implied answer is: of course, since one is water falling from the sky and the other is a machine.
>The implied answer is of course, one is water falling from the sky and one is a machine.
Let's say you're in a room a good distance from the window. Suddenly you hear what sounds like thunder and falling rain. From a distance it appears to be raining outside.
Is the rain real? Or is it simulated on a screen well enough you can't tell?
You have input/output devices just like a computer does. They don't see reality; they filter out huge amounts of data, and your brain just interprets what's left. If our machines get good enough, we may be able to send signals directly to the brain that say it's raining, and the brain would have no idea whether the rain was simulated or not. Much in the same way, it feels like we exist here rather than as a 3D hologram an infinite distance away (or whatever other weirdness physics may or could allow).
So how do we get from "machines may get really good at faking things to human perception" to "machines themselves can have human-like (or 'better') perception"?
The question is so-called consciousness arising out of machines, not machines deceiving human consciousness.
Even then, that deception still doesn't prove equivalency of simulated and real things, unless we're adopting an extreme and self-contradictory subjectivist epistemology.
People seem to way overcomplicate consciousness, especially in machines.
Where does a running video game exist? It's a simulation in the hardware. Where is the consciousness in a human brain? Again, it's electrical signals in the hardware of the brain.
At the end of the day, a system is what it does. Once it starts simulating the human mind in ways that appear human, denying that is like insisting a plane has to flap its wings or it's not really flying.
You can't look at the "real weather" though. You can only look at the outputs. That's the constraint. Good luck and have fun.
A human brain is a big pile of jellied meat spread. An LLM is a big pile of weights strung together by matrix math. Neither looks "intelligent". Neither is interpretable. The most reliable way we have to compare the two is by comparing the outputs.
You can't drill a hole in one of those and see something that makes you go "oh, it's this one, this one is the Real Intelligence, the other is fake". No easy out for you. You'll have to do it the hard way.
Even granting all of your unfounded assertions: "the output" of one is the rain you see outside, while "the output" of the other is a series of notches on a hard drive (or the SSD equivalent, or something in RAM, etc.) that is then represented by pixels on a screen.
The difference between those two things (water and a computer) is plain, unless we want to depart into the territory of questioning whether that perception is accurate (after all, what "output" led us to believe that "jellied meat spread" can really "perceive" anything?), but then "the output" ceases to be any kind of meaningful measure at all.
There is no "real weather"; the rain is the weather. The map is not the territory. These are very simple concepts; I don't know why we need to reevaluate them just because we suddenly got really good at text synthesis.
Everyone's a practical empiricist until our cherished science fiction worldview is called into question, then all of a sudden it's radical skepticism and "How can anyone really know anything, man?"
You experience everything through digital signals. I don't see why those same signals can't be simulated. You are only experiencing the signal your skin sends to tell you there is rain; you don't actually need skin to experience that signal.
Bundling up consciousness with intelligence is a big assumption, as is the assumption that panpsychism is incorrect. You may be right on both counts, but you can't just make those two assumptions as a foregone conclusion.
Talk to any right-leaning young man who's actively engaged in a church community (even better if they're actually pursuing Christ, and not just chasing an abstract ideal of "community") and you won't hear very much about the "loneliness epidemic", except maybe in reference to their peers.
It may be hard to find them online in order to "talk" to them though, and of course that's the whole point :-) Selection bias at work.
As a "believer" myself (we don't typically use that evangelical terminology but it's close enough) I can completely understand that. People should go to Church for Christ, and anything else, even "community" is a nice bonus but is not going to sustain people when pursued for its own sake, as you're experiencing.
Maybe a sport or hobby club or some other thing you're interested in where people have a specific motivation to get together, bonding and forming "community" more naturally?
"I hate dread. What a completely useless emotion."
Funny you should say that, since the sense of dread seemed to motivate this decision pretty heavily. Seems like the emotion was pretty useful as a sort of "burnout immune system".
Ah, interesting! Useless in the same way pain is useless, then — not very helpful in the moment, but it sure teaches you what to avoid in the long run.
As a college dropout myself, I don't think I've ever used this ridiculous "argument" to justify my decision. Are there seriously people out there who fancy themselves "The next Mark Zuckerberg", simply because they don't have a degree?
Yes. I've encountered them. My anecdotal evidence is that everyone who dropped out because Mark Zuckerberg, Bill Gates, or Peter Thiel showed them 'a different way' has completely stagnated.
On the other hand, anecdotally, those dropouts who put their heads down and got stuff done, some of them did pretty well.
Finally, in my experience, people with college degrees get hired faster and, in times of woe, are the last to be let go. In fact, many SV companies (Google, for example, though funded startups often work the same way) won't hire you without a college degree unless you've done something notable. Being notable, by definition, means you are in the minority of any group, making dropping out to achieve success a risky proposition.
Anecdotally, the only people that have been laid off/fired in IT at my current $DAY_JOB are people with degrees.
I think the fundamental issue is that perception matters. The perception of having a degree matters to some people; to others, it's about performance metric X instead, etc.
I dropped out too, and while I was perhaps reassured by the fact that successful people had been able to make it without college, really I just fucking hated sitting in school all day.
There are plenty of reasons to drop out beyond "wanting to be Zuckerberg." How about "not wanting $100,000 of debt" or "preferring to be paid for my work"?
College just isn't for some people. It's not a bad thing or a good thing. It's just a thing.
I dropped out because I started college when I was 22; I had already been working as a developer for 4 years.
When I got to college, it was more of the same, and I was mostly relearning things I already knew. So for me, it was better to focus on working than on working plus going to college.
It always comes up at some point in discussions about education. Zuckerberg is a particularly poor example because he really doesn't strike me as a transcendent genius like Gates or Wozniak or Jobs, just a guy who built the right thing at the right time.
I think for the vast majority of people, self-education is really, really hard. Even now in 2014, the material available online is very limited in scope and depth. Maybe some people could grab a college curriculum, get the lecture notes, buy all the books, and then sit down and truly learn all that material by themselves. I certainly couldn't.
Oh, these kinds are everywhere. I am surprised you haven't met them.
People think having ruffled hair makes one like Einstein, being highly assertive makes you like Bezos, wearing jeans and a polo shirt makes you like Jobs, and so on.
Same. I did it because, as far as I could tell at the time, a degree would have had zero impact on my income one way or the other, so why spend the money and time (lost salary plus tuition)?