There have been some promising studies showing rapid-onset relief for major depression following the administration of ketamine. Glutamate seems to be getting more attention these days, in terms of its potential role in mental illness.
Personally I think "generate healthy discussion" should be the goal of all HN threads, which is why I think comment-count penalties are unproductive. How do you tell the difference between "vigorous debate" and a "flame war"? If the algorithm utilized NLP to analyze the tenor of the discussion, that would be one thing, but upvotes vs. comment count is a troubling metric. I want to read threads where people are passionately discussing a topic with long back-and-forth debates. If that's a "flame war" then I'd like more flame wars, please.
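To make the contrast concrete, here's a toy sketch of the two heuristics. None of this is HN's actual algorithm; the function names, weights, and sentiment scale are all invented for illustration:

```python
# Toy comparison of two thread-ranking heuristics.
# Not HN's real algorithm; names and weights are made up for illustration.

def comment_penalty_score(upvotes: int, comments: int) -> float:
    """Penalize threads where comment count outpaces upvotes."""
    return upvotes / max(comments, 1)

def sentiment_weighted_score(upvotes: int, comments: int, avg_sentiment: float) -> float:
    """Scale engagement by the tenor of the discussion instead.
    avg_sentiment is in [-1, 1]: -1 is hostile, +1 is civil."""
    return (upvotes + comments) * (1 + avg_sentiment) / 2

# A long, civil debate: many comments, positive tenor.
debate = sentiment_weighted_score(upvotes=50, comments=200, avg_sentiment=0.6)
# The same thread scored by the comment-count penalty.
penalized = comment_penalty_score(upvotes=50, comments=200)

print(penalized, debate)  # 0.25 vs 200.0
```

The point of the sketch: the penalty metric scores a vigorous 200-comment debate identically to a 200-comment flame war, while the sentiment-weighted version distinguishes them (at the cost of needing NLP good enough to estimate tenor).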
I'm doubtful we can really "prove" true unconsciousness exists (in the sense that subjective awareness completely ceases) until we have a much more advanced understanding of how experience arises in the brain. Right now the most we can say is "X results in an amnesic, non-responsive state". After all, if we rely on self-reporting, how can someone remember being unconscious if they have no memory of it?
Also, the article keeps saying the claustrum, but don't we have one in each hemisphere? Which one did the scientists stimulate, and what would happen if you stimulated the other one?
Enjoyed the shout-out to Bohmian mechanics. Nonlocality may be weird, but you know what else is weird? Everything else about quantum mechanics. I'd actually prefer a nonlocal deterministic theory to a local indeterministic one, though I know that's just a philosophical preference. Still, I wish Bohmian mechanics were more popular; I wasn't even aware it existed until recently.
Multiverse is perfectly deterministic, without the issues around Bell's inequality or the need for a superfluous particle in addition to the wavefunction. I think pilot wave theory is throwing the baby out with the bathwater.
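For anyone who hasn't seen it, the constraint in question is usually stated in its CHSH form: for correlations $E$ between measurements along settings $a, a'$ and $b, b'$, any local hidden-variable theory must satisfy

```latex
% CHSH form of Bell's inequality: any local hidden-variable theory obeys
\left| E(a,b) - E(a,b') + E(a',b) + E(a',b') \right| \le 2
% whereas quantum mechanics can violate it, up to Tsirelson's bound of 2\sqrt{2}.
```

Experiments observe the quantum violation, which is why pilot wave theory keeps determinism only by accepting explicit nonlocality.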
> No one would bat an eye if a used car salesman put his own personal interest above the interest of his customer?
That could be because they expect used car salesmen to behave unethically. In that case, their "not batting an eye" would indicate that they don't feel a responsibility to address this injustice, not that the act under consideration is perfectly just by their standards.
> Why should we then be outraged when someone sells you an annuity that isn't the ideal product for you?
Because it strikes us as violating our ethical principles. From my perspective, business should be about providing value for the customer. Taking advantage of people to get their money is terrible, and I would think less of anyone (car salesman or financial planner) who did so.
A good salesman persuades customers to take action that is beneficial to both parties.
Sorry to hear the news. I interviewed with Earbits at one point and they seemed like really good guys. If I may offer some gentle post-mortem constructive criticism:
Earbits seemed very focused on their value proposition to musicians. Their goal seemed to be "the Google AdWords of Music", but here's the thing: if you want to become the Google AdWords of music, you first have to become the Google of Music. The value proposition to the listeners is key.
To their credit, Earbits did have a value proposition for listeners: music discovery. But this meant their competition was Pandora, music blogs, Pitchfork.com, word of mouth, and all the other ways we find music in the 21st century. Another commenter here says "music is a tough nut to crack," but the real issue here is that music discovery is a tough nut to crack.
You could counter that Earbits differentiated itself by focusing on independent, unsigned artists; that's a noble mission, but if you're going to commit to that as your value proposition, you have to ask yourself if there's a market for it. Is finding unsigned artists really a pain point for most music listeners? I'm not convinced.
Anyway, sorry it didn't work out. Wish you guys the best of luck in your future endeavors.
You're spot on. Unfortunately, our approach was a double-edged sword. No company that licenses a big catalog of music first ever survives to tell the tale. But building a two-sided marketplace meant splitting our already thin resources in half, switching back and forth between pleasing listeners and pleasing bands, and we just weren't able to build the consumer experience necessary. In retrospect, had we focused on mobile first, built mostly listener features until that product was great, and then focused on the artists, we might have had a better shot. The problem with that is, had we not built enough to wow the bands and labels, we would never have gotten the kind of artists we did. We had to prove two different concepts on 1/100th the budget of our competitors, who only serve one side of the market.
Perhaps the point being made was that the individual motives/goals of the employees are irrelevant to assessing the overall motives of the company as a whole. Personally I'd say it's whatever the CEO's goals are, but shared agency is a weird thing.
> Imagine such a computer program fooling every human it interacts with for decades at a time, I think that would say something about ourselves and I think it would render the question of consciousness meaningless.
Maybe we're operating with different notions of "the question of consciousness", but I have to disagree that a perfect simulation of a human mind would dissolve the problem. And by "the problem" I mean the Hard Problem of Consciousness, the issue of how the hell the subjective experience of being arises in a lump of bloody meat (or in anything, really).
I suppose my question to you is this: do you believe there is something it is like to BE that simulation, to have a first-person subjective experience from the perspective of the computer?
I agree with you that elevating certain aspects of reality to "supernatural" status is unhelpful; it's the equivalent of saying "sorry scientists, but this stuff is out of bounds." (I'm also not convinced that dividing reality into natural and supernatural is even a coherent distinction, but that's another discussion.) However, I have to take issue with your suggestion that the answer to consciousness is "much simpler... than we are willing to admit." It may turn out that a future science will come up with a very satisfying answer to the Hard Problem of Consciousness, but I suspect that this will require a paradigm shift so radical that it will make current neuroscience look like phlogiston theory.
>Maybe we're operating with different notions of "the question of consciousness", but I have to disagree that a perfect simulation of a human mind would dissolve the problem.
Because human brains are magic?
> And by "the problem" I mean the Hard Problem of Consciousness, the issue of how the hell the subjective experience of being arises in a lump of bloody meat (or in anything, really).
You have your answer: the effect of consciousness clearly arises from physical matter that is subject to the natural laws of physics and shaped blindly by natural selection. That should tell you that it is quite possible. For example, take the related concept of Free Will: I think we can confidently say that for all intents and purposes we don't have any, and yet our brains produce a very powerful subjective feeling of having it. This is why I think Consciousness is a much simpler problem than it appears to be. We have a cognitive bias for seeing the world in a very specific way, just as we have a cognitive bias toward visualizing 3-dimensional spaces and have incredible trouble visualizing higher dimensions (heck, we can't even visualize a 2-D or 1-D space without embedding it in a 3-D space).
Consciousness and free will are very powerful illusions hard-wired into our brains, unless you think the particles that make us up somehow don't follow deterministic (or, in the case of QM, random) natural laws?
See, this is the problem with this virulent strain of Scientism making the rounds right now. There's this tendency to divide ideas into two camps: "explained by our current understanding of physics" and "magic". It seems like a knee-jerk reaction to religion, as if anyone who posits that we don't have a complete understanding of reality were therefore a crypto-theist. It completely misunderstands the nature of scientific progress: the evolution of our conceptions of reality in response to empirical evidence.
It's easy to picture human progress as millennia of trudging through continually-decreasing ignorance and finally arriving at the Correct Answers, but this can be a dangerous view. Science works best when we aren't constrained by dogma -- when we're allowed to "think outside the box" and consider the world in new and revolutionary ways.
People in the thrall of this Scientism often act like raising the Hard Problem of Consciousness is some kind of intellectual weakness, that we're just not able to "get over" the fact that subjective experience happens to emerge from physical matter. Nobody here (or at least not me) is saying that the Hard Problem means that subjective experience is "magic" or that we have anthropomorphic souls that float up to heaven with angel wings when we die. The Hard Problem of Consciousness is still a scientific problem, and I'm not saying it can't be solved by science. But telling me that Strong AI will basically solve this issue (and that any protestation is an appeal to "magic") comes across as hand-waving; the simulation of a conscious entity does not help me understand why being in the world feels like anything.
Do you believe a perfect simulation of a human mind would result in something with an identical behavior to that of the person whose mind we're simulating? yes or no.
If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption in the question (that identical behavior implies identical properties). If no, then, again, you're contradicting the assumption in the question. You think consciousness is not determined by behavior. Ok, then what determines whether or not something is conscious?
> Do you believe a perfect simulation of a human mind would be able to create something with an identical behavior to that of the person whose mind we're simulating? yes or no.
Sure, I'll admit the possibility exists.
> If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption given in the question (that identical behavior implies identical properties).
I must've missed this... who was saying that identical behavior implies identical properties? Sure, I'd probably disagree that the A.I. is conscious in a subjective sense, and I'd also probably disagree that identical behavior implies identical properties. People can imitate each other without taking on the properties of the imitated.
I don't really have an empirical "test" for subjective consciousness beyond my own immediate, first-person experience of it. This may sound like a concession or even a defeat, but I think I'm allowed to posit that phenomena exist which we currently lack the empirical tools to investigate. "Currently" is the key word; as I said before, it is arrogant to assume consciousness will forever remain a mystery to scientific inquiry, just as it is arrogant to assume it must be a simple extension of existing theories.
I admit I have nothing beyond my own experience to validate the idea of subjective perception, and I have no evidence beyond intuition as to whether or not a machine can "experience" input the same way a brain can. However, I think I'm still entitled to believe that subjective experience is a real phenomenon whose nature can and should be explained, and that our scientific understanding is presently inadequate for this task.
EDIT: I can understand the fear of relying on intuition. After all, it's the same thing that led us to believe that lightning came from the gods. But that doesn't mean that we should throw out the entire experience of perceiving lightning. Clearly lightning is a phenomenon we experience, but we still don't understand how photons entering our eyes produce the subjective experience of blinding whiteness, or how thunder's vibrations, translated into electrical signals by the ear, result in the subjective experience of the sound itself. The information is in the brain, but we still don't know how information becomes experience. This doesn't mean we have to explain it via gods, but it does mean we still have something left to explain.
>See, this is the problem with this virulent strain of Scientism making the rounds right now.
We have a very good understanding of the fundamental forces and particles that govern the brain and our everyday experience. That doesn't mean we'll use the vocabulary and mathematics of fundamental physics to explain brain processes, just as we don't use particle-physics vocabulary when we model hurricanes or explain cell processes. Nevertheless, whatever model or explanation you come up with for Consciousness had better square with that fundamental physics; otherwise you're in crackpot territory. That's not Scientism, that's just a fact.
>There's this tendency to divide ideas into two camps: "explained by our current understanding of physics" and "magic".
Again, Quantum Mechanics and the Standard Model (as well as the laws of Chemistry that abstract them) are not going away. Evolution and Natural Selection are not going away either. That constrains the kinds of explanations we will have for Consciousness. If you think understanding Consciousness will overturn either the Standard Model or Evolution, you're going to be very disappointed. Again, that's not Scientism, that's just a smart prediction.
>But telling me that Strong AI will basically solve this issue (and that any protestation is an appeal to "magic") comes across as hand-waving
I didn't say it will solve Consciousness. I think Consciousness is an ill-defined concept, yet one that many people have very strong feelings about. I speculated that we'd probably come to see it as such when (if) we are capable of building such a strong AI, and probably before that.
You should try harder not to assume the people you're talking to are morons. The person you're debating doesn't think evolution is going away, not even a little bit, not even for a second. That you don't recognize this means you should be more charitable in understanding his point of view.
Eh, I'm willing to cut them some slack. I talked about science as a continuous process of revision, which might give the impression that I'm making a Pessimistic Induction argument (http://en.wikipedia.org/wiki/Pessimistic_induction) against all scientific conclusions. I think macspoofing was just trying to counter with examples of scientific theories that seem pretty airtight, and that's fair. Also I basically accused them of falling prey to mindless Scientism, which was admittedly a bit harsh (seeing people attribute "magical" explanations to skeptics can trigger my rage mode, apparently).
Anyway, this whole thing is very controversial (they don't call the Hard Problem "hard" for no reason), and I can see the appeal of trying to safeguard the scientific process of knowledge-building from the messy weirdness of subjective experience. Time will tell if consciousness can be explained by a more advanced physics. I'm certainly looking forward to it.
>That you don't recognize this means you should be more charitable in understanding his point of view.
Should that understanding have come before or after I was accused of 'Scientism'? It feels like an insult, but I can't be sure, because I don't really know what it means in this context.
>The person you're debating doesn't think evolution is going away, not even a little bit, not even for a second.
I didn't imply that he did. The point I was trying to get across is that there are some real constraints on the type of explanations we'll have with respect to Consciousness. We are not going to need unknown exotic physics to explain it, and whatever the answer is, it will stay comfortably within the current Evolutionary framework. Obviously that could be wrong, but I wouldn't bet on it. This should not be a controversial statement.
You actually can't prove _anybody else in the world_ besides you has a first person subjective experience. Trying to prove a magic talking box has experience when you can't prove another moist robot has experience is moving the goalpost too far out.
require a paradigm shift so radical
Not really, we just need more high speed training data. It'll all be available to way too many people, governments, and companies within the next few years.
> You actually can't prove _anybody else in the world_ besides you has a first person subjective experience.
That's absolutely true. My wife and I sometimes joke that each of us is an incorporeal figment in the other's dream. It's impossible for me to know with certainty that I am not the only real consciousness in existence. It's an interesting line of thought, but ultimately unproductive; what could I do if it's true? If it's not, then the validity of all of the other consciousnesses is just as pressing as mine. Either way, it behooves me to behave as though other minds are real, so I suppose that's the starting point for all of my thoughts on this topic: I and all other humans have a first-person subjective experience. I agree with (parent? gp?) that describing the nature of that experience is non-trivial.
> Trying to prove a magic talking box has experience when you can't prove another moist robot has experience is moving the goalpost too far out.
Maybe I'm just confused about what goals we're talking about. If your goal is to understand cognition, then sure; a highly intelligent machine is a great way to do that. However, the comment I replied to was suggesting that strong AI would "render the question of consciousness meaningless"; that's a far stronger claim, and in my opinion, unrealistic. I think you and I are actually in agreement on this one... if anything, I was arguing against A.I. having the goalpost of understanding subjective experience.
strong AI would "render the question of consciousness meaningless"; that's a far stronger claim, and in my opinion, unrealistic.
That's actually a great point many people misunderstand, largely due to nobody pinning down a definition for "Strong AI."
I prefer the term "computational consciousness" instead of Strong AI. It gets the point across better about future AI actually _thinking_ and _experiencing_ instead of having people misconceive AI as just a clever if/else decision tree.
Strong/Hard AI (and human consciousness) is a combination of algorithms and data. People have hard-wired algorithms for processing data from their senses. Some people have better hard-wired algorithms than others. But if you take the smartest person alive today back in time and raise them in an isolated environment (i.e. limit their data intake), they won't be the same person and they won't be able to think the same thoughts.
Summary: AI = mostly data, with algorithms to help organize/cluster/recall things. You can't have recall and intent of agency without a self-directed consciousness controlling the internal state<->external world feedback gradient.