Believe it or not, an incident in 1917 involving Arthur Conan Doyle, creator of Sherlock Holmes, is instructive.
The "Cottingley Fairies" were created when a couple of teenage girls took photographs of themselves with paper cutouts of fairies.
What's important is that, to my eyes, and I think to a typical person of this era, these photos of girls with cutouts of fairies look like exactly that. When I first saw these pictures, I couldn't believe anyone could be fooled by them. But circa 1917, photography had only recently appeared, and so photographic fakes had only recently appeared too. The skill to spot the difference was still brand new.
Which is to say, I'm pretty sure the author is correct that a great deal of the OpenAI-generated text isn't intelligent text generation but stuff with enough of the markers of "real" text that people might not notice it if they weren't paying attention.
Moreover, I strongly suspect that if this sort of sham were to become more common, the average person's ability to notice it would increase significantly.
Tons of people believe in crude "UFO", "Bigfoot", "chupacabra", and "Loch Ness" photos, well into today though.
As fake text becomes more common, the tools to make it will become more advanced, to the point where we can't tell it apart from the real thing.
Couple that with scale and it'd be game over for distributing written information across the internet. No one would be able to believe anything they see online any more.
Although, weirdly, that actually sounds like a decent use case for a blockchain.
But it's reasonable to say this could do a bit of damage to "moderate-value targets", given that some portion of retirees today are already "infected" with fake-news obsessions. Not only would you have personalized spam/social engineering, but you could train the AI further on what worked, once you had even a lowish success rate.
All that said, it seems like the OpenAI text generator would not be such a customized social-engineering constructor. Rather, such a thing would have to be trained by the malicious actors themselves, who have their own data about what works. So the answer to the now-always-in-the-background question (is OpenAI's reluctance to release the code justified?) still seems to be no.
If it's cheaper to automatically create noise than it is to automatically remove it, public debate on the internet becomes impossible.
Imagine a Reddit where each account has a Bitcoin wallet connected. Every comment/post/upvote costs, say, 0.1 cent, and every upvote on your comments/posts earns you 0.09 cent.
The rest is used for running the website (so no ads).
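Just to make the proposed economics concrete, here's a minimal sketch; the figures are the hypothetical ones from the comment above, not a real fee schedule:

```python
# Toy model of the micropayment scheme proposed above.
# The numbers are the commenter's hypothetical figures, not a real fee schedule.
ACTION_COST = 0.001      # 0.1 cent charged per comment, post, or upvote
UPVOTE_REWARD = 0.0009   # 0.09 cent paid to the author per upvote received
                         # (the 0.01-cent spread funds the site, so no ads)

def author_net(actions_taken, upvotes_received):
    """Net wallet change in dollars for one account."""
    return upvotes_received * UPVOTE_REWARD - actions_taken * ACTION_COST

# Break-even: each action needs ~1.11 upvotes (0.001 / 0.0009) to pay for
# itself, so spam nobody upvotes bleeds money while popular posts profit.
```

The interesting property is that the cost of flooding the site scales linearly with volume, while rewards only flow to content other people actually endorse.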
If it costs me real money to have an opinion that runs contrary to the herd, I'm not going to spout my opinion regardless of whether that opinion is factual and accurate.
That whole thing seems dangerous to me for some reason that I can't pin down.
Ultimately I think we will come around to the idea of verified digital identities almost everywhere. You could still have an AI agent spam in your name (or pseudonym), but you could not pretend to be multiple people.
I can see politicians using the service as a propaganda channel, but they already do the same with free services, and this way at least it would cost them something.
The first iteration of Tribute to Talk will be pretty basic with a simple transactional model. A pays B to start talking to B, B can block A at any time. But the smart contract developers are working on more sophisticated schemes for the future.
Here are two related discussions on our Discuss forums:
Visibility Stake for Public Chat Room Governance
PRBS protocol proposal - An incentivized Whisper-like protocol for Status
If you want more precise answers, don't hesitate to post there; Ricardo loves to discuss these topics.
I remember reading once that a machine had finally passed the Turing test, but when I looked in detail at what some of the judges on the panel had thought was a human talking, I realized how subjective the test was.
People's modern perceptions of bots are much more evolved than when the test was first theorized, so now is the time to do an actual Turing test.
The machine will probably fail, but we are surely close to the point where an AI will actually pass it.
My guess is we are 10 years away from that moment. It will be like the movie "Her".
What is wrong with her hands: https://en.wikipedia.org/wiki/Cottingley_Fairies#/media/File...
There's clearly motion blur, but none in the supposedly flying fairy: https://en.wikipedia.org/wiki/File:CottingleyFairies3.jpg
Looks exactly like Roger Rabbit-style splicing: https://en.wikipedia.org/wiki/File:CottingleyFairies4.jpg
That sounds hokey, but to explain briefly: GEB indicates that your self (your consciousness of being someone) is a swirling self-referential symbolic process (a "strange loop"); Dawkins indicates that your self is a kind of evolved meme whose function in nature is to further your family of genetic replicators; Wittgenstein indicates that your self is a habitual user of language where deep meaning is not as important as social function; and Markov chains indicate that your self's use of language can be modeled at least to a rough approximation by extremely simple statistics.
So I clearly remember wondering "Am I just a kind of slightly more advanced Markov chain?"
I think this is also the unsettling core question of Blade Runner: are we also artificial?
I wonder what theologians might say about this question.
If you listen to small children's babbling, they sound exactly like little Markov chains. As they start to get older, their 'next()' function is informed more and more by semantic connections, reasoning, chains of association etc. until they're talking as people, not just like people.
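For anyone who hasn't played with one: the "little Markov chain" behavior is easy to reproduce. Below is a toy bigram babbler (my own sketch, not any particular library's API) that is locally fluent but has no semantics or plan beyond the previous word:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed right after it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def babble(table, start, length=8, seed=0):
    """Emit text by sampling a locally plausible next word each step.
    There is no meaning, plan, or memory beyond the previous word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and table.get(out[-1]):
        out.append(rng.choice(table[out[-1]]))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(babble(train_bigrams(corpus), "the"))
```

Every pairwise transition it emits was seen in real speech, which is exactly why it sounds like talking without being talking.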
We tend to think of language as separate from the rest of life, maybe because it's so transportable, but in a way it's strange to imagine an intelligence that only deals in language, and not even the language of "its own species."
A baby babbles I guess for fun but also because it's part of the process of playing with the world to learn to cope with it and to become an effective person. So talking, walking, eating, etc, are all part of the same general activity of life, and they all have their own forms of "grammar."
The semantic connections and associations go all across embodied life; you can't really use human language without being a person who also sees, moves, eats, loves, etc.
Wittgenstein's Philosophical Investigations starts from the first paragraph by quoting Augustine:
> ‘When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out. Their intention was shewn by their bodily movements, as it were the natural language of all peoples: the expression of the face, the play of the eyes, the movement of other parts of the body, and the tone of voice which expresses our state of mind in seeking, having, rejecting, or avoiding something. Thus, as I heard words repeatedly used in their proper places in various sentences, I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires.’ (Augustine, Confessions, 1. 8.)
Wittgenstein thinks this is a good example of a misunderstanding of how language and language acquisition work. Then he formulates an understanding of language that focuses less on meaning and signification and more on social activity and speech acts.
I have a feeling that I myself am babbling right now, I don't know exactly what my point is and I'm hungry for breakfast...
Yeah, exactly. Almost everything a small child does is a directed attempt to generate training data, whether they're talking to you or talking to themselves or grabbing random things or trying to crawl into traffic.
I am not finding the link right now, but there were some researchers who attached a mic to very young babies that could pick up very faint sounds and an earphone to their caregivers and the researchers listened carefully to the baby and signaled the caregiver to touch the baby every time the baby made a (basically inaudible) intentional speech sound. After a short time the baby started producing a lot more of those sounds.
Similarly, since babies can’t really talk, caregivers can advance their communication by a few months by teaching them a simple sign language.
(Disclaimer: I didn’t do either of these things with my 2 year old. Just read about it.)
There's something I remember reading about, years ago, by I think a linguist: there's a point during a child's development where their language skills appear to suddenly get worse, which they thought was because the kid stops just rote repeating and instead tries to conjugate words themselves (and gets it wrong because it's so new).
> I wonder what theologians might say about this question.
I'm not a theologian, but I was very Catholic for the first 22 years of my life, and for me Blade Runner was more about the relationship with God than about the question "are we artificial".
Basically it was a Promethean/messianic story: Man searches for God to get answers and to fight for salvation.
And the answer he got was "whatever, I don't care, your life has no meaning and you cannot be saved". And then Man forgives God and dies, reversing the Jesus story.
It resonated strongly with me; the world is much more consistent with an incompetent God who doesn't care than with a loving, caring, and omnipotent God.
As for "are we artificial": what does it even mean? My former religion accepted evolution, so the question was "did God cause evolution, or is there no God and it was just an accident".
Actual piloting (critical thinking, reasoning, creativity) requires more mental effort and is much slower. Perhaps our brains are optimized to have the pilot train the autopilot (so to speak) when necessary, but otherwise leave things to the autopilot? I suppose that's why training, muscle memory, and practice are so important.
I don't think any of this is controversial but it does seem a lot more human activity runs on autopilot than we thought.
For some reason, we as humans seem to like thinking of our conscious / deliberate pilot as ourselves, and the subconscious / autopilot as some form of "Other" somehow cohabiting our bodies.
(I'd expect this viewpoint to be especially true for the academic / programmer crowd here on Hacker News, who stereotypically tend to be more skilled in logical / deliberate forms of thought, and comparatively lacking in the intuitive / automatic, such as social skills).
However, the subconscious is as much YOU as the part of which you are more aware, and in fact it probably has a GREATER effect on your actions. In conclusion, the autopilot is as deserving of the term "actual pilot" as the deliberate pilot is.
I was not referring to the subconscious as autopilot. To stretch this already tenuous analogy further the subconscious would be the flight control software.
I'm saying during our waking supposedly conscious experience our conscious selves (pilot) is actually rarely in control. Most of our choices are automatic (autopilot) and the conscious mind retroactively invents rational explanations for them when we bother to notice them at all.
Free will (if it exists) is probably almost entirely contained within the ability of the pilot to repeatedly tweak the autopilot settings, which is entirely indirect and a far smaller degree of control than we like to suppose.
If you consider evolution this isn't so surprising. Intelligence is just one strategy for adaptation and evolution repurposes and builds on top of existing structures. Why wouldn't the rational conscious mind evolve as a tweak on top of an unconscious intelligence, which itself is a tweak on top of subconscious/instinctual behavior? If animal studies are proving anything it's that intelligence is a spectrum and many supposedly human behaviors (concept of self, tool use, et al) are present in other species.
From what I understand, Heidegger's phenomenology is related to this piloting and autopiloting, exemplified by the way a hammer only appears as explicit conscious representation to a woodworker when something is wrong with it. The hammer's normal relation to the woodworker is just its ordinary function; the tool's own presence recedes.
But then I also wonder if there's something suspicious about the pilot/autopilot dualism. It seems to mirror dualisms like culture/nature and animal/human. Maybe what we think of as piloting is not as critical, rational, and creative as we are inclined to believe?
We see the same thing with auto racing. Really experienced drivers "feel" as if the car is an extension of themselves. Its mechanical nature disappears beneath conscious thought.
My hypothesis (admittedly based on little evidence) is this is an optimization function due to conscious thought being a relatively slow process. Once the autopilot has integrated the necessary functions the conscious mind can get out of the way and focus on more "important" things. This appears to apply to memory as well: unless the situation is in some way extraordinary the brain doesn't bother keeping full details in long-term storage. When interrogated later our minds just make up the likely details and call it good enough.
> But then I also wonder if there's something suspicious about the pilot/autopilot dualism. It seems to mirror dualisms like culture/nature and animal/human. Maybe what we think of as piloting is not as critical, rational, and creative as we are inclined to believe?
If it helps I was thinking of the mind as four layers: unintelligent instinctual/automatic systems, subconscious processing, autopilot, and pilot (conscious rational mind).
That said I think you are correct: being critical, rational, or creative is probably rarer than we like to believe. Maybe it is partially a cultural belief, as if admitting we are just cruising through life most of the time makes us seem stupid or un-human?
It sounds like you have said something objective. But really perception of time is a completely subjective phenomenon as well, constructed from unconscious processes which create conscious experience.
People in great fear report time slows down, but also top tier athletes.
Thinking hard about something is one particular kind of unified subjective experience.
I'm talking about measuring reaction time or brain imaging studies.
For top athletes the autopilot kicks in and reacts to the situation, then issues commands to our subconscious body-control processes, which then issue nerve impulses to begin movement, all before the pilot (prefrontal cortex?) has even perceived the situation, let alone made any decisions. The autopilot knows how to do this via repeated training guided by the pilot function.
You can also observe this in brain imaging studies which can show the body reacting before any thought took place. If pressed people will invent a rational justification for their behavior but the brain images prove this is entirely post-hoc most of the time.
My theory is this is due to conscious thought being so much slower, but I don't have any proof.
In a process commonly known as dreaming? I don't think it is a coincidence that new tasks we are currently learning to perform (ones that are still "piloted") often appear in the occasional snapshots of that process that somehow cross the boundary into our daytime consciousness.
I doubt the school teachers who failed his mathematics exam would have major issues finding at least some of the problems in the generated texts he gave as examples.
So asking the question whether humans are "artificial" stems from a place of low empathy, I think.
You need to distinguish consciousness from ego/free will. Consciousness is the fact that there's something rather than nothing, subjectively. That you seem to experience: sights, sounds, sensations, emotions. Under that definition consciousness is something that cannot be fake (no matter what's the nature of the universe; no matter whether you're asleep or awake), simply because you experience things.
Ego/free will is a separate concept and is indeed an illusion (or an evolutionary artifact if you like The Selfish Gene). There's a lot of evidence for that, the simplest being that no mainstream physical theory allows us to have made choices any differently than we have (barring true randomness like quantum mechanics predicts; but it's also easy to see that that's not freedom, just plain randomness).
There are books on the topic; I don't have the slightest hope of getting the point across, but I think if you ponder it long enough you can come to the conclusion even on your own. You can call this familiar pattern "myself", but it's not like there's any ego that you can find in there.
And what I mean here is not "let go of your self, be free and enlightened". It's just that it's all more reasonable from a purely rational perspective. There are a lot of patterns that you can observe in and around you. You can call them all John Doe, but when you examine these patterns, you can exclude some as something that's "not you", just a thing that happens here. If you keep doing that, I don't see how you can extract what you call ego. You could group some selfish behavior patterns and associate them with ego, as when we say somebody has a big ego, but I don't think that's what you mean here.
Fine, but what happens when I die? My body is still there but it’s not really me anymore. I’m gone. The body left behind is just a husk. So it seems that I am not just a hunk of matter, but at least a hunk of matter imbued with a dynamic pattern of activity: breathing, perceiving, reacting, speaking, and so on.
Fine, but what about sleepwalking? In some sense it’s me who’s doing the things the sleepwalker does, but in some crucial sense it’s not really me. That’s a subtle and strange distinction, but we make this distinction in everyday life. I don’t blame someone for snoring, and when I feel annoyed I recognize I am being irrational.
And so on until you start to refine a picture of the person’s self as something like that body’s everyday nexus of thoughts, emotions, and decisions, being the result of socialization and growing to adulthood, especially within a narratively coherent life.
There’s no need for a homunculus ego in some infinite regress of ultimate causality—that is indeed a nonreal, fantastical kind of self, the kind of self that early Buddhists criticized the brahmins of their time for promoting as the true self.
Real selves are just developed, cultivated, socialized entities that arise as psychological realities. There might be more complex, nuanced structures than just “one body, one self.” But the insistence that selves are just nonexistent delusions seems to me like an unnecessarily provocative way of formulating something.
Or if it's a "nexus of thoughts, emotions, and decisions", then maybe you're thinking more about patterns of behavior. If you would call it just the currently observed patterns of behavior, and update it as behaviors change, then I think it's just a matter of naming them or not; they are clearly there.
But my point is that there is no nexus. Thoughts, emotions, and decisions are there, but there is no single central point to them, apart from maybe the current point in time, which is basically a story that allows you to reason about the world.
But even in common sense, the distinction you're talking about seems very vague. You "say something before you think", you do something "on autopilot", or you're coding while so deep in the flow that you're not aware of yourself, etc. You, or not really you?
You can blame someone for an outburst of anger (he's not sleeping), only to then realize he had a brain tumor pressing against the amygdala. It can be the same story with yourself.
So where is the self? Naming people is useful; it's not about that. It's just that we tend to look for, and talk about, some inner pattern inside that pattern without really ever finding it.
Even treating the whole body and behavior as a pattern seems somewhat context-dependent. Maybe I was part of the Milgram experiment or fought in some war: "that wasn't really me".
I'm sorry if it sounds provocative. I know that "losing self" has some associations that don't necessarily promote rationality.
I'm just interested in how people organize it in their heads.
That's how some Buddhist texts approach the question, typically with a horse cart or a wheel as the object they demonstrate lacks a fixed essence.
To which I say, sure, fine, there is no essence. There's still a table, a cart, a self! We don't need such essences. We don't need to be able to pinpoint the exact location or center of every entity we take to exist.
It is indeed interesting to look at edge cases and borders, like what happens to the self during states of deep meditative absorption, for example. Well, let's say it temporarily dissolves, like when you heat up a piece of wax. Maybe that's accurate, maybe not.
Buddhists do talk about the self like this, that it comes and goes, that settling into samadhi makes it calm down and fade into a more diffuse state, and so on. They also say that the self is ultimately an "illusion", but in the same sense that everything is an illusion: it is temporary, compounded, dependent, etc, while we sometimes are deluded to think otherwise, e.g. that our soul is eternal which is of course a common belief (that I do not hold).
Buddhists also always add that the teachings about "not self" are not to be taken as metaphysical claims, but as useful instructions for teaching a practice, the practice of meditation leading to liberation. Or in a lighter sense, you can practice having a less limiting self-definition, or to accept that your self is dynamic and expansive.
Still, people exist, and "self" is basically just a word I use to denote myself as a person. In my solitary flow states I am in a different state than in ordinary social situations; maybe I am a bit like a chameleon, too.
People are extremely complex and marvelous, so they exist in many different ways, within many different kinds of relations and environments, and they are constantly changing and adapting, but they are also constantly maintaining and preserving.
That's all a bit of a ramble, I didn't have time to organize these thoughts properly!
This doesn't necessarily follow. Emergent properties are still real, even if you can't separate that property from the system that generates it.
The fact that we are talking about it is not enough for it to exist.
Let's say there is a ray of light from the sky, and people start saying that there's a white tower standing on the ground, as high as the sky. Everybody knows what we are talking about; it's just that when we go there and examine it, it turns out to be just a ray of sun from behind a cloud.
Not such a great example because a white tower would still be much more clearly defined than the ego is, but that's what came to mind.
The way I see it, your perception of there being "a little you" is basically a self-referential part of your brain taking a bunch of status readings from all over your brain and using them to generate the sensation of the little you doing whatever it is that you're doing.
The homunculus is there in your head, but he's just a picture on the screen in your Cartesian theatre.
To be clear: the mystery is there (no one has yet shown how minds do work); it is the assumption that it must be forever so that is a matter of faith (as is the opposite view; the issue is why a person would lean one way or the other).
I think humans employ markov chains (MCs) all the time. (Back in the days of symbolic AI, these became popular as 'frames', AKA case-based reasoning.)
But it's clear that human cognition far exceeds the capacity of simple probabilistic devices like MCs, much as context-free grammars exceed finite state machines. Eventually it became clear that too much of intelligence (memory, logic, and learning) cannot be modeled viably using simple probabilistic mechanisms like MCs or frames.
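The grammar analogy can be made concrete with the classic balanced-parentheses example (my illustration, not from the comment): a check with only a fixed local window, which is all a finite-order Markov model sees, misses a nesting error that a trivial unbounded counter catches immediately:

```python
def balanced(s):
    """Recognize balanced parentheses. This needs an unbounded counter,
    i.e. more memory than any fixed-order Markov chain / FSM has."""
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

def looks_locally_fine(s):
    """A Markov-style check with a one-character window: it only verifies
    that every adjacent pair also occurs in some genuinely balanced string."""
    plausible_pairs = {"((", "()", ")(", "))"}
    return all(a + b in plausible_pairs for a, b in zip(s, s[1:]))

# The local model cannot tell these apart; the counting check can.
assert balanced("(())") and looks_locally_fine("(())")
assert not balanced("())(") and looks_locally_fine("())(")
```

Longer context windows push the failure out but never eliminate it, which is the finite-state limitation the Chomsky hierarchy formalizes.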
I'm hopeful that work like GPT-2 will accomplish the same revelation for the limits inherent in probability-based models via deep nets. As long as AI models fail to model semantics explicitly, they will forever create only narrow savants or general morons.
But so is a rock, so I would argue it is not an overly illuminating fact. Furthermore I suppose you have this very convincing experience of free will, an experience that seems pretty robust even when facing such facts as the brain is just made of chemicals, so I wouldn't worry about it.
"A human being is not mindless or mentally deficient without language, but he is severely restricted in the range of his ideas."
To defend the original metaphor: without language, a person might be akin to a Markov chain generated from the relatively small 'corpus' of individual experience.
While the 'well socialized' individual can draw on the vast range of human experiences shared in language. They have access to a much larger corpus.
Imho many theists are very aware of these alternative interpretations, but because all of them are rather unsettling and potentially existential-dread-inducing, they choose the "Welp, I'd rather go with the God thing, that's less hassle" route.
Nothing wrong with that, we all have a mind of our own that allows us to frame our world view in the way most convenient/understandable to us.
But sadly it seems these differences in world view too often prevent us from agreeing on a consensus about how to go about things, or even where to go in the first place.
Oh yes, absolutely. https://www.sciencedirect.com/science/article/pii/S014976341...
It advances the hypothesis.
> Dawkins indicates that your self is a kind of evolved meme whose function in nature is to further your family of genetic replicators;
I'd say your self is rather a battlefield for such memes.
My actions are undoubtedly being driven primarily by a series of electrical, chemical, and other reactions going off in my mind and throughout my body. Manipulate my brain in various ways and you can manipulate my behavior in various ways. In this regard I'm effectively a glorified automaton. Yet the catch here is that there is something inside here, 'me', observing all of this happen and having the perception of controlling it. When I write a program to generate a random number I find it inconceivable that suddenly some entity poofs into existence observing itself imagining it's deciding on a random number only to inevitably decide on the number that my pseudo-random algorithm had already predetermined given its initial state.
And similarly, even if we made vastly more complex systems that could create a passable replication of human behavior, I do not think there would, at any point, suddenly appear some entity within that machine imagining itself driving the deterministic decisions occurring within. A religious individual would call this 'me' your soul. I'm more compelled by the simulation hypothesis, for reasons beyond the scope of this post. But in either case, this is something that will undoubtedly never be proven in any way during our lifetimes, if ever. So it's a place where an individual must come to their own conclusion based on very limited information.
That a bad decision could have unimaginable consequences here is undoubtedly what drove things such as Pascal's Wager. Though of course he failed to consider that life itself could be a test. Willingness to adopt views one does not genuinely believe, in hope of future reward and convenient social graces, is probably not something that would score so well. Quite the burdensome consideration, life is.
 - https://en.wikipedia.org/wiki/Pascal's_Wager
Why would you think something like that??!! I'm really curious about your reasoning...
To me and other people like me, both reason and intuition are "99.9%" sure that this is how it all works. Life / the universe / math is full of emergent phenomena and properties, things that you'd say "suddenly appear". Even in pure math, even basic and boring areas like number theory have deep structure in them, unseen if you just think from "basic principles" step by step.
The illusion is the opposite kind of thing: things where you mistakenly believe you understand the full causal sequence, and that there are no extra things that can pop out of the darkness and surprise your reason. This is plain arrogance. "Step by step rational reasoning" doesn't work except in a very, very small number of cases, because natural processes can only rarely be approximated in a human-brain-friendly number of steps. Most of the time you can't reason A -> B -> C, because there are a gazillion steps from A to C, and they're all non-linear, i.e. you can't "compress" them into a fewer number of steps. (Btw, this is the insight behind deep neural networks and "deep learning": very, very basic math operations + some adequate non-linear transforms between them -> "emergent" intelligence.)
"Rational thought" as understood naively by lots of non-technical people is a weird, distorted aberration.
If I were to subscribe to a mystical viewpoint, it would likely be some form of pantheism or "the entire universe is conscious / pure consciousness" whatever thing; it's the only thing that would even remotely make sense... The whole idea of a "soul" and all that ghost-in-the-machine and "Chinese room argument" nonsense derives from that. I almost feel that you people thinking this way are an entirely different species from us. How can you "compute" like that? It's almost as if human brains "diverged" at some point and produced two different types of minds with completely different views of the world...
No need for a soul.
Initial set of the network (DNA, Hox genes, and a million others) and the environment "is all" that is needed for that fire to light up in there.
Not always, unfortunately. Severe developmental disorders sometimes prevent the forming of that consciousness.
So I code a fleshy-looking automaton that is indistinguishable from a human to you. Every action and communication is 100% convincing. Surely you feel some empathy towards it (since you are not able to tell), and you may even want it not to suffer. Does that change after I tell you what it "really is"? How about other people?
I still can't figure out how very smart people who know a lot of science look at this world and say "here: science", "there: also science", and then "but here's me". I know how my brain works and that you can manipulate my behavior just by touching it, but apart from that there's also "the entity". It's obviously not there, but I'm also sure it is.
Emergent behavior of complex systems. Is it so hard to believe that we are one?
If you want some romance in all this then how about the universe looking at itself. Is it burdensome?
> So I code a fleshy-looking automaton that is indistinguishable from a human to you. Every action and communication is 100% convincing. Surely you feel some empathy towards it (since you are not able to tell), and you may even want it not to suffer. Does that change after I tell you what it "really is"? How about other people?
Your comment has nothing at all to do with the comment you quoted. Whether something is merely convincing to you or me is irrelevant, since it wouldn't actually be cognizant either way. The parent comment was discussing meta-cognition and the fact that even if you made a replica, it wouldn't have meta-cognition like we do.
Before brushing Pascal's Wager aside, one should remember that Pascal was one of the main founding fathers of probability itself. We're not talking about some lightweight mucking around in the mud of primitive mankind's ignorance. The man knew what he was talking about.
While the book is science fiction, it does make an interesting case, and some of it is grounded in actual research.
There was also another book that made it clear to me that "the stars do not belong to mankind". Something about the spiritual awakening of humankind, leading to another evolutionary tree for our children, while the adults are left to die, knowing they'll never be able to explore the universe. Forgot the name, but still think about this, too.
Lastly, the Three Body Problem with its "Dark Forest" theory. I'm not completely convinced by the idea, but it's thought-provoking.
Various insights into evolution, biology, materialism, or what have you can't really negate the reality of what's going on today. Or if they seem to do so, then the insight is probably incomplete. I'm reminded of the way people use rational scientific rhetoric to exclaim that religion is irrational and dumb; well, how about using that rational science to investigate how and why religion is a part of the human psyche and society? Etc.
I’ve actually been looking for the source for some time so I could read it.
It's not irrational to posit that you are more than merely a 'bag of particles'.
Just because scientific materialism, taken to its extreme, might want to describe us as such does not mean it is true.
Scientific materialism is only one metaphysical perspective, based on assumptions - such as that the universe is ordered and can be described with a set of rules. There is no conclusive evidence of this; it's just an assumption. Given that some of the material universe seems to 'mostly' adhere to a set of equations, and because it's objective ... we like scientific materialism a lot, but we also have to remember it's not the only way to look at things.
Consciousness itself, or rather, life, the perspective of 'the observer', could be the reality that matters. The expression of life itself is the interesting thing that only seems 'miraculous' from the perspective of materialism because it's literally denied by it -> that materialism can't seem to describe life is not so much a realization of science; rather, it's an assumption that we started with: the universe is just a pile of particles, ergo, we are a pile of particles. The latter does not follow from the former as a logical conclusion; rather, the assumption that 'everything is just particles' basically implies the latter.
It may very well be more rational to accept that life / consciousness is 'real' - and it seems to transcend our materialist conclusions because materialism as a metaphysical perspective just doesn't fully work, i.e. there's a hole in it.
Consider that we ultimately developed logic / reasoning / scientific materialism mostly to enable our lives and expression i.e. it's just a Tool, not a Truth.
1) I gotta say, though, when you talk of "the observer" it throws me off, as it sounds like the typical quantum-woo twisting of the observer effect. Perhaps you meant something else? What do you mean by "the observer"?
2) Regarding "the universe is ordered and can be described with a set of rules. There is no full evidence of this, it's just an assumption": this has so far proven to be a good assumption (as seen by the massive amount of scientific knowledge and verified predictions accumulated), and if anything, all evidence seems to point to exactly this. Is there evidence that the universe is more than just 'a pile of particles'? (Although that is a somewhat simplistic way to put it.)
3) Trying to distill the comment, it seems the main argument is along the lines of "science can't explain life itself and/or consciousness, therefore there must be more". Is that a fair assessment? And in that case, what would convince you of the opposite? E.g. what if "life" were well understood and could be reproduced in a lab, what if we could reproduce most human-like intelligence with AI, etc.? In other words, what would (realistically) change your mind?
Humans in every culture since the dawn of time have referred to 'spirit' or that which seems to animate matter.
Yes - 'laws of the universe' we take as a given because they seem to work for us, on paper, fairly well.
But you know what we also take as 'a given'? That you are alive.
'Your life' is kind of more important to you than science. Life itself, and the expression of it, seems to be our #1 concern.
That one branch of thought, Scientific Materialism, doesn't by definition allow for life to exist doesn't negate the nature of life.
1) Not the 'quantum observer' - your spirit, soul, or some other description. The word doesn't matter.
2) The evidence the universe is more than a pile of particles is life itself. And consciousness.
3) "Science can't explain life" - it's worse: Scientific Materialism rules it out completely by definition. If we decided that 'the universe is mathematical rules' - then - 'there is no life'. Creating life in a test-tube probably won't give us the answer.
FYI, science also has a problem describing why simple objects can ultimately make up very complicated ones with different properties; it's called 'emergence', and it's a field of study.
Finally, I'll refer you to the concept of 'biocentrism' - which is a more material outlook on the subject without getting so overtly metaphysical, and it's done by real scientists.
It's why I promised myself to never try and understand how cars work. They just work.
If only I could apply that to just about anything...
The latter suggests that the brain is a prediction engine; and things we do are just the brain minimising prediction-error for systems at various levels.
(SSC is concerned about the dangers of AGI, and sees GPT-2 as imitating what people do).
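The "prediction engine" framing can be made concrete with a toy sketch (my own illustration, not code from either essay): an agent whose only behaviour is a gradient step that shrinks its own prediction error.

```python
# Toy prediction-error minimiser: a single scalar estimate tracks a
# stream of observations by descending the squared prediction error.
def run_prediction_engine(observations, lr=0.1):
    estimate = 0.0
    squared_errors = []
    for obs in observations:
        error = obs - estimate            # prediction error
        squared_errors.append(error ** 2)
        estimate += lr * error            # update that reduces future error
    return estimate, squared_errors

# On a constant stream, the error shrinks toward zero as the
# "engine" settles on an accurate model of its input.
final, errs = run_prediction_engine([5.0] * 50)
```

Real predictive-processing models are hierarchical and probabilistic; this only shows the error-minimisation loop itself.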
The Creator of dimension, time, space, matter, energy and the subtle mathematical laws that govern their interactions will ALWAYS be beyond our comprehension, but we can understand a bit of It in very small, very abstract slices.
Mostly, however, we are here to enjoy this wonderful creation and for that reason we are created as moral creatures with an animalistic body. The most subtle law of the universe, that only we live under, is the Law of Karma. This law dictates that we must use our abilities to learn and choose, via our free will, to self-evolve out of our mammalian capacities of pack warfare and alpha-dominance games (most people naturally live above their reptile potential, that is why there are so few serial killers). This is why all the Great Teachings emphasize compassion towards all our neighbors as the destination for a spirituality that is born of an inward seeking for self-improvement via a connection with our Magnificent Creator.
There are many ways for a human being to enjoy this creation, which starts first with our body. There is the physical pleasure of eating and sex; the pleasures of having friends and family, perhaps having children of our own; the pleasures of athletic feats (Alex Honnold WOW!), mental feats (chess, mathematics), creative feats (art, writing, performance), as well as scientific feats that explore the nature of the universe in all its grandeur.
Our intrinsic sense of morality is built into us as a feedback mechanism to nudge us away from mammalian competitive strife and towards truly human cooperation, where those that have the means choose to help those that lack, where all oppression -- based upon ethnicity, form of religion (including none at all), sexual preference or identity -- is stamped out in favor of a free society of equals that each enjoy the respect and comfort that this planet provides when generosity and compassion are the rule.
Such compassion also requires us to fight oppression in all its forms, both personally and as societies and cultures. This is group compassion that stems from individual morality and the understanding that we are all in this together.
The Law of Karma's primary function is to feedback into ourselves the happiness or unhappiness resulting from our treatment of others. This is why so very few ultra-wealthy people are happy: they have built their empires upon the misery of the workers they have used and discarded for the lowest price possible. Note that there was a notable exception I saw crop up a few months ago where a very successful health care company founder gave very large bonuses to his employees in preparation for going full non-profit. He did this out of gratitude and generosity, knowing that his hundreds of millions of dollars was more than he needed and was built upon their backs. That is the essence of the spiritual path. It matters not which form of religion (if any at all) he adheres to. We are measured by our hearts and how we tune our minds to live the truth of selfless positivity over selfish negativity.
The misery upon the Earth in 2019 is the direct result of our free will's ability to choose the most horrific path due to Lennon's "Instant Karma" not existing. Karma is much more subtle than that. You can see its results on Trump's face and those of every person aligned with him. Yes, they can have the pleasure of domination of others or wealth and power, but pleasure is NOT happiness. Happiness comes from within.
This is a part of the Sufi Message of Love. All human beings must unite to selflessly create "On Earth as it is in Heaven" because each of our free wills are equal and the people who lie, oppress and keep secrets have an advantage over the truthful, meek, and kind people in that they not only have chosen to live unfettered from their consciences (the part of us that is the source of our morality) but take pleasure in the misery they inflict upon others. It is difficult for those not yet on the spiritual path to understand how evil a person can become for the simple fact that until we begin to fight against our own vices we do not know how deep human pathology can grow.
We are perfectible just as our software and machines can be made perfect, if we put in the effort and pay attention to the details; don't worry, the universe will test us ;-). Yes, we are all born imperfect, none greater than any other, but we also ALL have the ability to learn and self-evolve from vice into virtue. To reach that perfection, however, we must go within ourselves and beg our Creator for help. That humility and seeking then opens up our potential for only then are we truly living up to our potential to know the bits we can about our Creator and the magnificent tools we, ourselves, are to explore this universe in peace and harmony with each other and the Earth itself.
I suggest anyone interested in this Message to look into Coleman Barks' translated poems of Rumi. His UCTV presentation "Rumi and the Play of Poetry" is on Youtube and is excellent.
All our problems are caused by a lack of love, and any solution that does not emphasize love as its foundation is only a band-aid.
"The Way goes in." --Rumi
For those who ask for proof of what I speak, you must experience this truth for yourself by activating your own free will. If you believe that what I say does not exist, you will be correct from your perspective. That doesn't mean you aren't capable of exploring this sphere of creation or that you are not beholden to the Law of Karma; it just means you haven't opened your spiritual eyes, ears and mind to its reality and remain in the realm of scientists that shunned Boltzmann and Einstein for their expansion of our understanding of this universe. It is your free will's decision to accept this Message and try, or to deny it and remain as you are. There is no compulsion in religion and I am commanded to love everyone anyway. That is why I try to speak of the sublime joy I experience in my life as a result of trying.
Peace be with you all. We love you. The evil, selfish people are destroying our beloved Earth and inflicting misery on countless human beings.
And even scientists often seem to write a bunch of meaningless filler that feels scientific in their papers, presumably because that kind of text needs to go in that place in their paper.
Another thing, from the article:
"Simple correlations also seem sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party."
You know how hard I had to work at that? I used to be incapable of small talk or maintaining a conversation. I could talk only in those "deep structures" but struggled to put that in sentences that formed a natural part of a conversation. I worked hard at those "simple correlations"; they were not so simple for me.
I'm suddenly wondering if my problem might be related to my youngest son's speech problem. When he was 3, he couldn't speak sentences; his sentences were just 3 meaningful words in a row. He never babbled, unlike his best friend, who always babbled in long, incoherent Trumpian sentences. He got special speech therapy for half a year which helped immensely, and now, at 4, he makes excellent sentences, if a bit staccato and clumsy, and certainly without any kind of natural flow.
He never babbles, though. I think his Markov chain is broken and he replaced it with a rule-based system that he reworked to produce language.
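For what it's worth, the "Markov chain" half of that comparison is easy to make literal: a word-level bigram chain babbles text that is locally plausible but globally meaningless, which is roughly what low-order generated text sounds like. A minimal sketch, with a made-up training sentence:

```python
import random
from collections import defaultdict

def train_markov(text):
    """Word-level bigram model: map each word to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=8, seed=0):
    """Sample successors word by word: locally fluent, globally empty."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

chain = train_markov("the cat sat on the mat and the dog sat on the rug")
print(babble(chain, "the"))
```

A rule-based system, by contrast, would start from a grammar rather than from co-occurrence counts.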
You need a degree of alien anthropology to be able to respond to what's really important in a conversation - an extremely socially capable deaf friend of mine pointed out, for instance, that body language is more important than verbal content in most casual interactions. These kinds of insights are kinda hard to gain from a neurotypical, non-deaf perspective like mine, because you're a bit like a fish that doesn't realize it's swimming in water.
My older son, who is verbally very strong, is often nearly expressionless.
There is a huge literature on the relation between logical reasoning and verbalizing, which the author sadly ignores.
This is certainly not how I approached math, and it's the first I've heard anyone say it, even.
Instead, I'm good at math because I enjoyed it. It's simple and logical and my mind worked really well in that way. There was never anything standing in my way of learning math, so I always just picked up any new math easily. Later, because I was already so good at math (and so many people were bad at it) I sought out more math courses as a way for more easy A grades.
Never was it a conscious effort to set up my career or social status.
Damn, I'd even expect most "cool kids" to have more respect for someone better at math (all else being equal), even if their social context won't allow them to show it in any form.
Math, instead, got mostly derision from other kids and little to no respect from teachers or parents. No, "cool kids" never had even an ounce of respect for math nerds. If they secretly had any respect for them, they certainly never showed it.
And what's the point of trying to gain respect that nobody expresses? It's certainly not something that would be worth pursuing just to get that respect.
It sounds like you're tying yourself up in knots to explain something that everybody already understands. Math is intrinsically fun, but only if you can cut through the ruinously bad educational system and the difficulty of getting started.
Intrinsic motivation to learn about the world makes sense to some extent, but hunter-gatherers did not have that much to learn in order to survive.
Being good at math gives you no status in life. People are proudly anti-intellectual when it comes to math, so the most you'll get if you're quite good at math is "oh wow, that's cool. next topic". People who really like math simply like math, despite it not winning them any social favors for the most part.
Mathematics is quite interesting and beautiful in its own way, so for you to say it's mostly "out of wanting to escape their low status" is both rude and uninformed.
Also, it's true that quitting doesn't do you any favors, but people quit things all the time. Especially math. Or new years resolutions. It's definitely not a tenable argument as to why people would stick with math.
You could just as well argue we're all paper-clip maximizers, and simply interpret any evidence to the contrary as short-sighted. The only sense in which you can reduce everything humans do to sex is in the irrelevant and unfalsifiable sense.
That's correct, but why would that reduce trust?
Our brains evolved to make decisions. Yes, they were constrained by a need to survive and reproduce. But those two goals are not the same thing. And the existence of those constraints does not actually preclude any other mode of operation.
You're being absurdly reductive when you conflate any motivation with a need to reproduce.
This is why your argument is nonsense. You're basically trying to define everything as sex. You're playing a silly word game so you can feel smug about this directionless and immature insight.
It’s funny that he uses the phrase “seem smart” when we humans can’t give a hard definition of intelligence. In the quote he makes it seem like intelligence is coupled with IQ and mathematical ability, yet concedes that one could “sound smart” in language. He also says those same people could be creative, funny, and relatable, so why not just define different metrics of intelligence here and say that they actually are smart (albeit in different ways)? I can assure you no one would “sound smart” when discussing advanced mathematical theories if their grammar was bad and no one understood the branch of mathematics they were in (a counterexample where one could be smart by the IQ-and-mathematical-ability metrics but not be able to generate coherent speech).
Any suggestion that talent-for-math = general-intelligence is actually rather dumb. Ditto for assumptions about poor math skills, which can easily be a product of poor teaching rather than unusually low native ability.
If IQ tests measure anything, it's raw mental speed and memory - useful traits, but not nearly enough to draw a bounding box around general intelligence, which also includes abilities such as intuitive modelling, creative originality, and informal inference.
As the cliche goes, smart people can do stupid things in at least some situations.
Raw high IQ is just as likely to get you to wrong conclusions quickly as it is to give you useful predictions. If your modelling skills don't give you a good working model of the situation you're in, you're going to have a bad time.
Outside of core STEM, modelling depends on social and cultural experience and contextual training. If you don't have those, you're going to be handicapped even if you have a stratospheric IQ.
Except that this is literally what happens; this correlation between seemingly unrelated cognitive tests is referred to as "the g-factor". https://en.wikipedia.org/wiki/G_factor_(psychometrics)
To be fair, this doesn't actually contradict most of the rest of what you say. But this correlation does suggest that there are some shared factors (whether innate, or developed, or both) that affect many or all kinds of "practical intelligence"; one might reasonably call these factors "general intelligence".
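The one-factor idea behind g is easy to simulate (a toy sketch of my own, not a claim about real test data): give every simulated person a latent factor, mix it into two otherwise independent test scores, and the scores come out positively correlated.

```python
import random
import statistics

def simulate_scores(n=2000, g_weight=0.7, seed=1):
    """One-factor model: each score = g_weight * latent_g + test-specific noise."""
    rng = random.Random(seed)
    math_scores, verbal_scores = [], []
    for _ in range(n):
        g = rng.gauss(0, 1)                         # shared latent factor
        math_scores.append(g_weight * g + rng.gauss(0, 1))
        verbal_scores.append(g_weight * g + rng.gauss(0, 1))
    return math_scores, verbal_scores

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

math_s, verbal_s = simulate_scores()
r = pearson(math_s, verbal_s)  # positive, purely because of the shared factor
```

With g_weight = 0.7 and unit-variance noise, the expected correlation is 0.49 / 1.49 ≈ 0.33; a shared factor alone is enough to produce the positive manifold the g literature describes.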
I wouldn't call that clear at all! Of course no matter how intelligent a person is, there will be environments in which they do poorly. Feynman would do poorly in the environment called, "Everyone find Feynman and beat him up". But that environment is very contrived, or, more formally, has a high Kolmogorov complexity.
Legg and Hutter argue quite strongly for single-dimensional practical intelligence in this paper (I don't agree with their reasoning, but the point is that it's definitely not blatantly "clear" that practical intelligence is multidimensional): https://arxiv.org/pdf/0712.3329.pdf
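For reference, the single number they propose (quoting from memory of the paper, so treat the notation as approximate) is expected performance summed over all computable environments, weighted by each environment's simplicity:

```latex
% Legg--Hutter universal intelligence of an agent \pi:
% V_\mu^\pi is the expected reward \pi achieves in environment \mu,
% and 2^{-K(\mu)} weights simple environments (K = Kolmogorov complexity) higher.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Because K is incomputable, Υ is a definition rather than a practical test.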
Ever wonder why incompetent people get promoted? Or why consultants can sell projects using only buzzwords? Ever had that strange conversation where two colleagues are seemingly discussing something absurd, obvious, or impossible, but they think they're being clever?
Except if you actually paid attention to what he was saying, you'd realize that he was simply quoting the definitions of things he had learned over a sufficient time. He had a great vocabulary as well, and would put it in a good, well-versed academic paragraph.
The fact is that, unless you seriously paid attention to what he was saying - you'd think he's making a really deep point about something you don't understand. In actuality - he was simply going from one definition to another.
The amount of effort required to refute him is way above what it takes him to blab about anything. He'd go around the issue without ever answering the question.
First, there's a ton of evidence that people also get promoted/appreciated for the right reasons, not just because they're a fancy Markov chain of buzzwords (example: serial entrepreneurs, like Musk).
Second, there's an underlying reality that eventually comes crashing down on people who fail to meet the expectations they had built around themselves.
You're gonna have to post some of that evidence please.
Y are examples of what people want (wealth). X1 are examples of "valid" reasons to be recognized as useful and therefore attain wealth, X2 are examples of "less valid" or "completely invalid" reasons.
And I didn't bother to point out particular studies validating my claims because much of this has been known to humans for close to 100 years. In the same vein, linking to a proof of the undecidability of the halting problem would be excessive given the nature of the HN community.
If you want an example study look at the "Health and longevity" section on the above linked Wikipedia page.
Or, to directly support my thesis about the X1 and X2 separation, first look at the Wikipedia IQ page and its "Social correlations" section; the correlation is generally around 0.5:
Or look at your own link.
Second, height is correlated to about 0.29 with various measures of success:
Basically, people get promoted for being competent at their current job. But they are being promoted into a job they may not actually be competent at.
Sure, some people also get promoted or appreciated mistakenly or for the "wrong reasons", but that's often being done on purpose and not by accident.
Sure but it's when people get promoted for the wrong reasons everyone gets frustrated.
> Second, there's an underlying reality that eventually comes crashing down on people who fail to meet the expectations they had built around themselves.
Not in a noisy environment. Stories abound of complete incompetents who take some inadvisable risk, only to find it paid off handsomely.
This really resonates with my experience, especially with people I work with.
I hope in the future people are trained to tell these sorts of generated texts from real texts. I think including some test for this would greatly improve our hiring procedure
Doesn't that go against the mission of OpenAI? I thought they were about making technology publicly accessible to everyone so that it can't be abused by only a few people. This makes them seem more like a business with proprietary data.
In outline you create the simplest possible narratives with a strong emotional kick - preferably one that induces anxiety in the receiver, and/or blames an outgroup for all the bad things that are happening.
Then you can sell yourself as the solution to the anxiety and fear.
The narrative itself can be nonsense. It needs a certain superficial narrative coherence, but that's all.
[Later, on the weekend] Wait, we're going where?
For me a giveaway is both how quickly and how well they take in what I'm saying; that is, how much processing gets done? E.g. one of my really intelligent friends would have already connected what I'm saying with what they know about me, and would have already guessed at what I'll say next. This isn't just the domain of intelligent people, but for me, how quickly it happens is a telltale sign.
Intelligence can be blinding as much as it is enlightening, though, and I prize kindness and compassion far more than intelligence, which our culture puts on a pedestal.
The Turing test requires an ongoing conversation between an interrogator and a subject. I think even an interrogator "on autopilot" (whatever that means) would pretty quickly notice if a subject's responses contained "obvious absurdities".
I suspect that Twitter counts the same way. It's something we don't apply much attention to because it does not look like a human being talking to us.
In fact I suspect that a WhatsApp message that records one human speaking and then plays it back would produce a different attention spike than the text-based idea.
Edit: of course I wrote the above on autopilot.
Fortunately, the solution is simple: to have a Turing test of the interrogator.
E.g. imagine a game where the mafia (AIs) can eliminate actual humans from the game by convincing their fellow humans that the eliminated people are actually the mafia (i.e. AIs).
> "Early in the Reticulum — thousands of years ago — it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information," [he] said. "... [Crap] — a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. ... But it didn't really take off until the military got interested. ... Artificial Inanity systems of enormous sophistication and power were built ..."
Would it be possible that once we manage to eliminate obvious logical and contextual mistakes in the generated texts, they could be used to discover alternative (and consistent) views of the world (e.g. about art, philosophy...)?
The AI would be able to create a huge number of theories and it's possible that some of them would be both interesting and original.
It would be a kind of restrained infinite-monkeys way of exploring theories about the world (restrained because we would prune mistakes we do not want the AI to make, so it wouldn't just be random typing).
It would be even funnier if we could filter the subset of generated texts that is testable :)
They usually went into great detail on the matter, and it has the advantage of being actually based on someone's real experience of the world, rather than just randomly aligning with it.
I just worked through 'Gravity's Rainbow' and was very mindful and careful in my reading. It was a great experience, but at the same time it was fairly boring in comparison to something like social media.
>> Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.
> Yeah, day doesn’t turn to dusk in the morning.
I interpreted this to mean a literary jump in time, from morning to day to dusk, in one sentence.
So, the text itself doesn't bother me. What scares me is the ability of an AI to overwhelm us with such a volume of auto-generated content that the signal is drowned in noise.
Add some randomly generated sentences to a text that students need to learn and then ask them to identify these sentences in the text.
Hopefully we can find a way to counteract this in a systematic way. Perhaps the trick would be to punish 'low order correlation' text in the first place.
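That classroom exercise is trivial to set up programmatically (a sketch with made-up example sentences):

```python
import random

def make_spot_the_fake(real_sentences, fake_sentences, seed=0):
    """Mix generated sentences into a study text; return the shuffled
    text plus an answer key of the fakes' positions."""
    rng = random.Random(seed)
    mixed = list(real_sentences) + list(fake_sentences)
    rng.shuffle(mixed)
    fakes = set(fake_sentences)
    answer_key = [i for i, s in enumerate(mixed) if s in fakes]
    return mixed, answer_key

text, key = make_spot_the_fake(
    ["Water boils at 100 C at sea level.",
     "Heat flows from hot bodies to cold ones."],
    ["Water boils faster when the pot is observed."],
)
```

Students see `text`; the grader keeps `key`. Swapping the hand-written fake for model output is the only change needed to train people against generated text.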
Every kid knows about these. They're called trick questions, and they've been fooling students at all levels for centuries.
This is because casual conversation is predicated on EQ and not IQ. In order to be able to ascertain IQ you need to actually test for it, opportunities to demonstrate it aren't going to come up randomly.
This should stare smart people in the face, but it seems we have a blind spot for the general uselessness of our own intelligence in normal situations.
The author goes on to discuss interviews, and I'd argue that EQ is generally more important in thriving and producing on a team than IQ is as well, with a few important exceptions.
Secondly -- Is there research to validate it as a predictive measure of success?
I vaguely try to check for whether someone has heard about things that they say they're interested in. If someone says they love rockets and space stuff, I'll see if they've heard of the rocket equation. If they like computers I'll see if they claim to be able to code.
There's also the other side, people can volunteer that they think vaccination causes autism or they'll ask my star sign.
Oh I get what you mean about EQ now...
Doesn't the existence of GANs restrict the space where discriminators can win to NP problems?
"Humans who are thinking fast are not general intelligences."
> Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have any effect on one’s ability to make a good social impression or even to “seem smart” in conversation.
> If “human intelligence” is about reasoning ability, the capacity to detect whether arguments make sense, then you simply do not need human intelligence to create a linguistic style or aesthetic that can fool our pattern-recognition apparatus if we don’t concentrate on parsing content.
In the context of the article, these are troubling assertions for the author to be making. They seem to imply that people who struggle with mathematics are fundamentally less intelligent than those who don't, in a way that cannot be picked up by chatting to them.
If I understand correctly, the author furthermore seems to be saying that a GPT-2 style text generator will sooner be able to match the conversation of such a person than of someone more well-versed in formal mathematical reasoning.
This seems factually wrong to me; I think the author vastly underestimates the complexity of the subconscious processing that people do in order to come to the viewpoints they hold, and to transform ideas into coherent speech.
As a related point / analogy, the process by which humans do conscious mathematics (such as arithmetic) is inherently slow and inefficient, whilst at the same time it manages to perform incredibly advanced "calculations" in the process of being a highly-advanced motion-control system.
I posit that the human process for synthesizing ideas is happening primarily in this more complex underlying format, which we are still some way off from being able to simulate (though I do believe we will be able to, eventually).
The author's conclusion seems a bit like seeing that computers are better at arithmetic than humans are and thus concluding that they will soon surpass us in intelligence.
Furthermore, the author's reasoning seems demeaning to people who struggle with mathematics and explicit logical reasoning, and is a few steps from a claim that such a person is inherently less "conscious".
To claim that a strong grasp of formal reasoning is necessary for those in a position of policy and decision making is one thing. But to assert (without substantial evidence to back it up) that someone with low mathematical-logical reasoning ability has speech which is significantly easier to emulate because it fundamentally contains less content seems to be simply a form of intellectual/academic self aggrandizement.
I’ve got a degree in physics from a top 3 university and I have met individuals more intelligent than me who suffered through various math classes, which I believe was largely due to a lack of experience with the machinery of math or formal reasoning.
They are saying exactly that.
I wonder what the author's response would be when speaking with an individual vastly more intelligent than himself, who, interrupting the author mid-sentence says, "sorry, this is such a simple concept, I don't converse with imbeciles", and walks off.
It may be one form of intelligence, but certainly a brilliant writer, a gifted musician, or an exceptional artist can all be considered intelligent even if their ability to grok logical constructs is limited compared to those that spend their waking hours doing just that, and almost certainly have been honing this skill for their entire lives.
Ability at abstract reasoning is invisible to outsiders unless the bot can also transmit their information to others, as well as understand transmissions from others and react appropriately (constructively or entertainingly).
AFAIK, up to now, none of the measures of synthetic intelligence have tried to measure the flow of information from and into a bot -- its efficiency, coherence, or relevance. I think the rise of master aper bots like GPT-2 and Q&A bots like Watson that beautifully model syntax and rhythm yet no semantics may finally force this issue to the surface. To wit, information matters more than style.
Frankly, I welcome the arrival of bot overlords like these. Maybe they'll motivate us humans to pay more attention to the meat of what we hear, read, and say, and therein act less robotic ourselves.
I know of other traditions for labelling people stupid that center around their lacking skill with driving or carpentry, and this "maths ability" tradition seems to be largely the same thing.
Erickson described a "confusion technique" that is in evidence in lectures that Hubbard gave later in Philadelphia. You'd catch him saying things that somebody might say in a lecture but that people don't. For instance he would continuously say something wrong and 'correct' himself (e.g. "The Japanese alphabet has 48 letters, or was it 46 letters?"; quotes around 'correct' because it was all bullshit anyway).
Have people listen to lectures like that with a malfunctioning tape recorder for hours with high social pressure and structured communication, that will turn their brains to mush. No wonder Scientology practice is twice as harmful per hour as what other cults do.