
Humans who are not concentrating are not general intelligences - jseliger
https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/
======
joe_the_user
One of the things about fakes is that they evolve over time.

Believe it or not, an incident in 1917 involving Arthur Conan Doyle, inventor
of Sherlock Holmes, is instructive.

The "Cottingley Fairies" were created when two teenage girls took
photographs of themselves with paper cutouts of fairies. [1]

The important thing is that, to my eyes, and I think to a typical
person of this era, these photos of girls with cutouts of fairies look like
... exactly that. When I first saw these pictures, I couldn't believe anyone
could be fooled by them. But this period, circa 1917, was a time when
photographs had only recently appeared, and so had photographic fakes. The
skill to spot the difference had only recently begun to develop.

Which is to say, I'm pretty sure the author is correct that a good deal of the
OpenAI-generated text isn't intelligent text generation but stuff with enough
of the markers of "real" text that people might not notice it if they weren't
paying attention.

Moreover, I strongly suspect that if this sort of sham were to become more
common, the average person's ability to notice it would increase
significantly.

[1]
[https://en.wikipedia.org/wiki/Cottingley_Fairies](https://en.wikipedia.org/wiki/Cottingley_Fairies)

~~~
coldtea
> _The thing that is important is that, to my eyes and I think to a typical
> person of this era, these photos of girls with cutouts of fairies look like
> ... exactly that. When I first saw these pictures, I couldn't believe
> anyone could be fooled by them. But this period, circa 1917, was a period
> when photos only recently had appeared and so only recently had photo fakes.
> So the skill to spot the difference had only recently appeared._

Tons of people believe in crude "UFO" and "Bigfoot" and "chupacabra" and
"Loch Ness" photos, though, well into today.

~~~
drdeca
Nitpick, but, isn’t Loch Ness just the name of the lake?

~~~
jvanderbot
How do you _know_ the lake is real?

~~~
kaybe
I've seen it. It could have been a very convincing fake, but.. if that doesn't
count, then nothing does.

~~~
IIAOPSW
How do I know you're not in on the conspiracy.

~~~
Apocryphon
Maybe it was just an oversized puddle, and not actually a lake.

------
mbrock
This article and the Overcoming Bias post it links to ("Better Babblers")
remind me of when I was in high school and was reading stuff like _Gödel,
Escher, Bach_ and Dawkins's _The Selfish Gene_ (the first two books I ordered
after my mom allowed me to use Amazon) while also learning online about
Wittgenstein and statistical text generation (Markov chains). All this led to
a kind of crisis of identity because in their own ways all those things point
to a deconstruction of the self.

That sounds hokey, but to explain briefly: GEB indicates that your self (your
consciousness of being someone) is a swirling self-referential symbolic
process (a "strange loop"); Dawkins indicates that your self is a kind of
evolved meme whose function in nature is to further your family of genetic
replicators; Wittgenstein indicates that your self is a habitual user of
language where deep meaning is not as important as social function; and Markov
chains indicate that your self's use of language can be modeled at least to a
rough approximation by extremely simple statistics.

So I clearly remember wondering "Am I just a kind of slightly more advanced
Markov chain?"
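To make the comparison concrete, here is a toy word-level bigram Markov chain in Python (the corpus and function names are invented for illustration): it is nothing but a lookup table of which words follow which, yet its output already has the surface texture of its source.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=10, seed=0):
    """Generate text by repeatedly sampling a successor word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the self is a loop and the loop is a self and the self is a process"
chain = build_chain(corpus)
print(babble(chain, "the"))
```

The output is locally plausible and globally meaningless, which is exactly the property the article attributes to skim-proof generated text.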

I think this is also the unsettling core question of _Blade Runner_: are we
also artificial?

I wonder what theologians might say about this question.

~~~
xenadu02
We clearly possess an autopilot and an actual pilot function.

Actual piloting (critical thinking, reasoning, creativity) requires more
mental effort and is much slower. Perhaps our brains are optimized to have the
pilot train the autopilot (so to speak) when necessary, but otherwise leave
things to the autopilot? I suppose that's why training, muscle memory, and
practice are so important.

I don't think any of this is controversial, but it does seem that a lot more
human activity runs on autopilot than we thought.

~~~
pure-awesome
I'd like to contest the use of the term "actual" pilot for the more
deliberate pilot.

For some reason, we as humans seem to like thinking of our conscious /
deliberate pilot as ourselves, and the subconscious / autopilot as some form
of "Other" somehow cohabiting our bodies.

(I'd expect this viewpoint to be especially true for the academic / programmer
crowd here on Hacker News, who stereotypically tend to be more skilled in
logical / deliberate forms of thought, and comparatively lacking in the
intuitive / automatic, such as social skills).

However, the subconscious is as much YOU as the part of which you are more
aware, and, in fact, it probably has a GREATER effect on your actions. In
conclusion, the autopilot is as deserving of the term "actual" pilot as the
deliberate pilot is.

~~~
derangedHorse
You can’t just say it’s “as much YOU” as an argument against the autopilot
analogy. A plane’s autopilot is still part of the plane as well and will also
likely have a greater effect on the action (of flying the plane) than the
actual pilot.

------
mcv
To be honest, I've read plenty of articles that read like those examples:
superficially they seemed to make sense, but when you paid attention to the
logic behind them, they made no sense. And those were articles written by real
people, by journalists.

And even scientists often seem to write a bunch of meaningless filler that
feels scientific in their papers, presumably because that kind of text needs
to go in that place in their paper.

Another thing, from the article: _"Simple correlations also seem sufficient
to capture most polite conversation talk, such as the weather is nice, how is
your mother’s illness, and damn that other political party."_

You know how hard I had to work at that? I used to be incapable of small talk
or maintaining a conversation. I could talk only in those "deep structures"
but struggled to put that in sentences that formed a natural part of a
conversation. I worked hard at those "simple correlations"; they were not so
simple for me.

~~~
pasabagi
A key to smalltalk is to understand the game that's being played. When you're
talking about something technical, the point is to discover and deliver useful
insights on the subject at hand. When you're doing smalltalk, the point is to
make the other feel comfortable and accommodated, to entertain, and be
entertained. Therefore, it's more about trying to understand what the other
person is feeling (usually a matter of looking at body language and
imagining what's going on in their lives) than it is about delivering useful
insight. If in doubt, listening attentively and trying to work out what makes
the other person tick is usually a productive strategy.

~~~
mcv
I've often felt like an alien anthropologist in conversations. Even if I was
aware of the game and saw what was going on, I wasn't part of it. I needed an
extra processing step that made me always too late to say the right thing.

I'm suddenly wondering if my problem might be related to my youngest son's
speech problem. When he was 3, he couldn't speak sentences; his sentences were
just 3 meaningful words in a row. He never babbled, unlike his best friend,
who always babbled in long, incoherent Trumpian sentences. He got special
speech therapy for half a year which helped immensely, and now, at 4, he makes
excellent sentences, if a bit staccato and clumsy, and certainly without any
kind of natural flow.

He never babbles, though. I think his Markov chain is broken and he replaced
it with a rule-based system that he reworked to produce language.

~~~
pasabagi
I have a feeling that the alien anthropologist approach is ultimately better
than the reactive approach. I grew up with an autistic brother who was
nonetheless socially capable, simply through distilling social problems into a
set of hard rules. Sometimes an edge case would get him into trouble, but
90% of the time he was actually way more socially able than I was, simply
by having a better-formed understanding, less overwhelmed by immediacy and
assumptions.

You need a degree of alien anthropology to be able to respond to what's really
important in a conversation. An extremely socially capable deaf friend of
mine pointed out, for instance, that body language is more important than
verbal content in most casual interactions. These kinds of insights are kind
of hard to gain from a neurotypical, non-deaf perspective like mine, because
you're a bit like a fish that doesn't realize it's swimming in water.

~~~
mcv
Oh, body language! My 4 year old son is extremely expressive with his face and
posture, possibly because of his verbal problems. Even when he could barely
talk at all, he was very good at expressing what he wanted or needed. He's so
expressive that I've always thought he should become an actor.

My older son, who is verbally very strong, is often nearly expressionless.

------
CarelessSmirch
Most normal people decide at some point that math is not for them, so as not
to embarrass themselves by claiming competence in a domain where they are
not confident at all. Hence, compared to nerds or seriously talented
individuals who can confidently claim this status, they never spend much time
on math and eventually seriously lack skills even in simple logical reasoning.
This is basically math anxiety. Nerds, on the other hand, are incentivized to
gain social status via math skills as a potential escape from their low
status. I think this mainly explains the author's observation that these two
skills are not correlated very much: basically an introversion-vs-extroversion
polarization based on social expectations and incentives. Math is also
intimidating, so I'd imagine someone with some experience in it also develops
overall higher inhibition and hence is a worse verbalizer.

There is a huge literature on the relation between logical reasoning and
verbalizing, which the author sadly ignores.

~~~
wccrawford
>Nerds, on the other hand, are incentivized to gain social status via math
skills as a potential escape of their low status.

This is certainly not how I approached math, and it's the first I've heard
anyone say it, even.

Instead, I'm good at math because I _enjoyed_ it. It's simple and logical and
my mind worked really well in that way. There was never anything standing in
my way of learning math, so I always just picked up any new math easily.
Later, because I was already so good at math (and so many people were bad at
it) I sought out more math courses as a way for more easy A grades.

Never was it a conscious effort to set up my career or social status.

~~~
CarelessSmirch
Some individuals are perhaps purely intrinsically motivated, but I think
that's a very tiny minority. I also think there is a good chance that
intrinsic motivation itself is a status-enhancing adaptation, evolved by
runaway selection. So ultimately, you are executing this adaptation whether
you want to or not, much like the fish that carefully creates beautiful
patterns in the sand without knowing why it is doing so. It's all about sex.

[https://www.youtube.com/watch?v=B91tozyQs9M](https://www.youtube.com/watch?v=B91tozyQs9M)

[https://en.wikipedia.org/wiki/Fisherian_runaway](https://en.wikipedia.org/wiki/Fisherian_runaway)

~~~
whatshisface
If nerds were pursuing math to get status they would quit when they got
absolutely no status.

~~~
Faark
But they do get status. With their teachers. With their parents. And, probably
the most important, with their peers, aka other nerds.

Damn, I'd even expect most "cool kids" to have more respect for someone better
at math (all else being equal), even if their social context won't allow them
to show it in any form.

~~~
wccrawford
Being good at math got me some "kudos" occasionally, but _nothing_ like being
good at almost anything else. Art (music, writing, singing) was a way better
thing to be good at. Even other nerdy things like spelling bees and
programming got _way_ more acclaim than math.

Math, instead, got mostly derision from other kids and little to no respect
from teachers or parents. No, "cool kids" never had even an ounce of respect
for math nerds. If they secretly had any respect for them, they certainly
never showed it.

And what's the point of trying to gain respect that nobody expresses? It's
certainly not something that would be worth pursuing just to get that respect.

~~~
CarelessSmirch
Long-term planning/deferred gratification. You were setting up a career as a
source of status later in life.

~~~
whatshisface
> _Long-term planning /deferred gratification. You were setting up a career as
> a source of status later in life._

It sounds like you're tying yourself up in knots to explain something that
everybody already understands. Math is intrinsically fun, but only if you can
cut through the ruinously bad educational system and the difficulty of getting
started.

~~~
CarelessSmirch
Such intrinsic interest to that extent was sexually selected for, so we're
doing it only because of that. That's all I'm saying.

Intrinsic motivation makes sense for learning about the world to some extent,
but there was not much that hunter-gatherers needed to learn in order to
survive.

------
derangedHorse
“Whatever ability IQ tests and math tests measure, I believe that lacking that
ability doesn’t have any effect on one’s ability to make a good social
impression or even to “seem smart” in conversation.”

It’s funny that he uses the phrase “seem smart” when we humans can’t give a
hard definition of intelligence. In the quote he makes it seem like
intelligence is coupled with IQ and mathematical ability, yet concedes that
one could “sound smart” in language. He also says those same people could be
creative, funny, and relatable, so why not just define different metrics of
intelligence here and say that they actually are smart (albeit in different
ways)? I can assure you no one would “sound smart” discussing advanced
mathematical theories if their grammar was bad and no one understood the
branch of mathematics they were in (a counterexample where one could be smart
by IQ and mathematical-ability metrics but not be able to generate coherent
speech).

~~~
TheOtherHobbes
Practical intelligence is clearly multidimensional, and there's no reason why
someone who scores well on one dimension should also score well on the others.

Any suggestion that talent-for-math = general-intelligence is actually rather
dumb. Ditto for assumptions about poor math skills, which can easily be a
product of poor teaching rather than unusually low native ability.

If IQ tests measure anything, it's raw mental speed and memory - useful
traits, but not nearly enough to draw a bounding box around general
intelligence, which also includes abilities such as intuitive modelling,
creative originality, and informal inference.

As the cliche goes, smart people can do stupid things in at least some
situations.

Raw high IQ is just as likely to get you to wrong conclusions quickly as it is
to give you useful predictions. If your modelling skills don't give you a good
working model of the situation you're in, you're going to have a bad time.

Outside of core STEM, modelling depends on social and cultural experience and
contextual training. If you don't have those, you're going to be handicapped
even if you have a stratospheric IQ.

~~~
rntz
> there's no reason why someone who scores well on one dimension should also
> score well on the others.

Except that this is literally what happens; this correlation between seemingly
unrelated cognitive tests is referred to as "the g-factor".
[https://en.wikipedia.org/wiki/G_factor_(psychometrics)](https://en.wikipedia.org/wiki/G_factor_\(psychometrics\))

To be fair, this doesn't actually contradict most of the rest of what you say.
But this correlation does suggest that there are some shared factors (whether
innate, or developed, or both) that affect many or all kinds of "practical
intelligence"; one might reasonably call these factors "general intelligence".
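As a hedged illustration of the positive manifold behind the g-factor: if several otherwise unrelated test scores each share one latent factor, all pairwise correlations come out positive, and the first eigenvector of the correlation matrix loads the same way on every test (roughly the classical g extraction). A sketch with simulated data, where the loadings are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulate scores on four unrelated-looking tests that all share one
# latent factor g plus independent, test-specific noise.
g = rng.normal(size=n)
tests = np.stack([0.6 * g + 0.8 * rng.normal(size=n) for _ in range(4)], axis=1)

# Every pairwise correlation is positive: the "positive manifold".
corr = np.corrcoef(tests, rowvar=False)
off_diag = corr[~np.eye(4, dtype=bool)]
print(off_diag.min() > 0)  # True

# The leading eigenvector of the correlation matrix loads with one
# sign on all four tests, so a single axis summarizes much of the
# shared variance.
eigvals, eigvecs = np.linalg.eigh(corr)
first = eigvecs[:, -1]
print(np.all(first > 0) or np.all(first < 0))  # True
```

None of this says whether the shared factor is innate or developed; it only shows how one common cause makes "unrelated" tests correlate.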

------
lordnacho
> Whatever ability IQ tests and math tests measure, I believe that lacking
> that ability doesn’t have any effect on one’s ability to make a good social
> impression or even to “seem smart” in conversation.

Ever wonder why incompetent people get promoted? Or why consultants can sell
projects using only buzzwords? Ever had that strange conversation where two
colleagues are seemingly discussing something absurd, obvious, or impossible,
but they think they're being clever?

~~~
YjSe2GMQ
While I agree to some degree, this also sounds overly dramatic.

First, there's a ton of evidence that people also get promoted/appreciated for
the right reasons, not just because they're a fancy Markov chain of buzzwords
(example: serial entrepreneurs, like Musk).

Second, there's an underlying reality that eventually comes crashing down on
people who fail to meet the expectations they had built around themselves.

~~~
tokai
>First, there's a ton of evidence that people also get promoted/appreciated
for the right reasons

You're gonna have to post some of that evidence please.

~~~
YjSe2GMQ
Sure, the correlations between X1:{IQ, conscientiousness} and Y:{income,
educational attainment} are stronger than between, say X2:{agreeableness,
height, race} and Y.

Y are examples of what people want (wealth). X1 are examples of "valid"
reasons to be recognized as useful and therefore attain wealth, X2 are
examples of "less valid" or "completely invalid" reasons.

~~~
tokai
Your opinions are not evidence, no matter how much you dress them up in
notation. I asked for evidence because I could use it. Self-discipline is a
better indicator of income and education than IQ.[0] So I know for a fact that
your a priori fact is not valid. It's all good to share opinions, but please
do not present them as reality when they are _not_.

[0]
[https://journals.sagepub.com/doi/10.1111/j.1467-9280.2005.01...](https://journals.sagepub.com/doi/10.1111/j.1467-9280.2005.01641.x)

~~~
YjSe2GMQ
Oh well there's a misunderstanding. Conscientiousness is the psychometric term
for self-discipline (not 100% the same, but extremely close):
[https://en.m.wikipedia.org/wiki/Conscientiousness](https://en.m.wikipedia.org/wiki/Conscientiousness)

And I didn't bother to point out particular studies validating my claims
because much of this has been known for close to 100 years, in the same way
that linking to a proof of the undecidability of the halting problem would be
an excessive reference given the nature of the HN community.

If you want an example study look at the "Health and longevity" section on the
above linked Wikipedia page.

Or to directly support my thesis about X1 and X2 separation, first look at
Wikipedia IQ page and the "Social correlations" section, it's generally around
0.5:
[https://en.m.wikipedia.org/wiki/Intelligence_quotient](https://en.m.wikipedia.org/wiki/Intelligence_quotient)

Or look at your own link.

Second, height is correlated to about 0.29 with various measures of success:
[https://www.researchgate.net/profile/Daniel_Cable/publicatio...](https://www.researchgate.net/profile/Daniel_Cable/publication/8545837_The_Effect_of_Physical_Height_on_Workplace_Success_and_Income_Preliminary_Test_of_a_Theoretical_Model/links/00b495390a8394aea7000000.pdf)

------
matcalaphi
Interesting. BTW, I thought it was ironic that in the first part of the post
the author points out the lack of logical connections between sentences in
GPT2-generated texts, while in the second part she changes the topic to
performance on IQ and math tests, which have nothing to do with how well
people can detect the failure modes of GPT2 - after all, people with high IQ
or math scores can easily be inattentive when reading texts. Maybe this post
itself was generated by GPT2?!?!?!?!? (or GPT...3?!?!?)

------
wdutch
> what the median student actually learns seems to mostly be a set of low
> order correlations. They know what words to use, which words tend to go
> together, which combinations tend to have positive associations, and so on.

This really resonates with my experience, especially with people I work with.

I hope that in the future people are trained to tell these sorts of generated
texts from real texts. I think including some test for this would greatly
improve our hiring procedure.

~~~
commandlinefan
When I was young, I thought I was just naturally "gifted" (I was in the
"gifted" program in grade school after all, so there was that). I figured that
I was one of the smart people who could figure out anything and not one of the
dumb people who had to work to understand things. I felt this way for a while,
right up until I ran into calculus. Man, calculus chewed me up and spit me
out. I had this sort of epiphany when I realized that, no matter how smart I
was or was not, there were things out there that were difficult for me to
understand and there may well be things out there that may be impossible for
me to understand. I got to be a bit more humble after that - but I interact
with people who, I believe, have never had that experience or, worse, ignored
it: if I can't figure it out, and I know I'm smart, it must be pointless, so
I'll ignore it. I depressingly suspect these people of mistaking the need to
concentrate for stupidity, regardless of the topic.

------
memory_grep
> Due to our concerns about malicious applications of the technology, we are
> not releasing the trained model.

Doesn't that go against the mission of OpenAI? I thought they were about
making technology publicly accessible to everyone so that it can't be abused
by only a few people. This makes them seem more like a business with
proprietary data.

~~~
Invictus0
Their mission is to build safe AI. It's pretty clear how this model can be
abused and I am glad they are not releasing it.

------
jka
So the next corollary to this would be - if you wanted to sneak information
past people, what would you need to do to their concentration in order to
achieve that?

~~~
TheOtherHobbes
Politics, economics, and religion have been playing this game for centuries.
Advertising and PR are relatively recent newcomers, but they operate in the
same space. If you want to know how it's done, look at how the experts do it.

In outline you create the simplest possible narratives with a strong emotional
kick - preferably one that induces anxiety in the receiver, and/or blames an
outgroup for all the bad things that are happening.

Then you can sell yourself as the solution to the anxiety and fear.

The narrative itself can be nonsense. It needs a certain superficial narrative
coherence, but that's all.

------
0x38B
>I’ve noticed that I cannot tell, from casual conversation, whether someone is
intelligent in the IQ sense.

For me a giveaway is both how quickly and how well they take in what I'm
saying; that is, how much processing gets done. E.g., one of my really
intelligent friends will have already connected what I'm saying with what
they know about me, and will have already guessed at what I'll say next. This
isn't just the domain of intelligent people, but for me, how quickly it
happens is a telltale sign. Intelligence can be blinding as much as it is
enlightening, though, and I prize kindness and compassion far more than
intelligence, which our culture puts on a pedestal.

~~~
HiroshiSan
I'd consider it more a sign if they were able to do that and not know you.

------
brianberns
> The point is, if you skim text, you miss obvious absurdities. The point is
> OpenAI HAS achieved the ability to pass the Turing test against humans on
> autopilot.

The Turing test requires an ongoing conversation between an interrogator and
a subject. I think even an interrogator "on autopilot" (whatever that means)
would pretty quickly notice if a subject's responses contained "obvious
absurdities".

~~~
lostmsu
Just a random associated thought: I wonder if the game of Mafia [1] is a
somewhat better way to discern intelligences than the Turing test.

E.g. imagine a game where the mafia (AIs) can eliminate actual humans from the
game by convincing the remaining humans that the eliminated players are
actually the mafia (i.e. AIs).

[1]
[https://en.wikipedia.org/wiki/Mafia_%28party_game%29](https://en.wikipedia.org/wiki/Mafia_%28party_game%29)

~~~
pas
So-so. But it makes emotion a lot more involved. And while faking emotions
might be hard for humans, it is trivial for an AI, which can also easily
appeal to our empathy.

~~~
lostmsu
The point of the game would be to eliminate AIs, not mafia. No in-game
"nights", just a free for all discussion with periodic elimination rounds.

------
shams93
I got more intense results from intentionally shallow models, such as training
only on the work of a single author but letting that shallow training run for
long periods. What I got was sometimes useless but sometimes amazing. It came
up with a strange kind of plotline and even characters that were kind of like
the source author's, but it was still a unique piece of work. It had its own
voice, which I think you lose when a model uses too many sources.

~~~
lostmsu
Care to share a link to the code of the training setup?

------
d--b
See “Thinking, Fast and Slow”:
[https://en.m.wikipedia.org/wiki/Thinking,_Fast_and_Slow](https://en.m.wikipedia.org/wiki/Thinking,_Fast_and_Slow)

------
bcgraham
An excerpt from Anathem (Neal Stephenson, 2008):

> "Early in the Reticulum — thousands of years ago — it became almost useless
> because it was cluttered with faulty, obsolete, or downright misleading
> information," [he] said. "... [Crap] — a technical term. So crap filtering
> became important. Businesses were built around it. Some of those businesses
> came up with a clever plan to make more money: they poisoned the well. They
> began to put crap on the Reticulum deliberately, forcing people to use their
> products to filter that crap back out. ... But it didn't really take off
> until the military got interested. ... Artificial Inanity systems of
> enormous sophistication and power were built ..."

------
shimfish
A while ago, before I realised the obvious truth that arguing with people on
Facebook isn't exactly productive, it dawned on me that many people mistake
the ability to form a grammatically correct sentence for thinking. It's nice
to see a more formal argument for that theory.

~~~
commandlinefan
An awful lot of people think the grammatically correct part is optional, too.

------
primordialsoup
I agree. I tend to skim through articles and even books, and that's no good.
Instead of the ten books I "read" last year, had I put in effortful attention
and focused on and thought through only two books, I would have come out
ahead.

~~~
ericmcer
That's very true. It's also true that most books don't demand a high level of
attention. Most media in general is designed to require zero effort on the
part of the consumer.

I just worked through 'Gravity's Rainbow' and was very mindful and careful in
my reading. It was a great experience, but at the same time it was fairly
boring in comparison to something like social media.

~~~
robertkrahn01
"Reader, Come Home" by Maryanne Wolf is a good book on the topic of how new
technology influences our reading habits.

[https://www.goodreads.com/book/show/35887237-reader-come-home](https://www.goodreads.com/book/show/35887237-reader-come-home)

------
nfc
Slightly off topic but I found contemplating this possibility amusing:

Would it be possible that, once we manage to eliminate obvious logical and
contextual mistakes in the generated texts, they could be used to discover
alternative (and consistent) views of the world (e.g. about art,
philosophy, ...)?

The AI would be able to create a huge number of theories, and it's possible
that some of them would be both interesting and original.

It would be a kind of restrained infinite-monkeys way of exploring theories
about the world (restrained because we would prune mistakes we don't want the
AI to make, so it wouldn't just be random typing).

It would be even funnier if we could filter out the subset of generated texts
that is testable :)

~~~
SiempreViernes
Sure, but there is an easier way: just read what other cultures have come up
with!

They usually went into great detail on the matter, and it has the advantage of
being _actually_ based on someone's real experience of the world, rather than
just randomly aligning with it.

------
Geee
Is this related to feeling stupid in conversations? I think most people just
auto-generate a lot of nonsense, which throws me off. Maybe other people can
somehow guess what the point was behind the nonsense.

------
everdev
Some of these generated examples still make sense to me:

>> Aragorn drew his sword, and the Battle of Fangorn was won. As they marched
>> out through the thicket the morning mist cleared, and the day turned to dusk.

> Yeah, day doesn’t turn to dusk in the morning.

I interpreted this to mean a literary jump in time, from morning to day to
dusk, in one sentence.

So, the text itself doesn't bother me. What scares me is the ability of an AI
to overwhelm us with such a volume of content that the signal is buried under
massive amounts of auto-generated noise.

------
Tepix
I wonder - could this be used to ensure attention?

Add some randomly generated sentences to a text that students need to learn
and then ask them to identify these sentences in the text.
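A minimal sketch of how such an attention check could be generated; the function name and example sentences here are made up for illustration, and the decoys are assumed to be distinct from the real sentences:

```python
import random

def make_attention_check(real_sentences, decoys, seed=0):
    """Splice decoy sentences into a list of real sentences at random
    positions; return the mixed text plus the decoy indices (answer key)."""
    rng = random.Random(seed)
    mixed = list(real_sentences)
    for decoy in decoys:
        mixed.insert(rng.randrange(len(mixed) + 1), decoy)
    # Recover decoy positions by membership (assumes decoys differ
    # from the real sentences).
    key = [i for i, s in enumerate(mixed) if s in set(decoys)]
    return mixed, key

real = ["Cells divide by mitosis.", "The nucleus contains DNA."]
decoys = ["The mitochondria negotiated a lasting peace treaty."]
mixed, key = make_attention_check(real, decoys)
print([mixed[i] for i in key])  # the sentence(s) students should flag
```

Grading is then just comparing a student's flagged indices against the returned key.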

------
fouc
GPT-2 and human writers with a poor grasp of logic both produce content that
might seem sensical on the surface but is ultimately nonsensical at a deeper
level.

Hopefully we can find a way to counteract this systematically. Perhaps the
trick would be to penalize 'low order correlation' text in the first place.

------
darkerside
> But if you ask an exam question where the deep structure answer differs from
> answer you’d guess looking at low order correlations, most students usually
> give the wrong answer.

Every kid knows about these. They're called trick questions, and they've been
fooling students at all levels for centuries.

------
vinceguidry
> I’ve noticed that I cannot tell, from casual conversation, whether someone
> is intelligent in the IQ sense.

This is because casual conversation is predicated on EQ and not IQ. In order
to be able to ascertain IQ you need to actually test for it, opportunities to
demonstrate it aren't going to come up randomly.

This should stare smart people in the face, but it seems we have a blind spot
for the general uselessness of our own intelligence in normal situations.

The author goes on to discuss interviews, and I'd argue that EQ is generally
more important in thriving and producing on a team than IQ is as well, with a
few important exceptions.

~~~
alexandercrohde
Firstly -- Is there even actually a standardized/normed EQ test?

Secondly -- Is there research to validate it as a predictive measure of
success?

~~~
vinceguidry
I'm not aware of any, no. EQ is still a somewhat fuzzy concept. It needs to be
more well-defined before it can become a fruitful research topic.

------
derangedHorse
The examples he presents of false text can all basically be discriminated from
human-generated text by looking for logical consistency. I wonder if this will
be something schools focus attention on. With all these fake news articles,
spam emails, and fake viral images, I think the new generation should be
trained to recognize these things early on. Even if we can’t make
discriminating classifiers through technology as of yet, we can hopefully
train ourselves to discriminate between fake and real media in the meantime.

~~~
the8472
> Even if we can’t make discriminating classifiers through technology as of
> yet

Doesn't the existence of GANs restrict the space where discriminators can win
to NP problems?

------
novalis78
Not quite correct: Publius Ovidius Naso (“the one with the nose”). Looks like
AI > human blogger already.

------
samuelyoussif
English is not my native language. Can someone please explain the title of the
article? I read the article and I get the point, but is that syntactically
correct English?!

~~~
CarVac
(Humans (who are not concentrating)) are not (general intelligences)

~~~
commandlinefan
English is my native language and I still don't understand the title even
after you explained it, but the article was interesting anyway. Maybe "Humans
who are not concentrating may as well be dumb humans?"

~~~
Retra
Humans who are concentrating are acting with general intelligence. Humans who
are not concentrating are acting with lesser kinds of intelligence. The title
is pointing out that human intelligence is not always the 'general
intelligence' that is supposedly unique to humans. It is grammatically
correct, but you have to understand that "general intelligence" is a term of
art.

------
viach
That's the real danger of AI, not the "rise of the machines". It gives a
theoretical foundation for saying "humans who are not X are not Y".

------
pure-awesome
> I’ve taught public school teachers, who were incredibly bad at formal
> mathematical reasoning (I know, because I graded their tests), to the point
> that I had not realized humans could be that bad at math — but it had no
> effect on how they came across in friendly conversation after hours. They
> didn’t seem “dopey” or “slow”, they were witty and engaging and warm.

> Whatever ability IQ tests and math tests measure, I believe that lacking
> that ability doesn’t have any effect on one’s ability to make a good social
> impression or even to “seem smart” in conversation.

> If “human intelligence” is about reasoning ability, the capacity to detect
> whether arguments make sense, then you simply do not need human intelligence
> to create a linguistic style or aesthetic that can fool our pattern-
> recognition apparatus if we don’t concentrate on parsing content.

In the context of the article, these are troubling assertions for the author
to be making. They seem to imply that people who struggle with mathematics are
fundamentally less intelligent than those who don't, in a way that cannot be
picked up by chatting with them.

If I understand correctly, the author furthermore seems to be saying that a
GPT-2 style text generator will sooner be able to match the conversation of
such a person than of someone more well-versed in formal mathematical
reasoning.

This seems factually wrong to me; I think the author vastly underestimates the
complexity of the subconscious processing people do to arrive at the
viewpoints they hold, and to transform ideas into coherent speech.

As a related point / analogy, the process by which humans do conscious
mathematics (such as arithmetic) is inherently slow and inefficient, whilst at
the same time it manages to perform incredibly advanced "calculations" in the
process of being a highly-advanced motion-control system.

I posit that the human process for synthesizing ideas is happening primarily
in this more complex underlying format, which we are still some way off from
being able to simulate (though I do believe we will be able to, eventually).

The author's conclusion seems a bit like seeing that computers are better at
arithmetic than humans are and thus concluding that they will soon surpass us
in intelligence.

Furthermore, the author's reasoning seems demeaning to people who struggle
with mathematics and explicit logical reasoning, and is only a few steps from
claiming that such a person is inherently less "conscious".

To claim that a strong grasp of formal reasoning is necessary for those in a
position of policy and decision making is one thing. But to assert (without
substantial evidence to back it up) that someone with low mathematical-logical
reasoning ability has speech that is significantly easier to emulate, because
it fundamentally contains less content, seems to be simply a form of
intellectual/academic self-aggrandizement.

~~~
rimliu
Being good at math (here I mean more than just arithmetic) is usually a very
good proxy for being good at manipulating abstractions. And, imho, that's at
least one of the cornerstones of intelligence.

~~~
virtualwhys
The issue the GP is pointing at is that the author implicitly takes the
ability to reason in the abstract to be _the_ marker of intelligence.

It may be one form of intelligence, but a brilliant writer, a gifted musician,
or an exceptional artist can certainly all be considered intelligent, even if
their ability to grok logical constructs is limited compared to people who
spend their waking hours doing just that and have almost certainly been honing
that skill their entire lives.

~~~
randcraw
I think the second essential part of the GP's marker for intelligence is the
ability to form sentences that convey information, and do it efficiently.

Ability at abstract reasoning is invisible to outsiders unless the bot can
also transmit its information to others, as well as understand transmissions
from others and react appropriately (constructively or entertainingly).

AFAIK, none of the measures of synthetic intelligence so far have tried to
measure the flow of information into and out of a bot -- its efficiency,
coherence, or relevance. I think the rise of master-aper bots like GPT-2 and
Q&A bots like Watson, which beautifully model syntax and rhythm but no
semantics, may finally force this issue to the surface. To wit: information
matters more than style.

Frankly, I welcome the arrival of bot overlords like these. Maybe they'll
motivate us humans to pay more attention to the meat of what we hear, read,
and say, and therein act less robotic ourselves.

------
amai
Don't we know this already from Kahneman (Thinking, fast and slow)?

"Humans who are thinking fast are not general intelligences."

------
EGreg
How did the question answering occur? The computer correctly found that the
race lasted seven days??

------
kanox
tl;dr: APPLY YOURSELF

------
PaulHoule
L. Ron Hubbard spent some time in Arizona in the early 1950s, when the
legendary hypnotist Milton Erickson was lecturing.

Erickson described a "confusion technique" that is in evidence in lectures
Hubbard gave later in Philadelphia. You'd catch him saying things that sound
like what somebody might say in a lecture, but that people don't actually say.
For instance, he would repeatedly state something wrong and then 'correct'
himself (e.g. "The Japanese alphabet has 48 letters, or was it 46 letters?";
quotes around 'correct' because it was all bullshit anyway).

Have people listen to lectures like that for hours on a malfunctioning tape
recorder, under high social pressure and structured communication, and it will
turn their brains to mush. No wonder Scientology practice is twice as harmful
per hour as what other cults do.

