
Are Humans Intelligent? An AI Op-Ed - jerezzprime
https://arr.am/2020/07/31/human-intelligence-an-ai-op-ed/
======
tsimionescu
The article states the following:

> I picked the best responses, but everything after the bolded prompt is by
> GPT-3.

Based on this, I am pretty sure that the order of paragraphs and the general
structure (introduction, arguments, conclusion, PS) are entirely the product
of the editor, not of GPT-3. I'm assuming the selection happened at the level
of paragraphs and not of individual sentences, which does leave some pretty
good paragraphs.

Another question that I don't know how to answer is how different these
paragraphs are from text in the training corpus. I would love to see the
closest bit of text in the whole corpus to each output paragraph.

And finally, human communication and thought are not organized neatly into a
uniform ladder of difficulty from letters to words to sentences to paragraphs
to chapters to novels, and an AI that can sometimes produce nice-sounding
paragraphs is not necessarily any part of the way to actually communicating a
single real fact about the world.

I still believe that there is never going to be meaningful NLP without a
model/knowledge base about the real physical world. I don't think human
written text has enough information to deduce a model of the world from it
without assuming some model ahead of time.

~~~
dqpb
> I still believe that there is never going to be meaningful NLP without a
> model/knowledge base about the real physical world.

I think this article is of high enough quality to constitute meaningful NLP.
But your questions about the amount of human intervention are key. If it takes
several hours to a day to produce one of these, then it's not really that
meaningful. If one person can produce 100 of these in a day, that's pretty
meaningful.

------
czzr
I wish I could see the original of this - the quote "As with previous posts,
I picked the best responses, but everything after the bolded prompt is by
GPT-3" could mean anything from minor improvements to this being essentially
human written, and there's no way to tell.

------
jasode
With GPT-3, I guess we'll see lots of upcoming stories of various discussion
forums getting the "Sokal Affair"[1] treatment. We'll keep amusing each other
by trolling everybody with more fake GPT-3 stories.

I think GPT-3 is very convincing for "soft" topics like the other HN thread
_"Feeling Unproductive?"_[2], and philosophical questions like _"What is
intelligence?"_ where debaters can just toss word salad at each other.

It's less convincing for "hard", concrete science topics, e.g. Rust/Go
articles on programming to improve performance.

An interesting question is what happens when the input to the future GPT-4 is
inadvertently fed by lots of generated GPT-3 output. And in turn, GPT-5 is fed
by GPT-4 (which already ingested GPT-3). A lot of the corpus feeding GPT-3 was
web scraping and now that source _is tainted_ for future GPT-x models.

[1]
[https://en.wikipedia.org/wiki/Sokal_affair](https://en.wikipedia.org/wiki/Sokal_affair)

[2]
[https://news.ycombinator.com/item?id=24062702](https://news.ycombinator.com/item?id=24062702)

~~~
wcoenen
> _A lot of the corpus feeding GPT-3 was web scraping and now that source is
> tainted for future GPT-x models_

It might be possible to filter out GPT-3 generated text from future training
data. Simply feed part of the text to GPT-3, and if it is way too good at
predicting what follows you throw it out. The same trick could be used to
detect students writing essays with it.

This trick will stop working though as more variants of good text prediction
algorithms appear, unless we can do the same test against each one.
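In outline, the filter described above might look something like this, assuming you can get per-token log-probabilities out of the model. The interface and the `threshold` value are both made up for illustration; a real filter would need calibration against known human-written text:

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(-mean log-probability). Lower values mean the
    # model finds the text highly predictable.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def looks_generated(token_logprobs, threshold=5.0):
    # If the model predicts the text "way too well" (perplexity below
    # the threshold), flag it as likely model output and drop it from
    # the training set.
    return perplexity(token_logprobs) < threshold
```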

~~~
desertrider12
If you wanted to generate a longer essay with GPT-3 but make it really hard to
tell, you could just get it started with 1-2 sentences. Then let it generate
N-1 tokens (where N is the window size, I guess 2048), write a few tokens
yourself, and repeat. Then I don't think there's any way to reverse-engineer
the internal state of the model at any point in the text. It would be like
trying to find a string that hashes to a given value.
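The interleaving loop described above could be sketched roughly like this. `generate` and `human_tokens` are hypothetical stand-ins for GPT-3 and the human editor, not real API calls:

```python
def interleave(generate, human_tokens, total, window=2048):
    # generate(context): stand-in for the model, returns up to N-1
    # tokens continuing `context`. human_tokens(text): stand-in for
    # the human editor writing a few tokens of their own.
    text = []
    while len(text) < total:
        text += generate(text[-(window - 1):])  # model fills most of the window
        text += human_tokens(text)              # then a few human tokens break the chain
    return text[:total]
```

The point of the human tokens is that every later model continuation is conditioned on text the model did not produce, so no detector can replay the model's state.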

~~~
asimpletune
Maybe, like hashing, there can still be weaknesses in the algorithm though.

------
QasimK
I’m continually amazed - flabbergasted - by GPT-3. I’ve read stories, articles
and HTML written by it and each time I am shocked at how good the output is.
This essay made me laugh!

It’s practically indistinguishable from a human. Not a creative, insightful
and unique human. But an average human? Yes, I cannot tell the difference.

I must repeat that - I cannot tell the difference!

This could probably completely replace or supplement most of the online
content that I see, including news; certainly the more vacuous side of things,
of which I think there is a lot.

Those online recipes with irrelevant life stories before them? Replaced. Those
opinion pieces in news? Replaced. Basic guides to tasks? Probably replaceable.

I know I probably only see the best output, and it would be nice if I had more
context, but the peak performance is amazing.

The Twitter video showing GPT-3 generating HTML based on your request? I think
there's a lot of potential. I don't know whether it can, in general, live up
to these specific examples though.

~~~
tsimionescu
If you pay just a little attention, you absolutely can tell the difference:
GPT-3 is not saying anything. Even the 'lowest' humans are usually trying to
communicate something when they are telling a story, or teaching you how to do
a basic skill, or giving you directions.

GPT-3 can't do any of that. It can pick up clues from the text to produce
incredibly realistic sentences that are related to the broad topics of some
text, but it's all smoke and mirrors in the end - there is no model of the
world getting expressed in communication, it is just mindless aping of similar
speech.

And yes, this basic skill that GPT-3 has is enough to replace some human
tasks, like inventing plausible sounding stories for a recipe or perhaps even
taking news from one site and writing them on another with slight alterations.
Perhaps it will even be able to take some facts and weave them into a speech
about that topic.

But it is not even close to doing something like real journalism, even at the
level of a car mechanic telling you what happened down at the mall.

~~~
I-M-S
GPT's text as quoted in the article has a clear thesis stated upfront, later
expands on it with examples, summarizes its arguments at the end, and does all
this with gusto.

You cap your comment with the non sequitur that GPT is not going to replace
journalism.

IMO the piece of text generated by GPT offers more insight and is wittier than
yours.

~~~
tsimionescu
My comment was not an essay, it was a response to someone else's comment. The
GP was explicitly saying that GPT-3 could probably produce many of the news
content they read, and I was replying to that.

Please also note that it's not very clear to what extent the text of the
article is edited - the non-bold text is written by GPT-3, but I don't think
it was produced as a single block of text. Instead different parts (sentences?
Paragraphs?) were produced individually, selected by a human from many other
responses, and assembled together in the shape we are shown. The train of
thought among the paragraphs is most likely entirely human work, not AI work,
and only the best sounding paragraphs out of a lot of gibberish were likely
selected.

It would also be interesting to see how close those paragraphs are to
something in GPT-3's training corpus, in terms of structure if not explicit
language.

------
dexen
_In AI research, the territory is not the map._

A half-joking prediction:

at some point we'll solve all arbitrarily hard milestones for AIs and will
still find ourselves 'nowhere near having real general intelligence'.

At that point we might start questioning our assumptions about intelligence.

~~~
falcor84
That sounds interesting, could you please expand on that? Do you have in mind
any particular facet of general intelligence that a milestone cannot be
defined for?

~~~
dexen
Ah apologies for having been unclear. No, not like that - I was thinking about
more _meta_ aspects of it:

- shared context (shared biological & cultural heritage)

- recognition & willingness to ascribe intelligence

Basically I'm trying to imply that the _general AI problem_ is of a similar
nature to the _10x programmer problem_ - communication & recognition, or the
lack thereof.

A 10x programmer makes a hard problem look easy. A general AI makes people
around it feel very smart & productive.

Which is also why I prefaced it with _the territory is not the map_ - we
humans know surprisingly little about _our_ intelligence.

[edit]

Case in point, an older discussion about intelligence in animals:
[https://news.ycombinator.com/item?id=21772648](https://news.ycombinator.com/item?id=21772648)

------
naringas
> So what does it mean to be intelligent? It means to be able to do nothing.

I think this AI is onto something

------
hosh
This seems to me like a very technologically sophisticated version of the
ancient myth of Narcissus and Echo.

As brilliant as it is, I think this speaks more to how we as humanity think
about ourselves than it does about AI.

------
plaidfuji
Impressive? Absolutely. Monetizable? Unclear to me, but probably somewhere
within the vast ad/chatbot/garbage text generation service-scape.

Scary? Not GPT-3, but when GPT-6 or 7 gets involved in the political realm,
that’s when people will take notice. This essay has a glimmer of “humans can’t
be trusted to govern themselves” - and it’s not entirely unconvincing.

~~~
spanhandler
IMO it's (human control, at high levels) all over as soon as a major
corporation or state finds a way to gain significant competitive advantage by
handing over the reins to machines. Absent sustained, vigorous opposition and
punishment by an alliance of basically everyone else, we'll compete ourselves
right into having no choice but to let AI run things, or else become
subservient to and marginalized by those who do.

In other words by the time it's a choice, I doubt it'll even be a choice.

~~~
Nasrudith
Think about that logic for a bit, regarding the distribution of expertise vs
control. Reductio ad absurdum, wouldn't that say we should get rid of doctors?
They have an insurmountable competitive advantage if we personally aren't at
least a candidate for medical school. Never in our lives could we do better
than them.

I think the sheer stupid fear leading to tall-poppy treatment of competence is
far more dangerous than the dark future could ever be with actually competent
entities in charge that we could never do better than.

~~~
spanhandler
I'm not at all following what tall poppies and getting rid of doctors has to
do with my view that we're not likely to _choose_ whether machines start
giving us orders, but rather to just _find ourselves in that situation_ and
with few other options in the nearish future. I'd liken it to the invention of
the state, more than anything else, and similarly practically-unavoidable once
the advantages are made clear. It's just something that'll happen to us as a
result of competitive forces soon after it's made possible, not some
deliberate choice we're all going to make collectively, let alone
individually.

... either that or all this machine-learning and "AI" development stalls out
and we never get much better at it or anything like it. I guess that could
happen, too. Doubt it, but you never know.

What "stupid fear"? I'm not even sure it's a dark future, and, as I expect is
now clear, I don't really think my or anyone else's opinion on that would
affect whether it happens, anyway. It's either possible and so (damn near)
inevitable, or it's not, so it won't happen. What I _don't_ think it is, is
any kind of choice we're going to make.

------
mellosouls
From the same site, this brilliant attempt at comedy writing - with some
passages better than many human comedy writers:

[https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-reddit-eats-larry-page-alive/](https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-reddit-eats-larry-page-alive/)

~~~
FeepingCreature
> cut to Larry at his computer typing away at GPT-4, and running the model.

> Peter sits in his office, staring out the window, muttering: “I’m telling
> you it’s a bad idea. A bad idea.”

> He gets a text from Larry, that says “Oops.”

------
weeksie
GPT-3 will revolutionize spam, twitter astroturfing, and high school essays
that you didn't study for.

~~~
falcor84
>high school essays

I suppose it's a gradual slope here, with spell checkers on the one side, a
grammar checker a bit further ahead, then Google Doc's autocomplete, and then
close to the very end of the slope you have this system that just needs a few
sentences and completes the entire essay for you.

One question then is - at which point on the slope does the comment stop being
a direct product of your mind? Another question is - in what ways does this
distinction matter?

~~~
weeksie
I think that one of the neat new narrative art forms that GPT-3-like
algorithms will make possible is a kind of prose cinema, where the author is
more like a director, prompting agents to react to one another - like lining
up billiards shots. I think a bunch of people are doing stuff kinda like that
now, at least with poetry generation.

At what point does the billiard ball stop being an object of your will?

------
vadansky
How much of GPT-3 impressiveness is our ability to read meaning and create
patterns where there is none? Similar to our ability to anthropomorphize?

------
Xeing0ei
That's impressive.

A few weeks ago, GPT-3 generated content looked like nonsensical content-farm
filler to me. Today, this article makes points and follows an argumentative
line.

There are still a few oddities, but this time, it looks like thinking and not
just putting related words next to one another with proper grammar.

~~~
jjoonathan
Ditto, this seems much more coherent than previous GPT-3 output. Have people
gotten better at prompting / selecting output? Or is this a fake GPT-3 output
written by a human?

~~~
dougmwne
I have been trying out different prompts for a week now and this seems
plausibly written by GPT-3. There's quite a bit of luck involved since each
time you submit the same prompt you can get a different response. Some prompts
produce garbage output and some good prompts produce a range of outputs from
lame to amazing. There's quite a bit of cherry picking and selection bias
involved. No one publishes the uninteresting responses and you don't read or
comment on them. Still, I think this is all quite amazing and it seems like
it's close to ready for commercialization.

~~~
keenmaster
Quick, someone train an AI on labeled GPT-3 outputs. They can be labeled as
"good", "ok", "bad", "bad grammar", "convincing argument", etc.

There's a website, scribophile.com, for crowdsourced literary criticism. We
should make something similar but for AI training. The key difference is that
the critique of AI output would be structured (in the form of labels you can
apply to sentences, words, paragraphs, etc. that you highlight) rather than
unstructured.

Another website, more structured than scribophile but used for crowdsourced
photo critiques instead, is photofeeler.com. An interesting thing they do: if
someone is consistently "harsh" or "lenient" on photos, as measured by their
deviation from the average rating, their feedback is adjusted accordingly. My
email is in my bio for collabs.
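A minimal sketch of that kind of rater-bias adjustment is a plain z-score rescaling. The actual scheme photofeeler uses is not public; `target_mean` and `target_sd` here are assumed calibration targets:

```python
from statistics import mean, stdev

def adjust_ratings(rater_scores, target_mean=5.0, target_sd=2.0):
    # Rescale one rater's scores so that a consistently harsh or
    # lenient rater contributes on the same scale as everyone else:
    # subtract their own mean, divide by their own spread, then map
    # onto the target scale.
    m, s = mean(rater_scores), stdev(rater_scores)
    return [target_mean + (x - m) / s * target_sd for x in rater_scores]
```

A harsh rater who only ever gives 1s, 2s, and 3s would have those scores stretched back around the population average instead of dragging every photo down.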

Note: No one should dare submit GPT-3 content to scribophile. It is a
beautiful, sacred, and fragile place for humans only.

------
koeng
Whenever people talk about how GPT-3 can’t do a lot of things that humans can
do, I always think back to the “Bitter Lesson”. I don’t want to believe that
general AI will just come from a stupid amount of compute, but it might.

~~~
elmo2you
I highly doubt that.

Ever since I had my first encounters with AI, somewhere in the early 90s, one
thing has remained rather constant: the extraordinary successes of AI come
mostly from tasks we humans naturally perform poorly at, and from being
confined to narrow applications.

Most of the often-touted amazing general-purpose AI is largely just a lot of
smoke and mirrors. The main reason for that is another thing that has hardly
changed since the early 90s: a lot of profit is made from convincing
(/fooling) investors into believing that AI is far more powerful/useful than
it actually is.

------
marta_morena_25
There is pretty much one thing that advances in AI tell us: most of humanity
is nothing more than a statistical approximation algorithm. But that doesn't
mean human intelligence is. What is fundamentally lacking from modern AI is
the ability to "invent". They can perfectly (or at least very soon will)
approximate the behavior of "Joe". But they get nowhere close to even touching
anything like the Einsteins of humanity.

The main problem I see with AI is that it is very easy to approximate "general
human intelligence", which is essentially equal to "being indistinguishable
from the Joe next to you". But it is a completely different league to actually
advance the human race. For that, statistical approximation will never work.

The next step is to create AI that innovates. As long as that isn't done, all
we have is a demonstration of how "unintelligent" most human beings really are
(i.e. nothing more than a statistical approximation + pattern matching...
Instagram and social media essentially is like an AI forcing function for
human beings, to make them become average).

And yes, we can couple AI with things like a Go-Engine, SAT solver, theorem
provers, etc. to give them abilities beyond what humans can do in these
categories, but who builds that? Humans... As long as AI can't build an AI for
a category it knows nothing about and has had no training for, that AI remains
"as unintelligent as a brick". All it can do is reproduce what its creator
taught it.

That isn't necessarily a bad thing at all. This could still be extremely
useful for society and put a new evolutionary pressure on the human race to
become "above" average. Something that has been utterly lacking in the past
century. With general, yet stupid AI becoming a reality soon, >90% of humanity
is rendered obsolete. This will cause a significant pressure to improve on an
unforeseen scale, which is probably a good thing overall.

Truly intelligent AI on the other hand, might as well lead to our immediate
extinction, since it renders the entirety of the human race irrelevant.

~~~
ausbah
How about consciousness? On the surface GPT-3 may appear to act like a human,
but by almost any definition you choose, it still lacks consciousness.

I don't think anyone would throw you a murder charge if you dumped the servers
hosting GPT-3 into the ocean.

Or maybe they would and statistical approximation is all humans are under the
hood despite all our insistence we are more "sophisticated".

~~~
stupidcar
Despite the supposed "mystery" of consciousness, it won't be hard to reproduce
in an AI. Take a general intelligence system, give it the ability to sense its
own mental state (e.g. by feeding the output of its intermediate layers back
into its inputs). Pretty soon it will start building a conceptual
representation of itself and the evolution of that representation over time
will be conscious self-awareness.

~~~
root_axis
> _it won't be hard to reproduce in an AI_

It is fundamentally impossible to determine whether or not "consciousness" has
been successfully "reproduced".

~~~
FeepingCreature
Things that are fundamentally impossible to determine are necessarily
irrelevant.

------
naringas
here's a paragraph in which I replaced "brain" with "scientist" and "mouth"
with "experimental data":

> The point of this is to form a hypothesis. If the scientist and the
> experimental data say the same thing, then the scientist will think it has a
> hypothesis that is correct. But if the experimental data and the scientist
> say different things, then the scientist will think it has a hypothesis that
> is wrong. The scientist will think the experimental data is right, and it
> will change its hypothesis.

------
Nasrudith
One potential area for the future is "augmented writing", where writers aren't
writers as we think of them today, but more like editors who feed in prompts
and possibly rearrange and tweak to get better results than their meat brains
alone could come up with. There would be a diversity of styles and approaches,
of course.

Imagine, say, someone trying to maintain training sets per individual
character and finding that they would not only produce better lines but choose
different actions.

------
wslh
A related question that I don't know how to quickly answer from the Internet:
imagine that IQ is a good measure of intelligence. I read that Ainan Celeste
Cawley[1][2] has an IQ of 263 (again, assume this number is accurate for a
moment). How do you measure an IQ of 500, 1000, or 5000? I mean, not the
actual test, but how would the test structure change from measuring normal to
outlier IQs?

Disclaimer: I am not an avid science fiction reader, but I am interested in
sources on superintelligence[3]. Is superintelligence more of the same, or is
it more about having different layers interconnected?

[1]
[https://en.wikipedia.org/wiki/Ainan_Celeste_Cawley](https://en.wikipedia.org/wiki/Ainan_Celeste_Cawley)

[2] [https://www.rd.com/list/highest-iq-in-the-world/](https://www.rd.com/list/highest-iq-in-the-world/)

[3]
[https://en.wikipedia.org/wiki/Superintelligence](https://en.wikipedia.org/wiki/Superintelligence)

~~~
jerf
There are a few measures of "IQ". The most common one today centers at 100,
and defines a standard deviation of separation on the task of being "generally
intelligent" as 15 points. An IQ of 263 is a claim that that person is nearly
11 standard deviations above average, which corresponds to a claim that they
are roughly 1 out of 10^27 people. This is not a plausible claim. The number
is meaningless.
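That tail probability is easy to sanity-check with the standard normal distribution (stdlib only):

```python
import math

# How many standard deviations above the mean is an IQ of 263 on a
# 100-centered, 15-point-SD scale?
z = (263 - 100) / 15                    # ~10.87 standard deviations

# Upper-tail probability of N(0, 1): the expected fraction of people
# scoring at least that high.
p = 0.5 * math.erfc(z / math.sqrt(2))

# p is on the order of 1e-27 -- vastly fewer than one person in the
# entire world's population, which is why the claim is meaningless.
print(f"{z:.2f} SD above the mean -> about 1 in {1 / p:.2e} people")
```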

IQ tests would have to be constructed to measure somewhat similar
intelligences of a population large enough to have a meaningful "population".
The scales could then potentially be roughly calibrated to each other, but
they wouldn't really be translatable. The task of constructing them would be
up to the intelligences in question.

It is possible no such measure could exist; as the "size" of the intelligence
increases, the number of degrees of freedom of "intelligence" almost certainly
increases too. As humans we can be good at bugle but terrible at piano (and
that's already a fairly microscopic focus in the grand scheme of human
activities), but those statements are almost meaningless to ask about a
raccoon, even if we give them raccoon versions of the instruments. Even at
human scales, while IQ seems to measure _something_, we can see the measure is
getting fairly strained. You would probably need an increasingly
multidimensional "number" as the intelligence continues to scale up.

As for what intelligence is, all we have are other hypotheses that are on the
one hand clearly related to the question at hand, yet on the other, not the
answer. Arguably, AI like GPT-3 is also a measure of our best definition of
"intelligence". If we could completely clearly define it, we could probably
implement whatever it is we defined.

~~~
wslh
Please forget for one moment the definition of intelligence, or whether IQ is
the right measure: how do you imagine a superintelligence in comparison with
an intelligent human? I mean, it is obvious that superintelligence is not just
about processing information faster, but involves some structural changes and
the ability to connect dots across different abstraction layers. Just
guessing. I imagine that one achievement will be proving math theorems
starting from mathematical first principles, the way AlphaZero achieves chess
or Go mastery.

~~~
jcranmer
If what intelligence is is unclear, then how can one reasonably attempt to
speculate about what having more of it actually means?

~~~
wslh
You can imagine stuff that is unclear and speculate about alternatives.

------
darepublic
Another annoying GPT piece where there is no way for a regular member of the
public to verify it. I guess in applying to the beta I should have said, under
'what do you plan to do with this' - "post cherry-picked examples on social
media that hype up GPT-3".

------
noiv
Turns out language is no longer an indication of intelligence. Any new
proposals?

~~~
Tade0
Never was really, as indicated by the Chinese Room Argument:

[https://en.wikipedia.org/wiki/Chinese_room](https://en.wikipedia.org/wiki/Chinese_room)

~~~
johnc1
It's a highly debated argument though; to me it's like saying your CPU doesn't
understand HTML, and your browser is running on a CPU, hence your browser
can't understand HTML either. Scott Aaronson explained it nicely too:
[https://scottaaronson.com/democritus/lec4.html#:~:text=Searl...](https://scottaaronson.com/democritus/lec4.html#:~:text=Searle%27s%20Chinese%20Room)
Even the Wikipedia page mentions many reasonable counter-arguments.

------
motohagiography
Did they train it on Stephen Fry novels? If we deepfaked this text onto his
voice and image, I think we might have something better than how Martin Amis
turned out.

------
scottlocklin
I'm coming around to the idea that people who are impressed by "results" like
this are NPCs who probably could be replaced by GPT-3.

~~~
isochronous
I mean, it's not going to win any philosophical debates, but these kinds of
results are WORLDS better than they would have been just a couple of years
ago. I have to wonder what it would take to impress you.

------
kangnkodos
My mouth says, "This sucks". Then my brain says, "This sucks". Therefore my
brain thinks I have a correct hypothesis.

------
namenotrequired
Only the insightful analogies are missing. I wonder if there's a better prompt
to come up with those?

Edit: anecdotes -> analogies

------
holoduke
I wonder if one day a forum like Hacker News will appear where GPT-x bots post
comments on cool articles and blogs created by the very same bots. Very deep,
complicated topics would be discussed that no human will ever understand. If
our progress in these fields doesn't come to a halt, then this must happen one
day.

