
Philosophers on GPT-3 - freediver
http://dailynous.com/2020/07/30/philosophers-gpt-3/
======
Animats
GPT-3 demonstrates that a huge volume of what's written is mostly bullshit.
This is very upsetting to some. See "The Digital Zeitgeist Ponders Our
Obsolescence" in the linked article. What comes out of this system is better
than most comments on political blogs, and sometimes better than the articles.

Where would this approach do badly? "How-to" material, I suspect. Trained on
auto repair manuals, it could generate new, plausible, but useless auto
repair manuals. This gives us an insight into what's wrong: it lacks adequate
ties to the real world.

This is the "common sense" problem I've discussed previously. Figuring out
what's going to happen next in the real world is often not a problem in word
space. It's a problem in a different kind of space. The shape of that space is
a big unsolved problem in AI.

~~~
dwohnitmok
I think perhaps what's more upsetting is that GPT-3 flips the traditional
notions of what machines are good at and what humans are good at on their
respective heads.

GPT-3 seems to indicate there's a chance that "creative" domains such as
poetry, literature, music, etc. will be taken over by AI (i.e. AIs will have
superhuman performance) before "logical" domains such as logic, mathematics,
and the sciences.

This means that it is becoming more and more conceivable to more and more
people that sometime in the foreseeable future an AI will be better than any
human along any dimension you choose to measure, even when it comes to the
ability to elicit emotions and reactions in other humans.

~~~
theontheone
I think you hit the nail on the head, with the salient point here being that
_in the near future_ "creative" things will be automated first (see Image GPT,
Jukebox, etc.; Google has 100 billion dollars in cash, countless TPUs, the best
engineers, infra, etc., and could probably produce results far better than
each of these OpenAI projects within a few years). One of the things that got
me into ML research was the notion that we could automate a lot of the hard
work humans do every day (agriculture, cooking, desk jobs, etc) so that humans
could do things that were uniquely theirs & interesting, that were _human_,
that were beautiful... Unfortunately it turns out that classical music and
waxing poetic are easily generative in an enjoyable way. In the most ironic
fashion possible, it turns out that the very thing we do when we conduct ML
research, what you call the "logical domain", is one of the only things that
stays human-only in the foreseeable future.

GPT-3 and other projects seem to drive hype cycles in the tech community and
convince people like Elon Musk that the AGI revolution is near. But I think
recent progress is just another example of machine learning models being able
to generalize on super large datasets, even if it's the biggest model so far.
It's not clear to me that larger models will solve this in the limit; take the
way GPT3 fails on addition past a certain number, and the fundamental
inability of transformers to learn certain algorithms. It is certainly still
possible for this type of large dataset, large model style of ML to make human
life better in many ways - like Tesla is trying to do with self driving cars,
or Covariant with automating Amazon-like jobs. But I think when it comes to
tackling the hard problems of true intelligence, we're missing a dimension
somewhere.
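(A toy sketch of that failure mode, under the assumption that a model does a roughly fixed amount of computation per emitted token: the worst-case carry chain in addition grows with the number of digits, so any fixed per-token budget eventually runs out. This is only an illustration of the intuition, not a claim about GPT-3's internals.)

```python
def carry_chain_length(a: int, b: int) -> int:
    """Longest run of consecutive digit positions that generate a carry
    when computing a + b in base 10."""
    chain, longest, carry = 0, 0, 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        carry = s // 10
        chain = chain + 1 if carry else 0  # extend or reset the current run
        longest = max(longest, chain)
        a //= 10
        b //= 10
    return longest

# Worst case (999...9 + 1): the carry must propagate across every digit,
# so the chain length scales with the number of digits.
for n in (3, 6, 12):
    print(n, carry_chain_length(10**n - 1, 1))
```

A model that memorizes digit-pair patterns handles short chains fine; inputs long enough to exceed its effective computation per position are where it breaks down.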

~~~
spacechild1
Disclaimer: I'm a composer

> Unfortunately it turns out that classical music and waxing poetic are easily
> generative in an enjoyable way

On the contrary, I would say that generating convincing and original classical
music is an incredibly hard (if not impossible) task. All the current music AI
projects give results which may sound “good” to a casual listener, but they
sound horribly wrong to any educated listener. The reason is that AI can only
imitate the surface, but completely fails to recognize/synthesize larger
structures. This might be ok for some background noodling in a TV drama, but
not for the concert stage.

Finally, we rarely perceive art works in isolation. We know and appreciate the
fact that a certain work has been created by a certain person in a certain
time.

~~~
sriku
The reality is likely neither here nor there - i.e. computing may have more to
offer to the creative endeavor than creators would like to admit, but still
leave an obvious gap which technologists might be loath to admit.

It may be instructive to look at David Cope's [1] work (what he calls
"recombinant music" [2]). Cope's been writing algorithms to compose in the
styles of the masters (Mozart/Chopin/et al) for about 3 decades now, well
before the recent surge in "AI". His techniques are much less sexy for the
"deep learning" enthusiasts, and yet he managed to outrage an audience of
connoisseurs who assembled to listen to a "lost Chopin piece" only to be told,
after they shared their applause, that it was composed by a computer taught to
mimic Chopin's style (the composition was _performed_ by a musician). The
response, in my opinion, also points to music as a socially constructed
experience, not purely attributable to the sound signal itself. That is, if I
give you a romantic background story for a lost composition of a master, you
may be inclined to experience the piece in a more favorable light than if I
told you it was generated by an algorithm (or the converse).

You're absolutely right that the musical output of the current crop of "AI"
projects (especially the ones using deep learning / neural networks) are
crappy to even a modestly trained listener, or even a lay untrained listener
for that matter. However, more involved modeling (such as Cope's) produced
some very compelling results decades ago, so it would be a mistake to assume
that the current crop won't get close enough [3]. The fact that DL systems
don't need to be instructed in the way Cope has had to encode his musical
understanding is also something to be considered in the evaluation as well as
in scoping their capabilities going forward.

[1]:
[https://en.wikipedia.org/wiki/David_Cope](https://en.wikipedia.org/wiki/David_Cope)
[2]: [https://www.recombinantinc.com](https://www.recombinantinc.com)
[3]: [https://deepmind.com/blog/article/wavenet-generative-model-raw-audio](https://deepmind.com/blog/article/wavenet-generative-model-raw-audio)
(see "Making Music" section and examples there)

~~~
spacechild1
I am also a computer musician, btw, so I am well aware of the creative
potentials of algorithmic composition. ;-)

However, we have to make a clear distinction between creative and recreative
methods. David Cope's work is impressive, but it focusses on the recreation of
existing musical styles. This is interesting from a musicological perspective,
but not very interesting artistically.

I would certainly say that deep learning generates lots of interesting
“material” (like many other methods of algorithmic composition), but we still
need a human being to curate, edit and assemble the material into a meaningful
piece of art.

Finally, I think the current AI debate can be very fruitful for the arts. In a
way, it raises similar questions as the concept of the “readymade” and the pop
art movement did in the 20th century.

Btw, I'm currently working on an opera which uses AI generated lyrics :-)

~~~
Isinlor
Humans also need other humans to curate their work. We are comparing AI not
only to the best composers alive, but also to the best composers ever. Nobody
remembers millions of failed musicians.

BTW - I'm curious, what do you think about bird songs? Are their songs
interesting artistically? How do you think they were composed?

~~~
spacechild1
Oh, you're opening up a huge topic there. Actually, there have been
philosophers who claimed that the beauty/sublimity of nature was ultimately
superior to the sensations produced by the arts. You can find this reasoning
in Kant's "Kritik der Urteilskraft", for example.

On the other hand, you have composers like John Cage (or more recently: Peter
Ablinger) who claim that the act of listening itself can be/create art,
blurring the borders between nature and art. There are conceptual pieces which
only consist of listening instructions.

Finally, bird "songs" have been used as the source material for musical
composition for centuries. You can find it in Beethoven, Mahler, Debussy,
Stravinsky, etc. Olivier Messiaen was even an amateur ornithologist; he
faithfully transcribed hundreds of bird songs and used them in his music (see
for example his piano cycle "Catalogue d’oiseaux").

As for the question of who composed the actual bird songs, the answer probably
depends on the theological background of the person you ask ;-)

------
randomsearch
One consequence of GPT-3 is that I am now highly sceptical of the human
provenance of any HN comment on an article about GPT-3. It has made my HN
experience objectively less enjoyable, because I’m constantly expending effort
to spot nonsense and avoid wasting time reading it.

Perhaps most worrying is not how “human-like” GPT-3 can be, but how “GPT-3
like” humans can be. When I am in “nonsense-detection” mode, I drill down into
paragraphs to spot non-sequiturs etc., and I find plenty of HN comments are
rambling, contradictory, or I just can’t ascertain the meaning of the text.

If anyone gets this far through my comment, you may now be wondering if I’m
hilariously posting a GPT-3 output (I am not). I wonder how a human might seek
to convince others that they are not GPT-3. I think using unusual,
rarely encountered vocabulary, word combinations, or sentence structures that
GPT-3 is unlikely to pick up would help. Or referring to current events in a
way that makes sense (that lockdown in Greater Manchester would be an example
for people in the U.K.).

It certainly has the power to ruin HN and other forms of debate online.
Perhaps one consequence will be more video chat and audio calls (until deep
fakes become great) and then a retreat to the physical world for serious
discussion.

~~~
unabst
Can GPT-3 make a valid point?

Good writers have something to say, and they don't waste words saying it.

Someone just good with language is an editor. Or a babbler. Or a rapper.

~~~
NaNtales
Some of the most talented and pertinent writers of our time are rappers. I
would encourage you not to dismiss them so readily.

~~~
unabst
Yes. But they are also good writers, no? After the premise that good writers
have something to say, it should be clear I am referring to the "just rapper"
rapper.

------
hirundo
"GPT-3 on Philosophers" could be more interesting. I would like to read a
response by GPT-3 to these essays. It's only fair.

~~~
dougmwne
This is not exactly cherry picked, but I did play with the prompts till I
could get GPT-3 to write an article in the first person in response to the
article, instead of other random output. This is the first successful attempt.

GPT-3 on Philosophers by GPT-3
[https://pastebin.com/3AEtjv35](https://pastebin.com/3AEtjv35)

~~~
weswpg
I refuse to believe that a person didn't write that. Every sentence is
relevant to the thesis and coherently follows from the previous thought.

~~~
dwohnitmok
I totally believe it. GPT-3 is good.

Here's another paste using the same prompt as dougmwne. Everything from "by
GPT-3" onwards is written by GPT-3. This was the second try (I deleted the
first one). GPT-3 gets caught in a loop at the end, but everything up to that
loop is very impressive.

[https://pastebin.com/p3kjgqVB](https://pastebin.com/p3kjgqVB)

~~~
fernly
Oh geez, that end, where it's getting stuck? WHO DOES THIS SOUND LIKE?!?

> But maybe I'm drawn to it because I'm good at it. Maybe I'm drawn to it
> because I'm good at abstract reasoning. Maybe that's why I'm drawn to it.

Hint: you've heard him in a recent press conference...

~~~
minikomi
"I have the best reasoning folks."

Am I on the right track?

~~~
sjwright
I wonder if this might be life reflecting art (or whatever) if the GPT-3
corpus is seeded with contemporary writing. Trump’s words are likely the most
repeated of anyone in the past few years—within the anglosphere at least.

------
freediver
Chalmers: "As for consciousness, I am open to the idea that a worm with 302
neurons is conscious, so I am open to the idea that GPT-3 with 175 billion
parameters is conscious too."

~~~
skissane
I think consciousness is something dynamic – constantly learning from and
reacting to your environment, constantly changing your environment and being
changed by it in turn. And that's what biological neural networks do. By
contrast, systems like GPT-3 still have a hard boundary between two different
modes of operation – learning and application of that learning – which makes
them much more static. And that makes me doubt their consciousness.

I think, if some future system could get rid of the boundary between training
and runtime, so that training happened continuously – then it would be closer
to consciousness in my view. (It would also mean that different instances of
the system would begin to diverge and become unique individuals, because even
if the initial training was identical, the ongoing operation would be
different, and the networks would diverge over time.)

~~~
WClayFerguson
Your definition of consciousness drags in a lot that is not required. You are
conflating it with intelligence too much.

Consciousness is what it _feels like_ to have a thought, an emotion, an idea,
in that exact moment while you are feeling it. It can happen in a brain at a
moment, during a single second of time, and doesn't have anything whatsoever
to do with learning, reacting to environments, or any of that other stuff that
normally goes along with life.

Consciousness is the actual experience itself. Nothing more. Nothing
less. This is why we think anything with a brain has some kind of
consciousness. If anything it's likely more related to quantum mechanics and
waves than it is to "information processing" which is the common misconception
even AI experts have.

~~~
Stupulous
It seems unlikely to me that consciousness wouldn't be tied to intelligence.
If consciousness had no direct involvement with intelligence, there would be
no reason for pain to hurt or pleasure to feel good. An irrelevant
consciousness could present as anything at all, but we have one that presents
us with a coherent reality that at least resembles the one our bodies exist
in.

It must be either a side effect of intelligence or something that human
intelligence uses to an end. Either consciousness is something composed of
information processing, or it is something inherent to the universe that has
some evolutionarily efficient use towards information processing. I favor the
former.

I believe this very strongly. That said, the subject matter is a personal
obsession, and I would love to hear counterpoints.

~~~
amarte
I've heard consciousness described as "the felt presence of immediate
experience," which I've found to be an excellent description of the experience
of being embodied in the world -- of being conscious. If consciousness is an
emergent phenomenon, meaning if there are atoms flying around spontaneously
assembling into more and more complex forms of order until some critical point
of complexity is reached and consciousness appears, what's the point of "being
conscious" at all? If the assembling of particles into forms of order is
what's fundamental, surely that process could just go on and on without any
bit of it feeling embodied. It seems to me like the universe could be exactly
the same without "the felt presence of immediate experience"/consciousness.
Atoms would be whizzing around, people would be pontificating, GPT-3 would be
chugging away. It would all just be kind of "empty" -- all surface, no
substance. I don't need to feel embodied for the world to be the way it is,
yet I do, and I struggle to understand why that is.

~~~
Stupulous
This is a question I've considered for a long while. As programmers we can
easily see that no set of behaviours requires consciousness.

I touched on this in my previous comment, it is my belief that consciousness
is not the only way that intelligence can be made, but that it is somehow
efficient for the purposes of evolution. Using consciousness may consume the
least energy (the brain uses a lot of energy), take the least genetic material
to describe, have the safest learning curve (so that children are more
intelligent and more likely to survive), or any combination of these and other
features.

I think of experience as a sophisticated mathematical object with useful
functionality. We have a disconnect with physical reality, and a strong
connection with informational reality. I can assert that I exist, and the
abstract model of my phone I keep in my head exists, but I can't assert that
the phone exists and in reality its existence is very different from how I
perceive it. It certainly seems like I am an information construct that was
formed within a physical reality.

Beyond that I'm mostly in the dark though. You can see that consciousness is
involved in learning and adapting: you are highly conscious of new skills and
change, but old skills sink into the subconscious and you gradually ignore
repeated stimulus. You can see that consciousness integrates much of our
intelligent functionality (perception, memory, executive function) and you can
feel that your role is to run things. How is experience related to all of
this? I do not know.

~~~
amarte
In your first comment you proposed that, "Either consciousness is something
composed of information processing, or it is something inherent to the
universe that has some evolutionarily efficient use towards information
processing."

Sometimes I try to imagine the latter case, and it really flips reality on its
head. The limit and most extreme case is that reality is fundamentally
experiential -- that is, what comes first is "being", "feeling", "embodiment",
and through this lens is found structure, objects, form, etc. Obviously this
is just the reverse of the idea that consciousness emerges from an underlying
physical substrate performing complex processes.

Either way, there is a definite correlation between the two -- feelings have
their molecular, biochemical correlates, and molecules working together
through processes have their transcendent embodiment as feelings experienced.

The question of "what is real?" can boil down to this: are things external to
consciousness fundamentally real and consciousness an ephemeral, emergent
flourish floating "on top", or is consciousness real and everything observed
by it a kind of flourishing of it?

This is a bit of a rabbit hole with many different paths to fall down, as I'm
sure you know. Scientific knowledge is rooted in observation and the dusting
away of uncertainty to reveal an objective reality we all share. From this
standpoint, the objective substrate being revealed and its complex processes
are taken as fundamental, and we have all the great successes of scientific
knowledge to show as justification for this to be true. The only hole seems to
be, why the hell am I embodied, then? -- why am I conscious at all? Life would
probably be easier if I didn't see that hole and want to search for more
satisfying answers!

~~~
Stupulous
I wrote out my thoughts on my answers to the two questions, and they wound up
being long and a little tangential to the bulk of your comment, so I figured
I'd throw in a thanks for the thought-provoking reply. I am enjoying this
conversation.

What is real?

Consciousness self-asserts: (1a) 'I think, therefore I am' (or else, 'thought
is occurring, so thought must exist'). If you accept the reasoning there, you
can also bring in (1b) 'I see blue, therefore blue exists', etc.

In that sense, our consciousness is a rare example of something that
definitively exists. A statement like 'there is a rock in space called Earth'
would be false if we lived in a computer simulation. The correct statement
becomes 'there are a bunch of numbers representing a rock in space called
Earth, in this computer'. Consciousness doesn't answer to the abstraction in
the same way. 'I see a rock in space that I think of as Earth' is true
regardless of whether you're inside of the simulation.

We can also assert that reality exists, as far as (2) 'there is a thing that
my experience interacts with which I do not consciously control and which
exhibits complex behavior', and also, (3) 'I exist (per 1a), therefore I am
somewhere. I can perform computations, therefore the place I am in must allow
for computations to occur. I have experience (per 1b), therefore I am
somewhere in which experience can exist. Reality exists (per 2), therefore
there must be something sophisticated enough to produce it.'

But that's strictly an informational definition, again equally true whether or
not you're in your own dream- it only addresses the complexity of the mind
producing the dream.

So to conclude: information is quintessentially real. Our consciousness and
reality are real at least to the extents that they are information, which are
'very much so' and 'a lot, maybe more', respectively. Physical reality as we
know it might be real, Occam's Razor says 'probably', Simulation Hypothesis
says 'probably not'. Anyone's game. I think that a physical reality of some
form must exist in order to perform computations and produce information, but
I'm open to a rebuttal.

And then why the hell am I conscious? This seems to be the crux of the matter.
It is my opinion that the answer is of the form 'consciousness solves problem
X efficiently along dimensions Y and Z' where X is some fundamental component
of intelligence, and Y and Z are environmental constraints. I think it's
unlikely that the answer is related to the fundamental makeup of the universe.
Evolution follows the path of least resistance, and entangling our minds with
some innate property of quanta from the scale of proteins seems more
challenging than other conceivable non-conscious solutions to general
intelligence.

~~~
amarte
This is such a monumental subject lol. I keep returning to this trying to come
up with some kind of adequate response but it's like I'm standing at the base
of a mountain and I can't find much to grab hold of that doesn't just crumble
away after I apply a little pressure.

I definitely follow you up to your last paragraph and it all rings true to me,
however I don't quite understand, "It is my opinion that the answer is of the
form 'consciousness solves problem X efficiently along dimensions Y and Z'
where X is some fundamental component of intelligence, and Y and Z are
environmental constraints." Maybe the rest of what I have to say is just
because I don't understand the fundamental component or constraints very well.

To me mathematics is the limit of description. I can assign a word to some
observable thing and distinguish it from all other observable things. I can
draw a picture of it to distinguish it even more precisely. I can use various
mathematical techniques to describe it even better, perhaps even to arbitrary
degrees of precision. But I fail to see how any mathematical technique can
capture --the feeling of-- happiness, pain, etc.. These embodiments can not be
fully realized by description alone. They can be pointed to, hinted at, and I
think great artists can stir echoes of them in other people, but actually
experiencing them is beyond the capacity of description. That's why I wonder
if experience/consciousness is something fundamental. A subsequent worldview
would have as its central concern 'beings' instead of 'objects'; it would not
exclude any current or future science; it would just shift its focus away
from abstractions and toward experiential beings -- with conscious beings,
which we are, perhaps a special case of a much larger set. The gains would not
be material, but perhaps there would be some improvements in the ways we
interact with ourselves, each other, and our surroundings.

~~~
Stupulous
> 'consciousness solves problem X efficiently along dimensions Y and Z' where X
> is some fundamental component of intelligence, and Y and Z are environmental
> constraints

There are two criteria I'm addressing here. Consciousness is either physical
(produced in the universe) or informational (produced in the mind).
Consciousness is either important to intelligence or incidental to
intelligence. My position, which I'll justify below, is
informational/important. If you accept that consciousness is manufactured in
the mind and important to intelligence, that means we evolved it. Because it
is a widespread evolved trait, it very probably is an effective solution to a
problem against environmental constraints, towards the larger goal of
reproduction.

Constraints might include the amount of genetic data needed to produce a
useful output, how well it deals with failure cases, how well it responds to
genetic mutations or how well it withstands viruses or cancer. The kind of
stuff that is irrelevant from the perspective of an intelligent designer like
us with access to basically limitless indestructible computational resources.

Physical/important I responded to previously, but briefly: the big issue is
scale. Humans run on proteins and large organic molecules. If there was
something nonmathematical at that size and in our bodies, we would very
probably know about it by now.

Both informational/irrelevant and physical/irrelevant are 'side effect' models. They
have at least two flaws. Consciousness follows attention, not brain activity.
If I do something subconsciously, I am engaging the same neurons but not
producing the same side effects. Consciousness is not a disconnected
afterimage of intelligence because I am aware of it and can perform reason on
it. It affects and is affected by my brain. If it's a side effect, it's one
that has been knitted into me, presumably to some benefit.

So what does that make consciousness? Taking it as an informational tool to
some end, we can probe some interesting questions. Self-assertion, which I
referred to earlier, is an interesting mathematical property. A set of rules
that allow the system within them to prove its own existence? And it's a
global property across all conscious experience; that's certainly of note. The
benefit of consciousness seems to be related to awareness of self and
environment (that's all experience seems to be) as well as executive function:
we experience a sense of free will, presumably because evolution wants us to
help run things from here. There's a remote possibility that free will is
real, and consciousness is somehow a non-deterministic process. That and
beyond are all speculation, though.

The belief system you describe is how I got out of nihilism and escaped what
was an agonizing conflict between romanticism and realism (I like the song
Imitosis by Andrew Bird for depicting that conflict). There's a cold,
meaningless reality out there, but somehow there's meaning that is made of it.
We matter even though (or because) if we didn't, nothing would.

------
freediver
I was trying to find the right analogy for GPT3 in the real world. Impressive
when working but mostly unreliable. The first thing that came to mind was
Tesla's Autopilot. The closest "non-AI" I could come up with was the
supplement industry (the second was 'a mistress'). Now, love GPT3 or hate it,
there are two things to consider:

- The supplement industry, with all its flaws, is a huge part of modern
society, so GPT3 or its derivatives might become one too

- GPT4 is again likely to be 10x 'stronger' than its predecessor. Where will
that leave us?

~~~
andrewprock
"Where will that leave us?"

With spam generators so good they can fool everyone?

~~~
robertk
Relevant xkcd. [https://xkcd.com/810/](https://xkcd.com/810/)

~~~
Ajedi32
Perhaps even more relevant: [https://xkcd.com/632/](https://xkcd.com/632/)

------
minimaxir
I'm not fond of the recurring philosophical and Skynet/AGI angles that keep
popping up regarding GPT-3. "But to [analyze statistical
distributions of text] really well, some capacities of general intelligence
are needed" isn't correct either; no one would call the attention mechanisms
used in Transformer models evidence of intelligence. It's math.
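(For readers unfamiliar with the mechanism being referenced: the core of attention really is a short piece of linear algebra. A minimal single-head sketch in NumPy follows; GPT-3's actual implementation adds learned projections, multiple heads, and causal masking, so treat this as an illustration rather than the real thing.)

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V             # weighted average of the value vectors

# 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # each token gets a mixture of all value vectors
```

Whether a stack of such mixing operations plus learned weights counts as "intelligence" is exactly the philosophical question under dispute; the math itself is this small.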

It's easier to argue it's not GPT-3 that's advanced, but it's _humans_ that
are simple.

~~~
VikingCoder
"no one... it's math."

Brains are structures in the physical world and you can model their behavior
with math. ("Model" is not the same as "perfectly model." Imagine a three-body
problem where you can come up with plausible solutions. That's not the same as
an exact solution.)

~~~
GreenHeuristics
Is a mountain in Nepal a Google Maps URL?

~~~
VikingCoder
Is your brain my brain? No, two distinct physical things are two distinct
physical things.

What happens when you multiply 2 times 5? You got 10? That's the same answer I
got!

Thoughts are executed by a brain, which can be modeled and executed by a
mechanical brain.

------
wrnr
I think GPT-3 is a technical marvel, but it did not reveal anything about
language that wasn't already known by linguists for at least 100 years. I've
read this funny dialog generated by GPT-3 between Bezos and Page, but even
that was edited by the "author". At the end of the day, it's nothing more
than an advanced form of Dada poetry:
[https://en.wikipedia.org/wiki/Dada#Poetry](https://en.wikipedia.org/wiki/Dada#Poetry)

~~~
runawaybottle
Its manipulative power is more what I’m interested in. I want to see what
percentage of the general population can be manipulated by it. I couldn’t care
less if it’s a pure design that simulates mastery of a language.

I have a hunch it will work on 50% of people. I’m willing to bet your average
13 year old would gobble up gpt articles without knowing any better.

~~~
bdamm
I suspect very many adults would also gobble up GPT articles without knowing
any better. Just witness the production machine of most news
organizations. They can write entire "stories" around a single tweet. The new
data is less than 140 characters and that gets turned into a 5 minute spot.

~~~
skybrian
To be fair, a lot of tweets don't make any sense out of context, so to make
them comprehensible, you might need to add a lot of context.

------
hooande
I keep thinking...what is GPT-5 going to look like? It might be a lot like the
movie "Her". And that's what, maybe five years away?

Obviously they can't just keep scaling up the number of parameters or working
memory size. And the way it displays new behaviors in response to specifically
worded prompts is still a little weird. But this is already very close to
having applications that we could only dream of.

GPT-3 is definitely over hyped. But there is something real happening here.
I'm still not entirely sure what to make of it.

~~~
ThalesX
Exactly what I was talking with a friend about yesterday.

I really believe we're two orders of magnitude away from having really useful
smart home assistants. The kind that 'understand' your intent. And it's a bit
mind-boggling that we're brute-forcing understanding, but it seems to be
working great.

Recently I've been really into AI Dungeon with the GPT-3 model. I'll give you
an example of a snippet from a story I played and it's not a singular example,
plenty of insightful / amusing encounters. I'll go so far as to even say that
it's more fun than interacting with a lot of people I know...

> "You follow the Fool through the village. It's fairly large, but in poor
> condition. The buildings are all thatched roof cottages and most of them
> look like they're about to fall down. There are people walking around, but
> they all look miserable.

> 'Are you happy?' the Fool turns, and asks a woman standing at an alleyway.
> She's pale and dressed in rag.

> 'No' she responds.

> 'Do you want to be happy?' the Fool asks, pulling a bright red flower from
> his bag.

> 'Yes' she says, cautiously accepting the flower.

> 'That's so sweet,' you say. ( _this is me writing_ )

> 'No, it isn't,' the Fool says. 'It's lazy. Happy people don't ask to be
> happy. They just are.'"

\--

I looked this quote up on Google to make sure it wasn't just lifted wholesale:
No results found for "Happy people don't ask to be happy. They just are.".

\--

I was really impressed with the depth of it.

~~~
cambalache
> I really believe we're two orders of magnitude away from having really
> useful smart home assistants.

Maybe I lack imagination, but smart home assistants to do what? I don't want
an Alexa on steroids at home. The intersection of GPT-3-like-based devices
that are: 1) really useful, 2) easy to interact with, and 3) respectful of my
privacy, is empty, and I suspect it will be for a long time. I need help at
home with chores: preparing lunch, cleaning the house, doing the dishes, doing
the laundry, painting a room, etc. No GPT-5 will be useful for that.

I don't want to pay (neither in money nor with my privacy) just to talk to a
device that does trivial or contrived things.

A case for a GPT-3-like agent as a virtual assistant for your personal/home
online work is more plausible, but I think it is still more than 2 orders of
magnitude away.

------
haecceity
> As for consciousness, I am open to the idea that a worm with 302 neurons is
> conscious, so I am open to the idea that GPT-3 with 175 billion parameters
> is conscious too. I would expect any consciousness to be far simpler than
> ours, but much depends on just what sort of processing is going on among
> those 175 billion parameters.

How can he say that without defining what he means by conscious first?

~~~
robertk
Because he has spent an academic lifetime studying that question, and
recognizing the unique obstinacy of that word, in that you cannot pre-commit
to define it despite some intuitive consensus on what it may mean.
[https://iep.utm.edu/hard-con/](https://iep.utm.edu/hard-con/)

~~~
haecceity
If he thinks GPT-3 is a simple consciousness then any language model would be
a simple consciousness. That doesn't make intuitive sense.

~~~
ccozan
Maybe language itself is an attribute of consciousness? Meaning, the image of
self and the image of the environment must somehow reflect back, and this
reflection is expressed through... language? Be it sound, gestures, visual, or
chemical.

------
gmaster1440
I also wrote a short piece[1] on the philosophical implications of GPT-3. I
noticed some comments are quick to dismiss all philosophical implications
entirely since GPT-3 is "just dumb algorithms". Open to debating any coherent
arguments.

[1] - [https://www.markfayngersh.com/post/the-thinking-placebo](https://www.markfayngersh.com/post/the-thinking-placebo)

------
simplify
I was pretty hyped on GPT-3, but got doused when considering its cost. Aside
from the $4-12 million to train, just how much energy does it cost to run a
query?

~~~
gwern
A fraction of a penny. 100 pages is <$0.05 of electricity, and a single prompt
will yield <1 page at most before it hits the 2048 BPE limit, so figure, at
worst, <$0.005 of electricity.

~~~
simplify
Interesting. How many computers/servers/etc are involved in those queries?

------
sroussey
Does GPT-3 actually exist? I only see stories about it, but don’t see it or
access to it at all.

Maybe it’s hanging out on Clubhouse...

~~~
est31
Access exists through a closed beta program, whose members can also only send
queries to it, not download the model. This site contains links to a Google
form to get onto the wait list of beta user candidates: [https://openai.com/blog/openai-api/](https://openai.com/blog/openai-api/)

~~~
Lambdanaut
There are also certain users that do have access to that API that provide
public access to it through their own user interface.

~~~
2bitencryption
can you list any? I know about AI Dungeon, but are there any others? Do they
all impose some type of filter (like AI Dungeon does) that shapes your input,
or do any provide "raw" access?

I just want to get my hands on GPT-3 so bad...

~~~
Lambdanaut
Look at AI Dungeon again. Notice the Custom option.

------
Animats
Well, we finally have The Great Automatic Grammatizator. (1953) [1]

 _"No, sir, honestly, it’s true what I say. Don’t you see that with volume
alone we’ll completely overwhelm them! This machine can produce a
five-thousand-word story, all typed and ready for dispatch, in thirty seconds.
How can the writers compete with that? I ask you, Mr Bohlen, how?"_

The story is amusing. They buy out all the hack writers and crank out content
under their names. The very few good writers they leave alone; they're not a
significant part of the market.

[1]
[http://www.bookophile.weebly.com/uploads/6/4/0/8/6408830/the...](http://www.bookophile.weebly.com/uploads/6/4/0/8/6408830/the_great_automatic_grammatizator_and_ot_-
_roald_dahl.pdf)

------
novalis78
“GPT-3 does not look much like an agent. It does not seem to have goals or
preferences beyond completing text, for example. It is more like a chameleon
that can take the shape of many different agents. Or perhaps it is an engine
that can be used under the hood to drive many agents.”

Maybe the worm is a GPT-3 analogue, with the difference that a desire to
replicate and seek nutrients is built into its nano-robotic physical shape
thanks to a set of genes. So then what if one could train on massive amounts
of data of physical worm-world interactions? Perhaps the resultant robot
would appear ‘conscious’, and like a true ‘agent’, simply because it’s in a
body now and capable of navigating an environment...

------
ggm
Keep focussing on the word _mindless_ and then ask yourself, semantically, why
"mindless" and "intelligent" feel like a contradiction in terms.

(I do not personally ascribe any intelligence to GPT-3 and this is not the
start of the singularity. There is no evident free-will, nor new knowledge
synthesised as theorems from existing knowledge, no introspection, no
understanding)

~~~
doubleunplussed
Humans do not have free will either. What we call free will is just a useful
abstraction over systems that are hard to predict and that respond to
incentives.

GPT-3 can add arbitrary four-digit numbers (with a low error rate); it knows
how addition works despite the specific sums it's computing not appearing in
its training data.

I think you need other criteria to rule out GPT-3 as a 'mind'

~~~
ggm
[edit: _I pass over the free-will vs determinism. I think its not fruitful for
what GPT-3 Is compared to what we are. If I am wrong, if I missed a nuance, do
elaborate._ ]

I think you need other criteria to include GPT-3 than the (admittedly really
interesting) point that it has constructed addition from the training data. I
would have a lot of subsidiary questions around that, and I note you said
"knows", which is kind of red-rag-to-a-bull here. In what sense does it "know"
the addition it constructed from the training?

The main questions here would be:

1) did it also "intuit" subtraction and why not? what did it do when it hit
negative numbers and can it add negative numbers to positive numbers and
negative numbers to negative numbers?

b) has it shown any inductive logic outcome and got to multiplication and
division? Even just integer..

4) can it detect implied sequence order even when the textual labels are
wrong?

~~~
visarga
There are examples of other language models that have learned to do hard math
problems, such as symbolic integration, which are difficult even for
professionals empowered with dedicated (non-learning) software.

------
woah
In a lot of these, you get the sense that the philosopher has not spent much
time, or any time at all, interacting with GPT-3.

------
perl4ever
I just had a thought...what if you asked GPT-3 to write a script for ELIZA,
given e.g. DOCTOR as an example?

~~~
ggm
once the bill hit, you would convert to PARRY very quickly

------
Baeocystin
I'm starting to think Scott Alexander's 2018 essay 'Sort by Controversial' is
in danger of losing its 'epistemic status: fiction' label, and I'm not joking
when I say that.

[https://slatestarcodex.com/2018/10/30/sort-by-controversial/](https://slatestarcodex.com/2018/10/30/sort-by-controversial/)

~~~
ggm
Epistemology is under-rated. Fiction becomes fact more rarely than people
think, because the components of fiction which are factualised are hypotheses
couched as fiction about potential reality.

That said, this is a great comment, and if we were on reddit I would gold you
fast. I always sort by "new".

------
typeformer
This comment was not composed by GPT-3 but you probably couldn’t tell if it
was...

~~~
ggm
The correct use of the apostrophe is a strong determinant you cannot be human.

~~~
kangnkodos
It could be Lore, but not Data.

------
mensetmanusman
I was waiting for ‘this post was written by GPT-3’ near the end

~~~
ggm
Ah, so we are now hunting for "last post" status. The problem being that,
unlike first post, it's always losable. Which reminds me of "the game", and if
anyone forgot they are playing it: I just reminded you, and by definition:
_you lost_ [edit: _and by definition I will too, and just did, maybe_ ]

------
mensetmanusman
I’m glad OpenAI hasn’t released this into the wild.

It is a component of a super weapon. If all information was to be free, we
would all be dead.

------
iamgopal
Imagine someone making a propaganda news generator out of it.

------
Zedmor
Are those essays GPT-3 generated?

------
YeGoblynQueenne
>> Why the hype? As it turns out, GPT-3 is unlike other natural language
processing (NLP) systems, the latter of which often struggle with what comes
comparatively easily to humans: performing entirely new language tasks based
on a few simple instructions and examples. Instead, NLP systems usually have
to be pre-trained on a large corpus of text, and then fine-tuned in order to
successfully perform a specific task. GPT-3, by contrast, does not require
fine tuning of this kind: it seems to be able to perform a whole range of
tasks reasonably well, from producing fiction, poetry, and press releases to
functioning code, and from music, jokes, and technical manuals, to “news
articles which human evaluators have difficulty distinguishing from articles
written by humans”.

I have to stop at this before going on with the rest of the text. GPT-3 is,
indeed, "pre-trained", on a huge corpus of unstructured text. The "range of
tasks" in the passage above are actually all just one task, language
generation. In fact, GPT-3 is specifically "pre-trained" on exactly that task,
generating language. It doesn't need to be fine-tuned any further.

I think what the article is trying to say is that GPT-3 can perform some tasks
that aren't usually thought of as strictly language generation tasks, such as
machine translation or question answering, without having been specifically
trained to do those things. "Not specifically trained to do those things"
means that, e.g., instead of being trained on examples of sentences in one
language and their translations in another, GPT-3 is trained on examples of
arbitrary text representing a snapshot of many, diverse uses of language.
Presumably, these diverse uses of language include translation, so GPT-3 also
picked up the ability to do some translation. The same goes for question
answering, etc. However, it should be noted that GPT-3 on the whole does not
score very highly on metrics designed to measure performance on such, let's
say, side-tasks. It certainly scores nowhere near as highly as specialised
systems, e.g. for machine translation. This is before looking at what those
metrics actually try to measure, which is usually only a very loosely defined
sense of success [1].
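
To make the "picked up translation from diverse text" point concrete: with
GPT-3, the task is specified entirely in the prompt (so-called few-shot
prompting), with no weight updates at all. A minimal sketch of such a prompt;
the commented-out completion call is hypothetical, just to show where the
model would be invoked:

```python
# A few-shot translation prompt: the task "specification" is a handful of
# in-context examples, and the model is simply asked to continue the pattern.
# No fine-tuning or gradient updates are involved.
prompt = (
    "English: Good morning.\n"
    "French: Bonjour.\n"
    "English: Thank you very much.\n"
    "French: Merci beaucoup.\n"
    "English: Where is the station?\n"
    "French:"
)

# A completion endpoint would be called here, e.g. (hypothetical client):
# completion = client.complete(prompt, max_tokens=20, stop="\n")

# The prompt contains two worked examples plus one query, and ends mid-pattern
# so the continuation is forced to be the translation.
num_pairs = prompt.count("English:")
print(num_pairs)  # prints 3
```

This is exactly why the "range of tasks" collapses into one task: everything,
translation included, is just language generation conditioned on the prompt.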

Anyway, I suppose I'll go on with the rest of the article, but I just wanted
to flag this inconsistency because it's a typical example of the subtle
misunderstandings that circulate in the lay press about systems like GPT-3.
Ultimately, for anyone who wishes to understand what systems like GPT-3 are,
the best source is the relevant research, plus of course a bit of background
in the subject and history of AI. If that sounds like a tall order, well,
tough. You can't replace knowledge of a field of research with more than 50
years of history behind it with a quick read of five or six online articles
and a dozen tweets.

_________________

[1]
[https://www.skynettoday.com/editorials/state_of_nmt](https://www.skynettoday.com/editorials/state_of_nmt)

