
Welcome to the Next Level of Bullshit - jelliclesfarm
http://nautil.us/issue/89/the-dark-side/welcome-to-the-next-level-of-bullshit
======
reilly3000
At risk of being too pedantic even for HN, I have to say that a news article
written by GPT-3 isn't "Fake News". Maybe you could call it "artificially
authored news" or something, but nothing about synthesizing and regurgitating
words is inherently fake. "Fake News" is a loaded term that fact-checkers tend
to avoid for its ambiguity. Generally its usage refers to disinformation,
which is the use of media to intentionally deceive the reader for political or
social motivations.

It's terrifying to imagine artificially authored disinformation, but from my
sparse understanding of GPT-3, it wouldn't be the right tool for crafting
novel disinformation without a lot of input from its user. Disinformation is
dangerous when represented as truth by platforms with credibility, and no
content creation tool can garner and wield credibility on its own. That said,
it could certainly wreak havoc with mass commenting campaigns and such.

------
warent
"GPT-3 is a marvel of engineering due to its breathtaking scale. It contains
175 billion parameters (the weights in the connections between the “neurons”
or units of the network) distributed over 96 layers. It produces embeddings in
a vector space with 12,288 dimensions."

I don't know much about AI, though I do know about programming, and to me this
vaguely smells like "our program is so great because it has 1 million lines of
code!"

Does the number of parameters, dimensions, etc., really have anything to do
with how breathtaking and marvelous something like this is?
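
For a sense of scale, here's a quick back-of-the-envelope (assuming 2-byte
weights, which is my assumption; the quoted passage doesn't give a precision):

```python
# Back-of-the-envelope: what the headline numbers imply for raw storage.
# Assumes 2-byte (fp16) weights -- an assumption, not from the article.

params = 175e9       # parameters (weights), per the quote
bytes_per_param = 2  # fp16, assumed
layers = 96          # per the quote

print(f"Raw weight storage: ~{params * bytes_per_param / 1e9:.0f} GB")  # ~350 GB
print(f"Average parameters per layer: ~{params / layers:.2e}")          # ~1.82e+09
```

So just storing the weights takes hundreds of gigabytes, far more than any
single commodity GPU holds. Whether size translates into capability is a
separate question.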

~~~
donw
It does.

We can see a very strong correlation between brain size and overall
intelligence within the animal kingdom. The larger the brain, the smarter the
animal.

Effectively, GPT-3 has a bigger brain.

~~~
neatze
Your statement is just false: [https://en.wikipedia.org/wiki/Brain-to-body_mass_ratio#/medi...](https://en.wikipedia.org/wiki/Brain-to-body_mass_ratio#/media/File:Brain-body_mass_ratio_for_some_animals_diagram.svg)

And this is without even including animals such as parrots and octopuses.

~~~
donw
I stand corrected!

Pretty sure it holds for hominids, though.

~~~
neatze
If I read this correctly, it does hold in a very narrow sense: only when
comparing within the hominid evolutionary path.

In general, brain size is proportional to body mass (more sensors, bigger
brain), and arguably has little to do with intelligent, effective behavior.

To the best of my knowledge there is no known method to estimate the minimum
number of neurons needed even for simple problems, let alone complex ones.
There is convergence toward some form of cerebral cortex between species, but
octopuses break this model to a large extent (I might be wrong here). Things
get even more complicated when you account for the space between individual
neurons.

------
curiousgal
The biggest bullshit, to me, is people confusing pattern matching with
intelligence. Sure, the model is outputting coherent text, but it has no
fucking clue what it's talking about.

~~~
GarrisonPrime
Fair enough, but the argument could be made that even human-level intelligence
is just an advanced degree of pattern matching.

------
johndoe42377
Well, this could be explained in a few meta-principles or just principles of a
proper (non-abstract) philosophy.

1\. A map is not the territory. Weighted connections are not semantic relations.

2\. The environment and its laws and constraints come first.

3\. Language is a tool of describing What Is, not a tool of producing what
could be.

4\. As in the untyped lambda calculus, applying anything to anything produces
bullshit.

5\. Proper use of a language requires a type discipline which reflects the
laws and constraints of the environment, and rejects sentences which are not
type-correct.

Everything else will produce bullshit. Theoretical physics and other
abstraction-based fields are thus flawed.
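
A toy sketch of point 5 in Python, just to make the idea concrete (this
mini-language is entirely my own construction, not anything from the article):
encode the environment's constraints as types, and reject applications that
aren't type-correct.

```python
from dataclasses import dataclass

# Toy "type discipline": sentences that violate the Noun-Verb-Noun
# constraint are rejected instead of produced. Hypothetical example.

@dataclass
class Noun:
    word: str

@dataclass
class Verb:
    word: str

def sentence(subject, verb, obj) -> str:
    # The "law of the environment": only Noun-Verb-Noun is well-typed.
    if not (isinstance(subject, Noun) and isinstance(verb, Verb)
            and isinstance(obj, Noun)):
        raise TypeError("not type-correct: rejected")
    return f"{subject.word} {verb.word} {obj.word}"

print(sentence(Noun("cat"), Verb("chases"), Noun("mouse")))  # cat chases mouse

try:
    sentence(Noun("cat"), Noun("mouse"), Verb("chases"))     # ill-typed
except TypeError as e:
    print(e)                                                 # not type-correct: rejected
```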

------
axegon_
For years we've been hearing how AI will build robots and wipe out humanity
and the bollocks that is the trolley problem. I remember when I was reading
the Unsupervised Cross-Domain Image Generation[1] paper and my immediate
thought was "yep, I can see this going south". And sure enough, not long after
deepfakes became a thing. GPT-3 is absolutely astonishing in terms of it's
capabilities and I'd love to be able to dig into it's inner workings and
scroll through it's code. The truth is there are three stoppers for the large
majority of people who would love to exploit it.

1\. Data. For better or worse, obtaining a dataset that big isn't a huge
hurdle if you really want to do it: Project Gutenberg, the wiki corpus, the
reddit dumps. Difficult but definitely doable.

2\. Costs. Training the model costs ~$5M, which is a considerable amount of
money by anyone's standards (rich people will also have second thoughts when
they hear that number). But there is a catch: the hardware is becoming more
and more accessible. Remember when a server-grade GPU like the P100 was ~$10k
a piece? Now the high-end 30-series cards are 1/10th of that and have better
specs... Adjust those numbers and you get something close to a 20x price
decrease in the course of 4 years (iirc the P100 came out in 2016); see the
rough arithmetic after this list.

3\. Finding people with the adequate knowledge to build something like this.
This, I think, is the only real blocker at this point. Realistically we are
talking about a few dozen people on earth who have the mental capacity to
build something like this.
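
The arithmetic behind point 2, as a quick sketch (the dollar figures are the
rough recollections above, not checked quotes):

```python
# Rough arithmetic on the GPU price trend. Prices are the comment's
# approximate recollections, not verified market data.
p100_price_2016 = 10_000   # ~street price of a P100 at launch
rtx30_price_2020 = 1_000   # ~1/10th of that for a high-end 30-series card

years = 2020 - 2016
price_ratio = p100_price_2016 / rtx30_price_2020
annual_factor = price_ratio ** (1 / years)
print(f"~{price_ratio:.0f}x cheaper over {years} years "
      f"(~{annual_factor:.1f}x per year)")
# The "20x" figure above presumably also folds in the newer cards'
# better specs, not just the sticker price.
```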

If there is one thing that I see as a potential threat in this field, it's
information losing credibility.

[1] [https://arxiv.org/abs/1611.02200](https://arxiv.org/abs/1611.02200)

~~~
phobosanomaly
Regarding point number 3, I wonder if there might be more such people than we
think, sequestered in various defense projects in different countries around
the globe?

~~~
peterlk
I think there are probably more than a dozen, but they are working on other
projects. GPT-3 is cool, but it's not really commercially viable yet. There
are lots of more immediately profitable projects to work on at the large
enterprises that employ these capable people.

------
leshokunin
We tried GPT-3 for our email startup (Mailscript), initially as a fancy way to
detect and understand the content of an email. That didn't work out great,
because it's really prone to false positives and ultimately requires more
work than fancy regex. We're still hopeful it will solve other problems we
run into, but we're not going to push for it until we actually find those
problems.

I'll share the lessons learned from the implementation if that's interesting
to people here.
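
To give a flavor, here's a minimal sketch of the regex-style extraction that
ended up being less work for us (the field and the pattern are hypothetical
examples, not Mailscript's actual code):

```python
import re

# Minimal sketch of regex-based email extraction. The field (a UPS-style
# tracking number) and the pattern are hypothetical examples.
TRACKING_RE = re.compile(r"\b1Z[0-9A-Z]{16}\b")

def extract_tracking(email_body: str) -> list[str]:
    """Return any UPS-style tracking numbers found in an email body."""
    return TRACKING_RE.findall(email_body)

print(extract_tracking("Your order shipped! Tracking: 1Z999AA10123456784"))
# ['1Z999AA10123456784']
```

A pattern like this either matches or it doesn't: there's no prompt to tune
and no plausible-sounding wrong answer.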

------
phobosanomaly
Maybe bullshit could wind up being useful in unexpected ways.

It could be an interesting tool to use to workshop ideas.

For example, if you were trying to work your way through an idea, you could
throw various aspects of your idea at it, and it would throw back a slightly
different take.

Rubber-duck debugging, but the duck talks back, and you can throw your
fundamental assumptions about life at it.

------
grensley
Oh god, was this written by GPT-3 too?

Feels like we're headed for the next level of the SEO dark ages, where the
shovelware content that was previously written by humans can now be automated.

~~~
snuxoll
I was literally sitting here waiting for the punchline, and was mildly
disappointed.

