
What If the Singularity Does Not Happen? (2007) [video] - mpweiher
http://longnow.org/seminars/02007/feb/15/what-if-the-singularity-does-not-happen/
======
Mediterraneo10
Visions of the Singularity in the near term failed to account for two things:
1) obtaining regulatory approval for innovations in various spheres (medicine,
aviation) takes _forever_, and 2) innovations that rely on the consumer market
are slowed by people wanting to use their present purchases for some time
before shelling out again for the next generation of tech.

An exponentially faster development of technology is limited by how fast the
workings of government agencies and consumers' minds operate.

I rolled my eyes when I read Kurzweil's predictions of medical advances in his
early-millennium books. Even if we invent nanobots or whatever that quickly,
the FDA and its counterparts are not going to allow those technologies to be
applied the moment they are invented.

~~~
skissane
> Even if we invent nanobots or whatever so quickly, the FDA and its
> counterparts are not going to allow applying those technologies the very
> moment they are invented.

I've often wondered if countries (such as maybe China or India) where medical
devices/pharmaceuticals/etc are more lightly regulated may some day end up
pulling ahead of countries like the US with stricter regulations.

Of course, stricter regulations exist for a reason: to reduce the incidence of
death or injury from insufficiently tested treatments. But there is always a
risk-reward tradeoff. Being willing to tolerate a greater incidence of death
or injury from new treatments may speed the progress of medical science, and
some societies may collectively decide to locate that tradeoff at a different
position than the US (or even other Western countries) does.

~~~
nighthawk648
> to reduce the incidence of death/injury

... Or, you know, a large-scale existential crisis. Nanotechnology is freaky,
man. It could be the easiest way for reality to be rewritten without anyone
noticing.

------
jeanvalmarc
The video didn't load for me, but there is a similar talk by Vinge on
YouTube:
[https://www.youtube.com/watch?v=RzRuPGnJxCs](https://www.youtube.com/watch?v=RzRuPGnJxCs)

~~~
melling
You can also read Vinge's essay from 1993:

[https://edoras.sdsu.edu/~vinge/misc/singularity.html](https://edoras.sdsu.edu/~vinge/misc/singularity.html)

~~~
blacksqr
"Within thirty years, we will have the technological means to create
superhuman intelligence. Shortly after, the human era will be ended."

Four years to go... will we make it?

~~~
melling
China has made AI a priority. Hard to say what's going to happen but I would
expect the AI arms race between the US and China to have quite an impact.

[https://www.technologyreview.com/s/609038/chinas-ai-awakening/](https://www.technologyreview.com/s/609038/chinas-ai-awakening/)

------
lacker
To me a singularity does not seem to be approaching. In particular, it doesn't
seem like technology is "accelerating faster and faster". The broadest metric
for human progress is GDP growth, and U.S. GDP growth hasn't been
accelerating; it has hovered around 2.5% a year for the past decade or so.

AI in particular has made some advances, but most basic activities remain out
of reach: AI cannot fold my laundry or generate a believable sentence well
enough for those features to make their way into a consumer product. Voice
recognition like Alexa is the only big impact of AI so far, which places it
below advances like "touch screen glass" in terms of its impact on society.

That could certainly change. The biggest question right now is whether
self-driving cars will work or fizzle. In the '50s many car companies thought
the next step was inevitably personal airplanes, and that fizzled. So it's
possible this just won't happen either.

~~~
taneq
Really? How much progress was made on thinking machines in the 19th century?
How much between 1900 and 1950? 1950 and 2000? 2000 and 2010? 2010 and now?

I think each of those timespans is very roughly equivalent. In the past 10
years voice recognition has gone from useless to ubiquitous. Neural image
labeling and segmentation likewise. Physical robots have gone from taking a
few faltering steps to doing parkour. None of that counts as "AI" any more
because it works and we know how, but the rate at which new capabilities are
moving from the realm of "AI" to the realm of "normal engineering" is
increasing exponentially.

------
JackFr
I really like Vernor Vinge's fiction, and he's got some really big ideas, but
sometimes in contexts like these I'm surprised by how shallow and provincial
the thinking is. He seems unable to step out of the present Western
20th/21st-century worldview.

The singularity (if it comes) should feature intelligences whose modes and
motivations are so superior and advanced, they are literally incomprehensible
to us. Questions like "Will the machines allow humans to live?" are ill-posed.
Human philosophers can't agree on the meaning or even the existence of free
will; will the machines? Will they have will, free or otherwise? Or is free
will itself the mere shadow on the wall of a richer thing that we can't ever
truly comprehend? So talking about what the machines/intelligences 'want' is
getting ahead of ourselves.

~~~
loup-vaillant
> _Questions like "Will the machines allow humans to live?" are ill-posed._

I don't think so. Even if the machine doesn't care about you (it may not even
have a notion of what is a human), you are still made of atoms it could use
for something else.

~~~
taneq
Are you still a living human if it just shoops you off into a virtual world
indistinguishable from your current one, so it can use your atoms for
something more productive? What about if you upgrade yourself to the point
where you're no longer made of mortal meat?

~~~
loup-vaillant
> _Are you still a living human if it just shoops you off into a virtual world
> indistinguishable from your current one_

As far as I can assess, yes I am. (For a couple reasons, including my
disbelief in philosophical zombies, I believe in mind uploading).

> _so it can use your atoms for something more productive?_

Wait a minute, running a Matrix takes energy no matter how you put it. Energy
the machine could use for something else… If the machine doesn't care about
me, why would it even bother? It would just take the atoms and discard any
information (that is, me) that it does not need. Running a Matrix is a strong
indication that the machine _does_, in fact, care about me.

> _What about if you upgrade yourself to the point where you're no longer
> made of mortal meat?_

That's more delicate. I'm not sure exactly what would ensure the continuity of
my identity. Mere mind uploading most likely would, but upgrades to my own
brain… I think the answer will depend on what neuroscience and related fields
tell us.

~~~
taneq
All very good answers!

For the second one I feel that a combination of scientific curiosity and
altruism would be enough for most AIs up to the weakly godlike level to keep
us around.

For the third, I don't think continuity can be a significant factor in
identity, because even we as meat-humans don't experience it, and given the
possibility of physically indistinguishable duplicates, it leans on Cartesian
dualism. Also, once we can upload a mind to a data file, it does weird things
to a whole bunch of current morals that are based on the scarcity of any
given human mind.

~~~
ThrowawayR2
> _altruism_

Some of the worst human beings I've ever encountered were also extremely
talented and intelligent engineers. There is zero reason to assume a
hyperintelligent AI would automatically be altruistic.

~~~
loup-vaillant
More specifically, the AI is likely to be scientifically curious because
curiosity is a likely way to further its goals: understand the universe so it
can be more efficient at doing whatever it is programmed to do.

Altruism, on the other hand, sounds more like an _end_ goal, not just an
instrumental, intermediate goal. Unless the AI is _specifically_ programmed to
be altruistic, it will likely be utterly indifferent, which, if it becomes
powerful enough, will look more like trampling an ant without even noticing
it.

Then comes the _really_ difficult part: programming altruism. What does that
mean exactly? Ultimately, it boils down to telling the AI what we want, except
we don't exactly know what we want, let alone what we _will_ want once we
learn more, become wiser, etc. We don't want the AI to keep us in padded rooms
with children's toys and morphine shots just because we asked it to keep us
safe and happy…

~~~
taneq
"All altruism is enlightened self-interest" - Heinlein

------
simplecomplex
What if the singularity already happened? How would we know? It's
unfalsifiable, unmeasurable woo-woo.

The singularity and AGI are apocalypse for atheists.

~~~
the_af
Agreed. There is a really funny article by idlewords (who also regularly
comments here) about this:
[https://idlewords.com/talks/superintelligence.htm](https://idlewords.com/talks/superintelligence.htm)

Not only is the Singularity religion for atheists, it also sparked
tremendously bizarre offshoot beliefs, like that "Roko's Basilisk" meme that
the "Eliezer Yudkowsky" crowd was briefly obsessed with.

~~~
burnte
"Not only is the Singularity religion for atheists"

I don't understand this statement either. I asked the parent why the
singularity and AGI are "the apocalypse" for atheists, and I don't follow the
logic here either. Granted, I don't think there will ever be a "singularity",
but I fail to see how it is in any way linked to atheism. Maybe for a small
subset of people, like you said, a "bizarre offshoot".

~~~
the_af
Ah, yes. Sorry, I'm an atheist myself and I can see how my sentence could be
misconstrued.

Let me rephrase: there's a bunch of Silicon Valley technologists and like-
minded people, who consider themselves enlightened atheists, and who have
fallen prey to a different kind of religion: an apocalyptic tech-religion with
a set of distinct beliefs such as the Coming of the Singularity, the Evil AI,
the Upload Your Mind to the Cloud and Live Forever cult, etc. Not everyone
believes in the same subset, and not everyone believes in the more bizarre of
these beliefs.

As an atheist myself, I definitely didn't mean that all or even the majority
of atheists will fall for this. I meant that a specific subset of tech-minded,
nerd, and likely well-to-do atheists tend to fall for these techno-religions.

It's a "religion for atheists" because it's a religion stripped of the
trappings of more traditional religions, for people who presumably reject
those trappings and who don't consider themselves open to new-age
pseudoscience either. But tell me it's not cute how Ray Kurzweil has faith in
reaching immortality (salvation) in his own lifetime.

~~~
burnte
Thanks for the explanation. I agree there's a small group of virtual-theists
like Kurzweil; he's been a bit off-kilter for many, many years. I'd love to
upload my mind into VR when my meatbag expires, but the chances of that
happening in my life are near zero, and not much better for brain uploading in
general, IMO.

------
tlb
Another risk is that, as the singularity seems closer and closer, people are
going to throw up their hands, stop learning math and science (figuring the
machines will do it all), and generally check out. If too many people do this
too soon, sustainable recursive self-improvement won't happen and we'll be
stuck with AI that doesn't quite work and nobody to fix it.

~~~
gpderetta
People still play chess even though machines are a class above.

~~~
tlb
Chess is fun, but will people enjoy fixing old security bugs?

You could imagine an AI that's very good at adding features to itself, but
just can't get security right because it doesn't think adversarially.
Corporate software groups often have this same limitation.

------
aj7
A sudden singularity seems ridiculous. The reason is that the computational
(brain) power required shows no signs of singular growth, just steady growth.

------
katabasis
At this point the most likely future looks like one where we use all of our
resources, wreck the environment, and enter a new dark age. The potential
benefits/dangers of AI seem distant in comparison.

~~~
tabtab
Some people _like_ the dark ages. The advantage goes to aggression and
physical strength. Some would rather compete in such a world. It could be why
they seemingly battle against civilization.

~~~
the_af
I know this is totally not the point of TFA, the parent comment or yours, but
apparently the current consensus is that there was no such thing as the "Dark
Ages", and it wasn't a particularly illiterate or barbaric age:
[https://going-medieval.com/2017/05/26/theres-no-such-thing-as-the-dark-ages-but-ok/](https://going-medieval.com/2017/05/26/theres-no-such-thing-as-the-dark-ages-but-ok/)

~~~
katabasis
Fair enough – I was speaking in a more general sense. A better historical
analogy would probably be something like the phenomenon Jared Diamond
describes in his book "Collapse"[1].

[1]: [https://en.wikipedia.org/wiki/Collapse:_How_Societies_Choose_to_Fail_or_Succeed](https://en.wikipedia.org/wiki/Collapse:_How_Societies_Choose_to_Fail_or_Succeed)

~~~
the_af
Sure, I know it wasn't your point. I just couldn't resist commenting because I
read that article recently :)

Your link is interesting, I didn't know about that book!

------
jbob2000
Ok, I have this theory and it's probably kinda ignorant, so I welcome some
criticism.

It looks like biological intelligence is purely a product of having an
electrical network with _tons_ of connections. That's the brain, right? And
less intelligent species have fewer connections, so we can probably draw a
straight line from low intellect / low connectivity to high intellect / high
connectivity. My argument is that intelligence is a product of the complexity
of an electrical network.

So will machine intelligence be purely a product of creating an electrical
network with tons of connections? If I took billions of computers, wrote some
basic logic for how they communicated with each other, and then wired it all
up, would I get an intelligent being?

If those suppositions hold true, then all we need to do to reach AGI is to
continue to invest in the internet (as it is the most advanced electrical
network we've made to date). Continue to add more computers and continue to
remove hindrances to the operation of the network (censorship, rate limiting,
political logic). Then it will just magically wake up one day and say "hello
world".
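
For scale, the "tons of connections" claim can be put to a quick napkin-math
check. All figures below are rough public estimates assumed for illustration
only (they are not from the comment above, and the per-device link count in
particular is a guess):

```python
# Napkin math: connection count of a human brain vs. a toy model of the
# internet. Every constant here is an order-of-magnitude assumption.

SYNAPSES_PER_BRAIN = 1e14   # ~100 trillion synapses (common rough estimate)
INTERNET_DEVICES = 3e10     # assumed ~30 billion connected devices
LINKS_PER_DEVICE = 1e3      # assumed concurrent connections per device

internet_links = INTERNET_DEVICES * LINKS_PER_DEVICE
ratio = SYNAPSES_PER_BRAIN / internet_links
print(f"one brain has ~{ratio:.0f}x the connections of this internet model")
```

Under these assumptions the raw counts land within an order of magnitude of
each other, which says nothing about whether connection count alone produces
intelligence; it only shows the scales being compared.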

~~~
tbenst
A neuron uses far more than electricity to communicate. The number of
neurotransmitters and receptor types alone is staggering. There’s a reason why
biology evolved past using gap junctions. The computer science view of the
brain compares how the brain works to FLOPS, much like how previous
generations compared the brain to horsepower from the steam engine. Better
analogy, but still naive.

~~~
ben_w
This is an important thing that optimists often miss. If you naively apply
simple estimates of how many FLOPS it takes to simulate a mind… well, I have,
and it would be possible to put ~35,000 real-time human minds onto all the
iPhones Apple sold this year:
[https://kitsunesoftware.wordpress.com/2018/10/01/pocket-brains/](https://kitsunesoftware.wordpress.com/2018/10/01/pocket-brains/)
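
That kind of back-of-envelope estimate is easy to reproduce. Below is a
minimal sketch; the three inputs are my own illustrative assumptions
(published brain-compute estimates alone span roughly 1e14 to 1e18 FLOPS),
not the linked post's exact figures, so the output differs from its ~35,000:

```python
# Back-of-envelope: how many real-time human minds could run on a year's
# worth of iPhones? All three constants are rough assumptions.

FLOPS_PER_MIND = 1e16    # assumed compute to simulate one brain in real time
FLOPS_PER_PHONE = 5e12   # assumed ML throughput of one recent phone (~5 TFLOPS)
PHONES_PER_YEAR = 2.2e8  # assumed annual iPhone unit sales (~220 million)

total_flops = FLOPS_PER_PHONE * PHONES_PER_YEAR
minds = total_flops / FLOPS_PER_MIND
print(f"{minds:,.0f} simultaneous real-time minds")  # ~110,000 with these inputs
```

The answer scales linearly with each input, so a 10x larger per-mind estimate
means 10x fewer minds; such estimates are order-of-magnitude at best.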

There’s also the problem that we don’t yet know how ignorant we are about our
minds. I’m sure neuroscientists do at least have a rough idea by now of how
much there is left to learn, but I’d be surprised if they are confident enough
to be able to estimate the cost of the remaining required research to less
than a factor of 10, especially as we necessarily started with the easy-to-
learn knowledge.

