
The AI Misinformation Epidemic - aaronyy
http://approximatelycorrect.com/2017/03/28/the-ai-misinformation-epidemic/
======
ExactoKnight
The AI and Singularity hype irks me, because I genuinely agree with Peter
Thiel's argument that technological progress is actually _decelerating_
relative to how it was moving 60 years ago, once you look beyond the advances
we have seen in information technology and finance.

~~~
Banthum
True that. 1880-1950 (70 years) took us from train-and-telegraph to jet-and-
atom bomb. Horse and carriage was the top-tier tech for personal
transportation in 1880, and they were just starting to understand that
diseases are often caused by these little blobby creatures called germs.

1950-2020 (70 years) will take us much less distance.

Almost everything we're developing seems to be about information processing.
If you look at the technologies that _actually do things in the physical
world_ , it seems like almost nothing has changed since the 60's. Cars,
trains, bridges, rockets, sewers... all more or less the same.

~~~
paperpunk
I don't think it's reasonable to equate 'technologies that existed since the
60s' with 'technologies that are ubiquitous today'.

I'm in the bottom 15% of my country (UK) by income and I have access to things
like: near-instantaneous hot water at all times, affordable next-day delivery
services from a pocket-sized device I carry around with me, international air
travel that is affordable for me, efficient fridge/freezer technology, fresh
groceries from around the world at every local convenience store.

And yes, things look a lot less impressive if you filter out information-
technology-related things. But advancements will always look less impressive
if you filter out the most impressive advancements. The fact that we are now
almost permanently connected across geographical boundaries isn't something
to be dismissed out of hand. The effectiveness and miniaturisation of
communications devices (e.g. 'true wireless' earphones with mobile internet)
are really approaching practical telepathy, at least in how they can be used.

~~~
ExactoKnight
Almost everything you listed there already started becoming common by the
1960's.

Step back and forget about _information technology_. Look at the world of
atoms, not at the world of bits.

Since the decommissioning of the Concorde, our fastest commercial means of
transportation has actually been getting _slower_, not faster. Man hasn't
reached farther out into space than the moon missions of the 1960s. The first
half of the 20th century brought us: antibiotics, electricity, automobiles,
air travel, rockets, space travel, satellites, radio, reliable clean drinking
water, indoor heating, laundry machines, dishwashers, widespread indoor
plumbing, and massive improvements in sanitation / literacy.

The second half of the 20th can't even _come close_ to this level of
technological growth.

~~~
notahacker
Not sure I'd consider stuff like _the vast majority of human knowledge sitting
in my pocket_ to be less useful than the dishwasher because it has fewer
moving parts or _most people can afford to fly_ to be less of an achievement
than _the megarich can fly 50% faster_.

Recent decades might be a disappointment compared with the millennialist
interpretation of exponential curves, but that's only the same as the
disappointment a Victorian idealist might feel at the lack of stuff that
happened in the twentieth century as a whole, given that neither utopian
socialism nor the Second Coming has occurred yet.

~~~
coldtea
> _Not sure I'd consider stuff like the vast majority of human knowledge
> sitting in my pocket to be less useful than the dishwasher because it has
> fewer moving parts or most people can afford to fly to be less of an
> achievement than the megarich can fly 50% faster._

Not sure anybody told you to consider that.

What they did ask you to consider is that technological progress, outside of
the digital realm, has slowed down.

~~~
meri_dian
The fact that a relatively low income individual has access to a large number
of goods and services which were once considered exclusive to the wealthy is a
clear indication that technological progress has continued at a steady pace.

While 'slowed down' or 'sped up' are difficult to quantify, the fact that 3
billion human beings have gained access to smartphone technology over the last
decade seems to support the idea that technological progress has in fact sped
up.

Increasing air transportation speed is a very narrow application of technology
and hardly constitutes a meaningful gauge of technical innovation.

~~~
coldtea
> _The fact that a relatively low income individual has access to a large
> number of goods and services which were once considered exclusive to the
> wealthy is a clear indication that technological progress has continued at a
> steady pace._

No, it's just a clear indication of market efficiencies and/or better
engineering.

It doesn't say anything about what the grandparent asked for: the rate and
magnitude of new scientific/technological discoveries.

~~~
soundwave106
One era (early 20th) is the industrial revolution; one era (approx. 1960-now)
is the information revolution. It is true that the industrial revolution has
slowed down and mostly focused on incremental improvements, but it seems
strange to discount the information revolution gains completely. There's been
a _lot_ of information revolution gains.

That said, I would argue that medicine too has made some pretty significant
advances in the 2nd half of the 20th century... mostly in surgery techniques
(transplants are 2nd half 20th century), scanning techniques (NMR and CT both
were 2nd half 20th century), and pharmaceuticals. Significant vaccines (polio,
measles, mumps) were 2nd half of the 20th century developments. Genetic
science has made huge gains as of late. Etc.

------
TuringNYC
Here is my annoyance with AI Hype: people seeking extra tailwinds pitch their
startups as "ML companies" even with the most tangential usage of ML. It
drowns out real ML companies. Most people cannot tell the difference.

At a hackathon recently, someone pitched their app as an "ML-driven app,"
though the only ML in there was a one-line language translation feature
consumed off Watson REST services for a tangential part of their app.

Meanwhile, my submission actually used a self-trained CNN on a custom dataset
using TensorFlow and changes to start/end layers on the NN. The image
classification features were the core of the app, and it wasn't something that
was just a wrapper over an off-the-shelf API. We actually tried multiple
networks and went through the trouble of parameterizing everything.
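
For anyone curious what "changes to start/end layers" looks like in practice,
here's a rough sketch of that style of transfer learning in TensorFlow/Keras
-- not my actual code; the base model, input shape, and class count are
placeholders:

    import tensorflow as tf

    # Reuse a pretrained convolutional base, drop its original classifier
    # ("end layers"), and train a new head on the custom dataset.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3),   # placeholder input size
        include_top=False,           # discard the original top layers
        weights="imagenet")
    base.trainable = False           # freeze the pretrained features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation="softmax"),  # 5 classes, assumed
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, epochs=10)  # train_ds: your labeled image dataset

Even this simplified version involves real choices (which base network, what
to freeze, how to parameterize the head) that a one-line call to someone
else's API never surfaces.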

At the end, I wonder how many judges actually understood the difference in
effort/value between the two attempts at ML.

~~~
AndrewKemendo
Difficulty level 1: The only ML in there was a one-line language translation
feature consumed off Watson REST services for a tangential part of their app

Difficulty level 2: Self-trained CNN on a custom dataset using TensorFlow and
changes to start/end layers on the NN

Difficulty level 3: Custom-rolled 12-layer CNN trained with novel hand-
labeled data.

How many people do you think know the difference between these, and the fact
that each step is an order of magnitude or more harder than the last? That's
not even getting into trying to apply research-grade stuff.
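
To make the jump concrete: level 1 is roughly one line of code against
someone else's endpoint, while level 3 means specifying and defending every
layer yourself. A toy sketch of the level 3 shape, with all sizes invented
for illustration:

    import tensorflow as tf

    def build_custom_cnn(num_classes):
        # Six conv + pool pairs = a 12-layer feature extractor, then a head.
        model = tf.keras.Sequential([tf.keras.Input(shape=(128, 128, 3))])
        for filters in (32, 32, 64, 64, 128, 128):
            model.add(tf.keras.layers.Conv2D(filters, 3, padding="same",
                                             activation="relu"))
            model.add(tf.keras.layers.MaxPooling2D())
        model.add(tf.keras.layers.Flatten())
        model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
        return model

    # Every number above (depth, filter counts, kernel sizes) is a choice
    # you now own, and each one can sink the model -- before the novel
    # hand-labeled data even enters the picture.
    model = build_custom_cnn(num_classes=10)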

~~~
erkkie
Why should people need to be able to tell the difference, rather than just
judging the outcomes of those products, unless someone (investors?) is
specifically buying into the AI hype itself? (In which case, this is no
different from when everything was "social".)

~~~
TuringNYC
I'm not saying people should be able to tell the difference, nor am I
discounting the value of the final outcome. Just that pitching an app as ML-
driven, when it is barely 1% ML-driven, is deceptive and misinforms the
public.

Consider taking the side mirror from a Ferrari, putting it on a Ford Escort,
and then pitching the Ford Escort as a race-car. The Ford Escort may be a
great family car, but it is _not_ a sports car. People who don't know the
difference might come to think of it as a sports car, and might even question
the wisdom of spending money on a Ferrari.

~~~
dragonwriter
If that 1% is essential to the purpose of the app, it seems reasonable no
matter what % of the code of the app it represents.

~~~
TuringNYC
Going along with my racecar example...the side view mirrors are essential...so
does that make the Ford Escort a racecar?

~~~
dragonwriter
The analogy doesn't work, because "racecar" isn't a technology used in the
side mirrors.

------
apsec112
I'm very interested in this topic, but I'd hold off on discussing until the
other posts are out. The current post is largely just a summary of what future
posts will say, and doesn't cover much by itself.

------
charles-salvia
Yeah, terms like "machine learning" and "AI" have basically become buzz words
which, to most laymen, probably encourage the idea that we're on the cusp of
creating Data from Star Trek.

Unfortunately, the reality is that.... sorry... it's mostly just statistical
algorithms based around regression and intermediate calculus. State-of-the-art
"deep" neural networks are not really anything like the absurdly parallel,
asynchronous biological networks that power our neocortex. Rather, they're
basically an application of matrix multiplication designed to "learn" a
function by iteratively minimizing an error value using gradient descent. It's
still very unclear what, if anything, this algorithm has in common with how a
human brain actually operates. It turns out that these kinds of statistical
algorithms can work pretty well when you have petabytes of data to learn from.
But we're still not anywhere near the unsupervised learning capabilities
demonstrated by a human infant.
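
To underline how un-mystical that core loop is, here it is in miniature: a
toy one-layer "network" learning a linear function by gradient descent (real
networks stack more of the same, with nonlinearities in between):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))        # 100 samples, 3 features
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w                       # the function to be "learned"

    w = np.zeros(3)
    lr = 0.1
    for _ in range(200):
        err = X @ w - y                  # forward pass: a matrix multiply
        grad = X.T @ err / len(X)        # gradient of the mean squared error
        w -= lr * grad                   # descend
    print(w)                             # converges toward true_w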

~~~
eli_gottlieb
> It's still very unclear what, if anything, this algorithm has in common with
> how a human brain actually operates.

Not very much at all. Most psychological evidence shows that human beings seem
to operate off composable, hierarchical generative models and probabilistic
inference. Deep neural networks are basically just huge continuous circuit
approximators.

~~~
akyu
> composable, hierarchical generative models and probabilistic inference.
> Deep neural networks are basically just huge continuous circuit
> approximators.

These two things are not mutually exclusive.

~~~
eli_gottlieb
What sort of neural network implements human-style learning and inference?

~~~
shahbaby
Numenta's HTM

------
Fricken
Welcome to the club. Now you know how Economists, healthcare professionals,
skateboarders, basket weavers and anybody else whose domain knowledge runs
deeper than the average joe's feels whenever their area of technical expertise
becomes the subject du jour for the public at large.

~~~
ThomPete
In Danish we have a word, "fagidiot", which roughly means "idiot of your
field". It's the kind of thing that makes it impossible for someone who's been
in the army to enjoy a movie if the actors don't use the machine gun properly,
or for a designer to appreciate the cast credits at the end if the typography
is not kerned properly.

The things that matter to someone who has spent a lot of time in any given
field are rarely the important things to anyone else.

Edit: And no, it's not pronounced with a hard _g_ but with a soft _g_.

~~~
JSoet
Similar, but closer to many of the people mentioned in the original post, is
the German phrase "Gefährliches Halbwissen", or 'dangerous half-knowledge':
when you know just enough to seem knowledgeable about a subject, but actually
have only a very basic, surface-level knowledge.

(ps. I'm not a native German speaker, so sorry if I misrepresented the phrase,
but this is how it was explained to me.)

------
akyu
I'm not sure which I find more annoying; the AI hypers, or the people who
insist that machine learning is "basically just matrix multiplication". As a
researcher, none of it makes a difference to me either way.

~~~
sddfd
I think part of the excitement is justified: computers can translate between
speech and text, and text of different languages.

On the other hand neural networks are most likely just one tool in a larger
toolset that will be needed for general AI.

~~~
Retric
Computers don't really translate languages. They take a sample of language A
and turn it into something close to language B, and people then translate that
something-close-to-B into actual language B.

I don't mean this as some abstract philosophical argument, but rather as an
outgrowth of how they operate. Modern methods are much better than this, but
even the simplest mechanical translation of each word in language A into one
and only one word in language B lets people get some value. But improving
without understanding has some inherent limitations. Humans run into similar
problems when they try to translate complex source material they don't
understand.

Generally this is not a major problem, but it can be.
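
A toy version of that crudest mechanical approach, with a made-up
mini-vocabulary (not any real system's dictionary):

    # Map each word of language A to exactly one word of language B.
    lexicon = {"the": "der", "dog": "Hund", "bites": "beisst", "man": "Mann"}

    def word_for_word(sentence):
        return " ".join(lexicon.get(w, w) for w in sentence.lower().split())

    print(word_for_word("The dog bites the man"))
    # -> "der Hund beisst der Mann": intelligible, but the second article
    # should be accusative ("den Mann"). That residue is the "something
    # close to B" a human still has to finish translating.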

------
bluetwo
Remember those AT&T commercials "You Will..."?

They won a bunch of awards for them but most of the technologies, if they came
to be, were brought to you by people other than AT&T.

Same thing with the next steps in AI. They might happen, but they aren't going
to come from the companies who currently have large marketing departments
looking for something to hype.

------
tempodox
The epidemic starts with calling it “AI”.

~~~
lottin
Agreed, FI (for fake intelligence) would be more accurate :)

~~~
nostrademons
I think AI is accurate, but it's actually short for "aggregated intelligence".
Most AI algorithms can't actually generate "intelligence" as we know it.
Rather, they _aggregate_ the intelligence of the humans who contributed to
their training data, and then generalize it to situations where those humans
are not present. If the humans are stupid (as we saw with Microsoft's Tay
chatbot), the AI is going to be pretty stupid as well. If the humans are
racist, the AI is going to be racist as well.

~~~
visarga
> Most AI

AlphaGo would be a counterexample. It was trained by self-play.

~~~
nostrademons
Bootstrapped by supervised learning, though: it was trained on 30 million
positions from human games before the reinforcement learning kicked in.

------
throw2016
The amount of glee at unemployment and disempowerment expressed during most AI
discussions here is truly disturbing, especially given the nascent stage of
things. The irony of the accompanying laments about H1-B visas and outsourcing
sits starkly alongside it.

It reveals not a genuine excitement about technology progress and a better
world but a darker underbelly of a self obsessed and insular tech community
composed seemingly of closet tinpots itching to climb up the food chain. How
can this lead to any positive outcome?

Same old, only replaced by a new group. Technology can leap, but human
mindsets remain stagnant around power and greed.

------
bischofs
Stop calling it "Machine Learning" and "Neural Networks" if you don't want the
attention. Do we refer to code or computers as "Machines"? That kind of
language is just begging for Skynet references.

Any sort of elaborate processing network could be called "Neural", or maybe
even a fancy linked-list. Something more reasonable like a "learning network"
would keep the sensationalism down. I'm sure we can come up with better
names...

~~~
justin66
> Do we refer to code or computers as "Machines?"

I do, sometimes. I don't know where I picked up the habit, to be honest, but
it might date back to the eighties or nineties. (I know I'm not the only one)

~~~
bischofs
Fair enough, but take anything in computers and replace it with "Machines" and
it illustrates the point. RAM could be called "Machine memory", or a network
could be called a "Machine communication matrix", or in reverse, "Terminator
3: Rise of the Computers".

~~~
justin66
Yeah, I won't try to defend the "machine learning" moniker, except to point
out that it's not _nearly_ as dorky as "deep learning."

------
jacquesm
A lot of start-ups are hyping themselves as deep-learning or something close
to that in order to make themselves appear 'hot'. Recently I looked at a
company that had absolutely nothing to do with deep-learning or even any kind
of machine learning whatsoever and that _still_ managed to sprinkle the
various buzz-words with great regularity throughout their investor-targeted
docs.

What I don't get about this behavior is this: It won't work, and in fact is a
net negative, so why do it in the first place?

Investing is a trust thing, if you break trust before you even get to talk to
your potential investor there is no way you will raise money from them.

~~~
thearn4
What were the technical buzzwords that companies used to signal hotness in the
industry before the ML hype train (startup or not)?

I know on the process & management side for large organizations, there's been
a revolving door of Lean/Agile/6 Sigma/ISO 9000, each with their own set of
certifications, colored belts, and army of advisors/consultants.

I like the idea of always being retrospective and asking what can be improved,
but it's made me a bit cynical seeing management chase fads based on whatever
a vendor tells them.

~~~
jacquesm
Web 2.0, Semantic web, cloud, async, big data... you could keep this up for
quite a while.

~~~
cr0sh
I tend to think this whole "hype marketing" thing has been going on for a very
long time; I can't quantify how long.

I do know that if you look at various technology marketing around the decade
after the release of Capek's RUR (1920) - almost everything that was remotely
mechanical or automatic or electronic was referred to as a "robot".

So it has been going on at least that long, and I suspect longer.

------
ilaksh
Most of the people who have a strong machine learning or deep learning
background are actually unaware of the large amount of existing research that
applies to artificial general intelligence (AGI) because that group is just
not mainstream.

But there are groups who are combining cutting edge neural network research
with AGI and seriously trying to build general intelligence. Researchers who
are working on narrow AI tasks and are familiar with this will continue to be
incredulous right up to (and possibly beyond) the point where they see a
system that seems intelligent to them.

~~~
wiricon
Care to share links to those labs' websites/relevant papers? I'm a deep
learning researcher, and I've been noticing this gap that you mentioned
between the DL community and what everyone else in AI is doing, and think it
might be worthwhile trying to bridge that gap.

~~~
ilaksh
Google for publications by Deep Mind, Ogma, Good AI, Open AI, Numenta,
OpenCog, 'Towards Deep Developmental Learning', projects related to new DARPA
program 'L2M'. Also search for projects labeled AGI.

~~~
claytonjy
I'm unfamiliar with most of those, but aren't Deep Mind and Open AI pretty
standard sources of knowledge in the DL community these days? I say this as a
practitioner who has read some things from both, and had the impression that
while cutting-edge, they are both "traditional" DL-focused institutions at the
moment, not so concerned with AGI.

------
didibus
Sounds like only a primer; I couldn't find any glimpse of what the problem
actually is, though.

~~~
nfd
It's a series of articles the author is starting. Check back soon.

~~~
ndh2
Um, no. At this point it's some PhD candidate claiming to start a series of
blog posts. Do you know how many blog posts there are out there claiming a
second part is coming?

Either do something or don't. Don't waste other people's time publishing your
grandiose plans. If you want accountability, tell people you actually know,
not the internet.

Not so much his fault really, but I'm a bit disappointed that this got upvoted
so much.

------
Chris2048
Author says (in comments) in support of Singularity being nonsense:

> _Technology_ is not a quantity

Er, yeah, "technology" is a complicated thing, but it can be considered a
quantity. Computing power grows, and so do automation and material knowledge.
Dismissing a useful concept as religion because it isn't mathematically
rigorous isn't very objective.

------
tim333
>see the recent Maureen Dowd piece on Elon Musk, Demis Hassabis, and AI
Armageddon in Vanity Fair for a masterclass in low-quality, opportunistic
journalism

I finally read through most of the 8000+ words of it and it's not that bad -
mostly a bunch of interview quotes from Musk, Kurzweil et al and some of it's
a bit sceptical eg:

>When I mentioned to Andrew Ng that I was going to be talking to Kurzweil, he
rolled his eyes. “Whenever I read Kurzweil’s Singularity, my eyes just
naturally do that,” he said.

------
psyc
IMO, the practical sorts, such as this author, who want everybody's vision for
AI to be as narrow as their present-year work is, suffer from the reverse
affliction to what they think futurists suffer from. I will call this
affliction "rationality-signaling".

------
graycat
From all I can see, nearly all the present AI boils down to one word -- hype.

From what is really going on, nearly all of it looks like (1) in some cases a
lot of new data, (2) some new, faster processor hardware, e.g., based on
graphics processing units (although the x86 processors are astoundingly fast
anyway), able to manipulate the new data, (3) for what manipulations to do on
the data, some tweaks to some of the work of L. Breiman and his _CART --
Classification and Regression Trees_, (4) fitting with S-shaped (_neural_)
sigmoid functions, and (5) the radar, etc., engineering of autonomous
vehicles. E.g., the _neural networks_ might be able to simulate the operation
of a neuron in a worm.

What I don't see is (A) much progress in better methods for how to manipulate
the data, that is, the basic applied math, and (B) progress in working with
concepts and causality -- in the history of science, progress with concepts
and causality did well where the new methods would need a Nevada full of disk
drives of data. E.g., space flight navigation is based, first, on Newton's law
of gravity and second law of motion, not on fitting massive amounts of data
via _deep learning_.

Uh, when the AI people use classic regression analysis as in Draper and Smith,
etc. and the IBM Scientific Subroutine Package, SPSS, SAS, R, etc. to find
some regression coefficients, they claim that their machine _learned_ the
coefficients. Gee, I didn't see that in, say,

C. Radhakrishna Rao, _Linear Statistical Inference and Its Applications:
Second Edition_ , ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967.

or Breiman's _CART_.
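
E.g., here is a toy version of what gets billed as the machine _learning_
some coefficients -- data invented for illustration; any reader of Draper and
Smith would just call it fitting a regression:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 2))
    y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=50)

    # Ordinary least squares, centuries old; no "learning" required.
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(coeffs)    # close to [3.0, -1.0]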

Again, the rest I see looks like hype.

E.g., it appears that there is a basic, clever publicity/hype idea: Whenever
you do some technical work, give it a catchy name. Then just use the name in
the hype and ignore what is really going on in any math or science.

E.g., a while back I published some work in statistical hypothesis testing
that is both multi-dimensional and distribution-free. The intended use was for
high quality hypothesis testing of _zero day_ problems in computer networks
and server farms. Alas, I neglected to give the work a catchy name!

The OP seemed surprised that a big, famous company might say things when they
know better. Why not? Just assume that they are trying to sell something to
some people with money.

Similarly for the news media: They want eyeballs for the ad revenue. That the
newsies are willing to write junk to get eyeballs goes back at least to
Jefferson's remarks as in

[http://press-
pubs.uchicago.edu/founders/documents/amendI_spe...](http://press-
pubs.uchicago.edu/founders/documents/amendI_speechs29.html)

and is not nearly new.

We have long had a good filter to apply to the writing of the newsies: Does
the writing meet common high school term paper writing standards for solid
references and primary sources? Rarely is the answer yes.

My view of the printed news is that it can't compete with Charmin, not even
with cheap, house-brand paper towels. For the electronic versions, they are
not useful even as fire starter, shredding for cat litter, or wrapping dead
fish heads -- that is, are useless.

So, don't read them.

And don't debunk them, either -- debunking wastes your time and is something
the newsies have long since ignored. The newsies have no shame.

Ignore the newsies. They ignore the debunking efforts.

My startup manipulates some data, and how it does so comes from some applied
math I derived. From what I've seen of AI, my work would qualify as quite good
and innovative AI -- besides, the work is solid theorems and proofs. Still, I
see no good reason to give my work a catchy name or call it AI. E.g., I'm not
trying to fool anyone. Or, why would I want to associate my good work with a
lot of hype meant to fool people?

~~~
nfd
You'd do it to make money. Boatloads of delicious, steaming venture capital.
Mmm.

~~~
graycat
I can say with high confidence that there is not even one venture capital
person in the US who would invest even 10 cents in my work, call it AI or not,
before they see usage significant and growing rapidly, and then they are not
investing in the math or the _AI_ but just the traction and its rate of
growth.

Besides, I'm a solo founder with a meager _burn rate_ so that by the time I
have the traction the VCs want, I will be nicely profitable with plenty of
cash for growth just from retained earnings.

Or, my back of the envelope arithmetic is that with common ad rates from ad
networks, a $1000 server kept half busy 24 by 7 would generate $250,000 a
month in revenue. For just one server, a cheap Internet connection, just one
employee, that's a heck of a profit margin and plenty of cash for 10 more
servers. Half fill those, say, in two spare bedrooms I have, with some window
A/C units, some emergency power supplies and an emergency generator, and I
will have annual revenue more than a VC seed or Series A equity check. And,
"Look, Ma, 100% owner of just an LLC and no BoD!".

Sure, once I have the traction, VCs will call me, and then I will check and
tell them about all the times I sent them e-mail they ignored and explain that
my plane has already left the ground, has altitude, and is climbing quickly
and it's too late to buy a ticket.

Sure, I typed in all the 25,000 programming language statements myself and
implemented my math derivations in 100,000 lines of typing -- lots of in-line
comments! The code is all in just Microsoft's Visual Basic .NET with ADO.NET
for getting to SQL Server and ASP.NET for the Web pages with IIS for the Web
server plus a little open source C called with Microsoft's _platform invoke_.

I wrote my own Web page session state store using two collection classes and
some TCP/IP sockets with class de/serialization -- sure, could have used
REDIS, but my code is so short and simple writing my own was likely easier.
Besides, now I'm about to copy that code, rip out most of it, and get a log
file server that I will like a lot better than what I'm using now from
Microsoft.

Otherwise the code looks ready for at least first production, to, say, well
past $250,000 a month in revenue. At IBM's Watson lab, I wrote AI code that
shipped as a commercial IBM Program Product, and what I've written for my
startup is more solid. (I got an IBM award for some of the code I wrote in an
all-night session -- it let one of our programmers be done by the next
afternoon instead of in two weeks and got a MUCH nicer result for the
customers; the trick was to do some things with entry variables to keep some
run-time code on the stack of dynamic descendancy.)

My project needs data, and I have a lot but need to get more.

Curiously, there's some good news: My development computer was crashing about
five times a day, apparently a hardware problem, maybe on the motherboard. I
did some mud wrestling with it and then went shopping for parts for a new
computer.
But my old computer now, for no good reason, is no longer crashing! The
computer has plenty of free disk space for some more data. So, for now I get
to set aside all the system management mud wrestling of getting a new computer
and getting all the software moved to it and running, can review the most
critical parts of my code a third time, write the log server, write some code
to make working with SQL Server easier, get some more data, and do an alpha
test, a beta test, and get some publicity. Maybe I will even go live with my
development computer, get some revenue, and get a really nice first server.

The problem is important; the math is solid; the code is solid; I suspect a
lot of people, or at least enough, will like the results (it's intended to
please essentially everyone on the Internet), etc., but no VC in the country
wants anything to do with my work now. Nothing. Zip, zilch, zero.

Lesson: To VCs, nothing but nothing matters but traction.

So, the flip side of that lesson is an

Opportunity: Be a solo founder where the traction the VCs want is enough for
profitability and plenty of cash for organic growth.

The founder of Plenty of Fish was a solo founder who eventually sold out for
$500+ million. Some old remarks by A16Z confirm the possibility of a one
engineer unicorn -- at

[http://a16z.com/2014/07/30/the-happy-demise-of-
the-10x-engin...](http://a16z.com/2014/07/30/the-happy-demise-of-
the-10x-engineer/)

with in part

"This is the new normal: fewer engineers and dollars to ship code to more
users than ever before. The potential impact of the lone software engineer is
soaring. How long before we have a billion-dollar acquisition offer for a one-
engineer startup? "

I agree!

I saw the need -- obvious enough. I cooked up a new solution with a new UI,
UX, and some new data to be just what some new math I derived needed. Easy
enough -- I've worked harder single exercises in Rudin, _Principles of
Mathematical Analysis_. The work was easier than my Master's paper, my Ph.D.,
and any of the papers I've published. I wrote the code for the math and
checked it various ways including with programming those calculations again in
another language to check. I designed the data base tables, the Web pages, and
wrote the code for the Web pages. Then the code for the session state store.
Then I got sick, then got well, then my computer got sick and got well, now
I'm back to progress. So far, being a solo founder, nothing particularly
difficult. No reason to need a co-founder.

------
zackchase
Some strident Singularitarians joined the comment thread.

~~~
kordless
I'm from the camp the singularity already happened! ;)

~~~
jodrellblank
I'm from the camp the singularity is a continuous process.

When you travel away from the Sun, the light from it gets dimmer and dimmer
and dimmer, until it becomes single photons arriving at longer and longer
intervals: on average, the energy of ever 'dimmer' light.

Fly towards the Sun and there isn't a sudden cutoff point where it goes from
dark to bright, or bright to incandescent, it's a continual increase.

Same with life, lots of people born, lots of intensity around birth and infant
children and parents and education, intensity diminishes with age as people
spread out and dissipate, everyone's looking back in at the source of new life
while getting further and further from it and more and more separated from
each other and lower energy levels and less and less 'happening'.

Singularity? Moving towards the intensity with technological development and
more intricate, faster, and sheerly _more_ connectivity all over, more
encoding of ideas into information patterns.

Unless you want to imagine it like flying towards the Sun becoming falling
towards the Sun, becoming a specific point where everyone burns up and dies.
:/

~~~
beaconstudios
the whole premise of a singularity is that we will hit a point of exponential
technological growth, extrapolated from the accelerating pace of technological
development over human history and usually based on the idea that we'll make a
self-iterating AI which will lead to runaway intelligence. The "singularity"
specifically is intended as a tipping-point where the acceleration of
technology becomes faster than human capability (i.e. the approaching-vertical
line on an exponential curve).

~~~
arca_vorago
Yep, this is why the singularity is a zero-sum game... I expect many entities
to vie for the title, but the first one there will subsequently dominate...
which is why it's my intention to program my consciousness into my own AI,
which seeks to become the singularity, and therefore my digital consciousness
version will become the ultimate god of the universe.

 _mad scientist laugh_

~~~
beaconstudios
race you to the pseudoreligious finish line!

------
TuringNYC
Since we are on the topic, how do the big firms (Google, Facebook, etc) view
Singularity University? Is it considered positive, neutral, or negative? Would
appreciate inside perspectives.

~~~
tim333
Well, Google chipped in $3m and hired Kurzweil so presumably they kinda like
it.

------
not_a_terrorist
Took a course with a few classes that touched on AI.

First thing the prof wrote on the board:

"Artificial Intelligence = Human Stupidity"

Spent a few hours analysing all aspects of that statement. Was entertaining.

------
uwu
what's with the claustrophobia-inducing thick black border around the page

~~~
somestag
That's interesting; I didn't even notice it.

But the border doesn't scale with window size, and I tend to keep my windows
large on an already large monitor, so the relative effect was small and
(subjectively) even pleasant. I can't get too mad at the design choice, but
maybe it should scale in some way to window size.

------
unityByFreedom
Someone tell Musk. Oh wait, he profits from such misinformation. Nevermind.

~~~
prvnsmpth
Care to elaborate?

~~~
acover
One way he profits is in the expectations around self-driving cars. If people
expect them in 3 years, and expect them to be 10x better than humans [to quote
Musk], then there is huge value in any company that can provide that.

Tesla has a market cap larger than Ford. Tesla also has been on the edge of
bankruptcy every couple years and recently raised $1 billion in capital.

~~~
nradov
Tesla funding is driven more by proven demand for the product (huge order
backlog with paid deposits) than any belief in autonomous vehicle technology.
That plus historically low interest rates which have caused investors to do
silly things in search of higher returns.

~~~
unityByFreedom
> huge order backlog with paid deposits

fully refundable deposits, whose quantity hasn't been disclosed since very
early on

