
Predicting where AI is going in 2020 - alfozan
https://venturebeat.com/2020/01/02/top-minds-in-machine-learning-predict-where-ai-is-going-in-2020/
======
szc
Honesty, repeatability, numerical analysis. Canonicalization.

Honesty: how many times was the exact same data processed? Was the result
cherry-picked and only the best one published? For the sake of integrity, how
is it possible to scientifically improve on such a result? (For example, what
if your AI outputs some life-altering decision?)

Repeatability: In science, if a result can be independently verified, that
gives validity to the "conclusion" or result. Most AI results cannot be
independently verified. Not being independently verifiable really ought to
give the "science" the same status as an 1800s "Dr. Bloobar's miracle AI
cure".

Numerical Analysis: performing billions or trillions of computations on bit-
restricted numerical values will introduce a lot of forward-propagated error
(noise). What does that do? Commentary: video cards don't care if a few of
your 15 million pixels are off by a few LSBs; they do that 60 or 120 frames a
second and you don't notice. It is an integral part of their design. The
issue is: how does this impact AI models? It affects repeatability ->
honesty.
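
To make the noise point concrete, here is a minimal Python sketch
(illustrative only): floating-point addition is not associative, so the same
reduction computed in a different order -- as happens with parallel GPU
reductions -- gives a slightly different answer.

```python
# Minimal sketch: floating-point addition is not associative, so summing
# the same values in a different order (as parallel GPU reductions do)
# yields slightly different results -- the repeatability problem in miniature.
import random

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]

forward = sum(xs)
backward = sum(reversed(xs))

print(forward == backward)      # typically False
print(abs(forward - backward))  # small but nonzero
```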

If quantized error is a necessary property to achieve "AI learning that
converges", there is still an opportunity for canonicalization -- a way to
map "different" converged models onto each other to explain why they are
effectively the "same". This does not seem to be a "thing"; why not?
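
As a hypothetical sketch of what canonicalization could mean: hidden units in
a layer can be permuted without changing the function a network computes, so
two "different" trained models may be identical up to such symmetries.
Sorting units by a fixed criterion gives one crude canonical form:

```python
# Hypothetical sketch: canonicalize a one-hidden-layer net by sorting hidden
# units (rows of W1, matching columns of W2) by row norm. Permuting hidden
# units never changes the computed function, so permutation-equivalent
# models map to the same representative.
import numpy as np

def canonicalize(W1, b1, W2):
    order = np.argsort(np.linalg.norm(W1, axis=1))
    return W1[order], b1[order], W2[:, order]

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(8, 4)), rng.normal(size=8), rng.normal(size=(2, 8))
perm = rng.permutation(8)  # a "different" but functionally identical model
c1 = canonicalize(W1, b1, W2)
c2 = canonicalize(W1[perm], b1[perm], W2[:, perm])
print(all(np.allclose(a, b) for a, b in zip(c1, c2)))  # True
```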

In my opinion, in 2020, the AI emperor still has no clothes.

~~~
fluffything
For most of the engineering applications I work on, AI is useless.

When we talk about controlling machines, our control algorithms have
mathematically proven strict error bounds, such that if we provide an input
with a particular maximal error (e.g. from a sensor that has some error
tolerance), we can calculate the maximum possible error in the response our
model would produce, and then use that to evaluate whether this is even an
input that the current algorithm should handle.
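
For a linear model this kind of guarantee is just an induced-norm
computation. A minimal sketch (the gain matrix here is made up):

```python
# Minimal sketch of a priori error propagation for a linear response y = K x:
# if the input error satisfies ||dx|| <= e, then ||dy|| <= ||K||_2 * e, where
# ||K||_2 is the largest singular value of K. The K below is made up.
import numpy as np

K = np.array([[1.2, -0.3],
              [0.4,  0.9]])      # hypothetical controller/model gain

input_error = 0.05               # e.g. the sensor's guaranteed accuracy
output_bound = np.linalg.norm(K, 2) * input_error
print(f"worst-case response error: {output_bound:.4f}")
```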

These control algorithms all take some inputs, and use them to "predict" what
will happen, and using that prediction, compute some response to correct it.
These predictions need to happen much faster than real time, since you often
need to perform an optimization step to compute an "optimal" response.

These predictions are usually computed using a reduced-order model: e.g., if
you had to solve a PDE over 10^9 unknowns to compute the actual prediction,
you can instead reduce that to a system with 10 unknowns by doing some pre-
computation a priori. Most tools for these kinds of reductions developed over
the last 60 years come with tight error bounds that tell you, depending on
your inputs, the training data, etc., the largest error the prediction can
have, so you can just plug these into your control pipeline.
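
A minimal sketch of one classic reduction of this kind (POD-style projection
via an SVD of precomputed snapshots; all sizes and data below are stand-ins):

```python
# Stand-in sketch of projection-based model order reduction: precompute
# solution snapshots offline, take the top-r left singular vectors as a
# basis V, and project the full operator A down to r unknowns.
import numpy as np

rng = np.random.default_rng(0)
n, r = 2000, 10                       # "full" vs. reduced unknown counts
A = rng.normal(size=(n, n)) / n       # stand-in for the full operator
snapshots = rng.normal(size=(n, 50))  # stand-in for precomputed solutions

V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]  # POD basis
A_r = V.T @ A @ V                     # reduced r x r operator, built offline

x = rng.normal(size=n)
x_approx = V @ (A_r @ (V.T @ x))      # online step uses only r unknowns
```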

People have been plugging neural networks into these pipelines to control
robots, cars, and pretty much anything you can imagine for 10 years, yet
nobody knows the upper bound on the errors these neural networks produce for
a particular input, training set, etc.

Until that changes, machine learning just makes your whole pipeline
unreliable, and, e.g., a car manufacturer must tell you that in "autonomous
driving" mode you are liable for everything your car does, not them, so you
have to keep your hands on the steering wheel and pay attention at all times,
which... kind of defeats the point of autonomous driving.

---

Prediction: we won't have any tight error bounds for real-world neural
networks in the 2020-2030 time frame. Neural networks are non-linear by
design (that's why they are good); error bounds for even simple non-linear
interpolants are pretty much non-existent despite 20-30 years of attempts,
and real-world NNs are anything but simple.
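
For a flavor of why (a rough sketch, assuming a plain ReLU feedforward net):
the only easy global bound is the product of layer spectral norms, and it
blows up multiplicatively with depth, which makes it useless as a tight
certified bound.

```python
# Rough sketch: since ReLU is 1-Lipschitz, a ReLU net f satisfies
# ||f(x) - f(x')|| <= (prod_i ||W_i||_2) * ||x - x'||. The product grows
# multiplicatively with depth, so the bound is far too loose to certify.
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 64)) / 8 for _ in range(10)]  # made-up net

lipschitz_upper = np.prod([np.linalg.norm(W, 2) for W in weights])
print(f"Lipschitz upper bound: {lipschitz_upper:.1f}")  # ~1000x blow-up
```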

~~~
red75prime
Control algorithms are only part of the problem. What about input data?
There's nothing that comes close to NNs in answering a question like "Is
there a pedestrian ahead, and what will he/she probably do?"

A control system doesn't need to be end-to-end neural, by the way.

~~~
fluffything
> What about input data?

What about it? You get input data from data sources, which in a car would
be, e.g., a sensor. The manufacturer of the sensor provides you with the
guaranteed sensor accuracy for some inputs, which gives you the upper bound
on the input error from that source.

That is, in a reliable control pipeline, the upper bounds on the errors of
data sources are known a priori.

Sure, sensors can malfunction, but that's a different problem that's solved
differently (e.g. via resiliency using multiple sensors).
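
A toy sketch of that resiliency point: with three redundant sensors each
accurate to within e when healthy, taking the median tolerates one arbitrary
failure while preserving the a priori bound.

```python
# Toy sketch: three redundant sensors, each accurate to within ERR when
# healthy. The median stays within ERR of the true value even if any one
# sensor fails arbitrarily, because it is bracketed by two healthy readings.
ERR = 0.1  # hypothetical per-sensor accuracy guarantee

def fused(readings):
    return sorted(readings)[1]  # median of three

true_value = 5.0
readings = [true_value - 0.08, true_value + 0.05, 42.0]  # one sensor broken
estimate = fused(readings)
assert abs(estimate - true_value) <= ERR
print(estimate)
```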

> A control system doesn't need to be end-to-end neural, by the way.

Who's talking about end-to-end neural nets for control? If a single part of
your control pipeline has unknown error bounds, your whole control system has
unknown error bounds. That is, it suffices for your control pipeline to use a
NN somewhere for it to become unreliable.

This doesn't mean that you can't use control systems with unknown error bounds
somewhere in your product, but it does mean that you can't trust those control
systems. This is why drivers still need to keep their hands on the steering
wheel on a Tesla: the parts of the pipeline doing the autonomous driving use
NNs for image recognition, and the errors on that are unknown.

This is also why all "self driving" cars have simpler data sources like
ultrasonic sensors, radar, lidar, etc. which can be processed without NNs to
avoid collisions reliably. You might still use NNs to improve the experience
but those NNs are going to be overridden by reliable control pipelines when
required.

~~~
red75prime
> That is, in a reliable control pipeline, the upper bounds on the errors of
> data sources are known a priori.

Now we have natural neural networks in the control loop for some reason
despite their unknown error bounds. To give another example: is there a
sensor for vehicle placement relative to road edges with a known upper error
bound that is less than the width of the road? No; we have GPS, radar, lidar,
and camera data that we need to interpret somehow.

A car that reliably avoids collisions (can it, though? It needs to predict
the road situation to do that reliably), but can occasionally veer off the
road, doesn't strike me as particularly safe.

> to be overridden by reliable control pipelines when required.

Those reliable pipelines need to be mostly reactive, and there's a limit on
what they can do. You can't avoid a collision when a car emerges from around
a corner with 0.1 seconds to react. You need complex processing of those
"simple" data sources to detect zones that can't be observed right now and to
assess the probability of such situations.

All in all, we already have an unreliable human part in the control loop of
a vehicle. A control system that is provably robust in all real-world
situations would be the ultimate achievement, not a prerequisite for wide use
of self-driving cars.

~~~
fluffything
> Now we have natural neural networks in the control loop for some reason
> despite their unknown error bounds.

Not in any automatic control loops. All control loops that do this have a
human as the final piece of the pipeline, and that human is legally
responsible for the outcome of the control loop. That defeats the point of
automatic control.

> To give another example: is there a sensor for vehicle placement relative
> to road edges with a known upper error bound that is less than the width of
> the road?

No, which is why these control pipelines have a human in control. The
experimental pipelines that do not have a human at the end of the control
loop do have other control pipelines to avoid collisions, and the only thing
their control algorithms guarantee is a lack of collisions, not the car's
ability to stay in a lane. That is, the car might leave the lane under some
conditions, but if it does, it will detect other objects and avoid crashing
into them (although those objects might crash into it).

> All in all, we already have an unreliable human part in the control loop
> of a vehicle. A control system that is provably robust in all real-world
> situations would be the ultimate achievement, not a prerequisite for wide
> use of self-driving cars.

Right now, control loops without proven error bounds are not allowed by
certification bodies for any control loop in charge of preserving human lives
in the aerospace, automotive, medical, and industrial-robotics industries.

Allowing control loops without known error bounds to be in charge of human
lives would lower the current standards of these industries. It could be done
(the government would need to create a new kind of regulation for this), but
at this point it is unclear what that would look like.

~~~
sudosysgen
Well actually, they are allowed in the automotive world, re: Tesla.

They won't necessarily lower the standard for some applications if the
alternative is just regular people at the wheel.

~~~
fluffything
> Well actually, they are allowed in the automotive world, re: Tesla.

This is false: Tesla's "autopilot" isn't a control-loop, much less a certified
control-loop. What Tesla calls "autopilot" is actually a "driving assistant".
It is not in charge of controlling the car, but it is allowed to assist the
driver to control the car.

This is a subtle but very important difference, since it is the reason Tesla
tells its owners to always keep their hands on the steering wheel, and Tesla
cars are required to disable their "driving assistant" in countries like
Germany if the driver's hands leave the steering wheel. It is also the reason
Tesla is not liable if a car with "autopilot" kills somebody: the "autopilot"
isn't technically driving the car, the driver is.

The word "autopilot" comes from the aerospace industry, where pilots are not
required to keep their hand on the controls or pay attention when the
"autopilot" is on, and the manufacturer is liable if the "autopilot" screws up
(e.g. see Boing). Tesla's usage of the word "autopilot" to refer to driving
assistants is misleading and dangerous.

A true car "autopilot" is what people call "Level 5" in the autonomous-
driving community. Elon Musk promised to ship 10,000 self-driving Level 5
Tesla taxis in 2019, and now promises mid-2020. We'll see about that. Elon
Musk has been saying that Level 5 will happen next year for the last 10
years, so at least when it comes to autonomous driving, his predictions have
been consistently wrong.

Most CEOs who venture to predict when Level 5 will happen say something like
"not before 2030". Waymo's goal is to achieve """Level 5""" in a very small
restricted area of downtown Phoenix under some restricted weather conditions
some time between 2020 and 2030, but it is unclear what the road to
certification would be after achieving it, and the road from there to actual,
real Level 5 is still unclear at this point.

------
TACIXAT
>Human babies don’t get tagged data sets, yet they manage just fine, and it’s
important for us to understand how that happens

I do not really understand this. Human babies get a constant stream of labeled
information from their parents. Contextualized speech is being fed to them for
years. Toddlers repeat everything you say. Is this referring to something else
that babies can do?

~~~
tsimionescu
There may be some kind of labeling encoded in genes. One thing that it is
safe to assume is genetically encoded somehow is that sounds made by your
parents/humans around you are worth repeating while other sounds are not.

However, past that, the actual sounds themselves, and any association to
meaning, are pretty far from tagged data sets. Stuff like the specifics of
language (e.g. that a dog is called 'dog') is definitely learned, and
children typically learn it with only a handful of stimuli, often a single
one.

For contrast, imagine training a model on raw sound data tagged only with
"speech" vs "not speech" (and probably only a few thousand data points at
that); I will be amazed if it can recognize a single word. And babies don't
just learn words: they learn their association to things they see and hear,
and grammar, and abstract thought.

Do note that it is very likely that human brains can learn all that because
they have some good heuristics built in. We definitely know some stuff is
"hardware" - object recognition, basic mechanics, recognizing human faces and
expressions, and others. We are pretty sure higher-level stuff is also built
in - universal grammar, basic logic, some ability to simulate behavior
seen/heard in other humans. This specialized hardware was also most likely
learned, but over much, much greater periods of time, through evolution over
hundreds of millions of years (since even extremely old animal lineages are
capable of picking out objects in the environment, approximating their speed,
etc.).

~~~
TheOtherHobbes
There seems to be a spectacular underestimation of the amount of training data
humans experience.

Not only does socialised human intelligence require at least a decade of
formal education, but it also spends a lot of time in a complex 3D environment
which is literally hands-on.

It's true some of the meta-structures predispose certain kinds of learning -
starting with 3D object constancy, mapping, simple environmental prediction,
and basic language abstraction.

But that level gets you to advanced animal sentience. The rest needs a lot of
training.

For example - we can recognise objects in photographs, but I strongly suspect
we learn 3D object recognition first - most likely with a combination of
shape/texture/physics memory and modelling - and then add 2D object
recognition later, almost as a form of abstraction.

Human intelligence is tactile, physical, and 3D first, and abstracted later.
So it seems strange to me to be trying to make AI start with abstractions and
work backwards.

~~~
tsimionescu
Well, babies start picking out objects within weeks or months after birth. And
many birds and mammals are much faster than that. That's not a huge amount of
data to learn something so abstract from scratch, especially given the limited
bandwidth of our data acquisition.

Furthermore, for other kinds of human knowledge, the learning process is very
rarely based on data. After the acquisition of language, we generally seem to
learn much more by analogy and deduction than by purely analyzing data. The
difference is evident, since we can often pick up facts with a single
datapoint, even in small children in kindergarten.

Also, getting back to your point on how we start AI: if you try to take a
neural network, throw 3D sensor data at it, and immediately start using its
outputs to modify the environment those sensors are sensing, I suspect you
will not get any meaningful amount of learning. You probably need a very
complex model and set of initial weights to have any chance of learning
something like 3D objects and their basic physics (weight, speed, and how
those affect their predicted position). I would at least bet that you
wouldn't get anywhere near, say, kitten accuracy in one month of training.

Related to 3D objects vs 2D, I completely agree.

------
jeffshek
I love PyTorch, but I'm not confident that the claim that it is the most
popular is close to true. The cited link, which shows that a lot of new
research is in PyTorch, simply doesn't account for the amount of TensorFlow
in production.

Sure, a lot of academics may be embracing PyTorch, but almost all production
models have been in TensorFlow. Tesla is a huge, notable exception that's
using PyTorch at scale.

I do suspect that the TensorFlow 1/2 split comes at perhaps the worst
possible time for TF 2; many teams will likely try out PyTorch instead.

I think both are amazing frameworks; however, TF was designed for Google
scale... which leads to a lot of difficulties, since 99.9% of teams are not
at Google scale.

~~~
rckoepke
Depends on how you measure it, of course. However, stackoverflow survey,
google trends, and github octoverse all show PyTorch is on a steep upward
trajectory that recently reached effective parity with TensorFlow and has not
yet started slowing down.

------
DagAgren
Away.

(Or less snarkily,
[https://twitter.com/iquilezles/status/1212377355349417986](https://twitter.com/iquilezles/status/1212377355349417986))

------
jansbor
Is it just me wondering why they did not use AI to predict where AI is going
in 2020?

~~~
TheOtherHobbes
They did, but they didn't understand the results.

------
graycat
I believe we will (1) find some basic data structures and algorithms to do
_real AI_. (2) At first it will be able to do I/O only via text or simple
voice. (3) Due to (1) it will learn very quickly from humans or other sources.
(4) Soon it will be genuinely _smart_, enough, say, to discover and prove new
theorems in math, to understand physics and propose new research directions,
to understand drama and write good screenplays, to understand various styles
and cases of music and compose for those, etc.

Broadly from (1) with the data structures it will be able to represent and
store data and, then, from the algorithms, manipulate that data generating
more data to be stored, etc.

In particular it will be able to do well with _thought experiments_ and
generation and evaluation of _scenarios_.

Good image understanding will come later but only a little later; the ideas in
(1) may have to be revised to do well on image understanding.

~~~
tsimionescu
You believe that we will achieve (1) in 2020? Or do you believe that we will
achieve this at some point in general?

~~~
graycat
Sorry, from reading another post about predictions for the next decade, I was
thinking by 2030, not just 2020!

Besides, for AI, just 2020 seems a bit too short!

------
corporateslave5
Natural language processing is going to absolutely decimate content on the
internet, forcing everyone into walled gardens

~~~
etaioinshrdlu
Via seo spam and other types of spam that are indistinguishable from real
content?

~~~
antupis
More likely we'll hit that annoying point where you cannot be quite sure
whether an article is machine-generated lorem ipsum that sounds convincing
but does not have any real information behind it. Something like
[http://news.mit.edu/2015/how-three-mit-students-fooled-scien...](http://news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414)
but at scale.

------
bitL
- individual GPUs will hit a plateau at around 25 TFLOPS in FP32 due to the
slowing of Moore's law and thermal dissipation; however, it will be easier
than ever to interconnect multiple GPUs into large virtual ones thanks to
interconnect tech improvements and modularization of GPU processing units

- only large companies will be able to train and use SOTA models, with
training costs of $10M-$100M per training run, and those models will hit the
law of diminishing returns quickly

- 50% of all white-collar jobs will be automated away, including a
significant chunk of CRUD software work. Increased productivity won't be
shared back with society; instead, two distinct wealth strata will form
worldwide due to scale effects, like in Latin America (<1% owners, >99%
fighting for their lives)

- AI will make marketing, ads, and behavioral programming much more intrusive
and practically unavoidable

~~~
mkl
I think you're talking about a much longer period than just 2020. The job
prediction seems unlikely to happen within the next decade, even.

------
dijksterhuis
The page forwards itself to a spam Google survey for me. The page history
fills with the spam survey and I can't navigate back to the article. iOS
Safari with reader view enabled.

~~~
kzzzznot
I experienced the same behaviour

------
sillysaurusx
Re: PyTorch TPU support, has anyone checked it out beyond "it works"?

There are many aspects of TPUs that I'm not convinced are easy to port:
colocating gradient ops, scoping operations to specific TPU cores, choosing to
run operations in a mode that can use all available TPU memory (which is up to
300GB in some cases), and so on.

These aren't small features. If you don't have them, you don't get TPU speed.
TPUs are fast _because_ of those features.

I only glanced at PyTorch TPU support, but it seemed like there wasn't a
straightforward way to do most of these. If you happen to know how, it would
be immensely helpful!
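
For reference, the basic single-core PyTorch/XLA flow looks roughly like the
sketch below (these torch_xla calls are the documented entry points; whether
the TPU-specific features above surface through them is exactly what I
couldn't tell).

```python
# Minimal single-core PyTorch/XLA sketch. The xm.* calls are torch_xla's
# documented entry points; none of the TPU-specific features mentioned
# above (colocation, per-core scoping, memory modes) obviously appear here.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()               # one TPU core as a torch device
model = nn.Linear(128, 10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
xm.optimizer_step(opt)                 # steps the optimizer and syncs XLA
```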

As far as predictions go, AI will probably take the form of "infinite
remixing." AI voice will become very important, and will begin proliferating
through several facets of daily life. One obvious application is to apply the
"abridged" formula to old sitcoms. (An "abridged" show is when you rewrite it
using editing and new dialog, e.g.
[https://www.youtube.com/watch?v=2nYozPLpJRE](https://www.youtube.com/watch?v=2nYozPLpJRE).
Someone should do Abridged Seinfeld.) AI audio has already made inroads on
Twitch, where streamers like Forsen allow donation messages to be read off in
the voice of various political figures (and even his own voice). The Pony
Preservation Project was recently solved with AI voice
([https://twitter.com/gwern/status/1203876674531667969](https://twitter.com/gwern/status/1203876674531667969))
meaning it's possible to do realistic voice simulations of all the MLP
characters with precise control over intonation and aesthetics.

Natural language AI will continue to ramp up, and people will learn how to
apply it to increasingly complex situations. For example, AI dungeon is
probably just the beginning. I recently tried to do GPT-2 chess
([https://twitter.com/theshawwn/status/1212272510470959105](https://twitter.com/theshawwn/status/1212272510470959105))
and found that it can in fact play a decent game up to move 12 or so. AI
dungeon multiplayer is coming soon, and it seems like applying natural
language AI to videogames in general is going to be rather big.

Customer support will also take the form of AI, more so than it already
does. It turns out that GPT-2 1.5B was pretty knowledgeable about NordVPN.
(Warning:
NSFW ending, illustrating some of the problems we still need to iron out
before we can deploy this at scale.)
[https://gist.github.com/shawwn/8a3a088c7546c7a2948e369aee876...](https://gist.github.com/shawwn/8a3a088c7546c7a2948e369aee876902)

AI will infiltrate the gamedev industry slowly but surely. Facial animation
will become increasingly GAN-based, because the results are so clearly
superior that there's almost no way traditional toolsets will be able to
compete. You'll probably be able to create your own persona in videogames
sooner than later. With a snippet of your voice and a few selfies, you'll be
able to create a fairly realistic representation of yourself as the main hero
of e.g. a Final Fantasy 7 type game.

~~~
nl
I did try training a PyTorch BERT-derived model on TPUs (on Colab) and it
didn't work out of the box (where "didn't work" = it was using the CPU).

I didn't dig into it to find out why.

~~~
zak
(I'm one of the Cloud TPU product leads)

We've seen multiple BERT-related PyTorch models training successfully on Cloud
TPUs, including training at scale on large, distributed Cloud TPU Pod slices.

Would you consider filing a GitHub issue at
[https://github.com/pytorch/xla](https://github.com/pytorch/xla) or emailing
pytorch-tpu@googlegroups.com to provide a bit more context about the specific
issue you encountered?

Here's the current PyTorch/TPU troubleshooting guide, which provides
information on how to collect and interpret metrics that are very helpful for
debugging:
[https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.m...](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md)

Thanks!

~~~
riku_iki
> BERT-related PyTorch models training successfully on Cloud TPUs

How do you see that? Do you look at your clients' code?

~~~
nl
Google wrote BERT, and they provide technical support to the FB PyTorch TPU
port, so it's not entirely surprising. RoBERTa (FB's variant) would be a good
candidate to test it with.

~~~
zak
We only see code when customers open-source it or otherwise explicitly share
it with us. We are directly in touch with several customers who are using the
PyTorch / TPU integration, so we hear feedback from them, and we also run a
variety of open-source PyTorch models on Cloud TPUs ourselves as we continue
to improve the integration.

------
drongoking
I'm generally pessimistic about predictions of the future. In this case I
can't help but smile. They're trying to predict how a field (AI), which deals
with complex adaptation, will intelligently adapt its adaptive techniques in
the coming year, within an environment (we humans) that are themselves
changing behavior while adapting to AI. That's approximately three meta
levels. Good luck, guys!

------
y1tan
I predict the broader ML/DL community will keep pumping out iterative papers
that push the ball just a little bit forward while maintaining job security:
gatekeeping, no one thinking outside of the box, benchmark chasing, just
enough for the appearance of progress, and nothing broadly innovative or
disruptive. The applications of ML/DL will continue to be gimmicky consumer
products that have questionable value and questionable profit potential, add
even more to disinformation/misinformation, produce more informational noise,
only serve to promote a big corp's cloud offerings, and waste people's time.

I predict tons more 'bought' articles that hype up AI technology for the
typical 'household' names. I predict the same ol' echo chamber of thought and
reinforcement of 'gatekept' ideology. I expect a number of more prominent
articles critiquing the shortfalls of the technology. I expect a number of
young minds steeped in DL/ML coming to the realization that it's not what
they expected... that it's a big profit/revenue story for universities and
established corporate platforms. I expect a number of them to realize ML/DL
is truly not "AI" or anything close to it, that they aren't doing cutting-
edge research, and that they are not allowed to think outside of the echo
chamber of 'approved' approaches.

I predict more useless chatbots that utter unpredictable word salads. I
expect more gimmicky, entertainment-focused uses of it. I expect more
assistants being adopted for data collection. I expect more people who aren't
busy or doing anything important using voice assistants and text-to-speech to
speed up their tasks so they can waste more of their time on social
media/YouTube/entertainment. Samsung Neon is coming out in a few days...
making use of that 'Viv' acquisition.

I expect more feverish attempts at attacking low-hanging-fruit jobs with
overly complex solutions. I predict failures in a number of startups
targeting this. I predict no pronounced progress in self-driving cars, nor
any particularly grand use for them. I predict several hollow attempts from
prominent AI figures to overlay symbolic systems over ML/DL, or to integrate
the two. I predict pronounced failures in this effort, cementing a partial
end to the hype of ML/DL.

I predict we will get a pronounced development outside of run-of-the-mill
corporate/academic gatekept/walled-garden ML/DL that will forge a new and
higher path for AI. Hinton's words from prior years will have been heeded and
the results of a new approach to AI presented. A change of guard, a break
from the necessity of a PhD, a break from the echo chamber of names, and a
broader and more deeply thought-out vision. Disruption not of low-hanging
fruit but disruption directed at the heart of the AI/technology industry...
so that we may finally progress from this stalled-out disinformation/
misinformation/hype/gatekeeping/cloud/all-your-data-belongs-to-us cycle.

It's 2020 after all, time for a new age.

~~~
visarga
Apparently there is nothing positive possible in AI, based on your
predictions. Is there?

~~~
The_rationalist
Why would there be? Nobody has any sound roadmap for how to encode the
semantic meaning of words in a database. This precludes any hope of AGI.

~~~
visarga
Have you heard of word embeddings, and recently of contextual word embeddings
based on attention?
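
As a toy illustration of the contextual part (made-up vectors, not a real
model): attention re-mixes static word vectors so that each position's vector
depends on its neighbors.

```python
# Toy sketch: static embeddings give each word one fixed vector; scaled
# dot-product attention re-mixes them so "bank" gets a different vector
# next to "river" than next to "money".
import numpy as np

rng = np.random.default_rng(0)
vocab = {"bank": 0, "river": 1, "money": 2}
E = rng.normal(size=(3, 8))                 # made-up static embeddings

def contextualize(token_ids):
    X = E[token_ids]                        # (seq, dim) static vectors
    scores = X @ X.T / np.sqrt(X.shape[1])  # scaled dot-product attention
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X                      # context-dependent vectors

v1 = contextualize([vocab["bank"], vocab["river"]])[0]
v2 = contextualize([vocab["bank"], vocab["money"]])[0]
print(np.allclose(v1, v2))                  # False: context changed "bank"
```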

------
mark_l_watson
The linked page threw up a suspicious-looking overlay. I left the site; too
bad, since I wrote a blog post with my own predictions last night and wanted
to compare them.

~~~
CodeGlitch
If you use Firefox, just enable "Reader View" for the site (just hit F9).
Removes all the crud around the page and just shows the text and required
images.

------
zackmorris
Some axioms that I'm not seeing talked about much:

* Artificial general intelligence (AGI) is the last problem in computer science, so it should be at least somewhat alarming that it's being funded by internet companies, wall street and the military instead of, say, universities/nonprofits/nonmilitary branches of the government.

* Machine learning is conceptually simple enough that most software developers could work on it (my feeling is that the final formula for consciousness will fit on a napkin), but they never will, because of endlessly having to reinvent the wheel to make rent - eventually missing the boat and getting automated out of a job.

* AI and robot labor will create unemployment and underemployment chaos if we don't implement universal basic income (UBI) or at the very least, reform the tax system so that automation provides for the public good instead of the lion's share of the profit going to a handful of wealthy financiers.

* Children aren't usually exposed to financial responsibility until around the age of 15 or so, so training machine learning for financial use is likely to result in at least some degree of sociopathy, wealth inequality and further entrenchment of the status quo (what we would consider misaligned ethics).

* Humans may not react well when it's discovered that self-awareness is emotion, and that as computers approach sentience they begin to act more like humans trapped in boxes, and that all of this is happening before the world can even provide justice and equality for the "other" (women, minorities, immigrants, oppressed creeds, intersexed people, the impoverished, etc etc etc).

My prediction for 2020: nothing. But for 2025: an optimal game-winning
strategy is taught in universities. By 2030: the optimal game-winning strategy
is combined with experience from quantum computing to create an optimal search
space strategy using exponentially fewer resources than anything today
(forming the first limited AGI). By 2035: AGI is found to require some number
of execution cycles to evolve, perhaps costing $1 trillion. By 2040: cost to
evolve AGI drops to $10 billion and most governments and wealthy financiers
own what we would consider a sentient agent. By 2045: AGI is everywhere and
humanity is addicted to having any question answered by the AGI oracle so
progress in human-machine merging, immortality and all other problems are
predicted to be solved within 5 years. By 2050: all human problems have either
been enumerated or solved and attention turns to nonhuman motives that can't
be predicted (the singularity).

------
juskrey
Fat tails

~~~
bordercases
This is the content I come to Hacker News for.

