
Ask HN: What will the state of AI be in 10 years? - hsikka
Hey HN,

I'm a graduate student studying ML and Theoretical Neuroscience, and I've been
thinking about what the state of ML will be a decade from now.

What are your thoughts?
======
he11ow
I can offer a perspective as someone whose graduate studies in machine
learning took place during the "AI Winter", the period after the first hype
wave of neural networks and before the second wave we're in today.

Frankly, the mathematical progress has not been substantial. The very same
ideas were there before. Sure, there are some new ideas, but they too are the
result of the real fundamental change, which was the scale-up in compute and
in memory.

A good analogy is where computing was in the age of the mainframe, when Bill
Gates and his chums were fiddling with those huge computers in assembly,
compared with personal computing.

ML is going to be everywhere, the way databases are everywhere. To an extent,
it'll be an extension of SQL, in the sense that if SQL is about "counting
stuff on data" then ML is a more sophisticated take on that idea. Wherever
you have data, you'll have an engine that extracts stuff from it the ML way.
I'm guessing transfer learning will play a substantial part in this, but I'm
less certain about that.
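
For concreteness, here's a minimal transfer-learning sketch (my illustration;
PyTorch/torchvision and the hypothetical 10-class task are assumptions, not
anything specific): reuse a pretrained backbone and train only a small new
head.

    # Sketch only: library choice and the 10-class task are assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a backbone pretrained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature extractor.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer with a head for the new task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head's parameters get trained.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

The point is that most of the "knowledge" ships with the backbone; your local
data only has to pay for the head.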

Just like with personal computers, our expectations of what you can "just do"
will adjust. But that won't stem from progress in ML itself, just from
current technology becoming more widespread. As for the cutting edge of ML,
judging by the sort of papers I come across, I think we're currently at the
stage of doing more of the same, only bigger.

Again, I find history a really useful teacher here. Take a trip to a science
museum and you'll see that people have always pushed whatever technology they
had to its utmost boundary, through the diminishing gains, before something
new came along and the curve restarted. We're already in diminishing-gains
territory, but it's hard to say how long that kind of momentum lasts.

------
ArtWomb
Whoever leads in artificial intelligence in 2030 will rule the world until
2100

[https://www.brookings.edu/blog/future-development/2020/01/17...](https://www.brookings.edu/blog/future-development/2020/01/17/whoever-leads-in-artificial-intelligence-in-2030-will-rule-the-world-until-2100/)

~~~
Aperocky
> It is useful to think of technical change as having come in four waves since
> the 1800s

> steam engine, electric power, information technology (IT), and artificial
> intelligence (AI).

Just no. AI has not come; it hasn't even started. What people call AI today is
literally faster IT infrastructure implementing math. The proper term for
what we see today is machine learning, and it is based purely on statistics.

------
hsikka
I personally think there is still enormous room for improvement in applying
even modern cutting-edge techniques to real-world use cases and industry.

Also, I think interpretability and the introduction of better-understood
priors will lead to extremely large, sparse networks being used across
multiple modalities in applications and software.
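
As a toy illustration of a sparsity-inducing prior (my sketch; the model,
the L1 penalty, and its strength are all placeholder assumptions, and a real
system would look very different):

    # Sketch only: model, data, and penalty strength are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Linear(1000, 10)          # stand-in for a large network
    x = torch.randn(32, 1000)            # fake batch
    y = torch.randint(0, 10, (32,))      # fake labels

    l1_strength = 1e-4                   # assumed hyperparameter
    loss = F.cross_entropy(model(x), y)
    # The L1 term acts as a sparsity prior, pushing most weights toward zero.
    loss = loss + l1_strength * sum(p.abs().sum() for p in model.parameters())
    loss.backward()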

------
mrfusion
I think we’ll have started integrating the Three Laws of Robotics into all AI
systems by that point. I know the Asimov books show lots of ways that can go
wrong, but it still makes me feel safer.

The only roadblock is that I can’t think of any way to build in the Three
Laws such that they can’t easily be removed.

~~~
psv1
Asimov-level AI is still fiction and will most likely remain that way for a
long time.

------
streetcat1
- AutoML will perform most of the model development [this will happen within
the next 5 years, or even less].

- Custom chips will perform the inference and probably the training. For
example, Tesla managed to create a chip that is 10x cheaper than a GPU for
training.

- There will be some breakthrough that combines logic and statistics. This
would eventually lead to AGI.

~~~
Aperocky
> AutoML will perform most of the model development [this will happen within
> the next 5 years, or even less].

Similar to how languages that are 'designed for non-programmers' fared?

~~~
streetcat1
No. The current "no-code" tools, which really have their origin in the visual
programming of the '80s, are trying to convert icons to logic, which cannot
work well. I.e. you can generate CRUD, but you cannot express arbitrary
logic.

The ML dev process is an optimization process, not a decomposition process
(i.e. it is stateless).

Instead, it is much more experimental (and hence declarative): you define a
search space and substitute computation time for data-scientist time.
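
A minimal sketch of that idea (mine, not anything Tesla-specific;
scikit-learn's RandomizedSearchCV is just a stand-in for a real AutoML
engine):

    # Sketch only: RandomizedSearchCV stands in for a real AutoML system.
    from scipy.stats import loguniform
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RandomizedSearchCV

    X, y = load_digits(return_X_y=True)

    # The declarative part: a search space, not a hand-picked model.
    space = {"C": loguniform(1e-3, 1e2), "solver": ["lbfgs", "saga"]}

    search = RandomizedSearchCV(LogisticRegression(max_iter=2000), space,
                                n_iter=20, cv=3)
    search.fit(X, y)        # computation time replaces data-scientist time
    print(search.best_params_)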

This is the idea of "software 2.0":

[https://www.youtube.com/watch?v=zywIvINSlaI](https://www.youtube.com/watch?v=zywIvINSlaI)

This is how Tesla operates today for its Autopilot project.

