
Marc Andreessen on the atomization of AI - sdebrule
https://techcrunch.com/2016/09/13/marc-andreessen-on-the-atomization-of-ai/
======
Xcelerate
The common saying is "more data usually beats a better algorithm". However,
this holds only for short-term progress. Long-term progress in the field of AI
clearly requires better algorithms, and doing more with less data is exactly
the kind of problem that a startup in the field could solve with a clever
idea.

That said, these startups will have to be extremely research-heavy — how does
one draw the most talented researchers to a small team offering minimal pay,
rather than a large company with a high salary and all the computational
resources they might need? And what kind of product could theoretical research
produce in the short term that would keep a startup afloat long enough to come
up with groundbreaking ideas?

~~~
outsideline
You're right on many points. My observation is that when it comes to deeper AI
that trends towards AGI, the private sector seems lost in terms of the big
ideas and applications. The VCs are also lost and have no idea how to
appreciate fundamentally groundbreaking R&D ventures.

To be honest, most of the funding and deeply interesting work for
groundbreaking AI research lies in the public and defense sector. They have
more of an idea and vision for what AI can be applied to than the private
sector. Leaps and bounds beyond.

In the private sector, I get asked questions like 'Can it be used to predict
if someone will click on an ad?'. In the public sector, I get a 40-page
technology acquisition PDF outlining a forward-thinking application of AI.

People forget that self-driving cars arose from DARPA research. The latest
innovations in computer security utilizing AI came from a DARPA competition.
IBM's neuromorphic chip: DARPA funding. DARPA and other defense programs also
have a slew of AI development projects open to researchers.

Ultimately I find this a bit ridiculous given how vocal some individuals in
the tech field are about the 'dangers of AI'. They harp and harp about what
could happen if more capable AI gets developed, but you don't see their money
anywhere near the groups doing deeper development in AI.

Lastly: yes, better solutions and more deeply inspired approaches to AI will
trump hoards of data and compute power any day. Thank goodness for the public
sector pushing forward fundamental research and development in AI.

~~~
Eliezer
> Ultimately I find this a bit ridiculous given how vocal some individuals in
> the tech field are about the 'dangers of AI'. They harp and harp about what
> could happen if more capable AI gets developed, but you don't see their money
> anywhere near the groups doing deeper development in AI.

...because if you think AGI is dangerous, you should fund groups trying to
develop it? I am puzzled by the presentation of this proposition as if it were
an absolutely obvious sequitur requiring no defense, such that failure to
follow through on it is "ridiculous".

~~~
outsideline
Yep. "The best way to predict the future is to create it"

Don't stretch your mind too far on this one. If someone or various groups
develop AGI and you have no stake in them, you will have no say in how it gets
used. For once, people need to be honest with themselves about this.

You're not going to be able to dictate from a third party consortium of PhDs
how someone should run their company or develop their software.

------
epberry
I think your second question is the biggest thing. If you're developing some
groundbreaking technology or doing heavy research you face a bunch of gnarly
questions. How to convince investors when the climate favors revenue and
customers over technological promise? How to attract users or customers to
some subset of what you're doing while you build the larger thing? How to
explain to everyone what you're doing if, as in research, you don't have a
100% clear idea of where you're going to end up?

~~~
outsideline
Not everyone has 'vision', and it's hard convincing those who don't have it to
see the potential in your idea before it's fully formed. In cases of
fundamental technology like AI, there should be a national effort to fund
individuals and teams who want to passionately pursue it. Europe has been on
the ball when it comes to this lately.

Thankfully we live in an age in which the most important resource for
developing fundamental things like AI is 'time'. If you have an internet
connection and a reasonably decent computer, you have enough resources to
create AGI.

If you passionately care about seeing your idea through, save up money, move
to a cheaper location, wire your computer to the internet, and get to work.

Once you've proven your vision, who can deny you then? Slam it on the
boardroom table, disrupt present day business models, and be the next tech
giant.

------
thr0waway1239
I was thinking about this notion of using simulated worlds for doing training.
An open source simulated world would be fantastic - to the extent that it is
actually realistic. For example, maybe collect archived gameplay from some
MMORPG and use it to build the simulated world. As new players play in the
game, the gameplay data is collected and available for the community to use.
Obviously this is not well thought out, but it would be an excellent candidate
for a group like OpenAI, assuming they aren't already building something like
that.

I wonder if it will be a chicken-and-egg problem though? Maybe you cannot even
create decent gameplay without already having a trove of data, but bad
gameplay will not attract more data from potential players?

~~~
dharma1
[https://gym.openai.com](https://gym.openai.com)

Just needs integration with popular game engines like UE and Unity3D; at the
moment it mostly works with 8-bit game emulators.
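For context, the interface Gym defines is tiny: an environment exposes
reset() and step(action), the latter returning (observation, reward, done,
info). A toy sketch of that contract — this GridWorld environment is purely
illustrative, not part of any real Gym release:

```python
import random

class GridWorld:
    """Toy 1-D environment following the classic Gym contract:
    reset() -> observation; step(action) -> (obs, reward, done, info)."""

    GOAL = 5  # episode ends when the agent reaches position 5

    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 0 = step left (floored at 0), 1 = step right
        self.pos = self.pos + 1 if action == 1 else max(0, self.pos - 1)
        done = self.pos >= self.GOAL
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}

# The standard agent loop used with any Gym-style environment:
env = GridWorld()
obs = env.reset()
done = False
while not done:
    action = random.choice([0, 1])  # random policy, for illustration
    obs, reward, done, info = env.step(action)
```

The integration gap is exactly here: wrap a UE or Unity3D scene behind this
same reset/step interface and any Gym-compatible agent could train against it.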

~~~
thr0waway1239
Fantastic. We should find a way to place these learning environments at random
locations on a world map and interact with them via the popular VR devices :-)

------
jcannell
This notion that AI/ML requires tons of data that only big tech companies have
is at least partly a myth.

Big private datasets are important only in narrow domains - if you want to
train an ANN to do ad prediction or something in medical imaging, sure.

But longer term, and for the most valuable applications, the required data is
plentiful and free. The data that advanced robots (e.g. self-driving cars) and
AGI need to learn from is all around us. Progress here is limited by compute
and algorithms, not data.

