
Open-sourcing DeepMind Lab - happy-go-lucky
https://deepmind.com/blog/open-sourcing-deepmind-lab/
======
saycheese
In case it's not obvious, DeepMind is Google:

>> "DeepMind was founded in London in 2010 and backed by some of the most
successful technology entrepreneurs in the world. Having been acquired by
Google in 2014, we are now part of the Alphabet group. We continue to be based
in our hometown of London, alongside some of the country's leading academic,
cultural and scientific organisations in the King's Cross Knowledge Quarter."

------
jakozaur
Looks like OpenAI set some standards, e.g. OpenAI Gym, which encourages others
like DeepMind to open-source more training sets.

Also, gaming seems to be driving a lot of innovation. In the 1990s games drove
CPU/GPU advances, while now they seem to be the perfect training ground for
future AI deep-learning algorithms.

------
amelius
This sounds ambitious.

I wonder if they can also address the following problem. Currently, deep
learning toolkits need thousands of training images to classify images of,
e.g., dogs and cats. A human, in contrast, could learn the difference between
a dog and a cat by looking just at a single example (or perhaps a few). So
right now, deep learning is too much "simple" pattern matching, and too little
real "AI".

~~~
BooglyWoo
I'm not convinced that a person who's never seen animals before could tell the
difference between all future dogs and cats from a single training example.
Humans draw upon a lifetime of learning and experience to achieve this 'one
shot learning' capability.

If you take a pre-trained convnet (which, by analogy is like a person who has
had 'life experience' of looking at objects), and extract activations for
unseen object categories, in many cases you CAN one-shot-learn these new
object categories. Try feeding them into a SVM or use L2 distance between test
images and the one-shot exemplar image.
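The feature-extraction approach described above can be sketched in a few lines. This is a toy illustration, not the commenter's actual pipeline: the random cluster centers stand in for the embeddings a real pre-trained convnet (e.g. a ResNet's penultimate layer) would produce, and classification is nearest-exemplar by L2 distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-trained convnet activations: each category forms a
# tight cluster in a 64-dim embedding space. In a real setup you would
# run images through a pre-trained network and take its penultimate layer.
def embed(center, n):
    return center + 0.1 * rng.standard_normal((n, 64))

dog_center = rng.standard_normal(64)
cat_center = rng.standard_normal(64)

# One-shot exemplars: a single embedded example per unseen category.
exemplars = np.stack([embed(dog_center, 1)[0], embed(cat_center, 1)[0]])
labels = ["dog", "cat"]

def one_shot_classify(x):
    # Nearest exemplar by L2 distance wins.
    dists = np.linalg.norm(exemplars - x, axis=1)
    return labels[int(np.argmin(dists))]

test_points = np.vstack([embed(dog_center, 20), embed(cat_center, 20)])
preds = [one_shot_classify(x) for x in test_points]
```

Because the pre-trained embedding already separates the categories, a single exemplar per class is enough for the nearest-neighbor rule; an SVM fit on the exemplars would play the same role.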

On top of this, there's a lot of work on memory-augmented nets and meta-
learning for learning new categories on the fly.

~~~
spynxic
I'd argue that learning new categories is less beneficial than simply
recognizing when categories differ between samples.

For example, with bears -- I personally know of black bears and polar bears. I
can be a little more detailed with fish but with dogs there are dozens of
"different" [easily recognizable] types within the same category of "dog".

------
modeless
From the paper:

> DeepMind Lab is built on top of id Software's Quake III Arena (id Software,
> 1999) engine using the ioquake3 (Nussel et al., 2016) version of the codebase,
> which is actively maintained by enthusiasts in the open source community.
> DeepMind Lab also includes tools from q3map2 (GtkRadiant, 2016) and bspc
> (bspc, 2016) for level generation. The bot scripts are based on code from the
> OpenArena (OpenArena, 2016) project.

------
cee_el1234
Is this meant to be a competitor to the just released OpenAI Universe ?
[https://news.ycombinator.com/item?id=13103742](https://news.ycombinator.com/item?id=13103742)

~~~
elefanten
No, not directly. DeepMind Lab is a 3D environment that can be highly
customized -- it looks like it's built on an old Quake engine. Their pitch
seems to include a lot of real-world task simulation. OpenAI Universe is made
to sandbox and emulate existing PC software being used with mouse and keyboard
input.

At least, that's my non-expert understanding.

~~~
john_reel
So if I want to make an AI for a game, I should use OpenAI?

------
rahrahrah
> There are two parts to this research program: (1) designing ever-more
> intelligent agents capable of more-and-more sophisticated cognitive skills,
> and (2) building increasingly complex environments where agents can be
> trained and evaluated.

I find this puzzling. If your goal were to create a human-like AI (which I
always assume is at least partly implicit in these ambitious projects), it
seems to me that the trickiest part is determining what rewards make an
optimization algorithm "human". How rewards are weighted and interact among
themselves is where the mystery is, isn't it? So why isn't this part of the
research program? Any DeepMinder want to weigh in on this?

~~~
nl
_which I always assume is at least partly implicit in these ambitious
projects_

No serious researcher is even contemplating that problem yet, except as a
thought experiment. These projects are more about working out how to work out
what questions to ask to direct research which might lead to more generalised
AI.

~~~
rahrahrah
Alright thanks.

Would you recommend any reading on the state of the art? Technical papers are
OK.

~~~
nl
SOTA on what specifically? DeepMind is probably the world leader on
reinforcement learning, so
[https://deepmind.com/research/publications/](https://deepmind.com/research/publications/)
isn't a bad place to start.

------
pedalpete
I find it interesting that they specify "3D vision from a first person
viewpoint". Can somebody explain to me the significance of first person
viewpoint vs 3rd person (or other)?

~~~
abrookewood
I assume it is because a first-person perspective would have more applications
in the field of robotics -- e.g. twin forward-facing cameras on something like
Boston Dynamics' robotic dog.

------
faragon
Where is the source code?

~~~
happy-go-lucky
> The assets will be hosted on GitHub alongside all the code, maps and level
> scripts.

------
iotb
Why is there such low activity on this thread vs the OpenAI thread?

~~~
Qworg
One is free and open, with many different software options and partnerships.

The other is free and open, but governed by a corporation, with a single
software option and no partnerships.

