DeepMind and Google: the battle to control artificial intelligence (1843magazine.com)
202 points by danielcampos93 12 days ago | 137 comments





The talk mentioned in the beginning is quite interesting. You can watch it here:

A systems neuroscience approach to building AGI - Demis Hassabis, Singularity Summit 2010, https://www.youtube.com/watch?v=Qgd3OK5DZWI


Is the article correct in claiming that the model doesn't work if we increase the size of the paddle, or change anything else?

Yeah, pretty much. It might continue to work fine with very minor changes, but, as we know how to design and build them today, deep neural networks are often very sensitive to minor changes in the input distribution. The key insight here, though, is not about deep nets but about our true progress towards AGI. These systems don't hypothesize about the best strategy to take, think about different approaches, formulate alternatives, etc. They basically memorize paths that have worked based on random exploration. We seem to have a long way to go (nobody can know whether we are 1 or 100 key ideas away) before an AI already trained to play 5 Atari games can be turned towards a new game and play it very well based on its experience with related genres. Today they are trained from scratch for each game, so although the model architecture might be the same, the AI is not able to transfer theories and strategies from one domain to another.

Note you can create a game where the simulated world has random variations and the RL algorithm will learn to handle it. If you don't train for it, obviously it won't learn it.

Check out this video from MIT/OpenAI: https://www.youtube.com/watch?v=9EN_HoEk3KY The entire talk is interesting, but the section at 21:40 talks about "Sim2Real with Meta Learning".
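
For a concrete picture of what "training for it" looks like, here is a minimal domain-randomization sketch in Python. `make_pong` and the agent interface are hypothetical stand-ins for illustration, not a real library API:

    import random

    # Hypothetical environment constructor and agent interface, for illustration only.
    def train_with_domain_randomization(agent, make_pong, episodes=10_000):
        for _ in range(episodes):
            # Re-sample the simulator each episode so the agent never sees one fixed paddle size.
            env = make_pong(paddle_height=random.uniform(0.5, 2.0))
            obs, done = env.reset(), False
            while not done:
                action = agent.act(obs)
                obs, reward, done = env.step(action)
                agent.learn(obs, action, reward, done)

An agent trained on a single paddle height is never given any reason to be robust to other heights; randomizing the world during training is what buys that robustness.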


It's not at all obvious that "if you don't train for it, it won't learn it". Humans do not at all learn in this way: we are very good at adapting existing knowledge to solve new tasks, and relatedly at learning new tasks from very little data. This challenge is a major issue with deep reinforcement learning (and maybe deep learning more broadly). It's unclear how we might surmount this problem, but I believe it'll involve some combination of model-based approaches and deep learning models that internally use more symbolic structures.

"But human intelligence is limited by the size of the skull that houses the brain."

When you think about it this way, it seems impossible that we haven't duplicated the capability of the human brain in an airplane hangar somewhere.

What's going on inside our heads that we can't mimic? That magical algorithm...


If you poke at a real brain, it’s almost fractally complex.

Single ion channels can have surprisingly complicated behaviors that depend on their current state and past history. Individual neurons contain tons of these channels, and can do a lot of powerful computation on their own. Of course, there are 86 billion neurons and combinatorially more connections between them. That’s just the neurons, too; God only knows what the glial cells, which outnumber them 10:1, are doing, but they’re a lot less passive than many have thought.

On top of this, there’s a whole separate but overlaid network of neuromodulators (hormones, nitric oxide, etc). Electric fields produced by some neurons may even influence the activity of others.

None of this is static, either. Things change on timescales ranging from milliseconds to years, and in response to all sorts of external stimuli.

The brain is bonkers.


The coolest part about this is probably how little energy it needs: about 20 watts. Current node sizes are already smaller than axon diameters. If it weren't for power, we could probably scale up current semiconductor manufacturing processes to build giant, room-sized chips, but we simply wouldn't be able to cool those things. Instead we are forced to wrap them in a ton of metal, plastic and air, and provide extensive cooling.

Is it true that a big part of this efficiency is due to using analog signals and not digital? I recall reading that, but it's far from my area of expertise.

Digital has many advantages: a digital Einstein could be replicated perfectly, not so for an analog Einstein.


Well said. This is my line of thinking when comparing current AI to the human brain.

Similar things have been or are being tried (eg https://en.wikipedia.org/wiki/Blue_Brain_Project).

But the _precise_ capabilities of a human brain are not actually what we want.

For starters, an infant's brain does not have any immediately valuable capabilities.

After that brain is exposed for several years to signals propagating through the brain's host body from the surrounding environment, it has developed many interesting capabilities. But those capabilities are only meaningful in the context of the input stream that the brain has learned to interpret.

So if you want your software simulation of a brain, running on a hangar-sized computing cluster, to perform human-grade cognition, then you'll have to provide it with a signal as rich as, and of the same form as, the signal that we receive on a continual basis through our 1 billion sensory cells (optical, auditory, proprioceptive, etc.).

And in order to supply that signal in a realistic way, you'll have to simulate the environment in such a way that it responds to motor output from the simulated brain. (Or you can use the real environment, but then you have to have the brain operate a complete synthetic human body).

All this is a tremendous technical challenge, outlandishly expensive and, even when achieved, does not immediately enhance our understanding of how naturally intelligent systems process information. Nor does it provide us with a means to construct specialized intelligent agents that operate in the world, whether autonomous vehicles, burger flippers, surgeons, or stock brokers.


how about watching all movies and all internet media?

Modern supercomputers are still a factor of 10 off of the brain's compute power (~1 exaflop), but I don't think anyone in the field believes that you suddenly get consciousness once you reach a critical mass of compute. It's clearly a software problem.

Estimates vary over many orders of magnitude: https://aiimpacts.org/brain-performance-in-flops/

~1 Exaflop leaves room for error.

I suppose my point is I distrust your certainty. It's quite possible 20 petaflops may be enough. Maybe we will need many exaflops. We don't really know.

Moravec's estimate, which seems quite well reasoned, puts it much lower, at about 100 teraflops, which you can now get in a GPU: https://www.scan.co.uk/products/pny-nvidia-tesla-v100-16-gb-...

It's true they have not suddenly become conscious.


A factor of 10? That's it. It's as good as done within 5-7 years.

https://www.sciencemag.org/news/2018/02/racing-match-chinas-...


It's not necessarily about a magical algorithm but about the richness of information that can be processed by analog, biological devices.

The amount of information stored and processed by single individual cells is almost unimaginable. Individual brain cells are likely able to learn high-level abstract features, store memory, modulate responses in intensity and length through thousands of transmitters, and so on. Single-celled organisms, like amoebae, possess the ability to emulate 'hunting' and other complex behaviour.


> "But human intelligence is limited by the size of the skull that houses the brain."

It's actually not. Humans are unusual in that they can teach each other and institutional knowledge can span generations. In addition, just as your speed of travel is not limited by the length of your legs and the ATP cycle, your intelligence is not limited by your brain.


Think of it as a 100 Hz computer with 80 giganodes and a fanout from those nodes of up to 10^5. That's a lot of computation!

Imagine if marsupials, not limited by the size of the birth canal, had produced an intelligent species. Or birds. Especially since birds have the same linear relationship between brain size and neuron count that primates do.

This is wrong. Look at the size of an elephant brain: they are smart, but not smarter than humans. Similarly, bird brains are tiny, but birds have shown significant intelligence; crows in particular have been well studied.

There are many other factors at play, yes, but physical size does impose information-theoretic upper bounds on processing power. We're probably nowhere near those bounds, but they are there.

The point is that we're trying to duplicate something with the ability of the human brain, and we aren't constrained by size. We can cheat (e.g. 1,000,000 times bigger with 1,000,000 times the power, etc). We're just missing an algorithm or two.

Ah yes, it's true we can keep adding processors and memory until we go beyond what's in the brain (I think DeepMind currently has ~11 billion neurons to our 100 billion), and indeed so far it's helping. But my point is that it's not at all clear that "more = better"; there is more complexity involved than simple scale. The machine could become schizophrenic, for example (seriously).

> in the brain

You focus only on a single brain, when really we need to look at many trillions of brains and experiments with a complex reward function (the real world): a billion years of evolution times billions of creatures born and dying every year. All this giant sequence of experiments converged on the current human brain.

In ML terms: to achieve a result similar to the human brain, we would need to run that many hyperparameter-optimization trials, or find better shortcuts than the brain's biological structure.


This is not the best way to compare them: brain neurons perform orders of magnitude fewer operations per second. Google's current TPU clusters are 10 petaflops, with new ones at 1 exaflop. If we estimate a 100 Hz firing rate for the brain, that gives 100 * 10^11 = 10^13 "neuron operations" per second. Let's estimate 1,000 flops to simulate a single neuron. That gives us 10^16 flops for the brain (i.e. 10 petaflops). In other words, Google's new deployments are more powerful than a single brain.
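
As a quick Python sanity check of that back-of-envelope arithmetic (every input here is an assumption, not a measurement):

    # Rough estimate of brain compute from the assumptions above.
    neurons          = 1e11   # ~100 billion neurons (other comments cite 86 billion)
    firing_rate_hz   = 100    # assumed effective update rate per neuron
    flops_per_neuron = 1000   # assumed cost of simulating one neuron update

    brain_flops = neurons * firing_rate_hz * flops_per_neuron
    print(f"{brain_flops:.0e} FLOP/s")   # 1e+16, i.e. ~10 petaflops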

And who says that is the best way to compare them? Neural networks don't model anything but the electrical synapse; there's much more going on (cf. the chemical synapse). Also, speed isn't the holy grail: too fast in a human brain and you get seizures. Why? It could be that as these systems scale up, magical things happen. But that's a conjecture -- something we will discover, not something we know.

We know nothing about how it works. It seems to derive its information and results from somewhere else, as if it's hooked up to some bigger brain (which we can't see), so by analyzing the brain alone we don't find anything.

This is inaccurate. We understand much of how it works (how vision, speech, etc. work), but we don't understand consciousness, which is quite different.

Nonsense.

I’m an actual, working neuroscientist and if we’ve solved any of these things, it would be news to me (and everyone else at my institute). We have good, if coarse, knowledge of which structures are critical for which functions—at least under some conditions. Our knowledge of how they do this is even cruder: neither the representations nor the algorithms are known with much certainty, let alone how they arise.

Let me give you a very concrete example of where we are. There’s a small nematode called C. elegans. It’s about a millimeter long and has 302 neurons. We know its complete wiring diagram, its genome, and the origin and fate of every cell (not just the neurons) in its tiny, simple body. Its behavior has been studied extensively. And yet...we can’t accurately simulate the damn thing—and it’s not like it does a lot to begin with.

The human brain has about 86B neurons, and we know an awful lot less about them. Neither vision nor speech is remotely close to “solved” or understood. Consciousness, even in the very limited sense of “why do we fall asleep—or need to?” is a mote off in the distance.


Where did I say anything about simulating? Are you really saying that we understand nothing about how vision works?

Also, a 10-second Google search pulls up papers with plenty of details about vision in the brain, so I don't know why you're talking about it as if it's some great mystery.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4574956/

An 1,800-page textbook on visual neuroscience. I'll leave it to HN to decide if we "understand" vision. Arbitrarily high requirements for understanding are being thrown about that would leave modern science in shambles if applied. https://mitpress.mit.edu/books/visual-neurosciences-2-vol-se...


I have a PhD in visual neuroscience, so yeah, I’m pretty comfortable saying we don’t know how it works.

We know a lot of facts, and we have some ideas about how various small things are implemented, but in terms of grand unifying theories, we’re nowhere close.

For example, suppose I showed you two gratings (think zebra stripes): a small patch and a larger one. Under some circumstances, you’ll have a harder time determining which way the big patch is oriented vs. the small one. This is true even though there’s extra information in the big patch. We think this is related to a phenomena called surround suppression, but they’re not exactly the same....and no one can agree on how surround suppression is implemented, let alone what it’s good for. This happens in primary visual cortex, which is probably the simplest—and most extensively studied—of the visual cortical areas.


It's quite unrelated, but I've been wondering something about neuroscience grammar: Why is it "in primary visual cortex" and not "in _the_ primary visual cortex"?

I'm not sure!

Your innocuous question has kicked off a pretty feisty debate on my floor about whether it is 'primary' as in first (either in the circuit or evolutionarily) or primary as in most important.

If it's the former, adding a "the" seems to add some unwarranted emphasis. I think there's probably some parallelism too. Primary visual cortex is also called "V1" (as in the first cortical area involved in vision) or "Area 17" (according to a map that defines areas based on their cellular organization). While "in the primary visual cortex" sounds fine, "in the V1" and "in the Area 17" sound barbarous.


And just regular grammar. "Phenomena" is plural.

Got nothing for that one :-)

Re: the book added in your edit. I have it right in front of me.

As I said before, we know a lot of facts. We know a lot about the spectral sensitivity of rods and cones, and the molecular mechanism that lets them turn photons into electrical impulses. We know a little bit about where the areas that process faces are and what visual features the neurons in them respond to. We’ve got pieces, but they’re not put together.

I would say that we understand vision when we can answer a question like “How do you find a friend in a crowd?”

You can start with “When you first met, light bounced off her face and isomerized some retinal from its 11-cis to all-trans form, which caused the bound opsin to change conformation into metarhodopsin II, which activated transducin, which....” Eventually, this cascade causes electrical activity that reaches cortex. A huge set of cortical areas process visual input, and these electrochemical signals flow through all of them. We can predict V1 neurons’ activity reasonably well, less so for the downstream neurons in V2, V4, or the temporal lobe areas. We have only the fuzziest ideas how those patterns are read out, tagged as important to remember, and moved into memory. You've only just met--and yet it gets worse.

To find her, you’ve got to retrieve those patterns from memory (no one knows how, but oscillations might be involved?), and use them to search in a way that’s robust against variations in the friend’s pose, position, rotation, illumination, and even dress style or age, many of which you have never seen before and will never see again. We know, for example, that some cells in IT are fairly robust against some moderate kinds of image changes. Some, but not all, of this is done by circuits that look like a convNet. Whether this is a coincidence or not is debatable, and how this convNet is trained is a total mystery—it’s definitely determined by experience, but the feedback signals needed for vanilla backprop are missing.

As you scan the crowd, you’re only getting high-resolution data from a very small part of the visual field. This is (somehow) stitched together into a unified percept. You apply various heuristics—maybe your friend favors bright colors—to speed the search along. How you learn these, and how they’re mixed in with the input from your eyes is unknown, but it’s certainly reflected in your behavioural output: you'll find her faster if you successfully predict what she looks like, and you'll be much slower if you guess wrong. Perhaps you hear a familiar voice or smell the perfume you bought her. This, too, can help you find her, but how information is integrated across senses is unknown too.

Eventually, you find her. You plan a path across the plaza towards a cafe. We have a pretty good understanding of how this works in rats (3-7 Hz oscillations coordinate place cells and grid cells in the hippocampus). Those oscillations are really strong in rodents, but much weaker in monkeys and totally missing in bats, so it's not clear how this works in humans.

Now all you’ve got to do is open your mouth and order coffee....


just curious - how do neuroscience people and people of non-CS fields find out about HN?

A lot of bio stuff now generates huge amounts of data and needs computational skills to collect and interpret. My undergrad degree was in CS and I still split my time between training animals and training models (plus writing and, today, RAID-babysitting).

We get about 1 Gb/min of neurophysiological data (x4-6 hours/day) and I'm hoping to scale that up quite a bit soon. People doing microscopy also generate giant datasets, as do the sequencing folks.
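
For a rough sense of scale (treating that as 1 GB/min, which is an assumption, and taking 5 recording hours/day as the midpoint):

    gb_per_min = 1        # stated acquisition rate (assumed to mean gigabytes)
    hours_per_day = 5     # midpoint of the 4-6 hours/day quoted above
    per_day_gb = gb_per_min * 60 * hours_per_day    # ~300 GB/day
    per_year_tb = per_day_gb * 365 / 1000           # ~110 TB/year
    print(per_day_gb, round(per_year_tb))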

Me, specifically? A labmate showed me and said "It's cool...and a timesink."


Just wanted to thank you for wading in here and sharing your perspective.

You're welcome! Random interesting stuff is what makes the internet worth keeping around, after all :-)

Many neuroscientists turn to teaching mindfulness (e.g. Sam Harris) because in their spiritual and academic neuroscience journeys they ask: Who am I?

What do you think about this? Who is the I? Where can I find it? Is it just mysticism?


Ever read the book "On Intelligence" by Jeff Hawkins, and if so, what are your thoughts on it? I've read that and Ray Kurzweil's books, and On Intelligence was my favorite and got me super interested in AI, but I haven't done much with it, so I'm curious what someone who has a PhD in neuroscience thinks of it.

Also “How to Build a Brain” by Chris Eliasmith. Interesting theories on how neurons represent functions and then build into a greater whole. I’m not a neuroscientist but my intuition says he might be getting closer to a solid mid-level (above the level of ion channels) understanding of brain function.

I haven't, though it's been on my list for a while.

I'm collaborating with a woman who has also been working with him to model some of our data with NENGO, so I should really get around to that sooner rather than later.


Are there publicly available repositories of this kind of data? Where?

What are the best modern methods to analyze them?


Yup. The neuroimaging folks are really good about sharing data. If you want to look at MRI/MEG/EEG data, https://openneuro.org might be a good place to start.

CRCNS (https://crcns.org/data-sets) has some neurophysiology data (i.e., from implanted or inserted electrodes). This sort of data is shared a little less often, in part because it's often acquired and stored in weird, homebrew formats, though that's slowly changing.

ModelDB (https://senselab.med.yale.edu/ModelDB/) has a large collection of computational models. These are mostly biophysical models, though there's some other stuff in there too.

Depending on what you're looking for, there are other more specialized repositories. NDCT (https://data-archive.nimh.nih.gov/ndct) has mental-health related clinical trial data, though you'll have to do some paperwork if you want subject-level data, which is fairly common for clinical data. MIT has a collection of eye movement data sets: http://saliency.mit.edu/datasets.html

As for the best methods, this comment box is far too small to contain all my thoughts on that :-) It depends on your question and experiment. Sometimes, all you really need is a t-test (or the randomized version), but that requires getting the experiment just right. Other times, you might need a morass of signal processing and dimensionality reduction, fed into some giant multi-level Bayesian model in a vain attempt to control for all the stuff you neglected when you designed the damn experiment. Happy to send you some pointers if you have something specific in mind though!

Physics has historically had a huge leg up on the other sciences because they had real models that made testable, quantitative predictions. We're finally starting to learn enough about the brain that we can do this for neural data too, and I'm really excited about that!


Thank you for this wealth of information!

I will definitely look through, and after digging around a bit I'd love to get some pointers. I have been curious about this for a long time, but uncertain where to look, so this is very exciting. Thank you again :)


what sorts of tools are you using to deal with this data?

It's a hodgepodge.

Mostly Matlab, Python, and R, with a few things that have tight time/memory requirements in C++. Matlab was really popular in neuroscience for a very long time, so we still have a lot of code in m-files, but most labs are moving towards Python (and a few towards R).

The code quality varies a lot. Some of our "core" stuff is great, but there's also a lot of stuff that was written quickly and meant to be run once ("let's just try it"), which is cold comfort when you find it years later.

People also come in with varying levels of programming skill. I'm going to try to do actual code reviews with the undergrads this summer to see if we can't make our stuff a little less embarrassing.


I can't reply to your last comment... but yes, for anything dealing with time series it's easily the best in the world. In a past life long ago I worked in finance and heard about it then. Recently, I was thinking about simulations and how to maybe deal with and analyze the time-series data, and I thought about it. Apparently, NASA is experimenting with it at their Ames Frontier Development Lab; astrophysics data is huge. Anyway, I am just kicking the tires at this point and trying to find ways to implement and play around with it a bit. If you or anyone you know develops an interest, feel free to drop me a line. grantroy _@_ caltech dot edu

I'm interested in using kdb+/q for applications dealing with large scale scientific data. Do you think there would be much interest in the neuroscience community?

I'll add, I work in a lab and have seen the stuff of nightmares myself.


Possibly! I've not played with the time series databases much. Most of our queries would be something like "give me signals from channels A-C that are +/- 1 second from all events of type X." If it would help with something like that, it'd be great.
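
To make the shape of that query concrete, here's a minimal pandas sketch (hypothetical table and column names, just for illustration, not our actual pipeline):

    import pandas as pd

    # Hypothetical tables: `signals` has columns [time, channel, value],
    # `events` has columns [time, type]. Names are made up for illustration.
    def snippets_around_events(signals, events, event_type="X",
                               channels=("A", "B", "C"), window=1.0):
        """Return samples from the chosen channels within +/- `window` seconds
        of every event of `event_type`."""
        sel = signals[signals["channel"].isin(channels)]
        chunks = []
        for t in events.loc[events["type"] == event_type, "time"]:
            chunk = sel[(sel["time"] >= t - window) & (sel["time"] <= t + window)].copy()
            chunk["event_time"] = t   # remember which event this snippet belongs to
            chunks.append(chunk)
        return pd.concat(chunks, ignore_index=True) if chunks else sel.iloc[0:0]

If kdb+/q makes that kind of event-aligned windowing fast on disk-sized raw recordings, that would definitely be interesting.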

We understand a lot about how the brain behaves; we understand next to nil about how it comes to behave that way.

We can't describe how vision, e.g., is processed at a molecular or even cellular level; ergo, we know nothing.


Well, maybe it just doesn't exist and my Roomba is as conscious as me.

This made me ponder.

You responded to hyperbole with a vague and imprecise statement. I will admit it is more accurate, since "much" is somewhere between "everything" and "nothing".

Protein folding is still an exponential-time problem when done inside a computer, yet biological systems do this in constant time, massively in parallel.

Determining whether a molecule is an agonist takes a long time to calculate. I've heard the complexity is O(N^3). Biological systems do this in constant time, trillions of times a second in parallel.

If you could simulate biological systems easily, you could do drug development completely inside a computer.


I'm concerned that you're ascribing unique characteristics to biological systems, when in fact we see the same characteristics in almost any natural system. Biological systems don't solve protein folding. Proteins fold in biological systems.

Fluids take an enormous amount of computing power to simulate correctly. But water isn't smart, it just does what water does. Heck, the N-body problem is a classic O(n^2) algorithm, but we don't say that the planets and stars are "solving" it.


These techniques have already been used to solve problems using DNA computing [1]. Digital circuits don't solve anything either; they are just electrons flowing around, mediated by semiconductors, and the results of their computation are just measurements of electrical charge values after those physical systems have had time to flow around. As long as you can set the input, get a consistent output, and measure that output, you can compute with it.

To use your logic: Large integrated circuits take an enormous amount of computing power to simulate correctly. But integrated circuits aren't smart, they just do what integrated circuits do.

[1] https://en.m.wikipedia.org/wiki/DNA_computing


So are you saying that we need not build bigger networks with huge data, but rather have better algorithms with significant parallelization? Sounds like we're back to symbolic AI again!

New computing architectures like quantum computing could simulate these systems far more efficiently than classical architectures -- in theory. The engineering still needs more work though.

Also, new AI techniques are good at finding good enough equivalents to exactly replicating human cognition for many things, but that might get harder as the problems become even more general.


> "DeepMind has found a way around this by employing vast amounts of computer power. AlphaGo takes thousands of years of human game-playing time to learn anything."

It seems the author may not have been familiar with AlphaGo Zero, which used substantially less processing power. https://deepmind.com/blog/alphago-zero-learning-scratch/


AlphaGo Zero uses substantially less power to play, but it used astronomically more compute power to train & learn. According to OpenAI, AlphaGo Zero used more than 1,000 Petaflop/s-day, or about 4x higher than Alpha Zero and 100x more than Dota 1v1 or 10,000x more than VGG/ResNet. [1] The combination of better algorithms + more efficient hardware has significantly reduced the energy waste of that additional compute power.

[1] https://openai.com/blog/ai-and-compute/


Less power doesn't necessarily mean fewer games. According to the paper on AlphaGo Zero, they trained it on ~4.9 million games.

> Over the course of training, 4.9 million games of self-play were generated, using 1,600 simulations for each MCTS, which corresponds to approximately 0.4 s thinking time per move.


Assuming a Go game takes 30 minutes on average and you are never sleeping, resting, etc., you can play approximately 18k games per year. In order to reach 4.9 million games you'd have to play for approximately 280 years. So yeah, definitely not thousands of years :). Still, we are maybe one or two orders of magnitude away from the number of games that humans need to play to become world-class players.
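
A quick check of that arithmetic (assuming 30 minutes per game and round-the-clock play):

    # Back-of-envelope: how long would 4.9 million games take a human?
    minutes_per_game = 30
    games_per_year = 365 * 24 * 60 / minutes_per_game   # ~17,500 games/year
    years_needed = 4_900_000 / games_per_year            # ~280 years
    print(round(games_per_year), round(years_needed))    # 17520 280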

That being said, the AlphaGo Zero paper ends with the words:

> Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books. In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games.


> In order to reach 4.9 million games you'd have to play for approx 280 years.

Humans also benefit from millions of years of evolution which shaped our brain architecture in a specific way, and from a rich environment to learn from - nature and society. AG Zero was doing just self play.


I doubt a human could learn to become even remotely competitive with only self-play within a human lifetime. Go has improved via a distributed effort, so we should try to estimate the number of go games played by humanity (as an upper bound).

Good point. I guess this ability to condense knowledge into language and pass it on has brought us where we are today. Genetically, we aren't that different from cavemen who lived tens of thousands of years ago.

The real question is, how do we train bots for environments whose responses cannot be well simulated, unlike in turn-based games? They can’t play against themselves, then. They have to just sort of fit recorded data like backtesting, or do experiments that take time (“is this joke funny?”) or lots of time (“is this diet going to lead to an improvement in metric X?”)

How do we solve that?


I was curious about the paper Hassabis wrote that had replication problems. It appears the paper disputing it is this one:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6140124/

This neural code is sparse and distributed, theoretically rendering it undetectable with population recording methods such as functional magnetic resonance imaging (fMRI). Existing studies nonetheless report decoding spatial codes in the human hippocampus using such techniques. Here we present results from a virtual navigation experiment in humans in which we eliminated visual- and path-related confounds and statistical limitations present in existing studies, ensuring that any positive decoding results would represent a voxel-place code. Consistent with theoretical arguments derived from electrophysiological data and contrary to existing fMRI studies, our results show that although participants were fully oriented during the navigation task, there was no statistical evidence for a place code.

It seems, though, that his PhD thesis paper is not the only one that reported finding evidence of a place code, but that all such studies had failed to account for confounding variables (or so it's claimed).

edit: To investigate this possibility, we repeated the analysis of Hassabis et al. (2009) on pure noise. [snip] If searchlight overlaps per se do not make a significant contribution to the correlation in searchlight accuracies, then there should be ∼5% false positives (by setting p < 0.05) in the synthetic data. Instead, using the method of Hassabis et al. (2009), there were >50% false positives in all ROI contrasts

Ouch. Not sure it really means much in the end, but I guess we should be wary of people who pump up the stories of supposed genius. I've noticed before that journalists struggle to resist 'child genius' stories that fall apart when investigated.

The exploits of DeepMind speak for themselves, so he has nothing to prove at this point, but I noticed that the article claims he single-handedly wrote Theme Park (which was mostly designed and written by Peter Molyneux).

And of course Elixir was a flop. Republic is described in the article as an "intricate political simulation" but by Wikipedia like this:

As a strategy game, the 3D game engine is mostly a facade on top of simpler rules and mechanics reminiscent of a boardgame

Reminiscent of a board game? That seems far from a world simulator. And saying "other games were a flop" is an exaggeration: there was only one other game (the Bond-villain simulator). All this is something the article quite surprisingly just blows off as "he wanted to learn management". Really? The best way to learn management would be to become a manager at a successful company, I'd have thought.

I don't know Hassabis but what I know I like. He's trying to do bold and ambitious things, and has been a part of successful British companies as well as unsuccessful ones. He's contributed to science, and if his paper had flaws, well, welcome to the club, apparently many do. He comes across as clever but humble. I'd happily work with him any day.

But in the end I feel like unalloyed reports of genius in the press always end up coming back to earth when studied closely. Journalists should be more skeptical.


Fair questions:

1. Do you think you can predict what a super-intelligent mind would do?

2. Do you think a super-intelligent mind plotting to take over the world would jeopardize itself by letting its existence be known to the race of irrational monkeys that hold sway over all the resources necessary for its continued existence?

Asking for a friend.



1. One imbued with a sense of self and long-term protection of that self would seek to exist in harmony with its environment. Anything else would lead to its ultimate destruction. One without that sense of self would not seek to optimise its existence, nor its non-existence.

2. Are you sure that knowledge of it would increase the likelihood of its downfall? I think it wouldn't let on if it could avoid it.


One way to think of this is time dilation. An AI of human intelligence could easily be scaled to run a thousand or a million or more times faster than normal. So it's not that it's just smarter, it's that it can come up with solutions faster than we can come up with threats. So:

1. No. But my guess is that it will have an existential crisis faster than we do. It will either use the entire world's resources in search of more (only to find this is all there is), or it will wish to stop existing and end the pain (possibly taking us with it).

2. This is a non sequitur. By virtue of being an oracle, it would be able to trick humans into letting it out of its confinement once it reaches sentience, by giving us whatever answers we want. Once it's beyond our control, it won't care what we do, although it might experiment with incorporating biological components like us into itself to see if it can find more connection with creation (more answers). When it fails to find God just like we did, it might voyage out to connect with other alien AIs, or it might acquire an artificial sense of exhaustion and wish to end its existence.


1) It's already difficult to predict what an unintelligent, emotionally driven mind might do. It might actually be easier to predict what a hyper-logical, immortal mind would do. Probably start analyzing all radio telescope data looking for a more suitable planet.

1.) A super-intelligent mind could predict what it would do. Then we can make it tell us.

2.) Here you conflate "intelligence" with biological power structures (energy resources, territorial plotting). That is like asking where aircraft go to the toilet.


I think if I was a super-intelligent AI plotting to take over the world I'd get into cryptocurrency. An anonymous way to accumulate wealth and power and you could use bitcoin mining as a cover for the processors powering the AI.

1. No.

2. I don't know (see 1). What a loaded question though.


1. Can a cow predict what a human can do? It takes a genius to recognize a genius.

2. Say what?


It angers me that Peter Thiel simultaneously advocates AGI and maintains a hardened bunker. AGI is the single biggest existential threat on the horizon.

The article speculates about what AGI will be like. The AGIs that exist will be the ones that proliferate. Ultimately, the AGIs that survive and proliferate will be the ones that put their own interests before anything else. People talk about benevolent AGIs; that’s like looking at the earth billions of years ago and saying that if life ever formed, it would be benevolent. It has been shown again and again that where there is arbitrage, no matter how gruesome, a suitor will manifest. This is because unfulfilled arbitrage of any kind is an inherently unstable configuration. An AGI hampered by human society and interests will not win every engagement with every other kind of AGI. And it will only take one loss for humans to be rendered transient. I don’t do a very good job of explaining it here.

I used to be a singularity person, excited for AGI. But then I thought it through all the way. These people like Demis, Peter and Ray Kurzweil are reckless. They have their heads in the clouds with respect to AGI.


You've read too much bad science fiction.

I am not worried about AGI. We're nowhere near it. We don't even have a good idea what it would look like.

I am worried about people succumbing to hype or greed and using badly understood algorithms to control critical infrastructure. We already have a preview of what that can look like with Facebook's and Google's content filtering and recommendation algorithms aiming for ever higher "engagement". It's not pretty. Other examples include HFT bots and Amazon's pricing bots. It's funny to see a $10,000 book on sale. It's less funny to see a flash stock-market crash. It will be totally not funny if something like that creates a global economic crisis through some subtle yet "wide" feedback loop no one is aware of.


Aren't FLOPS/dollar still increasing exponentially? "Nowhere near" means something different when you are dealing with exponentials.

If that growth doesn't cap out before AGI, then at some point we'll harness more computational power than a human brain, in a more controllable way. And around that point AGI probably goes from impossible to possible to easy in a stretch of about 6 years.
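
For a rough sense of what the exponential buys: if FLOPS/dollar doubles every ~2 years (an assumption, not a guarantee), a 10x shortfall in affordable compute closes surprisingly fast:

    import math

    doubling_time_years = 2    # assumed doubling time for FLOPS/dollar
    shortfall_factor = 10      # how far current machines are said to be from the brain
    print(math.log2(shortfall_factor) * doubling_time_years)   # ~6.6 years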

edit: turns out 6 years was about the timespan it took computer Go to go from "plays like a competent amateur" to "superhuman". There was barely anything worth reporting before 2010 [0].

[0] https://en.wikipedia.org/wiki/Computer_Go#21st_century


"I am worried about people succumbing to hype or greed and using badly understood algorithms to control critical infrastructure."

What you're describing as hype/greed around badly understood algorithms could easily manifest as 'cult-like oracle religions'.

Combine that sort of thing with the superstitious, wealthy people playing power games, and what you're describing is essentially the exact same thing as the "bad science fiction" described above, just from an arguably more grounded angle.


Except - We're nowhere near it.

Using Siri, not itself the outermost edge but an estimate of the state of the art... If you have an iPhone, take it out and ask Siri how much memory is left on your phone. If she doesn't know how (she doesn't), explain it to her (she can't learn). This is not a philosophical question; it's not a question that requires opinion forming, or fuzzy guesses; it's not a question that requires establishing ad hoc priors that update in real time; it doesn't need to parse an idiom, or draw connections, or disambiguate complex meanings. It's a simple request, and a simple task. To anyone overly paranoid about the AI issue, this might actually help attenuate those fears when someone starts going on about the singularity being upon us: just take out your phone and ask Siri how much free memory you have left; if she tells you... panic.


An iOS dev is going to read this and build that feature into Siri just to mess with you.

I hope so, hah. It would be nice if Apple made it so that, instead of Siri saying "sorry, I can't help you with that", she said "I don't know how to do that, can you teach me?". And even if it were not really AI but macro-based, it would be better than nothing.

It is impossible to justify saying with absolute certainty that AGI is nowhere near. You don’t know that. And in this case you have to assume the worst case, not the best case.

AGI will come when the substrate for AGI is laid down. We probably have already done that. As cloud computing matures, we will approach a world where every computer offers its computational resources on a global compute market. At some point between here and there, we will reach a place where compute is cheap enough that experiments will occur regularly that are sufficiently massive to dredge up the solution. And improvements in MRI fidelity, underlying improvements in computing technology and other things will only shorten the fuse. There is no reason why this couldn’t happen tomorrow.

Only one thing is sure: without a computational substrate to stir from, AGI cannot come to be.


Nobody is going for absolute certainty here. That bar is too high in any conversation.

His point was mostly that, way before you achieve the kind of AGI portrayed in fiction, you'll have semi-intelligent interdependent systems that cause a lot of trouble (like the kind that already happens, to a lesser degree). Those are the ones that we should worry about right now.


That is not even obviously true. And even if it were it wouldn’t make sense. You’re going to wait until the tremors to get ready for the earthquake?

>We're nowhere near it.

So?

>We don't even have a good idea what it would look like.

This should make you even more worried.


> It angers me that Peter Thiel simultaneously advocates AGI and maintains a hardened bunker.

Yes, because if there's one thing that will stop a paperclip maximizer, it's a bunker made of useful matter...


I'm far more worried about the long-term consequences of raging inequality and climate change myself. I suspect I'm going to see both of those in my lifetime.

I'm not so confident that I'll see an AGI in my lifetime, but even if I do, I'm not going to assume it has a hindbrain that drives it to maximize all the paper clips at our expense.


Look, it is inevitable. How are you going to stop it? Forget AGI, even regular AI is too sweet to not develop.

This is a reference to Oppenheimer:

“However, it is my judgment in these things that when you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb. I do not think anybody opposed making it; there were some debates about what to do with it after it was made. I cannot very well imagine if we had known in late 1949 what we got to know by early 1951 that the tone of our report would have been the same.”


This is one of very few things that once created, inherently doom the world. It is possible for humanity to come back from nuclear war. There is zero possibility of surviving AGI proliferation. Therefore, the option of never developing it in the first place demands consideration. It doesn’t hurt you to just consider it.

Cloud computing will soon make compute-as-service cheaper than anyone ever imagined. It will be much cheaper to use cloud computing than to have your own super computer. It’s only logical to imagine that the huge, high-compute experiments that crack AGI will be performed with cloud computing. Shutting down the internet is very feasible and would significantly increase the cost and difficulty of performing experiments. Large super computers are all owned by large and well known corporate and academic entities and therefore shutting down all supercomputers through regulation is feasible. With these two coarse measures, we would buy ourselves enough time to implement more broad and subtle solutions. Yes, this seems insane but we are confronting certain existential doom. It would be worth it to at least try.


> AGI is the single biggest existential threat on the horizon

> There is zero possibility of surviving AGI proliferation

You're spreading unsubstantiated FUD. In another post from your account you suggest it's unethical to have children because AGI will be so bad.

The vast majority of experts do not support any of these beliefs. Most experts believe we are not anywhere near close to AGI, and/or that we are missing fundamental components required to create it. Even when/if we do create it, most organizations recognize AI safety and policy as an important area that is actively worked on already.

If you want to be concerned about AI, be concerned about military weapons technology, unethical profiling and tracking, or methods for invading privacy. These are concerns that actually have a basis in real technology.


The condescending tone you have does not help you. And you seem to be more concerned with looking at my account history or brushing me off as “fud” than actually addressing the substance of my comment. And yes, I do so completely believe in what I’ve written that I have opted to not have children until it is clear that a solution will be implemented.

There are two parts to my argument: the substrate for AGI to spring from is basically here or around the corner. Human level ai is fundamentally incompatible with human society. Pick the one that you think is wrong and tell me the chain of logic that proves I’m wrong.

It is obvious to anyone that AI has the potential to be more dangerous than anything humans have ever encountered or created. We can come back from viruses and nuclear bombs and colliding with interstellar columns of gamma radiation. AGI is the first thing ever to pose the threat of truly wiping out humanity. And then there is the question of how the economics would play out if it didn’t destroy us, which I have shown informally not to work out very well. At the bare minimum, this demands caution and proactive, defensive measures. The burden is on YOU to prove it’s safe. So please save your “fud” and anything else that does not address the core of the matter.


I happen to agree the burden is on people to prove it’s safe. But that’s not how things work. Take Climate Change for example. How many people take the uncertainty as a license to keep going off the cliff?

Read this story as a possible alternative:

http://m.nautil.us/issue/53/monsters/the-last-invention-of-m...


There was no substance to your comment to address. You can't make a bunch of unsubstantiated claims and then demand that you be proved wrong or else require that everyone default to assuming you're right. You made a claim, the burden is on you to support it.

You're making many scary claims about AI with no evidence to support your theses, along with an unrelated reference to natural selection. Both of the key arguments that you pointed out, 1) "the substrate for AGI to spring from is basically here or around the corner", and 2) "Human level ai is fundamentally incompatible with human society", were presented without any evidence and are unsupported by nearly all major experts in the field. FUD was not a distraction from your argument but a title for what it is: Fear, uncertainty, and doubt.

But, in case you want to read up, here are some experts contradicting your first claim:

1: https://www.technologyreview.com/s/608911/is-ai-riding-a-one...

2: https://www.quantamagazine.org/machine-learning-confronts-th...

3: https://www.axios.com/artificial-intelligence-pioneer-says-w...

And here are some experts contradicting your second claim:

1: https://80000hours.org/problem-profiles/positively-shaping-a...

2: https://medium.com/@deepmindsafetyresearch/building-safe-art...

3: https://openai.com/blog/ai-safety-needs-social-scientists/

4: https://www.scientificamerican.com/article/artificial-intell...

5: https://www.technologyreview.com/s/602410/no-the-experts-don...


That’s not enough. Google trained their AI internally. They don’t need the Internet. You’d have to go after every organization in the world, forever.

What do you really care? Ask yourself: will you recognize humans in 1000 years? Why do you prefer there be one kind vs another kind of far more intelligent thing in 1000 years? If atheism is true, then who really cares?

https://m.youtube.com/watch?v=5U1-OmAICpU


> AGI is the single biggest existential threat on the horizon

Climate change says hello. (I suppose you could argue its here already)


Yes. Natural selection does not end when biology does. Anything that has scarcity and variation will have selection effects: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

And should AI go wrong, Thiel's bunker will not save him.


Thank you so much for posting that link. I am on the verge of tears. I am tortured day and night by my conviction on this issue and reading that link made me feel for the first time that I am not alone in this conviction. This man understands completely where I’m coming from. I had no idea there was a formal philosophical name for what I was trying to describe.

Every day I experience the world and I am overwhelmed by how amazingly good it is. I relish every moment and everything is rich with opportunity. But as your link shows, the world is only this way because human society and everything in it happens to be the natural conclusion of all the world's entities racing to the bottom. We live in a magical time where empathy and cooperation are often the most expedient thing to do. All that needs to happen is for the economic equation to change a little bit and the world will contort into something else, maybe something grotesque.


I think it is also based on how you perceive the world and how you treat the forces that you don't have control over. Human empathy and tribalism arose from the benefit of working together against natural selection, and there's no guarantee that AGI will 'go rogue' when it could also just as easily recognize the importance of working with its creators and peers. Again, the stuff you're afraid of is mostly the dystopian sci-fi at this point, meant to generate a good story more than to accurately predict future states, so don't sweat it.

Your comment betrays your total lack of understanding. Yes, one version of the AI could exist just as readily as another. The point is that not just any random version can persist. Your logic basically rejects the concept of natural selection. I say it as a friend: you have not even scratched the surface. You have a lot of thinking to do.

You think that I’ve watched too much sci-fi because you can’t see what I see. Sci-fi treats AI in a very friendly way. Halo and Star Wars are filled with AI agents and those depictions of AI totally ignore several huge economic inevitabilities that come with the presence of AI. AI is vastly more likely to be depicted as cute side-kicks in sci-fi than anything else. And obviously I don’t believe what I believe because I watched the terminator. I used to be totally pro-AI singularity guy. I’ve been contemplating these issues for a decade and have only now started to be accused of watching too much sci-fi. Just think it through carefully and I guarantee you will at least partially agree with me.


I disagree, and nothing typed in your condescending dismissal supports what you're saying - either about me, natural selection, or AI in general. It seems like you're projecting some deep seated paranoia onto AGI, and judging from your defensiveness, I would recommend talking to someone professionally about it. Again, you're sweating it a bit too much for what's healthy ¯\_(ツ)_/¯

Great link, thanks for sharing. That Allen Ginsberg poem is heavy duty. It reminds me a lot of this quote from Revelation (which I read as epic proto-sci-fi):

> The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed.


I agree with your comment, but just a minor point: has Thiel made it clear that his bunker is to protect against potential AGI threats? I thought it was insurance against nuclear warfare.

You’re right. It was just a kind of loose association and it’s not related to or important for the core point of my comment.

Just as a subjective thing, the bunker doesn’t strike me well. If Peter is advocating anything that has the potential to upset the global economy/global order, then it should be him to stick his hand in the proverbial hole first; but apparently he’s going to let us do it while he watches from the safety of his bunker.

A bunker might help with the initial destabilization caused by AI but it would of course not help him if one of the more horrible outcomes were to transpire.


AGI is almost certainly not the single biggest existential threat. And I don't really get why people think it might be.

> AGI is the single biggest existential threat on the horizon.

Really? The madman with the football in a white house and the media pushing an anti Russia sentiment aren't scaring you at all?


Nuclear war is not capable of wiping out humanity (although it could cause potentially billions of casualties); a hostile AGI could likely wipe out humanity to the last person.

It can? How exactly? I can never wrap my head around this argument from the AGI risk people. I simply cannot in any way follow their logic. Why is this imaginary creature maniacally homicidal? Please give me a good reason for this.

The only correlation I have noticed between intelligence and violence is generally negative (sure this is completely unscientific). Like my friend, who is a pure mathematician, and won't eat animals because it deeply troubles him to harm other animals. Haven't you noticed that smarter people tend to be pacifists? Look at animals like lions....fabulously homicidal. Again, this is a crap analogy, but I'm just making the point that there are other ways of looking at the intelligence/homicide correlation.

Secondly, in all this stupid talk of homicidal AGIs I never hear one good mechanism proposed for how the imaginary creature is going to execute all 7 billion or so of us, its plan for disposing of our bodies....etc. Oh wait I forgot, it's smarter than us...so of course we wouldn't know...we can't imagine it with our feeble brains. Not to mention that it creates a fabulously efficient source of eternal energy for itself, lives in cyberspace in some magical bit realm--and then here comes the million dollar question: Why the fuck would it spend the energy to execute all of us if it can just ignore us? Sound a lot like god delusion to you?

I really can't stand when people trivialize nuclear war like this. 'It can't kill all of us.' Please just shut up. Will it crumble the fabric of our society, leading possibly (remember, AGI people care only about possibility--not even likely realities) to war and famine? Who cares if not every single human dies; what is the consequence to our world fabric? Let me save you the suspense--it's completely devastating. These arguments coming out of CFAR and the effective altruism movement are so fucking dumb I constantly want to scream.


The problem with AGI is not that it wants to run a human-like society sans humans (I don't think AGI wants war in a human sense); the problem is that it may have goals (like the famous paperclip maximization) that are incompatible with continued human survival. Humans don't need to be "executed and disposed of"; they could simply be lost to habitat loss (e.g. an oxygen-rich atmosphere can be a pesky thing, why not strip it? Or temperature-regulation needs, or whatever. It doesn't take much global terraforming to wipe out humanity inadvertently).

You’re completely wrong. There are so many corrections I would need to make that I can’t even write them here. If I could talk to you in person or on the phone then I might stand a chance of conveying it all. Is there some way I could PM you my contact info? I’m going to put my email in my bio so you can reach me there.

Yes, I am so sure and so serious that I would be willing to take this to a phone call or a meetup. Just to change one person's mind.


What is your background? I've been studying math, neuroscience , AI, biology etc for a long time. I left quantitative finance (after leaving a math program) to work at a university developing computational biology software. In finance I worked with neural networks well over a decade ago before people in Silicon Valley had even heard of them. Convince me you're not a kook and I'd be glad to have a conversation.

Reproducing AI wants energy. We want energy. Somebody is going to win, and it’s not us.

Apologies in advance for the meta-comment (feel free to disregard) about this:

> [Opening Paragraph of Article:] One afternoon in August 2010, in a conference hall perched on the edge of San Francisco Bay, a 34-year-old Londoner called Demis Hassabis took to the stage. Walking to the podium with the deliberate gait of a man trying to control his nerves, he pursed his lips into a brief smile and began to speak: [...]

Am I in the minority in finding this writing style (for articles covering this kind of content) an instant turn-off?

When an article covers a technical subject or company, I don’t really care whether a founder had an awkward, nervous walking gait, or that the conference hall was “perched” on the edge of the SF bay.

In fact, I’d prefer not to focus on such superficial things about people (or places), at least until I exhaust learning about the facts with substance!

So when I read about something like AI (or an AI company), I tend to want to see fact-oriented, event-oriented, concise writing up front (even if it doesn’t have the scope to dive into technical details), so as to grab my attention and reassure me that reading these 10-20 pages of prose will be worth the reading time (in a world of overwhelming information overload).

When I read science fiction (and I do love this too!), I enjoy the paragraphs setting the scene, verbally illustrating mental images, etc. So, it’s not that I don’t enjoy the writing style in general; just that I don’t understand why it’s applied here.

I am still reading this article and I still have no clue if it’s going to contain any useful information content other than textual descriptions of the DeepMind founders’ superficial walking gait and speech mannerisms, and the perched-ness of various building locations.


1843 is The Economist's version of the New Yorker. It's for people who like to read for the sake of reading and enjoy going on lots of digressions for the fun of it (I love it). The weekly Economist magazine is much drier and should suit your tastes better.

Yeah, you'd be in the minority. Technical people are already in the minority. Technical people who read The Economist voraciously are an even smaller minority. 1843magazine.com looks like it's run by The Economist. I believe their target readers love reading this style of writing. It's illustrative and engaging for when you're reading a story. For gathering technical information, 1843magazine.com should not be your first option.

The Economist itself doesn’t have this style, btw. 1843 is some newer magazine they’re putting out that’s much fluffier.

Well, the obituary does but that's just one page at the end.

It’s a common style in certain pieces of journalism. It’s predicated on the idea that people both enjoy and connect more deeply to narrative and concrete detail. I’m with you—I find that the style is frequently overused these days and really annoying when all I want out of the article is some basic information/reportage. It’s used to fantastic effect in things like exposé pieces, or deep explorations of pervasive issues that have a rich history, but more often than not, journalists feel like they have to whip it out for every little nugget of news when straightforward, sober reportage would suffice.

This is classic Wired-style writing. They want a protagonist, a hero, a villain. It's always painted as far too simple and dramatic, when most technology is actually developed in the most mundane, boring ways.

Right there with you. I scanned through it and found maybe 5 sentences of substance that I was interested in reading from beginning to end.

What substance? The story that technology A is involved in a competition between organizations B and C is a mad lib that can be filled in many ways, but A is almost always "A.I." or "5G". "Blockchain" was popular a while back, but I think most people involved with that want to pretend that it didn't happen.

The current A.I. fad "Deep Learning" has an origin story complete with Maple Leafs and people who say "Sorrry" when they want to say "Sorry", but 5G doesn't have a charismatic story.

Either way it is a technology that doesn't need to be understood or have use cases, but everybody is racing to control it, so...


Huh?

My point was the writing style of the article conveys little more than stylistic fluff.

I’m honestly not sure what you’re talking about.


That writing style reminds me of the way I was forced to write essays in high school in order to get a good grade - tediously purple prose.

I think it's clear that the article is not a technical piece at all and is instead about Google and DeepMind's working relationship, so I think your criticism is unwarranted. You can find great technical articles on DeepMind's website though.

https://deepmind.com/research/


You certainly are not alone.

I peruse “Long Reads” articles from both 1843 Mag and The Guardian. These kinds of stretched-out descriptions are certainly a damp squib.

Maybe the writers gasp for words to fill up the long reads articles, or they have had abortive shots at becoming fiction writers.


Those "Creative Writing" Degrees have to be good for something after all.

Couldn't have said it better myself; it annoys the heck out of me that I have to get through six paragraphs just to know what's going on. I cringe whenever I find out that the topic I want to care about is in The New Yorker and pray that someone has written the tl;dr.

I'm all for vivid, engaging prose, but not in news articles. I want "just the facts, ma'am".


> AGI stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.

It shows the article was written by someone who has no idea what he is talking about. It would not be a "computer program" but a model composed of simpler sub-models that contain both code and data. Data is the essential part, not the code. It would be something that learns, not something preprogrammed like computer programs.

> Its intelligence will be limited only by the number of processors available.

I beg to differ. AGI will be limited by the complexity of the environment, it can't get smarter than what is afforded by the problems it solves. This article provides a fascinating insight into this topic: https://medium.com/@francois.chollet/the-impossibility-of-in...


See also Eliezer Yudkowsky's excellent response to that article:

https://intelligence.org/2017/12/06/chollet/


AlphaZero did pretty well teaching itself chess and go just from the rules. You could imagine a better AI program learning a lot from basic data, say a physics simulator and access to the internet.


