... or it will all fizzle out once it becomes clear that classifiers trained from immense datasets can't get to AGI.
In the last few months I've noticed a sudden uptick in papers and articles on the limitations of deep learning and even a few conferences discussing the way to overcome them (e.g. Logic and Learning at the Alan Turing Institute). Eventually, the hype will die down, people in the field will feel more confident discussing the weaknesses of deep learning and the general public (including the industry and the military) will catch on. Then we'll move forward again when the next big thing comes along.
Also, AGI is a misplaced milestone, even as OP explores a more interesting one: as incremental advances in AI empower non-human entities (corps and governments) with unique powers of surveillance and autonomous action, the impacts can be just as important as if it were a machine intelligence.
The uptick in articles is just a sign that it's trendy to poke holes in hype. Expecting AI to go away is like expecting the Internet (another fundamental shift that was announced with an annoying hype bubble) to go away. It's just becoming a part of life, and creating new power structures while it does.
I would go a step even further. I don't think this even justifies being called a "reckoning"; we are just learning the limits of something whose limits we had not yet found.
My point is that deep learning has generated a lot of excitement in the last few years, but as we're hitting inevitable diminishing returns the excitement is dying down and people are starting to think about how to move beyond current techniques. As usual, the last to catch on to the reversal of the old trend (or the new trend, if you prefer) are the people who pay the money: industry investors, the military, etc., and then the general public.
My money is on using deep learning for "low-level" sensory tasks and then GOFAI, symbolic techniques, for high-level reasoning. There has been some recent activity on trying to marry symbolic reasoning and deep learning (e.g. differentiable ILP, from Evans and Grefenstette at DeepMind) and it's a promising line of research that has the potential to yield "best-of-both-worlds" results.
But - "AI going away"? Not any time soon!
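The "deep learning for perception, symbolic reasoning on top" split described above can be sketched with a toy, entirely invented example: a stand-in perceptual module (where a real system would use a trained classifier) emits symbolic facts, which a simple forward-chaining reasoner then operates on.

```python
# Toy sketch (invented example) of the "deep learning for perception,
# symbolic reasoning on top" division of labour. The 'perceive' step
# stands in for a neural classifier; reasoning is plain forward chaining.

def perceive(raw_scene):
    """Stand-in for a neural classifier: maps raw observations to symbols."""
    # In a real system this would be a CNN over pixels; here it's a lookup.
    return [("on", a, b) for a, b in raw_scene]

def forward_chain(facts):
    """Derive above(X, Z) transitively from on(X, Y) facts."""
    above = {(x, y) for (_, x, y) in facts}
    changed = True
    while changed:
        changed = False
        for (x, y) in list(above):
            for (y2, z) in list(above):
                if y == y2 and (x, z) not in above:
                    above.add((x, z))
                    changed = True
    return above

scene = [("a", "b"), ("b", "c")]   # block a on b, b on c
facts = perceive(scene)
print(sorted(forward_chain(facts)))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

The point of the split is that the reasoning step works over discrete symbols and is easy to inspect, while the messy perceptual mapping is left to the learned component.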
Although note that their assertion that ILP can't address noisy or ambiguous data is plainly wrong :)
The "it" in your original comment makes the subject ambiguous - your comment can be interpreted as saying AI itself (not the arms-race) will "fizzle out".
I agree that AI's hype cycle is nearing the "trough of disillusionment" stage, but after that will come a wider and deeper application of AI in a multitude of fields and industries instead of the narrow applications of today. Even diminishing returns are worth chasing if they increase your turnover by 1-2%.
I'm sorry but I never said anything like that. AI is not the same as deep learning. It's deep learning that has been hyped in the last few years, not AI in general. Of course, in the lay press, there is great confusion between AI, machine learning and deep learning- but I don't see why there should be an assumption that I, too, am equally confused about those terms.
In any case, it should be easy to dispel any misunderstandings by a quick look at my profile- I'm an AI PhD research student. I would hardly claim that AI is about to fizzle- or even deep learning. What is being discussed in the quote from the original article is the purported arms race to AGI. And how could AI "fizzle" now- when it has been growing as a field for the last 60+ years?
Honestly, I don't see how any other interpretation of the "ambiguity" in my comment can be justified, assuming good faith. I sure had to squint really hard to see any ambiguity at all.
Regarding good faith, the quotation pattern you used didn't mention deep learning - you referred to either "the arms race" or "classifiers on large datasets" as fizzling out, and my reply resolved them to "AI" as a spanning term, since that was the context of the original post you quoted.
For what it's worth, I would have written the same response about deep learning specifically for the same reasons you point out later in the thread that AI will remain useful. The specific profile of opportunities opened by DL is finding plenty of valuable homes in corporate processes where other kinds of automation fill in the gaps between it and AI.
At this point, I'm not sure whether you disagree with that, or just think that some hype could afford to fade (no argument!).
Since you are a grad researcher, I'll throw in some of my context. I did my PhD focused on probabilistic graphical models, back when it was easy to ignore NNs (and when we expected to hit some of the same perceptual milestones). As a grad student, a big part of your job is to filter fads and find ideas that will stay true.
Because of that, I was slower than I could have been to recognize the feedback loops in what is "true" about applied AI. Deep learning's initial fit for some architectures and problems has attracted attention that made it much cheaper and easy to experiment, and therefore useful for more and more - even gobbling up adjacent techniques and giving them new names (over any manner of academic protest). That feedback loop isn't unbounded, but I guess I'm just sharing the perspective that hype, while annoying, isn't something that even a grad student can afford to disdain.
I think I agree. I believe the hype is primarily driven by industry looking for applications rather than researchers looking for, er, well, understanding, hopefully.
>> Because of that, I was slower than I could have been to recognize the feedback loops in what is "true" about applied AI. Deep learning's initial fit for some architectures and problems has attracted attention that made it much cheaper and easy to experiment, and therefore useful for more and more - even gobbling up adjacent techniques and giving them new names (over any manner of academic protest). That feedback loop isn't unbounded, but I guess I'm just sharing the perspective that hype, while annoying, isn't something that even a grad student can afford to disdain.
You're right of course. Deep learning has earned its due respect I think and although I expect the field to look for something new eventually, I'm guessing that CNNs and LSTMs in particular will remain as established techniques, probably incorporated into other work. I mean, until some new technique comes up that can match CNNs' accuracy but with much improved sample efficiency and generalisation, CNNs are going to remain the go-to method for image classification.
Like, I don't disdain deep learning, I did some work with LSTMs for my Master's and I'm thinking of using CNNs for some vision stuff after my PhD (my subject wouldn't really fit). It's just, there are so many people publishing on deep learning right now that I don't see the point of joining in myself.
Granted we are still far from AGI even with these new advances, but given the progress of the field in the last few years as resources pour in, we cannot say for certain that AGI won’t be reached in our lifetime.
One watershed moment I would look out for is when an AI wins a Starcraft tournament. Winning Starcraft requires many of the ‘general intelligences’ humans excel at (relative to current machines). In my estimation, it is much harder than Go (continuous state space, multi-agent interactions, etc.). DeepMind has announced they are working on it but I’d guess we are at least 2, likely 5 or more, years away from achieving the milestone.
The progress of the field in the last few years is limited to improvements in performance on specific benchmarks, all of which pertain specifically to classification and only in a few domains: speech recognition, image recognition and, lately, game-playing (the function of the deep nets in the AlphaGo family is still essentially discrimination of good vs bad moves, rather than, say, complex reasoning).
So, unless classification- in fact, classification in speech and image recognition and game-playing- is sufficient for the development of AGI, yes, we can say with pretty good certainty that there is no clear path from the current state of affairs to AGI and, therefore, no good reason to assume AGI will be achieved within our lifetimes.
Of course, we can't ever say anything with absolute certainty. Perhaps we live in a simulated universe and the Simulators will switch it all off tomorrow. Perhaps aliens will make first contact and hand us all the tech we're missing - or exterminate us all. Perhaps dread Cthulhu will rise from his sleep at R'lyeh Ia! Ia! etc.
But- reasonable predictions can only be made based on what we know so far. The rest is only wild speculation that doesn't really serve anything, except of course to satisfy one's imagination.
Here are approximate numbers (with 2 significant digits) of faculty members/university researchers who published as above in each country/continent:
Asia (including China) 340
Australia + New Zealand 86
South America 12
The world excluding the US 810
So the US is still far ahead of other nations/regions, but it now has a bit below 50% of the world's university researchers who recently published in top AI conferences. China as a country is close to Europe as a continent and its number of published university researchers has increased rapidly in recent years.
The number of researchers is not weighted by the number of papers published but this number is useful since it counts how many people are capable of advising graduate students to produce world-class research. Using the number of papers is complicated by how likely highly capable international graduate students would choose to study in each program (in addition to the researcher's capability), i.e. university's reputation would have an additional impact beyond its research capability.
The current administration is trying its hardest to prevent immigrants from studying here, is openly hostile to the ones who are here, and a lot of the students doing research in my program are returning to China because, for many of them, it's not worth the effort to work here.
I certainly agree that the US strength in cutting edge research in many fields significantly derives from its recent immigrants. If the US becomes hostile to high-potential immigrants, its strength will decline.
Somebody told me that's why there's so many Russian hackers even though Russian compsci education is basically third world.
That's why there's a vast population in the hot tropical areas, whereas there's nobody in Canada's frozen north, or vast stretches of frozen Russia. Even in the more temperate parts of cold Canada & Russia, the population density is extraordinarily low. Compare that to numerous high population density hot tropical regions.
Mexico, for example, is one of the hottest countries on earth; it has 127 million people, with a population density seven times that of Russia. Its population will soon surpass Russia's.
India is even hotter than Mexico.
Iran and Iraq are even hotter than India (120 million people in those two countries; combined, they'll surpass Russia in population very soon).
Universal function approximators are not about to take over the world.
Ignoring the unsubstantiated ad hominem for a minute though, it's getting frustrating that the mere introduction of the topic of discussing possible AI futures causes immediate derision.
It's as though the corporate research community, with which I'm associated, is vehemently against even discussing AGI. Whereas Barto, Bengio, Hassabis etc. are happy to discuss it in reasonable ways.
The author seems to be discussing it reasonably and not making crazy kurzweilesque prognostication.
Everyone at the Dartmouth Workshop wanted to create human level intelligence. Let's stop pretending that's not still the goal. OpenAI and DeepMind have that as explicit goals and if you dig into any serious AI researchers they say that's the goal vector.
So where's the beef?
I think it comes from a combination of being scared of overhyping AI and causing the next AI winter, and some form of gatekeeping.
Suppose we have a really strong universal function approximator---stronger than current neural networks, whose generalization properties are not really that great in the grand scheme of things as of 2018. Tell it to approximate the action policy that maximizes some overly-simplistic geopolitical objective function, like GDP or territory controlled at time t+1. It doesn't seem at all obvious that this thing could not take over the world or at least cause significant havoc if given sufficient resources.
The problem is that with such a vague objective as "maximise GDP" or "maximise controlled territory" you need to train a model that is extremely broad in scope - because the objective might be narrow, but the steps to realise it are extremely varied. In practice, you're trying to approximate a function that is outputting the state of the entire world at each time step. Good luck with achieving that in practice.
Edit: The bit about collecting examples is not a trivial problem. Note that the successes of deep learning so far are in domains where not only the objective is well defined ("choose one of n categorical labels") but also the data associated with the objectives is easy to collect and has an obvious relation to the outputs. Say, if you want to train an image classifier to recognise images of dogs- obviously you need to collect images of dogs. This is not the case in "increase GDP" type objectives, where it is not even clear what exactly influences a country's GDP. In principle, you could feed the entire world as examples to the model, but in practice, that's just unfeasible.
You could specify a simpler goal like “increase economic output”, and an example might be something like optimizing the Cobb-Douglas production function. Even that very narrow goal in the context of say, manufacturing Teslas, would give me pause. Look up “instrumental convergence” to see why the above is a bad idea.
From what I can see on Wikipedia, the parameters of Cobb-Douglas are the value of goods produced, labour, capital, "total factor productivity" (as I understand it, everything other than labour and capital that might contribute to productivity) and a couple of constants. For an AI to maximise the output of the function you'd have to somehow make it possible for it to manipulate those parameters: to hire or fire personnel, to spend or acquire capital, and to somehow manage all those unknown factors that might be contributing to the output.
The question is: how do you do that? You can certainly collect, or even auto-generate examples of the inputs and outputs of the functions, since we're just talking numerical parameters, and find a maximum of the function. But for an AI to actually improve the productivity of a business, it would have to do a lot more than that. It would need the ability to manipulate those parameters directly in the real world. Otherwise, all it would do is calculate a number. Which is not that very threatening.
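To make the "all it would do is calculate a number" point concrete, here is a minimal sketch of what maximising a Cobb-Douglas function amounts to computationally. All numbers (the budget, prices, and the exponents alpha and beta) are invented for illustration; the grid search is a deliberately crude stand-in for a proper optimiser.

```python
# Hypothetical sketch: maximising Cobb-Douglas output
# Y = A * K^alpha * L^beta under a simple budget constraint.
# All parameter values below are made up for illustration.

def cobb_douglas(K, L, A=1.0, alpha=0.3, beta=0.7):
    """Output as a function of capital K and labour L."""
    return A * (K ** alpha) * (L ** beta)

def maximise_under_budget(budget, price_K=1.0, price_L=1.0, steps=1000):
    """Grid-search the capital/labour split that maximises output."""
    best = (0.0, 0.0, 0.0)  # (output, K, L)
    for i in range(1, steps):
        K = budget * i / steps / price_K
        L = (budget - K * price_K) / price_L
        y = cobb_douglas(K, L)
        if y > best[0]:
            best = (y, K, L)
    return best

y, K, L = maximise_under_budget(100.0)
# With alpha=0.3 and beta=0.7, the optimum puts ~30% of the budget
# into capital, matching the analytic result K-share = alpha/(alpha+beta).
print(round(K), round(L))  # 30 70
```

The output is just two numbers: an optimal allocation. Acting on that allocation in the real world (hiring, firing, spending) is exactly the part no function maximiser provides.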
The hard part here is writing down that objective function. Remember, an AI/ML/cogsci algorithm is locked inside a black box, that being the hardware it runs on. Any objective function for RL must be expressed as a function (preferably a smoothly differentiable one, for gradient descent) of the sense-data available to the agent and the agent's hypothesis class about the world. Naive RL tends to optimize the function by, where at all possible, systematically decorrelating the agent's sense-data and reinforcement signal from the distal causes we intend them to represent.
So for example, write down the objective function for current General Intelligence: Humans. It's impossible, and has been the work of the field of Philosophy/Economics since we started seriously thinking about it.
Nothing in that article discusses interpersonal neurological response systems or anything relating to how mores and boundaries are created.
Seems like you're linking to something which may be on the track to narrow down consciousnesses which is a separate question - and one I also question the benefit of caring about.
Well, we weren't discussing social reasoning and behavior, so I linked an article talking about the systems governing the brain's "objective function".
My original statement should have been: "We can't model Humanity's Collective Objective Function" - which is what would be behind what we are interested in: Stable functioning muti-agent systems. I think EY took a crack at this a long time ago and rightly abandoned the concept (see: CEV).
Even with that clarification I disagree with the premise that we can model an "objective function" for an individual strictly in-vivo. Modelling an individual agent's reasoning/function system doesn't account for the environmental context it exists inside of, gives input into and responds to. So even if it was possible to understand the mechanism for intra-personal decision criteria, and I don't think it probably is, I don't think it's generalizable without having the context of inputs.
Assuming that we could do this, I don't think you can extrapolate intentionality directly from individual to collective groups - which for an AGI is what is existentially important as it needs to be collectively general to solve the existential problem.
I also don't think this is desirable as a framework for AGI - as humans, despite our intelligent status, are quite unstable and sub-optimal in groups.
If no such thing exists, then it was the wrong thing to investigate, so stop being interested in it.
>Even with that clarification I disagree with the premise that we can model an "objective function" for an individual strictly in-vivo. Modelling an individual agent's reasoning/function system doesn't account for the environmental context it exists inside of, gives input into and responds to. So even if it was possible to understand the mechanism for intra-personal decision criteria, and I don't think it probably is, I don't think it's generalizable without having the context of inputs.
That's just an inverse reasoning/theory-of-mind problem, one that normal theory-of-mind models and actual human brains solve every day.
>Assuming that we could do this, I don't think you can extrapolate intentionality directly from individual to collective groups - which for an AGI is what is existentially important as it needs to be collectively general to solve the existential problem.
What's this about "collectively general" and "the existential problem"? You seem to have gone off the deep end into philosophy salad.
>I also don't think this is desirable as a framework for AGI - as humans, despite our intelligent status, are quite unstable and sub-optimal in groups.
Considering you don't seem to know much about how humans work and what causes us to work well or badly in various situations, this statement comes off as almost racist.
You broke the HN guidelines pretty badly here, first by getting personal, and second by not doing this: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."
The result was a whole litany of complaints elsewhere, which were understandable if off topic (https://news.ycombinator.com/item?id=17359858).
You unfortunately have a long history of breaking the site guidelines. Would you please reread them and fix this? It's really not ok, and even though I appreciate your substantive comments, their value doesn't obviously exceed the damage you've caused with harshness, swipes, and uncharitable responses over the years.
The question is, of course- how? How will AI become able to take over the world?
This is an important question to ask because predictions of AI rapidly becoming capable of taking over the world tend to flare up in times of progress in AI performance. For example, the above article is basically based on the premise that the latest developments, i.e. deep learning, are sufficient to give some nation-state technological supremacy by leading directly to AGI.
However, there is no clear path from deep learning, however powerful it may be as a set of techniques to train classifiers, to AGI, which (well, probably) entails much more than powerful classification. For instance, deep learning can't really do inference. It seems reasonable to assume that AGI will need the ability to reason about the objects it can recognise and the relations between them. Deep learning can't do causal analysis- like Judea Pearl has been screaming from the rooftops lately. It seems reasonable again to assume that AGI will need to model causal relations. Deep learning can't do semantics. AGI will need to have some handle on semantics. And so on and so forth.
In general, anyone that claims that X current development in AI risks inadvertently producing AGI, must explain how this is supposed to happen- not just wave their hands about and claim that, if you throw enough compute and enough data at the problem, magic will happen and AGI will turn us all into paperclips.
I think the mismatch comes when people looking from the outside who have no real understanding of how the internals work see this progress in a very narrow domain and extrapolate that idea into thinking that something more general is happening.
When you understand the trick, it stops being magic.
> When you understand the trick, it stops being magic.
The only argument I've seen about the difficulty of general AI is that we don't presently know how to do it. But it only seems difficult (like magic) because of our lack of comprehension. Maybe it actually is, but the good odds are that like everything else it's just one or a couple of tricks, and once we put the pieces together we'll be moving just as fast.
If a machine can generate strategies to win simulated wars, how much harder is it to generate strategies to win real wars?
If machines can out compete people on 50% of jobs, how much harder is it to out compete people on 100% of jobs?
The trick with AGI is that it can learn faster, and humans (even with our limitations) did take over the world.
Humans are limited by a lot of things like motivation and communication which is likely distinct from the general problem solving architecture itself. Humans also have goals and an evolved morality that guides their thinking. Humans share a similar type of intelligence and understanding of goals, culture, right and wrong. You don't get this for free with AGI.
The point of the human example is that there already exists a general problem solving architecture you can scale up (it's not impossible and there are brains everywhere in nature).
If you increase operations per second many magnitudes you can have all human learning over thousands of years compressed into a couple of hours. You see this loosely in narrow spaces now with things like Alpha Go Zero.
There may not be other AGIs if there's an intelligence explosion and the first one improves itself very quickly. If its utility function is not aligned with human goals then it could end up turning all matter into paperclips - not because it's evil, but because it's hard to set goals tied to human morality (when humans don't even agree in every case) and it happens to be configured such that making paperclips maxes out its reward function by accident.
It may be possible to make the unsafe version before making the safe one, since it seems harder to make an AGI that's aligned with human goals than just an AGI in general.
I can only interpret this as a mystical claim...
That's just pure speculation because nobody has a clue what such an intelligence would look like. I can imagine using the mind infinity stone to take over the world, but that doesn't make it realistic.
This is an interesting observation; it is actually already happening now with Internet companies, and to a lesser degree with physical product companies with global reach (like Apple and Amazon). Money flows from local economies into those companies, which don't pay much local tax or create much local employment. That could eventually drain the well dry.
Countries that suffered from colonialism have been catching up to the developed world in terms of standard of living over the past 100 years. I wonder if the above effect will reverse that course.
Basically, the component of this that says "but machine learning is different" is still not convincing. The same nationalistic divides and concerns about geopolitical backing for warfare tech that arose in response to nuclear and chemical weaponry are likely to be high-fidelity models of whatever geopolitical divides form around machine learning weaponry.
I agree it will be a significant policy issue, but I do not agree it is very related to the topic of AGI. Reasoning about it by studying how various other tech arms races have unfolded in history will be a good, but not perfect, model for how it unfolds for ML too. And the pieces where this time is different will be far more understated than the amount of hype about it.
https://vimeo.com/9508131
Bucky was confident that we could use computers to solve our problems. We could enter all relevant data and the machine could compute the optimal solutions for us.
The issue has always been ensuring we ask them to solve the right problems.
If we use AI to tell us how long to imprison people (already happening) rather than how to decrease recidivism, that's a meta-computer choice that we made, not the AI.
If we use AI to kill people, rather than to figure out how not to have to kill them in the first place, that's also our choice.
Cf. "Wargames" https://en.wikipedia.org/wiki/WarGames This was in '83!
I'm bawling my eyes out right now.
"The only winning move is not to play."
The imaginary graph of ML technology that can be developed for destruction or defense is fraught with inter-dependent paranoid scenarios. The use of ML for the increase of human happiness is apparent and obvious. An ML arms race that invokes conflict is going to be a huge waste of a nation's ML resources.
It would be much more productive to think about how ML/AI can be used to for egalitarian human prosperity (a la post-scarcity, etc).
Computation, in general, is capable of solving many problems that afflict the world - disease, hunger, resource allocation, etc. Some of these problems have "conventional" computational solutions.
Fundamentally, there are two problems that must be solved. First, the actual ability to compute needs to be perfected, so that massive computations (i.e. the computations that solve massive, game-changing problems) can easily be performed. Things like public clouds are solving that problem. Second, computation needs to be applied to a problem. Statistical learning approaches have become popular because they are relatively simple to apply and relatively successful. AI researchers tend to believe that AI is all that matters, but obviously the success of AI is only possible with efficient computation. Similarly, efficient computation alone is useless if it cannot be used to solve actual problems.
Computation is to the 21st century what energy was to the 20th. The ramifications of that statement are immediately obvious: consider the petrodollar. Soon, computation will become a currency.
- The economics surrounding AI development favor those who can commoditize data to the cheapest price. (Silicon Valley, militaries, and finance have AND MUST MAINTAIN their influence over this commoditization) This commoditization requirement was previously thought to be irreversible, allowing dumb money to buy into the idea that “data is the new oil”, but Butterfly War shows how to unexpectedly drive up the liability of a mass accumulation of data commodities.
- Foreign actors and short sellers can now use derivations of the Butterfly War to become market makers of the data economy, forcing the theory of “AI Winters” to be replaced with a more predictive “AI Business Cycle”. (Do you now understand why I went to Soros-influenced actors first?)
- This undesirable pressure, when paired with the institutional dependencies of established AI infrastructure, will force a deeper consolidation of Silicon Valley, military, and financial “cognitive assets”, which in turn will skew the funding and purposes behind additional AI development to be more risk-averse and conservative (from a power-preservation standpoint).
- The pressures to embrace “cognitive mercantilism” become irreversible. Nations will aggressively retain talent and technologies for themselves to improve their collective bargaining power on the international stage. /pol/-tier nationalism finally has the footing to stifle their material humanist opposition.
- AI development will enter an artificially induced “deep freeze” period, similar to what happened to space exploration after the Space Race.
- The doctrine of Gnostic Warfare we develop today dominates in this period, focusing primarily on the epistemological limitations of Deep Belief Networks and, more precisely, how these cognitive assets define emotion.
> what is intelligent
> what is the set of all x such that x is intelligent
not political questions