Hacker News
AI Nationalism (ianhogarth.com)
125 points by niccolop on June 14, 2018 | 84 comments



>> This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI.

... or it will all fizzle out once it becomes clear that classifiers trained from immense datasets can't get to AGI.

In the last few months I've noticed a sudden uptick in papers and articles on the limitations of deep learning, and even a few conferences discussing ways to overcome them (e.g. Logic and Learning at the Alan Turing Institute). Eventually, the hype will die down, people in the field will feel more confident discussing the weaknesses of deep learning and the general public (including the industry and the military) will catch on. Then we'll move forward again when the next big thing comes along.


I fully agree that there is an ongoing reckoning with the limitations of deep learning, but that is the sign of a thriving community not a failing one.

Also, AGI is a misplaced milestone, even as OP explores a more interesting one: as incremental advances in AI empower non-human entities (corps and governments) with unique powers of surveillance and autonomous action, the impacts can be just as important as if it were a machine intelligence.

The uptick in articles is just a sign that it's trendy to poke holes in hype. Expecting AI to go away is like expecting the Internet (another fundamental shift that was announced with an annoying hype bubble) to go away. It's just becoming a part of life, and creating new power structures while it does.


I fully agree that there is an ongoing reckoning with the limitations of deep learning

I would go a step even further. I don't think this even justifies being called a "reckoning"; we are just learning the limits of something whose limits we had not yet found.


I’ve long believed the first true “AGI” will be a corporation in which all of the individual jobs have just been automated out of existence one by one over time. Eventually, many companies will essentially just become autonomous entities under the direction of shareholders. It may even have protocols put in place to determine when to hire outside consultants to come in and improve itself.


Has there been a misunderstanding of my comment? I didn't mean to say that I expect "AI to go away". For one thing, I certainly do not equate all of AI to deep learning, or even machine learning.

My point is that deep learning has generated a lot of excitement in the last few years, but as we're hitting inevitable diminishing returns the excitement is dying down and people are starting to think about how to move beyond current techniques. As usual, the last to catch on to the reversal of the old trend (or the new trend, if you prefer) are the people who pay the money - industry investors, the military, etc. - and then the general public.

My money is on using deep learning for "low-level" sensory tasks and GOFAI - symbolic techniques - for high-level reasoning. There has been some recent activity on trying to marry symbolic reasoning and deep learning (e.g. differentiable ILP from Evans and Grefenstette at DeepMind [1], etc) and it's a promising line of research that has the potential to yield "best-of-both-worlds" results.

But - "AI going away"? Not any time soon!

________

[1] https://arxiv.org/abs/1711.04574

Although note that their assertion that ILP can't address noisy or ambiguous data is plainly wrong :)


> Has there been a misunderstanding of my comment? I didn't mean to say that I expect "AI to go away"

The "it" in your original comment makes the subject ambiguous - your comment can be interpreted as saying AI itself (not the arms-race) will "fizzle out".

I agree that AI's hype cycle is nearing the "trough of disillusionment" stage, but after that will come a wider and deeper application of AI in a multitude of fields and industries instead of the narrow applications of today. Even diminishing returns are worth chasing if they increase your turnover by 1-2%.


>> I agree that AI's hype cycle is nearing the "trough of disillusionment" stage,

I'm sorry but I never said anything like that. AI is not the same as deep learning. It's deep learning that has been hyped in the last few years, not AI in general. Of course, in the lay press, there is great confusion between AI, machine learning and deep learning - but I don't see why there should be an assumption that I, too, am equally confused about those terms.

In any case, it should be easy to dispel any misunderstandings by a quick look at my profile- I'm an AI PhD research student. I would hardly claim that AI is about to fizzle- or even deep learning. What is being discussed in the quote from the original article is the purported arms race to AGI. And how could AI "fizzle" now- when it has been growing as a field for the last 60+ years?

Honestly, I don't see how any other interpretation of the "ambiguity" in my comment can be justified, assuming good faith. I sure had to squint really hard to see any ambiguity at all.


It sounds like we (in this thread) are all on the same page about the future of AI-writ-large.

Regarding good faith, the quotation pattern you used didn't mention deep learning - you referred to either "the arms race" or "classifiers on large datasets" as fizzling out, and my reply resolved them to "AI" as a spanning term, since that was the context of the original post you quoted.

For what it's worth, I would have written the same response about deep learning specifically, for the same reasons you point out later in the thread about why AI will remain useful. The specific profile of opportunities opened by DL is finding plenty of valuable homes in corporate processes, where other kinds of automation fill in the gaps between it and AI.

At this point, I'm not sure whether you disagree with that, or just think that some hype could afford to fade (no argument!).

Since you are a grad researcher, I'll throw in some of my context. I did my PhD focused on probabilistic graphical models, back when it was easy to ignore NNs (and when we expected to hit some of the same perceptual milestones). As a grad student, a big part of your job is to filter fads and find ideas that will stay true.

Because of that, I was slower than I could have been to recognize the feedback loops in what is "true" about applied AI. Deep learning's initial fit for some architectures and problems has attracted attention that made it much cheaper and easier to experiment with, and therefore useful for more and more - even gobbling up adjacent techniques and giving them new names (over any manner of academic protest). That feedback loop isn't unbounded, but I guess I'm just sharing the perspective that hype, while annoying, isn't something that even a grad student can afford to disdain.


>> At this point, I'm not sure whether you disagree with that, or just think that some hype could afford to fade (no argument!).

I think I agree. I believe the hype is primarily driven by industry looking for applications rather than researchers looking for, er, well, understanding, hopefully.

>> Because of that, I was slower than I could have been to recognize the feedback loops in what is "true" about applied AI. Deep learning's initial fit for some architectures and problems has attracted attention that made it much cheaper and easier to experiment with, and therefore useful for more and more - even gobbling up adjacent techniques and giving them new names (over any manner of academic protest). That feedback loop isn't unbounded, but I guess I'm just sharing the perspective that hype, while annoying, isn't something that even a grad student can afford to disdain.

You're right of course. Deep learning has earned its due respect I think and although I expect the field to look for something new eventually, I'm guessing that CNNs and LSTMs in particular will remain as established techniques, probably incorporated into other work. I mean, until some new technique comes up that can match CNNs' accuracy but with much improved sample efficiency and generalisation, CNNs are going to remain the go-to method for image classification.

Like, I don't disdain deep learning - I did some work with LSTMs for my Master's, and I'm thinking of using CNNs for some vision stuff after my PhD (my subject wouldn't really fit). It's just that there are so many people publishing on deep learning right now that I don't see the point of joining in myself.


Looks like you are doing some cool stuff based on learning of structures! And indeed, the folks who are now celebrated for DNNs were doing the less popular thing for 10-20y prior :) Best of luck on your research.


Ah, I've been telling myself that, yes. Cheers :)


Simpler neural networks clearly have limitations that we might be reaching. However, recent papers from DeepMind and OpenAI, among others, show that one can develop more sophisticated architectures that solve some complex real-world problems much better than vanilla networks.

Granted we are still far from AGI even with these new advances, but given the progress of the field in the last few years as resources pour in, we cannot say for certain that AGI won’t be reached in our lifetime.

One watershed moment I would look out for is when an AI wins a Starcraft tournament. Winning Starcraft requires many of the ‘general intelligences’ humans excel at (relative to current machines). In my estimation, it is much harder than Go (continuous state space, multi-agent interactions, etc.). DeepMind has announced they are working on it but I’d guess we are at least 2, likely 5 or more, years away from achieving the milestone.


>> Granted we are still far from AGI even with these new advances, but given the progress of the field in the last few years as resources pour in, we cannot say for certain that AGI won’t be reached in our lifetime.

The progress of the field in the last few years is limited to improvements in performance on specific benchmarks, all of which pertain to classification, and only in a few domains - speech recognition, image recognition and, lately, game-playing (the function of the deep nets in the AlphaGo family is still essentially discrimination of good vs bad moves, rather than, say, complex reasoning).

So, unless classification - in fact, classification in speech and image recognition and game-playing - is sufficient for the development of AGI, yes, we can say with pretty good certainty that there is no clear path from the current state of affairs to AGI and therefore no good reason to assume AGI will be achieved within our lifetimes.

Of course, we can't ever say anything with absolute certainty. Perhaps we live in a simulated universe and the Simulators will switch it all off tomorrow. Perhaps aliens will make first contact and hand us all the tech we're missing - or exterminate us all. Perhaps dread Cthulhu will rise from his sleep at R'lyeh Ia! Ia! etc.

But- reasonable predictions can only be made based on what we know so far. The rest is only wild speculation that doesn't really serve anything, except of course to satisfy one's imagination.


I added up the number of university researchers who publish in top AI conferences (according to csrankings.org) from Jan 2017 to late May 2018.

http://csrankings.org/#/fromyear/2017/toyear/2018/index?ai&v...

Here are approximate numbers (with 2 significant digits) of faculty members/university researchers who published as above in each country/continent:

US: 770

Canada: 92

Asia (including China): 340

China: 240

Europe: 280

Australia + New Zealand: 86

South America: 12

World excluding the US: 810

So the US is still far ahead of other nations/regions, but it now has a bit below 50% of the world's university researchers who recently published in top AI conferences. China as a country is close to Europe as a continent and its number of published university researchers has increased rapidly in recent years.

The number of researchers is not weighted by the number of papers published but this number is useful since it counts how many people are capable of advising graduate students to produce world-class research. Using the number of papers is complicated by how likely highly capable international graduate students would choose to study in each program (in addition to the researcher's capability), i.e. university's reputation would have an additional impact beyond its research capability.
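For anyone wanting to redo this kind of tally, here is a rough sketch of the counting described above. The file name and CSV layout are hypothetical; csrankings publishes its underlying data in a different form, so treat this purely as an illustration of the aggregation step.

    import pandas as pd

    # Hypothetical export with one row per (faculty member, country) for papers
    # at top AI venues, Jan 2017 - May 2018. File name and columns are assumptions.
    df = pd.read_csv("ai_faculty_2017_2018.csv")  # columns: name, country

    # Count distinct published researchers per country, as in the figures above.
    counts = df.drop_duplicates("name").groupby("country").size()
    print(counts.sort_values(ascending=False))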


Are you counting the country by the physical location of the university or the actual nationality of the authors? Many of the papers I've read in my program come from the US, but nearly everything cutting-edge I've read on computer vision had a number of Chinese authors, if the entire team wasn't Chinese.

The current administration is trying its hardest to prevent immigrants from studying here, is openly hostile to the ones who are here, and a lot of the students doing research in my program are returning to China because, for many of them, it's not worth the effort to work here.


It's by the physical location of the university/campus. (If the offshoot campus is in another continent, it is counted there, but there are few published AI researchers in offshoot campuses so far that I've seen.)

I certainly agree that the US strength in cutting edge research in many fields significantly derives from its recent immigrants. If the US becomes hostile to high-potential immigrants, its strength will decline.


Surprised at how high Canada is when you factor in that its population is only 4x NYC's.


When you say it that way, it sounds impressive, but the population is ~11% of the US (36/320), so it seems reasonable they have ~11% the output of the US (90/770).


Yeah you're right.


Hey, this is Ian Hogarth. I first presented a version of this essay at an event at a place called Ditchley, which had brought together various ML researchers and politicians from North America and Europe to discuss this topic. One of the things that really struck me was the similarity between the U.K. and Canada in terms of their depth of academic talent around ML but the paucity of independent "domestic champions".


If you haven't, you should rewrite this essay from the perspective of Crypto nationalism and how that turned out.


I really enjoyed reading this essay - thank you for writing it.


I am curious how many of them are first-generation immigrants.


When it's really cold outside, staying inside and hacking on stuff is probably preferable to working outside.

Somebody told me that's why there's so many Russian hackers even though Russian compsci education is basically third world.


Are there a lot of hackers in regions where it's too hot and sunny to be working outside?


Humans adapt to living and working in, e.g., tropical, very hot zones dramatically more easily than in the coldest regions.

That's why there's a vast population in the hot tropical areas, whereas there's nobody in Canada's frozen north, or vast stretches of frozen Russia. Even in the more temperate parts of cold Canada & Russia, the population density is extraordinarily low. Compare that to numerous high population density hot tropical regions.

Mexico for example is one of the hottest countries on earth, it has 127 million people with a population density seven times that of Russia. Its population will soon surpass Russia.

India is even hotter than Mexico.

Iran and Iraq are even hotter than India (120 million people in those two countries; combined, they'll surpass Russia in population very soon).


I suspect it also has something to do with information markets being global and internet tech being a way to get western levels of income while living with Eastern European costs.


I think you counted China twice.


Something tells me the people who write these articles have never read a paper from, e.g., NIPS or any other top-tier conference on cutting-edge research; heck, I would go so far as to say they don't even know how to write an image classifier for MNIST using Keras if their life depended on it.

Universal function approximators are not about to take over the world.
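(For reference, the kind of baseline MNIST classifier alluded to above really is only a few lines of tf.keras. This is a minimal sketch; the architecture and epoch count are arbitrary choices, not anything from the article.)

    import tensorflow as tf

    # Load MNIST and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Small fully-connected classifier; layer sizes and epochs are illustrative.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))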


What's especially shocking about your statement is the complete lack of research you did on Ian before posting this.

https://www.ianhogarth.com/about/

https://en.m.wikipedia.org/wiki/Ian_Hogarth

Ignoring the unsubstantiated ad hominem for a minute though, it's getting frustrating that the mere introduction of the topic of possible AI futures causes immediate derision.

It's as though the corporate research community, with which I'm associated, is vehemently against even discussing AGI, whereas Barto, Bengio, Hassabis, etc. are happy to discuss it in reasonable ways.

The author seems to be discussing it reasonably and not making crazy Kurzweil-esque prognostications.

Everyone at the Dartmouth Workshop wanted to create human-level intelligence. Let's stop pretending that's not still the goal. OpenAI and DeepMind have that as an explicit goal, and if you dig into what serious AI researchers say, that's the goal vector.

So where's the beef?

I think it comes from a combination of being scared of overhyping AI and causing the next AI winter, and some form of gatekeeping.


I'm always a bit confused when people call neural networks "universal function approximators" as if that makes them trivial or weak.

Suppose we have a really strong universal function approximator - stronger than current neural networks, whose generalization properties are not really that great in the grand scheme of things as of 2018. Tell it to approximate the action policy that maximizes some overly-simplistic geopolitical objective function, like GDP or territory controlled at time t+1. It doesn't seem at all obvious that this thing could not take over the world or at least cause significant havoc if given sufficient resources.


But what are "sufficient resources" in this case? How do you train a universal function approximator, even a good one, in, say "increasing GDP"? Where do you get the examples for that?

The problem is that with such a vague objective as "maximise GDP" or "maximise controlled territory" you need to train a model that is extremely broad in scope - because the objective might be narrow, but the steps to realise it are extremely varied. In practice, you're trying to approximate a function that is outputting the state of the entire world at each time step. Good luck with achieving that in practice.

Edit: The bit about collecting examples is not a trivial problem. Note that the successes of deep learning so far are in domains where not only is the objective well defined ("choose one of n categorical labels") but the data associated with the objective is also easy to collect and has an obvious relation to the outputs. Say, if you want to train an image classifier to recognise images of dogs - obviously you need to collect images of dogs. This is not the case with "increase GDP" type objectives, where it is not even clear what exactly influences a country's GDP. In principle, you could feed the entire world as examples to the model, but in practice, that's just unfeasible.


> But what are "sufficient resources" in this case? How do you train a universal function approximator, even a good one, in, say "increasing GDP"? Where do you get the examples for that?

You could specify a simpler goal like “increase economic output”, and an example might be something like optimizing the Cobb-Douglas production function. Even that very narrow goal in the context of say, manufacturing Teslas, would give me pause. Look up “instrumental convergence” to see why the above is a bad idea.


When I say "examples" I mean training examples- a dataset to train (and validate) on.

From what I can see on Wikipedia, the parameters of Cobb-Douglas are the value of goods produced, labour, capital, "total factor productivity" (as I understand it, everything other than labour and capital that might contribute to productivity) and a couple of constants. For an AI to maximise the output of the function you'd have to somehow make it possible for it to manipulate those parameters - to hire or fire personnel, to spend or acquire capital and to somehow manage all those unknown factors that might be contributing to the output.

The question is: how do you do that? You can certainly collect, or even auto-generate examples of the inputs and outputs of the functions, since we're just talking numerical parameters, and find a maximum of the function. But for an AI to actually improve the productivity of a business, it would have to do a lot more than that. It would need the ability to manipulate those parameters directly in the real world. Otherwise, all it would do is calculate a number. Which is not that very threatening.
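To make that last point concrete, here is a minimal sketch of what "maximising Cobb-Douglas output" amounts to in isolation. Every number below, including the toy budget constraint, is made up for illustration - and the end result is literally just a printed number.

    import numpy as np

    def cobb_douglas(L, K, A=1.0, alpha=0.3, beta=0.7):
        # Output Y = A * L^alpha * K^beta; all parameter values are made up.
        return A * L**alpha * K**beta

    # Toy budget constraint: labour costs 2/unit, capital 1/unit, budget 100
    # (illustrative numbers, not from the thread or any real firm).
    budget, w, r = 100.0, 2.0, 1.0
    labour = np.linspace(0.1, budget / w - 0.1, 1000)
    capital = (budget - w * labour) / r
    output = cobb_douglas(labour, capital)

    # "Maximising" the function is just picking the best grid point.
    best = np.argmax(output)
    print(f"L={labour[best]:.1f}, K={capital[best]:.1f}, Y={output[best]:.2f}")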


>some overly-simplistic geopolitical objective function, like GDP or territory controlled at time t+1.

The hard part here is writing down that objective function. Remember, an AI/ML/cogsci algorithm is locked inside a black box, that being the hardware it runs on. Any objective function for RL must be expressed as a function (preferably a smoothly differentiable one, for gradient descent) of the sense-data available to the agent and the agent's hypothesis class about the world. Naive RL tends to optimize the function by, where at all possible, systematically decorrelating the agent's sense-data and reinforcement signal from the distal causes we intend them to represent.


You mean write down a good objective function; making terrible ones is trivial. And the hard part is figuring out whether the objective function you've spent six months refining actually gives sensible scores, or whether you just encoded the mean of your data in that one hand-tuned constant with six digits.


In my opinion, it's an impossible and undesirable task.

So for example, write down the objective function for current General Intelligence: Humans. It's impossible, and has been the work of the field of Philosophy/Economics since we started seriously thinking about it.


Ehhhh we're pretty close on that one, actually. https://www.nature.com/articles/s41562-017-0069


Did you link the right article?

Nothing in that article discusses interpersonal neurological response systems or anything relating to how mores and boundaries are created.

Seems like you're linking to something which may be on track to narrowing down consciousness, which is a separate question - and one I also question the benefit of caring about.


>Nothing in that article discusses interpersonal neurological response systems or anything relating to how mores and boundaries are created.

Well, we weren't discussing social reasoning and behavior, so I linked an article talking about the systems governing the brain's "objective function".


In fact I was, but I wasn't explicit enough, so that's my fault.

My original statement should have been: "We can't model Humanity's Collective Objective Function" - which is what would be behind what we are interested in: stable, functioning multi-agent systems. I think EY took a crack at this a long time ago and rightly abandoned the concept (see: CEV).

Even with that clarification I disagree with the premise that we can model an "objective function" for an individual strictly in-vivo. Modelling an individual agent's reasoning/function system doesn't account for the environmental context it exists inside of, gives input into and responds to. So even if it was possible to understand the mechanism for intra-personal decision criteria, and I don't think it probably is, I don't think it's generalizable without having the context of inputs.

Assuming that we could do this, I don't think you can extrapolate intentionality directly from individual to collective groups - which for an AGI is what is existentially important as it needs to be collectively general to solve the existential problem.

I also don't think this is desirable as a framework for AGI - as humans, despite our intelligent status, are quite unstable and sub-optimal in groups.


>My original statement should have been: "We can't model Humanity's Collective Objective Function" - which is what would be behind what we are interested in: stable, functioning multi-agent systems. I think EY took a crack at this a long time ago and rightly abandoned the concept (see: CEV).

If no such thing exists, then it was the wrong thing to investigate, so stop being interested in it.

>Even with that clarification I disagree with the premise that we can model an "objective function" for an individual strictly in-vivo. Modelling an individual agent's reasoning/function system doesn't account for the environmental context it exists inside of, gives input into and responds to. So even if it was possible to understand the mechanism for intra-personal decision criteria, and I don't think it probably is, I don't think it's generalizable without having the context of inputs.

That's just an inverse reasoning/theory-of-mind problem, one that normal theory-of-mind models and actual human brains solve every day.

>Assuming that we could do this, I don't think you can extrapolate intentionality directly from individual to collective groups - which for an AGI is what is existentially important as it needs to be collectively general to solve the existential problem.

What's this about "collectively general" and "the existential problem"? You seem to have gone off the deep end into philosophy salad.

>I also don't think this is desirable as a framework for AGI - as humans, despite our intelligent status, are quite unstable and sub-optimal in groups.

Considering you don't seem to know much about how humans work and what causes us to work well or badly in various situations, this statement comes off as almost racist.


> you don't seem to know much about how humans work [...] comes off as almost racist

You broke the HN guidelines pretty badly here, first by getting personal, and second by not doing this: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize." The result was a whole litany of complaints elsewhere, which were understandable if off topic (https://news.ycombinator.com/item?id=17359858).

You unfortunately have a long history of breaking the site guidelines. Would you please reread them and fix this? It's really not ok, and even though I appreciate your substantive comments, their value doesn't obviously exceed the damage you've caused with harshness, swipes, and uncharitable responses over the years.

https://news.ycombinator.com/newsguidelines.html


Clearly you have a chip on your shoulder so it's not worth having any further discussion.


Well, clear communication is important. The more important something is, the more clarity is necessary.


At least, a lot of things that have conventionally been understood as very hard for AI (like catching a ball or playing Go) can be viewed as function approximations!


Mixtures of Gaussians are also universal function approximators. If anything, that makes clear that neural networks sit in a class of function families, each of which could do many of the things neural networks do.
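For illustration, a minimal sketch of that claim in the density-approximation sense, using scikit-learn (the target distribution and component count are arbitrary choices):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Samples from an arbitrary bimodal target density (illustrative choice).
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2.0, 0.5, 5000),
                           rng.normal(3.0, 1.0, 5000)]).reshape(-1, 1)

    # Fit a mixture of Gaussians; more components give a closer approximation.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

    # score_samples returns the log-density of the fitted mixture at given points.
    xs = np.linspace(-5.0, 7.0, 7).reshape(-1, 1)
    print(np.exp(gmm.score_samples(xs)))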


Devil’s advocate: success is a step function. AI isn’t going to take over the world until it is able to, and then it very rapidly does.


>> AI isn’t going to take over the world until it is able to, and then it very rapidly does.

The question is, of course- how? How will AI become able to take over the world?

This is an important question to ask because predictions of AI rapidly becoming capable of taking over the world tend to flare up in times of progress in AI performance. For example, the above article is basically based on the premise that the latest developments, i.e. deep learning, are sufficient to give some nation-state technological supremacy by leading directly to AGI.

However, there is no clear path from deep learning, however powerful it may be as a set of techniques to train classifiers, to AGI, which (well, probably) entails much more than powerful classification. For instance, deep learning can't really do inference. It seems reasonable to assume that AGI will need the ability to reason about the objects it can recognise and the relations between them. Deep learning can't do causal analysis- like Judea Pearl has been screaming from the rooftops lately. It seems reasonable again to assume that AGI will need to model causal relations. Deep learning can't do semantics. AGI will need to have some handle on semantics. And so on and so forth.

In general, anyone that claims that X current development in AI risks inadvertently producing AGI, must explain how this is supposed to happen- not just wave their hands about and claim that, if you throw enough compute and enough data at the problem, magic will happen and AGI will turn us all into paperclips.


What seems more likely at this point is that: machine translation will get so good that it will be better at translating between languages than any human can; image classifiers will become so good that they will be better at classifying images than any human can; self-driving cars will become so good that they will be better at driving than any human can; etc. etc.

I think the mismatch comes when people looking from the outside who have no real understanding of how the internals work see this progress in a very narrow domain and extrapolate that idea into thinking that something more general is happening.

When you understand the trick, it stops being magic.


Adding to that: thousands of tricks are involved in the resourcefulness of the human brain, and each trick has multiple ways to interact with other tricks or modify itself. Duplicating this feat in machines isn't a simple matter. AGI isn't going to be some miracle of ML or any one technique. It requires simulating several different self-selective learning models and their managers, including a way to learn and debug better ways of learning; combining at least 10 different realms of knowledge and process representation, including language abstractions; several tools for selecting which method for learning/remembering/acting is appropriate; several strategies for goal creation, selection, planning, execution and evaluation; a robust and scalable memory architecture; a dependable way to interact with the environment; programs for self-evaluation and debugging... the list goes on. Bottom line is that nobody can get all this functionality from a few optimization algorithms.


Didn't we already get all this functionality from one optimization algorithm, namely, biological evolution?


Yes. But I don't think we have millions of years to make that happen, nor do we have the capabilities or the knowledge to create the environmental situations that could potentially reinforce those mutations.


The timeline only matters if the generational period is the same. For the rest, I agree that we don't currently. But things change.


So we're in agreement, right?

> When you understand the trick, it stops being magic.

The only argument I've seen about the difficulty of general AI is that we don't presently know how to do it. But it only seems difficult (like magic) because of our lack of comprehension. Maybe it actually is difficult, but good odds are that, like everything else, it's just one or a couple of tricks, and once we put the pieces together we'll be moving just as fast.


If a machine can drive a car, how much harder is it to drive a tank?

If a machine can generate strategies to win simulated wars, how much harder is it to generate strategies to win real wars?

If machines can out compete people on 50% of jobs, how much harder is it to out compete people on 100% of jobs?


Well, even for humans driving a car doesn't mean you can drive a tank; being good at Rome: Total War or Starcraft doesn't mean you can win a real war; and we're a long way away from "machines" being able to outcompete 50% of humans at their jobs.


Some day we will understand how we do our tricks though.


Or alternatively, how to accomplish the same thing in an easier way -- we don't fly in commercial aircraft by flapping feathered wings, after all. And then magic will be dissolved and we might find that general AI isn't really all that much harder than narrow AI.


Devil's devil's advocate. The world is too complicated to be reduced to a function. AI isn't going to take over the world anymore than anyone else has (governments and organizations included).


Take the human general intelligence learning architecture and scale it up to a billion operations a second.

The trick with AGI is that it can learn faster, and humans (even with our limitations) did take over the world.


So one AGI is like a billion human beings, so a little bit less than China or India. And like countries, there will be other AGIs. An AGI isn't born in a vacuum; it comes into existence in a world advanced enough for such a tech to be developed. As such, it's reasonable to suppose there will be limitations for AGIs when one decides that world conquest is a worthy goal.


It's not like one billion human beings.

Humans are limited by a lot of things, like motivation and communication, which are likely distinct from the general problem-solving architecture itself. Humans also have goals and an evolved morality that guides their thinking. Humans share a similar type of intelligence and understanding of goals, culture, right and wrong. You don't get this for free with AGI.

The point of the human example is that there already exists a general problem solving architecture you can scale up (it's not impossible and there are brains everywhere in nature).

If you increase operations per second by many orders of magnitude, you can have all human learning over thousands of years compressed into a couple of hours. You see this loosely in narrow spaces now with things like AlphaGo Zero.

There may not be other AGIs if there's an intelligence explosion and the first one improves itself very quickly. If its utility function is not aligned with human goals then it could end up turning all matter into paperclips - not because it's evil, but because it's hard to set goals tied to human morality (when humans don't even agree in every case) and it happens to be configured such that making paperclips maxes out its reward function by accident.

It may be possible to make the unsafe version before making the safe one, since it seems harder to make an AGI that's aligned with human goals than just an AGI in general.


> The world is too complicated to be reduced to a function.

I can only interpret this as a mystical claim...


The world is too complicated to be reduced to a function that does not exhibit general intelligence - at minimum, one able to formulate rational strategies in arbitrary games against other rational adversaries.


The mystical claim is that the world is like a board game where we can extrapolate from the success of AlphaGo/Zero to AGI taking over the world.

That's just pure speculation because nobody has a clue what such an intelligence would look like. I can imagine using the mind infinity stone to take over the world, but that doesn't make it realistic.


We have a working model of general intelligence. It runs on 20 watts, takes 20 years to became somewhat usable, and is known to cause local disasters.


"the world" is vague. Governments certainly have taken over various worlds.


Organizations of humans (governments) seem to have conquered the world pretty well, outside of a scant few lawless areas. Even then there's usually some local warlord maintaining order, just not a nation state. I'm not sure what you're trying to say?


What about reinforcement learned universal function approximators?


You can hardly train RL in the real world (as opposed to simulation) because of its high sample complexity.


Exacerbated by sparse rewards. Well, it is being tackled, like everything else - [0], for example.

[0] https://arxiv.org/pdf/1802.10567.pdf


> [I]f most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers...This kind of dependency would be tantamount to a new kind of colonialism.

This is an interesting observation; it is actually already happening now with Internet companies, and to a lesser degree with physical-product companies with global reach (like Apple and Amazon). Money flows from local economies into those companies, which don't pay much local tax nor create much local employment. That could eventually drain the well dry.

Countries that suffered from colonialism have been catching up to the developed world in terms of standards of living over the past 100 years. I wonder if the above effect will reverse that course.


So - we end up with Elysium?


I prefer to think about this in a manner similar to Robin Hanson’s Foresight Institute presentation regarding models for AGI timescales [0].

Basically, the component of this that says "but machine learning is different" is still not convincing. The same nationalistic divides and concerns about geopolitical backing for warfare tech that arose in response to nuclear and chemical weaponry are likely to be high-fidelity models of whatever geopolitical divides arise around machine learning weaponry.

I agree it will be a significant policy issue, but I do not agree it is very related to the topic of AGI. Reasoning about it by studying how various other tech arms races have unfolded in history will be a good, but not perfect, model for how it unfolds for ML too. And the pieces where this time is different will be far more understated than the amount of hype about it.

[0] https://vimeo.com/9508131


So you're saying we are going to digitize emotional bias and pretend it's more intelligent decision making, and then hand that over to a corporation. Sounds like a winner.


That's a huge win. Consider that the economy today is built upon trading emotional bias for labor-saving and calling it "employment". With AI, we can get all the same emotional bias and not have to pay them. Plus they operate millions of times more efficiently and never need to sleep or have a personal life.


> Machine learning will enable new modes of warfare

Bucky was confident that we could use computers to solve our problems. We could enter all relevant data and the machine could compute the optimal solutions for us.

The issue has always been ensuring we ask them to solve the right problems.

If we use AI to tell us how long to imprison people (already happening) rather than how to decrease recidivism, that's a meta-computer choice that we made, not the AI.

If we use AI to kill people, rather than to figure out how not to have to kill them in the first place, that's also our choice.

Cf. "Wargames" https://en.wikipedia.org/wiki/WarGames This was in '83!


I found the scene on youtube: https://www.youtube.com/watch?v=NHWjlCaIrQo

I'm bawling my eyes out right now.

"The only winning move is not to play."


AI will have the biggest impact on who makes money and controls wealth. Before any nation-state tries to take over the world with some kind of ultra-dominant weapon, most large states will have to deal with their own populations, as the rich control more resources.

The imaginary graph of ML technology that can be developed for destruction or defense is fraught with inter-dependent paranoid scenarios. The use of ML for the increase of human happiness is apparent and obvious. An ML arms race that invokes conflict is going to be a huge waste of a nation's ML resources.

It would be much more productive to think about how ML/AI can be used for egalitarian human prosperity (a la post-scarcity, etc).


A general critique: it's not just about AI.

Computation, in general, is capable of solving many problems that afflict the world - disease, hunger, resource allocation, etc. Some of these problems have "conventional" computational solutions.

Fundamentally, there are two problems that must be solved. First, the actual ability to compute needs to be perfected. This means that massive computations (i.e. the computations that solve massive, game-changing problems) can easily be performed. Things like public clouds are solving that problem. Second, computation needs to be applied to a problem. Statistical learning approaches have become popular because they are relatively simple to apply and are relatively successful. AI researchers tend to believe that AI is all that matters, but obviously the success of AI is only possible with efficient computation. Similarly, efficient computation alone is useless if it cannot be used to solve actual problems.

Computation is to the 21st century as energy is to the 20th. The ramifications of that statement are immediately obvious: consider the petrodollar. Soon, computation will become a currency.


Maybe it's not about AI as a strategic asset, so much as it's about private sector data collection of the kind that AI can utilise as a strategic asset. The ability to perform image recognition against user photos would be limited to countries that host the headquarters of a large social network, for example.


This is strangely relevant.

https://archive.fo/zP1F1

- The economics surrounding AI development favor those who can commoditize data to the cheapest price. (Silicon Valley, militaries, and finance have AND MUST MAINTAIN their influence over this commoditization) This commoditization requirement was once previously thought as irreversible, allowing dumb money to buy into the idea that “data is the new oil”, but Butterfly War shows how to unexpectedly drive up the liability of a mass accumulation of data commodities.

- Foreign actors and short sellers can now use derivations of the Butterfly War to become market makers of the data economy, forcing the theory of “AI Winters” to be replaced with a more predictive “AI Business Cycle”. (Do you now understand why I went to Soros-influenced actors first?)

- This undesirable pressure, when paired with the institutional dependencies of established AI infrastructure, will force a deeper consolidation of Silicon Valley, military, and financial "cognitive assets", which in turn will skew the funding and purposes behind additional AI development to be more risk-averse and conservative (from a power-preservation standpoint).

- The pressures to embrace “cognitive mercantilism” become irreversible. Nations will aggressively retain talent and technologies for themselves to improve their collective bargaining power on the international stage. /pol/-tier nationalism finally has the footing to stifle their material humanist opposition.

- AI development will enter an artificially induced “deep freeze” period, similar to what happened to space exploration after the Space Race.

- The doctrine of Gnostic Warfare we develop today dominates in this period, focusing primarily on the epistemological limitations of Deep Belief Networks and, more precisely, how these cognitive assets define emotion.


the idea that politicians are welcome in theology is ridiculous

> what is intelligent

> what is the set of all x such that x is intelligent

not political questions



