Hacker News
What if AI is a failed dream? (madebymany.com)
119 points by errkk on July 14, 2017 | 204 comments

I don't understand why he finishes with this paragraph:

I’m as excited by the idea of colonising Mars as much as the next nerd. But not at the opportunity cost of not solving the problems facing us on Earth. That’s a bet too big.

Making an effort to colonize Mars does not carry such a cost. People who are interested in solving this issue will not magically do whatever you'd like them to do instead. We will never solve all problems we have here on Earth, that shouldn't stop us from venturing out into space.

Staying on Earth is running a server without a backup. We all know how that ends: work opportunities for data retrieval companies.

Other planets would be our species' redundancy, at the very least.

Playing it out, our sun will run out of fuel eventually. Mother Nature is infinitely patient. It will come.

Saying no to space travel for survival reasons is the definition of "ignorance is bliss". The sun is here for a few billion years, "what do I care", right, I'm long gone anyway.

>Staying on Earth is running a server without a backup. We all know how that ends: work opportunities for data retrieval companies.

Unless there is an exoplanet comparable to Earth that we can reach, your analogy is akin to calling a floppy disk (Mars) an adequate backup for a server (Earth) with petabytes of data. Colonizing Mars for an extinction event would be a data retrieval endeavor.

This is true.

That said, many inventions in history were not the end goal, but a side effect of research with a different goal in mind.

Do you think that if we had stopped development on PCs around the '80s, and picked it up fresh in 2017, we would "invent" the iPhone or the SSD within a week?

The internet was mainly motivated by porn distribution, then data; now it literally runs the entire world, and speeds up science and business on a scale that warrants applause.

We need to walk before we can run, at least that's what I personally like to believe.

Mars is not feasible, but we can still learn many things about (consumer & daily) space travel, and about living in a radically different environment, by starting with it.

Waiting on any practical/experimental advancement until we are "absolutely sure" more often than not ends in little to no advancement at all, or very slow advancement.

There are also non-technical reasons to "test drive" inter-planetary migration on a planet that can be "thrown away".

Imagine we find the perfect planet, go there, but then some newly founded government or terrorists decide to nuke it for reasons. I'd rather have Mars destroyed than a super rare planet. Humans like to theorize, but it's accidents and mistakes that drive practical advancement.

If we are able to colonise Mars, we will have created technology that would allow us to survive on a fucked up Earth as well, so there is little downside in exploring ways to colonise Mars.

A meteor big and fast enough to break up the planet?

It's very presumptuous to say that if we can colonise, then we "obviously must have" a solution for this scenario.

Even fully assuming you are right, theory and practice are two different things.

Take the burning of the building in London. Convince me it burned down because we didn't have the tech.

The solution, and even the production, of fire-resistant material was available decades ago, was it not?

It was the failure to apply it in practice that made it fail and burn down. The reason, within this context, is not relevant. Over 100 people dying was very real, and ctrl-z doesn't work in the real world.

After that accident, buildings all over Europe were revisited and, where needed, updated with new materials.

A full-on destruction of Earth most likely will not allow the luxury of a revisit, and as such, from a survival point of view, it's best not to wait, but to just start experimenting with things like space travel and colonies before Hollywood becomes real and a meteor hits us by the end of the week.

I agree with you. But typically when people make such an argument, it's about where we spend money.

But even so, the law of diminishing returns kicks in here.

The more a field is neglected, the higher the likelihood that there's some low-hanging fruit left unharvested.

Article author should go play a few hundred games of Civilization.

Money is a human-made construct; productivity is all about people wanting to work on things they are passionate about, whether that's having a healthy family or Elon's escapist plans of colonizing Mars. Once we have UBI and automation in place, which will lead to properly managed capitalism, everything can become very efficient.

The foundation of this working is people being healthy and communities being healthy: real community where we have time for deep discussion not only online but in person, which is currently greatly lacking. What currently keeps most of us engaged is economics, the requirement of a job. Once UBI is in place worldwide, things will really start moving.

Step 1: With UBI you can chase your dreams.

Step 2: Well, you can't afford real dreams, you're on UBI, so chase your dreams in VR. Plus the clicks will help pay for UBI.

Step 3: Eat Soylent, it's less expensive, and you don't need much nutrition, you're always in VR anyways. Besides, the system can get more clicks the more time you're in VR.

Step 4: You can get a more enriched experience if we attach cables to you. Besides, the system can get more clicks the more time you're in VR.

Step 5: The system decided to make VR more like real life and add constraints and problems to your life. Besides, the system can get more clicks the more time you're in VR.

Shit we ended up in the Matrix....except it turned out to be a giant click farm.

Devil's advocate - if the experience of VR is completely indistinguishable from reality, is it not just reality for the consciousness experiencing the VR?

We experience sadness, happiness, joy, and despair, based on sensory input, which is supposedly a manifestation of the world around us, but maybe it's the abstractions we build on top of that matter which are important. Maybe the experience of VR fishing with my dad is imperceptibly different from actually going fishing (better, even, since the simulated fish isn't full of microplastics and our two stroke boat didn't poison the water) - If my VR avatar and your VR avatar build full, rich (as far as we can tell) experiences together, and generate real sensations of joy, then I'm not sure it's so different from doing the same thing in reality. In either case we're manipulating atoms in ways we find pleasant; in one case those atoms simply happen to be in a computer.

As a thought experiment, say you could build an entire copy of our civilization, but at half the physical size - its experiences would be no less rich, I think. Lower carbon emissions too. Now make it a hundredth the size. Or a billionth. Maybe modeling this inside a computer isn't so different from that scenario.

Having everyone hooked up to VR all the time sounds like a hellish nightmare, for the record, but I'm not sure why it bothers me as much as it does.

I think the VR world is bothersome because you (and me, and many others) have some part of our value system or moral calculus that isn't a function of people's subjective experience but is to do with the arrangement of stuff in the highly contentious 'base reality'.

Wireheading gives a similar icky feeling; if we got everyone in the world some brain implants that made sure they were perfectly happy with their situation, no matter the situation, would the situation as a whole be morally better, worse, or equal to how it is now?

I would say worse, because somewhere in my morality look-up table it says that the happiness of a person who is sitting in a concrete cell with their brain wired up to induce happiness is less good than the happiness of a person listening to the symphony or whatever, even if their subjective reports would both be full of joy.

For me this holds true even if the hypothetical wirehead-world is set up in some way to make sure that people keep breeding or live forever in wirehead bliss or what have you; the situation's value is not just some sum of expected subjective 'utility scores' or something similar (apart from the "Repugnant Conclusion", which is another problem here).

This is probably a patronising and paternalistic kind of morality for me to have, but I'm OK with that (and secretly suspect a bit that people who claim to have a morality totally devoid of this sort of thing are either deluding themselves or lying).

I do like the idea of 50% scale world (although whether 50% scale world would actually work the same I don't know). Introspecting, that seems OK because it's still made of normal matter, which I must conclude has different moral value to simulated matter.

Very interesting way of thinking about it; I'd enjoy reading a much longer, fleshed-out version of this.

Also, is there a name for the "persuasion technique" you've used here? The idea of taking an understandable scenario and then reducing it? I have a similar technique that starts with an exaggerated and unavoidably rhetorical question, and then works back towards the topic at hand, when someone has a mental block and simply won't budge. I've always wondered if there's a name for it so I could become more effective at it.

I'm not sure, it's just something that's crossed my mind with respect to VR and the nature of reality vs models of said reality. I'd enjoy thinking about it more, or reading about it, if there's a word for this.

The problem is choice and losing choice in the future.

I agree entirely that loss of choice is tragic, though I can imagine you referring to a few different things. Do you mean the choice between reality and simulation, or the choice in how one lives, regardless of whether it's in the real world or not? Or perhaps you meant something else?

Ironically I suspect a lack of choice in life is part of the appeal of the escapism grandparent describes.

> money is a human-made construct

Like the alphabet, and just as useful.

Usually, people who point out that money is a human-made construct are not trying to say that money is useless. Rather, they're trying to encourage you to question how money is constructed, likely because they think that making some tweaks would lead to it being more useful.

Kind of like people who tweak fonts to make them more useful, to stick somewhat close to your alphabet example.

I didn't say it wasn't useful. UBI still uses money, doesn't it? It's all about the distribution of resources.

Why will UBI and automation lead to properly managed capitalism? And what do you mean by "properly managed capitalism"?

Good questions. The short of it is this: it's known that if you increase the money available for education, educational institutions will increase their fees; the same goes for landlords and rent, and pretty much everything else, which leads to inflation. The more of a necessity something is, the more it will generally dictate who benefits most from these cost increases and pressures on systems. There will have to be a floor created for housing, food, transportation, etc. Most of what we need can be automated, heavily automated. We'll have to decide as a society what baseline people should have for everything. Fitting that new system into the existing system is the tricky part.

My take is he means that trying to move to another planet isn't a good survival strategy as a species, because of the high probability that such an endeavour would end in failure.

We stay on this rock, we die on this rock. Confining ourselves to Earth isn't a survival strategy, it's giving up. Rebuttals would be appreciated, though!

Perhaps a counterpoint would be that the opportunity cost of colonizing mars would be better spent by first solving the existential issues here on Earth.

Once we have figured out how to achieve homeostasis on a planet so plentiful in resources as our own, then we could start looking to do so elsewhere in less favorable environments.

Mars is extremely barren and has negligible capabilities for life support.

What existential issues are you thinking about?

The biggest issue I can think of is oil running out, but we're on a good trajectory to solving that. Other resources seem either abundant or replaceable. The threat of global nuclear war is problematic but not really solvable by throwing more people at the problem. Overpopulation isn't projected to be a problem. Climate change will be a major inconvenience and costly, but unlikely to be an existential issue.

Of course there's lots of injustice, hunger, disease, murder and torture that would be nice to resolve, but it's not an existential issue for humanity.

Even oil was never an existential issue: there was always coal (and perhaps since the 1950s nuclear fission).

> the existential issues here on Earth.

I don't think we have any issues like that as a species, as a species we are pretty much thriving. That growth might not be built on the most sustainable principles, but we are slowly getting there.

Short of something super apocalyptic ruining the whole planet for most life (like a big asteroid hitting us), I don't see humanity eradicating itself completely anytime soon, we are quite a sturdy bunch.

> Once we have figured out how to achieve homeostasis on a planet so plentiful in resources as our own

We've had plenty enough time trying to do that, maybe it's time to try a different exercise with more constraints to motivate creativity? Mars could be exactly that.

> That growth might not be built on the most sustainable principles, but we are slowly getting there.

Despite what you hear about electric cars and so on, every year the human race increases the amount of environmental damage it does compared to the year previous. We're not just increasing the damage; we're increasing the rate of increase of the damage. Heck, not just the first and second but also the third derivatives are all positive.

So in fact we're not getting to sustainability at all; we're moving away from it faster each year. We're accelerating into the apocalypse. Some day that might change, but not this year. First we'd have to stop accelerating. Then we'd have to slow down. Then we'd have to start reversing.

The first part of solving a problem is recognizing that you actually have one. I increasingly see that happening with regard to the finite nature of the resources on Earth and its rather delicate balance of climate vs pollution. These are problems we've accumulated over generations and only recently recognized we actually have; to me that's worth something.

>I don't think we have any issues like that as a species, as a species we are pretty much thriving. That growth might not be built on the most sustainable principles, but we are slowly getting there.

Between environmental disaster and global warming that could wipe out a huge part of humanity, the ever-present possibility of nuclear war, antibiotic resistance, and other such niceties, I can't even begin to see how one would think that...

> a huge part of humanity

That's the point there: we can easily wipe out huge parts of humanity. Huge parts, yes, but the whole species? I don't think that's gonna happen unless something of a truly apocalyptic scale occurs, and as ingenious as we are, I doubt we are ingenious enough to make that happen any time soon.

Note: I'm not saying everything will be just fine, I'm just saying we are a rather resilient species, as we don't actually need that much just to "survive".

Nuclear war remains an existential threat to our species.

There are ~15,000 nuclear weapons deployed right now by 9 nation states. That's enough to wipe out almost all of the world's population, and I'm not convinced the few who remain would survive for long in the food-scarce, irradiated lands.

Well, there's the simple fact (or what seems to be a fact at least, might not be true) that the reality of physics confines us to this rock, more or less. We don't travel faster than light, so at best we can achieve a Mars colony dependent on Earth supplies.

So yeah, I'd rather just focus on developing survival strategies for our rock. But the whole discussion is maybe a bit off topic here too.

A self-sufficient Mars colony isn't easy, but it doesn't seem impossible. And the lack of FTL doesn't really limit us to this solar system either: generational starships with nuclear pulse propulsion [1] could get us to the next star with around 100 years travel time. Not exactly something we want to start tomorrow, but fairly feasible.

1: https://en.m.wikipedia.org/wiki/Nuclear_pulse_propulsion

Viable long-term survival strategies on Earth have some pretty hard limits. Even if we master famine, disease, our own worst impulses, and asteroids, there's still the expansion of the Sun. Even if we don't expand beyond our own solar system, the technological advances gained in doing so could be invaluable for maintaining our home planet.

The real problem is that wherever we go, there we are. Earth is perfect for us, as evidenced by the fact that we're here. But, we've come up with a system of allocating our abundant resources that is unhealthy for the planet and, ultimately, our survival. Beyond just climate change, we're trashing the planet for money even as we expend precious few dollars on, say, the problem of extinction event level asteroids. There is more raw brainpower being applied to trying to get you to click an ad.

So, we have devised a system that directs our considerable resources (human, natural, and otherwise) almost exclusively per financial incentives and, oddly, there seems to be little financial incentive in ensuring our own collective survival.

If we don't evolve in our thinking, we'll just reproduce the same problems wherever we go.

Perfection is a particularly anthropocentric concept. Earth was clearly a local optimum, but our very capacity for trashing the planet suggests it was a rather shallow one. We've changed the fitness landscape like so many bulldozers in a landfill. There's no way to know what lies among the slopes beyond. Maybe we really are stuck here and defending our crapsack position from asteroids is as good as it gets. That just sounds depressing, though.

I expect we won't evolve our thinking, but if we could somehow reproduce the same problems wherever we go enough times, there's an opportunity for evolution and natural selection to operate on our societies at a galactic level. We'd buy ourselves the chance to roll the dice many, many, many more times. There will be misery and suffering along the way, sure, but that's been the cost of our existence thus far.

>our very capacity for trashing the planet suggests it was rather shallow

That strikes me as circular? I mean, are you saying that the planet was never fit to begin with because it is unable to withstand any assault we can muster and remain habitable for us?

We've evolved the capacity to split the atom, while being simultaneously limited by our own biology. We need air. We need water. Yet, we can easily create a blanket of fallout that will render virtually any environ inhospitable to our delicate biologies. We can't ignore that incongruence and think that our planet is the problem.

Wherever we go will require some stewardship.

>defending our crapsack position from asteroids is as good as it gets.

LOL. Well, given our difficulty in finding any other place that offers a baseline of accommodation for any life, I'd say it'd be even more difficult to find a place that works for us and is also immune to cosmic activity. So, we'd likely have to consider certain issues for any crapsack of a planet we populate.

It's kind of like saying, "we don't have the technology or will to ensure our survival on a customized-for-us planet, so let's go out and terraform another planet or achieve interstellar travel, and then figure out how to ensure our survival on that planet".

How's about we optimize on the stewardship-front here at home?

>if we could somehow reproduce the same problems wherever we go enough times, there's an opportunity for evolution

More likely, extinction.

For one, diluting resources to get off this rock might take resources away from actually protecting life on this rock, without resulting in anything concrete, and thus just bring our end much closer.

Take a hypothetical epidemic that wipes us all out, one we could have prevented if only we had given more money to biological/medical research instead of space exploration.

If we can't make it work on a literal Garden World, then we have no chance of making it work in the frigid void, or on an irradiated hellworld.

Relative costs change with technological development.

However, it is a good idea to move our heavy industries off Earth if possible. Once we have access to space, we can get things from space more cheaply than we can get them from Earth, with asteroid and moon mining and all.

>Making an effort to colonize Mars does not carry such a cost. People who are interested in solving this issue will not magically do whatever you'd like them to do instead.

The same holds true for every calculation of opportunity cost. It's not about whether you WILL do Y instead of X, but about the relative cost of doing X vs Y.

E.g. a business that does X will not "magically do Y instead" just because X incurs a big opportunity cost. But that's beside the point: it will bear the opportunity cost whether it's willing to switch to Y or not. If the CEO is stubborn and doesn't even want to hear about Y and wants only to do X, that doesn't mean the company won't suffer the opportunity cost of not doing Y.

The same holds true for humanity. If the people spending resources to go to Mars won't ever direct them elsewhere, we (as humanity) will still bear an opportunity cost of going to Mars vs spending the same effort on something else.

(And it's not like that premise is a given, that those Mars resources can't "magically go to something else instead". Pressure on the government, for example, could cut NASA's budget and SpaceX subsidies towards some other cause.)

A real-life example of this is the current commute mess in the NY/NJ area. There is free wifi on the streets, but subways and train stations are falling apart, with no planning for commuter growth over the last 5 years or the next 5 years.

http://www.nytimes.com/2012/04/10/nyregion/report-disputes-c... goes into some of the mess and lack of accountability.

We already know how to solve these problems: just do whatever Singapore does.

and what is that?

It's a bit of a complex topic. There's a lot to learn from how their civil service is run, how infrastructure is built, how the legal system is set up, etc.

My comment comes from having lived there and seeing what results they achieve in practice---without relying on eg oil money.

The main barrier to adopting Singaporean methods for, e.g., the "current commute mess in NY/NJ area" would be institutional, not technical.

Some very quick Googling turned up eg http://www.alphr.com/life-culture/1005939/building-a-smart-c... and http://www.bbc.co.uk/news/business-32028693 and https://en.wikipedia.org/wiki/Economy_of_Singapore

Thanks for coming back to reply. Too bad Hacker News does not notify you about such things. Do you think the Singapore model can scale to larger countries? I've heard a lot of arguments that healthcare, education and transportation models from Europe don't scale beyond those small countries. Any thoughts?

I personally consider another possibility:

AI is already here and we're all slaves to it already; we're just in denial, imagining we're still in control. It's just a matter of interpretation.

Right now, machines dominate our lives from the moment we wake up till the moment we go to sleep with our phones beside us.

It runs the world - guides the ships, planes, cars, drilling machines, financial system and so on.

The symbiosis makes us feel like we have a say - in practice, none of us can do anything to stop it or change its course.

Our power starts and ends with the branch we're working on - then our pull request gets merged into master - and that's pretty much the value of the company we're working for: source code for apps that run and interoperate in the cloud.

The illusion is that we're doing it for other people, the reality is that we do what the machine requires us to do so that it's even more pervasive in everyone's lives.

Overexposed to machines, people are too numb and apathetic to notice or care about whatever new app or service or product.

Well, it's just another way to look at it; otherwise there are lots of great things about the machine - maybe it will even save us from ourselves. The question is, why?

What? I see what point you are making about being glued to our screens. But calling this phenomenon "AI" is a complete non-sequitur.

We are also slaves to putting little squishy bits into our mouths three times a day, or wrapping ourselves in strange fabrics, but we don't call that AI.

I don't really mind that we're metaphorically slaves to machines. People have been metaphorically slaves to all sorts of other things: dollars, diamonds, oil, sex, fame, influence, territory, you name it.

What I do mind is that the machines have their own masters, and that the masters are doing their best to hide the fact that they own and control the machines -- and by proxy, the rest of us. AI as it currently exists only helps to obfuscate the chain of responsibility that leads up to all these large corporations.

"Our autonomous car T-boned a truck in broad daylight? Too bad, it made its own decision. Don't hold us liable for its mistakes, but please do give us credit for building AI that makes fewer mistakes than the average driver." All those complicated algorithms help you launder responsibility just like bitcoin mixers help you launder money. The more you make your machine look like a truly autonomous being that can learn to do its own thing, the more you can distance yourself from your responsibility as its owner and creator.

I am not a huge fan of these theories that give agency to objects or tools with regards to evolution.

It's something that has been very popularized by the "Sapiens" book: wheat enslaved men to produce more of it, apples tricked men by "evolving" into sweeter fruits, etc.

You could extend this theory to whatever humans produce that's useful to humanity. You could say that walkers are a very successful species of objects, since they have managed to make a lot of men depend on them, and so have proliferated in the last century.

The truth is that humans are addicted to safety, comfort, efficiency, pleasure, etc., and as such we are addicted to computers, just as we are addicted to cars and reality TV.

Does that make humanity slaves to cars, and to reality TV?

I forget where, but I've read of this theory before: that what constitutes a true AI is constantly redefined as the level of development we have not yet achieved, so that whatever we do achieve becomes invisible as AI.

In other words, true AI is magic.

Or at least the idea of what true AI is remains a constantly changing mythology.

Electricity was magic a couple hundred years ago.

I appreciate the point of view this post is providing. It is always important to think about the potential limitations of AI, and the fact that once again, we may hit the ceiling of our current attempts at AI much more quickly than we realize, and no singularity will ever occur. We need to think about how to solve our problems realistically, now, without waiting for a godly super AI to come solve them for us.

However, what's bizarre is that he is painting a world that is already wrong. In particular:

"We won’t have massive, perfectly coordinated networks with optimised flow and distribution — think traffic networks (those self-driving Teslas that act as taxis when you don’t need them)"

But... we will. We're already making it. It might not work very well at first, but automated cars are a real thing, and they are going to happen. We have preliminary automated cars right now. Perhaps he is instead claiming that our autonomous cars won't scale, but this too makes no sense. Of course we can make them scale. Once we've solved the hard problem, which is actually driving the cars around and not hitting things, solving distribution and flow is almost trivial. We have all sorts of systems for solving distribution problems and finding maximal flow along a network. There's even an entire class of algorithms to solve it with[1], which we currently use, right now, to solve things like scheduling airplane flights.

And then he brings up a point that seems completely orthogonal to the entire rest of the post:

"We won’t have total surveillance (à la Reynold’s Mechanism or Brin’s Transparent Society)"

This has nothing to do with AI and everything to do with cryptography. Whether or not the singularity does or doesn't happen is completely irrelevant. Indeed, it currently looks like we're headed towards having total surveillance with or without the singularity, unless we do something about our privacy laws.

While we shouldn't assume AI will fix all our problems, the examples provided in this post are bizarre, to say the least.

[1] https://en.wikipedia.org/wiki/Maximum_flow_problem
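To make the "entire class of algorithms" concrete: here's a minimal sketch of Edmonds-Karp max flow (BFS augmenting paths), with a made-up depot/zone dispatch network as the toy instance. This isn't what any airline or Tesla actually runs, just an illustration that the pure flow-optimization part is well-trodden ground.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.

    capacity: dict mapping (u, v) -> edge capacity.
    Returns the maximum flow value from source to sink.
    """
    residual = dict(capacity)
    adj = {}
    for (u, v) in capacity:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
        residual.setdefault((v, u), 0)  # reverse edge lets us undo flow

    flow = 0
    while True:
        # BFS for an augmenting path with spare residual capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # Find the bottleneck along the path, then push that much flow.
        bottleneck = float("inf")
        v = sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[(parent[v], v)])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            residual[(parent[v], v)] -= bottleneck
            residual[(v, parent[v])] += bottleneck
            v = parent[v]
        flow += bottleneck

# Hypothetical fleet-dispatch network: two depots feed three pickup zones.
caps = {
    ("src", "depotA"): 3, ("src", "depotB"): 2,
    ("depotA", "zone1"): 2, ("depotA", "zone2"): 2,
    ("depotB", "zone2"): 1, ("depotB", "zone3"): 2,
    ("zone1", "sink"): 2, ("zone2", "sink"): 2, ("zone3", "sink"): 2,
}
print(max_flow(caps, "src", "sink"))  # 5, limited by the source's edges
```

The hard part the later comments raise still stands: the static optimum is easy, but real dispatch means re-solving under randomly shifting capacities, which this sketch says nothing about.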

Your first paragraph is spot on.

The "singularity" is really just 17th-century metaphysics. It's Rene Descartes all over again, imbued with religious undertones of salvation and transcendence. Most of the "aspirations" around its occurrence are nonsense. Kurzweil has fashioned himself as an AI prophet. For some reason it has caught on among people, some very smart, and has led to a general passiveness about solving real problems today. The "singularity" becomes the answer for everything: "it's definitely coming!"

The only problems I'm interested in solving today are the ones that will make me rich enough that I don't have to work when the robots take over.

The author made the mistake of citing specific examples that weren't very good. It's odd to me that there's this conflation of AI with the singularity. Strong AI is only one path to the singularity, others being human augmentation and group minds. Also, the idea of this all being a 'failed dream' was addressed by Vinge in this talk [1]. He picks a metric, then tracks how it would develop in the case of a singularity never happening. Very enlightening to see his thoughts on this.

[1] http://longnow.org/seminars/02007/feb/15/what-if-the-singula...

My pet theory is that if everything goes as well as it can go, people thousands of years from now will call the period we are currently living through the singularity.

I mean, if you look at the definition of singularity that is implied by the site behind your link -- "self-accelerating technologies will speed up to the point of so profound a transformation that the other side of it is unknowable" -- then to be quite frank, that has basically already happened.

BTW, I'm not trying to say that the singularity has ended already. It's a development that started 1-2 centuries ago and is currently ongoing, and pedants of the future will quibble that it's not truly a singularity (but then again, true "naked" singularities simply don't exist in nature, so that point is moot).

> "self-accelerating technologies will speed up to the point of so profound a transformation that the other side of it is unknowable"

It is not even the first time this has happened. There was something fitting this description starting in the Neolithic, though people are not sure what exactly it was. There was writing, and the big civilizations that arose through it. And there's the scientific/industrial revolution, which we can argue endlessly about whether it is or isn't just a part of what is happening now (but it falls just outside your timeframe).

Re: writing. Lots of the oldest clay tablets are rants about how 'our laws are the best' and 'other cities' laws will make you work too hard'. Apparently writing really took off when it got used for PR!

> solving distribution and flow is almost trivial. We have all sorts of systems for solving distribution problems and finding maximal flow along a network. There's even an entire class of algorithms to solve it with[1],

I would not be so confident about that. Yes, in a perfect world with perfectly rigid, cubical cows and passengers who are never late, you can make flow algorithms that reliably hit near-'global maxima'. Things do not change dramatically if you feed the problem to some ML algorithm.
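
For reference, the "entire class of algorithms" quoted above is the max-flow family. A minimal Edmonds-Karp sketch (the network below is a made-up toy, not an airline model) looks like this:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    `capacity` maps node -> {neighbour: edge capacity}."""
    # Mutable residual graph, with zero-capacity reverse edges added.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for the shortest path that still has spare capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left
        # Walk back from the sink, find the bottleneck, update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy network: two routes from s to t, bottlenecked at 2 each.
net = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
print(max_flow(net, "s", "t"))  # 4
```

Which is exactly the point being made: this solves the static, cubical-cow version of the problem cleanly, and says nothing about late fueling trucks or weather.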

The flow problem for airlines is in larger part a problem of how to make decisions given the near-random occurrence of things like sudden worsening of the weather, traffic redirection due to something happening on the runway or at the airport, fueling trucks being late, and of course the need to maximize revenue per seat. The best decision-making algorithm here would not be the one that maximizes throughput, but the one that can keep routes more or less consistent given all those changes.

Here, I see that even if you had a near-perfect routing algorithm, and a transportation system with no human-driven cars, the whole routing map might have to be changed completely in response to minuscule externalities like somebody jaywalking. This class of problems is altogether different from any kind of optimization.

Ironically, the logic class best adapted to solving such problems is the long-forgotten fuzzy logic.

You know how solving 90% of some problem takes 90% of the time, and solving the remaining 10% takes another 90%? I have a feeling that with self-driving cars we are not close to 90% yet, not to mention the last 10%. There is a huge difference between "hey, this prototype kind of works" and polished product ready for consumption.

I'm gonna be honest, I have no idea what you are trying to say here.

All kinds of cool inventions never (yet) got past one-offs: jetpacks, flying cars, exoskeletons (done in the 60s). If the price, safety, usability, etc. aren't all perfect, a technology can completely fail. Even electric cars have gotten almost nowhere so far.

He makes the same comments about elder care and robots. This will happen to some degree with or without strong AI. I don't think the author realizes that some tasks, like network optimization, don't require anything close to strong AI.

The thing about self driving cars is the point at which they become generally viable is defined by human social issues.

We could have self-driving cars today. Heck, we could have had them in the 90s. In limited scenarios, and with some small infrastructure investment.

I don't personally view it as a super-hard technical problem. It is a much much harder social problem.

You don't think self-driving is a hard technical problem? Come on man, that's obvious BS.

I think by "hard" he means "might not achieve it". But they actually already exist, so they're automatically possible.

The existing systems could not be defined as self-driving. And the idea that we could have had them in the 90s is ludicrous.

We did have them in the 90s:

You seem to have a personal definition of self-driving. What I was saying was to have self driving cars, today you need to:

a) resolve the social issues around self driving cars.

b) install a small amount of infrastructure to enable self-driving cars/standardize roads.

> But... we will

That is a belief, not a fact.

>But if you took someone from 1870 and had them wake up 70 years later, in 1940, the world would look entirely different.

>Now skip forward from 1940 to 2010: apart from our obsession with little glass rectangles, the world would be fundamentally familiar.

Disagree completely. The technology and information-economy revolution means that a socially well-adjusted blue-collar worker, white-collar worker, or even field hand from 1940 cannot compete at all in 2010 at the same job.

My grandmother was a child in 1940. Despite hours of patient explanation, she has absolutely no comprehension of what I do all day. I sit at a computer, I type some gobbledegook and inexplicably get paid for it. Everything else is a complete mystery. She just doesn't get the fundamental concept of general-purpose computing - that the same physical object can do infinitely many things. To her, the little glass rectangles that I stare at might as well be enchanted amulets.

We occupy the same physical space, but we live in completely different worlds. I can summon a chauffeur-driven car or a hot meal with a few taps on my magic rectangle. A genie in my pocket can answer seemingly impossible questions when I utter the magical incantation "OK Google...". In the eyes of my grandmother, my $200 glass rectangle makes me practically a warlock.

I can imagine a Sumerian sage in 2017 BCE remarking "apart from our obsession with little clay tablets, our society has changed little in a millennium". It might be easy from that perspective to dismiss literacy as a trivial fad; from our perspective, it is easy to take literacy for granted and overlook the transformative impact it has had on our society.

Weak AI is already here and already changing the world, it just suffers from a fundamental perceptual problem - AI that works is just software and software is boring. For decades, playing chess was seen as a grand challenge for AI. After Deep Blue's defeat of Kasparov, chess AI had become merely chess software. Our attitudes flipped almost instantaneously from "of course a computer can't beat a human at something as complex as chess" to "of course a computer can beat a human at something as simple as chess".

Google Search isn't AI, it's just an algorithm. Fully autonomous cars will stop being AI the first time you fall asleep at the wheel and wake up at your destination. Wolfram Alpha isn't AI, Siri isn't AI, content-aware fill isn't AI, Amazon's stock control system isn't AI. IBM Watson is briefly AI when it's winning a gameshow, then quickly reverts to being software.

Lots of crop types haven't been automated partly due to the presence of cheap labor, and the difficulty in automating. Some fruits are very difficult for mechanical pickers to not bruise, and a human touch has been required up to now. Of course if the economics change or human like mechanical picking hands become common, that would change.

It'll be interesting to see if the UK invests in such automation, as apparently migrant workers are less inclined to pick fruit for the UK, post-Brexit:

Right, but let's say between 1920 and 1990? The difference would be less pronounced. I would say there was a discontinuity in the 90s and 2000s. But it is entirely possible that once everything has been "internetized", we will reach a plateau for the next few decades.

Just restricting ourselves to the developed world: being able to call anyone in the world from your home? Not having to worry about food scarcity? Not worrying about being drafted into an army? Political and technological differences drastically changed life between 1920 and 1990. I bet any 70-year epoch that covers the development of the internet or mobile phones will have the same features. Who knows what's next.

Part of the problem is comparing bleeding edge 1940 with average 1870, and average 2010 with bleeding edge 1940.

Most of the world in 1940 wasn't that different from 1870. Industrialization had only touched a few countries. Most people were still farmers, and farming was similar to 1870. There were many innovations, mainly cheap artificial fertilizers thanks to the Haber-Bosch process, but they weren't widespread yet.

Nowadays as well we have pockets of sci-fi level technology (fusion, genetic engineering, space stuff, nanomaterials), but the widespread stuff is not on that level.

Don't compare a car in 1940 to a mobile phone in 2010, because almost nobody had a car in 1940 when you look at the whole world.

Maybe not in the same job, but why does that matter? Are you saying they couldn't learn to drive an Uber or work at Starbucks? An unskilled worker today is a millionaire by 1940s standards, but I don't think the work is any more complicated; in some ways I'd argue it's less so, because being a field hand takes a lot of knowledge about fields.

Factor in cost of living and they don't live the life of millionaires.

They'd never feel hungry. They'd live 20 years longer on average, have access to a diversity of food and entertainment experiences that would make their heads spin, and drive a vehicle that by 1940s standards would be considered a marvel. Even the richest people alive in 1940 didn't have access to any of that.

A millionaire had way more than someone living in rural America does. A large percentage of the US population does not have $400 to spare for an emergency, and again, the cost of living is going up while salaries and jobs aren't.

Just because a field hand's job got automated doesn't mean the world would look fundamentally different.

It would look remarkably different. I don't think people realise that AI isn't going to replace jobs so much as remove the need for those specific jobs.

I.e. it's not like where you used to have company x doing product y, you now have AI. AI is transforming the very idea of what constitutes a company.

Digitalization is winner-takes-most, if not all. And more and more things are becoming digitalized to the point where they make no sense as scarce offerings. Supply will heavily outweigh demand.

Sounds like OP is trying to regurgitate Wait but Why.

Plus black people now get to sit anywhere in the bus.

> Now skip forward from 1940 to 2010: apart from our obsession with little glass rectangles, the world would be fundamentally familiar.

This is similar to summarizing the discovery of space-warping technology by "Except for everyone's pockets now being bottomless Bags of Holding, the world hasn't changed much."

It misses the fact that computers are involved in everything today. When you listen to music, the sound is undistorted thanks to computers. If advertising shows a smiling woman, her face is prettier than life thanks to computers. When you see a plane flying overhead, it is saving fuel thanks to computers. Even mobile telephony requires computers. (Imagine having an operator in every cell tower, manually routing calls.)

It is easy to overlook, but lots of little details of our modern lives would seem utterly impossible to someone from 1940, and I don't think this kind of change is going to stop anytime soon.

When we discovered electricity, some people thought we could revive the dead with it. When we invented steam engines, some people thought that we could make brains out of steam. When we invented computers, some people were quick to say that computers would replicate human thoughts. Though these technologies did not match the wildest dreams, they had massive impacts on humanity.

While you can certainly criticize the hype around AI, you can't deny the advances made by self driving vehicles. That in itself is a major technological leap, and will revolutionize the transport industry.

I think that robots that perform some more complex tasks - such as filling amazon boxes, picking fruits, cooking stuff, and so on - may not be very far away.

General intelligence/consciousness is clearly not there, but there are some elements of the technology that somehow resemble the way the brain works, and that we can find some useful cases for. That's really all that matters.

We do revive the (recently) dead using electricity.

The premise is wrong.

"We’ve been told the artificial intelligence (AI) revolution is right around the corner. But what if it isn’t?"

Google Photos already picks out objects in photos to make photos searchable by keyword, removes obstructions (eg. fences) from photos. Tesla cars already automatically follow the speed limit, automatically brake, automate lane changes, etc.

A few years ago this article might've had a valid point, but not anymore.

And yet we don't have an AI which could both tell us if a photo is a bird and win at checkers (or tic-tac-toe). Our current AI is stuck in widget-land; it might solve some small, interesting tasks, but we don't know how to even approach the harder stuff.

It reminds me of that problem where anytime we do something new in AI, it is quickly defined as not AI. I think that is totally correct because what we're doing isn't actual AI! We're going to enter another AI winter once lay people begin to realize the limitations of the current state-of-the-art. My prediction is that this will happen once progress on the self-driving front stalls.

There is a limit to the current method. The generalised AI you described is likely quite a while away. It's also economically questionable, now that many of the specialised AIs can be done very affordably.

For example, if you want to teach a robot to flip burgers, you could invest a few trillion dollars into generalised AI and wait many years for an uncertain outcome, or you could create training sets for a neural network and be done in a few months for a few million dollars. The reason we know we have made an advancement is that until a few years ago, the problem of training that robot did not seem to be within reach. Today it mightn't be completely easy, but most people would agree that it's quite achievable.
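
As a toy illustration of the "create training sets" route: the whole approach is fit-parameters-to-labelled-examples. This sketch trains a one-layer logistic model on fabricated data (the "sensor features" and labelling rule are made up; a real robot controller is vastly more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated training set: 2 made-up sensor features -> "flip now?" label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # stand-in labelling rule

def predict(X, w, b):
    # Sigmoid probabilities from a linear model.
    return 1 / (1 + np.exp(-(X @ w + b)))

# Train by plain gradient descent on the logistic loss.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = predict(X, w, b)
    w -= lr * X.T @ (p - y) / len(y)  # gradient of mean logistic loss
    b -= lr * np.mean(p - y)

accuracy = np.mean((predict(X, w, b) > 0.5) == y)
print(accuracy)  # high, since this toy data is linearly separable
```

The point is that nothing here "understands" burgers; it is curve-fitting to examples, which is exactly why it is cheap relative to generalised AI.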

> And yet we don't have have an AI which could tell us if a photo is a bird and win at checkers (or tic-tac-toe).

Google can. I did search by image and it identified my picture as a dog. And if you search for "tic-tac-toe" it has a built in game which has difficulty settings up to Impossible (which presumably plays perfectly).
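
For what it's worth, an "Impossible" tic-tac-toe setting needs no learning at all: plain minimax plays perfectly, because the game tree is tiny. A minimal sketch:

```python
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best (score, move) for `player` on `board`, a 9-char string with '.'
    for empty cells. X maximizes (+1 = X wins), O minimizes (-1 = O wins)."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return (1 if board[a] == 'X' else -1), None
    if '.' not in board:
        return 0, None  # draw
    moves = []
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i+1:]
            score, _ = minimax(child, 'O' if player == 'X' else 'X')
            moves.append((score, i))
    # X picks the highest score, O the lowest; ties broken arbitrarily.
    return max(moves) if player == 'X' else min(moves)

print(minimax('.........', 'X')[0])  # 0: perfect play is always a draw
```

This is part of why such demos read as "gimmicks": exhaustive search over a small state space, not anything resembling general intelligence.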

Presumably those are separate systems. I'm talking about multi-task learning.

The point is that we didn't have this in products 5 - 10 years ago. Advancement is being made. It might have a limit, but no one is claiming that we have 100% solved AI; just that advancements have been made and products are not yet taking advantage of all the new possibilities.

Once people come up with a way to have a computer think abstractly it's just a matter of linking together a bunch of different subsystems. Your brain works the same way (try getting your visual cortex to come up with your next tic-tac-toe move).

These things are not "intelligence", though. They are gimmicks that can only be applied to a very limited range of problems.

These advances are advances, but they do not prove that general AI is possible.

Maybe this "not intelligence" stuff is way more useful than strong AI.

And sure, each solution tackles a limited range of problems, but a lot of problems can be solved or made easier by this approach. The results are already above and beyond anything ever achieved by strong AI research.

Edit: Sorry, only now I realize the point of your comment. Yes, there is no indication (in my opinion) that current achievements will lead us to strong AI

> they do not prove that general ai is possible

Is there anyone who seriously believes that AGI isn't possible? I thought the only argument was about the timeline. To me it's absolutely inevitable, even if it may not be in my lifetime. I mean if we really can't do it with computers, then we'll just genetically engineer a giant brain in a vat or something. Still artificial! But unless you have some convincing evidence that human intellect is at the very limit of some speed-of-light-like universal constant, you bet we'll build something better. Eventually.

All of these articles basically make me think of a newspaper in 1915: "What if human flight is a failed dream?" Just wait, buddy. Rome wasn't built in a day, or even a decade.

Doubter here.

a) For AGI: I personally think there is an intellectual limitation in humans that keeps us from getting to this point. Case in point: dogs can't talk. Arguing the 'inevitability' is to some extent arguing that one day dogs will be able to talk, because why shouldn't they? Will we make 'general AI' close enough to fool many people many times, which is essentially a bazillion cogs wired together and rigged to appear general? Probably. But to the level that it meta-tunes itself? No.

b) For your 'engineered brain': if you are mimicking natural intelligence with chemistry, biology, etc., it is a clone. You are still dependent on understanding natural processes which you didn't create, so it is only a clone and not at all 'artificial'.

If you'll notice, the philosophical limitations of 'b' are somewhat the same as 'a': copying processes that already exist is not the same as creating them from scratch.

> arguing that one day dogs will be able to talk, because why shouldn't they?

You misunderstand how evolution works. Humans are the product of over 10 million years of brutal selection pressure in favour of intellect. Before that, we indeed had about the same vocabulary as dogs (you should really say wolves, by the way - dogs are our invention).

If wolves/dogs were subject to the same pressure, then there's no reason why they would not eventually adapt in the same way humans did. However, humans got there first, and have so thoroughly colonised the earth that there is no chance of this happening.

> But to the level that it meta-tunes itself?

Well, you're extending AGI here into some kind of singularity runaway intelligence explosion. That's not within the scope of my argument. I have no opinion on that.

> it is only a clone and not at all 'artificial'

In my view, anything that is not naturally occurring is artificial.

There is good reason to think that ML style techniques won't work.

The problem is that computers have long since surpassed human computational capabilities by orders of magnitude but the increased computational abilities don't make them much more intelligent.

For a computer to identify objects in a picture, with sub-human accuracy, it must be trained on a dataset of billions of photos and get massive amounts of human feedback to tune its algorithms.

The human mind can perform the same task with a sample data set many orders of magnitude smaller.

This would tend to indicate that the mechanism being used by ML is not the same mechanism the brain uses, or is vastly inferior by many orders of magnitude.

The further computational power increases without producing intelligence the less likely it is that raw computation can produce intelligence.

It can be done in principle (proof: our existence). But there is no proof yet that we ("human intellect") are capable (smart enough) to get it done. There might be a hard barrier to our intellectual capabilities we can't see or didn't hit yet.

> There might be a hard barrier to our intellectual capabilities we can't see

I guess? But there's no evidence for that, and you're not even presenting a theory.

They remind me more of cold fusion.

Then you're pretty damn confused. The two are not even remotely comparable.

I think they're much more similar than AI and flight. We've been promised AI for decades and it's always just around the corner. Yet even today we don't have even simple worm level AGI. We have game-players and cool-picture-makers and maybe some categorization and function-approximators which are useful to business tasks but in general it's a bunch of playthings and fluff.

I really think you should examine your way of thinking. I'm sure charlatans have been making and breaking promises about pretty much everything you can think of since the beginning of time. It's meaningless and is no basis for estimation.

I am talking about raw scientific possibility, and a good rule of thumb is: if you see it in nature, then it is assuredly possible, and humans will eventually do it a thousand times better.

Cold fusion: Does not exist in nature. Only exists in (controversial) theory. I'll believe it when I see it.

Hot fusion: Exists in nature so it's possible. After developing the technology, humans will be able to do it better. Will take a while.

Flight: Exists in nature, so it's possible. After developing the technology, humans can do it better. Took a while.

Intelligence: Exists in nature (ie you and me). After developing the technology, humans will be able to do it better. Will take a while.

See the pattern?

> humans will eventually do it a thousand times better.

From an energy conservation perspective nature is perfect. It is impossible for humans to do better than nature when it comes to conserving energy, which is a pretty key problem, so it seems a little optimistic to think that humans can do 1000X better than nature at very many things (or any?).

> From an energy conservation perspective nature is perfect

I don't understand. Energy is always conserved no matter if it's nature or humans - literally the first law of thermodynamics. Nature, humans, aliens, anything else is all "perfect" so I don't understand your point, or its relevance

My point is that in this key area it is impossible to do better than nature since nature is already perfect.

I am suggesting that the assumption that humans are generally able to do 1000X better than nature may be flawed if nature can already do the most important things so well.

I don't mean to be rude but that's not a coherent argument. I don't understand what you're trying to say. Nature is certainly not "perfect" in any interpretation.

I don't doubt that you have well-intentioned beliefs, but before stating them again you need to withdraw and figure out how you can state them in an understandable way. When you've done that, come back and we can talk. I am saying this with the best of intentions btw.

I might accept that difference, with the understanding that if AI is analogous to flight, we're currently at 6th-century Chinese kites and not 10 years from Kitty Hawk (which seems to be the perception when the flight analogy is made).

Then we are in complete agreement. I am only commenting on the physical possibility. It could very well take a thousand years!

This argument has been made at every step in the evolution of AI.

For a long time, AI would "arrive" if it could beat a human in chess. Then it did and suddenly that became a gimmick, a trick of computation.

30 or 20 years ago, picture (if you can) how we would have viewed this technology: someone verbally telling a pocket-sized device to make an appointment and order groceries, asking it for facts or directions, etc. It would have seemed more like magic than feasible AI.

Today it's a "gimmick". The bar for AI always rises to beyond whatever we're currently comfortable with, and the bar for "strong AI" doubly so.

It's probably because at every step, we always think that only a strong AI could solve the puzzle. So, when someone manages to make a not-strong-AI solve it, we say they worked around the implicit rules and created a 'hack'.

Paradoxically this argument is made every time someone refutes the idea that AI is making progress.

> Google Photos already picks out objects in photos to make photos searchable by keyword, removes obstructions (eg. fences) from photos. Tesla cars already automatically follow the speed limit, automatically brake, automate lane changes, etc.

> A few years ago this article might've had a valid point, but not anymore.

Not that I entirely disagree, but we already had the stuff above a few, arguably several, years back.

When it comes to reasoning about AI and automation, and their potential effects on employment, I think journalism is doing a terrible job. There's a borderline-dishonest blurring of the lines between has-happened, is-happening, and may-happen.

..automation is eradicating our jobs. But — unlike in the past — new ones aren’t being created to replace them.

This is very often written about as if it had already happened, or is at least halfway there. As of right now, this is still speculation about a future that may happen, not some known fact about the past or present.

The reason I bring it up here is that this article is specifically about "what if we're wrong about the future."

Predicting is hard. Predicting technology, predicting economy, predicting culture... It's all hard and we need to remember that we're speculating.

If we travel back to the futures predicted 70 years ago (as he suggests), many of them were wrong. Keynes predicted drastically shortened workweeks. The era was named the "Nuclear Age" or "Space Age". There were also predictions about the continuation of the mechanisation trend (a close cousin of automation), which did turn out to be true.

Keynes' workweek never happened, even though the workforce grew as women joined it. The space and nuclear ages kind of happened, but so far nothing earth-shattering has resulted. The continuation of the industrialization-mechanisation trend has resulted in much cheaper durable and consumable goods. Cutlery, soap, and such are very cheap.

> automation is eradicating our jobs. But — unlike in the past — new ones aren’t being created to replace them

He presents no evidence to support this claim, and it is most likely false.

If you remove resources from some sector and people become unemployed, but total resources increased due to improved efficiency, then those displaced people will more than likely figure out how to get some of those resources rather than starve to death.

Your argument is just as speculative, if not more so. Resources are not all equal. The ex-driver or ex-cashier is not going to starve to death, but they're not getting the same salary, if any at all.

My argument is speculative, except that technological progress has always led to increasing generalized prosperity in the past.

The adjustment periods have been on the scale of generations though so you can definitely have localized decreases in well being for large segments of the population due to technological change.

> has always led to increasing generalized prosperity in the past

Because people always had something else to do. But what will happen when machines can do pretty much everything? That never happened in history before.

This is a kind of "end of history" argument that assumes no further advances can happen. I just don't buy it.

We will invent new things to do. Or, god forbid, maybe just spend some time relaxing and enjoying life instead of working ourselves to death.

> that assumes no further advances can happen

...by humans.

> We will invent new things to do. Or, god forbid, maybe just spend some time relaxing and enjoying life instead of working ourselves to death.

With some luck we'll be good pets.

I don't see how they go from 'but it might not happen yet' to 'failed dream'. The fraction of things that humans can do which machines cannot is shrinking monotonically. One of two things will happen - either humans are unbelievably close to some 'maximum possible intelligence', or computers will one day handily beat us at everything. My money's on the second scenario.

> The fraction of things that humans can do which machines cannot is shrinking monotonically.

That's only true until it's not. Our little enclave of civilization that currently exists on this planet has a finite lifespan, just like every other civilization that's ever been run by uplifted killer apes. Either we will achieve a takeoff point within that lifespan, or we will not. If not - and I think that's the more likely outcome - there is no reason to expect another industrial revolution now that the easily accessible fossil fuels are all gone. Our descendants will eke out a living as subsistence farmers until evolution eliminates the overhead of general intelligence or the sun autoclaves the biosphere.

This is a bit like saying that matches beat us at making fire. A more accurate thing to say is that matches allow us to make fire more easily and efficiently.

@idlewords has a very nice talk about this: "Superintelligence, The Idea That Eats Smart People". http://idlewords.com/talks/superintelligence.htm

Also don't forget the filter bubble we're in. This article opens with "We’ve been told the artificial intelligence (AI) revolution is right around the corner", but only a rather specific in-crowd actually believes this. I bet if you'd interview the average world-citizen they'd not be so convinced.

> I bet if you'd interview the average world-citizen they'd not be so convinced.

I bet if you interview the average AI researcher they'd not be convinced. All the progress we've seen can be summed up as doing dumb things quickly. Obviously that's a bit reductive and some of the dumb things are less dumb than they were before but it's hard to deny how much progress depends on our ever increasing ability to do mechanical things faster than ever before. "Dumb things quickly" is also closer to the truth about the current (and forseeable future) state of AI than all the pop-culture bullshit about super intelligence and robots taking over.

> All the progress we've seen can be summed up as doing dumb things quickly.

Multiplying large numbers, playing chess well, and proving mathematical theorems were widely believed to necessarily require intelligence, up until when a machine could do it.

Doing those tasks intuitively requires intelligence - and that has still not been done. Doing these tasks brute-force by exhausting the problem space never required 'intelligence', only computational power.

> doing dumb things quickly

AlphaGo's policy (what moves should I look into) and value (how good is this board position) networks aren't dumb-but-fast. The rollouts are though.

Recent image successes (classifying images) aren't dumb-but-fast.

You can still trick the best image classification techniques with objects that to the human mind are clearly not what the machine says they are. Machines do not reason in the way that humans do. They don't have a concept of a self that exists in space-time. Humans can make inferences about images precisely because they have an a priori understanding of space-time.
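
A minimal sketch of the kind of trick being described, using the fast gradient sign method (FGSM) on a toy linear classifier. The weights and input are made up purely for illustration; real attacks target deep networks, but the mechanism is the same: step the input along the gradient of the loss.

```python
import numpy as np

# Toy linear "classifier": weights are fabricated for this example.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """P(class 1) from a logistic model."""
    return 1 / (1 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.5])  # a point the model confidently calls class 1

# FGSM: perturb the input along the sign of the loss gradient w.r.t. x.
# For logistic loss with true label y = 1, d(loss)/dx = (p - 1) * w.
eps = 0.6
grad = (predict(x) - 1.0) * w
x_adv = x + eps * np.sign(grad)

print(predict(x) > 0.5, predict(x_adv) > 0.5)  # True False
```

A small, structured nudge that a human would shrug off flips the machine's answer, which is the parent's point: the model has no concept of what the object *is*, only of gradients over pixel values.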

You will always find that those who have faith in strong AI also have faith in a reductionist approach to human intellect. That is, they are behaviorists and neuroscientists and not philosophers and poets.

What makes great art is not the notes that make up the melody but expression of the human condition.

Machines do not make art. They make meaningless objects.

This is something I want to believe, but history shows us that scientific progress often comes with the loss of traditional, magical views of the Self and of the universe which are taken for granted: Newton "unweaved the rainbow", Darwin showed us that we are quite close to monkeys, and so on. If AI reaches a point where it can mimic great art, it will be a shock to humanity similar to when we discovered that humans are only a minuscule part of time and space.

I'd argue that the image-classification successes do indeed qualify as dumb-but-fast. Not to say it was not a difficult problem to teach a computer to solve, but if you ask a 5-year-old to pick out all the pictures that contain chairs, they could definitely do it. Vision and identifying things is a problem that even very dumb humans can easily solve, but all of us do it relatively slowly.

Strangely, I feel the opposite: that the in-crowd is aware of the limitations of AI, while the average educated citizen is largely fearful of the incoming implications of highly advanced AI.

AI/machine learning is really expensive for most companies. The ROI just isn't there for a lot of business domains. I don't think we'll ever reach a fully AI-controlled society, for the same reason the world will never run out of oil: supply and demand. As demand rises, so will price, and at a certain point it's cheaper to employ people.

That said, there are swaths of industry that are more easily automated and that will be (and already are being), but there are much larger areas that will still need people doing the 'plumbing' with Excel and phone calls.

My guess is there's going to be a continued trifurcation in the white collar work force that we've been seeing for 20 years.

1) The MBAs: they will continue to run the show for the vast majority of businesses. 10%

2) The tech kings/queens, barons, and knights: they will run a few of the mega-tech businesses. The rest will work for the MBAs and get rewarded nicely. 10%

3) The information plumbers: proficient at writing/reading reports for the MBAs, working in Excel, calling who needs to be called, and moving stuff to the right place when the machine doesn't know how to (e.g. when the tech royalty messes up). 80%

Then there's the blue collar class. Most of the changes there have already begun to happen. The loss of jobs in manufacturing will be echoed in some other industries (e.g. trucking... we'll still need truckers, just not the same % of the population).

The standard solutions people propose for these changes are:

1) UBI: I believe this is a pipe dream that helps the tech royalty sleep at night.

2) Training: Convert blue collar jobs into the white collar plumbing class. A nice idea, and it has merit. The problem is white collar work is office work, and most blue collar employees _HATE_ the office.

3) Infrastructure: hearkening back to the United States CCC, beef up infrastructure projects to replace the loss of jobs in the blue collar sector. This is not a bad idea, but it has funding issues.

Any I missed?

This made my day:

> If AdSense became sentient, it would upload itself into a self-driving car and go drive off a cliff.

"I bet if you'd interview the average world-citizen they'd not be so convinced."

That's as may be, but it doesn't mean it's not going to happen. Granted, it's easy to see the possibility of AI taking over everything when you're working with it, but I don't think ignorance of the field has any bearing on whether it actually happens. It was popular opinion in the 1980s that video phones would "never happen" because the telephone system couldn't handle the amount of information needed for pictures to be transmitted, and that's clearly no longer the case.

I'm merely an interested observer of this stuff, who is slowly learning programming. But the vast majority of people I speak to (and indeed teach) are totally oblivious to the advances that are being made, or their consequences. That doesn't mean some of them won't find that particular jobs are no longer available to them in 10 years' time because of the progress of which they are unaware.

> I bet if you'd interview the average world-citizen they'd not be so convinced.

You don't even need to go that far afield; we're talking about a country that can't even provide clean drinking water to all of its citizens.

Wait, we were talking about a country? Which one?

The USA. Look up what's going on in Flint.

Personally, I find the USA very far afield. (I'm in the Netherlands).

From the link you reference:

> Premise 2: No Quantum Shenanigans

> ...

> the mind arises out of ordinary physics. Some people like Roger Penrose would take issue with this argument, believing that there is extra stuff happening in the brain at a quantum level.

And some other people would take issue with the idea that you can talk about a simple physics that excludes quantum physics.

> But for most of us, this is an easy premise to accept.

I'm out.

It's not excluding quantum physics, it's excluding quantum shenanigans.

You'll certainly need chemistry to model the brain; you'll very likely need solid state physics; everything points towards you needing some very low level molecular dynamics.

What you can't use is high-temperature, long-term coherence, mostly because quantum physics all but forbids it.

But of course you can talk about simple physics without invoking quantum physics. What do you think physicists did before? Most everyday phenomena can be approximated very well by classical physics, and modeling the full wave-function for every particle wouldn't even get you any results in a reasonable time.

Since neurons are relatively large and dense, I expect a classical approximation wouldn't show any observable difference in behavior. Unless a deviation is shown, I don't think it is necessary to postulate things like Roger Penrose's quantum-gravity "explanation" of consciousness.

I read it as "everything can be explained with classical physics alone", the same way we can safely assume the Earth is flat for making a small house. We're not going to complicate math by using the curvature the same way we're not worrying about tunneling, spin, entanglement, etc.

That is exactly why I'm skeptical. Photosynthesis involves quantum entanglement. I'd be surprised if quantum entanglement doesn't play a role in consciousness.

Saying that quantum effects are present and saying that the human brain is a quantum computer (or more?) and will get exponential speedup are two very different things.

We have a pretty good understanding of the individual neuron, and we have modeled it with good accuracy. We used a very simplified model of it to build deep learning. The problem is the absurd number of neurons and interconnections we would have to run to simulate a human brain.

At some point we may find some phenomena for learning that involve some quantum event, but I doubt we will ever have to simulate them at the quantum level.
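The "very simplified model" of the neuron mentioned above can be sketched in a few lines: a weighted sum of inputs pushed through a nonlinearity. The weights and numbers here are purely illustrative, not any particular published model:

```python
import math

def neuron(inputs, weights, bias):
    """A simplified artificial neuron: weighted sum of inputs
    plus a bias, squashed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps z into (0, 1)

# Example: two inputs with illustrative weights
out = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
print(round(out, 3))  # 0.599
```

Deep learning stacks millions of these units in layers; the unit itself is simple, which is exactly why the scale of the interconnections, not the neuron model, is the hard part.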

AI is no more a failed dream than natural intelligence, and I challenge anyone to come up with an argument for why we could evolve from basic elements of the universe to what we are today, but this couldn't happen with "machines". AI is already a reality; it's not just wishful thinking. And people would barely notice the difference, as it has become way more granular.

This is exactly my thinking. Things like faster-than-light travel or whole-object teleportation are wishful thinking. Things like intelligence explosions are not, because one has happened in the last 100,000 years. That intelligence has come to dominate every animal on this planet and the biosphere, even to the point of escaping the planet's gravity well itself. To imagine that we are the pinnacle of the optimization of intelligence (especially when we cannot tweak our bandwidth or input devices much) sounds like hubris.

Your brain weighs about 3 lbs and uses about 20 watts. It's an analog device. It does not have optical interconnects and the switching speeds of the components within it are governed by chemical rather than electrical processes: signals within it propagate at under a few thousandths of a percent of the speed of light. It takes about 3 years to boot up and begin to be sensible and over 12-15 years it achieves roughly human intelligence. Well before those 15 years it surpasses all of our AI in a variety of tasks for which AI is not intelligent enough.

The brain does lots of things, but some of them are quite small and well-defined intelligent tasks, such as judging the meaning of a sentence it is parsing in a language it has learned, or other "AI"-type work. We can easily estimate whether it is doing so correctly. (For example through reading comprehension tests, which we have standardized.) Human brains are able to pass these tests and our best AI fails these tests.

The only way that there is no digital device that can ever model these aspects of this analog device well enough to make the same meaningful calculations (such as deciding what a sentence means in the context of human culture), i.e. the only way the analog device has a monopoly on the calculation and judgment it performs, for not only the next 10, 30, 50, or 100 years, but 1,000 or even 10,000 years, is if this analog device is a keyfob to a magical ethereal plane where our souls and consciousness do all the real intelligent work, only communicating back to our corporeal selves through an antenna which is our brain.

Under that scenario it is certainly possible that AI will be a failed dream forever. After all, rather than 3 lbs of analog device doing work, our ethereal selves could each be the size of a billions of our universes.

Then it would be silly to imagine we could ever accurately model any part of that. I don't think it's a false dichotomy here - I think it's one or the other.

In my personal opinion the latter scenario is unlikely. Anyone who says that nothing digital will ever capture the calculating power of 3 lbs of meat is living on the same side of history as Lord Kelvin when he announced "Heavier-than-air flying machines are impossible".

To the exact and same extent, true AI is impossible.

This guy gets it.

To be honest I didn't read the article, actually, because I gotta run, but right from the outset there is one bullshit bit that always annoys me: there is no "the" singularity!

For those who are in it, there is nothing else; they might not even have a concept of the singularity. For those who aren't in it, there could be lots: it's not called a singularity because there can be only one of them. It just means falling into a well you cannot get out of, such that an outside observer can't distinguish you from the well either. Okay, so I made that up. But whatever the best way to describe it may be, nothing about it inherently prefers fulfilling all our dreams to torturing us forever or something else entirely; that's quite orthogonal to something being a singularity.

In this context the term singularity means a point in time in the future where certain technologies become available and open up such vast possibilities that it's impossible to predict what will happen beyond that point.

Have you seen old sci-fi? If that's the definition of singularity then we've been there for some time.

I'll admit I fall into the camp of not believing the "Johnny Depp Transcendence AI" is right around any corner (neither is a Martian apartment complex, for that matter). But I also believe it's the journey to such AI that's important, even if we never reach this ultimate goal.

We're already reaping the benefits of what we have learned from this struggle. And I think we gain so much insight into ourselves as we try to reverse engineer the most important part of our meat vehicle.

> we must strive to be more than we are

To what end? Is a computer with our collective brains as AI orbiting a sun more or less than what we are? what should we be striving for as an ultimate utopian end to progress? Does/can such an end even conceptually exist?

I think the answer can be venturing out into space. It's a logical next step for mankind.

Why? Truth is we're hoping to find a miracle out there when it appears that it's all just rocks and nuclear fireballs. What do we do after space?

Survive a cosmic event.

We have been dreaming this dream a long time now and are no closer, really. I'm sure you've all done this: one of the very first things I ever made was what was known way back then as an "expert system". It determined which particular disease you had by means of a series of interview questions from "Robodoc". I drew flowcharts and planned it all out with the limited set of diseases, symptoms and remedies available to me, and it was, as you can imagine, nothing more than a horrendous spaghetti of if-then-else (or maybe even switch-case statements), the fall-through answer (when all diagnoses failed) being "take two aspirin and go to bed". Even then I thought "bah, not enough data -- the bane of the computer scientist!" I think I was about eight at the time.

Looking back now I think it was cute. Anything other than flu and rabies and you were in trouble. Is Robodoc closer to Watson than Watson is to HAL? People are conflating AI and machine learning. They think AI is already here. Personally I don't think any one team or project will ever solve AI, as real intelligence is an emergent property.

I think many continue to underestimate human beings. It reflects a curious mix of a lack of self-awareness, a certain capacity for self-aggrandizement and hubris, reductionism of basic human tasks, and confusing under-achievement in others with general human potential.

A lot of the current hype, far from demonstrating a firm grasp of the problem, rests on reductionism and betrays a shallow, child-like perspective of humanity.

AI is going to take a far greater understanding of ourselves and our environment than we currently possess, and like all advancement it will be exciting when and if we achieve it. Self-driving cars will happen, but in far more constrained and controlled environments than our roads today, which puts current capabilities in perspective.

Car manufacturers have been extremely slow to adopt technology and have been stuck in a time warp for nearly 20-30 years. Had they been faster, a lot of the tech and sensors that deliver better situational awareness in self-driving cars today would already have made our roads far safer than they currently are.

I wouldn't compare AI to Mars colonization. AI is coming on strong and we've made tremendous progress; Mars colonization is still in its infancy / theoretical. AI's a constantly moving target of a definition; by all accounts, we've "achieved" AI already if you'd ask someone from 50 years ago. Art, music, conversation, research, ... If he wants to say "what if the Singularity never happens," that's fine and good, but it just seems weird to me to say "what if AI never happens." It's like saying "what if self-driving cars never happen" just because he's not yet driving one.

False starts: in this regard, AI is like VR. VR had its own winter too, after the Virtual Boy and the like. We're in VR's second stand, same as AI. And in both cases, both are making a very strong case, and making lots of money. I'd put my money on both horses now.

Classic AI (as in the AIMA book), it seems, got it right: heuristic-based search, guided by feedback (to improve the heuristic), is what intelligence is in general. Every living organism possesses intelligence of some kind relative to its environment (by the process of trial and error, which is a search, to "learn" an "optimal" heuristic, with selection by the process of evolution). This is how bacteria fight viruses, for example.

The problem is to find a good-enough heuristic, or to "extract" one from the actual (not imaginary) features of the environment and "train" it. This is, roughly, how enzymes have been made.

This second goal is murderously hard, because selecting the right set of features which adequately represent some aspects of reality (as it is, not as we imagine or know it to be) is where humanity is still failing miserably.
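The heuristic-guided search described above can be sketched as greedy best-first search: always expand the frontier node the heuristic currently scores best. The toy graph and "distance to goal" estimates below are made up purely for illustration:

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: repeatedly expand the frontier
    node with the lowest heuristic estimate h(node)."""
    frontier = [(h(start), start, [start])]  # (estimate, node, path so far)
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None  # goal unreachable

# Toy graph and a made-up "distance to goal" heuristic
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
estimate = {"A": 3, "B": 2, "C": 1, "D": 0}
print(best_first_search("A", "D", graph.__getitem__, estimate.__getitem__))
# ['A', 'C', 'D'] -- the heuristic steers the search through C, not B
```

The hard part the comment points at is not the search loop, which is trivial, but where `estimate` comes from: extracting a heuristic that tracks the real environment rather than an imagined one.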

At this point, the velocity is too great to stop AI before it can solve most of the problems listed in this post (which are, from today's vantage point, relatively low-hanging fruit).

I can see an argument that we might not make it to super-intelligence but we'll still solve a bunch of problems on the way. Weird post.

Though there may be real limits to AI, the author's superficial treatment of the matter includes few facts and no understanding of how the techniques and mathematics underlying modern machine learning are substantially different from what researchers were focusing on in the 80s.

>substantially different than what researchers were focusing on in the 80s.


Weren't neural networks and evolutionary algorithms all hip in the mid-eighties?

Simple perceptrons, yes. Feed forward, RNNs, CNNs, SVMs, gradient methods, and the rest? Not so sure about that. I know that genetic algorithms do sometimes get discussed today, but they are a small part of the community discussion, IMO. Not to mention that research in non-linear optimization and attendant numerical methods definitely made some breakthroughs in the 1990s.

I'll go as far as to say that even if we had had the secret software algorithm for intelligence in the 80s, we couldn't have executed it on the necessary hardware. The only examples of learning/intelligence we have are executed on massively parallel systems (brains). Only in the past 5 years or so do we have systems on the necessary scale (GPU/ASIC).

I don't think so - I was involved in AI research from about '89 to '95 and the field was pretty much dominated by symbolic/logical approaches at that time although these were arguably running out of steam (I left the field because I blundered into the web in '92 and founded a startup in '95).

Computers aren't substantially different from 18th century looms. The perception that computers are intelligent is a mere illusion.

Human brains aren't substantially different from mosquito brains. The perception that humans are intelligent is a mere illusion.

AI is amazing. Hype isn't.

Look at something like smartphones. We all pretend that the iPhone invented the segment... but I had a shitty PDA that was a second cousin to the iPhone a decade before, and a functionally equivalent, if less polished, iPaq in the 2003/2004 timeframe. I used Google Maps on an AMPS data plan in 2005.

AI is a tool, and just like the "natural intelligence" that we walk around with in our heads, it's foundational but only transformative when we apply intelligence to solve problems. There's no magic.

The Judgement Day scenario is a strawman. Of course humanity will not be replaced by killer robots overnight.

When we learn how to connect our brains to computers - without our eyes, ears and hands as bottlenecks in between - humanity's views about the world and life itself might change significantly. Strong AI is not even required for that to happen. There will be brain mines in rural Chinese sheds.

I like that some people dare to challenge AI and get upvoted.

There were definitely some achievements in AI in the last decades, but consider how the brain works -- or to be more precise: nobody has a clue how our brain works. We only know that it seems to be very different from a semiconductor, and until we know more about the brain, how should we achieve real AI?

Isn't A.I. already "real"? I use Amazon's and Netflix's recommendation systems every day.

"You just ordered a lawn mower from us. Here are more lawn mowers for you"

"You might like this movie because you have watched other movies with _the_ in the title before"

I guess this comment means you consider making good recommendations a simple problem? Or maybe not real A.I.?

I think he means those problems are real AI, and current solutions stink.

That's one of the problems with discussing AI, and with whom you discuss it: it can cover relatively simple stuff like recommendation engines all the way up to the stuff that Boston Dynamics works on. Both have completely different trajectories in terms of what they are capable of.

Agreed! I have a hard time knowing what language to use. I think I'm going to try to use the term machine learning more. It seems many people use A.I. to refer to a self-aware/Skynet level of intelligence.

Machine learning is the American version of "5th generation computing": https://en.wikipedia.org/wiki/Fifth_generation_computer

Interesting links, though claiming AI does not "solve the problems of today" seems naive. It does.

P.S. Funny that google suggests to search for "what can't we do without AI" instead of "what can’t we do without AI" ;)


Don't worry, uninterrupted exponentially improving AI certainly will not happen.

Exponential growth requires a uniform medium to support it, which -- of course -- it quickly exhausts.

The idea that smart AI will produce smarter AI which will produce even smarter AI seems wildly simplistic, and I'm surprised it has any traction. Why assume it would be a linear progression? A given advance might require a leap that even the smarter AI can't make directly. Or the problem may become exponentially more difficult at a rate outstripping the advances. And those are just really simple objections. In the real world, progress on big things is messy and complex.

A dream can't be failed. A dream is a dream.

Some dreams are nightmares.

It's still a dream, just not a very pleasant one.

> A flatworm can dream. Can't ze?

What was that at the end? was that tumblrspeak? Isn't "it" used for animals and this person forgot because "ze" spent too much time talking with people better left alone? Likely.

In any case, I stopped reading at that point, I guess after the "second shift" comment (as a male living alone) and the false claim of jobs not being created to replace the old ones, those two letters were just too much baseless moralizing.

I imagine the AI of the future will have a good laugh when it stumbles onto this thread. If it can laugh...

TLDR: Don't assume singularity will come along and solve growing "today" problems.

What are people going to do when AI learns to write alarmist click-baity blog posts?

We are an organic based AI. It is possible.

Consider findings such as this one:


It may be a lot harder than we think. In turn, it may take much, much longer than we think. Perhaps we will trash this planet before we reach that point.

But it's hard, and we don't understand much at all about ourselves, and the things we built after thinking we had some gigantic insight are not as great as we hoped them to be. Therefore, doubt is appropriate (as is working on further progress).

We will make human brains more computer-like instead of making computers more human-like. Some day, the last cyborg will replace their remaining flesh with metal. Strong AI the hard way.

Uh, ever heard the statement that premature optimization makes changes harder further down the road?

The problem with the body/mind system is the insane level of interconnection and feedback loops. "We made you 20% smarter! Uh, sorry about the 60% increase in cancer incidence though." The size of the problem space when dealing with human minds is astronomical; we will likely need intelligent systems to solve the problem, which means silicon Strong AI will come before Wetware AI.

If we figure a way to copy wetware AI, the interconnection problem is pretty much solved. Cancer again? Get a new body.

The problem with AI is that the word has become a layman's term.

AI at a glance seems so heavily focused on both technical aspects (i.e. computing power) and on modeling a human's cognitive train of thought.

There are so many neural networks around which outwit/out-strategize humans, but what about the other faculties humans use to solve problems?

People doing something "crazy", based on a "gut instinct" for example.

I have yet to see a serious neural network (company/research) that models "gut feelings", because in the "real world", many people have found success and solved problems/made decisions in every branch and form based on a "gut feeling".

To add a few more to the list, I'd love for someone to give me a link to a neural network emulating the following traits which very much have proven to drive humans to advancement and problem solving:

- intuition

- motivation

- "taste"(opinion)

- empathy: a judge reducing or increasing a sentence based on the intricacies of an isolated situation, and not on pure objectivity (i.e. 1 murder 5 years, 2 murders 10 years, etc.). The law is seldom this absolute, precisely because of human traits, and being able to acknowledge and take this variable into account is a case of empathy more so than math/logical deduction. Nuance by its very definition is not absolute.

- inspiration

- hope


A different perspective:

Imagine you're on a springboard at the pool for the first time.

You don't have any memory of doing a jump on your brain's "hard drive", so the uncertainty and the fear of jumping are valid.

A computer might end up in a crash, because it loops infinitely, waiting until a memory is found containing the info that a jump can be successful, which will never come.

However, you look behind you, and there is social pressure to jump. The fear of being laughed at intervenes in this loop, and you make the jump.

Humans rarely "crash", because we are not as bound to logic as a computer.

This is why a computer excels in terms of reliability with math. Because a computer is absolute, 1 + 1 will always result in 2 on every calculator ever, but as soon as unknown parameters are introduced, it becomes that much more difficult to keep it running.

Our emotional side is as double-edged as can be: without it we would crash when we can't logically solve something, but emotions equally reduce reliability ("human error") when they overrule logic entirely.

Of course one might say, just solve it with an automatic breakout after 10 iterations, but now you have written a (conscious) edge case. Humans improvise.
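That hand-coded breakout can be sketched in a few lines; the jump-decision logic here is entirely made up for illustration. Without the iteration cap, the loop would never terminate for someone with no successful jump in memory:

```python
def decide_jump(recall_success, max_iterations=10):
    """Search memory for evidence that a jump can succeed.
    The iteration cap is a pre-programmed edge case, not
    improvisation: the programmer had to anticipate the deadlock."""
    for attempt in range(max_iterations):
        if recall_success(attempt):
            return "jump"
    # Hand-written breakout: without it, a first-timer loops forever
    return "climb back down"

# A first-timer has no successful jump in memory, so every recall fails
print(decide_jump(lambda attempt: False))  # climb back down
```

The commenter's point stands either way: the cap handles only the edge case the programmer thought of, whereas the human on the springboard resolves the deadlock through an unrelated channel (social pressure).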

I think almost every person has been in or witnessed a situation where they (logically/rationally) concluded "he should not do it / bad odds", then proceeded to see this person make it anyway, resulting in success.


AI has seen many advancements in modeling human capabilities comparable to the functions of the prefrontal cortex, but the hippocampus, amygdala and cerebellum, to name a few, are parts of which I have yet to see a promising computer version, yet they were vital to getting us, as a race, to the point where we are.

As a final anecdote, there was this guy who, through an accident, had his brain split between the front and the rest (the emotional parts). Nurses came in with 2 meals to choose from, but he could not decide.

The nurses found this peculiar: "just pick one", but he just froze, BSOD on them. He said there was no logical reason to pick one or the other.

He's absolutely right, there isn't, yet he'd die if he didn't pick one. Our brain never ends up in an absolute false. When it does, emotions pick up and solve it, perhaps imperfectly, but "life goes on".

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact