
> These things are going to grow up very, very soon! We know that. It's scary.

I'll believe it when I see it.

Your post has just a bit too much futurologist science fantasy wishful thinking in it.

You sound like some of the kids I teach. All of my 17- to 18-year-old students have phones and check them at least every five minutes. I mentioned that it'll be weird when they're wearing some sort of augmented-reality glasses and teachers won't be able to tell whether students are concentrating or reading Reddit.

The kids all said "That'll never happen," as if technology were stuck where it is today. They were astounded when I told them that ten years ago I never saw a phone in a classroom. So from my perspective we went from no phones, to big phones, to little dumb phones, to little connected computers. It's no stretch to think that soon they'll be glasses-mounted or will project some sort of hologram onto the eyeball. Version 2 of Microsoft HoloLens will be cool, v3 will be tiny, and v4 will be mounted inside glasses for sure.

The same goes for AI. A few years ago I couldn't talk to my computer; today I do it all the time. Last week I had to remember passwords; today my computer recognises me and logs me in.

AI is here, it's getting better, and from my point of view the rate of improvement is accelerating.

It's one thing to say "VR will be commonplace in 10 years" or "Self-driving cars will be commonplace in 10 years". That I can believe, because we already have prototypes; I've messed with them and can imagine them advancing.

It is quite another thing to claim that human adult level strong AI is coming "very very soon" and could happen overnight.

The latter is just science-fiction wishful thinking. I have no reason to believe we will ever have truly thinking, sentient computers, let alone "very very soon." Sure, our Siris and Alexas and whatnot will get better and better at responding to our queries the way we want, but that's very different from adult-level human intelligence. Machine learning has limits and will not yield conscious machines anytime soon, if ever.

> I have no reason to believe we will ever have truly thinking, sentient computers

Unless you think human brains work on magical pixie dust, I'd say you have quite a few reasons.

I'm not even convinced that the human brain can be modeled as a Turing machine. How does the brain achieve free will, the ability to "decide" between two arbitrary, equally weighted options? A random number generator? And yet it doesn't "feel" random.

Firstly, free will isn't what you think it is. A random choice might seem "free" but it's not "willed" in any meaningful sense. So let's leave aside the free will question because that term is pretty much undefined at this point.

What your post sort of hints at is what's known as the hard problem of consciousness. I recommend reading the Wikipedia pages on it and on qualia if you're interested.

Suffice it to say, given our current understanding of physics, we are no better than finite state automata (see the Bekenstein bound). The only escape from this conclusion is if we collectively decide that the hard problem of consciousness is irreducible, at which point something like panpsychism becomes preferable.

This is unlikely, though, and we've been through this once before, in the debate over how living matter differs from non-living matter. Must some "secret sauce" be added to non-living matter to bring it to life? That was the proposal of vitalism, but eventually biochemistry came to prominence, those who insisted living matter had to be fundamentally different died off, and we were left once again with a reducible, mechanistic understanding of living matter. So it will be with consciousness (see [1] for an example of how this might work).

[1] http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...

This presumes free will, which is itself contentious. I think it's clear we have the appearance of free will, but it's not clear that consciousness actually makes decisions rather than merely producing an emergent appearance of agency, claiming credit for decisions about actions already in progress. If you're going to make a claim about the feasibility of AI that relies on free will, you'll have to prove free will first.

Of course, I have no way of knowing how soon "very soon" is.

However, although my post may have wishful thinking in it, I think if you investigate the biology of the cells that contribute to human thinking, you will find it hard to conclude that this or some similar system will not be modelled in computers within the next hundred years. You can look at the processing involved yourself. Just remember that the whole is the sum of the parts, and the parts are doing slow, biological things; it's not silicon etchings switching at practically the speed of light. Neural signals propagate at something like 100 meters per second at the fastest[1], while light propagates at 299,792,458 meters per second, and if you look at our 4 GHz CPUs you will see that in one clock cycle light travels only about 7.5 cm.[2] Light goes fast enough to make many thousands of trips across a server room in the time it takes a neuron to fire. While the brain is staggeringly complex, the fact is, our server rooms are doing more raw computation.
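The arithmetic here can be sanity-checked in a few lines. The figures are the rough ones quoted (a ~100 m/s upper bound on neural conduction, a 4 GHz clock, a ~10 cm brain), not precise measurements:

```python
# Back-of-envelope check of the speeds quoted above. All figures are
# rough assumptions from the discussion, not measured values.

C = 299_792_458        # speed of light in vacuum, m/s
NEURON_SPEED = 100.0   # fast myelinated axon, m/s (rough upper bound)
CPU_HZ = 4e9           # a 4 GHz clock

# Distance light covers in one clock cycle (~7.5 cm)
light_per_cycle_cm = C / CPU_HZ * 100

# Time for the fastest neural signal to cross a ~10 cm brain (~1 ms)
brain_crossing_s = 0.10 / NEURON_SPEED

print(f"light per 4 GHz cycle: {light_per_cycle_cm:.1f} cm")
print(f"neural signal across 10 cm: {brain_crossing_s * 1000:.1f} ms")
```

So one clock tick of a modern CPU corresponds to light covering about 7.5 cm, while the fastest neural signal needs about a millisecond just to cross the brain.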

This is like heavier-than-air flight. Let's say the fuels and engines are there - but we don't know how to assemble them.

When we see neural nets produce results similar to what young children produce, it is amazing. I cannot overemphasize that at any moment we could have a breakthrough. We have so much more computational power than required: three pounds! Twenty watts! 86 billion neurons with about 7,000 connections each! Firing at speeds of around 300 hertz.
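Taking those figures at face value, the brain's raw synaptic-event budget is easy to compute (a sketch using the numbers quoted here; equating one synaptic event with one arithmetic operation is itself a big assumption):

```python
# Rough upper bound on the brain's "operations per second", using the
# figures quoted above. All three numbers are assumptions, and 300 Hz
# is a peak firing rate; average rates are far lower.

neurons = 86e9       # ~86 billion neurons
synapses = 7_000     # ~7,000 connections per neuron
rate_hz = 300        # assumed peak firing rate

events_per_sec = neurons * synapses * rate_hz
print(f"{events_per_sec:.2e} synaptic events/s")   # ~1.81e+17
```

Roughly 2 x 10^17 events per second; even if each event costs several floating-point operations, that is within reach of a large cluster of modern accelerators, which is the core of the "more than enough horsepower" claim.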

The training time for a human brain to reach intelligence even close to an adult's is, let's say, 12 to 14 years, even in the most precocious.

We're looking at the output of neural nets that receive perhaps months of training.

This stuff is happening right now. It's absolutely astounding, and there is no limit to what could be unlocked overnight. Any server farm at Amazon or Google has the hardware today to go through 10 years of human-brain training in about 10 months, and then end up with a network that thinks at ten times the speed of human thought. It's there, today. Three pounds. Twenty watts. Three pounds. Twenty watts. Three pounds. Twenty watts.

It's not miraculous. It's happening.

[1] http://biology.stackexchange.com/questions/21790/how-long-do...

[2] https://www.google.com/search?q=c+%2F+4+hgz

> This is like heavier-than-air flight.

Yes, this is like heavier-than-air flight. It would be as if we didn't really understand how flight worked, so we just imitated birds by gluing feathers to cardboard wings, flapping them, and hoping flight would sort of happen.

In a similar way, we don't really understand how the human brain works, so we are just imitating neurons and hoping sentience will just sort of happen.

How long did it take to go from really silly and obviously impractical flying machine designs to working, practical ones?

Thousands of years? Who knows how long people had been attempting flight before the Wright brothers?

At least 500 years of relatively serious attempts, if we start with Leonardo.

ythn, you should start the clock when we had the combustion engine, since it was a source of huge amounts of power (thanks to the energy density of the fuel) that was already being effectively turned into motion.

What I'm saying is that the server farms at Google, Amazon, and elsewhere are at the stage of the "combustion engine": not just powerful enough, but more powerful than necessary. I base this on a count of the number of neurons in the human brain, their rough topology and connectedness, how quickly and often they fire, and so on.

In that sense, I think you will find it almost impossible to conclude that our server farms today lack the power to simulate 86 billion neurons with 7,000 connections each, firing at a lowly few hundred hertz. Nor is the brain more interconnected than we can simulate: light can cross a server-room link thousands of times over before a neuron completes a single firing cycle.
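The "thousands of times" figure checks out with the same kind of arithmetic, under generous assumptions (a 30 m room, pure light-speed propagation with no switching latency, and a 300 Hz firing rate; real datacenter round trips through switches would be far fewer):

```python
# How many light-speed round trips across a server room fit inside one
# neuron firing interval? Assumes a 30 m room and a 300 Hz firing rate;
# both are assumptions, and real network RTTs include switching delay.

C = 299_792_458          # speed of light, m/s
ROOM_M = 30.0            # assumed room length
FIRING_HZ = 300          # assumed peak neuron firing rate

spike_interval_s = 1 / FIRING_HZ      # ~3.3 ms between spikes
round_trip_s = 2 * ROOM_M / C         # ~0.2 microseconds per round trip

round_trips = spike_interval_s / round_trip_s
print(f"~{round_trips:,.0f} round trips per spike interval")  # ~16,655
```

Even this idealized figure is tens of thousands, not millions, so "thousands of round trips" is the defensible version of the claim.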

So perhaps "we are just imitating birds by gluing feathers to cardboard wings, flapping them, and hoping flight just sort of happens" -- but while having an engine attached that's at least an order of magnitude more powerful than is required to achieve flight.

In that context, the story we're replying to is like a video of someone achieving a 2-second flight, with a lot of feathers flying everywhere. It might not seem like much, but given what we know an 86-billion-neuron neural net is capable of, it is exciting.

The results of AI based on neural nets that I see being posted every single day are absolutely astounding.

This is happening right in front of your eyes. You're witnessing the birth of flight.

No, the engines won't fly the way birds do, but they're more than powerful enough to start flying, and you are seeing this every single day.

By the way, you know there's no particular push toward making neural nets that are self-conscious or feel pain, right? That just isn't a primary goal of researchers at this very second.

Just as flight engineers don't really try to make ornithopters. We have better and more powerful things to be doing.

This is an absolutely miraculous time. And you're not seeing it, which is sad.

This isn't some kind of pipe dream. These are results coming out every single day.

Here's a neural net designed to make horror art out of normal photos:


And that's just one thing out of many. Nuance's Dragon systems have practically eliminated the entire profession of medical transcriptionist. (Just google 'medical transcription dead', without quotes, to find people from that industry reporting on its demise.) Becoming a medical transcriptionist today is like going to typewriter-repair school, because speech-recognition techniques based on machine learning have gotten that good.

Go, an incredibly nuanced game with a staggering possibility space, has fallen to machine learning competing with a lifetime of competitive human mastery.

It doesn't matter in what order or how the next breakthroughs toward sentient, or at least very intelligent, interaction arrive. We know what the limits are. And we know, based on the topology and computation involved, that we have massively more than enough horsepower.

So while you might point to flying feathers and a crashing airplane and deride it, I think about the jet engine behind those flying feathers, and my heart skips a beat when I see it sustain flight for 2.3 seconds before producing "A hundred and half hour ago", like a human toddler babbling incoherently without even understanding the words.

Because I know what else AI has been accomplishing, and I know the horsepower behind it. You need to expand your thinking and realize that our algorithms and machine-learning techniques are playing catch-up with hardware that has been sitting around being dumb.

That's right: computers have just been sittin' around, bein' dumb, while having all the computational power necessary to surpass humans in every realm of neural computation. Mark my words, ythn: no pipe dream involved.

I don't doubt we will be able to do great things with machine learning in the coming years.

What I do doubt is that machine learning will become generic anytime soon. I predict machine learning will always need some degree of specialization; we aren't reaching general intelligence/learning within our lifetimes. A machine that is awesome at playing Go will suck at translating languages.

I also predict machine learning will never be able to surpass humans in terms of creative ability. A top notch machine-written book/poem will always be inferior to a top notch human-written book/poem, for example. Humans can invent new things, machines seem only capable of rehashing existing things. For example, at some point a human writer invented the concept of an unreliable narrator. If you "teach" a machine how to write by feeding it thousands of books, but you exclude books that have unreliable narrators, will the machine ever write a book whose narrator is unreliable? I think not.

I'll happily admit you were right all along if AGI does come about within even the next 20 years, but I think you are grossly oversimplifying things in order to embrace the sci-fi fantasy you wish were real.

> we aren't reaching general intelligence/learning within our lifetimes

Almost certainly false.

> I also predict machine learning will never be able to surpass humans in terms of creative ability

Algorithms are already churning out papers that are accepted to journals, and they can compose crude music. This is a mere 10-15 years after serious study of the problem began. I give it maybe 20 years before a computer-generated song appears on one of the top charts. These will likely still be domain-specific algorithms.

> Humans can invent new things, machines seem only capable of rehashing existing things

So you think human brains run on magical pixie dust? "Things" that humans invent can all be described by finite bit strings, which means generating "new things" is a fiction.

We discover these compositions just like a computer would. The secret sauce that we have, but don't yet know how to describe algorithmically, is discerning which bit strings have more value to us than others, the way a clever turn of phrase is valued over a dry, factual delivery.

> If you "teach" a machine how to write by feeding it thousands of books, but you exclude books that have unreliable narrators, will the machine ever write a book whose narrator is unreliable? I think not.

I don't see why not, even if we stick to domain-specific novel generation, but it depends on how you train the system based on the inputs. Random evolution is hardly a new concept in this domain.

I'm curious whether this is in spite of your agreeing that server farms do more than enough computation to reach parity with the human brain, or whether you don't consider artificial neural nets in relation to human neural nets (biological brains) at all.

If you do grant that the horsepower seems to be there computationally, where does your skepticism that anybody will figure it out come from?

Alternatively, did you happen to completely ignore the argument about how much computation the human brain does? (Which isn't that much compared with server farms.) I mean at the neural level, using the same neural-network topology or an approximation of it: actual neural networks.

I guess I'm perplexed at your skepticism.

I'm skeptical because you are promising the moon, and when I look and weigh the tech for myself it seems many orders of magnitude less advanced than your hype leads one to believe.

I am basing the promise bottom-up: on how many neurons are in the human brain, their connectedness and speed, and the amount of computation those three pounds can do using the synaptic mechanisms we know about.

You are basing your skepticism top-down based on the results the science of artificial intelligence has shown to date.

It's a fair source of skepticism. There are 15,000+ species of mammals alone, all of which have neural nets, and exactly one of which has higher abstract reasoning communicated in a language with strong innate grammatical rules: humans.

However, we have 7 billion working specimens, a huge digital corpus of their cultural output, and their complete digitized source code which can be explored or modified biologically.

For me, bottom-up wins. We can just try things until something works, which may happen suddenly, even overnight.

At the moment I see a jet engine, feathers flying everywhere, and no flight. But looking at that jet engine, I just can't imagine it will take long.
