Hacker News | RushAndAPush's comments

It's amazing how many people on the internet are so confident in their own intellectual abilities that they can read something they don't understand and dismiss it as unimportant.


That's funny; I read his comment as "I don't understand this, it sounds lame, can someone please explain its import for me, since I trust this source wouldn't be pushing lame crap."

That is, he was crediting the researchers out of respect.


A lot of science news is sensationalist nonsense spun out of university press departments, and the most overhyped stuff also tends to get the most coverage. I think it's actually fair to look at something you may or may not fully understand and, if your gut feeling says "this doesn't seem important", discount its importance significantly. In this case, OP was even explicitly asking whether someone more knowledgeable agreed with their assessment.


I think a lot of computer scientists have a hard time believing quantum computers are possible. Not saying I agree, but I've heard this viewpoint a number of times.


Like who?


HN, slashdot, reddit, etc. etc :)


Fitting username.


Bioethicists like you are literally the worst people on earth.


It's going to be alright. I didn't get a job at Google either.


But there were positive notes in your interviews! They'll be sure to call again next summer!... They say 3rd time's the charm.


So what you're saying is that the mice that didn't have their genome edited didn't see a drastic life extension? I assume the improvements occurred only in the mice whose genomes were edited to reduce Yamanaka factors, because the Yamanaka factors were then artificially replaced?


No. That's not quite what I'm saying. There was no particularly interesting effect of inducing transient expression of the Yamanaka factors on normal mice reported in this paper. It is effectively a negative result that's being reported, as it pertains to wild-type mice.

The paper primarily reports results that pertain to mice with a genetic defect that causes progeria. Mice with this defect have a shorter-than-normal lifespan.


So it's more a treatment for progeria than a general aging reversal.


DeepMind isn't completely separate from Google. They have a team that helps apply their research to Google products. I find it highly unlikely the team would be dissolved, considering how successful and prestigious they are becoming.


I've read every comment in this thread, and it's filled mostly with people's self-congratulatory intellectual views. Nobody, not even Robin Hanson himself, has given a good, detailed argument as to why the current progress in machine learning will stop.


I doubt you'll get that, because nobody thinks that progress in machine learning will stop.

An AI winter doesn't mean that progress stops. It means that businesses and the general public become disillusioned by AI's or ML's failure to live up to the popular hype, and stop throwing so much money at it. The hype then dies down. Research continues, though, until enough progress is made that machine learning starts to produce results that excite the public again, and the cycle goes into another hype phase.


> I doubt you'll get that, because nobody thinks that progress in machine learning will stop.

Robin Hanson is notoriously skeptical that deep learning can make real gains. For some reason he thinks brain emulation is more likely to produce large progress in AI.

>An AI winter doesn't mean that progress stops.

It doesn't stop completely, but progress slows to a snail's pace.

> The hype then dies down. Research continues, though, until enough progress is made that machine learning starts to produce results that excite the public again, and the cycle goes into another hype phase.

I think we as a community may need to take a good long look at the hype cycle theory and be skeptical it has any merit.


I don't know that I'm positing any sort of formal theory, just describing a pattern that's been going on for decades now. I'd suggest that unless something fundamental changes, there's a reasonable chance that it will keep on going that way.

Progress in machine learning has never been at a snail's pace. What has happened at times is that the futurists of the world stop making breathless pronouncements about machine learning, and that creates a public perception that things are going slowly.

Case in point: Deep learning really isn't anything revolutionary or new. I first started noticing papers about stuff that now falls under the "deep learning" catchphrase about 15 years ago. Not because it started then, but because that's when I started reading that sort of thing. During that time, there's been relatively constant, steady progress being made. But you wouldn't know it unless you had been following the literature, which not many people do. And all of this happened at a time when everyone was convinced that neural nets were dead and support vector machines were the way of the future.

As for Robin Hanson's skepticism about deep learning getting us to true artificial intelligence: meh. A technology doesn't need to make Ray Kurzweil's eyes roll back in his head to be useful.


Brain emulation? I didn't realize people seriously thought that could be good for anything other than research and investigation.

If we can successfully emulate the brain, it seems we would have necessarily acquired the knowledge needed to build models that are very powerful without having to exactly mimic the brain.


See Hanson's http://ageofem.com/


I keep hearing great things about that book. Thanks for the link, maybe it is time to pick up a copy.


Sorry but I don't think "AI winter" as a term is appropriate here.

The "AI winter" as I know it was around 1988. I've still vintage issues of "AI Expert" and other publications of the time (which I kept for their great linocut-style artwork, and because they were expensive items here around).

Back then it wasn't so much about ML (aka "Deep Learning") but anything Prolog, expert systems, artificial neural nets and their generalizations, and Lisp.


Why, if we are presumably rational agents, is there so much hype around certain technologies, even though past history should have taught us otherwise? Is it all for the VC money? Is it the media just needing to sell stories? Usually there are smart, knowledgeable people involved in the hyping. They should know better.


There is no such thing as a completely rational person; everyone goes through bouts of rationality and irrationality, and some are just more often rational than others. That said, nobody is immune from irrational emotion sometimes. For example, have you ever been angry about something that wasn't worth it?


Hell yeah I have been. And then realized it was stupid later.


Participating in hype may be very rational.

Putting 'AI' on a startup's prospectus will do it no harm. It may help sway lazy investors, or at least make the company appear more cutting edge.

Same goes for the investors: they want to be seen investing in cutting-edge, buzzword-compliant startups.

It's just human nature; following the herd is always going to be the safest option.


I think the core argument against it is the same as it was in 1990 (Dreyfus, Heidegger): AI can solve problems in micro-worlds (chess then, Go now; blocks worlds then, image annotation now), but it's not clear these micro-worlds can be fused together unless they are embodied in an entity that lives in the world. The current deep learning and robotics work is exciting and promising, but it's still a long way from an embodied general intelligence. So progress might hit the same wall: we can build better subsystems, but not the whole system.


What if I was that entity?


Fine - let's go for a walk together, build a fire, cook and chat, and see what we have in common. It would be interesting. This is why TV and film like Humans, Blade Runner, I, Robot, and Arrival, and books like Ancillary Justice, are fun: how much would we understand each other? How would it be different from interacting with a human?


I am a human. I think it's ridiculous to consider myself a special entity, so I won't. But I think we've reached the point where humans became machines, and machines became humans. The first to achieve superintelligence would guide the world. I think humans won the debate against AIs on Twitter: a human effectively debated Big Brother and won.


It's because that's just how things always work.

Have you ever played one of those strategy games with a tech tree? Research A and it lets you research B, C, and D; research C and D and it lets you research E; etc?

That's based on the way discoveries in the real world build on each other. And in the real world, the research tree seems to be "lumpy".

Think "agricultural revolution", "industrial revolution", etc. Something new comes available, and everyone rushes to pick off all the new low-hanging fruit. Eventually the easiest gains are all taken, and people lose interest and move to other things. And as people keep picking away more slowly at the more difficult/involved things, eventually someone will find something that -- probably combined with some completely different existing knowledge -- opens up another new field. And it repeats.

Right now we're in the "low-hanging fruit" phase of (1) computers that are powerful enough to run neural networks, combined with (2) feedback algorithms that allow networks with lots of layers to learn effectively. Sooner or later the gains will get a bit tougher as we understand the field better, and then research will slow even further as many researchers find something else new and shiny -- and with better returns -- to focus on.


> Right now we're in the "low-hanging fruit" phase of (1) computers that are powerful enough to run neural networks, combined with (2) feedback algorithms that allow networks with lots of layers to learn effectively.

I'm not so sure about that. Places like DeepMind aren't satisfied with AI that simply does straightforward pattern-matching problems (though that's very important). They are moving into more complex areas like transfer learning, reinforcement learning, and unsupervised learning for more complex, real-world problem solving. They also seem to be making good progress on this.


You might enjoy a book called "The Structure of Scientific Revolutions" by Thomas S. Kuhn, a historian of science who basically argued this. That book was the origin of the phrase "paradigm shift".


The progress in AI didn't stop during the past AI winters, it just slowed as funding dried up when people realized the AI at the time couldn't possibly live up to the hype.


To say that Google and Microsoft only make operating systems and sell ads is a bit misleading. Both companies are contributing significantly to the development of artificial intelligence and quantum computers. These technologies are fundamental and pivotal to the success of humanity.


Why would this bring up an anti-trust case? It doesn't make any sense.


Google has a majority share of the EU smartphone market. They are using that majority share to extend and maintain their dominance in search by forcing OEMs to license their applications.

The EU regulators view the OS and Google apps as two separate spheres.

