
Which is dead wrong, because not only is there no usable definition of "improve itself", but there isn't even any understanding of the kinds of skills required to create a usable definition.

It's the difference between a computer that is taught how to compose okay-ish music, and a computer that learns spontaneously how to compose really really great music and do all of the social, cultural, and financial things required to create a career for itself as a notable composer and then does something entirely new and surprising given that starting point.

They're completely different problem classes, operating on completely different levels of sophistication and insight.

A lot of "real" AI problems are cultural, social, psychological, and semantic, and are going to need entirely new forms of meta-computation.

You're not going to get there with any current form of ML, no matter how fast it runs, because no current form of ML can represent the problems that need to be solved to operate in those domains - never mind spontaneously generate effective solutions for those problems.




> Which is dead wrong, because not only is there no usable definition of "improve itself", but there isn't even any understanding of the kinds of skills required to create a usable definition.

I disagree. A program improves itself when it reacts to a problem and implements a solution. Obviously that is very general, but it's enough. A human of IQ 100 certainly can develop software; a program of IQ 100 should be able to do the same, and then you scale horizontally.


Have you taken an IQ test? They only test a few classes of problem. Performance on these is deemed a workable proxy for intelligence in a human, but for something that approaches the problems very differently, it may not be at all indicative of general intelligence. I think we have probably already reached, or are near, the point where we could train systems to achieve human-level performance on each of those basic tasks. We do not, however, seem to be anywhere near a humanlike AGI.


I don't mean to be rude, but I'm impressed that you seem to have completely skated over the content of my comment.

Please read it again. You need to understand what a "problem" and a "solution" both are - in detail - because otherwise you have nothing to work with.

And a human of IQ 100 will only ever develop poor software. If you scale horizontally, you won't get game-changers - you'll just get a flood of equally poor software more quickly.


Many here have an IQ well over 100 and have no idea how to rebuild an improved version of themselves, or even how their own consciousness works internally. It seems rather likely that any sapient AI we cook up will have just as little idea of how it works as we do of how we work. QED, no singularity.


A human with IQ 100 can develop software. Can they develop it well enough to improve AI software? Or can they just adequately develop general software?

Can a human with IQ 100 write general AI software that a human with IQ 100 can debug?



