
I hope he'll go the hybrid symbolic and neural network way (causal and statistical), instead of just statistical.

AGI needs a type system...

I hope I'll achieve AGI before him, but it's nice to know there's some real competition! (Because, reader, there are almost zero researchers seriously trying to achieve AGI in a way that isn't total bullshit. Only OpenCog and Cyc come to mind.)




Are you really that sure that the approaches to increasing the generality of AI being taken by LeCun (self-supervised model learning), Hinton (capsule networks) and Bengio (state representation learning) are all "total bullshit"?


From my reading, Hinton's capsule networks seem far from being enough; at best they're an incremental improvement. And they're unrelated to English semantic parsing; they seem specialized for computer vision.


English semantic parsing is a small part of AGI. And a system that can only do that, or only for one language, is never going to be general.


You know who just might be full of total BS? Ben Goertzel.


From what I've actually seen, that is completely inaccurate and unfair. The only thing he did to "deserve that", as far as I know, is to start seriously pursuing and talking about AGI before it was cool again.

For example, OpenCog is an implementation of the classic cognitive architecture, and it's about as traditional and far from "total BS" as you can get in AGI.

I have never heard anything to back up the insults against Goertzel.


> AGI needs a type system.

My brain bit on that remark; would you care to elaborate?


You forgot "he who shall not be named"...

/ok, maybe his project falls under "total bullshit"...


Who are you referring to? SOAR?

Contrary to popular belief, both OpenAI and DeepMind have no roadmap or specification for a cognitive architecture, not even a semantic parser.



Who are you referring to? I feel out of the loop here.


Those of us who've been on the net long enough, and have at least dabbled in AI/ML circles, know of him.

He claims to have invented a program that is a mind, originally written in Forth, translated by others to many other languages, etc. He has published the code of this program in a form of "open source", so you can easily find it if you dig enough.

He's widely considered to be a crank. That said, the line between genius and madness can be mighty thin, and what side he lands on is anyone's guess, but most put him well over the line into madness, for whatever it's worth.

My own opinion?

Well - looking at his work purely through the lens of modern ML/AI understanding and research - that is, deep learning and such - it would be considered pointless, probably contributing even less to the field than Eliza.

But as someone who has read a lot of other work (for and against) on the idea of AGI, artificial consciousness, theory of mind, etc. - his work at a certain level has echoes of some of that work. Still probably a dead end, but at the same time, there are some interesting concepts within his code and theories (he's self-published a book on it, too - you can find it on Amazon - he also has it for free on his GitHub and it can be found elsewhere).

I guess I still put him in crank territory, but not in the abusive crank arena, more in the "doing his own thing, but being a bit evangelical about it" - relatively harmless.

His work is not as amazing as TempleOS, imho, but there's a similar mind behind it (though comparing it with that operating system is maybe an unfair, possibly orthogonal, comparison).

I won't say or reveal more (but I've written enough for you to figure it out) - he tends to monitor tons of forums and if he thinks he's being "summoned", he'll spam the forum with his writings and theories. It got him "perma-banned" from more than one newsgroup back in the day...


Schmidhuber


Do you have a go-to resource to watch/read for someone new and kind of interested in the field?


The OpenCog website is a great resource. Going directly to the specification is a bit intimidating, but here it is: https://wiki.opencog.org/w/CogPrime_Overview

You might just begin by learning the list of NLP tasks and how good the state of the art is at each of them. The cognitive architecture that needs to be created to achieve AGI will, one way or another, be a composition of those tasks, which are the primitives.

You can discover such a taxonomy here: https://github.com/sebastianruder/NLP-progress/blob/master/R...

Also, you might be interested in learning logic, since a big task is translating natural language into queryable, logical forms. A toy sketch of what that means is below.
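
To make "queryable logical form" concrete, here is a deliberately tiny Python sketch of my own (regex-level, nothing like a real semantic parser) that maps a few categorical sentence shapes to first-order formulas:

    import re

    # Toy templates only; real semantic parsing has to cope with far more than this.
    PATTERNS = [
        (r'^all (\w+) are (\w+)$',      'forall x. {0}(x) -> {1}(x)'),
        (r'^no (\w+) are (\w+)$',       'forall x. {0}(x) -> ~{1}(x)'),
        (r'^some (\w+) are not (\w+)$', 'exists x. {0}(x) & ~{1}(x)'),
        (r'^some (\w+) are (\w+)$',     'exists x. {0}(x) & {1}(x)'),
    ]

    def to_logical_form(sentence):
        """Map a simple English sentence to a first-order formula, or None."""
        s = sentence.strip().lower().rstrip('.')
        for pattern, template in PATTERNS:
            m = re.match(pattern, s)
            if m:
                return template.format(*m.groups())
        return None

    print(to_logical_form("All philosophers are mortal."))  # forall x. philosophers(x) -> mortal(x)
    print(to_logical_form("Some swans are not white."))     # exists x. swans(x) & ~white(x)

Once sentences are in a form like that, they can be handed to a theorem prover or a query engine, which is why logic matters so much here.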


reddit.com/r/agi sometimes has interesting stuff, although it's often pie-in-the-sky articles with no actual implementation.


Where and how are you working on AGI? Are you at OpenCog or Cyc?


I'm not a big player in the field. I specialize in semantic parsing and argument checking. I'm the first, to my knowledge, to have made a syllogism (and more) checker for English. Also, I have helped researchers beat the state of the art on constituency and dependency parsing (though simply by sharing knowledge of the state of the art with other researchers).

I do this in my free time, so I'm not very productive, but I have designed an intermediate representation (IR) for natural languages that seems very promising.


> I'm the first to my knowledge to have made a syllogism (and more) checker for English.

Either what you actually mean by "syllogism checker" is extremely specific and impractical, or this is 100% BS.


My program checks the validity of the 256 possible forms of syllogisms with 0% false positives. This is not bullshit, and not that complicated.
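
To give an idea of how uncomplicated it is, here is a minimal brute-force sketch in Python (an illustration of the general idea only, not my actual program): a categorical syllogism is a mood (three letters from A/E/I/O) plus a figure (1-4), i.e. 256 forms, and validity can be decided by enumerating every possible pattern of inhabited Venn regions over the three terms S, M, P.

    from itertools import product

    # Each Venn region is a triple (in_S, in_M, in_P); 8 regions in total.
    REGIONS = list(product([False, True], repeat=3))

    def holds(form, xs, ys):
        """Truth of a categorical statement over sets of inhabited regions."""
        if form == 'A': return xs <= ys        # All X are Y
        if form == 'E': return not (xs & ys)   # No X are Y
        if form == 'I': return bool(xs & ys)   # Some X are Y
        if form == 'O': return bool(xs - ys)   # Some X are not Y

    # Figure -> (major premise terms, minor premise terms); the conclusion is S-P.
    FIGURES = {1: (('M', 'P'), ('S', 'M')), 2: (('P', 'M'), ('S', 'M')),
               3: (('M', 'P'), ('M', 'S')), 4: (('P', 'M'), ('M', 'S'))}

    def valid(mood, figure):
        """Valid iff no model satisfies both premises while falsifying the conclusion."""
        major, minor, concl = mood
        (a1, b1), (a2, b2) = FIGURES[figure]
        for inhabited in product([False, True], repeat=len(REGIONS)):
            model = {r for r, inh in zip(REGIONS, inhabited) if inh}
            t = {'S': {r for r in model if r[0]},
                 'M': {r for r in model if r[1]},
                 'P': {r for r in model if r[2]}}
            if (holds(major, t[a1], t[b1]) and holds(minor, t[a2], t[b2])
                    and not holds(concl, t['S'], t['P'])):
                return False
        return True

    # Check all 256 forms; the modern reading (no existential import) yields 15 valid ones.
    valid_forms = [(m + n + c, f) for m, n, c in product('AEIO', repeat=3)
                   for f in FIGURES if valid(m + n + c, f)]
    print(len(valid_forms))  # 15

Handling existential import, enthymemes and actual English sentences is where the real work is.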


Honestly, that doesn't sound very promising, but I would still like to see the IR and related material if it's online.



