
“Implementing this in a machine will enable artificial general intelligence.” If that’s true, why didn’t he just implement it? Why should I take anyone seriously who just talks about code instead of actually coding it? This would be much more compelling if he just showed benchmarks and performance instead of writing up an argument. Furthermore, I don’t believe him.



It's the same hyperbolic nonsense we've seen from hundreds of other confident "researchers" over the past 50 years. Eliasmith has a book called "How to Build a Brain"; Hawkins built an entire company, Numenta, around a theory that hasn't produced anything remotely useful or interesting in almost two decades, and it has since pivoted to creating tools for the current ML zeitgeist.

This unknown researcher is exactly the same: write books and papers for years while creating literally nothing of actual value or usefulness in the real world. But what else would you do in his situation? It's publish or perish in academia. Publish the 1,000th iteration of some subset of LLM architecture? Or make grandiose claims about "implementing human thought" in the hopes that some people will be impressed?


You really don't need to be so cynical; some things just take time. Light bulbs were patented only after 40 years of work by multiple researchers, and it took another half century of work to achieve decent efficiency. Neural networks themselves were in development for 50 years before taking off recently, and for most of that time people working in the field were considered nuts by their peers.

But if you have concrete criticism of the idea, feel free to articulate it.


> Light bulbs were patented only after 40 years of work by multiple researchers

Not actually true. The early ones were patented, but they used filaments made of materials like platinum, and glass-blowing and evacuation were also costly at the time.

Edison's genius was for innovation rather than invention: making things manufacturable at scale, reducing costs, and setting up profitable sales systems. A systems man.


I'm not cynical. I'm just tired of people making wild claims that inevitably don't pan out. Neural networks are the exact opposite of this: they became popular because people SHOWED that they were useful. The early papers on perceptrons and later feedforward NNs SHOWED that they could solve classification problems, at the very least. Those mentioned in my original reply have shown NOTHING for all their grandiose claims about human brains and "thought". If your theory works, then go ahead, IMPLEMENT HUMAN THOUGHT. Until then, stop writing about how you "solved" one of the deepest and most profound mysteries in human history.


This is why OpenAI was so revolutionary. They were able to create a useful, simple, and free product that anyone can use to improve their life.


At the same time, though (before LLMs), I also thought not enough people were interested in AGI and not enough were willing to put forth half-baked algorithms to try out.


wut?

i'm not even sure what your argument is:

- some people have tried and failed, so anyone else who tries is grandiose?

- anyone who ventures beyond streetlights is spouting hyperbolic nonsense?

- and why assume "creating claims in the hope that others are impressed" rather than communicating ideas in the hope of continuing a conversation?

- why didn't they just build it? eh, 'cause writing it up is the first step / limits of budget / project scope / overall skill set / etc.

what a toxic take on the effort and courage required to explore and refine new perspectives on this important (and undoubtedly controversial) space

f###


I've been seeing increasing cynicism toward anything that isn't a finished product, and I'm a bit confused and concerned. It also seems a bit random: someone's blog post is okay, but a research paper is garbage? These can definitely oversell, but I think that's a different discussion, about the weird academic incentives (which we definitely should have, but it's different). I feel like there's an acceptance of cynicism and complaining, but weirdly not of critiques. It just feels like we're throwing context out the window arbitrarily.


> As a formal algorithm, it could be modeled as a stateless Markov process in discrete time, performing non-deterministic search. As a computable function, it could be instantiated by traditional or neuromorphic computer clusters and executed using brain emulation, hierarchical hidden Markov models, stochastic grammars, probabilistic programming languages, neural networks, or others.

These sentences from section 5.2 convince me that the author is oddly not even interested in building what he's talking about, or making a plan to do so.

- Isn't the point of a Markov process that it _is_ stateful, but that _all_ its state is present in each x_i, such that for later times k > i, x_k never needs to refer to some prior x_j with j < i? (See the sketch after this list.)

- "Here's a list of broad, flexible families of computational processing that can have some concept of a sequence. Your turn, engineers!"


> If that’s true why didn’t he just implement it?

Simple answer: most of the things he mentions haven't been invented yet, at least in terms of computation. Or some parts have been built, but not to a sufficient degree, and most don't have bridges to what's being proposed.

I do agree that the title is arrogant, but I'd say so is this comment. There's absolutely nothing wrong with people proposing systems, detailing them, and publishing (communicating to peers; idk if blog, paper, whatever, it's all the same in the end). We live in a world that is incredibly complex, with high rates of specialization. I understand that we code a lot, and that means we dip our fingers in a lot of pies, but that doesn't mean we're experts in everything.

The context does matter, and the context is that this is a proposal. The other context is that this is pretty fucking hard. If it seems simple, that's because it was communicated well or you oversimplified what he said. Another alternative is that you're right, in which case please implement it and write a paper; it'll be quite useful to the community, as there are a lot of people suggesting quite similar ideas. Ruling out what doesn't work is pretty much how science works (which is why I find it absurd that we use and protect a system that disincentivizes communicating negative results).

It's also worth mentioning that if you go to the author's about page[0] that you'll see that he has a video lecture where he discusses this and literally says that he's building it. So... he is? Just not in secret.

Edit: I'll add that several of the ideas here are abstract; I thought I'd clarify that regarding the "not invented yet" part. So the work he's doing, and effectively asking for help with (which is why you put this out), is to get higher resolution on these ideas. Criticism is helpful in doing that, but criticism is not just complaints: it is more specific, and a clear point of improvement can be drawn from it. If you've ever submitted a paper to a journal/conference, you're probably familiar with how Reviewer 2 just makes complaints that aren't addressable and can leave you confused, asking what paper they even read. Those are complaints, not critiques.

[0] https://aithought.com/about/


There are a ton of people like this, trying to flag-plant obvious "duh" ideas so they can jump in and say "see, I knew it would require some kind of long-term memory!"… Duh?

The implementation and results are what matter. Nobody cares that you thought AGI would require long-term memory.


Yeah, I think a continuing problem is that there are a bunch of interesting threads in cognitive science research that have seemed sort of decent as explanations of how animal cognition works, and maybe directionally reasonable (but incomplete) in suggesting how one might implement it in the abstract, but that doesn't suffice to actually build a mind. If there are a bunch of good hypotheses, you need actual results to show that yours is special. I haven't read this thoroughly, but the author cites and reuses a lot of work from the 1970s and 1980s... and so far as I can tell doesn't really answer why those models didn't pan out into something built decades ago.

Today, I think active inference and the free energy principle (from Friston) are perhaps having a bit more impact (at least showing up in some RL innovations), but they are still a long way off from creating something that thinks like we do.


The same reason Babbage never built the full Difference Engine. Design of this sort allows some tinkering unbothered by the constraints of actually building it (if you know the history of the Difference Engine, you know how many ideas it unleashed even though it was never really built to completion until recent times, for display at a London museum :p).


I looked it up; the author has a PhD in Brain and Cognitive Science.

Are you aware that even partially implementing something like this would require a multi-year effort with dozens of engineers at minimum, and would cost millions of dollars just for training?


"even partially implementing something like this, would require multi-year effort with dozens of engineers at minimum and will cost millions of dollars just for training"

Unfortunately, that means that's also the bar for being able to utter the words "IMPLEMENTING THIS IN A MACHINE WILL ENABLE ARTIFICIAL GENERAL INTELLIGENCE" and be taken seriously. In fact the bar is even higher than that, since merely meeting the criteria you lay out is still no guarantee of success; it is merely an absolute minimum.

The fact that that is a high bar means simply that; it's a high bar. It's not a bar we lower just because it's really hard.


I'm the best musician on earth, but I can't play any instruments. I can imagine a really amazing song; you'll just never hear it, because it would take thousands of hours of practice for me to actually learn to play well enough to prove it. So you'll just have to make do with my words and believe me when I say I'm the best musician alive.

Here's an article I wrote describing the song, but without any of the notes, because I can't read or write music either.

But I've listened to a lot of music and my tunes are better than those.

I went to school for music description so I know what I'm talking about.


You can go into a studio with a producer and turn your ideas into a beat and then into a song. The same does not apply to engineering.

This is like comparing apples with oranges.


well, you could hire a team of developers and neuroscientists to build a prototype of the idea and do physical research; whether you have the chops to do it yourself is irrelevant at that point.


You still can't see the difference, can you?


The perfect way to never have your theories invalidated


I think this is the wrong way to look at academia; theoretical groundwork is essential for building practical things.


That's a tiny team, a short timescale, and a trivial cost.

Seriously, when you're a bit older you'll see more wasted on the most useless ideas.


I also think so, in the context of "at minimum".


Xanadu.


A brain specialist can't code, therefore his argument is invalid.

HN at its best worst.


I am skeptical of anyone making grandiose claims.


Me too, but you responded to an arrogant claim with a naive one (see my other comment). Please do critique their work, but don't bash it. Point to specific things rather than dismissing it outright. Just bashing turns the comment section into noise, and HN isn't trying to be reddit.



