Silver Nanowire Networks to Overdrive AI Acceleration, Reservoir Computing (tomshardware.com)
49 points by rbanffy 6 months ago | 13 comments



> explore the emergent properties of nanostructured materials [...] due to the way their atomic shapes naturally possess a neural network-like physical structure that’s significantly interconnected and possesses memristive elements.

Sounds like bringing back specialized analog computers (e.g., MONIAC [0]).

[0] https://en.wikipedia.org/wiki/MONIAC


That's the idea! But if you specialize for neural networks, you can then train the network to do whatever you want. This is what makes analog computers practical.
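To make that concrete, here's a minimal echo state network sketch in numpy: the recurrent "reservoir" weights are random and fixed (the role the physical nanowire network would play), and only a linear readout is trained with ridge regression. The sizes, task, and ridge parameter are illustrative assumptions, not values from the article.

    # Minimal echo state network sketch (numpy only). The reservoir is random
    # and fixed -- analogous to a physical nanowire substrate -- and only the
    # linear readout W_out is trained.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200

    W_in  = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input coupling
    W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent "substrate"
    W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # scale spectral radius below 1

    def run_reservoir(u_seq):
        """Drive the fixed reservoir with an input sequence, collect its states."""
        x = np.zeros(n_res)
        states = []
        for u in u_seq:
            x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy task: predict the next sample of a sine wave.
    t = np.linspace(0, 20 * np.pi, 2000)
    u, y = np.sin(t[:-1]), np.sin(t[1:])
    X = run_reservoir(u)

    # Train only the readout, with ridge regression (the cheap, trainable part).
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    print("train MSE:", np.mean((X @ W_out - y) ** 2))

The point is that the expensive, messy dynamics stay untrained; you only fit the cheap linear layer on top, which is why an exotic physical medium can in principle do the heavy lifting.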


It’s what may make analog computers practical someday.

Analog computers are not currently practical: as far as I know, none are being used for AI outside of a lab/research setting.

Some efforts have already failed; see Mythic.

It’s an intriguing approach, but how well it can compete practically still remains to be seen.


>The paper itself describes the possibility that aspects of the online learning ability (the ability to integrate new data as it is received without the costly requirement of retraining) could be implemented in a fully analog system through a cross-point array of resistors, instead of applying a digitally-bound algorithm. So both theoretical and materials design space still covers a number of potential, future exploration venues.

That is very cool. Feels like a nuevo tape drive for neural networks.
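For reference, the digital version of that online-learning idea looks roughly like a recursive least squares update of the readout: fold each new sample in as it arrives instead of retraining on the whole dataset. The cross-point resistor array described in the paper would realize an update like this in analog hardware; this numpy sketch is only an illustration of the math, with assumed hyperparameters, not the paper's implementation.

    # Online readout update via recursive least squares: integrate new data
    # one sample at a time, no retraining pass over old data.
    import numpy as np

    class RLSReadout:
        def __init__(self, n_features, forgetting=0.999, delta=1.0):
            self.w = np.zeros(n_features)         # readout weights
            self.P = np.eye(n_features) / delta   # inverse correlation estimate
            self.lam = forgetting                 # forgetting factor (assumed value)

        def update(self, x, target):
            """Fold in one new (reservoir state, target) pair."""
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)          # gain vector
            err = target - self.w @ x
            self.w += k * err
            self.P = (self.P - np.outer(k, Px)) / self.lam
            return err

    # Usage: stream reservoir states x_t and targets y_t as they arrive.
    # readout = RLSReadout(n_features=200)
    # for x_t, y_t in stream:
    #     readout.update(x_t, y_t)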


Like the holotape in Brainstorm? Très cool.


I wonder, could it still be used in a hybrid setup with digital systems?


Calling the structures 'similar to biological CPUs (brains)' is a pretty huge red flag.

I'm so tired of people forcefully anthropomorphizing these things when it's just so inaccurate. It exposes the author's ideology and personal, not factually accurate, viewpoint on what the brain/intelligence is, but presents it as a given. The brain is not at all similar to a CPU in any meaningful sense...


It doesn't matter; the most important factor is the training data. The network implementation, and sometimes even the topology, doesn't matter as long as it has some basic properties, one of which is allowing pairwise interactions.

We have seen RWKV, S4, MLP-Mixer, T5, etc. - they all give comparable results to vanilla GPT when trained on the same dataset. Similarly, no two people have identical neural wiring, but when they take the same course they gain similar abilities. There are only small differences.

On the other hand, it appears that even small models like Phi-1.5, when trained on "textbook quality" data, punch 5x above their weight, and when you train a big LLM on 10x more data - GPT-4 was rumored to be trained on 13T tokens - its abilities are superior to those of all other models.

So better data or more data makes a difference. Architecture makes little difference, controlling for model size. Interesting tidbit: since 2017, when transformers were invented, almost no change to the architecture has been widely adopted, only a small change or two (pre-norm and GELU activation in the feedforward), and not for lack of trying. There are hundreds of papers trying to invent a better transformer. Where are they now?
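To make that "small change or two" concrete, here's a generic pre-norm transformer block with a GELU feedforward (versus the 2017 original's post-norm + ReLU), sketched in PyTorch; it's an illustration with made-up dimensions, not any particular model's code.

    # Generic pre-norm transformer block with GELU in the feedforward.
    import torch
    import torch.nn as nn

    class PreNormBlock(nn.Module):
        def __init__(self, d_model=512, n_heads=8, d_ff=2048):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.ff = nn.Sequential(
                nn.Linear(d_model, d_ff),
                nn.GELU(),                 # GELU instead of the original ReLU
                nn.Linear(d_ff, d_model),
            )

        def forward(self, x):
            # Normalize *before* each sublayer (pre-norm), then add the residual.
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.ff(self.ln2(x))
            return x

Everything else - attention, residuals, the overall stack - is still the 2017 design.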

The brain is just inside a better data engine than AI models. That's the magic behind the brain. It's not some kind of special learning ability; it's continual learning from a persistent environment, one with the highest level of complexity and diversity. Humans can create causal interventions in the environment to test their hypotheses. AI models trained on the internet don't have causal intervention powers except when we provide embodiment and an environment.

Humans also have access to many other humans; AIs train on a text corpus, not live humans. I reject the idea that brains have some special sauce. It's everything else around the brain that is better.


Thanks for putting language to a vague feeling I've had w.r.t. "causal intervention". I tend to think there's no such thing as a self, or self-preservation/fear-of-death, or desire-to-love that might facilitate actual agentic decision-making, without being embedded in a fight for survival against an adversarial environment.

As far as embodiment and environment go, does simulating an environment get us anywhere? Agents given avatars in games practice causal intervention, no? Is that, too, a matter of training a model in a rich enough, accurate enough simulation? I guess that's the problem self-driving cars face, and they at least have a programmed-in fear of mortality, though not their own, in the form of avoiding a fatal wreck.


> does simulating an environment get us anywhere?

It sure does. One of the few superhuman AIs, AlphaZero, learned from an environment made of the Go board and its opponent. Diverse exploration is essential, but ultimately everything was learned from that one bit of reward signal at the end of the game.


Isn't not anthropomorphizing equally an ideological stance?

Humans have an agentic apophenia bias, and disrupting it stops us from fully engaging in the diversity of human expression.


Anthropomorphizing in this way (sacrificing all accuracy for the sake of a nice, bite-size analogy) immediately removes the article's credibility. If I write a science article on solar power and open with "The sun is just a large flashlight", I look like an idiot, even if the rest of the article is good. What's worse is that most people would rightly discredit the flashlight analogy, because most of us are quite familiar with the sun and how it works. But most people are not familiar enough with machine learning and how it works to similarly discredit this article, so they might BELIEVE it, which is a huge issue.


Can I ask, when did you choose to believe that anthropomorphizing immediately removes credibility?



