
2038.

* 86 billion neurons in a brain [1]

* 400 transistors to simulate a synapse [2]

That's 34.4 trillion transistors to simulate a human brain.
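
A quick sanity check of the arithmetic in Python, taking the two figures above at face value:

    # Back-of-envelope check; both inputs are the figures cited above,
    # not independently sourced.
    neurons = 86e9             # [1]
    transistors_each = 400     # [2], quoted per synapse
    print(f"{neurons * transistors_each:.3e} transistors")   # 3.440e+13, i.e. 34.4 trillion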

As of 2024, Nvidia's Blackwell GPU has 208 billion MOSFETs[3]. In 2023, AMD's MI300A APU had 146 billion transistors[3]. In 2021, the Versal VP1802 FPGA had 92 billion transistors[3]. Intel projects 1 trillion transistors in a package by 2030; TSMC suggests around 200 billion on a single die.

We'll likely have real-time brain analogs by 2064.

(Aside, these are the dates I've used in my hard sci-fi novel. See my profile for details.)

[1]: https://pubmed.ncbi.nlm.nih.gov/19226510/

[2]: https://historyofinformation.com/detail.php?id=3901

[3]: https://en.wikipedia.org/wiki/Transistor_count




> 86 billion neurons in a brain [1]

> 400 transistors to simulate a synapse [2]

> That's 34.4 trillion transistors to simulate a human brain.

You forgot about:

1-Astrocytes: more common than neurons, computational, have their own style of internal and external signaling, and have bi-directional communication with neurons at the synapse (and are thought to control the neurons)

2-Synapse: a single synapse can be both excitatory and inhibitory depending on its structure and the permeant ions inside and outside the cell at that point in time and space - are your 400 transistors handling all of that dynamic capability?

3-Brain waves: now shown to be causal (influencing neuron behavior), not just a by-product of activity. There are many different types of waves (different frequencies) that operate very locally (high frequency) or broadly (low frequency). Information is encoded in frequency, phase, space, and time.

This is just the summary; the details are even more interesting and complex. Some of the details are probably not adding computational capability, but many clearly are.


Reading about brain waves makes me think analog computing could be the way to go.


The main issue is that we're lacking one of the four fundamental circuit components: the memristor [0].

If we had a simple, stable memristor (one you could throw in a box for five years and have it still work), we'd have everything we'd need for brain modeling, seeing as a memristor is effectively a synapse.

With that, we would want to do everything in the analog domain anyway.
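
For anyone wondering why a memristor looks like a synapse: its resistance depends on how much charge has already flowed through it, so it remembers its history the way a synaptic weight does. A minimal sketch of the textbook linear ion-drift model, with made-up but plausible constants (an illustration, not real device data):

    # Linear ion-drift memristor model (after Strukov et al., 2008).
    # The resistance left at the end of the run persists with no power
    # applied -- the synapse-like property.
    import math

    R_ON, R_OFF = 100.0, 16e3   # ohms: fully doped / fully undoped film
    D = 10e-9                   # m: film thickness
    MU_V = 1e-14                # m^2 s^-1 V^-1: dopant mobility
    DT = 1e-5                   # s: Euler step

    def drive(t):               # 5 Hz sine, 1.2 V amplitude
        return 1.2 * math.sin(2 * math.pi * 5 * t)

    w = 0.5 * D                 # state variable: width of the doped region
    for n in range(100_000):    # 1 s of simulated time
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # instantaneous memristance
        i = drive(n * DT) / m
        w += MU_V * (R_ON / D) * i * DT            # state drifts with charge
        w = min(max(w, 0.0), D)                    # clamp to physical bounds

    print(f"memristance after the drive stops: {R_ON * w / D + R_OFF * (1 - w / D):.0f} ohm")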

[0] the other three are the resistor, the capacitor, and the inductor.


We have them now, on chips and everything.


Woah, where?!

I can't seem to find anything on memristors being on chip or really being in solid state anything.

I would love to get my hands on them!


A quick Google search will find thousands of articles.

It was possible to buy memristor storage devices, but they weren't sufficiently competitive with ordinary flash memory, and I believe they've been relegated to niche applications, such as ones needing high erase counts.

AI implementations on memristor circuitry have come up often enough that even I've heard of them. E.g., a random recent development: https://www.nature.com/articles/s41467-024-45312-0


I've been googling these for a while, and this is the first I've heard of it. Do you have a link at all? Not trying to troll here; I really am genuinely interested.


> Blackwell GPU has 208 billion MOSFETs[3]

Things we hard sci-fi fans will insist on:

- You have significantly undercounted transistors. As of 2024, you can put up to 8.4 terabytes of LPDDR5X memory into an Nvidia Grace Blackwell rack. So that's 72 trillion transistors (and another 72 trillion capacitors) right there.

- A GPU executes significantly faster than a neuron.

- The hardware of one GPU can be used to simulate billions of neurons in realtime.

- Why limit yourself to one GB200 NVL72 rack, when you could have a warehouse full? (what happens when you create a mind that's a thousand times more powerful than a human mind?)

You really need to separate the comparison into state (how much memory) and computation rate. I think you'll find that an Nvidia GB200 NVL72 will outperform a brain by at least an order of magnitude. And the cost of feeding brains far exceeds the cost of feeding GB200s. Plus, brains are notoriously unreliable.

The current generation of public-facing AIs are using ~24 GB of memory, mostly because using more would cost more than can conveniently be given away or rented out for pennies. If I were an evil genius looking to take over the world today, I'd be building terabyte-scale AIs right now, and I'd not be telling ANYONE. And definitely not running it in Europe or the US, where it might be facing imminent legislative attempts to limit what it can do. Antarctica, perhaps.


> The hardware of one GPU can be used to simulate billions of neurons in realtime.

You mean simulate billions of artificial neurons, right?

You can't simulate a single biological neuron yet because it's way too complex and they still don't even understand all of the details.

If they did have all of the details, at minimum you would need to simulate the concentrations of ions internally and externally, as well as the electrical field around the neuron, to truly simulate the dynamic nature of a neuron in space and time. Its response to a stimulus depends on all of that and more.


Depends a bit on how certain you are of your preferred answer to the question: is the brain analog or digital? If you think it is predominantly digital, quite a lot of the details of the biochemical mechanisms don't matter. If you think it is predominantly analog, they matter hugely.


Do any serious theories on the brain being predominantly digital hold any water with neuroscientists? I feel like it's fairly well accepted that the brain isn't digital even just with what we do understand about neurons?


Action potentials are digital. Once the neuron hits the trigger threshold, the spike happens at the same amplitude every time.

The process by which the neuron determines whether it's time to fire or not is almost grotesquely analog, in that it is affected by a million different things in addition to incoming spikes.
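
A toy leaky integrate-and-fire model shows that split in a few lines: the sub-threshold voltage is continuous and messy, the spike is stereotyped. The constants below are illustrative, not measured values:

    # Toy leaky integrate-and-fire neuron: analog integration below threshold,
    # all-or-nothing spike above it.
    import random

    TAU = 20.0          # membrane time constant, ms
    V_REST = -70.0      # resting potential, mV
    V_THRESH = -55.0    # firing threshold, mV
    DT = 1.0            # ms

    random.seed(0)
    v, spikes = V_REST, 0
    for _ in range(1000):                       # one second of simulated time
        drive = random.gauss(1.0, 2.0)          # noisy "analog" synaptic input, mV per step
        v += DT * (V_REST - v) / TAU + drive    # leaky integration (the analog part)
        if v >= V_THRESH:                       # threshold crossing (the digital part)
            spikes += 1                         # every spike has the same amplitude
            v = V_REST                          # reset after the spike
    print(f"{spikes} spikes in 1 s")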

How much of each process matters for what is the mind-generating part of the brain's work? We don't know.

Myelination (which allows very fast, and digital, communication between neurons) is near-universal in vertebrates, which points to it being important for intelligence.

As a counterexample, we have cephalopods, which accomplish their impressive cognitive tasks using only 'slow' neurons; they get around the speed limit by having huge axons, which reduce internal resistance. So we know that digital transmission isn't necessary for intelligence, either.

The nature of cognition is a fascinating subject, isn't it!


I don't think it's possible to tell at this point.

If you looked at the operation of a digital computer, you'd find an analog substrate with some quite interesting properties. Deciding whether the analog substrate is an important semantic element of how the computer works, or whether its operation should be understood digitally, is as much a (research-driving) leap of faith as anything else.


That's actually a pretty convincing point. If all we were able to understand was the flow of electrons in transistors and not how they were connected, we wouldn't necessarily be able to conclusively claim that digital computers were indeed digital.


> Antarctica, perhaps.

Skull Volcano lair on an island. Geothermal energy to power the place. Seawater for liquid cooling.

Then of course, you also need the beautiful but corruptible woman as some high-up part of the organization. And a chief of security in your evil enterprise with some sort of weird deformity or scarring - preferably on the face.

Protip: If you catch a foreign spy on the island, kill them immediately. Do not explain your master plan to them just before you kill them.


1) There is not enough data in the entire world to train a trillion-parameter large language model. And there is limited use for synthetic data yet (we need an internal adversarial/evolutionary model like AlphaZero for that, which I think will be developed soon, but we are not there yet).

2) Models are not linearly scalable in terms of their performance. There are 7B-parameter models (Zephyr, built on Mistral) routinely outperforming 180B-parameter models (Falcon). Skill issue, as they say.

3) Even if you are an evil genius, you need money and computing power. Contrary to what you might be reading in the media, there isn't much capital available to evil geniuses on short notice (longer-term arrangements might still be possible). And even if you have money, you can't just buy a truckload of H100s. Their sales are already highly regulated.


> Antarctica, perhaps

Powered by what?

Solar panels in a place where the sun doesn't even rise for months at a time? That big beefy transmission grid that the Antarctic continent is so famous for? The jillions of oil wells already drilled into the Antarctic?

Oh wait, I know: penguin guano.

Pass the crack pipe, you've had enough my friend.


Nuclear reactors, of course.

Antarctica was chosen because the area on the mainland between 90 degrees west and 150 degrees west is the only major piece of land on Earth not claimed by any country[1], which makes it a good place to locate businesses that do not wish to be subject to regulations on Artificial Intelligences. Plus, it's much more inconvenient to attack than it would be if you located it on a volcanic island.

[1] Sez wiki.


Each of those cells has ~7,000 synapses, each of which is both doing some computation and sending information. Further, this needs to be reconfigurable, as synaptic connections aren’t static, so you can’t simply make a chip with hardwired connections.

You could ballpark it as 400 * 7,000 * 86 billion transistors to simulate a brain that can’t learn anything, though I don’t see the point. Reasonably equivalent real-time brain emulation is likely much further out; I’d say 2070 on a cluster isn’t unrealistic, but we’re not getting there on a single chip using lithography.

What nobody talks about are the bandwidth requirements if this isn’t all in hardware. You basically need random access to 100 trillion values (plus 100 trillion weights) ~100-1,000+ times a second. Which requires splitting things across multiple chips and some kind of 3D mesh of really high bandwidth connections.
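
Rough numbers for both points, using the synapse counts and cycle rates quoted in this thread; the bytes-per-access figure is an assumption:

    # Back-of-envelope only; all counts are the figures quoted upthread,
    # and bytes per access is a guess.
    TRANSISTORS_PER_SYNAPSE = 400
    SYNAPSES_PER_NEURON = 7_000
    NEURONS = 86e9
    VALUES = 100e12                  # ~100 trillion values (plus as many weights)
    BYTES_PER_ACCESS = 4             # say, a 2-byte weight and a 2-byte activation

    print(f"hardwired: {TRANSISTORS_PER_SYNAPSE * SYNAPSES_PER_NEURON * NEURONS:.1e} transistors")

    for hz in (100, 1_000):
        accesses = VALUES * hz                      # random accesses per second
        print(f"{hz:>5} Hz: {accesses:.0e} accesses/s, "
              f"~{accesses * BYTES_PER_ACCESS / 1e15:.0f} PB/s of traffic")
    # For scale, a single current GPU's HBM is on the order of a few TB/s.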


> Which requires splitting things across multiple chips and some kind of 3D mesh of really high bandwidth connections.

Agreed. In the novel, it's a neuromorphing perception lattice, to suggest a 3D mesh. If you'd like to have a read, I'd be grateful for your feedback.


> I’d say 2070 on a cluster isn’t unrealistic, but we’re not getting there on a single chip using lithography.

Wild-ass prediction: future circuits are literally grown; circuit layout will be specified in DNA. No lithography necessary.


Simulating a ~300-neuron nematode is still an unsolved problem.


There is a simulation. The simulation was even loaded onto a robot, which then behaved disturbingly like a worm.

What does 'unsolved' mean in this context?


Early in 2038, the AI apocalypse begins and all of humanity is powerless to stop it for six days until Unix time rolls over and the AI serendipitously crashes.


Won't we run into physical limits long before 2064?

I imagine that would push out the timeline.


Another interesting angle is power consumption. The brain consumes about 20 W, so that’s ~200 pW/neuron. The GPU is about 4 orders of magnitude worse at ~2 uW/neuron.

The brain also does much more than a GPU. I’m not sure what percentage of the brain could be considered devoted to equivalent computation, but that’s also interesting to think about. I’ve seen estimates that the brain’s memory is equivalent to ~2 PB. The power draw of a 12 TB hard drive is ~6 W, so current tech has a long way to go if it’s ever going to get close to the brain: 2 PB of storage is about 1 kW alone.
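
The arithmetic behind those figures (inputs are the numbers quoted in this comment, plus an assumed ~1 kW for a high-end accelerator):

    # Redoing the power arithmetic above.
    BRAIN_W, NEURONS = 20.0, 86e9
    print(f"brain: {BRAIN_W / NEURONS * 1e12:.0f} pW per neuron")         # ~233 pW

    GPU_W, GPU_W_PER_NEURON = 1_000.0, 2e-6
    print(f"at 2 uW/neuron, 1 kW covers ~{GPU_W / GPU_W_PER_NEURON:.1e} neurons")

    DRIVE_TB, DRIVE_W, TARGET_TB = 12, 6, 2_000    # ~2 PB of spinning disk
    drives = -(-TARGET_TB // DRIVE_TB)             # ceiling division -> 167 drives
    print(f"2 PB needs ~{drives} x 12 TB drives, ~{drives * DRIVE_W} W")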

That’s pretty impressive for a bunch of goo that resulted from a random process.

Makes me think we are barking up the wrong tree with silicon based computing.


> We'll likely have real-time brain analogs by 2064.

When there’s occasional talk about three-letter agencies having far-future tech now, I used to wonder what it might be. I guess this is one of those things.


Transistors are much faster than synapses, so a few can simulate a bunch of synapses. As a result, you can probably get by with something like a million times fewer than your estimate.


Your suggestion actually makes things much harder.

A single “cycle” (and you need ~100-1,000+ cycles per second) would involve ~7,000 synapses per neuron * 86 billion neurons of random memory accesses. Each of those accesses needs to first read the location of the memory and then access that memory.

For perspective, a video card, which is heavily optimized for this, only gets you low billions of random memory accesses per second. True random access across large amounts of memory is hard. Of course, if you’re using a 1-million-GPU supercomputer, you now need to sync all this memory between each GPU every cycle, or have each GPU handle network traffic for all of these memory requests…

PS: The article assumes more efficient AI designs for good reason; actually using a neuronal network the size of the brain in real time is prohibitive.


Supercomputers are almost at this scale, if you go by FLOPS.

Distributed cloud systems certainly surpass this number as well. https://www.top500.org/lists/top500/2023/11/
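
For a rough sense of scale, treating one synaptic update as one "op" (a big simplification) and taking ~1.2 exaFLOPS as roughly the top machine on that November 2023 list:

    # Crude comparison; synapse count and firing rates are the ballpark
    # figures used upthread.
    SYNAPSES = 100e12
    TOP_SUPERCOMPUTER_FLOPS = 1.2e18     # ~Frontier on the linked Nov 2023 list

    for rate_hz in (100, 1_000):
        brain_ops = SYNAPSES * rate_hz
        print(f"{rate_hz:>5} Hz: brain ~{brain_ops:.0e} ops/s, "
              f"supercomputer/brain ~{TOP_SUPERCOMPUTER_FLOPS / brain_ops:.0f}x")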


Yeah, FLOPS and RAM aren’t that problematic; it’s the really insane memory bandwidth, or some really novel hardware, that’s at issue.

That or some novel AI architecture. I doubt we need to actually emulate a brain to reach near parity with human cognition.


Too many presumptions.


So list your better assumptions. The exercise is guesstimating the future. Gather your unique perspective and offer up some upper and lower bounds.


400 transistors per synapse is a foundational assumption and is total nonsense


Also, at what rate? If it's not at least one second per second, that changes things quite a bit.



