Chipping Away at Moore's Law (acm.org)





Nanometer wars became marketing bullshit pretty much around the shift down from 22nm to 14nm and lower (2013-ish and later)

At this point, we should just be looking at IPC improvements at the same frequency/voltage and power consumption to see if there are any improvements.

And yes, sure, more cores are nice, but tons of useful software is still bottlenecked by single core speed. So is general computer responsiveness.


"general computer responsiveness" at this point is 100% on software/OS - QNX for example was perfectly responsive in the 90s on Pentium II class hardware (and you can probably find earlier examples with weaker CPUs like BeOS on early PPC, but these were just the first to come to my mind - someone will probably chime in below with Amiga anecdotes or something).

I refuse to believe at current high end intel/amd levels (i7-9 & ryzen 7-9) & even mid-range that lack of responsiveness is due to the CPU rather than windows/mac.


> I refuse to believe at current high end intel/amd levels (i7-9 & ryzen 7-9) & even mid-range that lack of responsiveness is due to the CPU rather than windows/mac.

You are right, of course, in that the fault lies with software. But holding the software constant, the only way to improve responsiveness is to up your core speed and IPC.

It's not as if the average user can email Microsoft or the Chrome browser team and ask them to make the OS/browser more responsive for their older hardware. But they _can_ go to the store and buy a faster CPU, most of the time.

The situation was probably reversed a few decades ago, when hardware was actually expensive and multi-core was not a thing.


I think the responsiveness problem will be tackled by specialized memory controllers, which is where the new gaming consoles are headed.

When MS Office is written in JavaScript[0] instead of C, I have zero hope of the responsiveness problem ever being fixed. It’s only getting worse.

[0] - I’m talking about Office 365 which runs in a web browser (yet somehow is no slower than the “native” version that runs on my desktop computer).


And Google Docs is somehow faster than either.

Sure it was. Back then screens had 16+ times fewer pixels (multiply that by 2-8 for text mode), Linux (the kernel) source code was still relatively small, and the games people played had 10 2D levels of a few million pixels each.

I'm not talking about games, I'm talking about general computer responsiveness. The difference in computing resources between a Pentium II @ 266MHz and modern-day CPUs is much bigger than 16x, and even back then Win95 was a lot slower than it should have been (we were saying basically the same thing - "why is this 266MHz PII not feeling any faster than my old 8MHz Amiga?").

Again, the "poster boy" for responsive interaction is probably BeOS - the original BeBox used dual PPC 603 CPUs at 66MHz. If you consider IPC, MHz & number of cores, a modern CPU probably has 1000x the computation power at its disposal, & RAM is also generally 1000x more plentiful (we have as many GBs as we used to have MBs back then). I'll bet with GPUs the difference is even bigger.
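As a rough back-of-envelope check of that 1000x figure (the clock, core count and IPC numbers below are loose assumptions for illustration, not measurements):

    # Hypothetical modern desktop CPU vs. the original BeBox (dual PPC 603 @ 66MHz).
    bebox_ops  = 66e6 * 2 * 1.0   # 66 MHz, 2 cores, ~1 instruction per cycle (assumed)
    modern_ops = 4e9  * 8 * 4.0   # ~4 GHz, 8 cores, ~4 instructions per cycle (assumed)

    print(f"~{modern_ops / bebox_ops:.0f}x more raw throughput")  # roughly 970x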


A single 4K screen @ 60Hz requires a transfer rate of 3840x2160x3 bytes 60 times per second = ~1.5GB/s just to copy pixels, without any logic applied.

A typical RAM module of the late 90s (DDR-266) could only provide ~2.1GB/s, leaving almost no room to perform any compute on a single 4K screen, and simply not enough to run two of them.

PC66 of early 90s could only do 0.5GB/s.

DDR3, commonly used now in integrated GPUs, does ~13GB/s.

What I am getting at is that the task of just copying pixels to a screen became proportionally harder as RAM progressed (26x faster per module since 1990 and 6.5x per module since 1999, vs 60x more work per screen since 1990 (256 colors) and 10x per screen since 1999).

And that is just screen rendering. Same happened to source code, documents, images, everything.
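For concreteness, the screen-copy arithmetic above spelled out (a quick sketch assuming 24-bit color and peak theoretical module bandwidth on a 64-bit bus; sustained numbers are lower):

    # Bandwidth just to copy one 4K frame to the screen, 60 times per second.
    frame_bytes = 3840 * 2160 * 3                 # 24-bit color
    screen_bw   = frame_bytes * 60 / 1e9          # ~1.49 GB/s

    # Peak theoretical bandwidth of the modules mentioned above.
    pc66      = 66e6   * 8 / 1e9                  # ~0.53 GB/s
    ddr266    = 266e6  * 8 / 1e9                  # ~2.13 GB/s
    ddr3_1600 = 1600e6 * 8 / 1e9                  # ~12.8 GB/s

    print(f"4K@60Hz copy: {screen_bw:.2f} GB/s")
    print(f"PC66 {pc66:.2f} / DDR-266 {ddr266:.2f} / DDR3-1600 {ddr3_1600:.2f} GB/s")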


Back then, to write a pixel you did, at most, 3 memory writes. Now you need to write those bits to a bitmap and pass it on to the GPU, which will combine it with all the rest of the stuff needed to make your screen happen and, hopefully, you'll see something in a couple screen refresh cycles.

16 times fewer pixels, but our systems are now literally over 50 times faster, and that's only considering the clock speed of a single core and ignores IPC improvements.

Huh? The Pentium II had clock speeds of 233-450MHz, which is only about 10x less. And RAM bandwidth did not even improve 10x since the P2.

Ah crap. I misread a comment above which mentioned "the 90s" and in my mind I thought 90 MHz.

> but tons of useful software is still bottlenecked by single core speed

It's interesting to see approaches like Apple's where more and more of the computationally heavy work is moved onto what are essentially special-purpose ASICs (I think you can generously expand the definition to even include GPUs). Specific examples include video decoding, graphics, and increasingly ML computation. Once those things become "free", what is left? I'd say mostly just many layers of abstractions atop basic computation.

As more and more software stacks become ergonomic w.r.t. multithreading & multiple execution contexts, single core becomes less of a bottleneck. IMO that's the ultimate solution to the hard physical constraints of Moore's Law. Short of a new type of compute substrate.
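A minimal sketch of what that ergonomics looks like in practice (plain Python stdlib, process-based so the CPU-bound work actually runs in parallel; the workload is a made-up stand-in):

    from concurrent.futures import ProcessPoolExecutor

    def crunch(n: int) -> int:
        # Stand-in for some CPU-bound task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        # Fanning work out across all cores is a one-liner.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(crunch, jobs))
        print(sum(results))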


Anyone have up-to-date data on the progress of Moore's law through today? All the stuff on Google looks like it taps out in 2015:

https://www.google.com/search?q=moore+law+&tbm=isch&ved=2ahU...


One thing that is maybe helpful to consider in the modern era is that Moore's law's meaning has unraveled as we have reached smaller scales (hence the basic arguments over what it even means).

Originally the law related to the transistor density of an IC, but this was strongly correlated with power consumption, clock speed, and a bunch of other metrics. As we reach smaller scales these parameters are no longer tightly coupled. I recall seeing a plot to the effect that power efficiency tapped out at 14 nm. Likewise, clock speed has not been increasing at the original clip for the better part of a decade (it went up ~50% in 10 years, which in any other area of engineering would be astounding, but I could sure use a 32 GHz processor). Anyway, having trade-offs makes things more interesting, and perhaps we are going to see an era with more cleverness in chip architecture soon.


32 GHz would be fun. We'd have multiple clock pulses flowing around because the chip is larger than the clock's wavelength.

And I'm not counting the circuit paths - this would be on a straight line of copper.
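Rough numbers (the ~0.5c on-chip propagation speed is a loose assumption):

    c = 3.0e8                          # m/s, speed of light in vacuum
    f = 32e9                           # 32 GHz clock
    wavelength_vacuum = c / f          # ~9.4 mm
    wavelength_onchip = 0.5 * c / f    # ~4.7 mm at ~0.5c propagation

    print(f"{wavelength_vacuum * 1000:.1f} mm in vacuum, "
          f"{wavelength_onchip * 1000:.1f} mm on-chip")
    # Large dies are ~20-25 mm on a side, so several clock periods
    # would be in flight across the chip at any instant.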


The progress of Moore's Law, in its original sense, no longer exists. It was a very specific claim that we have not managed to achieve for a while. People often use the phrase "Moore's Law" just to refer to the continuing shrinkage of transistors, though, which I think is what you mean.

We are no longer doubling (2x) every 2 years, and haven't been for a few years now. But we are getting 1.8x, so not too bad.

People like to show you graphs, especially on a 10-50 year time scale. Well, you will still see a straight line, because only the end tip of that graph is beginning to curve.

Assuming perfect execution, TSMC will get you a ~1.8x transistor density improvement every 2 years all the way till 2030.

You may also want to read David Kanter on transistor density [1]. But TL;DR: not all transistors shrink at the same ratio; 1.8x is the best-case scenario.

[1] https://www.realworldtech.com/transistor-count-flawed-metric...
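To put 1.8x per 2 years in perspective against the canonical 2x (my own quick illustration, not from [1]):

    cadences = 5   # five 2-year cadences = one decade
    print(f"2.0x every 2 years over 10 years: {2.0 ** cadences:.0f}x")   # 32x
    print(f"1.8x every 2 years over 10 years: {1.8 ** cadences:.0f}x")   # ~19x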


Density on the leading node (CPU & GPU graphs): https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrh...

I've also got an NVIDIA performance graph, since GPUs scale well with transistors: https://docs.google.com/spreadsheets/d/1dukdlqkh-zPkhmjUVuUL...


I did a variant for GPU land around AWS instance hour purchasing power for GPU TFLOPS + GPU RAM over the last 10 years: https://twitter.com/lmeyerov/status/1232937998464901120

For context, the last entry is the T4 (AWS g4dn), which is 12nm.

Not exactly doubling every two years, but not far off. Moore's Law is dead, long live Moore's Law!


5 nm node products are being released to consumers this year, so we're still on track. I think at this point the next node is always questionable, because it takes a pretty big breakthrough in manufacturing techniques, resist chemistry, and node design to shave off another nanometer.

"Moore's law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years"

We are not on track, haven't been on track.



The 5nm nodes being released aren’t actually 5nm though right? That’s just the branding/marketing associated with them from what I’ve seen.

transistors/mm^2 seems like a harder-to-game metric, and more in line with Moore's Law.

Though originally it was "number of components per integrated circuit": https://wikipedia.org/wiki/Moore's_law

So... larger chip areas could continue it. I've thought that huge wafers, at slower clocks, would make sense. In a PC, tablet, or even phone, there is plenty of physical room.

Not sure if the barrier is technical or just present usage and economics.
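A trivial way to see the area lever (the density figure is illustrative, not from any datasheet):

    def total_transistors(density_per_mm2: float, area_mm2: float) -> float:
        return density_per_mm2 * area_mm2

    density = 100e6   # hypothetical 100M transistors/mm^2
    print(f"{total_transistors(density, 100):.1e} transistors on a 100 mm^2 die")        # 1.0e+10
    print(f"{total_transistors(density, 800):.1e} on a near-reticle-limit 800 mm^2 die")  # 8.0e+10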



