
Not sure if you noticed, but Moore's Law died quite a while ago now.




Moore's law is about the most economical die, not about the ever-more-expensive top of the line.

If you take those out, there is very clear stagnation in that graph.


Moore's Law was originally about the number of transistors per unit area, and that has absolutely stalled. They are comparing 72- and 64-core processors to single-core ones to try to make the claim that "transistors per chip" are doubling in accordance with Moore's Law (which they still aren't--you can plot a line through the points and see that we are falling short of linear on a log scale). Are you really arguing that simply making microprocessors larger, ignoring cost and density, is an improvement in processing power (especially for the average end user, who is on a machine with only a handful of cores)?
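
A quick way to sanity-check that claim: take any two (year, transistor count) points off such a chart and compute the doubling period they imply. Rough Python sketch; the sample figures are commonly cited approximations for the Intel 4004 and Apple M1, not values pulled from the article's graphic:

    import math

    def doubling_period(year0, count0, year1, count1):
        """Average years per doubling implied by two (year, count) points."""
        return (year1 - year0) / math.log2(count1 / count0)

    # Intel 4004 (1971, ~2,300 transistors) vs Apple M1 (2020, ~16 billion)
    print(doubling_period(1971, 2300, 2020, 16e9))  # ~2.2 years

Averaged over the full half-century it still comes out near two years, but run the same calculation on two points from the last decade alone and you get a noticeably longer period, which is exactly the slowdown at issue here.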

Look at the actual transistor sizes. In 2009 we were at 32 nm. We're now in 2021, so if transistor sizes had kept halving every two years we would be at 0.5 nm. Clearly, we are not anywhere close to that--we're off by a factor of 10, and that's only with the very latest and greatest manufacturing processes that almost no consumer chips use (not to mention that the 5nm process used by AMD is not the same as a 5nm process used by Intel). As the article itself notes:

> Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law.

Of course semiconductor companies are happy to claim that they are secretly keeping pace, but in terms of commercially available microprocessors it is unquestionably false. Anyone using the "doubling every two years" approximation to decide how much more computing power is available now than 10 years ago, or how much more will be available 10 years in the future, is not going to arrive at correct figures.
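
To reproduce the arithmetic above, a two-liner:

    # Feature size if "halving every two years" had held from 32 nm in 2009:
    print(32 * 0.5 ** ((2021 - 2009) / 2))  # 0.5 nm, vs ~5 nm actually shipping

    # Compute-growth multiplier the naive approximation predicts over a decade:
    print(2 ** (10 / 2))  # 32x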


Moore's law was originally stated as the doubling of the number of transistors per integrated circuit every year, later revised by Moore himself to a doubling every two years. Reading the article where he first proposed the rule (cited in the Wikipedia article linked in the parent) confirms that definition, as opposed to transistors per unit area or other measures.


No, it was per area per economic unit originally, and the definition was changed in ways that don't make a lot of sense. Clearly the performance of a 72-core machine has little relevance to personal computing.


Not commenting on Moore's law, but I don't see how the last part is true when GPUs have become a main component of personal computing. They are in fact becoming more and more important with the need for AI inference and ever more demanding rendering tasks (in both resolution and frame rate).


The vast majority of software isn't written for the GPU. That's still true today, and it will likely be true in ten years as well. Unless we reach a point where that changes (and most software can fit its restricted paradigm), we should compare apples to apples. If we were talking solely about GPU performance, I would be more inclined to agree with claims about "80x more processing power than 10 years ago" etc, though.


Apple's GUI frameworks let apps rely heavily on the GPU, and the smooth, slick GUI is definitely a big part of their value proposition. While I agree that the business logic of most apps isn't using the GPU for compute, I think most apps will have a rendering side where it is important.


For single thread performance.


Moore's law has nothing to do with how fast a chip is. It deals with how many transistors you can fit in a given area.

This can equate to a faster chip because you can now do more at once. However, we hit the frequency limits for silicon a while ago. In particular, parasitic capacitance is a huge limiting factor: a capacitor acts more and more like a short circuit the faster your clock runs.
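
That last point falls straight out of the impedance formula |Z| = 1/(2*pi*f*C). Quick sketch; the 1 pF parasitic value is purely illustrative, not from any real process:

    import math

    def cap_impedance_ohms(f_hz, c_farads):
        """Magnitude of a capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
        return 1.0 / (2 * math.pi * f_hz * c_farads)

    C = 1e-12  # 1 pF of parasitic capacitance (illustrative)
    for f in (1e6, 1e9, 5e9):
        print(f"{f:>12.0f} Hz: {cap_impedance_ohms(f, C):>10.2f} ohms")

At megahertz clocks the parasitic path is effectively an open circuit (~159 kilohms); at 5 GHz its impedance has collapsed to ~32 ohms, so charge bleeds through it on every cycle.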

Moore's law has a little more life in it, although the rate seems to have slowed. At the end of the day, though, it can't go on forever: you can only make something so small. Eventually there are so few atoms left that constructing something useful becomes impossible. For example, current transistors are FinFETs because the third dimension gives them more atoms to reduce leakage current compared to the relatively planar designs on older process nodes, yet these FinFETs still take up less area on the die.


Since it can't go on forever, isn't it time to update its definition to reflect what we are doing with computing now: adding cores and improving energy efficiency?


Adding more cores requires Moore's law. When we hit the end, the only way to get more cores will be either larger dies or more dies.

Moore's law does help with efficiency to some degree: smaller transistors generally require less power to switch. However, most of the power is lost in the miles of wiring in a modern chip when running at such high clock speeds. Again, it's the parasitic capacitance.
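
The standard first-order model here is dynamic power P = alpha * C * V^2 * f. All numbers below are illustrative, not measurements of any real chip:

    alpha = 0.1   # activity factor: fraction of capacitance switching per cycle
    C = 100e-9    # total switched capacitance, gates plus wiring (illustrative)
    V = 1.0       # supply voltage, volts
    f = 4e9       # clock frequency, hertz

    print(alpha * C * V**2 * f)    # 40.0 W
    print(alpha * C * 0.8**2 * f)  # 25.6 W at 0.8 V: note the V^2 leverage

Smaller transistors cut C and let V drop, which is where the efficiency gains come from, but the wiring's share of C doesn't shrink nearly as fast.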



