
I'm not sure how much of an effect this would have, but from a consumer's point of view, there isn't as much of a reason to buy a new PC every few years anymore. The laptop I'm using right now was purchased in 2011, and the prices in stores now are very comparable to what I paid back then. I've made up my mind to buy a new one a few times, but then when I go to the store, it just isn't worth it.



Moore's law may become irrelevant long before it becomes invalid. Without rapid increases in usable computing power, the reasons for upgrading have largely disappeared. In particular, the chips at the top of the profit-margin curve, where Intel loves to play, are much less compelling.

Intel might eventually find itself in the same shoes Kodak did - when your primary business dries up, there's unlikely to be a follow-on that is successful enough to keep the company going.


I'm not convinced that Moore's law has ended (using the "processing power doubles" reading, not the "transistor count doubles" one).

We've hit the physical limits of our current processes, but we haven't remotely hit the physical limits of... well, physics.

It's possible that we are on the flat stretch before the next curve starts. Whether that curve will reach the consumer space I don't know, but at the top end of computing (supercomputers) there is still massive demand for more processing power.


What makes you think we haven't started to hit the physical limits of physics? A 4nm transistor is 7 atoms wide. Obviously, a one-atom transistor is pretty much the absolute smallest possible, which would be about 0.5nm. Anything less than 7nm experiences quantum tunneling, which is going to really mess things up, and it will get worse the smaller the size. [1] I imagine the interference between transistors will be fun...
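
For a rough sense of scale, here's a back-of-envelope check in Python (the silicon lattice constant of ~0.543nm and the Si-Si bond length of ~0.235nm are textbook values; how many "atoms wide" a feature is depends on which spacing you count):

    # Back-of-envelope: how many silicon "atoms" span a 4nm feature?
    SI_LATTICE_CONSTANT_NM = 0.543   # edge of the cubic silicon unit cell
    SI_BOND_LENGTH_NM = 0.235        # nearest-neighbour Si-Si distance

    feature_nm = 4.0
    print(feature_nm / SI_LATTICE_CONSTANT_NM)  # ~7.4 unit cells across
    print(feature_nm / SI_BOND_LENGTH_NM)       # ~17 Si-Si bond lengths across

Counting whole unit cells is what gives the "7 atoms wide" figure above.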

Next, how exactly are you going to make these atom-sized objects in quantity? You can't be manually placing atoms. But lithography has a hard physical diffraction limit, forcing the use of higher frequency light. The higher the frequency, the greater the energy. Etching single-atom features will require hard x-rays, which we may not even have technology to generate. Even if we do, focusing x-rays is not trivial, as you can't just slap in a glass lens. Plus, the focal lengths may be rather long. And what kind of photoresist do you use? Hard x-rays are likely to blast pretty much anything, and the etching characteristics are probably not neat little troughs. How do you ensure you have your one atom stay put when everything around it is being blasted by high-energy x-rays?
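
The diffraction limit is usually written as the Rayleigh criterion, CD = k1 * wavelength / NA. A rough sketch of what it implies (k1 ~ 0.25 is roughly the single-exposure floor; the NA values are typical published figures, and the 0.5nm target is just illustrative):

    # Rayleigh criterion: smallest printable feature CD = k1 * wavelength / NA
    def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
        return k1 * wavelength_nm / numerical_aperture

    print(min_feature_nm(193.0, 1.35))   # ArF immersion litho: ~36nm per exposure
    print(min_feature_nm(13.5, 0.33))    # EUV: ~10nm per exposure

    # Wavelength needed to print ~0.5nm (atom-scale) features at the same NA/k1
    target_nm, na, k1 = 0.5, 0.33, 0.25
    print(target_nm * na / k1)           # ~0.66nm, i.e. X-ray wavelengths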

I think the physical limits are looming pretty large.

[1] https://en.m.wikipedia.org/wiki/5_nanometer


Maybe, but (and I could be way off, I'm a programmer not a physicist) those limits are pretty much the limits of our current technology. It's a bit like saying "well, we've reached the point of diminishing returns with this steam engine, that's it, no more progress" while over in a shed somewhere else someone is inventing the AC electric motor.

I'm optimistic because I've seen "end of progress" reports on computing power since I was a kid in the '80s.

We are a long way away from this: https://en.wikipedia.org/wiki/Bremermann's_limit
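
For scale, Bremermann's limit works out to c^2 / h, about 1.36e50 bits per second per kilogram of matter. A quick comparison (the 1e12 ops/s figure for today's hardware is just an order-of-magnitude guess):

    # Bremermann's limit: c^2 / h bits per second per kilogram of matter
    C = 2.998e8     # speed of light, m/s
    H = 6.626e-34   # Planck constant, J*s

    limit_bits_per_s_per_kg = C**2 / H
    print(limit_bits_per_s_per_kg)                      # ~1.36e50

    todays_ops_per_s = 1e12                             # rough guess for a ~1kg machine
    print(limit_bits_per_s_per_kg / todays_ops_per_s)   # ~1e38x headroom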

We've known for ages that germanium and similar materials offer better characteristics than silicon, but the cost to improve silicon has stayed below the cost to retool for germanium till now. If we do hit the limit at 5nm silicon, then the cost equation changes and alternative materials become worth the investment.


They already use germanium in their manufacturing processes. It's not all germanium because of the challenges that would create. There are also benefits to mixing differently sized atoms into the same lattice to affect the speed at which the electrons travel, so modern manufacturing often implants germanium atoms into the silicon lattice. I do agree that material improvements can change everything, so the oft-repeated doom and gloom about stagnation is often unfounded.


Mostly agree with what you said, but just throwing this fun tidbit out there: the stated nm "size" of transistors is no longer accurate (and hasn't been for some time). For example, when a company says it's manufacturing at 16nm or 14nm, that is the size at which the chip designs are drawn and planned, but the physical dimensions are closer to 20nm. The same will be true when they go down to 10 and 7nm; it won't actually be a 7nm transistor in physical dimensions. As for lithography, right now they're mostly looking at "extreme" ultraviolet wavelengths, but the problem is that mass production is not currently feasible given how long it takes the photoresist to react. That being said, things like quadruple patterning allow them to get pretty small even with existing (pre-EUV) wavelengths.


It doesn't matter whether Moore's law still holds or not; it's become irrelevant for most of the PC market. That's the point I was trying to make.

There are always good uses for more computing power, but not on the desktop where the big volumes are.


> Without rapid increases in usable computing power, the reasons for upgrading have largely disappeared.

1. Force Windows 10 upgrade.

2. Windows 10 sucks on this computer!

3. Buy Windows 10 compatible computer.


Microsoft's not stupid, they realize that people are keeping their computers longer than ever. They're also trying hard to make Windows 10 run on the widest range of machines possible. So I think it's a priority of theirs to make sure it doesn't suck. They don't want a repeat of Windows XP where nobody upgraded!


> They're also trying hard to make Windows 10 run on the widest range of machines possible.

Why do you say so? Do you work at Microsoft?

Anecdotally: I upgraded a somewhat older laptop to Windows 10 and found that the screen brightness was locked to full. Called up support and was told that no, there's no fix and no, there won't be one for the foreseeable future - mine isn't a "supported system", so it has to stay on Windows 7. Looked up the issue online and found many others with the same problem going back to release day.


Except that Windows 10 works, in my anecdotal experience, better than Windows 7 on the same box.


Yes, this thought really makes me wonder where future hardware computing advances will come from. How long is it going to take for a new company to 'disrupt' the processor industry in a way that the behemoths of today wouldn't have expected or weren't able to pull off? I would guess at least 10 years from now.


The end of Moore's Law has broad industry-wide implications. Imagine if Moore's Law had ended in the year 2000 - no smartphones...

A lot of future technologies that rely on exponential increases in silicon capability will not be possible. It's almost impossible to replicate an exponential.

The rate of progress will slow and look more like more mature industries such as aerospace.


Exactly. The tech industry has been incapable of coming up with new functionality that really requires more power.

The only thing I can see that is going to benefit a lot from CPU power is more advanced AI or machine learning built in. But we are only seeing very gradual moves in this area now.

VR and games are also candidates, of course, but most people aren't hardcore gamers. They are fine with Angry Birds ;-)

And with the bandwidth we have today, it wouldn't necessarily be a big problem to offload much of that processing to servers and do it as a service instead. In many ways we might be heading back to the dumb clients of old, where the majority of processing happens on a remote server.
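
A minimal sketch of that thin-client pattern, assuming a hypothetical remote service (the URL and response shape below are made up for illustration):

    # Thin client: ship the data to a server and let it burn the cycles.
    # The endpoint and API shape are hypothetical.
    import json
    import urllib.request

    def classify_remotely(image_bytes):
        req = urllib.request.Request(
            "https://api.example.com/v1/classify",   # hypothetical endpoint
            data=image_bytes,
            headers={"Content-Type": "application/octet-stream"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)   # e.g. {"label": "cat", "confidence": 0.93}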


The problem for Intel is that AI and machine learning are typically going to be done on the GPU. So, Nvidia. Also, a chance for AMD to recover.


Well, that's just another way of stating that CPUs have fallen way, way behind graphics cards in terms of power. At some point in the past the difference in power wasn't nearly as large.

If there were consumer-grade 16- and 32- core Intel processors with reasonable prices I would consider buying them for the sake of certain experiments. It would save me from dealing with CUDA or OpenCL. But right now that's the domain of pricey high-end server stuff.
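
For embarrassingly parallel experiments, plain multiprocessing is the kind of thing that would map straight onto a cheap 16- or 32-core part with no CUDA or OpenCL involved. A minimal sketch (the workload is just a stand-in):

    # Spread independent trials across CPU cores with the standard library,
    # instead of rewriting the experiment for CUDA/OpenCL.
    import random
    from multiprocessing import Pool, cpu_count

    def run_trial(seed):
        # stand-in for one independent experiment (simulation, parameter sweep, ...)
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(1000000))

    if __name__ == "__main__":
        with Pool(processes=cpu_count()) as pool:
            results = pool.map(run_trial, range(256))   # 256 independent trials
        print(len(results), "trials done")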


Yep, Moore's law is still going on strong in GPUs.


Or maybe we've just reached an entertainment/office plateau. Having more compute power benefits high-end needs, not average consumer ones.


Even games are slowing down a lot in their requirements, aside from that VR garbage.


Interestingly, there's a strategy game series called Europa Universalis, which moved from a 2D map to a 3D map quite a while ago... the reason being that the 2D map was rendered by the CPU, whereas a 3D map could be shunted off to the GPU to deal with, freeing up precious CPU cycles for the game logic.


2D maps can be rendered by a GPU just fine. However, when you're doing the upgrade from CPU rendering to GPU rendering anyway, you likely want to have some returns, so they probably just went to 3D in the same step at little extra development cost.


So Intel has had decades to become more than a PC parts manufacturer (albeit the largest one ever).

The issue is creating compelling reasons for buying a new PC. Intel/Microsoft have not been very successful at promoting those sales-drivers.


Yes, and Intel has been trying. Remember, there once was a time when Intel did not make CPUs at all. Intel has remade itself several times, and pretty much each time it required retrenchment and refocus. Intel has been trying for a long time to get wins in the mobile space, without much luck. High gross margin per wafer and high volume is a rare combination, and a difficult act to repeat.

An interesting aspect of Intel culture is how it reacts to body blows, urgently seeking a path out of the darkness along many different paths simultaneously. Remember the legend of "operation crush" -- the sales force was tasked with getting 1000 design wins for the 8088, from whatever place they could find them. Some guy in Florida found a renegade group in Boca Raton working on a skunkworks project called the "PC". Meh, it didn't sound like much, but hey, only 999 more design wins left to go, eh? Bag the order and get on the road again... whoever that guy was, I sure hope he made his quota that quarter.


That's exactly it. The same performance isn't (that much) cheaper, and it's actually difficult to find any sort of high-performance laptop (for cheap).

Plus, the last few laptops I've purchased all seem to die JUST after the warranties (extended or otherwise) expire.

Are they finally at least putting 1080p+ screens on these as a standard thing, or is everyone still stuck on that crummy 1366x768 regression?


> and it's actually difficult to find any sort of high-performance laptop (for cheap).

I went to a computer store recently and told them that I was interested in dropping about 2k on a laptop, and I brought a list of specs (at least 16GB of RAM, lots of cores, etc.). He suggested I should get a gaming laptop and never gave any suggestions from his store.

It will be interesting to see what the future holds, because they're already getting close to the physical limits of what they can build in terms of processors. If they start slowing down financially, it may be the case that we'll still be using computers that are about as fast as they are today 7-10 years from now.


128 cores?


While a nice idea, most software cannot even use 4 or 8 cores properly. That's one of the reasons Intel CPUs are still king compared to AMD in the desktop space. If you can use all of the cores AMD tries to cram into their CPUs, you can get good performance for a good price, but their single-threaded performance is an absolute disaster at the moment (and the one thing AMD has focused on most for their next architecture, Zen), which is still what many workloads depend on.


> While a nice idea, most software cannot even use 4 or 8 cores properly

No, that's wrong. They simply don't need to use 4 or 8 cores.


That's why we have ML and AI scheduled.


Your laptop already has a GPU, which has more cores than that.

Let's take the segment of the market with an NVidia GPU. They have CUDA, one of the most polished development environments for GPGPU today.

When was the last time you wrote something in CUDA? What was it for?


Interestingly enough, I doubt any of these GPUs actually has even 128 physically independent cores (CUDA cores are not actual cores).


The problem is that most applications would have to be completely rewritten with parallel algorithms, which does not fit well with many application domains (it does not bring additional speed).


Most software workloads can't use 128 cores in any useful fashion.
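
Amdahl's law makes that concrete: if a fraction p of the work parallelizes, the best case on n cores is 1 / ((1 - p) + p/n). A quick illustration (the 90% figure is just an example):

    # Amdahl's law: ideal speedup on n cores when a fraction p of the work
    # can run in parallel.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (4, 8, 128):
        print(n, "cores, 90% parallel:", round(amdahl_speedup(0.90, n), 1), "x")
    # Even with 128 cores, a 90%-parallel program tops out below 10x.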


Even if you get a high-performance processor in there, it's going to get seriously thermal throttled during any longer periods of serious work. I tried to do large-scale C++ on an ultrabook, but doing long compiles was super annoying. Not only did Qt take forever to compile, the whole machine was crawling while it was happening.


Much to my boss' disappointment, it's almost impossible to find a 1366x768 laptop, or more generally, one that isn't 1080p.


If you are in the United States, check Sam's Club. I was just there a couple days ago and they had several 1366x768 laptops--made by HP, if I recall correctly.


Writing this on a Toshiba I got at Staples on sale last year. It has 1366 x 768.



