

If someone swapped out your CPU for a slower one, would you notice? - tszyn
http://blog.szynalski.com/2010/09/22/if-someone-swapped-out-your-cpu-for-a-slower-one-would-you-notice/

======
mjfern
Sorry for the self-promotion, but here is a blog post arguing that the
latest microprocessors overshoot the needs of the vast majority of
customers, and that this spells trouble for Intel:
<http://www.fernstrategy.com/?p=229>. I've appended the key parts of the
post below for convenience:

In 1965, Gordon Moore, the co-founder of Intel, proposed that the number of
transistors on a chip (forming a microprocessor) would double approximately
every two years (Moore’s Law). Transistor count is important because the
number of transistors on a chip affects a microprocessor’s performance; i.e.,
the number of instructions performed within a given period of time. For the
last 45 years, this doubling trend has continued unabated, and the latest
generation of microprocessors from Intel contains about two billion
transistors.
Now combine this rate of technology development with the movement towards
cloud computing. Cloud computing reduces the need for high performance (bulky
and expensive) computers at every desk because complex processing tasks can
now be handled remotely. In short, the rate of technology development combined
with the advent of cloud computing has given rise to microprocessors that
overshoot the needs of the vast majority of customers. Because many customers
are content with older generation microprocessors, they are holding on to
their computers for longer periods of time, or if purchasing new computers,
are seeking out machines that contain lower performing and less expensive
microprocessors. The result for Intel is lower revenues and profitability.
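
As a rough sanity check on those numbers (taking the Intel 4004's ~2,300
transistors in 1971 as the baseline -- my assumption, not the post's):

    base_year, base_count = 1971, 2300    # Intel 4004 (assumed baseline)
    doublings = (2010 - base_year) / 2.0  # one doubling every two years
    print(base_count * 2 ** doublings)    # ~1.7e9, i.e. about two billion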

The next issue for Intel is that cloud computing is facilitating an entirely
new crop of mobile computing devices, such as netbooks, tablets, and
smartphones. The issue for Intel is that many of these mobile devices use the
ARM architecture, a competing technology that is more energy efficient. And
it’s very difficult for Intel to compete directly with ARM because of ARM’s
unique strategy. Unlike Intel, ARM does not produce and sell microprocessors
based on its technology; rather, it licenses its designs to other companies. If
Intel were to follow a similar strategy, its revenues and profitability would
drop significantly.

In sum, Intel is facing a big squeeze. Its revenues and profits are getting
squeezed at the top, because the average customer no longer needs or wants
the latest-generation Intel chip (thanks to Moore’s Law and cloud computing),
and squeezed at the bottom, as OEMs of mobile devices adopt the ARM
architecture in droves. And, like many disruptive technologies, the ARM
architecture is now moving up-market, beyond mobile devices. For instance, the
company “Smooth-Stone” recently raised $48m in venture funding to produce
high-performance, low-power chips based on ARM technology for use in servers
and data centers.

~~~
fragmede
The comparison between Intel and ARM is like comparing Microsoft and Apple. At
a customer-facing level they may appear similar, but they're different beasts,
and arguably in different markets.

------
zokier
Using a CPU that is under (artificial) load and using a slow CPU are extremely
different things. Responsiveness comes from a properly working scheduler, not
an overpowered CPU.

~~~
noodle
agreed, although it's also worth noting that for the majority of people,
paying for the bleeding edge of anything in their computer isn't going to
produce a tangible difference or be worth the premium they'd pay.

------
cs_loser
CPU contention is one of the least noticeable overload situations on a
computer because schedulers have hidden it well since forever. Even if your
video playback and your benchmark are at the same nice-value, the scheduler
notices that the benchmark is using lots more CPU, so it gets effectively
"niced" compared to the video playback, which then gets the CPU whenever it's
runnable.
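
A minimal sketch of how you might observe this (my own illustration, not
anything from the thread): run a pure CPU hog next to a sleeper that mimics a
30fps video loop, and compare how late the sleeper wakes up.

    import multiprocessing
    import time
    
    def hog():
        # Pure CPU burn at the same default nice value as the parent.
        while True:
            pass
    
    def avg_lateness(frames=300, interval=1.0 / 30):
        # Average extra delay before a sleep(interval) loop wakes up.
        late = 0.0
        for _ in range(frames):
            t = time.perf_counter()
            time.sleep(interval)
            late += max(0.0, time.perf_counter() - t - interval)
        return late / frames
    
    if __name__ == "__main__":
        print("idle system :", avg_lateness())
        multiprocessing.Process(target=hog, daemon=True).start()
        print("with CPU hog:", avg_lateness())  # usually barely worse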

If someone swapped out your RAM for less than the working set of the programs
you have running -- you would notice. Likewise, if it were an I/O benchmark
instead of a CPU benchmark -- you would notice. (Yes, there are I/O schedulers
now, and they help somewhat.) Memory and I/O are much harder for a kernel to
"make room" in by kicking other programs out of the way.

------
skip
Yeah, I would. If you have a laptop just set the max CPU speed to 80% (or
whatever, <100%) under power options to simulate the pain.

Maybe if you just type emails and surf all day it's not an issue, but if you
are debugging big applications, it is most definitely an issue.
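
On Linux, the equivalent of that power-options slider is the cpufreq sysfs
interface; a rough sketch (standard scaling_max_freq paths, needs root, and
assumes your driver exposes them):

    import glob
    
    # Cap every core at ~80% of its hardware maximum clock.
    for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
        with open(cpu + "/cpuinfo_max_freq") as f:
            hw_max = int(f.read())
        with open(cpu + "/scaling_max_freq", "w") as f:
            f.write(str(int(hw_max * 0.8)))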

~~~
GFischer
So would I. I use a 4GL development application (Genexus) that takes hours to
compile on some days.

~~~
hsmyers
One word--- fractals...

~~~
eru
Fractint was quite snappy on 386s.
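
For what it's worth, a toy benchmark in that spirit (my own illustration):
fractal rendering is pure arithmetic, so render time tracks CPU speed almost
directly.

    import time
    
    def mandelbrot(w=200, h=200, max_iter=256):
        # Count points of the w x h grid that stay inside the set.
        inside = 0
        for py in range(h):
            for px in range(w):
                c = complex(3.5 * px / w - 2.5, 2.0 * py / h - 1.0)
                z = 0j
                for _ in range(max_iter):
                    z = z * z + c
                    if abs(z) > 2.0:
                        break
                else:
                    inside += 1
        return inside
    
    t = time.perf_counter()
    mandelbrot()
    print("render took %.2fs" % (time.perf_counter() - t))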

------
SageRaven
I'd only notice if I fired up a game or some other major CPU-intensive
workload, which is few and far between most of the time. I run powerd on my
3.4GHz 4-core workstation, and the CPU spends 50% of its time between 800 and
1100 MHz; it often goes all the way down to 100 MHz if I shut down Firefox
for the night.

------
lotharbot
Whether a person will notice depends very much on how they're using the
machine, and where the bottlenecks are.

I'd notice if you swapped out my CPU for a slower one _or_ a faster one,
because it's my main bottleneck in SC2. But if I was only browsing the web, I
wouldn't notice. I have friends whose main performance hits are coming from
hard drive access, from too little RAM (and therefore paging/swapping), from
underpowered video cards, from a weak CPU, and from having too many resource-
wasting processes running in the background.

So, to expand on the guy's point: you might or might not notice the difference
between a faster and slower CPU, because it might or might not be what's
holding you back. If you're going to upgrade, make the right upgrade.

------
jacquesm
I know that Hector Garcia-Molina pioneered the re-purposing of exotic hardware
to be used as a space heater, but really, there are cheaper ways of doing this
(and arguably less likely to impact the lifespan of your computer fans in a
negative way).

Normally speaking, 'orthos' would run at a priority low enough that it would
not interfere with other foreground tasks, but you can actually set it to be
'bad', and then you would definitely notice.

see <http://rjlipton.wordpress.com/2010/05/20/sorting-out-chess-endgames/>
for the space heater usage of very nice machines.

------
sdkmvx
I was thinking about this today. In math class I got bored and started playing
with my calculator (a TI-83) and a friend's calculator (a TI-84). I wrote the
same program, essentially

    
    
        :1→X
        :While 1
        :Disp X
        :X+1→X
        :End
    

on both calculators. Starting both at the same time, the TI84 naturally
counted faster than the TI83. But while the TI84 was faster, I've used both
models a lot and have never really noticed a huge difference.

On the other hand, the TI-89 that I use for a different class performs
noticeably faster at everything (and when I ran the program on it later, it
seemed to count much faster).

------
daok
Am I reading correctly that he is using his CPU to heat his room...?

~~~
sophacles
I have a script in my ~/bin that does some very inefficient calculations on
numbers from /dev/random. It's called heat.py, and it used to get run in the
winter, when I lived in a less-than-ideal place with a barely working heater
and poor insulation. Definitely made a difference.

~~~
Naga
Would you care to post that? I am going to have some problems in my new place.

~~~
sophacles
Can't right now, as apparently my home internet connection is down. Off the
top of my head, though: basically I used Python's random module to get a
random int and a random float, then did a bunch of pow(rfloat, rint) calls,
multiplications in loops, bit shifts, and so on. Basically I tried to exercise
all the parts of the CPU I could think of and keep the CPU usage pegged.
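
A minimal reconstruction along those lines (hypothetical, going only by the
description above -- not the actual heat.py):

    import random
    
    # Hypothetical heat.py: peg the CPU with a mix of float
    # exponentiation, multiplication, and integer bit shifts.
    def burn():
        while True:
            rint = random.randint(1, 64)
            rfloat = random.random() + 1.0
            x = pow(rfloat, rint)                       # float exponentiation
            n = random.getrandbits(64)
            for _ in range(10000):
                x *= 1.0000001                          # float multiplies
                n = (n << 3 | n >> 61) & (2 ** 64 - 1)  # bit shifts
    
    if __name__ == "__main__":
        burn()  # pegs one core; run one copy per core for maximum heat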

~~~
andfarm
Better yet:

    
    
       apt-get install cpuburn

------
goalieca
If you permanently underclocked my CPU, I wouldn't notice until I went to run
numerical computations or loaded up StarCraft. But I would always notice a
slower hard disk or slower RAM.

------
teilo
I would notice if my DVD rips were taking longer to process, or if I were no
longer able to sustain my live streaming video feeds at church on Sunday
morning.

------
carlos
Windows already swapped my CPU...

------
ergo98
Either the CPU-consuming process was running at an idle priority -- meaning
the OS would _always_ push it aside for other activities that needed the CPU,
so that when the author was browsing the web or reading email, those more
important processes got the run of the processor -- or the author is just
incredibly tolerant. If there were something actually competing on an equal
basis for the same cycles (instead of only soaking up otherwise-idle cycles),
the impact would be obvious to anyone.
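
For reference, the idle-priority setup looks something like this on Unix (a
sketch; nice 19 isn't a true idle scheduling class, but it's close in
practice):

    import os
    
    os.nice(19)      # lowest conventional priority for this process
    while True:
        pass         # burns cycles, but only ones nobody else wants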

While for some reason "browsing the web" always comes up as a low-need
activity, it happens to be the area that really differentiates machines. It is
not a low-demand activity, and hasn't been for years.

