
Why isn’t CPU time more valuable? - panic
https://www.johndcook.com/blog/2019/03/07/cpu-time/
======
abecedarius
You could look at this as "CPUs are cheap, of course we can afford to leave
them idle." But a more interesting angle is, "For one programmer's hourly
cost, you could run 4000 CPU cores continuously. Can there really be no
practical way to apply _thousands_ of cores to boosting the programmer's
productivity? What are we missing?"

For instance, couldn't
[https://github.com/webyrd/Barliman](https://github.com/webyrd/Barliman)
develop into something that makes computing worth spending on at that level?
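
As a back-of-the-envelope check on the 4000-core figure: a minimal Python
sketch, assuming the article's ~$0.025 per core-hour and a loaded programmer
cost of ~$100/hour (the latter is my assumption, not a figure from the post).

    # Hypothetical inputs: ~$0.025 per core-hour of cloud compute (the
    # article's figure) and ~$100/hour loaded cost for one programmer
    # (an assumption).
    core_hour_price = 0.025    # dollars per core-hour
    programmer_hour = 100.0    # dollars per programmer-hour

    cores_per_programmer = programmer_hour / core_hour_price
    print(cores_per_programmer)    # -> 4000.0 cores running continuously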

~~~
mobjack
What practical problems can 4000 CPUs solve that 16 CPUs can't?

You get less and less benefit from each additional CPU you add to the
problem, unless the CPU is the main bottleneck.

Also, if something requires 4000 CPUs, it is going to start getting expensive
if you need to double the output. These types of problems don't scale well.
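
The diminishing-returns point is essentially Amdahl's law. A minimal Python
sketch (the 95%-parallel fraction is purely an illustrative assumption):

    # Amdahl's law: speedup from n cores when a fraction p of the work can
    # run in parallel; the remaining (1 - p) stays serial no matter what.
    def speedup(n, p=0.95):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (16, 64, 256, 1000, 4000):
        print(n, round(speedup(n), 1))
    # With p = 0.95 the speedup caps out near 1 / (1 - p) = 20x, so 4000
    # cores buy very little that a few hundred cores didn't already.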

~~~
pradn
One obvious answer is parallel builds. It's an immense waste of developer
productivity to force builds on developer machines.

~~~
Macha
At sufficient speed, too. A build on our build farm takes 2 hours; my local
Mac does it in 50 minutes.

~~~
creatornator
Am I reading this right that a build farm is slower than your local
development computer? Shouldn't that prompt a re-evaluation of the build farm?

------
lasereyes136
You could say the same about many things. My car is idle most of the time. I
usually only use it for 1-2 hours a day during the week and less on the
weekends. I could use Uber or Lyft instead, but I don't. I could rent out my
car when I am not using it, but I don't. It is idle because I want to use it
when I want to use it, without coordinating with anyone.

The same is true of my computer. I once worked at a Unix shop where people
would routinely log into other people's computers to do builds. It locked the
machine up (this was in the 90s) and made it hard to do anything else on the
computer. The whole point of a personal computer is to have all of that power
there for your use when you want it.

~~~
bbeekley
> My car is idle most of the time. I usually only use it for 1-2 hours a day
> during the week and less on the weekends. I could use Uber or Lyft instead,
> but I don't. I could rent out my car when I am not using it, but I don't. It
> is idle because I want to use it when I want to use it, without coordinating
> with anyone.

That's an interesting comparison, and one I relate to. If availability and
startup time were good enough, I'd be happy to use shared resources for both
CPUs and cars.

------
mynegation
The article contains the answer to the question: CPU time is not the
bottleneck. The bottleneck is the time of qualified people turning real-world
problems into code and interpreting the results. And that time is expensive.

~~~
ip26
Which, by the way, is also why we don't have dynamically reconfigurable FPGA
coprocessors, as was once dreamed of.

------
pletnes
A large part of it is that memory access is comparatively slow: getting the
data to the core is the challenge. Most processing is so fast that the
operation you want to perform is limited by memory and I/O latency. Some have
suggested offloading compilation to other machines; this works, but only if
the compilation jobs are large enough for it to yield a net benefit. Similar
arguments apply to data analysis: moving the data is too expensive, so you
move the code to the data instead.

------
vortico
John is confusing CPU time with CPU-core time. I assume he gets the
$0.025/hour figure from "a1.medium", which is 1 core ("vCPU") with 2GB RAM.

As a result of competition, the price of any service tends toward the cost of
the service. So a $2000 computer with 8 cores that lasts 4 years + $500 for
power/year + $500 for support/year is $1500/year or $0.17/hour. That's
$0.021/core/hour, and the difference is a small amount of profit for Amazon
and a buffer for when the CPU is idle.
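
For what it's worth, the arithmetic behind those numbers as a small Python
sketch, using the same assumed figures:

    # Back-of-the-envelope owned-hardware cost, using the figures above.
    hardware = 2000.0         # dollars, amortized over 4 years
    years = 4
    power_per_year = 500.0    # dollars
    support_per_year = 500.0  # dollars
    cores = 8

    yearly = hardware / years + power_per_year + support_per_year  # 1500.0
    per_hour = yearly / (365 * 24)                                 # ~0.17
    per_core_hour = per_hour / cores                               # ~0.021
    print(round(yearly), round(per_hour, 3), round(per_core_hour, 4))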

~~~
magduf
> So a $2000 computer with 8 cores that lasts 4 years + $500 for power/year +
> $500 for support/year is $1500/year or $0.17/hour. That's $0.021/core/hour,
> and the difference is a small amount of profit for Amazon and a buffer for
> when the CPU is idle.

The other factor is power. That $500 for power for a year won't be $500 if the
CPU is running full-tilt all the time, it'll be much more. Modern CPUs are
designed to be power-efficient when idle, and then be able to perform many
computations quickly when demanded, even if this actually exceeds the
hardware's ability to dissipate heat (in which case the CPU throttles itself
to avoid overheating). For computers used interactively by people (i.e.,
laptops, desktops, but not servers), the workload tends to be extremely
bursty, with the CPU doing nearly nothing most of the time waiting for the
user to do something, and then suddenly having to do a lot of work quickly
when demanded (e.g., rendering a bloated webpage or watching a video).

In short, CPU time isn't just being "wasted"; modern CPUs are explicitly
designed to be used this way.
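
To make the burstiness point concrete, a small sketch of how average draw
depends on duty cycle (the idle and peak wattages are illustrative
assumptions, not measurements of any particular CPU):

    # Average power draw for a bursty workload, using made-up wattages.
    idle_watts = 10.0    # assumed near-idle package power
    peak_watts = 95.0    # assumed full-load package power

    def average_watts(duty_cycle):
        # duty_cycle: fraction of time spent at full load, 0.0 - 1.0
        return idle_watts * (1.0 - duty_cycle) + peak_watts * duty_cycle

    for duty in (0.02, 0.25, 1.0):
        print(duty, round(average_watts(duty), 1))
    # A machine busy 2% of the time averages ~11.7 W; pinned at 100% it
    # draws the full 95 W, and the yearly power bill scales accordingly.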

~~~
teraflop
Where I live (Texas) electricity costs about $0.08/kWh, which means $500 per
year will buy about 700 watts of continuous power. That's much higher than the
TDP of any 8-core processor on the market. Even if you have more expensive
power and account for cooling costs, it's hard to see how you could end up
spending much more than $500/year to power a single machine.
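
The conversion, as a quick sketch using the figures above:

    # How many watts of continuous draw $500/year buys at $0.08/kWh.
    dollars_per_year = 500.0
    price_per_kwh = 0.08

    kwh_per_year = dollars_per_year / price_per_kwh     # 6250 kWh
    watts = kwh_per_year / (365 * 24) * 1000            # ~713 W continuous
    print(round(kwh_per_year), round(watts))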

~~~
bzbarsky
Just for comparison, where I am (Massachusetts) it's $0.23/kWh or so for
residential service.

~~~
souprock
It's $0.05 here in Florida. Cost of living is no joke.

It's only a penny for the big industrial users.

~~~
bzbarsky
$0.05 for both generation _and_ transmission charges? I would have expected
that to be more like $0.09 or $0.10 for Florida... Still a good bit better
than MA, of course. :)

~~~
souprock
There are some extra things, like a fuel charge. It comes to about $0.08 or
$0.09 adding those things in, plus about $8 per month just to be connected.

------
sytelus
The human brain has the equivalent of at least 30 TFLOPS of computing power
[1]. Getting the same computing power from CPUs at $6.3 per TFLOPS-hour on
AWS would cost $189/hr [2]. So renting a human as a "general AI computer" is
more than 27 times less expensive at the moment.

Also, don't forget that humans come with powerful high-precision mobile
actuators and unmatched sensor arrays, and that they come pretrained in a
large array of complex skills, including object recognition, text-to-speech,
etc. My question would actually be the reverse of the one posed in the
article: why isn't Mechanical Turk a far more lucrative business than AWS?

So, in conclusion, if you have $1M lying around, you are more likely to find
a profitable endeavor by renting humans than by renting the same amount of
compute capacity in the cloud. The price of GFLOPS is falling, however, at
about 10x every 13±3 years, so possibly in 20-30 years things will be
different.
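
A sketch of that arithmetic (the ~$7/hour figure for human labor is an
assumption on my part; it is roughly what makes the 27x ratio work out):

    # Brain-equivalent compute rented from AWS vs. renting a human.
    brain_tflops = 30.0              # lower-bound estimate from [1]
    aws_per_tflops_hour = 6.3        # dollars, from [2]
    human_per_hour = 7.0             # dollars, assumed cost of human labor

    aws_cost_per_hour = brain_tflops * aws_per_tflops_hour  # ~$189/hr
    ratio = aws_cost_per_hour / human_per_hour              # ~27x
    print(round(aws_cost_per_hour), round(ratio))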

[1] [https://aiimpacts.org/brain-performance-in-
flops/](https://aiimpacts.org/brain-performance-in-flops/)

[2] [https://aiimpacts.org/recent-trend-in-the-cost-of-
computing/](https://aiimpacts.org/recent-trend-in-the-cost-of-computing/)

------
RivieraKid
Because cost does not equal value. This is called the consumer surplus, the
difference between how much you're willing to pay for a product and the market
price. For example, most people would be willing to pay more for internet
access than they currently do.

------
rcar
I think he hit the primary reason in this paragraph from the article:

 _We might need hundreds of hours of CPU time for a simulation, then nothing
while we figure out what to do next, then another hundred hours to run a
modification._

Now that CPU time is so readily available vs. the mainframe era or even just
the pre-cloud era, there's far less capacity-forced "figure out what to do
next" time unless you've got an extremely computationally heavy problem or are
very resource-constrained. As such, there's a lot less unmet demand for
compute out in the world, which naturally brings cost down.

------
x43b
This has always bothered me too, both when computers were expensive to me and
now that they're cheap. I get paid a good amount of money during the day to
use a computer to compute things, display/interface with me, and communicate
with humans and other computers. Then my computer is idle or asleep when I am
not there. This feels inherently wrong to me. How is it that I cannot come up
with something useful for my computer to do when I am not there, so that it
makes real contributions by the time I resume work? (Disclaimer: engineer who
programs, but not a computer scientist.)

~~~
jandrese
Isn't this the point of Folding@Home, SETI, etc...?

------
gnode
I think the answer is that CPU time doesn't have consistently high value to
the CPU owner, and that there isn't a free market for it, because CPU time is
effectively non-transferable due to security concerns.

It isn't practical to buy compute power from a diverse set of CPU owners,
because any of them may be malicious, and this problem only grows with scale.
The only exceptions are cases where you can tolerate or mitigate malicious
CPU owners, which doesn't lend itself to general computing.

------
penagwin
Part of the problem is that most entities who use CPUs have more than they
need. If you want to make money with your CPU, then you need to use it to
produce a good or service worth more than a few cents an hour. The issue is
that everyone else also has CPUs, so whatever you do can't be trivial.

At that point you're optimizing for the market for your good/service relative
to its cost, and not optimizing for CPU usage.

------
pjc50
It's commoditised very effectively. So the cost is capital (buy computer) +
operating (electricity + sysadmins) + margin (tiny).

------
CydeWeys
I don't really understand why you'd expect CPU time to be more valuable. If
anything you'd expect it to be cheaper, given how the majority of CPU time
goes unused for anything useful.

Many decades and untold billions in R&D went into modern computers. I wouldn't
expect them to be anything but extremely cost-effective, and they are.

~~~
cestith
The author of the blog entry is looking at things from a different angle. He
knows the cost is low because demand is low. He's trying to figure out why
demand is low. He wants to find a way to make that idle time useful time that
produces value greater than the cost.

------
floatboth
Fuzzing is a great use of spare CPU time…
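
For anyone curious what spending spare cycles on fuzzing looks like in
miniature, here is a toy Python sketch; real fuzzers such as AFL or libFuzzer
are coverage-guided and far smarter, and the parser below is a made-up
stand-in for the code under test:

    import random

    def toy_parser(data: bytes) -> None:
        # Stand-in for the code under test; a real target would be a file
        # format or protocol parser. This one divides by the second byte
        # whenever the first byte is 0x7f, so a 0x7f followed by a zero
        # byte makes it blow up.
        if len(data) >= 2 and data[0] == 0x7F:
            _ = 256 // data[1]

    # Dumb random fuzzing loop: burn spare cycles generating inputs and
    # report any input that makes the target raise.
    random.seed(0)
    for _ in range(1_000_000):
        size = random.randrange(1, 16)
        blob = bytes(random.randrange(256) for _ in range(size))
        try:
            toy_parser(blob)
        except Exception as exc:
            print("crashing input:", blob.hex(), "->", exc)
            break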

~~~
UweSchmidt
We do have an energy constraint (i.e. catastrophic global warming), and idle
CPUs use less energy.

~~~
adrianN
I tend to agree with you, but CPUs only use a percent or two of our energy and
idle CPUs still use power. It seems to me that there are lower hanging fruit
that we should pluck first.

~~~
sagartewari01
Overuse reduces the lifetime of CPUs. I'm pretty sure running a CPU at full
throttle for a week will burn it out.

~~~
penagwin
I'd like to see a source on that.

With most electronics, CPUs and GPUs included, the killer is heat. As long as
you have the heat under control, you're fine. Some applications, such as
laptops, can't keep the thermals under control at sustained 100% CPU usage,
so those are obviously at risk.

The only other form of "wear" is electromigration, and I HIGHLY doubt you'll
kill a modern processor "in a week", even if you left it running at 95°C.

------
buboard
Yeah, but what about GPU time? It has unfortunately become an arms race in
AI, and everyone who doesn't work for a big co is essentially precluded from
research and innovation.

~~~
jdietrich
GPU time is available by the minute from AWS/GCP/etc and it's dirt cheap. If
you think it's expensive, I envy your youth.

------
mcguire
Hey! With the article about Cringely and this mentioning Condor
([https://research.cs.wisc.edu/htcondor/](https://research.cs.wisc.edu/htcondor/)),
it's flashbacks week.

It was originally designed to soak up CPU cycles on unused desktop machines,
but I usually used it in dedicated clusters.

I suppose the modern batch-processing hotness is Docker/Kubernetes, which are
very heavy-weight for that usage.

------
carimura
There's an abundance of CPU power. Access to those CPUs is one problem, but
that's essentially why two-sided marketplaces exist. They've emerged for this
exact problem in other categories (as pointed out in other comments), such as
AirBNB/VRBO (housing), Turo/GetAround (cars), liquidspace (offices), and
boatsetter (boats).

I wonder: if access were democratized, would demand increase? Jevons' paradox
in action?

------
api
I wonder if this isn't why Moore's Law appears to be slowing down. Maybe there
are still huge gains to be made but the economic drive is not there.

~~~
magduf
It's slowed down for several reasons:

1) We hit a brick wall with silicon clock speeds. Silicon apparently can only
go 3-4 GHz; after that, there are too many switching losses, too much power
used, etc.

2) Because of #1, we jumped on the multi-core bandwagon. This worked OK for a
while, but most tasks can only be broken up and run in parallel so much. You
can't just throw 1000 cores at every problem and expect it to scale. For
anything with user interaction, this is especially true, so there's no point
in having more than 4-8 cores on a single-user machine.

3) For the stuff normal people do, there just isn't much demand for more speed
any more. How much faster do you need MS Excel to calculate your spreadsheet,
or PowerPoint to show you slides?

~~~
berbec
I'd also argue:

4) CPU speed isn't the limiting factor much of the time now. Disk, memory,
network, user input, etc. are often much more impactful.

Sure, getting a Blender run down 10% is huge, but what is that time saved
compared to how long setting up the render took?

------
cimmanom
If my computer didn't overheat and throttle the CPU any time I push it over
50% for more than 5 minutes at a time, I might buy this.

------
expopinions
Depends on what you mean by speeds, but there's a huge confluence of reasons
that processors are where they are today.

If you mean the clock speed specifically, then it's largely due to the
inability to manufacture smaller gate widths in silicon. The Core 2
architecture by Intel, for example, uses a 45 nanometer gate width for
transistors in each core. Core 2 was part of the Penryn family. The latest
family is Nehalem, and it, too, uses 45 nanometer gate widths. Core i5 and i7
belong to this family, among others.

Since the gate widths didn't shrink from the Penryn family to the Nehalem
family, the power consumption of a single state change in a given transistor
didn't decrease. Since the heat dissipation (and, therefore, power
consumption) is proportional to both the gate width and the clock speed, this
new architecture couldn't change the state of the transistors any faster than
the previous one. Therefore, core clock speeds remained pretty constant.
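
For reference, the usual first-order CMOS dynamic-power relation behind that
argument is P ≈ a · C · V^2 · f. A small sketch with illustrative numbers
(not Penryn- or Nehalem-specific values):

    # First-order CMOS dynamic power: P = activity * C * V^2 * f.
    # All values below are illustrative assumptions, not real chip specs.
    def dynamic_power(alpha, cap_farads, volts, hertz):
        return alpha * cap_farads * volts ** 2 * hertz

    base = dynamic_power(0.2, 1e-9, 1.2, 3.0e9)    # one block at 3.0 GHz
    faster = dynamic_power(0.2, 1e-9, 1.2, 4.5e9)  # same silicon, 1.5x clock
    print(round(base, 3), round(faster, 3), round(faster / base, 2))
    # -> 0.864 1.296 1.5: power grows linearly with clock at fixed voltage.

In practice, reaching a higher clock usually also requires raising the
voltage, so the real penalty grows faster than linearly with frequency.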

Getting to 45 nm was really tough. Going to the next frontier, which will
likely be 32 nm, will be even tougher. So tough, in fact, that
STMicroelectronics, Freescale Semiconductor, NXP Semiconductor and Texas
Instruments have all decided to stop their process research. An article in
2007 claimed that Intel, IBM and Matsushita, AMD and Renesas would be the only
organizations still pursuing R&D in this area. That's a vastly reduced set of
brains and dollars on the gate width problem.

If your question about "speed" is more general, well, then there's another
discussion around multi-core architectures that's also fascinating. The
primary technical advances in Nehalem versus predecessor families are its
multithreading, caching, bus and memory management schemes. If you keep each
core at 3 GHz, how can you efficiently use two 3 GHz cores to get, say, 1.5
times the speed of a single core? How can you efficiently use four 3 GHz cores
to get, say, 1.5 times the speed of two cores? In this respect, processor
speeds have increased significantly in the last 5 years, and will continue to
do so as software is written to take advantage of these new architectures.

But, then again, when was the last time you really found yourself waiting for
your processor? It was probably your disk, your network or your brain that was
the bottleneck in the first place. :-)

