
Why Moore's Law, not mobility, is killing the PC
http://www.networkworld.com/news/2013/030513-why-moore39s-law-not-mobility-267394.html
======
UnoriginalGuy
The article goes on and on about CPUs. They're right, of course: anything
after a Core 2 Duo is likely "fast enough." Tons of people (myself included)
have been buying i5s instead of i7s for that very reason.

But they're wrong in general. Memory capacity has continued to climb (from 2
GB with Vista, to 4 GB with Win7, to 8 GB today). I wouldn't build a PC today
without 16 GB of RAM.

Then we get to talk about the MASSIVE amount of improvement in IO in the last
few years. The 7200 RPM HDD is dead. SSDs are now affordable. You can get a
256 GB SSD for £100.

When consumers experience an SSD system next to their old "slow" HDD system,
they will be at the stores in droves wanting to upgrade. The speed
improvement from an SSD is very noticeable indeed: night and day.

~~~
Spooky23
You're missing the point -- computers for the mass market are no longer an
enthusiast market. They are like TVs.

Where I work, we're stuffing users into 2GB/1core VDI instances. They work
fine. The only new client purchases are replacements of dead stuff, devices
that improve mobility, and some dev/kiosk machines.

~~~
ghshephard
So, your users aren't working on traditional "desktop PC" instances anymore,
but on inexpensive terminal clients that connect to a back-end hypervisor
running on a server? I've been waiting to see when companies would start
doing that en masse. Back in 1999/2000, I thought it was going to take off
when Sun was promoting the thin-client concept. Then VMware took it up, but
for some reason I never saw it get any traction, and the vast majority of
companies (well, in my experience, all of them) went the other direction,
countering where my instincts thought we were headed: they started issuing
everyone laptops -- the exact opposite of thin clients connected to servers.
I had to eat a lot of claim chowder.

If I'm understanding your situation correctly: how do your thin clients work
out in practice, and how do people deal with scenarios in which they would
normally use laptops (business trips, airplane flights, etc.)?

~~~
Spooky23
I think the issue, until a couple of years ago, was that compute densities
weren't there and broadband wasn't pervasive.

Now you can deliver a decent desktop through a VMware VM-centric solution or
a Citrix/Microsoft server-based computing solution.

The use cases work out like this: stationary workers get PCs reimaged with a
locked-down OS, and those PCs get replaced by thin clients over time; mobile
workers get either laptops with remote-access clients or tablets. The mobile
folks love it, because they aren't tied down to some horrific work-issued
laptop.

The big remaining problem is Microsoft licensing. Unless you have an EA
(Enterprise Agreement), the licensing is both obtuse and expensive. Lots of
big enterprises are rolling out thousands of VDI desktops -- 2011/2012 was
the point where the financial case started to make sense.

------
api
Starting in the 90s up until today, the CPU surged ahead of everything else:
disk, memory, and network. Today we have very, very fast chips that are bound
by the speed of their connection to their memory, disk, and each other (the
Internet).

Memory has improved with DDR technology. (What are we up to now... DDR4?)
Disk is improving with SSDs, which offer a _huge_ speed improvement. If
you've never used a box with an SSD in it, the difference must be seen to be
believed.

But the real bottleneck I think is the Internet. Imagine what we could do with
gigabit peer-to-peer IPv6 networks in areas like games, collaborative
development tools, teleconferencing, etc. Things like virtual reality
"metaverses" could _really_ arrive, etc. These in turn would require more CPU
power and parallelism. Super-fast gigabit+ WANs would deliver the killer apps
that we need to keep pushing the speed barrier.

Once again, it is the Internet carriers that are the barrier to all progress.
Breaking the telco oligopolies is _the_ thing that _must_ happen to unlock the
future. Everything-- maybe even the continuation of Moore's Law-- depends on
that. The telcos, cable companies, and wireless carriers are the trolls
guarding the bridge to the future.

~~~
TillE
Having a speedy home connection is all well and good, but what really needs
to improve is mobile data. Think of stuff like Google Glass, and all the
possible advantages of having a constant connection everywhere you go for a
two-way exchange of data.

Now imagine your data bill after just a couple hours of live streaming HD
video. Something's gotta change. The bandwidth keeps increasing as technology
improves, but the price per bit is actually getting higher.

~~~
api
Mobile data is the worst by far, but if we're talking about pushing the
performance envelope then we're talking about PCs. Mobile is constrained by
power consumption as well as network bandwidth, so you're not going to see
blazing-fast continuations of Moore's Law there.

I really think Google Fiber nationwide would lead to killer apps that would
revitalize the PC and the high-end chip market. I'm thinking of rich
constant collaboration, virtual reality, next-generation MMORPGs where you
can richly interact with the world's physics (e.g. shaping materials,
in-world chemistry), telepresence, digital telepathy with brain-computer
interfaces, distributed supercomputing... lots of stuff. All of those are
CPU-hungry, and many are embarrassingly parallel, so they can utilize
many-core chips efficiently.

If Intel, AMD, Dell, Apple's PC/laptop division, IBM, etc. knew what was good
for them they'd get together and invest in gigabit+ broadband nationwide. It's
also got to be network-neutral, since all the things I mentioned above are at
least somewhat P2P. The PC industry is spiraling down because of a lack of
killer apps for high-end boxes. Most of these are waiting on fast networks.

------
JonnieCache
Forget your grandmother. There are loads of hobbyists out there doing high
quality video, audio and software development work on boxes that are 5+ years
old, quite happily.

The set of things for which you need a superfast machine is really very
small (compared to the whole), especially if you are happy working within
the limitations of the tools you have to hand -- which describes a huge
number of creative people, and most of the really good ones.

~~~
yareally
One reason to need a fast CPU and lots of RAM is building Android from
source (primarily within the Android modding/hacking community). I point it
out mostly because it has wider appeal than most other large projects that
typical users regularly build from source these days, and also because of
its demanding build requirements. Rebuilding the entire source can be
lengthy without a fairly new CPU, an SSD, and lots of RAM. Granted, one does
not have to rebuild the entire source most of the time, but rebuilding some
parts, like the framework base (the Java userland libraries), takes a while.

The following specs came from Google[1] the other year, when ICS was
released:

- 6 GB of download.

- 25 GB of disk space to do a single build.

- 80 GB of disk space to build all AOSP configs at the same time.

- 16 GB of RAM recommended, more preferred; anything less will measurably benefit from using an SSD.

- 5+ hours of CPU time for a single build, 25+ minutes of wall time, as measured on my workstation (dual E5620, i.e. 2x quad-core 2.4 GHz with HT, 24 GB of RAM, no SSD).
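As a sanity check on those last two figures (a rough sketch using only the
numbers quoted above): 5 CPU-hours spread across the workstation's 16
hardware threads gives an ideal wall time of just under 19 minutes, so the
observed 25+ minutes implies roughly 75% parallel efficiency -- and on a
4-thread laptop the same build would take well over an hour.

```python
# Check the quoted AOSP build figures against ideal parallel scaling.
cpu_minutes = 5 * 60          # 5+ hours of CPU time
threads = 16                  # dual E5620: 2 sockets x 4 cores x 2-way HT
ideal_wall = cpu_minutes / float(threads)     # 18.75 minutes
observed_wall = 25.0
efficiency = ideal_wall / observed_wall       # ~0.75

# Same CPU time on a 4-thread laptop, at the same efficiency:
laptop_wall = cpu_minutes / (4 * efficiency)  # ~100 minutes
```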

Lots of RAM allows you to move the compile output to a ramdisk (or, failing
that, to a separate SSD if you have two). Moving the entire source to a
ramdisk or fast SSD also makes editing much faster (especially the initial
load into an IDE like IntelliJ or Eclipse), as does grepping through the
codebase. Doing it on anything less becomes somewhat tedious, in my
experience.

[1] <https://groups.google.com/forum/?hl=en&fromgroups=#!topic/android-building/N1exifTpPfA>

~~~
tsunamifury
I don't understand why anyone would do this locally when you can compile on
an elastic cloud instance, many times faster, for pennies. Why keep
upgrading your hardware when you could just submit these jobs to an
ultra-cheap server that will be many times faster anyway?

~~~
jiggy2011
It might be cheaper in the long run to have it locally if you do it a lot.
At current pricing, tying up that much EC2 (roughly equivalent to an XL
instance) for 5 hours is going to be measured in dollars rather than
pennies.

~~~
yareally
You're totally right. Compiles for AOSP add up quickly when testing or
building for multiple devices. Even CyanogenMod's (the largest group)
nightly automated builds are done on bare-metal servers.

------
ghshephard
Excellent article -- I've been waiting for someone to capture the fact that,
as a reasonably productive network engineer/technical account manager, I'm
able to work every day, completely proficiently, with my Dell Precision
650, circa January 2004: Visio, PowerPoint, Excel, VMware Workstation (with
5-6 OpenBSD + Linux VM instances running), Cisco Dynamips simulations,
Outlook+Lookout, PowerPoint, and lots more Visio. Prior to 2003, I really
felt the need to upgrade every 2-3 years, and by year 3 my PC was really,
really starting to drag.

I've just never felt the need to lean on IT to upgrade/replace my current
desktop since then. It runs just fine.

~~~
DeepDuh
While I share your sentiment, I'm honestly kind of impressed that you run
5-6 Linux VMs on a Pentium 4/AMD64-generation system. Really? Those things
didn't have virtualization support on the CPU yet, did they? Also, RAM
probably maxes out at around 4 GB, which is quite low for an updated WinXP
with so many instances. Does that really run smoothly? I recommend you get
an i5, 16 GB, and a 500 GB SSD, and then come back and tell me the upgrade
wasn't worth it ;-).

~~~
drucken
VMware Workstation was first released in 1999, and it never used to
_require_ hardware virtualization support until recently.

Also, 4 GB is plenty for Windows XP. Perhaps you have not used it in so
long that you've forgotten, but it only ever required 64 MB and ran
perfectly fine with 128-256 MB, even with high-performance apps. All that
has happened since is a constant-factor increase in memory requirements for
a limited set of applications.

With Linux guest instances, it is easy to cap the maximum memory usage and
keep it very low: six VMs at 256 MB each is still only 1.5 GB.

Without a change in software requirements, I doubt many consumers would feel
the need to upgrade anything. CPUs/PCs really have been fast enough for many
years now.

~~~
DeepDuh
Did you also _update_ your XP to the latest service packs and patches? My
updated XP in a VM currently sits at 780 MB after one morning of _very
light_ usage, with only one application open, at 100 MB.

Also, yes, hardware virtualization wasn't always needed, but without it
anything slightly CPU-intensive will be _awfully_ slow. I can see it
working for CLI-only Linux/BSD systems, but not much else. Do you ever
compile anything big on your guest systems, for example?

------
zurn
Nobody is mentioning the multicore problem? Wide out-of-order CPU designs
hit diminishing returns a while ago in adding more instruction-level
parallelism to a single thread of execution. Processor speedups haven't
been dominated by faster transistors for a while; they've come from smarter
OOO. You can see this now in ARM-land as they start climbing the OOO
ladder.

Multicore has been the best "next step" they've come up with, but the
difficult software rewrites it requires have prevented consumers from
seeing speedups.

Total on-chip execution resources have essentially kept on the Moore's Law
path as CPU vendors punted to multicore designs, but 99% of PC software is
serial, since parallel software is too hard to write.

Running the same code faster was the reason software used to become more
hungry each CPU generation and force people to upgrade their machines. New
CPUs used to make the programmer more productive, not less.

~~~
mark-r
The problem is that in a fight between Moore's Law and Amdahl's Law, Amdahl
will win. Extra cores reach diminishing returns very quickly.
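A quick sketch of why (the 90%-parallel figure is just an illustrative
assumption): Amdahl's law caps the speedup at 1/(1-p), where p is the
parallelizable fraction, no matter how many cores you add.

```python
def amdahl_speedup(p, n):
    """Upper bound on the speedup of a program whose parallelizable
    fraction is p, run on n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# A 90%-parallel program can never exceed 10x, however many cores:
for n in (2, 4, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 64 -> 8.77, 1024 -> 9.91
```

Going from 8 to 64 cores here buys less than a 2x improvement, which is the
"diminishing returns" in question.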

------
qwerta
It is not just that 'old is good enough'; in recent years PCs have actually
been getting worse. I would love to spend money, but there is no stuff to
buy.

My 7-year-old laptop has a higher display resolution than anything on the
market (except two new laptops). It is also impossible to buy a laptop with
a 4:3 screen ratio.

My 2-year-old desktop CPU, overclocked to 5 GHz, is faster than anything on
the market; the last two CPU generations cannot reach such performance.

And some of the newest generation of SSDs actually have worse performance
characteristics than the previous generation.

~~~
jiggy2011
_My 2-year-old desktop CPU, overclocked to 5 GHz, is faster than anything
on the market; the last two CPU generations cannot reach such performance._

That's a fairly extreme overclock! I imagine you must be able to get some
impressive results from a modern i7 with the right cooling, though?

I remember when overclocking was something anybody into PC gaming did for
economic reasons; these days it hardly seems worth the effort of fiddling
around in the BIOS menu.

~~~
qwerta
Cooling is a regular Antec H2O 920; it fits into the case without any
problem. It took about 3 hours to assemble and overclock -- I just followed
the manual. It runs 24/7; I do number crunching and Scala development (slow
compiler).

Intel does not bother soldering the heat spreader to the die anymore (saves
50 cents), so newer CPUs cannot be overclocked that far without
disassembling the chip and replacing some parts.

The results are not bad, but in reality it is just a 30% speedup.

~~~
jiggy2011
What is the CPU?

~~~
qwerta
2600k

------
DanBC
The article doesn't mention the bring-your-own-hardware trend that's
increasingly happening at work.

Docking stations have been tried before, and they didn't work particularly
well, but the time is right now for a great tablet with a great docking
station. The dock needs keyboard and mouse inputs, network connectivity,
and the ability to output to a nice monitor (allowing the user to have two
monitors?).

Someone needs to sell that as a kit, with some kind of office suite
installed.

MS could make a lot of money here if they put a lot of effort into trimming
down MS Office. (As could a Lenovo/LibreOffice collaboration, because
although Excel is still much better than Libre's spreadsheet, the latter is
good enough for many people.)

~~~
jiggy2011
I don't know -- if you accept the article's premise that old/cheap PC
hardware is good enough for most applications, then what do you gain by
having a docking station instead of a desktop PC?

There is only a small cost saving, since the expensive things are the
peripherals and the screen, and you now have to deal with problems like
employees taking the tablets home and losing/breaking them, getting malware
on them, or filling them up with personal stuff.

The cost of having a few computer components nailed to the desk is probably
less than the loss of productivity when somebody can't work because their
tablet is broken.

Are there many companies doing full-on bring-your-own-hardware, apart from
maybe startups that can't afford to provide extra computers to staff?

------
brisance
What's the point of all that computing power if there's no "killer app" to
make use of it?

These days the devices that interest me are those that consume little
power, have perpetual internet connectivity (WiFi or cell tower), and are
instant on/off. They have to be portable and durable. Everything else is
icing.

~~~
noahl
Just a guess, but really, really good speech recognition might be a "killer
app" for big, vectorized chips. I haven't tried to implement it, though, so
I don't know how much power it would really take. Apple and Google seem to
do all right with a little phone CPU.

~~~
api
As far as I know, Siri offloads some of its work to Apple's servers. But
maybe I'm mistaken.

------
antonios
Yes, I remember when my aunt was to decide between an iPad and a new PC. Once
I explained Moore's law to her, the iPad won her over. It's definitely Moore's
law.

------
kiba
I think mobile phones will eventually move up and displace the traditional
PC, with a docking system that connects to your big monitors, your mouse,
your keyboard, and maybe even a big fat hard drive.

Perhaps we will see a hybrid model in which your phone gets additional CPU
and RAM to work with when plugged in?

What about Google Glass and other such devices? The PC should find a way to
interface with those too.

~~~
VLM
"Perhaps we will see a hybrid model in which your phone gets additional CPU
and RAM to work with when plugged in?"

Yes, but indirectly, over the network.

The hybrid model I've been using for about 20 years now is connecting a
wimpy machine to a big one using SSH/NFS and a couple of other
technologies. For a decade or so, the end users at my wife's employer have
been using rdesktop/VNC for their hybrid solution.

It's actually "easier" for end-user devices to connect to a (private?)
cloud than to try to do stuff themselves. So that's the more likely hybrid
model.

I don't run an Exchange server at home, or even an Exchange client.
Exchange webmail, yes.

~~~
kiba
_It's actually "easier" for end-user devices to connect to a (private?)
cloud than to try to do stuff themselves. So that's the more likely hybrid
model._

It depends on the latency tolerance of a given use case. Why not have all
of the above?

------
api
It's economics. I can attest to the fact that most five-year-old chips are
good enough for 95% of the software I want to use. So there's no real
demand outside of big data and data-center applications.

The only way to keep pushing performance would be for software writers to
invent new and compelling things to do that genuinely require more power.
It can be something as simple as a game -- I was a teenager when Quake came
out, and I remember scraping together as much money as I could to get the
absolute fastest machine available (a P120) to play it. Before Quake came
out, my old 486 DX2/66 was fast enough for the Linux hacking, adventure
games, and BBS/Internet stuff I was doing at the time.

We need another Quake... or something equally cool. We need killer apps.
Otherwise computing is going to go the way of the aerospace industry: we
stopped pushing the envelope there in the '70s, again for economic reasons.

~~~
qznc
The point is, it must be noticeable as something other than graphics. For
example, a good AI should easily be able to exhaust all the cores you can
throw at it; however, it is much harder to sell. Back then, they could show
you some pictures of Quake and you knew you wanted it. It is harder to make
"pictures" of AI behavior. Alternatively, substitute physics simulation for
AI.

Maybe the sandbox genre (Minecraft, Dwarf Fortress) will grow. Simulating
huge worlds requires lots of processing power.

~~~
api
I did a lot of work on evolutionary computation a while back. Now there's
something that can soak up some cycles. It's also embarrassingly parallel.

(For those who want to trot out the "GAs don't work well" and "evolution is
slow" mantras: please delve into the literature first. EC has some
interesting and unique results, the most important of which is that it's
the only "AI" technique I'm aware of that can demonstrate true creativity.
EC can invent novel structures ex nihilo, while everything else can only
optimize, find maxima, etc. Another way of saying it is that EC can find
maxima involving unknown, novel parameters that it invents along the way,
while everything else requires that you know all the parameters to be
optimized ahead of time. So EC can work with unknown unknowns, not just
unknown values of knowns.)
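For a concrete picture of the kind of loop being described (a minimal toy
sketch, not any particular EC system; the bitstring genome and "OneMax"
fitness are illustrative assumptions): each individual's fitness evaluation
is independent of every other's, which is what makes the technique
embarrassingly parallel -- the evaluation step is the part you would farm
out across many cores.

```python
import random

def evolve(fitness, genome_len=16, pop_size=50, generations=100):
    """Minimal mutation-plus-selection evolutionary loop over
    bitstring genomes. Every fitness(individual) call is independent,
    so the evaluation step parallelizes trivially."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate and keep the fitter half (this is the parallel part).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [[bit ^ (random.random() < 0.05) for bit in p]
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness ("OneMax"): count the 1-bits; the loop drives genomes
# toward all-ones without being told what the parameters "mean".
best = evolve(fitness=sum)
```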

------
disturbedfood
I think the focus needs to be mostly on software: making it less clunky and
slow, and teaching developers to write efficient software rather than
bloaty software just because their super-powerful computers can handle it.
Not everyone can afford the best computer every couple of years.

------
bhauer
I agree with the author, although I don't buy the literary ploy that this
realization just hit him recently. This has been a development building within
the PC sphere for years. Not only that, many professional and countless more
no-good hobbyist opinion-makers (like myself) have made this point over and
over.

I'm nobody, but I have opined about this and the insipid taint of Good Enough
several times [1]. Like "api" at the top of this comment thread, I feel there
are bottlenecks that keep the desktop PC, and the laptop PC to a degree,
constrained to the roles that we have presently envisioned for those devices.

My opinion is that desktop display technology is ultimately the root of
desktop stagnation and, speaking positively, could be the root of a desktop
renaissance [2]. I agree that bandwidth progress has slowed and storage
capacity progress has slowed [3], but point out that only display technology
has actually outright stopped and regressed. In 2001, the IBM T220/221 monitor
achieved high-DPI on the desktop (3840x2400 at 22 inches). Since then, we
banished high DPI from the desktop and obsessed about a marketing term named
"HD" because we allowed the rules that govern our living rooms' displays to
take over on our desktop. That we accept 24" desktop monitors with 1080
vertical lines in 2013 is heartbreaking.

I agree with api that bandwidth is to blame. But bandwidth hasn't stopped
progressing and regressed (well, with one exception: mobile data caps). More
importantly, I feel that behind the bandwidth complacency is display
complacency. Put a 40" high-DPI desktop monitor on your desktop and I think
the demand for ultra-high-definition content, high-clarity user interfaces,
bandwidth, computation power, and storage capacity will all increase. Intel
sells more processors, HP/Dell/Lenovo sell more PCs, and oh-boy would I be so
happy to finally get a real monitor upgrade.

[1] <http://tiamat.tsotech.com/technology-sucks>

[2] <http://tiamat.tsotech.com/displays-are-the-key>

[3] <http://tiamat.tsotech.com/boring-technology>

------
saturdaysaint
Hyper-competition to make a faster browser and faster websites has to be a
factor. In the days of packaged software, you might be happy to get a handful
of performance updates before a flashy (and more CPU-intensive) new version
came out, but any web service worth its salt is built to load almost
instantly; snappy usability is important when competition is a click away. And
just think of how much horsepower has been saved by abandoning Flash. The end
result is that the average software user (Facebook, Google, Gmail/Hotmail,
etc.) has probably seen the software they use actually get _faster_ in the
last 10 years.

------
MilesTeg
If you want to see Moore's Law in action today, you really need to look at
GPUs rather than CPUs. Increased performance in recent years has mostly
come from increased parallelization, which has diminishing returns on the
CPU (quad cores, anyone?) but does wonders for computer graphics.

------
DGrutt
"Meanwhile, Intel's push towards ubiquitous computing--gesture controls,
speech recognition, and so on--not only advances traditional interface models,
but the technologies involved also require strong computing heft. Sneaky,
sneaky."

Lol, yeah, this must be the reason. I bet they have a hand in the
self-driving cars too. It's all an excuse to require more computing power.

~~~
coldtea
No, Intel is doing it out of the goodness of its heart.

It might be a business, but they never cared about money. They push these
technologies to people out of pure love of new technology.

~~~
fthssht
You make money by selling what people want.

~~~
coldtea
You'd be surprised.

