I had a 384 Kbit, ~100 ms-latency crappy SDSL connection until recently (I was renting in the Santa Cruz mountains, in a location with poor connection options). It was pretty amazing how two sites that looked very similar on a fast connection would load very differently on the slow one, often for reasons not inherently connected to the site's needs. It's one thing if a page loads slowly because it's streaming video; it's quite another if it's slow because of a gigantic background-wallpaper image or unnecessarily serialized round trips.
It's quite handy. My partner has slow, unreliable, high-latency Internet in his house, and there is an entire class of performance problems that are extremely obvious when I work from his place but barely measurable when I work from a 100 Mbit line that's only a few milliseconds away from a major datacenter.
While tethering, it often takes upwards of two minutes to load a page. That seems just a little obscene when the entire useful content of the page is 10 lines of text.
(Hmm, maybe if I disable JS it'll fall back to a more sane approach?)
Ok, now I feel simultaneously stupid/vindicated, since I posted this before reading the article... :)
BTW, didn't know about tc. Thanks for the tip.
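For anyone else who hadn't seen tc: its netem discipline can simulate a slow link. A rough sketch of reproducing the connection described upthread (384 kbit, ~100 ms latency) on the loopback device — the interface name and numbers are illustrative, this needs root, and older netem builds lack the `rate` option (there you'd chain a tbf qdisc instead):

```shell
# Add ~100 ms of latency and cap bandwidth at 384 kbit/s on lo
# (use your real interface, e.g. eth0, to shape outbound traffic instead)
sudo tc qdisc add dev lo root netem delay 100ms rate 384kbit

# Verify the queueing discipline is in place
tc qdisc show dev lo

# Remove the shaping when you're done
sudo tc qdisc del dev lo root
```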
If you teamed low bandwidth with virtualised browsers of all flavours, this would make a pretty good testing service.
It used to be that you should develop and test on the newest, fastest hardware available, because PC sales were growing exponentially and by the time your software had sat in a shrink-wrapped box in a shop somewhere for three months, most of your potential customers would have more powerful PCs than you could buy now.
This scenario is now wrong in two respects.
First, PCs are now a mature market. Sales are flat. People and companies are going longer and longer without upgrading. So a greater proportion of the market is using older hardware.
Second, the supply chain is shorter. Obviously in the case of a website, the update is instant. But even in the case of Apple's app store, it takes less than a month to get your app in your customers' hands.
Tablets --- that's more interesting, because they're all new. I don't know to what extent it makes sense to optimise for the iPad 2 vs. the original iPad.
If you use a slower machine (or, in general, one with different performance characteristics), you might discover race conditions in your code that would otherwise go unnoticed. I get this all the time with software on my Mac — run stuff on a loaded machine and discover that applications aren't ready for things happening in a different order than on the developer's machine.
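The kind of timing bug described above can be reduced to a minimal sketch (Python here purely for illustration): the main thread implicitly assumes a worker has already finished. On a fast, idle machine the assumption usually holds; on a slow or loaded one it doesn't. The explicit join() is the fix.

```python
import threading
import time

result = {}

def worker():
    # Stand-in for work whose duration varies with machine speed and load
    time.sleep(0.05)
    result["value"] = 42

t = threading.Thread(target=worker)
t.start()

# Buggy pattern: assume the worker is done "by now".
# On a fast machine this often works; on a loaded one it raises KeyError.
# print(result["value"])

# Correct pattern: synchronise explicitly before reading the result.
t.join()
print(result["value"])  # always 42
```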
And then you try something like Windows Live Messenger or Skype, which both somehow find ways to use more RAM and CPU than half of the games on my system, let alone other applications. It's amazing to me how long it takes some websites to load, too: a fast browser and a fast internet connection paired with a slow CPU add up to a pretty miserable experience if you like Facebook, Twitter, Google+, or GMail (is there ANY other website that has a loading bar?).
Running a netbook is very much a reminder that CPU speed is not a linear scale.
Quoting from his article: "A nice example of a website that could do a lot better in this respect is twitter.com. They've now forced all their users to the 'new' interface, but frankly imo it sucks."
I highly doubt the CSS included in the bootstrap is having a massive impact on the performance of the new Twitter interface.
I'm not advocating luddism, but if you use a computer that is a few years old and make some dumb mistakes, the performance penalty is big and immediately obvious at the development stage. On the latest hardware with multiple CPU cores it might not be so obvious, and then it turns into a crisis during deployment.
I'm especially worried about the UI thread getting blocked on file access to a sleeping HDD or an unreliable network drive (i.e. I don't want my applications to be as beachball-death-prone as Finder and iTunes).
You could probably do it over a loopback device on the same machine if you don't have a network handy.
Hm, I should add these mp3s to my library. Drag, drop, make coffee, hit the bathroom, chat with the QA guy, come back, read some HN comments, hey, it's done.
Also, when you make I/O truly async, you need to ensure UI behaves sanely while slow I/O happens in the background.
I like to be able to run a couple VMs at the same time without causing my system to grind to a halt. That being said, what I've done in the past is give myself (and my devs) fast machines loaded with RAM and such, but the environment they deploy onto is a low-grade commodity box.
The environment should be bare enough so that it causes developers and operations personnel to think twice before reading in a large file, or opening a tonne of file handles.
As a corollary, if you are deploying into a JVM environment (e.g. Tomcat), DO NOT give the JVM a tonne of memory by default. Developers will write applications that just drink it up. Instead, start at the default (256MB) and bump it up progressively as the application requires.
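Concretely, the heap cap being discussed is just the -Xmx flag; for Tomcat it's usually set via CATALINA_OPTS. The values and the jar name below are illustrative starting points, not canon:

```shell
# Standalone JVM: start with a modest heap and raise it only when
# the application demonstrably needs more (app.jar is a placeholder)
java -Xms128m -Xmx256m -jar app.jar

# Tomcat: the same flags, typically exported from bin/setenv.sh
export CATALINA_OPTS="-Xms128m -Xmx256m"
```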
It's amazing what you can accomplish given those kinds of constraints.
BTW, that's why some old phones/PDAs are legendary (for example, the HTC Tornado): the next several versions of Windows Mobile ran on them flawlessly.
Using the emulator, you become conditioned to the slowness. You can't even establish a valid baseline: once your application is slow, you can't tell whether the slowness comes from the application or from the emulator.
Seriously, the emulator is a complete joke.
Where I might be concerned about muddying my main PC, I don't feel any qualms about doing something potentially hacky or dangerous on the old box. Freeing up this mental space makes the first few steps of learning a new language or environment much easier for me.
It goes in a 5.25" bay and allows you to easily swap out 3.5" SATA drives. You could then have two SATA drives, one drive is your normal OS, and the other drive would have your muddying OS.
There is also virtualization. Not sure if that is an option for you.
There are disadvantages too, of course, such as increased desk space, power usage, and heat in your office, and the inability to work on the subway.
VMs are good, though VMs don't solve the original problem of letting you test on an older CPU. (Their IO is generally slower, but not in the same way as an older machine.)
I remember reading this but cannot remember where.
If the customer asks "How long does it take to import 10k documents?" I can give them two data points: a VM running on an older server, accessed over a slow DSL connection via VPN, and a VM running locally on an i7 with more RAM and an SSD.
Of course, the requirements change if you develop desktop applications vs. server applications: if you plan to deliver software to government accountants running hardware that is at least 5 years out of date, you better go and get yourself a matching setup - "It would be faster if your Boss bought you a faster machine" is not helpful.
The target is "people who spend enough money to make writing code for them worthwhile," not "everyone ever."
There's a cost/benefit to supporting old hardware, and very often the benefit is vastly outweighed by the cost. Econ 101. Sorry.
You can also run Windows XP in VMware with just one CPU allocated to it and limited memory (and screen space).
If you have the paid version of VMware, you can also do CPU throttling.
To me, fast tests are good tests.
It's not rare for me to see applications running faster on my meager single-core 1.6GHz laptop than on considerably faster machines, simply because they don't have to compete as much for resources.
I guess some webdevs don't do this — at least, I have to assume they have some maxed-out monster workstations when I visit websites that push my CPU usage to 100%.