- What is your standard developer hardware configuration?
- How often is it refreshed?
If you get bad answers to those questions, it's a definite warning sign (IMHO).
If you're working for a cash-strapped startup (possibly bootstrapped), you may want to negotiate a deal whereby you supply your own hardware (in exchange for higher salary and/or stock/option grants) if they're only in a position to provide substandard hardware.
One side note: what constitutes good hardware depends on what you're doing. If all you're doing is writing PHP/Python/Ruby in vim/emacs and running a local Apache/nginx and possibly a MySQL server then it almost doesn't matter what hardware you have... apart from the monitor.
But if you're compiling huge C++ (or even Java) projects then you probably want good I/O (possibly an SSD, preferably in RAID1 config for redundancy), lots of RAM and a good CPU.
As one data point: I have a 6-core machine with 12GB of RAM and two 24" monitors (some opt for one 30") plus a Macbook Pro 15" with SSD. You can refresh your hardware every 18 months if you want to (but most don't unless they have a pressing need; I know some people with 4-5 year old workstations because those are fine for what they do).
OK. If you're developing on a local instance of Apache/nginx, sure. But as a professional web developer I usually have a fully virtualized dev environment running. That means at least one of our app servers, our database server, and possibly a Windows VM (for testing in IE - usually I can save this for when I'm in the office and have all this stuff running on another machine).
For huge builds it makes sense to go with a compile farm. Working on a god box that sits idle 99% of the time is a bit wasteful.
A friend of mine tested it 3 years ago and it was something like 300% slower (no idea about the disk config, but he's a smart guy). It would have been worth it at anything under about a 50% hit...
It would have been a good fit for VMs... except for the performance.
If you have to use a laptop, throw away the useless DVD drive and replace it with a second HDD/SSD.
Many of the better laptops can also have a second mini-PCIe slot for an internal 3G modem. Install a modem there. This will help you a lot: 1) Internet access everywhere, 2) you don't have to worry about signal (the internal antenna is bigger and better), 3) you don't have to worry about crappy USB modems and their crappy drivers.
* The compilation is done remotely on a beefy server with no CPU/Disk IO/Disk space issues.
* I get several monitors. They don't even have to be big - 17 inch is fine - as long as there are at least two.
* The dev box is able to handle its software responsively. I shouldn't have to reboot it. Ever. Hell, any dual-core with 2GB of RAM, a Linux distro + Emacs would do.
Currently my 600 euro i7 box at home would probably put my work box and the team's remote compilation VM to shame. Both. At the same time...
Interestingly, from reading the comments I feel that startups are actually more inclined to supply devs with good hardware than established players. I was expecting quite the opposite.
At most startups, the CEO's desk is, at most, a couple dozen steps away from the devs. It's the bean-counting professional managers that are brought in after that stage that make the developers miserable.
Since the code should be in a source control system, I don't see the point of this. RAID5 (increased redundancy and throughput) maybe, but even then you're increasing the chances of failure (the RAID hardware itself can fail) and you're not gaining much (if any) in the way of IOPS, which will be your biggest issue.
With Time Machine, you still have to manually rebuild your system after your drive fails. This is not what Time Machine is for; it's for, "oops, I shouldn't have deleted that file". (Which, incidentally, RAID is not for.)
Even a RAID1 failure requires you to have a replacement (i.e., a trip to Fry's). A bootable backup, however slow, is just there.
Also, you don't immediately need a replacement drive: RAID1 boots fine with only one device. Assuming your company policy is to have RAID1'd SSDs for all developers, you're probably going to have extras on the shelf anyway.
Time Machine is for accidental data loss. RAID is for disk failures.
A nightly differential bootable system image backup a la SuperDuper, Macrium Reflect, Acronis or the like + cheap external 3TB drive = zero downtime... if the SSD fails, boot from external, and schlep it while the SSD is repaired/replaced.
Time Machine is a bonus for lost/overwritten local files. Source control guards the jewels (you don't fear committing untested code to a local branch, do you?)
I think there are two requirements here: 1) redundancy of work in progress and 2) change control management. I want my work in progress to be relatively safe from failures, I want it to be automatic and I want to be able to choose which changes are worthy of their own revision number in version control. Personally, when committing work in progress I want my revisions to represent a logical stopping point and not necessarily dictated by how much time has elapsed since the last commit.
That way if you go down you can just swap the drive out and be on your way again.
This is what we do (and we have a lot of hard drive churn due to heavy use) and it works well.
For Linux / Windows folks, SuperMicro has a nice tower that has 4 hot swap hard drive bays that I really like.
Speaking only for myself... I have a gold disk plus a little script that syncs scripts and configs to backups. And I use Chrome logged into my Google account.
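The sync script itself is nothing fancy. A minimal sketch of the idea, assuming rsync is available (the paths are hypothetical, not my actual layout):

    import subprocess
    from pathlib import Path

    # Hypothetical source list and backup destination; adjust to taste.
    HOME = Path.home()
    SOURCES = [HOME / "bin", HOME / ".bashrc", HOME / ".emacs.d"]
    BACKUP = "/Volumes/backup/configs/"

    # -a preserves permissions/timestamps; --delete mirrors removals.
    subprocess.run(["rsync", "-a", "--delete", *map(str, SOURCES), BACKUP], check=True)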
I do see your point, but a lot of it can be mitigated.
Well, our air-conditioner went out for 3 days in the middle of a Japanese summer, and the RAID controller itself shorted out, taking the drives with it. My boss still doesn't understand how we lost our data, because "RAID was supposed to protect our data."
But I have also been on the other side. Let me give you a simple question to ponder: assuming an engineering team of 50, and an upgrade cost equivalent to 2% of payroll, would you rather upgrade everyone, or hire one more developer? As the manager, I can tell you that everyone is asking for an extra headcount: "we need a full-time person to handle builds", "Tom could really use an extra hand with the XYZ module". Etc.
It happens to cost the same: hire an extra developer, or upgrade. So, as a manager, you handle the trade-off. Life is all about trade-offs, not absolute. I'd love to have a faster machine. I'd love to have more engineers. I can't have both.
Developer productivity doesn't work the way you think it does. 51 developers on crap hardware won't be 2% more productive than 50 developers on good hardware. It won't even be equal.
Somewhere in that stack of 50 developers are a few of those mythical 10X-100X guys, who are currently spending a lot more time than you think arguing politics on Reddit. When you're writing code and your machine bogs for whatever reason, it kicks you out of what you were doing and almost forcibly alt-tabs you over to look at lolcats.
Give those guys good machines that don't piss them off, and you'll find them a lot more productive. And not, like, 2% more productive. More like 2X more productive as a group and 10X more productive for some individuals.
Try it yourself one day and let us know how it works out for you.
I wrote code as fast on an Apple //c as on this late-2010 MacBook Pro 17". I'd argue faster, as context switching to something unproductive was actually a chore. For writing, Appleworks 2 on the //c was, if anything, more efficient than the latest MS Word, for the same reasons some hardcore devs prefer EMACS over Visual Studio.
In my experience, the single most meaningful programming productivity boost in the past 25 years is a second screen.
Running PageMaker on an SE/30 with an external Radius Pivot, for example, ran productivity circles around working the built-in screen alone.
But before second screens, devs managed to get by. In the Apple II coding days, an Imagewriter printout of code thus far taped to the wall behind the monitor served as the "second" screen, and physical books lying open around the desk served as reference tools. In many ways, I'd argue the simplicity and thoughtfulness of that approach was more efficient. It's as though the mind can keep separate threads for each type and physical location of media.
For horsepower, as the rest of the comments here point out, a team does want a source code and build system that screams, but each developer workstation could arguably be a text terminal provided it can handle a few windows.
The psychology of latest toys contributing to job satisfaction contributing to productivity is a separate argument. I have found giving dev groups machines tuned for LAN parties (latest graphics, high GHz CPUs) more relevant to productivity than providing machines tuned for compiling (striped raptor HDs, quad xeons).
My dual-headed Apple II+ was something to behold.
I should elaborate:
I'm running an almost 3 year old MacBook Pro. I do all of my development within VirtualBox (Ubuntu 10.11). I have it connected to a *gasp* single external monitor. I work every day on a MongoDB/Python stack.
It could be faster, sure... but I'm hardly unhappy with it.
I think that for most of us, a few gigs of RAM, 2 cores and a 22" monitor (that was my setup last month) is enough that more horsepower wouldn't make us substantially more productive.
jasonkester mentioned "crap hardware" versus "good hardware"; a pretty minor revision of a machine likely won't move it from one to the other.
Clearly you get diminishing returns here. And in your example you selected the comparison points (MBP'10 vs '11) so high and close to each other that the effect is negligible. jasonkester on the other hand is comparing "crap hardware" and "good hardware"...there the effect is huge.
// haven't tried putting the VMs on the Promise RAID yet
I find it bleakly amusing that so many blue-collar companies have no trouble putting a $10/hour worker in front of a $100,000 machine, while white-collar companies will do anything to avoid putting a $50/hour engineer in front of a $2000 machine.
Seems like something you could quantify pretty easily. Ars Technica or Phoronix or someone could build up a $1000 machine, a $2000 machine and a $10000 machine, build various open source projects on each, and publish the times.
IDE reaction time might be harder to quantify but I bet you could do it.
My hunch from 10,000 feet: you'll see an appreciable difference between, say, $1000 and $2000, but between $2000 and $10000 it won't be that substantial, or it could be attributed to a $300 upgrade (SSD vs. spinning platters) or something.
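Something like this harness would cover the build-time half; a minimal sketch, assuming make-based projects (the commands, flags and run count are illustrative):

    import subprocess, time

    def time_build(cmd, runs=3):
        # Best-of-N wall-clock time for a from-clean build.
        best = float("inf")
        for _ in range(runs):
            subprocess.run(["make", "clean"], check=True)
            start = time.perf_counter()
            subprocess.run(cmd, check=True)
            best = min(best, time.perf_counter() - start)
        return best

    print("best clean-build time: %.1fs" % time_build(["make", "-j8"]))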
If the difference were that big, nobody would ask this question; everybody would have $12000 workstations on their desks at work and $5000 laptops to carry home. That's just how it would be.
Now, it can be close if you're comparing 150 people vs. 151 people, but I can tell you better hardware would be more productive in that case.
No you can't.
Give them the liberty to choose!
Let's also not forget that when developers are choosing their own machines, they're doing so on company time. It's certainly not cost effective to have developers doing comparison shopping on machines. And it's absolutely not cost effective to have developers putting together their own machines from parts (as some would happily do if allowed).
Additionally, it would be frustrating to use your credits too soon and then the guy next to you gets something twice as good one month later. So just play it safe by keeping everyone on the same playing field.
However, I'm sure the tax benefit is small compared to the potential for increased productivity and output. But try explaining that to the average CFO.
It implies that every developer is responsible for his/her own workstation maintenance, though, and that might be a risk/pain you don't want.
OR, if Tom is having trouble with a module, maybe he would feel more comfortable handling CI...
Do that, and make sure the system has 4GB+ RAM, and 2 monitors at high resolution, and your hardware will feel brand new to the guys on your team.
I did this a couple years ago as one of my first acts after being promoted, and it worked great. I paid a bit more to get the drives that came with an upgrade kit (that is, an enclosure and imaging software). Gave them to each dev to take care of themselves.
I didn't have to wait on and work with the bureaucratic IT department, and a dozen SSDs was cheap enough I could put it on my card without needing CFO approval.
You don't need "The Best" hardware. In fact, it reminds me of the saying that "People buy horsepower, but they drive torque."
What matters to developers isn't how many FLOPs you can do, but how quickly can you load your VMs and start Eclipse and grep your local filesystem.
Don't answer, it was rhetorical.
Space is still cheaper on HDDs, and there are data volatility concerns. Is the latter still an issue with SSDs, or have they reached the point now where you can store your music collection on them and not worry about losing it in two or three years?
Fun oddity after switching to SSD: If the laptop gets REALLY hot from long term summer usage, in the past I would just slap a cooling pad under it to get the fans to stay quiet. Now, the cooling pad somehow causes the SSD to freeze, so I have the cooling pad sitting only under the left half of the machine, and all is well.
I worked with a Macbook Pro, top of the line, with 2 x 24 inch monitors attached, with an ergonomic keyboard and wireless mouse and all that crap.
Now I work with a $500 ASUS that doesn't even have a backlit keyboard, with Ubuntu Linux installed, and I'm using it directly (no external monitors or keyboard) since I'm on the move a lot.
As far as my productivity is concerned, I still get things done at the same pace. Hardware is not my bottleneck.
Of course, this has more to do with my other preferences. I like keeping my toolchain as light as possible. I don't use bloated IDEs, even when working with languages that practically require an IDE, such as Java. I don't do heavy processing often, and when I do, I prefer offloading that work to AWS. I am also proficient with manipulating virtual desktops.
As far as startups go, I think that spending money on expensive hardware is not frugal spending. If you're my boss, I would prefer a bonus to the latest and shiniest crap -- if I want shiny crap, I can buy it myself while respecting my own priorities.
Also, a big project is hard to optimize, especially when threads are involved; so whatever shortcuts you take to make it work at first do come back to haunt you in a big way, and you can't fix it easily.
For example, people are telling me that Eclipse should run fine, but on my laptop it freezes a lot, even for small projects, while IntelliJ IDEA works flawlessly. So I'm not sure what the problem is, but it does something on my box that it shouldn't -- and throwing more money at hardware (instead of searching for a better alternative) just seems like the wrong approach to me.
Another problem is that some operations are blocking in nature. Consider the case when an intellisense dialog is triggered (after typing a dot or pressing Ctrl-Space) -- to give you intellisense, Eclipse has to compile your code, make guesses about your intent in case the code has errors (since intellisense also has to function in case of simple errors, otherwise it is useless) and then present you with a dialog with options available for completion. This is not something that can be done async.
Of course, this opinion was just some random guess. I have no idea why Eclipse has the tendency to freeze the UI on my machine.
For example with Java I use manually written Rakefiles (I prefer it over Ant since I have more control), I make sure it doesn't compile unless files have actually changed and in case the project is getting big I start separating functionality in multiple projects, having multiple JARs as a result.
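The change detection in those Rakefiles is nothing exotic; at its core it's just an mtime comparison. A rough sketch of the idea in Python (not the actual Ruby Rakefile; the src/classes layout is hypothetical, and it ignores cross-file dependencies):

    import subprocess
    from pathlib import Path

    SRC, OUT = Path("src"), Path("classes")  # hypothetical layout

    def stale(java):
        # Recompile if the .class file is missing or older than its .java source.
        cls = (OUT / java.relative_to(SRC)).with_suffix(".class")
        return not cls.exists() or java.stat().st_mtime > cls.stat().st_mtime

    changed = [str(p) for p in SRC.rglob("*.java") if stale(p)]
    if changed:
        subprocess.run(["javac", "-d", str(OUT), "-cp", str(OUT), *changed], check=True)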
Then, I'm using Emacs and in Emacs I can start a build whenever I'm hitting "Save" on a file. And in case of compilation errors, Emacs even highlights the errors for me.
You have to work on it a little and you lose time on the actual build process, but you can achieve a lean and mean setup (unless the compiler really sucks).
Of course, this is the advantage of an IDE - it takes care of annoying details for you; but then you have to put up with all the bloat that brings. And for humongous projects, your IDE will choke anyway, even if you have the latest state-of-the-art hardware; try loading the Firefox codebase in Eclipse CDT or in Visual Studio sometimes.
It's quite rare these days for developers to literally see their whole program's code at once, and I think we're the worse for it.
On the contrary, 10,000 lines is still only 150 pages at 66 lines per page, and fanfold paper flips through easily.
I've read through several projects that took at least three reams of fanfold paper. I'm not saying that was fun. Fortunately one didn't generally have to read through the whole project, only the module one was working on.
I have printed code and reviewed it before. Sometimes it's useful for small programs or classes. I don't think it's useful to waste 150 (or more) pages to print an entire large program, though.
Not sure it was about holding the actual code in one's head so much as the structure, flow, or shall we say, "plot".
The Chronicles of Thomas Covenant runs 4948 pages, A Song of Ice and Fire is 4195 pages so far, and even LotR is 2144 pages.
This is one of the reasons I think there's a high correlation between great developers and developers who love history -- leveraging the ability to envision and hold a complex sequence of interlocking details in mind.
I just can't see the value in printing 150 pages of code to barely skim it. Especially since those 150 pages will be increasingly out of date as time goes on. It's just such a waste of paper.
I'm not sure where you get your "great developers" and "developers who love history" link, either. Liking history has nothing to do with coding. Nor does it have anything to do with reading fantasy novels. And none of the history buffs I know are even coders. This is such a random tangent.
I provided credentials above. I've been managing hundreds of developers over the past decade and working as an independent developer for a decade before that. (And as a hobbyist developer the decade before that.)
> "Liking history has nothing to do with coding."
My experience hiring and managing hundreds of devs indicates the exact opposite. You may have found differently, but I will continue to focus on hiring people who find learning a rich tapestry of interconnected context fascinating, and preferring to hire those with history (or linguistics or other complex humanities) degrees with formal CS electives over those with pure CS degrees.
“Now, I don't want to get off on a rant here ...”
By your definition, most science is "personal anecdotes" if building knowledge through systematic observation and study of hundreds of test cases is merely "personal anecdotes". (See "empirical research".)
By contrast, you wrote "Liking history has nothing to do with coding" but you supplied nothing whatsoever to substantiate that statement, just as you supplied no basis for the statement that working closely with hundreds of developers is not sufficient to establish a connection. Ok, I've managed hundreds, and we've collaborated with thousands. How many developers would be enough to establish any meaningful connection?
You challenged "I'm not sure where you get your link" and I provided the basis for that link: observation of several hundred developers I have hired and employed. That's a reasonable number considering many studies use pools of just a couple dozen test subjects.
Throughout this thread, you have countered remarks I've based on experience, with your own unsubstantiated assertions, sometimes insulting in nature. For example, you wrote "In the 'old days', you must have been writing small programs" when the opposite was true.
Just because you "can't see the value" in a printed code review back in the 80's, or haven't noticed a link between coders with an appreciation for history and an exceptional ability to architect software systems, that doesn't obviate the need to provide at least as much foundation for your arguments as I've provided, particularly if you're trying to call me out for lack of basis.
You said this is "entirely tangential and has nothing to do with what we're discussing", yet this concept of holding the complex in mind is precisely what I opened with, and the theme I've stayed with.
In my experience – which I've scoped so readers can decide for themselves if it's relevant – reading well, taking time to contemplate and be thoughtful, remembering and understanding complexly woven tapestries of information (whether multi-volume literature or world history or computer code), and being able to read long code (or threads) and form a structure of it in one's mind, are all skills signaling good developers.
In my experience, a love of reading, stories, and history in particular, signals a desire for learning, a sense of proportion and place, and a respect for 'the shoulders of giants' likely to help a good developer become great.
“... of course, that's just my opinion. I could be wrong.”
No number of developers is enough to turn offhand observation into meaningful correlation. That would require measuring and tracking. It would require more than your gut feeling and confirmation bias. I frankly find it worrisome that you actively choose history majors over CS majors for development work. That says nothing to me except that you have a personal bias.
I stand by my statement that you must be writing small programs if you're willing to print them in entirety on paper. You can find that insulting if you want (it wasn't intended to be), but it's a fact. 10K lines is not a large program in modern terms (though I say it's too large to waste the paper on). It's definitely not large when your team involves multiple developers.
As for code reviews in the 80s, the value is in the review. Whether it's printed on paper or not isn't really very meaningful. It can be nice to have a paper copy sometimes, but it's just that, nice. It's not really a substantive change.
And yes, this history argument is extremely tangential. You did not start with that. You started by saying that you used to spend compile times reviewing the printed code. That has nothing to do with being a history buff. Someone could love doing paper code reviews and hate reading about history. And any link that might exist was certainly not established before you transitioned into talking about history buffs being good architects.
You still haven't provided a basis for your positions, and you're still littering your responses with assumptions or accusations.
> I don't believe at all that you've been systematically tracking which of your programmers are history buffs and how it correlates to performance.
You are mistaken. When, out of hundreds of hires, a handful match the criteria, it's easy to look back at data collected at hiring time, and find out if there are correlations (not causations).
> I think you're a history buff and so you assume that it must correlate meaningfully, just as programmers who are into music, or art, or whatever else do.
Wrong. History isn't a primary interest. I enjoy a variety of things more. I believe most are unrelated to ability to architect software.
> No number of developers is enough to turn offhand observation into meaningful correlation. That would require measuring and tracking. It would require more than your gut feeling and confirmation bias.
See above. Your assumption was mistaken.
> I frankly find it worrisome that you actively choose history majors over CS majors for development work. That says nothing to me except that you have a personal bias.
Preferring to hire a history major with a minor in CS over a pure CS major typically results in better-rounded individuals more capable of software architecture, dealing with clients, and collaborating with peers. Unfortunately, out of hundreds of hires, again, only a handful have fit that bill. But none of those who did fit that bill had to be let go.
This was data collected at interview time, and in fact, on the first such individuals, I was as skeptical as you. Looking back at the collected hiring data about outperformers revealed this correlation. Since then, I've confirmed this curious correlation with several peers.
> I stand by my statement that you must be writing small programs if you're willing to print them in entirety on paper.
Again, you assume that I was printing the large programs. I was not. I was involved with software developed and used by scientific research organizations, hospitals, and universities. They would hand me a box of fan-fold paper (once even on a hand truck) and say, "Here's the code." I found learning the overall picture faster reading through the stack than scrolling the code on a CRT, particularly thanks to the ability to use a highlighter and Post-Its.
Today, when confronted with a similar task, I prefer an iPad and a good reader with markup tools.
> As for code reviews in the 80s, the value is in the review. Whether it's printed on paper or not isn't really very meaningful. It can be nice to have a paper copy sometimes, but it's just that, nice. It's not really a substantive change.
Given today's technology, mostly agreed. However, research suggests tangible artifacts cement concepts more firmly in our organic brains than digital exposure alone.
> And yes, this history argument is extremely tangential. You did not start with that. You started by saying that you used to spend compile times reviewing the printed code. That has nothing to do with being a history buff. Someone could love doing paper code reviews and hate reading about history. And any link that might exist was certainly not established before you transitioned into talking about history buffs being good architects.
You're right, I didn't bring up history first; I brought up the concept of understanding "a map of the whole". To me, that equates closely with a long multifaceted narrative. You brought up the inability to hold a few pages of code in one's head, and I countered with literature and history. Both code and history are something like the Bayeux Tapestry, where the local is most useful when understood in context of the whole. Come to think of it, the word for our oldest long historical texts, and for what you do reviewing code on the screen, is the same: scroll.
My ravioli's done baking. Enjoyed the discussion. Cheers.
I also don't think your handful is a very large sample size, regardless of what correlations you think you see. Honestly, I can't imagine how you're even attempting to gather this info. Are you just randomly asking people if they've read 1776 during the interview?
But no, hiring "hundreds of developers" is still not the same as actually measuring. I do not believe that you have files that track how history-oriented your developers are (make them take a survey?) vs how productive they are, so that you can find a proper correlation. A proper study of this might yield a strong correlation (though I doubt it), but it would require a lot more than casual observation during your hiring.
I did assume that you were the one printing the programs, because that's how I read your reply: "In the 'old days', compile time was a chance to print out your code on fanfold paper ...". If that was a misunderstanding, then I guess it changes the situation. Sure, if someone's already handed you a stack of printed code, why not look through it while compiling?
Hope you enjoyed your ravioli. Cheers.
A few pretentious programmers wanting fast hardware to play with is one thing, but demanding it and proclaiming that programmer productivity has increased and thus all programmers deserve the best possible hardware is laughable at best.
Anyway, fixating on HW like it has meaning outside of the software that runs on it is a common misconception, and assuming there is any reason to buy a PC with less than $200 of RAM is simply premature optimization. Today that works out to 16GB; in the past that may have been less than 16KB, but that does not mean 16KB was enough, just that it was a point of diminishing returns.
PS: Assuming that the same assumptions will always apply even as HW gets 1000 times as fast is ridiculous. I know someone who prevented $1 million in computing hardware from being purchased with a few weeks' work. Today doing those same optimizations would be a waste of time, because computing power has literally increased by that much.
We do a lot of VM spinning, and it's a fantastic way to do that sort of work, plus managing a single server is much less overhead than managing 6 workstations.
Whatever makes you feel better if you have the cash; plus, with a MacBook Air the portability gained is a great bonus, and you may need it.
I was arguing against this notion that companies should buy the latest and most expensive hardware because that supposedly leads to better productivity; I have my doubts about that. Better hardware does give better productivity, but it depends a lot on what you do with said hardware -- if you're developing PHP stuff, no, you don't need a 20-core processor with 50 GB of RAM. You don't even need a MacBook -- any crappy laptop will do.
It's also debatable if you need big dual monitors and an office that looks like the cockpit of a plane. Yes it's cool. No, it won't help with your ADHD problems.
I use the Eclipse IDE on VMware-hosted Ubuntu on a 32-bit Windows Vista computer with no complaints of a bloated IDE slowing me down. It's not the IDE but the operator; more often than not it's an improper eclipse.ini config that produces such bias, rather than knowledge.
Shouldn't a Java developer know how to use VM settings to set up their IDE? It's about like doing J2EE but not knowing how to do the VM settings on the server, is it not?
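For what it's worth, the usual eclipse.ini fix of this sort is just giving the JVM sane heap/permgen limits, appended after -vmargs at the end of the file. The exact values below are only illustrative and depend on your project:

    -vmargs
    -Xms256m
    -Xmx1024m
    -XX:MaxPermSize=256m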
I also think that proper configuration should be preferred over hardware-investments. Heck, I lived in a time when optimizing Config.sys and Autoexec.bat was required for playing games.
I do get annoyed by arguments that take a small part of what I said and comment on it out of context. Don't take this the wrong way, but your rant is not warranted -- if Eclipse floats your boat then I have no problem with that.
But a $99 software license I asked for 4 months ago that will make me more productive every day, I'm still waiting for it.
One time I was at a large company that gave all developers ridiculously fast and powerful machines. By far the highest specs I'd ever used. But the machine crawled because of all the million background processes IT put on there, and the crappy tools in our software chain.
Software is so much more important than hardware.
Now, if I could just afford a CX1...
It can automatically keep logs of all sessions. Grepping through these logs to find a complicated bash command or sql query or the output of a query/command that I ran in the past has been invaluable.
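(A rough sketch of that kind of search in Python; the log directory is hypothetical, and a plain 'grep -r' works just as well:)

    import re, sys
    from pathlib import Path

    LOG_DIR = Path.home() / "session-logs"  # hypothetical location

    pattern = re.compile(sys.argv[1])
    for log in sorted(LOG_DIR.rglob("*.log")):
        for lineno, line in enumerate(log.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                print("%s:%d: %s" % (log, lineno, line))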
And there are little things it handles better than other clients I've used. It restores broken connections better. The session profile management is easier to use. Quickly opening an SFTP tab for the current session is handy. Resizing a window does the right thing. It has a million other options, far more than other clients, but doesn't feel bloated at all.
I'm sure other clients have some of these features but this is the only one that has what I think is the best implementation of all of them, and they add up to a great experience.
Maybe I'm in the minority here, but I would expect that a developer should know enough about computing to not get a virus. This, of course is assuming the organization has a competent IT department and is "securing the perimeter" with firewalls/proxies, and only allows approved software loaded on machines.
Sure, this would be a radical approach to IT, but I think it's a policy that needs to be implemented.
A locked-down network without AV on the endpoints worked better in my experience, and the users loved it. The only reasons I still use AV for some offices are these:
1. A manager will inevitably ask "omg why wasn't there AV on this user's station?! Are you negligent!?"
2. Locking down the network can get in the user's way too. Just because I have good network practices doesn't mean that the software you require does too. All it takes is one business-need/damn-the-risks moment, and you are left scrambling.
I think you need a new AV. MSE uses <50MB and does an acceptable job. Another 5MB for a firewall and you're set.
Although building in less time did lead to happier developers, it did not lead to more features getting built in the given time frame. We did eventually all get RAM upgrades at least, but the development process and technology stack we were using were the real time sucks that we could not fix with better hardware.
Intentionally crippling the entire dev environment seems a bit drastic to me.
It doesn't make any sense, but 70% or so of people who hire developers don't seem to look at this rationally. On the other hand, they usually don't provide you good specs either.
I have a friend in high finance whose desktop is a (dual?) quad core with 12GB RAM which he regularly maxes out. I have a colleague who writes firmware on Windows XP, which with his various drivers and Eclipse problems causes a BSOD about once a fortnight. I keep offering to get him a new computer, cards, whatever, and he refuses because of the cost to him in terms of getting his environment set up 'just right' again - he's quite picky (at my previous workplace, a digital circuit designer on six figures didn't want to move from his PIII to a Core 2 for the same reason).
Another colleague runs a quad core on an SSD and complains that his CAD program runs slow... but when we run it through its paces, it doesn't seem to hit any bottlenecks bar the initial load (which isn't his complaint); it just 'feels' slow.
As with all things computery, the answer is: "it depends".
I've seen this before (and was guilty of this kind of thinking a few times) and I believe this is simply misguided. No matter how good an "old-school" developer someone is, she needs to keep her tools fresh. To me this is no different than having a very manual build or deployment process (maybe not as bad, but still).
I believe when it comes to HW + toolset for development, you simply must consider:
- Computers break: you need to be able to set up a new one in a matter of hours (or tens of minutes). Maybe it does not need to be this drastic everywhere, but if your computer breaks and it takes you a few working days to get it just right, something is wrong; you need to automate it or reconsider the tools (which may not always be possible, especially in the embedded/hardware design world). If you get this right, upgrading will not be (such) a problem.
- You need to keep up with the tools, maybe more than you need to keep up with the libraries. By this I don't mean only the newest versions, but constantly being on the lookout for better alternatives: blogs, forums, etc. There is simply no excuse for a developer not doing this, other than laziness. This will help you get rid of some stuff on your machine you don't actually need, because you may end up with better alternatives. I am a firm believer in getting rid of clutter in your life.
Not all devs in all companies can follow these, of course, due to policies, the nature of the work, "legacy stuff" and whatnot, but it helps a lot if you can, and point 2 means you are learning, so you are not bored.
For the other applications, I went for the 'portable' version of everything I need. This way I can move from one machine to another without spending days setting up everything - I just copy the 'tools' directory and I'm ready.
I simply hate having to put up every day with bugs that have been solved five years ago. And especially if there is no reason beyond simply the slowness and conservativeness of the IT dept that causes this.
Buy a nice monitor (2 if you can afford it).
Buy whatever hardware is reasonable for ~1k or less. Repeat yearly.
This will always keep the developer with a good machine without spending too much money. With the way HW is nowadays, it's possible to go 2 years on each machine.
The story changes if you need laptops. After using LOTS of laptops over the years, if given a choice I'll only use MBPs now.
Never could convince my superiors. But maybe when I start my own...
So, if you already have hardware you like, use that and take the money as bonus. Or get a big monitor.
I usually upgrade my own machines, but I have to answer awkward questions from my managers ("why the hell do you need an internal 3G modem? don't you have the USB stick?" to "can you really have two hard drives in a 15" laptop?"). Sometimes I just buy the hardware and ask for a refund later, it's easier to prove that it's useful.
If you're working in a big company then central IT probably have a standard PC supply agreement and standard image and will oppose allowing anything non-standard on the network until someone signs off on 2 or 3 extra support engineers to "support" this non standard stuff.
If your company isn't making much money, any capital expenditure like this is hard to justify.
So, you really want to work for a small software company that's making lots of money.
One of the big advantages of being a small startup is flexibility -- no need to have policies for stuff like this, just handle it on an ad hoc basis.
If you ask me, all programmers should be given absolutely the bare minimum hardware to program on. This way we can eat our own dogfood and hopefully reduce program bloat. The primary reason programs suck these days is that 'developers' have terabytes of RAM on their development boxes and consuming 1GB of memory for an applet isn't a big deal for them.
I say give all these developers asking for more hardware a 386!
Sure, if I know my compile is going to take a few minutes I can plan for it, and work on other stuff in the mean time.
But for me the tiny interruptions really add up, and contribute to me not being as productive as possible. This is especially true when you're hunting a bug, or working in a tight TDD loop where you're constantly editing/compiling/testing. Having to wait for the machine is a real morale-killer.
The software can also suck from hundreds of hours of time being wasted by the older gear. If your competitors don't think the same way, you have a problem.
In my case I was also often developing for embedded target hardware which was a good deal slower than a typical PC, so older PCs were more realistic for testing.
The advantage of always being on the latest hardware is if you're developing large software systems which take a long time to compile, or if you're doing something which fundamentally requires significant number crunching - such as games or computational chemistry.
Seeing as developers are a small percentage of the total workforce, even if all of them complained, it would be drowned out by the mass of people who have computers good enough for their jobs. The cost of having to deal with ordering and supporting different computers (beyond just laptop versus desktop) is not 0. The quantifiable gain from having some people have better computers is very difficult to calculate. Thus, it's easy to just give developers the slow boxes and listen to them complain.
Related: I've previously emailed to ask why my company has a policy that every PC must be shut down every night even though it can take up to 8 minutes to start up in the morning (we have a lot of required anti-virus/spam/malware and disk scanners that run). I was told that the company expects us to not be fully efficient all the time; go get some coffee while your computer boots. Also, different budgets cover PC cost versus payroll.
Who told you that? Well, actually, it doesn't much matter to me, but if you were told that by IT, you may discover that forwarding that email up to management will have exciting and entertaining consequences. As long as you don't mind making an enemy or two.
On a more practical note, some machines can be set to turn on at a certain time, via BIOS settings, or hardware that takes advantage of Wake-On-LAN or other such things. If you're really personally bothered, you may be able to take advantage of that. You may also want to consider trying to simply suspend the machine. It'll mostly look and be off, but should come up more quickly... if it works.
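Wake-on-LAN in particular is easy to script yourself, if IT hasn't locked that down too. A minimal sketch, assuming the NIC/BIOS has WOL enabled (the MAC address is hypothetical):

    import socket

    def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
        # Magic packet: 6 bytes of 0xFF, then the target MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake_on_lan("00:11:22:33:44:55")  # hypothetical MAC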
Our IT is some low paid people answering the phones in a foreign land. They don't much care if I complain. The IT managers higher up who make the decisions aren't any better at listening to employee complaints. Their motivation appears to be save as much money (that they decide to count) as possible. Their impact to costs outside of the IT budget aren't high on their priority list.
My laptop gets locked in a cabinet at night; I just suspend it and everything appears to be off. If you have a desktop, that's not necessarily possible, and the Dells we have usually have some blink'en lights still going when the machine is asleep.
BIOSes are generally locked down so employees can't change them. Laptops have mandated full disk encryption that uses the MBR in special ways, so dual booting is not possible.
I actually was told once that I had left my 21" CRT on overnight. The little stand-by light was still on when I left one day. That's a big no-no. There's somewhat random checks for these types of things. Fun, eh?
Thankfully, the work I'm doing is interesting and the pay's good. The politics and budget antics are rather annoying but I get the impression that's the way the world works at large companies. Maybe I'm wrong?
I think Google has a lot of MacBooks but I don't know where I get that from.
http://digitizor.com/2011/07/12/google-android-linux-dream/ and http://news.ycombinator.com/item?id=2755050
It's the sort of thing which can easily consume 40 person-hours - even without considering the inevitable time lost playing around with the new toy.
Finally, there's dealing with the inevitable pissing and moaning which accompanies any change - some people just want their damn computer left alone because it works fine, thank you very much. Others wanted the 15" MBP, not the 17", while the OSS fanbois cannot believe that they were once again thwarted in favor of commercial software.
It also depends on what you're doing. If you're after someone to produce great pixel-perfect designs, get them some decent screens. If you have an app where the latest i7 and SSD can cut compilation time in half, you can probably make a good gain there by not having the programmer get distracted each time they compile.
The reality is that organizations don't change, and if programmers are considered code monkeys at yours, you need to GTFO if you aren't one. The reason your coworkers don't do more to change the status quo is because it's great for them: no real obligations and a nice bump in titles every five years. They don't need a better computer because they don't do any work. If you actually want to program computers, though, then you need to look for other opportunities.
<jedi hand wave> This isn't the employment opportunity you're looking for.
If you work for a small company and have this problem, it's simply because they're cheap.
I've seen many developers sit around staring at laptop screens while I'm working on my 2x24" monitors.
Do not feel like you're being greedy. Ask your boss. The worst he can do is say no, but the likely thing he'll do is 1) ask why, 2) say yes.
The founder, a marketing major, had a hard time explaining things to their investor once the inevitable end was clear.
I'd ask the question: why wouldn't a company trust the developers' specs for adequate hardware?
"8. Spend little.
I can't emphasize enough how important it is for a startup to be cheap. Most startups fail before they make something people want, and the most common form of failure is running out of money. So being cheap is (almost) interchangeable with iterating rapidly.  But it's more than that. A culture of cheapness keeps companies young in something like the way exercise keeps people young."
Take those few moments of lag or downtime or whatever and enjoy your day--do something else that's productive or have a drink or something; make the best out of life.
The bigger hassle for me is that upgrading machines causes some downtime, so it's better to buy loaded boxes and replace them slightly less frequently (every 18-24mo) vs. a new machine of lower spec every year.
Tools also are a great place to spend money; having a great build/provisioning/tinderbox/etc. system saves developer time, and doesn't add communications complexity.
The remote debugging setup process isn't very slick, but it doesn't take long to figure out, and with a bit of folder-sharing you can keep all the files on your work PC so that it's all nice and convenient. I did this for quite a while, working on Windows, and it worked well. I don't recall any significant problems with it.
If cost is what's preventing you from getting a nice PC, then that's one thing, but if the issue is just ensuring that slow code doesn't go unnoticed, this approach will probably keep you/your programmers happier...
Right now I write code for a 32-CPU 64GB RAM 300MB/s machine, should I request one exactly like it for myself? And it's not even a hardware issue - the OS, DB and middleware licenses to use on that HW cost >$1M.
If you write Java code, you need 2GB for the IDE on a large project.
But then, many people don't write software to run on customers machines.
I'm surprised that most developers don't take the same approach to tools. If you're using Eclipse or (god forbid) a text editor to write code, spend a minute and tally up all those 5-second chunks of your life you've spent this year looking up the names of variables, objects, whatever, and running into runtime errors from typos. Multiply by $$$/hour and see what you could have spent on a decent IDE.
JetBrains makes IDEs for pretty much every language out there by now, and any one of them will pay for itself in about four days.
By the way, many folks are highly productive using a text editor to write code. They might start off slower but end up actually learning the language and libraries they are using. IDEs that step in and try to take over while I'm typing drive me insane.
Back to the primary topic, certainly it's a false savings to skimp out on buying an adequate machine for the task, but for most devs who spend most of their time in an IDE or editor, something like buying last year's best is often completely adequate and much more economical. To expect the "best machine you can get" without regard for cost is not realistic.
This is a bit of FUD. There are plugins for most text editors that give them features that most IDEs have. For example, my Vim setup has tab and code completion, snippet management, syntax error highlighting in real time, code folding, document and file search, etc. I can also traverse a file faster in Vim than an IDE, my fingers do not ever need to leave the keyboard, and it's completely free (as in beer and freedom).
For example, being able to click through an entire app from Spring XML config, all the way through your own source code, into library files, and even the source for the JRE makes a huge, huge difference in productivity.
And languages/libraries/frameworks change all the time; the IDEs tend to follow these things closely. Having to maintain your own hand-crafted elisp (or whatever) takes a lot of work. I've been there, and I don't plan on going back any time soon.
Nope, I love my text editor and I see no reason to start using an IDE. I've used IDEs before and they're all monolithic, slow, bloated tools that slow me down and make my life harder.
Hi! I'm <strike>clippy</strike> Netbeans. It looks like you're trying to write a program. Can I autocomplete your words wrong, reformat your text in ways that don't make sense or follow your formatting guidelines, and then crash?
Because of useless software (cya-ware) installed by corporate IT.
Useless anti-virus crap (ever hear of sudo?), ridiculous hard drive encryption, remote monitoring/management stuff.
Just working in Eclipse, I often wait on every single keystroke. Yes, Eclipse is mostly a pig, and I've disabled/closed everything I could. But I have zero hassles working on my personal laptop, even when I have video (or audio) running too.
Our management just tells us to go context switch onto something else, there's always lots of thrilling email answering and documenting to do. At other times you can switch to fixing bugs or working on another feature, even though I personally cannot stand continuous context switching as it decreases the quality of my work.
Question is: will a 2-3 hour task become a 30 minute task with SSD drives?
That said...fast is good...faster is better...ridiculously fast is just fine!
All of our workstations have a minimum of two monitors, some three. A few are overclocked and use fluid cooling to keep them from going up in flames.
And, yes, this doesn't take into account that the company might not want to support your idiosyncratic hardware choices.
Getting a new machine isn't just a case of plonking a new one on your desk; I'd imagine a lot of red tape goes on in the background.
One of the best benefits? I can look something up in no time.