
You grossly underestimate our industry's ability to wank and bicker over everything anyway.

You also overvalue uniformity, or undervalue the antifragility of not caring how the code was created. The most fragile systems I've ever had the displeasure of being near required a very specific build machine because the environment couldn't be replicated, in large part because the original development team did what you propose and mandated that all development happen in the same controlled environment.




Hyper-standardized, pants-shittingly brittle build environments stemming from the mentality the OP is suggesting are their own special sort of hell. That's "works on my machine" applied to a much bigger surface.

Yes, a team has to agree on the basics, such as language and build system. However, cornering yourself into such a tight environment leaves a lot of opportunities to have the rug pulled out from under you.

I currently work on a team that writes DAQ software in C++ for Windows. Each developer has their choice of everything (editor, VCS front-end, debugging and profiling tools, etc.) as long as the build system runs, their code meets the team's standards, and they're productive with it. We've found that we can basically close our eyes, point at a random target system, and odds are our code base can be ported to that target with minimal effort. That's possible because somebody has incidentally tested just about everything. At one point in the past we had a pretty narrow, rigid build process, and it bit us in the ass and hamstrung more developers than it was worth.
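
To give a rough idea of what that looks like in practice, here's a minimal sketch (the class and file names are made up for illustration, they're not our actual code): the platform-specific bits live behind a small interface, and everything above it stays target-agnostic.

    // Hypothetical sketch: the portable core codes against an abstract
    // device; each target supplies one implementation plus a factory.
    #include <cstddef>
    #include <cstdint>
    #include <memory>
    #include <vector>

    struct DaqDevice {
        virtual ~DaqDevice() = default;
        // Read n samples from the hardware; the details are per-target.
        virtual std::vector<std::uint16_t> readSamples(std::size_t n) = 0;
    };

    // Defined once per target (e.g. win32_daq.cpp, linux_daq.cpp), selected
    // by the build system rather than by #ifdef soup at the call sites.
    std::unique_ptr<DaqDevice> makeDaqDevice();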


Out of honest curiosity, how did it bite you in the ass? I haven't done commercial C++, so I'd like to know.


We were hit with the reality that the software (and/or components of it) might need to run on Linux, OS X, and other targets in the near future. At the time, everything was done with a single IDE, a single compiler, and specific old versions of libraries, targeting only Windows, with no expectation that it would ever have to be done any other way, even though we probably should have assumed that day would come. It was always assumed it would just work, but life comes at you fast when you actually have to do it. After weaning the project off of environment-specific tools, porting has become a lot more like what we'd originally assumed it would be.

C/C++ are unusual in that the build process, and integrating other people's code into it, are inordinately hard problems in their own right. Different compilers behave a little differently, different operating systems behave a little differently, and some of the niceties a given IDE or tool provides behave a little differently, depending on the exact circumstances you're dealing with. The safe thing to do is to make no environment assumptions and keep everything as portable between environments as possible. You still need your primary targets, but it's reassuring to know that you can jump ship if and when you need to.
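
To make that concrete, here's a toy sketch (not from any real project): even something as small as a population count is spelled differently per compiler, so you wrap the difference once and keep the assumption out of the rest of the code base.

    // Toy example of per-compiler divergence: one operation, three spellings.
    #if defined(_MSC_VER)
      #include <intrin.h>  // MSVC's __popcnt lives here
    #endif

    inline int popcount32(unsigned int x) {
    #if defined(__GNUC__) || defined(__clang__)
        return __builtin_popcount(x);          // GCC/Clang built-in
    #elif defined(_MSC_VER)
        return static_cast<int>(__popcnt(x));  // MSVC intrinsic (POPCNT-capable CPU)
    #else
        int n = 0;                             // portable fallback, assumes nothing
        while (x) { x &= x - 1; ++n; }
        return n;
    #endif
    }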


Thanks for the input. Couldn’t that be solved by automated building and testing over multiple VMs?


Most anything in software can be solved by automation. The point here is the antifragile nature of every new team member being a slight shock to the system: each different setup shakes out hidden environment assumptions.

You still need solid testing to make sure you don't miss regressions. So, we aren't claiming this is a replacement for solid engineering practice.



