When I started at my first company, they had a very complex VB application deployed at dozens of customers around the country, each with their own particular needs, of course.
There were a LOT of global variables (with seemingly random four-uppercase-letter names) controlling everything.
At some point, the application had some bugs which did not appear when it was run in debug mode in Visual Studio. The solution was obvious: install Visual Studio on site for each customer and teach the users to run the app in debug mode from Visual Studio. I don't know how they convinced the users to do this or how they handled the licensing, but it was done.
What happened next was even worse.
There was no version control, of course; the code lived on a shared disk on the company's local network, copied into multiple folders, each holding its own version, with no particular logic to it either: V1, V2, V2.3, V2a, V2_customer_name, V2_customer_name_fix, ...
After that, when there was a problem for a customer, a programmer went there to debug and modified the code on site. If the bug/problem impacted other customers, we had to dispatch someone to each customer to copy/edit the code for all of them. But if the problem was minor, it was just modified there, and probably saved to the shared folder in some new folder.
What happened next was to be expected: there was no consensus on what the final version was, each customer having a slightly different one, with some still shipping bugs that had been fixed for others years earlier.
This is amazing. I can so well imagine a bright young hire joining that team, helpfully offering to "setup this thing called git" only to be laughed out of the meeting by all the "senior" staff.
Astonishingly, it took a long time for revision control to become widespread.
Around 1991, when Cygnus had 6-7 employees and was based in the apartment complex where I lived, none of the GNU codebase was hosted in any sort of revision control. Everything was FTPed around as obscurely named tarballs. We had gathered something like 27 different forks of gdb floating around the net, for example. This was back when forking was generally considered a tragedy (something I managed to change five or six years later).
Rich Pixley came and said “all of our code should be in revision control, and I want to use this newfangled thing called CVS.” Michael was OK with it but John and I were steadfastly opposed. We agreed to an experiment, grudgingly, subject to a whole whiteboard of absurd conditions (“must be transparently integrated into emacs so we don’t have to know it’s there”).
Pixley agreed to all of that and then ignored all of it completely. It was immediately such a win that everybody adopted it without complaint, including us two obstreperous a-holes.
A few years later, a preventable crisis was how revision control first became client-server.
Git is not a natural development at all. Obviously, it is a standard right now.
But as a hobby coder in my teens, I started out with FileZilla, copying over index2.php, index3.php, index_final.php, index_final_2.php, and all of it worked well enough at that point.
I took a little break from that hobby, and it still took me a long time to build intuition around Git when I started working professionally.
Obviously, now that I have over a decade of professional experience, I think it's a no-brainer, but I don't think it's natural to understand it when it's first suggested to you. I at least kind of had to go along with it and trust that it was good. It felt frustrating at first, as many other things do.
Git was FAR more difficult for me to understand in my 20s than PHP and how to do anything with it had been in my teens. Coding is raw logic; Git is about imagining teamwork at scale, which is a whole other thing.
The funny thing is that I feel I was overall still more productive in my teens building stuff solo with FileZilla and PHP, compared to a corporate environment now with all the processes and thousands of engineers.
Git is still a baby as these things go. Software version control systems go back to the early 1960s and descended from manual processes used to manage documents in large organizations, blueprints, and such from the early 20th century. Don’t imagine the Manhattan Project functioned by just passing around bundles of paper! There were staffs of people whose job was to manage document control! And I don’t mean “control” in the sense of secrecy, but making sure everybody was on the same revision at the same time.
And for a field so steeped in building its own tools and automating human effort in development it is indeed astonishing revision control took so long to be accepted.
You wouldn't believe the amount of crap I take whenever I introduce very basic version control at the various 3 to 6 man shops I find work at these days.
I'm 100% sure that once I left that the devs went back to remote server crash and burn FTP development...they couldn't be bothered with the "hassle" and unneeded headaches of git.
> I'm 100% sure that once I left that the devs went back to remote server crash and burn FTP development...they couldn't be bothered with the "hassle" and unneeded headaches of git.
Have you considered introducing Mercurial or even Subversion?
While Git may be a kind of an industry 'standard', if you're starting from zero, some of its concepts may be a bit mind-bending for folks, and it has often been commented that Hg seems to have a more beginner-friendly interface.
And if branching isn't going to be used (a large strength of git/hg), then Subversion may have an even simpler mental model (svn of course does branching, but the others are more optimized for it).
If folks are doing FTP-push deployment, then moving to 'just' SVN-push (commit) deployment can be an improvement.
There was a new hire who was told to make a small change in a piece of server software, found the repo, read the docs, made the change, ran it in testing, and pushed to prod. Cue the 'senior engineer' screaming bloody blue murder because he'd been directly monkeypatching the servers for 3 years with no vc and no backups.
This could have been written in 2014 and in 2004 (hi, it's me). There will always be people who don't use it and others who won't remember a time when they hadn't used it :P
Hello, fellow old person. I was just remembering PVCS (Polytron Version Control System) since it was the first I worked with back in the 80s. Now I see that it's still out there, with the latest release in 2021. Which is insane.
>But senior developers can understand the problems that they claim to address, and why they are important and common problems.
Senior developers have simply grown accustomed to the warts, and accepted their fate - some even celebrating their mastery of the warts as necessary knowledge.
The only pragmatic reason not to change things is the compatibility chaos that will ensue and the resources required - not that problems don't exist.
It only takes doing index73.php and index_old3.php for a few months and then eventually screwing up (unless you’re perfect) to realize how dumb putting numbers and _old4 at the end of names is. Then at that point, you naturally go look if there’s a better way.
> Then at that point, you naturally go look if there’s a better way.
That might be true now but it wasn’t always. Plenty of shops didn’t have any version control at all. And the build was whatever came off your local dev box. And “local dev box” wasn’t even a term…
It's a false dichotomy. Before git there were other version management systems which would've fit your use case much better than git. Subversion is perhaps the easiest.
Git is pretty unnatural, but CVS? That is much closer to the "copy a shared file back and forth between people", except with nice things such as "keep track of who is editing what", "know what the latest edition is", and "keep track of the history".
That said, if I was going to start out teaching someone coding today, version control would be timestamped .zip files.
Ouch, please don't. Anything else like bzr or hg or svn will be easier to grasp than git, can work locally too (IIRC for Subversion) and not much harder than zip files — don't get them used to the wrong mental model and bad habits.
Entirely beside the point I was trying to make. I wasn’t trying to say that it’s a good idea for a team, but that git is complicated. Too complicated for beginners.
Well yeah, for hobby use you're right that it is "normal" that it took a long time to get there. But professional use is completely different. That's the part they are referring to, I'd say.
> Git is not a natural development at all. Obviously, it is a standard right now.
Git is actually an unnatural development. Its UI is atrocious. And it worked like crap on Windows for forever.
Over time, I taught non-computer people who used Windows all of CVS, Subversion, and Mercurial. They got each one and why things were better. The first time they had to recover something, they really got it. Source control got out of their way and was generally fine.
Then Github won due to VC cash dumping and foisted Git on all of us. Source control was no longer fine.
Thankfully, there is now Jujutsu(jj) which works on top of the Git storage layer so you can forget about the craptastic UI of Git. Source control is now fine again.
SourceForge had a messy interface and frankly I hated it. Google Code was like frozen in time, like most Google projects. Microsoft’s thing — I don’t even remember the name — felt half assed and like they were going to abandon it… and they abandoned it. There were also others… none of which I even think were serious enough.
Also Git won because SVN and CVS were centralized VCSes and you needed a server. I can just make Git repos before I even know if a project is serious.
There were other distributed VCSes of course, like Hg, but they either cost $ or weren’t backed by a major user… like Linux. I admittedly just waited this one out and chose Git because it was more popular.
Yes, but its source control shifted to git years ago (I think you can still use TFS, but it's strongly legacy) and git is much better than TFS ever was.
True on both counts. (They’re never going to be able to kill TFVC entirely, but it’s disabled for new projects by default, and they’re going to take away the switch to reenable it.)
GitHub won because they built a great product. It was better than all the alternatives and people flocked to it for that reason.
Git itself is mediocre, and some other DVCS like Mercurial could have won out, although HgHub really doesn't have the same ring to it. The halo effect from the Linux kernel was also a factor. But the VC cash dump and Microsoft buyout came later, people used GitHub because it was better than the rest.
Mercurial lost because it had the wrong mindshare. It was the Betamax of version control and while it had a lot of positives, it never gained the critical mass to overcome VHS.
It's sad that this discussion about git being the standard turned into "Why GitHub won" -- shows that people conflate the two. Who said GitHub had anything to do with git becoming the standard in the first place? (I only even heard of GitHub long after we'd adopted git at a previous workplace.)
git didn't win because of GitHub. Sure, GitHub helped a lot. But actually, it's because it's a DVCS, and it does branching and merging way better than anything I've seen so far.
You can thank BitKeeper for all of this, and Andrew Tridgell for forcing Linus Torvalds into creating git.
You don’t think if the company behind GitHub went all-in on mercurial that might have emerged as the winner? There were plenty of companies using either. Git wasn’t the only distributed game in town. I definitely think GitHub had a lot to do with it.
Sourceforge had a flat namespace for projects and required you to manually apply for each project. It was a PITA and that added a huge barrier to entry.
Plus it didn’t have a mechanism to send contributions. I think GitHub “won” because of its web-based PR system.
Sourceforge was complete garbage though. I hated, hated when projects were hosted on it. It was slow, full of ads, impossible to find what you need to download..
GitHub is to sourceforge what Facebook was to MySpace. MySpace was first but it was buggy as hell.
You're kind of disproving the point you're trying to make, IMO: Saying "GitHub made git the standard!" is kind of like saying "Facebook invented social media!"
Nope, MySpace did. Or blogs, for that matter. Facebook just co-opted the phenomenon, and has for many become synonymous with it. Just like GitHub has managed to do with git.
SF started to be filled with ads only in a second phase. From memory I would say around 2010, and checking Wikipedia it says it changed ownership in 2012. But when it was the de facto "central repository" for Linux software codebases, I don't remember it being full of ads.
That comparison is pretty harsh and really underemphasizes how awful sourceforge was. Myspace was mostly fine, death by feature creep. Sourceforge was a flaming pile of garbage that was poorly designed, ad laden, AND silently bundled in ad/spyware to normal downloads.
A more apt comparison would be comparing Facebook to a hypothetical social media site that when you click on a thumbnail of a user's image, you get a fullsize image of something like goatse...which thankfully doesn't exist(yet).
Lots of companies were doing Git and Mercurial "forges" at the time. Many of them were better than Github.
Everything was evolving nicely (including Git and Github) until Github used VC money to offer everything for free to blow everybody out in order to lock people into their product (for example--export of anything other than code is still hit or miss on Github).
At which point, everything in the source control space completely collapsed.
That's such a weird rewriting of history. I know blaming VC is very fun and all, but Bitbucket, originally centered around mercurial, had just as much resources as GitHub and even more. Tons of early contributors to git and the ecosystem were from Google and Microsoft. Microsoft started using and shifting towards git when GitHub was still a baby, etc.
> That's such a weird rewriting of history. I know blaming VC is very fun and all, but Bitbucket, originally centered around mercurial, had just as much resources as GitHub and even more.
You might want to go review your history before accusing someone else of that.
Github took $100 million from VCs in 2012. It then took another $250 million from VCs in 2015. Who the hell else was even within an order of magnitude of that in the same timeframe? Nobody. (Gitlab took almost the same amounts but did so somewhere between 5-8 years later depending upon how you count).
Bitbucket got bought by Atlassian in 2012. Atlassian was bootstrapped until it took $60 million in VC in 2010 and had revenues (not profits) of about $100 million in that time frame. It had nowhere near the resources to be able to drop the equivalent of $350 million on Bitbucket between 2012-2015.
29th September 2010[0] (just over a month after Atlassian raised the $60M.) ~18 months before Github took any VC money. If the VC money was key to Github's success, why did Atlassian/Bitbucket's 18 month head start not get them anywhere?
By 2012 the writing was already on the wall. This was already well into the first PaaS era with Heroku, Engine Yard, etc. Github was bootstrapped, and using git was a grassroots movement. It was just better than what most people had been using. I never looked back after first switching from SVN to git in 2009.
Sure, but 2010 to 2012 might as well be two different eras in the context of VCS adoption. Things changed very quickly.
In any case, that doesn't really matter considering that git had big players adopting it before GitHub got any sizeable investment. And I'm not just talking about Linux. Rails migrated towards it when GitHub was still in beta.
If you think SF+CVS is equivalent to GitHub+Git then you never used SF+CVS. Git won because of GitHub, specifically, not because generically it could be hosted on the Internet.
To be fair, by the time git came around, SourceForge had been enshittifying itself for 10 years or so. Git would be popular without GitHub; GitHub just makes things easier.
Which is the transition between steps 1 and 2 of Embrace, Extend, Extinguish. Though the last step is nowadays perhaps unnecessary, depending on how you look at it: Having co-opted FOSS software like git, what you need to extinguish is not that itself, just any other way to use it than your own. The main example of step 2 was of course "Pull Requests".
yeah, git is just the right layer of indirection to support everyone's weird take on how the workflow should go, so you can just use it however you want.
Yes, git has solved a big problem with version control: it has made transactional atomic changesets (commits in the git parlance) mainstream. A git repository is a tree of atomic changesets that group changes into meaningful chunks as opposed to a versioned tree of files where changes to each file are harder to trace back to the intent, i.e. whether or not they are related.
Atomic commits can also easily be moved around (since history is a graph of whole commits rather than per-file version chains), and they also make merging simpler in many scenarios.
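To make the terminology concrete, here is a minimal sketch (the file names are hypothetical) of what an atomic changeset looks like in day-to-day git use: one commit records a related change across several files, and history shows that intent as a single unit.

```sh
# One logical change touches three files; git records it as a single commit.
git add src/parser.c src/parser.h tests/test_parser.c
git commit -m "Rename parse_line() to parse_record() and update callers"

# The whole change appears as one entry in history, so the intent stays traceable.
git log --stat -1
```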
Larry McVoy was famously wont to engage in trolling upon the Linux Kernel Mailing List, whereupon he did boast of his BitKeeper, which possessed atomic changesets. Concurrently, he did deride Subversion for its lack of the same. Thus, a great drama did ensue, one which ultimately bestowed upon us the creation of git.
git has also succeeded as a DVCS where others have faltered, for various reasons. For there have been monotone, darcs and other such systems; yet, it is chiefly git that has endured.
> Larry McVoy was famously wont to engage in trolling upon the Linux Kernel Mailing List, whereupon he did boast of his BitKeeper, which possessed atomic changesets. Concurrently, he did deride Subversion for its lack of the same
Subversion definitely has atomic commits. That was its major advance over CVS.
The major difference between svn and git/hg is svn assumes a centralised repository, whereas git/hg are designed to work as distributed repositories that exchange change sets.
Turns out that you can build sophisticated workflows on top of the features they added to support that distributed model - things like PRs. These things are useful even if you have a single centralised monorepository of the style svn insists on.
But atomic commits aren't part of the distributed feature set. You need them in a monorepository too. After you've experienced your CVS repository being turned into a mess by two people doing a commit at the same time, that becomes obvious.
> it has made transactional atomic changesets (commits in the git parlance) mainstream
Nope. Subversion made them mainstream:
"An svn commit operation publishes changes to any number of files and directories as a single atomic transaction. In your working copy, you can change files' contents; create, delete, rename, and copy files and directories; and then commit a complete set of changes as an atomic transaction.
By atomic transaction, we mean simply this: either all of the changes happen in the repository, or none of them happens. Subversion tries to retain this atomicity in the face of program crashes, system crashes, network problems, and other users' actions." [0]
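A small sketch of that guarantee in practice, with hypothetical file names: several kinds of change are scheduled in the working copy and then land in the repository as one revision, or not at all.

```sh
# Schedule several kinds of change in a Subversion working copy...
svn add docs/INSTALL.txt           # new file
svn delete src/legacy.c            # removal
svn move src/util.c src/common.c   # rename
# ...then publish them together as a single atomic revision.
svn commit -m "Replace legacy helper with common.c and document the install steps"
```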
git isn't foisted on anyone, although I will acknowledge that if you're working anywhere remotely web-y, you'll be required to use it.
But the vast majority of people I personally see who complain about git simply don't know how to use it, and apparently don't care to learn. They say it's hard, and then they joke about never learning more than three git commands. Can't we do better? Is that where the bar is?
It's not that the basics of git are hard. It's that there are 30 footguns you're forced to deal with every time you want to do anything, just because someone somewhere once had to use one of them -- probably caused by another one of them.
Well, that’s in the eye of the beholder. Yes, I hate the ‘program command’ syntax (“git add”, “git commit” etc) but I just call git-add etc and for me those commands are pretty clear.
But I understand how git works. I imagine most people treat it as a black box and then its behavior probably is rather obscure. I don’t think it was intended for that use case.
I acknowledge that Git is really good on a technical level, but don't like the CLI experience regardless.
Instead, I much prefer to use tools like GitKraken (paid), SourceTree (free, limited platform support), Git Cola (free, lightweight, a bit basic) and know that most of my colleagues just use the Git integration in their IDE of choice.
Working that way, it's a decidedly pleasant experience, not unlike using visual merge tools like Meld, where you can also stage individual lines/code blocks and operate with branches and tags in a very visual and obvious manner.
That said, sometimes the support for Git LFS and things like submodules is all over the place, and I've had cases where not adding node_modules or something like that to .gitignore has made the UI unresponsive with how much stuff initially shows up in the working copy, so sometimes you still have to drop down to the CLI and do a fix here and there.
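For the node_modules case specifically, the drop-to-CLI fix is short; a sketch assuming the usual npm layout:

```sh
# Tell git to ignore the dependency tree...
echo 'node_modules/' >> .gitignore
# ...and, if it was already tracked, remove it from the index
# without deleting it from disk.
git rm -r --cached node_modules
git add .gitignore
git commit -m "Stop tracking node_modules"
```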
git has plenty I like and lots of things I would rather not do, but I find most GUIs too obfuscating. I do patch adds, I prefer resolving conflicts in the raw file, and as many plugins as I’ve tried for vim to make git “pretty”, they just don’t hit for me. I used Kraken for a while but ultimately it just felt like unnecessary fluff - knowing the commands I’m using makes me feel like I know what’s going to happen, and GUIs make me feel like I’m not sure. I’m happy for my coworkers that use them, and I’m also usually the first person to get a ping when something beyond what makes sense in the GUI comes up.
I'm consistently baffled by the people who don't understand how git works who then complain that it's hard. Yes, of course something you don't know is hard. Version control is a hard problem; I'm not sure why so many expect it to be unicorns and rainbows.
>> Git[‘s] UI is atrocious. […] Over time, I taught non-computer people who used Windows all of CVS, Subversion, and Mercurial.
> Well, that’s in the eye of the beholder. […]
I would say it is not.
Usability testing is a thing, and products can get better and worse scores when trying to do the same task / reach the same result. I'm not aware of any published studies on the topic, but I would not be surprised if Git got lower scores than Hg on UI/UX, even though they can basically do the same thing (distributed development).
Given the GP trained people on multiple version control systems, including another distributed one (hg), and people mostly had a problem with Git, I would say Git is the problem.
I am unable to do anything with Photoshop beyond starting it and exiting. I have no idea how or even what it does beyond somehow modifying photos. That’s not photoshop’s problem, it’s that I don’t know what I’m doing. So I just don’t use it.
I only hear git complaints from people who don’t have an internal model that reflects what git is doing (I’ll agree that it uses a couple — but only a couple — of terms in a way that might violate some people’s intuitive model). But if you know what you want to accomplish the command line is pretty unremarkable.
But when you don’t know how the tool works you easily get snarled. And when you blindly resort to various “cheat sheet” sites it’s a crap shoot whether you “fix” your problem (how would you know?) or make things worse. That’s like me with photoshop.
> I am unable to do anything with Photoshop beyond starting it and exiting. I have no idea how or even what it does beyond somehow modifying photos. That’s not photoshop’s problem, it’s that I don’t know what I’m doing. So I just don’t use it.
As stated up-thread:
> Over time, I taught non-computer people who used Windows all of CVS, Subversion, and Mercurial. They got each one and why things were better. The first time they had to recover something, they really got it. Source control got out of their way and was generally fine.
Seems that things only got hard(er) teaching people with Git.
> I only hear git complaints from people who don’t have an internal model that reflects what git is doing
Yeah, every time someone says that:
> git gets easier once you get the basic idea that branches are homeomorphic endofunctors mapping submanifolds of a Hilbert space.
> Pixley agreed to all of that and then ignored all of it completely.
Hahaha that's brilliant, and an important lesson for junior developers. Sometimes this is the best way forward. High risk of course. But the reward can be great.
We had the right corporate culture to make this attitude successful. By the time we were a couple of hundred people, though, and had hired some MBAs, that culture was mostly gone.
It requires a level of trust and a level of performance, that I now believe doesn’t scale. Note that we had only a single junior developer, an intern, who wrote our bug tracking system but didn’t otherwise interact with the code base. The rest of the team consisted entirely of experienced senior developers, including management. I think at up to about 25 employees, the shortest working experience on the team (apart from the intern) might have been 10 years of shipping code.
But I really respect Rich for taking this approach which is why I referred to him by name.
I don't think you'd even necessarily need to ignore.
Roll it out in phases. You aren't going to have to deliver the final finished solution all at once.
Some elements are inevitably going to end up being de-prioritized, and pushed further into the future. Features that do end up having a lot of demand could remain a priority.
I don't think this is even a case of "ask for forgiveness, not permission" (assuming you do intend to actually work on w/e particular demands if they end up actually continuing to demand it), but a natural product of triage.
> Some elements are inevitably going to end up being de-prioritized, and pushed further into the future. Features that do end up having a lot of demand could remain a priority.
Why, that sounds positively... Agile. In the genuine original sense.
I remember working somewhere where revision control actually created a mess.
I believe the problem was a combination of xenix (with limited length filenames) and sccs (which used 's.<filename>' to store each file).
Anyway, long filenames like longestpossiblename.c got checked into revision control as s.longestpossiblename, because the added s. prefix pushed the name over the length limit and the .c at the end got truncated.
Later builds would fail since the checked out file longestpossiblename was no longer a c file and wouldn't compile.
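A rough illustration of that failure mode; the actual Xenix name limit isn't given above, so the 21-character cap here is only an assumption chosen to reproduce the example:

```sh
# SCCS stores each file as "s.<filename>"; on a filesystem that silently
# truncates long names, the added prefix can push the ".c" off the end.
LIMIT=21                         # assumed name-length cap, for illustration only
name="longestpossiblename.c"     # 21 characters: fits on its own
sccs_name="s.$name"              # 23 characters: no longer fits
stored=$(printf '%s' "$sccs_name" | cut -c1-"$LIMIT")
echo "$stored"                   # -> s.longestpossiblename  (the .c is gone)
```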
My first job out of college was on a university administrative system where all the source files were managed by RCS. I came on in '97, but the system had been in operation in at least that state since 1995. Obnoxiously, the vendor layered some secret sauce on top of it in Makefiles for the actual checkins and checkouts, so you did something like `make co F=foo.c`, which totally messed up filename completion. I had it aliased in tcsh within minutes to preserve my own sanity.
When my consultant contract finally came to an end in 2003 they were still using it.
I was one of those once. Tried to get CVS in a project.
Then some other dev committed 9MB of tabs 0x09 at the end of a file. Then the site was "slow" (cause the homepage was 10MB). And the blame went to...CVS somehow.
CVS was notorious for doing "text/binary" conversions (CR/LF line endings to CR and vice versa), sometimes inappropriately. More than once, it resulted in files where every other line was empty. I can very well see this happening several times, resulting in exponential growth of whitespaces.
Assuming a key repeat rate of 15Hz (a number I admittedly just pulled out of thin air), they would have had to lean on the Tab key for almost exactly 1 week.
The guy ate, slept, possibly got married while his finger was still accidentally holding down that key.
The symptom was the website wasn’t loading, just spinning forever.
Looking at the http logs, around 5pm an ip address started spamming a whole bunch of requests to the same url. The same ip was doing pretty normal stuff on the website about an hour earlier.
My theory is holding down F5 key would cause the page to reload about 30 times a second. The website was not able to handle that many requests per second and it effectively became a denial of service attack.
This was around 2007-2010 and I think by now browsers have stopped repeating a reload if the F5 key is held down.
I worked briefly on a site that had this rankings page for users, and it was done by going player by player, and pulling the players table each loop to compare the current player to all the others. For things like "results against women"
Anyway you could DDoS the site by requesting that page. You could actually watch the page fill in as it computed, I want to say it was about a ten second load time.
I had a visceral reaction to this comment! I once joined a company doing ETL with Apache camel and a half dozen underpowered pet machines. Ingesting their entire dataset and running a suite of NLP models took 3-6 months (estimated; it was so slow nobody ever reprocessed the data to fix bugs or release improvements). I drew up a simple architecture using Kafka, hbase, and MapReduce to implement a lambda architecture. The CTO very patronizingly told me that just because something is shiny and new it doesn't mean we need to implement it. This was 2017 :laugh-cry:.
But maybe this isn't really what they felt that they needed at the time? I don't mean to defend bad practices, but your comment makes it sound like nobody had tasked you with re-architecting the business, and you took it upon yourself to show them how it should be done (in your opinion), without having earned the necessary trust. This might have also come across as patronizing, or at least antagonistic, and in any case unbeneficial. Not saying that's the case as I obviously wasn't there, just something to think about.
Fair comment. And I'm usually suspicious of young engineers wanting to implement the new hotness and I'm also a fan of "if it ain't broken don't fix it". In this case, though, the system was in very rough shape. Our customers were complaining about data problems which we had no way to fix (short of manually editing the prod db, which was the SOP). I definitely took it upon myself to do something that nobody had asked for, but it was because the people in charge were entirely asleep at the wheel! They did not last long in their positions.
It's a shame the CTO was patronizing. I've generally found this to be the attitude of many IT workers in similar positions. I would recommend trying to allocate (work) time to prototype and get important back of the envelope metrics that they think are valuable along with those that you think are valuable.
At least that's what I would ask of anyone who was trying to improve a system (and not just the developer's circumstances, which I think is perhaps what the CTO was cautious of).
I was in this position before but would point out that there is a tactical approach when you know that others will not follow. I set up a cron job (on Windows, not really cron) to scan the network location for updated source files. The git repo was on my drive and on the corporate GitHub account, safe from those who should have been using it. Whenever files changed it would just auto-commit on the main branch with the username included in the message. I could do whatever I wanted on my own branches, keep track of what others were doing, and essentially wield git. You don’t have to try to be a hero inflicting proper source control upon your teams (their perspective) to still occasionally appear like a wizard to save them from inevitable, oft-occurring peril.
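A minimal sketch of that kind of shadow repository, written as a Unix-style cron job (the share path, repo location, and remote are hypothetical; the original setup used a Windows scheduled task):

```sh
#!/bin/sh
# Mirror the shared folder into a local git clone and snapshot any change.
SRC=/mnt/shared/project          # network share the team edits directly
REPO="$HOME/shadow-repo"         # local clone, also pushed to a private remote

rsync -a --delete --exclude '.git' "$SRC/" "$REPO/"
cd "$REPO" || exit 1

if [ -n "$(git status --porcelain)" ]; then
    # Note who last touched the share so the auto-commit names someone.
    user=$(find "$SRC" -type f -printf '%T@ %u\n' | sort -n | tail -1 | cut -d' ' -f2)
    git add -A
    git commit -m "Auto-snapshot of shared folder (last modified by ${user:-unknown})"
    git push origin main
fi
```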
I never had to deal with “we don’t use source control”, luckily.
One company I joined was cleaning up the last vestiges of “customize it for every customer by letting people edit their copy on server,” which predictably turned into a mess. They were all very small customizations to styles or images but enough to make upgrades a total mess.
I did work at a company where, despite having full source control, they didn't actually know if they could ever deploy the server component again. Edits got made to the live server but then made again in source control, or vice versa. There was one more senior person who couldn't be talked out of their old workflow.
In theory everything matched. Eventually they even checked and got it all under control where they were positive it was the same and kept it that way.
But it had only ever been deployed from scratch… once. And for like 15 years it lived there and kept getting upgraded. It would all be moved when new hardware was brought in.
But it wasn’t installed from scratch. We truly did not know if we were capable of doing that. If that server had been destroyed and we couldn’t restore from a backup, it could have taken us an unknown amount of time. Even though in theory deploying should be as simple as copying the files and starting the web server.
Were there odd configurations that had been made eight years ago that kept it running? Some strange permission changed somewhere?
I wasn’t on that team. But it always made me nervous. That was absolutely a core application of the business.
I really like small shops sometimes. You get a lot of freedom and get to have your hands in a lot of disciplines. You learn a lot of things, including things that should never be duplicated.
That's not an exception, I think? Most online service businesses that I've worked with wouldn't be able to run their operations from a "cold start". That takes a lot of effort to engineer and in practice it doesn't happen enough, so that's a risk that most are willing to run with.
Really? Everywhere else I’ve worked has been able to.
I’m not saying all the data in the database. Pretend your DB is fine or you have a full up to date backup. I know losing that KILLS businesses. But in my example just the web server immolated and the backup tapes sitting next to it melted.
Can you set up a new web server?
As a Java developer it’s usually install OS, add Apache + Tomcat, set a few basic config parameters, copy WARs.
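On a Debian-ish box that redeploy-from-scratch path really is only a few commands; a sketch with assumed package names and paths:

```sh
# Install the servlet container (plus a front-end web server, if wanted).
sudo apt-get install -y apache2 tomcat9
# Drop the application archives into the webapps directory and restart.
sudo cp backups/*.war /var/lib/tomcat9/webapps/
sudo systemctl restart tomcat9
```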
Been there. There was this old fashioned developer in one of the companies I worked for a decade ago who never understood nor embraced version control (we were talking of SVN at the time, not even git). Luckily that wasn't the case for all the others developers in the company. But when it came to the projects he owned, I witnessed several scenes along the lines of "hey, customer X has an issue with your component Y, what version do they have?"
He had a spreadsheet where he kept track of the versions used by every customer. Once he had identified the version, he would open (no joke) a drawer in his desk and pick out the USB stick with that version on it.
I've always wondered whether this overhead was a price worth paying for not wanting to learn a couple of SVN commands.
My first job out of college, way back in the 90's, I had to "train" an entire department on how to use CVS. This wasn't a startup. This was a billion dollar, established company with 1000's of employees.
I have a confession to make. I was working in a company where the main development center was located in a different country. The main development center wanted to centralize all code on a single SVN server and move the build process to Jenkins
I said we already had source control (CVS) and a working build process (a script on a server that pulled from CVS, built a jar file and deployed it to X servers). We were too "busy" at the moment but would look into it in the future. This never happened.
The real reason was that I was concerned that the central development center would take over the project, as they had tried to do so in the past. Looking back, I should probably have let them take over as there was more than enough work for everyone.
At this level of dysfunction, installing git won't do anything. You need a holistic change in thinking which starts with convincing people there's a problem.
Yeah, this level of dysfunction takes years to cultivate.
You need the “Dead Sea effect” to be in effect long enough that not only have the good people left, but for them to have been gone long enough that people rising into management have never even worked with somebody competent so they don’t know there’s a better way
I am occasionally on firefighting projects for customers.
It's super hard to get out of this mess.
The code base has gone to shit, customers are often pissed enough that they want functioning code NOW, the remaining developers are bottom of the barrel, and management is either the cause of the problems or has no clue.
Getting out of that is difficult especially if management doesn't understand it has to change first.
Years, and a special kind of myopia in leadership to not see the seeds sprouting. It doesn’t happen quickly, but once established is very, very expensive to fix.
Given the various accidental leaks due to people not realising deletion still has a history with git when publishing (not to mention git has no equivalent to mercurial's "censor" operation), or people confusing public and private repos on their github accounts, or even the story just last week here on HN about the counterintuitive behaviour of forking in terms of privacy (https://news.ycombinator.com/item?id=41060102), I can totally understand a company being opposed to being on github.
Might be a relatively easy sell if you just set up a local migration of their data to a repository with a decent web GUI for viewing the commits and associating them with tickets, given my experiences with TFS in terms of slowness and crashes/data corruption.
> not to mention git has no equivalent to mercurial's "censor" operation
Haven't followed that for a while, but it used to be the case that it was Mercurial who was principled/fanatic about never rewriting history, ever, while git happily let you mess up with hard deletes.
Which is still the case. They use the hidden phase to avoid that. You can use hg absorb (also awesome) locally of course to simplify matters.
What censor is for is the case where something absolutely has to be removed from the repo, for legal or security. It allows you to do it in a clean standard way, even replacing it with, say, a key signed statement, without forcing regeneration of the whole repository or altering the tree in any way.
... and gotta say. mercurial will let you do hard deletes. They just discourage it and try to offer tooling that allows you to do what you want without destroying history / changing hashes / generally complicating life for those using the repo.
They also do have the tools to completely rewrite history if you really really want to.
So, "principled fanatic" is not quite accurate I feel. They just have a lot more options than git (which also applies to their commandline tooling I feel, although of course there's a lot of 3rd party stuff out there for git to mimic much of what mercurial has out of the box these days).
Here’s a few of my horror stories where I was a consultant at various companies:
1. Previous consultants left no documentation or anything, just a running Hadoop cluster handling (live!) 300 credit card transactions a second. Management hired 8 junior sysadmins - who were all Windows sysadmins, had never used Linux before, and were expected to take over running this Linux cluster immediately. They all looked at me white as ghosts when I brought up an SSH prompt; that's the point where I learned they were all Windows sysadmins.
2. Another company: all Java and MySQL developers who were trying to use Spark on Hadoop. Refusing to learn anything new, they ended up coding a Java app that sat on a single node, with a MySQL database on the same node, that “shelled out” to a single trivial hello-world type function running in Spark, then did the rest of the computation in Java on the single node. Management celebrated a huge success of their team now using “modern cluster computing” even though the 20-node cluster did basically nothing and was 99.99% idle. (And burning huge $ a month)
3. Another company: set up a cluster, then was so desperate to use the cluster for everything that they installed monitoring on the cluster itself, so when the cluster went down, monitoring and all observability went down too.
4. A Cassandra cluster run by junior sysadmins and queried by junior data scientists had this funny arms race: the data scientists did what was effectively “select * from *” for every query, and the sysadmins, noticing the cluster was slow, kept adding more nodes. Rather than talk to each other, things just oscillated back and forth with costs spiralling out of control as more and more nodes were deployed.
Any many more!
This might sound like I’m ragging on juniors a bit, but that’s definitely not the case - most of these problems were caused by bad management being cheap and throwing these poor kids into the deep end with no guidance. I did my best to upskill them rapidly and I’m still friends with many of them today, even though it’s nearly 10 years later now.
"Senior" holds no weight with me. I've had plenty dumb founding conversations with "seniors".
My favorite was at the company that was self hosting their code. The senior team lead wanted me to help him find a memory leak that plagued the product for months. Customers were told to restart the application every few weeks (this was a C++ application).
I sat down with the senior and looked at the code. I spotted the error.
I was like, "You know when you do new[] you need to use delete[]?" as all of his deletions were without [].
> I was like, "You know when you do new[] you need to use delete[]?" as all of his deletions were without [].
This seems like a pretty major lack of a specific piece of knowledge on the senior developers part, yes, but it seems like a much more unforgivable miss on the part of the code reviewers. Was the team stuck in a rut where only a single person (with coincidentally the same blind spot) was reviewing his code, or did multiple reviewers somehow miss this?
I've worked with a consultancy that prided itself on exclusively hiring top engineers from top universities, and I swear, I could put my hand in boiling hot oil and tell you they were some of the worst coders I've ever seen. I have no doubt I've met far more brilliant people coming from boot camps.
The fact that people study for exams has absolutely no correlation with how much they will remember or care for what they studied, none.
Titles really mean very little. The company I work at recently hired a DevOps specialist to help configure Docker services and assist with development. He was a decent developer but had no idea how to securely configure server side services. Still stuck with his mess two years later :)
…but it is also imperative to remember that any modification to checked-out code is, technically, a branch, regardless of the version control system used. This becomes important if your testing is expensive.
I'm sure I'm not alone in actually having lived such an experience.
I joined a dynamic DNS provider once that had been around since 1999. Their tech, sadly, had not progressed much beyond that point. Showing the higher ups version control was like showing cavemen fire. Of course once the higher ups arranged to have training sessions led by the new hire for the entire dev team the VP of Engineering couldn't handle it and had me fired. Fun times.
I started in 2008. This is what I did eventually. Over the years I introduced the small company to Linux, git, defensive programming, linting, continuous integration, Scrum..., but only for the new projects and stayed 13 years there.
That old project was never fixed though, probably still running that way now.
The anecdote seems to be from long before git's creation, so Visual SourceSafe maybe. Which did not work well over a WAN; you needed other tools to replicate and synchronize VSS.
I was working at a game development company in the mid 90s that used Visual Source Safe and to avoid file corruption due to concurrent commits, we had a shiny silver baseball cap which you had to find and physically possess to commit.
After we had off-site devs, the physical cap wouldn't work. So the project had a "silvercap.txt" file and you had to exclusively check it out. And of course people forgot to release it and work ground to a halt.
You can remove the "over a WAN" part: VSS had been designed as a local VCS, so until the addition of a proper server in the mid-aughts, using it over a network share was the only way to actually use it. And it really wasn't good.
I don't know if that made it better, I assume not much, VSS was really trash.
I did this at my first job, and learned quickly that oldheads would get flustered and feel challenged if not eased into things a certain way.
Ultimately, by the time I left I had tried to introduce redbeanphp (an ORM), git for source control, and CakePHP for some structure. Nothing stuck. When I left it was still raw SQL string queries, .zip files (when they remembered) for backups, and 400,000-line PHP files with everything caked on there.
Yes, there is a good lesson here. If you walk onto a dev team still using stone tools, there is likely a culture that distrusts new technology, so tread lightly or you may be perceived as a heretic.
Have been that person before. As an intern, and they even listened! This was in the days of SVN, just before git, so I ran a server on my laptop, and my manager somehow decided we needed a big Red Hat server or something, IIRC. In a 20-person company.
Setting up git is the easy part. We all used it. Except the owner of the company who would fix bugs in prod and not tell anyone. Then next release we'd unintentionally un-fix those bugs because the fixes never made it back to source control.
Software Configuration Management has existed as a discipline and with proper tooling for at least 50 years. Mainframe and VAX machines had tooling in the early 80s.
For VB, SourceSafe was the go-to tool, if memory serves.
This is not a case of new vs old, rather incompetence vs competence.
Some of these stories sound a bit far fetched, especially those that involve Unix systems. RCS was released in 1982 and CVS in 1990 so Unix systems have had version control available for over forty years.
I can assure you they are true. Version control was still “controversial” in a lot of shops for quite some time. Plenty of places had the classic “v1, v2.3, v2_next_debug_cruffle_duffle” way of managing versions for a very, very long time.
> I myself introduced SVN as versioning solution to a company in 2007
In 2013, I was tasked with writing wrapper scripts around SVN to make it look like SCCS to avoid confusing the development people who only knew how to use SCCS whilst the company migrated to SVN. Fun but traumatising.
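Something in the spirit of those wrappers, as a rough sketch; the real scripts, repo layout, and exact SCCS subcommands covered aren't described above, so this mapping is purely illustrative:

```sh
#!/bin/sh
# "sccs"-lookalike front end that quietly drives Subversion instead.
cmd=$1; shift
case "$cmd" in
    get)    svn update "$@" ;;                    # fetch the latest revision
    edit)   svn update "$@" && svn lock "$@" ;;   # emulate SCCS's exclusive edit
    unedit) svn revert "$@" && svn unlock "$@" ;; # abandon the edit
    delta)  svn commit "$@" ;;                    # check the change back in
    prs)    svn log "$@" ;;                       # show history
    *)      echo "sccs wrapper: unsupported subcommand '$cmd'" >&2; exit 1 ;;
esac
```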
Version control was virtually unknown outside UNIX systems, though, and, in lieu of it, mainframe/PC/Mac developers resorted to barbaric practices which included file tree renaming (v2, v1_John_is_an_idiot), countless zip files with similarly meaningful names with snapshots of entire projects and stuff like that. Then commercial version control systems started popping up, and they were very expensive, usually buggy af, and had no feature parity across themselves, i.e. knowledge of each was not portable.
Whereas nearly every UNIX installation included version control systems for free (SCCS in AT&T UNIX SVR1-4, RCS in BSD UNIX or both) that worked exactly the same everywhere.
In the late 2000s, I worked at $MAJOR_TELCO where management steadfastly refused to implement version control. Upgrades in production were executed by individually SSHing into each physical machine in the prod cluster and typing in commands by hand.
My attempt to introduce a "multissh" tool that automatically executed the same commands in each node at once was regarded with the highest suspicion and shot down. Shortly after I left, they had a multi-week outage caused by somebody fat-fingering the permissions on a network interface.
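The rejected tool was presumably little more than a loop over the host list; a minimal sketch (hosts and options are placeholders):

```sh
#!/bin/sh
# multissh: run the same command on every node in the cluster,
# instead of typing it by hand into each box.
HOSTS="web01 web02 web03 db01"    # placeholder inventory
for h in $HOSTS; do
    echo "== $h =="
    ssh -o BatchMode=yes "$h" "$@"
done
```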
At least as late as 1998, I recall version control was thought of in some circles the way people think of C vs memory-safe languages today.
Some thought version control was an obvious improvement to make better, more bug-free software. Others had a fetish for doing things the hard way with little justification beyond emotional resistance.
SCCS was released in 1977 and it hasn't even turned up in these comments at all. "Not evenly distributed". (I introduced RCS as a replacement for "just editing files" at a job in 1990; "there's a man page and some binaries on the system" really doesn't contribute to adoption. CVS at least propagated through things like USENIX, because solbourne made cool toys and talked about the challenges.)
It's maybe better than taking the pain to set up git only to see people use it in the same way, setting up a gazillion branches called V1, v2_customer_fix, v3_final, etc...
> I don't know how they convinced the users to do this and how they managed with the license but it was done
Enterprise and business users will wade through endless swamps of crap if the software provides enough value. This is a lesson in why "it must be perfect before we release it" is such nonsense - that just says the value your app provides is so low that users barely care and they'll abandon it at the first problem. If that's the case you're not providing anything worth paying for.
As much as this stuff is nuts to think of today and there's tons to hate, I am kinda nostalgic for some aspects of my experience of working at a place where software is maybe needed and/or valued but isn't a core competency. Or maybe a time when software was a new fangled thing that hadn't fully been integrated into corporate structure yet:
- No one having any preconception of how you're /supposed to/ do things or whether you'd even be the type of person to know, so you just kinda figure it out yourself. You spend a lot of time on reading and learning skills. Version control? Wow, cool, what's git, let's try that! A new graphing library? Let's give that a shot, maybe it'll make things easier! You want XYZ? Let me go read about that for a day.
- No one having any idea what's even possible: being treated like a wizard for introducing the tiniest piece of automation or improvement that makes someone's day easier or doing something they never thought was possible. Lots of appreciation and excitement for showing and teaching people new things (... and I know this is somewhat selfish and ego driven, but who doesn't like being appreciated?)
- Similarly people having no idea how long those things should take which, tbh, can be a nightmare if you're not trusted and respected enough to be consulted but also great if people believe you when you say it's gonna take 3 months.
- Beyond the basics just being mostly kinda left alone to do your job however: no standups or tickets or the 30 other kinds of daily (micro)management that is probably necessary but ends up feeling tiresome and stifling at an individual level
- Not being part of software company "culture": no performance-review-driven development and promo packet madness, no weird rating and ranking systems, no OKRs or KPIs. No ladder. Your bosses think you did what was required of you, so then you're good, and if it's a good year you get a raise, and that's that. I do recognize that with a bad boss this can be a terrible and unfair spot to be in - but again, subjectively with a decent enough boss it felt like a lot less weight on my shoulders at the time.
- No hacker ninja pirate segway mini quadcopter you're the smartest people in the world and we're the best company to work for sort of b.s.
- Socializing with people who are good at and love to talk about stuff other than software
Reading over that, I'm thinking maybe I lucked out a lot and that wasn't most people's experience from that era. And there's some level of rose tinted glasses going on. And/or maybe my years in the rat race are starting to show :-)
Don’t think so. My first job was kind of like that. I don’t even know how they thought that little old me just out of university could be left alone to successfully build applications on my own, but I think people trusted a lot more during that era because eternal september hadn’t arrived yet.
Working directly for the users without any weird BA/PM/TA shit in between is glorious, both because you can always walk up to get immediate feedback (people generally like to see you are actively working on their issue), and in a place like that you can likely deploy it in the middle of the day and immediately improve their workflow.
It still amuses me that IT was located together with finance, because we did reports xD
> I don’t even know how they thought that little old me just out of university could be left alone to successfully build applications on my own, but I think people trusted a lot more during that era
A similar feeling on my end too :-) That might be it - trust sounds like a big part of it for me. Taking a chance on someone who you might eventually end up being good, rather than interviewing and evaluating them 7 ways till sunday. I understand the impulse. I wouldn't want to be a new engineer out of college today though - seems rough.
I did get paid less then than some new grads seem to be now so that might have been a factor in taking the pressure off.
> because you can always walk up to get immediate feedback (people generally like to see you are actively working on their issue)
Oh absolutely!
> It still amuses me that IT was located together with finance, because we did reports xD
It was communications for me, because the software tool we built was free to use on the web, and websites are communications, obviously :D
You’ve got a point! There was a special moment there for a while. Your description perfectly captures my experience interning on a small IT team around 2000. This was in England so the secretaries would snigger whenever I said “debugger”. The downside was that the management had absolutely no clue about software as they’d jumped from some other career and the field was advancing quickly.
> There was a LOT of global variables (seemingly random 4 uppercase letters) controlling everything.
I once ran across a C program that had 26 variables, each one letter long, one for each letter of the alphabet. They were all global variables, and many of them were re-used for completely unrelated things.
I inherited a control program from Risø National Laboratories. It had roughly 600 globals of the form A9$, three local variables, and one comment - "Midlertidig" (Danish for "temporary").
However, on a more practical note, the "Java" used on smartcards effectively requires that all variables be treated as constants, other than one array. You dynamically allocate the array when handling an event, and it only lasts for the duration of that event.
Dear god this is pretty much what I went through when I started taking over a company with a 35-40 year old codebase. Files spread everywhere, no consensus, and supporting customizations for thousands of customers who we didn’t know if they were even still using the system.
It took five years and the firing of the long-time “head” programmer until some meaningful change was made.
As a glib answer, can I suggest that, without proper training, there were a lot of developers who had never trained under anyone or at any company with proper practices?
Honest question. How does our profession root out intrinsically obvious bad practices?
It happens because it’s easier - until it’s impossible, anyway.
The training and best practices you’re talking about is learned experience about how to avoid it getting impossible. But that almost always involves expense that the business side considers ‘stupid’.
> At some point, the application had some bugs which were not appearing when the application was run in debug mode in Visual Studio. The solution was obvious: installing Visual Studio for each customer on site and teaching the users to run the app in debug mode from Visual Studio.
Holy smoke! That's actually the most creative solution (horrible, but creative) I've ever heard of to fix a Heisenbug.
Well that depends entirely on what you consider to be the goal - as a software engineer, your role is entirely concerned with engineering excellence. As a member of a team, especially a team of extremely highly paid and highly educated individuals, it is your duty to spend your time (and thus, the company’s resources) efficiently by doing what you’re educated, qualified, and hired to do.
Few people agree that the goal of SWE is engineering excellence. It is to solve business problems. Engineering excellence is a means to a goal: to be able to solve _difficult_ problems _correctly_ and to allow solving new problems _in the future_. All of these things can be traded off, and sometimes aren’t even needed at all.
The thing is, I encountered something very similar with a product that had maybe 20 customers… in 2017. All of them had slightly different versions of the codebase. Version control was used, but haphazardly.
You’d think this sort of thing would be a problem of the 90s or early 2000s, but I’d bet you there are any number of companies with similar situations today.
Don't forget Visual SourceSafe, which came out around 1995 and was the standard source control package for most Windows shops in the 90s, and even up to the mid-2000s (at least in my experience).
Until about 2014 or so in my department, when we started using SVN. We were more concerned about file locking and a "single version of the truth" for MS Word files stored centrally but accessed locally than we were about fine-grained version control, and Git didn't have the kind of file locking we needed (as I understood it).
Did it run under DOS? I'm not even asking about a win16 native GUI client. (You can still enjoy CVS if you want to contribute to OpenBSD, for instance.)
Yes, and for Win95 users there was actually a very nice GUI plugin for Explorer called TortoiseCVS.
The funny thing about CVS and Subversion is that they were the last version control systems that non-programmers could actually use. TortoiseCVS/TortoiseSVN were easy enough that with a lot of training you could get technical writers or artists to use them too. Valuable for game projects. With git, forget about it. It can't even handle binary files well.
Only Git had the brain damage to take the source control idea of "there are two areas: working or committed" and add the dumbass index/staging/stash areas that confuse the hell out of everything.
The documentation for Office 2000 [1] describes version control for Visual Basic using Visual SourceSafe integration, although I'm not sure if anyone used it.
I was at a broker-trader in 2016 where this was still the case.
I was brought in after an old spreadsheet cost them $10m on a bad trade: the Yahoo Finance endpoint it was hitting had stopped responding, and it just kept using the last value it had gotten - from three months before.
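The general fix is boring but worth spelling out: stamp every cached value with when you fetched it, and refuse to act on it past some age. A hedged sketch in Java - the names (Quote, MAX_AGE, priceOrFail) and the threshold are purely illustrative, not anything from the actual sheet:

    import java.time.Duration;
    import java.time.Instant;

    // Hypothetical staleness guard: never trade on a quote older than MAX_AGE.
    record Quote(double price, Instant fetchedAt) {}

    class StaleQuoteGuard {
        // Threshold is made up for illustration; pick one that matches the market.
        private static final Duration MAX_AGE = Duration.ofMinutes(15);

        static double priceOrFail(Quote q) {
            Duration age = Duration.between(q.fetchedAt(), Instant.now());
            if (age.compareTo(MAX_AGE) > 0) {
                throw new IllegalStateException(
                        "Quote is " + age + " old; refusing to trade on it");
            }
            return q.price();
        }
    }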
This is entirely typical, especially of VB scripts. When I was a software engineer for a Fortune-20 company, I spent more time debugging VB applets (and trying to normalize them, though that met with mixed levels of resistance) than anything else.
Oh dear god no. The solution is not to throw VS at it and run from the code; the next step is some combination of excessive logging (which, to be fair, may resolve the issue all by itself) and/or throwing in a ton of DoEvents, because Visual Basic.
My first role was with a company that had hit some limit on VB6 variable names, IIRC, so they'd all been renamed to shorter names. This may be the same issue. They were in the process of rewriting in VB.NET.
This sounds like what I see when an inexperienced filmmaker takes on a big project and hands me the “organized” drive to edit, but way way worse and more consequential lol
Oh man, that brings me back. My first tech job was at an ecommerce company with a basic online cart backed by an incredibly in-depth set of industry catalogs. We also sold a marketing package as an add-on to the online store, where we would proactively contact our customers and get the info from them to replicate their monthly/weekly physical advertising on the web. This was back in '05-ish; lots of money to be made just helping people get their business online.
We had a group of talented designers/artists wrangling Photoshop and pumping out a few of these designs a day, each, and as we scaled up and gained a lot of repeat customers, tracking these PSDs became a big problem. The designers were graphically talented, not technically savvy. The PSDs were stored on a shared NAS drive that the whole company could see. The designers had a complex naming system to manage major revisions, but overall there was no "history" beyond the classic "_v2_2008_09_best.psd" naming technique.
Several times every week I had to fix "accidentally dragged the PSD folder somewhere and lost it" level problems. Getting IM'd by tech support because the underlying server was falling over trying to clone the multi-GB folder 3 times, logging into a workstation as Admin and searching for an updated PSD that the vacationing Designer hadn't synced back to the NAS before leaving, that kind of thing.
As soon as I was promoted to Supervisor I made my first big move. It took a lot of training and far more talking than I thought it should (back then I didn't know anything about politics), but I was able to get SVN implemented to replace the NAS share. I wrote quick-reference documents and in-depth guides (this was before I knew that no one reads anything, ever, for any reason), and eventually just had to do one-on-one training with everyone to explain the concept and basic usage.
One of the most satisfying feelings of my career continues to be watching attitudes change over the course of a summer. None of the design-y people liked the new set of hoops they had to jump through. Check-Out, Check-In, Lock, etc., it was "too much". Then, at a happy hour, someone mentioned how we hadn't lost the PSD folder in a while. Later someone came to me panicking because a client wanted to re-run an ad from 2 months ago with a couple of tweaks, and she didn't have the source PSD or the source material -- I did a live demo of how to get a historical version back, and that's when it really clicked with everyone. With internal political will behind the toolset, it now became an IT problem, as our SVN usage was nothing like Engineering's usage.
Of course, file locking was a huge PITA; that feature replaced "forgot to copy the changed file back before vacation" as a problem category. But it also eliminated the problem where 2 people would open the same PSD directly from the NAS share, make their changes, and only the last one to save got their work persisted. So, a toss-up I guess.