
> I'm amused by the tone. It's like the author doesn't realize that 99% of software development and deployment is done like this, or much much worse. Welcome to the real world.

Agree with this. A lot of developers are in a filter bubble, sticking to communities that advocate modern practices like automated testing, continuous integration, containers, gitflow, staging environments, etc.

As a contractor, I get to see the internals of lots of different companies. Forget practices even as basic as code reviews: I've seen companies with no source control, no staging environments, and no local development environments, where all changes are made directly on the production server via SFTP on a basic VPS. A lot of the time there are no internal experts who are even aware there are better ways to do things; it's not that they lack the resources to make improvements.




I wish I hadn't experienced this exact scenario as well. I worked at a publishing company where I discovered the previous team building their major education platform didn't know there was such a thing as source control. Their method for sharing multiple team members' changes was to email the full project to one person every Friday, who would manually open a merging tool, merge all the changes themselves, and then send the updated code base back out to everyone. Because of this method, they were afraid to delete any stale code, and just prefixed old versions of functions with XX. As you can imagine, inheriting that code base was a nightmare.


I remember having this debate many years ago. Would it be better to introduce a team with no source control knowledge to something like svn first, or go straight to git?

Svn is easier to understand and use, but then you'd have to break existing habits to get to git. On the other hand, going straight to git might be too big a step and cause a reversion back to whatever system was already there.


Straight to git, but with a simple and clear branching strategy. As in a fully written procedure for the three main events of git: "get code, commit code, push code".

Then disallow direct commits to master to make people work in feature branches and make merge requests through the platform (GitHub/GitLab/Bitbucket). I find merging and branching locally is where people normally trip up.

Git GUI tools always make git seem way more complicated than it is, so depending on the team's platform I would recommend the CLI from the start.
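For a team starting from zero, that written procedure can literally be a handful of commands. A minimal sketch, assuming "origin" as the remote and a made-up feature branch name:

    # get code: sync master and start a feature branch
    git checkout master
    git pull origin master
    git checkout -b feature/my-change

    # commit code: stage and record the work locally
    git add path/to/changed-file
    git commit -m "Describe the change"

    # push code: publish the branch, then open a merge request on the platform
    git push -u origin feature/my-change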


I would not start people new to git on the CLI. That's how you get someone's entire Camera Uploads directory committed to master (I've seen it happen). I recommend Git Tower. I actually use it for most tasks, even though I'm comfortable in the CLI too. It tends to stop me from doing stupid things before I do them.
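For what it's worth, that failure mode is exactly this easy to hit on the CLI. A sketch (the directory name is just the example from above, purely illustrative):

    # a blanket "stage everything" happily picks up unrelated files
    git add -A

    # reviewing what's staged before committing would catch it
    git status

    # ignoring the directory up front prevents it entirely
    echo "Camera Uploads/" >> .gitignore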


Also, the git CLI in particular is extraordinarily terrible, given how many conceptually different things are done by the same set of commands. (For example: imagine trying to use git checkout in a repo with a file named "master" at the root level.)
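Roughly what that looks like, assuming a branch and a file that are both named "master":

    # ambiguous: git can't tell whether you mean the branch or the path,
    # so it refuses to guess and asks you to disambiguate
    git checkout master

    # the "--" separator is how you say which one you mean
    git checkout master --    # check out the branch
    git checkout -- master    # restore the file from the index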


You can probably assume a lot of people who aren't using version control aren't using the terminal either. GUIs are also generally much better than CLIs at giving you an overview of what's happening and what actions are available.


I've supported a few svn instances. Devs could be trusted with svn, but analysts, testers, system people... my god, the horrors these guys dream up.

Say you want to edit a few chars in a 250MB file. Why don't you, just to be sure, paste that whole file into the comment field? Do that for a few hundred commits. Tortoise really hated that one, and crashed Windows Explorer every time you dared look at the logs (out of memory).

Or the time some joker (his CV proudly declares 10 years of developer experience) deletes the root of the tree, doesn't know about history, and goes with his manager straight to the storage admin, who wipes everybody's commits of that day (a few hundred people). There clearly is no need to contact someone who knows anything about Subversion if the data is gone, and maybe this way nobody will notice anything and yell at them.

Or say you want to do an upgrade. In theory, every user leaves, the service and network port get shut down, the VM instance is backed up just to be sure, and you do the svn upgrade. Of course, enterprise IT means I have to write detailed instructions for every party involved and am under no circumstances allowed to look at that server myself.

So it turns out:

A) Some users just keep on committing straight through the maintenance window.
B) The clown who shuts down the service doesn't check whether the service is actually shut down, and there is a bug in the shutdown script. So svn just keeps on running.
C) The RPM containing the patch is transported over an unreliable network that actually manages to drop a few bytes while downloading over HTTP.
D) The guy who should shut down the svn network port is away eating, so they decide to skip that step.
E) SVN gets installed anyway (what do you mean, checksum mismatch) and starts committing all kinds of weird crimes to its storage.
F) The VM guy panics and rolls back to the previous version, except for the mount which contains the data files.
G) Then they do it all again, and mail me that the release was successful, without any detail of what happened.

Let me tell you, svn really loved having its binary change right under it, in the middle of a commit, while meeting its own previous version in memory. Oh, and clients having revision N+10 while the server is at revision N. A problem that solves itself in a few minutes, as N goes up really fast ;-)

Now that's what happens with Subversion, which is rock solid and never drops a byte once committed. This company is now discovering the joys of git, where you can rewrite history whenever you feel like it.


Here's a vote for going straight to Mercurial (a lot easier to grok), and a link to Joel Spolsky's excellent tutorial:

https://web.archive.org/web/20180903164646/http://hginit.com...


GitHub can be used with both Git and SVN, so you don't have to choose.
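The SVN side of that is just a plain checkout against GitHub's Subversion bridge. A quick sketch, with a placeholder repository path:

    # check out a GitHub repository through its Subversion bridge
    svn checkout https://github.com/user/repo

    # the bridge exposes the usual layout: trunk, branches, tags
    svn ls https://github.com/user/repo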

I've introduced a lot of people to SVN over the past decades. Be it programmers, sysadmins, artists, or translators, it's fairly quick to learn.

I couldn't begin to imagine introducing anybody to git. It's a horrible nightmare to use, even for developers; nothing comes close in how many times it screws up and you have to search the internet for help.


If you haven't used any source control at all, I don't see why svn would be easier to understand than git. Using git will save you from a lot of pain up ahead, so I would definitely go for git.


My limited experience with git (and none with svn) leads me to suggest someone might prefer another source control system, not because it's easier to understand per se, but because you can do simple things in it without fully understanding it.


"Straight" to git? If you have already made up your mind, why ask?

Subversion is newer than RCS. But that doesn't mean every use of the latter can or even should be replaced.


Surely these days that question is answered by the existence of GitLab and several other similar tools.


SVN and git! Pah!!

IBM ClearCase is the way to go


Someone created a Jenkins pipeline that would deploy code from a zip file into prod.


Am I missing something? Isn't this exactly how deployment servers actually work?

I'm not into Java development, but this sounds fine on the face of it, without more context on how this pipeline is triggered.


In theory it sounds alright. It's not great, because Jenkins is usually layered on some existing deploy framework, which makes "deploy a zip file" pretty suspect. A healthier setup would look more like "build a Tomcat war file with Maven, then upload and deploy that with Jenkins". But in context, it sounds like the horror is that people were making and transferring a zip from local code rather than building from the tip of source control.
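A minimal sketch of that healthier setup, assuming Tomcat's manager text API is enabled, and with the repository URL, host, and credentials all being placeholders:

    # build the war from the tip of source control, not from a local zip
    git clone https://example.com/repo.git app && cd app
    mvn -B clean package

    # deploy through Tomcat's manager text API (curl -T issues a PUT)
    curl -u deployer:secret -T target/app.war \
      "http://tomcat.example.com:8080/manager/text/deploy?path=/app&update=true"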


Developers had a copy of the code in Google Drive... they'd modify it, zip it, overwrite the one in Google Drive, and copy it to the network folder, which would deploy it and then delete it from the network folder... in 2016.


That’s a pretty standard CI process - zip files are often your deploy artifact (using, for example, git archive)
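A one-line sketch of that (the output name is arbitrary):

    # package the current HEAD as a zip deploy artifact
    git archive --format=zip --output=release.zip HEAD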


Yeah, except the zip file was... the input.

You pulled the code from Google Drive, modified it, pushed it to PROD, checked it, and moved it back to Google Drive... and asked the other developers to update.


> I've seen companies with no source control

Well, I've seen companies that have their own idea of source control: lots of copies on the network drive, and an Excel registry of what is in which file.

It is source control. Just bad source control.


I worked on a system with "octopus locking". We had a toy octopus, and you could only change the code (obviously in production) when you had the octopus.


My day job involves multiple third-party systems that enforce their own proprietary version control systems for any custom scripting/programming within those systems. Unsurprisingly, these proprietary version control systems are complete garbage 90% of the time, especially if you're trying to collaborate with someone else.


This comment resonates very well with what I've been experiencing at my current job, and with our new parent company. It's always nice if you can take a new application, throw it into a clean container build chain with orchestration, or some AWS stack, and end up with a cheap, low-maintenance, HA production system.

However, there's also the skill set of taking such a... let's call it well-aged development team and approach, and modernizing and/or professionalizing it. And yes, sometimes this means building some entirely Lovecraftian deployment mechanism on mutable VMs because of how the application behaves. But hey, automated processes are better than manual processes, which beat undocumented processes. Baby steps.


> I've seen companies with no source control, no staging environments, and no local development environments, where all changes are made directly on the production server via SFTP on a basic VPS.

Omg. And here I am feeling ashamed to tell others about my small personal website with separate dev, QA, and production environments on the same server (via VirtualHost), code checked into GitHub, deployed via Jenkins self-hosted on another VPS, which was initially spun up with Ansible and shell scripts. All done by me for self-training purposes. All because I thought businesses would have something more sophisticated, with bells and whistles.

And then I hear there are businesses that make changes directly on live production servers...

But I'm not surprised by such stories, as I have seen some bad workflows in real businesses that deal with tens of millions of dollars a year.

Years ago, I worked in the NOC of a company that's top in its small niche. They have dozens of employees and have been around for years.

Part of the job responsibility was rolling out patches to production servers. The kicker was that the production servers were all Windows servers running various versions, covering practically every Windows version Microsoft has ever released. You can see where this is headed.

Rolling out a patch was done manually, one Windows server at a time. Everything was manual.

The instructions for deploying a change easily ran to many lines, each written in a different style, often in plain text files. We would print them out so we could check items off as we went down the list.

The CTO is still there, but every IT person under him has left or been let go. Working in IT there is a struggle because of the lack of automation and the old, old stuff, but the CTO just blames bad employees and keeps churning them in and out. The real issue is the decade or two worth of legacy stuff that needs to be cleaned up and/or thrown out, which can only happen at the direction of the CTO. But he knows he won't get that kind of budget from higher-ups, so he will just keep hiring and firing employees and/or bring in some H1B workers, who are basically trapped once they join. And the few H1B workers I met there were truly, completely non-technical. One did not want to learn how to use keyboard shortcuts for common tasks... Good guy, though.


> I've seen companies with no source control

> ...all changes are made directly on the production server via SFTP

I know this used to be common, but recently? Curious how often this is still the case.


> I know this used to be common, but recently? Curious how often this is still the case.

Several times within the last year for me. Not all companies have big tech departments with knowledgeable developers advocating modern best practices. Some big internal systems start out as someone internal applying basic self-taught skills, for example.

To be fair, the jump to using Git (and especially dealing with Git merge conflicts) is scary. It can be a hard sell to get people to move from a system they already completely understand and have used successfully for years, even if their system looks like ancient history to us.

Literally heard "...but my IDE already automatically uploads to FTP on save, I'm usually the only one editing it, and I already know what I changed" last week.


I have seen it recently. I did my best to change the practice before I left the company, but was mostly unsuccessful. Given that they were still running some spaghetti-code PHP scripts written in 1999 and still used PHP4 in new development, they were stuck in the stone age. For a little perspective, support for PHP4 ended in 2008, so they had almost a decade to update, but didn't.


"If it ain't broken, don't fix it". And then one day the server goes boom, the backup was incomplete, and everyone is trying to find the usb flash disk with Spinrite in it.

Meanwhile, the CEO who has been rejecting the €¥$£ in the budget since 2000 is angry at everyone!

Oh the times I have seen this!!!


Oh, now that you mention backups, those were a nightmare too. Thankfully, the production database was backed up daily to magnetic tape and stored offsite, but the code was generally edited live on the server, and backups consisted of adding ".bak20190402" to the end of the file name. Needless to say, losing code wasn't uncommon.

This was for a 100+ year old company with millions of dollars in annual revenue that was owned by the government. So, yeah. 100% the IT director's fault, who'd been there since the early 90s.


It's the fault of both the CEO, for either not understanding or not hiring someone who properly understands the risk they're taking on in their tech stack, and whoever's job it was to understand that risk. Part of being a responsible engineer (or IT manager, etc.) is being able to say "no" to new things and to explain that a bad day can and will take you down.


At my first (very small, research-group) employer, I was the one to introduce source control. Another company I saw ~4 years ago had Git, but no one knew how to use it adequately so "lost code" was still a regular event. I haven't seen it since, though; lots of people frightened by and badly misusing Git, but they mostly manage to keep code histories intact.

Having no testing/staging environments remains pretty common, along with its cousin "production work happens on staging". Partnering with not-primarily-software companies and asking about staging infrastructure, you hear that regularly. And yeah, SFTP/SCP/SSH is a standard push-and-deploy approach in places where that happens.


On the one hand, source control is really the most basic thing you can find anywhere. It's really hard to find an actual development shop without source control.

On the other hand, outside of a tech company and with fewer than a dozen developers, don't expect to find any source control. Consultants see a lot of this shit; they work in all industries, including ones where developers barely exist, with a lot of thrown-away projects.

Funny thing. Git probably made it worse in recent years by being impossibly hard to use.


It's nice to have a source code history to see why and when something was changed. But a lot of tooling usually just treats the symptoms of complexity.


Hey, at least they're using SFTP - you know, for the security!

/seriously, though - I...hope this isn't being done any longer - but I bet it is. Sigh...


Way too common. So many clients I've dealt with have a deployment workflow that is some variation of this.



