Maybe my experience is unusual, but I've never worked anywhere where the sysadmins knew more than the developers about how best to run their code in production, or, when things go wrong with it, how best to find the cause of the issue.
And I've never thrown code over a wall without having tested it in a representative environment.
The worst sysadmins get in the way of developers. Ones who scale down your CI server to the cheapest, most throttled instance the hosting company offers, leaving $800/day contract developers waiting nearly an hour for builds that run in 20 seconds on their laptops. And who then argue the toss about whether the CI server is cost-effective, and every few months switch it back down despite the CTO saying it needs to be left alone.
When a sysadmin sees an issue in "their" environment that they understand there's a tendency for some of them to just see that issue as the only thing the developer has had to deal with that month. In all likelihood, in a productive company, it's the most trivial issue the developer has had to resolve that day.
Often this stuff goes more smoothly where the developers manage production themselves (after all, if you're going to drop one of the two groups of people, it's not going to be the developers) and there aren't people with separate job titles and the resulting friction between them.
Sorry. There must be great sysadmins out there struggling with terrible developers, I'm sure of it. I just haven't seen it.
Sysadmins must deal with interrupts (requests, crises, things driven by external schedules, etc.) and then, in the rest of their time, build systems to manage or reduce the interrupts. Developers are expected to produce work on a predictable schedule. Interrupts disrupt that and obliterate the schedule for proactive work, unless your management is very good at making proactive work a priority.
The "prevention of information services" problem is certainly real though. Perhaps it could be addressed by embedding the sysadmins in the dev teams rather than having a department of their own, but then you have to fight org hierarchy.
Having said that, the central assertion is still correct: the absolute best developers I've ever worked with were also top-tier sysadmins (or linux experts, depending on what you want to call it).
The upside is that in a three-person IT department there is very little bureaucracy to fight, just the odd "organically grown" legacy system.
I figured the etymology of that word was rather interesting. But yeah, I get the whole SysAd/Dev dual job. They're tough to balance and do effectively. SysAds are firefighters. When the nag(ios) alarm rings, we come a-callin.
The Talmud teaches homiletically that the word amen is an acronym
The etymology section shows the word has much more prosaic roots. The Talmudic acronym seems to rather be an interesting backronym.
I did not know that. ;-) Thanks!
Sysadmins, who often manage crises, acquire this experience one way or another (e.g., by researching options to fit a square peg in a round hole without leaks), so developers with sysadmin experience tend to all have it. I think, though, that the key part is the "engineer" part, and it can be acquired and used without the sysadmin-imposed hassles (interrupts, crises, being underappreciated).
I'm sorry you've had such an awful experience with sysadmin colleagues that you've developed such a corrosive attitude towards them. I've worked in lots of good environments, where dev/ops was being effectively practiced, and sysadmins there were the most effective force multipliers imaginable.
This is going to be a sensitive topic, but can we talk about "BOFH" culture somewhere on this thread? (Maybe I'm old and it's now dead, but I think some of it persists.)
I kind of understand it as a product of working in an environment where everything is urgent and nothing is appreciated, but when sysadmins come to resent the people they're supposed to be supporting then the force multiplier turns negative. Sysadmins develop strategies for reducing the number of requests at any cost, usually by making the experience as opaque and unhelpful as possible.
The BOFH is the archetype of someone who is excellent with both technology and politics. When you are in a service role, you have two competing priorities. You must deliver people the things they want, but also keep things nice and stable for yourself so you don't go crazy.
An important dynamic in organizations is laziness vs. intimidation. Political savviness allows you to apply intimidation to get the lazy to do what you want. You can threaten to fire someone, or raise an issue that could possibly get them fired, and even if it doesn't, it won't make them look good. The BOFH is someone who can respond to political intimidation with adroit technical interventions to ensure that his second priority, a smoothly-running system, isn't threatened.
If you read the BOFH stories carefully, you see that the operator knows where his bread is buttered and is careful to remain on good terms with the people who really have the power in the company. The whole thing is a phenomenal read on organizational dynamics.
This is sometimes an organizational problem. I worked in a support role at VMware for about 5 years and this is what I observed:
- Support & IT departments typically have enough staff in the beginning
- The organization grows & the department grows to match the new work that exists
- At a certain point, the organizational view of Support & IT/Ops changes, and it's now viewed as a cost that you want to keep down.
- Leaders try to minimize the increases in budget, but the workload per sysadmin/engineer increases.
- The sysadmins/engineers have no control over the flow of new work, which affects the quality of work that gets done and can create a toxic environment.
It literally becomes impossible to handle all the incoming requests. Different people handle it differently. Good sysadmins would learn to prioritize properly, but due to the toxicity some people have trouble handling it so they end up developing strategies to make a certain number of requests "go away".
Anyway- just my two cents.
1) Leadership: Stop viewing the IT/Ops/Support department as a "cost to keep down".
2) Leadership: Treat the department like it's staffed by human beings.
3) Realize that not all requests are created equal. Some take minutes, some take months.
4) Determine a reasonable number of requests/tickets per sysadmin/engineer. Make sure to add padding for things like project work, sick time, vacation, professional development, and so on.
5) Hire proactively to prevent the determined threshold above from being surpassed.
6) From the IT/Ops department's perspective: realize that the incoming requests come from people who need your help, and they are effectively your clients/customers. Treat them as if customer satisfaction is extremely important!
There are also other strategies, such as giving one subset of people the ability to work on projects while designating a different subset to be interrupted with urgent requests, and rotating the roles. There are all kinds of things you can do to improve the situation :)
In such an environment I wouldn't expect any other result. Fix the environment, not the rational human response to it.
Software developers, on the other hand, are responsible for making changes. Adding features, pushing fixes, and so forth.
These two points of view are inevitably going to cause friction. Developers are only recently starting to be held responsible for production uptime and the pages that come along with that - and it's a good thing for both sides.
That "most trivial issue" for a developer is something the sysadmin was woken up for 3x in the past, and doesn't want to be woken up for again, so he pushes back. How can he not?
How likely is it that the sysadmins were told to 'just make it run cheaper, I don't care' by someone higher in the food chain?
Having worked in ops for > 10 years, this is how it usually goes.
The SA's job tends to involve a lot of scepticism and caution. You look for problems and try to solve them proactively. One (often easy) way to solve many classes of problems is to throw hardware at them.
Management always pushes back on this tactic. That's reasonable; they need to justify capital expenses (especially if you're self-hosted).
The core issue though is that capital expenses are easy to quantify, while "lost productivity" is much harder to fully account for. If I complain that some hardware upgrade which costs $x could improve productivity, I just don't have hard numbers on my end - it's all napkin math.
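To show what that napkin math looks like, here is the kind of back-of-the-envelope comparison an SA could bring to a budget meeting. Every number below (team size, builds per day, instance cost) is an illustrative assumption, not data; only the $800/day rate comes from the thread:

```python
# Back-of-the-envelope comparison: cost of a throttled CI server vs.
# developer time lost waiting for builds. All inputs are assumptions.

DEV_DAY_RATE = 800            # $/day contract developer rate
HOURS_PER_DAY = 8
DEV_HOUR_RATE = DEV_DAY_RATE / HOURS_PER_DAY   # $100/hour

DEVS = 5                      # assumed team size
BUILDS_PER_DEV_PER_DAY = 4    # assumed
SLOW_BUILD_HOURS = 1.0        # ~1 hour on the throttled instance
FAST_BUILD_HOURS = 5 / 60     # ~5 minutes on a reasonable instance

wasted_hours = DEVS * BUILDS_PER_DEV_PER_DAY * (SLOW_BUILD_HOURS - FAST_BUILD_HOURS)
wasted_dollars_per_day = wasted_hours * DEV_HOUR_RATE

FAST_CI_COST_PER_DAY = 50     # assumed price delta for the bigger instance

print(f"Productivity lost to slow builds: ${wasted_dollars_per_day:.0f}/day")
print(f"Extra CI spend to fix it:         ${FAST_CI_COST_PER_DAY}/day")
```

Even with generous error bars, the comparison isn't close, yet only the second number ever shows up on an expense report.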
In many places reluctance to spend money on infrastructure is also, I think, a symptom of headcount-itis. Managers love to have more employees, and love to have more for them to do, because that makes managers seem more impressive to the org. My manager might have perverse incentives; keeping the SAs busy fighting scaling fires both makes his team look impressive because they're busier, and makes him look better because the capital expenditures are lower.
Obviously, head count is expensive, so this is usually a game of appearances rather than an effective strategy to improve the bottom line. Good insight into productivity is required to catch this kind of stuff, but in the real world I've found that a lot of places just don't have an org structure capable of weighing cost / benefit properly when it comes to infrastructure.
If you're blindly following "orders" to reduce costs and doing things that push up costs elsewhere then you're not doing a good job. A good sysadmin (or the sysadmin's boss) should be able to pull up some numbers and say "Build tasks are being queued for an hour before they run. What impact is that having?", and call a wider meeting that brings together the higher-up-the-foodchain manager, the development team, and anyone else who might be affected. Ideally it'd be the higher up manager who calls that meeting of course, but they may not understand the technical issues.
This is the responsibility of someone above to know whether or not the orders they give should be given. If they need to ask for information from people below them, fantastic, please help them along.
Please don't fall on the sword for incompetent managers.
Yes, and part of that is the people in their team(s) helping them and understanding that they're fallible and may fail to ask a pertinent question. Equally, the manager needs to be open to updates volunteered by their team without a prompt. Ultimately everyone does better if the entire group works together.
All it needed was for the question to be asked on the company's internal board, and for the answer to be listened to. Had that been tried even once or twice, I'd probably have forgotten about it in short order. This went on for a couple of years though!
"I will force AV on reads on the developer boxes."
"I will install AV on the production DB servers without telling anyone in the development group, then make the developers prove AV was the cause of production slowness before removing it two weeks later."
"I will force this crazy group policy on developers and when they complain, I will totally ignore them."
A bad or uncompromising sysadmin (one and the same) makes development work a complete nightmare.
I half believe the reason developers are embracing cloud architecture so much is to take sysadmins out of the equation.
On a side note, a tip for developers. Always make friends with the sysadmins. Buy them lunch or something. Right or wrong, they can make your lives much better or much more miserable.
It was literally true at one of my previous jobs. We couldn't install anything on our own dev machines without approval from Net Ops, not even Notepad++ (I don't think I ever got that installed, never got approval).
We once asked for a new server which mirrored the software of an existing server with two months lead time and got complaints that two months is not enough time to get a new server. I think we ended up getting it in three months, after the new project was supposed to be deployed to it.
Meanwhile we were starting to get into Azure, and we had a new server in Azure up and spinning with everything we needed installed on it in about 15 minutes.
The Lead Developer said, "We need to get as much stuff on the cloud as we can so we can stop dealing with this mess." We dealt with a lot of PHI there, though, so there was only so much we could do.
I guess IT in this place really is getting up to speed.
So they become intentionally opaque to move that discussion out of their laps and make it come via the development team managers confronting the operations managers and having the fight on that turf.
Such situations occurring is a sign that the organization is not set up effectively. This sort of confrontation shouldn't need to be happening.
Ideally the development team's lead and/or project managers are involved with, are informed ahead of time, or are even contributing to the policy decisions on the operational side.
Couldn't this argument be applied to any developer for any discipline/speciality?
Sure, more knowledge/context is always better if reasonable to attain, but my experience suggests that your above concerns could also be addressed via team organization rather than expecting all developers to know all things.
I was a sysadmin with various ISPs in various countries for 15 years before I "turned to the dark side". I'd been using Ruby for a few years with Puppet and Chef, and after dealing with one too many "flaky coders", I picked it up.
I have to say, coding is far more enjoyable, though both come in handy in my day-to-day life.
It sounds like you've dealt with a few "BOFH" sysadmins. Don't worry, we're not all like that, and those of us who have been on both sides will probably see it your way.
Tell your boss I'm available (remotely), by the way ;)
D: I have noticed that task Frobnicate has not been running in Production for a month; I checked, and it is not even added to the scheduler!
SA: There is no mention of Frobnicate in the pipeline for scheduled tasks.
D: What pipeline? FancyPancyScheduler is bundled with the application and tasks are defined in the DB. I did it in Staging and everything worked. Frobnicate is all the buzz in the team, you must have heard about it. Why don't you check for changes in Staging?
SA: We have a well-defined pipeline to manage scheduled tasks; currently the executing agent is Cron, not FancyPancyScheduler.
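A sketch of the SA's first move in that exchange: before debating scheduler frameworks, check what the executing agent actually knows about. "Frobnicate" and the crontab contents below are hypothetical; a real check would read `crontab -l` output rather than a hardcoded string.

```python
# Check whether a task is known to the actual executing agent (cron here).
# The crontab text is a made-up example standing in for `crontab -l` output.

crontab_dump = """\
# m h dom mon dow command
0 2 * * * /opt/app/bin/cleanup.sh
*/15 * * * * /opt/app/bin/sync-cache.sh
"""

def scheduled_commands(crontab_text):
    """Return the commands cron will actually run, ignoring comments/blanks."""
    commands = []
    for line in crontab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 5)   # five schedule fields, then the command
        if len(parts) == 6:
            commands.append(parts[5])
    return commands

commands = scheduled_commands(crontab_dump)
print("frobnicate" in " ".join(commands).lower())  # False: cron has never heard of it
```

Ten lines of checking what is deployed, versus a month of a task silently not running.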
Developers and Admins have more or less the same goals (stable, maintainable and extensible), but on different pieces of the system (code vs infrastructure). In my short career I have seen problems arise where one party makes plans and changes according to current or even past (it worked like this earlier) state of the other party. This applies to both developers and admins.
So I sort of agree with your sentiment that developers need an understanding of system administration. Though, depending on team size, I believe it is entirely sufficient to have someone in Development who understands system administration and the actual infrastructure, and someone in Operations who understands development and the actual stack. This is where I hope DevOps will end up: arbitration between Development and Operations to ensure smooth sailing forwards. Because the debate of "I will do it in code" versus "this must be done on the edge" (e.g. static assets on a website: served by the application or the web frontend?) will never be resolved.
I disagree, because this generalizes both developers and admins too much for my own comfort. I've seen sysadmins get really sloppy out of hubris, in the name of getting something into production quickly, without thinking about the full lifecycle of an application. This is common with developer-turned-sysadmin engineers; I am one, and I tend to be more reckless because most of the errors I've observed would not have been caught by going super slow: adding more test code does not necessarily find the most critical errors, it just increases confidence. And especially in enterprise software, most developers are sitting on features and are nearly allergic to new trends, because their organizations value avoiding revenue loss far above losing growth opportunities.
Of course the stereotype is that operations wants things stable and manageable at the behest of the business, while developers want to deploy new stuff faster (because the idea of development in most places is to create something new). As modern infrastructure becomes increasingly code-driven and emergent, as opposed to manually formed and restrictively managed, sysadmins will have more room for errors, and that may change this in the future. Meanwhile, developers are increasingly under greater scrutiny by society when rolling out features, such that nobody can ignore the concerns, and they may eventually be forced into nearly waterfall-like development patterns. We can already observe this with the infection of Agile with enterprise bureaucracy / overmanagement spreading back into the rest of the software industry, as many of the formerly small, agile tech companies become big behemoths themselves.
This is the weirdest part of the whole devops mantra. Like, I know how to evaluate the complexity and memory requirements of code well before I write it, and I'd guess most CS graduates should be able to do the same.
So either it's yet one more attempt at getting cheap labor into workable territory, or plenty of people where this myth originated are being cheated out of their money by a curriculum that teaches nothing of value.
Those are only tiny slices of real production bugs. No amount of complexity analysis of your code ahead of time is going to protect you from all of the issues that arise with integrating any large system dealing with lots of requests. You run into all kinds of things like query optimization, kernel TCP tuning, load balancer problems, cache thrashing, high latency clients, out of spec clients, power failures, etc.
If you think knowing the theoretical behavior of your program in an ideal environment is enough, you are exactly the type that throws code over a wall without having tested it.
Sure, if you bunch them all up like that it might look like you have a point, except it falls apart when you attribute the concerns properly.
or please explain, how would dealing with kernel tcp tuning part-time help Joe Random developer write better code?
Cache thrashing also can't be audited in code alone without understanding the architecture that the app is going to be deployed on. It's highly unlikely that the servers will have the same processor cache sizes, memory sizes, and NUMA architecture as the dev's laptop.
Load balancer is something a developer should know about as well. A developer has to consider the behavior required by the application (e.g. backend session persistence, headers injected, etc).
>please explain, how would dealing with kernel tcp tuning part-time help Joe Random developer write better code?
Joe might learn that connections aren't as cheap as he thinks and maybe it isn't a great idea for each client to require 50 connections for the app to function. He might also learn that TCP isn't very efficient on high bandwidth, high latency, lossy networks and decide to switch to UDP with error correction.
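To make "connections aren't as cheap as he thinks" concrete, some rough arithmetic. The buffer sizes and client counts below are illustrative assumptions; real per-socket buffers are governed by `net.ipv4.tcp_rmem`/`tcp_wmem` and kernel auto-tuning, and vary at runtime:

```python
# Rough cost of a "50 connections per client" design in kernel memory alone.
# All figures are illustrative assumptions, not measurements.

RECV_BUF_KB = 16           # assumed per-socket receive buffer
SEND_BUF_KB = 16           # assumed per-socket send buffer
CONNS_PER_CLIENT = 50      # the hypothetical app's current design
CLIENTS = 10_000           # assumed concurrent clients

total_sockets = CONNS_PER_CLIENT * CLIENTS
total_gb = total_sockets * (RECV_BUF_KB + SEND_BUF_KB) / (1024 * 1024)

print(f"{total_sockets:,} sockets ~= {total_gb:.1f} GB of kernel buffer space")
```

And that is before counting file descriptors, connection-tracking entries, or load balancer state for half a million sockets.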
Long story short, a good developer should know everything about the environment in which the app is intended to run. "It performed ideally on my laptop" is throwing code over the wall.
Civil engineers don't design bridges without understanding where the bridge will go. The same applies to software.
So we're back to point one: you need devs who have gone through basic education, and you should stop cheaping out by hiring Joe, or Joe should ask for a refund of his tuition fees.
> snip of stuff that one does not know off the bat
Sure, but it is knowable; it's not exactly hard. Databases are predictable; building indexes in the right places is not an esoteric practice that can only be done by trial and error and rituals, etc. The literature is quite abundant, easy to process, and complete with tradeoffs about different approaches and how they impact performance, maintainability, etc.
99% of programmers aren't breaking new ground.
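In that spirit, index behavior is even directly inspectable rather than a matter of ritual. A small sketch using SQLite's `EXPLAIN QUERY PLAN`; the table and column names are made up, and the exact plan wording varies between SQLite versions:

```python
# Indexing isn't esoteric: the database will tell you its access path.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO users (email, name) VALUES (?, ?)",
    [(f"user{i}@example.com", f"User {i}") for i in range(1000)],
)

def access_path(query):
    # the last column of EXPLAIN QUERY PLAN output describes the access path
    return conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

query = "SELECT name FROM users WHERE email = 'user500@example.com'"
before = access_path(query)   # e.g. "SCAN users": full table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = access_path(query)    # e.g. "SEARCH users USING INDEX idx_users_email (email=?)"

print(before)
print(after)
```

The same check works against Postgres or MySQL with their own `EXPLAIN` variants; the point is that the tradeoff is measurable before anything ships.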
There is no basic education that covers the associated costs of a TCP connection in the kernel of a modern operating system or in the load balancers it passes through on the edge of the network.
>sure but it is knowable
So you're saying it is important for a developer to understand the infrastructure the code will run on. Thank you
The reason I brought up all of those points is because they are things not covered in CS educations and they hammer "hands off" devs all of the time.
I've worked with tons of junior devs from all kinds of good schools (Stanford, MIT, UC Berkeley, etc) and they almost always get bitten by this stuff because they throw their code over the wall and don't make an effort to understand the operational environment. It has nothing to do with a good education, it has to do with a mindset of not operating in a vacuum.
Many devs out there who work with Windows or do mostly front end often have little experience in that domain.
Seeing a lot of work get done at uni by students bears this out: a friend who did a blockchain project recently, and who does actually do some backend work, in fact did very little backend discovery; the job of getting the environment up and running was delegated to another student.
I've had the exact opposite experience. In most of the organizations I've worked in the "sysadmins" (mostly Systems Engineers/Operations Engineers actually) were stronger developers than the people who were actually developing the software. But that could just be a title shift, because what I've seen happen is that people who care about systems but have a development background gravitate towards operations roles and end up filling in as the "actually Senior" developer for the dev teams.
In the 15 years I've been doing this, I've only occasionally met someone who has stuck hard to the development side of the house but is actually competent when it comes to systems. Most developers have zero interest in lower-level things like networks, hardware, and even the backend software/databases which are required for their application to succeed. A common scenario is that the devs choose an inappropriate backend stack because they picked the easiest thing to deploy rather than what best suits the use case. Then when things blow up, they beg for an ops team to be created, which usually starts by hiring people who are competent enough developers that they can relatively painlessly replace the entire backend with something sane (e.g. Mongo-to-Postgres shifts are commonplace, because Mongo is a dog in the real world).
> The worst sysadmins get in the way of developers. Ones that scale down your CI server to the cheapest, throttled, one the hosting company has, leaving $800/day contract developers waiting for builds that run in 20 seconds on their laptops take nearly an hour. And then try and argue the toss about whether the CI server is cost effective and every few months keep switching it down despite the CTO saying it needs to be left alone.
Yeah, that does sound terrible. I agree. My top 5 jobs as a systems person are the following, in priority order:
1. Make sure production stays up for our customers so we keep making money. (5 9s targets)
2. Ensure the security (and compliance) of our systems so we don't get hacked and we maintain customer expectations about compliance.
3. Ensure the performance of our product/systems is up to customer expectations.
4. Make sure deployment automation is solid and streamlined so that deployments are frictionless
5. Make sure new code is actually being deployed regularly and remove impediments to deployment so customers get features faster.
You'll notice a trend here I'm sure. The most important thing is the customer, then the developer. The biggest frictions I've seen between systems/development teams is when the development team believes that their desires/needs are the highest priority. The systems team is /not/ there to be at the beck and call of the development team, it's to be at the beck and call of the customer who is paying the company money. As much as possible I try to ensure the development team is having a frictionless experience, but if something will negatively impact the customer it is 100% my job to throw a roadblock in the way of the development team to prevent that. The customer of the company is my priority, and everything else is secondary.
It is an interesting exercise to generalize this statement in context of general engineering.
It seems either your conclusion is held to be incorrect, or, we reach the conclusion that software development is not engineering.
But here, as an example, is my BSEE alma mater: http://eng.rpi.edu/academics
And to this day, we hear about "software engineers" and "software engineering".
Per my OP: "It seems either your conclusion is held to be incorrect, or, we reach the conclusion that software development is not engineering."
Possibly, one reason for the prevalent problems in the pedagogical & human resource fulfillment aspects of the field is due to a miscategorization of the field.
> In fact, I feel like a huge part of a successful software career is learning to see the similarities in disparate fields and draw from them positive architectural benefits, while keeping other-profession-spire-dwellers properly onside/placated.
Fully agreed. In fact that has been my guiding light in my own approach to software development. To clarify my view, I think software, very much like architecture, is a polyglot yet distinct discipline. It is not engineering. It is not mathematical logic. It is not process engineering. It is not logistics (provisioning). Etc. (Just like architecture is not civil engineering. It is not philosophy. It is not art. It is not environmental systems engineering. Etc. It is architecture.)
-- p.s. edit --
I would like to bolster my earlier statement that software development has more in common with architecture, theatre, film, etc., than with engineering:
I would like to propose and roughly define a notion of 'semantic gap'. A sort of soft measure of the degree to which the formally expressible definition of a 'production' falls short of permitting the realization of the 'product' without the intervention of the 'designer'.
With that definition in hand, I propose that "engineering" disciplines are those creative productions that have minimized the semantic gap to a degree that permits strict divisions of labor in the production.
Whereas the "arts" are those creative endeavors that face an intrinsic constraint on the degree to which the semantic gap can be minimized, such that even this maximally reduced semantic gap requires subjective and/or contextual 'interpretation' of the formally expressed design.
By way of example, there are many successful artistic projects that utilized the talents of multiple artists in parallel (lots of murals and mosaics, for instance).
In larger-scale computing projects, frequently the (mechanics of the) interfaces pose bigger problems than the vision statement or overall goal, whereas in artistic projects indefinable aesthetics may be the showstopper, despite perfect comprehension and collaboration.
This is not personal criticism, but you know how I know you're not working in a highly regulated environment? Check out the Carnegie Mellon Capability Maturity Model (CMM) as a counterexample of where some companies go. Development is not at one remove but two from production support: there's an "operate" team between them and production environments, and in a regulated environment operate doesn't have privileged access either. That'll be a third team, due to separation-of-duties requirements.
Now imagine you're paged out to a call where your code is slow or failing and you're not even allowed to login to where the issue's happening. Fun, right?
This is why I'm absolutely loving the devops changes we're seeing now: developers can control the environment without retaining control of it. My ideal is to apply some sensible defaults (no, you can't have all my crashdump space for your app logging, ask for more disk instead; no, you can't run GHOST/glibc/POODLE-vulnerable versions of libraries) and otherwise let the developers spec the OS as a template or dependency for their app. It's much better for me, since if I'm required to troubleshoot I know my requirements are met, and otherwise the developer may do as they wish. Everyone wins, and my control requirements are satisfied, because remember: developers are never allowed production access in regulated environments.
>Maybe my experience is unusual, but I've never worked anywhere that the sysadmins knew more than the developers about how best to run their code in production. And when things go wrong with it how best to find the cause of the issue.
I guess it depends on what you mean? The developer is in the best position to know what logging there is and how to enable it or increase verbosity. But they may be completely ignorant of how the operating system's TCP stack, memory management or other mechanisms work. Have you ever had to explain to someone that a Java out-of-memory error had nothing to do with the fact that Linux is using otherwise idle memory to buffer I/O, and that they're misreading top output? That the actual issue is their object management, and just increasing the JVM's heap size is at best a bandaid?
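The top-misreading in question takes only a few lines to demonstrate. The figures below are a made-up `/proc/meminfo`, and treating Buffers + Cached as fully reclaimable is a crude approximation (modern kernels report `MemAvailable` directly, which accounts for this properly):

```python
# Why "free" memory on Linux looks alarmingly low: idle RAM is used for
# page cache, and most of it can be reclaimed under memory pressure.

sample_meminfo = """\
MemTotal:       16384000 kB
MemFree:          512000 kB
Buffers:          812000 kB
Cached:         11264000 kB
"""

def parse_meminfo(text):
    fields = {}
    for line in text.splitlines():
        key, rest = line.split(":")
        fields[key.strip()] = int(rest.split()[0])  # values in kB
    return fields

m = parse_meminfo(sample_meminfo)
free_pct = 100 * m["MemFree"] / m["MemTotal"]
avail_pct = 100 * (m["MemFree"] + m["Buffers"] + m["Cached"]) / m["MemTotal"]

print(f"naive reading: only {free_pct:.0f}% free")    # 3% free
print(f"in practice: ~{avail_pct:.0f}% reclaimable")  # ~77% available
```

A developer staring at the 3% figure concludes the box is out of memory; the JVM heap exhaustion is a separate problem entirely.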
If you have a developer who insists every issue is the operating system, sometimes the SA has to know how to dig in and run stack traces, probe tools (systrace, dtrace, whatever), jmx queries, etc until they can pinpoint the offending code.
As another example if you have an application that isn't draining queues quickly enough and therefore sending back tcp zero window frames upstream, what's the solution? A hypothetical lazy developer will say "it's the OS not queuing enough data, increase the OS buffers." A hypothetical lazy SA may say "it's the app not consuming packets quickly enough, rewrite the app."
In reality if we've all been paged to a priority one bridge the solution will probably be the combination of the two - tactical fix of increasing buffers to create some time for development to understand why the code isn't doing what it should and fix it.
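That split between tactical and real fixes can be shown with a toy model of the zero-window scenario: if the consumer drains more slowly than the producer fills, a bigger buffer only postpones the stall, while fixing the consumer removes it. All rates and sizes below are illustrative.

```python
# Toy model: time until a receive buffer fills and the receiver has to
# advertise a TCP zero window. Illustrative numbers, not measurements.

def seconds_until_stall(buffer_kb, produce_kb_s, consume_kb_s):
    """Seconds until the buffer fills; None if the consumer keeps up."""
    net_fill_rate = produce_kb_s - consume_kb_s
    if net_fill_rate <= 0:
        return None          # consumer keeps up, the buffer never fills
    return buffer_kb / net_fill_rate

# sender pushes 100 KB/s, the app only drains 60 KB/s
print(seconds_until_stall(64, 100, 60))    # 1.6  (64 KB buffer)
print(seconds_until_stall(256, 100, 60))   # 6.4  (bigger buffer: later stall)
print(seconds_until_stall(64, 100, 120))   # None (faster consumer: no stall)
```

Quadrupling the buffer buys a few more seconds on the bridge call; only the code change makes the stall go away.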
When even the US military - one of the world's foremost investors in management and leadership research - has largely abandoned command and control (the military equivalent of Taylorism) we really need to ask whether structures that enforce a management/worker caste vs. one that empowers those closest to a problem are effective beyond any meaningful scale.
Even so my original point remains: in some kinds of highly regulated shops there's enough external pressure for controls and separation of duties that the developer simply cannot have access to production. I'm not defending either practice (CMMI or separation of duties), I'm just saying in some places it's reality, regardless of perceived drawbacks or overhead.
I've not seen a professional developer be confused about Linux using otherwise idle memory to buffer, no. I have seen that with sysadmins who mostly look after Windows boxes and were somewhat unfairly dropped in at the deep end.
I've seen JVM heap OOM errors be caused by both object management issues and applications that would legitimately benefit from larger heap sizes. Many, many times.
I think I'd fall off my seat if I saw a sysadmin use JMX to find an issue. I did see a security guy (so not really a sysadmin, but he was doing a related job) use strace once. He was remarkable enough to have his own Wikipedia page.
Is there somewhere to read about this? I think this might have come up with one of our projects. (At the time, from googling, I suggested they try mark-and-sweep; I didn't really have any idea, but was of the opinion they had lots of small objects.) I don't have much experience in Java but was trying to be helpful!
I think you've just described me. And I don't see the argument against sysadmin.
> leaving $800/day contract developers waiting for builds that run in 20 seconds on their laptops take nearly an hour.
Sorry I just can't take you seriously.
When they're out of CPU credits it's game over.
When I switched companies, I came across better developers. Some had decent sysadmin skills, but the main difference was that they actually took interest in how things worked past the 'git push', and when I asked / required them to make some changes that would make my life easier, they listened, discussed and adopted when appropriate.
With those same guys, I took interest in what they were doing, what their actual job was, and came up with ideas that would make things easier and run smoothly on both ends.
After a while I figured out that they weren't actually better developers - they were better people.
(Also, I figured out that being grumpy was not the best approach and that patience, kindness and gratitude could get people to do more than snark, humiliation and flame-throwers.)
I guess my point is: you don't really NEED to have sysadmin skills to be a decent developer; what you really need is to care about what sysadmins do - be curious, talk with them and trust them when they say that your brilliant idea won't work in production.
I've seen this ignorance even in college professors. My first programming class in college was a CS class that had both CS and IT students in it, since it was required for both kinds of students. The (CS) professor kept trying to convince students how much better CS was, and gave some good arguments (i.e., salary), but the most arrogant thing he said was that IT is a subset of CS, and that by doing a CS degree you would understand everything it takes to be in IT. He also mentioned how in IT you would be constantly fixing other people's computer problems, but as a software engineer you wouldn't need IT's help since you can fix it yourself. The funny part is that part-way through my degree I realized the college didn't even offer a real CS degree - it was called "CIT with Computer Science Emphasis" - and none of my advisers or professors mentioned it would cause issues getting jobs outside of Utah. The best thing I did was leave that school and finish my CS degree elsewhere, which cost me a lot of credits and almost felt like starting over. I feel like I got scammed, but that's beside the point: I have yet to work for a company where a software engineer gets to manage his own computer without following IT guidelines, like my CS prof had described.
I didn't realize the importance of all the "admin stuff" until our newly hired sysadmin came to me and asked if I could help him figure out how to deploy the project I was working on.
This ended up being a looong chat about monitoring, redundancy, architecture, security... you name it. What I've always thought of as installing and configuring software turned out to also touch designing the software so that it works reliably and is easy to maintain.
I don't think I'll ever have plenty of sysadmin skills, but knowing even the general idea of what's important to sysadmins helps a lot. Also, being able to become another interruption in their day and bounce ideas off them is priceless. :)
That's an individual skill as well as a systemic one, though.
Either way, this is a two-way street. And often times the culture of one group or the other gets in the way. Which is really unfortunate.
Having been an admin myself, gradually moving more and more towards development, I understand what things in software are annoying for an admin to have to deal with. Most developers simply don't care. Grant full permissions or don't expect anything to work. Any objections and you're a troublemaker. A better attitude would have gone a long way, but firsthand experience works best.
In addition, it helps me greatly when there is no (decent) admin around. I know whether to suspect the software or the system it's running on, how to keep things running on a less than ideally configured/maintained system without completely compromising security, can help users when the problem they're having is not a problem with the software, but a problem to them anyway – they love the extra mile – et cetera.
It must be said that some admins are just as shortsighted. Knowing what kind of measures actually work for stability, security, and so on, I've come to strongly dislike those who only complicate the situation to no benefit, as well as those who point their finger at the software when it really is their system that's causing problems.
I say this because if I had to wait for a sysadmin every time I wanted to see if something worked, I'd spend a lot of time doing nothing. And it's likely that I couldn't even solve a lot of problems.
So I think you not only have to know what they do, but some of how to do it.
Why do things like Docker exist? Because developers got tired of sysadmins saying "sorry, you can't upgrade Ruby in the middle of this project". Why does virtualenv exist? A similar reason.
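To make the virtualenv point concrete, here's a minimal sketch of what it buys a developer: the project gets its own interpreter and library versions, so upgrading Python (or a library) mid-project stops being a negotiation over the shared system install. (Paths and the pinned-version idea here are illustrative, not a recommendation.)

```shell
# Hypothetical sketch: isolate one project's Python environment.
python3 -m venv .venv                       # create an isolated environment
. .venv/bin/activate                        # this shell now uses .venv's python/pip
python -c 'import sys; print(sys.prefix)'   # prints a path inside .venv
pip freeze > requirements.txt               # record exact versions for reproducibility
```

The same isolation logic, taken one level further down the stack (OS packages, not just language packages), is what containers provide.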
Containerized ecosystems (which is to say basically all of them now) are really a sign of those of us on the sysadmin side of the aisle capitulating and saying that developers can't be stopped from having the newest version of things, and I think that's a bad idea.
15 years ago, when a project would kick-off, as a sysadmin I'd be invited in and the developers and I would hash out what versions of each language and library involved the project would use. This worked well with Perl; once the stacks started gravitating to Ruby and Python it was a dismal failure.
Why? Because those two ecosystems release like hummingbirds off of their ritalin. Take the release history for pip (and I'm not calling pip out as particularly bad; I'm calling pip out as particularly average, which is the problem): in the year 2015, pip went from version 1.5.6 to 8.1.1 (!) through 24 version bumps, introducing thirteen (documented) backwards incompatibilities. Furthermore, there were more regression fixes from previous bumps than feature additions. You'll also notice that none of these releases are tagged "-rc1", etc., though the fact that regressions were fixed in a new bump the next day means they were release candidates rather than releases. Ruby is just as bad; the famous (and I've experienced this) example is that an in-depth tutorial can be obsoleted in the two weeks it takes you to work through it.
Devs are chasing a moving target, and devs who haven't been sysadmins may have trouble seeing why that's a bad idea.
Those technologies don't exist so Developers can get around Sys Admins and ignore your helpful advice. They exist to solve the problem that made that part of the Sys Admin's role necessary: they remove the underlying need for a Sys Admin to worry about versions. Admins should see this as a good thing, but in my experience many dislike it because it takes them out of their Gatekeeper role. We shouldn't WANT to stop the Developers from having the newest version of things. They aren't kids playing with toys that we need to nanny over; they're doing work that creates value, and the fewer things we do to get in the way of that, the better.
If something breaks due to version changes, their testing should catch it. If things are breaking in production, we ought to get involved because there's some other problem, but before that we, as a profession, need to learn to get out of the way and let people work by letting technology handle the problems. The "Gatekeeper" mentality needs to die as quickly as it possibly can.
In my experience (university), yes they are, and they should do that at home.
Why do you need the latest bleeding versions in the first place?
In my sysadmin experience, people believe software gets bad and deprecated as soon as the glorious next breaking version appears. I don't think I need to argue why this is an illogical stance.
With my developer's hat on, bumping to the next version mid-project reliably introduces more friction than it's worth. People think the next version solves that one weird issue but ignore that it introduces two new ones and that the software must be changed to fix five new incompatibilities.
But the solution reliably is to just not use that weird feature that caused the bug in the first place, and think what a clean solution would have been. And guess what, the result is a cleaner and more compatible code base. It's a tip that works for me again and again: If there is friction, think - before spending the next hours with an update that will soon lead to new problems.
It's great that you can for example compile Linux without too much friction. It's great that arcane shell scripts can run on any system. Stability (in a compatibility sense) is not a nice-to-have, it's basic sanity.
Stability, sanity, all that is amazing, and a must have.
But also bug-fixes, security improvements, and performance improvements are wonderful too, which tends to come with using up-to-date dependencies.
The problem with the latter, as you mentioned, is when it introduces breaking API changes and is wholly not backwards compatible. This is not a "kids playing with toys wanting to experiment problem" this is a bad software problem, which is why I like Go, and why I liked Java when I was doing it full time. If the language you use has backwards compatibility as a first-class citizen, most likely the package authors will act that way too, and then the maintainers, and eventually the developers. Limit your software choices to those who care about not breaking everyone's shit every 2 weeks. Heck even when I write my own API's now that I know only my company is going to use internally I am thinking about this.
I do not understand the disconnect that developers have with understanding all of the benefits that it brings. Yes, you have some extra code in your code base so it's less clean. You also have a stable environment as a result. The first affects only your personal preference. The latter affects all of your developers and users.
Unless you have a situation where it's impossible to maintain, not insisting on it is pure self interest.
Because they've never worked on an old codebase, because front-end technologies change so often and everything just gets re-written anyway. It's a waste of time worrying about this when the code won't make it to its first birthday.
If you were speaking to seasoned C and DB developers about stability in the tools and the platform, you'd be preaching to the choir.
> It hardly seems worth even having a bug system if the frequency of from-scratch rewrites always outstrips the pace of bug fixing. Why not be honest and resign yourself to the fact that version 0.8 is followed by version 0.8, which is then followed by version 0.8?
Backward compatibility has real costs. You cannot restructure your code base as easily, you cannot deprecate bad ideas, you cannot extend it as easily, and so on. Sure, it also has real benefits (as you've stated), but ignoring the disadvantages while only highlighting the advantages is not a useful approach; it only shows "your personal preference".
No, the former results in a bloated code base full of old legacy crap that no one understands and is afraid to touch because it might break. You have to insert weird workarounds because that bug is now a feature to some idiot and you provide backwards compatibility so it lives forever now.
I don't think so.
It's just plain arrogance to believe you are a better developer than the guy who came before. Having some fear of breaking the code base is healthy, the same way that having a little fear that the chainsaw is going to cut off your leg makes you safer.
I'll add as an anecdote that I do follow your practices (limiting dependencies). It works wonderfully on Debian stable (most of the software there is now >2 years old, the next version has just been soft frozen). I have the occasional package pulled from testing: I recently toyed with Perl6. And currently I use a newer version of python3-sphinx for a nicer doc syntax but I could do without. It causes no headaches at all.
I don't particularly care about having my software on the latest version. I personally prefer using the old version for six months while the newest version gets the bugs worked out of it.
I know sysadmins value reliability and security, but it's really frustrating when every upgrade takes dozens of hours of work to approve. Questions like "What features do you need in the new version" miss the point. It isn't about the features of the software, it is about maintaining a modern code base.
Upgrades always have the potential to break things, but when you keep up with the upgrades it is easier to achieve the stability and security goals the sysadmin wants. When you upgrade often, it is easier to read the documentation and find where changes might break something, and when things do break it is easier to fix them. Upgrades that jump over several versions at a time are a nightmare to debug, and it creates a lot of technological debt that you have to work out later.
Ultimately, sticking with a version of software because it works is trading a little stability now for an absolute mess down the line.
I don't think this thread is about "maintaining a modern code base" at all, whatever that's supposed to mean. My impression is you've fallen victim to the hype train.
In my perception the thread is about always catching up with the latest and greatest. Would you say in all earnest that my code is not modern because I make a point of developing against solid standards and not constantly longing for things that are not in my distribution (the software there is usually 0.5-2.5 years old)?
You can check some of my code at https://github.com/jstimpfle. Is it "not modern"? I'm a reasonable but not outstanding developer, and not saying that everything will work on your computer (since I'm usually the only tester) -- but I'm pretty sure I can get everything there to run on your computer with minor effort.
> When you upgrade often, it is easier to read the documentation and find where changes might break something, and when things do break it is easier to fix them.
No. Breakages are less frequent because the software is not brand new, and they are better known because all people using the stable release are on the same version. Documentation comes with the distribution, but I don't have any problems googling it by giving the version string either.
> Upgrades always have the potential to break things, but when you keep up with the upgrades it is easier to achieve the stability and security goals the sysadmin wants.
This thread was never about security and I don't approve. I don't think you are familiar with the concept of a stable release.
> Upgrades that jump over several versions at a time are a nightmare to debug, and it creates a lot of technological debt that you have to work out later.
No. If you develop against solid standards you have less breakage. It's not about incompatibility with the most recent versions. That would be a stupid idea. It's about compatibility with releases other than the latest and greatest. This means not depending on the hot new features that are only in these versions, simple as that.
That's fine for an OS, but what do you think business customers would say if you said "sorry, that feature won't be added until the next release in 2 years' time"? That's where tools like pip come in: they let the software move faster, which it often needs to.
We say that all the time; we have a two-year release cycle. And in our field (aviation) that's considered breakneck.
Please list more than only one. It's simple to make exceptions for exceptional requirements.
With most of them security fixes will only go into the latest version though, so once you get behind your system is insecure.
Applications aren't something you build and forget. An unmaintained project is a dead one.
In conclusion you don't use any libraries that are packaged for any OS.
Also assuming that some libraries you use don't exist for your OS, that doesn't mean that you absolutely need the latest and greatest in a business critical way. So, not approved.
All in all, not too fond of the reasoning and the evidence you provide.
Your policy may work in a university, but you'd be fired from any real business.
There are 2050 python3-* packages on my system. Not that I think it's a good idea to use most of them. What's "compatible"?
So what are the libraries you absolutely need? What is this week's secret sauce?
>There are 2050 python3-* packages on my system. Not that I think it's a good idea to use most of them. What's "compatible"?
2000 of them are random versions someone made a package for that are unknown to the core team and probably not receiving updates.
Your list of projects sounds like typical academic ones, not tools used by businesses that employ most software developers.
You're also missing the other benefit of these tools: that we develop against the deployed version. There are no compatibility issues even though we develop on Ubuntu and host production on Red Hat.
.NET... It's MS, do you run on Mono? How does the question of requiring the latest version apply?
Sorry, but that is a hilarious argument, almost straight from Gentoo is Rice: https://fun.irq.dk/funroll-loops.org/
Nah. If we define "real business" as something with a decent turnover, employing over 250 people, and being in business for over 8 to 10 years; a business that isn't actually in the business of writing software (the majority of what makes up global stockmarkets, or "real business" in most peoples' eyes) then you will find that OP's attitude and policy-making philosophy is right on the money. (Source: Was CTO in exactly the above type businesses for many years)
Because the newest version has several features that we would like to take advantage of immediately?
Look at PHP 7.0 which introduced return types, and 7.1 which introduced nullable return types. These are features I really want in my application, so we upgrade.
The sysadmin role has traditionally been a focus in that environment (e.g. controlling access to cluster resources).
The definition of 'professional' is up for debate, but I'd encourage people to weigh in on the following (to IEEE, not me):
- an appropriate engineering education background (ABET/EAC)
- at least four years of engineering experience in your field and under the supervision of qualified engineers
- passed two exams (the Fundamentals of Engineering [FE] exam, which is now a computer-based test available essentially year round, and the eight-hour PE exam)
- kept current by as a minimum meeting your state's continuing education requirements.
 I think it would be worthwhile to consider apprenticeships, equivalent to the 'law office study' path to attorneys' bar certification.
Well - that's all well and good, when put such that "Gatekeepers" are viewed as blockers.
However, we "Gatekeepers" are the ones that get paged and / or yelled at by a CTO when an application keels over. Not the developers. The developers get to sit in their sandbox (otherwise known as "production" in 2016/17) of ever-changing library versions that were only rapidly tested in QA. Then they play a game of Starcraft II, scan HN and go to bed. When something runs out of memory or crashes in the middle of the night, we get paged. So, hell yes we should be involved in the process.
When I started at my current company the traditional silo between dev and systems was there (although we were allowed to deploy our own stuff) - they managed everything we ran our apps on and we just deployed them to servers they had already configured. Over the past ~3 years we've made a lot of changes, the department manager for our IS team is present in our daily standup calls to relay information between our two teams and we now have a couple separate VMWare clusters dedicated to our applications and VM running on them is our responsibility for the most part. We are the first to get called for issues with our applications, and where necessary we work collaboratively with our systems team to resolve them - we don't throw blame around, it does no good.
I should add most of this is only possible because we have real DevOps people on our team (well, really, it's just me right now - we lost our other and need to hire a replacement still) - not developers who know enough to copy a blob of crap to a server to run, but people who have real skills in both aspects. We are trusted to maintain things because we can do it right, and while it took a lot of work (and some unfortunate infighting) to get to this point both of our departments are working great with this arrangement.
There are still kinks that need ironing. We've not done an adequate job of writing documentation so our systems team can help with some failures (primarily on our Linux VMs - our whole systems team is Windows admins) if we aren't available, but it's on the radar, as is getting PagerDuty set up to escalate alerts to them if we don't respond in time (like having our PostgreSQL data volume fill up over the weekend - not a call I wanted to get at 10AM on Sunday).
So yeah, fix your culture issue, get people communicating daily between your teams, share responsibility for issues instead of placing blame.
And that's why people are moving away from that model. It's part of the reason DevOps is being embraced as a model. Developers should be on call to support the applications they build. You get benefits all around.
I wonder how many companies are doing it right vs doing it wrong? Any anecdotes from a proper devops group?
This is institutional failure.
If only customers weren't fickle and might learn not to demand new features all the time whatever the costs...
It's turtles all the way down.
No, they really don't: they remove the ability of system administrators to administer versions of software across the total system.
This is bad, e.g. when a new OpenSSL vulnerability comes out (it being a day ending in -y) and every piece of software has to be updated.
> We shouldn't WANT to stop the Developers from having the newest version of things. They aren't kids playing with toys that we need to nanny over, they're doing work that creates value and the fewer things we do to get in the way of that, the better.
I am a developer, and I disagree. We are, by and large, kids playing rather than adults making carefully considered decisions. We'd rather use v3.0.rc-1-awesome rather than 2.17.12, because the former is the version that adds an API that saves us from writing twenty lines of code, never mind that it also is untested, unstable and very likely insecure.
We need adult supervision. We need oversight. That's why I argue for using stable, LTS-style distributions, and running against the distro packages unless there is a very good business reason not to (and yes, 'we can't implement necessary functionality in a cost-effective timeframe' is a valid business reason). I'm not opposed to using the bleeding edge when it makes business sense; I'm opposed to developers using the bleeding edge because they like it, and keeping the business in the dark.
From an architectural point of view, microservices take the reductionist approach to system design to an absurd limit, and per my professional experience (fwiw & ymmv) are due to the general architectural illiteracy of the rank and file practitioners in this field.
Yes the 'no-architecture architecture'. It's very Zen. /s
> how it all hangs together
In case you are interested in rescuing :) young but promising talent in the field, next time you find yourself involved in a discussion about microservices "architecture", point out the realization of a single-node application per this approach, where every function is a process, and the call stack requires IPC, and the 'linker' is considered obsolete and outdated technology.
After reading your comment it now occurs to me that Docker and other container systems are actually a huge organizational tool. One issue I have encountered at companies is keeping the IT and development departments on the same high level organizational incentives to keep political barriers from coming up between them (and conflicts arising).
Containers can help keep everyone's incentives aligned because System admins can focus on the actual administration aspects of the systems and infrastructure (that devs do not need to be concerned about, like vnet layouts and whatnot) while devs can focus on the actual development and deployment without having to have everything confirmed and approved by the IT departments.
(I have been both the sysadmin saying "no" and the developer mad at sysadmins saying "no". But going slower and doing our homework has never, ever hurt me or my employers.)
Security of the application is very much the responsibility of developers, not system admins, as the developers have the best point of view to understand the implications of the software they are developing/integrating with.
If there are routine violations of security at the application level that aren't being caught by developers working with those systems then the company as a whole needs to sit down and make sure the development teams have the proper security procedures in place, because putting a department in charge of security that has all accountability but no power to remedy the situation is a recipe for political fights between departments and a disaster. Proper code reviews and team leads with experience should be able to catch more security issues than sysadmins will.
If your sysadmins are in charge of security review of the application then they have to be in charge of security review of every low level dependency at the individual package level. Otherwise your developers won't think about it because it's not their problem (IT will review and let me know if anythings bad) and it encourages them to lack accountability of the security of their own software.
Developers may not be as aware of those topics as sysadmins.
I've worked at a place where they were running PHP 4.4.9 until about 8 months ago. And they were upgrading to 5.4! I get that it was work to convert a lot of the older code base to 5.4, but it was already past EOL when they were switching to it. And 5.5 wasn't far behind it.
So now in the near future, they'll need to upgrade again (though they probably wont), and they'll probably jump to 5.6, which EOLs in two years (probably two years after it EOLs).
I have regression tests to catch if an upgrade breaks anything. What does a sysadmin have to approve or deny an upgrade? A little beard stroking and changelog reading?
I think the movement towards containers is, like you said, to keep sysadmins off the code. Sysadmins add value in setting up the infrastructure and keeping it running. They subtract value when they want to tell developers what version of a library to use.
Because he maintains that installation and you don't? But, yeah, that's why virtualenv, Docker, etc. were invented, because devs kept getting sick of installations having consequences.
What does a sysadmin have to approve or deny an upgrade?
Check for conflicts of this version of this library with other software currently in use (by other developers maybe, or even by the same developer). Add it to the watchlist on the dozen or so security mailing lists and newsfeeds he checks daily. Read the changelog and look for implementation problems. Read fora and look for performance problems people are reporting. Yes, beards get stroked during this process, but time and again we see that developers refuse to do this, and wind up coming to us when they break something because of that...
Also, docker (and containerization in general) is a wonderful thing for both of us. It decouples the fickle apps from systems (also moving targets) and the other apps which are constantly seeking out new and creative version incompatibilities. It makes migration and maintenance a much less frustrating endeavor with fewer surprises along the way.
So why is that an acceptable mentality for "in-house" developed software but if you buy something proprietary from a third party where you have zero say over what lib/langs are used, it's A-OK?
>Check for conflicts of this version of this library with other software currently in use (by other developers maybe, or even by the same developer).
That's not the case when using containers properly. Every service gets its own environment, so whatever version of lib-xyz is needed, even if incompatible with other parts of the project, is walled off for only the service that needs it.
>Add it to the watchlist on the dozen or so security mailing lists and newsfeeds he checks daily.
Ok this is where I completely agree with you as we have been working on this at our company. My personal solution seems pretty logical though so hear me out.
1) Build a docker file that fully documents the install of your service as well as any OS level dependencies. Ensure that any config files are external to the container to allow sysadmin access.
2) Document in a central location (say, an internal wiki) which external services, servers, repositories, developers, and admins are responsible for the service.
3) Automate builds of containers from repo and add automated testing post containerization.
4) Sysadmins monitor repositories for changes to docker files or wiki articles for new services, databases and libraries as well as taking note of library versions. If an issue with a particular lib or service is discovered, the config files can be edited to point to a new service. Or a new container build can be triggered with zero changes to the source code, but a forced update to the OS packages for the container.
In a tight situation where a developer might not be available on-call, the sysadmins have more control over a similar proprietary product but don't have a workflow for messing with source code (which they are likely not familiar with regardless).
There are solutions to the issues you raise (often trivial ones at that), they just require an adjustment to workflow and an increase in communication between developers and their sysadmins.
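A minimal sketch of step 1 above, with hypothetical names throughout (the service, base image, and paths are illustrative): the point is that the full install, including OS-level dependencies, is documented in the repo where sysadmins can review it, while config stays outside the container where they can edit it.

```ini
# Hypothetical Dockerfile for 'myservice'
FROM debian:bookworm-slim
# OS-level dependencies are declared here, visible to sysadmins in the repo;
# rebuilding with --pull picks up a patched base image with no source changes.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY myservice /usr/local/bin/myservice
# Config is mounted from the host at runtime, not baked into the image,
# so admins can repoint the service without touching the source.
VOLUME /etc/myservice
CMD ["/usr/local/bin/myservice", "--config", "/etc/myservice/config.yml"]
```

Step 4 then falls out naturally: when an advisory lands, the sysadmin triggers a rebuild against the refreshed base image and redeploys.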
And, ultimately, Docker lets that not be my problem, because they have to deal with it when the next leftpad happens. So, yeah: they should have at it. I guess I still think there's something to be said for the cathedral pace, though.
Well there are two ways to approach this IMO.
One: be proactive. Create a vetted CentOS (or whatever OS) environment for them to base off.
The problem is, if you don't keep on top of it as a sysadmin, the developers will just figure out another way to wall you off.
Alternatively, accept that it doesn't matter what underlying OS they use, because a patched OS >> than unpatched and that when done correctly there is minimal exposure even when the service has an exploitable lib due to the jailed nature of containers.
Assuming "latest introduces new problems" too often builds an aversion to patching which can lead to worse issues down the road.
>And, ultimately, Docker lets that not be my problem, because they have to deal with it when the next leftpad happens. So, yeah: they should have at it.
Exactly! The only one responsible for libs are the parties directly leveraging them. Not that developers shouldn't make that info known. It has to be documented to remove the bus-factor of 1, and if it isn't the sysadmins should work with the devs to get it documented.
> I guess I still think there's something to be said for the cathedral pace, though.
I think it depends a lot on your resources as a department/company. You should always execute as quickly as feasible given your team size and work load. Otherwise technical debt has a way of piling up faster than you can offload it.
They also maintain the base docker images that we're expected to use, as well as the docker build infrastructure.
Facilitation with guardrails, not blockers.
> So why is that an acceptable mentality for "in-house" developed software but if you buy something proprietary from a third party where you have zero say over what lib/langs are used, it's A-OK?
Proprietary software generally has a support agreement and SLA for fixing things instead of getting the response "it works in dev!"
> That's not the case when using containers properly. Every service gets its own environment, so whatever version of lib-xyz is needed, even if incompatible with other parts of the project, is walled off for only the service that needs it.
That's why containers are great, but you have to remember most of the world isn't as fast as this community to adopt things, a lot of things are still being managed the hard way on shared servers with literally thousands of dependencies. Migrating to containers in these instances can't happen fast enough.
> There are solutions to the issues you raise (often trivial ones at that), they just require an adjustment to workflow and an increase in communication between developers and their sysadmins.
Implementing even trivial changes to processes that impact hundreds of people across multiple continents is often not trivial. Devs in India, devs in the US, hosting teams, release management, etc. A lot of those people are doing just enough to get by and not up-to-date tech wise, so not only are you implementing new tools and processes, but you're building out training programs around using them, etc.
These processes are old and will be modernized in time but that's the reality for a lot of "sysadmins."
This illustrates why developers should have some experience with administering systems: do not deploy unrelated services on the same machine.
And you know what happens as a byproduct of this rule of hygiene? Suddenly the version conflicts disappear, at least for things that aren't broken.
Or not. Your devs are on-call, aren't they? They are maintaining their own software, right?
Because a Python sysadmin has been through all the transitions of packaging systems, all the nasty corners of "backwards compatible" changes, and knows how underlying changes to the operating system will affect your code, what the storage behaves like under load, and why one tech is not "better" than another. If you really hired an admin (cough, "reliability engineer") for a Python codebase that doesn't know Python, well, that's a different question altogether.
> I have regression tests to catch if an upgrade breaks anything
You don't know what you don't know.
When you can reason about the multiple ways the above statement can fail, congratulations! You are now a seasoned sysadmin, the scorn of junior developers who just want to get things done (who incidentally read a great blog post the other day about a new packaging system that we should immediately transition to and by the way it's all backwards compatible).
That's the point of regression tests. The sysadmin also doesn't know. Unless he's the one writing the tests (and IME he's not) or he's painstakingly regression testing everything by hand (trust me, he's not doing that either), making him a gatekeeper for all library upgrades achieves very little except adding bureaucratic friction.
I understand this does not make sense when you are not more people than can fit around a table, but as you grow you will feel the need for more and more specialized roles to fit the changing requirements. The first specialized role is probably the sysadmin (devops, reliability engineer, whatever you call it), and he or she should preferably be the one on the team with the most knowledge of how things work "under the hood", because that person is the one that can save you when things go haywire. Unless you trust this person to be more knowledgeable than you are in those areas, as they rightfully should be, you're going to have a problem.
No, ideally not - that's the idea behind https://en.wikipedia.org/wiki/Continuous_delivery
Where gatekeepers are required (because regression testing is not yet fully trusted enough for continuous delivery), QA should be the gatekeeper, not sysadmins.
>For your small little web project
My comments are based upon working on projects with a turnover of > ~1-1.5 million USD / day.
>But as soon as you are under audit rules you need it, and we call this specialized role the admin. When you grow bigger this will likely branch out to a dedicated change manager
Every time I've worked with somebody whose role was "change manager" this role was introduced:
* As a response to repeated downtime in the past caused by some kind of idiocy.
* They were required to "sign off" on releases purely as an added bureaucratic step to cover some manager's ass.
* They never once prevented or caught a production issue.
* They always slowed down releases.
>The first specialized role is probably the sysadmin (devops, reliability engineer, whatever you call it) and he or she should preferably be the one on the team with the most knowledge of how things work "under the hood" because that person is the one that can save you when things go haywire. Unless you trust this person to be more knowledgeable than you are in those areas
Ironically the whole idea behind devops (which I fully agree with) is that it should not be a specialized role - developers and ops teams should be blended.
This is precisely because if the two teams are separate and one throws code over the wall to the other then things will go wrong. Then a manager will insist on a gatekeeper.
In the python-specific case -- the requirements.in / .txt files for the virtualenv should be part of the software VCS, but the sysadmin should be able to edit & pin things just like the devs, so that they can bring their expertise to the container, rather than having to fight it.
Mind you, my opinion might not scale - I'm part of a small enough team that I'm holding both those roles, but I try to make sure to spend time wearing both "hats", so that one role doesn't get more man-hours clocked.
If they want to enforce a policy of pinning versions, that's very welcome (though I would do that anyway).
If they have specific, relevant comments about upgrades of specific packages - again, fine (though in practice they never do).
If they want to be a gatekeeper for changes to that file they can fuck off.
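For what it's worth, that kind of pinning policy doesn't even need a human gatekeeper - it can be checked mechanically in CI. A minimal sketch (the regex and file conventions are simplifying assumptions, not a full PEP 508 parser):

```python
import re

# Require every non-comment line of a requirements file to pin an exact
# version with "==", e.g. "requests==2.25.1" or "requests[security]==2.25.1".
_PINNED = re.compile(r"^[A-Za-z0-9._-]+(\[[A-Za-z0-9,._ -]*\])?==\S+$")

def unpinned(lines):
    """Return the requirement lines that are not pinned to an exact version."""
    bad = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blanks and comments are fine
        if not _PINNED.match(line):
            bad.append(line)
    return bad

# In CI: fail the build if unpinned(open("requirements.txt")) is non-empty.
```

A check like this enforces the policy without putting any person in the approval path.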
Minimize your dependencies. It's incidentally also what leads to clean code bases.
Oh hell no. I have wasted far too much of my life maintaining buggy, technical debt ridden reinvented wheels where there was a well maintained package that could just have been used instead.
You also crush velocity. Smart use of libraries lets you ship code 10x faster. Two identical businesses.. one writes all their own code, one is smart about using libraries. Which one makes it to IPO first, and which goes bankrupt?
The company "smartly" using libraries might get stuck maintaining a monster of dependencies that was only ever meant to be an MVP. It will require 10x more engineers, and while they might move fast at the beginning, they will only slow down over time.
The company minimizing their dependencies and paying attention to their stack will be able to add complexity over time without breaking a sweat. Their costs will be a tenth as much, and they will be able to run profitably.
I am not anti-library or anything, but e.g. adding SciPy to your Python project just because you need a Gaussian function in one place in your code is just lazy.
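To make that concrete: the one function you actually need is a few lines of stdlib math. A sketch (scipy.stats.norm.pdf would be the heavyweight equivalent):

```python
import math

def gaussian(x, mu=0.0, sigma=1.0):
    """Probability density of the normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
```

Five lines, no dependency, nothing to upgrade, no transitive surprises.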
Managing dependencies wisely is one of the hardest things in software development.
It's right after cache invalidation and naming things ;)
Then the app is broken or has memory crashes, or the final binary is 10x the size it needs to be. 9 out of 10 times it's a third party library. This is why I ban the use of them unless absolutely needed.
I've had this problem over and over again (biggest one last being with the US Census). Folks insisted on upgrading a python library and auth to the hosts stopped working.
What if the sysadmin can code in the same language as you, faster and with less bugs?
> 15 years ago, when a project would kick-off, as a sysadmin
> I'd be invited in and the developers and I would hash out
> what versions of each language and library involved the
> project would use
(As for software that releases often, maybe it's an over-correction, but there's a reason things don't work as they did in the glory days, and that's because they were never really that glorious.)
This doesn't necessarily rule out the expertise of systems administration, because the platforms for all of this need to be built & maintained, and there's still a lot of work to be done on network border security, etc. It's a movement that refocuses systems administrators on systems administration, instead of having to be this big org arbiter of microdecisions, with all the baggage that goes along with trying to be the gatekeeper of all.
Because that's how software developers wrote every dominant packaging system :P
There are tradeoffs to self-contained units. Disk space isn't so much of a practical concern these days, but security is very real: with a dozen apps, you could be at the mercy of a dozen different entities to update their embedded OpenSSL libraries.
Devs come from a mindset to actively create change. This is to add new features and deliver new value and product to the business. As a Dev I do have to say that many Devs don't have enough experience in operations to understand properly how to help sysadmins, many don't understand the complexities of that job.
These two perspectives are at odds, and they should be. The new tools, like docker, start giving everyone what they want... Devs pick their dependencies, and in theory, can't stomp on the sysadmins pristine environment.
To respond directly to your question: because there are new things available in new libraries that allow us to develop new features!
If it were only that, we would have an easy time. The new things you need to develop new features are far and far between.
99% of web software written these days could fulfil identical use cases on an IBM 3270 from 40 years ago. You enter something into a form and it gets stored in a database. You enter something into a field and it generates a report. That's all Amazon, Facebook, Google, any e-commerce site are.
Sure it might be nice to use a new version of that new JS framework that all the twitterati are going crazy about, but does it deliver value to the business that justifies the risk and investment?
If developers want to use newer stuff usually they have a good reason. The ability to hack around the deficiencies of old dependencies does not mean that one couldn't get a better, cheaper solution with newer technology.
That's not the situation I've described - punch cards disqualify.
The situation I mean is where developers insist on writing software on version X, which doesn't compile on X-1 and is buggy on version X (and might not compile again on X+1). For a concrete example, new C++ features that aren't correctly implemented and lead to harder to read code and worse error messages when applied to day-to-day problems (which these features were never meant for).
To have progress we need to change things. When we change things, we may break things, regardless of tests.
To quote Dijkstra: "testing can be a very effective way of showing the presence of bugs, but it is hopelessly inadequate to show their absence" - from "The Humble Programmer".
Production is the only way to eventually discover the stability of any software, even with 100% test coverage. It's a necessary evil in the support of progress.
Software needs to be tested. But your view that the whole world needs to jump on it at once is very black-and-white.
If you run into a bug or problem with a 3rd party component (open source library, commercial tool, whatever), one of the first things they are going to ask you to do is upgrade. The fact you're on an old version of some library is an easy (and sometimes correct) scapegoat for problems.
Put yourself in the 3rd party's shoes: if you spend a bunch of time trying to fix a problem that turns out to be a bug in a separate library that's already been fixed, that's entirely wasted time.
The same goes for direct usage: you're likely to spend time fixing problems that have already been fixed.
Put another way, a sysadmin could feel confident that moving from 1.52->1.53 would be a painless and transparent operation and that the provider of said library would continue to release 1.x branches with few ABI changes for some length of time. The expectation was that at some point the library provider would release a 2.0, which would require a more careful testing/deployment schedule, likely with other upgrades to the system.
Today, that is all out the window, very few open source projects (and its infecting the commercial software too) provide "stable" branches. The agile, throw out the latest untested version mentality is less work than the careful plan/code/test/release, followed by fix/test/release, cycles.
This is a major rant of mine, as upgrading the vast majority of open source libraries usually just replaces one set of problems with another. Having been on the hook for providing a rock solid stable environment for critical infrastructure (think emergency services, banks, power plants, etc) I came to the conclusion that for many libraries/tools you had better be prepared to fix and backport bug fixes yourself unless you were solely relying on only libraries shipped in something like RHEL/SLES (and even then if you wanted it fixed fast, you had better be prepared to duplicate/debug the problem yourself).
This is what Semantic Versioning aims to achieve, but as you highlighted, it still requires the maintainer(s) of the project to actually deliver stable software, regardless of what the version says. I think some people took "move fast and break things" a bit too literally.
A project following SemVer that has good automated test coverage is definitely on the right track though, and in general should be a pretty safe upgrade (of course it's important to know their track record).
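The SemVer contract being described boils down to a simple comparison. A rough sketch of the rule of thumb - not a full implementation of the spec (it ignores pre-release tags and build metadata):

```python
def is_compatible_upgrade(old, new):
    """Rough SemVer rule of thumb: within the same major version (1.52 -> 1.53)
    an upgrade is supposed to be safe; crossing a major boundary (1.x -> 2.0)
    may break the API; and pre-1.0 versions promise nothing at all."""
    old_parts = [int(p) for p in old.split(".")]
    new_parts = [int(p) for p in new.split(".")]
    if old_parts[0] == 0 or new_parts[0] == 0:
        return False  # 0.x releases: anything can change
    # Same major version and not a downgrade: expected to be painless.
    return old_parts[0] == new_parts[0] and new_parts >= old_parts
```

Of course, as noted above, the check is only as trustworthy as the maintainer's discipline in honoring it.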
"Move fast and break things ... in a separate branch with continuous integration running an extensive test suite" isn't quite as catchy but is what should be happening.
That depends on whether it's a feature or a fix release. Feature releases might or might not include bug fixes, but they typically include new bugs. I welcome localized fixes; however, they are not as common because of constrained resources. (Fix releases are the idea behind Debian stable. Of course it only works to an extent.)
A different perspective, I prefer to have the bugs that I already know, and know not to trigger.
The reason why virtualenv exists is because different apps may have conflicting requirements, and you have apps that need to be deployed in different environments with different versions of different libraries. I know that even if I were developing against versions of libraries in system packages, I'd still end up having to use virtualenv in development (EDIT: I wrote 'production' here by accident) because my stuff gets deployed on different versions of Debian and RHEL, necessitating virtual environments if only so that I can make my development environment as close to production as possible.
> In the year 2015, pip went from version 1.5.6 to 8.1.1 (!) through 24 version bumps, introducing thirteen (documented) backwards incompatibilities.
Much of that has been down to efforts in recent years to finally fix the major issues with Python packaging. It has settled down quite a bit. Also, the 1.* to 8.* change is because the initial '1' was dropped: 8.* is essentially 1.8.* in the old versioning scheme.
I'm not saying that this couldn't have been handled better, but it's not just a 'hummingbirds off of their ritalin' situation: Python spent many years with packaging stagnated, and what you're seeing is rapid development to fix the mess that years of PJE-related neglect caused.
As a Ruby developer, I can only laugh at this particular example. No Ruby project I've ever worked on ever upgraded their gems midway through a project, much less the version of Ruby. Developing procedures for this kind of ongoing maintenance is just way too much to ask.
This stuff tends to get done years after the original devs have all moved on. Maybe they tried that kind of thing back in the early days, before I started working with Ruby, definitely not today.
Yep, that sounds like that 'long time ago' I was talking about. Nowadays you can do that, no sysadmin to tell you not to, but nobody bothers.
Ironically enough, I think the current DevOps culture emerged partially because sysadmins got tired of saying no (if only so they could sleep through the night), so now they let developers tie their own nooses so they can be woken up at night.
It's wonderful to give all of those software pagers back to the developers. And the developers do seem motivated to fix the bugs which wake them up at 3am, so it turns into a win all around. It's still hard to watch a new team come up to speed though, knowing how little sleep they will be getting over the next month because they made their new Docker program stateful...
One had to read MSDN every day to keep up with what might break on sites you had no control over.
> in the year 2015, pip went from version 1.5.6 to 8.1.1
The only releases in 2015 were 6.x and 7.x.
There were 8 documented backwards incompatibilities, 4 deprecated the previous year, and 3 documenting a couple bugs that were fixed several days after the 7.0.0 release.
These are the sorts of thing an aware Python developer will know.
We may be counting regressions differently; I'm including both adding and removing the spinner as a regression, for instance (since both the addition and removal added unexpected behavior).
Note that the undeniable regressions that occurred in releases during those 15 months included:
1. Exceptions raised in any command on Windows
2. Switching from not installing standard libraries to installing them back to not installing them
3. Blocking if the particular server pypi.python.org was down
4. An infinite loop on filesystems that do not allow hard links
Note that in that time they also added yet another internal package management system (incompatible with the existing two), changed the versioning semantics twice, and dropped support for versions of python that were 3 years old at that point.
And, again, there's nothing particularly wrong with or bad about pip; this is just what a younger generation of developers are used to.
Releasing an RC often results in nobody using it and hence not finding the bug even in several weeks, but it gets caught almost instantly in a release… At least, that's my experience in shipping various RCs that have led to next-day regression-fixes once it does ship.
While yes, better testing would solve such issues, but at some point the line has to be drawn as "good enough", because there's ultimately a limit to what is reasonable.
For development as a whole it is really great though in my opinion.
Or you simply use something like this: https://bazel.build/
The "backwards compatibility" philosophy isn't so explicit for the ecosystem, mostly the language? Is the test-on-install-by-default making a big difference there?
I also think the widespread use of VPSs rather than accounts on shared servers (again, containerization) was a factor. In the 90s and early 2000s, you usually (even in a corporate setting) had an unprivileged account on a server with a given version of apache and perl, your own cgi-bin directory, and possibly some latitude on a personal CPAN install directory. The lack of containerization meant you had to compromise between using newer software and breaking existing use cases.
So I guess I think it's not so much about Python vs. Perl per se but about the technologies available when those languages became popular among developers.
That said, I haven't had any more problems with PyPi packages than I did in the past with CPAN. Yes, pip always wants to upgrade itself, but that sort of every-damn-day software upgrade cycle seems to have become quite prevalent, not just in the Python world.
I think the real problem with Python/Ruby/etc is the surprising lack of an analogue to CPAN Testers.
It isn't just that all of CPAN is tested on different OS/Perl version combinations; it also stress tests the Perl versions themselves.
I have been involved as a consultant in large software projects in the last two years and a vast majority of money lost in delays and bugs was caused by devs not understanding:
1) the difference between virtual memory and physical memory
2) the difference between costs of data storage per storage medium
3) the concepts of network round-trips
4) and hardware bandwidths
5) how to install and configure a web server on a workstation
6) how DNS works
7) how AD authentication works
8) what ORM frameworks do
9) how to write a raw database query (not necessarily sql)
10) the difference between navigating through database records on a database server vs. an application server vs. a client
11) HOW TO INSTALL THEIR OWN WORKSTATION AND TROUBLESHOOT IT!!!
N) etc. and those are just the topics that I can immediately remember.
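To make points 3 and 10 concrete, here's a toy sketch with sqlite3 - the table and numbers are made up, but the shape of the mistake is the one that keeps costing money:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(10_000)])

# Filtering on the database server: only the answer crosses the wire.
on_server = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount > 10000").fetchone()[0]

# Filtering in the application: all 10,000 rows are transferred, then discarded.
in_app = sum(1 for _, amount in conn.execute("SELECT id, amount FROM orders")
             if amount > 10000)

assert on_server == in_app  # same answer, vastly different data movement
```

With an in-memory database the difference is invisible; with a real database across a network, the second version is the round-trip and bandwidth problem in a nutshell.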
As I see it, it's not about "they should". For me it's about understanding how many devs deal with such a level of ignorance on the systems they interact with, on a daily basis. This situation hurt my feelings every time it happened and I struggled to accept it. I am not a sysadmin nor a developer but my daily work is insanely improved by my (even basic) understanding of how my workstation works and how to manage it.
Would you trust an RF engineer who couldn't troubleshoot his own radio designs? Why would you trust a software engineer who can't troubleshoot his own software as deployed in a real-world environment?
Looking at it short term a well paid developer troubleshooting all issues on their work laptop could be seen as a waste of resources but for me software development is a beautiful craft. I also wouldn't trust a carpenter who can't obsess over wood or tools. If I overhear two developers comparing notes on their tmux setup somewhere I mentally upgrade them into the interesting category right away.
Last time I tried, half of the download links were absolutely non-functional. Their documentation didn't help much, either, since they pointed to the non-functioning links. I got a lot of shit for that one, but felt completely vindicated when it happened to someone else a year later.
It takes a while but I've never found it to be a pain in the butt and I've used it, off and on, since version 6 in the late 90s.
If the install errors out it gives you an error message, you google for it, figure out what the problem is (most common messages are easy to find solutions for), fix the problem, reinstall, done.
I remember on the first day of a new job being given a folder containing all the MSDN subscription disks in the mid-2000s and being told to install visual studio.
I'd only ever used notepad as an editor before.
This is a massive stack of DVDs with multiple disks, but worse there are a bunch of cds listing different versions of a thing called "visual studio".
After 15 minutes of struggling and surreptitious googling because I didn't want to look stupid on my new job, a colleague walked by, went "oh", picked out the right disk and said "that's the one you need". And I had to do the same for multiple new starters.
Even today when you have to install something from MSDN, you search for "Office" and get a bunch of irrelevant language packs listed at the top, which is definitely not what you want; then you also have to know what x86 and x64 mean, something a novice will not, and know what "SP" means and that "SP2" is better than "SP1".
I worked in 3 Microsoft dev shops and 1 that had a mixture. As far as I know I hadn't heard of anyone having issues getting it installed except for the rare, occasional error that could be Googled and fixed. I'm not sure I'd call that luck, sounds like you just had a bad experience.
But yeah back in the day it was a stack of discs (I think the last disc version I used had 2 discs for visual studio and 4 for the msdn) but they were always clearly labeled. One for Visual Studio, one for additional add ons and stuff and the rest for MSDN documentation.
The first versions (200X) had some challenges at times, starting with locating the installer for the right edition and the license. Then the minimal setup for a working environment was split across several different installers/projects to be executed in order (one VS pack per language + the Windows SDK + the debugger kit + the ATL/MFC package + the driver kit [if you dev drivers] + the DirectX SDK [if you need it]). Then configure some PATH and library settings to link all of that together.
Last I checked, in the 201X editions, a lot has been regrouped into a single setup. That's enough for most development. And the optional packages have auto detection (and it ain't fucked if you run it twice).
So the MSDN subscription is different. The MSDN subscription is the full Microsoft catalog of software. Every version of Visual Studio, every Windows, Office, MSDN documentation; it's literally everything.
The parents above were talking about just installing Visual Studio. When you purchase Visual Studio it was usually 2-6 discs in my experience (most containing the MSDN documentation). But the MSDN subscription is a very different beast. Granted there should have still been a Visual Studio disc for a specific architecture that you were using and your group should have known if they're using Professional, Team, etc as you'd likely need the same.
That was a fun misunderstanding though :)
After all--we're all developers. Automate!
Of your 11 points, I understand 1-10 quite well, but I'm not great at 11.
I think the skill sets are quite different, despite the fact that a lot of people have both. I was never really that "into computers", but I have a burning passion for building large software systems fast and well.
Metaphor: I love to travel to exotic locations across the planet. That doesn't mean I'm also interested in building airplane engines.
To be clear, I'm not bragging. It would be great if I was good at this stuff.
I've not had them myself, but I have talked to other sysadmins who've had devs that couldn't install their own IDE. These people weren't seniors, admittedly, but they were still drawing pay...
For my defense (I'm a dev), OSes don't make it clear. Mac OS becomes extremely slow when I load a big virtual machine and yet displays "Swap 450Kb, 500Mb RAM free". Or with a sole text editor open after a long session it may say "Swap 750Mb". In both cases my logic tells me the swap and free memory should display the opposite, so I can't match my knowledge with the OS behaviour. Then comes Java which adds another layer of memory limitations.
> how many devs deal with such a level of ignorance
I can talk because I was 4 years ignorant, then met the right teams. It's impossible to learn and gain trust in your learnings if you start ignorant, and ignorant devs know it. We constantly need help and don't understand how ticking a weird checkbox in Eclipse makes the compilation different: Without directly executing the original command line, you can't learn anything, and architects in those kinds of companies give you too many proxy tools ("SDK") that you can't improve. You're on an old 14" screen anyway. Also, Windows is so inconsistent and weird that you just assume sysadmin is for people from another planet. My skills only took off 4 years later when I installed Linux, then Mac, and was thrown in open-source libs. It was so easy, in retrospect, and I'm so happy having been in the right context.
Udacity has a pretty good OS summary course. I had personally forgotten what a TLB was until I watched it.
Yes, however there are items (ACLs, AD, security, OID, etc.) that are a little more important than the new shiny JS framework.
I'd rather burn brain cells learning Haskell than trying to understand why so much effort is being spent on JS.
1) Building the wrong piece of software/end-users not having enough influence on what gets built.
2) Lack of delegation, having people make technological/feature decisions about a product they only spend 5% of their time thinking about.
3) Organizational incentives not aligned correctly.
4) Not following software engineering principles that were discovered in the '70s (they also don't follow any software engineering principles discovered since then, but I'll give them a pass).
"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."
— Robert Heinlein, Time Enough for Love
I've worked with many of them: very good with Adobe products and the latest Apple gear, but clueless about why their final code is of poor quality.
Fortunately, many of them had their egos taken down a notch when the web started to move from markup (HTML+CSS) to code (JS). Until then, they took any feedback from a sysadmin as an attack on their work.
About developers and architects... they should simply be the final support (24/7 calls included) for their work and changes.
As a side note... a sysadmin IS A developer (a systems developer). An application developer usually should also be a systems developer if he/she is working on a project that includes a platform to run on (unless he/she is just uploading an app to an app store). An operator with limited privileges is just that. The concept of a "pure developer" or "pure sysadmin" is so 199x...
This is why Ilya Grigorik's book, High Performance Browser Networking, exists. It's a great reference to that end as well.
But then you need to understand the differences between HTTP 1.1 and 2.0 and how it handles multiple requests from the same page. 1.1, you would probably be better off with a single concatenated file, 2.0, perhaps several. And then what about compression? Any good UI developer should be considering compression of that file, cache control, etc.
I see too many front end developers shy away from this stuff and create bloated monstrosities. They feel it's someone else's job... but if not the UI developer, then who??
This stuff does get pretty complicated, mainly because the tech is evolving fast (browsers do a lot of tricks now). Even in HTTP 1.1, there are times when multiple files might be preferable due to how large files are handled.
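On the compression point: one reason a single concatenated file tends to win under HTTP 1.1 is that gzip can exploit redundancy across the whole bundle and pays its header overhead only once. A toy sketch with the stdlib (the "files" here are made-up stand-ins for real JS/CSS bundles):

```python
import gzip

# Two "files" with the kind of repetition typical of JS/CSS bundles.
a = b"function widgetInit(config) { return config.enabled; }\n" * 50
b = b"function widgetTeardown(config) { return config.enabled; }\n" * 50

separate = len(gzip.compress(a)) + len(gzip.compress(b))
together = len(gzip.compress(a + b))

# Compressing the concatenation lets gzip reuse redundancy across both
# files (and pays the ~20-byte header once), so it comes out smaller.
assert together < separate
```

Under HTTP/2, where requests are cheap and caching is per-file, the trade-off can flip back toward several files - which is exactly why a UI developer needs to think about this.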
Well, indeed, the higher level protocols, like HTTP, DNS, etc, are more useful on a daily basis.
How you get there is up to you. My own path was freakishly meandering. Don't be afraid to get out of your depth, but try to have a mentor around to stop you drowning.
And remember that DevOps is a mindset, not a job. If someone tries to setup a "devops team", run away.
Same here, though I was aided by being able to be really confident about stuff when people came calling with the checkbook and a quick study once I'd landed a gig. I learned to do what we would now call "sysadmin stuff" (manual administration of hardware) as a kid because I broke my Linux machines a lot. Then I went into the web development gristmill for a while. Ended up leading a multi-platform mobile team with zero mobile experience because "you're a good developer, you'll pick it up" (I did); I literally went into a devops role knowing no Ruby (to say nothing of Chef) under pretty much the same rationale.
"Fake it till you make it" is real, but then you gotta make it. ;)
Therein lies the best career advice I could possibly dispense: just DO things. Chase after the things that interest you and make you happy. Stop acting like you have a set path, because you don't. No one does. You shouldn't be trying to check off the boxes of life; they aren't real and they were created by other people, not you. There is no explicit path I'm following, and I'm not walking in anyone else's footsteps. I'm making it up as I go. - Charlie Hoehn
I've worked with developers who fully understand the systems they are deploying on, and developers who develop for an abstraction of services rather than a real-world environment. I tend to prefer the former to the latter because the deployment process is much less taxing, even though the latter tend to have better luck moving their software to another platform in the future.
But I'll echo you, every time I've learned something hard, it's because I got way out of my depth and had to learn how to swim all over again.
I think this is kind of a given, today. I don't know many healthy, growing organizations for whom their "sysadmins" are not either originally developers or proficient in writing at least domain-specific code (I have always said that I am a software developer whose output is systems rather than web apps, because it's true; right now, with my current projects, I just wear both hats and get on with it!). Even the term "sysadmin" has largely disappeared in my neck of the woods; it has been largely replaced with "SRE" or similar, but that sort of position invariably seems to have development connotations.
I suspect I'd have a hard time finding employment doing the kind of work I'd want to do at the kind of salary I'd expect to earn. I'm a decent programmer with broad but rarely deep experience, a better than average sysadmin with ridiculously broad experience, a passable designer (better than half of the "real", but merely average, designers I've worked with over the years, but so far behind the good ones that I'm hesitant to use the same term to describe what I do when I build websites), a passable sales person, a pretty good writer and copy editor, and the list goes on and on, because I've run my own companies for the past 17 years. I've touched everything that a business has to do, and I've somehow muddled through and kept the bills paid and the customers coming back.
But, I'm not a "rock star" at any particular task. I couldn't wow anyone with my algorithms knowledge, though I've always figured out how to solve the problems I needed to solve. That's not a very compelling sales pitch when talking about a $100k+/year job for a company that has a specialist in all of the above-mentioned roles.
So, I think it really depends on what you want out of life. If you want to maximize security and income, focus on a high value skill. Become the best in your market, or as close to it as you can manage. Eschew all distractions from that skill; don't fuck around with weird Linux distros, figuring out how DNSSEC works, building your own mail server, setting up CI, self-hosting all of your own services and web apps, or otherwise becoming a "jack of all trades, master of none". If, on the other hand, all of those distractions sound like the best reason to be in tech (and, that's the way it's always added up for me, even when it's cost me time and money), and you're willing to take on a lot more risk building your own business (whether consulting or building products), I guess being a jack of all trades isn't so bad.
But, and this is a big but: There's only so many hours in the day, and so many productive days in your life (and you also have to take time away from productivity to have a life outside of work/tech). As I get older I realize more and more that I have probably valued my time less than I should and valued my ability to effectively DIY my way to success too highly. I've spent many hours fucking around with stuff that I could have paid someone a few (or a few hundred, or a few thousand) bucks to make the problem go away, and it would have been worth it in a lot of those cases.
If you aggregate all of the "Every developer should know X" posts and blogs, the list would probably be very long. It only promotes shallow signaling instead of actual competence (I only need to know enough about X to make people think I know about X).
Meanwhile, your salary will still only compensate you for one skill set: software development.
Devs will slowly learn the relevant knowledge anyway, just at a slower pace than the immediate needs demand. And yes, after a certain number of years, that dev could be good at both. But you can't hire devs who are good at both for every position at every company.
I see no harm in having basic experience in multiple fields, especially when those fields are related. Actually, I kind of "market" this concept among my circles: a mobile developer should know the basics of web development and vice versa. That way they communicate better.
I claim that already happens naturally. It's our drive to quickly build a niche (e.g. "I'm a professional X developer"), along with insecurities that lead to saying "X is not my responsibility", that gets in the way of expanding our fields of expertise.
That would be mostly Python these days. If someone here touched Perl on my systems, they'd have a very bad day. Sure, I wrote stuff in Perl back in the day, but those days are over.
I see this as Perl's problem: to be good enough at Perl, you'd have to frequently use it, but Perl is in my eyes only suitable for quick hacky run-once scripts - which should not be written frequently, so you shouldn't be good at Perl. My Perl is pretty damn rusty these days.
If someone is still writing large scripts/apps in Perl these days, I question their judgement in technologies and ability to keep up with the times. Sure you can write larger scripts in Perl - but what's the point? You have to take care not to make things unreadable, while when using something like Python, it's much harder to make a script that's unreadable.
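To illustrate the readability argument, here's the kind of quick, run-once log-crunching task that might otherwise be a dense Perl one-liner, written as plain Python (the log lines and format are made up for the example):

```python
import re
from collections import Counter

# Toy access-log lines; in a real script these would come from a file.
log_lines = [
    "2024-01-01 GET /index.html 200",
    "2024-01-01 GET /missing 404",
    "2024-01-01 POST /api 500",
    "2024-01-01 GET /index.html 200",
]

# Count HTTP status codes: the regex grabs the trailing 3-digit field.
status_counts = Counter()
for line in log_lines:
    match = re.search(r"\s(\d{3})$", line)
    if match:
        status_counts[match.group(1)] += 1

print(dict(status_counts))  # {'200': 2, '404': 1, '500': 1}
```

Nothing here is shorter than the Perl equivalent, but six months later anyone can read it, which is the point being made above.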
Yes. Where I work, they're all expected to be able to cut code. They might pair with devs at various stages, but a sysadmin who can't write code isn't going to be able to work with some of the more modern orchestration frameworks, because those are in essence DSLs: they _are_ code.
> Plan an invasion
This is actually a massive undertaking. An undergraduate at MIT taking a semester-long course on this will barely scratch the surface of it. Furthermore, you're never going to suddenly and unexpectedly need to know this. Any situation where you plan an invasion is going to be preceded by spending a long time getting into the position where people trust you with their lives and the fates of their nation.
> die gallantly
You're only ever going to be in this situation once, and probably not even that. Why does it matter how gallant your heart attack is?
These skills make a bit more sense in a world where most of us need to march off to war. Happily, we don't live in that world.
You should prepare for the situations you are only mildly unlikely to be in and where your skill matters.
Depends how old you are... some historians are keen to point out that the global political climate is very similar to the conditions just before WW1. Will you be of conscription age if WW3 does break out in the next decade?
> You're only ever going to be in this situation once, and probably not even that. Why does it matter how gallant your heart attack is?
I think it doesn't exactly refer to the act of dying; it's more along the lines of "The object of war is not to die for your country but to make the other bastard die for his." So you don't exactly want to die; rather, you want to avoid it, but should it happen, you want to make it very expensive.
But of course, it could also have nothing to do with war; it could be about how you face death, and how much of a burden you leave to your loved ones. (E.g. don't commit suicide leaving a note that tries to shift the blame to your family, or whatever).
Maybe. Not only do you better understand the needs of software developers; you're also able to automate a large chunk of your work. Paraphrasing Larry Wall: sysadmins should be lazy. "Laziness: The quality that makes you go to great effort to reduce overall energy expenditure. ..."
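In that spirit, here's a tiny, hedged example of what "lazy" means in practice: turning a recurring chore (pruning old log files) into a script. The directory, filenames, and retention window are invented for the demo; a real version would run from cron against real paths.

```python
import os
import tempfile
import time

def prune_old_logs(directory, max_age_days=7):
    """Delete *.log files in `directory` older than `max_age_days`."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if name.endswith(".log") and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed

# Demo against a throwaway directory with one old and one fresh file.
with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "ancient.log")
    new = os.path.join(d, "fresh.log")
    for path in (old, new):
        open(path, "w").close()
    # Backdate the old file's mtime by 30 days so it falls past the cutoff.
    os.utime(old, (time.time() - 30 * 86400,) * 2)
    print(prune_old_logs(d))  # ['ancient.log']
```

Ten minutes writing this beats doing the cleanup by hand every week, which is exactly the laziness Wall was describing.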