Their method for avoiding the proliferation of different tools is interesting.
His discussion of the kind of people they want, once you ignore the semantics, is interesting. They want people who take responsibility and are always pushing the boundary of their knowledge into new fields.
My favorite part of the article was right at the end: "instead of asking questions about “why did something fail,” we want to ask why something succeeded, which is really easy to skip over."
What does irk me is that in real life, software developers who meet the Etsy CTO's standard often can't call themselves engineers. In many countries, misrepresenting oneself as an engineer without accreditation is against the law! 
In the U.S., there's no official occupational category for Software Engineer. It's just Software Developer (Applications), Software Developer (Systems), and Computer Programmer.
It is possible to get accreditation (http://ncees.org/exams/pe-exam/, under Software), but it didn't appear very useful to me aside from being able to call yourself an Engineer.
There actually are software engineers. At the U of A, where I went, Software Engineering is separate from Computing Science. Software Engineering was run by the Engineering department, and it had the same rigour as the rest of engineering. Computing Science, however, was run by the Science department.
In 99.99% of software jobs in the Canadian or US job markets you're simply never going to be able to accumulate enough eligible hours in your 6 year EIT eligibility period, because of the way APEGA defines things. Software jobs where you're supervised by a P.Eng., as required by APEGA, are likewise virtually unheard of.
My opinion (as a jr mechanical engineer) is that the whole law is a bit outdated and seems to be geared 100% to civil engineers who actually need to stamp drawings.
APEGA has essentially the same rule in Alberta. That's what makes it impossible to log enough eligible hours to qualify for a P.Eng. It may well be the same in all provinces.
Consequently, although APEGA insists that only Professional Engineers can use the title "Software Engineer", they simultaneously make it impossible for anyone to ever use that title in practice. There's been at least one court case about it that I can recall.
It's reasonably common for engineering grads of any speciality to simply end up working somewhere they don't need a P.Eng. While it might be nice if that were less common for Software Engineers, a P.Eng. must stand behind their work. Qualified candidates should show they worked in a manner that made an existing engineer comfortable ok'ing their work. That may not be common, but it's fundamental to the process.
Must be. Of the dozens of people who graduated from the Software Engineering program that I know or have worked with, you're the first I've heard of who's actually managed to obtain their P.Eng. I've known a couple of people who talked to APEGA extensively and were told that nothing they were doing (writing code, architecting solutions, etc) counted. Probably comes down to 99.99% of places aren't doing safety critical work and/or don't want to pay for formal verification.
Glad to know the title is being used, anyways.
UofA means University of Arizona around here
semantics == meaning. once you ignore what he's trying to say, the discussion ends, no?
edit: yes, i could care less about semantics of semantics, but i can't bring myself to it.
When large projects squeeze into tight deadlines, how much of that boundary pushing will be abandoned in favor of shipping things?
For instance, just putting a platform like Etsy entirely on Google App Engine would allow Etsy to focus on the liberal arts side of the problem. Because GAE "solves" most of the engineering problems that have typically plagued software practices on the web...
But to me that was more of an advanced (one hopes) end-user. Someone who could take a bunch of large, mostly-complete logical components that somebody else engineered and then use them to stitch together a solution by integrating these existing frameworks that already provide the first 80+% of the technical solution to carry the last ~20% toward a domain-specific use-case.
What I was looking for wasn't somebody who knew how to use something like HDFS. I was looking for someone who could build something as good or better than HDFS from nothing if they had to. A lot of what passes for "engineering" today, at least by marketplace label, tends to resemble the former rather than the latter.
There's definitely all kinds of space for both kinds of builders/creators depending on the needs and the project, but it certainly doesn't help that the English language and its colloquial application to the problem space has grossly blurred the distinction.
I don't find this distinction very useful. We're all end-users at different layers in the stack. Building HDFS from scratch is also mostly taking others' components and ideas and stitching them together. That's what progress and innovation looks like. I think you're looking for engineers at a lower level in the stack than the applicants you received.
Additionally, if you're building the next distributed filesystem, you'll be much more successful if you're also an end-user of existing distributed filesystems, so you know the strengths, weaknesses, user preferences, etc. of the existing products. If you're building something without knowing how it's going to be used...well...you're probably not going to build the right thing.
The attribution for me had a lot more to do with the balance of optimism and skepticism. In my head the "developer" sees HDFS and goes, "Sweet, somebody solved this problem, now let me go use that thing and it will give me all these wonderful solved-problem qualities I don't have to think about anymore. This is going to save me a ton of time." The "engineer" looks at HDFS and goes, "Hmmm. This thing seems interesting, but this feature over here must be an incredibly painful one to use despite the fact that it seems super useful and is plastered all over their docs as being awesome. Because there's no free lunch in this problem space. So what possible methods are there to have implemented this kind of thing and how exactly can I test and exploit just how weak these floorboards are before I decide to start building on it?"
Again, not a very useful distinction. Agreed.
* or substitute whatever other enterprise framework you want
...because that's all that most businesses require, 99% of the time. To, you know, get things done and make money and stuff.
Which may not fit your needs, but why be "depressed" about it? It's just the way things are in the commercial world.
If you want someone with more fine-grained skills, try articulating that in your job postings. What we see, all the time, are ads mentioning platform X, with no articulation whatsoever as to where, even on some approximate logarithmic scale, they'd like the skill level and comfort with platform X to be.
And in the few firms that actually do have real engineering trade-offs that favor the use of those types of frameworks, they tend to hire people who are well-suited for the role, and then create job functions surrounding them that are respectful of aptitudes and skills of the people they hire.
In most firms that adopt these frameworks (for status effects), they are just desperate to fill seats and increase engineering headcount. They don't respect your skill set or even care if it matches the business need. They just need to get you in the door, and then find a way to deal with inevitable dissatisfaction later.
Welcome to the real world.
For those positions, I am sure that smart full-stack developers could easily pick up the statistics for data cleaning and quickly gain a passable understanding of the models consumed from APIs in a black box way. In fact, full-stack devs may be happier in these jobs due to the visualization and database components.
A much smaller subset of data science is actually focused on solving novel business problems and may centrally focus on deeper knowledge of a given technique, like MCMC methods, deep learning, real-time classifier systems, etc. For these, you do tend to need more significant experience with the specific machine learning tools being used (or enough general skill in statistics to pick them up quickly). Smart people of all stripes could still learn that stuff, but it's a lot harder to see them being able to convince a firm to hire them in that capacity.
The second type of these jobs is really, really rare though.
This seems like a strange objection, considering that a) getting technology to fit the use case is all virtually everybody wants, and b) having to do 20% of the solution from scratch--rather than, say, the last 0.10%--would be an _enormous_ undertaking. Or don't you consider the silicon, microcode, network, servers, physical protocol, wire protocol, operating system, standards, tools, language, and compiler in that equation? If not then where do you draw the line?
Well, for 90 percent of job offers, what a company claims to need is not what it actually needs (i.e., the typical "10 years experience with 5-year-old technology" bullshit). If your company is not a unique snowflake or in an academic setting, believing that it makes zero economic sense to completely reinvent the wheel from first principles is a valid assumption for applicants to make.
That said, in my particular case it had a lot less to do with trying to necessarily rebuild HDFS from nothing, and more to do with a mindset of rigor and principles necessary to do so. Because being able to work all the way through that problem domain in both broad strokes and in meticulous detail would hopefully lend itself toward also considering ways to validate and attest the correctness of not only things like HDFS (rather than treating it like a solved problem ready built for use), but also applying that same level of rigor and principle to the stuff we actually do have to build from scratch.
Though to your point... a non-trivial amount of this concern and necessity is borne out of the market and regulatory regimes this stuff has to service and abide by. The fact that it's not necessary for huge swaths of the marketplace is evidenced by the fact that things like property-based testing, mutation testing, chaos testing, and formal verification are fringe skills (at best) out there... yet the tech world continues to turn out totally awesome cool new stuff with none of that overhead all the time, stuff that still transforms all manner of life.
I actually think that "developers" and "engineers" are mostly pretty transparent about what sort they are. Or at the very least it's trivially easy to assess within just a few minutes with the right questions and conversation space. The harder part is getting non-technical people to understand that there's a distinction and that technical people aren't all just a fungible commodity. The weirdest part is that they get that on some level, especially when suddenly they're hit by a bus-factor problem, but that realization hasn't seemed to make a big impact in business/hiring process, practices, etc.
We can keep reclassifying what it takes to earn that label until we've eliminated all but the geniuses of the software world. The titles (and seniority) are, frankly, useless because they aren't legally enforced because no one has a good and popular way to test for competency. If they did, that would be the technical interview and then market forces could once again weed out people who don't make it.
There really isn't any push back when you have an opinion on what a software engineer ought to be when you're hiring, so naturally, a lot of people have their own opinion. Figuring out which direction to go if you are one of those supposed engineers is pretty much a crap shoot but still better than just not learning anything new.
Did you try articulating that distinction in your job ads?
Or if not, can you really blame people for applying when your ads read like 99% of help wanted ads in the fin-de-boom era; like, you know:
We've got the coolest office in the Mission, with a climbing wall and jamming room, we do beer bongs every Thursday and play lawn bowling together on weekends! And of course you can bring your dog in everyday, too! Keywords: Node, Python, D3, Spark, Hadoop, HDFS
There was some consternation over it because the recruiting staff didn't know how to go find people based on this new buzzword-free criteria, so I helped them to identify where to look, and persuaded them not to be the one to contact them & let that be the responsibility of one of the people already on my team.
It eventually worked.
Also, be fully aware I'm not holding the discrepancy against the applicants. It's not their fault. They're getting signals from the marketplace that they should call themselves a "Distributed Systems Engineer". I'm holding this problem against the institutional forces that are giving these people the signals to describe themselves this way to begin with. Because it makes it much harder to find one when you actually need one.
Run into a complex bug in library X? Well, we don't want you spending a week to debug it, write a patch and try to upstream it. Work around it for now instead. Or we'll defer that feature since we don't want you to spend that much time on "non-company" projects. Etc.
I think this is usually all the more true when your employer expects you to be a generalist of some kind and I've become a bit jealous of people whose employers let them really dig into things and even deviate to ecosystem projects instead of focusing on keeping the internal hamster wheel spinning.
On another tangent, one thing I found very frustrating early in my career and which I still feel is problematic in our field is that there's kind of a double standard when it comes to approaching things "under the hood."
At my first job I'd always ask my supervisor when things led there, and then get told not to go there (despite wanting to), while a coworker of similar experience hired at around the same time would constantly get everything done behind schedule because he just did the under-the-hood things I made the mistake of asking about, yet he'd get applauded for it continually. (Perhaps I learned from this, but it was pretty frustrating and disheartening at the time.)
Nowadays, I'm a team lead and I am guilty of telling people "not to go there" (about half the time). It's funny because it actually conflicts with my opinion that I want people to dig in! The choice to "dig in" is a personal risk/reward. It's a risk that an engineer must take while practicing good time management. Asking your manager is akin to making them take that risk for you (the risk of wasted time, passing deadlines, etc).
My crappy advice is... ask for forgiveness, not for permission. If you're a good engineer, you'll come out on top!
Maybe more to the point, I don't think my behavior at the time was an attempt at pushing the risk up the chain as much as it was a manifestation of fear/anxiety that I'd somehow be outside of company expectations/norms or would end up spending a bunch of time working around/fixing something that had already been solved in a way that I, as a newer employee, just didn't know about.
But the real cause of my dissatisfaction is that I find it harder to glue together existing frameworks -- I don't know where to begin, until I've dug in a bit to find out how something works underneath. Meanwhile other people whose approaches I think of as simplistic are able to leap in and get things done more quickly than I can.
(Similarly I find it almost impossible to learn much of a foreign language without studying the grammar, while some other people can immerse themselves and become quite conversant without ever thinking about grammar.)
Maybe the "non-engineer" developers are just working at a higher, and more immediately useful, level of interpretation.
I'm the kind of person you'd find frustrating at first. I don't want to know all the internals of what I'm doing before I dive in. I want a nice API and good docs, and I want to build some stuff right away. And if I can't do that, I give up and move on to something else.
If I like it, and it gives me quick wins, then I'll dive into the guts of how it works. Because I'm a responsible developer.
But I have learned that with all the things out there to learn and absorb, I don't have time to deep-dive a framework before choosing to use it. There's something else coming that I have to do next week that uses something I don't know.
So the more I build stuff, the faster I get at picking it up.
"The Law of Leaky Abstractions is dragging us down." - Joel Spolsky (2002)
"Engineering, as a discipline and as an activity, is multi-disciplinary. It’s just messy. And that’s actually the best part of engineering. It’s not about everyone knowing everything. It’s about paying attention to the shared, mutual understanding."
But the author doesn't contrast that with what development is. I used the words interchangeably like film vs movie.
"Software engineering," as the term is usually used, is really a joke.
In my mind, engineering is about rigor: process, measurability and discipline. The hallmark of a well-engineered system, in my opinion, is reliability. Software is anything but reliable.
Too much of software development is throwing things at the wall and hoping something sticks, because there is not a good understanding (or there is willful neglect) of how the different parts of the stack may adversely affect your application. Add that to ever-expanding requirements scope, poorly designed/maintained/understood code artifacts, and developer churn, and the typical software project is rather frail.
There are considerations you can make for more reliable software: a testing regimen and release planning, conservative resource estimates and knowing your bottlenecks, strategies for degraded operating conditions, fallback and error mitigation, scaling, consistent documentation, and basically knowing the seams of your software, where things might break, under what conditions, and what corrective action could be done.
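The "strategies for degraded operating conditions, fallback and error mitigation" point can be made concrete with a small sketch. This is a hypothetical Python wrapper (all the names here are illustrative, not from any particular codebase) that retries a primary operation and then degrades to a cheaper fallback instead of failing outright:

```python
import time

def with_fallback(primary, fallback, retries=2, delay=0.1):
    """Call `primary`; after repeated failure, degrade to `fallback`."""
    def wrapped(*args, **kwargs):
        for _ in range(retries):
            try:
                return primary(*args, **kwargs)
            except Exception:
                time.sleep(delay)
        # Degraded mode: serve a cheaper/stale answer instead of erroring out.
        return fallback(*args, **kwargs)
    return wrapped

# Hypothetical usage: a live lookup that degrades to a cached (stale) value.
def live_lookup(key):
    raise ConnectionError("backend unavailable")

def cached_lookup(key):
    return {"price": "stale-42"}.get(key)

lookup = with_fallback(live_lookup, cached_lookup, delay=0.0)
print(lookup("price"))  # backend is down, so this returns the stale cache entry
```

The point is that the fallback path is designed up front, as part of knowing the seams of your software, rather than bolted on after an outage.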
Most software projects either move too quickly or are simply not important enough to hit these points. There are exceptions, of course (most well-known software we use would qualify), but those are not the rule: most YC companies certainly would not qualify as doing "software engineering." In fact, that almost seems to be the antithesis of a fast-moving startup.
Imagine your civil engineer did not take shear, vibration, bedrock, joint and material strength, etc into account when designing a structure, or allowed a good design to be constructed with shoddy labor, duct-taped together. That is exactly what we see from most software "engineering" today. Move fast and break things, indeed.
As you hint at, most companies likely don't have the rigors of an "actual" engineer. But places like NASA or a medical company where lives are at stake would likely have the same rigors and reliability of a civil engineer and their bridge.
I think it is unfair to compare software _____s to civil engineers. We just don't have anywhere near the level of laws and regulations on most of our projects as someone who is building things that lives depend on being reliable. And even then, bridges can and do fail.
If a civil engineer were building bridges for his kid's matchbox cars, I bet he would not put in the same level of reliability as in a bridge that human lives must cross daily for many years.
Since no lives are at stake for most software in the world, the guys up top will opt for the cheaper route over the route that involves a high level of process, measurability, and discipline.
Once they start losing sales/customers due to bugs they may change their tune of course :)
I would qualify that and say "software engineering," as we understand it today is messy. That's because it's not engineering as a recognized discipline (and seems to be the reason why we can get away with calling ourselves engineers and not be sued).
If your process involves thinking immediately about how to write the solution in code you're doing a very sloppy form of engineering. If you're the kind to write notes down about your design and possibly share some kind of diagram/written specification... you're doing a very weak form of engineering. I don't think it would pass at Lockheed Martin or JPL. You need to be using formal methods and mathematically modelling, checking, and possibly even proving enabled actions on your state and the invariants you hope to maintain. You need to start thinking at a higher level and have liability and all of those other things that drive you to get more guarantees and rigor out of your process.
My theory is that "formal methods" are not out of the reach of hackers, developers, and the wider industry. With a smattering of complicated-sounding things like predicate calculus and temporal logic you get a pseudo-language for exhaustively checking your designs before you write the code, to make sure you haven't forgotten important things like building the right solution.
It's really cool stuff... I'm learning TLA+ right now and loving it. I hope more people will find it as useful as I do.
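The "exhaustively checking your designs" idea can be demonstrated without TLA+ at all. Here is a toy explicit-state model checker in Python (my own illustration, not TLA+ syntax): it enumerates every interleaving of two unlocked read-then-write increments on a shared counter and finds the classic lost-update violation of the invariant "final value is 2":

```python
from collections import deque

# Each process: read shared x into a local, then write local+1 back.
# State: (x, ((pc0, local0), (pc1, local1))); pc 0 = read, 1 = write, 2 = done.

def _upd(procs, i, new):
    return tuple(new if j == i else p for j, p in enumerate(procs))

def step(state, i):
    x, procs = state
    pc, local = procs[i]
    if pc == 0:                        # read step
        return (x, _upd(procs, i, (1, x)))
    if pc == 1:                        # write step
        return (local + 1, _upd(procs, i, (2, local)))
    return None                        # done, no further moves

def check():
    """Breadth-first search over all interleavings; collect invariant violations."""
    init = (0, ((0, 0), (0, 0)))
    seen, frontier, violations = {init}, deque([init]), []
    while frontier:
        state = frontier.popleft()
        x, procs = state
        if all(pc == 2 for pc, _ in procs) and x != 2:
            violations.append(state)   # lost update: invariant x == 2 broken
        for i in range(2):
            nxt = step(state, i)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return violations

print(check())  # non-empty: both processes read 0, both write 1
```

Real tools like TLC do essentially this, just with a far richer specification language and much smarter state-space handling; the mindset of stating an invariant and exhausting the interleavings is the transferable part.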
I'm consistently surprised at how unaware programmers/developers and software engineers are of the progress made here over the past 5 years. Software engineering most definitely is a recognized engineering discipline nationally and in most states:
We can debate whether or not these developments are good for our profession (good, in my view). However, it's no longer disputable, at least in the US, that software engineering is a recognized engineering discipline. At least unless you don't recognize the authority of the IEEE/NCEES in this matter (in which case you don't recognize the credentials of most US engineers from any engineering discipline).
That said, I strongly agree with your emphasis on formal methods. I'm disappointed that software developers/engineers don't put more emphasis on these tools, even if they fail to convince their managers to actually use them. Yes, we don't need to use formal methods for a few jQuery scripts on a web page, but there are lots of places where they could find good use.
As one example, for general application development, Ada/Spark is an excellent example of an engineering-focused language and environment, and I wish other languages took this approach. For embedded designs, solutions like Ivory (and Ada, too) provide a fairly rigorous approach to software development. TLA+ is another interesting tool (that I know relatively little about). These are the sorts of tools we need to emphasize.
Unfortunately, most clients/companies aren't really interested in these methods, even when they're building applications where security matters. There are obvious exceptions, like some medical and avionics software, but even the automotive software I've looked into seems to have been developed in a sloppy, ad-hoc manner (and surprisingly, often by people trained as traditional engineers).
While they will charter someone as an ICT Technician (someone working with computer hardware or software), they won't recognise them as a chartered engineer without relevant civil engineering qualifications and experience.
You wouldn't use engineer on your CV in the UK in my experience, you would use software developer or programmer.
Which was why I stipulated "the US", "nationally", and "many states".
I think it's a necessary move. Liability should definitely be on the table as well at some point. The undertones at Blackhat and Defcon seemed to suggest some people think it's inevitable.
There's nothing to be lost by adopting a more rigorous process and only much to gain. Especially with the advances in tooling we have available: TLA+, Unit-B, Agda, Ada... etc.
I have a feeling Sussman may be right when it comes to lawyers and software -- the safest thing to do for a system where liability is a concern is probably to shut off in the face of a sub-system failure... which is probably less than ideal. However I'm confident there could be ways to specify a system that can do "reasonable" things in the face of non-terminal situations (ie: if I lose my writing arm I still have another one that's capable of doing a reasonable job to carry out the task).
It's not that simple. The quickest path to licensure is to graduate from an ABET-accredited engineering program (4 years), pass the fundamentals of engineering exam (which, likely being the "other disciplines" exam, requires general knowledge of engineering and science), perform engineering work supervised by a licensed engineer (I think you typically need 3 references), document your engineering work time (4 years are needed with a BS), and then pass the principles and practice in engineering exam. There are alternative paths that differ on the state level, but they take years longer, e.g., lacking the accredited BS would require something like 8 years of experience before one could sit for the FE exam. I think there's also a plan to require an MS at minimum to sit for the PE exam.
Point being, licensure is far more involved than passing one exam.
I've been picking up formal specifications, predicate calculus, etc rather well I think. I'm not afraid of doing the work: it's important!
And besides, the maths are beautiful.
I view the PE in software as being for managers on software-intensive engineering projects, say for example, plant controls or electronic medical devices. If you look at the current licensing path, you'll see that it requires deep domain-specific engineering knowledge (e.g., mechanical engineering) or broad engineering knowledge (e.g., statics and dynamics + fluid dynamics + materials science + ...), neither of which are common among self-labeled software engineers: this is why I view a PE in software as being intended for a controls engineer who does a lot of software projects (as one example). It's not really relevant for most software domains, nor is it intended to be.
I'm still enthused about introducing more formal methods and "engineering" practices into software development. I think it is very useful and indeed as the world becomes more reliant on open-source software... there needs to be some sort of protection of the public good, no?
Thanks for taking to time to respond to my, sometimes naïve, comments.
Here's an old method, Cleanroom, that was cost-effective for businesses, with a low defect rate even on first use:
Altran[-Praxis] is a modern company engineering solutions with their Correct by Construction methodology:
They warranty their code like Cleanroom teams used to. Still in business so the method works. ;) Their SPARK tool is now GPL with lots of learning resources available. So, those are a few examples of engineering software vs just developing it. The results in terms of predictability, reliability and low defects speak for themselves.
Personally, as it's function-oriented, I'd combine it with a subset of Haskell that was easy to translate into imperative code. Build the app with Haskell, use QuickCheck/QuickSpec, test every execution path, check covert channels via a Flow Caml-like setup, and use certified compilation to target with a pre-verified runtime. Now, we can use that directly or use it as an executable spec for an imperative implementation.
So, that's how I see applying Cleanroom today for most benefit. Maybe drop the statistical stuff, too.
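For a flavor of what the QuickCheck step buys you, here's a tiny stdlib-Python sketch (the driver and both properties are my own illustration, not the Haskell library): it samples random inputs against a stated property and reports the first counterexample it finds.

```python
import random

def property_check(prop, gen, trials=200, seed=1):
    """QuickCheck-style driver: run `prop` on random inputs drawn from `gen`.

    Returns None if the property held on every sampled input,
    or the first counterexample found.
    """
    rng = random.Random(seed)  # seeded for reproducible runs
    for _ in range(trials):
        case = gen(rng)
        if not prop(case):
            return case
    return None

# Generator: random integer lists of length 0..10.
gen_list = lambda rng: [rng.randint(-9, 9) for _ in range(rng.randint(0, 10))]

# A true property: reversing a list twice gives the list back.
holds = property_check(
    lambda xs: list(reversed(list(reversed(xs)))) == xs, gen_list)

# A deliberately false property, to show a counterexample being found:
# claiming that reversing a list leaves it unchanged.
bug = property_check(lambda xs: list(reversed(xs)) == xs, gen_list)

print(holds, bug)  # holds is None; bug is a concrete non-palindromic list
```

QuickCheck proper adds shrinking (minimizing the counterexample) and type-driven generators, but the core loop of "state the property, let the machine hunt for inputs that break it" is exactly this.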
Edit: added them in a reply via pastebin.
Engineering is applying scientific principles to solve a problem. Development is the process of improving something.
I don't see why they should be at odds; the concepts seem orthogonal.
Instead of insulting rigorous, creative professionals who prefer the name "developer" over "engineer", maybe he could have said that he wants to hire people that won't go CYA when problems happen.
Instead, both hobbyist and professional developers seem to systematically ignore every proven thing in their field outside of libraries, apps, or practices that are mainstreamed. It's driven by fads and a throw it together mindset rather than science and robust composition mindset.
Meanwhile, groups like Altran Praxis with their Correct by Construction approach continue to show benefits of engineering software.
What disciplines in software development truly need the concept of a big-E Engineer? Which don't?
More fundamentally, have the practices and knowledge in software engineering yet reached a point of maturity, that licensing Professional Software Engineers would be ethical?
Until then, it doesn't feel particularly useful to have folks coming up with their own pet definitions and adding confusion to a sufficiently broad, fluid profession.
Against this, there were people who threw together Fortran, COBOL, later C, and so on. They had an idea, wrote whatever code seemed to implement it, maybe did some testing, and put that stuff in production. Problems, often predictable, occurred that disrupted service and leaked people's data. Over time, almost by trial and error, they re-discovered a subset of prior engineering practice that prevented some problems, continued to ignore others, and developed best practices of their own within silo'd groups. Their work continued to be lower in quality and predictability versus those like Altran that continued the engineering tradition.
So, I think there's a clear distinction between two approaches to constructing information systems. One strictly leverages proven techniques in careful combination with lots of review, analysis, and testing. One does whatever it feels like with some feedback from others in their camp and optionally some engineering tricks. The disparity in results confirms both that there's a difference and the superiority of engineering rather than developing software.
Now, the Etsy CTO might be adding his own stuff in there. This probably isn't warranted as it does cause confusion. I'm sticking with the original definitions centered on problem-solving philosophy and evidence-driven practices.
The part where I see developers at fault is rejecting options that improve things, for them and users, that they can actually use within this mess. An example might be the web security frameworks for PHP while developing a PHP service. It takes almost no work to use them with many components pre-built. They prevent total disruption of service or record theft from hackers. So, why aren't the developers using them?
This problem manifests all over the stack for many preventable issues. Most developers consistently refuse to put in a little extra rigor or effort upfront to prevent problems down the line. Those that do... a little closer to engineers to me... experience rewards then write blog posts or papers encouraging others to do it. They're mostly ignored. This is the problem. Management and users aren't causing this one.
This is too broad a definition to be useful. Any respected professional will do this - or at least try to - be it a medical problem, a law problem, or a social problem.
What defines the engineer apart is that we try to solve the problems by designing and creating stuff.
I was told I would have a phone interview with someone from the hiring team. Weeks went by and I got zero response. I figured I had just been rejected with usual company non-response b.s.
But after about three weeks I received an impromptu message from HR saying they were sorry about the delay, and also that unfortunately someone internal had put in a request for the job I would have interviewed for, and they wanted to proceed with their internal candidate.
I was grateful for the message and thanked that HR representative. I think it spoke well of Etsy that they bothered to give me some reply about my status.
But I also thought, as far as engineering goes, that it sounded less than ideal to me that their hiring process could go like that. Why wouldn't internal candidates have been vetted prior to spending time on the application process and phone interviews with others?
They thanked me for my understanding and said they would be in touch about future roles. I have never heard back. So it's puzzling that they seem to want to hire developers with a focus on craftsmanship, but aren't willing to consider me after previously feeling that I was at least good enough for a phone screen.
It's not so much that this is "bad" or "good" for Etsy, but more that it gives an overwhelming feeling of "What on earth is going on inside Etsy? What are they doing?"
Requiring a public posting is a good idea in theory, but in reality it seems most places are just wasting people's time because external candidates are never seriously considered.
That's why it struck me as so strange.
The most egregious interview hassle I had was with a large insurance company based in Connecticut. They had me do one HR phone call and then a difficult one-hour technical phone interview. After that interview they gave me a lot of positive feedback and said they wanted to proceed, but then pressured me heavily: to proceed past that point, they needed specifics about my desired salary. Foolishly, I agreed to share, thinking I didn't want to miss the chance for additional interviews.
I did not hear a word of reply from them for at least four months. Then, out of nowhere, I received what was clearly an automated email sent to many people, which said basically the following:
"Thank you for participating in our exploratory market survey for the role of 'Data Scientist' -- regretfully, we have decided to move the company in a different direction and so the position will not be filled."
That dramatically changed my perspective about how to approach job interviews, how to relentlessly avoid ever sharing salary data, etc.
It could be that the internal process started simultaneously, just that the internal candidate "grabbed the thread", and either the HR contact wasn't informed, or didn't think / wasn't coached to notify the external candidates as to the change in status. Until they went back to clean up their inbox.
"What on earth is going on inside Etsy? What are they doing?"
I've come to accept that even when all parties involved have the best of intentions, the workflow of candidate vetting and the inherent game-theoretic aspects of finding the best available candidate on the market (when one generally has no idea what the market looks like until one has churned through a goodly number of candidates) come together to make the process more complex (and time-consuming) than people generally anticipate.
Unfortunately what gets added to this is that seldom do people stop to think, "Wait, if we don't treat ALL candidates nicely during the process, that means the good ones won't apply again. And they won't recommend their friends to apply to join us, either." And so, "Maybe we should put a little more thought into candidate experience, and make a point of treating them better."
Again, this is under the best of circumstances, assuming the parties involved have put some degree of thought into, and have some sense of discipline behind, their vetting and communication process. Sadly, that isn't often the case.
At the end of the day, companies want to hire people who love what they do, have a fiery passion for continuous self-improvement, are extremely competent at their jobs, and are keenly aware of how their role impacts the overall success of the company. Simply changing 'developer' to 'engineer' in your recruiting efforts sure as hell doesn't guarantee those attributes.
Most well known software companies I've checked so far seem to go for "engineer". StackExchange however seems to have "developer" titles.
Bullshit. Complete and utter bullshit. If you can't do the algorithm dance on the whiteboard you're "not technical enough" - after all, these companies get such a high volume of applicants that there has to be some way to filter them out. Curiosity isn't a measurable metric - it's a "nice-to-have".
But something else has occurred to me recently about the never-ending discussion of "whiteboarding" and other interview techniques:
What's perhaps wrong about the overemphasis on the "algorithm dance" (be it at the whiteboard, over a website, or over the phone) is that people revert to it because it (at least appears to be) measurable, and can be roughly assessed within a reasonably short timeframe (and more or less reproducibly so), and at low cost/risk to the company.
While more nuanced, and arguably much more valuable skills -- like the ability to manage complexity; being generally ego-divorced, and immune to silly hangups about one's platform or style choices, or those of others; seeing the forest for the trees (but the trees also), generally; not being a jerk; and yes, curiosity -- basically can't be measured in an interview setting, or in any way other than through shared experience on actual work projects. At least not reliably, and certainly not reproducibly.
The old "searching for the car keys under the lamp post, because that's where the light is" fallacy, in other words.
You're distilling what they're looking for into an ambiguous term of "curiosity" which isn't a good representation of what they're looking for. They're looking for someone who can expand their understanding beyond the scope of just development. This is something you can easily get into when interviewing someone.
Indeed. When I've interviewed junior candidates, I gave them a function with some inputs and asked them to produce the outputs. At minimum this requires them to exhibit some degree of analytical reasoning about something they've never seen before. You'd be surprised how many candidates can't even do that.
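For illustration (the commenter doesn't share the actual functions used), an exercise of this shape might look like the following hypothetical example, where the candidate is asked to trace the return values by hand:

```python
# Hypothetical interview exercise: the candidate sees this unfamiliar
# function and is asked to produce its output for a few sample inputs.
def mystery(xs):
    """Collect the squares of the even numbers in xs, in order."""
    out = []
    for x in xs:
        if x % 2 == 0:       # keep only even values
            out.append(x * x)
    return out

# A candidate reasoning it through should arrive at:
# mystery([1, 2, 3, 4]) -> [4, 16]
# mystery([5, 7])       -> []
```

The point isn't the difficulty of the function; it's whether the candidate can methodically walk through code they've never seen before.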
Multi-disciplinarity is overpreached; it creates average quality across a system instead of best practices and efficient development everywhere.
* Must do backend
* Must do frontend
* Must do devops
* Must do database schemas like a wizard
* Must know all of AWS inside and out
* Do you like customer support? You'll be doing customer support.
* You'll be doing marketing disguised as engineering blog posts.
I am none of these.
That said, I personally like working with engineers who at least try to participate in the design/front-end process instead of just passing off to me like they don't give a shit. Their opinions can be informative and it's also a way for me to help teach them a thing or two. Plus sometimes knowing small UI things can help alleviate my workload so that I'm not doing really tiny tasks associated with a project that's primarily driven by a backend engineer.
As a longtime designer/UI guy, I have no illusions of ever being an elite backend programmer. But I actually enjoy learning backend on the side. It gives me a better understanding of how the team's entire codebase works and lets me follow a bit more of what my team members discuss in meetings. And even for side projects, it's just empowering to be able to implement a product idea from end to end (even though, yes, it's likely not stellar code). Overall, just having a growing understanding of how it all connects together is truly rewarding.
-Robert A. Heinlein
Specialization is for people living in societies. Modern civilization would never have come into existence and cannot continue to exist without specialization.
I think Eric's article was able to articulate differences between a programmer and a developer. Reading this interview, I don't think Etsy's CTO is able to clarify what makes an engineer different from a developer.
An engineer engineers solutions, frequently from scratch or very small pieces.
A developer develops solutions, often from larger pieces, focusing on implementing them to solve business needs.
It's subtle, but I think there is some differentiation. I think the vast majority of companies only require developers, or mostly developers, including Etsy, because let's be honest: what is Etsy doing that hasn't been done before?
Developers build specific tools using generic pieces. Think of using AWS to serve up a CMS with business-specific code hooking into a parent company's payment processor.
Engineers build general tools using specific pieces. Think of building AWS DynamoDB.
edit: It's not about LOC, it's about understanding systems and algorithms... the distinction between putting Lego blocks together, and understanding how and why those bricks were made.
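To make the Lego-block distinction concrete with a small sketch of my own (not from the comment): the "developer" move is to reach for Python's built-in dict; the "engineer" distinction is understanding how such a structure works underneath. A toy open-addressing hash table, for example:

```python
# Toy hash table with linear probing -- an illustration of "how the
# bricks are made". Deliberately minimal: no resizing or deletion, so
# it only works while the table has free slots.
class ToyHashMap:
    def __init__(self, capacity=8):
        self.slots = [None] * capacity  # each slot: None or (key, value)

    def _probe(self, key):
        # Start at the hashed slot and walk forward until we find
        # either this key or an empty slot.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else None
```

Using `dict` is assembling the brick; knowing why probing, hashing, and load factors matter is understanding how the brick was made.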
In traditional engineering, like mechanical or chemical, change in the field happens slowly. Standardizing tasks has happened just because time has allowed for it. I think professional software development/engineering/whatever you call it is trending this way.
Example: I studied chemical engineering where we learned how to size a tank that would be pressurized containing some hydro-carbon. You pressurize it to keep as much of it liquid to minimize the tank size; now you have to figure out how thick the walls of the tank need to be to handle the desired pressure based on the expected liquid composition and volume, among other things.
Are you surprised if I say this was stuff chemical engineering programs cover in the last year of study? In the previous three years, topics include thermodynamics, physical/organic chemistry, and other much more analytical, bookish things that outsiders consider chemical engineering to be about. In oil refining, where all the money is for an undergrad, they're paying you to size a tank.
If you consider the historical timeline of when these topics could be considered "understood", the way the course plan is laid out might make more sense. Roughly: we understood a lot more about lab-scale chemistry pre-20th century than we did about tank sizing. Tank sizing is really heuristic and fudge-factor driven. IIRC from our textbooks, a lot of these factors were lab-determined in the 1930s and '40s. They basically built a tank of some thickness and size, drew up a list of liquid compositions with different properties, and started measuring pressures. The plotted data would get curve-fit and, voilà, a formula for how to size a tank.
You can't use the formula without understanding your liquid compositions, which doesn't really work if you don't understand thermodynamics and physical chemistry; but it is also really hard to use only thermodynamics and physical chemistry to figure out how to size a tank.
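To give a flavor of the kind of heuristic formula being described, here's a minimal sketch using the thin-wall cylindrical shell relation of the sort found in pressure-vessel codes. The numbers below are made-up illustration values, not real design data, and real codes add corrosion allowances and many more fudge factors:

```python
# Required wall thickness for a thin-walled cylindrical pressure vessel,
# using the classic code-style relation t = P*R / (S*E - 0.6*P).
# Inputs here are illustrative, not real design values.
def wall_thickness(p, r, s, e):
    """p: design pressure (psi)
    r: inside radius (in)
    s: allowable material stress (psi)
    e: weld joint efficiency (dimensionless, 0-1)
    """
    return p * r / (s * e - 0.6 * p)

# Example: 250 psi design pressure, 48 in radius, 17,500 psi allowable
# stress, fully radiographed welds (e = 1.0):
t = wall_thickness(250, 48, 17500, 1.0)  # roughly 0.69 in
```

The formula is trivial arithmetic; the engineering is in knowing S for your material at temperature and P for your expected liquid composition, which is where the thermodynamics comes back in.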
So if I were to say where software is today, it's that we haven't standardized how to size the metaphorical tank, where that tank could be, e.g. different data access permissions for hospital records?
We should just call ourselves codemonkeys and focus on doing the job well regardless.
Not every problem needs to be engineered. Sometimes you just need a good enough solution developed, and an engineered one would be overkill.
For a CTO, a long-term view is necessary and he probably wants software engineers in the long-term -- not just developers who have limited utility.
Based on our analysis of LinkedIn, there are 1.7 developers for each SW engineer. Raw data here: bit.ly/1QOobbN
Sure, but it doesn't mean good engineers don't know what they're worth. If Etsy has a hard time recruiting "engineers", Etsy should ask what kind of deal it is offering in order to attract the profile it seeks, not write a blog post that dismisses "developers" as inferior. Because it reads like "if you're a developer, stop applying". Well, if Etsy wants something else, or feels like it's not getting "the best of the best", that's a problem with Etsy and what it offers, not with the candidates.
Who's going to work at Etsy for a pittance when other companies offer a better deal? If Etsy has a hard time recruiting "engineers", then the problem is at Etsy. They should ask themselves why they have a hard time recruiting the profile they seek instead of blaming "developers".
At least for me, a developer is solely focused on shipping code, though it's unavoidable to meet problems that require engineering. Facing such a problem, the developer will focus on "making it work" instead of answering "how should I properly architect it?".
The curiosity to answer the latter question marks an engineer, who also wants to know about networking, architecture, and better ways to develop systems.
In simple terms, this distinction asks whether the professional is curious about development only (developer) or about the whole stack plus development (software engineer).
Job descriptions act as a constraint on the human complexity of a team or organization. They help codify how people modularize known workflows and discretize them into pieces employees can solve.
Crucially, a job description defines the boundaries of what an employee can say no to -- "that's not in my job description."
In the modern hiring landscape, this would be immediately met with vapid criticisms, like you are not a team player or you won't "wear many hats" or whatever.
But the benefit of job description boundaries is not about entitling programmers to be whiny and complacent about tasks they don't like. That's naive. The reason it's good to empower your workers to tell you no, especially knowledge workers, is that it preserves organizational structure and planning. It highlights bottlenecks for you and points out the discrepancy between what you need and what you have. Otherwise, while your machine learning expert is repressing her fury over being assigned to clean up some legacy Rails codebase, you might be off playing golf thinking you've got your Rails maintenance needs correctly covered, when really you don't at all (and you're leaving money on the table by not extracting the full value from your machine learning engineer that you otherwise could).
This, by the way, is perhaps the biggest hallmark of a good manager. Good managers act like double-sided adapters who process arbitrary inputs from the business-problem stream on one side, and turn them into collaborations with subordinates that fully respect those subordinates' specialties, aptitudes, and goals. That is a damn hard job, and when done properly it is a big justification for the increased compensation and status usually given to management-level employees. Unfortunately, many modern firms do not expect managers to do this, and instead seek to build teams of so-called "full-stack" developers for everything, so that managers are only receiving business problem inputs and never doing the actually difficult part of turning them into workflows that show respect for subordinates (... you know, managing them).
In fact, I would argue that this is exactly what it means to even have a business model at all. If your "business model" involves hiring people and then screaming at them to do whatever they are told, instead of what they are good at or motivated to do, because "that's just how jobs work" then in effect you actually don't have a business model. You have not yet done the hard part -- figuring out that double-sided adapter layer that translates real world business concerns into things that disparate specialists want to do and won't refuse to do.
When I read this by the Etsy CTO, it strikes me as a big, whiny excuse to try to hire more people who are "arbitrary work receptacles." That seems to be what he is trying to describe as "engineering" -- He uses words like multi-disciplinary and messy, when really he's trying to convey "I want you to do whatever arbitrary work I say, regardless of how poorly it matches your aptitudes, interests, or goals."
I feel developers should more strongly demand legitimate job descriptions, and ask during the hiring process for specifics on the ways that managers will be held accountable for protecting the job description, instead of forcing employees to bottomlessly compromise on their goals in order to be arbitrary work receptacles.
It's not about mere displeasure over having to do arbitrary tasks. It's about the fact that a lack of planning or modularity codified through job descriptions is bad for business. Properly respecting employee aptitudes is a sign of a healthy business. Demanding arbitrary cross-functionality all the time is a sign of needless chaos. We can see this clearly in software itself: if all of your classes make everything public, silently mutate each other's state, and everything grows purely by attrition with no attention paid to e.g. Single Responsibility Principle, it's a huge red flag of bad code and impending system failure. But we bury our heads in the sand when the same ideas become apparent when managing people complexity instead of software complexity.
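To make the software analogy above concrete, here's a small sketch of my own (hypothetical class names, not from the comment) contrasting the everything-public, mutate-from-anywhere red flag with a Single Responsibility Principle split:

```python
# The red flag: one class doing several jobs, with all state public and
# mutable from anywhere in the codebase.
class GodObject:
    def __init__(self):
        self.orders = []        # order-tracking concern
        self.smtp_host = ""     # email concern
        self.tax_rate = 0.08    # pricing concern
# Anywhere else in the code, anyone can silently reach in:
#   app.tax_rate = 0.0        # who changed this, and why?

# The SRP version: each piece owns one concern and has one reason to
# change -- the same modularity a job description gives a team.
class TaxPolicy:
    def __init__(self, rate):
        self._rate = rate       # private by convention, set once

    def tax(self, amount):
        return amount * self._rate
```

The parallel: a `GodObject` team member is the "arbitrary work receptacle"; a `TaxPolicy`-style role has a boundary you can plan around.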
The "engineers" he speaks about are people who cannot limit themselves like that. However, "engineer" sounds a bit pretentious :D
 http://akkartik.name/post/libraries and, rewritten, http://akkartik.name/post/libraries2
The only difference I can spot is that engineers are interested in how the abstractions they use work. But I haven't met many developers who aren't interested in how the abstractions they use work.
I simply can't believe he's telling the truth here. I mean networking, sure, but databases are another story.
I just fought with people on my team about this yesterday. When you're talking about services in production, the more boring the better. It's unfortunate that Java isn't as cool as your cool thing you'd like to YOLO out to production, but the rest of us have to bear the on-call responsibility for your cowboy shit. Just use whatever we already use and keep it boring.