This is incredibly true. I once turned 60kLoC of classic ASP in VBScript into about 20kLoC of python/django including templates. And added a bunch of features that would have been impossible on the old code-base.
It turned the job from hellish (features were impossible to add) to very nearly boring (there wasn't much to do anymore). So with this newfound freedom I built some machines to automate the data entry and once that got rolling the job got even more boring. Because it was a small company with a very long learning curve the owner didn't let people go, but instead kept them on so that he didn't have to hire and train new people as growth accelerated.
But with all the automation some slack found its way into the system and problems that had normally required me to stop working on my normal job and help put out fires now got handled by people who weren't stretched a little too thin.
Sadly there's (seemingly) no way to interview people for this ability so we're stuck with the standard "write some algorithm on a whiteboard" type problems that are in no way indicative of real world capabilities.
I got fired from a job because I had done this and they couldn't figure out how to keep me billable after that. Well, I got fired because I would "show up late" and "leave early" because there was nothing else for me to do (even though I was actually pulling a full 40 hours, they were expecting 60 out of people).
They could have given me more work, put me on new projects, etc., like I kept asking to be done, but instead they just wanted me to figure out some way to bill the client for 60 hours a week, and not call it "bug fixes" or "new features", because the client wasn't paying for it. That left data entry, which I had automated from a whole day to 10 minutes. They wanted me to go back to entering data by hand.
I would have quit then, but I didn't want to give them the pleasure. A flood had recently destroyed all of my stuff and I used the insurance check to get out of debt rather than replace the lost stuff, so I suddenly didn't need money for a LONG time. It's funny how easily debt can make you a compliant little slave towards people who want you to perform bullshit tasks.
I've been freelancing ever since, making twice what they paid me for only half a week's worth of work. And the flexibility of freelancing enabled me to pursue a long-distance relationship that eventually led to marriage. So I say getting fired was the greatest thing that ever happened to me.
I think P.T. Barnum wrote the prototypical "Self-help: finance/entrepreneurship" book in "The Art of Money Getting: Or, Golden Rules for Making Money" [0]. A lot of it sounds like the stuff you hear getting hammered to death on motivational TED talks. It's a pretty short book and he doesn't go into too much exposition on a lot of the issues he addresses, with the exception of debt. He probably spends half of the already short book on talking about the evils of debt, especially debt put into things that aren't assets. It's not the debt that is the problem, it's the mindset you have to be in to take the debt, the mindset that the debt puts you in, and the mindset of the type of person that continues to use debt, especially destructively. There are some closely related issues with spending money, which tie in nicely.
It's an interesting read. This book was written in 1880, and if not for the dated writing style, it would fit right into the modern-day self-help genre. It really helps underscore the idea that there's "nothing new under the sun."
I've come across this a few times. The managers' thoughts were probably along the lines of "great, I got all my code cleaned up/the hard part of my product built for the expensive rate, now I can fire him and pass on maintenance to the cheap guys over there".
And of course your thoughts were "I'm going to work extra hard initially, which will make a good impression and get the product built on time and all the tasks automated, and then I can coast and it will still be worth their money."
The manager doesn't care because when your code starts rotting due to bad maintenance and badly added features (or just the fact that nobody has bothered taking it over), he'll already have been promoted or changed companies.
Except this was a 5-person company that could have used all the help they could get. They styled themselves a "startup", but I don't know how valid it is to call yourself a startup after 8 years. They had me very cheaply, but I think they knew I was never going to fit their "company culture" of lying to clients.
Billing by the hour is basically just telling people how much effort you expect the task to require smoothed by how valuable it is to you. If you can do it quicker, I say bill the full amount of time and if they have extra inquiries, fit it into the buffer zone.
For one of my current clients, I still bill only the time I actually work. It's a long-term project with no explicit end date that I'm trying to get out of; I want to do shorter-term projects now. But while I'm here, they've hired me for the fact that I can crank out code and make things better than their own employees can. They didn't buy a specific thing; they're buying my expertise.
It means I'm not making as much as I could be, but it's not their fault I agreed to the arrangement 3 years ago.
I had an interview recently where we did something similar, and it was awesome. They had picked an especially good example: the code was small enough that it didn't take long, but it had a variety of badness, from coding errors to bad names to bad method grouping to logical errors.
I wrote one of these for my work as well. Definitely our go-to for getting a sense of how well people can understand others' code and how they think about data modeling/complexity. A very small initial page of code can go a long way.
This is great. Reading (and maintaining, refactoring, improving, etc.) obtuse code is often much more difficult than starting with a fresh slate. It's also a major real-world requirement. Where is that statistic on the number of engineers doing maintenance vs. greenfield development?
Agreed. Being able to read and reconstruct models from OPC (other people's code) is such a big piece of coding, but it's hardly ever part of the interview process.
(OPC also refers to one's own code, for when you can't remember wtf you did and the comments aren't useful.)
I think it was a class of 30-50 lines of Java code.
The problems were those you see daily in production code. Misleading names. Duplicated constants. Overly complex constructs. Off by one errors. Maybe a resource leak when exceptions are thrown. Etc.
> Sadly there's (seemingly) no way to interview people for this ability
I think there is. Show the candidate some bad code. Not WTF code, but something you wouldn't be proud of but is realistic. Bonus points if you're comfortable enough to pull this out of your own production system. Ask the candidate about adding some functionality, but be clear that cleanup and refactoring changes are fine (even encouraged). The conversation should be enlightening.
This happened to me once. I went for an interview at a huge real estate company. They had a chat with me for exactly five minutes, then gave me a computer (an old one, with keyboard not working properly) and some legacy code. I wasn't asked to fix it, but I was asked to make a small change, and I was given half an hour for it. That code had no documentation and no comments. Then five more minutes to explain what I understood about the codebase: no questions, just me talking about the codebase as much as I had understood it in that half hour. The whole thing took 45 minutes or so in total. No whiteboard questions, etc. And they made an offer.
> gave me a computer (an old one, with keyboard not working properly)
What I like most about this is how close to reality their interview model is. Less than stellar equipment, legacy code, and a maintenance task... just a normal Monday.
Another story: I walk in on my first day (this is at a subsidiary of a HUGE company). Nobody remembered that a new guy was supposed to join that day, so there was no desk or computer for me. The first couple of hours I spent reading magazines; then I was given the oldest computer I've seen in my life (probably older than me, ha). I am not kidding: I spent half an hour just cleaning the thing (not talking about software, but hardware; it had half an inch of dust on it).
You know, interviews ideally go both ways. If a company was not willing to provide me with working equipment, I would likely decline any offer they might make. Not because I need the best of the best hardware or anything, but because it would be a huge red flag that they don't value the position or the people that fill it.
Our main product was developed on a keyboard that was missing a "w". It's not a big deal, though - you can use a for(;;) loop instead of a while loop without hurting performance. And it's easy to just avoid "w" words in the view templates by being clever with word choice.
Of course, that's not true at all. But giving someone a defective-but-not-entirely-broken tool can be a clever way of seeing how they work around problems or how they deal with frustration. Joking aside, we once let a guy go during his 1 week probationary period because he got a little too angry at a slow server. It's a valuable test.
It's too contrived though; not having a single working keyboard anywhere isn't a real business situation. A slow bottlenecked server, sure.
If the "w" key weren't working I'd copy-paste "w" characters from elsewhere in the document, or bring up the on-screen keyboard. Writing bastardized code -- for (;;), really?! -- would be a huge negative against a candidate. And, of course, if a company really didn't have a working keyboard to provide me with, I'd never work there in the first place, because if they can't even provide working peripherals there's no limit to what else they might skimp on.
I actually have three keyboards at my desk right now -- one with Cherry MX blues, one with Cherry MX browns, and a Model M. No way in hell would I ever write code without "w"s.
Whenever you need a character your keyboard won't produce, either because it's broken or because it's in a different language and you can't figure out how to get it, just google the character and copy/paste it. E.g., google "double u".
Or just use the mouse to open your whatever "Character Map" program comes with your OS... which works when you're missing the same letters you'd need to "sound them out", or even with no keyboard at all. (Assuming you managed to log in.)
I actually like "for (;;) {}" as an idiom for an infinite loop (or at least one whose terminating conditions are in the body rather than in the loop construct itself). It maybe stands out more than "while (true)", so readers can easily see what's going on. But maybe that's just a justification, and the real reason is that it looks clever, because not everybody knows you can do that with a C-style for loop.
Not at a company, but in Bulgaria every year there is a National Programming Olympiad: basically, high-schoolers (and sometimes even middle-schoolers) compete by solving 2 or 3 exercises in a small amount of time. Their code would be run against tests, and it had to finish in time and with correct answers. Possibly code review was done as well. The most-used languages were Pascal, C, C++, and Basic.
But back then computers were not provided; each school had to travel with a computer for each student. So around 1993/94, a friend of mine, who also had a terrible medical situation at the time and was quite weak, decided to compete. To make things worse for him, his spacebar wasn't working. I think he pressed Alt+32 a lot (or was it Alt+20? I haven't used that way of entering ASCII codes in a long time). He finished pretty well given the situation and the keyboard problem he had, and the next year, when he was healthy and had a good keyboard, he did just awesome!
I'm sorry, but I can't help finding this situation ridiculous.
Unless you're from a very poor country, there is no excuse for not having a working keyboard.
I don't know if that was part of the test, but if it was, it's worse than the big blue-chip corps asking about the number of piano tuners. That question, at least, can actually be valuable for understanding how one reasons about unknown problems/areas.
About the server being slow: well, I don't know the magnitude of the slowness or the anger, but unless you're a ramen-fueled startup, there is no excuse for having slow machines. It's a management failure. It's a waste of developer time. Instead of coding, the dev is having to deal with stress-inducing constant 5-second hiccups or similar things.
Put yourself in the interviewee's shoes. Do you really want to work in a company that can't conduct a proper interview and has broken/slow hardware?
> Do you really want to work in a company that can't conduct a proper interview and has broken/slow hardware?
Slow is relative. The employee in question was trying to figure out why a remote server was experiencing extreme slowdown. He ssh-ed in, but he was able to type far faster than the beleaguered remote could echo his keystrokes. So he needed to just carefully type his commands, wait for them to appear, and then press enter. Instead, he typed angrily and too quickly, swore at the connection, and eventually started slamming his keyboard in a fit of pique.
It was a totally reasonable real-world slow machine problem, and a totally useful insight into the mindset of a potential new employee.
Not egregious at all. We're developers, sometimes we have to walk into an annoying situation and deal with it like adults.
I have been forced to work with really old computers connected to as old research hardware. At some point it is probably more economical to let someone figure out the interface and solder something together with an Arduino or similar so we can start using a new computer. But that point is never now. Until then we have this old chain of hardware just to get the data from the old machine via 5 1/4 floppy disks.
Maybe it's why I'm not a developer anymore, but my approach to a broken "w" key wouldn't be figuring out a workaround to using "while", but rather copying and pasting a "w" from somewhere else.
If someone's approach to a broken keyboard is some workaround instead of just asking for another keyboard it is a sign they handle problems very, very poorly.
^^^ This is the correct answer. As a manager, I want to know if my team is experiencing an obstacle that should be simple for me to fix for them. A manager's job is to remove these kinds of obstacles. I don't want them wasting time on poor workarounds.
Mine would be to (in order): jury-rig the keyboard to make it type "w" anyway, rebind "w" to some unused key, or just bring my own keyboard from home (if this were a probationary period rather than an interview).
It does, but having to click to type that one letter would be annoying as hell, and a last-ditch resort (after copy-pasting the letter itself from some text that contains it).
If the 'w' key is the only worn out key on a keyboard, I would suspect that it had seen heavy use attached to a gaming PC, as the "forward" key in the default WASD movement layout.
I have a keyboard with a broken 'w' key myself, and I know exactly why that key cap died young. It sure wasn't from typing out "while".
Somewhat off-topic, but I'm suddenly reminded of a joke proposal for Fortran to drop the letter 'O' entirely from its character set (in response to frustration regarding its confusion with '0'), the justification being that 'GOTO' statements would suddenly be impossible and therefore incapable of doing harm.
No, I had another offer that I took. I'm an average programmer and was able to do the task easily, but that codebase really scared me. Plus, the other job was to build something from the ground up, and I really liked the CTO, so I took it. They were primarily hiring someone just to keep it running rather than to add new features.
It was a great interview experience though. One of the engineers commented that they gave the same task to other interviewees too.
I think this is a smart and lazy way to hire people :P No need to spend an hour in the room asking random questions, having awkward pauses etc. Just let the candidate fight with the codebase - lazy and simple!
We did something similar at previous job. We handed the candidate some code that was intentionally bad, explained that it was so (so they wouldn't think we wrote like that every day), and then asked what they'd do to improve it. This let us find out a few things - whether they could read & understand someone else's code, whether they could make improvements to existing code, what levels of abstraction they were comfortable working at, and whether they had a strong enough personality to mix with everyone else.
The key was in how we presented it - we didn't want to come across as elitist jerks (but we probably did to some people), so we tried to soft-sell it to them and work it in gradually during the interview.
That sounds like a good way to set expectations for communication and introduce the interviewee to the way your team respects each other.
Some teams are like "Man, what idiot wrote this code! This is garbage!" (sometimes in other words). I can imagine they couldn't do the type of interview you're doing.
In your interview, you show them some bad code, ask them to change it, and if they start disparaging whoever wrote it, you can red card them before they start dragging down team morale.
Or, if they start wondering about 'how did this code get like this? are there other issues leading to the bad code?' they've got some strengths in areas outside of development.
And then -- and this is important -- actually follow through on allowing and encouraging refactoring, as indicated in the interviews, instead of insisting on "features first, features fast" in practice.
I agree. There are tons of ways to test for this. Write a simple library that doesn't use any good design principles and have your candidate spend some time refactoring it. You can even use the same library multiple times. Heck, have your senior engineers also do this, so that you can get a good baseline for what an ideal candidate should come up with.
It would actually be interesting to look at the commits of any of a candidate's online project repositories from the very start and observe how the candidate made progress with respect to the structure of the code and improvements in coding practice: to check whether the candidate optimizes and refactors the code during development, leading up to a stable release.
This assumes that developers' off-work behavior is similar enough to their work behavior that it's a reliable indicator of their workplace performance. Or that developers should comport themselves on their off time to a workplace-level standard.
The problem with this is that it doesn't let my personal projects be Play. If I'm writing something for myself, it's not going to be as clean and documented to the level it would be in the workplace. And why should it? It's supposed to be for me.
And if I'm learning a new technology, I'm going to be sloppy. I'm going to make mistakes. That's the very nature of learning-by-doing.
But in the real world of managers and companies looking for reasons to say "no hire", I have to assume that every public check-in I do is another potential "no hire" justification. Which means if I make all my check-ins public, I have to treat my personal projects as Work, not as Play. Which is an excellent way to burn out.
Or I just give you a curated look at the final product. Which ought to be enough.
>This assumes that developers' off-work behavior is similar enough to their work behavior that it's a reliable indicator of their workplace performance. Or that developers should comport themselves on their off time to a workplace-level standard.
Actually I hold myself to a higher standard when writing open source - A) because I know everything is open to the world, and B) because I'm not under tight time constraints.
Nonetheless, there's no perfect project, and any employer that looked at a github project and was needlessly, ridiculously picky or made a presumption that there shouldn't ever be mistakes would end up just not hiring anybody at all.
>And if I'm learning a new technology, I'm going to be sloppy. I'm going to make mistakes. That's the very nature of learning-by-doing.
Only idiots expect a perfectionist. Thus if you try to make every commit an exercise in perfection you're only selecting out the idiots.
I'd actually far rather see incremental improvement (in code quality, documentation clarity & commit messages) than sheer perfectionism.
Also bear in mind that no matter how interested the employer is in you, the chances of them taking anything but the most cursory glance at anything beyond HEAD is very small.
Yeah, personal projects can be sloppy, as they're mostly for learning new technology. And it may also be unfair to say that a good programmer always produces clean code, no matter the reason for the development.
But still, there must be one project that the candidate considers his/her best work, one he/she is proud of, one where he/she has motivation to do best. If the employer can just ask the candidate for that project, and the candidate can show that project in a public or private repository, this would give really important insights to the employer about the candidate, which would be beneficial to them both.
Also, this doesn't mean that candidates who don't have a public repository to show are at any disadvantage; they are at the same point. It's just that a candidate who is able to provide a project of his choice through a public repository would be a few points ahead of the other candidates.
>But still, there must be one project that the candidate considers his best work, one he/she is proud of, one where he has motivation to do best.
For an experienced developer, that would be a workplace project, which you probably could not see. Indeed, it _should_ be a workplace project; would you hire a professional developer whose best work is their personal project?
But if what you're saying is that the mark of an ideal developer is one who has produced a personal-time product of professional quality, where every checkin is to workplace standards, and has developed the project from conception all the way to release, again all on personal time, then what you're asking for is not a developer who also codes as a hobby, but a developer who (at least part of the time) works two actual jobs. One of which they do for free for portfolio development. I don't find that to be a reasonable expectation, even if it would give employers "important insights".
When I was writing this, I actually had only new grads in mind. You got me there. But I think this would be especially helpful in the case of recent graduates. What I am trying to say is that if a candidate can show the life-cycle of some academic/personal project in a public/private manner, it would help the employer make a better decision, which will in turn benefit the candidate.
> But still, there must be one project that the candidate considers his best work, one he/she is proud of, one where he/she has motivation to do best.
Were I to have such a project that I could show to people, it would be for a company that I had founded and as such despite my having the ability to show it, I would have absolutely no incentive to do so.
Basically if I had EXACTLY everything you purport to want in an employee I would be entirely unmanageable. Someone who has not only the skill and ability to turn out such a project but the motivation to do so on their own time is probably the definition of a founder. And founders are often terrible employees.
So while you're right that it would theoretically be better for candidates to share their codebases with potential employers, in practice I suspect that it rarely happens.
And that assumes the online repo is fully complex and developed, has a stable release, and isn't a work in progress. That code you hacked together in a couple of hours to solve a quick problem isn't going to be an indication of how you handle a mature project with multiple developers.
>I think this is quite a discriminating practice - what if i don't do public projects?
Then either start one or resign yourself to not getting the best possible tech jobs out there?
As a discriminatory practice it's probably one of the most benign. There's a low bar to putting open source code out there. You can write a simple project and put it out there in a couple of weekends. That alone puts you ahead of about 80% of candidates.
It's a way better discriminatory practice than the current default assumption that the best developers out there are white, male twenty-somethings.
> default assumption that the best developers out there are white, male twenty-somethings.
Don't you think that there's a higher proportion of white, twenty-something people who would have public projects (because they tend to have the most spare time)?
Don't you think that the over-worked CRUD programmer at an obscure insurance company wanting to get out because they are in a difficult situation (may be financially, may be race/background etc), would find it hard to have time for a public showing of their coding prowess?
No one is saying don't use public projects, but I would only consider them after other aspects, and I certainly wouldn't cull people because of it.
>Don't you think that the over-worked CRUD programmer at an obscure insurance company wanting to get out because they are in a difficult situation (may be financially, may be race/background etc), would find it hard to have time for a public showing of their coding prowess?
Like I said: the amount of time you realistically need to get something out there is about a couple of weekends.
The bar is not necessarily as low as you think. At companies like Amazon you have to get approval for even the simplest personal projects, which could take months if you're unlucky.
Generally speaking, I preferentially hire developers with public projects, though I do not just pick one at random; I ask for one they're particularly proud of. I think open source development is virtuous and is something that I want to promote and encourage; in the process, I get what I think is a better view into how somebody works.
If you don't do public projects, then you start a few steps behind the other candidates. This isn't insurmountable, but the error bars are wider, making you a somewhat greater risk.
Only in the weirdly defensive world of Hacker News does preferring people who contribute to open source software imply throwing out resumes without it, does it imply "missing out" on those people. Of course I consider resumes without open source work. It would be stupid to not do so. But I will look first at those with a publicly auditable track record. Is that a surprise? I look, earlier in the list, at people who've worked at a number of large firms with strong technical teams, too. They shrink the error bars on hiring.
Though, even if it didn't mean I was getting a view into how they actually work, I'm okay with preferring public-minded people who give things back and make the world a better place through it. I try very hard to live by the motto "pay it forward" and I find it to be rewarding and pleasant to work with those who do likewise.
> Then pay them for it.
My last two employers did exactly that. Should I need to hire directly for my consulting adventures, I'll do it there, too. I mean, this reply makes no sense to me: why would I not?
So I have a small side project hosted on Heroku. Getting stuff working on Heroku was a pain, and it involved a commit for everything I changed while trying to make it work. How would you view a ton of commit messages in that context?
> Sadly there's (seemingly) no way to interview people for this ability
In a recent interview, I spent 45 minutes pairing with a current employee. Our goal was to take a (contrived) piece of legacy code, clean it up, and add a new feature to it. I imagine it was a very telling experience for them, and I feel it was more useful than the whiteboarding we'd done before.
Hopefully we as an industry will keep iterating on this sort of interview.
Yeah, that is cool and lets you get a hand on the job and a feel for what you'd be working on, but it makes me a little wary. Doing work for some employer without getting paid is a red flag. It's not a total klaxon blaring, but it does raise eyebrows.
Spending 45 minutes working on "a contrived piece of legacy code" with an employee competent enough to evaluate your effort doesn't sound to me like "doing work" for the employer. I get what you're saying, though... I think if the evaluator were incompetent and you were solving an important problem for them (things that would be pretty obvious), it would be an awful interview. But hey, that's 45 minutes well spent if it shows you for sure that you wouldn't want to work there.
Pairing makes this seem less egregious to me, even if the company is getting some value from the process. In a traditional interview, both you and the interviewer would be spending the same amount of time, and arguably learning less about each other.
Asking for some work to be done off site before an initial on site interview, where the potential employer is getting something of value, would be a much bigger red flag to me, personally.
My policy for this is that we have a pairing exercise that either: (a) is on sample code that the candidate brought, (b) an open source project with the end-goal being a pull request, or (c) a Make Work exercise. I do not want to have a candidate working on our real code precisely because I do not want to have the charge of unpaid work levelled against us.
Well, to get to the point of being interviewed, you had to have some education (either formal or self taught, or both). This involved writing a lot of code (homework assignments, etc) that you didn't get paid for. And you had to write / polish up your resume, again without getting paid for it. And you had to drive to the site without compensation (unless the place is big enough to fly candidates across country for the interview). So why would actually demonstrating your competence raise a red flag? Either they like what you did and you get hired (in which case you are getting paid for the work), or they don't like it, won't use it, and it is no worse than putting a bit of work into a homework assignment that you ended up getting a C on.
A for profit company isn't benefiting from those activities, though. I can easily see a situation where a less than scrupulous company brings in a bunch of candidates to do this in order to get free labor.
There's a lot of hate for whiteboard coding. Having been on both sides of whiteboard, code reading, and code pairing interviews, I like them all for different reasons:
Code reading: In <10 minutes, I can get a basic grasp on someone's ability to read, understand, and explain new code. Usually simple examples of <50 LOC in the language of their choice. I've used it a lot in phone screens (just providing a URL to the code) and gained a reputation for being able to predict fairly well the quality of a code submission to a "take-home coding challenge".
Whiteboard: In ~15-20 min, I can get a grasp on how someone takes a vague problem and works toward a solution, and what they are like to work with. I'm actually less concerned about the code, and more about how they handle ambiguity, flesh out requirements, communicate issues, and think holistically without the distraction of a compiler/syntax-highlighter/etc. I think whiteboarding is slightly more about communication than coding and can provide a lot of context in a short amount of time. Of course, not all whiteboarding interviews are equal: I spent plenty of days complaining about certain whiteboarding interviews, and now I'm clearly biased to believe that mine take the good and leave behind [most of] the bad.
Code pairing: Takes ~1hr to understand and modify some piece of code in a meaningful way. (Maybe there are simpler versions - but in an effort to feel more "realistic", I've always focused on changing a requirement to the candidate's code submission.) It's more realistic than whiteboarding, but it carries the [real-world] overhead of getting syntax right, finding docs for a specific function, and other surprises that come up (network issues, desktop crashes, editor problems, etc.). I've seen it work well when there are several interviews that focus on very different skills, but inevitably after such an interview, I've often felt like I had a very narrow view of a candidate.
I've done a few similar things at my previous employer. Basically, I took it upon myself to write internal reporting applications, with much success. I've also done some other small things (AHK, etc.) which worked out well.
Point is, I've been modestly bringing these up during interviews when asked what I've been doing the past few years. It is always glossed over.
I don't think companies want or recognize that they want someone who will be able to handle these tasks. If they knew these tasks needed to be done, they'd hire for it, right?
Seems some of the most important things that can be done for a company may be hidden to the executive team.
I was fortunate to be a valued player on a small team. But what you're talking about is basically trying to hire yourself in as a tech exec when you're just applying for a job as a programmer. I think that's because management (generally) doesn't really want to hear that they've got a blind spot even if they do.
Ultimately all you can really do is try and work your way into a situation where you have some kind of meaningful equity. Founder, co-founder, early hire, buy a piece of a company, whatever. That's the only way to really get rewarded for your skills; knowing enough about the state of the art to see what can be done better and also having the skills to see those projects through to completion and realizing the benefits thereof.
I wish there was a job title for "good all-arounder with knowledge of the state of the art in a role with substantial self-direction", but I think that's probably "founder" or something. Or "owner of a very small business" maybe?
> I wish there was a job title for "good all-arounder with knowledge of the state of the art in a role with substantial self-direction", but I think that's probably "founder" or something. Or "owner of a very small business" maybe?
That's the way I am drifting. I just can't seem to get in gear on my own. :(
Part of it is social and general anxiety. I'm afraid of putting myself out there, having my ideas and implementations judged, and finding both coworkers and customers.
Part of it is having trouble focusing on a single idea. Finding the right partner would help here: see #1.
Part of it is fluctuating levels of depression sapping my motivation outside, and sometimes inside, work. I have been fighting this for almost a decade. At one point several years ago the mere thought of doing things outside of work so reinforced how miserable I was in my day job that I couldn't get anything done.
I'm slowly working through my barriers. My most recent dip into the job market has demonstrated that the only way I am going to get the job I want where I want it is to make it myself, so I have renewed motivation to get moving.
Where are you located, and what are your qualifications? I'm NYC, Python/full stack. Email me!
I'd be interested in working on a project together (I have a recent "good idea" but it will need to be built from the ground up). I don't have a lot of money, but I'd be fine with putting some capital down for hosting, domains, etc.
It would be very casual. No pay either way of course. Ownership is something to discuss.
For your first part, I feel everyone has that to some degree. You just have to do it. Email startups.
In my case, it's the "lot of work" issue that is the problem. It's really easy for me to ignore the work I don't understand very well (marketing) or don't enjoy very much (which is largely a function of not understanding it very well) in favor of raw programming.
I have a couple of friends doing a startup in the same space in which I'm writing some software, and their software is nowhere near as far along as mine, yet I don't have anywhere near the "business development" that they do. I'm not there because it's too much work for one person and I've focused on the tech. Their tech isn't that great because it's too much work for one person and they've focused on the biz.
Aaaaah, I just don't know. They're pretty inexperienced, mostly just straight out of college. I'd like to find one person who is as passionate and experienced about business development as I am about software development. Instead, I mostly get contacts from guys who've worked in government offices all their lives, looking for a free programmer to boss around on their "1 in a million idea".
Why don't you join forces with your friends? If your tech is as far advanced as their business development, it sounds like you could join as an equal founder.
Well, they're pretty sold on their idea (which doesn't interest me) and they have one or two other people involved (so I'd be coming in as an outsider anyway). I've met their developer once and he seemed like a nice guy, though very inexperienced (not to speak ill, he's just young). So basically, they have a team together and I don't want to break them up before their time, if that time should ever come.
Some of it is selfishness on my part, too. I want as much ownership and control as I can keep, and I want it to be on my ideas. I've tried working on other people's ideas before and it just doesn't move me. I have a pretty clear vision of where I want to go, both tech and business. I guess I'd rather keep going slow right now than get side-tracked working on someone else's ideas.
If their current project ever tanks, they might be interested in coming on board with me. I think we're both just trying to see where everything we're currently working on is going.
Other than "virtual reality" with some aspect of "multiplayer", they don't. I'm going after productivity applications. They're going after social networking.
Back when I worked corporate I loved when I got time to make the apps I was responsible for better. Getting free time to refactor an outdated procedure, written by four different developers who'd all left, that just barely worked, albeit slowly, was something I enjoyed. As was getting a couple of days for minor bug fixes and small feature adds. For the "late lifecycle" apps (read: they worked OK, so almost zero time was spent updating them beyond major crashes, and no major versions were scheduled), I happily had the freedom to sit with the users directly during those bug-fix and feature days and ask them how they used the software in their jobs. So I got to implement unique features based on how they actually needed to do their jobs, features previous developers hadn't thought to add because they'd only gone through the official process: users get together and ask for features, the department head filters it, hands it to our department, our department head filters it again, and assigns it to one of us to do in spare time not spent on more important software.
That sounds really nice. At my current position, we never get time just to refactor, though supposedly that time is "just around the corner." We are always cranking against tight deadlines. Consequently, we end up with some pretty jacked up code, since as the saying goes "you never get it right the first time."
It was. It wasn't as often as I'd have liked. But it was nice to have once in a while. And the users loved it. It's amazing how much someone likes a developer sitting down next to them applying their full ability to try and make their job better.
There's no way to interview for what you're looking for, but compilers seem to require both great organizational skills and some algorithmic chops. There's enough complexity that you can't hide a lack of one side or the other.
Anyway, asking about building and enhancing a compiler seems like one of the ways to get insight about how people grow code. Every program is a seed. Is this the kind of guy that will end up with a mighty oak tree, or kudzu in 10 years?
Compilers aren't real-time, distributed, multi-user, ... basically anything that is hard from a software engineering perspective which includes deployment and testing. Compilers have reproducible test cases (if they don't, that is a bug in itself). You can catch regressions in a compiler with an automated test suite, and bisect their version history to see where it was introduced. Bug reports from the field tend to be easy to confirm. A compiler isn't going to work well when it has 10,000 users, but mysteriously spiral to a crash when there are 10,100.
Compilers have been developed by lone developers in isolation. Some compilers are only a few thousand lines of code.
Someone who can demonstrate knowledge of making a compiler could well be completely behind in their knowledge of language design. They can show how calculations are turned into a tree and then into code on a register machine (i.e. "for-mula tran-slation"), but have no idea how to compile exception handling, closures, or advanced things with types (beyond just checks), and such.
That is to say, the bar for what can be legitimately called a compiler is quite low.
On the topic of languages, I'd rather interview someone who knows about a lot of modern language features and how they can contribute to the improvement of program organization, yet is foggy about the details of how some of them might be compiled to machine code.
There is still "compiler worship". On a previous job, I fixed a code generation bug in gcc affecting MIPS targets. Everyone was talking about that: "like wow, he fixed gcc".
If the point is looking for organizational skills, I think compilers are legitimate. They're big complicated beasts. Sure, some people in isolation can make amazing things with a few thousand lines of code - I think most people would be happy employing Fabrice Bellard.
Writing a C compiler seems like one of those standard undergrad activities. If a programmer can make that work, they probably have sufficient organizational skills. It seems like real-time, distributed, multi user app development is a separate filter.
I guess it's the distinction between looking for someone who is capable of solving those problems vs looking for someone who has solved those problems.
"Tell me about writing a compiler" seems mostly to be a question intended to restrict your hires to fresh grads who took a compiler course or if you're trying to poach compiler writers from another company. You're not likely to get a very useful response from someone who hasn't written one recently or at all. You could just as well ask "Tell me about writing a POSIX shell", or "Tell me about writing a browser", or basically any "Tell me about writing a <FOO>". It's not a useful question unless you're specifically hiring them to write a <FOO>.
It's both. If it doesn't work with threads, there is no concurrency.
If the threads have to stop, it's still concurrency in the sense that threads are stopped in arbitrary places (just like an interrupt can stop a task at any point in its execution and dispatch another task or whatever).
But the garbage collector isn't running concurrently with the threads.
I.e. I think the term "concurrent garbage collector" is understood to be in contrast with "stop-the-world garbage collector" not with "synchronous garbage collector as a library function in a single-threaded virtual machine".
So when you say "real GC, not reference counting", you have to understand that tracing GC and reference counting are duals, and every concurrent collector is going to be some hybrid of the two [1].
But there are a number of algorithms that most people would probably consider "real GC" that don't stop the world. Modern Java GCs use the train algorithm [2], which bounds pause times to some arbitrarily-sized constant. Erlang per-process heaps [3] also give good soft-real-time characteristics.
It is theoretically possible to run the whole GC in a background thread (eg. while a GUI is idle). Both the JVM and the CLR have options for this [4][5], but they're only useful for workloads with large pause times (eg. GUI apps, lightly-loaded servers) and so aren't enabled by default.
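To make the tracing side of that contrast concrete, here is a toy stop-the-world mark-and-sweep pass (my own sketch, not any of the collectors cited above). Note that it happily reclaims the unreachable cycle, which naive reference counting alone would leak:

```python
# Toy mark-and-sweep collector: a heap of objects, each holding
# references to other objects; anything unreachable from the roots
# is garbage, cycles included.
class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []
        self.marked = False

def mark(roots):
    # Trace: flag everything reachable from the roots.
    stack = list(roots)
    while stack:
        o = stack.pop()
        if not o.marked:
            o.marked = True
            stack.extend(o.refs)

def sweep(heap):
    # Keep only marked objects; clear marks for the next collection.
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False
    return live

a, b, c, d = (Obj(n) for n in "abcd")
a.refs.append(b)           # a -> b: both reachable from the root
c.refs.append(d)           # c <-> d form an unreachable cycle:
d.refs.append(c)           # their refcounts never drop to zero
mark([a])
heap = sweep([a, b, c, d])
print([o.name for o in heap])   # -> ['a', 'b']
```

The "stop-the-world" part is exactly that nothing else may mutate the object graph between `mark` and `sweep`; the concurrent algorithms above exist to relax that constraint.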
In typical software today, much of the complexity arises from interactivity. There are user actions, network events and all sorts of contingencies that make the program flow non-deterministic.
A compiler doesn't have that. It's blissfully old-fashioned, really: read files in, write files out -- no interruptions or real-time requirements. You could do compilers on punch cards.
In a way, the compiler is the most complex piece of software that you could imagine in 1960... But software has moved on and faces a different kind of complexity.
Hence I'm not sure if asking about compiler architecture is really representative of the organizational skills needed by today's software engineering.
I used to work on the Delphi compiler, for 6 years or so. That compiler was not a simple function of file input to file output. It lived in an IDE. It had to do incremental compilation, throwing away the compiled version of an in-memory file and recompiling it, but avoid recompiling the whole world of downstream dependencies if not necessary. It had to provide code completion, which meant parsing and type analysis of a file, but skipping codegen, with reasonably sophisticated error recovery. It had to provide the debug expression evaluator, which lived intermixed with the constant evaluator, and needed to know how to set up calls to symbols in the program being debugged. And it had to do all of these things in a long-lived process, with no memory leaks and high reliability.
The degree of asynchronicity is low, I agree. The most significant was just interrupting compilation on user request. But the complexity was not trivial.
These days I work at a startup on a full stack SaaS; C++, Java, Rails, and Coffeescript with as few blocking browser actions as possible. It is far less complex than compiler work, but it pays better. I've never had to debug OS code to diagnose runtime library issues in this job; nor write manual structured exception handling. I haven't had to try and build stack frames dynamically to implement reflection calls, nor unpick stack frames dynamically to implement virtual method interception. The relative complexity of potential races with SQL read-committed, or writing UI such that it can't become inconsistent, really doesn't compare.
Compilers are a nice example because they require a lot of code, and all the code has to be right. If you can't impose enough organization there, you don't have a chance at getting a distributed event driven system to work.
Compilers are simple. They are organized as a pipeline. Frontend, middleend, backend. In more detail: lexer, parser, semantic analysis, optimization 1 to n, instruction selection, scheduling, register allocation, assembly.
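As a hedged sketch (toy stages and names, nothing like a production pipeline), the shape described above is just function composition, each phase a plain function the driver chains in order:

```python
import re

def lex(source):
    # Tokenize integers and +/- operators.
    return re.findall(r"\d+|[+-]", source)

def parse(tokens):
    # Fold the flat token list into left-associative (op, lhs, rhs) tuples.
    ast = int(tokens[0])
    i = 1
    while i < len(tokens):
        ast = (tokens[i], ast, int(tokens[i + 1]))
        i += 2
    return ast

def codegen(ast, out):
    # Emit instructions for a toy stack machine.
    if isinstance(ast, int):
        out.append(("PUSH", ast))
    else:
        op, lhs, rhs = ast
        codegen(lhs, out)
        codegen(rhs, out)
        out.append(("ADD",) if op == "+" else ("SUB",))
    return out

def compile_expr(source):
    return codegen(parse(lex(source)), [])

print(compile_expr("1 + 2 - 3"))
```

Each stage can be tested in isolation, and adding an optimization pass means inserting one more function between `parse` and `codegen`.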
OK, you can make it more complex. I work on a compiler which does it lazily. Transformations are per file and depend on each other. If something goes wrong, you print out the graph to figure out the problem. No idea what the benefits could be. Might actually be a good example of where it could have been simpler.
Yes, but there are a bunch of tradeoffs. Do you try to lex the floating point format, or push that back to the parser? Do you introduce an extra pass for constant folding, or do you try to shoehorn it into AST generation in the parser?
The nice thing about compilers is that they require a ton of code.
It's the kind of thing that any programmer (well, any good one) can do, and there are organizational tradeoffs to be made. There is no "right" answer; it's an opportunity to talk about organization. There are blind alleys that seem like a good idea, but turn out not to be. There are opportunities to merge and split passes. Some layers might want information from other layers, some layers might look very similar to other layers.
Something I just realized that I wanted to write, but didn't, was that I generally don't do well in whiteboard interviews. I've done a few and it's always algorithms that I didn't learn in school because I was EE/CE (not CS) and while I took Data Structures and Algorithms, I didn't take Advanced Data Structures and Algorithms or anything like that. I took two compilers classes (which were great!) and a pile of classes dealing with hardware and the practical realities of computers rather than the theoretical aspects.
I'm pretty happy that I did what I did in school, but I wish that people didn't rely on the whiteboard situation as much. If I can talk about what I did to find the edges and corners of a piece of mail in a picture, and how I used that to rotate the image to be square and then crop it so that all you see is the letter and not the background, perhaps my ability to do some kind of whiteboard coding exercise isn't terribly relevant. The odds that I can answer questions about how I developed such an algorithm while still not being able to actually write code are pretty small.
You're hiring me to solve business problems and last I checked FizzBuzz or reversing a singly-linked-list aren't business problems. End Rant.
The interview simply doesn't have the time required and both sides will have to be okay with spending more time evaluating each other if we want to move forward.
Also we simply need to teach people the things that we want them to know after we hire them instead of looking for perfect fits.
You could give them some mild spaghetti, and ask how they'd try to improve the code.
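For example (entirely hypothetical code, just to illustrate the flavor of such an exercise), the "mild spaghetti" might be a pricing function with the discount rule copy-pasted into every branch:

```python
# Before: the bulk-discount rule is duplicated in every branch.
def price(item, qty):
    if item == "book":
        if qty > 10:
            return qty * 5 * 0.9
        return qty * 5
    elif item == "pen":
        if qty > 10:
            return qty * 1 * 0.9
        return qty * 1
    elif item == "mug":
        if qty > 10:
            return qty * 3 * 0.9
        return qty * 3

# After: the data (unit prices) is separated from the one rule.
UNIT_PRICE = {"book": 5, "pen": 1, "mug": 3}

def price_clean(item, qty):
    total = UNIT_PRICE[item] * qty
    return total * 0.9 if qty > 10 else total
```

Whether the candidate spots the duplication, names the rule once, and keeps the behavior identical tells you a lot more than a whiteboard algorithm does.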
I think one reason why what you did doesn't happen more often, though, is because they wouldn't have the freedom to reimplement in a language they felt was more suitable.
Posted on 18 June 2015 by John
Here’s an insightful paragraph from James Hague’s blog post Organization skills beat algorithmic wizardry:
When it comes to writing code, the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity. I’ve worked on large telecommunications systems, console games, blogging software, a bunch of personal tools, and very rarely is there some tricky data structure or algorithm that casts a looming shadow over everything else. But there’s always lots of state to keep track of, rearranging of values, handling special cases, and carefully working out how all the pieces of a system interact. To a great extent the act of coding is one of organization. Refactoring. Simplifying. Figuring out how to remove extraneous manipulations here and there.
Algorithmic wizardry is easier to teach and easier to blog about than organizational skill, so we teach and blog about it instead. A one-hour class, or a blog post, can showcase a clever algorithm. But how do you present a clever bit of organization? If you jump to the solution, it’s unimpressive. “Here’s something simple I came up with. It may not look like much, but trust me, it was really hard to realize this was all I needed to do.” Or worse, “Here’s a moderately complicated pile of code, but you should have seen how much more complicated it was before. At least now someone stands a shot of understanding it.” Ho hum. I guess you had to be there.
You can’t appreciate a feat of organization until you experience the disorganization. But it’s hard to have the patience to wrap your head around a disorganized mess that you don’t care about. Only if the disorganized mess is your responsibility, something that means more to you than a case study, can you wrap your head around it and appreciate improvements. This means that while you can learn algorithmic wizardry through homework assignments, you’re unlikely to learn organization skills unless you work on a large project you care about, most likely because you’re paid to care about it.
the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity
Agreed.
very rarely is there some tricky data structure or algorithm that casts a looming shadow over everything else
Agreed, BUT...
In order to organize, sooner or later, you will have to get clever (with tricky data structures or algorithms).
How I have always built something big and/or complex:
1. Add lines of code.
2. Add lines of code.
3. Add lines of code.
4. Holy shit! What have I done?
5. Refactor.
6. Combine similar sections.
7. Genericize with parameter-driven modules.
8. Still too many lines of code! Optimize Step #7 with something clever.
9. Go to Step 1.
2 years later: What the hell was this clever parameter-driven multi-nested process for? Why didn't I just code it straight up?
For me, organizing complex code has always been a delicate balance between readability and cleverness.
Don't remove enough lines of code and it's too much to navigate. Remove too many and it's too much to comprehend.
Organization + Cleverness + Balance = Long Term Maintainability
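A hedged illustration of step 7 taken too far (hypothetical names, my own example): the "clever parameter-driven multi-nested process" versus just coding the two call sites straight up:

```python
# The "clever" generic module: every caller threads flags through it.
def transform(values, *, scale=1, offset=0, reduce_to_sum=False):
    out = [v * scale + offset for v in values]
    return sum(out) if reduce_to_sum else out

# Two years later, the straight-up versions read instantly:
def celsius_to_fahrenheit(temps):
    return [t * 9 / 5 + 32 for t in temps]

def total_with_markup(prices, markup=1.1):
    return sum(p * markup for p in prices)
```

The generic version saves a few lines today; the plain versions cost nothing to re-read in two years.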
> Nobody is really smart enough to program computers. Fully understanding an average program requires an almost limitless capacity to absorb details and an equal capacity to comprehend them all at the same time. The way you focus your intelligence is more important than how much intelligence you have.
> At the 1972 Turing Award lecture, Edsger Dijkstra delivered a paper titled "The Humble Programmer." He argued that most of programming is an attempt to compensate for the strictly limited size of our skulls. The people who are best at programming are the people who realize how small their brains are. They are humble. The people who are the worst at programming are the people who refuse to accept the fact that their brains aren't equal to the task. Their egos keep them from being great programmers. The more you learn to compensate for your small brain, the better a programmer you'll be. The more humble you are, the faster you'll improve.
I've always described myself as too dumb to write clever code.
I try to imagine that 6 months from now I'll have a critical bug in whatever I'm writing and so I write it for that.
Apropos of nothing, I'd love an editor that could completely hide (not collapse) comments. When I'm writing code I don't need the comments I've just written, so hiding them would be great, but being able to toggle them back on later would be awesome.
I find that the older I grow as a programmer the less I enjoy cleverness. There's never a good enough reason to use clever algorithms over boring ones. In fact, most of the refactors I did over the past year were to 'undo' cleverness (thankfully those programmers are no longer working here!)
In terms of performance you sometimes need to do less conventional things, but that's precisely where documenting WHY you're doing it that way is so important.
The small bit of performance you're going to gain isn't worth the maintenance hell it's going to cause.
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. - Brian W. Kernighan
Being organised has nothing to do with fewer lines of code. A 10 line script can be disorganised and a million LoC app can be well organised. What makes code organised is things not being tangled up together - having well encapsulated logical modules that do one thing (or a set of related things), with a well designed API to get each bit to work with the other bits.
I've found that the more I write testable code the better organised my code gets. Automated testing forces you to design maintainable, organised code - because your test suite really won't cope with disorganised code.
> I've found that the more I write testable code the better organised my code gets. Automated testing forces you to design maintainable, organised code - because your test suite really won't cope with disorganised code.
It can, but it will get out of hand really quickly. In the worst case the test suite becomes so tightly bound to the implementation that it becomes a part of it, forcing the suite to be rewritten whenever the implementation changes. That scenario must be avoided at all costs.
An insight I've had here is to write the code the same way I would have as if I'd done red-green-refactor TDD, but then add only the essential unit tests at the end. Works very well and is much, much faster, but obviously you'd have to have done a lot of red-green-refactor TDD previously in your career or on side projects to really know what testable code looks like. Reading Working Effectively with Legacy Code by Michael Feathers may be a shortcut to that knowledge as well.
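As a small sketch of what "testable code" tends to look like in practice (my own example, hypothetical names): keep the logic pure and push the I/O to a thin edge, so the unit tests never touch the filesystem and never bind to implementation details:

```python
def summarize(lines):
    # Pure logic: trivially unit-testable, no setup or teardown needed.
    non_blank = [ln for ln in lines if ln.strip()]
    words = sum(len(ln.split()) for ln in non_blank)
    return {"lines": len(non_blank), "words": words}

def summarize_file(path):
    # Thin I/O wrapper: the only part that needs an integration test.
    with open(path) as f:
        return summarize(f)

# The unit test asserts on behavior, not on internals:
assert summarize(["hello world", "", "bye"]) == {"lines": 2, "words": 3}
```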
There are two kinds of cleverness, and both require balance in their application.
The first is the more obvious kind: exploiting various conditions that occur in your code, in order to come up with a nonstandard solution. E.g. using if (x mod 15 == 0) in fizzbuzz. The problem with this kind of cleverness is that it's hard to understand the code, and it breaks whenever the conditions it relies on stop holding.
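Concretely (the mod-15 trick from above, next to the plain spelling): the clever condition works only because 15 = lcm(3, 5), so it silently breaks if a third rule is ever added, while the accumulating version extends with one more if:

```python
def fizzbuzz_clever(n):
    if n % 15 == 0:      # exploits 15 == lcm(3, 5)
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def fizzbuzz_plain(n):
    out = ""
    if n % 3 == 0:
        out += "Fizz"
    if n % 5 == 0:
        out += "Buzz"
    return out or str(n)

assert all(fizzbuzz_clever(n) == fizzbuzz_plain(n) for n in range(1, 101))
```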
The second kind is more subtle. It is where you organize code to properly express the nature of the problem, but this organization is itself conceptually complex. A typical example is using C++ templates where they make logical sense, but they make the code that much harder to read.
The second kind is where balance is needed most. If you use too little cleverness (or maybe too much humility), you end up with spaghetti code, or code where all the organization is done via convention instead of enforced by the compiler. If you use too much cleverness, you end up with code that is hard to read, and brittle, since so many assumptions are embedded in the code.
I always liked Linus's quote about bad programmers worrying about the code and good programmers worrying about the data structures. I often find reorganizing my data models can cut down on a lot of code.
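A trivial instance of what that reorganization looks like (hypothetical example of my own): a chain of branches collapses once the facts move into a data structure, and adding a case becomes a data change rather than a code change:

```python
# Before: every new country is a code change.
def shipping_v1(country):
    if country == "US":
        return 5
    elif country in ("CA", "MX"):
        return 8
    else:
        return 20

# After: the data model carries the knowledge; the code is a lookup.
SHIPPING = {"US": 5, "CA": 8, "MX": 8}

def shipping_v2(country):
    return SHIPPING.get(country, 20)
```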
One of the problems I hit was that I would get really grumpy when particular things were asked for.
Once I learnt to think about why I was getting grumpy, and to vocalize the actual problem, it got easier.
e.g. A client said to me recently: could clicking the blog link open a new window? When I started programming I'd have said "Yes! I can do that!". Then after a few years I'd have got grumpy about it and said something passive-aggressive like "I guess so. That's not normal though...". Now I think about it, realize I hate it when links on the web behave unexpectedly, and that's what's making me grumpy. Then I explain to the client that you shouldn't take control of other people's browsers on public websites, and that the better way is to have a clear link on the blog to get back to the shop.
Related to what you are saying, I read somewhere, many years ago, that the best thing to do in cases where you feel like being grumpy or dismissing something that was asked for was to, instead, ask 'why'.
The reasoning is that sometimes what the client is asking for is his best guess at meeting a need he has. By asking why, chances are you'll get to the actual need and, oftentimes, find a better solution where both the client and you are happy.
Over the years as a grumpy developer, this piece of advice has served me wonderfully well.
I agree, diplomacy is a critical skill. Grumpiness is basically anger, and all anger comes from pain. Programming is fun, but getting yelled at because we missed a deadline or got some feature slightly wrong is painful. When Suzy asks for a change to the link, she's not trying to cause pain, but it's easy for all that previous pain to come channeling through and end up all over her. Road rage works the same way.
I'd say it's not really about "organizing complexity" - because that tends to just push it around somewhere else - but reducing complexity which is most important.
In my experience it has been that designs which look "locally simple" because they have such a high level of abstraction are actually the most complex overall. Such simplicity is deceptive. I think it's this deceptive simplicity which causes people to write a dozen classes with 2-3 methods of 2-3 lines each to do something that should only take a dozen lines of code (this is not that extreme - I've seen and rewritten such things before.)
Perhaps we should be focusing on teaching the techniques for reducing complexity more than hiding it, and that abstraction is more of a necessary evil than something to be applied liberally. From the beginning, programmers should be exposed to simple solutions so they can develop a good estimate of how much code it really takes to solve a problem, as seeing massively overcomplex solutions tends to distort their perspective on this; at the least, if more of them would be asking things like "why do I need to write all this code just to print 'Hello world', and the binary require over a million bytes of memory to run? Isn't that a bit too much?", that would be a good start.
“Here’s something simple I came up with. It may not look like much, but trust me, it was really hard to realize this was all I needed to do.”
This reminds me of something Scott Shenker, my computer networking professor at Berkeley, drilled into us every chance he got: Don't manage complexity. Extract simplicity.
Finding complex solutions to complex problems is comparatively easy. Finding simple solutions to complex problems is hard.
I can't second this enough. This is why it's so important to throw away your prototype code, sometimes, instead of trying to fix it. It's also the big value-add of TDD: Well written tests make that extracted simplicity explicit.
This times 1000! The really sad thing is that 9.9 out of 10 technical interviews are all about how many algorithms you know, even if your job will never require actually implementing a single one.
These interviewers do not seem to care at all about the actual process of writing and refactoring code, or what your finished products actually look like.
I know plenty of programmers who can write highly optimized code, but do so in a way where it is completely impossible to maintain.
It's especially heightened when you are given a problem such as "validate if this uses valid brackets, and you have 10 minutes." When under time constraints to solve what is basically an algorithm, 99% of programmers are not going to write their best code or write code in the way they would normally write it.
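(For reference, the exercise being alluded to is the classic stack scan, something like:)

```python
PAIRS = {")": "(", "]": "[", "}": "{"}

def balanced(s):
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)            # remember the opener
        elif ch in PAIRS:
            # a closer must match the most recent opener
            if not stack or stack.pop() != PAIRS[ch]:
                return False
    return not stack                    # no openers left dangling

assert balanced("f(a[0], {b: 1})") and not balanced("([)]")
```

Ten lines, but producing them cleanly with someone watching a timer is a very different skill from producing them at your desk.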
If you are using lots of algorithms in your programming interviews, I suggest you take a step back and determine whether those skills are actually what you want to be testing for in this job. Odds are that it is NOT algorithms. Give your candidate some really sloppy code to refactor into something beautiful. Give them a few hours to work on it, and watch how they put the pieces together.
If your position isn't going to require someone to write advanced algorithms on daily basis, testing for them only cuts out a huge swath of potential talent. I also think it probably leads to less diversity in your work place, which is a bad thing.
A Web Developer will never need to solve the towers of Hanoi problem, but they will need to write clean code that can be maintained by others.
And as someone who is about to graduate computer science, your rant makes me hopeful :)
I am much better at refactoring and finding patterns than memorizing algorithms and other things (mainly because they are relatively easy to look up). Do you suggest that I try to strike a specific balance between practicing dealing with complexity, optimizing and making code readable, versus actually trying to memorize and implement certain algorithms for learning's sake?
I'm not a computer science grad, and it puts me at a huge disadvantage simply because I never had any formal training in algorithms. And my day job is to write testable, well-designed production quality code. I don't get to spend a lot of time rewriting algorithms, which, as you note, can be easily looked up. The fact is that as a full-stack developer I can count on zero fingers the number of times I've had to use a self-made algorithm in a production system.
That being said, I do force myself to practice algorithms through various coding challenges, only because I know that this is the specific skill that will land you a job. Which is a shame.
That is not to say that there are not positions that really do require this specific skill, but the vast majority of development positions don't and yet nearly every employer tests their candidates as if their success depends upon their ability to solve an algo in under 5 minutes, instead of their ability to write a good library.
It's a disservice to those of us who do not have CS degrees and have learned to code by spending a great deal of our time writing code.
I agree that it is an important skill in engineering to recognize when the complexity got too high - At that point you need to take a step back (or several steps back) and find a different path to the solution - Sometimes that means throwing away large amounts of code. It's about acknowledging (and correcting) small mistakes to make sure that they don't pile up into a giant, disastrous one.
Another thing I learned is that dumb, explicit code is highly desirable - It's better to have 10 dumb components to handle 10 different use cases than 1 clever component that can handle all 10 cases.
I think the most important skill is being able to break down problems into their essential parts and then addressing each part individually but without losing track of the big picture.
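The "dumb components" point above can be sketched in a few lines. This is a hypothetical report-formatting example (all the names here are made up for illustration, not from the thread):

```javascript
// "Clever": one component handling every case via option flags —
// each new case makes it harder to read and riskier to change.
function formatValue(value, opts) {
  if (opts.currency) return '$' + value.toFixed(2);
  if (opts.percent) return (value * 100).toFixed(1) + '%';
  return String(value);
}

// "Dumb": one small, explicit function per use case —
// each is trivially readable and safe to change in isolation.
function formatCurrency(value) { return '$' + value.toFixed(2); }
function formatPercent(value)  { return (value * 100).toFixed(1) + '%'; }
```

The dumb versions carry no hidden modes: a reader can tell from the call site exactly which behavior runs.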
> Another thing I learned is that dumb, explicit code is highly desirable
I knew for a long time that "dumb" code was easier to reason about, but it wasn't until I started looking at compiler output and profiling my code that I realized being "clever" in the source text really didn't matter. Now I'm quite happy to write code in the most straightforward fashion, rather than foolishly "compress" things. I imagine the same experience could be enlightening for a lot of newbie and intermediate programmers.
There are definitely exceptions to this (generally correct) rule, in situations where performance does matter. RAM is still incredibly slow compared to the CPU caches, so painstaking cache-optimization can make a huge difference when that's the bottleneck.
Of course, if you're just writing some I/O bound program anyway, it hardly matters.
I agree. I like to remember that I'm passing code into the future where I might not be working on it anymore. Clear, well named code helps the next developer far more than an obtuse algorithm that smashes all logic into a single line.
Almost always this is true. But mature code has been iterated over, and in some critical places the code has been optimized out of clarity. Nothing helps then but a document; comments are just not enough to explain the crooked road that led straight to that solution.
Agreed. From my experience breaking things down into smaller pieces helps so much with initial development speed, future code change speed, and testability.
Organisational skill is actually an entrepreneurial skill. I'm learning the hard way that there are two diametrically opposing skills you need to master for both running a business and programming effectively:
1. Skills in Creating
2. Skills in Organising those creations
The thing is, the Creating part is always exciting but it's disruptive in nature. The Organising part is boring because it's about taming or tempering the creation in such a way that it can be referenced later on (just like filing your invoices, or timesheets - yuck - but necessary).
Unless you've got a systematic method to organise your creations, you will always be alone with your ideas, find it hard to resume creative chains of efforts and ultimately flounder without profit.
Both in business and in programming.
Damn right it's the most important software development skill.
This is why most of my personal toy projects are coded like a 12-year-old's, with giant functions, god objects everywhere, and horrible, horrible one-liners. It's liberating.
This is also why I like game programming that much. A big chunk of the codebase is throwaway code that won't be transported to the next project. When the final deadline comes, you don't feel bad making hundreds of hacks everywhere, and it's fun.
Martin Fowler really struck a chord with all the developers trying to do the right thing by cleaning up badly structured code, by giving the practice a name and explaining why it's important. Refactoring is definitely a widely acknowledged and accepted practice today, although probably more so in some communities than others.
This has been my experience too. I've been involved in cleaning up a half dozen or so projects that got out of control. In each case, technical complexity was blamed as the reason. After digging in, I found the following commonalities:
- Incomplete, conflicting and misunderstood requirements.
- Lots of "We never thought we would need to do X".
- Poor team communication.
- Mistrust, frequently well earned.
- Harmony valued over truth.
Once these were winnowed away, the problems rarely overwhelmed the technical teams. This isn't to diminish the importance of technical skills. Rather - when everything else is f*cked up, you can't blame the technology or expect a 10xer to pull you out of it.
The advantage of the 10x-er is that he will be reporting directly to a Jr. Executive and therefore cutting ahead of the middle management echo-chamber.
If you genuinely can write code 10x as fast as the average developer, and get fed requirements through a 10x faster channel, that would be the life! I suspect that's where the mythical 100x developer comes from.
I agree. There are actually two 10x things that can happen - sometimes in one person. One is choosing the right things to do the first time around. The other is doing them very efficiently. When you get a developer who intuitively understands the domain, and programs very quickly, the sky is the limit.
I certainly don't intend to diminish the impact of these 10x developers, just that projects generally get messed up independent of developer quality.
I like this little article a lot. Personally, I try to write code in a way that reads like a book. Lots of comments, explicit function names, explicit variable names, object names, class names, etc. Talking about languages higher-level than C/assembly here, obviously.
I am amazed at all the code I see that has terrible/too generic names for functions and variables and objects. Some people get so obsessed over short function names, one character variable names, and complicated looking one liner evaluations.
Totally agree on variable/function names, but for some reason I love complicated one-liners. I know it's wrong, but it's my way of measuring how smart I really am (even though it's not).
Much respect to you for acknowledging it! We need a 12-step program for folks like you: Complicated One-Liners Anonymous.
There is just a nice magical moment, though, when you grok what an application does and how it is organized, especially when you didn't write it, which is most often the case.
I think a large issue is when an application has new people working on it or entirely new teams that maintain it. That is when the original authors methodology and organization of the application falls to the immediate need of the day.
The functionality of the software should speak for itself. Commenting your code with why you are doing something is important to help other maintainers later on, including yourself, understand what you were thinking when it was written.
This. Organization is in the eye of the beholder. I've seen my code bases taken over and completely reversed, and frankly I have done the same. If only there was a better way to communicate exactly what you were trying to do.
I don't know if this is solvable, it could just be human nature.
Some folks like the analogy of software and building construction. I like the comparison to cooking... in this case the closest thing would be how a chef sets up their work space and how it is organized for what they are cooking.
Some like it one way, some like it another - is one way better or worse?
I would add to the conversation the fact that most projects fail at the initial set of requirements. In my experience so far, constantly changing what you want the app to do creates a huge mess in the codebase even if you use the latest tools and methodologies.
Looks like there is great value in organising your app so that you are able to throw away large chunks of code and start over in case there is a big design change.
"The art of programming is the art of organizing complexity, of mastering multitude and avoiding its bastard chaos as effectively as possible." --Dijkstra
Organization is the hardest part for me personally in getting better as a developer: how to build a structure that is easy to change and extend. Any tips on where to find good books or online sources?
Structure and Interpretation of Computer Programs.
This famous book emphasized controlling complexity and how to build a structure that is easy to extend.
Same here. I only have 2 years of experience, but right now my opinion is that this is not something you learn in a book (for the moment).
You can learn how to create clean, readable methods and classes with a book (1).
You can learn how to refactor old methods and classes with a book (2).
You can learn how to organize a small team to allow fast iterations with many books.
But building a project lasting more than a few months with constant changes in the requirements, new developers every month, new SDKs and frameworks every 3 days, without the code rotting to death and everything going out of control is a different story, at least for me.
I guess you just learn by watching old guys do what they do after decades of experience...unless someone has a magic book for me?
I just finished a book called "Practical Object Oriented Design in Ruby." It teaches you how to build OO applications that are pleasant to maintain. I highly recommend it.
My personal experience is that I had to maintain a set of applications I wrote for a company. The same application had to be used repeatedly (3-5 times a year) because it supported a specific business process.
It was mundane work, but it taught me how to organize, because it made the pain of disorganized code very real.
That one hack comes back to haunt you several times a year.
I was lucky that this was one of the first things I had learned as a developer out of school. It was a little different because the projects were small enough to be solo, but it was humbling because all of the mistakes were my own.
I disagree. Communication is the most important. It's the number one cause of failed software projects: miscommunicated features, capabilities, scope, and failure, to name a few. My favourite is the last one: not standing up and recognizing that an approach is not working, out of fear, has to stand out as a big one.
This is why Jupyter/IPython is so great: you can do some fantastic documentation of code with working examples and visualisation, and you can do it while you write it!
Summary: Toyota settled an unintended acceleration lawsuit after analysis of the source code for a 2005 Toyota Camry showed it was defective "spaghetti code."
There's a lot of poorly organized code in the world, and a typical excuse for not cleaning it up is that "it works," so there would be no return on fixing it. In the Toyota case, the code may have contributed to unintended acceleration, and did result in a legal exposure for which Toyota felt it was necessary to settle a lawsuit.
Nice post, organization definitely is one of the most overlooked aspects of programming. It takes a lot of experience and thinking to be able to organize properly. It's really what separates the beginner programmers and the experienced ones.
I agree that complexity is far up there. But also risk. Also long term thinking. And net cost or net profit. The more years I have under my belt, I think more and more not only about complexity, but also risk, cost, profit. Code and hardware is just a means to an end. Not the end itself.
But yes, seek the minimum amount of complexity to materialize the inherent, necessary complexity. But don't allow a drop of complexity more than that. Architecture astronauts, pattern fashionistas, I'm looking at you. KISS. Spend your complexity dollars where it gives you something you truly need or want. Don't do things Just Because.
If it were that easy, you could write a compiler compiler that knows all the “design patterns” and generates perfect solutions for all problems.
Remember the 239th Rule of Acquisition, “Never be afraid to mislabel a product.” That goes for methodologies, too. If they could get away with it, they would probably say it cures baldness, too.
So it would boil down to the quest for the “sufficiently smart problem specification format”. OK, aren't programming languages standard problem specification formats? Why do you suppose there are different PLs? Right, because all problems _can't_ be specified advantageously in the same format. Which brings you full circle to where you started. That's by no means a discouragement. If you invent a PL that's better than what we have now, it's still progress. But, to use a famous quote, “there is no silver bullet”.
Design patterns are excellent at creating the illusion of organisation.
I recently had to modify someone else's Mac app. On the surface it looked like a textbook example of successful use of design patterns. Everything was highly modular, functions were simple, there was clear separation of concerns, controllers and views were distinct, and so on.
Problem was, the key features were still baked in and hard to modify. So while it looked modular, it wasn't - the affordances were fixed and there was no way to generalise them without pulling apart at least a couple of levels.
IMO it's not abstract organisation that matters. Organisation is only good if it makes it easier to achieve clearly-defined goals and benefits. You need to know what the goals are, and make some guesses about what they might be in the future. Otherwise you're just putting stuff in layers and boxes and drawing arrows everywhere for no good reason.
Well said. I would add to this that design patterns do not discern between simple concerns and cross-cutting concerns and those are key elements to get right at the organizational level.
Features are by definition almost always cross-cutting concerns in that they touch half the systems that makes up the application.
Even if your systems are pure isolated islands properly decoupled from each other, they can still be tangled back into a spaghetti mess when wiring it all together.
Furthermore, design patterns define data structures rather than data flow, even the behavioral ones, making them horrible at organizing the application's features.
Er, aren't design patterns (if you take the buzzwordery away) just a map from "conventional names" to "standard algorithms"?
This article isn't about the choice of an algorithm, it's about organisational aspects. That's something different altogether, and an even larger topic.
Interesting. Most comments seem focused on code complexity. In real-life situations the complexity is a blend of human interactions (attitude, consistency, team, leads, peers, family, emotions), business, market, competition, budget, time, attrition, unexpected events, and more.
Life is complex. Business and workplace dynamics can be complex. People are complex with their own strengths, quirks and situations. Having a broad outlook, developing patience and skills to deal with life and work is part of becoming mature.
This is always a good idea. Here are some of the things I do:
- Establish conventions early: conventions in managing projects, conventions in code. And stick to those conventions. Be predictable in the code. Use simple names.
- Protect your interfaces. By this I mean: use a central place like a wiki to document your interfaces, so that all involved parties can agree on them. Write unit tests for interfaces. Use libraries like mockito and hamcrest that make it a breeze. (You lock your home every time you go out, don't you?)
- I mentioned this in the previous bullet, but write tests. Write lots of them; write tests that document any tricky, magical behavior. Cover all the cases (a boat with one hole is as bad as one with two). Cover all the invariants, anything that you notice but didn't mention in the source code. Write tests for the bugs you just fixed.
- If you are developing in Java, please use an IDE. I use IntelliJ, but Eclipse is good too. It makes refactoring code much easier: rename fields, pull classes up the hierarchy, create abstract classes, generate getters and setters automatically with refactoring tools. I am not against emacs or vi, but it is hard to manage Java's sprawl with them.
One of the best programmers I know writes code like it has been generated by a program. It is boring, dull, and looks alike in every direction. Every field and method is documented: it says what its purpose is, and why it is needed. He is very fast (fast for a startup, not your average enterprise), accurate, and gets a lot of stuff done without magic.
No part of a system is the most important part, from a car to a huge organization, all parts are equally required to interact and hence make such system 'work'.
Having a clear functional organization at the start (and respecting it throughout development) is very important, but after that it is equally important to write clean and efficient code, to test, to debug, etc. Then, going up and down the solution stack is important to make the right decisions on hardware, OS, server, services, etc.
Okay, if you want to be that detailed, you also have to keep in mind that some parts are more important than others. Example with the car: if you leave out the head-rest you might sell your car for a few bucks less or have a harder time finding a customer (low importance), but if you have a car without some kind of motor, you don't even have a car (high importance).
While there are many things that are important they all have different priorities, though.
All that said, I'm also very sceptical that there is one thing you can learn and then you are the best programmer. You have to learn thousands of things. And sometimes learning a currently low-priority thing now might save you hundreds of hours of pain further down the road. So it's really hard to say what you have to learn.
Nope, a car is not a system that was built to be 'sold', it was built to transport people/things safely. If you remove a part of a system and the system still works as intended, it was not part of the system. The head-rest can be removed and the car will still run but with less safety. Hence: all parts of the system are equally required to interact and make the system do what the system should do.
Not only is it the most important skill (unless for small home projects maybe), it's also the one which takes the longest to learn and hence the one you really improve on during the years and which distinguishes the experienced ones from the lesser experienced ones. Coincidently, it's also the skill for which you won't find a ready-made answer on stackoverflow or any other site/book.
Thinking of it, I've also seen this as a typical difference between fresh CS graduates and those who have been programming for 10+ years. The latter will sometimes take way longer to come up with clever math-oriented algorithms than the former, because the graduate has been trained for it and still has it fresh in memory. But the experienced programmer makes up for that by being able to use the algorithms in all the proper 'best practice' ways one can think of, whereas the graduate will just slam it in somewhere and call it a day even though there are now x more dependencies and whatnot. You get the picture.
This is pretty much how I feel just now (after 12 years). I know Django pretty well after 4 years, and know where and how to add things to avoid breaking other parts of the code and make it work consistently with other parts.
Yet nearly every job I apply for these days wants a 90-minute online test. (I just had to implement a sorting algorithm in pseudocode - I have never in 12 years had to implement a sorting algorithm, as I choose an appropriate library to do that for me.)
Yup, exactly this. Which is why, when we hire, we try to at least get hold of code written by the interviewee, and preferably have him/her come over for a day or so and do some programming. Much more valuable than 90 minutes of putting someone through some stress test which doesn't resemble the actual job at all. I read similar things in other comments here, so hopefully there's a shift going on. Especially because I would fail terribly at standard interviews as well :]
>> Yet nearly every job I apply for these days wants a 90-minute online test. (I just had to implement a sorting algorithm in pseudocode - I have never in 12 years had to implement a sorting algorithm, as I choose an appropriate library to do that for me.)
True, but if you can't implement a simple sorting algorithm how are you going to implement _____ ?
Since the process of implementing _____ is almost certainly completely orthogonal to regurgitating a memorized algorithm one would never directly write themselves (to a fantastically close approximation of never) I'm not sure how that question has any validity at all.
I can't load the page, but I would definitely agree with the title.
From my own experience programming here are some the most common and best ways to better organize complexity:
1) Create DSLs. The Sapir-Whorf hypothesis is the theory that an individual's thoughts and actions are determined by the language or languages that individual speaks. By creating DSLs we are able to reason about a problem domain with increased efficiency.
2) Reduce cognitive load by reducing state. By reducing the number of variables in a given section of code we can more easily reason about it. values.map (x) -> x * x is a lot more understandable than newArr = [] ; for (i=0; i<values.length; i++) { newArr.push( values[i] * values[i] ); }
3) Build tools to build tools. The history of computing is one of building tools on top of tools, from assembly language to C to high-level languages. What is the next step? I suspect it is some kind of polyglot environment that is a hodgepodge of languages all working together, combined with automated code creation from AI.
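Point 2 above can be shown side by side. A minimal JavaScript sketch of the same computation both ways:

```javascript
const values = [1, 2, 3];

// Imperative version: the index variable and the growing output array
// are mutable state the reader has to track through every iteration.
const squaresLoop = [];
for (let i = 0; i < values.length; i++) {
  squaresLoop.push(values[i] * values[i]);
}

// Declarative version: no visible state at all, just a mapping
// from input values to output values.
const squaresMap = values.map(x => x * x);
```

Both produce `[1, 4, 9]`, but the `map` version leaves nothing to mentally simulate.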
Since so many people prioritize getting the task done over writing organized/beautiful code, more often than not we get code that isn't organized properly.
Thus, as a result: interpreting complexity is by far the most important skill in software development. More so than organizing complexity.
Aside from personal discipline and experience, I’ve found that using strongly typed and compiled languages combined with good tools are the best way to accomplish this.
Being able to search for and manipulate symbols at the AST level goes a long way towards eliminating any resistance to refactoring.
This is why I love Haskell. Haskell seems to increase my ability to manage complexity by one level.
Disclaimer: Haskell is not a silver bullet, not a panacea and I'm only claiming a modest increase, not miracles, but it helps me deal with complexity better than any other language I know.
Reducing LOC is not always a good thing. I'd always favor a more readable codebase that takes some effort to explain its purpose over a terse one that is vague and obscure and makes your head spin trying to figure out its raison d'être.
Or in other words: code should be as simple as possible, but no simpler. I guess you could call it organization or readability or just good design. It requires deeply understanding what you're trying to accomplish and structuring your code to reflect that. I don't think there's any rote, step-by-step procedure that will get you there. Often it is a flash of creative insight that breaks the log-jam and reveals the hidden inner structure of the problem. Once that is revealed the code writes itself. In other words, good code should always make the problem that was solved look easy.
Developers understand the word "simple" in different ways.
I personally think of the UI/API: what the code exposes to the outside world should be "simple".
Some people might think about language features. "Simple" is being very selective about what language features you use and what you avoid.
Others might think design patterns, and think the code is "simple" when you can point to any class and immediately recognize what pattern that class implements.
I've come to learn the word "simple" as in KISS (Keep It Simple, Stupid) isn't very useful at all.
I was thinking of more than the UI/API, although I agree that should be simple as well. I think the internals of the code should be simple, in the sense of being constructed out of simple parts with each part being simply connected to the other parts. This means each method should have a clear purpose and it should accomplish it cleanly without surprising side-effects, and that the flow of method calls should be straightforward.
I take it from his site not loading that the most important skill is learning how to scale your site and ensuring it remains accessible during high load times, or using CloudFlare to at least ensure it gets cached.
> But there’s always lots of state to keep track of, rearranging of values, handling special cases, and carefully working out how all the pieces of a system interact.
I'm not a functional programming evangelist but that reads like a very good reason to go for FP. I think a similar point was made in "Functional JavaScript". I don't remember it exactly and it's on my shelf at home but there was some passage about the biggest downside of typical OOP codebases being the mental effort of keeping track of values and value changes.
And meta-level complexity. I may have been able to craft beautiful code, but I sensed chaos in the way I operate and manage my own resources and tools. I've seen people who are organized at many, if not all, layers: solving problems, and solving how to help solve problems. Witnessing that makes me feel calm and envious at the same time.
ps: it's also reminiscent of recursion, dogfooding etc.
It's a very important skill for a programmer to have, especially in the modern environment where distributed systems built from many integrated components are the norm. That said, it's awfully difficult to disentangle the various skills needed for programming and assign an importance to each one, much less to determine which of them is actually most important of all.
I worked on a system where we had to support imperial and metric units. It was done in a pretty bolted on fashion with if statements all over the place. And sometimes it isn't even clear if it could be done in any other way.
Any HNers have suggestions on how to do it elegantly?
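One common approach (my suggestion, not from the thread): pick a single canonical unit internally and convert only at the input/output boundaries, so the `if (imperial)` checks never leak into the core logic. A minimal sketch, with hypothetical function names:

```javascript
// Store every length internally in one canonical unit (millimetres).
const MM_PER_INCH = 25.4;

// Boundary: user input -> canonical mm.
function parseLength(value, unit) {
  return unit === 'in' ? value * MM_PER_INCH : value;
}

// Boundary: canonical mm -> display string in the user's preferred unit.
function formatLength(mm, unit) {
  return unit === 'in'
    ? (mm / MM_PER_INCH).toFixed(2) + ' in'
    : mm.toFixed(2) + ' mm';
}

// Core code does arithmetic purely in mm, with no unit branching at all.
const total = parseLength(2, 'in') + parseLength(10, 'mm'); // 60.8 mm
```

The two boundary functions are then the only places that ever mention units, instead of if statements scattered through the whole codebase.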
> Organizing complexity is the most important skill in software development
I agree with this profoundly. Unfortunately, complexity is in the eye of the beholder. When comparing solutions to a problem, different developers will not always agree on which is the least complex.
We humans are "comparing machines".
For this reason we tend to value the people who solve problems more than the ones who never create them. This is really bad.
Also, in business, if someone is a really good administrator, it seems he never does anything.
On the topic of organization and related to another post about good Software Development books. What are some books that teach code organization as discussed in this post. One I can think of is "Refactoring" by Martin Fowler.
I would submit the main point is: You can't learn that from a book but only from experience. The best balance isn't cast in stone but must be found for each problem domain. As much as you can, do your own experiments and experience the consequences of your decisions yourself. Remember that for nontrivial problems, a usual part of the job is learning exactly what problem it is that you have to solve.
For a novice coder I couldn't recommend Head First Design Patterns enough - the gang of four design patterns are a bit controversial these days but the way it steps you through and makes you think twice about how to organise things was really eye-opening for me back in the day.
I was thinking about that recently. The way I try to stay organized is to comment (with a timestamp) every time something is changed, so I can refer to it later. Does anyone have more tips on how to stay organized?
There is an excellent book by John Lakos called 'Large-Scale C++ Software Design' which treats the organizational or physical design of large-scale C++ projects. Highly recommended.
I would give "API Design for C++" by Martin Reddy a shot. The only thing I remember now from the large-scale book is to pay attention to compile-time and link-time dependencies, although when I read it I remember that it contained some nice tidbits of info that I had not seen in other books.
The big takeaway was that the way the project is split into libraries, files, folders etc and how these map to classes, modules and subsystems matters.
Arguably, this is what higher level languages like Java and C++ provide. Tight organizational language metaphors that help implement design patterns in a thoughtful, consistently structured manner.
That's kind of like saying that McDonalds provides a healthy meal because they have salads on the menu. The truth is that most people still eat the burgers and fries.
Although higher level languages provide some features for organization, you still must have the discipline and know what you're doing to use them properly.
The abstractions provided by Java and C++ aren't very good compared to those provided by, say, Haskell and Clojure.
For the most part, design patterns exist merely to fix the lack of a simpler lambda construct in the language. They more often than not add complexity rather than remove it.
The biggest balls of spaghetti code I've ever seen were always OOP with design patterns. I've seen far more clean C codebases than Java or C++ ones.
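The lambda point above is easy to demonstrate. As a sketch (the class and function names here are made up for illustration), the classic Strategy pattern collapses into a plain function argument once the language has first-class functions:

```javascript
// Strategy pattern: a class whose only job is to carry one method around.
class DescendingStrategy {
  compare(a, b) { return b - a; }
}

function sortWithStrategy(arr, strategy) {
  return [...arr].sort((a, b) => strategy.compare(a, b));
}

// With first-class functions, the "pattern" disappears entirely:
// the comparator is just a lambda passed straight to sort().
const sorted = [3, 1, 2].slice().sort((a, b) => b - a); // [3, 2, 1]
```

The class version adds a type, an instantiation, and an indirection, all to express what one arrow function says directly.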
I would be hesitant to accept this view so easily, due to selection bias.
Java and C++ are often found in huge, legacy, or enterprise oriented code bases.
There are plenty of ways to use the abstractions in Java and C++ to write nice code. Just as it is possible to find a Scala or Clojure code base that went crazy with the usage of "cutting edge" features and abstractions in those languages.
And at least with Java, your IDE will always be able to navigate the code effectively.
I completely agree it takes discipline and experience to write clean code in any language.
What I'm saying is that it takes more discipline to cleanly use Java or C++ than it does to use Haskell or Clojure. For the simple reason that most of the abstractions provided by the former languages add to the program's complexity rather than remove it.
My bad, I should've specified legacy C++. You're right that with lambdas now C++ has gotten much better.
However, I would argue against the STL being well thought out. Every single large-scale C++ project I've worked on threw it out the window before even starting out. Our current project already takes an hour and a half to compile without parallel builds and barely uses templates at all.
I still prefer plain old C to C++ most of the time.
Important according to what metric? Making the software developer feel good, or making the company money? The former is almost certainly true, the latter is almost certainly not.
No, that's a totally different thing. The scope (requirements) of the software might be huge but the software could be designed and implemented cleanly and with maintainability in mind. Yet the scope might be tiny and the implementation a mess. This article discusses the skill of taking a mess and turning it into something clean.
I guess it depends on your usage of "scope". One is used in defining the problem, the other, the solution. I'm thinking of its usage when describing the solution.
Out of all the things I know how to do in programming, reducing complexity is probably the one I'm best at. So how do I get a job doing this? Or a series of lucrative consulting gigs? :-)
I'm pretty sure I'm not as smart as I used to be, and I'm definitely not as smart or productive as some of the younger programmers I've worked with. (Sorry for the ageist remark!)
This may be my secret advantage: I have to keep my code simple enough that even I can understand it.
Here's a fun example that I've seen more than a few times in various forms: four-way navigation, either involving up/down/left/right or north/south/east/west, or both.
In one (somewhat disguised) project it worked like this: the code had several different modules to provide a keyboard interface for geographic navigation, while keeping the geo code separated from the low level details of key codes and events and such.
There was a keyboard manager that mapped keycodes to readable names that were defined in an enum:
switch( keyCode ) {
case 37:
return KEY_LEFT;
case 38:
return KEY_UP;
case 39:
return KEY_RIGHT;
case 40:
return KEY_DOWN;
}
Then an event manager broadcast navigation messages based on the KEY_xxxx codes:
switch( keyEnum ) {
case KEY_LEFT:
BroadcastMessage( 'keyLeft' );
break;
case KEY_RIGHT:
BroadcastMessage( 'keyRight' );
break;
case KEY_UP:
BroadcastMessage( 'keyUp' );
break;
case KEY_DOWN:
BroadcastMessage( 'keyDown' );
break;
}
A navigation manager received these messages and called individual navigation functions, which panned a map in one compass direction or another:
function moveUp() {
map.pan( maps.DIRECTION_NORTH );
}
function moveDown() {
map.pan( maps.DIRECTION_SOUTH );
}
function moveLeft() {
map.pan( maps.DIRECTION_WEST );
}
function moveRight() {
map.pan( maps.DIRECTION_EAST );
}
Of course most of you reading this can see the problem at a glance: Besides having so many layers of code, how many different names can we give to the same concept? We've got KEY_LEFT, keyLeft, moveLeft, and DIRECTION_WEST that all mean pretty much the same thing!
Imagine if math worked like this: You'd have to have two of every function, one for the positive numbers and another one for negative numbers. And probably four different functions if you are dealing with a complex number!
That of course suggests a solution: use numbers instead of names, +1 for up and -1 for down, ditto for right and left. And pass these numbers on through any of these layers of code so you only need half the functions. If you need to flip directions along the way (like the left arrow key navigating right), just multiply by -1 to reverse it instead of having to make special cases for each direction name.
You might even decide to combine the two axes, so instead of vertical and horizontal, you've got +1 and -1 there too (or 1 and 0, or something that lets you handle both axes with one piece of code). Now you could be down to a quarter of the original code.
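A minimal sketch of that refactor (the delta table and the `panBy`/`onKeyDown` names are my own, not from the project):

```javascript
// One table maps arrow-key codes straight to axis deltas [dx, dy],
// replacing the keycode switch, the message switch, and the four
// move functions. Positive dx pans east, positive dy pans north.
var KEY_DELTAS = {
  37: [-1,  0], // left arrow  -> west
  38: [ 0,  1], // up arrow    -> north
  39: [ 1,  0], // right arrow -> east
  40: [ 0, -1]  // down arrow  -> south
};

// Hypothetical pan entry point; a real map API would take the
// deltas (or a direction vector) directly.
function panBy(map, dx, dy) {
  map.pan(dx, dy);
}

function onKeyDown(map, keyCode) {
  var delta = KEY_DELTAS[keyCode];
  if (delta) {
    // Flipping a direction is just a sign change,
    // e.g. panBy(map, -delta[0], -delta[1]) -- no special
    // case per direction name needed.
    panBy(map, delta[0], delta[1]);
  }
}
```

Unmapped keys fall through harmlessly, and both axes share one code path.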
Unfortunately, I was brought in on this project near the end to help wrap up a few other tricky problems, and all this navigation code was already set in stone. (And to be fair, working and tested, and who wants to go back and rewrite proven code, even if it is four times the code you need?)
But this would make a pretty good "how would you clean this code up" interview question!
It's actually reasonable. Step 1, translate key push. Step 2, broadcast key message. What you're missing is that there's an alternate Step 1, translate joystick hat push (or mouse drag, or whatever). So Step 2 should be broadcast generic up/down/left/right input message. Step 3, translate message into movement. But what you're missing is that dependent on context, there's going to be other things besides movement that those keys do. Step 4, pan the map with N/S/E/W. Again here, you have the flexibility to work with a rotated map. Your code was probably overdesigned for the use case, but I've seen your solution many times, in books and in real life.
However, I've also solved this problem by passing around polar coordinates, it's elegant and very flexible but you have to munge some data at the beginning. You can also pass around a simple vector [{-1, 0, 1}, {-1, 0, 1}], which is basically what you describe.
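For what it's worth, the rotated-map case falls out of the vector form almost for free; this `rotate` helper is my own illustration, not code from the project:

```javascript
// Treat a direction as a vector [dx, dy]. Rotating the map by some
// angle just rotates the vector -- no remapping of four named
// directions required.
function rotate(dx, dy, radians) {
  var c = Math.cos(radians);
  var s = Math.sin(radians);
  return [dx * c - dy * s, dx * s + dy * c];
}
```

With the map turned 90 degrees counter-clockwise, "pan up" ([0, 1]) comes out as a pan to the west ([-1, 0]) with no special cases.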
Honestly it seems like the many layers of indirection would be a much bigger problem for complexity. It may not be bad in this situation, but it certainly takes more mental effort to comprehend than seeing a corresponding "right", "down" and "up" for every "left".
I agree that most complexity in software systems comes from managing state. So here is a simple solution - stop doing it. Stop managing state.
Use the right tools for the job. Most mainstream programming languages are ridiculously inadequate for building any production software of any significant complexity without introducing more problems than you are trying to solve.
Use a mature functional programming language that works with immutable data and is designed for building complex industrial systems.
Use a language that was designed from the beginning with the understanding that your software WILL be full of bugs and errors, yet systems must always continue to run.
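A toy contrast in plain JavaScript (not one of the industrial functional languages the parent has in mind, but enough to show the shape of the idea):

```javascript
// Stateful style: position is mutated in place. Every caller shares
// the same object, so a bug anywhere corrupts it everywhere.
var position = { x: 0, y: 0 };
function moveMutable(dx, dy) {
  position.x += dx;
  position.y += dy;
}

// Stateless style: a pure function from old state to new state.
// The old value is never touched, so undo, replay, and concurrent
// readers come for free.
function move(pos, dx, dy) {
  return Object.freeze({ x: pos.x + dx, y: pos.y + dy });
}
```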
Some inadequacies are much bigger than others. The two big ones for me are nulls AKA the billion dollar mistake and mutability by default. I agree that functional programming won't solve all your problems, but it has a lot of very useful features. Some features, like higher order functions, have made their way to more imperative languages. I use higher order functions all the time in my C# programming. And a lot more functional features (or at least F# features), such as tuples, pattern matching, records, and immutable types, are in the C# 7 worklist: https://github.com/dotnet/roslyn/issues/2136