How necessary are the programming fundamentals? (swiftrocks.com)
118 points by rockbruno on May 19, 2021 | hide | past | favorite | 112 comments



I think people certainly need to understand that DS&A are a thing, and time/space complexity is a thing.

The problem is the way they test for this makes no sense. At most having done DS&A will flash some lights in my mind when an n^2 or higher solution comes up, ie I will think about how to do it better. But that thinking will inevitably lead me to google for a better answer, rather than plumb my limited memory capacity for how to do it. So it's a principle that relies on the right tools to get implemented, and that's rarely what anyone asks for in the interview.
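For instance, the kind of "light" that goes off: spotting a quadratic pair-comparison and knowing a hash-based structure brings it down to linear (a Python sketch; the function names are illustrative):

```python
def has_duplicate_quadratic(items):
    # O(n^2): compares every pair of elements
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # O(n): a set gives O(1) average-case membership checks
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Recognizing that the first shape can usually become the second is the useful skill; the exact incantation is what you google.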

More than anything else I've ever done, coding is an exploration. And nobody seems to test for that, they seem to all test for whether you've already been down a certain cave, recently. For instance, I can write a mobile app that trades stocks on an exchange through a server. I could write both the apps and the server for you. I've literally done all the bits that you would need to do this. If you ask me some random question about Swift or C++ syntax, I will fail. Because having those in my mind's cache is not sensible. Knowing that sorting is probably already a solved problem, or knowing what the unsolved problems ahead are, those are useful.

The real thing you need to do as a programmer is to handle complexity. Not in the big-o sense, but in the sense of keeping the mess of code a sensible size, in a way that allows you to make future changes easily, and allows collaborators to contribute easily. This is both a code thing and a people thing. Yet I don't come across a lot of people asking about how this is done.


Exactly on point, which is why for interviews I like to discuss code snippets. Asking what's wrong here, is this the right collection type to use here, what would you do instead, how would you refactor this.

I do like to ask algorithmic questions, but I don't expect the algorithm in detail. Instead I want to know if people understand when to use it, and what is the principle behind it. Because while you don't need to implement your own collections, you should know what their advantages and disadvantages are.
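For example, one trade-off worth probing in exactly this way (a rough Python illustration; the absolute timings will vary by machine, but the gap will not):

```python
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Membership test: a list scans elements one by one (O(n)),
# a set hashes straight to the right bucket (O(1) on average)
t_list = timeit.timeit(lambda: 99_999 in data_list, number=100)
t_set = timeit.timeit(lambda: 99_999 in data_set, number=100)
```

The set lookup should be orders of magnitude faster; the flip side is extra memory and no ordering, which is exactly the kind of advantage/disadvantage discussion I mean.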


>> More than anything else I've ever done, coding is an exploration. And nobody seems to test for that, they seem to all test for whether you've already been down a certain cave, recently

> Exactly on point, which is why for interviews I like to discuss code snippets. Asking what's wrong here, is this the right collection type to use here, what would you do instead, how would you refactor this

LeetCode can be great for both of those--but not the way most interviewers use it.

Instead of using it as a source of problems to throw at someone and expecting them to produce in short order a working, correct implementation, use it collaboratively with the candidate.

Pick a problem, read it with them, and discuss possible approaches. Try to let them take the lead on this, nudging them as needed to keep them going in the general direction of a viable approach.

When that is done go to the discussion area for the problem. There people post all kinds of purported solutions. Go over a few of those with the candidate, critiquing them together. Plenty of the posts will contain errors, ranging from minor to severe, providing plenty of points for discussion.


That's a cool idea. I rarely think about these platforms due to my personal distaste for them, but that's actually useful.


> More than anything else I've ever done, coding is an exploration

The fundamental skill of exploration is the ability to traverse different types of obstacles. In programming, those obstacles are traversed by being creative with algorithms. Therefore companies test whether you can be creative with algorithms.

Of course, if you have never done that then you don't see it as exploration but instead as tedious nonsense. So at least in my eyes, anyone who sees these things as just tedious nonsense is self-filtered out; why would you want to hire people who have never tried to be explorative in this field? Them not applying in the first place is great!


Being creative with algorithms is different from carrying algorithms in your head.


That's why the best coding interview questions are the ones that can be solved without fancy algorithms.


Interviews almost never require fancy algorithms, just the super basic ones that you learn in the first course in college.


I've seen a ton of interviews at Google, and none of them required fancy algorithms. They did require you to take some fundamental algorithm and change it up a bit, but the end result wasn't fancy algorithms, just basic ones with a twist.


If you have been creative with algorithms then you will have the fundamental ones in your head. Every time you add a layer, the layers beneath need to be really stable, so testing the layers they must have built upon is a good way to filter out people who haven't progressed past that layer.


> If you have been creative with algorithms then you will have the fundamental ones in your head.

You can be creative with algorithms without having the "fundamental" ones in your head, because you know where to look them up. Therefore, testing the layers tests for recall, not for the layers themselves. In most situations, these tests only test for short-term preparation.


Once I was interviewed in this way: I was given a test with various questions (algorithms, general knowledge, etc.), and I was given a tablet with which I could google whatever I wanted. I was left alone with the test and the tablet and I was given some time to finish that.

I think it's one of the best ways I've ever witnessed or heard of to test for what you mention: the exploration and research that happens in real life when you encounter a problem to solve at work.

I haven't seen this anywhere else unfortunately.


> Yet I don't come across a lot of people asking about how this is done.

Please elaborate! How do you handle complexity?


You can see how people explain what they do. Does the explanation itself become massively complex? Is there a reasonable high level explanation, or is it a mess of ifs and buts?

As for actually keeping things clean there's any number of things that can be mentioned. Reasonable abstractions, issue tracking, version control, documentation, choice of frameworks, and so on.


I really like this article and I love the analogy with music theory, which seems spot-on... but there’s one big difference, which is that you absolutely do not need to know music theory to be a professional musician!

At least in pop music, plenty of successful bands (maybe even the majority) don’t know music theory, they just know how to make music that sounds great (through natural talent and/or hard work).

There’s even a school of thought that music theory is actively bad for pop musicians, as it will tend to steer you down well-trodden lines and stifle innovation. For example, Paul McCartney supposedly avoided learning any music theory when he was learning the piano, specifically to avoid losing his originality. He got the words for Golden Slumbers from an old piano piece belonging to his sister, but had to write an awesome new melody for it because he couldn’t read the music! (see https://www.the-paulmccartney-project.com/song/golden-slumbe...)

I think the analogy still works, though. To play in an orchestra you absolutely do need to be an expert sight-reader, and ideally have a good grounding in music theory. So working at Google or Facebook would be analogous to playing in an orchestra. They don’t want buskers, they want somebody who can read and understand a Messiaen score.

And maybe a small startup is more like a rock band -- you can get along perfectly well without any theory training. But if you want to grow into an orchestra (or in tech terms, more like 100 or 1000 orchestras) some theory training is essential.

Also, even in a small group, a mix of backgrounds can be beneficial -- look at the Beatles again, where George Martin’s classical training was a significant part of the overall mix (not to say a key part; Martin was the fifth Beatle at best).


Hi - I’m a largely self-taught musician/music teacher/producer with 15+ years of experience… played in loads of bands in many different styles and am a huge fan of the Beatles. Both you and the article are misguided in your assumptions about music theory IMHO. Whether they consciously know/admit it or not, every decent musician benefits from and needs to know some sort of ‘music theory’ at every level of playing.

I guarantee you that Paul McCartney and the Beatles knew/know rather more about music than you allow (or possibly than they recognise/would like to admit…). Granted, they could not perhaps sight-read musical notation or arrange an orchestral score (though if you look into P.M.’s/the other Beatles’ backgrounds/families you’ll see they were indeed very musical, and he has done some pretty spectacular stuff with the help of others…), but even if you just want to play rock-n-roll covers in different keys you need some understanding of e.g. how a basic twelve-bar blues format works, the V-I change, some idea about scales/arpeggios/which notes ‘go together well’, etc. You don’t need to know the proper terminology for it or be able to do a full formal/Riemannian analysis or something, but you are using and systematising a kind of music theory directly, even if just on your own idiosyncratic terms. Theory and an ability to work out/make up/remember/categorise songs by ear or otherwise are more linked than you might superficially think, I would suggest.

Music theory across different cultures is not a science (as I presume computer science is?), but is based on formalised aspects of that culture’s understanding of ‘common practice’. On a fundamental level you could look at it as being informed by how human beings perceive sound/frequencies and by mathematical/aesthetic ‘rules’, but a large part is also based on conventions… so even in the process of learning new songs/repertoire and seeing commonalities between them/remembering them, you are learning a type of music theory, you could argue.

Anyhow, I get where you’re coming from and there are some similarities, but sometimes I can’t help feeling people do tend to talk an awful lot of bunk/received nonsense about music and music theory… (The idea of ‘natural talent’ - although no doubt people have very different mental furniture/natural aptitudes - is one of those things I have found can be very damaging to people progressing/learning about stuff.)


I think we’re mostly in agreement! Except I wasn’t totally clear what I meant by “music theory”, sorry :)

> You don’t need to know the proper terminology for it or be able to do a full formal/Riemannian analysis or something, but you are using and systematising a kind of music theory directly, even if just on your own idiosyncratic terms

Yes, absolutely. Likewise, a self-taught programmer isn’t totally ignorant of complexity and data structures, they just have their own idiosyncratic understanding of them.

I guess when I say “knowing music theory” I really mean “formal music education”. You’re totally right that somebody like McCartney understands music at a deep level -- and in a knowledgeable way, not simply intuitively. But I still think it’s important to note that he doesn’t read music, doesn’t feel the need to, and has still been massively successful.

Similarly, you can be self-taught in software engineering and still be massively successful, even if you’re weak on some of the areas that would be covered in a formal CS degree.

I think the key question is whether there’s some part of formal CS education that’s essential to success at the biggest tech companies. I guess I’m arguing that there’s really nothing essential, although there’s a lot that’s very useful; and that it can be picked up in other ways.

I think that’s similar to what you’re arguing for in music theory -- even if you don’t have formal classical training, you can still “learn music theory” through practice, collaboration and self-reflection.

> the idea of ‘natural talent’ - although no doubt people have very different mental furniture/natural aptitudes - is one of those things that can be very damaging to people progressing/learning about stuff I have found…

Yes, you’re right. I used to think talent was essential in programming -- that there are just people who “get it” and people who don’t. That’s how it seemed to me in the first year of my CS degree, anyway. But I’ve since realised that what I saw as “talent” was simply “did programming for a hobby as a kid”, i.e. had a head start of 5–10 years of self-directed practice.

There’s clearly a cluster of related things. If you practice more you’ll tend to get better; if you enjoy it you’ll tend to practice more; if you have “talent” you’ll tend to enjoy it more; if you have exposure at a young age that can manifest as “talent”; and so on.


Thank you for your thoughtful reply - you’re right, there are some interesting parallels! :)

Perhaps the idea is that a greater knowledge of the formal or abstracted aspects of c.s. lends a kind of long-term versatility or adaptability to candidates for large companies, or an almost academic rigour in c.s. research for longer-term developments. Employees may eventually move between different roles, collaboration may happen between more disparate/diverse areas of the company, or people may move up into managerial positions where they have to cross over/communicate between many focuses/areas of business or developmental planning - therefore their thinking may eventually become more abstracted from the day-to-day practice? (As a music producer, if you understand how music is put together/psychoacoustics/some physics it certainly helps enormously with all sorts of stuff - although again, as it is an artform rather than a science of efficiency, knowing the accepted solution isn’t always an advantage I suppose - though you can always choose to ignore those rules…)

If it became important to document/systematise/migrate/publish a codebase, then an ability to break down and communicate what you have done on a formal level would be an obvious advantage, I guess? On the other hand, a formalised system, if too rigidly adhered to, might preclude a freshness of approach or openness to different ways of doing things/hacking/combining aspects from other fields/cultures. Sometimes the over-specialism you get in big organisations might kill certain kinds of innovation or the ability to ‘think outside the box’/see a bigger picture?

You could maybe make some sort of comparison with session musicians, who are (/were, before the more traditional recording industry kind of died a bit of a techno-death at least! :) often expected to be very versatile, to read notation and pick things up very quickly, and to work in various-sized ensembles/ad hoc groups, but who were often (sometimes unfairly) criticised for being ‘bland’. Then again, you might have some jazz musicians (…crossing over with many great session players…) who have a deeper and simultaneously more intuitive understanding of music theory than some amazing classical musicians (who have the edge perhaps in immense attention to detail and consistency of performance…).

I wish I knew more about programming/computer science to be able to discuss it better! :) I’m fascinated by the parallels between languages and music also - I guess programming languages are another variety of symbolic representation (like maths?). I just wonder what the parallels really are between evoking emotion and promoting cold efficiency at performing a task, but I guess a lot of modern programming actually comes closer to design in a sense (/production/library music, heheh). All primarily involve communicating with others (/the ‘other’ as machine?) and problem solving/logical resolution to a greater or lesser extent, I suppose… (…even a ‘star’ has a panoply of people collected around them - producers, engineers, promoters, managers, distributors etc., and other musicians - some trained…)


Now that's a good analogy with the orchestra being like a big-co! My thoughts exactly.


I have not been tested on the fundamentals of programming in interviews with FAAGM and other famous tech companies. Instead I've been asked "puzzle" questions that generally have nothing to do with the millions of lines of code I've written in the past, nor anything to do with what I will do for the company. All those questions on leetcode are 95% unrelated to actual problems IMO.

95% of every web app, mobile app, or native app is just very basic stuff. How often do you need to be able to reverse the first half of a singly linked list? Or compute the maximum amount of water that will fill a vertical grid? Or find the largest rectangle in a bitmap? Or divide two giant integers? Or solve sudoku puzzles?
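For reference, here is the kind of pointer-juggling those puzzles demand (a Python sketch of the classic "reverse a singly linked list"; the point stands that it rarely maps to day-to-day work):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    # Walk the list once, flipping each node's next pointer backwards
    prev = None
    while head is not None:
        nxt = head.next   # remember the rest of the list
        head.next = prev  # point the current node backwards
        prev = head       # prev advances to the current node
        head = nxt        # move on to the old next node
    return prev           # prev is now the new head

def to_list(head):
    # Helper to read a linked list out as a plain Python list
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```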

The interview should not be "can you solve a puzzle that took 50yrs of programming before someone noticed the solution". It should be "will you be productive on our team". I don't feel most tech interviews actually check for that. In fact I know several people who got through the interviews at the last company I was at (one of the top 4 tech companies) that were not productive at all. Sure they could answer the leetcode type puzzles but they couldn't actually get work done. Many of them were eventually let go when enough colleagues reviewed them poorly.

The article claims these interview questions test for fundamentals and I agree that fundamentals are important. I just don't agree the leetcode style questions asked test for that.


> 95% of every web app, mobile app, or native app is just very basic stuff.

But 5% isn't, and can absolutely derail a project.

I'm a consultant that goes to various orgs to help them with a huge variety of things, much of which boils down to fixing issues with "simple web apps".

I literally cannot remember the last time I saw any application, written in-house or by a vendor, that had even the basic set of essentially mandatory indexes in the database. I don't mean clever stuff like partitioning, or sorting tables by a column other than the primary key. I mean that most of these databases had no indexes at all on 100M row tables. Just a week ago a large software vendor argued with me, stating that it's absolutely fine to require 10 GB of parallel table scans to display one web form to one user!

That in my opinion stems from a lack of understanding of what a B-tree is, how it works, and how databases utilise it.

That's firmly in the "fundamentals of programming" that every developer writing "simple web apps" should know, and in my experience almost none do.

These people go about their work every day with a firmly held belief that the database server is essentially a magic black box that can never be understood.

Is that okay?


Using indexes is something you should know, and I know it from practical experience, not from learning computer science or theory. However, I don't know sorting algorithms off the top of my head, nor their complexities, since the database does the sorting and heavy lifting underneath. This is the key difference.

When I have interviewed at FAANG-level companies, I spent 2 weeks preparing and learning the fundamental theory; a year or two later I had forgotten most of it and I'd have to prepare like that again.

It's interesting and can be useful to go through the material on how database engine exactly works underneath, but you definitely shouldn't have to memorise this long after you went through the material and you probably won't use this information in your real life, except for the interview process, of course.

Pretty sure indexes are basic things you learn when setting up your database schema, and most ORMs provide a very easy way to do that and cover it in their getting-started section. They usually have indexes by default for ID fields and foreign-key fields. I had no problem understanding the need to add indexes to often-queried fields, and the trade-off of having too many indexes. When I started learning how to actually use something like MySQL I had no clue what data structure indexes used underneath, but the concept itself is fairly simple compared to how a B-tree works. The vendors you have met have been incompetent for some other reason than not knowing data structures.


> But 5% isn't, and can absolutely derail a project.

Precisely.

Do you know something about byte order? Can you set or reset a bit? Can you check that your if-statement conditions are exhaustive?

Those are from a second-year class in most CS curricula. And I have seen people programming for 10+ years screw them up. Horribly.

I have seen 10+ year programmers use "addition" for "logical or". And it works, mostly, until it doesn't. And it creates mysterious failures that they can't debug because they don't even have the concept that they need to hypothesize correctly.

Most of the people whining about this stuff are whining about whiteboarding things like "graph algorithms"--and they're right. That's a stupid thing to test for since you rarely use graph algorithms unless you use them every day at which point you'll come up the curve anyhow.

If I'm interviewing, I can ask really simple questions like the "set a bit" or "change from network to machine order" and still get the information I need. If you know the answer, that tells me only one thing--you've probably programmed enough that you hit it. I'll keep increasing the problem difficulties until you don't know the answer and have to think.
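For reference, those two warm-up questions look like this in Python (the struct format characters make the byte order explicit; the value is illustrative):

```python
import struct

value = 0x12345678

# "Network order" means big-endian; "<" gives little-endian,
# which is what most desktop CPUs use natively
big = struct.pack(">I", value)
little = struct.pack("<I", value)

# Converting network order back to a machine integer is one unpack
roundtrip = struct.unpack(">I", big)[0]
```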

To be fair, if I have to wind the difficulty up to graph algorithm questions, you've likely already passed the interview long ago.


I have done 6+ years of practical, full stack development, and I have never had to set a bit... What? In which case?

If I had learnt it 6+ years ago, I would have forgotten it by now.

And I actually don't know how to do it without Googling, yet I have been a top performer in my current company for many years, getting promoted very quickly and providing tons of value. Maybe they have assessed me wrong in some way.

I have never run into such a bug, and I have never had a debugging issue that took too long to solve because of my lack of knowledge of computer science fundamentals.

To clarify what full stack development has been about for me:

1) Single page apps/UIs

2) Managing hosts/containers/load balancing/scaling

3) APIs, for the UI mostly

4) CI/CD

5) Databases, querying, setting up, configuring.

Basically anything you need to spin up a web app from 0 to 1. These systems have provided actual value to customers and have been performant to scale with demands. They have been providing enough actual value to justify my salary many times over which can be proven using raw numbers.

I think if I ever had to set a bit, it means I have chosen bad tech/framework/wrong abstraction for what I'm trying to build, which is much, much worse than not knowing how to set a bit, in my view. If the tech I chose required setting a bit and similar other lower level things, it sounds like it would take much, much more time to actually develop it and it would be much more error prone, making it ultimately a very bad decision.


> I have never had to set a bit... What? In which case?

Access modes on a file.

C# enums with the [flags] attribute.

Etc...

It comes up.

This is literally just OR-ing a value with a value. Not exactly rocket science!

    flag = flag | thebit;
You say you have 6 years of experience, and you say you have never done that? Really?
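For completeness, the whole family of bit operations is equally small (a Python sketch; the flag names are made up for illustration):

```python
# Each flag occupies one bit of the integer
FLAG_READ  = 0b001
FLAG_WRITE = 0b010
FLAG_EXEC  = 0b100

flags = 0
flags |= FLAG_WRITE                     # set a bit (OR)
is_writable = bool(flags & FLAG_WRITE)  # test a bit (AND)
flags &= ~FLAG_WRITE                    # clear a bit (AND with complement)
flags ^= FLAG_EXEC                      # toggle a bit (XOR)
```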


I have changed file permissions using chmod, and I understand how you can represent the state of different flags/settings with a single value by assigning 1, 2, 4, 8... to each flag. However, I wouldn't have thought of it if someone asked me about "setting a bit". It's not rocket science; I'm just not aware of what the term means, nor have I ever consciously programmed something to "set a bit". I don't remember using syntax like flag = flag | thebit. And if I were to change file permissions programmatically, it's very likely something like this would and should be abstracted away.

And for enums/config in general I have used something on a higher level, like json/key values config or when I code programmatically, having actual string values for the enums. It seems in most cases representing a config with a single value is overkill and would only decrease code readability. And for readability/maintainability purposes it doesn't make sense to me to have arbitrary numeric values representing setting values, unless we truly have storage issues, but for web apps readability trumps at least here.

Maybe I'm even now still misunderstanding why setting a bit is important.


I think what GP is trying to say here is that they haven't needed to think about OR-ing in such low-level terms: they could just do it without worrying about the underlying theory and implementation.


Yeah, and I think I have never actually used this type of pattern programmatically. I have interfaced with things that use it, like the file access system, but I've made most of those changes manually, understanding how a single value represents various states. If one were to do it programmatically, in most cases you would use something like file.setAccessMode("write", true) - unless you are building the APIs yourself, in which case you should be acquainted with the domain and would learn about setting a bit in the process. Not only is that easier to read, but the underlying implementation shouldn't matter: an API like that means setting the access mode could work on any operating system, which might not use the same numeric representation for read, write and execute.

In all non-low-level cases you should be representing enums with strings, not with numbers like 1, 2, 4, 8, because the way I see it, that can only cause scalability, readability and maintainability issues. Just have a key-value object, JSON or anything similar, structured like const access = { "write": true, "read": true, "execute": false }. And yes, I have seen some legacy library code that uses 1, 2, 4, 8, but I've always thought it an unnecessarily complicated pattern. I have understood it, but I haven't known it as "setting a bit", and I don't think I have ever tweaked code like that nor created it myself, so I may actually never have written a line of code to set a bit.
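A sketch of the contrast being described (in Python for illustration; the names are hypothetical):

```python
# Bitmask representation: compact, but the stored number tells a reader nothing
READ, WRITE, EXECUTE = 1, 2, 4
mode = READ | WRITE  # stored as 3

# Key-value representation, as argued above: self-describing at a glance
access = {"read": True, "write": True, "execute": False}

def to_mask(access):
    # Mechanical translation for code that must talk to a bitmask-based API
    bits = {"read": READ, "write": WRITE, "execute": EXECUTE}
    return sum(bit for name, bit in bits.items() if access.get(name))
```

The translation being this mechanical is part of the argument: the readable form loses nothing, so the opaque form only earns its keep where space genuinely matters.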


No it's not ok. But the people who didn't add indices maybe even had all the basic training on DS and algos.

But they were simply either incompetent or more likely simply careless and didn't give a sh_t. Because adding indices in the proper places requires a few minutes of PLANNING. Who has few minutes nowadays, when you can read HN? On top of that their management/leadership was careless, and didn't give a sh_t. In the end, the poor database tables were left all alone, in the dark, without indices :).


I once had a friendly debate with a database professional. I argued that these days with storage being so cheap, databases should automatically create indexes. E.g.: anything referenced in a view or a foreign key, and also anything that occurs regularly in production. He said he thought that was a bad idea, because a tiny fraction of the time it could cause a performance regression.

Now, years later, Azure SQL Database has automatic indexing and I feel a little bit vindicated. He does have a point that this can go wrong, but compared to what the typical developer does, it basically can't make things worse.


Probably because they used their gut feeling/intuition which stems from experience, so they started suggesting arguments to support their gut feeling. Their gut feeling may be correct, it's just that they might not be able to see all the arguments or edge cases that may make this indexing a bad idea so they are concerned about implementing out of the box.

It's very typical when suggesting ideas that go against the usual best practices, status quo, but it's understandable since you actually should consider all the edge cases before moving away from known ways of doing things.


Yes, you have a point. I think there’s no fundamental limitation today why everything can’t be indexed. There’s definitely some overhead to updating the indices on every insert, of course, but that’s a small, roughly fixed cost per row in most cases (O(log n) per index for a B-tree).


I agree with you fundamentals are important. My point was, leetcode and leetcode style interview questions don't test that. As an example I don't think there's a single "leetcode" question related to database indices.


To add: 1) The fundamentals of algorithms can be relevant, such as understanding that one solution is more or less computationally (or memory) efficient than the next (and why), but I agree that figuring out how to do a certain algorithm in a specific situation (usually quite far from anything the person will likely be doing) is rarely useful (unless that's the job, but that's rarely the case). There are more nuances to it, but that's the gist of it imo.

2) Fundamentals on writing maintainable code on the other hand is rarely focused on (AFAIK, I personally don't really have any experience interviewing with such companies) - even though that's usually something that is emphasized a lot in most people's jobs. Even more interesting is that, based on my experience (I've interviewed a significant amount of devs), devs are surprisingly bad at these questions and also quite often become defensive about those type of questions (very similar to how I imagine people become about algorithm questions).

I believe the industry would produce better solutions if we focused more on writing cleaner, simpler and more maintainable code - also in the hiring process (thus incentivizing the educational system to align with that) than our current algorithm focus.


>I have not been tested on the fundamentals of programming taking interviews from FAAGM and other famous tech companies

I've interviewed at three of those letters, but as a senior manager, and those interviews (somewhat ironically) better covered my nearly 20 years of writing code (as an IC, to use the parlance of the industry) than the interviews given to coders.

I could certainly solve hard coding problems when needed, but later in my career I was doing mostly system design, which is figuring out how to put all the bits together in a way that could be built modularly and operated efficiently. Another question from one of those companies focused on how to handle an actual problem that came up regarding migrating log analyzers when a simple cutover wasn't possible. In general, I love those kinds of questions because my experience and knowledge come into play.

It boggles my mind when senior ICs get dinged because they're not able to traverse a tree in the way someone wanted in 30 minutes while coding in a plain text editor on a video chat. IMHO, 5 minutes to install the correct package and write a couple tests should be sufficient so we can get the REAL discussion which is to talk about whether a tree is the right structure and what we should do about it if it's not. ;)


> I have not been tested on the fundamentals of programming taking interviews from FAAGM and other famous tech companies. Instead I've been asked "puzzle" questions that generally have nothing to do with the millions of lines of code I've written in the past nor anything to do with what I will do for the company. All those questions on leetcode are 95% unrelated to actual problems IMO.

> 95% of every web app, mobile app, or native app is just very basic stuff. How often do you need to be able to reverse the first half of a singly linked list? Or compute the most water that will fill a vertical grid. Or find the largest rectangle in a bitmap. Or divide two giant integers. Or solve sudoku puzzles?

I accept your point. But I do believe the much more interesting question to ask is: why do they do these puzzle questions instead of questions that are at least somewhat related to actual programming problems that might occur as part of the prospective job?


IIRC their claim is that they are trying to avoid bad hires, even at the cost of skipping good hires. I haven't seen any data, though, showing that their interviewing methods achieve those results. If I were to guess why they do it that way, it's just momentum and the lack of any "known" good alternative.


> music theory has also an implicit application, which is that people who learn music theory become better musicians in general. Even though the person might not be composing their own songs, their understanding of how music works likely makes them extremely comfortable playing and improvising any kind of song.

I think the author is trying too hard to draw a correlation between algorithms and programming. Knowing algorithms doesn't mean you implicitly know how to program.

He is also ignoring the masses that simply memorize the answers without knowing why a specific algorithm is used. Most will freely admit they can't remember these fundamentals after a few months. I don't blame them, though, because you are expected to give the optimal answer at the snap of a finger in an interview. All those interviewers who argue they want to understand the thought process of the interviewee are simply lying to themselves. Some of them are even browsing the internet while you solve the question.

The real reason large companies use algorithm questions is that too many people apply for each job. Many are equally qualified, and they need a way to justify their hire. 200 applicants in 1 hour for every single job.

The real sinners are startups that think they are Google and use the same interview process to feel better about themselves. I have seen more than a few interviewers with a chip on their shoulder at some no-name startup.


Also, a lot of the claims about music here are just not right. Jazz musician/composer here, who knows most of what music theory there is to know in a few genres (which is not much). Most of the musical knowledge good musicians - improvising musicians at least - have is private and unique to themselves, or stored in their bodies and muscle memory; even they don't "know" or understand it, they can just play. "Music theory" just doesn't take you very far at all. Shakespeare couldn't have explained what he was doing or how he did it, nor could Hendrix. They just could do it, as natural as breathing, or like walking, which people can usually do without the help of theory or being able to explain how.

> Even though the person might not be composing their own songs, their understanding of how music works likely makes them extremely comfortable playing and improvising any kind of song.

Understanding music theory doesn't make you comfortable playing and improvising at all. Being able to play an instrument and improvise makes you comfortable doing that, which comes from doing them a lot.

> if the person doesn't know the theory, it's likely that these songs are going to be hilariously bad.

No, the songs of people who don't know music theory aren't hilariously bad.

> To make it worse, musical progression is not a thing you just come up with -- songs aren't random, the chords for each song have a theoretical relation to each other. Making a decent song is a science, and one can learn it by understanding the theory.

But a chord progression or melody or lyrics are often a thing people just come up with. Writing songs is in no way a science. You can't learn "making a decent song" by understanding music theory. The characteristic, memorable, wonderful moments in each song are usually the unique, irregular ones, about which there is no theory. Etc etc.


Lifelong musician and programmer here. It's an interesting analogy actually.

I learned music by ear, then learned theory. Then I found I had to forget some of my theory to get back to writing songs that felt right to me.

I experienced a similar arc in programming. Starting out self-taught, I absorbed the theoretical aspects later as I worked on problems that were higher level than just writing code.

In both cases, I think the theory is more descriptive than generative. Both programming and music require a combination of creative and technical mastery to achieve a high level of competence. Theory can help you with the technical aspects of the craft. More than anything else, it gives you tools to analyze work that others have done. What theory cannot do, however, is give you the gift of creativity. That has to come from within, and I don't think there's an easy way to teach it in a book, or measure it in an interview.


Thank you, it's quite rare that people reply to my comments on here and I'm glad they did! haha.

Every musician/singer that is famous, sounds unique—like themselves and no-one else.

I keep coming across, in many fields, people saying that their subject can be learnt but not taught. Botvinnik (founder of the Russian School of Chess) said it about chess! I think it's true of jazz. You have to learn it for yourself, by listening to what you love and selecting from it and emulating what you love.


Not an expert on Iron Maiden (I do enjoy various metal/heavy rock), but a casual read through Wikipedia doesn't scream "music theory as the cornerstone of their success". A forced analogy that I don't think really holds up.

I disagree with the premise that those companies have some crystal ball telling them that asking those questions somehow identifies good candidates. It's more likely that it filters for the type of person who is willing to do their homework, has a good baseline of discipline, and is willing to prepare - which arguably has some value in itself.

I don't have a formal CS degree, but I have studied and implemented data structures/algos and spent quite a bit of time (months) making them performant inside a database. I can guarantee you that the knowledge you will ask for in a whiteboard interview is way less than one percent of what's required to implement performant and practically usable data structures for use in modern programming languages and databases.

And the punchline is that it is all about the seeming minutiae that the author is dismissive about. The reality is that the market for simple, generalist solutions in terms of programming languages and databases is zero. There's a ton of optimizations and literally HACKS (for lack of a better word) that make real-world data structures and algos go fast. Just take a look here and tell me how many platform-specific things you can find (this is the bulk of the implementation of persistent immutable maps for Clojure): https://github.com/clojure/clojure/blob/master/src/jvm/cloju...

Regular whiteboard data structure/algo questions are nothing more than regurgitating 1970s theoretical solutions without much practical day-to-day value. Is there a hint of knowledge you can extract by knowing them? Sure. Is it much more than that? I don't think so.


One of the defining traits of Iron Maiden is their use of double and triple guitar harmonies. You could say they popularized the concept and many newer bands were inspired by it. My favorite Maiden song section is the solo/post-solo part in Brave New World (live at Rock in Rio 2001). I think the success of Iron Maiden comes from the combination of theory, performance, lyrical content, image, live ambiance and a very charismatic frontman.

A better example of a band with music theory as the cornerstone of their music is Tool. I keep discovering new things in songs like Schism even after listening to them hundreds of times.


Thank you for the informative explanation!


There is a large difference between having a decent grasp of the fundamentals, and being able to regurgitate parts from pure memory while being grilled by the interviewer in a high stress situation. While we may like to think the interviewee is an idiot who doesn't understand what a tree is, they may just be under a lot of pressure.

This article comes across as tone deaf and unsympathetic to me.


I think the right answer is something of a middle ground. You need to understand the theory, but at an abstract level. Unfortunately, that can be pretty hard to test for. Companies are making the same mistake teachers make in college--testing by asking for regurgitation of details. You need to understand the forest, knowing individual trees is unimportant. Look at the concept of cramming for an exam--memorizing details, not understanding. You can't cram for a truly well-written test.


Since I started interviewing applicants, my two favourite questions are:

a) What's the difference between an array and a set?

b) Describe how a hash table works.

The reasons are:

a) It is important to know the difference between ordered and unordered collections and I don't want to be in a position where I don't trust that my teammate will use the right data structure in their PR.

b) A hash table works on the same principles on which scalable services are built: database sharding, traffic load balancing (with stickiness), Kafka stream processing, etc.

In my experience, those two questions are good indicators of how the interviewee will perform in the rest of the interview.
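To make question (b) concrete, here is a minimal separate-chaining hash table sketch (the bucket count, collision strategy and method names are illustrative choices, not the only acceptable answer):

```python
# A minimal separate-chaining hash table: hash the key, take it
# modulo the bucket count to pick a bucket, and chain collisions
# in a small list inside that bucket.
class HashTable:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # hash() maps the key to an integer; the modulo picks a bucket,
        # the same way a shard key picks a database shard.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # collision or new key: chain it

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

The `hash(key) % len(self.buckets)` step is exactly the modular-routing idea behind sharding and sticky load balancing, which is why the question transfers so well to systems discussions.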


When hiring for frontend developers, I don’t think there’s any point to those questions.

If someone is not using the right data structure in their PR, that’s something we can remedy by means of a short discussion.

Same for a hash table. Describing why it is better for some things is quick.

The goal of the interview is mostly that I trust that someone will understand (and listen) when I explain things like this. Not whether they know already.


>When hiring for frontend developers, I don’t think there’s any point to those questions

I've been at a company where the website had very slow performance because the FE devs did not know the differences between those data structures and wrote an O(n*m) algorithm to match elements from two collections (`listA.forEach(i -> listB.indexOf(i.foo))`). The backend returned JSON objects with keys ready to be queried correctly, and the FE converted them to arrays for some reason.

Nobody in the FE team realized this was a bad idea until someone complained about how slow the site was.
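For illustration (this is a hypothetical reconstruction, not the actual code), the O(n*m) scan could have been replaced with a one-pass lookup table, making the matching O(n + m):

```python
# Hypothetical fix: build a lookup table once (O(m)), then match each
# element with an O(1) dict lookup, instead of a linear indexOf scan
# per element (O(n*m) overall).
def match_items(list_a, list_b):
    index_of = {item: i for i, item in enumerate(list_b)}   # one pass over list_b
    return [index_of.get(a["foo"]) for a in list_a]         # one pass over list_a
```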

>The goal of the interview is mostly that I trust that someone will understand (and listen) when I explain things like this. Not whether they know already.

It hugely depends on the expected experience of the role for which the candidate is being interviewed. I would absolutely expect a mid-level engineer to know those concepts.


Doesn't mean they were incompetent for this reason in particular.

First, a good FE should be able to notice when a site is performing badly, whether by themselves or by knowing how to add tools that automatically diagnose issues like that. This has nothing to do with data structures. In the same way, they should be able to notice other UX frustrations/issues. A UX designer can't think of every little detail ahead of time or give a screen for every edge case, so the FE should be able to use their experience and knowledge of common UX best practices: how to display errors for an input, when to validate (on blur, on type, etc.), the fact that a modal should close when you click on the darkened background, and so on.

Then they should be able to debug why it's taking that much time, by using developer tools, seeing where the bottleneck lies, then common sense can be used to determine that this is the algorithm causing the issue.

In FE, 99% of the time performance issues are unrelated to this type of situation. It could be, for example, React causing unnecessary calculations and re-renders, or some library being used in an invalid mode or dev mode. To figure that out, in most cases it's most straightforward to use the tools, which will point out where the issue lies, and then solve it. I bet more often it would be some small component triggering a re-render of the whole page when typing into an input, or react-hooks causing unnecessary re-renders. Learning data structures won't prepare you for this; your time is better spent elsewhere. There are FE topics that are far more important to probe than data structures.

So rather than asking about data structures, give them an existing front-end project containing some bad code that causes poor performance, and see how they go about debugging and discovering the underlying issue - or whether they notice the issue at all.

This is what we are doing in our interview process. We provide an existing mock project, maybe including both backend and frontend, and ask the interviewee to fix something, add some new feature, and/or suggest improvements to the code/UI or the frontend in general - simulating real-world work as closely as possible. You can't do 1,000 hours of LeetCode for that; you need actual FE skills and experience. You can understand that a double .forEach over a large list of items is a bad thing without ever having to know about big O notation.

This is also much better than asking obscure questions about how JavaScript hoisting, object inheritance and other things you really don't think about when writing ES6 and adhering to eslint practices work.

Whenever we have asked candidates at the end what they thought of this exercise, the feedback has been positive - of course, maybe they were lying so as not to step on any toes, but they seemed sincere to me. Secondly, I would never be confident that someone is good/experienced with FE just because they can answer some data structure questions, without seeing them actually try to solve an FE-related problem.


> When hiring for frontend developers, I don’t think there’s any point to those questions.

Why not? Array and Set are both part of JavaScript.


What's so terrible about telling someone to use an ordered collection in their PR?


Depending on the usage and how core it is to the problem we are solving, the architecture of the subsystem may be affected negatively by the wrong choice of data structure.

It is also not hard at all to learn and understand the difference between these two fundamental data structures.


It could be affected negatively by the wrong choice of data structure (though probably not), but what's the harm if the coder doesn't know, is then informed of it as part of peer review, and that's that?


I have been working as a programmer since 2015 and I'm not sure if I could answer these questions.


Why aren't you sure? These seem like straightforward questions for people with a little experience.


An experienced programmer in Python or Java would certainly know the difference between an array and a set.

But someone doing embedded C programming, where a mere printf function is an extravagant luxury? You could program for years and not encounter a set.


At that level it doesn't have the label "set", but bit flags are a form of set.
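For example (sketched here in Python for consistency with the rest of the thread, though the same expressions work verbatim in C), the standard set operations on bit flags are single bitwise instructions:

```python
# Bit flags behave like a small set of integers: each bit position is
# an element, and the usual set operations are single bitwise ops.
FLAG_READ  = 1 << 0
FLAG_WRITE = 1 << 1
FLAG_EXEC  = 1 << 2

perms = FLAG_READ | FLAG_WRITE            # union: {READ, WRITE}
has_write = bool(perms & FLAG_WRITE)      # membership test: True
perms &= ~FLAG_WRITE                      # removal: {READ}
common = perms & (FLAG_READ | FLAG_EXEC)  # intersection: {READ}
```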


Some companies are sorely mistaken about what they actually need, though. I was once interviewed (and accepted) at a small company that apparently needed fundamental 3D skills. I ended up being grilled quite a bit about rotation matrices and the like.

The reason for this? They maintained an in-house 3D engine they use for various products they sell to important customers. What I only realised later was that a grand total of two people actually worked on that engine. The rest of us ended up just writing and maintaining large C++/Qt programs riddled with technical debt.

What they really needed were people who can wade through large code bases with significant tech debt. I wasn't interviewed for that skill however, and I happen to not be very good at it. (I'm especially bad at resisting the urge to fix tech debt that is currently hindering my own work, even though sometimes I have to let it slide because of deadlines.)

I didn't last long there, and later learned they weren't very good at retaining developers.


I have pretty much the opposite of this story. I contracted with a company doing a fairly basic photo editing app, who had managed to code themselves into some godawful mess. Basic frontend changes took days or weeks because they had a handrolled layout / animation system from someone who had no idea you could a) specify complex motion of objects over time via linear algebra, b) sequence operations more easily by even just slightly reifying some state machines. Instead the whole code was full of hundreds of variations of "if picture.x != picture.goalX && picture.rotating { picture.x += 3; picture.angle += 1; }".

I have seen this pattern over and over, simple applications grown into complex unmaintainable garbage because they didn't have someone who could write a state machine, or a simple interpreter, or a constraint system. Or because they needed something small but reached for something big instead when they didn't know how to implement the small thing, e.g. a task queue that could've just been an array with locks around it instead turned into 0MQ (or worse, MySQL). And these are not companies with only interns/juniors, often people with 10+ years experience who think "fundamentals don't really matter."
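The "array with locks around it" really is that small - a sketch (in Python; the class and method names are illustrative, and in practice Python's stdlib `queue.Queue` already does this):

```python
import threading

# A minimal thread-safe task queue: a plain list guarded by a lock.
# Often all a small app needs before reaching for 0MQ or a database.
class TaskQueue:
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def put(self, task):
        with self._lock:
            self._items.append(task)

    def get(self):
        # Returns the oldest task, or None if the queue is empty.
        with self._lock:
            return self._items.pop(0) if self._items else None
```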


> people with 10+ years experience who think "fundamentals don't really matter."

Sounds like people who think "programming is not math".

Granted, it's quite unlike the math we were taught in high school. But it does have patterns, opportunities for simplification, similar correctness problems, and sometimes good honest Math™ with geometry, linear algebra, and more. One can easily miss that when they assume programming is not math to begin with.

Even if those programmers didn't know the fundamentals behind animation systems, they could have reached for those if they recognised that this part may have some significant math behind it, and sought out relevant material.


I can write btrees, radix trees, etc., but I cannot always solve problems on a whiteboard. I'd rather take a problem home and work on it.

I'm against whiteboard interviews.


I've been writing code for 40 years (admittedly, not all of that time was writing it professionally), you say "btree" which to me could be a binary tree (which I've dealt with in the distant past but not for a long time) or balanced tree, something I've read about but never had the slightest reason to implement. I can't even recall ever hearing of a radix tree.

You go dig up the specialized stuff when you need it, it's not common enough to be worth learning in general. What's important is enough foundation to understand the answer when you look it up.


I am interested in database internals. B-trees come up when designing database systems that are efficient to query on disk; Postgres uses them for its indexes. Radix trees are memory-efficient tries, which are useful for answering prefix queries. They're also called prefix trees. I use them to get a list of prefixes of a string - useful for simple intellisense-style forms or DynamoDB-style querying. I've also been studying LSM trees, which are used in LevelDB and RocksDB.

I experiment with database technology in my experimental project hash-db https://github.com/samsquire/hash-db The code should be readable.

I need to change my search tree to be self-balancing; currently it grows to the left or right without balancing. I think I need to use tree rotation depending on which branch has the greater height.
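The rotation mentioned above can be sketched like this (the `Node` shape is illustrative; a full AVL implementation also caches heights and handles the left-right and right-left cases):

```python
# A right rotation on a minimal binary-tree node: when the left
# subtree is taller, this shifts one level of height from left to right.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(root):
    pivot = root.left        # left child becomes the new root
    root.left = pivot.right  # pivot's right subtree moves under the old root
    pivot.right = root
    return pivot

def height(node):
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))
```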


Yeah, but how many programmers write database engines? That's mostly a don't-reinvent-the-wheel thing.


A knowledge of the fundamentals shows an appreciation of programming as an academic discipline, and it shows an interest in its academic (theoretical) foundations. And it requires effort and intelligence to master these topics, which is a good sign. To me, personally, it makes sense to ask these kinds of questions about the fundamentals.


My issue is that when you are an experienced engineer and you do not use those fundamentals daily, you will simply forget them, even having learned them in the first place. So the folks who pass those interviews are not the folks who do actual work on projects, but the ones who practiced countless hours on LeetCode or recently studied theory in order to game the interview system.

Do you prefer a person who spent their last 6 months leetcoding, or someone who spent the last 6 months working on actual projects?


Software engineering jobs aren't all equal. What different subfields require is vastly different: some are fine with bootcamp grads, and for others you absolutely need most of the fundamentals. The confusion stems from using the same job title for those vastly different jobs.


What's the hardest area?

Have you got a list from easiest to hardest areas?


There are so many dimensions of skills related to software development that it's really meaningless to compare which areas are "easier" or "harder".

Some people are better in some areas, and it's usually a combination of time invested and talent.


So if you want to try out a potential new member of a band, the correct approach is to grill them on the circle of fifths.


I think fundamentals are extremely important, but there is confusion about what is fundamental and what is not.

I have interviewed thousands of candidates and hired and worked with dozens of them.

In my experience, the most important, most fundamental abilities for a programmer are:

- analytical approach,

- ability to predict what the code is going to do.

I don't care if the candidate knows how to implement A* on a whiteboard. Frankly, I think asking that is stupid if they will never see a problem like that - and if one does come up, the solution is just a google search away.

Don't get me wrong, knowing algorithms is very useful. But it is not critical to writing working code.

What I do care about is that you can reason about the code you are given, and that you can construct/modify the code methodically. That you can predict what the changes you are going to make will do to the program. That you can hold a workable mental model of the program in your head.

The worst type of developer I have seen has memorized all the algorithms, data structures, and language details that were in the book. Yet they can't modify a program and tell what will happen without actually running the code.

When you observe them writing code, it seems they modify it at random until it happens to work. Then they are happy they have "solved" the problem after two dozen or more iterations.

When it fails, they are not at all interested in understanding how it failed, only where it failed - to have a starting point from which to keep changing their code until it runs.

For this reason - to filter out those candidates - I will never pass a person without seeing them write code.

I have observed over the years the HN community being somewhat negative about programming during interviews. Ask yourself: would you hire somebody to build your house based only on how fantastically they market-speak about the houses they can build?


IMO, there's a difference between people who are very good at connecting frameworks together and have a strong knowledge of many frameworks and platforms, and people who are actually building those frameworks/services for others to use. It's the latter who are hard to find and hire, as most training (esp. coding bootcamps and self-taught courses) focuses on the former, because that's how you quickly create working mobile and web applications.

One question that I ask during interviews is to write a method that finds the intersection of two lists of integers in better than O(n^2) time - basically, any solution that is not two nested `for` loops.

    intersect([1, 2, 3, 4], [2, 4, 6, 8]) => [2, 4]
Half the people I talk to cannot do it in 30 minutes. (No, you cannot use the built-in set operators.)


I'm a self taught programmer who got his first full time job in tech in June 2020 after 13 years of practicing law. Here's how I would do it:

const intersect = arr1.filter(value => arr2.includes(value))

I don't know much about Big O notation yet, so I don't know if my solution would meet your speed requirements or not, but I'd be interested in your feedback.

You could also turn the two arrays into sets and then compare them. But I'm not sure what impact that would have on performance, if any, in your example.


`arr1.filter()` says "apply a filtering function to every value in arr1". This is O(n) because as the size of arr1 (n) grows larger, the work grows at the same rate (linearly).

`arr2.includes()` says "check if value is in arr2", and most languages implement this method as a linear search (esp. if arr2 is an array). This is also O(m) because as the size of arr2 (m) grows larger, the work (linear search) grows at the same rate.

Taken together, the algorithm is doing "for every value of arr1, apply a filter function which linearly searches for the value in arr2". If arr1 is size 5 (n), and arr2 is size 10 (m), then the total work being done is 5*10 (n*m). Thus the overall performance is O(n*m), though it is commonly simplified to the worst case where both sizes are the same: O(n*n), or O(n^2).

This is the same as using nested for loops, just more elegant (and functional):

    intersection = list()
    for i in arr1:
        for j in arr2:
            if i == j:
                intersection.append(i)
    return intersection

---- SOLUTION ----

One approach is to convert one list into a Set, O(n) because you have to traverse the list once. Then iterate the second list and check if it exists in the Set. Checking if something exists in a Set is constant time, O(1), so you only have the cost of traversing the second list once O(m).

The overall performance is O(n + m * 1), or O(2n) if we assume the worst case of both lists being the same size. That is usually simplified to O(n), as we're usually only concerned with the growth behavior, e.g. "how does it change as the input gets bigger?", and not with directly comparing, say, two algorithms of O(3n) and O(5n + 1).

There's still a lot of unhandled edge cases and other issues that need addressing in the solution described above, and there are other valid solutions as well. The goal is only to be better than O(n^2), not optimal.
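One possible implementation of the approach described above (assuming the interviewer's "no built-in set operators" rule still allows a hash set for membership checks; note that duplicates in the first list are one of those unhandled edge cases):

```python
# O(n + m) intersection: build a set from one list for O(1) membership
# checks, then scan the other list once, preserving its order.
def intersect(arr1, arr2):
    seen = set(arr2)                       # O(m) to build
    return [x for x in arr1 if x in seen]  # O(n) scan with O(1) lookups
```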


Thanks! I appreciate you taking the time to explain and critique my solution.

I've been slowly trying to go back and learn some computer science topics that I missed out on while self-teaching, and I still have a long way to go.


No problem, even if you had it in school it takes some practice thinking this way to become fluent.

A fun exercise to give reality to the theoretical is to run some benchmarks for your solution, providing inputs of size 10, 100, 1K, 10K, 100K, 1M, etc. and seeing how the performance changes. You can also (with some benchmarking tools) look at the memory usage as well.

When talking about algorithm performance, there are usually both time and space (memory or disk) costs to consider.

One thing you'll notice is that most solutions, even the O(n^2), are "fast enough" at small sizes that it doesn't matter what approach you take. In those situations, readability/clarity usually wins such as with your proposed solution.

If I knew my lists would never get larger than 100 elements (or even 1K), I would totally do what you did. The secret is in knowing the problem you're solving.
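The benchmark exercise suggested above can be sketched like this (sizes are illustrative, and absolute timings depend on the machine; the point is watching how the gap widens as n grows):

```python
import timeit

def intersect_quadratic(a, b):
    return [x for x in a if x in b]      # list membership: O(n*m)

def intersect_linear(a, b):
    seen = set(b)
    return [x for x in a if x in seen]   # set membership: O(n + m)

# Time both on growing, half-overlapping inputs.
for n in (100, 1_000, 10_000):
    a, b = list(range(n)), list(range(n // 2, n + n // 2))
    slow = timeit.timeit(lambda: intersect_quadratic(a, b), number=3)
    fast = timeit.timeit(lambda: intersect_linear(a, b), number=3)
    print(f"n={n}: quadratic {slow:.4f}s, linear {fast:.4f}s")
```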


The question is whether you allow prebuilt Map-type data structures.

Then it is easy: just bucket-count, with keys being the integers in the lists, and iterate afterwards filtering for count >= 2.

Now if no Maps are allowed, it is a bit trickier... you might need your own hashing function or, as someone else suggested, to write your own O(n log n) sort.


Sorted or unsorted?


You can sort yourself, it's only O(n*logn), so it's still gonna be better than O(n^2), even when you include the linear scan to get the intersection.
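The sort-then-scan approach can be sketched as follows (duplicate handling is simplified; with repeated elements in both lists, matches would be emitted more than once):

```python
# O(n log n) intersection: sort both lists, then do one O(n + m)
# two-pointer pass, advancing whichever side holds the smaller value.
def intersect_sorted(arr1, arr2):
    a, b = sorted(arr1), sorted(arr2)
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out
```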


And this is why the fundamentals matter.


I agree with a lot of this, but I think it misses an important argument for why it's worth learning programming fundamentals: the eternal nature of it.

Think about it: in our fast-paced industry, few things have the property that you can learn them today and they will remain useful and up to date until you retire. Programming fundamentals are one of those things. Why would you avoid learning something that will be useful for your whole career?

With how much time we spend learning new frameworks, APIs, and programming languages that rarely have the same staying power, learning some fundamentals is a no-brainer.


One can argue endlessly about the necessity of knowing theory, so instead I will describe a funny situation I once saw.

There was a piece of code that had to receive items asynchronously, one by one, while they were being downloaded. Also, duplicates had to be found by some key and merged together. So first a dumb and straightforward implementation was done using a vector and linear search. After some time, management decided to stress test it with an extreme number of items, and instead of taking a fraction of a second, or seconds, it was taking minutes, which was unacceptable. A colleague of mine was assigned the task of fixing this. What did he do, do you think? He spawned another thread for the duplicate search. On single-core hardware. It didn't help. So he asked our team leader for help. What did he do? He increased the share of CPU time for that thread (it was an RTOS). That also didn't help. I was watching all this with interest and suspicion, and at some point remembered that binary search is much faster than linear search. So I removed all the code for the additional thread, kept the vector sorted at all times, and applied binary search. Download time went from minutes to 10 seconds.
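A sketch of that fix, using Python's `bisect` for illustration (the item/key shapes are hypothetical; note that inserting into a contiguous array is still O(n), but the duplicate search drops from O(n) to O(log n) per item, which is what dominated here):

```python
import bisect

# Keep a list sorted by merge key so each incoming item needs only an
# O(log n) binary search instead of an O(n) linear scan for duplicates.
# `keys` is kept sorted in lockstep with `items`.
def add_item(items, keys, item, key):
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        items[i].update(item)       # duplicate key: merge into existing item
    else:
        keys.insert(i, key)         # new key: insert, keeping sort order
        items.insert(i, dict(item))
```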

This is how I learned the importance of algorithms (which for some strange reason weren't taught at my university). But my colleague didn't learn anything, because he still thinks the theory of algorithms is for the kind of people who take part in programming contests, not for real programmers who "get sh*t done."

Let's face it: even programming is full of people with an attitude of anti-intellectualism. Although this whole industry wouldn't have been born with such an attitude.


Programming fundamentals are valuable, but whiteboard interviews have little to do with programming fundamentals.

If you look at sites like HackerRank or LeetCode, you will see that many questions asked in actual interviews are the sort of algorithm puzzles you might see in programming contests like ICPC. As such, they can be impossible to solve on a whiteboard or in a Google Doc in 45 minutes unless either a) you are Donald Knuth or b) you have a lot of experience with algorithm puzzle contests and have solved a similar problem before.

However, a number of these questions do seem to be NP-complete combinatorial optimization problems, so I think it may be useful to be able to tell your interviewer "well, this problem is NP-complete, so in the worst case it's likely to take exponential time."


Isn't this just the old engineer/technician dichotomy but applied to programming?

I mean sure you can design circuits without learning all the differential equations governing how each component behaves, and you can fire up a copy of SolidWorks and build a lot of really nifty stuff without ever studying resistance of materials. But if you want to be an engineer in one of those fields you're definitely going to have to knuckle down and learn the respective theory at some point, even if you find yourself using very little of it on the job.

I've seen people with a more practical "technician" training brag that the engineers are "mired in theory and can't do anything useful", and then I see people with a more theoretical background think anything practical is trivial and beneath them.

Wanna know what I think? Learn both!


I'm always surprised that companies interview based on obscure knowledge in specific PLs. If you know the fundamentals of CS (not just A&DS), you can learn a language in a month (sometimes a week). Concepts translate between languages.


Funny thing, regarding the music analogy: The Beatles never studied music theory. Not even the basics (i.e. reading sheet music).

They did, however, play a lot, and probably deconstructed and analysed many of the cover songs they played in their early days.


Fundamentals are important for software engineering, just like they are important for any type of engineering! I certainly would be more comfortable driving across a bridge designed by engineers who understand material properties and so on. Understanding the fundamentals allows them to select an appropriate design based on the requirements and budget.

That said, there is a lot of programming work where you don't need an advanced degree to be effective. But I still think fundamentals are an important part of application design - being able to select the appropriate system design for a problem.


While that's definitely true, people were building bridges for thousands of years before formal engineering was a thing.

Of course some number of people probably died crossing those, and Golden Gate scale bridges weren't built.

Just to torture your analogy, maybe it depends on the scale and weight requirements of the software as to exactly how much fundamentals will be required.

Sometimes all that is needed is to lay a 2x4 across a mud puddle and that's fine. It's also how a lot of people get started.


Does 1000+ Leetcode questions mean you've grasped the fundamentals?


Once I had to solve a memory problem because someone had assigned what should have been a byte array to a list of byte objects. Nice enough programmer for high-level concepts, but he needed the fundamentals. I think a good skills assessment is needed, and then you patch the missing knowledge. A smart resource who can learn and is nice to work with is still a good hire.
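The comment doesn't say which language this was, but as a rough C++ analogue (assuming `std::list` stands in for the "list of byte"): a linked list pays two pointers plus a heap allocation per element, often 24+ bytes of overhead to hold a single byte, while a contiguous container stores roughly one byte per element. The fix then boils down to copying into the right container:

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <vector>

// std::list<uint8_t> allocates a separate node per byte (prev/next
// pointers + allocator overhead each); std::vector<uint8_t> stores the
// same bytes contiguously. Same contents, a fraction of the memory.
std::vector<std::uint8_t> to_byte_array(const std::list<std::uint8_t>& src) {
    return std::vector<std::uint8_t>(src.begin(), src.end());
}
```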


TIL that Iron Maiden are masters of Music Theory


Surely the fundamentals of programming are converting between binary, octal and hex by hand, basic set theory, the standard control structures, etc.

The music theory analogy is a bad one, though: at the super high end it becomes more about the theory than the actual music.

Most musicians are not going to be using exotica like Xenharmony
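For what it's worth, the by-hand base conversion mentioned above is only a few lines of code; a minimal sketch (the function name `to_base` is my own):

```cpp
#include <cassert>
#include <string>

// Convert a non-negative integer to its textual form in the given
// base (2..16) -- the repeated divide-and-take-remainder procedure
// you'd do by hand, with remainders read off in reverse.
std::string to_base(unsigned n, unsigned base) {
    const char* digits = "0123456789abcdef";
    if (n == 0) return "0";
    std::string out;
    while (n > 0) {
        out.insert(out.begin(), digits[n % base]);
        n /= base;
    }
    return out;
}
```

For example, `to_base(255, 16)` yields `"ff"` and `to_base(8, 2)` yields `"1000"`.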


Just thought with the analogy with music is there a market for Fake/Real Books for developers.


What are good introductions to Data Structures and Algorithms for a hobbyist developer?


I've been slowly making my way through a course on udemy, in Javascript, focused on DS&A - https://www.udemy.com/course/js-algorithms-and-data-structur.... I've been developing for 10+ years, mainly on Frontend, but it's given me more confidence in how I approach writing code.


MIT Press ‘Introduction to Algorithms’ by Cormen et al. is an excellent book to learn from, although it is perhaps not an easy and accessible introduction; it’s academic and extensive, and would require time and effort. Other options, like a Coursera course, may be better if you want a superficial taste of the topic rather than a more in-depth exploration.


Coursera and many similar online resources all have good DS&A courses.

Check out https://www.coursera.org/browse/computer-science/algorithms or just search for free courses, they all cover the same things so just find one you like.


As someone that was almost entirely self-taught (started as an EE -also heavily self-taught), I’ve looked up many, many noses. One of the manifestations, is that I often don’t know the “official” name of a pattern or the “standard” way to do something, despite being quite good at it.

So I don’t do things the way everyone else does it, and I often fail to deliver or comprehend industry jargon.

But I have been shipping software for my entire adult life. I have an extensive open-source portfolio of extremely well-executed projects that anyone can clone, build and incorporate into a shipping project without worrying about quality or performance. I have had to deliver the goods -in public, and under a microscope- since I’ve started.

I’m highly disciplined, and have been working on large, heterogeneous teams, my entire career. These days, I work alone, or in small teams. Not by choice. Working for a Japanese corporation for 27 years, means that I am very used to hard taskmasters. I’ve had my feelings hurt more often than many.

So I’m used to backing up my rhetoric with deliverables, and watching those deliverables get inspected, criticized, rejected, and improved. It’s made me take Quality rather seriously. It’s also made me quite aware of schedules and deliverables.

Q. C. D.

I’m not particularly competitive, and don’t especially care whether or not I’m “better” than anyone else. I’m used to being the dumbest person in the room (I spent most of my career, working with some of the best engineers and scientists in the industry).

So I work well in teams, know how to bite my tongue, and listen.

Every single day -every single day- I start the day with some scary, insurmountable issue that needs to be solved, or the whole project goes into the bin; and I solve it -often in an unintuitive manner.

So I’m fairly good at solving problems. I’ve been doing it since I was 21, and trained as an electronic RF tech.

I also have dozens of articles, sites, and documentation that I’ve personally written; often going into great detail about how I architect, troubleshoot, test, configure and release. I tend to write in the vernacular, which also means I come across as plebeian.

So I’m pretty much an “open book,” and I’m fairly decent at writing in an approachable, accessible, entertaining manner, as well as coding.

I’d have been happy to have had a more formal introduction to things, but that was not in the cards.

Also, since I’m constantly in “ship mode,” I have to stay focused on practicum, as opposed to theory and exploration. That slows down learning some of the cooler, more esoteric things.

Nevertheless, every day is a new adventure, and I am still learning new stuff, regularly. I feel as if my experience helps me to establish a context that makes learning easier, faster, and more comprehensive.

There’s a lot to learn, and I have miles to go, before I sleep.


> I often fail to deliver or comprehend industry jargon.

In my experience, that alone can act as a powerful blocker. Although I'm quite sure that the jargon thing is being used as a proxy for something more substantial, I doubt that it's a useful one. I suspect it's actually more of a social-group thing (are you “in” or are you “out”) than anything else.

One question about the “Q.C.D.”, though (quantum chromodynamics?). Could it be that you meant q.e.d. — quod erat demonstrandum (≈ “what was to be proven”)? Lowercase to distinguish it from the acronym for quantum electrodynamics.


Quality, Cost, Deliverable.

The Japanese live by it. If you ever want to work with a Japanese company, it's a good idea to know it. I think Deming may have started it.

"Quality":

How do we measure the deliverable? Not just testing targets. Any form of quantification. Progress tracking goes here, as does a lot of things like CI.

"Cost":

What will the cost of the deliverable be? Not just money, but also schedule, the number of folks working on it, prototypes, BoM, progress meetings, buy-in from stakeholders, risk management, etc.

"Deliverable":

What will be delivered? Please describe expected deliverable, what will be delivered at milestones defined in (Q) and (C).

It's a very rigid process (good for hardware), but can be difficult to deal with in software. I've learned to do my own version of it. I've written about it.

BTW: I just demonstrated "jargon," but in a different way. I do not think less of you for not knowing it.


Thank you for the explanation! I actually didn't know that.


> I often don’t know the “official” name of a pattern or the “standard” way to do something

So what you’re telling us is that you don’t communicate well, or at all, with other people. Other people who will, later, have to maintain your code.


I saw what you did there.

That was cute.

Hey, here’s an idea…why don’t you check for yourself? All you need to do, is go to my HN handle.

https://news.ycombinator.com/user?id=ChrisMarshallNY


I’d rather hear from your co-workers, or, preferably, the current maintainers of software which you originally wrote.


Well...you're in luck!

My LI profile is full of testimonials from both. Feel free to ask me for references. I'm not looking for a job, but I can probably scare up a couple of folks willing to go on record on my behalf. I would also be happy to send you some very relevant links.

You could also order the DSLR interface SDK from my former corporation. One of the APIs is a cross-platform imaging device communication layer I wrote in 1995 (in C). I think they still use it.

EDIT: Listen, in all seriousness, I'm a really decent chap, as, I'm sure, are you. I regret that we have begun our relationship on this note, and I will no longer engage you. I scrubbed some of the snark off of this reply.


I wasn’t meaning to be overly snarky in my original comment; it was also an invitation for you to push back with some references showing that you do communicate well with others and that your code can easily be maintained by others. This, I imagine, could be due to a few different possibilities. It might be that:

1. You do use common idioms and patters, but you don’t know that you do. Perhaps because you haven’t specifically studied them; you might have just absorbed them by cultural osmosis.

2. You might be good enough at explaining things (and naming things) so that you, like Richard Feynman, can explain the most complex things in common simple terms, even though they are not the industry standard terminology.

3. You always find solutions and algorithms which are so simple to understand that you have never needed anything more complex.

4. You have some other explanation which I can’t think of just now.

Instead, what you said was some variation of “my work speaks for itself” and “do your own research”, which does not inspire confidence in your communication abilities.


OK. I know I said I wouldn't engage more.

I lied. I liked your response.

What I wrote was completely self-expository. It really is who I am, and I'm used to writing that way. It was not a challenge, and it was not an attack on anything (I can easily do "attack" -I'm an old UseNet troll).

As I mentioned above (below?), it saddens me that people see self-exposition from others as weakness and a vector for attack. That's one of the reasons that I'm no longer interested in working for anyone else in this industry, and why I no longer want to be a manager.

Things are nasty these days. Hyper-competitive. We can't just be good at something. We need to be better than someone else.

We can't just be enthusiastic. We need to be combative. Everything is a gladiatorial contest. Thumbs up/down.

After leaving that silo I'd been in for 27 years, I started looking for work, and rapidly discovered what this industry I love has become.

I don't really want to play in the cesspool, but I love to code.

I guess I am "taking my toys and going home," but I am very, very fortunate in being able to do so.

I'm working on a fairly ambitious project, right now, that will (I hope) help out a lot of folks. I have some prior art in the area. I'm working for a non-profit, for free.

And loving it.


This right here. We often talk about "Communication skills". This isn't separable from "being able to describe a problem or solution using standard terminology".

It's no less important to know what common things are called, than it is to know how common things work.

There is no brighter red flag than people who just "get things done" or "solve problems in unorthodox ways", so a candidate who describes themselves as a problem solver but might not know all the "words" for it is, to me, a big red flag. This isn't gatekeeping, or saying everyone needs a CS degree to be a decent developer. Terminology is a trivial addition to the skill set, and ignoring it just means one doesn't care about communication.


...and then, there's spending 27 years, working for a conservative Japanese corporation, where they don't know the jargon (but are seriously good engineers). I learned to make things fairly clear, without jargon, and deliver very maintainable code (I'd have been fired, otherwise).

Listen, I understand that researching people before we insult them is passé, but I'm a really good co-worker. I wrote the above in a manner that was all focused on me, not anyone else. I am sad that you saw it as an opportunity to attack.

Maybe we would not get along IRL, and that's fine (but sad). I've spent my entire adult life, working with some of the most difficult folks on Earth (not in the tech industry). I have learned that we all have a story, and we all have value.

If the only way that we can measure our personal worth is to compare it favorably against others, I can't help, there, except to say that it didn't work for me.

YMMV


Most companies that people put up as examples of asking "fundamentals" that are unrelated to the job, actually don't at all ask candidates to regurgitate fundamentals. Instead they ask them to solve a programming problem, which commonly requires them to apply their knowledge of some very basic fundamentals. This is very different. "Very basic" is important to note here, because almost no companies nowadays ask anything other than very basic application of common data-structures. If we are sticking to the music analogies - this is closer to asking a musician to perform a well known piece of music from a sheet than regurgitate esoteric music theory.

The performance part might be a problem for some people, and I agree - programming is not music, and it's tricky and unnatural to program in front of an audience, but this is a completely learnable skill with practice. That said, I agree it will always penalise the subset of people who perform less well under pressure.

There is a flip side to this though which often goes overlooked and author hints at it: programming interviews are a great equaliser - there are no credential checks (or they are getting much less common) for getting into top companies, and you can be based almost anywhere in the world - all you need to do to get in is crush the coding interview which is totally a learnable skill. This has opened many doors and economic opportunities for people that did not have them even a decade ago, and I think we as an industry don't talk enough about the opportunities our attitude to barriers to entry create.

Finally there's the question of whether the skill tested in coding interviews are really unrelated to the job performance. I've seen arguments that state: "some data shows that how a person performed on the interview is unrelated to how they perform as an employee". But this kind of reasoning almost certainly suffers from selection bias, as we'd really need to include how people who did not pass the interviews perform at the job (for obvious reasons companies don't have this data). I would bet that there would be a much stronger link with interview performance in that case.

Overall I think it's more likely that coding interviews are the way they are because they work extremely well in practice, most of the time. They are also on average a massive equaliser and overall a good thing for almost everyone involved provided they are executed well. They do have their problems, but commonly proposed alternatives (like take home tests etc.) have problems too. Coding interviews are not a thing because there is some kind of collective madness in the whole industry, where people in charge of procuring the most expensive resource just can't help themselves succumb to group-think. They are a thing because they work well (but not perfectly).


The only answer is it depends on your job, IMO



