Today, I recognize that good software development owes nothing to data-structure knowledge or obscure algorithms. That knowledge can't hurt, but very few jobs or languages require you to build a skip list from scratch. Instead, writing good software requires the ability to maintain a system, to debug problems, and to give constructive code review. That's it.
How this plays out in interviews is interesting for just how inconsistent it is. Sometimes I win the algorithm lottery: I've had reason to implement the obvious solution myself at least once, and the interviewer learns that my memory works. Sometimes I don't know what they're going for but come up with a reasonable answer anyway, and the interviewer is thrilled I managed to have an independent thought. Sometimes I lose, and the interviewer is clearly trying to guide me to the answer they want to see while I have no idea what they're getting at.
Though not quite as bad as a friend failing an interview for not using a hash table, even though the answer he described was a hash table—just with the identity function as the hash.
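For what it's worth, that answer really is a hash table. A quick sketch of the idea (my reconstruction for illustration, not the friend's actual answer):

```python
# A bucketed table whose hash is the identity function. It is still a
# hash table by any reasonable definition: keys map to buckets, and
# colliding keys are chained within a bucket.
class IntTable:
    def __init__(self, capacity=16):
        self.buckets = [[] for _ in range(capacity)]

    def _hash(self, key):
        return key  # the "identity hash"

    def add(self, key):
        bucket = self.buckets[self._hash(key) % len(self.buckets)]
        if key not in bucket:
            bucket.append(key)

    def __contains__(self, key):
        return key in self.buckets[self._hash(key) % len(self.buckets)]

t = IntTable()
t.add(5)
t.add(21)  # 5 and 21 collide: both land in bucket 5 of 16
print(5 in t, 21 in t, 7 in t)  # -> True True False
```

For integer keys with a decent spread, the identity function is a perfectly serviceable hash; CPython itself hashes small ints to themselves.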
(Though maybe about as absurd as one interviewer digging for any formal training I might have had in C++. Like you can teach that language in a semester—please. There are maybe 4 proper experts on the planet.)
There was this one candidate who, by mistake, was given the test twice. Apparently he took offence at that. He found that his friend had done the exact same tasks, so he copied them.
The designers of that test tout its ability to detect plagiarism, but it didn't here.
Anyway, we eventually hired him: even though his first attempt wasn't particularly impressive and he cheated, a face-to-face interview revealed that he was a skillful programmer.
One thing I learned from this experience is that most of the time these tests simply confirm that the candidate went to college - sometimes a particular one.
While I don't think that the current state of interviewing is perfect, this statement is pretty out there. Good software development owes NOTHING to data structure knowledge? Choosing the right data structure is critical for software that performs well (or performs at all once you get to a certain scale), and demonstrating knowledge of various data structures shows at least that a candidate knows a bit about which tools they have available to them on the job.
Of course not many people are implementing skip lists from scratch in their everyday job; that's why, in the list of 500 questions, none of them is "Implement a skip list". Data-structure questions rarely ask you to implement some data structure from scratch; they ask you to look at a problem and figure out which data structure best lends itself to the solution.
I won't argue that questions about obscure algorithms aren't a problem, but good interviewers aren't asking questions that need obscure algorithms; they're asking questions that need algorithms like DFS, basic variations of sorting, and basic DP.
> Instead, writing good software requires the ability to maintain a system, to debug problems, and to give constructive code review. That's it.
Code review is the best time to talk about data structures and algorithms as they are a time to apply theoretical knowledge to real world problems. Good, constructive code reviews are about education and maintaining a quality code base.
Interview questions about data structures and algorithms reward memorization and classroom knowledge. I've worked with way too many mediocre developers who are great at whiteboard interview questions but who couldn't build a maintainable system if their lives depended on it. Consequently, I don't give a damn about a candidate's performance on these types of questions. Instead, I'd rather learn about some projects they've built, problems they've experienced, and their experience with code reviews.
For example, in junior math the prof went carefully through deriving the Fourier Transform. On the final exam, the question was to derive the Hyperbolic Transform. This was straightforward if you understood the FT, but not if all you could do was parrot it.
There are a lot of benefits to this approach:
1. Keeps the topic on something they are presumably comfortable with.
2. Gauges their seniority level: more senior folks can talk deeply about a wider variety of topics.
3. Tests their communication and explanation skills, again about a topic they should be comfortable with.
Some potential pitfalls:
1. Makes it harder to objectively compare candidates. I think this is impossible anyway, so I don't consider it a big deal.
2. Some candidates, especially junior ones, simply don't have many projects on their resume to talk about. In this case I'll often ask them to explain a skill or technology they are experienced with.
3. It's harder to conduct these interviews. Giving a problem and passively watching and helping as needed is a lot easier than keeping up an active dynamic conversation.
Firstly, write actual code to solve a simple but real, or at least realistic, problem. That could be a take-home exercise, or it could be on-site pair programming, or some other format. Memorable examples: a stock exchange (submit orders, get trades); a Tetris engine (submit blocks, get rows filled); an auction house (submit bids, get winners); an uploader for the Azure blobstore (submit blocks, get files); an online metric summarizer (submit samples, get summaries). This tests someone's ability to actually write code.
Secondly, suggest an algorithm for a real, or realistic, problem that doesn't have a well-known good solution. Memorable examples: subset-sum, but on an ordered set, where the solution must be contiguous; placing primers for PCR; online k-anonymisation. This tests someone's ability to think on their feet, and synthesize facts they've learnt into something new.
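The first of those examples is a nice one: on an ordered sequence, contiguous subset-sum reduces to finding a contiguous run with a given sum, which prefix sums plus a hashmap solve in one pass. A sketch, assuming integer inputs (the function name and interface are mine, not the interviewer's):

```python
def contiguous_subset_sum(nums, target):
    """Return (i, j) such that sum(nums[i:j]) == target, or None.

    Prefix-sum trick: prefix[j] - prefix[i] == target means the slice
    nums[i:j] is a solution, so store each prefix sum's earliest index
    and look up (current_prefix - target) as we scan.
    """
    seen = {0: 0}  # prefix sum -> earliest index where it occurs
    prefix = 0
    for j, x in enumerate(nums, start=1):
        prefix += x
        if prefix - target in seen:
            return (seen[prefix - target], j)
        seen.setdefault(prefix, j)
    return None

print(contiguous_subset_sum([3, -1, 4, 1, 5], 4))  # -> (2, 3), i.e. [4]
```

This handles negative numbers too, which rules out the simpler two-pointer approach; that distinction is exactly the sort of on-your-feet reasoning the question is after.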
I'm a bit dubious about the "tell me about something you worked on" question. Firstly, because it requires the candidate to have worked on something chewy, and there are lots of capable people out there who are unfortunate enough to have had crummy jobs where they don't get to do that. Secondly, because it attempts to evaluate the judgement of an individual by looking at the work of a team. Too often, as a candidate, I've had to answer "why did you do it that way?" with "because the client specifically asked for it" or "because my colleagues who were working on it at the time decided so", which isn't useful for me or the interviewer.
What do you mean by "maintain a system"?
I have only heard of maintaining systems in contexts where it means keeping a system running. I don't see what that has to do with the ability to WRITE software (good or otherwise). Hence, my question.
- Software is increasingly built in teams. We don't ask about "how you handled a difficult person on your code review" or "how would you break this change into two logical commits" or even "what's a good name for this function", all of which are more important than knowing the big O for some obscure interval tree.
- CS 101 algorithms have very little to do with how they are implemented. E.g., in a recent discussion with a co-worker I discovered that libstdc++'s hashtable iteration order depends not only on hash code but also on insertion order. Looking at how libstdc++ and MSVC implement hashtables was difficult and exhausting; there are many, many more details than the typical CS 101 approach of just hashing the key and directly looking up the bucket! It turns out that writing code that is really fast takes more than knowing the big O.
- Big O is such a garbage simplification of all the "fundamental" knowledge. Why not just ask some fundamental graph or combinatorial counting questions? Why nothing about what is computable and what is not, or which languages are regular? What about information theory? How come no information-retrieval or database questions? Or, these days with all the ML, why not some basic linear algebra?
A REAL engineer would spend the same time doing something productive....things like:
- Writing a sql optimization for their current company that saves resources
- Creating shared memory interfaces for faster processing of logs
- Modifying filesystems or kernel to create their own log processing
- Modifying JVM to optimize critical code
- Rewriting parts of a server to set up security alerts
- Fast encryption and decryption of in-memory cache records
- Writing an interpreter for faster generation of low level code for high speed processing
- And on and on
My point is, today's interview process treats engineers with this kind of complex experience like shit because they spent their time working on complex projects instead of memorizing these questions. But then these same companies will hire some mediocre engineer who cannot think outside the box but is an expert at memorization.
Said mediocre engineers then hire other mediocre engineers using the same process, because they do not even have the ability to grasp what kind of engineering would lead to the points above.
To top it off, these same mediocre engineers sweep this problem under the rug claiming they are trying to reduce false negatives. Lol!
Sad state of the industry!
It's similar to how investment banking jobs place such a high premium on Ivy League degrees with high GPAs, despite the actual job being so simple a high school student could do it. Someone who's willing to jump through all those hoops to get into a top school and graduate with a high GPA has a strong track record of following orders and is less likely to complain about 100-hour workweeks.
Try saying this obvious observation to some Google engineers in real life. I've never seen such butthurt adult-kids!
It would, if companies realized they could save millions by improving their interview processes and not having to acqui-hire so much.
But this insight requires smart people (who they eliminated with this interview process) to be already hired in those companies.
You realize they got hired because of their fame and expertise? They didn't have to go through intense whiteboarding. And even if they did, they would have been hired even if they didn't solve a stupid N-Queens problem.
I suspect your definition of CS fundamentals is only Algos and Datastruct problems. Said experts are very good at what they do. That does not mean they will whip out an algorithmic puzzle solution on a whiteboard under pressure.
I also suspect that those experts have better things to do than get a job at Google. They'd rather not go through this process. Google wants them to be there not the other way around.
1) You've not defined intelligence in an agreeable way, such that all parties are discussing the same thing (you seem to have interpreted his point about Sundar Pichai as being that he is successful due to being CEO of Google, but that may not be the case).
2) You've not defined intelligence in a rigorous way, such that all parties know it's meaningful even if they agree on it (is being CEO of a publicly traded company a robust proxy for intelligence? Can we positively correlate corporate success with intellectual formidability?)
3) You have not established any empirical way to assess intelligence in these people aside from what you'd consider tautological (CEOs are intelligent, Pichai is a CEO, therefore Pichai is intelligent) and therefore self-evident. But you can't discriminate between the relative intelligences of members in this group, especially remotely on a forum, which makes this an issue even if 1) and 2) are solved.
I know, taking dialectics this seriously on an internet messageboard is a major pain in the ass. But this brings us full circle to software engineering interviews. One person mentioned a potential proxy for intelligence they consider self-evident, someone else argued against it without full and transparent clarification, and the cycle continues. This is how whiteboard interviews were born, this is how they became controversial, and this cycle continues even on discussions of whiteboard interviews.
People who want to filter for "intelligent" in their interviewing process can't define the term. People who come up with methodologies for assessing it, or identify proxies to it, can't prove they work. People who argue against these assessments or proxies can't prove they don't work because no one can even agree on what should be assessed. It's a Wittgensteinian clusterfuck all the way down, but the same human habits that gave rise to whiteboard interviews are perfectly manifested in the counterculture reaction to them and the resulting controversy.
I propose instead that you resist the temptation to use the word "smart", even in situations you'd consider the quality self-evident. "Intelligence" is like AI - once it's rigorously defined, no one agrees it's AI and it's so narrow in scope that no one cares about it because it's not weaselly and magical anymore (example - the IQ test). When you appease everyone's inclinations and talk about it without defining it, madness ensues.
Rigorously define a problem space, correctly map a set of empirically measurable capabilities to it, and hire people who demonstrate those capabilities. But don't use descriptions like "smart" which mean everything to everyone.
My last line was, "Perhaps stop correlating wrong things together?" My intention was to indicate that the person who wrote about Sundar Pichai was trying to correlate something about the Google CEO with my original comment about acqui-hires.
It was not an exercise in defining intelligence. I simply had to point out the naivety of the correlations the parent commenter was trying to make.
>> 2) You've not defined intelligence in a rigorous way, such that all parties know it's meaningful even if they agree on it (is being CEO of a publicly traded company a robust proxy for intelligence? Can we positively correlate corporate success with intellectual formidability?)
Again, there was no exercise to define intelligence. You are misreading the context.
>> 3) You have not established any empirical way to assess intelligence in these people aside from what you'd consider tautological (CEOs are intelligent, Pachai is a CEO, Pachai is intelligent) and therefore self-evident. But you can't discriminate between the relative intelligences of members in this group, especially remotely on a forum, which makes this an issue even if 1) and 2) are solved.
Same response as above.
>> People who want to filter for "intelligent" in their interviewing process can't define the term. People who come up with methodologies for assessing it, or identify proxies to it, can't prove they work. People who argue against these assessments or proxies can't prove they don't work because no one can even agree on what should be assessed. It's a Wittgensteinian clusterfuck all the way down, but the same human habits that gave rise to whiteboard interviews are perfectly manifested in the counterculture reaction to them and the resulting controversy.
Interesting you bring this up. Let us play a thought experiment. Who would you rather work with?
1. A person who wrote a VM for handling their cross-platform message queue but couldn't solve N-Queens problem
2. A person who can solve N-Queens problem on the whiteboard but doesn't have the engineering appetite/desire to learn something like VMs in detail
Now that you know who you want to work with (hoping you chose 1), let me ask you this: why is data king? Have test scores ever been representative of smartness? Is a person with a 3.5 GPA worse than one with a 3.75?
If your answer to the above is no, then why is Google's hiring committee rating people with its own notion of GPA and hiring only those with a certain Google GPA?
Because, in that GPA competition, candidate 2 will win. But you didn't want to work with candidate 2 did you?
1) Here is a very small VM (fewer registers and fewer instructions than you can count on your hands). Write an interpreter for it.
2) Add an I/O instruction.
3) Having your interpreter and your I/O instruction, you are given at least one example of a program with a specific property. Write a program analysis which will detect the property soundly and completely.
Managing to finish (3) relies on knowing certain fundamental concepts from automata theory and program analysis. The company actually uses this knowledge regularly, so it makes sense that they ask about it. However, even parts (1) and (2) manage to test basic programming in a reasonable way.
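To make part (1) concrete, here's a minimal sketch of such an interpreter. The instruction set (three registers, four opcodes) is invented for illustration, since the company's actual VM spec isn't public:

```python
# A tiny register VM: three registers, four instructions.
# Programs are lists of (opcode, *operands) tuples.

def run(program):
    """Execute the program and return the final register file."""
    regs = [0, 0, 0]
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOADI":            # LOADI r, imm: regs[r] = imm
            regs[args[0]] = args[1]
        elif op == "ADD":            # ADD r, s: regs[r] += regs[s]
            regs[args[0]] += regs[args[1]]
        elif op == "JNZ":            # JNZ r, target: jump if regs[r] != 0
            if regs[args[0]] != 0:
                pc = args[1]
                continue             # skip the default pc increment
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op}")
        pc += 1
    return regs

# Sum 5 + 4 + 3 + 2 + 1 into r0, using r1 as a counter and r2 as -1.
prog = [
    ("LOADI", 1, 5),   # r1 = 5
    ("LOADI", 2, -1),  # r2 = -1
    ("ADD", 0, 1),     # r0 += r1
    ("ADD", 1, 2),     # r1 -= 1
    ("JNZ", 1, 2),     # loop back while r1 != 0
    ("HALT",),
]
print(run(prog)[0])  # -> 15
```

Part (2) is then a one-branch extension (say, a PRINT opcode), and part (3) is where the automata-theory background starts to matter.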
Step 2: Find somebody with relevant expertise to interview that candidate. Don't send the generic engineer who doesn't know anything beyond using hashmaps for all Java programs.
Wrong. Questions aren't designed to be memorized. It shouldn't take months, or years, to prepare for this stuff if you already have a CS background unless you're really slow.
Do you know that 'detect a cycle in a linked list' question? I can't imagine many people being able to answer that without having seen the question previously.
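For reference, the well-known answer is Floyd's tortoise-and-hare algorithm, which detects a cycle in O(n) time with O(1) extra space:

```python
# Floyd's tortoise-and-hare: a slow pointer advances one node per step,
# a fast pointer two. If there is a cycle, the fast pointer eventually
# laps the slow one inside it; if not, the fast pointer hits the end.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:  # pointers can only meet inside a cycle
            return True
    return False

# Build 1 -> 2 -> 3 -> (back to 2), then break the cycle.
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, b
print(has_cycle(a))  # -> True
c.next = None
print(has_cycle(a))  # -> False
```

Which rather proves the point: the two-pointer trick is not something most people derive live; you either know it or you don't.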
It's trivial to explain:
x^n = x * (x^2)^((n-1)/2), if n is odd
x^n = (x^2)^(n/2),         if n is even
Maybe that's what they intended. It's certainly legit if the job will have you actually doing, rather than applying, math. Except that it concludes you can math if you're smart enough to pretend to derive the answer you memorized after encountering it before. You might accidentally hire a bunch of crypto nerds.
The difficult thing about that shortcut isn't that you couldn't come up with it, it's that you wouldn't think to try coming up with it unless you are in a particularly painful situation.
I don't view this exponentiation speedup stuff as math-specific, it's something you can come across when efficiently updating string checksums under random access modifications and concatenations, or when solving some graph problem where you exponentiate an adjacency matrix.
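For concreteness, here's the textbook repeated-squaring implementation of that identity. Python's built-in pow already does this, so the function is purely illustrative:

```python
def fast_pow(x, n):
    """Compute x**n (n >= 0) with O(log n) multiplications."""
    result = 1
    while n > 0:
        if n % 2 == 1:   # odd case: x^n = x * (x^2)^((n-1)/2)
            result *= x
        x *= x           # even case: x^n = (x^2)^(n/2)
        n //= 2
    return result

print(fast_pow(3, 13))  # -> 1594323
```

The matrix version is the same loop with matrix multiplication swapped in, which is exactly how the adjacency-matrix exponentiation mentioned above gets its speedup.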
Do you want to work with people who memorize solutions or do you want people who are innovative, genuine problem solvers who've solved interesting problems?
Google apparently has an NDA for the interview questions they ask, so they're trying to avoid quizzing people on stuff that they've memorized. I know any place I worked at always tried to ask "fresh" questions to people. There's just no point if they already know the answer.
Dude, is your definition of problem solving limited only to Algos and datastructs?
Because I can imagine sql optimization in the following ways:
1. Compiler modification to use different instructions
2. Hinting branch predictors
3. Modifying the database environment to use different operations under the hood
4. Use different hardware
5. Use a different language
6. Allocate different memory configs
7. Use SSD
Do you even realize that CS is not algos and datastructs only?
An insight that can only come from experience and to someone who is actually interested in this.
I want to work with people like you, not with the person who can solve some stupid dynamic programming puzzle but can't do any real interesting stuff in the real world.
I'm seriously considering using it as a dog whistle and skipping the experience deep-dive (which tests whether you are a good storyteller), the systems-design questions (which test whether you can sync to the interviewer's sense of judgement), and the algorithm hazing (which tests social anxiety). But I'm worried there is some kind of discriminatory correlation baked into that metric.
Still, I'm glad I know people like that, because it's really easy to get a knowledgeable recommendation when I need a new keyboard!
However, I'm pretty sure that out of the many, many developers I've worked with, including some brilliant ones, I'm one of barely a handful who gives much serious thought to editors. This could also reflect a bias of location - HN'ers are more likely to geek out over editors, but I think in general people in Silicon Valley tend to care more about this, as opposed to people in Israel, where I'm from.
I do sometimes ask similar questions about more technical topics that the candidate should be familiar with, e.g. which DB do you use (backend engineer), which JS framework you use, etc. This is something that the strong candidates must have brushed up against, and should have a strong opinion about.
The other nice thing I find about asking this is that it isn't a question they've prepared for, so you tend to get an honest answer.
My initial thought was that you are a very careful programmer that tries not to make too many mistakes. I also thought that you probably try to write code in one pass and that you avoid TDD and other techniques that cause you to go back and revisit code again and again.
But, I cheated a bit and noticed in your profile that you have sysadmin experience, so now I wonder if the cheap keyboard is simply the keyboard that happens to be there. Nano is because you can't be bothered to spend your life learning vi and nano is the other editor that's guaranteed to be on whatever machine you happen to be logged in to. So that would make you pragmatic (as any good sysadmin is).
What's kind of interesting about this process for me is that I now have this utterly fictitious view of you based on ridiculous stereotypes. We could probably have a good conversation where you poke a million holes in my view, but without really challenging anything that I hold dear. Not really sure if it's a good idea, but it is definitely interesting to think about.
I think the person we're replying to is looking for someone who wrote a few popular Vim/Sublime plugins and uses an ergodox they built themselves.
I don't use an ergodox because I wanted an RGB keyboard, and I use sublime with a handful of simple plugins because that's what I like. I used to hate vim, but some of my coworkers love it, so I respect that and have tried to use it more often. :) The good news is the coworkers that live out of vim like sublime, too. They're just used to vim, nothing wrong with that.
It's not a terrible place to start though.
But isn't an interest in tools pretty common in most crafts/trades? If I were hiring a woodworker for my cabinet shop, I'd probably ask about favourite tools. I wouldn't actually care if they talked about how a bandsaw beats a tablesaw every time, or launched into a sermon about shopmade wooden handplanes versus metal monstrosities. It's the presence of an opinion at all that matters.
edit: I don't actually have a cabinet shop. :(
This is by far the most realistic scenario employees of most companies face. It doesn't matter so much that one doesn't remember the specific details of tree traversal or how linked lists are implemented. It matters much more that one is able to find it out quickly, understand it and apply it. Those are the skills you're looking for.
Anyway, algorithms matter.
Someone could go through these 500 questions and for (almost) all of them formulate variants/extensions that would be open research questions. Or alternatively, ask for each of them: is this the best possible solution? Can I prove it? What if I restrict what the algorithm can do? What if the data comes online? What if the input is noisy? etc. etc. etc.
Please stop telling companies not to use this method of interview.
I use it to filter myself out before wasting anyone's time.
What I see here is an AND gate, and their use of this process itself triggers the false in my own filters-for-job-positions system.
I see it as a uniform filter. The company is filtering me out, or in. And, if I understand the reality of their options for filtering methods, I can use that information to filter them back. Interestingly enough, this one is pretty uniform for me: if they use out-of-a-book problems in either an on-the-whiteboard or a pair-programming session, they have solved the problem of whether I want to work in that position, without my having to trouble myself with memorizing answers or waste their time being screened.
The filter I am applying here is against the out-of-the-book part of this. If a company instead offered a previously solved bug in a section of their own code, with enough information to debug it but perhaps not enough testing written into the system, that would not filter the position out for me. As long as the job would be similar in setup to the methods used during the interview, this would be fine by me. It's a difficult comparison, though, since the level of chaos internal to any interviewee is very high, and most positions would not be nearly as chaotic as the level this hiring process induces.
I think asking on-the-board questions, or running some pair programming, is a filter for perspective, and attempts to identify how an applicant handles chaos.
To put it another way, a company that runs out-of-a-book interviews wants people who, when they hear an alarm, will file out of the building in an orderly fashion. I personally walk towards alarms, not away, and am plenty happy not to be stuck in a position that isn't supposed to have someone like me in it.
My system is tragically flawed, of course. A company could easily use an interview process they consider arbitrary to filter the lowest tier of applicants (non-tech people fishing for a few months paid internet browsing), and I could, not recognizing the insignificance of it, preselect them false when they would have stopped that interview after ten minutes and hired me on a handshake.