There's a common narrative, too, that it's all about algorithm question prep and that it's a "game" you can win. One YouTuber who recently landed a ~$275K TC gig said it's not about being "intelligent", it's about practice and rapid pattern recognition (he was also pitching his own coding interview prep course). He said he forgot half of it after getting the job.
This could all be true, but this narrative is coming from sources that have a financial interest in making you believe it. What do you think HN? I mean, I know algorithm prep is an important part of the interview, but... there's just something sketchy about all these outfits I can't quite put my finger on.
(I also wonder if the people who get hired based on prep vs. aptitude then have a kind of survivor bias and hire others who are strong on this limited range of pattern recognition... could it be a problem for the industry?)
Most interviews have 4 sessions broken up in the following manner:
1 hour: coding
1 hour: lunch
1 hour: system design
1 hour: cross-functional
In my experience, though, the coding and system design sessions are the ones that really make or break candidates; if you don't pass both, you're not getting hired. And yes, most of those interviews just test how quickly you can figure out the right data structure and algorithm for the question at hand. And even if you know all of the data structures and the applications of each, it can be incredibly difficult to solve the problems in the allotted time.
The other funny thing is that your previous experience doesn't do anything for you other than maybe getting to skip the technical phone screen. Everyone goes through the same process. e.g. https://twitter.com/mxcl/status/608682016205344768?lang=en
On the bright side, it does look like many companies are now realizing this isn't a great way to evaluate someone's ability and are beginning to incorporate more practical exercises.
You could also argue that this gives a fair chance to those who have just come to the field.
> On the bright side, it does look like many companies are now realizing this isn't a great way to evaluate someone's ability and are beginning to incorporate more practical exercises.
It's the other way around: many more companies are incorporating LeetCode-style interviews as their first "filter", either via an online test or a quick on-site.
Here's what happened: it used to be that only the FAANG companies had the resources and the know-how to either create these "trick" questions or pick a few from the "exercises" section of their favourite algorithms book.
These days, thanks to the many "algo" challenge websites (like InterviewCake.com) that provide not only questions but also correct, detailed, and optimized answers, it's getting easier for employers to just pick random ones and use them as interview material.
It /might/ be /somewhat/ true. But I'm skeptical.
Anecdotally, when I started Interview Cake 6 years ago it was already true that most small companies my friends and I interviewed with were using these sorts of data structures and algorithms questions. The handful of exceptions were mostly companies that were outside the "scene" (usually because they weren't in SF or NY).
This was not the case 5-6 years ago.
I wonder how rigorous non-algo challenges can be inside of an hour. Or how you would “get a sense” of CS fundamentals without algo tests.
What are the top-level concerns, how do you break the problem down, how might your problems scale, etc.? There are a lot of questions I'd expect to get to, and this should be done along with the team, kind of like we're all working on the problem together.
I find it useful to see how well people can talk through the problem; it leads easily into questions about licensing and rights of reuse, types of errors, etc. If they suggest an approach they've used before, can they explain the likely failure cases and benefits? Are there workarounds, detection methods? For example, if you're doing text classification then tfidf+svm is a solid first thing to try, and there are easy ways it can fail which we could talk about.
There's a lot of that you can cover in an hour, and it tests whether someone can explain a potential solution to the team effectively, just as they would have to on a day-to-day basis. We can bring up specific types of problems that we face within it, what we've tried, and we can constrain the problem more or lead someone to starting points if it's a bit overwhelming.
edit - I guess this would fall under some data science fundamentals, but the approach I think works for CS fundamentals. What data structures could you use? What are the tradeoffs? It's not about finding the one optimal solution, but about how to proceed.
This is screening for specific domain knowledge (text processing) not general programming aptitude. That's ok if you want specific kinds of prior knowledge on Day 1 but it is not a way to hire generally smart people.
> I guess this would fall under some data science fundamentals, but the approach I think works for CS fundamentals. What data structures could you use? What are the tradeoffs? It's not about finding the one optimal solution, but about how to proceed.
This is exactly how most algorithm interview questions work.
I'm trying to understand what the OP meant by "real problems" not "academic puzzles". It sounded like they avoided hard algorithms yet "got a sense" of CS fundamentals somehow.
Not really, the domain knowledge here is much more the bibliometrics stuff, which we don't usually need. What I do need is someone who knows that they can't just take any data source they find, throw the latest deep learning hotness at it, and call it a day because the F score is over some random threshold.
You can absolutely use this to hire generally smart people. What you're right about is that I won't be hiring people who are generally smart but have no basic understanding of the types of solutions they'll need to work on and with. Given our team size and where we are, that's completely fine for me right now.
I think this helps find people that:
1. Are able to talk through a problem
2. Understand the kinds of issues they may face, and can discuss how to deal with them (including business-level work, when to and when not to use humans, etc.)
3. Have experience working on the kinds of problems they are going to face
The tfidf+svm example was not intended as "ah yes, they said the algorithm I wanted" but as a springboard into a further discussion. Maybe they talk about word vectors, whatever, can they explain what the pros and cons are? Where might it fail, or more importantly what would they want to test? Where do we get training data, how long might that take for reasonable quality, how do you measure that, etc.
> This is exactly how most algorithm interview questions work.
The puzzles I think they're talking about are "here is a theoretical problem, find the optimal solution". Like "you have X eggs and need to find the highest floor you can drop them from without them breaking" or the classic "implement a doubly linked list with a single pointer", despite the fact that _almost nobody_ would actually implement that.
I'm talking about giving an actual issue they're likely to face and talking through it. Maybe that's "how would you implement user flow for X, given that we've got institutional customers with multiple clients" or "we need to do rate limiting and have X servers, how would you go about that?". For the latter that might go into a discussion on what the problems are with different approaches in complexity, what the cost is of letting someone exceed their rate, of cutting someone off early, etc. Those aren't really my field so maybe the questions are a bit off, but the point is can they contribute to a discussion on the way forwards on a problem which represents something they will realistically face.
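(As an aside, the egg puzzle above does at least have a tidy dynamic programming solution. A sketch in Python, assuming the usual formulation of minimizing the worst-case number of drops; the function name and setup are mine:)

```python
def egg_drop(eggs, floors):
    """Minimum worst-case number of drops to find the critical floor.

    dp[e][f] = best over the choice of drop floor x of
               1 + max(egg breaks  -> dp[e-1][x-1],
                       egg survives -> dp[e][f-x]).
    """
    dp = [[0] * (floors + 1) for _ in range(eggs + 1)]
    for f in range(1, floors + 1):
        dp[1][f] = f  # one egg left: must scan floor by floor
    for e in range(2, eggs + 1):
        for f in range(1, floors + 1):
            dp[e][f] = min(
                1 + max(dp[e - 1][x - 1], dp[e][f - x])
                for x in range(1, f + 1)
            )
    return dp[eggs][floors]

print(egg_drop(2, 100))  # 14
```

Cute on a whiteboard, but the point stands: the structure of this problem has little to do with day-to-day engineering.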
The only large company I can think of that matches or even exceeds the offers those companies give out is (surprisingly) Snap. If we go smaller, Airbnb and Pinterest are up there too, but they don't have nearly as many employees or open positions.
Not sure that's true. Just a few days ago someone was commenting here about a frontend developer with a few years of experience who didn't know what a for loop was. These interviews aren't about how quickly you can figure out the right data structure; they're about separating developers from "developers".
In my experience, all the questions I was asked were fair game - all of them were about implementing a toy version of a real-life feature (simplified for the time constraints of the interview). All of them felt like "yeah, a competent engineer should be able to do this".
There's no question that a competent, well-rounded programmer should know these algorithms exist and when to use them. But whether testing if said programmer can implement them from scratch during an interview produces optimal outcomes... that's the million-dollar question. Anecdotally, yeah, sure, I did these things. In S/370 assembler, at that. But did I use 90% of them since then (and that was when Apple was the only FAANG in existence)? No. So I think the feeling is that this kind of interview tests not so much how good, or experienced, or what not, you are at programming, but rather how desperate you are to work at that kind of company, enough to spend a good amount of time re-memorizing algos and data structures you learned and forgot years ago.
I'm a developer with about 6 years experience in full stack web app development (Rails & Django mostly), and I recently got my ass absolutely kicked during a couple of live coding challenges that were very heavy on theory (also worth mentioning that I'm a self-taught dev without a CS degree).
I started talking to some former coworkers about my experience, and I got 2 main responses: A) it's imperative to have a CS foundation if you want to advance in your career, regardless of whether you have a degree, and B) employers only consider prior experience to a certain degree.
The two interviews that I bombed involved heavily contrived coding challenges that tested CS fundamentals in ways that I've never encountered in a professional environment, so it doesn't surprise me at all to see services like this popping up.
Compare it to equivalent high-paying fields like finance or law, where you can hit a career ceiling based simply on what school you went to.
I hate studying for tech interviews. It feels like a sisyphean task, since I'll get really into prepping for a few days, then taper off until I've forgotten everything. At least it's easier to start up again the next time. This is now my third time attempting to prep.
I wish I was brave enough to just quit my job and spend 2 months studying. I'm in SF now, and all of my friends here make at least $300k when counting stocks and bonuses. I'm just barely short of half that (with no stocks or bonuses), and getting there is my next big career goal.
I do think it would do you some good to read up on core CS and algorithms, though. I also did not get a CS degree and learned in the field, but now, having studied up on it, I feel much more in command of my craft. These things do come into play now and then; it's not just brain teasers. It does take weeks to months to really learn, though, so I understand the resistance.
I think if you have a really strong “portfolio” of individual work that can be validated as being yours, like side projects, that can be a valid pathway into at least non senior roles. Maybe you’re weak on algos and other team members can help out at key points.
But I don’t know a better objective and direct measure of coding aptitude inside of 60 minutes. My issue is not with it in principle, at least for senior roles. It’s taking it to an extreme and misinterpreting the results. It gives this false sense of certainty about aptitude.
I agree with you on this. They have taken this to the extreme; the questions are getting harder and harder.
This is hard, tbh, because if you're an employer who used to be able to filter candidates by asking "easy" algorithm questions and woke up one day to find that everybody knows the answers, what will you do?
"It used to work, so we just have to improve it, right?" => make the questions more challenging.
Maybe that's what the industry needs: its own SAT. Heck man, that would make interviewing a lot better. You take one brutal test over a few hours, get your score, and then apply to multiple companies. You don't have to do dozens of whiteboard code interviews. Job postings specify score ranges. There are different score components (algos vs. system design, like math vs. verbal).
I guess there are startups like triplebyte trying to do that, but they're not as independent as the SAT. They make money off hires.
FAANG and many SV companies will ask 3-5 LeetCode questions and 1-2 system design questions. Some have started to incorporate behavioral questions (like Amazon with its 14 LPs).
You'll do the same thing most other professional industries do. List the requirements for the job: x to y years experience with technologies a,b and c. Grab a handful of the most polished resumes, interview them for personality, and hire the one you liked best.
Silicon Valley thrives on tricky algo questions. That's how they've been hiring engineers for a long time.
I was failing interviews too until I picked up an undergrad CS algorithms book and read through the first few chapters on data structures and theory. Then I signed up for LeetCode premium. After a couple months of reading and practicing, I was getting an offer from every place I interviewed at. Now I’m at one of those big name software companies, and if I’m ever asked to interview a candidate—well, I don’t know what I’ll do. Maybe something with a binary search tree. I’ve literally never had to use that in 8 years of development. :)
I'm horrified because I'm seeing some gross incompetence in the realm of organization in thinking, clean infrastructure and security awareness. And some of this punishes the old(er).
I'm skeptical because I think many of these interviewers are themselves lousy systems people, are building environments that may need to be bailed out one day, and to boot are probably not people persons. A people person can sense better how to conduct an interview to feel out how to determine whether the candidate is clever and useful.
I took a managerial route so I'm mostly immune now, but boy do those cocky coding interviews stick in my craw when I think about them.
I would not have passed the Google interview if I had not prepared. After 3 years, I have completely forgotten most of the prep. I couldn't even implement a binary search right now.
That said, I do have an innate ability for problem solving. I know another person, who has prepared significantly more than me (multiple months full-time) but still wasn't able to get an on-site interview. In my opinion, it requires at least a minimum level of intelligence, luck and preparation.
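(For reference, the textbook iterative version really is only a dozen lines; the part that's easy to fumble under interview pressure is the boundary updates. A quick sketch:)

```python
def binary_search(arr, target):
    """Iterative binary search over a sorted list; returns an index or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1  # target can only be in the right half
        else:
            hi = mid - 1  # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```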
It's akin to the fitness world. There are tons of people selling whey protein, fitness apps, workout "rep" programs, etc. Can't blame them; they're selling what people want.
If you want to give "quiz" interviews, you will get employees who are good at doing them. Like those CS grads in my class who copied each other's take-home assignments yet still passed because they were good at memoizing.
Furthermore, this is the first training session for your future employee: you are inducting them into your company's culture. If "answering meaningless questions for the sake of it" is what you want to portray to them, so be it. They have been warned!
Selecting for a certain subset of people is a business decision. Be aware of what interview practices mean in the context of how that company operates. If the job ad has a sticker list of tech hype, a promise of re-dev'ing a monolith into k8s microservices, and a leetcode interview, you already know a lot about that company!
This has its drawbacks too. You're biasing towards familiarity with your code base's specific tech stack. Good engineers can learn new tech stacks in days/weeks, so that's not that important to screen for.
Plus "everyday" code is a lot of plumbing and busywork, so it's not as good a screen for intelligence. Algos at least cut to the "hard stuff".
> Like those CS grads in my class who... passed because they were good at memoizing.
Getting good at memoizing would be a perfectly fine reason to pass a dynamic programming class. :-)
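(The pun lands because memoization genuinely is the heart of top-down dynamic programming. A minimal Python sketch:)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion is ~O(1.62^n); caching each subproblem makes it O(n)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, instantly; without the cache this would take hours
```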
What's next? Gradient descent with no gradients? Fourier transforms with no functions?
If math is so boring, just skip the whole thing; don't try to kid yourself by saying you're not doing it when you are in fact doing it.
I was disappointed.
Big-O notation is a useful tool in a programmer's belt; knowing the math behind it in detail is less useful. What's the problem? It's the same thing as not needing to know how an engine works to be able to use a car.
Big-O notation is a mathematical notation. By using it, you are using math. It's the same as using a car and saying you are not using a car.
This forces us to skip crucial points, like why O(n^2) truly, identically equals O(n^2/2 + 5n). The author instead seems to think they are approximately the same because the lower-order terms are small compared to the highest-order one.
We only need to explain the definition, and the reader can then understand why an O(n) algorithm is also in O(n^2), why some O(n^3) algorithms are in practice much faster than others in O(n^~2.4), etc.
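To spell out the definition being referenced (the standard one, stated here for convenience):

```latex
f \in O(g) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N} :\ |f(n)| \le c\,|g(n)| \text{ for all } n \ge n_0
```

From this, n^2/2 + 5n <= n^2 once n >= 10, and n^2 <= 2(n^2/2 + 5n) for all n >= 1, so each function is big-O of the other and the two classes are identical as sets, not merely approximately equal.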
In fairness, this is actually why we care in practice about asymptotic issues. The fact that there is a rigorous mathematical viewpoint that makes them strictly comparable is helpful for being confident about the non-trivial results -- but the reason we care about it is because we want to know roughly how expensive our code is (or will be).
I was annoyed to see that comment about division, as of course dividing n by 2 has exactly the same effect regardless of the size of n. This is not the reason we ignore multiplicative constants in complexity analysis.
It reminded me of the joke that Earth's circumference is pretty much the same as its diameter, since the size of pi is negligible at that scale.
What is the reason?
For example, an O(log(n)) function will be faster than an O(k * n) function, for any fixed k, when n > j (for some fixed j, i.e. when n is large enough). When comparing major classes of functions (n, log(n), 2^n, etc.), constant multiples don't matter, mathematically.
The other reason is that when comparing two functions with a similar number of steps, let's say 5n vs. 2n, it's impossible to say which is faster in the real world, so it's simpler and just as useful to collapse the set of O(k * f(n)) functions into O(f(n)).
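(A toy sketch of the first point, with made-up cost functions: a quadratic with a small constant still loses to a linear function with a huge constant once n is big enough.)

```python
def linear_cost(n):
    return 100 * n  # O(n), but with a large constant factor

def quadratic_cost(n):
    return n * n    # O(n^2), with constant factor 1

# Find the first n where the asymptotically worse function becomes slower.
crossover = next(n for n in range(1, 10**4) if quadratic_cost(n) > linear_cost(n))
print(crossover)  # 101: beyond this point the constant factor is irrelevant
```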
Another answer could be Moore's law: machines do get faster over time. (And also memory gets cheaper.) And so we wish to define efficiency (time or space) in a way which does not depend on the current technological state.
My former prof would have me spanked for an imprecise statement like this.
We're doing a poor job with navigation on the site ATM, so here are some "you might also like" hits for you:
A piece where we derive most of the main data structures step by step, starting with naked bits in RAM:
A reference / cheat sheet for the main data structures:
A reference / cheat sheet for sorting algorithms:
However, CS fundamentals are the language of software engineering. It's important for engineers to be able to communicate efficiently about things like runtime analysis, indexes, concurrency models, type systems, etc.
Your mechanic knows how a carburetor works, (s)he knows how brakes work. Mechanics have a common set of fundamentals that they all know and share, and can use to communicate with each other about their work. The purpose of this knowledge is not to be able to fabricate new car parts, although given time and the right equipment, it could probably be done.
The purpose of CS fundamentals is not so much to implement complex algorithms as to understand how to use the tools and libraries that are based on them.
My experience says really good engineers know the latter, but I'd like to hear if this holds for others.
This function runs in O(1) time (or "constant time") relative to its input.
And if stdout is piped to a process that isn't consuming its input, it might never terminate naturally.
Also, since Python has arbitrarily large integers, an expression like "index += 1" probably runs in O(log(index)) time, not O(1) time.
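(This is easy to confirm: CPython ints are arbitrary precision, so the cost of arithmetic grows with the number's bit length. A quick sketch; the values here are just for illustration.)

```python
import sys

small = 1
big = 10 ** 10000  # ~33,220 bits

print(big.bit_length())  # 33220
# A bigger int carries more internal digits, each of which
# arithmetic like "big += 1" may have to touch.
print(sys.getsizeof(big) > sys.getsizeof(small))  # True
```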
More realistically, the page of memory that items is sitting on could be swapped to disk, requiring the OS to do some big operation.
Trying to write a piece of code with the big-O you are looking for generally has a huge number of caveats. I'm undecided on whether this is a feature (abstraction) or a flaw (easily misleading).
But it doesn't matter here, because we have chosen n to be the size of the array; that is what we are studying.
Here it is big-O "constant time" because the execution time doesn't depend on the size of the array, assuming it is really big. It's not that every call of the function will take the same time to run.
The reality is much more complicated, with things like caches to take into account. As a result, complexities under O(n) often don't mean much in practice. But the higher you go, the more important it becomes. As you get to O(n^2) it is time to take a hard look at your algorithms, or make really sure that n is small and stays small.
Then I sort of had an epiphany: when will I ever need this? (Or rather, "I am too old for this...") I must be the perfect customer for Interview Cake: paid, but doesn't use it.
I make my own work and those kind of problems just do not come up.
I mean CLRS was fun a long time ago, but it seems to degenerate into skeet shooting pretty quickly:
“Shooting skeet eight hours a month was excellent training for them. It trained them to shoot skeet.”
It seems kind of basic, but I still find myself manually checking code instead of relying on a tool.
It is possible to determine interesting properties of programs if you weaken your requirements to approximations and/or add significant constraints. You can maintain properties by construction if you only allow yourself to use certain kinds of building blocks in assembling programs. The less powerful your computational model, the easier it is to prove things about it.
The core idea of a type system is to form a symbolic representation of the semantics of your program which is simpler than the program itself, so that you can prove properties of that simpler system instead of the intractable problem of doing so for the actual program. (If you aren't careful, your type system could end up sufficiently powerful that it is itself undecidable, and you're back where you started.)
That theorem proves that no general algorithm exists. That doesn’t mean it can’t be done for a particular program. The real challenge is that modern compilers and processors will execute in a way that is hard to predict from source code.
People don’t do it not only because it’s hard, but also because running the function and measuring time and memory is much easier, and also much more precise.
Even if you don’t care about time and memory complexity and only care about these useless O(N), it’s still very hard because complexity depends on input data. E.g. bubble sort is O(N^2) but it can also be O(N) if the data is mostly sorted already and only a single element needs to be moved.
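(Concretely, the O(N) best case comes from bailing out after a full pass with no swaps; a sketch of that early-exit variant:)

```python
def bubble_sort(items):
    """Bubble sort with an early exit: O(n^2) worst case, O(n) on sorted input."""
    a = list(items)  # don't mutate the caller's list
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # a full pass with no swaps means the list is sorted
            break
    return a

print(bubble_sort([2, 1, 3, 4, 5]))  # [1, 2, 3, 4, 5] after just two passes
```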
f(n) = f(n-1) + f(n-1) has runtime O(2^n);
f(n) = f(n-1) + f(n-2) has runtime O(~1.62^n);
f(n) = f(n/2) + f(n/2) makes O(n) calls (it's O(n log n) total only if each call also does linear work, as in merge sort);
for i = 1 to f(n) has runtime f(n)... good luck determining a function's output through static analysis.
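(Those growth rates are easy to check empirically by counting calls rather than timing them; a sketch, assuming trivial base cases:)

```python
def calls_double(n):
    """Call count of f(n) = f(n-1) + f(n-1): 2^(n+1) - 1, i.e. O(2^n)."""
    return 1 if n <= 0 else 1 + 2 * calls_double(n - 1)

def calls_fib(n):
    """Call count of f(n) = f(n-1) + f(n-2): grows as ~1.62^n."""
    return 1 if n <= 1 else 1 + calls_fib(n - 1) + calls_fib(n - 2)

def calls_halve(n):
    """Call count of f(n) = f(n/2) + f(n/2): about 2n calls; the familiar
    O(n log n) total assumes linear work inside each call, as in merge sort."""
    return 1 if n <= 1 else 1 + 2 * calls_halve(n // 2)

print(calls_double(10), calls_fib(10), calls_halve(1024))  # 2047 177 2047
```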
Getting static guarantees about performance would require carefully constructing your code with the tool in mind.
 Although I still find it outrageous that CS departments describe their algorithms class as "math" when they do not accept this answer.