"CS trivia" interviews select for people who know the mathematics of computer science, not for people who know how to ship robust software systems on time and within budget. The two skill sets are only loosely related.
In the real world, you use a well-tested, off-the-shelf library to traverse a DAG or implement a red-black tree. Just because someone can recite algorithms and data structures in their sleep doesn't mean they can effectively scope, plan, or execute a complex software development project; conversely, many of the best engineers either never learned or long ago forgot the details of the underlying CS.
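For example, topologically traversing a dependency DAG is a couple of lines with Python's stdlib graphlib (3.9+); the graph below is a made-up build-dependency example, not anything from a real system:

```python
from graphlib import TopologicalSorter

# Each key depends on the nodes in its set (hypothetical build steps).
deps = {"deploy": {"build", "test"}, "test": {"build"}, "build": set()}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['build', 'test', 'deploy']
```

No red-black trees were implemented in the making of that traversal.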
This mode of interviewing is like vetting the contractor you're considering for your bathroom remodel by handing them a pile of iron ore and asking them to smelt high-grade structural steel out of it.
So why do they do this? Lack of any better option. Their previous "brainteaser" style interview didn't work at all, and this is probably marginally better than that at finding good engineers. But more importantly, it's much cheaper and faster and less risky in terms of confidentiality than hiring candidates as temporary contractors for a real world test doing actual work, which is the only remotely effective way to tell who is actually a good engineer.
Unless, you know, you are implementing the standard library for a new language, so that there is no such "off the shelf library" available. Which it's not been entirely unknown for Google software engineers to do.
"Off the shelf libraries" aren't things that are handed down on stone tablets to the prophet on a mountaintop, they are things that are created by software engineers.
In any case, choosing which algorithms and data structures to use, and understanding the impacts of that choice, requires knowledge similar to what's needed to implement them, even when you're using a pre-made library. (It doesn't strictly require the ability to implement them, but that ability isn't an unreasonable filter for the requisite knowledge.)
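To make that concrete, here's a toy illustration of how the choice of structure, not the ability to implement it, dominates performance (timings are machine-dependent, so only the relative comparison matters):

```python
import timeit

# Same membership query, two structures:
# O(n) linear scan in a list vs O(1) average hash lookup in a set.
n = 100_000
as_list = list(range(n))
as_set = set(as_list)

t_list = timeit.timeit(lambda: (n - 1) in as_list, number=100)
t_set = timeit.timeit(lambda: (n - 1) in as_set, number=100)
print(t_list > t_set)  # True: the list scan is dramatically slower
```

Knowing *why* the second line wins requires the same conceptual understanding an implementer has, without ever writing a hash table.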
So, yeah, we needed engineers who were strongly familiar with the theory of how hard things are to solve and knew a lot about different types of data structures. We were in a space the existing libraries didn't really cover.
Maybe not all engineers need these skills, but it was valuable that our team all had them and were thus able to have deep design discussions about it.
The skill that's lacking the most at Google is project management, which is not necessarily a skill that every engineer needs.
I don't see how being able to recite trivia questions from memory is a good indicator even for implementing a standard library.
Having a rudimentary understanding of algorithms and data structures is useful but in many cases it's also something you could look up.
In these cases, most of your head-space is focused on the problem, which isn't written down anywhere other than the PM's requirements doc. It's pretty critical that you understand the algorithm really well, because if you have to look it up, you're going to have to unload all the messy problem-domain stuff that's in your brain.
Also, you can learn the algorithm first then figure out the problem. No need to do it on the fly, although sometimes that can help too.
In the cases where I didn't get the job, I naturally asked why. Despite their having repeatedly assured me that they just wanted to know how my brain works and not whether I'd memorized specific concepts, I am usually flat-out told the rejection was because I had implemented some minor detail of algorithm X incorrectly; that my implementation failed in some corner case.
I've also passed and worked at companies which do CS trivia style interviews. When they later interviewed other candidates while I was there, I sometimes observed how the people conducting the interview would first study algorithms and data structures for hours on Wikipedia prior to going in -- because they themselves can't recite any of that without preloading.
Maybe Google works differently; I don't know. At most places, the "CS trivia" style interview seems to primarily serve as a vehicle for bolstering the egos of the people conducting the interview -- they get to feel superior and ding people for not knowing the gotchas of algorithm X.
Just came here to say that from my limited experience, I concur with this comment. Google also has improved their feedback loop after the interview, which I find severely lacking in other companies with similar interviewing styles.
The ability to write down a complex algorithm on the whiteboard without any references is not a very good proxy for being able to implement the algorithm in a standard library.
It vastly over-selects for people who are good at memorization.
I know how to use these algorithms and data structures, and in fact do use many of them regularly. I can even implement them myself when I need to. But I certainly don't go to implement them without access to any references—if you're writing a standard library and don't consult references when doing so, then that's a bigger indication of you being a bad engineer than a good one.
It's really just an elaborate signaling ritual to show that you're willing to devote lots of time to studying. I personally don't think Google is such a mythically fantastic employer that I would spend tons of time studying just to get a job; and that's probably the point.
The only common topic they all work on is selling our eyeballs and collecting our data. Interviewing for that would make more sense than the algo stuff.
P.S: there's a difference between knowing typical algorithms needed to solve a class of problems and studying to implement them on a whiteboard.
I'm not sure that's all that true; even if you haven't specifically studied to implement the particular algorithm, if you are familiar with the concept and proficient in the implementation language, then coming up with some kind of rough cut of an implementation isn't an unreasonable request. (Now, if the evaluation of the implementation's quality is unreasonable for the context in which it was requested, that's a problem. But if it's used to judge understanding of the concept, if potential problems in the implementation become probing questions to gauge understanding rather than grounds for straight rejection, etc., I don't necessarily see a problem.)
The point is that that is not how these interviews work at all. They test for the ability to memorize and regurgitate on demand a standardized and generic set of "CS fundamentals" based on the curriculum of elite CS schools -- nothing more.
For almost all jobs, even at Google, almost all of that knowledge is never used at all. Even at Google, most coding jobs are unglamorous CRUD. You just don't hear about them in blog posts because it's far more compelling to write up their cool new programming language project.
I'm curious about this statement, you say this but... have you ever worked at Google? I'm genuinely interested, not trying to discredit you. It's a bold statement to make if you don't have direct experience on the job.
> I don't have months to study fulltime to cram for the CS trivia challenge
The implication that the reason he didn't get into Google isn't that he's not smart or knowledgeable enough, but that he didn't go to a prestigious school
> No, I don't work there. I didn't study CS in school (much less attend the elite CS schools they are selecting for)
The implication that maths is somehow less important than nebulous concepts like "shipping code within budget"
> "CS trivia" interviews select for people who know the mathematics of computer science, not for people who know how to ship robust software systems on time and within budget.
That interviews are processes which dumb people pass, unlike him
> Their interview doesn't select for "ability to figure things out" at all. It selects for "ability to memorize and regurgitate academic CS curriculum on demand."
And then the comments through which I came to know this guy: when someone makes a good point, he replies that "correlation doesn't imply causation", and then when asked a simple question, instead of answering the question his answer is "I have X credentials, therefore I know the answer very well":
> I have a philosophy degree, and I can say with some confidence that I understand that particular catchphrase pretty profoundly. Not everyone who happens to disagree with you is merely an ignorant barbarian in need of enlightenment. See Karl Popper's theory of critical rationalism to answer your questions about causation.
You see, checking boxes (like getting credentials and academic knowledge) is very important, but only the boxes he's checked.
I haven't enjoyed a thread this much in a very long time.
I admit it is based on hearsay -- I know a number of people who work there. They say things along the lines of, "The pay is great and the benefits are generous, but the actual work is more or less the same as any other tech company." They tell me about their projects, and it's true. It's mostly CRUD.
If you have a pile of raw iron and you're currently running your own smelter for cost savings over the other remodelers, then even though you are in the remodeling business, maybe you could use those skills.
I agree though that contracting is by far the best way to grade performance. But the Google style interview is the only reasonable alternative I know of to filter out people who don't actually know the technologies they are talking about and can fake it with other people's code for months.
(Also, off topic, but should I know who Sheldon is?)
I've never been at a company that had a dedicated "library team". The common code and functions that go into the libraries are artifacts of programming the other stuff that needs it. It's not a job title.
(Unless the company's product to sell is a compiler. In that case, it would follow Conway's Law and you'd have a set of programmers implementing the standard library. I don't see how that division of responsibilities applies to Google or other companies.)
Any rank-and-file engineer can contribute to them, and many do. Given that, I don't see why they should lower their bar to people who can merely consume these libraries when they can take their pick of people who can do both.
Angular vs React is a counterexample I can think of. Angular is shit.
These are only loosely related.
The point is that the job of "software engineer" is not the same as the job of "computer science professor." The two require mostly disjoint skill sets, despite some overlap.
If you're interviewing candidates to be a computer science professor at a university, then it is entirely appropriate to quiz them about abstract computer science during their interview.
Google, and other tech companies that interview like them, are using a "CS professor" interview process to hire "software engineers." That's my point.
That's a terrible analogy. I'm hiring a home contractor for a discrete, predefined set of skills. They need to be able to install a sink and retile a floor -- and once they do that I'll never have need of them again.
Software engineers are not tradesmen. What you know is less important than your ability to figure things out. If Google wants to select heavily for that quality it's a totally reasonable approach.
The sooner you realize that this is true of every single human endeavour, the better off you'll be.
Also, even when you have time to figure things out as you go, knowing stuff makes you more efficient at it.
But that ten-thousandth time, when the standard job goes awry -- that's what makes the difference between a dude showing up to work and a tradesman. The ability to react quickly and appropriately when faced with a brand-new problem is valuable in any role. Pretending otherwise is hubristic at best and kind of gross in general.
I made it through my interviews without remembering obscure things, knowing only basic O-notation, and generally not being a math genius.
The majority of interview questions I've encountered (at Google and elsewhere) are of the form: here's a data structure or API, can you use it to solve some problem? Nobody's ever asked me to prove A* from scratch. Even in cases where the question does revolve around a specific algorithm or CS concept they've always taken the time to explain it. I used to ask a question based around LZ77 encoding and I always went over the algorithm whether or not the candidate was familiar with it.
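For the curious, the core of LZ77 is small enough that an interviewer can walk a candidate through it in a few minutes. Here's a toy (and deliberately inefficient) sketch of the idea, emitting (offset, length, next-char) triples; it's my own illustration, not any particular interview question:

```python
def lz77_encode(data, window=255):
    """Greedy LZ77: find the longest match in the sliding window,
    emit (offset, length, next_char), advance past the match."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # Stop one short of the end so a literal next_char always exists.
            while (i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(triples):
    buf = []
    for off, length, ch in triples:
        for _ in range(length):        # byte-by-byte copy handles
            buf.append(buf[-off])      # overlapping matches correctly
        buf.append(ch)
    return "".join(buf)

s = "abracadabra abracadabra"
assert lz77_decode(lz77_encode(s)) == s
```

The interesting interview conversation isn't reciting this; it's questions like why the decoder's byte-by-byte copy is what makes runs such as "aaaa" compressible with an offset smaller than the match length.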
It is one thing to attend a calculus class and learn to apply some formulas, and that's all. You vaguely remember the formulas some years later, and the algebraic abilities are all you have left, but they are still very powerful tools.
However, if you take a mathematics course like advanced linear algebra, you realize that for this kind of teacher, knowing the formulas doesn't matter at all. You have to write proofs for everything. And these proofs require all of your imagination, all of your experience, and then some.
It changes the way you approach every single problem from that moment.
Now, most people have never needed that ability. Most people will never subject themselves to the stress, the lost nights of sleep, and the sheer single-mindedness that drives everything else out of your mind for days, until you get that proof. That damned proof you won't forget for months, if ever.
The end result is that it makes your mind work in overdrive for solving the same kind of problems.
So the issue is that some people have memorized stuff (and that never works well for long), while other people have taken that stuff as a first step and then, through a long process of proving every single step of one or more theorems, have tattooed it onto some part of their minds. Those people are simply trying to find someone with a similar tattoo, someone with whom they can relate about their shared experience.
Anyway, the people I know that can do that, don't really like to program computers.
I was upset for a while but just kept iterating on my own projects and now I am passively making far more than I ever would at Google even including generous stock grants (mid 7 figures). It's much more rewarding too, knowing I created my own destiny, and not just some middle engineer on a mothership living off of an easy adwords monopoly.
My suggestion to anyone who has the perseverance to get through a guide like this is to just forget Google and build something new yourself.
How do you know that DAGs and rb-trees are the best solution to your problem if you don't even know basic data structures?
If you can drive, you probably have a pretty good idea of whether a car or a truck or a motorcycle or a tank is the best vehicle for a particular transport application, and you can probably actually drive all of those with minimal practice. That doesn't require you to know how to build a car, a truck, a motorcycle, and a tank from scratch. That's a different skillset.
Google pays far above market rate, so they have the luxury of hiring petrochemists to work a gas pump. Even at Google, once all the shiny is stripped away, most of the projects are simple CRUD.
2) I've never heard of anyone being asked to implement a red-black tree during a Google interview.
I know what it is and how to use it. I can implement and traverse one. I seriously doubt I could do it in an hour, especially if I don't have at least standard library documentation available.
Anyway, the post I was replying to said, " you can most likely implement and traverse one in way less than an hour" (emphasis mine). That statement has nothing to do with thought processes. To me, that requires "flesh[ing] out such things in any great detail".
[But ask yourself: how does your car's engine work?]
But here we are talking about one of the companies responsible for implementing such technologies, where the science he implies is not very relevant IS relevant.
It's the difference between applying for a truck driver job and a truck engine designer job. For the latter, knowing the theory of the internal combustion engine IS relevant.
The majority of engineering positions that Google hires for are exactly this. The Chrome / Chromium team is just one team at Google. Most of the engineers who work there are building/scaling iOS apps, web apps, and are doing the exact same work that I do all day long.
Now if I was just browsing the internet, then, sure, your analogy might be more apt. But that's not what we're doing.
But you can easily scale quite a lot of systems to millions of users without having to know any cs at all. You just need to know how to use and configure different layers of caches, design your database(s) properly and some minor tricks around code (and knowing how to analyze performance).
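The cheapest of those cache layers is plain in-process memoization, no CS theory required. A minimal sketch, where get_user is a hypothetical stand-in for a real database query:

```python
from functools import lru_cache

calls = 0  # counts how often the backing "database" is actually hit

@lru_cache(maxsize=1024)
def get_user(user_id):
    """Pretend this is an expensive database round-trip."""
    global calls
    calls += 1
    return {"id": user_id, "name": f"user-{user_id}"}

for _ in range(1000):
    get_user(42)   # 999 of these never touch the backing store
print(calls)       # 1
```

The real skill, as the comment says, is knowing where to put such layers and when cached data is allowed to go stale, not implementing the eviction policy yourself.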
This is more like Al Gore claiming he invented/solved/built the internet.
If Google was only about CRUD and banal CSS, it probably wouldn't have become the Google that it is. They became so by handling challenging and ever-changing problems. In such a scenario you need generalists with strong fundamentals who can pivot quickly from one problem to another, not a Stack Overflow cut-paster. The latter skill has its moments, but you cannot survive on that alone if you are on the critical path.
You can delegate to entities other than humans.
> If Google was only about CRUD and banal CSS, it probably wouldnt have become the Google that it is.
That's totally true, but how many times did PageRank need to be invented? After that the true problems at Google were scaling and monetization. Do you think the team responsible for making sure AdWords looked right on every device under the sun would agree with your statement? I'm sure a lot of the work that went into that "CRUD and banal CSS" is the same code that's made Google billions of dollars.
> They became so by handling challenging and ever-changing problems. In such a scenario you need generalists with strong fundamentals who can pivot quickly from one problem to another, not a Stack Overflow cut-paster. The latter skill has its moments, but you cannot survive on that alone if you are on the critical path.
These skill-sets are not mutually exclusive.
Quite a few times, actually. It was not at all obvious at the time how to run PageRank and other algorithms efficiently at that scale while keeping running costs down. If it were just a library call and a delegation away, they wouldn't have had such a meteoric rise.
We're talking about two different things here—the theoretical PageRank and the practical one. My point is that the skills required to scale a thing like PageRank—writing code to parallelize tasks, divvy up traffic, etc.— are very different than the ones involved in inventing PageRank as an algorithm.
No we are not.
PageRank at its mathematical core was not that novel; it's basic undergrad stuff. The application was novel, not the equation; you would find that in a beginner's linear algebra book. The real deal was (i) realizing that those equations could be applied to solve an aspect of web search and (ii) scaling it up with the cheap hardware of that time while keeping operational costs low enough to be profitable. What I am saying is that one needs a good understanding of CS fundamentals and the ability to reason to pull that off with a competitive advantage. You don't get that just by tweaking CSS, or by knowing your Java platform well, or by delegating. These kinds of problems are not one-off. You have to stay ahead of the competition constantly, innovate constantly, and do stuff that your competition has not yet figured out how to do.
Now that this particular scaling problem is mainstream, it does not seem like a big deal to solve, but it was at the time. If it hadn't been, every run-of-the-mill tech company would have been doing it and eating Google's lunch. Their managers' ability to delegate did not seem to have helped them much there.
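For what it's worth, the "basic undergrad stuff" really does fit in a few lines. This toy power-iteration sketch (a tiny made-up link graph, nothing like the distributed version that was the actual hard part) is the easy half:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank: power iteration on a link graph.
    links maps each page to the pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page gets a baseline share, plus damped shares
        # from each page that links to it.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # c: the most-linked-to page wins
```

Note this assumes no dangling pages (every page has out-links); handling those, plus sharding the computation across thousands of machines, is where the engineering starts.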
> No we are not.
> PageRank at its mathematical core was not that novel; it's basic undergrad stuff.
Whatever you say, Mr. Page. :)
> You don't get that just by tweaking CSS, or by knowing your Java platform well, or by delegating. These kinds of problems are not one-off. You have to stay ahead of the competition constantly, innovate constantly, and do stuff that your competition has not yet figured out how to do.
More false analogies here. The skills involved in solving technical problems are easily translatable from one technical domain to another. You make it seem like implementing an algorithm to scale servers is necessarily more complex than implementing an algorithm to stack shapes on a webpage in a space-efficient way. It's not.
> Their manager's ability to delegate did not seem to have helped them much there.
You're completely misunderstanding me. I'm not using the term "delegation" here as a managerial term. I would bet you that the team that scaled PageRank relied on countless open-source and freely available tools and tech that others wrote. This doesn't lessen their ingenuity at all, but it should be clear that even the most complex tech is built on the shoulders of others.
Oh, please. This back-and-forth and condescending attitude is getting so tiring for me. Yes, I've read the paper multiple times.
> this stuff is taught in intro linear algebra courses
I graduated with a B.S. in Applied Mathematics from Yale. I'm familiar with this stuff, thanks.
> At its core it's just a random walk on a graph;
I'm tired of making this point. There is a big difference between understanding a ground-breaking discovery and making the discovery itself.
> Of course, modelling the web this way and tweaking the middle to produce optimum results was a big deal. You'd be a fool to believe that PageRank hasn't evolved in 20 yrs.
I never once said this. I said that after PageRank was developed, most of Google's engineering resources went into scaling and monetization.
> While you have a valuable skillset, it's not what Google needs from a software engineer. You're good at a different job.
See https://news.ycombinator.com/item?id=12653628. Far more "software engineers" at Google do the exact same thing that I do all day long than do the type of work you're referring to.
I'm confused by this statement. What do you mean?
Like someone with a good understanding of computer science?
It's a big industry, plenty of different ecological niches to fill.
> I'm too busy dealing with CSS bugs, figuring out why customer retention is lower than it should be, training my clients on how to use a bug tracker, scaling, and hundreds of other things which are magnitudes more important than this implementation detail
For hard-tech those become important only after there is an implementation that solves a high barrier to entry technical problem. Then you can get a good run of the mill PM to keep it chugging.
I completely disagree, and your statement is emblematic of what is wrong with popular perceptions of what it means to be a good software engineer.
The best engineers I've ever worked with are fantastic communicators and understand the product that they are building to the core. Being able to ask a good question or drill down into correct requirements is far more important than knowing how to traverse a binary tree, at any level.
All I am saying is that if you are in a hard tech area with problems that have not yet been solved well enough to be monetizable, communication and delegation are not what is going to solve them. Agile, extreme programming, or whatever is 'in' at the moment is not going to do it. It gets solved by the ability to reason about technical things. I can, for instance, communicate the need to cure cancer (bad example, sorry) or delegate willy-nilly, but sorry, that's not going to solve it.
> I can, for instance, communicate the need to cure cancer (bad example, sorry) or delegate willy-nilly, but sorry, that's not going to solve it.
I, again, disagree. Technical ability and communication skills are equally important. You need both to be an effective engineer.
Google has turned interviewing into a science. They have interviewed millions of engineers, hired hundreds of thousands, rejected and later hired who knows how many, etc. Many incoming interviewees have also interviewed across silicon valley. And all along the way they've been collecting data and refining their process.
Do you have anything except your intuition to show us that the Google style interview doesn't work?
One thing you may fail to consider is that there doesn't need to be a direct and plain link between the interview questions and the work the engineer will end up doing. All that matters is whether the interview process produces a clean and strong signal that leads to quality hires.
As a person who's been through a Google interview, I can imagine many of my former co-workers performing poorly. Those are exactly the kinds of engineers I don't want to work with. I have spent too much of my time explaining basic concepts to engineers who should already be comfortable with those concepts, or who lack the critical thinking skills to be part of the conversation. So there's my anecdote.
Before telling a company how they should conduct interviews, perhaps you should gather some data.
I can't say for sure that their interviewing process is broken but as a user of many Google products (many of which get randomly discontinued) I will say there's something lacking.
Who do you think is building the UI and products? People that made it through the hiring process.
Just because something "works" does not mean it's optimal or even very good.
Google's interview strategy could also be to pick random resumes out of the pile, hire 10 people for every position, and fire the 9 who didn't work out. It would still "work" because enough people want to work at Google that they have a huge hiring pipeline and plenty of money for excess employees.
Worshipping Google as having a great interviews is not helpful to anyone. They can easily be doing it incorrectly.
In fact, the very data which you're talking about showed that many of their older techniques were in fact terrible indicators (brain teasers, GPA, etc.).
As for why they do this, your conclusion is kind of exactly what I got to in the last paragraph, "for lack of a better option." It's not that it works in some optimal sense; it just works less badly than the other options available, according to their business constraints.
I base my claim on my many years of personal experience as a software engineer, both being interviewed and interviewing others, discussing engineering and CS with numerous people. My anecdote: I once worked with a guy with quite reasonable academic CS skills who decided to re-implement SSL from scratch for a bog-standard webapp, rather than use a library to do HTTPS. That was not a good use of the company's limited budget, which he would have known if he had had any engineering skills (as opposed to CS skills.)
In my experience, there is little (though not zero) correlation between "good at CS" and "good at writing software commercially." The Google interview process is probably better than nothing, and probably better than their previous "brainteaser" style interview, but far from optimal.
As for what I'd select for? I'd certainly reject candidates who manifest anything like the condescending and arrogant attitude you display towards people you consider to be your intellectual inferiors. Even if you're an amazing engineer who can recite the CS in your sleep, that attitude is toxic for any team environment and will be a net loss to the company.
Their search crawler, their distributed computing technology from which projects like Hadoop drew inspiration, their cloud platform with custom hardware, TensorFlow, their custom Linux kernel and Android, improvements to network protocols like HTTP, V8, Chrome, Project Zero, and user-facing apps like Maps, Translate, etc. All those important projects require you to know your way around the fundamentals from the ground up and to think outside the box.
Then, they have lots of applicants. They can be more selective than they need to be.
There are any number of engineers who can simply go read up on the fundamentals as they are needed, or who understand them well enough to work with them but lack a mathematical proof-level command of the material. Google just doesn't need to hire them because they're paying much higher than market rate.
It's easy to imagine a case where Google would need to implement something algorithmic because well tested libraries aren't quite doing the right thing. The libraries are generally designed to work in RAM / HD. Google needs them to work across multiple servers / datacenters.
Add more nodes. Scaling is not difficult.
I reject your real world assertion, unless you do not count Google as the real world. The reason these questions are asked is that they're relevant to the work engineers will be doing at Google.
But yeah, I agree with the rest of your post. Another factor beyond confidentiality is that you can't hire people away from other companies through that pipeline, because they won't do it.
Also worth noting that for certain classes of engineers (new grads), Google does sort of hire people as contractors (though they're full-time with benefits), see the eng residency program.
That's what previous means in:
> Their previous "brainteaser" style interview didn't work at all
I suppose I should have capitalized the OR
The good thing about CS research is that it can be productized into libraries and tools.
While this is very effective at filtering out poor applicants, it is also very effective at preventing good applicants from applying in the first place. Why would a great developer who probably already has a great job quit for a chance to maybe get hired at Google? In most cases they wouldn't. And at the scale Google hires, they can't afford to lose all of those potential employees.
Despite being an interviewer, I don't know if that's true. I do know it's not what I ask about.
As someone who helped set the interview process at my previous employer (not Google), and now does so at other companies as a consultant, I think your reasoning about why these companies have a process like this looks right. Let's dig a little deeper.
Consider the following facts:
0. Google has some 30k-40k engineers, with average tenure of say 7 years. So every 7 years, they are hiring 30-40k engineers. That is thousands of engineers every year, even if they are not growing (and they are). It's a massive undertaking.
1. In the field of software engineering, unfortunately, experience has little correlation to expertise. That has been known for a long time. See . And cognitive ability is at least a reasonable predictor of success, better than many others. See .
2. Companies like Google have no dearth of applications. Literally millions of engineers apply.
3. Interviewing is a chore, and mostly not fun. Most interviewERs hate spending time on it, and want to get out of it as quickly as possible. Interviewing is also on top of everyday work which is a lot at many growing companies.
4. As a hiring manager, I have to hire people. There is no choice. I must find a way to hire N people in X amount of time, or the company is literally doomed. e.g. if I'm eBay or Amazon, I can't miss the holiday season.
5. You have to involve multiple people in hiring. Just one person talking to the candidate and making a decision is not sufficient.
6. Programming is so vast, that engineers apply from all sorts of backgrounds and domains.
7. Companies are becoming more and more polyglot. You want engineers to move around different languages and stacks freely.
So when you have a lot of people to hire, a lot of people applying, work that's somewhat correlated to cognitive ability and very little time, what kind of process do you end up setting?
When you put these constraints together, you realize that your incentive is to design a generic process that's convenient to the company, and not convenient to the candidates.
You just want reasonably smart people, fast. You don't care much about seniority, or about the type of problems asked, as long as the process helps you close N people in X time with the least amount of work. A different process might have selected a different kind of N people, but it would take longer than this one. And time is money too.
A process built on DS/algos is less subjective, fast (like you said), can be prepared for (you have to hire), has enough variety that multiple people can ask different questions, lets you interview across domains, and is at least somewhat defensibly relevant to the field.
And hence, here we are.
Not saying that the process is understood by everyone and executed well everywhere, every time, but after several years we're all still looking for a process that's a lesser evil, and we haven't found one.
As long as the constraints outlined above remain, the process is going to stay. In some form or another. For a very long time. However much everyone hates it, including the interviewers, and the company itself. Many have tried otherwise (including myself), but most have come around to asking DS/Algos somewhere in the process in varying percentages.
I'm slowly learning that despite how much more money I've earned by skipping between companies, skipping is not a path to knowledge. It provides better earning opportunities (to a point), it provides new challenges on a regular basis, but I've lost the ability to go deep into a topic. The depth that can't be achieved in two or three years.
I guess I'd really like to see a bit more ink spilled on "congratulations, you got a job; now what?" What interpersonal skills are most important? What are good resources for continued learning? What do C level people actually do? How can I help the company succeed? What career opportunities are possible for me without having to skip jobs? What can I learn from this company before I go to create my own?
That would be much more valuable, IMO, than a primer on how to pass an interview process.
Not to knock this content, anyway. I found this repo to be well organized and genuinely interesting -- just not super relevant for me...
Being able to parrot the crap you were taught in college is only useful for interviews, which is a sad reflection of what interviews actually test for.
You are completely right about what you said, but I'd like to add the one thing that is most important for any software engineering role at any company: your ability to keep learning as you continue practicing in our field. That should be the highest priority. How do I familiarize myself with all my tools and resources, not just so I can do my current job really well, but so I can move to any task and execute it well?
That should be the focus of any practicing software engineer. To be able to learn and adapt as requirements and goals change.
I agree that the emphasis Google puts on algorithmic interviews is a bit dumb, but that's how their hiring system works. So anyone who really wants to get a job at Google needs to work on that, regardless of whether it will be useful inside the company or not.
There are millions of programmers, most of which will never get a job at Google. Is their time perhaps better spent optimizing their existing careers instead of doing "dumb" things for a chance at the gold-plated goose?
1. they have a lot of applicants to pick from.
2. they do it for cultural reasons rather than purely for on-the-job performance reasons.
Culture is a long term investment.
Pretty sure tons of content have been written about all of these topics, and not particularly hard to find.
Read Shapiro's "Corporate Confidential". Despite the rather click-baity (read-baity?) title, the book is a fairly good explanation of career management and the politics of mid-size to large companies. I'd recommend it to all new graduates entering the workforce.
As you say, almost none of your actual job depends on knowing any of that material in any way. However, you aren't even getting in the door unless you know it all, which is indeed arbitrary and a waste of everyone's time and energy.
Thus, good engineers who didn't happen to study academic CS at Stanford or MIT must regrettably waste months of their lives self-studying CS trivia which they will never use again except at other interviews. Being good engineers, they have studied how the system works and taken the rational and necessary step to participate in it. That's why so much ink is spilled on it.
This is the key to only one (admittedly pretty) door, though. Are the benefits of getting through that particular door truly worth the cost?
I think this question is even more important when so many other companies are now moving away from algorithmic interviewing techniques since they aren't Google.
The signal that it is measuring is "How much did you prepare to get this job." Which isn't that bad of a signal.
At a glance, a few things I would throw in are:
* UTF-8 (a basic understanding of how wide characters are encoded)
* streaming algorithms
* lock-free data structures (what they are and why you probably shouldn't implement one)
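On the first item in that list, the level of understanding an interviewer might plausibly want fits in a few lines. Here is a sketch of hand-encoding a single code point into UTF-8 bytes (surrogate and out-of-range validation omitted for brevity):

```python
def utf8_encode(cp: int) -> bytes:
    """Encode a single Unicode code point into UTF-8 bytes by hand."""
    if cp < 0x80:                       # 1 byte:  0xxxxxxx (ASCII)
        return bytes([cp])
    elif cp < 0x800:                    # 2 bytes: 110xxxxx 10xxxxxx
        return bytes([0xC0 | cp >> 6,
                      0x80 | cp & 0x3F])
    elif cp < 0x10000:                  # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | cp >> 12,
                      0x80 | cp >> 6 & 0x3F,
                      0x80 | cp & 0x3F])
    else:                               # 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
        return bytes([0xF0 | cp >> 18,
                      0x80 | cp >> 12 & 0x3F,
                      0x80 | cp >> 6 & 0x3F,
                      0x80 | cp & 0x3F])
```

The key interview-relevant facts are all visible here: ASCII passes through unchanged, continuation bytes always start with `10`, and the leading byte's high bits tell you the sequence length.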
In some sense it's like trying to hold the entirety of CS in your head for one day. You might spend 30 minutes learning about something so you can mention it as an aside. But... I think it's worth it, more for the knowledge.
Knowing that something is possible (e.g. encrypted search) opens doors in the same way that working cross-discipline can.
PS: Just to clarify, I haven't been hired by any of the "Big 4" yet, but that is because — again — I have been studying for only two weeks, if I invest more time on this, maybe in 5-6 months I can guarantee an offer in one of those popular companies that appear featured on HN all the time. And as the parent comment says, even if you have no intention to work for Google or any other of the big corporations, this still is a good list of resources to improve your skillset.
Google should not be the only option.
It's specifically the "future Googler" print-outs, and plethora of usernames and websites that include "Google" that I think could make it a heart-breaking rejection. (Though of course, I wish OP all the best and hope it doesn't come to that!)
Frankly, I can also imagine it being more than a little embarrassing after a successful application.
"If you like this project, please give me a star. "
So it's "like, comment, subscribe", and now "give me a star"? It's the first time I've seen this and I hope this doesn't catch on. The current system of aggregation/curation/voting works fine imo.
It's a different thing if he says that you must STAR before you can access the repo.
Re-reading it now, I'm actually not so sure what parent was complaining about.
This motivation also won't make you a better person.
It's nothing revolutionary, but it's good practice to be mindful of when you can chop pieces off of your search space. Interviewers always love hearing something like "and if we organize the problem this way, we can always stop searching here because we know we can't get a better result".
P.S.- If you know python I'd recommend interviewing in that. You won't be able to implement anything with nodes very well, but you can write most code on the whiteboard very succinctly. They WILL want to see actual code.
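That kind of pruning can be sketched in Python (a hypothetical example, not from any particular interview: find the best subset sum that doesn't exceed a budget, abandoning any branch that provably can't beat the best result so far):

```python
def best_sum_under(items, budget):
    """Best achievable subset sum <= budget, with branch-and-bound pruning."""
    items = sorted(items, reverse=True)
    # suffix[i] = total of items[i:] -- the most a branch could still add
    suffix = [0] * (len(items) + 1)
    for i in range(len(items) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + items[i]

    best = 0
    def search(i, total):
        nonlocal best
        best = max(best, total)
        if i == len(items):
            return
        # Prune: even taking everything left can't beat the current best.
        if total + suffix[i] <= best:
            return
        if total + items[i] <= budget:       # branch: take items[i]
            search(i + 1, total + items[i])
        search(i + 1, total)                 # branch: skip items[i]

    search(0, 0)
    return best
```

The `suffix` check is exactly the "we can always stop searching here" line: once the optimistic upper bound for a branch can't improve on `best`, the whole subtree is skipped.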
- Thanks for the Amazing List of Resources
I have a CS degree and many successful software projects under my belt. Plenty of employers have happily paid me to ship software for them and I have repeated referral customers. I have no doubt that if given my computer and access to the internet, I can easily implement any algorithm in this list.
Yet I also have no doubt that I'd have to prep for at least a dozen hours before a Google interview if I wanted a chance to succeed.
Why on earth should going through a software engineering interview require pulling out your old textbooks and running through problems? Since this is really just a glorified IQ test, I'd much rather they just handed me an IQ test (or asked for my SAT score).
memorization != understanding. I hope Google gives a chance to the latter: "time-tested, proven" engineers, some of whom can't answer "computer science trivia" because they don't spend time memorizing theories; they spend their time building and applying what they've learned.
Although you can relatively quickly grasp all the basic algorithms and data structures, you most definitely cannot quickly build up the skill of recognizing these algorithms in problems (which is the most valuable skill and the most overlooked).
Another problem is that you also forget these algorithms pretty quickly and can't implement them after a few months.
I was on and off in algorithms since 2014 and already experienced these problems during last two years.
I have since taken another route, which is much slower, but now I can remember, recognize, and implement some algorithms even after a few months.
I started to participate in algorithm contests, mainly on CodeForces, sometimes on HackerRank. These problems often cover pretty narrow topics, so what you can cover in two months on Coursera can take a year or even more on CodeForces.
Why are algorithm competitions useful? When you're stuck on a problem for a while, then read the solution and see that the problem is solved by [for example] Dijkstra's algorithm, you remember that algorithm for a while. If you get stuck on another problem, and after reading the solution you discover Dijkstra's algorithm again, you remember it for a very long time (provided you put significant effort into solving the problem). That's because your brain connects Dijkstra's algorithm with an important problem that frustrated you during a contest. So frustration is actually good for memorization (if used correctly)!
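Since Dijkstra's algorithm is the running example here, a minimal sketch of the version a contest eventually burns into memory (the standard binary-heap formulation with lazy deletion, not any particular library's API):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph maps node -> [(neighbor, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]                 # (distance, node) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                     # stale entry, already improved
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

The "lazy deletion" trick (push duplicates, skip stale pops) is exactly the kind of detail that sticks only after a contest problem has forced you to debug it.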
So there are 3 aspects which you should develop together in order to achieve really good results:
1. Implementation skills;
2. Problem solving skills (i.e. how you recognize algorithm in the problem);
3. Knowledge of algorithms, data structures, paradigms (like dynamic programming, greedy etc);
Often courses cover only the third aspect.
Here is my description of CodeForces problems difficulty and relevant skills they develop:
Div2 A - trivial implementation problem, train accuracy;
Div2 B - little tricky but still trivial problem, find little things to solve the problem;
Div2 C - sometimes easy, but sometimes really hard to solve, covers topics: combinatorics, dynamic programming, greedy, elementary graph algorithms etc;
Div2 D - usually hard to solve, but can cover classical algorithms like Dijkstra, Kruskal's, binary indexed trees etc;
Div2 E - very hard to solve, way above my current level (the same applied to Div1 problems);
If you can consistently solve Div2 C problems within 15 minutes, and sometimes solve a D problem within 30-45 minutes, you will crush the Google interview (at least its algorithms + data structures part, which is where most people get stuck!).
All that time and energy focused on becoming an employee of that one company, Google, will create enormous expectations which are likely to be crushed by reality when the OP comes to the realization that, yes, even Google is just another place to work for a living. Does anyone who actually works there think otherwise?
This is also very creepy.
But he specifically mentions Google. That I have a problem with: you cannot advertise a program as being an effective approach to the Google interview if you have not been an interviewer, or have not received an offer as a result of the interview process.
I did notice that Indian and Pakistani engineers like to do those kinds of preparations, and reasons are mostly cultural.
You should always come prepared to an interview, but the better an engineer you are, the more companies and options you will have.
Trees, graphs, mathematics, binary? Sure. It's foundational.
What I dislike about these "interviews", so much that I have contemplated leaving the field, is that we, as developers, have to take them over and over again, under capricious and arbitrary circumstances.
Think about all the exams I mentioned above. They have a proper study path, they are administered consistently, they are graded and evaluated by respected and vetted professionals who will at least attempt to apply similar standards. You get information about what will be tested, you get feedback on how you did. You don't spend months studying and preparing only to hear "we've decided not to pursue your (actuarial candidacy/bar candidacy/nursing boards) at this time".
That's what makes this all so awful. I feel, after 15+ years, that I've re-taken my fundamentals exam enough times. If it were trivial, that'd be fine, but it takes time and focus to get ready for it, time I'm less and less inclined to spend on it.
I think that there is an informal set of rights and responsibilities that evolved over time between exam-based professions and students. I believe that the "tech interview", which is really an exam, has almost all the downsides and none of the positives, none of the protections that evolved in the other fields.
The only positive thing I can say is that at least tech doesn't require a very specific degree, unnecessarily long, that can only be obtained by going through certain accredited programs that cost $40k+ a year in tuition (the 3 year law degree at sky high tuition that can only be obtained through certain programs is cartel-like, professional regulatory capture at its very worst). But you know, actuarial exams are rigorous, don't require a specific degree, you can prepare how you like. I'd say that's probably the best model for software.
I don't mind studying for this, but I don't want to have to do it over and over, and I'd like some assurances that it will be administered properly.
For a place that claims to be culturally diverse, I realized there was a hidden list of acceptable cultures, and mine was not one of them. I politely declined the offer and decided to stay a bit closer to home.
Maybe it's just a different flavor of Kool-Aid. Cynicism about X is often as much of an irrational orthodoxy as X is.
This doesn't invalidate your experience at all - the interviewer may or may not have been talking about this, and there may well be cultural issues there - but I learned this interesting distinction recently, figured I would pass it on.
Just the growth of the operations performed.
In the real world, it may thus be better to use a computationally more complex algorithm than a computationally simpler one in certain cases, if you happen to have a system that delivers big enough optimizations for the complex algorithm.
Anyone who said that to me would seriously risk a sock to the jaw from my non-Stanford educated fist, interview or no. That is an incredibly rude and condescending thing to say.
Thankfully that guy didn't interview me during my interview at Google, and my fist remained wrapped around my dry-erase marker instead.
Many young skinny-jeans-wearing progressives here quite openly harbor grossly ignorant stereotypes of a wide variety of "unacceptable" cultures: the South, the Midwest, Republicans, Christians, and conservatives, among others.
When challenged, they typically deny that their cartoonish, caricatured views of these groups are stereotypes, and insist that that's simply the way it is.
Just wearing a button-down shirt (or, God forbid, a tie) is more than enough to stigmatize you as "one of them" and get you instantly ostracized (while they simultaneously loudly trumpet the value of diversity and tolerance.)
> cartoonish, caricatured views of these groups are stereotypes
I cannot tell if this is satire.
It's a dumb question to be authoritative about because there are different implementations of hashmap and hashcode meaning that you can't guarantee the runtime complexity.
According to the SO answer above Java8 will be mostly O(1) but full buckets may be stored as trees meaning that they will be O(log n). But as the answer says
"So no, O(1) certainly isn't guaranteed - but it's usually what you should assume when considering which algorithms and data structures to use."
The interviewer could have explored the candidate's knowledge of how hashmap is implemented, or could be implemented, which would have been more rewarding than insulting and gloating.
I think the big misunderstanding in most of these answers is assuming a fixed number of buckets in Java's implementation. That is not correct: the put() method will rehash your table if it grows too big (meaning that on average put() is O(1), but it has a very real possibility of being O(n)).
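The amortized-O(1)-put-with-occasional-O(n)-rehash behaviour described above can be sketched with a toy chained hash table (an illustration of the idea, not Java's actual implementation; the 0.75 load factor and doubling strategy are assumptions borrowed from common practice):

```python
class ToyHashMap:
    """Toy chained hash table that rehashes when the load factor exceeds 0.75."""

    def __init__(self):
        self.buckets = [[] for _ in range(8)]
        self.size = 0

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)     # overwrite existing key
                return
        bucket.append((key, value))
        self.size += 1
        if self.size > 0.75 * len(self.buckets):
            self._rehash()                   # the occasional O(n) step

    def get(self, key, default=None):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        return default

    def _rehash(self):
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]
        for bucket in old:
            for k, v in bucket:
                self.buckets[hash(k) % len(self.buckets)].append((k, v))
```

Because each rehash doubles the bucket count, an element is moved only O(1) times on average over its lifetime, which is where the amortized O(1) claim comes from even though an individual put() can touch every entry.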
Calling a hash table O(log2(N)) gets misleading when you measure a hash function's performance, and it doesn't change as the table grows.
So you're right about your technicality, but you're using big-O slightly differently than how everyone else is using it.
The implications are interesting, in that if the hash computation did grow as O(log2(N)), then it would (theoretically) outperform the current typical O(1) hash table implementation, and they would meet only when the table is full.
Another way to see O(1) is that hash computation is always implemented not as a log2(N) function, but is log2(MAX_N), where MAX_N is the size of a full table, and since MAX_N is a constant, the complexity is O(1).
A side note is that not all hash tables are implemented using an array that can hold N items. Some hash tables use linked lists instead of re-hashing for collisions, meaning that your array can be arbitrarily less than N in size, meaning that it is definitely possible to have a hash function that takes less than log2(MAX_N) time. In that case, insertion time becomes explicitly greater than O(1).
"The implications are interesting, in that if the hash computation did grow as O(log2(N)), then it would (theoretically) outperform the current typical O(1) hash table implementation, and they would meet only when the table is full."
How would O(log n) ever outperform constant time, O(1)?
Could you elaborate?
In this case, if your hash always takes 64 bit values and always produces 64 bit values, it is O(1). If your hash were O(log2(N)), then your hash function with an empty table would do 1 bit worth of work, and would do only one operation. But when the table grew to contain 1,000 elements, it would do 10 bits worth of work.
The O(log2(N)) algorithm with N < 2^64 is always faster (in theory) than the O(1) algorithm that does a constant 64 bits of work.
The O(1) algorithm pays off in spades for N >> 2^64, it will become much faster than the logarithmic one, but for small N, log(N) is smaller than 64.
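A quick back-of-the-envelope check of this claim, treating the constant-time hash as a fixed 64 units of work (a hypothetical figure for a 64-bit hash):

```python
import math

FIXED_WORK = 64                          # "O(1)" hash: always 64 bits of work
for n in [2, 1_000, 10**6, 2**32, 2**64]:
    log_work = math.log2(n)              # hypothetical O(log2(N)) hash
    print(f"N={n}: log2(N)={log_work:.1f}, log wins: {log_work < FIXED_WORK}")
```

The logarithmic cost stays below the constant for every N up to 2^64, which is the "they meet only when the table is full" observation, and any table you can actually fit in memory sits far to the left of that crossover.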
I usually think of hashtables though as being O(1) in referring to the look up(amortized) and not the hash code itself.
Is the n in log2(n) the size of thing you're hashing? AFAIK the O(1) statement is referring to how many things you have hashed.
It is O(1) if the hashmap is sparsely populated, but if there are hash collisions the items are stored in a list (a tree as of Java 8), so it is not O(1) in the worst case.
With a finite number of buckets (which is inherent to a hashmap), asymptotic performance will never conform to the "sparsely populated" constraint, so it's O(n).
(That its concrete performance is roughly constant-time within certain size bounds in non-pathological cases is important knowledge, too, though.)
See Karl Popper's theory of critical rationalism to answer your questions about causation.
What can I tell you, I think trying to use things you don't understand to show how much smarter you are is the ultimate intellectual dishonesty. I think the world would be a better place (or at least more humble) if people were called out for such behaviour. Don't you agree?
EDIT: Just to clarify, I agree that my comment was not respectful and doesn't make me look great, but I think it was substantial. If you want to be rigorous in science, you should know the answer to that question.
Ah fair enough. So there's often repeated phrase that "correlation doesn't imply causality". That's because often times people make the fallacy of going "A and B always happen together, therefore A must imply B!" (or the other way around). This is obviously wrong. So some people have learned that whenever someone says "A and B always happen together, so maybe one implies the other...?", they can shout CORRELATION DOESN'T IMPLY CAUSALITY and shut down the conversation.
But the truth is that these people for the most part don't understand that phrase very well, right? Because you draw conclusions in your life, right? Science has learned many things about the universe, right? So SOMETHING must imply causality. _If it isn't correlation, what is it?_ Think about it, everything in life is correlation. If you open your hand and the stone drops, did the stone drop _because_ you opened your hand? How can you ever know? All you know is that two events "opening hand" and "stone dropping" are correlated, so how can we know if one implies the other? But of course we know, don't we? ;-) I think this is a substantive observation :P
> You accused someone of not knowing what they were talking about and using phrases they didn't fully understand, and I didn't see any evidence that that was the case.
Well, he didn't know the answer to the question I posed before. His answer was "I am very very educated, you go read some books." To me that's plenty of evidence he doesn't know the answer. In fact, I would say telling someone "correlation doesn't imply causality" is strong circumstantial evidence to that effect :P
EDIT: If you say that saying "correlation doesn't imply causality" is bringing up a valid point, it must be the first time you see that phrase, am I right? Go on reddit and in every single conversation you'll see someone throwing that around as an excuse to not having to dig further.
I apologize for dragging this out, and I know it's unproductive, I just found it really baffling to watch someone respond to a standard point with such a high level of totally unwarranted disdain. And then to think any disagreement is because I've never heard the phrase "correlation does not imply causation"?