Hacker News
Google Interview University – Plan for studying to become a Google engineer (github.com)
327 points by jonbaer on Oct 6, 2016 | 264 comments



Google's sort of "computer science trivia" style interview is the bane of software engineering.

"CS trivia" interviews select for people who know the mathematics of computer science, not for people who know how to ship robust software systems on time and within budget. The two skillsets overlap, but only loosely.

In the real world, you use a well-tested premade off the shelf library to traverse a DAG or implement a red/black tree. Just because someone can recite algorithms and data structures in their sleep doesn't mean they can effectively scope, plan, or execute a complex software dev project; and conversely many of the best engineers either never learned or long ago forgot the details of the CS.

This mode of interviewing is like interviewing the contractor you're considering to remodel your bathroom by handing them a pile of iron ore and asking them to smelt high grade structural steel out of it.

So why do they do this? Lack of any better option. Their previous "brainteaser" style interview didn't work at all, and this is probably marginally better than that at finding good engineers. But more importantly, it's much cheaper and faster and less risky in terms of confidentiality than hiring candidates as temporary contractors for a real world test doing actual work, which is the only remotely effective way to tell who is actually a good engineer.


> In the real world, you use a well-tested premade off the shelf library to traverse a DAG or implement a red/black tree.

Unless, you know, you are implementing the standard library for a new language, so that there is no such "off the shelf library" available. Which it's not been completely unknown for Google software engineers to do.

"Off the shelf libraries" aren't things that are handed down on stone tablets to the prophet on a mountaintop, they are things that are created by software engineers.

In any case, understanding which algorithms and data structures to use, even if you are using a pre-made library, and the impacts of that decision requires an understanding of the algorithms and data structures similar to that which is needed to implement them (it doesn't strictly require the ability to implement, but ability to implement isn't an unreasonable way of filtering for the requisite knowledge.)


When I worked at Google Search Infrastructure (not any ML stuff) I actually built a performant, persistent red-black tree implementation in C++ for creating persistent sorted lists with good insert and delete characteristics. It was a pretty niche need, but we had it and the existing libraries we could find didn't cover it. We needed it for doing some speculative programming where we were doing some constraint solving and were using a greedy-ish algorithm to explore the constraint space. Solving arbitrary constraints was theoretically and practically impossible in a performant way, so we settled for a greedy exploration of the search space, but we needed a way to back out of failing lines of exploration.

So, yeah, we needed engineers who were strongly familiar with the theory of how hard things are to solve and knew a lot about different types of data structures. We were in a space the existing libraries didn't really cover.

Maybe not all engineers need these skills, but it was valuable that our team all had them and were thus able to have deep design discussions about it.
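The "persistent" part of that data structure, keeping old versions of the tree alive so you can back out of a failed line of exploration, can be illustrated with path copying on a plain binary search tree. This is only a hedged sketch of the general technique, not the commenter's actual C++ code, and it omits red-black rebalancing for brevity; all names are illustrative.

```python
class Node:
    """An immutable BST node; versions share untouched subtrees."""
    __slots__ = ("key", "left", "right")

    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right


def insert(root, key):
    """Return a NEW root; the old version remains valid (path copying)."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root  # key already present; reuse this version unchanged


def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False


# Each insert yields a new version; earlier versions stay intact,
# which is exactly what backing out of a failed search branch needs.
v0 = None
v1 = insert(v0, 5)
v2 = insert(v1, 3)
# v1 still lacks 3, so we can "rewind" to it at any time.
```

A real red-black variant would additionally rebalance on the copied path, but the version-sharing idea is the same.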

The skill that's lacking the most at Google is project management, which is not necessarily a skill that every engineer needs.


I would argue the exact opposite: more engineers need basic project management knowledge than algorithm knowledge, perhaps even at Google.


There is another point to this: you shouldn't have all of this in your head. If you're actually going to implement a standard library for a new language, it's reasonable that you spend some time and energy on researching and looking stuff up.

I don't see how being able to recite trivia questions from memory is a good indicator even for implementing a standard library.

Having a rudimentary understanding of algorithms and data structures is useful but in many cases it's also something you could look up.


Well said. If I ever did actually have to implement a red/black tree or a TCP stack for my job as a software engineer (which has somehow never come up yet), I wouldn't do it by standing at a whiteboard racking my brain for the details, and I certainly wouldn't set a one-hour timer. I'd pick up a book or read some reputable websites to refresh my memory first, and I'd take my time writing it, since it'd have to be an important corner case to necessitate rewriting something complex that's already pretty well implemented.


The real use-case for algorithm knowledge at Google isn't implementing standard libraries, it's in distributed computing. Okay, you know how to implement a shortest-path algorithm. Do you know how to implement a shortest-path algorithm when the data is spread over 10,000 computers? How about when paths become blocked by traffic jams or accidents? Could you adapt it so you can update that shortest-path in real-time as weights changed?
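For reference, the single-machine baseline this comment starts from is just textbook Dijkstra; everything hard it alludes to (sharding over 10,000 machines, re-solving as weights change) gets layered on top. A minimal, illustrative sketch of that baseline, with a made-up toy road graph:

```python
import heapq


def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}. Returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist


# Toy example: the direct a->b edge (4) loses to a->c->b (1 + 2 = 3).
roads = {"a": [("b", 4), ("c", 1)], "c": [("b", 2)], "b": []}
```

The point of the comment stands: this part fits on a whiteboard, while the distributed, weight-updating version does not.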

In these cases, most of your head-space is focused on the problem, which isn't written down anywhere other than the PM's requirements doc. It's pretty critical that you understand the algorithm really well, because if you have to look it up you're going to have to unload all the messy problem-domain stuff that's in your brain.


Distributed storage/compute is a different problem/abstraction from an adaptive shortest-path algorithm.

Also, you can learn the algorithm first then figure out the problem. No need to do it on the fly, although sometimes that can help too.


You seem to think you'd lose points in a Google interview for not knowing some specific concept. That's not how it works. Interviewers want to know how your brain works, even when presented with something you don't already know.


I admit I have never interviewed at Google. I have interviewed at a number of companies that use the "CS trivia" style interview process. I got some of these jobs and didn't get others.

In the cases where I didn't get the job, I naturally asked why. Despite having been repeatedly assured that they just wanted to know how my brain works and not whether I'd memorized specific concepts, I was usually flat-out told the rejection was because I had implemented some minor detail of algorithm X incorrectly; that my implementation failed in some corner case.

I've also passed and worked at companies which do CS trivia style interviews. When they later interviewed other candidates while I was there, I sometimes observed how the people conducting the interview would first study algorithms and data structures for hours on Wikipedia before going in -- because they themselves couldn't recite any of that without preloading.

Maybe Google works differently; I don't know. At most places, the "CS trivia" style interview seems to primarily serve as a vehicle for bolstering the egos of the people conducting the interview -- they get to feel superior and ding people for not knowing the gotchas of algorithm X.


I'm working on Google ads, where a "minor detail" in an algorithm implementation can easily cost a $50,000/day loss. It also means that the experiment has to be stopped, and everything fixed, tested, and approved again, which can easily take a month to get right, so a lot of time is lost. I'm not saying the interviewing system is perfect, but getting edge cases right in a big system is critical.


That's where peer review comes in. Hopefully you aren't committing these kinds of changes to master all by yourself.


When you're working on getting those edge cases right, are you allowed to use a book or a computer for reference? :)


From my experience, I agree with this. I interviewed at Google in 2014. I prepared extensively. However, I bombed the interview. The interviewers genuinely wanted to see how I think. I am not the brightest guy around and couldn't provide great insights in my solutions. I got the same feedback from the recruiter.

Just came here to say that from my limited experience, I concur with this comment. Google also has improved their feedback loop after the interview, which I find severely lacking in other companies with similar interviewing styles.


> I wouldn't do it by standing at a whiteboard racking my brain for the details, and I certainly wouldn't set a one-hour timer.

That...


> Unless, you know, you are implementing the standard library for a new language, so that there is no such "off the shelf library" available.

The ability to write down a complex algorithm on the whiteboard without any references is not a very good proxy for being able to implement the algorithm in a standard library.

It vastly over-selects for people who are good at memorization.

I know how to use these algorithms and data structures, and in fact do use many of them regularly. I can even implement them myself when I need to. But I certainly don't go to implement them without access to any references—if you're writing a standard library and don't consult references when doing so, then that's a bigger indication of you being a bad engineer than a good one.

It's really just an elaborate signaling ritual showing that you're willing to devote lots of time to studying. I personally don't think Google is such a mythically fantastic employer that I would spend tons of time studying just to get a job — and that's probably the point.


Let's be frank: most Google engineers don't create new programming languages or work on library infrastructure.

The only common topic they all work on is selling our eyeballs and collecting our data. Interviewing for that would make more sense than the algo stuff.

P.S: there's a difference between knowing typical algorithms needed to solve a class of problems and studying to implement them on a whiteboard.


> there's a difference between knowing typical algorithms needed to solve a class of problems and studying to implement them on a whiteboard.

I'm not sure that's all that true; even if you haven't specifically studied to implement the particular algorithm, if you are familiar with the concept of the algorithm and proficient in the implementation language, then coming up with some kind of cut of an implementation isn't an unreasonable request. (Now, if the evaluation of the quality of the implementation is unreasonable for the context in which it was requested, that's a problem, but if it's used to judge understanding of the concept, if potential problems in the implementation are the focus of probing questions to determine understanding rather than straight rejection, etc., I don't necessarily see a problem.)


Yes, that is true. A tiny percentage of Google engineers build new languages. If you're hiring someone to write a compiler, say, then it is absolutely appropriate to ask interview questions about compiler design.

The point is that that is not how these interviews work at all. They test for the ability to memorize and regurgitate on demand a standardized and generic set of "CS fundamentals" based on the curriculum of elite CS schools -- nothing more.

For almost all jobs, even at Google, almost all of that knowledge is never used at all. Even at Google, most coding jobs are unglamorous CRUD. You just don't hear about them in blog posts because it's far more compelling to write up their cool new programming language project.


>For almost all jobs, even at Google, almost all of that knowledge is never used at all. Even at Google, most coding jobs are unglamorous CRUD. You just don't hear about them in blog posts because it's far more compelling to write up their cool new programming language project.

I'm curious about this statement, you say this but... have you ever worked at Google? I'm genuinely interested, not trying to discredit you. It's a bold statement to make if you don't have direct experience on the job.


Morgawr, I suggest you go around this thread and read that guy's comments. You'll notice that they're all in the direction of "other people's merits aren't such a big deal". His implication that passing an interview at Google is just memorizing trivia:

> I don't have months to study fulltime to cram for the CS trivia challenge

His implication that the reason he didn't get into Google isn't that he's not smart or knowledgeable enough, but that he didn't go to a prestigious school:

> No, I don't work there. I didn't study CS in school (much less attend the elite CS schools they are selecting for)

The implication that maths is somehow less important than nebulous concepts like "shipping code within budget"

> "CS trivia" interviews select for people who know the mathematics of computer science, not for people who know how to ship robust software systems on time and within budget.

That interviews are processes which dumb people pass, unlike him

> Their interview doesn't select for "ability to figure things out" at all. It selects for "ability to memorize and regurgitate academic CS curriculum on demand."

And then the comments through which I came to know this guy: when someone makes a good point, he replies that "correlation doesn't imply causation", and then when asked a simple question, instead of answering the question his answer is "I have X credentials, therefore I know the answer very well":

> I have a philosophy degree, and I can say with some confidence that I understand that particular catchphrase pretty profoundly. Not everyone who happens to disagree with you is merely an ignorant barbarian in need of enlightenment. See Karl Popper's theory of critical rationalism to answer your questions about causation.

You see, checking boxes (like getting credentials and academic knowledge) is very important, but only the boxes he's checked.

I haven't enjoyed a thread this much in a very long time.


No, I don't work there. I didn't study CS in school (much less attend the elite CS schools they are selecting for) and I don't have months to study fulltime to cram for the CS trivia challenge that you need to do to get through the door at Google. Fortunately, there are plenty of other jobs for software engineers.

I admit it is based on hearsay -- I know a number of people who work there. They say things along the lines of, "The pay is great and the benefits are generous, but the actual work is more or less the same as any other tech company." They tell me about their projects, and it's true. It's mostly CRUD.


Trust me, with your attitude, Google isn't missing you either. Maybe one day you will appreciate fundamental CS knowledge as more than some "trivia" to study.


Who writes those premade robust libraries? Who improves them, when a few percent of improvement will be multiplied by enough machines to come out to millions of dollars saved?

If you have a pile of raw iron and you're currently running your own smelter for cost savings over the other remodelers, then even though you are in the remodeling business, maybe you could use those skills.

I agree though that contracting is by far the best way to grade performance. But the Google style interview is the only reasonable alternative I know of to filter out people who don't actually know the technologies they are talking about and can fake it with other people's code for months.


He's saying every company needs a Sheldon or two; they don't need a company full of Sheldons.


Sure, but a library writer can use libraries; the converse is often not true. And most places pride themselves on a) hiring really good people and b) giving them mobility within the company. If I wanted someone to glue stuff together without contributing to company infrastructure why not just hire a contractor for the gig?

(Also Off topic, but should I know who Sheldon is?)


Character from the tv show Big Bang Theory. The character is highly intelligent but quirky.

http://www.fanpop.com/clubs/the-big-bang-theory/images/26450...


The point is the test doesn't select for "ability to contribute to company infrastructure" at all. It selects for "mathematical knowledge of theoretical computer science." These skillsets overlap, but only loosely.


A company full of Sheldons? Come on man, it's just us.


Character in the tv-show The Big Bang Theory. Portrayed as highly intelligent but annoying.


Re libraries: the library team I would assume and hope, since this is a very specialised skill.


>the library team I would assume and hope,

I've never been at a company that had a dedicated "library team". The common code and functions that go into the libraries are artifacts of programming the other stuff that needs it. It's not a job title.

(Unless the company's product to sell is a compiler. In that case, it would follow Conway's Law and you'd have a set of programmers implementing the standard library. I don't see how that division of responsibilities applies to Google or other companies.)


Pretty sure Google has multiple such teams dedicated to writing performant code that can be reused.


Don't forget - Google has a culture they're protecting. I'll play devil's advocate and say that while it's maybe not immediately useful, it's probably a very good test for culture fit. If you're the kind of person who thinks about the math of algorithms in your free time, because you enjoy it or think it's fundamentally important, you're probably more likely to be a Google culture fit. If you're a successful product shipper, but you have an aggressive "get shit done" attitude and don't care much about the science and math, maybe you are not going to have the discussions with fellow Googlers that Google envisions happening around campus, or maybe you're likely to be highly success/money oriented, or, I don't know, go against the culture-fit criteria in some other way.


Google internal libraries and infrastructure are very good and way better than most "off the shelf" libraries.

Rank-and-file engineers can and do contribute to them. In that case, I don't see why they should lower their bar to people who can merely consume these libraries when they can take their pick of people who can do both.


I've seen GWT and Angular 2.0; they're not any better than other open source frameworks, and many would say they're much worse.


Ha, yeah, if they could let some of that "magical internal tools" goodness seep out into the Android SDK I'd be pretty happy, since there's no sign of anything special there now, to put it mildly. Failing that, just buy Square and hand the whole thing over to them, since they seem to have a much better idea of how to write useful, usable, capable tools on the platform than Google does. Please.


> Google internal libraries and infrastructure are very good and way better than most "off the shelf" libraries.

Citations needed.


As much as I'd like to see them as well, I'd imagine we won't get much as that's Google's IP, but would love to know generically if that's true from anyone who has worked with said libraries.



Are you trying to make a point?


Well for one, see Guava and a bunch of the stuff that has been rolled into java proper over the years.


What is the equivalent that it is much better than?


It seems to be a competitor to Apache Commons; you can decide for yourself whether or not you think it's better.


Apache Commons (Collections and some others) is a very old, basically legacy library; they are not nearly equivalent in terms of functionality, so comparing them makes no sense. Google Guava is a nice library, but it doesn't have any alternatives that I know of, so it cannot be used as an example to support the above generalization (which would need many examples to hold any weight anyway).

Angular vs React is a counterexample I can think of. Angular is shit.


The point was that their interview doesn't select for "ability to contribute to the company libraries" at all. It selects for "went to Stanford or MIT and has a good ability to memorize abstract computer science material."

These are only loosely related.


You seem to have some sort of disdain for "abstract computer science material".


Not at all. I love abstract computer science. I find it very interesting.

The point is that the job of "software engineer" is not the same as the job of "computer science professor." The two require mostly disjoint skillsets, despite having some overlap.

If you're interviewing candidates to be a computer science professor at a university, then it is entirely appropriate to quiz them about abstract computer science during their interview.

Google, and other tech companies that interview like them, are using a "CS professor" interview process to hire "software engineers." That's my point.


The material you get in interviews isn't CS professor level; it's basic undergrad algorithms level. It's a fundamental course in any CS degree, not just at elite universities.


> You seem to have some sort of disdain for "abstract computer science material".

See https://news.ycombinator.com/item?id=12654399


> This mode of interviewing is like interviewing the contractor you're considering to remodel your bathroom by handing them a pile of iron ore and asking them to smelt high grade structural steel out of it.

That's a terrible analogy. I'm hiring a home contractor for a discrete, predefined set of skills. They need to be able to install a sink and retile a floor -- and once they do that I'll never have need of them again.

Software engineers are not tradesmen. What you know is less important than your ability to figure things out. If Google wants to select heavily for that quality it's a totally reasonable approach.


> What you know is less important than your ability to figure things out.

The sooner you realize that this is true of every single human endeavour, the better off you'll be.


Does not apply to live performance, aviation, or surgery.

Also, even when you have time to figure things out as you go, knowing stuff makes you more efficient at it.


And that's why: 1) you train a lot before doing any of those alone; 2) they change very slowly and every change is carefully evaluated (aviation/surgery); 3) aviation does have a lot of aids that are not memorized (SID/STAR, ATC information exchange, checklists, etc.).


When my sink leaks, I'd rather hire a plumber that has done a standard job ten thousand times, than a bright guy with a pipe wrench, a bag of PVC, and an exacto knife, who can 'figure things out'.


That's the point. Google and the rest of the big 4 don't need ordinary plumbers, they need process engineers (the engineers who design the piping at oil refineries and chemical plants) because they are tackling problems that nobody has done ten times, let alone "ten thousand times".


This is how competent managers regard programmers, just so you know.

But that ten-thousandth time, when the standard job goes awry -- that's what makes the difference between a dude showing up to work and a tradesman. The ability to react quickly and appropriately when faced with a brand-new problem is valuable in any role. Pretending otherwise is hubristic at best and kind of gross in general.


Their interview doesn't select for "ability to figure things out" at all. It selects for "ability to memorize and regurgitate academic CS curriculum on demand." I'm sure many people who can do that are also good at figuring things out, but it's mostly uncorrelated.


Have you ever done one of the interviews? Or given one? I've done both, multiple times, and currently work there. Generally speaking, the interviews are not "regurgitate academic CS" on demand. They do test your algorithmic and data-structure thinking. The reason is that it's hard to find questions that are complex enough to be hard, yet easy to explain in just a few minutes, AND have richness of detail. Those kinds of problems tend to be mathematical, algorithmic, etc. Stuff that, yes, is unlike what you do 80% of the time, but that you are expected to be able to do.

I made it thru my interviews without remembering obscure things, only knowing basic O-notation, and generally not being a math genius.


I'm not sure where this perception comes from -- my guess is it's a mix of cognitive dissonance and overgeneralizing from bad experiences.

The majority of interview questions I've encountered (at Google and elsewhere) are of the form: here's a data structure or API, can you use it to solve some problem? Nobody's ever asked me to prove A* from scratch. Even in cases where the question does revolve around a specific algorithm or CS concept they've always taken the time to explain it. I used to ask a question based around LZ77 encoding and I always went over the algorithm whether or not the candidate was familiar with it.


That's where I think there's a difference between the way mathematics is taught in engineering and the way it is taught to mathematicians.

It is one thing to attend a calculus class and learn to apply some formulas, and that's all. You vaguely remember the formulas some years later, and the algebraic abilities are all you have, but they are still very powerful tools.

However, if you take a mathematics course like advanced linear algebra, then you realize that, for this particular teacher, knowing the formulas doesn't matter at all. You have to write proofs for everything. And these proofs require all of your imagination, all of your experience, and then some.

It changes the way you approach every single problem from that moment.

Now, most people have never needed that ability. Most people will never subject themselves to the stress, the lost nights of sleep, and the sheer single-mindedness that drives everything else out of your mind for days, until you get that proof. That damned proof you will never forget for months, if ever.

The end result is that it makes your mind work in overdrive for solving the same kind of problems.

So the issue is that some people have memorized stuff (and that never works well for too long), while other people have taken that stuff as the first step, and then, through a long process of proving every single step of one or more theorems, have tattooed that stuff in some part of their minds, and are simply trying to find someone with a similar tattoo, someone with whom they can relate about their shared experience.

Anyway, the people I know that can do that, don't really like to program computers.


You have said that you haven't interviewed with them, and don't work there. You seem to be spouting a lot of bullshit for someone who hasn't had these experiences.


I am so happy that early on in my career I failed at multiple attempts to get a job at Google (after having graduated Stanford in CS in 2008). Despite having already built some objectively impressive and popular projects used by millions of people, I was asked to solve boggle, random C trivia questions, and write a sort algorithm I'll never need in my career, all on a whiteboard, and I just wasn't comfortable with that style of interview.

I was upset for a while but just kept iterating on my own projects and now I am passively making far more than I ever would at Google even including generous stock grants (mid 7 figures). It's much more rewarding too, knowing I created my own destiny, and not just some middle engineer on a mothership living off of an easy adwords monopoly.

My suggestion to anyone who has the perseverance to get through a guide like this is to just forget Google and build something new yourself.


> In the real world, you use a well-tested premade off the shelf library to traverse a DAG or implement a red/black tree.

How do you know that DAGs and rb-trees are the best solution to your problem if you don't even know basic data structures?


Knowing what these structures are and having a working knowledge of how to use them is not the same skill as being able to implement them from scratch (on a whiteboard, in an hour or less.)

If you can drive, you probably have a pretty good idea of whether a car or a truck or a motorcycle or a tank is the best vehicle for a particular transport application, and you can probably actually drive all of those with minimal practice. That doesn't require you to know how to build a car, a truck, a motorcycle, and a tank from scratch. That's a different skillset.

Google pays far above market rate, so they have the luxury of hiring petrochemists to work a gas pump. Even at Google, once all the shiny is stripped away, most of the projects are simple CRUD.


1) If you know what a DAG is and how to use it, you can most likely implement and traverse one in way less than an hour.

2) I've never heard of anyone being asked to implement a red-black tree during a Google interview.
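On point 1, the claim is plausible: a DAG is just an adjacency structure, and a standard traversal (here, a depth-first topological sort) fits comfortably in an hour. A hedged, illustrative sketch with a made-up build-step example; it is not an interview answer key, and it assumes the input really is acyclic:

```python
def topo_sort(dag):
    """dag: {node: [successors]}. Returns nodes in dependency order."""
    seen, order = set(), []

    def visit(u):
        if u in seen:
            return
        seen.add(u)
        for v in dag.get(u, []):
            visit(v)
        order.append(u)  # post-order: all successors are placed first

    for u in dag:
        visit(u)
    order.reverse()  # reverse post-order = topological order
    return order


# Toy DAG: compile -> link -> package.
build = {"compile": ["link"], "link": ["package"], "package": []}
```

A production version would also detect cycles and probably iterate instead of recurse, which is part of why "in an hour, on a whiteboard" is still a stretch for polished code.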


> 1) If you know what a DAG is and how to use it, you can most likely implement and traverse one in way less than an hour.

I know what it is and how to use it. I can implement and traverse one. I seriously doubt I could do it in an hour, especially if I don't have at least standard library documentation available.


Whiteboard code interviews don't require you to flesh out such things in any great detail; they're intended as proxies for your thinking process.


You should tell that to some of my whiteboard coding interviewers, then. Plus, I feel that many of the interviewers who claim to be interested in thought process still have a "right" way in mind and will dock interviewees for not approaching a problem the way they would.

Anyway, the post I was replying to said, "you can most likely implement and traverse one in *way less than an hour*" (emphasis mine). That statement has nothing to do with thought processes. To me, that requires "flesh[ing] out such things in any great detail".


Ugh, this sort of computer science-shaming needs to stop. I've been building software professionally for 10 years now and not once have I ever needed to weigh the pros / cons of a DAG versus rb-tree. I'm too busy dealing with CSS bugs, figuring out why customer retention is lower than it should be, training my clients on how to use a bug tracker, scaling, and hundreds of other things which are orders of magnitude more important than this implementation detail. Somehow I've managed to make a career out of this. ¯\_(ツ)_/¯


But ask yourself: how does your browser layout engine and scripting engine work? Is CSS some sort of self-hosting language that just materialized out of nowhere? Or there's somewhere a group of people responsible for it? (e.g: Google's Chromium team, Mozilla's Firefox team, Apple's Safari team, Microsoft's Edge team, etc.) Those people DO need to know data structures.

For you to be able to focus on CSS, HTML and JavaScript, someone, somewhere needs to make CSS, HTML and JavaScript work for you in the first place.


> But ask yourself: how does your browser layout engine and scripting engine work?

[But ask yourself: how does your car's engine work?]


My point is: he can focus on HTML, CSS and JavaScript all he wants, and remain abstracted from internals. That's fine.

But here we are talking about one of the companies responsible for implementing such technologies, where the science he implies as not being very relevant IS relevant.

It's the difference between applying for a truck driver job and a truck engine designer job. For the latter, knowing the theory of an internal combustion engine IS relevant.


> My point is: he can focus on HTML, CSS and JavaScript all he wants, and remain abstracted from internals. That's fine.

The majority of engineering positions that Google hires for are exactly this. The Chrome / Chromium team is just one team at Google. Most of the engineers who work there are building/scaling iOS apps, web apps, and are doing the exact same work that I do all day long.


A better analogy would be hiring engine designers by spending 90% of the interview asking them questions about low level metallurgy, and 10% about engine design as such. I would reverse those ratios.


A lot more people do stuff that interacts with the details of the browser engine than do stuff that interacts with the car engine; it's much more common (and easier) to find bugs in the browser engine than in your car's engine.

Now if I was just browsing the internet, then, sure, your analogy might be more apt. But that's not what we're doing.


How do you scale without worrying about implementation detail? You sound more like a product manager than a software engineer. Their interviews are very different.


Of course there are jobs that require deep knowledge of advanced data structures, algorithms and optimization.

But you can easily scale quite a lot of systems to millions of users without having to know any cs at all. You just need to know how to use and configure different layers of caches, design your database(s) properly and some minor tricks around code (and knowing how to analyze performance).
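As a sketch of the kind of "minor trick" that gets you surprisingly far: a read-through cache in front of the database. Everything here (the class name, the TTL value, the keys) is invented for illustration:

```python
import time

class TTLCache:
    """Toy read-through cache: serve from memory until the entry expires."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, load):
        entry = self.store.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                       # cache hit: skip the database
        value = load(key)                         # cache miss: do the expensive load
        self.store[key] = (value, time.time() + self.ttl)
        return value

cache = TTLCache(ttl_seconds=300)
# First call hits the "database"; the second is served from memory.
user = cache.get("user:42", lambda k: {"id": 42, "name": "Ada"})
user_again = cache.get("user:42", lambda k: {"never": "loaded"})
```

No algorithms knowledge required; the hard parts in practice are invalidation and sizing, not data structures.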


I built a web app that scaled from 0 to several hundred thousand users in a few days. Somehow I managed. A key thing in becoming a good engineer is learning how to delegate certain technical decisions to those who are better equipped to handle them.

https://news.ycombinator.com/item?id=2348702


Provided they exist in your organization and have the spare bandwidth for your needs. And it would be them who solved the problem that needed to be solved, not you.

This is more like Al Gore claiming he invented/solved/built the internet.

If Google was only about CRUD and banal CSS, it probably wouldn't have become the Google that it is. They became so by handling challenging and ever-changing problems. In such a scenario you need generalists with strong fundamentals who can pivot quickly from one problem to another, not a stackoverflow cut-paster. The latter skill has its moments, but you cannot survive on that alone if you are in the critical path.


> Provided they exist in your organization and have the spare bandwidth for your needs.

You can delegate to entities other than humans.

> If Google was only about CRUD and banal CSS, it probably wouldn't have become the Google that it is.

That's totally true, but how many times did PageRank need to be invented? After that the true problems at Google were scaling and monetization. Do you think the team responsible for making sure AdWords looked right on every device under the sun would agree with your statement? I'm sure a lot of the work that went into that "CRUD and banal CSS" is the same code that's made Google billions of dollars.

> They became so by handling challenging and ever-changing problems. In such a scenario you need generalists with strong fundamentals who can pivot quickly from one problem to another, not a stackoverflow cut-paster. The latter skill has its moments, but you cannot survive on that alone if you are in the critical path.

These skill-sets are not mutually exclusive.


> but how many times did PageRank need to be invented? After that the true problems at Google were scaling and monetization.

Quite a few times actually. It was not quite obvious at that time how to run PageRank and other algorithms efficiently at that scale while keeping running costs down. If it was just a library call and delegation away, they wouldn't have had such a meteoric rise.


> Quite a few times actually. It was not quite obvious at that time how to run PageRank and other algorithms efficiently at that scale while keeping running costs down. If it was just a library call and delegation away, they wouldn't have had such a meteoric rise.

We're talking about two different things here—the theoretical PageRank and the practical one. My point is that the skills required to scale a thing like PageRank—writing code to parallelize tasks, divvy up traffic, etc.— are very different than the ones involved in inventing PageRank as an algorithm.


> We're talking about two different things here

No we are not.

PageRank at its mathematical core was not that novel; basic undergrad stuff. The application was novel, not the equation; you would find that in a beginner's linear algebra book. The real deal was (i) realizing that those equations could be applied to solve an aspect of web search and (ii) scaling it up with the cheap hardware of that time while keeping operational costs low enough to be profitable. What I am saying is that one needs a good understanding of CS fundamentals and the ability to reason to pull that off with a competitive advantage. You don't get that just by tweaking CSS or for example knowing your Java platform well or by delegating. These kinds of problems are not one-off. You have to keep ahead of the competition constantly, innovate constantly, and do stuff that your competition has not yet figured out how to do.

Now that this particular scaling problem has been in the mainstream, it does not seem that big a deal to solve; it was at that time. If it hadn't been, every run-of-the-mill tech company would have been doing it to eat Google's lunch. Their managers' ability to delegate did not seem to have helped them much there.


>> We're talking about two different things here

> No we are not.

> Pagerank at its mathematical core was not that novel, basic undergrad stuff.

Whatever you say, Mr. Page. :)

> You don't get that just by tweaking CSS or for example knowing your Java platform well or by delegating. These kinds of problems are not one-off. You have to keep ahead of the competition constantly, innovate constantly, have to do stuff that your competition has not yet figured out how to do.

More false analogies here. The skills involved in solving technical problems are easily translatable from one technical domain to another. You make it seem like implementing an algorithm to scale servers is necessarily more complex than implementing an algorithm to stack shapes on a webpage in a space-efficient way. It's not.

> Their manager's ability to delegate did not seem to have helped them much there.

You're completely misunderstanding me. I'm not using the term "delegation" here as a managerial term. I would bet you that the team that scaled PageRank relied on countless open-source and freely available tools and tech that others wrote. This doesn't lessen their ingenuity at all, but it should be clear that even the most complex tech is built on the shoulders of others.


Lol, do you know PageRank? At its core it's just a random walk on a graph; this stuff is taught in intro linear algebra courses. Of course, modelling the web this way and tweaking the model to produce optimum results was a big deal. You'd be a fool to believe that PageRank hasn't evolved in 20 yrs.
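For what it's worth, the "intro linear algebra" claim is easy to demonstrate: the core of PageRank is power iteration over the link graph, about a dozen lines of code. A toy sketch (the four-page graph and the damping factor 0.85 are illustrative; the hard part at Google's scale was never this loop):

```python
# Toy PageRank by power iteration. links maps each page to the pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = len(links), 0.85  # d is the damping factor

rank = [1.0 / n] * n
for _ in range(50):  # iterate until (approximate) convergence
    new = [(1 - d) / n] * n  # random-jump term
    for page, outs in links.items():
        for target in outs:
            new[target] += d * rank[page] / len(outs)  # spread rank along out-links
    rank = new

# Page 2, which every other page links to, ends up with the highest rank.
```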


> Lol do you know PageRank?

Oh, please. This back-and-forth and condescending attitude is getting so tiring for me. Yes, I've read the paper multiple times.

> this stuff is taught in intro linear algebra courses

I graduated with a B.S. in Applied Mathematics from Yale. I'm familiar with this stuff, thanks.

> At its core it's just a random walk on a graph;

I'm tired of making this point. There is a big difference between understanding a ground-breaking discovery and making the discovery itself.

> Of course, modelling the web this way and tweaking the model to produce optimum results was a big deal. You'd be a fool to believe that PageRank hasn't evolved in 20 yrs.

I never once said this. I said that after PageRank was developed, most of Google's engineering resources went into scaling and monetization.


Scaling has its own set of algorithmic challenges; at small scale there's not much to be gained from asymptotic complexity improvements, but at the scales Google operates at there definitely is. I'd wager that a lot of the work Google does on its distributed computing platforms involves algorithmic challenges.


Again, it sounds like your skills lie mostly in product management rather than software engineering. While you have a valuable skillset, it's not what Google needs from a software engineer. You're good at a different job.


TIL I'm not a software engineer. Who knew!

> While you have a valuable skillset, it's not what Google needs from a software engineer. You're good at a different job.

See https://news.ycombinator.com/item?id=12653628. Far more "software engineers" at Google do the exact same thing that I do all day long than do the type of work you're referring to.


If you're not a software engineer, why bring up your accomplishments as evidence that Google should hire you as one?

I'm pretty sure that the people who Google hires to do Javascript/HTML/CSS do not normally go through this type of interview.


> If you're not a software engineer, why bring up your accomplishments as evidence that Google should hire you as one?

I'm confused by this statement. What do you mean?

> I'm pretty sure that the people who Google hires to do Javascript/HTML/CSS do not normally go through this type of interview.

Completely false.


"A key thing in becoming a good engineer is learning how to delegate certain technical decisions to those who are better equipped to handle them."

Like someone with a good understanding of computer science?


Well, now think at Google scale. You said a hundred thousand users? Add at least three orders of magnitude.


Horses for courses. There are people that would hate doing client training but love weighing the pros and cons of a DAG vs RB tree. They too can make a career of it.

It's a big industry, plenty of different ecological niches to fill.


I will hazard a guess that whatever you did it was not as impactful as GFS or mapreduce at scale.

> I'm too busy dealing with CSS bugs, figuring out why customer retention is lower than it should be, training my clients on how to use a bug tracker, scaling, and hundreds of other things which are magnitudes more important than this implementation detail

For hard-tech those become important only after there is an implementation that solves a high barrier to entry technical problem. Then you can get a good run of the mill PM to keep it chugging.


> For hard-tech those become important only after there is an implementation that solves a high barrier to entry technical problem. Then you can get a good run of the mill PM to keep it chugging.

I completely disagree, and your statement is emblematic of what is wrong with popular perceptions of what it means to be a good software engineer.

The best engineers I've ever worked with are fantastic communicators and understand the product that they are building to the core. Being able to ask a good question or drill down into correct requirements is far more important than knowing how to traverse a binary tree, at any level.


Where did I say engineers are not, or don't need to be, fantastic communicators?

All I am saying is that if you are in a hard-tech area with problems that have not yet been solved well enough to be monetizable, communication and delegation are not what is going to solve them. Agile, extreme, or whatever is 'in' at the moment is not going to do it. It gets solved by the ability to reason about technical things. I can for instance communicate the need to cure cancer (bad example, sorry) or delegate willy-nilly, but sorry, that's not going to solve it.


> It gets solved by the ability to reason about technical things.

> I can for instance communicate the need to cure cancer (bad example, sorry) or delegate willy-nilly, but sorry, that's not going to solve it.

I, again, disagree. Technical ability and communication skills are equally important. You need both to be an effective engineer.


Why do they do this? The answer you haven't considered is, "because it works." I assume you haven't considered it because doing so would mean wrestling with the fact that your intuition isn't always right.

Google has turned interviewing into a science. They have interviewed millions of engineers, hired hundreds of thousands, rejected and later hired who knows how many, etc. Many incoming interviewees have also interviewed across silicon valley. And all along the way they've been collecting data and refining their process.

Do you have anything except your intuition to show us that the Google style interview doesn't work?

One thing you may fail to consider is that there doesn't need to be a direct and plain link between the interview questions and the work the engineer will end up doing. All that matters is whether the interview process produces a clean and strong signal that leads to quality hires.

As a person who's been through a Google interview, I can imagine many of my former co-workers performing poorly. Those are exactly the kinds of engineers I don't want to work with. I have spent too much of my time explaining basic concepts to engineers who should already be comfortable with those concepts, or who lack the critical thinking skills to be part of the conversation. So there's my anecdote.

Before telling a company how they should conduct interviews, perhaps you should gather some data.


While it's undeniable that Google has an excellent engineering team, it's also undeniable that their products often/usually leave a lot to be desired from the end-user perspective. There's definitely a missing piece of the puzzle there when it comes to UX and human-centric design. Who's to say that their focus on low-level engineering at all costs hasn't left something on the table?

I can't say for sure that their interviewing process is broken but as a user of many Google products (many of which get randomly discontinued) I will say there's something lacking.


This is a red herring. Name a company that doesn't leave something to be desired. No company is perfect. The idea that Google's imperfections have something to do with their hiring practices is not prima facie ridiculous, but you've offered exactly zero support for it. You're just lobbing out complete conjecture. How does that advance the conversation at all?


Why are you being so defensive? I'll go further and say that Google's products are _uniquely_ bad as far as UX go. There's definitely missing product thinking in that organization.


It seems you didn't understand my last comment. You're making logical leaps here without connecting any dots. "Google's hiring is broken" does not follow from "Google's UI sucks." You need to put some connecting logic in there if you want to convince anyone that you're even on topic right now.


> Google's hiring is broken does not follow from Google's UI sucks.

Who do you think is building the UI and products? People that made it through the hiring process.


> The answer you haven't considered is, "because it works."

Just because something "works" does not mean it's optimal or even very good.

Google's interview strategy could also be to pick random resumes out of the pile, hire 10 people for every position, and fire the 9 who didn't work out. It would still "work" because enough people want to work at Google that they have a huge hiring pipeline and plenty of money for excess employees.

Worshipping Google as having great interviews is not helpful to anyone. They can easily be doing it incorrectly.

In fact, the very data which you're talking about showed that many of their older techniques were in fact terrible indicators (brain teasers, GPA, etc.).


I never told any company how they should conduct their interviews. Google is of course free to spend their money however they see fit. This was not written for the benefit of Google, but for the benefit of the many others who are considering whether something is a best practice simply because Google does it.

As for why they do this, your conclusion is kind of exactly what I got to in the last paragraph, "for lack of a better option." It's not that it works in some optimal sense; it just works less badly than the other options available, according to their business constraints.

I base my claim on my many years of personal experience as a software engineer, both being interviewed and interviewing others, discussing engineering and CS with numerous people. My anecdote: I once worked with a guy with quite reasonable academic CS skills who decided to re-implement SSL from scratch for a bog-standard webapp, rather than use a library to do HTTPS. That was not a good use of the company's limited budget, which he would have known if he had had any engineering skills (as opposed to CS skills.)

In my experience, there is little (though not zero) correlation between "good at CS" and "good at writing software commercially." The Google interview process is probably better than nothing, and probably better than their previous "brainteaser" style interview, but far from optimal.

As for what I'd select for? I'd certainly reject candidates who manifest anything like the condescending and arrogant attitude you display towards people you consider to be your intellectual inferiors. Even if you're an amazing engineer who can recite the CS in your sleep, that attitude is toxic for any team environment and will be a net loss to the company.


Google does have projects that require clear understanding of the things you classify as trivia.

Their search crawler, their distributed computing technology from which projects like Hadoop drew inspiration, their cloud platform with custom hardware, TensorFlow, their custom Linux kernel and Android, improvements to network protocols like HTTP, V8, Chrome, Project Zero, and user-facing apps like Maps, Translate, etc. All those important projects will require you to know your way around the fundamentals from the ground up and think out of the box.

Then, they have lots of applicants. They can be more selective than they need to be.


The second factor is far more important than the first.

There are any number of engineers who can simply go read up on the fundamentals as they are needed, or who understand them well enough to work with them but lack a mathematical proof-level command of the material. Google just doesn't need to hire them because they're paying much higher than market rate.


Trial and error doesn't go that far. Also, they have customers and a reputation to maintain; they cannot afford the risk of learning going wrong.


I don't completely disagree with you, but at Google, mathy CS skills are probably better than robust SE skills. Things need to operate at enormous scale.

It's easy to imagine a case where Google would need to implement something algorithmic because well tested libraries aren't quite doing the right thing. The libraries are generally designed to work in RAM / HD. Google needs them to work across multiple servers / datacenters.


The company itself started because they implemented 'something algorithmic'. They know that's what made them different in the first place.


> enormous scale

Add more nodes. Scaling is not difficult.


> In the real world, you use a well-tested premade off the shelf library to traverse a DAG or implement a red/black tree.

I reject your real world assertion, unless you do not count Google as the real world. The reason these questions are asked is that they're relevant to the work engineers will be doing at Google.


You're not supposed to ask brainteaser or trivia type questions as a Google interviewer anymore.

But yeah, I agree with the rest of your post. Also, another factor beyond confidentiality is that you can't hire people from other companies through that pipeline, because they won't do it.

Also worth noting that for certain classes of engineers (new grads), Google does sort of hire people as contractors (though they're full-time with benefits), see the eng residency program.


> You're not supposed to ask brainteaser or trivia type questions as a Google interviewer anymore.

That's what "previous" means in:

> Their previous "brainteaser" style interview didn't work at all


Yes sorry, my sentence scans better with a capital 'OR'


Er. I actually said that they don't use brainteasers anymore in the post.


Yes, hence the OR. You're not really supposed to use CS trivia style questions either. Both are deprecated. It will take a while for interviewers to catch up though...

I suppose I should have capitalized the OR


Building a good product is not just a matter of knowing CS questions

The good thing about CS research is that it can be productized into libraries and tools


How does one hire a contractor in other industries? I'd wager that word of mouth is the way. The alternative is an association and certification body (not too dissimilar to the medical association or the bar for lawyers), and gating access to this certificate.


> hiring candidates as temporary contractors for a real world test doing actual work

While this is very effective at filtering out poor applicants, it is also very effective at preventing good applicants from applying in the first place. Why would a great developer who probably already has a great job quit for a chance to maybe get hired at Google? In most cases they wouldn't. And at the scale Google hires, they can't afford to lose all of those potential employees.


From the article: "From what I've read, you won't implement a balanced search tree in your interview."

Despite being an interviewer, I don't know if that's true. I do know it's not what I ask about.


It's pretty safe to say you won't implement one, even if the interviewer does ask you to.


Well they are pretty successful, so they must be doing something right.


[Disclaimer: I run an interview prep bootcamp http://interviewkickstart.com]

As someone who helped set interview process at my previous employer (not Google), and other companies now as a consultant, your reasoning of why these companies have a process like this, looks right. Let's dig a little deeper too.

Consider the following facts:

0. Google has some 30k-40k engineers, with average tenure of say 7 years. So every 7 years, they are hiring 30-40k engineers. That is thousands of engineers every year, even if they are not growing (and they are). It's a massive undertaking.

1. In the field of software engineering, unfortunately, experience has little correlation to expertise. That has been known for a long time. See [1]. And cognitive ability is at least a reasonable predictor of success, better than many others. See [2].

2. Companies like Google have no dearth of applications. Literally millions of engineers apply.

3. Interviewing is a chore, and mostly not fun. Most interviewERs hate spending time on it, and want to get out of it as quickly as possible. Interviewing is also on top of everyday work which is a lot at many growing companies.

4. As a hiring manager, I have to hire people. There is no choice. I must find a way to hire N people in X amount of time, or the company is literally doomed. e.g. if I'm eBay or Amazon, I can't miss the holiday season.

5. You have to involve multiple people in hiring. Just one person talking to the candidate and making a decision is not sufficient.

6. Programming is so vast, that engineers apply from all sorts of backgrounds and domains.

7. Companies are becoming more and more polyglot. You want engineers to move around different languages and stacks freely.

So when you have a lot of people to hire, a lot of people applying, work that's somewhat correlated to cognitive ability and very little time, what kind of process do you end up setting?

When you put these constraints together, you realize that your incentive is to design a generic process that's convenient to the company, and not convenient to the candidates.

You just want reasonably smart people, fast. You don't care about seniority much, and the type of problems asked, as long as it helps you close N people in X time, by putting in least amount of work. A different process might have selected for different kind of N people, but that process would take longer than this. And time is money too.

A process with DS/Algos is less subjective, fast (like you said), can be prepared for (you have to hire), has enough variety that multiple people can ask different questions, lets you interview across domains, and is at least somewhat defensibly relevant to the field.

And hence, here we are.

Not saying that the process is understood by everyone and executed well everywhere and every time, but after several years, we're all still looking for a process that's a lesser evil, and we haven't found one.

As long as the constraints outlined above remain, the process is going to stay. In some form or another. For a very long time. However much everyone hates it, including the interviewers, and the company itself. Many have tried otherwise (including myself), but most have come around to asking DS/Algos somewhere in the process in varying percentages.

[1]. http://www.ida.liu.se/~nilda08/Anders_Ericsson/Ericsson_deli... [2]. http://mavweb.mnsu.edu/howard/Schmidt%20and%20Hunter%201998%...


That is a good analogy.


This is a bit meta, but it bothers me sometimes how much virtual ink is spilled on how to pass algorithmic interviews. Especially considering how little of your career will actually depend on the knowledge being presented.

I'm slowly learning that despite how much more money I've earned by skipping between companies, skipping is not a path to knowledge. It provides better earning opportunities (to a point), it provides new challenges on a regular basis, but I've lost the ability to go deep into a topic. The depth that can't be achieved in two or three years.

I guess I'd really like to see a bit more ink spilled on "congratulations, you got a job; now what?" What interpersonal skills are most important? What are good resources for continued learning? What do C level people actually do? How can I help the company succeed? What career opportunities are possible for me without having to skip jobs? What can I learn from this company before I go to create my own?

That would be much more valuable, IMO, than a primer on how to pass an interview process.


In this vein, I wish people would ruminate on a topic like "Senior Engineering: So now what?". When I first wanted to get into engineering, nothing helped or inspired quite like these posts. But since I have "made it," so to speak, I feel like I've done the majority of the growing and introspection required, and now I can basically handle anything a senior engineer should be able to handle. But... now what? I am kind of at a crossroads where people tend to either stagnate (I've met a lot of 50+ peers with senior titles they earned 20 years ago), run off to manage people (do I want to do that? I dunno...), or end up as Principal Architects, co-founders, or some similar upper-echelon engineering role.

Not to knock this content, anyway. I found this repo to be well organized and genuinely interesting -- just not super relevant for me...


I think the only way to really prepare for working in industry is to actually work in industry.

Being able to parrot the crap you were taught in college is only useful for interviews, which is a sad reflection of what interviews actually test for.

You are completely right about what you said, but I'd like to add the one thing that is most important for any software engineering role at any company: your ability to learn as you continue practicing in our field. That should be the highest priority. How do I familiarize myself with all my tools and resources, not just so I can do my job really well, but so I can move to any task and execute it well?

That should be the focus of any practicing software engineer. To be able to learn and adapt as requirements and goals change.


I have done maybe 20 or so interviews in the last 2 years and this is the #1 thing I look for. However, I have still hired people who do not display a talent for continuous rapid learning, that is to say, they will only learn on company time and they take a long time (reasonably) to learn anything new. The flip side is that these people "work with what they know" and you can rely on them for a steady work pace and consistent quality. Much like so many development tools, people come in lots of shapes and sizes that benefit your organization differently.


You're probably right, but there are probably millions of software engineers who can have a successful career at Google. But they won't get in unless they can pass the algorithmic interviews.

I agree that the emphasis Google puts on algorithmic interviews is a bit dumb, but that how their hiring system works. So anyone who really wants to get a job at Google needs to work on that, regardless on whether it will be useful inside the company or not.


Are there really millions of engineers who will have a successful career at Google, or are there only a hundred or so every year?

There are millions of programmers, most of which will never get a job at Google. Is their time perhaps better spent optimizing their existing careers instead of doing "dumb" things for a chance at the gold-plated goose?


Most things I do in my day-to-day work are dumb. I code around browser incompatibilities, write scripts to automate away parts of byzantine and stupid business workflows that survive for political reasons. Etc, etc. The point is that if a few months of studying is beneath you, modern software development is going to have far more stupid schlep anyway; I imagine especially at a company of Google's size.


I wouldn't recommend training specifically for Google in the hope of someday joining them, but if you get an interview at Google and are serious about joining, you should really train for it.


Agree that it's not a bad strategy for Google to hire the way they do. It is a bad strategy for most other companies though, as by running the same interview process, they'll get people who couldn't get a job at Google.


There are plenty of good engineers who can/do pass the Google interview but don't want to work there. I turned down a couple Google offers before ending up at my current job, and now work alongside many ex-Google employees. Part of the problem with discussions like this is that too many people assume that Google is the ultimate goal.


Agree that not everybody wants to work at Google. But it seems to me that today, it would be a first choice for many people.


Yeah, it was certainly my first choice before I started my career. After learning more, my opinion changed. People want to work in an imaginary ideal of Google.


Yes, the myth is one thing, but reality is another!


You seem to forget:

1. they have a lot of applicants to pick from.

2. they do it for cultural reasons rather than purely for on-the-job performance reasons.

Culture is a long term investment.


"What interpersonal skills are most important? What are good resources for continued learning? What do C level people actually do? How can I help the company succeed? What career opportunities are possible for me without having to skip jobs? What can I learn from this company before I go to create my own?"

Pretty sure tons of content has been written about all of these topics, and it's not particularly hard to find.


>What interpersonal skills are most important?

Read Shapiro's "Corporate Confidential". Despite the rather click-baity (read-baity?) title of the book, it's a fairly good explanation of career management and the politics of mid-size to large companies. I'd recommend it to all new graduates entering the workforce.


The problem is that the algorithmic interview is the gatekeeper.

As you say, almost none of your actual job depends on knowing any of that material in any way. However, you aren't even getting in the door unless you know it all, which is indeed arbitrary and a waste of everyone's time and energy.

Thus, good engineers who didn't happen to study academic CS at Stanford or MIT must regrettably waste months of their lives self-studying CS trivia which they will never use again except at other interviews. Being good engineers, they have studied how the system works and taken the rational and necessary step to participate in it. That's why so much ink is spilled on it.


> aren't even getting in the door unless you know it all

This is the key to only one (admittedly pretty) door, though. Are the benefits of getting through that particular door truly worth the cost?

I think this question is even more important when so many other companies are now moving away from algorithmic interviewing techniques since they aren't Google.


I've seen the reverse: many companies are adopting CS trivia style interviews just because Google does it. They figure Google is the best of the best in software, so you can't go wrong by adopting as many of their practices as possible for your own software firm.


Whenever there's a test that gates perceived advancement, people will teach to the test as a way of circumventing the gate.


While we could debate the meaningfulness of his goal, I find the person admirable. We could perhaps give more credit to the fact that this is a 44-year-old who is taking the difficult course of working through the CS material on his own and then setting a clear goal to aim for. And of course, let's not forget to celebrate his virtue of lifelong learning.


People complain about the Google style of interviewing, but I like that you can at least prepare for it. I'm sure there are many companies that wouldn't even consider you because you're too old or because you don't have the right background.


Yeah, I have a suspicion that this is the only reason why people still use it.

The signal that it is measuring is "How much did you prepare to get this job." Which isn't that bad of a signal.


Totally. Being 38 myself, this is one of the things I've liked most about this case.


I used the same approach last time I was interviewing -- keeping a wiki of "everything I knew or sort-of-knew about computer science" and then trying to fill in the gaps.

At a glance, a few things I would throw in are:

* UTF-8 (a basic understanding of how wide characters are encoded)

* streaming algorithms

* lock-free data structures (what they are and why you probably shouldn't implement one)

In some sense it's like trying to hold the entirety of CS in your head for one day. You might spend 30 minutes learning about something so you can mention it as an aside. But... I think it's worth it, more for the knowledge.

Knowing that something is possible (e.g. encrypted search) opens doors in the same way that working cross-discipline can.
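As a concrete instance of the "streaming algorithms" bullet, here's reservoir sampling, which keeps a uniform random sample from a stream of unknown length in O(k) memory (a minimal illustrative sketch, not part of the original list):

```python
import random

def reservoir_sample(stream, k):
    """Uniformly sample up to k items from an iterable of unknown length, O(k) memory."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir with the first k items
        else:
            j = random.randint(0, i)    # keep item with probability k / (i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Each item ends up in the sample with probability k/n, which you can verify by induction; it's a nice interview topic precisely because the proof is short.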


I like the idea of using a wiki to track a lot of concepts you've learned. How do you keep track of everything that is in the wiki? Do you have a front page that is organized like an outline, or maybe a long list of concepts?


I went with the "long list" approach -- it was just a single page.


While joining Google is probably the last thing I want to do, this is a perfect resource for someone who hasn't had a formal education in computer science.


Indeed, I have no degree but have been working in the field for 7+ years. Recently I decided to start searching for new challenges, but thanks to ATS [1] my resume has been rejected 90% of the time; the other 10% has been invested in interviews at fairly popular companies, including Amazon and Booking.com. After studying CtCI [2] and practicing on LeetCode [3] for two weeks (yes, just two weeks), I was able to pass most of the technical interviews, so I can testify that using this repository to improve your skills actually works.

PS: Just to clarify, I haven't been hired by any of the "Big 4" yet, but that is because, again, I have been studying for only two weeks; if I invest more time in this, maybe in 5-6 months I can guarantee an offer from one of those popular companies that appear featured on HN all the time. And as the parent comment says, even if you have no intention of working for Google or any of the other big corporations, this is still a good list of resources to improve your skillset.

[1] https://en.wikipedia.org/wiki/Applicant_tracking_system

[2] https://www.amazon.com/a/dp/0984782850

[3] https://leetcode.com/


Agree. IMO, if you master all the resources above, then you can probably get into any software company you'd like.

Google should not be the only option.


More than a Google interview university, this is the curriculum for an online degree in computer science. Thanks for sharing.


This seems like a great set of resources, but getting as keen as [0,1] just seems like the author setting himself up for disappointment.

[0]: https://github.com/jwasham/google-interview-university#get-i...

[1]: https://github.com/jwasham/google-interview-university#follo...


I also feel bad that he has seemingly tied so much of himself up in this. Although the technical interviews at Google kind of require this prep, at the end of the day interpersonal skills and other factors contribute to the final decision as well. It's possible someone interviewing him will find this distasteful. It's also possible someone might just not like him. It's impossible to know exactly what will happen; I just hope he's somewhat psychologically prepared to not be offered a job.


Even if he doesn't get the job he's going for, mastery of that curriculum will be tremendously valuable. So it's a pretty benign delusion, as long as it's motivating him to study.


I think the parent commenter was agreeing with me; I certainly wasn't suggesting that the resources, or the time spent learning them, are a waste.

It's specifically the "future Googler" print-outs, and plethora of usernames and websites that include "Google" that I think could make it a heart-breaking rejection. (Though of course, I wish OP all the best and hope it doesn't come to that!)

Frankly, I can also imagine it being more than a little embarrassing after a successful application.


Yeah, I had thought this was just a fun way to organize a set of CS learning materials, but this guy seems to both 1) be very certain this will work, which is no guarantee even if the learning program is perfect, as there's obviously more that goes into hiring decisions, and 2) be really quite focused on Google specifically, which may not make sense.


His LinkedIn shows that he worked for/founded a lot of startups and other smaller ventures. I'm kind of lost as to why he thinks Google is the logical next step.


Maybe he wants something more stable?


Google is where software engineers go to retire.


Great resource! Thanks for making this public.

"If you like this project, please give me a star. "

So it's "like, comment, subscribe", and now "give me a star"? It's the first time I've seen this and I hope this doesn't catch on. The current system of aggregation/curation/voting works fine imo.


A GitHub "like" is a star. Or is there something I've misunderstood?


The system is not the problem. It's the solicitation.


I agree, if you agree with me can you upvote my comment? Thanks.


I don't get it. Why is the solicitation a problem?

It's a different thing if he says that you must STAR before you can access the repo.


I think it's a merit thing. If your content is truly worthwhile, is there a need to solicit/ask for stars? Researchers don't beg people to "like/comment/subscribe/star/tweet/etc" in their abstract, it feels odd to ask for a star in this manner.


The way I understood parent was that liking, commenting and subscribing was okay to ask for, but asking for stars was too much.

Re-reading it now, I'm actually not so sure what parent was complaining about.


It's not the same for me. Stars have a long history of being used as "Favorites", so it's more than a simple "like" in FB terms.


A lot of people will get turned off by the idea of having to or wanting to study so much just for a job at a company. The way I see it though, this is a great list of resources to learn about a lot of programming-related things.


Google pays pretty well. You already have to spend so much time in school getting a degree; spending some extra time to get this job isn't too bad as an incremental cost.


Good list of resources, but motivation tied to a job at one specific company seems to me like the wrong approach. What if Google isn't the place you'd like to work after two years (because something happens in your life, or at Google)? Or what if, after a few months of employment, you get the feeling it's not the place for you?

This motivation also won't make you a better person.


Interviewing at Google gives him a specific target to motivate his efforts, but the material he is learning will be very useful to interviewing at other places as well.


Google people regularly tell me that qualified candidates have a 50/50 chance to make it through the interview process successfully. I'm sure they're working on that, but I would be thinking about it in those terms.


If you don't prepare, the probability is much less than 50/50. I would rather have prepared and failed than just go in there and bomb it. You're going to have to waste a day going to the interview; you might as well be prepared.


That's kind of the point -- even if you prepare well, your chances are only 50/50, so don't be too disappointed if you don't make it.


These interviews are no different from a lottery where you hope and pray they choose an algorithm question you studied and learned well.


A recruiter at Google literally told me which algorithms to study and learn well, and only one of them came up in interviews (DFS).


I didn't say not to prepare. :) Just defining the terms: a "good" chance is 50/50. I think knowing that might influence how you prepare, how you go into it mentally, and most importantly how you process a rejection if that's the outcome you get.


Where is the university for engineers that want to replace Google, not work for Google? Because working for Google is basically like working for IBM back in the 1970s.


Spend your time building shit instead of reading Cracking the Coding Interview and wasting time on Hacker Rank / Leetcode. There's the university.


I like your list! I'm going to give it to my little brother before he applies to work for Google. One thing I'd recommend looking into is depth-first search with pruning. It's nothing terribly clever: just boxing up something you've probably done before and giving it a name. It's a problem-solving technique that often has tremendous real-world benefits. Essentially, if you can frame a problem as a tree with potential solutions at the leaf nodes, can you define a search on that tree that gets you to the right solution as fast as possible? Often simple tricks let you chop off entire branches of the tree so you never have to waste computation exploring them.

It's nothing revolutionary, but it's good practice to be mindful of when you can chop pieces off your search space. Interviewers always love hearing something like "and if we organize the problem this way, we can always stop searching here because we know we can't get a better result".
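The "chop off branches" idea can be sketched with a toy subset-sum search (an illustrative example; the problem and names are made up, and non-negative items are assumed): abandon a branch as soon as even the best possible completion can't beat the best answer found so far.

```python
def best_subset_sum(items, target):
    """Largest subset sum not exceeding target, with branch pruning.

    Assumes non-negative items.
    """
    items = sorted(items, reverse=True)
    # suffix[i] = sum of items[i:], the most any branch at depth i could still add
    suffix = [0] * (len(items) + 1)
    for i in range(len(items) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + items[i]

    best = 0
    def dfs(i, cur):
        nonlocal best
        best = max(best, cur)
        if i == len(items) or best == target:
            return
        if cur + suffix[i] <= best:        # prune: even taking everything left can't improve
            return
        if cur + items[i] <= target:       # branch 1: take items[i]
            dfs(i + 1, cur + items[i])
        dfs(i + 1, cur)                    # branch 2: skip items[i]

    dfs(0, 0)
    return best
```

The suffix-sum bound is exactly the "stop searching here because we know we can't get a better result" move: whole subtrees are skipped without being explored.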

P.S.- If you know Python, I'd recommend interviewing in it. You won't be able to implement anything with nodes very well, but you can write most code on the whiteboard very succinctly. They WILL want to see actual code.


I would start doing this today, not to get a job at Google, but to become a better software engineer.

- Thanks for the Amazing List of Resources


How is this not an indictment of Google's interview process?

I have a CS degree and many successful software projects under my belt. Plenty of employers have happily paid me to ship software for them and I have repeated referral customers. I have no doubt that if given my computer and access to the internet, I can easily implement any algorithm in this list.

Yet I also have no doubt that I'd have to prep for at least a dozen hours before a Google interview if I wanted a chance to succeed.

Why on earth should going through a software engineering interview require pulling out your old textbooks and running through problems? Since this is really just a glorified IQ test, I'd much rather they just handed me an IQ test (or asked for my SAT).


Looks great even for just refreshing the basics again. Great work. Thanks.


If you like, I can forward you my emails from their recruiters too. They're starting to get a little passive-aggressive since I never reply to them.


They stopped emailing me when I told them they were mailing the wrong person with my name. It helped that they were actually trying to reach someone else, they referenced "my" experience at a company I've never worked for, and I found a blog post for that company by another person with my name.


This guy's website is weird. Feels like he's fetishizing Google the company. Not that there aren't people who do that, but still.


You'll get into Google if you have a CS degree and are good at memorization.

memorization != understanding. I hope Google gives a chance to the latter: the "time-tested, proven" engineers, some of whom can't answer "computer science trivia" because they don't spend time memorizing theory; they spend their time building and applying what they have learned.


This is a very ambitious list. It basically describes what you would end up going through in a CS undergrad curriculum (as others have pointed out). Personally, it takes me a while to learn something initially, and I keep probing until I feel I have a solid understanding. There is so much to learn in this list that I'm not sure how someone could reasonably understand most of it without taking quite a bit of time. I know that if I tried to go through this list from top to bottom, the stuff I learned first would start to fade from memory as I kept adding more at a rapid clip.


There is a flaw in this program and in many Coursera courses about algorithms and data structures.

Although you can grasp all the basic algorithms and data structures relatively quickly, you most definitely cannot quickly build the skill of recognizing these algorithms in problems (which is the most valuable skill, and the most overlooked).

Another problem is that you also forget these algorithms pretty quickly and can't implement them after a few months.

I have been on and off with algorithms since 2014 and have already experienced these problems over the last two years.

I have since taken another route, which is much slower, but now I can remember, recognize, and implement some algorithms even after a few months.

I started participating in algorithm contests, mainly on CodeForces, sometimes on HackerRank. These problems often cover pretty narrow topics, so what you can learn in two months on Coursera can take a year or even more on CodeForces.

Why are algorithm competitions useful? When you've been stuck on a problem for a while, then read the solution and see that the problem is solved by [for example] Dijkstra's algorithm, you remember that algorithm for a while. If you get stuck on another problem and, after reading the solution, discover Dijkstra's algorithm again, you remember it for a very long time (provided you put significant effort into solving the problem). That's because your brain connects Dijkstra's algorithm with an important problem, since you were frustrated trying to solve it during the contest. So frustration is actually good for memorization (if used correctly)!
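For reference, this is the kind of heap-based Dijkstra you'd want to be able to reproduce from memory after that frustration (a standard textbook sketch; the adjacency-list representation is just one common choice):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source. graph maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry; a shorter path was found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The "stale entry" check is the detail people forget: instead of a decrease-key operation, you push duplicates and skip the outdated ones on pop.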

So there are 3 aspects which you should develop together in order to achieve really good results:

1. Implementation skills;

2. Problem-solving skills (i.e., how you recognize the algorithm in a problem);

3. Knowledge of algorithms, data structures, and paradigms (like dynamic programming, greedy, etc.);

Often, courses cover only the third aspect.

Here is my description of CodeForces problems difficulty and relevant skills they develop:

Div2 A - a trivial implementation problem; trains accuracy;

Div2 B - a little tricky, but still a trivial problem; find the little things needed to solve it;

Div2 C - sometimes easy, but sometimes really hard to solve; covers topics like combinatorics, dynamic programming, greedy, elementary graph algorithms, etc.;

Div2 D - usually hard to solve, but can cover classical algorithms like Dijkstra's, Kruskal's, binary indexed trees, etc.;

Div2 E - very hard to solve, way above my current level (the same applies to Div1 problems);

If you can consistently solve Div2 C problems within 15 minutes and sometimes solve a D problem within 30-45 minutes, you will crush the Google interview (at least the algorithms + data structures part, which is where most people get stuck!).


I wish your comment were higher up. It does indeed take time to internalize how to use these data structures, and that's what the interviews usually test.


If you're already a successful entrepreneur, the only reason you'd want to be a Google software engineer is if you have no idea what you're signing up for. There's absolutely nothing magical about working at Google and the learning curve plateaus fast, at which point you're likely a salaried code monkey. If that's more money than you anticipate making on your own, then I guess go for it. Otherwise you're making a serious mistake.


Maybe it's just me, but it sounds like the OP is setting himself up for profound disappointment even (and perhaps especially) if he's successful.

All that time and energy focused on becoming an employee of that one company, Google, will create enormous expectations, which are likely to be crushed by reality when the OP comes to the realization that, yes, even Google is just another place to work for a living. Does anyone who actually works there think otherwise?


Why are people so obsessed with working for Google? Granted, I'm sure there are a few niche groups that are probably really cool to work in, but the others probably involve their fair share of CRUD work, like any other typical large corporation.


This is a great list of fundamentals for CS.

This is also very creepy.


Would be nice to have a similar list that is more data-science or database oriented; maybe someone knows of one.


I think this is what you're looking for https://github.com/datasciencemasters/go


It is fine to create a training program targeting top-tier technology companies. No problem with that so far.

But he specifically mentions Google, and that I have a problem with: you cannot advertise a program as an effective approach to the Google interview if you have not been an interviewer, or have not received an offer as a result of the interview process.


I don't think any corporation deserves this kind of attention.

I did notice that Indian and Pakistani engineers like to do these kinds of preparations, and the reasons are mostly cultural.

You should always come to an interview prepared, but the better an engineer you are, the more companies and options you will have.


This list is really huge! Do we really need to learn all of this?


I don't really object to content of these exams any more than I'd object to taking the math and linear algebra exam if I were an actuary, or the bar exam if I were a lawyer. I certainly don't think this is all trivia. It's foundational, but not the sort of thing developers walk around ready to be tested on in their day to day lives. If you suddenly plunked a vector calc or numerical analysis exam in front of a senior actuary, many of them would probably fail, even though they all passed the test at some point. If you suddenly forced experienced lawyers to re-take the bar exam, many would fail, including those who did very well on the bar.

Trees, graphs, mathematics, binary? Sure. It's foundational.

What I dislike about these "interviews", so much that I have contemplated leaving the field, is that we, as developers, have to take them over and over again, under capricious and arbitrary circumstances.

Think about all the exams I mentioned above. They have a proper study path, they are administered consistently, they are graded and evaluated by respected and vetted professionals who will at least attempt to apply similar standards. You get information about what will be tested, you get feedback on how you did. You don't study and prepare only to hear "we've decided not to pursue your (actuarial candidacy/bar candidacy/nursing boards) at this time".

That's what makes this all so awful. I feel, after 15+ years, that I've re-taken my fundamentals exam enough times. If it were trivial, that'd be fine, but it takes time and focus to get ready for it, time I'm less and less inclined to spend on it.

I think that there is an informal set of rights and responsibilities that evolved over time between exam-based professions and students. I believe that the "tech interview", which is really an exam, has almost all the downsides and none of the positives, none of the protections that evolved in the other fields.

The only positive thing I can say is that at least tech doesn't require a very specific degree, unnecessarily long, that can only be obtained by going through certain accredited programs that cost $40k+ a year in tuition (the 3 year law degree at sky high tuition that can only be obtained through certain programs is cartel-like, professional regulatory capture at its very worst). But you know, actuarial exams are rigorous, don't require a specific degree, you can prepare how you like. I'd say that's probably the best model for software.

I don't mind studying for this, but I don't want to have to do it over and over, and I'd like some assurances that it will be administered properly.


FML I could not imagine prepping for something like this.


Isn't it kind of perverse in a way that all this CS talent is going towards supporting a company whose primary purpose is to mine your data and create profiles about your habits and needs?


Also check out interviewcake.com


I once really wanted to work for Google, but after my in-person interviews I realized I would never fit in. I was instantly judged for my soft Midwestern accent and dress shirts, while the guy in skinny jeans explained to me that Java's HashMap.get() was not O(1) and asked if I was taught that in my non-Stanford education. (It is, btw; feel free to analyze the implementation.)

For a place that claims to be culturally diverse, I realized there was a hidden list of acceptable cultures, and mine was not one of them. I politely declined the offer and decided to stay a bit closer to home.


I live in NYC and most of the Googlers I meet are from the Midwest or East Coast. It sounds to me like you happened to end up with an idiotic interviewer. I would personally chalk that up to bad luck more so than company culture.


In my experience Google NYC attracts a lot of people who are very cynical about the stereotypical Bay Area tech culture. I get the impression there is a lot less Kool-Aid being drunk here, but then again I've never lived in the Bay, so I can't really compare.


> In my experience Google NYC attracts a lot of people who are very cynical about the stereotypical bay area tech culture. I get the impression there is a lot less Kool-Aid being drunk here but then again I've never lived in the bay so I can't really compare.

Maybe its just a different flavor of Kool-Aid. Cynicism about X is often as much of an irrational orthodoxy as X is.


Yeah, I'm not a Googler but the only one I know had no formal CS education and wore a full suit to his interview. Just study algorithms like mad and go for it if it's what you want. Technical merit will get you in.


When a company selects someone to interview a candidate they're also choosing that person to represent and sell the company. If that person turns out to be rude and condescending then I chalk that up to more than bad luck.


Doesn't Google train its interviewers? I worked for a large company, and before you could interview anyone there was a 2-3 day course you had to pass.


Not to be pedantic, but while the computational complexity of map.get() is O(1), meaning the number of computational steps executed is constant with size, the runtime complexity is often O(f(N)), because cache-fetch and disk-fetch latency increase with the number of items stored, meaning the amortized time of get() might be longer for larger datasets.

This doesn't invalidate your experience at all - the interviewer may or may not have been talking about this, and there may well be cultural issues there - but I learned this interesting distinction recently, figured I would pass it on.


I thought the whole point of big-O notation is that you're not thinking about low-level operations like disk I/O or caching, just the growth of the number of operations performed.


Computational complexity vs. runtime complexity: while the details are irrelevant for computational complexity, as you say, the runtime complexity is a different matter, because the execution environment (disk, cache, etc.) has been optimized to deliver better performance for certain types of operations.

In the real world, it may thus be better to use a computationally more complex algorithm than a computationally simpler one in certain cases, if you happen to have a system that delivers big enough optimizations for the complex algorithm.


> ...asked if I was taught that in my non-Stanford education...

Anyone who said that to me would seriously risk a sock to the jaw from my non-Stanford educated fist, interview or no. That is an incredibly rude and condescending thing to say.

Thankfully that guy didn't interview me during my interview at Google, and my fist remained wrapped around my dry-erase marker instead.


This sounds more like a feature of the SF Bay area tech scene rather than Google in particular.

Many young, skinny-jeans-wearing progressives here quite openly harbor grossly ignorant stereotypes of a wide variety of unacceptable cultures: the South, the Midwest, Republicans, Christians, and conservatives, among others.

When challenged, they typically deny that their cartoonish, caricatured views of these groups are stereotypes, and insist that that's simply the way it is.

Just wearing a button-down shirt (or, God forbid, a tie) is more than enough to stigmatize you as "one of them" and get you instantly ostracized (while they simultaneously loudly trumpet the value of diversity and tolerance.)


> young skinny-jeans-wearing progressives

> cartoonish, caricatured views of these groups are stereotypes

I cannot tell if this is satire.


You sound like a guy who quite openly harbors grossly ignorant stereotypes of what you think is an unacceptable culture.


Drat, I knew someone would say that. Thwarted again.


Maybe he was referring to the fact that it's O(M) in the worst case where M is the length of the collision list?


The reason Java's HashMap get() is not O(1) is separate chaining. The HashMap handles hash collisions by adding all colliding entries to a linked list, so lookups have to hash to the correct bucket and then iterate over the linked list in that bucket. The complexity of a get() is therefore O(1) + O(k), where k is the number of items in that linked list.

https://en.wikipedia.org/wiki/Hash_table#Separate_chaining
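A stripped-down separate-chaining table (an illustrative Python sketch, not Java's actual code) makes the worst case easy to see: with a degenerate hash function, every entry lands in one bucket and get() degrades into a linear scan.

```python
class ChainedHashMap:
    """Minimal separate-chaining hash map; hash_fn is injectable for the demo."""

    def __init__(self, buckets=16, hash_fn=hash):
        self.buckets = [[] for _ in range(buckets)]
        self.hash_fn = hash_fn

    def _chain(self, key):
        return self.buckets[self.hash_fn(key) % len(self.buckets)]

    def put(self, key, value):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)    # overwrite existing key
                return
        chain.append((key, value))

    def get(self, key):
        for k, v in self._chain(key):      # O(chain length) scan
            if k == key:
                return v
        return None
```

With hash_fn=lambda k: 0, every key collides, so get() is O(n): the k in O(1) + O(k) above grows to the whole table.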


Wouldn't this just simplify to O(n)? Consider a case where the keys are objects whose hashCode() is overridden to return a constant (while equals() still uses reference equality from Object.equals).


http://stackoverflow.com/questions/4553624/hashmap-get-put-c...

It's a dumb question to be authoritative about, because there are different implementations of hash maps and hash codes, meaning you can't guarantee the runtime complexity.

According to the SO answer above, Java 8 will be mostly O(1), but full buckets may be stored as trees, meaning they will be O(log n). But as the answer says:

"So no, O(1) certainly isn't guaranteed - but it's usually what you should assume when considering which algorithms and data structures to use."

The interviewer could have explored the candidate's knowledge of how HashMap is implemented, or could be implemented, which would have been more rewarding than insulting and gloating.


Like I said, feel free to analyze the implementation (or just look at the Javadoc: https://docs.oracle.com/javase/8/docs/api/java/util/HashMap....). Given that the hash function was known (it was a String), his answer of O(n) is incorrect; it is indeed O(1). If you violate the API and put something like "return 1" in your hashCode(), you will get O(n), but that was not the case for this interview question.

I think the big misunderstanding in most of these answers is assuming that a fixed number of buckets is used in Java's implementation. That is not correct: the put() method will rehash your table if it grows too big (meaning that on average put() is O(1), but it has a very real possibility of being O(n)).
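The rehash-on-growth behavior can be modeled with a toy table (again, an illustration, not Java's implementation): puts are O(1) on average, but the put that crosses the load factor touches every stored entry.

```python
class ResizingTable:
    """Toy chained table that doubles and rehashes past a 0.75 load factor."""

    def __init__(self):
        self.capacity = 8
        self.size = 0
        self.buckets = [[] for _ in range(self.capacity)]

    def _index(self, key):
        return hash(key) % self.capacity

    def put(self, key, value):
        chain = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))
        self.size += 1
        if self.size > 0.75 * self.capacity:
            self._rehash()                  # the occasional O(n) put

    def _rehash(self):
        old = self.buckets
        self.capacity *= 2
        self.buckets = [[] for _ in range(self.capacity)]
        for chain in old:
            for k, v in chain:              # every entry is moved exactly once
                self.buckets[self._index(k)].append((k, v))

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None
```

Since each entry moves O(1) times amortized as the table doubles, the average put stays O(1) even though individual puts can be O(n).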


Totally pedantic, but no HashMap.get function is O(1). In order to do a look-up, you have to compute the hash, whose input size is at least log2(n) bits.


The reason hash computation is traditionally called constant time is because it doesn't change while the hash table grows. Big O describes how run time or memory consumption grows as input grows, and with a hash table, the hash computation typically does not grow.

Calling a hash table O(log2(N)) gets misleading when you measure a hash function's performance, and it doesn't change as the table grows.

So you're right about your technicality, but you're using big-O slightly differently than how everyone else is using it.

The implications are interesting, in that if the hash computation did grow as O(log2(N)), then it would (theoretically) outperform the current typical O(1) hash table implementation, and they would meet only when the table is full.

Another way to see O(1) is that hash computation is always implemented not as a log2(N) function but as log2(MAX_N), where MAX_N is the size of a full table; and since MAX_N is a constant, the complexity is O(1).

A side note is that not all hash tables are implemented using an array that can hold N items. Some hash tables use linked lists instead of re-hashing for collisions, meaning that your array can be arbitrarily less than N in size, meaning that it is definitely possible to have a hash function that takes less than log2(MAX_N) time. In that case, insertion time becomes explicitly greater than O(1).


I'm not following your logic here:

"The implications are interesting, in that if the hash computation did grow as O(log2(N)), then it would (theoretically) outperform the current typical O(1) hash table implementation, and they would meet only when the table is full."

How would O(log n) ever outperform constant time, O(1)? Could you elaborate?


This is and always has been an important part of understanding what big-O notation actually means. O(1) means constant time, it does not tell you what the constant is. The constant can be large.

In this case, if your hash always takes 64 bit values and always produces 64 bit values, it is O(1). If your hash were O(log2(N)), then your hash function with an empty table would do 1 bit worth of work, and would do only one operation. But when the table grew to contain 1,000 elements, it would do 10 bits worth of work.

The O(log2(N)) algorithm with N < 2^64 is always faster (in theory) than the O(1) algorithm that does a constant 64 bits of work.

The O(1) algorithm pays off in spades for N >> 2^64, it will become much faster than the logarithmic one, but for small N, log(N) is smaller than 64.
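The point about constants is easy to make concrete (hypothetical cost functions, counting "bits processed" as the unit of work):

```python
import math

def constant_hash_cost(n, bits=64):
    """Work for a hash that always processes a fixed 64-bit value: O(1) with a big constant."""
    return bits

def log_hash_cost(n):
    """Work for a hypothetical hash that processes only log2(n) bits: O(log n)."""
    return max(1, math.ceil(math.log2(n)))
```

For a million entries, the logarithmic hash does 20 units of work versus a flat 64; the two curves meet only around n = 2^64, which is exactly the crossover described above.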


Right, a constant i.e unchanging amount of work.

I usually think of hashtables though as being O(1) in referring to the look up(amortized) and not the hash code itself.


Yep, and a hash table's get is O(1) amortized even when you include the hash function itself, simply because the hash function doesn't change as the hash table grows. @rand_r is right that the hash function is doing C * log2(MAX_N) operations, but that doesn't actually affect the big-O metric, it only affects the run time. :P


Indeed, thanks for the clarification.


Not being funny, I actually don't know:

Is the n in log2(n) the size of thing you're hashing? AFAIK the O(1) statement is referring to how many things you have hashed.


In context, n is the number of things in the hashmap. You need log2(n) bits per item to store n unique items. So the hash function is operating on an input of log2(n) bits. There's no way the hash function takes less than log2(n) time to work on log2(n) bits of input.
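The "log2(n) bits to distinguish n items" step can be sketched directly (a trivial helper, hypothetical name):

```python
import math

# Minimum number of bits needed to give n items distinct identifiers:
# with b bits you can label at most 2**b items, so b >= log2(n).
def bits_needed(n):
    return max(1, math.ceil(math.log2(n)))
```

So a table of 1,000 items needs 10-bit keys at minimum, and any hash function must read all of those bits to distinguish the items.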


You don't need hash codes to be unique for items in the table so there doesn't need to be any relation between the size of the hash and the size of the table. The time taken by the hash function depends on the size of the keys not the size of the table.


If they're not unique, lookup will have to linearly search through the items whose hashes collide. That situation begins to approach O(n) time, which isn't good.


Unless you're using an open addressing scheme like cuckoo hashing, which ensures constant lookup time in the worst case.


If the cost of a hash function is related to anything, it is more usually the number of bits in the input.


I've always used the assumption that objects have pre-calculated/stored hashes


> Java's HashMap.get() was not O(1) and asked if I was taught that in my non-Stanford education. (It is btw, feel free to analyze the implementation)

It is O(1) if the hashmap is sparsely populated, but if there are hash collisions the colliding items are stored in a list (a tree since Java 8), so it is not O(1) in the worst case.
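The worst case is easy to provoke with a pathologically colliding hash. A Python dict is used for illustration here; Java's HashMap behaves analogously, except that since Java 8 it converts a long collision chain into a red-black tree:

```python
# A key type whose hash always collides: every entry lands in the same
# bucket, so each lookup degrades to a linear scan of that bucket.
class BadKey:
    def __init__(self, x):
        self.x = x

    def __hash__(self):
        return 0  # deliberately terrible: all keys collide

    def __eq__(self, other):
        return isinstance(other, BadKey) and self.x == other.x

d = {BadKey(i): i for i in range(1000)}
# Still correct, but each get walks the chain: O(n) worst case.
print(d[BadKey(999)])  # prints 999
```

This is also why hash-flooding attacks target languages with predictable hash functions: an attacker who can force collisions turns every lookup into a linear scan.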


> It is O(1) if the hashmap is sparsely populated

With a finite number of buckets (which is inherent to a hashmap), asymptotic performance will never conform to the "sparsely populated" constraint, so it's O(n).

(That its concrete performance is roughly constant-time within certain size bounds in non-pathological cases is important knowledge, too, though.)


Hashmap operations are O(n) in the worst case, and there is no way around that, since it is impossible to get a collision-free hash. Being ignorant of this is a good reason not to hire someone, since they likely have similar knowledge holes everywhere.



We detached this subthread from https://news.ycombinator.com/item?id=12653059 and marked it off-topic.


[flagged]


I have a philosophy degree, and I can say with some confidence that I understand that particular catchphrase pretty profoundly. Not everyone who happens to disagree with you is merely an ignorant barbarian in need of enlightenment.

See Karl Popper's theory of critical rationalism to answer your questions about causation.


I notice you never actually answered his question.


His snotty attitude made me disinclined to engage further and answer directly, though I did provide a general reference to where to find the answer.


I noticed the same thing. I couldn't help but read his response in Trump's voice. "I'm very very smart, believe me. I went to a tremendous school and learned yuge words. No one knows causation better than me let me tell you."


Whereas your comment was substantial, respectful, and made you look great.


I know, I know :-/

What can I tell you, I think trying to use things you don't understand to show how much smarter you are, is the ultimate intellectual dishonesty. I think the world would be a better place (or at least more humble) if people were called out for such behaviours. You don't agree?

EDIT: Just to clarify, I agree with my comment not being respectful or make me look great, but I think it was substantial. If you want to be rigorous in science, you should know the answer to that question.


What was the substance? You said you were throwing a wrench into the cogs of a brain that only mindlessly repeated things. OP wasn't mindlessly repeating, but bringing up a valid point, albeit not going into as much detail as I'd like. You accused someone of not knowing what they were talking about and using phrases they didn't fully understand, and I didn't see any evidence that that was the case.


> What was the substance? You said you were throwing a wrench into the cogs of a brain that only mindlessly repeated things. OP wasn't mindlessly repeating, but bringing up a valid point, albeit not going into as much detail as I'd like.

Ah, fair enough. So there's an often-repeated phrase, "correlation doesn't imply causality." That's because people often commit the fallacy of reasoning "A and B always happen together, therefore A must imply B!" (or the other way around). This is obviously wrong. So some people have learned that whenever someone says "A and B always happen together, so maybe one implies the other...?", they can shout CORRELATION DOESN'T IMPLY CAUSALITY and shut down the conversation.

But the truth is that these people, for the most part, don't understand that phrase very well, right? Because you draw conclusions in your life, right? Science has learned many things about the universe, right? So SOMETHING must imply causality. _If it isn't correlation, what is it?_ Think about it: everything in life is correlation. If you open your hand and the stone drops, did the stone drop _because_ you opened your hand? How can you ever know? All you know is that the two events, "opening hand" and "stone dropping," are correlated; how can we know if one implies the other? But of course we know, don't we? ;-) I think this is a substantive observation :P

> You accused someone of not knowing what they were talking about and using phrases they didn't fully understand, and I didn't see any evidence that that was the case.

Well, he didn't know the answer to the question I posed before. His answer was "I am very very educated, you go read some books." To me that's plenty of evidence he doesn't know the answer. In fact, I would say telling someone "correlation doesn't imply causality" is strong circumstantial evidence to that effect :P

EDIT: If you say that saying "correlation doesn't imply causality" is bringing up a valid point, it must be the first time you see that phrase, am I right? Go on reddit and in every single conversation you'll see someone throwing that around as an excuse to not having to dig further.


I honestly can't tell if you're being sarcastic, or if this is some even deeper level of condescension.


Hahaha neither! I actually thought you were looking to understand what I meant. If I came across as condescending, I guess you already knew all this, in which case I'm not sure why you said the other guy made a good point. So now I'll have to assume you're just trolling. Well played :P But now I've lost interest :P


It's... everyone has heard the phrase "correlation does not imply causation." Of course I understood what you meant. Please do not condescend this much (again, I can't tell if your explanation of why this can be overapplied was meant to sound like it was directed at a five-year-old, or if that's just how you speak.) But just because it can be used incorrectly does not make it an invalid observation, and in the context of your discussion the meaning seemed perfectly clear. Just because a bunch of people on reddit are fond of using a phrase doesn't mean you should respond to a legitimate point by saying "You're just copying that from reddit! I've thrown a wrench into your mindless argument!" It's a well-understood phrase for a reason.

I apologize for dragging this out, and I know it's unproductive, I just found it really baffling to watch someone respond to a standard point with such a high level of totally unwarranted disdain. And then to think any disagreement is because I've never heard the phrase "correlation does not imply causation"?


Wow, no other answer could've made me this happy. Next time someone asks me why I think philosophy degrees are useless, I have something specific to point them to.


Sure, I'll provide you with a letter of introduction to that effect. I agree that philosophy degrees are mostly useless -- but they are occasionally quite useful for refuting a condescending troll at a party or on the Internet.
