Strange, I have yet to meet a single Google interviewer who was looking for perfect syntax during the interview. Last year I forgot the syntax for a data structure, told my interviewer "something like this," and he just said "that's fine." Got the internship later on. I even had one interviewer who was ok with me writing out matrix algebra mathematically instead of using np.matmul and all that.
As an interviewer, I can confirm that I don't care about syntax or whether the program compiles if I'm convinced their solution and approach would work. I'm also OK with candidates using placeholder helper functions or shorthand for trivial things (e.g. null/undefined check in JS) if they explain to me verbally what that part is supposed to do.
I also interview software engineering candidates at Google (n=150) and while I mostly agree, I do think there's some signal in whether a candidate can get the syntax right. It's not a dealbreaker if they don't, but all things considered someone who comfortably writes code all day is more likely to be able to write syntactically correct code than someone who doesn't.
The main things I want to see, though, are: can you communicate well about the parts of the problem that aren't clear to you? Can you analyze and compare solutions? Can you figure out something reasonably efficient? Do you understand your solution well enough to code it?
> I do think there's some signal in whether a candidate can get the syntax right.
I know you're talking about software engineers, not data analysts, but:
select from <table>
[oops... you're supposed to put an asterisk there]
select <column>, count(field) from <table> [holy shit... I forgot the group by]
select <column>, case when <condition> then <result> else <other> from <table>
[oh man, case statements need to be terminated with an `END`]
select <columns>, from <table> [oh no... SQL doesn't like that comma before the from statement]
select <...> from <table1> join <table2> table1.column = table2.column
[This is embarrassing, I forgot the `on` keyword]
select <stuff> from <table1> union <stuff> from <table2>
[Jesus... I forgot to type `select` after the union]
select <column>, sum(column2) over (partition by column3 between unbounded preceding and current row) as cumulative_sum from <table> [dang, SQL doesn't know which rows if I don't actually mention `ROWS BETWEEN`]
select <column>, count(case when <condition> then 1 else null end) as count from <table> order by count desc having count > 1 [ahh that's silly, you can't put your HAVING clause after the ORDER BY]
with cte_1 as (select <...>), cte_2 as (select <...>) cte_3 as (select <...>) select <columns and aggregations> from cte_1 join cte_2 on <...> left join cte_3 on <...> where <condition> group by <many columns>
[Oh my goodness, I forgot a comma after closing cte_2]
Now, perhaps this says more about me than anything else, but I really do write code comfortably all day, and I'm glad my current employer, and any number of the clients I've worked for, haven't had this philosophy (even when watching over my shoulder, waiting for results they need that very moment). I'd be mortified if anyone ever dug up some of the atrocious things I've asked of the database in the server logs.
> > I do think there's some signal in whether a candidate can get the syntax right.
> I know you're talking about software engineers, not data analysts, but:
(inserts self-pwn car crash here)
I've done SQL for ~20 years and I'd say I'm good at it. I don't make as many mistakes as you've described but I know exactly what you mean, and I'd never hold that against you because I don't give a toss about mistakes that the language will catch.
I've rarely interviewed others, but when I did I asked about high-level approaches. I wanted to see if they could grasp the solution, not the mechanical details.
Actual example: you're given a large collection of words, which you're allowed to pre-process (you have plenty of time to do this). Later on you're given another word; how would you very quickly find all anagrams of that word?
(The interviewee programmer didn't get it, so I tried it on a non-programmer we had around. She very quickly worked out that you sort the letters and save them. She didn't explain it clearly (to repeat, she wasn't a programmer), but in programmer terms it was a dictionary with the sorted letters as keys and the set of matching words as values.)
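(For concreteness, a minimal Python sketch of that pre-processing idea; the names are mine and purely illustrative:)

from collections import defaultdict

def build_index(words):
    # pre-process once: key every word by its sorted letters
    index = defaultdict(set)
    for w in words:
        index["".join(sorted(w))].add(w)
    return index

def anagrams_of(index, word):
    # later: a single dictionary lookup returns every stored anagram
    return index.get("".join(sorted(word)), set())

index = build_index(["listen", "silent", "enlist", "google"])
print(anagrams_of(index, "tinsel"))  # {'listen', 'silent', 'enlist'}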
In this case, would you employ the supposed programmer who didn't get it, or the non-programmer who did?
Actual example: Show me how you'd represent an arithmetic expression using objects, and how you'd evaluate it in an OO style. (I was after a class hierarchy of (op, leftExpr, rightExpr) with an .eval() method. With plenty of time and pushes in the right direction, he still didn't get it, despite claiming good OO on his CV.)
(True story: with the same guy who didn't get the OO expression question, I started off with an SQL question. His CV said SQL was his strong point, so I gave him an easy one: "explain to me what a left outer join does". He shook his head in confusion: "Never heard of it". Actually happened! I'm not even exaggerating!)
>Actual example: Show me how you'd represent an arithmetic expression using objects, and how you'd evaluate it in an OO style. (I was after a class hierarchy of (op, leftExpr, rightExpr) with an .eval() method. With plenty of time and pushes in the right direction, he still didn't get it, despite claiming good OO on his CV.)
As a SQL guy who knows Python, but specifically just pandas/seaborn/numpy (matrix/set operations rather than the underlying constructs which make numpy/pandas possible), as opposed to a SWE with OO skills, could you point me in the right direction to learn how this question should be answered?
>"explain to me what a left outer join does". He shook his head in confusion "Never heard of it". Actually happened! I'm not even exaggerating!
I... I don't even know what to say here. That's absurd to me he would claim SQL knowledge and respond with that answer.
My response would be "that is the same as a `left join`" (then I'd explain what a left join is), and I'd follow up with "I exclusively write 'left join' and never 'left outer join', because the DBMSs I'm most familiar with (Postgres, Redshift, MySQL, MSSQL and a couple of others) all accept the `left join` syntax without specifying `outer`".
First up let me apologise for the abrasive and somewhat unpleasant reply I gave to you. Not my best, sorry.
Ok, couldn't find a sample on the web, so here's mine. It's not quite right, partly for brevity and partly because this is the first Python code I've written in ~3 years, so any criticisms welcome. Hopefully I can get the formatting right.
# super().__init__() calls omitted for brevity
class Expression:  # abstract base class
    def eval(self): pass

class Literal(Expression):
    def __init__(self, val): self.value = val
    def eval(self): return self.value

lit1 = Literal(8)
print(lit1.eval())  # prints 8

class UnaryExpr(Expression): pass  # base class for unary expressions

class Negate(UnaryExpr):
    def __init__(self, expr): self.expression = expr
    def eval(self): return -self.expression.eval()

lit2 = Literal(13)
neg = Negate(lit2)
print(neg.eval())  # prints -13

class BinaryExpression(Expression): pass  # base class for binary expressions

# Note that subclasses Add and Multiply have the same
# __init__ code, so I should hoist that into the BinaryExpression
# base class, but for clarity I'm leaving it in the subclasses
class Add(BinaryExpression):
    def __init__(self, leftExpr, rightExpr):
        self.leftExpression = leftExpr
        self.rightExpression = rightExpr
    def eval(self):
        return self.leftExpression.eval() + self.rightExpression.eval()

add2literals = Add(lit1, lit2)  # 8 + 13
print(add2literals.eval())  # prints 21

class Multiply(BinaryExpression):
    def __init__(self, leftExpr, rightExpr):
        self.leftExpression = leftExpr
        self.rightExpression = rightExpr
    def eval(self): return self.leftExpression.eval() * self.rightExpression.eval()

mult2literals = Multiply(lit1, lit2)  # 8 * 13
print(mult2literals.eval())  # prints 104

# now let's make a complex expression, say (7 + 2) * (-4)
# Doing this by hand, but a parser would build this from that string
expr = Multiply(
    Add(Literal(7), Literal(2)),
    Negate(Literal(4)))
print(expr.eval())  # prints -36
Basically it's a tree of objects: you call eval() on the root, that recursively calls eval() down the tree, and when the calls reach the bottom they start returning their subtree-calculated values back up.
Make sense?
Re. the left join, I abbreviated it. The full event was that there were two interviewers, me plus another guy. I said to our interviewee, "what's a left join?". Cue puzzled expression and headshake. My co-interviewer qualified that for him: "what's a left outer join?", getting the response "never heard of it". He claimed 4 years of SQL on his CV. No job for you, matey.
This isn't rare either. I worked at a recruitment office and overheard the question a recruitment agent used to check that an applicant wasn't clueless. The applicant was applying for a C++ job. Q: "Give me 4 STL containers". The applicant replied "cin and cout".
If you've done no C++, that's like asking a Python guy "give me some Python data structures" and getting back the reply "input() and print()".
Edit: to clarify about the expression eval stuff, I wasn't expecting code, just an obvious grasp of a tree of objects with relevant subtypes, and eval(). He knew roughly how to do it procedurally, but blatantly had no clue on the OO style (which, yes, he claimed to have on his CV).
Incidentally, I'm just starting my very first step into Pandas today. Looks SQL-ish!
Again, I don't think this is the only signal, or the most important signal, but there's still signal there. Imagine I get two data science candidates who seem basically the same and say they work in SQL daily. One of them can comfortably write out a JOIN with a GROUP BY and an ORDER BY when the problem calls for it, the other struggles and makes a lot of errors. I'm going to guess that the former is better at SQL, acknowledging that this is an imperfect proxy.
I'm also not nearly as picky as a compiler/interpreter. I can tell what you're trying to write, and lots of people make silly mistakes. But a high volume of mistakes, or especially difficulty getting anything substantial out at all, those start to make me worry.
Better designs require fewer lines of code [#]. Maybe the person spends more time thinking than writing. Also think about a typical enterprise Java program - writing lots of boilerplate code is bound to create fantastic muscle/syntax memory and no useful skill beyond that.
[#] If this doesn't seem obvious consider the inverse - worse designs will inevitably require more lines of code to get the same result.
There is simply no metric that can automatically measure code/design quality. Here's how I gauge code/design quality:
1) How many files I need to open to understand what the code does (fewer is better)
2) How many times I have to jump around to follow the execution path (fewer is better)
3) How easily I can remove or rewrite a piece of code
> I do think there's some signal in whether a candidate can get the syntax right
I agree with that. I've had candidates explain certain points of syntax as they worked, or demonstrate their knowledge in other ways, or write their code particularly well or cleanly, and I make sure to mention that positively in my notes. But less-than-flawless syntax by itself isn't a negative for me.
I agree with the main things you look for - I too try to focus on those more than syntax.
I can confirm. In one of my interviews, when writing Python, I used arr.push instead of arr.append (I don't usually write JS, but idk what happened), and I realised the mistake only half-way through. The interviewer noticed it apparently, and he said he didn't care. I imagine he didn't want to throw me off my thought train, which was nice. Got the offer too, so it really must not have mattered.
I interviewed with Google NYC for a Senior Dev position once, prepared well and thought it went well.
Received the green light and moved to the next phase, where I spoke with potential teams over the phone, then settled on Google Maps. Met with one of their tech leads, cool. I was really happy and thought that all my effort to prepare for the "Google interview" had paid off.
Then no word back from the recruiter with a final offer. It turns out the VP of eng saw some red flags in my interview and decided to bail.
I felt really frustrated, and when I spoke with the recruiter he apologized and even said the hiring manager was on my side and people overall liked me, but there were two engineers who were on the fence. Gosh, I think I met with 7-8 engineers, and all the senior ones seemed to like me. I remember not having the best conversation with 2 engineers who were new to the company and could not relax or communicate well.
Bottom line, prepare but also be prepared for some degree of luck and arbitrary judgements.
Yes, there are great interviewers at Google: engineers who are engineers in their minds and hearts, who can see the process is not perfect but work to make it better. But unfortunately, there are also insecure folks who should be better trained before interviewing candidates.
That was 5 years ago; I'm not sure I would subject myself to this sort of loop ever again. And strangely enough, they contacted me a few months later to re-interview, but this time I could skip the big loop and meet with just 3 engineers... I said no, since I was already in a new job.
A friend said recently, "people want to be employed without becoming employable". These guides really exemplify this obsession. Sure, Google has a nice salary and good perks and whatever. But after you get the job, you have to do the job. I wonder whether the people who read these guides and try to study just the right topics to get a job actually like programming.
These guides act as optimizations, shortening the path you need to take to get the job, shortening the stuff you need to learn, etc. But in the end, the path is all you get. If you don't like programming and if you don't like learning, then are you really gonna like Google?
I suppose there's people who genuinely like programming who just need a manual to teach them how to play the game. Lord knows I've practiced my fair share of whiteboard problems when I'd rather be reading about compilers. But there's something wrong about having to play a game to get the job.
> If you don't like programming and if you don't like learning, then are you really gonna like Google?
There are also many people who are great at programming, who love it, and who are terrible at interviewing. After all, these are two related, but ultimately different, skills. You talked about it yourself in your last paragraph, ending with:
> But there's something wrong about having to play a game to get the job.
Sounds like the fault is on the employer that makes you play the game, not on "people want to be employed without becoming employable".
I mean, if you want to work at Google and the like, you need to indulge in the game, right? And working at Google isn't just about the pay and the perks - that's quite a shallow thing to say. Engineers there handle data of astronomical proportions, scale their systems every second to handle the ever-growing traffic, and innovate on solutions that are used by millions of people around the world. I'd say, if you truly love programming and computer science, that's a pretty sweet deal.
The people who spend their days obsessively networking and practicing interview questions are generally not the people who would find problems about scaling to handle astronomical proportions of data interesting. The people who find that interesting tend to spend their days reading about scalability and performance. I'm not talking about someone putting aside a few weeks to a month to study up on these questions, I'm talking about people who straight up neglect their CS education because they want to pass a technical interview.
I'm scared shitless of whiteboard exercises (and - probably biased by that - see no point in them). There's no way I'd be able to get through them UNLESS I optimize for .. whiteboard programming interviews w/ resources like this site.
In spite of writing code every day, for 15 years plus, and although I LOVE programming, this just excludes me from the list.
(This hits a bit close to home for me because my employer of 13 years just got bought and I'm in the process of looking for interviews again - for the first time since 2006..)
I feel your pain. I started in 2008 and have been running various R&D related development contracts through the same core employer but am in the process of moving locations.
When I interviewed around 2008, there was whiteboarding, but only pseudocode-based, which I could do just fine. The majority of interviews focused on questions regarding time/space complexity trade-offs, design choices, etc., not on-the-fly fully optimized implementations on the first pass. The worst thing I ran into was having to write merge sort as the FizzBuzz of the time.
Now, it's an absolute circus. Most in the industry really don't know what they're looking for or how to adequately assess abilities. They're far more concerned with trivia, memory recall, and filtering out any remote risk of a bad hire than with actually accomplishing the tasks for the position at hand.
The current process is very well designed on multiple fronts to attempt to delegitimize professionals and is quite optimized at grabbing fresh grads desperate for work experience or finding cheaper labor without raising red flags on illegal hiring processes. Why people have put up with this practice boggles my mind.
The vast majority of roles don't need Alan Turing, Donald Knuth, or John von Neumann to accomplish some basic business goals, so let's be realistic and stop pretending they do.
When the system's metric is proficiency at timed whiteboard programming questions, then people in that system will optimize for that metric (i.e. don't hate the player, hate the game).
The flip side of this is that having worked at big companies like this I can comfortably say that our bar for new employees is WAY higher than the quality of our current staff.
No one at Google is going to recommend hiring you if they think you'll be in the bottom 50% of engineers at Google when you join. However, 50% of the people hired by Google end up in the bottom 50% of engineers. And frankly, hell there's plenty of jobs that don't require a rockstar. So in many ways you need to be better to get the job than to do the job.
Learning to be good at interviews doesn't make you bad at programming.
These are two totally different skills, and not mutually exclusive at all.
A skill needs to be learned and practiced. Interviewing, for a pretty good reason, is not a skill we usually practice, so once you do need it, you lack the experience and fail, even though you may well have what's needed to do the job.
The amount of time required to 'learn' the Google interview would be time that could be spent learning more universally applicable skills.
Is it true that an experienced developer would not be able to pass the interview without studying using a similar guide? If so, then the interview process is... fubar.
A perfect job interview could be defined as an interview for which the best study technique is to become a better choice for the role. Studying skills that would not directly contribute to job performance would not change the result of a perfect job interview in any way.
In that sense, it's probably a good indicator for Google that the interview advice includes "practice writing code", "make it a habit to validate input", and "learn about data structures", and it's probably a bad indicator for Google that the advice includes "practice writing syntactically correct code on a whiteboard" and "practice solving problems with a 30 minute timer."
> practice writing syntactically correct code on a whiteboard
This probably differs from interviewer to interviewer as to how strictly it's adhered to, but it's not really a hard and fast rule. I'm sure there are some interviewers that will ding you on a forgotten semicolon, but I suspect that most would not.
Personally I look for code that isn't so far from syntactically correct that it's clear you are trying to BS me. I'll even accept pseudocode for the most part. But I've had candidates that try to make up language features, and that doesn't fly with me.
> practice solving problems with a 30 minute timer
I only give my candidates 30 minutes. The whole interview is 45, I spend 5 minutes introducing myself and setting up expectations, 30 on the question, and 10 on answering their questions (after all, they're also interviewing us).
You can tell pretty early whether they're on a solid trajectory, and I'll offer the occasional hint to keep someone on track, or ask tangential questions if they're doing well on time. Not finishing isn't a deal killer, provided you had a solid approach and weren't just running in circles. But a good candidate will finish in about 25 minutes and we can spend some time talking about alternate approaches. Sometimes I'll show them the optimal approach and see how that conversation goes.
Nine times out of ten a candidate scores low because they overlooked an infinite loop, or their code would crash on boundary conditions and they weren't able to realize that even with hints.
>Nine times out of ten a candidate scores low because they overlooked an infinite loop, or their code would crash on boundary conditions and they weren't able to realize that even with hints.
I guess this depends on how you run the interview but one of the things that frustrates me about whiteboard interviews that focus on the coding rather than the design is that I have to step through test cases (especially edge cases) like this manually, which is tedious and not at all how you do things in real life, where you just run the test cases and see what happens instead of having to run your own code on a whiteboard.
My question is about walking around a data structure. Most candidates choose to represent it with arrays.
I agree with stepping through code being lame. If a candidate has tried to do some sort of boundary condition check and mentions a test that'd catch it, then I'll give them a pass. If a candidate just blows past the code without any attempt whatsoever to check that they're in bounds on an array, then I'll ding them for that. You have no idea how often I see code like...
Node neighbor = data[x+1][y];
... without any check at all to see if x+1 is in bounds for data.
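(The fix is just a one-line guard before the read; here's a Python-flavoured sketch of my own, since the snippet above is only meant to be illustrative anyway:)

def right_neighbor(data, x, y):
    # hypothetical grid walk: only read the neighbour if it actually exists
    if x + 1 < len(data):
        return data[x + 1][y]
    return None

print(right_neighbor([[1, 2], [3, 4]], 0, 1))  # 4
print(right_neighbor([[1, 2], [3, 4]], 1, 1))  # None, instead of an IndexError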
The infinite loop in my question is fairly obvious (because it involves walking around a data structure iteratively). Candidates either see it right away and handle it as they solve, notice it halfway through and crowbar it awkwardly in, or don't notice it until asked to walk through a specific test case, or don't even notice it while walking through the test case.
Ultimately, if the code looks correct to me, or even close (I mess up occasionally) then I'll just ask what cases they would test.
Code like `Node neighbor = data[x+1][y];` does not require an index boundary check when you know that 'x + 1 < length(data)' holds for every 'x'. That could be because the iteration over 'data' excludes the last element, or because of some other condition that always holds.
I'm like you in that I used to code "against the IDE and test cases" by basically rapidly iterating and kind of refining as I go along. Always fast to compile and see what happens.
Learning to write code on paper for exams and on white boards for interviews actually improved my skills a lot. Maybe not everyone coded like I did, "against constraints", but having to slow down and take into account the constraints mentally led to making better code.
I'm still floored by how many candidates I've talked to that assert with great confidence that the local variables they declare will still be there with the same values when they call the function recursively.
And most recently, when iterating over a string's characters the underlying string methods KNOW that the string is being iterated and will pick up at the current iteration point. For example, you've got the string "5432112345". And via iteration, you're currently pointing at the first "2" at index 3. The candidate asserted that if you call "indexOf('2')", it would return 6 because "indexOf" knows that it is iterating and should start 1 beyond where it is pointing at.
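(For what it's worth, a quick Python sketch of why that assumption can't hold: the string has no idea where your loop currently is, so you have to pass the starting position yourself. Illustrative only, not the candidate's language:)

s = "5432112345"
for i, ch in enumerate(s):
    if i == 3:                     # we're "pointing at" the first '2'
        print(s.find("2"))         # 3  -- the search always starts from index 0
        print(s.find("2", i + 1))  # 6  -- only because we passed a start index explicitly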
And my favorite - once had a candidate that misspelled a function name while coding in Ruby. I didn't ding them for it. But I pointed it out because it just bugged me. And the candidate then swore to me that Ruby will automatically call the correct method if there was only 1 candidate based on the misspelling. You know when you do things like "git inti" and it says "did you mean init?". The candidate swore that Ruby would just call "init" for you because it knew what you wanted.
I once passed a whiteboard interview just calling random made-up operations on generic java arrays, like Array.flatten().
I was relatively new to programming and had been practicing in Java but didn't know how to execute a lot of map/filter/reduce operations off the top of my head like that. I did, however, know that my interviewer was a Python programmer that probably didn't know much about Java language features.
I thought you were allowed to assume functions you needed in the interest of a modular solution. Array.flatten is an obvious “assume I have this, I would write it anyways.”
I think most decent interviewers will give you a pass on an enhanced standard library.
When doing an interview at Google in C, I asked if I could assume I had a hashtable implementation with so-and-so interface, and the interviewer said no ¯\_(ツ)_/¯
If you can explain what it is supposed to do, I don't care a whit if you make up a method that probably exists in a standard library somewhere (Guava / Apache Commons, or wherever). If it's slightly more obscure, you might have to write pseudocode to show you know how it might be implemented.
This one candidate had been allowed to skip the phone screen, and phone screens would likely have filtered them out. Candidate used Java and claimed to be deeply familiar with it. Obviously this isn't exact, but the result looked something like this:
public isTheFooBarred(class Node { int[][] positions, int size } node, int x, int y, Set visited = new HashSet(node.size)) {
// ...
}
The first parameter to the method had an inlined class definition, and the last parameter was optional. We discussed this and the candidate claimed that this was legal vanilla Java. In the case of the last parameter, the candidate claimed that the compiler generated every possible combination of methods including and excluding each of the optional parameters (so, 2^n methods).
I really tried to give the candidate the benefit of the doubt, and asked if there was some sort of annotation processor or code pre-processor that they were using, but they were adamant that it was plain old vanilla Java. I'm pretty sure that they were using some in-house built thing, but their solution had significant problems, so it wasn't the only reason I was against the hire.
That doesn't sound like a good question at all. It sounds like you're looking for "Java programmers" instead of solid engineers.
When I interviewed for my current job using Java, I had been programming in Ruby and JS for the better part of a decade and had to refresh my Java syntax fairly quickly. I know I made some dumb syntax mistakes in my phone interview, like instantiating collections totally incorrectly. I distinctly remember one of my in-person interviewers saying "well, in Java it's boolean, not bool, but sure...". More semantically, I may very well have messed up mutability of collections and primitive conversions and boxing and such.
Enough of my interviewers saw through all of this that I got the job. Now people on my team come to me and say, "hey you're a java guy right?" and ask me questions about this stuff. I wasn't a java expert when I interviewed, but now I am, because that's what my job required, so I learned it. That's what my interviewers were looking for, to the company's benefit.
You're right that it is not disqualifying even if the Java expertise is essential for the job and the candidate describes themselves as a Java expert. This was a single short question among many though. And it's pretty fundamental to Java methinks.
BTW, they still got hired and proceeded to be a solid productivity reducer and all round waste of time.
> Nine times out of ten a candidate scores low because they overlooked an infinite loop, or their code would crash on boundary conditions
Why does this even matter? They're writing the code on a whiteboard, without being able to compile or debug, in 20 minutes. Does Google really expect code to be correct and production ready in 20 minutes?
The standard solution to my problem is on the order of 12 lines of code. I don't expect it to be perfect or "production ready", but the infinite loop should stick out like a sore thumb. I do expect candidates to demonstrate that they recognized that problem, and to at least make some attempt to check that they didn't overrun the bounds of an array in the 30 minutes they have.
Hard to even get the interview. I've applied a few times as a senior Dev and was rejected based solely on my cv with the feedback of "work on your algorithms"...
Based on their perception of my algorithmic experience from my CV? Google's a weird one.
This is getting ridiculous. These guides to interviewing at specific companies are starting to sound like the video game cheat code books of old.
If the process is so nuanced that there's an entire industry around these types of guides (and Google even highly recommends you buy them!), then the process is fundamentally flawed.
But we already knew that, and as long as others are still playing the game, we are forced to play or miss out.
Even Google suggests you "practice writing syntactically correct code on a whiteboard". This is clearly a useless skill for a software engineer, except for getting a job at companies that do whiteboard interviews. Have you ever tried to refactor code on a whiteboard?
How are they able to find people that are able to efficiently debug problems?
When I interview people I tell them, "Bring your own laptop set up to be able to code and debug". And I give them "Fix this site" or "build this thing" kind of problems.
It seems to work a lot better for finding "hidden gems" and people who are good at "doing" rather than people who are just good at "telling".
>Bring your own laptop set up to be able to code and debug
One of the best interviews that I've ever had was a screen-sharing session that was based around much the same mentality: "I have this problem. Script a solution in whichever language you choose in notepad. Now, here's sample data. Run it. O.k. It doesn't work as you intended, so start debugging it."
We (as an industry) focus on developing, when debugging is an equally desirable skill. If you can't understand why your code is breaking and need someone else to assist you, it isn't necessarily a bad thing, but you are consuming another resource that could be better devoted to other things for however long it takes to sort out the problem you created.
> This is clearly a useless skill as a software engineers except in getting a job at companies that do whiteboard interviews.
Have you ever actually been on the interviewer end of the process?
Literally over half of the candidates literally don't know how to program! They can sort of string together a Markov-chain something if you sit them down in front of an IDE and let them copy-paste stuff until syntactic errors go away, but put them at a whiteboard and they don't know where the parentheses go in a function call.
(They'll write crap like "()f" or "f()a" when they want to call a function, stuff like that.)
Me neither. A quite common deal breaker I've seen is a mismatch of expectations (e.g. candidates applying for senior positions that call for experience leading bigger projects), but I've never seen gross incompetence like that (to the point that I'm not even sure whether GP is being literal).
> (They'll write crap like "()f" or "f()a" when they want to call a function, stuff like that.)
Who cares? If they write code that's well engineered and works in your product and they rely on auto-complete or whatever, isn't that what your business needs and why they really pay you? Furthermore, said code will be written from a comfy chair while they listen to their favorite music, with access to Google, SO, etc. And they will have enough time to leisurely think about what they're doing, test, refactor, etc.
In contrast, during such an interview, they are hand-writing code under duress, for a problem they've had no time to think about, while being judged by one or more people staring at them. If your company is one of those famous ones where they get so many applicants they need to weed out many potentially good people, and also don't care whether they let in crappy people who game this system, then fine, whatever works.
If you put someone in a fight vs. flight situation, and add in the task saturation of the coding interview, a competent person can react adversely and make basic mistakes. A week ago in a conference, I watched an SRE who I knew was very competent have a panic attack when a demo didn't work as expected. I never learned what exactly happened, but apparently it was simple enough that a colleague was able to quickly fix the problem over their shoulder.
Unless you're giving public demos/performances (which is what an interview is), this skill set isn't nearly the most relevant one for the job.
The video is the post-mortem of a plane crash at an air show with the pilot involved. In the part I linked to, he talks about the psychology of a skilled person when put under this kind of pressure and how the quality of their decision making can quickly break down. The whole video is also great if you're interested in aviation.
Unfortunately, it seems that we are only just starting to recognize the human aspects of software engineering, even though there could be a massive amount of data showing how interviews, or more generally human judgments and predictions of personality and traits, are intrinsically flawed.
I frankly still don't understand what the expectation of such whiteboard exercises is. It's just one method, and companies have to understand that one day it may be superseded, hopefully soon. This "we have always done it that way" attitude is ridiculous. If you want to know how good a candidate is, you can easily do it with pertinent questions and maybe a couple of pair programming exercises, to see how the person codes and how he/she structures the code in a comfortable situation, not under stress in an unrealistic scenario: coding on a whiteboard, as if we had no electricity. Whiteboards should be a tool, not the goal of an interview.
Elite schools do that in undergrad; e.g. as an initial scored lab exercise: write this recursive fractal shape using Logo, a parallel Delaunay triangulation with prefix sums, a dynamic programming solution to an oligopoly problem, or single-value Paxos pseudocode, on a piece of paper in 10 minutes (I am being serious). If you can cope with it, it immediately shows up in the interview and you are considered a member of the club, a person worthy of having conversations with.
>If you can cope with it, it immediately shows up in the interview and you are considered a member of the club, a person worthy of having conversations with.
This is the best part of these interviews. The utterly dehumanizing nature of it. You aren't even treated as a person until you've uttered the correct series of phrases that signal you are a member of the right class. Every word that comes out of your mouth, and everything on your resume up until that point is completely disregarded and treated as a lie.
Just to add context: MIT Independent Activities Period (IAP) is a few-week "winter break" in which anyone can lead classes/sessions on any topic: http://web.mit.edu/iap/about/index.html
MIT CSAIL is the research lab formed by joining the legendary MIT AI Lab and the MIT Lab for Computer Science. People who work hard and win the lottery to do research at CSAIL (or another prestigious lab) shouldn't afterwards be looking at entry-level coding jobs for which a whiteboard code-monkey-dance interview/hazing would be appropriate, IMHO.
(For different reasons, people who are in programmer career tracks, with verifiable industry track records and/or open source involvement, also shouldn't be put through the entry-level hazing. Claims that the ritual gives certain companies metrics or somesuch would carry more credibility, had those companies not been caught brazenly colluding, at the CEO level, to systematically suppress wages and mobility of their own employees, with presumed spreading market effects throughout industry.)
The course you linked to doesn’t cover any of the algorithms mentioned in the grandparent. It does however cover about three dozen common interview questions. The course is also from 2009.
I think the term "Logo" sits at a perfectly fine abstraction level that in reality could have meant "using the turtle.py you programmed last week, with turn, move, etc. commands", don't you think? Do we need to be super-precise down to the minutiae when discussing informal stuff? Moreover, it's on paper, so who cares what syntax the pseudocode is in; it could even be Logo itself.
I agree with your comment in general, but, I worry about the bias of "Bring your own laptop set up to be able to code and debug" - some perfectly qualified candidates don't have laptops. Some perfectly qualified candidates do have a laptop but don't code much at home, and the setup they're used to is their work machine or a school lab computer. We already have too much bias in favor of code-all-day-code-all-night candidates (e.g., looking for GitHub profiles) and I think requiring someone to bring their workspace might push this to the point where good candidates who happen not to code outside business hours wouldn't stand a chance. (There are lots of reasons for this that wouldn't impact how qualified you are, from "I have a family" to "My workplace is real stingy about open source, so I haven't bothered to set anything up for non-work coding".)
I'd be a lot more comfortable with "We have a machine set up for you, but you can also bring your laptop," as long as the machine is actually well set up and you don't fall into the implicit expectation that passing candidates will bring their laptop anyway. (Most of them will, in the end.)
Some people live on their company's computer and when they leave they do not have one.
Yes, I always add "there is a machine set up for you if you do not have one". That said, the machine has a generic setup with the most common editors but no plugins. But good engineers will probably have a script in a GitHub Gist or something like that which will set up the machine the way they want it.
In general, a candidate with their own setup will do better. I think this is one of the reasons that big companies do not let people use their own computer.
Yes, I figured you probably had a provision, just wanted to ask for the sake of people saying "This sounds like a good idea." I agree people with their own setup will have an edge, and that seems like a downside that you could argue is defensible. I just think that saying you have to have your own setup because we know people without their own setups won't pass is too far.
> I'd be a lot more comfortable with "We have a machine set up for you, but you can also bring your laptop,"
That's exactly how I do it. My goal is to assess the candidate's skills with realistic expectations, a whiteboard is a tool he or she can use, not the goal of my interview. If he can write code on a whiteboard and he doesn't know how to compile a program, it rings a bell.
When I was looking for work, my laptop setup was embarrassing: a MacBook Pro with both a broken keyboard and a misbehaving trackpad. I had to bring an external mechanical keyboard and trackpad with me. It worked out in the end, but that could leave a bad impression with interviewers.
If I need to advise someone how to come up with $200, he isn't capable of a 6 figure job. If I have to hold someone's hand to set up programming tools on a laptop, he isn't capable of a 6 figure programming job.
So you have no actual explanation for your hiring practices, just deep-seated biases that happen to correlate with bias against protected classes but isn't nominally actually against protected classes. Makes sense—that makes you well-qualified to be an interviewer for many tech companies.
I had to write code exclusively on a whiteboard last month. The only laptop in the room was the interviewer's, which he was using exclusively to furiously copy the code I was writing on the whiteboard.
He said he would compile it and submit a report when he got back to his desk. :/
I've done nearly 100 interviews at Google, at least half of them with me copying code from a whiteboard, and I have never tried to compile a line of code that a candidate wrote.
I also go out of my way to make it clear that I don't care about every hanging parenthesis, indentation issue, or typo.
I care about whether or not the candidate asks for clarification, or bulls ahead with assumptions, whether the overall algorithm works, whether the candidate can identify limitations of, and bugs in their solution (Everyone has bugs. Everyone. There's nothing wrong with that.) and what their testing strategy is.
I have to ask this, do you see yourself as the norm at Google throughout your whole employment time there?
In my experience as both interviewee and interviewer, there are a lot of power struggles involved, just like in any other interaction between engineers, such as code reviews.
OK so of the 5 interviewers, 2 did this. Others seem to not care. Do you know if "must compile" is a google wide rule that some interviewers just ignore?
I have also never received any negative feedback from hiring committees or managers about how I conduct my interviews, what I focus on, or how I analyse candidate performance.
The same happens at Amazon. They want a syntactically correct program that will actually compile and run, but to verify that they take a photo of what you've written on the whiteboard and try it on a computer themselves later. :|
Someone above him might have decided that’s part of the process.
I am starting to think that most of those decisions are made by bureaucrats who have no accountability for the actual results, especially at Google where they can afford to lose great candidates left and right.
This process is very well designed to put all the cards in the employer's hands.
Imagine if most highly compensating employers made the barrier to entry so absurdly difficult, you would no longer consider looking for new positions every 1-2 years to grab new offers/seek larger raises. Obviously, this isn't sustainable long term but it sure will reduce turnover rates if you can get a small cartel to follow suit.
I'm a moron and did something stupid not too long ago that would be perfect for this.
I'm too lazy to find my actual code but here's the gist:
C#, but much the same might happen in Java etc. (reconstructed from memory, not my actual code):

using System.Drawing;

Image image = new Bitmap(4000, 4000);   // create an Image of some size
for (int y = 0; y < image.Height; y++)
{
    for (int x = 0; x < image.Width; x++)
    {
        // Image doesn't expose GetPixel, so "convert" to a Bitmap...
        // which allocates a full copy of the image on EVERY iteration
        Bitmap b = new Bitmap(image);
        Color c = b.GetPixel(x, y);     // get the pixel value, or whatever looks legit
    }
}
This creates a new bitmap for EVERY pixel in the image. If the image is large enough and the system is 64 bit bad things happen. On Lubuntu this code burned through 8 GB of ram in seconds and was well on its way to eating all the virtual memory before I forcefully shut it off.
Maybe this is a Linux issue but it hard locked my system. I couldn't switch to a different terminal or anything.
I'm curious why this worked like that, since GC should consider all those bitmaps orphaned at the end of every loop iteration (possibly as soon as you get the pixel even, since that reference is no longer used). The GC might not run for a while, but it should certainly run long before the entire system starts swapping...
The only thing I can think of is that the actual bitmap data is not tracked as a managed array by a Bitmap instance, but rather is a pointer or handle from some underlying native library. GC might not kick in then, because it doesn't realize how much memory all those bitmap objects are actually hogging. Now, on .NET, when libraries do that kind of thing, they're supposed to use GC.Add/RemoveMemoryPressure to let it know. But perhaps the library that you were using didn't?
Can confirm. I interviewed last September and was given the choice between laptop and whiteboard. Picked laptop. It seemed this element was new to most interviewers at that time.
I'm totally sympathetic to how stupid these interviews feel.
Syntax is probably the thing novices screw up without fail. I saw this in my own subordinates, who simply would not see syntax errors in their own code on their own laptops, even with the red squiggly underline. Also, I'm sure Google has data on its interviewing process and found, in some important, measurable way, that there's a problem with hires who screwed up the syntax. Correspondingly, none of my decent subordinates ever screwed up syntax even once. I think this is the least controversial part of their process because it is dumbfoundingly easy and low cost.
Engineering interviews are overall a mature process and I honestly doubt there's that much innovation in terms of raw skills discovery. TripleByte, for example, doubled its multiple choice question count from 18 to 36, introducing design questions, and now has a brief 45 minute free programming problem. Hardly a huge innovation.
Clearly what's immature is the feedback and communication to candidates that fail. At Google specifically, communication problems don't just crop up in threads full of rejects. Bad communication affects people working there, like product managers, admins and designers, who lack the skills they test for, despite the tests themselves not obviously corresponding to anything the engineer actually does day-to-day.
It's bad for morale to test for stuff that doesn't matter. That may explain why smart people report the engineering org is not welcoming to people of different backgrounds. The empathy (or savvy) needed to throw out stupid tests is the same kind you need to understand people who don't share the same culture or values as you.
Google doesn't really know that because the company only knows the things it measures. It takes insight too to know what to measure and how things are related, especially when dealing with human beings and not software.
Google used to run "how to interview with Google" multi-day bootcamps where they'd train you to pass its interviews. I've gotten a couple of those candidates and thought it was just ridiculous.
Or it means the process is well defined enough that you can create a level field by telling everyone how it works through guides like this instead of giving connected people a leg up due to inside knowledge of a poorly defined process.
I can guarantee you that more than 90% of people working in highly compensated positions at the world's most famous corporations have had a lot of "leg up" benefits in their lives.
Leg ups that have helped them become better at software engineering? Sure. Leg ups in the form of insider connections / elites? Less so, particularly for talent they're hiring today.
This is a significantly more meritocratic industry than most, and within the industry, the FANGs are relatively more meritocratic than smaller startups.
Meritocratic in the most boring, uninteresting sense of the term, where you only work on things in whatever way Google tells you will help advance you up their ladders, while not actually working on anything in a way you would find interesting in general.
Maybe the sort of person who has the time/desire/interest to study a guide to figure out how to fit in at google is exactly the sort of person they want to hire.
Yeah, I don’t think my whiteboarding skills are particularly useful at work, but the personality type that makes me obsessively practice whiteboarding for weeks before a job interview tour also makes me effective at work.
The bikeshedding engineer is exactly what Google or Amazon is looking for. Nothing wrong with that either, I just take issue with the entire industry standardizing around that type of engineer.
You're given a problem, there are usually a few clear-cut ways to solve it. You're expected to find an optimal or near optimal solution fairly quickly, and explain your approach clearly. That's it.
I've thought the more open-ended, unstructured interviews lead to more bikeshedding opportunities.
Learn with the intent to become a better engineer. Even if you don't hit the Google mark (which is arbitrary anyway), you will become a better engineer. That is the philosophy most people should use when aiming for these companies. Don't cram, learn.
I disagree. Doing competitive-style programming and learning all sorts of weird algorithms has not made me a better engineer. I can count on one hand the number of times in my career that I have had to design or use an "interesting" algorithm - and no, not in the "not knowing what you don't know" sense where I could have used one if only I'd known about it.
I'm very critical of this approach to interviewing in general, but it also isn't the same as competitive programming and it isn't focused on "interesting" algorithms. It is far more focused on understanding how to use data structures and the trade offs between them. Chapter 3 of "The Algorithm Design Manual" (which is fittingly titled "Data Structures") is really the most useful reference for the majority of these interviews. I don't think this is the most useful thing for software developers to be good at, but it's definitely useful and worth learning. I was annoyed that I had to study for one of these interviews, but ended up being pleased that it forced me to review this material.
2) some sort of odd string / array manipulation that never comes up in real life
3) some sort of linked list manipulation where again there is an STL for that and/or wouldn’t come up in real life
4) some optimized algorithm that literally took the first person years to discover (Prim's vs. Kruskal's MST, for example), but which you're now expected to "figure out" on the fly.
5) an expectation that you solve all these tasks under insane time pressure, which again is not a simulation of real life; it just makes things stressful, almost as a rite of passage.
None of that rings true to me. Like I said, I really dislike the way we do interviews, but I find this to all be exaggerated to the point that it weakens your argument.
I think the point is that you don't need to understand the gory details of how e.g. your hashtable works - for most coders, it's sufficient to know that it exists in the standard library; it's O(n) on space; it's amortized O(1) on retrievals and updates, but can be O(n) on some inputs; and that there is a class of security issues related to using untrusted data as keys. They don't really need to know why all these things are true, or know how to implement it from scratch.
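(A trivial Python sketch of that point: everything below leans only on the documented contract of the built-in dict, expected O(1) inserts and lookups in O(n) space, never on its internals. The example data is made up:)

def word_counts(words):
    counts = {}                            # hash table from the standard library
    for w in words:                        # n inserts/updates, expected O(n) total
        counts[w] = counts.get(w, 0) + 1
    return counts

print(word_counts(["map", "list", "map"]))  # {'map': 2, 'list': 1}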
Right, and my point is that interview questions don't actually tend to be of the form, "write me a good hashtable implementation from scratch". They do tend to prod whether people can take advantage of the trade-offs between the properties of maps vs. lists.
My point is that there is plenty to complain about - unrealistic time pressure, writing on a whiteboard, only testing coding when that is the least interesting part of a senior developer's job, no opportunity to test and debug, etc. - that it isn't necessary to bring up these exaggerated boogeymen about how everyone is asking for novel algorithms on-the-spot.
I've seen plenty of interview questions that do boil down to "implement this algorithm" or "implement this data structure". There are similar examples in comments on this very discussion.
Exactly! The field is so large and growing so rapidly that it's unnecessary to memorize the inner workings (especially down to the node-pointer level in C++, where you're begging for trouble and it will be a nightmare to maintain, unless you have a performance-specific reason not to use the STL).
IMO, being able to say "I don't know the specifics, but I would use Dijkstra's algorithm on this problem" shouldn't disqualify you just because you couldn't code it.
That's storing passwords in plain text. "Happens often" is no valid excuse, on the contrary. If it happens often you should know about it if you are a qualified engineer.
Well, ah... unless you're applying at Amalgamated SortCo Industries, asking for on-the-spot sorting implementation is pretty high on the list of absurd interview questions.
Anyway I'd think one might use an array and not some other list/collection.
I used clojure once during an interview and bombed, passed again 2 years later at the same company with Python. Would’ve made a big difference in my equity had I just started then!
Yeah, functional languages just get in the way during interviews. I can write the same imperative code I write in Java in Scala, but it just makes things harder (e.g. remembering to use var instead of val, mutable collections vs. the immutable defaults, etc.), at which point I'm just better off practicing for interviews in Java.
It's also a good cash cow for marketing paid courses and trainings for Google interviews, the dream of so many novice devs and other "Intensive Coding Bootcamp" participants.
I'm semi-convinced this is yet another reason for perpetuating this type of poor interview practice (not just at Google).
When you search modern interview topics and comments/opinions on the current process, you'll find a few SWEs (typically having worked at places like Google at some point) who on the side sell training bootcamps, etc. These people will swear in every direction that it's a reasonable process, in comments around the web that refer back to their side business. Creating the problems they then sell solutions to: gatekeeping 101.
I hate these style of interviews. I give them to prospective engineers every week for one of these FAANGM companies.
They don't test for good engineers -- they test for people who practice this style of interview, and for good new graduates.
It makes sense to ask these questions to new grads, but afterwards there is so much more experience that I feel like is much more important than acing data structures questions.
I am amazing at whiteboard questions, but that doesn't make me a good engineer. It's because I found the trick to solving these, and have practiced them. A lot of it is practice: 'ooo, this looks like a graph problem, let me use a graph', etc.
> I am amazing at whiteboard questions, but that doesn't make me a good engineer. ... A lot of it is practice: 'ooo, this looks like a graph problem, let me use a graph', etc.
That actually does make you "a good engineer". The vast majority of developers wouldn't even be able to recognize that much.
Hypothetical interview question: Write a function that finds the distance between two words.
Candidate A: Can recite algos and remembers that Levenshtein distance is the answer.
Candidate B: Has no idea what Levenshtein distance is, writes a brute-force solution with the understanding that it's not an optimal solution. After the interview she spends more time learning what she doesn't know, learns about Levenshtein distance, and sends you an optimal solution via email.
The above is a real-life scenario, so my question to you is - how do you decide who to hire?
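(For concreteness, here is the kind of brute-force recursion Candidate B might sketch, assuming "distance" means edit distance; this is my own illustrative version, exponential without memoization but clearly correct:)

def distance(a, b):
    # naive recursive edit distance: correct, but exponential without memoization
    if not a:
        return len(b)
    if not b:
        return len(a)
    if a[0] == b[0]:
        return distance(a[1:], b[1:])
    return 1 + min(distance(a[1:], b),       # delete from a
                   distance(a, b[1:]),       # insert into a
                   distance(a[1:], b[1:]))   # substitute

print(distance("kitten", "sitting"))  # 3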
This is not a good interview question and neither candidate seems to have answered it well, but I would reject B first because I obviously don't know whether she did any research or whether she just asked Candidate A via StackOverflow
These guides are just people trying to make a quick buck. There's no shortcut to getting good and perhaps that's why these type of interviews are here to stay.
It wasn't really a choice when I interviewed there. All but one interviewer had me write code in the chromebook, which was my least favorite part of the interview process. The trackpad didn't respond to my slightly dry erase covered fingers, the keyboard was weird, and the quasi hangout software it was running crashed a few times, one of which required a full restart. That might not sound like a big deal but in a high pressure/time constrained environment it was a bit of a nightmare. I was told going in that the chromebooks would be available as an option, but writing a few lines of code only to be told to stop and switch over to the chromebook (which had either gone to sleep or frozen), have the interviewer log in and select the correct session, then select syntax highlighting, then finally being able to start writing code doesn't seem very conducive to maintaining a train of thought.
I thought about it, but the Chromebook hang-ups were not the deciding factor in my performance (I did extremely meh and am fine with that, because I'm entirely self-taught and even getting that far was really cool). There were other, larger problems with the process that Google should be fixing instead of dinking around with Chromebooks (I'm assuming they were there so code could be reviewed afterwards, which makes sense).
I interviewed at Google in January, and they offered to let me use a computer if I had accessibility concerns with a whiteboard or really really wanted to, but discouraged it because they found it often made candidates too focused on the syntax of their code and less likely to have a meaningful high-level discussion with the interviewer.
That's hilariously ironic. If they wanted meaningful high level discussions without syntax focus, their interview structure would be different.
The most obvious change would be to accept pseudo-code (like in the olden times) to reduce the cognitive load of juggling syntax, dynamic problem solving, and someone asking you random questions that shift your train of thought or change the problem description on the spot.
This now-common, poorly structured interview process does a lot to discourage high-level conversation and to encourage a focus on syntax. It tends to get hung up on language-specific syntax and data structure recall, and less on designing and analyzing solutions.
Did they finish rolling them out? I know that is (was?) the plan, but when I interviewed last year at the Santa Monica location, they only had whiteboard.
Should be titled "How to Ace the Technical Interview in the Bay Area". Even absolute shithole bottom-tier companies and unknown startups are asking these questions. Write perfect code on the whiteboard or get rejected. You have to put A LOT of time into preparation even if you don't want to work at Google, which is ridiculous.
I don't work at Google (and I don't agree with some of the things the company's decided to do) but have many friends who enjoy working there. From what I hear, Google has an organizational structure that is very favorable for regular engineers. Once you're hired and you put in around a year of work on a team, it's almost trivial to find another team. Engineers also directly evaluate managers, and I've heard stories of mid- to high-level managers crying in bathrooms because of poor reviews from their reports. These factors combine to create an environment where teams are actively working to make engineers happy and content. Compared to many companies where managers make a lot of decisions in a room with no feedback given to or received from engineers, it's a heck of a lot better.
>I've heard stories of mid to high level managers crying in bathrooms because of poor reviews from their reports. These factors combine to create an environment where teams are actively working to make engineers happy and content.
Honestly, this sounds like a thought experiment:
Do you have any ethical hang-ups about entering an environment where falling out of your favor can leave someone stress-crying in their place of work and/or about their livelihood?
If so, how much money would it take for you to join the system anyway?
How long would you stay in such a system if you found yourself already in one?
Back in the real world, in a business context, it sounds like an abusive workplace and an untenable system. Like, that obviously can't last forever.
Perhaps using the crying manager example was a bad idea on my part. What I can stand behind is having an org structure that encourages managers and execs to treat their employees well. It sounds like Google has done a better job than most. I'm sure there are managers and engineers crying in private in every big company out there. What I'm trying to say is even though a lot of people like to assume that people work for Google and stay there just for the money, Google probably does some things very well to keep all the talent despite the negative press it gets. And I think a major factor is how empowered a "regular" engineer feels in the company. It sounds like a step up from many other companies in that regard.
- "I'm sure there are managers and engineers crying in private in every big company out there": You're not excusing google here, just expanding the range of companies whose apparent behavior is mortifying a couple of people in this thread.
- "Google probably does some things very well to keep all the talent despite the negative press it gets.": probably. They probably do a lot of a/b testing to dial in the compensation/retention ratio they're looking for, or maybe they just heap rewards onto engineers because they can afford it. Be that as it may, some people think that what google's doing is detrimental to society, or at least the problems are bigger than a salary or even a total compensation package should make up for.
I don't see any problem because I agree with what you said. Work shouldn't be so stressful that you cry in private. Google should do more "good" for the world.
I still think Google stands as an attractive workplace for reasons that are not just compensation and resume boost, though.
> Compared to many companies where managers make a lot of decisions in a room with no feedback given to or received from engineers, it's a heck of a lot better.
I'm not so sure this is better. I was an individual contributor for over ten years until I became a manager a couple of years ago, so I've seen both sides of this coin. My experience has been that engineers typically aren't interested in understanding all the non-technical things that are necessarily part of the decision-making process, or worse, think of these things as beneath them, asinine, or "easy". Much of the feedback from engineers is negative, not useful due to an overly narrow focus on specific tech stacks or solutions, or lacking context. Sometimes the context isn't there because of poor management decisions not to be transparent, but often it's not there because the engineers' bias results in them not seeking it out. Very few things are black-and-white, and certainly it's better to include engineers in the decision-making process at some level, but that's a two-way street -- maybe engineers could work a little harder to overcome their own erroneous biases.
The money is not quite life-changing in the Bay Area, but it's a significant acceleration toward retirement. I would not mind retiring in a few years, in my early 40s, instead of working to 60+.
There's no way this is actually a genuine question. It's the same reason anybody wants a job anywhere: because they'll eventually become homeless if they don't find a job.
Very smart co-workers and the potential to be part of building software that solves a specific problem on a scale that nobody else on Earth has attempted.
Bingo. If you actually like CS, instead of just seeing programming as a way to get a fat paycheck like so many seem to, why would you not want to be part of FAANG or some other company building cutting-edge products, and work on some of the most advanced systems in the world?
Some people may not enjoy the "making the world an objectively worse place" part. Not everyone is moved only by a fat paycheck and working on interesting problems. Otherwise we might as well assume it is perfectly fine to work on a poison gas that only works on ethnicity X -- after all the pay might be pretty good, and it's a cutting-edge, advanced biochemical problem.
Obviously not saying FAANGs are quite that bad, but still, this is a pretty shallow reason to work somewhere.
Smart (in the usual understanding of smart), or good programmers (in the "can invert a binary tree on a whiteboard" sense)? These are two measures that are basically orthogonal.
Is there any evidence of that? Google selects people based on entirely different criteria, and promotes them for moving a button in Gmail from an inconvenient location to an even less convenient one.
Might depend on your definition of "intelligent", though. I've met quite a few, too, including friends and relatives. Sure, they're better-than-average programmers, but their general intelligence seems to be about the same as in your general college-educated population. Otherwise we'd be arguing that Damore (or, say, Altheide, if one has an ideological preference for one or the other) is also "very intelligent".
I think you may be equating "good enough at programming to get hired by Google" with "generally intelligent". My point is that these are entirely different things.
Damore wasn't just "good enough at programming to get hired by Google". The team he was hired for, the way he was recruited, and the fact that he was (reportedly) exceeding expectations all suggest that he is likely quite intelligent.
> My point is that these are entirely different things.
Entirely? I'm sure that Google employs plenty of fools (as do all large corporations) but to suggest there is no correlation is preposterous on its face.
They lost me when they got into showing how to make the 'dups' function faster. The author definitely either doesn't understand big-O notation or doesn't understand the complexity. Their O(1) implementation is anything but. Likely O(n×log(n)) at best. Also, their brute force implementation is unnecessarily verbose. Want dups?
from collections import defaultdict

def dups(seq):
    # Count occurrences, then return the keys seen more than once.
    d = defaultdict(int)
    for x in seq:
        d[x] += 1
    return [k for k, v in d.items() if v > 1]
Assuming Python's defaultdict has O(1) lookup/insertion (which I think it does), this algorithm is a proper O(n) complexity.
Hmm. I don't think they claimed to have an O(1) time solution, just O(1) added space. Which, it is, but only because they're counting on the original array's underlying type having enough bits for their sign flipping. It would be as if you used a more compact type for the array elements, and then allocated another bitmap for the range of numbers.
Of course, once we start optimizing how the original array is stored, we may have exceeded the limits of this problem as a teaching exercise :)
As for time, it does seem to be O(n) to me; can you clarify why you think it's nlogn? It may not be particularly fast in practice when compared to other O(n) approaches like the bitmap, but I don't think the complexity is wrong.
Your solution is nice - it actually gives you more information (how many appearances, not just T/F >1 appearance), but it does require more additional space and isn't necessarily faster. I think the bitmap approach would be nicer if you're ok with using more space; the bitmap is essentially a very easy to find perfect hash function due to the unique input constraints.
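For anyone following along, here's a rough sketch of the two approaches being discussed. This is not the article's code; it assumes the classic constraint that the input holds integers in the range 1..n, which is what makes both tricks work.

# Sign-flip approach: O(n) time, O(1) extra space, but only if values fit the
# 1..len(a) constraint and the element type has a sign bit to spare.
def dups_signflip(a):
    out = []
    for x in a:
        i = abs(x) - 1
        if a[i] < 0:
            out.append(abs(x))   # already marked: it's a duplicate
        else:
            a[i] = -a[i]         # mark "seen" by flipping the sign
    for i in range(len(a)):      # undo the mutation
        a[i] = abs(a[i])
    return out

# Bitmap approach: O(n) time, O(max_value) extra space; effectively a perfect
# hash because the value range is known up front.
def dups_bitmap(a, max_value):
    seen = bytearray(max_value + 1)
    out = []
    for x in a:
        if seen[x]:
            out.append(x)
        else:
            seen[x] = 1
    return out

# Note: both report an element appearing k > 2 times k-1 times;
# wrap the result in set() if you only want each duplicate once.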
It would be nice if an article on how to ace a coding interview did not have incorrect code in it. AFAICT the set-based algorithm for finding duplicates is wrong; the resulting set will contain items in the list that are not duplicated.
Yes, I confirmed that by pasting it into the REPL and verifying that it gives the wrong answer for a one-element list. Apparently the author failed to follow his own advice to always test code that you write.
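For comparison, a correct set-based version is only a few lines. This is a sketch, not the article's code:

def dups_sets(seq):
    seen, dup = set(), set()
    for x in seq:
        if x in seen:
            dup.add(x)    # only items seen before end up in the result
        else:
            seen.add(x)
    return dup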
I wouldn't get an interview from Google even if I wanted to. I'm 29 and live in Africa.
That said this guide was still helpful. A reminder of some of the skills I should hone for my next interview.
I loved reading the comments here because they give so much perspective from all sorts of people. HN is extremely critical of everything and it can be sobering.
I obviously cannot speak to the intentions behind such guides or the damage they can do, but honestly, even if you know how to code, it doesn't hurt to prepare for an interview in the way a prospective company would want you to.
In Silicon Valley, tech interviewing has become an arms race between applicants cramming to pass tech screens and interviews, and employers coming up with new routines. Sites like Glassdoor and CareerCup are loaded with interview questions that have appeared in those routines, giving savvy interviewees the opportunity to see the questions on the exam and prepare accordingly.
How do you feel about the existence of these sites, and do they affect how interviews are conducted?
As an interviewer I don't really care. A good candidate doesn't need them, and a poor candidate isn't helped by them. The only thing that's irritating to me is that they actually burn interview questions. Once a question is seen on an external job board it gets banned as an interview question.
In the current interview structure, I don't think that's inherently a bad policy. It forces interviewers to develop new, unique questions and hopefully, while doing so, consider the time and cognitive complexity their own solution took them before deciding to hand it to an interviewee.
This also discourages overly complex or overly familiar questions. If a question is too complex, chances are it will end up posted online soon after, penalizing the interviewer in time cost. If a new question is recycled frequently, it will also likely end up online at some point, which penalizes interviewers who lean on questions they're overly familiar with and whose assessment they've biased through their own rote learning.
Yes, I have a few close friends who hop jobs every 1-2 years. Part of their secret is to be involved with interviews, to keep all this crap fresh in their minds and also to be part of this sub-industry of tech interviews.
Good for them financially, maybe. Professionally, I didn't see them go beyond the average senior dev. But that's just me being sour.
There's an unconscious bias in this post. Google as a company is not only interviewing for engineering roles. Even within engineering, there are many subcategories, and many of them don't follow the typical SWE interview process.
If you just want to know what the interview for your role would look like, the recruiter from Google will be happy to give you an overview.
Any ideas what the process would look like for something like Solution Architect for Google Cloud? I can’t imagine that there would be graphs and tree coding questions but you never know.
I think it largely depends on the person interviewing you. I know candidates interviewing for Machine Learning positions get asked typical algorithm questions.
I've never seen an interview process that HN (and Reddit and Slashdot and ...) didn't trash as "deeply flawed", "biased", "unfair", "unreasonable", etc. At some point, though, a company has to have some sort of process, and by and large what they use works for them.
It's always so much easier to criticize (since nothing is perfect) than to actually come up with a better solution in real life. So for anything without a perfect solution, people will criticize endlessly online, even though they themselves have no better alternative and what they criticize isn't terrible by any means.
You can bypass the whole charade by knowing 2-3 people within Google who can provide "assurance" that you are good enough. Whiteboard testing is for grunts/unknowns without a network. Another way is to be a significant contributor to some popular open source project.
Is there something special about reversing a binary tree?
AFAIK, you could swap the child pointers and do that recursively for the child nodes. You could also do things O(1) by just changing the comparison function, perhaps by wrapping it to negate the comparison.
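A minimal sketch of the recursive child swap being described, for reference; the names are illustrative, not from any particular interview:

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    # Swap the children, then recurse into each; O(n) time, O(height) stack.
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node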
This is a specific, well-known case. Max Howell (the author of Brew) was rejected by Google. One of his interviewers asked him to invert a binary tree.
If they really want you, they'll do all they can to get you on board, even if you're failing or aren't motivated to do their entrance interviews, and the two things I mentioned are criteria they use to bypass their process for very senior hires. They would literally give you a $3M signing bonus in extreme cases, or acquihire your team/company if you still didn't want to move.
They probably didn't want the Brew author that much; we don't know the details.
I have a very solid network inside Google (ex-coworkers from another FAANG job) and also among Xooglers, on both the IC and management tracks.
Even in my last interview failure (see my post here), the recruiter told me I had more than enough to support my application. Yet, an exec made the no-go call.
OK, I know a person who was stopped at the founder's level ("no way that person could ever work here"). Pity a VP blocked you; such things can happen both ways, regardless of your interview performance/likability.