The rise of never-ending job interviews (bbc.com)
967 points by hhs on Aug 2, 2021 | 1169 comments



My one and only Google interview went this way years ago. Each round they'd send me more books to study, which frankly I couldn't be bothered to read given the circumstances.

My experience ended when an interviewer in round 3 or 4 asked me an obviously scripted question. I answered sarcastically, he got peeved, and I never heard from them again.

I'm not claiming I'm Google caliber, whatever that means. Obviously I'm not because I don't have the patience for their interview questions.

To be clear, the entire question was: What's not in a Linux inode?

My answer was: Lots of things...dinosaurs, the moon...

The interviewer told me very matter of factly that it was in fact, the filename.
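For what it's worth, the underlying fact is easy to demonstrate: a filename is a directory entry that points at an inode, so two hard links give one inode two different names. A minimal sketch in Python (assumes a POSIX filesystem):

```python
import os
import shutil
import tempfile

# A filename is a directory entry pointing at an inode; the inode holds
# metadata (size, owner, timestamps, data blocks) but no name. Two hard
# links therefore give one inode two different names:
d = tempfile.mkdtemp()
original = os.path.join(d, "original.txt")
alias = os.path.join(d, "alias.txt")
with open(original, "w") as f:
    f.write("hello")
os.link(original, alias)        # hard link: a second name, same inode

ino_original = os.stat(original).st_ino
ino_alias = os.stat(alias).st_ino
print(ino_original == ino_alias)  # True: one inode, two filenames

shutil.rmtree(d)
```

Which is exactly why deleting one hard link doesn't free the file: the inode only goes away when its last name does.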

I honestly lost all respect for the process, sorry Googlers.


I had a similarly bad Google experience that I've talked about before[0] but will copy here:

I was asked to do a task that eventually boiled down to a topological sort, and I assumed the question just consisted of recognizing that the answer was a topological sort and moving on, since it was over the phone.

However, that was not the case. The interviewer wanted me to code it all out over Google Docs, but I didn't remember the exact algorithm so I basically had to re-figure it out on the fly, which took most of the interview (I even mentioned that "in any real situation I would just look this up", but that didn't help). At the end, I had a bunch of pseudo-C++ code that should do it correctly.
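For reference, the standard approach here is Kahn's algorithm, which is short once you remember it. A sketch in Python (not the pseudo-C++ from the interview):

```python
from collections import deque

def topo_sort(n, edges):
    """Kahn's algorithm: repeatedly peel off nodes with no remaining
    incoming edges; a cycle leaves nodes that can never be peeled."""
    indegree = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:          # edge u -> v means u must come before v
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(i for i in range(n) if indegree[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != n:
        raise ValueError("graph has a cycle")
    return order

print(topo_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 1, 2, 3]
```

Easy to describe, fiddly to reconstruct from scratch under pressure, which is rather the point of the comment above.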

I thought I was done, then the interviewer said she would go copy my code and compile it after the interview to see if I was right, which blew my mind. It was never mentioned previously that the code would actually be compiled and run, and with no syntax highlighting or ability to compile and test the code myself there is zero chance it was ever going to work.

I never heard back, so I'm assuming my code failed and they left it at that. Anyway, I'm much happier now than I think I would have been at Google.

[0] https://news.ycombinator.com/item?id=23848556


At this point I'm pretty sure FAANG hiring is just a random walk. Every now and again someone happens to have looked at all the questions they ask recently for that particular interview cycle (or avoids the trap ones like that), and that person gets hired (and then put on the ad targeting team or whatever).


The experiences that people describe here just confirm something that many of us have learned a long time ago:

NOBODY HAS "FIGURED OUT" HIRING !

Not Google, not Apple, no one. Sure, some places (and individual interviewers) are better at it than others. But at the end of the day, hiring is a deeply subjective process with lots of error and uncertainty built into its nature. The subjectivity is intrinsic.

Places like Google can afford to be nonsensically picky and not suffer drastic consequences from it. They have a thick, never-ending stream of highly qualified candidates. At their volume of hiring, it doesn't matter to them if they screen out some folks that would have been brilliant hires, nor does it matter if they hire some promising but ultimately disappointing duds. All of that is OK.

Sadly, however, it seems that small shops are trying to cargo-cult Google's hiring practices. That IS harmful to the company and the candidates, IMHO. I think folks in these non-FAANG companies should get trained on how to conduct interviews, especially if they're interviewing non-senior candidates. Interviewing is a skill in itself. It's not something that comes automatically with expertise nor is it something that can be left entirely to HR drones.


Pretend there's a Programming Quotient (PQ) which is like IQ.

Let's say Google would like candidates with PQ>130 with 95% confidence. Google's job interviews measure PQ with an error of std. dev. 15 points. Google then needs to set the hiring bar at 160 PQ in order to get those candidates. This:

- screens most qualified candidates out; but

- most candidates who do screen in are qualified

Statistics would suggest this leaves you with 95% qualified candidates. A more precise Bayesian analysis will show you don't end up with 95% qualified employees, but the basic idea works -- it's still a majority. You set an impossibly high bar, so that candidates hired need to be qualified AND lucky. You discount unlucky candidates, but you don't hire (many) unqualified ones.
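The argument is easy to check with a quick Monte Carlo sketch. All of the numbers below are the hypothetical ones from this comment, not anything Google actually uses:

```python
import random

# Assumed model: applicant true PQ ~ N(100, 15), interview measurement
# noise ~ N(0, 15), hiring bar at a *measured* score of 160, and
# "qualified" means true PQ > 130.
random.seed(42)

hired_true_pq = []
for _ in range(1_000_000):
    true_pq = random.gauss(100, 15)
    measured_pq = true_pq + random.gauss(0, 15)
    if measured_pq > 160:
        hired_true_pq.append(true_pq)

frac_qualified = sum(t > 130 for t in hired_true_pq) / len(hired_true_pq)
print(f"hired {len(hired_true_pq)} of 1,000,000; "
      f"{frac_qualified:.0%} of hires are truly qualified")
```

Under these assumptions roughly 60% of hires are genuinely above PQ 130: a majority, as claimed, but well short of 95%, because the bar also selects for lucky measurement noise.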

The problem, of course, is that all Googlers are convinced they all have a PQ>160, and are superior to everyone else. That's where you get the obnoxious arrogance and incompetence Google is known for.


I’ve used Google’s software. I’m not sure they’re all that great.

Someone just released a product that offloads Chrome to the cloud. Gmail has a very long loading screen. Hangouts was replaced by something like 4 incompatible apps. Android phones are significantly less power efficient than iPhones. YouTube copyright notices are trivial to game. Etc.


I think the bar I gave, PQ of 130, is about right for Google. Your typical Google programmer is pretty bright and pretty competent, but not spectacular.

Most of what makes big companies succeed or fail is in the overall culture, organizational design, incentive structure, and corporate structure -- properties of a network of individuals rather than of those individuals themselves. I think most of Google's success and failings can be explained that way, much more so than the success or fault of employee quality.

Organizational design is really hard to get right. A senior manager described it as herding cats. If you get them all mostly moving in a beneficial direction, you're doing okay.

That's why they pay executives the big bucks. Executives fake understanding how to manage this stuff. Most don't, but they do a good job of convincing boards that they do.


Yeah I don't know, I've been extremely disappointed with Abseil, protobuf, and gMock. So whatever metric they're using, it's not generating particularly great C++.

I wouldn't care about some company's code quality, but in these cases Google's clout (due partly to their maladaptive hiring practices) causes these bad libraries to get grandfathered into many projects that I have to deal with.


I've seen amateur programmers produce great code, due to cultures of code review, peer mentorship, high professional standards, and time to think deeply through problems and talk things over.

I've worked in companies where great programmers produced horrible code, due to cultures of optimizing to productivity metrics / features shipped, rushed timelines, and interrupted work schedules with meetings, requirements changes, and people multitasking projects.

I'm not arguing one of those is better than the other. Running a business is about tradeoffs. I am not particularly impressed with anything Google has engineered in the past decade or more. The original search, gmail, Google Docs, Android, Maps, and a few others were brilliant, but those are a long time passing.

On the other hand, I'm not ready to condemn anyone working on those over that. Competence is situational and context-dependent. I also don't have insight into Google business decisions. Google revenues are growing exponentially, so they're clearly doing something right.


> Executives fake understanding how to manage this stuff. Most don't, but they do a good job of convincing boards that they do.

Makes me curious whether you have thoughts on how to find people who are good at that stuff for real (not just faking it).


The number of things Google farms out to "the contractors" is crazy. I've talked to a few folks that got to work in the big beautiful facility, but the code they were writing was the worst "pound it out, who cares how it looks or how well it performs" quality.


> all Googlers are convinced they all have a PQ>160, and are superior to everyone else.

And likewise, other employers looking to hire ex-Googlers are convinced of the same.


Well no-one ever got fired for buying IBM.


They do now.


Except that the measurement error is very clearly fat-tailed, and its std. dev. is clearly much larger than one std. dev. of competence in the population.

Place the bar high enough and you will get more and more people far into the tail, and fewer and fewer people who actually fit your bar.


I very much agree with everything you wrote, except for the arrogance bit. Many actually suffer from impostor syndrome, and just a few I could call arrogant. I'm sorry you had to deal with them, but please don't generalize from just a few.


Outwards arrogance is often the manifestation of impostor syndrome, but I digress.

Corporate arrogance isn't a property of individual personalities. Most Googlers are perfectly nice people. But the Google corporate culture as a whole is rooted in a deep superiority complex and dripping with arrogance. Google believes it knows better than its users, and that translates to all aspects of product design. If you moved those same engineers to a different company, you wouldn't see the same behavior.

I'll also mention that each organizational design has upsides and downsides.

This culture seems to work well in Google's early markets (e.g. search), where users are statistics, most problems are hard algorithmic problems, and individual users are secondary. It has upsides in B2C markets like Google Docs or Android. It crashes and burns in a lot of B2B markets, like Workspace or GCP, where customers have a high degree of expertise which ought to be respected.

I'll mention a lot of fintech companies, as well as elite universities, have a similar culture. Those are domains where it leads to success as well.


I'm sure that's true, but I've interviewed there numerous times and I invariably get one or two shockingly arrogant and obnoxious interviewers. The fact that they usually have no idea why I'm being interviewed, and clearly have a lot of other things to do, may be the source of that perception. But it so often feels like "you're already wasting my time, but here's my favorite trick problem that makes me feel smart, man you're a waste of time."


The reason this cannot work is that in any process that constrained, if there is any factor other than PQ that can get you a position, the outcome of the job interview will be entirely determined by that factor and not at all by PQ.

Instead, you should try to find lots of correlates of PQ and measure them regularly.


This is an excellent way of looking at this and many other high-bar organisations - thank you!


It's helpful to explain this to recent grads.

A lot of really good people are discouraged by repeated rejection. However, high levels of rejection of very qualified people are built into this (and many similar) systems. You have to be qualified AND get a lucky die roll to get in the front door. Once people stop taking rejection personally, they can start acting more rationally, and there's less emotional harm. People feel really bad about themselves otherwise.

There are back doors with less luck involved.


I think you've hit the nail on the head. I've been on both sides of the desk so many times. When I'm hiring, I'm trying so hard to recognize the person is nervous, probably interviewing at multiple places, and they don't want to be asked to solve stupid algorithmic puzzles that will never come up in their daily work. While interviewing, I try to recognize that they NEED to ascertain whether I can actually solve problems performantly and quickly. They also need to quickly decide if I'm worth the high dollars they're about to offer me.

Because of this, I often hire people I've already worked with. I'm also often hired by people who've already worked with me. I hate that this leads to a very homogeneous experience... or even what might seem like gatekeeping. I've just found that the best indicator of how a person will perform is already being familiar with their work.


Or you hire people you know. Which isn't perfect, has its own set of problems, and doesn't scale. But I can't really complain, given it's how I've gotten every job (just a few) after grad school, and my interviews have been mostly perfunctory.


Here on the other side of the FAANG spectrum, working with and for other independent contractors and various small (5-20ish devs) consultancies, I'd say that hiring based on who you know and who your peers recommend is far and away the primary way work is done.

The good news is, it's a very open network. We have a highly active Meetup scene, pretty regular public hackathons, annual small software conferences and un-conferences, and coworking spaces are (well, were) packed.

For the most part this has worked: people new to the community are able to find jobs, and the people hiring them know what they are getting. But also, like you say, this doesn't really scale to larger operations.


>The good news is, it's a very open network.

I'll accept the statement. But I will say that hiring from a network is at least a very different thing from hiring through a grueling set of often artificial interview hoops. It requires a potential candidate to have genuinely interacted at a higher than superficial level with a lot of people in a professional capacity. Which may not be harder than "leet code" but is certainly very different.

And yes, all the big companies have referral programs, but that's mostly just a very rough first pass, as a lot of referrals are basically "I'm connected with this candidate on LinkedIn. Referral bonus please."


> Sure, some places (and individual interviewers) are better at it than others.

In my, admittedly not super extensive experience, I tend to believe that about 1 in 10 companies actually knows how to run a hiring process. I'm not sure precisely what kind of error bounds I'd put on that, but I doubt I'm off by more than a factor of 2 either way.


There's nothing to figure out. Relationships aren't a maths sum. They work out to varying degrees, and have too many variables and too much depth to predict. But the group of people with a reputation for having maths skills and a reputation for not having social skills are going to figure it out - what could go wrong?


we have, we don't have technical interviews anymore, just a 'cultural fit' discussion. coding has become so trivial anyone with a brain can piece together snippets from stackoverflow. do yourself a favor and if all you do is crud operations in a web app, don't even bother with tech rounds.


I'm really curious what company "we" is.


> At this point I'm pretty sure FAANG hiring is just a random walk. Every now and again someone happens to have looked at all the questions they ask recently for that particular interview cycle (or avoids the trap ones like that), and that person gets hired (and then put on the ad targeting team or whatever).

This might actually be part of the retention strategy. If everyone who works at FAANG knows that they're somewhat lucky to get in, regardless of ability, it means they are less likely to jump ship. What's the point in applying for another job when you're already paid well and it's unlikely to end in anything but lost time? You're unlikely to win a 10% lottery twice, given the amount of time you'd bother investing.


I'll just +1 this as a datapoint.

I am not looking externally until I'm sure I'm done at AWS. I've done 80+ interviews here, and I'm consistently amazed by what drives people to say "The candidate wasn't up to the coding bar". As far as I can tell, you are almost never up to the coding bar. I think we interview so many people just to remind ourselves that if we leave, we aren't getting back in.

Just to head off the "So why don't I argue for change?" questions, I'd rather work on self improvement than fight tooth and nail to make the hiring process a little bit better. I'll let somebody else add that to their promo doc.


That culture is so foreign to me. Each place I've left in the past 10 years, I'm pretty sure I could go back. I know enough folks, or at least have been told by the C-suite that "the door is always open, just give me a call". I went through a few rounds of Amazon interviews in 2020... I don't think I'd ever want to experience that again.


I remember seeing a comment about Amazon once claiming in order to be seriously considered for a role, you have to be better than 50% of the existing team you’d be coming onto.


To be fair, that would mean that your ability level is the median ability of the team you'd be joining, which doesn't sound all that far-fetched to me. If the skill level of new hires is always close to the median, the overall team ability should stay the same in the long run. This line of thought obviously depends on the assumption that you can measure the exact ability using just a single quantity.


Exactly. Who wants to hire somebody who's dumber than half their teammates? That's a recipe for failure. Assuming inaccurate measurement, you'll actually have some below median, so you need to bias upward or else you'll just end up with a bunch of mediocre eng.


The logical end to that is that you wouldn't hire half your employees again. When put that way it's pretty clearly an unreasonable standard.


Yep, and that point has been brought up ad nauseam internally. Jeff also commented that he would probably not be hired by his own company. I personally saw the business tipping the scales to the point of "just get them in the door, oh my god we are growing too fast and don't have time for your bickering". Depending on the business unit and how hungry the hiring manager is, the question of whether a candidate raises the bar can just be a hand wave and "get 'em in".

For some technical positions you have a "bar raiser" (BR) interview, conducted by someone who is often an IC promoted to management ranks and serving penance by conducting countless BR interviews. 99% of them were pretty chill on the Clark/Fulfillment side of the business. Can't speak to the Jassy/AWS side of the house; their concept of BR may be more concrete. We rarely kept people out just because they did not "raise the bar", since that is subjective as hell and personally I like quantifiable metrics.

I never interviewed for the AWS side of the house since I knew I wanted free time, and a life.


On the AWS side of the house, the bar raiser doesn't actually interview the candidate. The bar raiser is there as a neutral party to help facilitate hiring decisions and debriefs.

It's the bar raiser's job to make sure that the hiring manager doesn't just hire people to fill seats. More than once, I've seen bar raisers override hiring managers when there was compelling evidence from the technical interviewers.

At least in my org, you don't get the job if you don't raise the bar. Period.


That is crazy. With us, BRs would just sit in on the POD like any other member and have the same voting authority as everyone else. I wonder when things split internally such that Team Jassy and Team Clark started doing things so differently. Thanks for the post man, that is crazy to learn.


That’s true if we assume that ability does not change with time. But if you have a spectrum of experience in the team, it might be unrealistic to expect rookies to match the median.


Internally that is called raising the bar. Oftentimes I would personally aim to find people who were 50% better than the best person I knew in that role. Oftentimes I would still incline on people during their POD; I would just strongly incline with "raises the bar" on folks who met that rare exception. Honestly, I was on the Dave Clark side of the business and we were much more chill about things vs. the Jassy/AWS cult. Those dudes might as well be from the moon. So take my statement with a grain of salt. We worked for the same company... but we also didn't; Amazon is just that big.

I had plenty of times where an entire hiring pod would incline on a candidate because they had the soft skills we were looking for in that role and we knew we could sharpen their technical skillset in house. You don't have to be a rocket scientist to work at a FAANG... you just have to be interviewed by a team that is hungry and likes you.


Candidates should be better than 50% of the people currently in the role. So if you're interviewing for an SDE1 position, you're expected to be better than half of the current SDE1s.

You're evaluated against people in the same role, not the entire team. But, yeah, the idea is that the hiring standard (the 'bar' in Amazon terms) is always getting higher.


You don’t need to fight tooth and nail for change. Expressing dissent or disagreement with the status quo is a good start.


At Amazon, that’s considered volunteering for a PIP.


Sometimes I see articles about a wrongfully convicted man being let out of prison. This usually comes with a cash payment of some sort: the whole "we took 20 years of your life, oops, here's a million dollars". Every time I see this, though, I think "no, no way does money replace my time." That's simply not good enough. And not because the payments are usually ludicrously low, but because there is no amount of money that would buy off my life.

Same as there is no amount of money you could give me to end my life, a position which I assume is shared by a vast majority of humans.

Time is not money. It’s not even close as an exchange rate. I would never put myself through these types of interview processes (not to mention that I would assume that sort of thing to be indicative of the job itself and company as a whole) because I value my actual life and dignity far and above what I value as an upper middle class income.


> I value my actual life and dignity far and above what I value as an upper middle class income.

I am at least 90% sure that part of these interviews is to filter out people who hold these sorts of views. These employers would much rather have someone who wants to work 100-hour weeks and be the "hero" over someone who works exactly 8-4 Monday-Friday and then goes home.


> Same as there is no amount of money you could give me to end my life, a position which I assume is shared by a vast majority of humans.

Pretty sure a lot of older parents would take a deal with lethal injection + $1M for their kids to inherit.


IDK, if the interview is just a few hours of misery to get a significant financial upside, it's not such a big deal. People always assume the job will be miserable too, but that varies from team to team. Especially because the salary is high enough that you may be able to save enough to buy your life back if you're careful and have time on your side.


Be sure to price in general life risks like accidents, cancer, stress-induced heart attacks, etc. The problem with this approach is that a non-zero number of people will not live long enough to buy their life back.


Very fair point, important to try to enjoy every stage of the journey.


Yes and what this means is that Google, and others, are hiring those most willing to do anything for money.

That always works out well.


Congrats, you've discovered why hazing rituals have been a thing for thousands of years!


You’re forgetting that it’s easier to get a FAANG job once you’ve already had one. They love to poach each other’s people.


> They love to poach each other’s people

The 'FA.N.' part of it maybe: https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...


If that's true, it's at least a change from the "no poaching" agreements that used to exist.


It's very true, and very natural. It's one reason why FAANG salaries have been getting so high.

If I change jobs, and find myself in a good situation, and need more head-count, I'm going to reach out to people I like that I've worked with before, see if they might be interested in a job change. I know I can work with them, know we'll build good stuff.

There's a constant drive in FAANG to hire more people, most teams have open headcounts. To the degree that it's extremely hard to build up that funnel of candidates. If my team has a head-count of 5, that realistically means I've got to find a bare minimum of 30-40 candidates to enter in to the pipeline from somewhere, to maybe get close to that target. That's a slightly optimistic conversion rate. Now scale that up across a company with thousands of teams that are hiring. Getting that pipeline filled on that scale is crazy. It's just one of many reasons why FAANG go all in on college hiring events.

I can almost 100% guarantee to get someone I've worked with in a FAANG co-worker through the interview pipeline and in to a job. They know their shit, they know what they're doing, and they know what the process is like.


It's because the DoJ Antitrust Division came down on them hard in 2010[1], and part of the settlement forbade the companies from engaging in that behavior again.

[1] https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...


One can interview without quitting the current job. That allows for multiple attempts over time while retaining the current status. People may do so to see what pay they’d be offered.


100%, speaking as someone who's been both sides of the equation in FAANG hiring.

Part of the problem is the sheer scale of hiring, but I think most of the problem comes down to the lack of feedback or evaluation mechanisms on the interviewing side.

They train you up, half a day's worth. The training tells you not to be an arsehole, not to ask stupid questions, not to have unreasonable demands of candidates, not to be biased. They don't train you in valuable skills like active listening.

Next thing you know, you've done training, and you're interviewing candidates every week or two (or more often), and there is zero feedback mechanism. No one evaluates your interview questions, no one asks candidates to provide feedback on the interviewers. No one looks to see if you've got unreasonable expectations as an interviewer, or have your expectations set too low.

You do have to do a post-interview group discussion with the other interviewers to make a yay/nay decision, but it's super easy to present what you did/asked in a positive light.

The whole system is designed around the interviewer being right and infallible. Is it any wonder the process is so completely and utterly broken?

edit: > or have your expectations set too low

This is where I bias towards in worrying, imposter syndrome and all that jazz. I've made a conscious choice to not raise that bar higher. I think the questions I ask are good, I think they're set up well enough to encourage candidates to go as deep as they feel comfortable with. I try to design them with no one true answer but have a few in mind so I can go where the candidate goes.


I also did interviewing at a FAANG and this was not my experience.

1. We trained 4-5 times on each type of question. The first few were shadows, and then we did reverse shadows where someone watched us give the interview and gave feedback later. In one category I asked for and was allowed to reverse shadow an extra 1-2 interviews.

2. There was auditing. In debriefs where you discussed the candidate and reviewed notes, the debrief lead was supposed to closely examine what questions you asked and how you conducted the interview, with the explicit goal of making sure that the interview was conducted within spec and your recommendation made sense given performance. Shortly after I was certified to do interviews, a debrief leader (correctly) identified a major issue in an interview that I had conducted. That candidate was given another interview in the same category. Although I didn't face any official sanctions, it was definitely an embarrassing experience and made me handle future interviews more thoughtfully.

Overall, I was fairly comfortable with the rigor of the process that I saw. I'm certainly not saying the process is perfect but my experience did not align with yours.


This has convinced me never to work for a FAANG. That much interview rigor leads away from the "gut instincts" which lead to great hires.


Or it leads to people consciously avoiding subconscious bias and actually focusing on capabilities, not what your lizard brain thinks.


Not if it's taken too far, as it will create new biases based around the "rigor" of the process. You end up "hiring to the process" as opposed to "hiring to the team / role". If someone doesn't have enough experience/knowledge, but they are clearly talented enough to learn on the job and look like they could accomplish great things, that person may be a much better fit than someone who knows everything but is terrible at executing or working within the structure of a large organization.

(the "lizard brain" is a myth by the way; I think you're confusing it with unconscious bias of heuristics, which is not what the triune brain theory was about)


> No one evaluates your interview questions

Are you saying each interviewer just makes up their own questions?! That's ludicrous if so. Where I work, we have a standardized pool, with standardized evaluation criteria.


Everyone makes up their own. I can't for the life of me fathom how this doesn't subject them to all sorts of discrimination etc. claims (one of the reasons lots of companies favour standardised questions).

Obviously the drawback for FAANG is that standardised questions would rather rapidly leak. Very quickly you'll just end up with candidates that know how to answer your questions.

Where I work now, it's a mix of pool questions ("soft" skills) and interviewer-made questions (technical skills), but it's not a hard and fast rule to use the pool questions. I rarely use the precise wording for the pool questions, and instead adapt them to match the conversation with the candidate.


Adapting to the candidate is probably good most of the time, but don't work too hard to make that happen. Computational geometry is on my resume, and interviewers are always trying to show that they know that too by trying to bend their question into it. The exercise very often detours as they've led me way down a wrong path because their analogy doesn't fit, and all they wanted to know was if I knew about minimum spanning trees or even just heaps.


What's funny is that I specifically remember a conference where some library that a Google employee wrote was terrible. The one Google developer talking about it disavowed any responsibility for it.

Whatever metrics Google is using in their interviews have probably become worthless in the past decade as people game the system.


In my startup I interviewed a 48-year-old senior Java programmer with an excellent resume, who took 1hr to write a String.contains(). It only worked for the requested 4 letters, didn't work if a letter was repeated twice, and didn't work with Chinese characters. At least it had the JUnit. I asked an employee to do it too and he made his code pass the JUnit in 6 minutes.

The candidate hated the interview, claiming it was discouraging. Coding is erratic, talent is strange, it really is a craft and we still don’t know how to reliably raise someone to competency.
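For comparison, a correct (if naive) contains is only a few lines; the repeated-letter failure usually comes from advancing the search position incorrectly after a partial match instead of restarting. A sketch in Python (where strings are Unicode, so Chinese characters come for free):

```python
def contains(haystack, needle):
    """Naive substring search: try every start position from scratch.
    Restarting fully at each i avoids the classic repeated-letter bug
    (e.g. failing to find 'aab' inside 'aaab')."""
    if not needle:
        return True
    for i in range(len(haystack) - len(needle) + 1):
        if haystack[i:i + len(needle)] == needle:
            return True
    return False

assert contains("aaab", "aab")        # repeated letters: OK
assert contains("你好,世界", "世界")    # non-ASCII: OK
assert not contains("abc", "abd")
print("all cases pass")
```

It's O(n*m) rather than KMP or Boyer-Moore, but for an interview the edge cases matter far more than the asymptotics.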


But which is more likely?

1. The candidate was a complete and utter fraud and their previous (and apparently well-regarded) employers were too stupid or negligent to notice this, wasting literally millions of dollars ((48-21) * $100,000+).

2. Something about the interview failed to let this person demonstrate the skills that had kept them employed for two decades. Maybe their mind went blank under pressure, or at the end of a long day. Maybe they got hung up on something trivial (that a quick search—-or nudge from the interviewer—-would have resolved), or the question was unclear.


To add to this, I've found engineers more likely to hang up on time-series and string-manipulation problems. Likely due to a combination of not having to code low-level functions in these areas, and infrequently encountering the problem.


Yes, strings are hard and times/dates have a ridiculous number of edge cases, and sometimes very poor language support. This works both ways though; if the problem is easy enough (calculate average cycle time) it can give you lots of edge cases to discuss and really show how someone problem solves, which is really the point of a programming interview. If someone even mentioned non-english language support that would be enough for me, forget about implementing it.


I personally dislike giving questions with too many rabbit holes. My observation on a few questions is that it's a 50/50 shot whether the candidate who freezes on a question recognized more nuances than the candidate who didn't, which means I'm not getting any data.

Fizz buzz was a great question in that it had a pretty much Boolean success criterion.
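Since FizzBuzz comes up all over this thread, here is one minimal Java shape of it for reference (many variants are equally fine; the class and method names are just mine):

```java
public class FizzBuzz {
    // Classic FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz",
    // of both -> "FizzBuzz", everything else -> the number itself.
    static String fizzBuzz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            System.out.println(fizzBuzz(i));
        }
    }
}
```

The Boolean success criterion is exactly this: either the output for 1..100 is right or it isn't.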


The interviewer needs to be good/prepared to make it work.

If the interviewer only says "Write String.contains that passes these test cases" and then goes back to playing with their phone, several things may happen. One person will take that absolutely literally ("It's a test; better do as I'm told"), and you'll dismiss that apparent garbage or move on to the "real" assessment where they're hoping to shine.

Another will get bogged down and "waste" a bunch of time on something the interviewer regards as a distraction: "He handled Unicode, but not substring matching (KMP or Boyer-Moore), or vice versa." Maybe someone will goldilocks it and hit the right (not-explicitly-specified) balance of (also unspecified) features and time, but...

If you structure it as "Please, do the dumbest possible thing and we'll iterate"--and don't hold that initial pass against them--I could see it working well.


Sounds like the solution then would be to give candidates all of the systems and support they would normally have access to on the job, see how well they adapt to a conventional task or an issue that was recently solved by someone in a similar position at your company, and have their result evaluated by someone involved with the implementation or fix.

That would tell you if their workflow would fit your company much more than knowing how to run a coding challenge would.


On the other hand, tools tend to be fast to teach and pick up relative to fundamentals. Most companies have rough around the edges tooling that a candidate either wouldn’t know about or need a couple weeks to get productive in.


I would say 1 is possible enough that it's worth checking for. Remember that insanely hard programming interviews are a reaction against a situation where 99% of candidates couldn't program at all.[1]

[1] https://wiki.c2.com/?FizzBuzzTest


What proportion of your colleagues do you think are wildly incompetent? Not just a bit sluggish, subpar, or sloppy, but not even remotely able to do something resembling their job description.

There are certainly a few. The job market, being a combination of people who want new jobs and those that can’t keep their old ones, is undoubtedly enriched for them.

Even so, it seems unlikely to me that there are anywhere near as many as most people say. You certainly don’t have to hire someone who flubs your interview, but you also don’t have to assume they are frauds.


> What proportion of your colleagues do you think are wildly incompetent

30%, minimum. I was hired alongside a guy with a fantastic resume. He pushed zero lines of usable code in 4 months. When he left I purged about 20 files of tests that were completely commented out (but I guess those count for LOC by GitHub's crude measure). I would say that not only was he incompetent, his contribution was negative (thankfully, nothing critical). Maybe to cover his tracks, he had moved certain classes of tagged tests (e.g. skip, broken) to "ignore" status instead of yellow-star/red-dot; months after his departure, I now have a PR reverting those changes because I hadn't noticed he'd done it. Thankfully it hadn't covered up any major defect in our codebase (someone could have left a corner-case test as "broken" with the intent to fix it later, forgotten, and shipped it to prod).

But hey. Programming isn't that bad. In the physical sciences it was 60-70%.


> about 20 files which were tests that were just completely commented out

How was this not uncovered during code reviews?


That's a good question. I wasn't reviewing the code he pushed.


The problem is that it only takes one or two wildly incompetent people to completely disrupt the quality of the software. These are the kinds of developers who actively create bugs, usually by building (or copy/pasting) solutions that only work by accident, or who decrease the velocity of everyone around them by generating reams of overcomplicated and brittle code that is hard to test, hard to review and hard to maintain. It costs a lot of management time too, trying to find a way to get them to improve, or to build a solid case for letting them go.

I think the reason why every developer tends to have a story about these sorts of incompetent colleagues is not necessarily because 50% of their colleagues are incompetent, but because even if just 2% (one person in the department) or 5% (one person in your larger project team) is incompetent, that can be enough to cause a seriously negative impact.


I should clarify I lifted the 99% stat from the linked wiki. I agree it seems high.

I’ll estimate zero to 10% wildly incompetent. Many of the folks who aren’t able to program find other ways to be useful: Testing, requirements, prod support, sys admin, config. It’s not even clear they couldn’t program, but maybe came to prefer the other work at some point.

What’s your wildly incompetent estimate?


The problem is the percentage of wildly incompetent applying for your job is a lot higher than the percentage of wildly incompetent overall.


Absolutely. The incentives to train for the job search and then apply for (and succeed at) a job with zero relevant competency are quite high. And there are... geographies... which have a deserved reputation for being mills for those sorts of individuals, likely because the economic incentive is even stronger than the median, which I suspect is quite annoying for actually competent people who come from those geographies.


Didn’t mattkrause acknowledge as much in his comment?

> The job market, …, is undoubtedly enriched for them.


A few percent maybe, but not as high as 10 percent. It's also not just people who "can't" do it, but also those that aren't motivated or cooperative (for whatever reason).


I interview for my company. 80% of the DS applicants (some of them with SWE backgrounds) who apply for our senior positions fail FizzBuzz or some riddle of similar difficulty. This is already pre-filtering for seniors from established companies. We do not pay badly for the market. They also do equally badly with other FizzBuzz-level tests in other areas that they claim to have worked in.

It is still a very useful test.


This is exactly my experience too. Sometimes it's incredible just how little applicants understand about how to develop software. I've even interviewed people where they were allowed to have a web browser and IDE while coding a solution, and they still struggled.

Personally I am a much bigger fan of using FizzBuzz as a gate than an algorithm question. I think algorithm questions optimize for the kind of developer who doesn't mind memorizing algorithms to get a job, which might be a useful skill, but you can test that same skill of memorization using FizzBuzz, and then you don't end up also filtering out people who can code but don't care about memorizing algorithms.

In any case, I always think it's worth using their solution as a jumping-off point to ask other, more language-specific questions. Things like: how would you change this if it was intended for use in a FizzBuzz library, how would you annotate this if you needed it to be injected as a Spring dependency, why did you use a for loop instead of a Java 8 stream (or vice versa), what are the implications of declaring this thing as final or static, can you write a unit test for this, and so on. That's when you can get past the point of memorization into figuring out if they actually understand what they typed, which is helpful to ascertain their level.


"how would you annotate this if you needed it to be injected as a Spring dependency"

well, I mean, you get to ask what you like... but this is how you determine if someone understands what they've typed on a conceptual level?


No, it's just an opening to discussion. For example, depending on the experience of the person, it might lead to a conversation about dependency injection in general, the transition from Spring-specific to JSR-330 notation, maybe they can give some examples of where Spring-specific annotations are still useful, they could talk about constructor over field injection, or when it might be better to use a static/pure function instead of a bean, all kinds of stuff.

For me there are basically two questions to answer when I am interviewing someone. The first is if they have any real programming ability at all, which hopefully FizzBuzz should answer. (Many people do not pass that threshold.) After that I'm looking to figure out where they could fit into the team, or the company. That means seeing if they are already familiar with the frameworks they will be working with in the position (usually, but not always, the case for junior applicants who have held at least one job before), but then also if they can speak critically about some of the concepts used in those frameworks, and perhaps compare different approaches that have been taken to solving similar problems over the years (if they are more senior).

It's not a wrong answer if they don't know the framework or the concepts behind it at all, since they might be switching specializations, but that's important to know at the interview stage because they might be better suited for a different role than someone who is deep in the framework and more likely to be able to hit the ground running.


Thanks for posting. I'm always very interested in hearing from people who mention how ostensibly senior people fail fizz buzz.

My question is: what happens after people pass fizz buzz? Failing fizz buzz is how you filter people out, but it's unlikely that coding up fizz buzz passes the technical screen. What kind of questions do you use to establish this, once you're past fizz buzz?

I've failed far more tech screenings than I've passed. I could easily do fizz buzz, and when I've prepped for an interview, I could do some tree and set permutation stuff. But the questions get so much more difficult than this. Since difficulty varies, an example of a difficult question for me is "find all matching subtrees in a binary tree" (at the whiteboard, in 45 minutes). When I got feedback about the no-hire, the explanation was that while I had a good grasp of algorithms and made some progress, I didn't solve enough of the problem in code (tight pseudocode would have been OK) in the time allotted (again, this was ~45 minutes at the whiteboard, one in a series of 5 one-hour technical exam-style interviews during a day of interviewing).

I can't claim to be a great coder. I have understood how to code merge sort and quick sort and more complicated tree structures, and I could do them again if I studied and loaded it all back into short-term working memory, but I'm content to know how the algorithms work generally and get back into the details when I need to. But whenever anyone mentions "fizz buzz", I do insist on stating that my impression, based on quite a few interviews, is that fizz buzz isn't what is screening out software engineers. Lots and lots of people who can write fizz buzz (and build and print a binary tree pre-order and post-order, and do DFS and BFS, and solve problems with them) are still frequently screened out.

I'm at the point where I just won't do tech interviews anymore (or take home tests). I won't study for exams or do mini capstone projects for an interview that may or may not work out. I would do these things for a degree or licensing exam, but not for a job interviews. It's just too much of a time sink.

I accept that this may cost me good opportunities (in fact, it has), though of course I don't know if the interview would have gone anywhere, other than costing me another long prep session with "cracking the coding interview".

I'll finish the way I usually do, by 1) acknowledging that you are free to interview how you like, and that nobody owes me a job, and 2) mentioning that many companies complain incessantly about hiring difficulties without realizing that their own interview processes may be filtering out talented people and that nobody owes them an employee either.


> My question is: what happens after people pass fizz buzz?

We tune up the difficulty a little bit. The point is to start a conversation and see what the candidate knows: do they know about time complexity? The difference between passing by reference and by value? Afterwards we talk about the technology they use, what they like, and what they would like to use in the future, just to see if they read about their field and are able to talk without saying something egregious.

And if they do fine, we bite the bullet and hire them.

> nobody owes them an employee either

Now that I am "on the other side", I can see a lot of things that would definitely improve the problem at the micro/team level, like posting the salary or increasing WFH days. But would they help at the macro/company level? The thing is, companies, including tech companies, from small startups to big corps, usually have much bigger problems than the quality or quantity of their software.

(signed, someone who has been rejected, and will be rejected, from more interviews than he has passed)


I’m the interviewer, I’m still wondering what happened.

This was the introductory question before launching 200 threads and asking him to solve the deadlocks/inefficiencies, which was the real question supposed to let him show off his skills in front of my employees, specifically crafted for him because I wanted to persuade my employees he was an excellent hire. So he had a tailored chance to show off his skills but failed at the introductory question.

But on the other hand, how can you be asked "Here's a string and a substring; return true or false depending on whether it contains the substring. This is the introduction to 5 questions, so don't sweat it" and not just write two nested loops and an if? I'd give a pass on UTF-8 problems, but when you've been working with Java on CRUD apps, you should still have your UTF-8 correct. This is how you end up with passwords that must be ASCII because the programmer is bad.


I've seen an actual Nobel Prize winner get stuck describing their research.

People's brains just occasionally lock up.


I had number two happen on an interview recently and I am incredibly happy the interviewer didn't hold it against me. I forgot an otherwise simple word/term, but the pressure of the interview just made my mind go completely blank. I think everyone has a tendency sometimes to forget what it's like to be on the other side, and will hammer on small mistakes, or not consider all the factors.


Me too!

I'm sure I'm on someone's list of incompetent bozos for an interview that went like this: "Please, describe the Python programming language." That's it. I had no idea what I was supposed to be doing, and the two fellows interviewing me would not elaborate.

I talked about what I had done with Python. Stony stares. So I talked about the nature of Python itself (interpreted, multi-paradigm, lexical/LEGB scope, the GIL). Stony stares. I wrote some trivial programs on the board. Stony stares. Had I brought an actual snake, I might have tried to charm it.

At the end, the CEO told me they weren't overwhelmingly sold on me, but would think about it. Never heard from them again.


I've repeatedly had employers very happy with my abilities & results, and am also entirely sure I've, on a few occasions, convinced interviewers I'm entirely unable to write code and am one of these frauds everyone's sure exist and that they need these coding tests to "catch".


The second is certainly more likely, but I'd wager the likelihood of the first is greater than 10%. I've encountered my share.

There are so many "developers" just faking it, I can certainly understand using a test that would reject 90% of the good candidates if it could reject 99% of the bad ones.


Unfortunately, both are fairly likely.


What was the goal of the question? Why did you want the person to implement a contains method? Did you really want to verify they understood String implementation in Java?

And if the candidate was able to do this in 6 minutes, what would you have thought? "Great, let's hire"?

In my humble opinion, the question is a waste of time either way. You'll get much further trying to probe what the candidate does know rather than randomly creating an exercise that you think they "should be able to do if woken up in the middle of the night". People forget how stressful interviews are and how easy it is to assume shared context.

The fact that your employee was able to do the test might be indicative of the fact that you share context with the employee that you did not share with the candidate, thus confirming your bias.


Beating up on this example some more:

Multi-lingual support seems really really hard, especially in six minutes. I would think most people would need to look at technical (i.e., unicode) and linguistic references to get it right.

Should the ligature ﬂ match itself, or the ASCII constituents 'f' and 'l'? How about combining vs. pre-composed characters? Some Chinese characters show up in other languages (Japanese, Korean) and are sometimes split between Hong Kong/Taiwan/Mainland language tags too. In fact, there's a mess of work devoted to this ("Unihan" https://www.unicode.org/versions/Unicode13.0.0/ch18.pdf). Having figured out what you can do, you then need to decide what you ought to do. Not being a Chinese speaker, I have no idea which options would seem natural....

In fact, having written this all out, there's no way someone "solved" it from scratch in six minutes. It would be a great discussion question though....


    for (int i = 0; i <= text.length() - substring.length(); i++) {
      boolean found = true;
      for (int j = 0; j < substring.length(); j++) {
        if (text.charAt(i + j) != substring.charAt(j)) {
          found = false;
          break;
        }
      }
      if (found) return true;
    }
    return false;

We're not talking rocket science here. This code already properly handles surrogates and Chinese characters. The question about characters that can be written in two different ways should only be raised as a second level, once the first implementation is done.
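A self-contained version of that nested-loop approach (class and method names mine), for anyone who wants to check the claim about Chinese characters: common CJK characters are single UTF-16 chars, so char-by-char comparison matches them fine.

```java
public class Contains {
    // Naive substring search: slide a window over text and compare char by char.
    static boolean contains(String text, String sub) {
        if (sub.isEmpty()) return true;
        for (int i = 0; i <= text.length() - sub.length(); i++) {
            boolean found = true;
            for (int j = 0; j < sub.length(); j++) {
                if (text.charAt(i + j) != sub.charAt(j)) {
                    found = false;
                    break;
                }
            }
            if (found) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // "\u4E16\u754C" is the Chinese word for "world"; both characters
        // are in the BMP, i.e. single UTF-16 chars.
        System.out.println(contains("\u4F60\u597D\u4E16\u754C", "\u4E16\u754C")); // true
        System.out.println(contains("banana", "nan")); // true: repeated letters handled
    }
}
```

Surrogate pairs (e.g. emoji) also match as exact two-char sequences under this scheme; the hard cases are the normalization questions raised above.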


Parent here.

> What was the goal of the question?

This is the introductory question before solving concurrency problems, because it’s much easier to understand what a thread does when you’ve coded the body yourself.

> Why did you want the person to implement a contains method?

The job is CRUD + integrating with Confluence + parsing search queries from the user, so finding “<XML” in a page and answering “Yes! This is totally xml, I’m positive!” is a gross simplification of realistic tasks in the real job (and in fact in most webapps), with characters instead of XML or JSON.

I have the feeling that you think this question is entirely abstract, but I both tailored the exercise because he touted being good at improving app performance on his resume (including using JProfiler) and I took care of using a realistic on-the-job example.

> Did you really want to verify they understood String implementation in Java?

Well, what consumer product can you work on if you trip into all the UTF-8 traps? Telling customers "Just write English because we can't be bothered to learn the easy thing in Java that handles UTF-8 properly" is… acceptable unless he also fails the fizzbuzz test. And once UTF-8 is mastered, it's good for life! I wouldn't mind teaching him if he didn't fail the rest, but as a senior you should really know the difference between .getBytes() and .codePointAt(i).
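To illustrate the kind of trap meant here (a toy example of mine, not the interview question): a Java string exposes three different "lengths", and most UTF-8 bugs come from mixing them up.

```java
import java.nio.charset.StandardCharsets;

public class CodePoints {
    // UTF-16 code units vs. Unicode code points vs. encoded bytes are
    // three different counts; confusing them is the classic trap.
    static int utf16Units(String s) { return s.length(); }
    static int codePoints(String s) { return s.codePointCount(0, s.length()); }
    static int utf8Bytes(String s)  { return s.getBytes(StandardCharsets.UTF_8).length; }

    public static void main(String[] args) {
        // "héllo 🚀": an accented é plus an emoji that is a surrogate pair,
        // written with Unicode escapes to be source-encoding-safe.
        String s = "h\u00e9llo \uD83D\uDE80";
        System.out.println(utf16Units(s)); // 8  (the emoji is two chars)
        System.out.println(codePoints(s)); // 7  (but only one code point)
        System.out.println(utf8Bytes(s));  // 11 (é is 2 bytes, the emoji 4)
    }
}
```

Code that indexes with charAt or measures with getBytes().length while assuming "one character" will be wrong on exactly this kind of input.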

> If the candidate was able to do it in 6 minutes, what would you have thought? “Great, let’s hire”?

The 4 other questions were classic gross concurrency errors, tailored because he touted concurrency on his resume and I wanted him to shine. A senior should be able to guess them blindfolded as soon as I say "There are concurrency problems", without even looking at the code ;) Volatile, atomic, a non-synchronized ArrayList, 200 threads for a connection pool of 10, a DB accepting 7 connections (note the prime numbers make it easy to spot which multiple is causing the issue), and strings of 10MB each with Xmx=100m. If he found any 3 of the 12 problems, and 2 more with help, I'd hire him. If he ditched the code and posted tasks to an ExecutorService (as they teach in the Java certification level 1), I'd hire immediately.
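A minimal runnable sketch of one of those classes of bug (names and numbers are mine, not the actual exercise): a plain ArrayList silently loses updates under concurrent add(), and the synchronized wrapper is the one-line fix.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RaceDemo {
    // 10 threads each add 1,000 elements. With a bare ArrayList this loses
    // updates (or throws) because add() is not atomic; wrapping the list
    // with Collections.synchronizedList makes the count come out right.
    static int concurrentAdds() {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        Thread[] threads = new Thread[10];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) list.add(i);
            });
            threads[t].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return list.size();
    }

    public static void main(String[] args) {
        System.out.println(concurrentAdds()); // 10000 with the wrapper; typically less without it
    }
}
```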


In essence, you write:

1) We want to test concurrency but start with implementing String#contains.

2) You have to know how to implement String#contains because you might use contains in our environment (not really, but theoretically, so you better know how to implement it).

3) You must absolutely avoid basic UTF-8 traps because users use UTF-8.

None of the above tells me what you would gain if the candidate nailed the question. It just tells me that:

- Your team might or might not use contains to verify something is XML (I truly hope not).

- Your team uses UTF-8 strings (which is one piece of the shared context that the candidate probably does not have).

- You tested candidate abilities of performing under pressure rather than testing their knowledge or skill.

- You are trying to hire the exactly same senior developer as if you promoted someone on your team with your codebase.

You come to the interview full of assumptions and biases about what a senior candidate absolutely must know instead of seeking what they bring to the table and why they call themselves senior. Let me tell you, there are lvl 4 and 5 Java candidates that have never touched UTF-8.

Finally, and let me blow your mind here, there are senior developers that haven't really used String#contains in the last X years of their career either.

I don't know what was the quality of the candidate, but I feel, from my limited PoV and lacking all the info, that your interview process is deeply flawed.


Honest question: I'd really love to know what UTF-8 traps people fall into all the time when working on a consumer product with Java - especially given that Java basically stores all Strings in UTF-16 (well, starting with Java9+ there's some "optimizations" made, but still). I literally can count those issues on one hand in over a decade of working on such (multilingual) products.

I also completely fail to see what a CRUD app (i.e. java + db) + shooting REST requests to confluence has to do with your concurrency questions, as in interview != job fit, but that might have to do with some missing context.


> The fact that your employee was able to do the test might be indicative of the fact that you share context with the employee that you did not share with the candidate, thus confirming your bias.

This! When giving interviews last, I really worried whether the questions I asked were just indicative of my own Dunning-Kruger effect.

i.e. Do I only ask questions I already know the answer to and not questions I don't know the answer to?

If I do, then am I just filtering for people with the same background and knowledge, and missing out on people with other skills I don't yet know I need, because they're in my blind spot?


> i.e. Do I only ask questions I already know the answer to and not questions I don't know the answer to?

I've been advocating for a "coding interview" where both the interviewer _and_ interviewee draw some random question from leetcode or other problem bank, and try to work at it together.

This would show collaboration skills, and you can tell pretty easily how helpful the candidate is with his/her contributions, and whether you find there is an impedance mismatch somewhere.

It probably also maps more closely to the kinds of interactions you'd have after the person's been hired.

I think it would also help calibrate: if you can't figure it out, is it fair to expect the candidate to figure it out? Maybe it's just a hard problem!


Curious about the age of the person who supposedly wrote it in 6 minutes. If you're fresh out of school, "String.contains" may be top of mind, but the vast majority of people never write that function in practice, so it's easy to not think about.


I don't do interviews in my current job (mostly because the pandemic really did a number on hiring) but in my previous job the only coding question I asked was what I thought was a fairly simple string problem. They could assume ascii, use any language they wanted, and make any other assumptions to simplify the problem.

My then-co-workers liked to ask harder algorithmic questions but I wanted to give the candidates a little bit of a break with something easier.

It didn't always work, but at least I tried.


What was the problem statement you gave to the candidate?

What were you trying to tease out from the problem statement?


Did they have to write it in a google doc with no ability to compile and run their code?


Why would you not use the language built-ins?

I am not a Java programmer but it took me 30 seconds to find the contains() method.


Just check the Android code; especially the early versions looked like "C dev (not even C++) tries to create a Java-based framework".

And the NDK clearly is anything but modern C or C++.


Google is famous for having a C++ implementation that eschews a lot of what makes C++ powerful.

I've heard it referred to as "C+-".


Yes, and apparently clang is now suffering from Google not caring about the latest C++ compliance.

Apple mostly cares about LLVM-based tooling in the context of Objective-C and Swift, Metal is a C++14 dialect, and IO/DriverKit requires only a subset similar in goals to Embedded C++, so that leaves the rest of the clang community to actually provide the effort for ISO C++20 compliance.

https://en.cppreference.com/w/cpp/compiler_support/20


Yep. If Clang doesn't have C++20 support by the time C++23 is out, I'm pretty sure my workplace at least will completely drop Clang and build solely with GCC. A win for open source, if nothing else.


Is clang not also open source?


Clang has a permissive license, GCC is (of course) GPL.

Which one is best for open source is debatable.

Permissive licenses make it easier for companies to make proprietary software, or even a closed source version of the original project.

Copyleft licenses (like GPL) are intended to promote free/open source software but they can be a legal headache, which can make users favor proprietary solutions.

On HN, I think that people tend to prefer permissive licenses (but complain when large companies "steal" their work, go figure...).


That’s fine. Having 2 (free software) tool sets competing on features is a good thing. Both need to stay relevant.


Well there might be a defense of that one.

"Data-oriented programming" (to distinguish it from object-oriented) is largely C-style C++ that is written for performance rather than reusability/abstractness/whatever. In the embedded programming world where performance is paramount, a lot of people have low opinions of many C++ features. One could also never completely trust compilers to implement everything correctly.


I'm not saying that's a bad thing. Google usually has a good reason for what they do (not everyone is always happy with the reason, but Google can always explain why they do stuff).

I come from an embedded background, and understand that.


Go is also an outgrowth of a Google idea first expressed in their C++ style guide: basically, "engineers are too dumb for harder features, so let's ban them in the style guide (for C++) or just not have them (for Go)".


But I thought that Google engineers were all genius-level.

Aren't they the company that pretty much requires an Ivy-League sheepskin to be a janitor?


Apparently not,

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike 1"

"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike 2"

Sources:

https://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Pa...

https://talks.golang.org/2012/splash.article


It's my understanding that there is a colossal gap between the hiring bar imposed by recruiters and the companies' HR department and what's actually the company's engineering culture and practice.

My pet theory is that HR minions feel compelled to portray their role in the process as something that adds a lot of value and outputs candidates which meet a high hiring bar, even though in practice they just repeat meaningless rituals which only have a cursory relationship with aptitude and the engineering dept's needs.


I don't think HR is really that involved in the hiring process. They certainly aren't at my company. They'll read the job posting and make sure that there isn't anything illegal in it, but then that's it. It's up to the Engineering Managers to come up with a process, and make adjustments when there are roles to be filled.

This is the first time I've been involved in coming up with the process, but from what I've observed, it's a similar situation in other organizations.


> the hiring bar imposed by recruiters and the companies' HR department

That is simply not how things are. The hiring bar is designed and upheld by people on the same software engineering job ladder as a candidate. The role of recruiters is primarily coordination. The role of HR is compliance with local employment laws.


This one is extra weird to me because I've written a lot of C++. I don't think I've ever committed a bug related to dynamic dispatch, templates, or some other "fancy" features. Not that I haven't committed bugs, but they're mostly either language agnostic logic issues or things one could have written just as easily in C.


How much of the early Android code was made by Google? Android existed for two years as an independent company before Google bought it.


The remaining 8 years have been written by Google.


Geohot had a good rant about this once: the kind of people who need to cram leetcode and memorize algorithms are not the kind of people Google wants to pass these interviews.

Makes a lot of sense, you could solve all these questions without knowing specific algorithms as long as you are good at problem solving - which is, I assume, the intent of the process.

Obviously it doesn't always work like that.


> Makes a lot of sense, you could solve all these questions without knowing specific algorithms as long as you are good at problem solving - which is, I assume, the intent of the process.

You could solve all these questions as long as you are good at problem solving, *given enough time*.

However, with tight time constraints and performance pressure, the only way you could solve all these questions is by memorizing and practicing all these algorithms.


There was a hilarious rant about a FAANG hiring question that was something like: "Write an algorithm for detecting if a linked list has any loops in it, in linear time, using a constant amount of memory."

Apparently the correct answer is to use "Robert W. Floyd's tortoise and hare algorithm", which is trivial to explain and code.

The catch?

It took a decade of computer science research between the original statement of the problem and Floyd discovering the solution.

So... no worries, you have an hour, a marker, and a whiteboard. Your time starts: now.
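
For what it's worth, the algorithm itself really is only a few lines once you know the trick. A sketch in Python (the `Node` class is just illustrative):

```python
class Node:
    """Minimal singly linked list node (illustrative)."""
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def has_cycle(head):
    # Floyd's tortoise and hare: the slow pointer advances one step per
    # iteration, the fast pointer two. If there's a loop, fast eventually
    # laps slow and they meet. O(n) time, O(1) space.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

Which is exactly the point of the rant: trivial to write down, once a decade of research has handed you the idea.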


I'm just imagining an engineer coming up with a novel solution to this problem in under the hour deadline and then not getting hired because it's not the "Robert W. Floyd's tortoise and hare" algorithm.

"So we specifically asked for linear time."

"Uh, yeah, I did it in log(n). That's better."

"It doesn't match what's on this paper they gave me. Thanks for your time. We'll be in touch."


It took a few million years until Newton figured out basic mechanics, but I don't think it's unreasonable to ask a junior engineer some basic kinematics questions!


Subtly different, IMO.

You’re expecting the mechanical engineer to recall something they learned about kinematics, not derive it on the spot. It’s a test of knowledge, rather than cleverness. The equations of motion are also more central to physics and engineering.

A decent programmer should know that linked lists exist, their general properties, pros and cons, etc. However, cycle detection is not a particularly common operation, so not knowing Floyd’s algorithm tells you very little, and their failure to do years of research in 45 minutes even less.


Yeah, I think a closer analogue would be asking something like "derive the conserved quantities of the gravitational two-body problem" and then dinging candidates for forgetting about the Laplace-Runge-Lenz vector


A software engineer? Sounds totally unreasonable. A mechanical engineer, sure, it's going to be required material on their education.

The tortoise and hare algorithm is not a foundational skill required to make software work the way an understanding of motion is for building structures. That's why it's often omitted from educational material, and yet people are able to produce usable software even after something like a bootcamp (and I guarantee basically no bootcamp ever touches this algorithm).

I'm not sure I approve of asking about even better-known algorithms like Dijkstra's or A* in a job interview, unless the role specifically requires that area of knowledge, like building pathfinders for video games or robots or something.


It would bias toward people who are comfortable doing basic calculus on a whiteboard.

Also, it took around 13.8 billion years for Newton to do what he did.


Ya, it's dumb, but it's also CTCI question 4 or something, in the linked-list chapter.


This is exactly the point!

If they actually aren't looking for people who just cram CTCI or leetcode, then arriving at this answer from first principles is demonstrably far harder than anything you could expect to be achieved in an interview.


Which becomes forgotten knowledge within about 6-12 months of the last time you needed to apply it, depending on how often you use that information.

These sort of questions have an incredible recency bias, and have zero relevance to engineering competence.


I have a theory that these q's legally favor recent grads without having any explicit requirement to do so. Helps them filter for young, freshly-trained students who they can mould into whatever they like inside of the FAANG-bubble.


> These sort of questions have an incredible recency bias

Of course; how else do you do back-door age discrimination?


It's funny you put it this way. I actually did a couple of hard level Leetcode problems and I thought they would help me immensely in my day to day life in addition to helping me get better at interviews.

To no avail. In fact, unless these algorithms and problem-solving methodologies are baked into your memory, there's no way you are whiteboarding a Leetcode hard-level problem in an interview.

What impressed me at an Uber interview was their system design interview process, which basically boiled down to 'how do I abstract retrying a 429 (rate limit exceeded)?'

My takeaway is that the interviewer was expecting a very specific solution even in an open-ended system design question. It's like throwing a haystack at you and expecting you to find the needle in like an hour :).


>> which basically boiled down to 'how do I abstract retrying a 429 - rate limit exceeded

They're probably looking for some sort of variable-refilling leaky bucket implementation, which is funny because I believe this is exactly what they do internally. It was probably the task the interviewer had in front of them in their day-to-day, and they wanted you to do it for them!

This is a fair design question for a senior role (which this sounds like) that promotes discussion, but expecting a specific solution is really only testing "does this person match my preconceived ideal of what a <dev> is?", which is really dangerous and has very little value.
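
As an aside, the refilling-bucket idea is only a few lines to sketch. A minimal token-bucket limiter in Python (illustrative only; this is the generic textbook scheme, not Uber's actual implementation):

```python
import time

class TokenBucket:
    """Illustrative token bucket: allow bursts of up to `capacity` requests,
    refilled continuously at `rate` tokens per second."""

    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        # Refill in proportion to elapsed time, capped at capacity,
        # then spend one token if available.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should back off / serve a 429
```

The injectable `clock` is just so the refill logic can be tested without sleeping; the real design discussion (per-client buckets, distributed state, retry backoff) is where the interview should live.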


> the kind of people who need to cram leetcodes and memorize algorithms are not the kinds of people who Google wants to pass these interviews.

I thought it was exactly those who Google wants to pass. Anecdote: an ex-colleague of mine who is not especially bright spent 3 months studying how to "crack the coding interview" and got a job at Google. His knowledge of algorithms and data structures was like mine: I know what a tree is, and I know there exist operations one can perform on them, some more performant/efficient than others... but I would need to Google how to "reverse a binary tree" if I had to do it in less than 1h.


I read an article recently about Google’s fairly awful interactions with HBCUs (historically Black colleges and universities). One thing that caught my eye was Google’s disapproval of seemingly standard CS programs in favor of a syllabus for cramming algorithms into student heads. That was one of their excuses for hiring differentials.


The "reverse a binary tree" problem gets often brought up as one of these tricky algorithm problems that you just have to know. However, the "algorithm" is to just swap all of the left pointers with the right pointers in each node.

The biggest difficulty for most seems to be that they don't know what "reverse a binary tree" actually means. It sounds kind of mathy and opaque, so I get it, but candidates should be able to have a dialog to figure out what the requirements mean. And on the flipside interviewers should be ready to have that dialog and not count not knowing the term by heart against the candidate.

This problem to me feels qualitatively different than the "tortoise and hare" algorithm for finding a loop in a linked list mentioned by another poster. That one needs a non-trivial algorithmic insight that just might not come to you during an interview. The solution to "reverse a binary tree" flows out of the structure of the problem statement as long as you have the fundamental skills for walking and manipulating data structures and the conversational skills to understand the problem, both of which seem fair to test for.
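
To illustrate how little there is to it, the whole "reverse a binary tree" answer really is just the swap, recursed. A Python sketch (the `TreeNode` class is illustrative):

```python
class TreeNode:
    """Minimal binary tree node (illustrative)."""
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def invert(node):
    # "Reverse" (mirror) the tree: swap left and right in every node.
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```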


>ex-colleague of mine who is not specially bright studied 3 months how to "crack the coding interview" and got a job at Google.

Sounds bright to me.


So how they did in one job interview says more about their intelligence, to you, than the GP's testimony from knowing them for possibly years?

I think your weights are way off.


I think being able to pass a FAANG level coding interview with 3 months prep is a better indicator of their intelligence than one coworker's opinion, which could be clouded with biases.


And I think the vaunted "FAANG level coding interview" is a prime example of a setting where three months of hard swotting can make even a relatively dim bulb shine brightly.


I’m trying to think of how many tree structures I’ve encountered in all the projects I’ve ever programmed on, in my entire career. Maybe one or two?


A tree for performance? I've used a rope once or twice.

Plenty of times where I've used trees because they're the logical representation of the problem (ever had a field called "children" in your code? HN comments are a tree. Etc.)


You probably aren't the demographic Google want/needs to hire by the sound of it.


A perusal of Google's recent software track record would indicate that optimum efficiency aided by a wide knowledge of a library of algorithms and manual implementation of the same is either not what Google actually cares about, or if it was what they think they care about, not effectively delivered by the steps they take to achieve it.


I went through a marathon of interviews for one FAANG company (8 in total - coding screen, 4x in first round, 3x in second round). I did enough preparation to remember quite a few of the Leetcode solutions by heart. I was pretty much a code printer if I got one of those; not much thinking involved anymore. I reckon it was clearly visible that I'd seen a similar question before, which is unavoidable if you've done enough prep.

While it's probably not what an interviewer is looking for, having the most common solutions memorised gives you an advantage of time. A coding interview usually consists of two challenges. If you get stuck on the first one and take too much time to answer it, you won't have enough time to go through the second one.

To avoid the code printer perception you can always go through an explanation what alternative solutions could be applied to the given problem, what their complexities would be and why the one presented is the best.


You need to act and pretend to think it through. Then you seem like a brilliant programmer they want to hire rather than someone that recently worked on the problem.


> Geohotz had a good rant about this once - the kind of people who need to cram leetcodes and memorize algorithms are not the kinds of people who Google wants to pass these interviews.

And yet, they routinely do pass these interviews.


I've always said that FAANG don't hire professional software engineers, they only hire professional leet coders


>Makes a lot of sense, you could solve all these questions without knowing specific algorithms as long as you are good at problem solving - which is, I assume, the intent of the process.

That's just absurd. If someone with no "algorithm" experience who was a good problem solver had to work out an answer from scratch in the interview, it's almost certain they'd find the brute-force answer, and FAANGs pretty universally want the most efficient one, so these people would routinely fail.

It's totally clear that what FAANG hiring optimizes for is recent CS graduates who passed tough algorithm weed-out courses at well-known colleges in the past 18-24 months. They are young, with no families or obligations, and are happy to work 12-hour days at Google because it has hip open offices and ping pong tables.


I've mentioned before, but at my previous FAANG gig everyone only looked at the resume of the candidate they were assigned to interview about 15 minutes before the interview, which was scheduled down the hall or in another building somewhere.

No one knew who they were interviewing or what was on the resume until they glanced at it while walking to the interview room. I suspect all interviews and the process are just made up as folks go along, and half the time you get a gig or not mostly based on if one of the people you talk to is in a good mood or not.


One could argue that, when you're interviewing a lot of candidates and your acceptance rate is low, it makes sense to maintain a consistent approach to interviews. For example, asking all candidates the same question would help you develop a better sense for what a good discussion looks like, where candidates tend to struggle, how to help them along without solving the problem for them etc.

This is in contrast to tailoring each interview to a candidate's background.

With that, I think looking at a candidate's resume 15 minutes before an interview is not that unreasonable.

What am I missing?

(edit: this is meant for IC roles rather than management)


There is no consistent process anywhere in any of the interviewing loops; it is all random and made up as everyone goes along. There isn't any deep insight or thoughtful discussion or lots of work being put into the process. It's an afterthought, squeezed in during the day between other fires that are burning, none of which is good for the interviewee. This has been my observation at about every company I've worked for since I've been in IT.


My profile on LinkedIn for the longest time stated "No PHP jobs please" as a way to weed out recruiters that would send me job offers for PHP roles on the basis that I once did some PHP.

I've updated the profile to include "No FAANG companies" because I'm pretty sure I would not be a good fit for them but that didn't stop their recruiters from pestering me.


Good one, I get contacted by Amazon several times a week on LinkedIn to apply.

On my LinkedIn I listed minimum requirements, which saves me 100+ inquiries per week, but I still get a ton of irrelevant jobs.


Ahh, reverse psychology!


It may be a random walk but I think the goal is to convince whoever gets hired that they’re special and talented and elite members of their discipline.


Oh you mean a real rockstar? a code ninja?


A real bitlicker


You think they're doing this to make you feel... good?


Harsh hazing and initiation rituals are a pretty effective way to build team "spirit" and make people believe the group they've joined is special, and being a part of it makes them special or better than non-members. It may or may not be part of why FAANG interviews the way they do, but it is likely part of the outcome.


Well they do pay a bit more than 15 dollars an hour. Hard to not feel good about that.


I interviewed hundreds of candidates, and there was this one guy I would pair up with who never said yes. I'll never forget one candidate who absolutely nailed it; his retort was "no, she clearly memorized the answers". So yeah, don't take it personally.


I got an offer for Production Engineering at the tail end of last year, and the interviews were mostly reasonable, but the process was mega long.

I took a different offer in the end, but the recruiters reach out every few months for a chat.


I think there's probably one or more "real" filtering steps to start with, but after those steps there's still way more qualified candidates than positions to fill, and at that point the only possible solution is to choose randomly. But that's not "satisfying" so instead they just keep interviewing and finding reasons to reject people (all completely arbitrary, since everyone is already qualified) until they reach the target number, at which point all remaining candidates get offers. Which effectively results in wasting a bunch of everyone's time and then choosing randomly anyway.

(This is a massive oversimplification, of course, but you get the idea.)


You don't pass a FAANG interview by looking up all the questions and memorizing them, you just get a feel for what they'd ask you and learn how to respond with what they want.


That's what they're trying to achieve, but it's imperfect at best, so I'm not sure you can state that professional leetcoders don't get jobs at FAANG


The only people who get tortured this way are people who are getting interviewed outside of the networking pipeline, where the cool people are.


I've never written a sort from scratch since my college days, over 30 years ago. I've also never had a job interview that asked me to do so, or asked any other coding questions. I'm planning more for retirement than a next job at this point, but I shake my head at what my younger colleagues need to go through these days for the opportunity to write JavaScript with IDEs that do most of the work for you.


I sometimes have to implement a sorting algorithm when I write bare metal code that doesn't have any sort of standard library available. Of course, being a 1337 hacker, I then resort to the state of the art bubble sort. Or maybe an insertion sort if I'm feeling fancy. But enough bragging.
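
Since we're bragging: the fancy option is still only a handful of lines. An insertion sort sketch in Python standing in for the bare-metal version:

```python
def insertion_sort(a):
    # Plain in-place insertion sort: no standard library needed, and fine
    # for the small inputs you typically see in bare-metal code.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a
```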


OT: I wish I could track down a blog post (on Microsoft’s site, I think) about a very naive and “obviously wrong” sorting algorithm that a dev identified in their (Excel?) codebase. Turns out the code was naive because the vast majority of their users were sorting very small sets of data, and the implementation actually performed much better in those circumstances.



Funnily enough, if you're sorting small amounts of stuff it doesn't much matter what algorithm you use. In fact, the fancy O(n log n) sorts are often slower than the simple quadratic ones on small inputs, which is why standard library sorts typically fall back to insertion sort below a small threshold.


> I then resort to the state of the art bubble sort

I remember reading Bentley's Programming Pearls, and I'm fairly sure a short implementation of that is what he starts the sorting section with.


I liken it to what lawyers go through with the bar here in CA. It's gatekeeping.

Just like I don't think the CA bar should have a pass rate in the 20% range, I don't think coding interviews should ask riddles that are nearly impossible to solve on the fly unless you get lucky and memorized THAT riddle in your studying.


Except that lawyers have to pass the California bar once in order to get licensed; we SWEs have to pass the "bar" every time we look for a new job.


A big part of the problem that usually never gets discussed is how the lack of institutional trust between tech companies affects hiring and interviewing. There's no licensure in SWE, so in a sense your reputation is based on the companies you've previously worked for. But it seems like even if you have "name brand" corporations on your CV, other companies have no trust that your experience indicates competency. So engineers are forced to go through the leetcode grind every three years (or whenever they change jobs). Seems highly inefficient to me, especially when it's advocated by technologists who say they want to "automate all the things."


This is definitely true. But, let's be honest here: anyone who's done at least 10 or so standard, ~1-hour technical interviews has probably run into a candidate who looks great on paper, but just can't demonstrate enough basic skill to do the job.

One such candidate I interviewed seemed like they'd be really great for the role: PhD in graph theory, publications, projects listed on the résumé, couple of different programming languages (including ones we used). To me, this person's résumé screamed "solid mid-level developer." I would have probably been willing to pass them at a junior level, had they been able to perform at that level, though.

The interview itself was a pretty familiar story. For the technical portion, I introduced the problem (not a LeetCode-type problem, a more practically-oriented problem), we talked about requirements, drew some stuff on the board, and then got to coding.

I had a feeling when we were going through the requirements discussion that this might not go as smoothly as I'd hoped, but I pushed that feeling aside and did my best to let them shine.

We let people code in any reasonable programming language, but they must write actual code. They can fill in stuff like dummy helper functions, if necessary, but we want to see some kind of running, syntactically correct, and, preferably, at least lightly tested code.

They chose to code in Java, which, while not a terrible choice, seemed to me kind of like they were just handicapping themselves when stuff like (IIRC) Javascript and Ruby were listed on their résumé.

To make the long-ish story a bit shorter, we muddled through trying to implement the requirements we'd talked about earlier in Java, meanwhile the candidate was showing me a distinct lack of familiarity with basic facilities of the language, such as "what sort of methods do lists have that might be helpful here?"

Needless to say, this person did not pass my interview, and we did not end up hiring them. But, I really, really wanted them to succeed. Like I said, on paper, they look great. And, I'm sure they could have gotten through a culture fit interview just fine. I'm just not sure how well they would have done on our team, working on our rather large, pre-existing, and somewhat crufty code bases.

If you can figure out a good way to automate the task of "filter these developers down to the ones who can write some semblance of code," in a way that goes deeper than just "Write some code and run it against our automated test cases," I'd like to hear about that. And, I'm not doubting that it could be done, in theory. For instance, maybe something like the engine behind GitHub's CoPilot could provide a way to analyze and grade the candidate's code on things like style, testability, test coverage, modularity, &c.

But, AFAIK, there's nothing like that out there now, so, a structured process consisting of ~1-hour technical interview sessions, one-on-one with the candidate, attempting as best as possible to simulate the real work environment, is about the best I can think of.


From what I've heard, the bar is much harder than SWE interviews.


My first degree was civil engineering and I took the California Engineers in Training exam back in my college days and boy was it tough even though I had stellar grades from a top university. You could bring mountains of reference books to the exam but you had no time to use them. You had to solve endless problems based on fundamental techniques you already learned during your education.


Oh, I'm not about to claim that these licensing exams people in other professions take once and then never worry about again (bar exam, medical boards, etc.) compare in difficulty to a typical SWE interview.

But, hear me out here:

Suppose the CA bar exam were changed in format to be more like a SWE interview. That is, instead of being a 2 day affair consisting of 5 essays of 60 minutes apiece, a 90-minute performance test[0], and 200 multiple choice questions, let's shrink it down to a format that fits in one day and under 5 hours.

Now, let's nix the performance test right away, because it would be incredibly difficult to shoehorn in any kind of real, practical task in under 90 minutes.

According to [1], it looks like 100 multiple choice questions are allocated 3 hours worth of time. So, basically, our cut down format could be 100 multiple choice and 2-3 essays. So, that's about half the amount of testing that's done currently, more or less.

Now, if you have half the amount of testing available, you have a choice: you can cover half as many areas of law, you can cut down the depth of coverage in each area, or some combination of the two. In any case, what you've done is increase the variability in the test. In other words, passing is now more likely to be influenced by exactly which version of the test you get.

If there's an area of law you're weak in (say, bird law), it might not even be covered. Conversely, if you're a real expert in bird law, you won't get as much of a chance to shine in the new format. That means our new format both allows somewhat more marginal candidates to pass, and gives up some sensitivity in detecting people who are really, really good.

So, in summary, the new format omits any sort of practical task, increases the variability of the test, increases the probability that marginal candidates will get through, and decreases the ability to distinguish truly excellent candidates.

Seems to me that's sounding more and more like a typical day-long SWE interview, isn't it?

---

[0]: Interestingly, this is essentially a work sample test, which is the thing proven to correlate best with job performance: https://en.wikipedia.org/wiki/Performance_test_(bar_exam)

[1]: https://www.tjsl.edu/academics/bar-prep/california-bar-exam


Lawyers have to pay dues yearly and take a certain number of CLE (continuing legal education) hours every year to maintain their license.


SWEs have to learn new things every day to keep their jobs. But, this is kind of stretching the metaphor past its breaking point, really.


While I do agree the CA bar is tougher than it should be, a lot of the low passage rate comes from graduates of less-than-stellar schools. If you look into passage rate per school it’s naturally much better at the higher ranked universities.


Isn’t that almost the definition of gatekeeping? Make the process to enter the law profession so difficult that only students from select universities have a good chance.

Of course prospective lawyers should clear a certain level of knowledge before they are allowed to practice, but the passing rate is puzzlingly low. Do CA law schools, as a group, really prepare such a low proportion of their students to practice?


Law school teaches you theory. The practical part is supposed to come from competitive internships.

The bar is hard for CA because we go to school for the theory and then no one really teaches you the formulaic way to answer bar essays. Add on top of that it's a lot of memorization and study. Most students I went to school with had egos and thought they had it in the bag only studying on the weekends.


Shitty law schools have become a huge problem in the last couple of decades, similar to ITT Tech or Phoenix online. They drastically lower the LSAT scores needed to get in; then almost no one from the school becomes an actual lawyer, but they do get $100K in debt.

also, gatekeeping has a negative connotation because it is usually used when it seems unwarranted. If only schools with high quality education are passing students, that doesn't seem like a problem with the gate, but instead with the other schools. I absolutely want lawyers and doctors to be required to pass certain criteria.


Well, not exactly. You need to have a certain level of competency to be an effective lawyer. Those who aren't "good enough" for the better law schools are often preyed upon by overpriced and under-resourced schools that give false hope to those who couldn't cut the mustard. Not saying you can't be successful if you go to a crappy school, but the chances of it happening are slim to none. Just look at the pass rates of Whittier (I think they're shut down now) or Thomas Jefferson, and then look at the tuition rates. And this isn't even counting the non-ABA-approved schools...


Is there a shortage of attorneys in California?


Many people, including myself, believe that availability of legal services is broadly lacking in the US: not necessarily a shortage of attorneys, but a shortage of attorneys who can do work for anything but top-quintile clients. Artificially restricting who is and isn't allowed to perform certain jobs often has that sort of effect.


Not GP, but I never thought of this side effect before. Thank you for making me aware of it!


Attorneys multiply when left to their own devices, which is often bad for society (since, like law enforcement and middle management, LOTS of new people every year want the job for all the wrong reasons). So the California Bar limits new attorneys, which both lessens the total number of attorneys and advantages the incumbents.


Not all younger developers have to jump through those hoops. There are plenty of companies that don't do that kind of nonsense, because they're trying to hire good engineers, rather than, specifically, engineers -- good or bad, that'll get sorted out later -- who will happily sacrifice their spare time to figure out how to jump through the arbitrary hoops that their jobs entail.

Holding this kind of bullshit interview isn't a bad idea if you're a megacorp with chaotic teams and fourteen levels of management hierarchy where people spend most of their time on infighting and career building. You need people who jump through hoops, because successfully delivering most projects in these environments is 10% difficult technical work (which the good engineers can handle, and you can typically hire enough of them via recommendations), 40% YAML-poking bullshit work, and -- optimistically -- 50% jumping through all sorts of technical and non-technical hoops, most of them self-inflicted.

I navigated this kind of process successfully early in my career, and the only thing that made me more miserable than interviews like these was the work I got to do afterwards, after accepting the offers. Once is happenstance, twice is coincidence, three times is probably just how these things are -- I'm now pretty convinced that the quality of the interview is (barring statistical accidents) highly correlated with the quality of the actual position.


> the only thing that made me more miserable than interviews like these was the work I got to do afterwards

Can you give some examples?


It's very difficult, because making sense of the examples would require explaining all the convoluted decisions and corporate mechanisms behind them, and that would make it trivial to identify the companies in question.

I can give one example, I guess, since it's been a long time, it's a very common scenario, and this is an internal thing, so anyone who can identify it is either already there (tequila's on me this evening, mate) or has worked there before (I'm still up for tequila).

The buggiest component in one of the projects was the testing framework. It was a home-grown testing framework, incredibly slow (think "takes about 5 seconds to send a 20-character string over a 115200 UART line"), and it had a huge bug backlog.

Virtually all bugs got closed with WONTFIX. The person who'd written that monstrosity had made it up the management line, and his minion was now in charge of the team that allegedly maintained it. Since all sorts of bullshit metrics like bug fixing rate and total number of bugs and whatnot came up during quarterly operational reviews, a low bug count was nice to have. And since no customer ever complained of a bug in the testing framework, for obvious reasons, all that was required to keep the bug count to zero was an understanding between these two guys that all bugs would just get closed.

So you had to spend a few hours finding creative workarounds, since the two-line fix you'd come up with in five minutes would never get applied. And I mean creative. We had code that literally bypassed the non-functional framework by exploiting a race condition in it, in order to do RPC calls without going through the buggy framework code.

Now imagine you have to do something like this for every single task you do and you have a pretty good idea about how it goes. I mean you technically spend 100% of the time doing what you're supposed to do, but only about 30% of it is actually what you're supposed to do, everything else is mostly fighting cargo cult practices, senior engineering ego, and management disinterest.


I had one interview where they asked me to write out some fairly complex SQL joins on a whiteboard. This was for a Java dev gig, NOT a SQL admin gig. I decided to really lean into how awful it was by putting in notations for three different implementations (Oracle, MSSQL, and Postgres). I was subtly making fun of them, but they didn't pick up on it.

I got the offer and took it because I do contracting and had just unexpectedly gotten a contract canceled early, so I was unemployed and have a family. At the end of the 3 month contract they offered me a full time position and I declined.


SQL I can do in my sleep. Still haven't ever needed to write sorts (other than ORDER BY).


I do my own Graph algorithm implementations sometimes

Especially when I'm not sure the Graph approach is the right one, it's easier than either

1. Getting the Boost.Graph in the source tree to actually compile

2. Dealing with the bureaucracy of getting some other graph library (one whose compilation is possible for mere mortals) into the source tree

This is, however, not something I do from memory. Much like the ancestor comment, I see the relevant skill as closer to recognizing positive-weight shortest path and knowing you want Dijkstra than being able to write Dijkstra without wifi.


I happened to have, by chance, practiced writing sort from scratch right before an interview where they asked me to do just that. Doing it correctly in just a few minutes gave me pretty high status. Higher than I deserve. The randomness cuts both ways.


> I've never written a sort from scratch since my college days, over 30 years ago

I didn't study CS at college, and I've never needed to write a sort on the job, so I've only ever written sorts in job interviews! I think I worked out a basic bubble sort, and told them it probably wasn't the fastest way of doing it but that it'd do the job.


I once had an in-person interview where they gave me a sheet of printed code and asked me to point out the syntax errors. Some interviewers are absolutely insane.


More than a decade ago I had an Amazon interviewer write some Perl on a whiteboard, and ask me to find the error.

There were two - I pointed out both and they didn't seem super happy about it. To this day I'm still not sure if one of them was an unintentional error.


A past employer of mine used to have one of these. It was a true work of art: around two dozen lines of code with something to discuss in EVERY SINGLE LINE, ranging from dumb syntax errors to logic errors to problems of overall design. Made for some great conversations.


The problem I see is that it probably looks really dense and complex the first time a candidate sees it. This is not a great way to start the interview. To me it comes across like "find Waldo in the next 30 seconds. Also there's a bunch of other characters hidden in there. Go!" We all know what we're looking for (kind of) but it's a very stressful approach. It might work better if you paired and wrote this crazy code, and looked to them to identify issues as you built it up.


I had to review someone's PR for one interview. Ultimately I failed it because my feedback focused solely on implementation details and on asking whether there were better ways to solve the problem (with some suggestions as a nudge). Apparently that was fantastic and showed all the qualities of good coaching...but they expected me to point out every instance of poor indentation and other aesthetic things. My justification that it was unimportant and that running rubocop would fix it wasn't good enough: the review had to include all of the nitpicks.

You know, if I had to do that for every PR I reviewed, I'd be burned out in no time.

It was a shame, but if I didn't flunk that interview then I wouldn't be where I am now.


"if I didn't flunk that interview then I wouldn't be where I am now"

This is, I think, the closest thing there is to a universal experience in this field.

Followed closely by "how did I miss that single-character error?"


Based on your description ... that wasn't a failure. Success is not working at places like that.


Done right, there will be a lot more in there than syntax errors. A good example of this sort of test will separate those who have revised syntax without really thinking (they'll get the syntax errors but little or nothing else; great if you want to employ a human linter) from those who actually think about what they are looking at and will spot much more: a logic error that would cause an infinite loop, a point where a comment and the code disagree, an incorrectly validated input, an injection flaw, a heavy expression inside a loop that could be computed first and result-cached for efficiency (your compiler cannot always identify such situations), hard-coded credentials, bad naming, and so on. The best may identify some of these but also state why, in the grand scheme of things, one might not matter (efficiency in code that runs once, where 10μs vs 100μs is statistical noise) or where other priorities might take precedence (for example readability, and therefore ease of maintenance).


Does this kind of trick question work well in practice? If they explicitly ask for syntax errors it would be a waste of time to also check the code for other errors. I can’t imagine many people opting to do that unless they know it’s a trick question.


It shouldn't be a trick question, but an open one: “What can you see wrong in this code? There are a few things; we don't expect you to see them all.” For non-junior starters it is effectively a simulated code review — can this candidate spot problems before they get committed further into the pipeline (and hopefully avoid them in their own code)?


I don’t see what information people can glean from those trick questions.

Any time I’ve interviewed people I’ve made it a point to emphasize that none of my questions are trick questions and if anything is unclear, they should ask clarifying questions. The result? Interviewees are more comfortable and are far more honest about what they know, what they don’t know, and you get to see a glimpse of what they’re really like.


"Basically, I will ask you questions, and hopefully you will give me answers.

It is fine to say 'I do not know'. It is fine to guess, but if you do, I would prefer that you tell me it is a guess.

If you want any clarification, I will try to provide it. If you don't recall the exact signature of a library function, ask me. I will note down what I said and not hold incorrect information I give against you.

Any questions before we start?"


If you'd see an obvious SQL injection, wouldn't it be hard to resist telling them?


What is super frustrating is that these same companies and people are convinced they can find game-changing, order-of-magnitude productivity gains in software development, yet ask questions that the modest gains we have made in the past 50+ years have already rendered trivial. This is just a lazy question.


Actually this one might make sense, depending on the code and the position you applied for.


Your IDE can check your syntax. For people who have to switch between languages regularly, precise syntax memorization is difficult and a waste of time.


Your IDE can tell you if your syntax failed to represent any valid program. It can't tell you if your syntax represents the wrong valid program. If you can't even spot invalid syntax, you have no hope of spotting bugs, because you can't read what the code says. Which turns out, contrary to your beliefs, to be important for many purposes.

That doesn't mean you can't debug. You can step through the code step by step in the debugger, or add more and more logging around the bug, until you figure out which expression has a meaning unintended by its author. But that means that, if we're working in a language someone knows well, you're likely to spend hours debugging a problem that they can just see immediately when they're reviewing a merge request. That's an orders-of-magnitude difference in productivity when it's important for code to be correct, precisely due to what you call "precise syntax memorization".

Most of programming isn't writing code. It's reading code.

It's possible to go overboard with this. There are other skills that are more important than being able to look at some code and immediately see what it means. There are excellent programmers with severe dyslexia who will just never be able to do this. But it's foolish to think that gaining this skill is "a waste of time" for those who can.

There are languages I've used where I don't know the syntax that well. PHP, Ruby, x86 assembly, OCaml. There are languages where I know the common syntax well, but there are plenty of obscure corners of the syntax that I don't: C++, Perl, bash. But I regularly switch between C, Python, and JS, and I'm pretty confident that I know their syntax, as well as numerous other languages like Tcl, Lua, Prolog, PostScript, and arguably Scheme and Elisp, which I don't use regularly but still wouldn't have any trouble spotting syntax errors.


I'd struggle to write, in Notepad, a five-line program that'd actually compile and run on the first try, even in a language I've written tens of thousands of lines of code in. I might fail that in a language I was writing last week. I mean, I might manage it, but it'd be sheer luck. Decent chance I forget, in the moment and under pressure, without examples to crib off of, IDE support, or the ability to check the manual, the correct way to do comparisons for all types, or basic stuff like how to print to the console, or what this language's sugar for a "for-each" is, or whether it has such sugar. Reading input? The right calls for file IO? Anything more complicated than that? Oh god, there's no way.

Luckily I never fucking ever have to do that in my actual job. If I did, I might well get good at it. Since I don't, I... don't. I also haven't gotten much better at driving semi trucks or framing a wall, in my over-a-decade career writing software. Go figure.


An IDE can check your syntax, sure. It can even catch low-level bad practices ("you're making a database call in a tight loop, this is horribly inefficient"). This is, in my opinion, basic tool usage: warn of the low-hanging fruit early.

Last time I saw one of those test-ish pieces of code, though, an IDE+static analysis would have caught about 1/2 of the problems; the other half required actual thinking (not statistical pattern matching, aka AI): "don't trust user data, that should not go there even though the call signature matches, you're holding it backwards."


> "don't trust user data, that should not go there even though the call signature matches, you're holding it backwards."

Good type systems do this, although it's beside the point.

The point I was making is that the IDE remembers unimportant things so that my only concern is the actual thinking part. It abstracts away the minor and sometimes very important syntax differences.


While I understand where you are coming from, I disagree.

IDE tools are great performance enhancers, but they can also be crutches.

I would always expect a professional software developer to be able to parse some code on a page and point out its syntax errors (as well as suggest edits).

edit: here I am thinking about something more substantial than just a missing ';' or a lack of a closing "


Let's use something (in C++) that's not common, but not all that crazy:

    bool x = false;
    x ||= something();
How many multi-lingual programmers will remember which of the 7 languages they know have boolean assignment operators and which do not, without looking it up? Does that make them unprofessional?

How many do remember exact operator precedence rules for all those languages, when in practice you may need just the basic ones and use () to work around the lack of exact knowledge.

Also which version? Something not working in PHP 7.3 may be ok in PHP 8, but company wants you to code in PHP 7.3, or ES5. In practice you get quickly acclimatized to any of the languages you know after working with them for a few days or a week, but good luck remembering exact rules of any of them at any given time when asked.


You're not mentioning or alluding to the one obvious syntax error. I don't know how to spoiler it so I will say there is an issue on the second line that, again repeating, I don't know how to call out.

And yes, I work in multiple languages.


also if you can't spot that error, you might not realize a subtle, yet important, detail about that line (if written correctly). contrary to what you might expect, it cannot short-circuit the way || does, leading to possible performance or even correctness issues.


This is very true. I hope that people are not using the short-circuit feature in a way that impacts correctness! I would have an issue with that code for trying to be too clever even if there were no bugs and it worked well.

Performance issues, on the other hand, I can see accidentally arising.


I see short-circuit for correct behavior all the time, most frequently in the format: if (pointer && pointer->member == value)

Where you want to make sure a pointer you've been given isn't null before you try to dereference it. Without short-circuit, this becomes a segfault.


Yes, the "and" pattern is common. I've seen plenty of "ensure this exists before operating on it" chains, where it even goes if ( pointer && pointer->otherPointer && pointer->otherPointer->member == value )

I've never seen someone use an "or" pattern in a non-confusing way.

That is, I've seen "and" patterns that act as a safety/early out. I fail to see the use of an "or" pattern where you want to stop executing if the first value is true (ignoring using nots to invert the logic of an and pattern, and I would criticize that).


That boolean assignment operator was addressed. Whether or not C++ has them (it doesn't) is exactly the point of that code snippet.

They do exist in other languages. ES has it in exactly that form. Java has it in the |= form, not to be confused with the bitwise OR of the same form.

Whether or not these short-circuit is not all that interesting for most boolean logic. (Though it can be useful to know if these are used as hacky error-handling and default-setting. It depends on how you read "something" whether or not that's going on here.)


> That boolean assignment operator was addressed.

Where and when? Because I was the only person alluding to it, and the replies to my post.

> Whether or not C++ has them (it doesn't) is exactly the point of that code snippet.

C++ has boolean assignment operators that operate the way that the code is clearly meant to operate. That code does not have a valid one. Where do you think Java got them from?


> > That boolean assignment operator was addressed.

> Where and when? Because I was the only person alluding to it, and the replies to my post.

The words "boolean assignment operator" are in the first sentence after the code snippet in megous' post.

> C++ has boolean assignment operators that operate the way that the code is clearly meant to operate. That code does not have a valid one. Where do you think Java got them from?

As far as I can tell the |= operator in C++ is the same as in C, i.e. a bitwise OR operator. It works for booleans due to their bit pattern, but it's not the same. My C++ knowledge is extremely limited, so I looked it up and I may be misinformed though.

Java's |= is different for ints and boolean. There are no bitwise operators for booleans: bool1 | bool2 is a strict logical operation (that doesn't short-circuit). bool1 |= bool2 is a logical operation that will fail when other types are mixed in. int1 |= int2 is a bitwise operation. Java does not have a short-circuiting ||= operation (but ES does).

For the most part these differences aren't that important, but they do trip people up when switching between languages.


> As far as I can tell the |= operator in C++ is the same as in C, i.e. a bitwise OR operator.

This is true.

> It works for booleans due to their bit pattern, but it's not the same.

This makes less sense. A bool in C++ is just an integer type with only two valid values. It is, as far as I can tell, the exact same.

> Java's |= is different for ints and boolean.

Java has bool and int as different types. There were versions of C++ compilers that just straight out had bool as a typedef of int.


C++ does not have a boolean assignment operator in the sense that the ||= syntax works in ES.

https://www.w3schools.com/cpp/trycpp.asp?filename=demo_oper_...

||= will not assign anything unless the LHS variable is falsy. It will not even evaluate the right side unless LHS is falsy.

|= will work the same in C++ and ES mostly (depending on types)


I'm just thrown for a loop. Why is the default to assume I'm familiar with the ||= operator in ES?

The fact that the ||= operator short circuits makes sense from an optimization point of view. I will maintain that there is no good use for the correctness (as opposed to optimization) of the code to depend on that feature.


How many of the multi-lingual programmers who know 7 languages are really good in all 7? As an interviewer and hiring manager I am more impressed by people who know 2, max 3 languages very well than by those who know 7 at an intermediate level.


I see this attitude in the corporate world and among people who are newer to programming a lot.

For skilled, experienced programmers, most mainstream languages become an implementation detail. You have to spend time learning idioms, footguns, and generally the way the language manages memory, but you absolutely can be great at 7 languages because they fundamentally do many of the same things.

I haven't hired people based on "their stack" in a long time, and it's been completely fine. Someone with skills can quickly learn your stack and be productive in it. I personally jumped on a project as a coder a few years ago having never written C# before, and I was productive in about a day. All the concepts were familiar, and the stuff I had to learn was mostly syntax.


Exactly. "If a person can drive a Honda, how well can he drive a Mazda?" Duh, just as well as the Honda. And just like natural languages, the more you know, the easier it gets.

   > All the concepts were familiar, and the stuff
   > I had to learn was mostly syntax.
This reminds me of how I learned to program. I grew up in the then-USSR; we had no computers at our school, but we had programming lessons. So I was introduced to all the fundamental concepts: variables, assignment, loops, control structures, etc. When I went to university I finally got access to a computer (a Yamaha MSX). And then it was exactly as you say: "what's MSX BASIC's syntax for this particular concept?"


If a pilot is only type-rated to fly an Airbus A320, he is not allowed to even try to fly a 737, nor a different type of Airbus. This attitude of "a car is a car, what can go wrong?" is deadly in some domains.


In my (corporate) role most of the people around me have over 20 years of experience in a very specific domain (manufacturing execution systems) and it takes about 5 years to train a new person. Having someone new productive in a month is a dream for us, but it never happened.

If you can be productive in about a day, please explain why a pilot gets an ATPL (airline transport pilot license) only after a minimum of 1,500 hours of flight. Also please tell me if you would board a plane where the pilot has 100 hours; that's an awful lot more than a day or even a week.


I know what a language ought to be able to do and where the usual foot-guns are. I haven't even attempted to really learn a language thoroughly since my first one (Perl—and yes, that's probably part of where my brain-damage comes from). It's all the same stuff, more or less. Understand pointers and how things like how OO systems are usually implemented, how stacks work, that kind of thing, and all the languages start to blend together.

99% of the pain in a new language usually ends up being the (often god-awful) tools, platform/SDK bullshit, and learning where the clearest path is incorrect (no no no, the official docs and tutorials say to do it this way, but everyone who knows what's what actually replaces that entire part of the language/first-party libraries with this other library developed & open-sourced by some other company, since the official way is obviously so terrible, and you just have to know that, or notice by reading other people's projects—ahem, looking at you, Android). The language itself is typically nothing.

This has worked out fine for me. It does mean I've gradually grown to hate languages that lack static typing. I don't want to remember or look up things when I can make a quick note and then let the computer remember or look it up for me. I thought that was kind of our whole thing, no? Having computers do stuff for us, when they're able?


I think you overestimate the value of being super intimate with any given language. Pretty much no language in common use is so different from the others that you drastically have to change how you express any given thing you're trying to do. Knowing concepts and when and how to apply them is more important in my considered opinion.


In my opinion, if you're hiring for an expert in a specific language, you're not hiring a software engineer; you're hiring an X-language developer or X-language software engineer. The language needs to be explicit in the position listing and perhaps even the job title. That's fine if that's what you want, but be specific about it so you don't waste people's time looking for a language specialist. Most people these days are generalists with varied knowledge across any given set of technologies, but they can most likely adapt and learn to fit your distribution of technology given just a bit of time and opportunity.

Modern development requires juggling too many technologies for most people to specialize in a single language unless their career goal is to niche themselves to that language.


Someone doing webdev fullstack for more than 10-15 years will know at minimum ES + (PHP/Python/Ruby/Java/Go...) + SQL + HTML/CSS and a few utility languages.

So at least 4. If you combine webdev with something low level/embedded, you need at least one systems language, so you're at 5 languages you need to be proficient in.

Add one hobby language or a second web backend or systems language, and you're at 6 major languages.

7 is a lot. But 5 is plausible to be proficient in for someone who switches between webdev and lowlevel stuff to not burn out, or has a FOSS hobby.


I've shipped production code in 17 different programming languages. I wouldn't say I'm proficient in any one of them, they are all just tools to solve a problem and the knowledge of language specifics comes and goes. Need to hyper-optimize a DB query on an Oracle RAC cluster? PL/SQL. Need a shader? GLSL works fine. Need a webpage? HTML/CSS/JS. Need to build a 7' long flying robot fish? C. Programming languages are just tools.


In over 20 years of working in IT I never found a single webdev-type person who could write an efficient SQL query by himself. Yes, most are able to write a query that brings back the correct result, but I saw way too many cases where the perf test on a database with the expected production volume ran in completely unacceptable times and the developer had no idea how to fix that; for each version of SQL, perf tuning is very specific.

Also, proficient != expert. I met enough developers who were brilliant in their work to be convinced that 5x developers are not a myth, but real, if rare, occurrences. For me a senior developer in X knows the ins and outs of X to the level that his code is an order of magnitude better in terms of efficiency, performance, productivity and security. A regular developer can be just proficient, but that is not what I wrote about.


As a hiring manager for over 15 years I prefer people who can pick up a language based on the need. Good problem solving is language agnostic.


It is an option, but how fast will they be very efficient? It takes years to master something and some people, not all, need to have that mastery level.


> they can also be crutches

Yes, they are absolutely crutches. All great tools, libraries, and abstractions are crutches. I want programming to be easier for myself and my employees.

The only problem with a crutch is that you might end up not having it when you need it. That's not an issue in this case.

> here I am thinking about something more substancial then just a missing ';' or a lack of a closing "

The original example I was responding to was about an interviewer who expected the candidate's code to compile. That would include incredibly pedantic things.

For example, if you're the kind of person who uses single quotes in JavaScript and then you're suddenly writing a different language where '' is different from "" and `` and $"" and whatever, you could easily make an unimportant mistake that prevents compiling.


Yes! Because the IDE cannot help in Github Pull Request reviews for example.


If a company did not run automatic tests and lints on whatever they felt was important to have when merging to production, to the fullest extent possible, I would stop everything and write those. This leaves reviewers free to focus on the intangibles: architecture, expressiveness, logic and data flow, and so on.


In my experience, Pull Requests aren't about catching syntax errors... the build will fail if there's errors like that. Rather, a Pull Request, and the code review involved, is about the underlying logic of the code; both with what it's doing it and how it's doing it.


I don't buy that argument; as a programmer you have to write out code following precise syntax all the time. Programming would be insanely tedious without knowing correct syntax.

Having said that, if someone came up with an interview test which used especially esoteric parts of the language in unconventional ways and then asked you to spot the errors, that could be a dubious question.


> as a programmer you have to write out code following a precise syntax all the time

When did I claim that the IDE absolves you of needing to know the syntax?

It doesn't. But between autocomplete, hinting, linting, and any other static analysis, it makes it close to painless to switch between languages without making horrifying mistakes. The top-tier JetBrains IDEs (IntelliJ and Rider come to mind) will even tell you ways to make your code more efficient or modern, like changing a bunch of if/else to pattern matching.

Why should I have to remember the full truthiness table of JavaScript? Why should I have to remember what all the different string delimiters do in every language? It's not important. My IDE can (and does) know that I'm trying to do some kind of string interpolation and will just fix it for me.


Design mistakes maybe, but syntax errors are just not representative of any real debugging experience IMO.


I think some of this is filtering out bullshitters. In a previous job I used to interview a lot of unfiltered candidates. What we did was sit them down at a Linux command line and ask them to show us the files in the directory, open a file for editing and that kind of thing. A surprising number of people who claimed years of Linux experience had clearly never used the command line at all.


I've tried questions like this (super basic but can quickly filter out people who know nothing), but sometimes the candidate seems insulted! I try to quickly move on to a harder question.


If you start with "I'm sorry if you find this question offensive, but recently we had a wave of candidates who had problems with basic things, so we need to start from the beginning" and the candidate still feels offended, well, they're being offended too easily, which might cause problems in the future.


I think people can still find it off-putting that, after all the evidence they provided to get to that point, you're challenging them to prove they aren't complete frauds. Like, you could have spent 30 seconds googling them to verify they are legit, but it seems you didn't even bother to read their resume. Not saying that's the case, but rather that not everyone is aware of how good frauds can be at presenting themselves and bullshitting through interviews.


I don't think the problem is offending the ones who know how to do things. It's making them think they're applying for a worthwhile job.

I remember thinking what you're writing about there, that there's a lot of people to be filtered out, and that good candidates wouldn't be offended.

So we had this simple two-part quiz question for people, starting with "what is the expectation of a dice roll?". Amazingly a lot of people can't figure this out.
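(For the record, the expected value is just the probability-weighted average of the outcomes:

```latex
E[X] = \sum_{k=1}^{6} k \cdot \frac{1}{6} = \frac{1+2+3+4+5+6}{6} = \frac{21}{6} = 3.5
```

i.e. 3.5 for a fair six-sided die.)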

But also a lot of people know the answer immediately and will wonder WTF you are asking such a simple question for. I remember this one lady who interviewed with my firm, the look on her face when she realized we weren't asking anything complicated. You could just tell she thought we were a bunch of amateurs, and she'd better be on her way to see some other proper hedge funds.


How did people like this even make it to the interview rounds?


Sadly, because in that dysfunctional start-up we didn't pre-screen, except to have someone look through their CV to check that boxes were ticked :-(

But I think the point in this thread is the Google question about Linux inodes was actually part of a pre-screen interview done by an outside agency.


I could see it if your resume talks about a whole bunch of intense recent development in the exact same language they want to test, but only then.

In contrast, I've been stuck in a giant multi-language integration-fest, and... well, there are definitely languages on my resume that I would not be comfortable being pop-quizzed on, simply because I've been using others for the past two years.


I've been doing C++ full time since 1996, and yet I frequently have intellisense warning me about forgetting something silly - like a missing capture in a lambda, or even forgotten semi-colons. That's because I'm not an f'ing compiler.

_Can_ I make sure the code is 100% correct before even compiling? Sure, but I'll spend an hour checking every detail, while intellisense does it while I type.


I mean, yes. In a whole program, typos or mistakes happen. No need to try to prevent those. So do spelling or grammar errors when writing in English. We expect a professional writer to have a human editor to catch those and not be perfect. But asking an applicant for a writing job to find the spelling/grammar errors in a paragraph seems reasonable to me.


The simple fact that you think so tells me that you aren't a programmer, and that you should stay far away from interviewing or managing programmers. About the only thing the two situations have in common is that both are based around a text-based medium; for the rest there is just no comparison between writing computer code, and writing text for humans. Have you ever noticed how very few people can actually write both good computer code, and good documentation for the users of that code? That's because they are completely different disciplines, requiring a completely different mindset.

Let's turn this around: if I were interviewed by someone who flagged down my code for missing a #include or lambda capture (both very easy mistakes to make), I'd know that the people I'm interviewing with are idiots with no understanding of the thing they claim to be testing me on. Would I want to work there? Nope.


You seem very confused. At first I was worried you misunderstood my text: maybe you have trouble with natural language but not programming (or maybe English but not code). That would certainly explain your claim that they are disjoint skillsets. But as you went on, you committed a logic error, so now I'm just thrown.

There is a difference between "find the errors in this provided code" and "write code on a whiteboard with no errors". At no point was I talking about code you wrote.

Also, it's an aside, but I find people who can program well write excellent documentation for the users, if what they are writing is an API. Of course, they are not the best at explaining the steps in a GUI, but that probably has less to do with communication skills in general and more to do with the difficulty understanding how they perceive the problem.


e.g. a compiler

But yeah, I think I've seen questions like that for intern positions. It's basically a "have you ever seen this language?" test to weed out people quickly.


> It's basically a "have you ever seen this language?" to weed out people quickly.

It's not that. People who constantly use multiple languages in an IDE will not be able to point out most syntax errors outside the IDE. So it's not a "have you ever seen this language" filter.


It was for a junior c# position so I doubt it's applicable considering Visual Studio will point out the problems.

But I'm curious what position would this question make sense?


> I'm curious what position would this question make sense?

I was once asked a similar but better posed question: I was given some code and asked what I'd point out if asked to do a code review on it.

And the code had loads of things wrong with it, from bad variable names and incorrect comments, through unit tests that didn't have any assertions and loops that weren't actually loops, all the way to choosing a non-secure random number generator in an application that needed a secure one*

In other words, it was a test of my ability to code review, with some glaring issues to give me some easy marks and set me at ease, and some subtle issues where talented people could really set themselves apart. It was fair and relevant to the job because I was presenting myself as a senior programmer with lots of experience doing code reviews and coaching junior developers.

It's possible trinovantes's interviewer intended to give the same sort of test - but either didn't explain the question clearly enough, or trinovantes misheard or misremembered.

* A bug right out of puzzle 94 in 'Java Puzzlers'
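For concreteness, here's a hypothetical Python sketch of the kind of review exercise described above. The functions and flaws are invented, not the actual interview code, but each one contains a planted bug from the categories mentioned (a misleading name, a loop that isn't a loop, a non-secure RNG where a secure one is needed, and a test with no assertions):

```python
import random

def get_user_count(users):
    # Misleading name: returns the list itself, not a count.
    return users

def retry_connect(connect):
    # A "loop that isn't a loop": always returns on the first pass,
    # so the retry logic is dead code.
    for _ in range(3):
        return connect()

def make_session_token():
    # Non-secure RNG for a security-sensitive value; a reviewer
    # should suggest the secrets module instead.
    return "%016x" % random.getrandbits(64)

def test_get_user_count():
    get_user_count(["alice", "bob"])  # A "unit test" with no assertion.
```

A candidate walking through this should flag each function; spotting the token generator is the subtle one that separates the experienced reviewers.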


Citrix gave me a page of code which had a lot of tricky obfuscated syntax problems. Code that looked live but was actually commented out, nested comments that were not terminated properly, code formatted in a very deceptive way, strings that were not quoted correctly so they would not end where expected on the page. It was all very contrived stuff that was deliberately deceptive, not like real broken code, and the errors would have shown up with syntax highlighting. I caught most but not all of the tricks. This is also the only interview I have ever had where a panel of engineers stood behind a desk and all barked questions at me. I did not get the job.

At a different interview, this one with Microsoft, the lead engineer showed me a page of actual code from their app and asked me what I thought. Luckily the bug instantly leaped off the page to me, although many people would not see it. That was a good way of proving that I have a useful ability, and a better use of time than whiteboard coding or quizzes. I got the job and they were glad they hired me.


I tend to believe that "walk me through a peer review of this code" is a great interview question. It speaks to the user's familiarity with the language, with problem solving in general, and also people skills. Sure, it's not the end all/be all, but someone needs to be pretty amazing to want to work with them if they are incapable of doing a peer review.


This is awesome and I think every company should do something like this. You get to ballpark technical ability really well while getting some very relevant behavioral information (i.e. what will it actually be like to work with this person, without lame ‘culture fit’ questions). It can be hard to test competence in a way that allows all kinds of devs to shine; code reviews are pretty unavoidable on the job though so it’s a good fit.

Coding is a mostly solitary activity that you probably don’t want candidates spending more than ~60 minutes on, ideally on something that resembles the actual day-to-day instead of sucking all the Leetcode possible out of them. Even just quickly pair programming on something nets you more data points in the same time.


It doesn't sound that stupid, it checks whether you know the syntax of the language you'll be programming in.


Depends... imagine getting four pages of C# code and having to play Where's Wally but you're not even sure what Wally looks like. Surely it'd be better to just write FizzBuzz.


It's literally asking someone to write perfect code in a time-limited situation, under pressure. How can anyone with any kind of coding experience ask someone else to do that in an interview?


It's not literally asking you to write any code, just to read it.


No one ever code-reviews code that hasn't already been compiled. Requiring you to know the syntax of a programming language (in my case Swift, which changes syntax in every release) outside of a compiler is pointless. I remember in the early days of J2EE people asking you for the home interface of an EJB stateful session bean. Who the hell cares about memorization in the era of Google - and that was 20 years ago, when Google did no evil. This isn't first-year programming 101; you are paid to make things that work, not regurgitate syntax.


We may have had the same interviewer- tbh I thought this was pretty reasonable and more of a warm up really. I think this also lead to a discussion on some questionable logic and how you could improve the code.

I had 10+ years experience in C++, I thought it was completely reasonable. Especially compared to their later questions around something along the lines of finding a shortest path in a tree. I interviewed there a bit before their process was so well known to be game-able, I went in with zero study time on obscure algorithms outside knowing the O(N) of the most widely used, and certainly did not practice actually writing or interacting with trees and such.


Yeah, I had an interview like that

There was a function with a syntax error, that also returned a pointer to stack memory, and made some logic error where it assumed a class with no vtable would be polymorphic.


I have done this. Is it really such an insane idea? Makes for a nice break from "what does this code do"? You need some technical "anchor" to make for more concrete discussion points.


Would you accept "my build toolchain and linter will catch all of these syntax and stylistic errors." as an answer? 'Cause that's what all your devs are going to do IRL.


Not really since that doesn't leave much room for discussion, but that is my problem not theirs. The point is to have something to discuss, not to listen for a singular answer. Now we can discuss hypothetical situations and philosophy but I'd much rather have a concrete piece of code to go through.

Seeking out problems and errors can be a good conversation piece. Hopefully you get to hear some anecdotes, prod their taste in style and how well that taste might play with others. The interview situation isn't easy for anyone, and anything that can show whether someone is even remotely qualified helps.


I worked at a place that did something similar. VBA code, printed out in a proportional font, no word wrap so long lines spilled over...

"What does this code do?"

That was a pretty easy one to figure out, it pulled coordinates from a database table, and then it stepped along all the lines trying to find the longest one (they were a metal shop).

"Do you see any evidence that this code has been optimized?"

That was the dumb question.
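The logic described - stepping along consecutive coordinates to find the longest line - might look something like this in Python (a sketch assuming an ordered list of (x, y) tuples rather than a database table):

```python
import math

def longest_segment(points):
    """Return the pair of consecutive points forming the longest
    segment, plus its length. A sketch of the logic described above."""
    best, best_len = None, -1.0
    for a, b in zip(points, points[1:]):  # step along consecutive pairs
        length = math.dist(a, b)          # Euclidean distance
        if length > best_len:
            best, best_len = (a, b), length
    return best, best_len
```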


Wait until you see the asshole move of being handed a few thousand lines of printed code from the development branch and asked to find, under time pressure, a production bug that the team itself had failed to find in the real environment.


It is disrespectful, but it is a proxy test for how many hours you have spent reading and writing code in that language.


A while back, Indian companies were notorious for asking questions from Let Us C by Yashwant Kanitkar.

The questions go like,

What is the output of the expression below?

    int i = 10;
    ****++&&*+p;

Followed by a myriad of options. Including things like Syntax error.

Not sure how this measures language proficiency.


Ooof... I have done a few double pointers (**) in my day, but anything beyond that seems like bad practice!


> What is the output of the expression below?

> int i = 10;

> ****++&&*+p;

> Followed by a myriad of options. Including things like Syntax error.

I consider myself fluent in C and to a lesser extent C++ -- that's a Syntax Error in C, at least.

This isn't a particularly difficult one to spot, but I can understand how it would be if you weren't very familiar with the language.


They mix correct ones and wrong ones together, in a time-constrained situation.

Eventually your eyes will give up being a lexical analyser.


My dislike of C and C++ is largely due to such questions. It's always in C or C++ that I see such convoluted questions.


I've been working in C for most of my career and I've never heard of such questions. They're not in any textbook I'm familiar with. In fact they're incredibly pointless and irrelevant.


That probably explains a few things...


That's an easy one -- p is undefined. :)

(/me runs away)


I wonder how many Indians partake in the IOCCC [1].

[1]: https://www.ioccc.org/


What about it is disrespectful? It seems to me that it’s testing for something relevant and I don’t see it as otherwise bad (abusive, pure trivia, easily Google-able, etc)


It is akin to giving a spelling quiz to an author. There is a level of decorum required when dealing with professionals. A junior verbally pointing out a syntax mistake reveals a naiveté about the competency itself and "the mission" of the field.

In this interview, I would have liked to receive two code examples (that might contain errors) and discuss their respective merits against various objectives.

If the interviewer makes it clear that pointing out syntax mistake is not rude, I could mention them in passing. This demonstrates not only attention to details but also decorum.


I once halted an interview because of this kind of thing. Told them their interview process suggested they were looking for someone substantially less experienced. When they then insisted everyone had to go through this, I told them that was a warning sign to me that their hiring process was a box ticking exercise rather than addressing the actual needs of the positions they were hiring for and that I was no longer interested.

Recruiters need to understand that these kinds of processes will often filter out the wrong people, such as those skilled enough to be able to pick and choose.


But also the right people, such as those arrogant enough to be upset by it.


A lot of interviewers have complete leverage, so they don't get any sort of feedback or real check on their ability or processes. It's only when a candidate isn't desperate for the position, is competent, recognizes red flags, and gives them the feedback that they'd ever be aware of. You don't have to be upset to realize there's a mismatch and withdraw your candidacy.

Candidates often have to call out nonsense otherwise it may never be called out. Processes need feedback to adjust and adapt, otherwise they'll typically continue with momentum alone.

With that said you can give feedback in a polite and professional way, you don't have to be arrogant about it. "Based on the questions, it appears you're searching for these specific abilities which are often attributed to a junior role, so I believe I may be a mismatch for this specific role. I'm going to politely withdraw my continued involvement in this process. I appreciate your time and interest and hope you will contact me if a more senior role is available." Or something to that effect. You don't have to be arrogant to give feedback.

If you were like "what, this is ridiculous, what am I am an intern? Good luck filling this trash position!" And then walk out then sure, that person clearly had some anger management issues.


I once interviewed for a director role and the first round went something likes this:

Interviewer: How would you reverse a string?

Me: *boggle* Any language I want to use?

Interviewer: Yes.

Me: Okay, Ruby. "somestring".reverse!

Interviewer: *boggle*

Me: I don't think we're aligned on what this role is. *says thank you and leaves*

Interviewers need to understand what they are interviewing for.


I mean, it all comes down to how that situation is handled.

If OP was a dick about it, then yeah, it serves to filter out an arrogant assbag.

But if OP simply explained that the interview led them to believe the position was a more junior/entry level than they were expecting, that seems fine. Further, to even explain that the interview process seems to just be a checkbox process seems fine; if you work in a critical thinking/creative role, checkbox culture is an absolute brain drain.

Getting that out in the open, in honest and respectful terms, is a fine thing to do. Why wouldn't it be?

Further, any hiring institution that feels the need to build in 'tricks' to filter people out of the interview process is toxic. Even if the people they're filtering are arrogant assbags.


If a prospective employer has a hiring process that is wasteful and pointlessly bureaucratic and tests people for things entirely irrelevant to a role, that tells me they're likely to be an awful place to work and/or don't understand what they're hiring for (which is likely to make them an awful place to work too).

If you consider that arrogance, then so be it. I consider it not taking jobs that'd make me miserable, because I don't need to.

A prospective employer doesn't have a right to have me bend over for whatever process they'd like.


TBH, aeons ago, we had this as a high-pass filter (doesn't notice a glaring SQL injection hole, no problem with eval() on user input? Nope.) and a conversation starter (-why are you constructing a database handle right in the middle of business logic? -indeed; how would you do this?)

It very much depends on the code - but I was genuinely surprised how many applicants, claiming to be fluent and applying for a senior developer position, had problems just grokking what the code did (a while loop reading from database).
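For illustration, a minimal Python/sqlite3 sketch of the kind of planted hole described (the table and queries are invented for this example, not the actual screening code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # The planted flaw: user input concatenated straight into SQL.
    # Passing "' OR '1'='1" returns every row in the table.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # What a passing candidate should suggest: a parameterized query.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

A candidate who doesn't notice the difference between these two fails the high-pass filter.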


If I claim fluency in a non-native language because that’s a relevant qualification for the job, testing me on it is fair game IMO.

That’s true whether I’m an author writing in German, a newscaster reporting in Italian, or a programmer coding in C++.


Definitely not - if an author has written a book in German (even if it's not their native language) then giving them a German test is definitely insulting, the same applies for a newscaster in Italian if they have done reporting in Italian previously.

What you say applies if you're hiring people for their first position at that task (e.g. the author writing their first book in German or a newscaster who has never done reporting in Italian professionally). If you're hiring people at some hypothetical "level 10" then your interview needs to discriminate between "level 9 or less" people and "level 10 or more" people, but asking them to assert that they meet "level 1" implies that they might not, and that implication is literally insulting.


In the interview case, you often have someone who claims they wrote a book in German (but you can’t see the book) or to be a professional Italian newscaster (but you can’t see any of their reporting).

Switching part of the interview to be in Italian or German would not be seen as disrespectful, right?

It’s interesting that some find the coding equivalent insulting rather than merely a bar pointlessly laid on the ground to be stepped over.


The difference IMHO is in the expectations of what's required from the applicant. Switching part of the interview to be in Italian or German would not be seen as disrespectful as it does not add much (if anything) to the length of the interview, but asking them to fill a 30-minute quiz on basic Italian/German grammar would be disrespectful.

A bar pointlessly laid on the ground to be stepped over is reasonable iff you can just quickly step over it - but if they ask the candidate to waste half an hour proving their capacity for stepping over bars lying on the ground, that is disrespectful of their time.

For programming, a short trivial task (e.g. fizzbuzz) is appropriate, but a long trivial task is appropriate only for junior positions and disrespectful for senior ones - ask something that tests whether they're capable of something serious, because passing the trivial task can't be sufficient anyway.
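For reference, the fizzbuzz task mentioned above, in a plain Python version:

```python
def fizzbuzz(n):
    # The canonical trivial screening task: multiples of 3 -> "Fizz",
    # multiples of 5 -> "Buzz", multiples of both -> "FizzBuzz",
    # otherwise the number itself.
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(s or str(i))
    return out
```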


I think the issue is what happens if an author ghostwrote a book in German? I guess that's the book version of copying code off Stack Overflow and claiming it's yours? People trying to game the system make it crappier for everyone else.


OK then, I'm not fluent in any languages, I merely use them to create products that people buy for money.


Then do it in a different way. Phrase it as a toy pull request that the reviewer has to review, and let that contain everything from minor syntax errors to logic errors to missing core stuff (like missing test cases).


I usually write python code, I stick to pythonic styles, consistency and good practices.

If there is an error, I can quickly figure out what that is and what I should do to fix it. I would know why the error occurred.

Beyond that deliberate interview practice is the only way to get a lot of interview questions right.


You'd prefer being asked to write it out longhand?


Github PRs also lack syntax checking etc - so it isn't something you'll never see at work right?

(Admittedly if the PR doesn't build why are you reviewing it but whatever)


That's why you review a pull request in your IDE.

And you have a CI build and unit tests in place. If it doesn't compile or a lot of tests are failing, a sane person won't even bother to review the pull request.


Serious question, as I've never done a PR in my IDE; hadn't even thought to.

If you check out the branch in your IDE, is there a way to have it highlight the changes in the branch you're reviewing? Or do you need to reference the output from `git diff` or the github PR view?


VS Code:

1. Open Source control window

2. Checkout the PR branch

3. Open the branches listing panel in the sidebar.

4. Mouseover the target branch of the PR

5. There's an icon that looks like two nodes with arrows pointing between them. Mouseover text "Compare with ...". Click it.

6. The search and compare panel in the sidebar has a listing of files changed, you can click a file to get a diff view.

There are gitlab [1] and github [2] extensions which streamline this workflow if your code is hosted on one of those services, and let you leave comments in editor which show up in the web UI.

IntelliJ has support for display a diff for a branch or github PR built in [3] but I hate their diff modal view.

[1]: https://marketplace.visualstudio.com/items?itemName=GitLab.g...

[2]: https://marketplace.visualstudio.com/items?itemName=GitHub.v...

[3]: https://www.jetbrains.com/help/idea/contribute-to-projects.h...


This is incorrect. You can run tests on PRs and disallow merging until it passes all the checks

https://github.com/features/actions


My lesson learned was "don't listen to what they say, watch what they do". As in, don't trust them to tell you what they're looking for, understand that figuring out what they're actually testing is also part of the screening process.

I don't see a lot of focus on the hide-the-ball aspect of interviews, but is something I've experienced a few times and bothers me way more than anything else.

The clearest time this happened I was told the company was big on pair programming, so I'd be doing a pair programming session. It turns out they meant I'd be tested on whether I could finish a coding exercise in the allotted time with someone watching. There was zero "pair programming" of any kind involved and time spent on collaboration counted against me.


Back when I wanted to interview at google, I sensed some frustration between HR and the engineering. Some engineers just didn't make good interviewers.

HR wished I had gotten a different interviewer.


I had a recruiter obtain approval to have my technical review ignored because I pointed out so many flaws in it. At that point I had too much of a distaste for the process to continue.

But I kept being approached by Google recruiters, kept recounting what had happened last time and asked if I could expect better this time. None could promise things had improved, and a few did express that kind of frustration.

This was years ago now. Could have changed of course. But I started telling them to go away.


I was getting HR hits like every six months and eventually I told them I would get back in touch when I was ready.

Now I'm getting an email every month from FB, and about ready to do the same with them.

The last time I passed the first and second rounds and Google let me languish on the third round for a month. I still probably would not have made it through, but it was surprising that my candidacy was dropped like that.


It says something about their HR systems that it's clear they don't review past interactions before doing this. To me that's a warning sign too - I'm less likely to consider interviewing if a company contacts me again and the interaction doesn't start with a pitch of how this time or this position is a better fit...


The problem is that many out there try to copy the dysfunctional FAANG interview style. Google was asking those dumb brain-teaser questions for years, which were obviously useless to the naked eye, and then they realized themselves that those were counter-productive.

Yet, you could expect one of those in almost any interview process. Seemingly smart people are ready to jump on bandwagons, too.


Google was asking those dumb brain-teaser questions for years

Microsoft started doing this, realized it was a bad idea, and stopped before Google even really started this practice. So Google is as guilty of bandwagon jumping as everybody else here.


I find it… troubling? That a technical interviewer can’t tell for herself whether your code will work. Wouldn’t you ideally want people who actually understand code to be giving the coding questions?


Are you saying that because this interviewer needs to run code to find logic errors, she's somehow not a competent engineer? Because I usually need to run code to find logic errors. Sometimes I use formal verification instead but that's pretty rare. Am I also not a competent engineer?


He's saying that in order to judge someone's ability at something, you need to understand that thing yourself. If you require the candidate to write perfect code using nothing but a whiteboard, you need to be able to read that code using nothing but a whiteboard as well.

If the interviewer cannot do this, how is he going to judge the result? Does he know how to run a compiler? Does he know how to run the code? Does he have the skill to judge the output? Is it really a good policy for a company to discard a possibly excellent candidate that just missed something silly that would normally be checked by a tool while you type?


If you can't tell the difference between sketched-out kinda sorta pseudocode and potentially workable C++ then you're certainly not competent in C++.

And if you fail to communicate the requirement for one or the other then you're certainly not a competent interviewer.

You also have to ask what exactly is being tested here? Is it the ability to remember syntax? To remember an algorithm? To improvise an algorithm? To recognise which algorithm is needed?

What, exactly?


Maybe they are a bad interviewer?

In my view, if the answer involves a topological sort the interviewer should know how to solve it and be able to follow and find errors in the candidates code. If the interviewer, knowing the answer, cannot find any issues then surely the code is fine (for code written in an interview)
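For context, the topological sort in question has a short standard answer - Kahn's algorithm - that an interviewer asking it should be able to follow on paper. A Python sketch (assuming the graph is a dict mapping each node to its successors, with every node present as a key):

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm: repeatedly emit a node with no remaining
    incoming edges. Raises ValueError if the graph has a cycle."""
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    queue = deque(u for u, d in indegree.items() if d == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle")
    return order
```

An interviewer who knows this shape can check a candidate's whiteboard version against it without ever running a compiler.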


It's also possible that she hadn't seen the particular algorithm used before, or that she was having an off day or stressing about a meeting immediately after the interview, or that there were errors that she did see and she didn't want to say "yeah there are errors here" because doing so could affect the candidate's confidence in the interviews after hers. I could imagine any of these being true. Or she could just be a bad interviewer.


That is irrelevant. Asking someone to type non-trivial code outside of an IDE and then expecting it to compile and run without issues is lunacy. Even junior programmers know this. The interviewer in this story was either an amateur, an idiot, or on a power trip.


I guess I'm also an idiot then, thanks, how kind of you to say that.

Actually hang on, I'm editing this to be slightly meaner. Your whole take that doing this is a sign that she's either an idiot or on a power trip is a very familiar thing that people say about women in tech and I'm honestly tired of it, because I can see myself doing exactly what she did and I don't like it when people say those things about me. Please don't do that.


Gender has nothing to do with this.

You just can't tell someone to type code into Google Docs and expect it to just work, and worst of all judge a person's skill on that basis. It takes minimal programming experience to learn this. Hence all the jokes about people being surprised/suspicious when their code runs after the first compile.

If you disagree with this you could have provided any sort of counterargument. Instead you took this weird "women in tech" angle. Wrong is wrong, interview here was wrong, gender did not play a role.


If you can't assess someone on their response then you're not interviewing them, you're just giving an exam by proxy. But that might very well be because the whole recruitment process is thoroughly stupid.


Does this reasoning not also apply to the applicant, though?


So, we can have an interviewer perform a possibly incorrect manual validation of an interviewee's possibly faulty code. Reading code is harder than writing it, and presumably the applicant has been asked to do something tricky.

Or they can run it, asking the real arbiter of truth whether it works or not.

Of course, "whether it works" is merely one (very important) metric of quality.


The interview is a proxy for working with the person. In this case, it's a proxy for pair programming / code review. A good chunk of what the interviewer, ideally, looks for when asking a coding question is communication from the interviewee - can the interviewee communicate what they are doing and why? Can they explain the intent of the thing they've just written? Do they have a clear picture of it in their head, and can they communicate it? When the interviewer spots problems with it, what does the ensuing discussion look like? - how well does the interviewee collaborate in solving them? If the interviewer is wrong, does the interviewee push back? How? Can they understand you, and can they make themselves understood?

Can you work with this person? Can you collaborate to write code, or will it be a daily struggle?

Whether the code actually builds and runs after the hour is up does not help answer these questions; it is arguably the least interesting part of the whole process. The time limit is artificial; if all the other things align but you didn't happen to get it working in one hour, you'll likely have got it in two. If they don't align, you'd likely never have got it.


Every recruiter says the same thing, but in practice the interviewer is only looking for the right answer, and that is what determines the outcome of the interview. I've never seen anyone helped out by soft skills while still having an incomplete solution.


Really? I certainly have. Several times. And I've been in that situation myself.


The problem is that it won't run. It was written in Google Docs, without an IDE. Everyone who has ever programmed knows this: you cannot write out a whole algorithm like that without making at least a trivial error.

This is like writing code in Notepad and creating a PR without building or running it once. What is this testing? There is no real-world scenario where you are expected to work like this, and for good reason.


Yeah. If I were asking a trickier question and I got an interesting new variant of the solutions I'm aware of, I'd absolutely run it if I had time to type it up and everything. Most solutions are either something I've seen before verbatim or obviously wrong, though.


I think it depends on who gets to choose the programming language.

If you're interviewing for specific language skills, then what you say is clearly true.

If it's a general coding skills interview, I invite the candidate to code in whatever language they like. More often than not they choose a language I am sufficiently fluent in to follow along. In rare cases I need to ask them to explain a thing or two. In very rare (but generally quite fun) cases they choose a language I am totally unfamiliar with; this tends to lead to interesting discussions.

In all of those cases the full coding transcript is captured and, if necessary, can later be additionally reviewed by/with someone who knows the language well.

P.S. Also, their thought process and approach to problem solving is just as important as the code, and is largely independent of the choice of programming language.


Funny and illuminating examples of this are the excellent "hexing the technical interview" series (read in any order): https://aphyr.com/tags/interviews


Standard disclaimer: everything below is my own opinion based on my personal experience, and I'm not speaking on behalf of my employer.

> I thought I was done, then the interviewer said she would go copy my code and compile it after the interview to see if I was right, which blew my mind.

I've been interviewing at Google for nine years and have never done this. I generally don't think it's fair to ding a candidate for something I don't notice in an interview, where I'm at a huge advantage. If it looks right to me then that's good enough.

But that said, I have often asked questions similar to what you describe. You have to write code in the shared doc because hiring committees want to see it - fair to blame the process for that, but not the interviewer. I actually appreciate this for a couple reasons:

* It levels the playing field a bit, in the sense that code is more objective than an interviewer's notes on how a conversation went (especially for people who aren't native English speakers)

* I sometimes find that candidates who communicate well struggle to turn their ideas into code. Other times, someone's communication and solution are kind of average, but then they use all the little things I like seeing (defaultdict(set), zip, etc. in Python). Lots of people claim to be very experienced programmers; seeing how comfortable and fluent someone is when actually writing code is a strong signal. If you don't like the focus on data structures and algorithms that you haven't thought about since college, you should probably appreciate the coding part.
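As a made-up illustration of those "little things": building a mapping from each word to the set of words that follow it, using defaultdict(set) to skip key-existence checks and zip to pair consecutive items without index arithmetic:

```python
from collections import defaultdict

def following_words(words):
    follows = defaultdict(set)          # no "if key not in dict" dance
    for a, b in zip(words, words[1:]):  # consecutive pairs, no indexing
        follows[a].add(b)
    return follows
```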


How would you feel if you were interviewing for a job as a writer, and the interviewer asked you to put a story, poem, etc. down on a napkin, with a crayon?

Like that's how I would feel writing code into freakin' Google Docs during a job interview...with Google.

Tie one arm behind my back as the syntax goes wonky, I'm fighting the spacing, etc. etc.

Or put Mario Andretti into a Ford Focus, then test his lap times, with zero warning.

Yuck.


> mario Andretti

If you're a world champion, you would probably enjoy an exercise like this.

This lady pilot sure did: https://www.youtube.com/watch?v=5KiC03_wVjc


What a delightful video! Unfortunately, her first time was 23 seconds over and she will not be getting a job on the Google racing team.


On one hand, I can appreciate that it's unpleasant to be put on the spot. Or being forced to use crappy tools. Nobody likes that.

That said, if a person is being considered for a coding job, wouldn't that person want to demonstrate their coding abilities? I mean, assuming they're actually good at coding?

In interviews, I much prefer a concrete problem to an abstract one, and coding problems are typically far more concrete than most other subjects covered in SW dev interviews.


Google Docs is specifically designed for writing and reading, and is probably used by more people than any other piece of software ever for that task. And you think it's fair to compare it to crayons and napkins? What a world we live in. Programming is often about thinking through the problem; if you can't separate yourself from your IDE when writing code, and it's the syntax that trips you up, that says more about your style.


While I don't necessarily disagree with the sentiment, I can provide a real life take. Not specific to GDocs, this applies to all browser based editors.

You may usually write code in a terminal based programmer's editor (vim, emacs, ...) and realise the code you've just written is not quite right. You want to delete the last two words. There's even a default and handy keybinding for doing that.

So you press ^W twice. The browser, which treats ^W as "close tab", throws your interview code away.


I consider Google Docs coding an extension of whiteboard coding, which often happens in pair programming. I do not think it should aim for perfectly working code, but I would personally expect reasonably good whiteboard coding hygiene from a colleague, at least enough for a report to the committee.


I mean, tabs don't work like they do in a normal editor. And it wraps lines. And the default font is not monospaced.

If you are gonna require a candidate to write compilable code at least use one of the many tools designed explicitly for coding interviews.

I think I’d rather use straight notepad than google docs.


Just fyi - Mario Andretti and other similar drivers could be placed into any car and would perform at a very high level after a lap or so...


> It levels the playing field a bit, in the sense that code is more objective than an interviewer's notes

Except that writing code in a Google doc on the phone doesn't in any way resemble the real "playing field". I grant that it's level in that everyone faces the same constraints, but to write code in a gdoc with red squiggly lines underneath every keyword, automatic capitalization after a dot, and variable width font? How does that give you any kind of useful assessment? It's like handing a butter knife and some twine to a doctor and saying "here, show me how you'd stitch up this wound".


The Google interview process is designed around how badly you want to work there, not what your skills are; i.e. practising a bunch of leetcode puzzles on a whiteboard and speaking out loud is all it takes to get through the interview process. It just takes some time to prepare.


I think that is basically it. I think many of these FAANG companies' interviews are more of a hazing than an actual assessment. Everybody who works there had to go through that interview… if you want to be part of the pack, you too must endure it.

When viewed in that light it actually doesn’t seem quite as bad.


I worked at Google and did a fair share of interviews. Two observations:

When you have 3 interviews per week for a prolonged period, you, as an interviewer, are not going to do a stellar job every time. What's worse: you will develop a routine, and it becomes very easy to give candidates who don't fit your routine a lower grade. It takes effort on the interviewer's part to recognize talent that doesn't fit your routine or your expectations. If you don't want to end up being a bad interviewer, you also have to try to relate what you see in interviews to what you know about the work.

For instance I _never_ asked people to code live (mostly on whiteboards back then) because it just isn't a relevant exercise. And I was kind of horrified at experienced interviewers who asked people to code and then got obsessive about small details that the tooling would have taken care of. Absolutely pointless.

The only piece of advice I found useful from the interview training was this: this is the candidate's big day. For you it is a chore, for them it is their big chance. Keep that in mind and respect it. I kept telling myself this for every interview - and some days I felt really terrible because I wasn't properly prepared.

The other thing that horrified me was when we let inexperienced people who had been out of school for less than a year interview people. These interviewers barely knew how to write software themselves, and they'd get even more hung up on irrelevant stuff because they simply had no idea how to be software engineers.

I doubt that I would have done very well in those kinds of interviews because this isn't how I work and it certainly isn't how I teach people to do problem solving. Problem solving requires more time because any even mildly tricky problem worth solving tends to have a lot of facets far beyond picking an algorithm or knowing how to code it up. That's the easy part because for that part, you have books, papers, tools and other people to seek advice from.

Junior programmers right out of school with no engineering experience have no business interviewing developers. They make poor and overly judgemental interviewers and only rarely are able to spot talent if it doesn't fit their template. They also aren't going to fight for candidates that may not fit the imaginary template, but have some special gift because they are junior programmers. It takes a certain amount of balls to say "I know you think this candidate is rubbish, but I see something here and I don't care what you say, I am going to insist".

(btw, statistically, this used to be a good predictor for later success: candidates that were somehow "controversial" in that they didn't make the grade with some interviewers, but displayed something that made other interviewers fight for them)


> The only piece of advice I found useful from the interview training was this: this is the candidate's big day. For you it is a chore, for them it is their big chance. Keep that in mind and respect it. I kept telling myself this for every interview - and some days I felt really terrible because I wasn't properly prepared.

Thank you for being kind and respectful.


Thanks, these seem like the most thoughtful Googler/Xoogler comments I recall hearing on the topic.


Googler here. At some stage of the interview process, we have to check whether you're able to code; there's no way around it if you're applying for a coding position. So yes, we'll have you code the solution to a problem. Of course in the real world we'd use a library or look up the algorithm on Stack Overflow like everyone else. Good for you if you came up with a workable algorithm for the problem quickly. But that's not the only point of the interview. An important goal is to check whether you can really code.

We don't usually care if you don't find the perfect algorithm (unless it's for a senior position); in fact it's a desirable property if you don't, because that allows us to see your thinking process. Some of the best interviewees I've had didn't have the right solution, but impressed me with how they thought about the problem and dealt with having a problem in front of them that they didn't know how to solve: how they considered possible boundary conditions, restrictions and extensions. Someone saying "yeah, this is just depth-first search" and then spitting out a memorized solution gives me zero insight into whether the candidate is good (and will likely not allow me to write great feedback on the candidate, unless they're able to convince me that they understand what they are doing, and why and how). It's usually expected that most of the interview will be taken up by you implementing some algorithm, and it's expected that you won't get every detail right.

We do NOT require you to produce token-by-token perfect code, and nowhere in the process will I ever have to give feedback on whether the code produced was actually valid. So maybe you misunderstood your interviewer's intention, or something else went wrong, or maybe they were just joking. But it's not the interviewer's task to copy code into GCC and try it out; that'd be a huge waste of time. So let me be very clear and explicit: no-one gets classified as "no hire" because they've forgotten a semicolon somewhere. You messed up somewhere else.

With that said, if a candidate says they can write in a language, we expect them to know the language, its idioms and at least parts of the standard library. Not every nook and cranny (e.g. I'd often instruct candidates "just pretend you have some library that implements a heap, and invent an API for it, I don't care about that part"), but if you call "strlen" in the termination-condition of your for-loop instead of before, or do other stuff that shows you don't know the language well, that's a red flag. After all, we expect you to be able to write production-level code that servers billions of users.


There’s enough counter anecdotes here on HN when this topic comes up that the wording of your post comes off as tone deaf. You actually seem to believe every interviewer at Google behaves like you, for the same reasons, to the same standards.

“We do NOT”

“You messed up”

“let me be very clear”

“whether you’re able to code”

These types of statements come off as patronizing. If your whole process sounded like this, and if you somehow actually do represent a majority of Google interviewers, then it’s no wonder so many folks have a distaste for the experience.


I tried conveying what I was told when I was trained for conducting interviews, what I've explicitly seen demanded on the interview forms I have to fill out and what I get as feedback from hiring committees. Thus I indeed assume that my opinion represents the majority of interviewers. Whether that assumption is warranted or not, I wouldn't know.


Well, in real life this varies. I've seen interviewers from among the first 200 Google employees, nitpick their way through someone's whiteboard code, obsessing over every comma and semicolon. It was embarrassing. And I have to say that while shadowing these interviews I couldn't help but think "I've seen your code - you have bigger problems than getting the syntax perfect, mate" (about the interviewer).

It depends on who you get as your interviewers, so generalizing isn't really useful. Some interviewers can't relate the interview to what it is supposed to tell you. I've seen that in inexperienced interviewers and I've seen that in very experienced interviewers who had enough GOOG stock to buy a small country.

If the interviewer can't think of a better way to test candidates, that's on the interviewer. Not the candidate.


> So let me be very clear and explicit: no-one gets classified as "no hire" because they've forgotten a semicolon somewhere. You messed up somewhere else.

How much money would you be willing to bet that this never happens? If you can't put your money where your mouth is, I find it difficult to take this seriously.

> We don't usually care if you don't find the perfect algorithm (unless it's for a senior position)

The "unless" clearly means that you do care.


This statement, "we don't usually care if you don't find the perfect algorithm (unless it's for a senior position)", made me chuckle.

I can see this interviewer writing feedback like "Well, this candidate didn't come up with a perfect implementation of Dijkstra's shortest path algorithm, but it's OK, they were pretty close. Oh wait, they are a senior engineer? Never mind... they should have really mastered these algorithms while building CRUD APIs all these years."


“servers billions of users”

And therein lies the problem.


This is all ridiculous. Of course you have to test that people can code, but it doesn't follow that people must be able to code without any references available, or without using standard libraries. I can code just fine, but I still have to look up the arguments to functions I don't use very often, etc.

As far as algorithms/data structures, most of what we do on a day-to-day basis is more about selecting the appropriate data structures and algorithms that fit the problem, and assembling them together correctly. I've never needed to code a hashtable, but I need to think about their characteristics and whether they're appropriate all the time. I've never written a btree, but I do understand database indices pretty well. So in my view, asking candidates to code up these things on the fly in an interview with no reference materials is a total waste of time. What matters more is if they can correctly apply them. If you're really sure someone needs to write these things, then a more realistic move would be to sit them down with a standard textbook or paper and see if they can get the code working.

> After all, we expect you to be able to write production-level code that servers billions of users.

And you do all this through sheer clairvoyance, rather than using tools like unit tests, code review, design specs, etc, right? No? Then why are you expecting people in interviews to do it without those things.
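To be concrete about "characteristics": here's a quick Python sketch of why I can reason about a hash table's behavior without ever having implemented one (sizes chosen arbitrarily for illustration):

```python
import timeit

# What matters day to day is knowing the characteristics:
# membership in a set (hash table) is O(1) on average, in a list O(n).
data = list(range(100_000))
as_set = set(data)

t_list = timeit.timeit(lambda: 99_999 in data, number=200)
t_set = timeit.timeit(lambda: 99_999 in as_set, number=200)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```

That choice, not the ability to re-derive open addressing on a whiteboard, is what shows up in real code.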


How does Google interview for product development skill? That’s seriously lacking in that organization, definitely the worst of the FANGs.

The arrogance displayed in your comment isn’t backed up by the ultra-low quality of product being produced by Google. Something is clearly wrong with the culture and hiring process there. Maybe instead of fretting over people using strlen as an exit condition, you should look for people who can actually produce software people enjoy using. You don’t even need to do a coding interview! Just browse GitHub for people you want to hire and get to work convincing them to join you.


A friend encountered this with Amazon, although their tool at least has some code help (at least when I tried it a year ago). The interviewer tried to compile his code.


It's also incredibly telling that the interviewer couldn't figure out whether your code was correct or not without compiling and running it.


I hate this. People have IDEs and internet when working. Having to write something that can be compiled in a doc is just stupid.

As someone mentioned in another comment, I also haven't had to actually implement a sort since I left university more than a decade ago. I'm happy to discuss the different types and approaches to how they could be done - so as to evaluate problem solving, logic, and general understanding - but to actually code in an interview...

In a similar vein, this week I even joked on Twitter about how I'm doing many improvements and refactors on a codebase, but there's the occasional 'how do I join 2 lists in Java again?' moment when I completely forget something basic.


I failed quicksort the first time I interviewed at Google, then got hired and worked there for 12 years and never wrote a sort.

I also complained that Google's coding solution (Docs, basically) was a terrible way to code; other companies use CoderPad or whatever, but I expect that Google will never change this. They love to hire people who are excellent at coding approximate CS solutions, but have little to no judgement on how to do good software engineering.


The pandemic has finally forced them to move to something more CoderPad-like.


Sometimes I sarcastically think they already have a friend who's applying, and they need the others to fail.


That does happen more often than you'd think. The hiring manager wants to hire a friend, and tells HR he needs to hire a good programmer (without disclosing that he already has his friend in mind). HR then posts the job for legal reasons, knowing full well none of the candidates will be hired. They reject a few candidates and then bring in the friend, saying they couldn't find anyone more qualified.


We must have had the same interviewer! It boggles my mind how this helps their business case / bottom line. And I was applying as a UX designer, lol.


I had a similar experience interviewing years ago for Google, although I can't remember the algorithm they wanted me to implement.


Let me first concede a few points:

  1. You should have been better informed about the expectations of the interview, so you would have had a chance to prepare yourself.

  2. Coding in a Google Doc is a terrible experience. It's a step up from coding on a whiteboard, but that's not saying much. Google has since moved away from both, preferring an online text editor that's not quite an IDE but at least more programmer-friendly.

  3. The goal shouldn't be to write 100% correct code without any compiler feedback. That's insane. Still, there is a big difference between "candidate forgot a semicolon once" and "candidate did not know how to write a for-loop without IDE feedback".
But beyond those valid complaints, it sounds like you were also unhappy that you were asked to write any code at all. I don't think that's reasonable. The point of a phone interview for an entry-level SWE is to determine two things:

  1. Can they figure out how to solve a nontrivial problem?
  2. Are they able to translate ideas into reasonable, working code?
For an algorithm question that boils down to a topological sort, the interviewer will see three kinds of candidates:

  1. Those that don't have a clue how to solve it.
  2. Those that recognize it boils down to a topological sort.
  3. Those that recognize it boils down to a topological sort and are able to implement a solution.
Each of these candidates is strictly better than the last, and Google only wants to hire the third one.

> I didn't remember the exact algorithm so I basically had to re-figure it out on the fly

Yes! That was the whole point of the question! You had already demonstrated that you were at least a "type 2" candidate, so now the interviewer was trying to move beyond that and figure out if you were actually a "type 3" candidate. Nobody expected you to have the exact solution memorized, but they expected you to be able to figure it out from first principles.

> I even similarly mention "in any real situation I would just look this up", but that didn't help

That was missing the point, which was to test your ability to actually implement a solution.

In the real world, if you encounter a standard problem (which happens often, like "I need this list sorted" or "I want to put this stuff in a hash table for O(1) access") you wouldn't even look up how to solve it. You would just call the existing standard library function and move on.

Consequently, the problems that you end up spending most of your time on are not of the standard variety, and involve actually thinking about how to break down the problem and actually implementing your intended solution. Those are the problems Google needs you to solve on the job. If you can't even implement a topological sort from scratch, why should anyone expect you to do anything more complicated than that?
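For context on what "figure it out from first principles" involves here: Kahn's algorithm is one standard way to topologically sort, and a rough Python sketch (variable names my own, not anything Google asks for) fits in a dozen lines:

```python
from collections import defaultdict, deque

def topo_sort(nodes, edges):
    """Kahn's algorithm: repeatedly emit a node with no remaining
    incoming edges; if nodes remain but none is ready, there's a cycle."""
    indegree = {n: 0 for n in nodes}
    adj = defaultdict(list)
    for u, v in edges:            # edge u -> v means u must come before v
        adj[u].append(v)
        indegree[v] += 1

    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)

    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle")
    return order

print(topo_sort(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]))
# → ['a', 'b', 'c']
```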


My experience with Google (in the UK) was an interviewer who asked a very similar question, among a number of questions where he was clearly unprepared to deal with answers that weren't 100% as expected, and he asked questions that had basically no relevance to the role.

The experience was so bad that afterward I sent a lengthy email to the recruiter thanking her but pointing out I expected to fail the interview because the interviewer insisted on things that were wrong or irrelevant and set out why as a courtesy so they could address it for the future.

I was rung up and was told the recruiter had gotten the interview set aside, and offered to just have me bypass the technical interview and wanted to move on to an interview with the hiring manager.

In the end I declined, as the first interviewer would have been one of the people I'd have managed, and I just did not want to deal with a team with a person like that, and the whole process was just excruciatingly slow to the point I had offers on the table before they could line up an interview.

It seemed to be a process optimised for people who either desperately want to work specifically at Google, or don't have alternatives.

I kept getting calls from Google recruiters for years after that and kept recounting my past experience and asking if they could provide a better experience this time around.

I have had worse interview experiences than Google, including one where I halted a phone interview halfway through and told them they'd shown me I didn't want to work there, but Google is pretty high on my list of places I'm not particularly interested in interviewing.


> In the end I declined, as the first interviewer would have been one of the people I'd have managed, and I just did not want to deal with a team with a person like that,

Oh, I can totally empathise with you, because I've been there. I applied for a position as the head of a team, and a future direct report asked me esoteric statistical questions from his PhD thesis. I did quite well and answered most of them, but he wasn't impressed.


I love it when you can tell an interview question is someone's very personal pet issue that no one else on earth could reasonably care about.


I was once asked to prove that k-means is not NP-hard in 1-D?!


Weird. When I was casting actors, instead of trying to "trick them into doing something bad" and throwing them out, I tried my best to make it work with them, even if it meant I had to do something different.

A casting is always stressful; they don't need me as an enemy as well. If they are not good enough with a ton of help, they won't be good enough, period. If they are really good with a little help, they can still be better than someone who needs no help at all. In the end I care about the final result, not about whether they grade well on some invisible scale that only I can see.


This. Help the person you're interviewing. You'll both get more out of it, you'll both enjoy it. If you're in a company that is expanding, you'll be interviewing a lot, so it's really important that it doesn't depress the shit out of you.


There is a risk that you accidentally do the thinking for them and give the answers for them without noticing.

Doing a collaborative interview requires careful attention by the interviewer, but I agree it’s worth it.


So factor that into your judgement. If you needed to help them constantly, you still know they're a bad candidate. Nothing is gained by stonewalling them other than potentially getting someone who's going to badmouth you to everyone.

I've definitely had interviews with solid candidates who just faced an early roadblock, but were brilliant after I gave them an early push. Sometimes that's more a sign of a poorly communicated question, or of nerves, than a genuine lack of knowledge on the candidate's part.


This, so much. Interviews are very stressful for candidates, everyone should remember what it's like sitting on the other side.

I've been doing a bit of technical interviewing and I usually try to help people along so they reach the "end" of the interview, especially if they're not doing too well. So many candidates sound defeated when they're struggling with a task and it's a form of respect for me. They've put themselves out there, after all.

Treating candidates - which you know you are going to reject at some point of the interview - with dignity is the right thing to do and can go a long way when it comes to your company's reputation.


That is the hard part of the job. You need to be able to take yourself out of the equation. If you can't do that, you are simply the wrong person for the job.

Usually it is not hard to notice whether you'd like someone to succeed and then factor that in.


> Usually it is not hard to notice whether you'd like someone to succeed and then factor that in.

What about the opposite, that you don't like someone, or worse, you are indifferent to their success?


Same thing. One of our main actors was once a guy I didn't like at all. But he was fantastic for the role and it turned out to be a good decision afterwards (despite him being complicated to handle, which we also factored in).

Maybe the hard part is to get an HR person that cares about the outcome instead of their authority.


What happens in the real world when a colleague has a mental block? They Google it. If that doesn't help, they slack the team: "Hey guys, this is something stupid-simple that I can't bring to mind right now, but... <question>".

Why should their interview be a stressful stone-walling event when life as an employee is not?

Maybe an interview should have "Who Wants to Be a Millionaire" lifelines. Two "Google"s, and an "ok, I give up - give me a hint".


I mean, sure. Nobody said interviewing should be easy. Be attentive and focused as an interviewer too!


I agree with the spirit of what you are saying, and in light of the role unconscious biases plays in hiring, I'm challenging my own beliefs on how to run an interview.

How does an interviewer ensure all candidates are offered help fairly and equally?

What is the purpose of asking a question, that if a solution is found after "hints" are offered, would be acceptable for hiring?


If the question you are asking has a "right" answer, you are asking the wrong question. E.g. if you search for a web dev, give them a problem and some constraints (e.g. language), and ask them to draft a quick solution.

When they are unsure then they will usually ask about the problem or about how the problem should be solved (microservice, shell script, ...)

These are valid questions and you can turn them into more knowledge about the candidate by asking them what pros/cons they see by going each route.

Just like that, I have actors who want to know more about the role and actors who need less guidance. In the end their performance counts. I will still try to give them hints in between takes (just like I would when we work together on set) — but in the end it is their work which I will judge.

Unless you are hiring for a position where a dev works in complete isolation and receives no guidance whatsoever, I think emulating the real thing is the way to go. And that usually doesn't entail holding a dev's hand, nor does it entail not helping them at all.


Same-ish. I got through several phone interviews and was asked on-site for a full day of interviews, back to back, with a lunch break in the middle. The HR girl was honestly running around like the Mad Hatter, apologising for being late and then running away as soon as she had left me in the general area where my interviewer would be. Some of the interviewers handed me off to the next; others just wandered off, leaving me in a room. One of the interviewers was in a different office, so we talked over video conference. I say "we talked", but he was reclining so far back in a chair that he might as well have been in bed, and he read questions from a sheet without once looking at the camera. After that I did actually get a decent interview from someone who was on the team I interviewed for.

During the lunch break someone had to take me for lunch, and from the moment she showed up I gathered that, like perpetual interviewing, bringing ten-percenters for lunch was another boring part of the job. I also got a really bad 'vibe' just walking through the building. Lots of fun-looking stuff like games and beer fridges, but everywhere, people with headsets on staring into screens, never once looking up to see who was in their vicinity. Anyone ever seen Coraline? The people with buttons for eyes...

After that whole depressing charade was over, I figured it would be an offer or GTFO. Nope, I got called for another interview. I said "eh, no thanks".


Years ago, while applying for a security engineer position at Google, they were testing low-level skills focusing on vulnerability research, C/C++ and x86. At that point I had spent 10+ intense years on that arch: I had written binary translators for it, reversed/exploited software and written a micro-kernel. After rejecting me, they recommended I read an "x86 assembly 101" book, which was truly infuriating at that point.

Granted, it was an automated email, but an automated "No" would have been more appropriate. Is it so hard to offer different standard responses? Why does the company consistently fail at human interaction, be it candidates or customers?

From my circles, most share the feeling that FAANG/MSFT are insulting for candidates (I can only speak for Google/Microsoft).


Because they have a money printing machine called AdWords which affords them the opportunity to be terrible at anything and everything else without any impetus to recognize it or need to care.


Yeah, I was asked to complete a pretty challenging question in 25 minutes (I eventually solved it, 2 hours after the interview ended) and, after it was clear the interview wasn't going anywhere, asked for tips. They said "study data structures and algorithms".

Like... yeah, thanks, I guess this degree and 5+ years of experience aren't worth much. Such a broken process.


feedback is the hardest part of any assessment, not just in interviewing


It's so stupid, all of it. They try to come up with random questions that don't measure anything except whether the candidate happens to remember the answer at that point in time.

I've read about Linux inodes. I know what they do. In fact I even have had Linux systems where I get inode related error messages because the partition had too many small files on it.

But given that question, in that situation, I would likely not have known what they even meant. Am I supposed to have all the inode details memorized forever in my mind? It's fucking ridiculous.


I’ve written a few Linux file systems and I’m not even sure I’d have answered that question correctly.

I’ve also failed job interviews where I was told there was no coding expected in the face-to-face, and then got given a piece of paper and asked to solve 3 theoretical problems in SQL on paper while 3 interviewers watched. 10 years prior I’d worked for several years on Oracle middleware, so I knew SQL inside and out, but I still lost my nerve at that interview and was told I wasn’t experienced enough.

There was another telephone interview where I was asked all sorts of command line questions. The problem there is they spelt out the command line flags differently to how I normally talk and read them (e.g. “what’s see hatech em ohh Dee seven hundred and seventy seven?”). Had I seen it written down, I’d have been like “oh, you mean see hatech mod seven seven seven” (chmod 777), but the way they read it out sounded cryptic as hell.

So there’s a valuable lesson I’ve learned for interviewing: putting pressure on interviewees is just as likely to filter out good candidates as it is bad ones. So you’re better off making them comfortable during the process. Good but nervous candidates will perform better. The interviewing process shouldn’t be about who can hold their nerve the longest.


Seems like in these cases bad interviewers are a great opportunity to glimpse into bad companies

A frustrating way to find out though


> hatech

I've never heard 'h' pronounced like that.


I've never heard that either. The most common pronunciation I've heard in the wild is "tch-mod."


hay'tch


There are moments in life when I wish everyone was forced to learn NATO phonetic alphabet early in school.

"Charlie Hotel Mike Oscar Delta Seven Seven Seven".

The reason to cram into people a standard spelling alphabet is that it minimizes confusion over the usual "see as in $random-first-name". The reason to standardize on the NATO one is that it's already an international standard, and a subset of the population that goes to work with anything resembling a radio transceiver will have to learn it anyway.
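A toy illustration of how mechanical the mapping is (Python; the table below covers only the characters needed for this one example):

```python
# Minimal NATO spelling table, just the characters in "chmod 777".
NATO = {
    "c": "Charlie", "h": "Hotel", "m": "Mike",
    "o": "Oscar", "d": "Delta", "7": "Seven",
}

def nato_spell(text):
    """Spell out each non-space character of `text` with its NATO word."""
    return " ".join(NATO[ch.lower()] for ch in text if not ch.isspace())

print(nato_spell("chmod 777"))
# → Charlie Hotel Mike Oscar Delta Seven Seven Seven
```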


It wouldn’t have helped in the interview if they did use the NATO phonetic alphabet because when you read CLI commands or talk about them in the office, you don’t spell it out using the NATO alphabet.

The point was that the interviewer read a written command differently to how I’d typically hear it. It’s a little like the S-Q-L vs "Sequel" debate, and how that can sometimes throw people.


In terms of topical content it's a good question. The idea that the name is a link stored in a directory entry is a key part of filesystem architecture, and anyone familiar with Unix filesystems should be able to immediately talk about why.

The problem here is that what could be an invitation to showcase knowledge is reduced to a vague, one-dimensional and non-obvious trivia question. There are a ton of valid answers to this question, like:

* The file data

* Extended attributes (ACLs, etc)

The topic is fine, but the framing of the question is terrible.

If I wanted to test a candidate's knowledge in this area I would probably ask: "Why isn't the filename stored in the inode?" -- this initiates an architectural discussion, rather than a poorly designed guessing game.

Guessing games in general are a red flag for the employer. They tend to indicate the interviewer isn't competent freely discussing the subjects at hand.


If your question has that "gotcha there is just one right answer"-feel to it, you are doing it wrong.

I had a teacher once who constantly asked questions like these in such an imprecise fashion there was no way anybody could have guessed how the question was even meant to be answered. I still cringe when I think about it, because the only purpose of these questions was to show us that he is really clever and we don't know shit — and it didn't work at all.


Yes, absolutely.

Personally, I have always approached interviews as an opportunity to either teach or learn. I pick a subject and drill down until either the interviewee reaches their limit of knowledge, or I do. Why is it like that? How does that work?

Then, we have a discussion. One (or both) of us is learning and we work out the whys and hows together. If I'm the one learning, and I hope that I am, I fact check the discussion after the interview. If not, I get a strong indicator not just for the technical level of the candidate but also how they operate at the edge of their comfort zone. I've found this can be a strong predictor of future growth.


> They tend to indicate the interviewer isn't competent freely discussing the subjects at hand.

Ding ding ding.

Even the question, as asked, was OK. The interaction with the interviewee wasn't.

OP's joke was clearly just asking the interviewer to be more specific. Instead of exploring the question with the interviewee (e.g. "well, can you think of something that one might naively assume to be in the inode, but that is not"), they get pissed? lolwat?


> * The file data

Except on some filesystems really small files can actually be stored directly within the inode, so even that's not always true.


Yes and that's true of ACLs as well which I think underscores my point: Questions should be an opener to dialogue. A topic, not a conclusion.

Discussion of these whys and hows and whens is the most valuable part of an interview and will more accurately illustrate depth and breadth than any number of fixed questions.


If I remember correctly, the name of the file is not in the inode. That allows a file to have multiple different names (hard links).


Google is notorious for this.

I have been headhunted by them multiple times in the past. The last time, their recruiter called me and explained to me that after the usual screening and HR calls I would need to pass multiple increasingly complex technical interview rounds - and then THEY would decide on the role and project I am best suited for (if at all). I flatly asked the guy whether they were being serious and turned him down.

To be fair to Google, though - their interviewing used to be first class. When I got my first call sometime in the early 2000s, the interviews were difficult but with competent interviewers and no BS scripted questions. But that was because it was done in-house; I was talking directly to some engineer in Palo Alto over the phone. Even the HR lady was actually pretty technical and was asking me rather pointed questions.

Later on they outsourced it to HR agencies and it became "the process", with people being called about irrelevant/uninteresting jobs (an entry-level sysadmin role in Ireland once - even the HR guy on the phone recognized it made no sense to call me about it ...) and the "we decide ..." at the end.

However, Google is by far not the only company doing it. Microsoft's hiring process was very similar, and Google's explicitly inspired many smaller or "less desirable" (to candidates at the time) companies to ape it, thinking it is somehow a good idea.

This seems to be pretty much the norm in the tech industry. Along with attempts to effectively have the candidate redo all comp-sci final exams during the interviews - because "credentials can't be trusted", as someone told me.

Give me a break. It is high time tech companies started treating (especially experienced) engineers with a bit of respect and dignity; we are not school kids anymore.


> even the HR guy on the phone recognized it makes no sense to call me over it ...

Well, he had a quota to fill so ¯\_(ツ)_/¯.

There are situations where the market can be spectacularly efficient. Outsourcing hiring to third parties is definitely not one of them.

In this case, big companies can afford broken interview process, because it's not their time that's being wasted, and the inertia of a big corporation can hide a lot of inefficiency.


Facebook had me down as someone who could fill their quota for first rounds; I've sent an email asking them not to do this twice now. It is quite funny, though, because every time I bring up NYC being the only possible location for me, the recruiter usually grumbles that it's "tough to get into NYC", and then I usually hear that the engineering manager "doesn't like that many people". I wonder how many people have been rejected from Facebook because one person in NY doesn't like them? That person might be amazing, but it still seems limiting. They do fine anyway, though; most of the hard parts were already done long ago at Facebook, I would suppose.


>credentials can't be trusted

As much as I dislike interviewing I totally agree with this statement. Almost none of my graduating classmates could write a fizz buzz and I wish I were kidding.
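For context, this is the entire exercise they couldn't do - a minimal sketch in Python:

```python
def fizzbuzz(n: int) -> str:
    """Classic screening question: Fizz for multiples of 3, Buzz for 5."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```

It's not a test of algorithmic skill, just of whether you can write a loop and a conditional at all, which is exactly why it's so damning when graduates can't do it.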


But think of how smart he felt after telling you the answer! That confidence boost alone was probably worth their time to interview you, mission accomplished.


LOL. He was obviously way smarter than me. He probably told his colleagues about how some dummy was talking about dinosaurs in an inode.

I feel the smartass part of my brain eternally dooms me...


This makes me think of the time I was interviewed by three bankers. I'd probably be rich and miserable had I answered non-sarcastically :/


The suspense is killing me. What was the question, and how did you answer?


Oh don't worry, it wasn't just one question but the whole interview. One I remember was how I would explain [insert random algorithm] to a board of decision makers; I made it sound like I was explaining to a 100-year-old how to turn on a computer.

I had no interest in ever "explaining" stuff to non-technical people at that time, all I cared about was the code. So I also trolled one of the guys interviewing me since I happened to be stuck with his legacy "code", which made me judge him and take him with absolutely no seriousness: he was one of those "cfa" people who learnt how to "code" a line of VBA and think they're Linus.

I also remember the face of the HR person who was like wtf the whole time and politely told me they had "chosen" another person for the job, I laughed inside and politely answered that I understood.

I recommend against this behaviour to past me anyways ;)


> I recommend against this behaviour to past me

We all have to feel our oats at some point … and then realize later how childish it was / how poorly we treated others … so we can be forgiving of those who come after us …


You have to say what happened


IMO asking gotcha questions like that to make yourself feel smart is just cringeworthy and silly.

Your goal when interviewing people should be to find things out about them — with gotcha questions you don't find out anything about them, but they find out something negative about you.


I don't think a (non-technical, mind you) sourcer gets that much confidence by reading out loud the answer key.


I don't think this was scripted; it's a typical sign of bad interviewers when they think that

1) they are smart

2) therefore if you're smart you have the same way of thinking

3) also you have the same knowledge (and gaps!) about irrelevant details

Therefore instead of checking your knowledge they check whether you're like them.

It's just amateurish.


It is also scripted. Google used to use internal people to do the screening, but not anymore; they have outsourced it to agencies. And a random recruiting-agency drone on the phone, paid minimum wage, can't be expected to ask sensible questions, so they get a script to choose questions from, along with the expected answers.


I can't believe that Google would outsource their hiring process, do they think they can keep their quality this way?


I doubt Google has outsourced their entire process.

But many recruiters work as independent contractors or in recruiting firms. They often do a minimal phone-screen, vs sending a candidate "cold" to the hiring company.


Have they kept their quality? Google does a few things really well, but that drops off very quickly when you look across their suite of offerings.


Nope, the inode question is part of the 50-ish long trivia quiz that a sourcer uses to figure out if a candidate is worth putting on the phone with an engineer.


This is pretty spot on


Downvoted??


The inode question is tricky. It is certainly not something an employer should take for granted you have memorized. But the answer can be reached by reasoning, at least if you know something about filesystems and the concept of hard links. Which you probably do if you have read the "ln" man page.

It is still a pretty poor question, I remember getting a similar kind of question when I passed through the (at the time, in Sweden) mandatory conscription testing, where you are tested for which role you best fit as a military conscript.

One question on the intelligence test was: "What is heaviest, gasoline or water?". I did not know how to approach the question; I had not memorized the density of gasoline and I didn't think an intelligence test could have assumed that either. I was stumped. How could that be an intelligence test question? Only afterwards did I realize that the test creators probably assumed that gasoline floating on water was common knowledge, and if you were intelligent enough you should be able to derive the answer from that. I think the inode question is similar.


> "What is heaviest, gasoline or water?"

It depends on the amount. Two pounds of gasoline are heavier than one pound of water.

Yes, I get that they are asking about density, not mass. My point is that tests need to be carefully written. I remember once during an early online screening at a multinational they asked something like

Alice went shopping in the morning and bought apples. What did she buy?

A) Apples. B) Oranges. C) All of the above. D) None of the above.

With the information provided, the only answer we can reject is D, because we know that she did in fact buy apples. However, it is possible that she also bought oranges (and/or something else), in which case answers B and C would also be correct. We don't have enough information.

Bear in mind this was a test for recently graduated software engineers, who are supposed to be trained in logic, so I spent a few seconds puzzled wondering why the question was worded so strangely.

I did answer A and moved on.


If looking for the best answer, you can rule out B but A vs C is still ambiguous.


Well, maybe it'd be better called an "aptitude test": if you don't know that gasoline is lighter than water, that means you've never handled it manually, so better not to put you into an auto mechanic position.


It was 2 days of various tests, resulting in a number of scores: physical, intelligence, psychological, etc. All together an aptitude test. I don't know why this one was called an intelligence test; it was a mix of questions, not all of this kind. I remember many were something like "circle the fourth word of the second sentence except if..."


If you ever carried a canister of gasoline, you'd notice it to be uncannily light. It may have been a practical experience question.


> canister....uncannily light...

I see what you did there


Note: not a good example, as oil, which is also hydrocarbon-based, does in fact float for a while, but not due to density.


Almost everybody with some Linux experience knows that the file name is not stored on the inode. It's not this part that makes the question bad.

What makes the question bad is that nobody can guess what answer he wants, and anything else, as correct as it may be, will be understood as a mistake (as is evident from the OP's answers, which were perfectly correct).

"What number am I thinking right now" may be a really strict question that fails your expected number of people, but it's not a good interview question.


My first Linux install was Red Hat 5.2, bought at a bookstore in ~1997. I have not always been a primary Linux user, but I have been using it in some capacity for over 20 years now - though primarily as a means to an end. In the bad old days I mucked around with drivers and X configs, and I have even recompiled a custom kernel on occasion. Never once has it come up that the filename is not stored in the inode.

I am actually just curious as to why this would ever even be a thing you come across unless dealing with filesystems directly?


You'd never need to know anything about it as a user. I don't know how anyone could seriously make the argument otherwise.

In fact the only reason I know anything about inode and dentry specifics is because they are 'very clever' interviewer favorites! I've been a professional UNIX admin for 15 years and I've dealt with inode issues literally once in my life lol


Thanks. It's comments like the GP's that make me wonder how I have avoided learning some very commonplace thing, and it's these types of comments that foment imposter syndrome.

I have even read books on Linux architecture and remember discussions about filesystems and inodes, and I remember the general structure and form, but a detail like what is and is not stored in the actual inode... seems like an absurd thing to memorize.

The only way I could see this being commonplace is if I somehow missed out on a widespread bug that somehow caused inodes to be corrupted and requiring manual intervention/surgery to prevent data loss.


It's easy to remember that inode = the stuff that shows up in 'ls -l'


Probably the most common case where this comes up is in the context of hard links: two (or more) files with different names that point to the same inode. I guess actual usage of hard links is rare enough that it doesn't come up all that often, but I'm actually surprised you didn't know this – not judging you, just something I thought most more experienced Linux/Unix users knew, but it seems not.

Hard links can be a somewhat notorious footgun because of this, by the way; with a soft link you know you're only deleting a link, but with "rm hard-link" it's a bit trickier: if you think there's another link but it turns out you made an error and there isn't, then you've lost that file. A "hard link" isn't really a thing on its own: it's just another reference to an inode. This is why symbolic links are used in most cases, but hard links are still used from time to time in e.g. /bin and some other places.
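If anyone wants to see the name-vs-inode split for themselves, here's a quick sketch in Python (throwaway temp files; assumes a Unix-style filesystem that supports hard links):

```python
import os
import tempfile

# The filename lives in the directory entry, not the inode:
# two names created via a hard link share a single inode number.
d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")
with open(a, "w") as f:
    f.write("hello")
os.link(a, b)  # hard link: a second name for the same inode
assert os.stat(a).st_ino == os.stat(b).st_ino
os.remove(a)   # removes one name; the data survives under the other
with open(b) as f:
    print(f.read())  # prints "hello"
```

Which is also exactly the footgun described above: removing `a.txt` only drops one reference, and the last `rm` is the one that actually frees the data.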


Google is infamous for jerking people around in the interview process. I had a very similar experience, as did several people I know.

They do it because they know they can get away with it, but I do wonder if they have a backup plan if the lustre of working for Google ever fades, because it's an open secret that their interview process is a fucking nightmare for no reason.


> fucking nightmare for no reason

Absolutely! To add, the amount of nit-picking that happens is staggering. I was downgraded from strong-hire to lean-hire because:

1. I did not use classes in Python. That problem could easily be solved using simple functions. The feedback I got was "candidate does not know idiomatic use of modules & classes"

2. I did not use one of python's standard lib functions and instead I coded it myself (I could not remember it at that instant)

3. I could not spot a scenario in the first 5-7 min of interview. I eventually spotted it and coded it well within the time limit.

Somehow I felt that I am supposed to feel grateful for lean-hire


> 1. I did not use classes in Python. That problem could easily be solved using simple functions. The feedback I got was "candidate does not know idiomatic use of modules & classes"

Apparently they missed this classic HN discussion:

https://news.ycombinator.com/item?id=3717715


> I could not spot a scenario in the first 5-7 min of interview.

This is endemic and part of a much wider malignancy in the tech interviews. Cram two medium-to-high difficulty questions in the span of 45 minutes and require the candidates to solve them both on the spot. In other words, you have at most 20 minutes to work out a complete solution to any given problem.

In practice that means that you need to come up with the correct base solution in the first 2-3 minutes, because there is no time to actually work through the problem.

I call these types of interviews Epiphany Lottery.


They’re IQ tests, it’s that simple.


> correct base solution in the first 2-3 minutes

Not only the correct solution, but you also have to generate alternatives to showcase what else you know to get strong-hire.

Example:

This problem was asked at Google: https://leetcode.com/problems/cat-and-mouse/

It is actually based on a paper [1]. Plus, it seems it is expected that one knows about alpha-beta pruning algorithms for such problems.

If you solve this using DFS ... it's basically gtfo

[1] https://www.semanticscholar.org/paper/Undirected-Cat-and-Mou...
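For anyone unfamiliar, alpha-beta pruning itself is small; here's a minimal sketch over a toy game tree (nested lists alternate max/min levels, leaves are payoffs for the maximizer) - nothing specific to the cat-and-mouse problem, just the general pattern interviewers seem to expect:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning over an explicit game tree."""
    if isinstance(node, (int, float)):  # leaf: its payoff for the maximizer
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:  # opponent already has a better option: prune
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

tree = [[3, 5], [2, [9, 1]], [0, 4]]
print(alphabeta(tree, True))  # prints 3
```

Coming up with this cold in the first few minutes of a 45-minute slot is exactly the "epiphany lottery" described above.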


[flagged]


When people say "grind leetcode" they don't mean it as a tool to get better at these kinds of problem solving. What they basically mean is that if you "grind leetcode" you have a better chance of being asked a question in these interviews that you have solved before and simply write down the solution during the interview.


They are memorization tests at best. At worst they test which candidates have been unemployed most recently so they can cram leetcode study.


> At worst they test which candidates have been unemployed most recently so they can cram leetcode study.

Or which ones are young and single with no responsibilities outside of work. Which is likely a feature, not a bug.


I mean that describes me. But if you don’t have the mental acuity and length of short term memory required to solve these problems…


If they're IQ tests, then why does doing a whole bunch of leetcode questions make me better at it? Isn't IQ supposed to be relatively stable throughout one's lifetime?


I've done hundreds and I have gotten better, but only to a point. At some level it's not pattern recognition anymore, just inspiration. And I'm not smart enough for that.

This one, for instance, is expected to be completed in 30 minutes. I spent 2 hours on it yesterday and failed most test cases. I'm a failure.

https://leetcode.com/problems/exam-room/


Do thousands then and get back to me. It's all patterns; this "inspiration" is just patterns you're not seeing yet.

It's true to some degree that you need some intelligence, but I have a few close friends at FAANG who are definitely not the best developers I've ever worked with; the only difference is they really, really cared about getting into FAANG. It was basically their life ambition, so they dedicated literally all their spare time to it.


I dedicated a lot of my spare time to it in college (and after) too. What did I get for it? I make just $200k a year at the least impressive FAANG and literally everyone considers me to be a mental defective.


> The feedback I got was "candidate does not know idiomatic use of modules & classes"

Many Googlers probably haven't read this blog post from some time ago [1]: "Python Is Not Java". I mentioned it at my first interview for a Python programmer job ~15 years ago, and I got hired (truth be told, the interview was for a small-ish startup, not for a behemoth like Google).

[1] https://dirtsimple.org/2004/12/python-is-not-java.html


Did you choose Python or were you asked to use Python by the interviewer?


I chose Python because its faster to code


It's quite a common pattern these days: I often see candidates who choose a language because they think it's better suited for interviews, not because they know it well (we leave the choice of programming language to the candidate).

Sometimes this works well, sometimes it really backfires on them. Coding is one of the key rubrics on which we assess software engineering candidates, and if the only signal I have is that they don't know their chosen language very well, it's hard to justify scoring that rubric highly.


The root comment is saying the interviewee knows the language well enough to consider one solution better than another solution.

And this was misinterpreted by the interviewer as not knowing the language.

An effective interviewer would have asked "Why are you doing it this way?" instead of assuming - wrongly - it was all the candidate knew.

This actually matters. Before you even get to coding skill you want people who can parse reality accurately, and not make incorrect assumptions about what's happening in front of them - either out of narcissism and arrogance, or because of poor communication skills, or because they're following a set process which is bureaucratic and inflexible and operates with a poor signal to noise ratio. (Among other possible reasons.)


I'm really well versed in Python, Golang & Rust. I will not choose Rust for interviews. For me, Python is much more productive in an interview setting.

But I get what you are saying though.


I didn't mean to imply that this applied to your case. Just a general observation (rather frustrating for someone like me, who wants candidates to do well but not that infrequently sees them being let down by their own choices).


That lustre has faded already.


Yeah... reminds me of a past coworker. He spent weeks writing weird comments in code, then writing weird code to read the comments from the actual source files. Then when deployments didn't work, he said it worked on his machine, so the deployment must be broken.

He was hired by FAANG less than a month later.


To solve the hiring problem that seems rampant in the industry, I propose for all resumes that get past the auto-filter, we just implement a lottery system. Maybe it'll be even more effective than the current method!


Mine (and the public's) growing awareness of surveillance adtech really has taken away from my desire to apply to Google or Facebook.


Restoring my faith in humanity ^^^


Yeah, it's probably been 3-4 years since I've heard anyone use the word "Xoogler" in a positive or unironic way.

Darn the wheel of the world! Why must it continually turn over?


I skip any and all interviews that require math, algorithms, or anything else. First of all, I'm an infra / devops engineer and not a full-time software developer. Secondly, I've never used this stuff.

And thirdly - because it's a proxy for intelligence by people who think they're smart for having memorized sorting algorithms.

Who wants to work in a place where people who memorize things are seen as smarter than people who didn't memorize the same thing at the same time (i.e. cramming before the interview, or fresh out of college)?

Fuck those places, I'd rather work with human beings.


Had a similar experience.

I was a recent grad, around 10 years ago. A Google recruiter called me out of the blue. After a couple of minutes, she started asking technical questions about Linux, which I had just started using.

After a while came the final blow: I was asked what was the fastest way to sum two integers (there were probably some additional specs I don't recall).

I mumbled something. Wrong! The answer is, in fact, using the TLB. I vaguely remembered what the TLB was from my CPU architecture classes, but was aghast at the connection between that and summing two integers.

The recruiter pretty much said I wasn't Google material on the spot and said goodbye. I felt bad for myself at the time, but now I can see this is a ridiculous approach to interviewing.


https://www.kalibrr.com/sites/default/files/featured_images/...

Every year, Google receives over one million resumes and applications. Only 4,000-6,000 applicants will actually be hired — that's less than a 1% hiring rate.

Given those odds it's not worth it.


These numbers are misleading af. Most of those applications are just spam, not really relevant, and never get read by any human. I've done ~150 interviews for backend roles at Google, and based on my experience 4 years ago, if you make it to onsite you have a 5-10% chance of getting an offer.


That confirms that we shouldn't bother if we value our time and energy.


Many people apply to all the FAANGs, since the interviewing skillset needed is similar. Moreover, you can reapply every six months. As a result, this gives you perhaps 5-10 lottery tickets every year, which means it's very feasible to get in within a year or two (or three).
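Back-of-the-envelope, with made-up but plausible numbers (a ~7% offer rate per onsite and 8 attempts a year are both assumptions here, not anyone's actual figures):

```python
# Chance of at least one offer in a year of independent attempts.
# p_offer and attempts are assumed numbers, purely illustrative.
p_offer = 0.07
attempts = 8
p_at_least_one = 1 - (1 - p_offer) ** attempts
print(f"{p_at_least_one:.0%}")  # prints "44%"
```

Of course this treats attempts as independent coin flips, which they aren't; if you're in the bottom half by skill, repeating the lottery doesn't help much, as noted downthread.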


In other words - don't bother to apply if you actually need a job. And if you have a job already, why would you bother to apply there? It used to be attractive but is it still? With all the scandals and upheaval in the recent years?


>And if you have a job already, why would you bother to apply there? It used to be attractive but is it still? With all the scandals and upheaval in the recent years?

$$$


The reason is that it doubles or triples regular SWE salary.


Eh, I've had two offers from Google that were far below the "market rate" I was receiving at some of the largest pre-IPO unicorns of the last decade, and their interview processes were substantially better.


"pre-IPO unicorn" is the same rarified air. We're comparing to average paying jobs most people have.


Going through 5 companies hoops for 2-3 years sounds like a full time job. There's no way I would have time to do that on top of work right now.


What if doing so could double or triple your salary? Would that change the calculus?


It doesn't change the number of hours in my day, so unlikely.


But it could cut down on the number of days you have to work before retiring.

For what it's worth I probably wouldn't want to move to SF and take a FAANG job either, but I can see why many people would


I think I may have given the impression that it's 10% by pure chance, which it most definitely isn't. There's certainly some randomness in the system, but if someone is in the bottom 50% by skill they can probably apply like 100 times and still fail every single one.


>Moreover, you can reapply every six months. In result, this gives you perhaps 5-10 lottery tickets every year,

You are aware of the saying that the lottery is a tax on stupidity?

I guess that is perhaps a little harsh, but only a little.

First off, I don't think I would want to work at any place where getting in there is represented as winning a lottery ticket. Think of the poor oompa-loompas enslaved in Willy Wonka's factory.

Second off, perhaps it's just my vantage as a consultant in Denmark but everything I've read about working in Google has made me think it doesn't seem very enticing.

Third off, even when I was not married with kids I would not have wanted to spend so much time twice a year! How many unpaid hours a year are people willing to work for a lottery ticket that essentially pays out a top-paying job?


Do you understand the difference between spending money on a negative-expected-value lottery and spending time developing your knowledge and skills?

Almost everything in life is a "lottery ticket" with some odds.


So the top range is 10% chance of getting an offer? And how many hours do you have to spend chasing 10%?


Somewhere between 0 and infinity? The questions I asked were extremely simple algorithmically (not anything posted online), and I didn't discount candidates for not knowing the algorithm, but about half of the people couldn't write ~20 lines of code after we'd already talked through it. Another 30% could, but the code was a total mess. I personally passed roughly 20%, of which about half passed the hiring committee.


If you get an onsite at most companies for an experienced position, the chance of getting hired is more like 50%. If I knew the chance was only 5%, I would decline the onsite interview at Google.


From my experience, 50% couldn't write code at all. So yeah, if you only look for the "can code" checkbox then you'd probably extend offers to half. Google chose to place the bar higher, and that's their right.


> if you make it to onsite

So those numbers are accurate and not misleading.


You've been interviewed 150 times to get in? Or are in and have interviewed 150 candidates?


Yes


It’s true, and the end result is you probably consider 90-95% of the people you interview to be morons right?


Sure but I never applied...they reached out to me. Hence why I didn't read their materials to interview. I've been happily employed the entire time.


And yet if they ask you for an answer to a quirky question like "why are manhole covers round?", they expect a clever answer like "they can't fall in", rather than a pedestrian answer like "well, probably they just started making them that shape for some simple production reason, and then everyone else just kinda of did the same thing because it worked."


I had the same question. I was pissed because it was another stupid question. My answer was: because it's easier for the Ninja Turtles to jump on.


What? Manhole covers aren't round. Silly Americans.

http://lh4.ggpht.com/_LWPSf1_ugFI/TA4tx-22QRI/AAAAAAAAFx0/T1...


They aren't even all round in the US. Where I live there is a high and rapidly growing number of data centers. In fact there is currently another large data center campus being built for the company famous for asking this question. Among other things this means installation of fiber optic cabling along the roads leading to the data centers. This in turn means the installation of a large number of manholes to service the cables. All the covers for these manholes are rectangular.


Manhole covers come in a variety of shapes. A non-ironic "why are manhole covers round" question speaks more to lack of life and/or travel experience than anything else.

There are triangular[1] and even more exotic-shaped[2] covers.

[1] https://www.reddit.com/r/mildlyinteresting/comments/g826ve/m...

[2] https://manhole.co.il/doSearch.asp?tp=114


> [2] https://manhole.co.il/doSearch.asp?tp=114

Holy crap, I never thought a website dedicated to manhole pictures was a thing. I love the human species :)


And look at that, it's got a hinge on it, so it can't fall in. And it can't be stolen or misplaced.


It's just an invitation to discuss curves of constant width, like the Reuleaux triangle, that has indeed been made into a man-hole cover.


Ah the famous Reuleaux triangle, something which 99% of software developers around the world deal with every day!


Lots of man-hole covers are rectangular though. Turns out it isn't very important at all to have manhole covers that don't fall in. They are quite heavy and don't roll around on their own, so having them fall somewhere really is of minor concern.


Yeah, but they don't sound as nice when you hit them with a hammer (even when you make them with a golden-ratio width/length so that they look nice).

The manhole cover question is more of a check to test whether the candidate belongs to the math-circle crowd where this is well known.

Even if they don't, it's then an opportunity to test whether the candidate can see the question behind the question.

With these sorts of open questions in an interview context you have to roll with it, otherwise it's seen as an unwillingness to play (sphericon) ball.


Hand-Rolling is a good reason to make them round.


Manhole covers are round because a circle minimizes the circumference that must be cut per unit of area of the opening.


Most probably, because manhole covers were/are made using cast iron, which were sand casted.

Mold for sand casting is easily done in lathe (turning a wood on lathe for precise shape is faster and more accurate than sawing).

So as one of the parent comment mentions - it is round because of production reasons.


My guess is it’s also a matter of practical usage: they’re made of heavy iron, and a round one will fall into place in whatever orientation it’s dropped over the opening, while a different shape needs to be carefully oriented. I suspect this also means fewer broken fingers.


But a round one is easier to steal: just roll it off. Whereas a square or any such shape would be very inconvenient to move.

[Edit in response to a question]

Some people have attempted to steal manhole covers in order to sell them for scrap metal. China Daily notes that there has also been a problem with taxi drivers removing manhole covers to "steal water and clean their vehicles". https://www.bbc.com/news/blogs-news-from-elsewhere-52400235


If you're going around stealing covers, you're probably driving a truck and carrying them in the truck bed.


Who steals iron to turn in for scrap? It's only worth about 10 cents a lb. And then you have to haul it to the scrap yard. It's heavy!

On the other hand, copper is $3/lb, brass $2/lb, aluminum is $0.50/lb


Read "Playing the Moldovans at Tennis". According to that book, most of the manhole covers had been stolen for scrap.


Why would someone steal a manhole cover?


If Google is asking about them on an interview they must be important.


It's a very common thing to steal. Lots of iron. That's why they use concrete-filled ones. That's why they are stamped, so a scrapyard can check them (or alert the authorities).


To sell them at the scrap yard. Gotta be careful driving in some countries; drive over an open manhole and you might crack an axle.


Isn't as easy to shatter in handling too, lacking stress concentrators.


My take is because the manhole is round.


The mouth of the 'hole' could easily be square; actually, making a square shape is perhaps easier, since laying concrete is easier in straight lines.

So, most probably, the shape of the cast iron cover (circular) drove the shape of the mouth of the hole.


... fell for it like a dude trying to get his try at opening a stuck jar lid.


In my area there are lots of square manhole covers. I have it on my personal to-do list to photograph almost all the older manhole covers from my city and geo-locate them at some point, I think it could be a good indicator of how my city grew and evolved ~100 years ago.



Ok but consider - is there any department at Microsoft other than marketing that Richard Feynman would have been suited for?


Richard Feynman oversaw the use and programming of the IBM computers in the Manhattan Project. Microsoft also has an R&D department that would probably just give him a budget and a team of assistants to see what he came up with.


I don’t know but I shudder at the thought of what state the world would have to be in for Feynman to be interviewing at Microsoft or any FAANG company for that matter.


Feynman made significant contributions to supercomputing.

https://longnow.org/essays/richard-feynman-connection-machin...

I'm guessing Feynman would've had a much better chance getting hired at old Microsoft, than through the current FAANG/wannabe interview process.


> "why are manhole covers round?"

Asked that question, I would have answered: "that's a trick question! they are not round!"

http://www.theromanpost.com/wp-content/uploads/2019/11/cropp...


A bad interviewer might expect a certain answer but a good interviewer will use it as an opportunity to examine the interviewee's critical thinking skills.

Are there any other reasons the cover would be round? It gives you an insight into their thought process as they come up with more ideas and explanations.


After failing two Google interviews, both triggered by Google HR reaching out to me directly, telling me how impressive my public information and CV were and that I really should be at Google, I told them to never ever contact me again.

My feedback was that, if the CV is so impressive, according to them, why do I have to endure such interview processes?

Anyway, I never bought into the whole "do no evil" marketing, nor would I have bothered to apply if it weren't for their HR reaching out to me in the first place.


I had the same experience, some internal recruiter reaching out that they were super impressed and wanted me for some special group, then a phone call where they never bothered to show up(!) so we rescheduled, then some stupid interview where they asked me to estimate the value of 2^10 or something and then told me they didn't want me for the special group after all. Complete waste of time, especially given I didn't really even want the job, they reached out to me. Taught me a lesson, at least: I'll never work at a big company, no salary is worth that kind of treatment of other people.


Their process may simply be optimized for their particular purposes. Google doesn't need more engineers, but there's a certain kind of hire they don't want to pass up on. No company really needs a genius rockstar coder that's going to quit after two years to become competition. That's why the inode question makes sense. It's a stupid question that deserves a stupid answer. If you're the kind of person that actually gives the stupid answer, you're clearly not a cultural fit. Google is looking for the person that disregards the stupidity of it, who has prepared themselves and gives the "expected" answer.


Google wants soldiers, and they can get them.


> The interviewer told me very matter of factly that it was in fact, the filename.

which is not even true in some file systems that can inline data directly into the inode, if the data is small enough.

for the downvoter(s):

> If the data of a file fits in the space allocated for pointers to the data, this space can conveniently be used. For example, ext2 and its successors store the data of symlinks (typically file names) in this way if the data is no more than 60 bytes ("fast symbolic links")

> Ext4 has a file system option called inline_data that allows ext4 to perform inlining if enabled during file system creation. Because an inode's size is limited, this only works for very small files


> the data of symlinks (typically file names)

This means the name of the file the symlink points to. It's not the file name of the symlink itself.


true

but they are both paths in the end


I've never seen "the file name" used to describe "name of a symlink".


I had a similar experience with Proton's recruitment. They sent me a dumb IQ test with stupid questions like (how many triangles are in this triangle). I answered with "Way too many", and that was the end for me


>What's not in a Linux inode?

Wow, even if you were hiring filesystem experts to write filesystems I think there are a million better ways to ask that question.

e.g.: "Hey, talk me through the design of a really basic filesystem, it needs to support hardlinks, symlinks, files and directories."


I'm not experienced in tech interviews nor knowledgeable about inodes ... but your question and the interviewer's question seem like they're functionally equivalent. They presumably don't just want you to say "names", they're expecting you to talk about what is in a Linux inode, what a directory is, etc., and they can drop in further questions to prompt you if you don't "talk me through the design of a basic filesystem".

If the question is too vacuous surely you ask for clarification -- "well, what happens when you issue the 'mv' command" -- and presumably if you designed file systems you can talk about different implementations, optimisations, and problems crossing filesystem boundaries or whatever.


I think there are multiple things wrong with the original question:

- It's unclear what they're trying to judge. Do they want to know if someone understands filesystems on an intimate level? Do they want to know if someone knows how hardlinks work? Do they want to know if someone knows what fstat returns? Are they trying to trick someone since an inode in many contexts is just a number? This is an unnecessary level of uncertainty for a question and encourages random tangents which may not be the answer the interviewer is looking for or cares about.

- Someone who has studied google's standard questions which this apparently may be one of [1] would be able to answer this without understanding the implications, what does that tell you about the candidate?

- If you want the candidate to go into a tangent about filesystems in order to figure out how well they understand them, this is as mentioned before a really stupidly open ended question.

- If someone did understand you were talking about inodes, they may be of the opinion that the inode contains a reference to the file's contents and as such contains the file contents. In the case of a directory this means that an inode "contains" the file names of files in that directory. This makes the question a leading question which is trying to lead to what would be a wrong answer from that person's point of view. How do you answer a leading question when you disagree with what it's trying to lead you to?

If you want to determine if someone understands filesystems on an intimate level. I think my question would be far better at elucidating that.

If you want to understand if someone knows how hardlinks work, I can't come up with a question off the top of my head but I doubt that "what is not in an inode" is anywhere close to the best one.

If you want to trick someone, go ahead, this is a good question. Likewise if you want to screen for people who have googled "Google SRE Interview Questions" and memorised the answers.

[1] http://www.gwan.com/blog/20160405.html ( https://news.ycombinator.com/item?id=12701272 )

Regarding the above reference:

I've spoken to someone who interviewed for an SRE position and she said the questions were very similar to an early phone interview she had. She said the interviewer did not know what they were talking about and were just going off an answer sheet. So I don't think the interview in this case is representative of one for a Director of Engineering but is representative of an early screening interview for SRE. In which case anything BUT the expected answer to the question "What is not in an inode" would be considered incorrect.


It's cruel and inhuman, but if you're Google, there's some cold rationality behind this. I think the process goes for them like this: There's 100 people applying for any given position, 10 of which might be good hires. It's reasonable from their point of view to subject the interviewees to a process that culls 83 bad ones, and 7 good ones so that only 10 people need to be interviewed by expensive on-staff engineers, even though it looks like madness from the outside.


The rationality doesn't reflect the reality of Google being unable to innovate and continually competing with itself. Will another engineer from the same cohort who knows the same trivia and has the same education be able to shake things up?

What happens to the 3 that get rejected, do they try again later and filter through? Or do they tire of the process, give up, and work for competitors?


Have you seen their earnings, they're in a monopoly position, none of this matters any more.


When you're in such a dominant position they don't care about people being able to shake things up


Neither did IBM, GE, Oracle.


I'm not saying they're right to not care, but they don't


It's definitely questionable whether the process retains many good engineers. That said, I suppose it works as long as it culls more bad engineers than good engineers.


> I suppose it works as long as it culls more bad engineers than good engineers.

That's a pretty low bar. A Fizzbuzz test culls more bad engineers than good ones.
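For anyone who hasn't seen it, FizzBuzz is the canonical "can you code at all" screen. A minimal Python version (the exact spec varies; this assumes the common 3/5/15 rules):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:        # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
# ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz',
#  '11', 'Fizz', '13', '14', 'FizzBuzz']
```

The point of the test is precisely that it's this trivial: it filters out candidates who can't write a loop at all, while costing competent candidates almost nothing.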


I remember getting one like that - Googler demanded some random value from a system .h file. Easy enough to grep if necessary, but _no_ it had to be from memory to make one Googley.


Google interviewers must hate Einstein. He said, "Never memorize something that you can look up."


If you need to look up something enough times, it'll stick anyways. But still, don't go out of your way to memorize something.


That's one of the rules I always give to junior devs. Just read the docs again for what you are doing. You may 'know' them perfectly, but sometimes there is an extra 'oh, if you are doing this, do this other thing too'. Or worse, the call is similar to some other call but does something slightly weird. It does not come up often, but it has caught me out a few times. It never hurts to re-read. You can mostly get away with skipping it, but sometimes...

A more human anecdote for example I have movies I know I have seen dozens of times. Yet my recall of what happened in them is not as good as it used to be as the last time I watched them was a few years ago.


If the meta-interview system has even a modicum of sanity in its design, that question will be discounted when none of the candidates get it.


This sounds dreadful. On the positive side, I had very enjoyable interview rounds at places like Goldman Sachs and Lehman Brothers (obviously this was a few years ago): 3 or 4 hour-long-plus interviews, before or after work. These tested what I knew and how I would approach things, and gave me a very good view of the people who worked there, from more junior devs to architects and management, in the teams I would be working in and those around them.

The best interviews aren't scripted but are built around the interviewer's expertise and based on the skills and qualities needed for the role.


Quite honestly, I am torn. The best jobs I've had so far involved multiple interviews (Amazon with a total of 6 one-hour interviews, and my current employer with 7; the jury is still out on my current job one month in), while the worst involved either one or two interviews, or multiple but very easy and yet unpleasant ones. Scripted questions suck; Amazon in my case was good, as there was only one scripted logic question and the rest were STAR-style questions centered on the leadership principles.

Multiple rounds give you more opportunities to ask questions to multiple people and get a better idea of the company. They can also be a pain in the ass.


Exactly that. I often interview people as part of the team. We have normally one HR person, the team lead and one of the team present.

More often than not I give my spot to a more junior team member if they are available. So that they learn something from watching the interview as well as "just" see if the person interviewed would be a cultural fit. If they could imagine working with them.


Now, the inode question is part of the pre-screen. That's normally _before_ you get put on phone with any engineers. How the heck did you get it in "round 3 or 4"? Was your process started from scratch for some reason?

The question is meant to be asked by a sourcer - a contractor whose list of job requirements does not include being able to answer any of the questions they ask. Famously, even if you know the answer pretty well - say, because you were the original creator of the thing - but answer in a way that they can't match to the answer key, you still fail.


>Famously even if you know the answer pretty well, say because you were the original creator of the thing, but answer in a way that they can't link to the answer key, you still fail.

Which is the most ridiculous part of it. You are being examined and evaluated over trivia in a subject that the examiner likely has no idea about and where there are only 1-2 possible "correct" answers.

That's like sending a janitor to "source" surgeons by asking questions to people in white coats in a competing hospital about appendectomy.

Yet this is considered effective (and acceptable) somehow ...


What alternative would be more efficient and acceptable? A typical team of 6-8 replaces two persons per year. The number of resumes per opening is measured in thousands.


> A typical team of 6-8 replaces two persons per year.

How is this fact not seen as a complete failure of hiring practices or company/team leadership?


Internal mobility is strongly encouraged. People typically move after promotion, which is expected after 2 years. Some people don't like to move, bringing it down to about 2/year. All working as intended. Then, people moving out tend to want to join new teams, for the career growth opportunity, while you want new hires in established teams.

Note: that's anecdotal from a couple years of observation. I haven't looked into any Google-wide statistics (and if I did I would not blabber about them). But I'd be surprised if this was way off.


In an industry where the median tenure for a SWE is ~2.5 years, that's "doing no worse than the industry average".

Turns out upper management rates predictability quite high.


Are you saying a company with the resources of Google, which expects all of its employees to answer trivia and whiteboard answers in interviews - all of that brainpower and money can't invent a better process?


I don't think it's a matter of impossibility. The current process is simply a local optimum. Most practicing software engineers can pass the trivia quiz. Most time wasters can't, unless they googled for the answer key. The result is a pool of candidates that usually can understand a real interview question an engineer will ask. The system works, at the price of being annoying.


If there's only a single correct answer that the interviewer doesn't understand and an expert might get wrong, it's people who google the answers that have the best chance of passing.

I tend to ask open-ended questions. Let them talk about code, and I can probably figure out whether they're full of shit or not. And a real coding challenge proves better whether someone can code than any code question or whiteboard challenge.


But you're an engineer, aren't you? Do you think you could figure out if someone is bsing you about their experience in mRNA manipulation?


No, but that's why you need to let people's skills be assessed by people who are able to assess those skills.


Having too many resources tends to lead to sub-optimal processes. Just look at the Joint Strike Fighter program.


If you want to mechanically apply a trivia test, then at least make it multiple choice.

You will still be choosing people through a mechanical procedure, which is quite stupid, but it's less stupid than doing it that way.


If the applicant has graduated from a university, technical school, whatever, then look at the records from their exams there. Also taking into account the average level of the university, technical school in question.


That'd be really unfair on people who were busy e.g. gaining actual work experience


Wait, seriously? That's an absurd screening question.


> Famously even if you know the answer pretty well,

Everything above would make sense if the question was "what is an inode?". But it was "what is an inode not?". That wording doesn't make much sense to ask anyone, no matter how you spin it.


> the entire question was: What's not in a Linux inode?

Were they considering you as an experienced developer of a Unix/Posix filesystem, who would almost certainly know what an inode is?

Or were they considering you as someone who had been using a Unix so long and extensively that you had a chance of once having had to configure an older filesystem for huge numbers of inodes. Or a chance that, for some rare reason, you once had occasion to learn that the `ln` command isn't always used in conjunction with `-s`?

In either of those cases, you might then guess at what the question was getting at, in a slightly clever way. If you got it, then both of you could share a bonding moment of both knowing this thing not everyone knows, and it could be a quick warmup to better questions.

Or, if you didn't get it, you could feel thrown off, and insecure or negged, which is also a win for an evil interview process.

If I'm feeling punchy at 3am, an alternative theory is that they could be a recent CS grad, who'd had a class in which they were told to recite, after the professor, "What's not in a Linux inode is the filename," and that's what they think is important about that. (By way of introducing the separation between inode and directory entry in Unix, through catchphrase and rote memorization. Other professors would probably instead explain using a diagram or the code for structs.) (Theory variation: perhaps the only professor who ever said this phrase was at Stanford, which would make calling for it an especially transparent shibboleth for frat-like alumni affinity, and an overly-precise filter for socioeconomic class.)


I once cost my employer a bunch of valuable data due to a bug in my code that the second I discovered the data was gone I knew why it was gone because I understand inodes and their relationship to filenames.

That's to say it's information that is useful to have in your head if you're an SRE, just part of understanding Linux fundamentals.

That's not to say it was a good question. Despite my encounter with that bug in my code I still wouldn't have answered the question correctly. Filenames point to inodes, inodes point to data. The word filename might not even pop into my brain when thinking of inodes in isolation.

Also it doesn't show the full picture, I'm not at all sure I have what it takes to be an SRE let alone one at Google, that I sort of know what an inode is says nothing about if I can properly use that information to make architectural decisions.
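The name/inode split is easy to demonstrate from userspace. A small Python sketch, assuming a POSIX-style filesystem (the filenames here are made up for illustration): two hard links are two directory entries for the same inode, and renaming a file changes the directory, not the inode.

```python
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
with open(a, "w") as f:
    f.write("hello")

b = os.path.join(d, "b.txt")
os.link(a, b)                      # hard link: a second directory entry

st_a, st_b = os.stat(a), os.stat(b)
print(st_a.st_ino == st_b.st_ino)  # True: same inode, two names
print(st_a.st_nlink)               # 2: the inode counts its links, not names

c = os.path.join(d, "c.txt")
os.rename(a, c)                    # rename edits the directory entry only
print(os.stat(c).st_ino == st_b.st_ino)  # True: inode number unchanged
```

Which also explains the bug class above: unlinking the last name for an inode frees the data, even though no program ever touched "the file" through another name.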


It was probably an opener for discussion. Assume the best about the interviewer for a minute. What is the best possible interpretation of where they wanted to go with the question?


The fact that the interviewer didn't get the joke and immediately divulged "the answer" makes it hard to assume the best here. If they were really trying to open a dialogue, I'd expect a chuckle and maybe some clarification (like, what _is_ in an inode?)


You may well be right. I'm reading between the lines and could be wrong - but I'd guess the interviewee gave those answers in a frustrated tone. This specific interviewer could have just been a jerk.

There does, however, seem to be a lot of criticism of FAANG interviews that comes across as immature.