
Some time ago, I was contacted by Apple to apply for a job. My code is insanely well-documented. I like to think that a lot of the inspiration for my code docs comes from Apple's open codebases. Their code is exceptionally well-documented.

In any case, as is usual with all employers, these days, they completely ignored the focused, relevant links that I sent them to elements of my extensive portfolio of repos, and, instead, based the entire interview on a 50-line binary tree test in Swift.

I'll make it clear that I'm NOT a fan of these. I am mediocre, at best, at them, as I don't come from a traditional CS background (I started as an EE).

In any case, during the test, I did what I always do when I write code. I stopped to write a header document for the function.

This was clearly not something the tester liked. Also, to add insult to injury, they dinged me for not writing a cascaded nil-coalescing operator. The code they wanted me to write was difficult to understand, and absolutely not one bit faster.

What makes this even funnier, was that this was for an Objective-C job, and the links that I sent them (that they ignored), were to ObjC repos.

After that, I just gave up on them. It kind of shows where a lot of this is coming from.

Dynamically generated documentation can be great (it's clear that the lion's share of Apple's developer documentation is dynamically generated), but it requires a VERY disciplined coding approach. I suspect that they may be hiring less-disciplined engineers, these days.




I used to think Google and Apple were making a poor choice by doing this kind of interview, but I wonder now if it gets the results they want: they want interchangeable cogs that don't stand out too much. They want known quantities. If you're "the documentation guy" or "the security-testing girl", you've wasted time learning skills they won't utilize, since they want you to be exactly like every other developer. If documentation and security are going to happen at the company, a dedicated team will be formed for that, with monthly reports on key metrics going up the chain to management. That's how big corporations work, from what I've seen.


HR wants the cogs that are flexible because they can cram them in somewhere whenever the company wants to spin down a team and move the personnel elsewhere. This is great for big companies that just need warm bodies to crank out crap, but it's an inefficient process when you look at the lower level. You have a larger team of people that are less familiar with the task that a smaller team of experts could handle better and faster. If you can handle the schedule and direct cost hit of a larger, less specialized team, it reduces the later costs of needing to lay off people and hire new people. You can just move these warm bodies around somewhere else. A small specialized team is going to be a bit riskier, since once the job is done, are you going to have a need for these experts? Would they want to stick around doing something they never wanted to do? If a specific field is your core business, hiring experts is critical when starting up. But once you're a massive corp, you really just need boots on the ground to handle the mundane. College fresh-outs are usually the kinds of people that get drawn into these warm-body kinds of jobs. They want blank slates that can crank out code, documentation, or whatever the pointy hairs need. If you really do consider yourself an expert, consider a consulting/contractor gig where you come in, do the specific job they really need, and leave with a bucket of cash.


Most engineers at Apple are assigned to one (small) team, including new college graduates.


Before you throw Google and Apple into the same bucket, consider this: There is no unified job description for a Software Engineer at Apple, and it will often vary depending on which VP or even Director you report to. At Google, there is a ladder of specific milestones that you must meet, agreed upon by committee and used throughout the entire company.

At Google, you almost never know which team you will land in before you interview. At Apple, you probably know who your future manager will be before you interview.

(Source: I have worked at both companies.)


I was at a place that had been around for 20+ years, but needed to grow fairly quickly. So they systematized their hiring process. The focus was on generalists, partly because a lot of their technology was written in-house (it predated the commodity, off-the-shelf solutions; it used the same theory as the off-the-shelf solutions, but often with a different name or a slight difference). But the major reason they wanted generalists, like you say, was so they could move people around as needed.

Much of the industry was either specialists in their studies, or specialists in that they knew a specific software package. It was interesting to interview people who had very in-depth knowledge of one thing, but who were completely oblivious to even the basics of other areas.


You’re right, and it can get worse. Certain focused industries such as financial services employ what they call “subject matter experts” aka “SMEs”. Turns out most are “solution matter experts”, as in, they only know how the particular wonky solution works as installed at their company, unchanged since it was installed 10-15 years ago.


Do any specific interviews come to mind? I'm curious what sort of questions you asked, since I am a specialist :)


This was all about 10 years ago, and it definitely lifted some of the trends from interviewing at places like Microsoft, Google, etc. There were 3 areas: technical, personality, and critical thinking. The criticism I noticed from interviewees grew the further down that list you went. The job involved user support, which was the personality portion (it wasn't just "team fit"). Critical thinking had some of those abstract, problem-solving questions tech companies were notorious for, but also had a mix of actual issues encountered.

Here's my defense for the critical thinking questions, since I know it's a contentious topic here: I'm evaluating how they assess and troubleshoot something. Often, actual technical problems get caught up in the minutiae or domain-specific parts. So abstracting it, and even removing all of the tech, keeps people from getting hung up on that. I hate "trick" questions, but sometimes would ask one to see if they were thinking towards optimization or non-traditional approaches. It's actually deflating if they already know the answer. I never cared if they got a result, and I'd often move on if it took too long.

I felt the most criticism towards the technical questions. Many of them were more technical than really ever came up on the job. I think it's good to find the limits of a candidate's knowledge, but not ding them for it. An example would be doing stuff with pointers when the job is 100% in a scripting language. Sure, ask them if they know the basics of pointers; maybe once in 5 years they'll troubleshoot a bug that needs that obscure knowledge, but I would hope that would get triaged and passed around instead of expecting everyone to deal with the 1% case.


Yeah, I have come to a similar conclusion - the labor market for software developers in the bay area in a sense becomes highly liquid because of similar job requirements, experiences and interviews. Heck, I was told by an FB interviewer to basically go through leetcode “hard” problems to prepare for the interview.


Leetcode is just a game that you have to play to get into these companies. I suspect that OP had other interview rounds that went poorly. There is always a chance of getting a bad interviewer, but it doesn’t often happen in every single round.


I never understand this kind of comment. Is there a reason why OP should have bad intentions when writing the comment? The article is about Apple not providing sufficient documentation for their APIs, and OP wrote about an experience that supports reasoning about why Apple seemingly is not focusing on delivering documentation.

So how about giving reasonable arguments for why OP probably had rounds that went poorly and is salty about it, and why Apple has a wonderful software developer culture that focuses on quality, instead of assumptions? Apple has had problems with documentation for years; one example I had to deal with recently is their maps API. The MapKit programming guide is still in Objective-C, with an old design, and outdated.


> The article is about Apple not providing sufficient documentation for their APIs and OP wrote about a experience that supports reasoning why Apple seemingly is not focusing on delivering documentation.

FWIW, I don't see how "I tried to add documentation in my interview and my interviewer didn't like it" translates to "everyone at Apple hates documentation".

> The MapKit programming guide is still in Objective-C, old design and outdated.

Objective-C guides are not inherently old and outdated.


> FWIW, I don't see how "I tried to add documentation in my interview and my interviewer didn't like it" translates to "everyone at Apple hates documentation".

You're absolutely right and this could have been an argument made by trangon, instead of an assumption about bad intentions by the OP. The interview story is just an anecdote in the end.

> Objective-C guides are not inherently old and outdated.

I agree in part: they're not outdated in terms of the facts presented (although I wonder whether MapKit really hasn't had any changes since October 2016). What I meant is outdated in the sense that Apple is pushing Swift but doesn't provide a programming guide in that language for this particular topic. That still supports my point about Apple having problems with documenting their APIs. SwiftUI is another example.


I think it's bullshit that "Leetcode" is a game you have to play. I've been in the software business for 25 years and not once had to use any of that knowledge in my work. When those games come up in interviews with me, I call them out; if they hold fast, I walk. I don't need to have my time wasted. My experience should speak for itself.


In several career-focused communities where the majority of the readership is people with less than 5 years of experience (and often seeking their first job)... where the focus is on Big N positions...

there is a common answer of “study leetcode”. No other advice, just the belief and assertion that all you need to get one of those jobs is to score higher than the other people.

This idea unfortunately perpetuates itself as one person advises another, and so on.

The number of leetcode questions solved is used in the inevitable male appendage sizing contests.


Realistically only the recruiters looked at the repos/portfolio you provided, if at all -- that's to get you through the door. Once you get on-site interviewers tend not to even look at your resume. This can be good or bad, as it eliminates a lot of bias based on education, background, etc. An interviewer has a question that's well calibrated, that they've seen people answer hundreds of times. There's a simple answer, a better answer and an "if you can teach me something I'll just hire you" answer.

If you spend time doing extraneous things like documentation during a coding challenge you likely won't do well, not because documentation isn't valued but rather because you won't have time to move from the "simplest" answer to the "best" answer, let alone the "teach me something" answer. This may frustrate your interviewers because they're seeing you spend your time in a way that's not advantageous to you. They may even be more frustrated if they actually liked you as a candidate.

Tests are different because they often help you move more quickly and confidently between layers of the question, as you can sanity check your revised implementation against your tests.


Speaking only for myself, before I interview someone, I definitely look at their resume and prepare a few questions about it. I also look at github accounts if they were mentioned in the resume, but I tend to downweight them for a number of reasons:

1) You can't always be 100% sure the code was actually written by the candidate.

2) One quality I'm looking for in particular is somebody who can work well with OTHER PEOPLE'S CODE, and that's even harder to evaluate. If I see a balance of original repos and forks, that might be some evidence, but again, how do I dig out what they contributed? Chasing down PRs would be more helpful, but that definitely is too much work for a job interview.

3) Having a busy github account could be a marker of particular life circumstances (to avoid waving the red flag of "privilege" around too much) — the candidate is not overly busy working a second job, raising young children, or giving 120% in his day job.

4) There are plenty of respectable reasons not to code in your spare time — you may have different hobbies etc. When hiring pathologists, we don't give preference to candidates who cut up bodies in their spare time. We should get out of the habit of doing so for programmers.

5) A busy github account can also be a warning sign of somebody who might be more invested in side projects (or possibly even bootstrapping a startup) than in their day job.


I feel like I get a lot of this signal with much less noise by asking someone in the moment to “tell me about something you’ve built.” While you can lie on a resume it’s much harder to lie about that on the spot but it should be really easy for you to tell me about something you cared enough about to make.


Agree, and one of the reasons I skim github repos is to ask questions about some of the projects.


> I'll make it clear that I'm NOT a fan of these. I am mediocre, at best, at them, as I don't come from a traditional CS background (I started as an EE).

I come from a CS background and struggle with these too.

The big problem I have is that this performance stuff is often barely relevant to the position. You need to know the principle of a BST, but unless you're building a database or working in a unique scenario, actually traversing one is going to be done by some library.
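
For anyone who hasn't sat through one of these: the exercise usually boils down to something like the following minimal sketch in Swift (purely illustrative, with made-up names; not the actual question from the interview above). Knowing the principle is cheap; whether hand-rolling it on a whiteboard predicts anything useful is the debatable part.

    // Minimal illustrative sketch only; names are made up.
    final class BSTNode<T: Comparable> {
        var value: T
        var left: BSTNode?
        var right: BSTNode?

        init(_ value: T) { self.value = value }

        // Insert while keeping the ordering invariant (smaller values to the left).
        func insert(_ newValue: T) {
            if newValue < value {
                if let left = left { left.insert(newValue) } else { left = BSTNode(newValue) }
            } else {
                if let right = right { right.insert(newValue) } else { right = BSTNode(newValue) }
            }
        }

        // In-order traversal visits values in sorted order.
        func inOrder(_ visit: (T) -> Void) {
            left?.inOrder(visit)
            visit(value)
            right?.inOrder(visit)
        }
    }

    let root = BSTNode(8)
    [3, 10, 1, 6, 14].forEach { root.insert($0) }
    root.inOrder { print($0) }   // 1, 3, 6, 8, 10, 14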


They already know your documentation style based on the portfolio. They are testing your grasp of language fundamentals and various features, and how you can use them to solve a simple and common problem.

Writing documentation headers in an interview setting should only happen if they explicitly state that you should write them, or after you ask the interviewer if you should be writing them.


They didn't look at the portfolio.


As someone administering these kind of questions regularly, the majority of interviewers are aware of the shortcomings. There's a kind of 'it's the worst form of interviewing except for all those other forms that have been tried' attitude prevailing. Every other method has problems that are usually worse than those associated with doing abstract problems like implementing a BST.

From what I've seen, interviewing is evolving and focusing gradually more on realistic, practically-oriented coding, and less on Leetcode-style questions, especially as people are learning to game those.


It's not relevant to the position, but 'coding on a whiteboard' is a legal sorting function that will reject a lot of truly awful candidates (You pretty much can't bullshit your way through it. You may end up being an awful hire for other reasons, like your personality, but that's another story.)

That it also rejects some good candidates is fine, there's always more candidates for FAANG jobs. (Few competitors pay FAANG salaries.)


That's fair, but these companies also claim it is hard to find developers and that more H-1Bs are needed. Hard to have it both ways.

I feel like there should be a course for passing these exams run by ex-FAANG employees. Perhaps a bootcamp.


1. There are plenty of websites which will let you grind away at FAANG interview questions. If you have a few weeks of time to waste on gaming this signal, you'll be coding self-balancing trees in your sleep... And have no issues with passing an interview for at least one of these firms. Depending on your current salary, this may be a much better ROI than your entire undergrad degree was.

2. Some of these firms occasionally host coaching sessions for referrals. Ask your friends if one is currently running.

3. "You don't need H1Bs, just lower your hiring bar" is a criticism that is a bit less applicable to firms with a high hiring bar, than to firms with a low hiring bar. If you are, in fact, talent-limited, an H1B will actually cost you more than a local worker. (Lawyers, immigration sponsorship, uncertainty in whether the visa will actually arrive, aren't cheap - and FAANG does not have a reputation for exploiting their H1B hires.)


> There are plenty of websites which will let you grind away at FAANG interview questions. If you have a few weeks of time to waste on gaming this signal, you'll be coding self-balancing trees in your sleep.

I'd like to practice this, any recommendations?


Leetcode, if you want to pay for UX.

If you don't want to pay, just get the list of former questions from the internet. [1]

Many of those questions may no longer be asked, but if you can ace them, you should have few problems with similar questions/variants thereof.

This is not a skillset that develops from doing your day job. You have to deliberately practice to get good at this.

[1] https://www.interviewcake.com/google-interview-questions


It's a game. You win, you get a job. It's worth spending a few weeks and just banging out BSTs every night until you're good at it. Engineers with years of industry experience tend to fall flat in these interviews because they haven't written BSTs in years and don't think they should have to in an interview. It's not about should, though, it's about whether you want the job. Change the system from the inside.


Thankfully in many regions we still have the choice not to go through hiring games.


I agree with you that in an ideal world there'd be a way for you to demonstrate your prowess without playing these games. However, in the real world, I want a job, so I'm going to play. It's not like I don't solve hard engineering puzzles for fun on my own time anyways. Framing matters a lot!

Here's what I like about interviews:

(1) I get to talk about myself and how great I am without people rolling their eyes at me.

(2) I get to solve a fun puzzle with:

(2)(a) A defined answer.

(2)(b) That I never have to maintain.

(2)(c) With a potential financial windfall at the end.


Seriously though, in Europe the algorithmic-style interview is not the norm, by and large.


Interesting, what's the process there like?


Plain old-style interviews: you get through HR, do a couple of interview rounds with people from different departments, and discuss with them several subjects somehow related to work, positive and negative experiences across your career, what you would do if X happens, and so on.

If you happen to do coding exercises at all, they are just basic tasks like implement a linked list data structure, count the number of words on a file and such.

Answer some trivia about the language you are supposed to use, e.g. when should you use a struct in C#.

Luckily, only startups over here are somehow into SV hiring culture, which still leaves plenty of others with job offers available.


>You need to know the principle of a BST. Unless you're building a database working in a unique scenario, actually traversing it going to be done by some library.

Looks like you got stung by the confusion between binary tree and B-Tree (and B+Trees).

1. The b in btree isn't binary.

2. Databases use B+Trees.

3. binary trees are almost worthless.


> 3. binary trees are almost worthless.

Interval trees reduce to Binary Search Trees when the nodes can't overlap. Most markup DOMs (in browsers, WYSIWYG editors, etc.) are held as an Interval-tree ADT which is actually implemented by a BST.

Also, Red-Black trees specifically are used in many database clients to implement an in-memory write-back cache of a database table, as the combination of performance requirements for flushing batches of writes to the DB, and reading back arbitrary keys that were written to the cache, make these BSTs nearly optimal for this use-case.

But yes, DBMSes themselves don't tend to use BSTs for much. Maybe as a query-plan-AST term-rewriting-phase ADT.


>Interval trees reduce to Binary Search Trees when the nodes can't overlap.

Interval trees are degenerate R-Trees which can be implemented using GiST which is itself a btree.

>Also, Red-Black trees specifically are used in many database clients to implement an in-memory write-back cache of a database table, as the combination of performance requirements for flushing batches of writes to the DB, and reading back arbitrary keys that were written to the cache, make these BSTs nearly optimal for this use-case.

Got a source? I'd like to read up on this use case.

>But yes, DBMSes themselves don't tend to use BSTs for much. Maybe as a query-plan-AST term-rewriting-phase ADT.

Postgres' source does indeed have a red black tree in it:

https://doxygen.postgresql.org/structRBTree.html


I wonder if the merit of these exercises is from reading the candidate's reaction to meaningless and trivial tasks. If they take it with a smile, or better yet if they spent part of their free time studying it in detail, they will make a terrific cog in the machine...


To be honest, as a part of a hiring procedure, it has its merits. In an ideal world, you want to ask the candidates a problem that is well defined and applies the tools that they might have learned in their past. This is why algorithmic problems are common - a large pool of candidates have recently finished school, they have a lot of fundamental knowledge from their experience there (and nothing else), and asking these is a way to evaluate [ how well is this problem approached ] with [ tools they should have at their disposal ]. It's more about whether you can understand what you have learnt and use it to solve a novel thing, and less about whether you can balance a binary tree given 100 integer inputs. It's largely devolved into a cat-and-mouse game now, with candidates spending a large part of their time poring over CLRS or on Leetcode, and companies asking increasingly difficult whiteboard questions (that serve no purpose but to test how much time was spent practising).


Where this falls down is when you're hiring someone who has 10 or 15 years of experience and has forgotten more about day-to-day product development than the freshers have learned about algorithms. So you're asking that guy questions that the freshers have fresh in their minds because it's all they know, and it's totally unsurprising that he fails because he hasn't ever used that knowledge in a work setting. You do need to tailor your interviews to the person's experience level. If you're hiring for experience, ask that person about process and about team forming and about mentoring and leadership. You don't have to be able to write a perfect algorithm as a senior, only to help your juniors over the hurdles keeping them from doing it themselves.


Part of it might also be a way around age discrimination, as those relatively fresh from school (and thus younger, and likely to take a lower salary) are probably able to pass them faster than older potential hires.


That doesn't make much sense to me. If they don't want to hire older people to save money, can't they just offer lower salaries and let older people self-select themselves out of the job?


I suspect that the pre-employment drug screens are to test who washes their hands.


To be honest, while documentation is extremely important in the real world, interviews are time-constrained, and it makes no sense to write documentation when you have 45 minutes to implement something like that.


Ignoring dozens of repos (non-forked), and thousands of lines of testable, shipping code, is probably not helpful.

I was a manager for a long time. I loved long résumés and relevant material.

I was hiring expensive people to work on really important stuff, and there was no way that I wanted to rush the vetting.

Also, I never gave a single test, and I think I got it right, every time.


As someone who does a lot of interviewing at a big company, by the time the candidate is in a room (virtual or real) with me, a large amount of vetting has already been done by managers and recruiting. I really only look at the resume for context, if I even do at all. I'm there to cover a specific technical competency and a soft competency, gather any other data I can in 45 minutes plus a bio break, make the candidate feel comfortable, and leave time for them to ask questions.

I rarely have time to look at code repos unless asked by the manager/recruiter.

You are also likely interviewing with the wrong people. Most large companies are looking for as many of the best devs they can find for generic opportunities. They have many many slots to fill.

If you are looking for something very specific then you need to find a recruiter who does placement as well. Amazon (where I work) has two major hiring processes, direct to team where they are usually looking for a specialty or generalist and take the first qualified candidate, and generalized hiring where you have to pass a standard interview but then you begin interviewing managers and finding a fit. The second may be better for you, but you still have to pass a bog standard interview.

There is of course the third way, which is to have a friend on the inside do the pre-fit and the selling.


Throw out everything else you are doing in those 45 minutes and just look at their repo and make a decision based on that. You'll get better results.

(Maybe spend a few minutes on the phone just to verify they are really the person who wrote all that code.)


Guess you missed what I meant. The manager/recruiter covers that. If it's substantial then I would likely get called in to review it.

The on site interviews are checking for things like communications, soft skill, design, etc. Things that don't fully come across in a code repo and are hard to verify who did what.

We have tons of people who try and misrepresent themselves. It would be a waste of time for the 6 people on a loop to all read the repo. It would also be bad if we didn't cover the things we cover in the on-sites.

We are also required to keep loops the same for all candidates. So giving an offer to one candidate because they have a GitHub repo and a phone call, while the rest don't and therefore have to come in, is a no-no. Therefore they all come in.


Large companies tend to aim for standardization in the hiring process. They want apples to apples comparisons as much as possible, largely for legal reasons, but also for tracking metrics. Unfortunately, someone submitting high quality GitHub repo links doesn't fit a model where not many candidates are doing that.


Apple's hiring is not that standardized. Each team has their own process. There are some common themes, but they definitely don't all use the same tests. I've interviewed with a few teams there over the years (have ultimately turned down offers for one reason or another). I've had a pretty wide array of experiences, and incidentally, never a BST question.

I've also generally had very good experiences interviewing with them - tough but entirely reasonable questions focused directly on my expertise and the tech the team is using. One that stands out in contrast to OP's complaint was an interviewer that brought in a stack of printouts of screenshots of apps I'd worked on and used them to spark a discussion of the work I'd done on them.

OP's experience, while annoying, and not surprising, is also not universal at Apple.


> but also for tracking metrics.

Another case in point against overreliance on metrics. Optimizing your process for making it easier to measure can easily turn into looking for your keys under a lamp post on a street even though you've lost them in a park.


And that process doesn't seem to be working out too well, based on all the posts I see here.


Selection bias.


> Also, I never gave a single test, and I think I got it right, every time.

There's bound to be some confirmation bias there


Yes.

I confirm that everyone that worked on my team worked out well. Some had more challenges than others, but we got the job done. The team worked well together, and we worked with our overseas compatriots in exemplary fashion.

I was very fortunate. It was a small, high-functioning, C++ team of experienced engineers. All family people, with decades of programming experience.

I'm quite aware that this was a luxury. I'm grateful for that.


> All family people, with decades of programming experience

I might be reading into this wrong, but it almost sounds like you're saying that you only hired people who had families. Doesn't that sound a little bit discriminatory?


You cannot hire by not discriminating. It's by default a task that discriminates.


Sophistry. "Discrimination" is well understood to mean "decide with bias regarding protected categories".


[flagged]


Could you clarify what the intended reading is, then?


At my company, HR was run by the General Counsel. It was NOT a "warm fuzzy" HR, but it WAS an extremely "legal-correct" HR.

I would have been fired if I had shown any bias at all. I had my differences with some folks there, and I wasn't always thrilled with the way that things were run, but it was the most diverse environment I've ever seen; and I include a lot of Silicon Valley companies in that statement.

The simple fact of the matter was that the jobs on my team required a fairly advanced knowledge of C++, and the lion's share of applicants were men in their 30s; usually married.

Marital status, gender, race, religion, sexual orientation, and gender identity mean absolutely nothing to me, unless the applicant insists that they should. In that case, I generally find that they might not work so well in the team.

Lack of drama was important to me. We had a full plate.


I should also add that managing a team that has several senior, married people is NOT the same as managing a team of young, unmarried folks.

An almost guaranteed way to lose people with families, is to insist that they must ignore their family to work the job.

This means that things like offsites and travel need to take into account things like school schedules, childcare and time off. I would often let people travel on Sunday to Japan, taking the heat for them not being there for a Monday meeting (which usually was just a "set the agenda" meeting anyway). That was my job.

It doesn't mean that someone going through a divorce, or caring for a sick family member, can take a ton of time off when a project is melting down, but it does mean that I would work with them, to ensure that their families get cared for, and that they can leave the job where it belongs, so that they are 100% for their families.

If you do things like this, you end up with a team that will take a bullet for you, and will do things like travel to Japan for two weeks to work on an emergency integration.

It's absolutely shocking how many managers don't get that.


I disagree. It's one way for the interviewee to reiterate back to the interviewer that they understand the problem and constraints.


Hiring people who can pass a clever quiz without verifying if they do good work in the real world is precisely how a company ends up at the top of HN with the headline "Apple, Your Developer Documentation Is… Missing".

I would argue that "how fast can someone implement this algorithm?" is a considerably less useful question to answer than "does this person document their code?" in an interview. If time is the constraint then schedule a longer interview.


If this were true then the same would be posted for Microsoft, Google, Amazon etc.

There is likely no causal effect between whiteboard/algorithm interviews and poor documentation.


Maybe not much on HN (though I do remember many complaints about MS), but each of these have fostered complaints around the web.


What key macros do you set up to make jumping around in your IDE faster?


It's even worse than that really. Most of these white board tests seem to be "Have you revised this specific algorithm for your interview, and found the specific implementation we're looking for?" If you have, great. If not, rejected.

As a method of finding good candidates it feels like it must suck but so few companies are willing to actually measure their hiring effectiveness (and be open about it).


> Most of these white board tests seem to be "Have you revised this specific algorithm for your interview, and found the specific implementation we're looking for?" If you have, great. If not, rejected.

As someone who has given hundreds (maybe thousands) and received dozens of these types of interviews, it's really not.

Generally speaking I'm looking for at least 2-3 of these abilities, depending on which question I'm asking:

1) Can you reason about data structures and algorithms when looking at a problem you haven't seen before?

2) Can you communicate your ideas effectively?

3) Can you integrate new information from a colleague while problem solving?

4) Can you write code that makes sense? Do you understand basic programming concepts?

5) Can you read your own code and reason about it?

There are at least two hidden attributes that I'm not testing for but that do affect performance, so I try to account for them:

1) Are you comfortable with me, a relative stranger?

2) Can you do these things while dealing with a high-pressure situation?

You will encounter high pressure situations at work but often interviews feel more high pressure (to some people) than most daily conversations about engineering.

That's it. If I can tell a candidate already knows "the right answer" to the problem, I'm usually disappointed because I'm more interested in watching them think.


Yeah, the "I have this memorized" no-questions-asked immediate implementation answer is ... fine ... but also always makes me wish I'd picked a different question.

It tells me they meet the baseline - they can learn well enough to at least be able to do this particular problem super quickly.

But it doesn't tell me any more than that, unlike the candidate who thinks for a bit, considers a few different approaches, writes something up, stops a few times to think about edge cases, satisfies themselves that they're covered, etc.

"Can you figure stuff out on the fly" is the test, not "do you have this memorized?" Fortunately, most people don't have everything memorized, so for now it's still a useful-enough filter.

The problem with alternative, past-experience-based types of questions is that most candidates are terrible at driving them. I would love someone to tell me an hour-long story of debugging something in a library, say - but usually, even with a lot of prompting, it's very hard to get much interesting stuff out of them.

So by putting some algorithmic questions on the panel too, at least they have a chance to show that they can code, even if they can't communicate much meaningful from their past experience.


I had a high-level Google interviewer throw a tantrum because I didn't know the exact STL threading mechanism, when I had specifically said I could interview in Elixir (because it's my day job) or C (not C/C++, because I also do embedded assembly). Like, dude, I can tell you the gist of how to handle concurrency management using multiple paradigms; the exact library call in a specific language is just an implementation detail.


It feels like we test for one thing when hiring but then ask the employee to do something else after getting hired.


I'm not defending those types of interviews, because I'm not great at them, but it seems that they're looking for human algorithm machines. People who can go in and write very specific performant code. Maybe that's the requirement for the job as opposed to someone who can interface with businesses and do general enterprise software.


I have personally found that sometimes writing out comments that explain how a confusing function is going to work will help me to understand the ins and outs and flow of the function before writing it, which will in turn help me to actually write the function, especially with algorithmic functions. Those comments might as well be written as long-standing documentation that doesn't change as long as the implementation/algorithm doesn't change. I know "comments lie" but if it's a long-lived implementation, it's more beneficial than not.
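
A minimal sketch of what that comment-first flow can look like in Swift (a made-up example, not anything from the interview): write the doc header and the in-body outline comments first, then fill in the code beneath them. If the implementation changes but the contract doesn't, the header keeps working as long-lived documentation.

    /// Collapses runs of equal adjacent values into a single value, preserving order.
    /// (Hypothetical example for illustration.)
    ///
    /// - Parameter values: The input values, possibly containing consecutive repeats.
    /// - Returns: The input with each run of equal adjacent values reduced to one element.
    func collapsingAdjacentDuplicates<T: Equatable>(_ values: [T]) -> [T] {
        var result: [T] = []
        for value in values {
            // Keep a value only if it differs from the last one we kept.
            if value != result.last {
                result.append(value)
            }
        }
        return result
    }

    let collapsed = collapsingAdjacentDuplicates([1, 1, 2, 2, 2, 3, 1])   // [1, 2, 3, 1]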


Writing a function docstring is just muscle memory for me. It's part of my process. Explain the objective, then implement it. Docstrings are usually the first or second thing I write after the function signature.

I would also have serious reservations about taking a job where documentation is considered a negative during a coding interview.


At another company, not writing it would fail you. It's difficult to know when to apply the correct answer.


...which just shows how noisy that signal was to begin with.


It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code. You want to show that, if a co-worker asks for your advice on how to solve a tricky problem, you will be able to help them figure it out.

Part of this is understanding what they're asking for and adapting. If someone wants to know how to document code properly then you can have a conversation about that and give an example. But if they're asking about algorithms then you write the code and explain how it works verbally. You don't need to write comments because the reader is right there and you can explain it to them. It's not production code.


>It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code.

Assuming everything the GP said is true, what you're saying is completely contradicted by the fact that they were dinged for not using a null coalescing operator.


Well, yeah. But it may be that, when talking to a co-worker about something, they point out some programming syntax you weren't aware of, just as a way of trying to be helpful. What's a good way to handle it?

You could thank them for pointing it out, say you're not sure you want to use it (if you don't think it's better) and then get the conversation back on track. And, hopefully they'll think you handled it smoothly and won't take off points, but you'll never know how important it is to them that you're fluent in Swift. Maybe not at all, it's just an aside?

Which is to say, usually you can't tell in an interview what they're really grading you on. It's nerve-racking, but you just have to try to seem reasonable and hope that works.


Why not both? They want to hear you talk through the solution. And they have unhelpful biases about what they want to see in the solution.


Well... sure, why not? I was responding to this though

>It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code. You want to show that, if a co-worker asks for your advice on how to solve a tricky problem, you will be able to help them figure it out.


Yep, this is exactly what happened. The OP didn't stop to consider the purpose of the exercise or what the interviewer was trying to get out of it.

The details of the exercise aren't the point... and I suspect the reason the interviewer gave off mixed signals about the extensive documentation is that time is limited, and a good interviewer keeps the candidate on track so that there's enough of it.


A binary tree is plausibly something a candidate would already know, or could figure out without communicating at all. Apple picked possibly the worst problem if what you are describing was their goal; the ideal problem would be something that is unlikely to be known by any candidate. If their goal was to determine whether the candidate could actually code, a binary tree probably asks too much (the ideal real-world solution is likely to just import a package).

Common data structures have no place in any type of interview.


I’ve seen this before. Very recently I saw a candidate rejected based only on the fact that they were rattled by a word-search problem (find words in a letter grid) on a whiteboarding test. It was frustrating to me, as I had seen this person’s production code at a previous job and witnessed them successfully working problems.

“I just can’t hire someone who fails this sort of problem”.

Given that this person was applying for a general backend position building foundational, framework-derived features (mostly CRUD and some light analytics), it was confusing to me. There is literally zero application for graph traversal in their work. They will absolutely never encounter this sort of problem in the course of developing our software... which is b2b, targeted at industry experts: we build software that generates reports. Not even complex aggregates... just moving-window averages and the like.

At this point in my career, after building more than one team/business I’ve learned that the shape of your technical interview loop informs not just the codebase you generate...but also your engineering culture.


I disagree. Expecting people to invent novel data structures on the fly, under stress, is unreasonable. Usually the idea is to ask a question they haven't actually seen, but is still somehow solvable using basic data structures most people learned in school, maybe with a minor tweak for the problem.

Otherwise, it's unlikely that most people will be able to solve it in 45 minutes, even with hints, and then you don't learn anything other than that they failed at an unreasonably hard problem.

So, one of the problem's solutions being a binary tree isn't itself necessarily a problem. The interview could be done well or badly depending on execution, and we can't judge that from afar. All we really know is that a binary tree was somehow involved.

---

At least, that's the idea. Interviewing is still terrible. It's pretty random to base a hiring decision on a few (extended) questions, and I think interviewers are left on their own too much to come up with good questions. How do we even know what's a good question or not?

Sometimes I wonder if it wouldn't be better to choose twice as many candidates as you need and pick half of them at random, just so everybody is clear that there's a lot of luck involved and they don't take either success or failure some kind of precise indicator of someone's worth.


> Expecting people to invent novel data structures on the fly, under stress, is unreasonable.

If that's the metric being measured, then sure, it's a bad idea.


Most interviewers don't give a crap about one's GitHub/Bitbucket/X portfolio. It might help one to get the interview in the first place but after that point it's useless.

Why?

Because any single interviewer has a set of well-rehearsed questions they know like the back of their hand. They know the different possible solutions and understand their pros and cons. This makes the interviewing a routine that lessens the brain drag. In an interview setting, you want to know your shit.

Using the candidate's online source code repo portfolio would mean that the interviewer would have to try to understand the candidate's code, whether it'd be applicable to the intended role, and how well the code would lend itself as a measuring stick for the candidate's skills. This is a lot more work, and so obv it won't be done.


Asking the same questions also makes it easier to compare between candidates and the questions can become optimised over time to ensure they are fair and valid performance predictors.

Also, it's never obvious how much someone contributed to their portfolio. Just being able to explain it doesn't mean you were the sole contributor, or the contributor at all, and it doesn't give a great indication of how you work or how well you can analyse problems.


"Asking the same questions also makes it easier to compare between candidates and the questions can become optimised over time to ensure they are fair and valid performance predictors."

I've seen this claim getting bandied about a lot lately, but I'm increasingly less convinced. You receive 10 n-dimensional vectors of programming and engineering skill, aka "applicants". You take their dot product with a ten-dimensional vector. Even assuming you can do that correctly and reliably, on what basis are you so sure that the ten-dimensional vector you've chosen has any relationship to what you really need? Repeatably and accurately asking a detailed question about recursion and pointers in a job that rarely calls for it just means you are repeatably and accurately collecting irrelevant information.

I've been chewing on this claim for a few months as people have been making it, and it's not like I'm entirely unsympathetic to it, as hiring does obviously suck. But lately the more I think about it, the less convinced I am this is the path forward. It just seems like a great way to institutionalize bad practices.

I think you need to embrace the diversity of the incoming n-dimensional vectors and pursue a strategy of exploration. Fortunately, they're not all uncorrelated and you do have an idea of what you're looking for. I see interviewing as a process where I'm seeking the answer to the question "What are you capable of?", and I want to seek out answers focused on how that relates to what I'm looking for, and that may mean I thought I was interviewing a dev candidate but I'm actually interviewing someone suitable for QA, or vice versa, or I thought I was getting an academic and I've got an old UNIX sysadmin, and these are just the examples I can quickly give. (What really happens is even more detailed, like, they clearly know their APIs but they're pretty sloppy in their naming, or I started looking at their personal projects for skill and was surprised at the documentation, etc.) It's not an easy approach, but why would we even expect there to be an easy way to analyze applicants?

(Or, to put it in more statistical terms, rather than applying a fixed vector dot product, I approach applicants in terms of a multi-armed bandit, where I'm trying to find their strong points. It helps in this case that the "multi-armed bandit" is actually incentivized to help me out in this task.)


The most important aspect of an interview process is that it is easy to teach. It doesn't matter if you have enough insight if you can't transfer said insight to those who will be on the floor and interview. So no assumptions about common sense of the interviewer is allowed, because common sense isn't common.

So the only thing you really can do is ask objective questions or check technical skills. Thus the choice is not between hard skills or soft skills, but between hard skills and no skills because the soft part deteriorates extremely quickly and becomes essentially useless at scale.


"The most important aspect of an interview process is that it is easy to teach."

That basically begs the question of what I'm saying though; why should we expect that there is an "easy to teach" process that "works", for some useful definition of "works"?

Our desire for something to exist does not mean it does exist. I think a lot of the rhetoric around "fixing" interviews is making this fundamental mistake, where what is desirable is laid out, and then some process that will putatively lead to this is hypothesized, but there isn't a check done at this phase to be sure the desired characteristics can even exist.

It may be that evaluating human beings and whether they can fit into technical roles is just fundamentally hard. (I mean, when I put it that way, doesn't it sound almost impossible that it's untrue?) While acknowledging fundamental difficulty is not itself a solution, it does tend to lead one to different solutions than when the assumption is made that there must be some easy, repeatable solution.

I can sit here and wish there was some easy way to communicate how to be a full-stack engineer in 2019, but that doesn't mean that such a thing exists, and anyone trying to operate on the presumption there is is headed for the shoals at full speed. People who know such easy things don't exist may still wreck themselves but at least they aren't guaranteed headed towards the shoals.

(Or, to put it in another really technical way, the machine-learning-type bias that somewhere in the representable set of interview processes there must in fact be a good solution is not necessarily true, and I think there's good reason to believe it's false.)


"Also, it's never obvious how much someone contributed to their portfolio."

You guys are absolutely correct about the reference questions. I can find no fault with that.

However, there's absolutely no question that I'm the sole contributor to all my code. Most of my repos have only one contributor (Yours Troolie). A couple have been forked off.

I do link to a number that have been turned into larger projects; with multiple contributors, but there's no question that's happened.

Also, those are my old PHP repos; not the ObjC or Swift ones.

There's ten years of commit history in the repos. You could run crunchers on them to do things like develop velocity and productivity metrics.

I'm also quite aware that my case is unusual. Most folks don't have such extensive portfolios, and are much more of a "black box."


> However, there's absolutely no question that I'm the sole contributor to all my code. Most of my repos have only one contributor (Yours Troolie). A couple have been forked off.

A bad actor could easily just have someone else write the code or copy it though.


I think that, realistically, the only person who could fabricate a repository commit structure in a way that didn't make it obvious that's what they had done would be someone who can program. Taking a working program and slowly breaking it up into pieces, starting with tests and simpler components, fabricating the addition of new components with each commit: that is work that, in my opinion, only an actual programmer could falsify, and if they are an actual programmer they would have actual repositories.


Yep, exactly.


I hear this argument a lot and I agree 100% that this is the reason, but I'm baffled about why developers feel like it's so hard to review someone's GitHub profile. I find them shockingly easy to evaluate, as in literally it takes about a minute. Here's what the process looks like:

1. Look at the project list, if it's all tutorial projects, you're already done. There's no use looking closer because you won't be able to tell if it's their source code or if it's copied from a tutorial.

2. If it looks like they have some real source code, apps or frameworks they've written from scratch, pick one and look for an important file. There's no trick to identifying an important file, in most cases the projects are small enough that it's obvious. (And if the candidate has large and complex projects, you've probably already identified an exceptional candidate.)

3. Once you've opened the file, I'm always stunned at how easy it is to evaluate. I'd say in about ten seconds of scanning I've usually identified three or so mistakes that roughly gauge that developer's ability. In which case I'm done, unless I want to dig for some material to discuss on a phone interview.

4. If I haven't seen three mistakes, then we have an exceptional candidate, this is really rare (maybe 1 in 100), so we've already identified an exceptional candidate in about a minute.

One of the key takeaways is that the process only takes a long time for a truly exceptional candidate, in which case I'm happy to spend the extra time. I'm baffled by the preference for coding tests, because those I find way harder to evaluate. I know in my bones what production-ready source code looks like, because I've written and read it all day, every day for ten-some-odd years. Evaluating someone's coding test is way more mental gymnastics and guesswork for me.

For the people who have experience evaluating GitHub profiles, but instead prefer giving tests, I'd love to hear how your experience deviates from mine. I know there's nothing special about my ability to evaluate source code, e.g., I watch my colleagues do it all day as part of code review.


What is a "cascaded nil-coalescing operator" ?


Think the ternary operator, but worse.

In Swift, this is expressed by "??".

It means "if the expression before the ?? evaluates to nil, then use the value after the ??".

For example, if you have a concrete Int, but the source might be an optional, then you could do something like this:

    let a = b ?? 0
That means that if b is nil, then set a to 0. Otherwise, set it to whatever value b has.

You can chain these, like so:

    let b: Int? = <something from somewhere>
    let c: Int? = <something from somewhere>
    let a: Int = b ?? c ?? 0
So you have a couple of optional values coming in, but they could be nil, so you chain them.

You can have a lot more going on in the handlers than simple assignments, but this gives you an idea.
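
To that "more going on in the handlers" point: the right-hand side of ?? is an autoclosure, so chained fallbacks can be whole lookups that only run when the earlier ones come up nil. A small sketch with hypothetical helpers (the function names and the PORT variable are made up for illustration):

    import Foundation

    // Hypothetical lookups; each one may fail to produce a value.
    func environmentPort() -> Int? {
        ProcessInfo.processInfo.environment["PORT"].flatMap { Int($0) }
    }

    func configFilePort() -> Int? {
        nil   // stand-in for a config read that found nothing
    }

    // configFilePort() only runs if environmentPort() returns nil,
    // and the literal 8080 is only used if both come up empty.
    let port = environmentPort() ?? configFilePort() ?? 8080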


Strongly disagree with the "but worse" characterization. As soon as you learn what "??" does, it's actually an improvement in readability in many cases. For example:

  nameLabel.text = user?.name ?? "Guest"
This, as opposed to:

  if let name = user?.name {
    nameLabel.text = name
  } else {
    nameLabel.text = "Guest"
  }


That's because you're applying pattern matching where it's not really useful, on top of avoiding a simple ternary operator.

In some languages, you simply write:

    if (user)
      nameLabel.text = user.name;
    else
      nameLabel.text = "Guest";
Or:

    nameLabel.text = user.name if user else "Guest"

Or anything along those lines. Concise, clear, no need for an extra operator.


... and this is so wonderful! It's in no way worse than the ternary operator. You punt to a default value really easily, without having 20 lines of `if (a.this == null) a.this = 0; if (a.that == null) a.that = "";` or similar.

When constructing objects "in-line" in C#, I actually used to see the ternary operator quite a lot, as in

  new SomeClass() {
    Val = valSource == null ? SomeClass.DefaultVal : valSource
  }
The coalesce operator is way better. Except it can't be used in LINQ projections.


> Think the ternary operator, but worse.

How the hell is it worse? It lets me write things like

    toolbar.visibility = userSettings.showToolbar ?? defaultSettings.showToolbar

    someOption = localPreference ?? onlinePreference ?? defaultPreference
which is a lot better for the coder and a reader than any other alternative I can currently think of.


I think I, like others, may have read that wrong and that it was meant as a joke.


"Think the ternary operator, but worse"

Giving the developer succinct ways to prevent program failure from unexpectedly undefined values is a feature.


[flagged]


> Most languages have had this since the 1960s.

> It's called the "if" clause.

If we got rid of everything we don't absolutely need in programming, we would have to get rid of 98% of Swift syntax, and probably every modern programming language including Swift.


Yes, but I LIKE Swift, because of all that sugar.

I started off with machine language, in embedded systems. You can't get much more "raw" than that.

I have been writing Swift exclusively for a long time. I speak it without an accent. It's my beauty.

But I also know that you can have so much sugar you fall into a diabetic coma.


>But I also know that you can have so much sugar you fall into a diabetic coma.

Exactly my feelings. Same thing here: started from machine code and now doing everything else, including the kitchen sink.


Did you guys really start from machine code or do you mean assembler?


Machine code. Was actually typing it on a big roll of paper and then crawling around it pretending to be a debugger.


Machine. Hex keypad and a lot of looking up hex codes for commands.

Type in a bunch of numbers, then set the PC to 0.

Assembler was great after that.

I do not remember that stuff fondly. It sucked.


> It's called the "if" clause.

"If" is often a statement, not an expression, so they aren't equivalent in a lot of cases. I'm not sure if this is the case in Swift, but I'm fairly certain that Swift's '?' operator also does unwrapping and propagation, similar to '?' in Rust.


In Objective-C (and some other languages with ternary operators) the same can be written like this:

    int a = b ?: c ?: 0;


That one's straight up called the Elvis Operator.

I hate it when things like this get names that everyone's supposed to know and it's considered a mark against you if you've never heard it before, but I do think the name Elvis Operator is hilarious. :D


I was wondering why the funny name, and

> The name "Elvis operator" refers to the fact that when its common notation, ?:, is viewed sideways, it resembles an emoticon of Elvis Presley.

https://en.wikipedia.org/wiki/Elvis_operator


Also, a lot of other languages (Python, Ruby, JS, etc.) can do the same thing with their OR operator:

    b = list.next.to_s || "Reached end of list."
If list.next.to_s is non-nil, then the OR expression is short-circuited and list.next.to_s will be assigned to b. If list.next.to_s is nil and "Reached end of list." is non-nil, then the OR expression will return the string literal to be assigned to b instead. Thus,

    a = b || c || 0
would be the equivalent for those languages.

edit: One BIG downside to this I forgot to mention at first: if nil and a falsy value such as false are both possible for a variable, then this construction can betray you and break your heart.


Yup.

Personally, I like the ternary operator, and I also like the nil-coalescing operator, but I'm quite aware that they can produce difficult-to-maintain code, so I'm careful with them. I have done some silly stuff with both, and then realized that I was leaving behind unmaintainable code.

If you use godbolt.org, you can actually see what code is produced by the source.


The second conditional is unneeded as the 0 will only be returned if c evaluates to 0, so it can equally well be written

    int a = b ?: c;


That's not specific to Objective-C, it's a GCC extension to the C language.


(Which Clang happens to support, hence it working here.)


I'm not sure how it looks in ObjC but in Typescript you could write something like:

  let studentScoreFormatted: string = student.formattedScore ?? 'Student has not taken this test yet.';
If the score is null or undefined, it will return the value after `??`.

It is awfully helpful for ensuring you handle error states when displaying API values. Additionally, if you want this behavior for any falsey value rather than just null/undefined, you can use `||` instead.


?? doesn't exist in TypeScript yet, but there is a TC39 proposal to add it to JS proper.


TS 3.7 adds it.


Do you mean Swift? I didn't think Typescript had something like this yet.


Not sure about OP, but this is coming in TypeScript 3.7, I believe. Currently you can use `||`, but as mentioned above it will also catch false, 0, "", etc.


Bingo. TS 3.7


Something like a?.b?.c, which is syntactic sugar for:

    a && a.b && a.b.c
(with a short-circuiting &&.)


This is Javascript/Typescript only, and not really correct.

    a?.b
    a && a.b
will behave differently if a is false or 0 or an otherwise "falsy" value.

It is, however, equivalent to this explicit check:

    (a !== undefined && a !== null) ? a.b : undefined


That's right, of course. I was just trying to convey the idea, but details matter. I was thinking of variables that either contain nil or contain an object with the appropriate field. If that precondition isn't met, you need more robust code like yours.

But also, as another commenter pointed out, there is a distinction between "optional chaining" and "nil coalescing" and I explained (roughly) what optional chaining is.


This is just optional chaining, not nil coalescing.


Thanks for correcting me; I actually wasn't aware of nil coalescing in Swift and guessed it meant the same thing as optional chaining. The nil-coalescing operator seems to correspond to Haskell's fromMaybe function, that is, a ?? b is computed as follows: if a is nil, the result is b; otherwise a must wrap some actual value c and the result is c.

Is that the one you had in mind?


Well, a nil-coalescing operator is:

    a ?? b
Where this is shorthand for:

    a != nil ? a! : b
I don't know what "cascaded" means, but it sounds like a ternary operation from hell.


It's Swift's equivalent of Rust's `unwrap_or`, or the trick in Python where you write `(None or "Hi!") == "Hi!"`.


Apparently it functions a bit like the JavaScript `||` default-value assignment idiom.

https://stackoverflow.com/questions/2100758/javascript-or-va...


A bunch of double question marks with some optionals in between would be my guess.

    optional1 ?? optional2 ?? optional3 ?? ifeverythingelsefailsvalue
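Spelled out with the `a != nil ? a! : b` expansion from upthread, that same chain (same hypothetical names) becomes the "ternary operation from hell":

    optional1 != nil ? optional1! : (optional2 != nil ? optional2! : (optional3 != nil ? optional3! : ifeverythingelsefailsvalue))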


    a = b ?: c;
If b is not nil, then a = b; else a = c.


Python equivalent:

  return a or b


> I suspect that they may be hiring less-disciplined engineers, these days.

I don't know about that; if Apple had lowered their hiring bar, they'd likely have enough engineers to not be constantly starving one project or another of talent because it's being "borrowed" by the Big New (next-to-announce-at-a-keynote) Thing. That still seems to be a problem; I haven't noticed any decrease in the number of Apple projects that "lie fallow" while their engineers are off doing something else.


> This was clearly not something the tester liked. Also, to add insult to injury, they dinged me for not writing a cascaded nil-coalescing operator. The code they wanted me to write was difficult to understand, and absolutely not one bit faster.

The interviewer probably wanted you to write idiomatic Swift. That seems reasonable.


...except that it was for an Objective-C position, and "idiomatic Swift" doesn't mean "unmaintainable Swift."

Most of the open-source code that I've written has been for other people to take over. Most of my repos still have a single contributor, but every single one has been written to be extended, and the ones that have been extended are being carried forward very well indeed. I have had almost zero questions about the codebase.


> not writing a cascaded nil-coalescing operator

Given that a large portion of the discussion in this thread is around what this operator is, it seems somewhat valid for a company to want to see you use this. It might represent a deeper experience in the language being used, or at least some previous study of syntactic options.


Google is all-in on this too, but in my experience Apple is way worse at that kind of interview, just as you experienced. I had one interviewer ask me to write strstr, but then insist that Boyer-Moore could not possibly work. I don't expect every engineer to be familiar with common CS algorithms, but whew, if you're gonna use them as an interview question...

The other parts of the interview weren't as bad but they also weren't great. Expecting you to know what happens when you export an int as 'main' instead of a function, for example - you can certainly figure that out from first principles or happen to know, but why should you?

It's sort of a "when a company shows you who they are, believe them" situation - even when I did get offers, seeing this reduced my confidence in the company.



