Last year I spent some time interviewing job candidates, and what I learned is that the recruitment process in IT is, in general, broken.
A typical candidate is first faced with some kind of task designed to weed out the bottom 20% of applicants. After that comes the proper interview, during which the candidate is asked various questions, like what their major was or what the abbreviation "SOLID" stands for.
In my experience, none of this correlates highly with future performance, and that may be why those students performed roughly the same regardless of the school they attended.
Time and time again, the one thing that separates the best from the rest is their ability to perform code reviews. I have yet to find somebody who's a poor developer but a great reviewer.
Real programmers read far, far more code than they write.
Reading code, and writing readable code, are perhaps the two most valuable skills in the profession.
Print out a page of code and review it with a candidate. Have them explain what it does, or how they might go about figuring that out, and what kinds of improvements they might make. Include some glaring bugs or code style problems if you want to ask those sorts of questions.
I feel like small fragments of code work well for refactoring questions; whole programs tend to have a lot of boilerplate. That said, quickly digging through a bigger program to find the real core features is also a valuable skill.
I've done this with SQL too: printed out a real statement from an application and asked about performance, optimization, what the application might be doing, etc.
Or load a 100,000-line application in their favorite editor and ask "okay, try to find the frobzing subsystem, I want to know how it frobz bazzes". (I haven't done this yet...)
It gives quite a bit of insight into which aspects they latch onto as issues of note. Do they catch the insidious, subtle issues or just the low-hanging fruit? Do they understand what stdlib function arguments expect as input? Do they just run the code through a linter and call it a day?
This is a misleading conclusion that ignores a HUGE selection bias. I doubt top MIT CS students, for example, would feel the need to practice coding interviews on interviewing.io.
While there's definitely good people at non-elite universities, selection bias is at play here too.
For example, in the senior year case, you're just looking at people who were concerned enough about interviews to go to interviewing.io during their senior year. Most people I knew at MIT had a full time offer they were happy with from their Junior year internship by their senior year.
You can actually see the impact of this in your graphs too: notice that the % of people with a 1 or a 2 at elite schools increases for senior year vs. junior year. People who have been doing well in technical interviews and have good return offers won't spend as much time preparing.
The hell? What sort of problems are you getting?
Were you doing edit distance and word break and shortest palindrome in middle school?
* Print Pascal's triangle up to the n-th level, with proper spacing (so that it looks like a triangle).
* Make the game Go for 2 players. I got a sub problem of this during my Google interview (determine whether a piece is captured).
The rest was more regular. (I didn't apply to super-elite jobs, and the interviews at both companies with these two hard tests failed, so it's no indication about them or about me.)
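For what it's worth, both exercises are doable in a page of code. Here's a rough Python sketch (the board representation, function names, and signatures are my own assumptions, not anything from the actual interviews):

```python
from math import comb

def pascal_triangle(n):
    """Return the first n rows of Pascal's triangle as centered strings."""
    rows = [" ".join(str(comb(r, k)) for k in range(r + 1)) for r in range(n)]
    width = len(rows[-1])          # widest row is the last one
    return [row.center(width) for row in rows]

def is_captured(board, r, c):
    """True if the stone at (r, c) belongs to a group with no liberties.
    board is a list of strings: '.' for empty, 'B'/'W' for stones."""
    color = board[r][c]
    if color == '.':
        return False
    seen, stack = {(r, c)}, [(r, c)]
    while stack:                   # flood-fill the connected group
        y, x = stack.pop()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < len(board) and 0 <= nx < len(board[0]):
                if board[ny][nx] == '.':
                    return False   # found a liberty: not captured
                if board[ny][nx] == color and (ny, nx) not in seen:
                    seen.add((ny, nx))
                    stack.append((ny, nx))
    return True                    # group exhausted with no liberties
```

Capture detection reduces to a flood fill over the connected group, checking whether any stone in it touches an empty point, which is presumably why it makes a reasonable interview sub-problem.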
How long were these interviews? That seems like something I could do in a few days, not in a 2-hour interview!
And, please, when showing graphs for comparison purposes: make the Y axis scale identical, or the results are quite visually misleading.
If anything, those students would be the ones working the hardest to study for their tech interviews.
I am not from MIT, but instead a top university in China. I was accepted into Google last year, and I never even heard of interviewing.io before.
What brings you to this conclusion?
I'm not saying technical interviews are good or bad. I just think it's a different skill from what a student generally learns.
I just don't get the modern interview process. Why are we looking for lint in human form? I'd rather see someone sketch some pseudo-code for an interesting problem with several acceptable solutions.
I think I learn more by asking a fuzzy, underspecified problem question and seeing what clarifying questions I get back. I want to know if the person can wring a spec out of a customer - coding it up with all the semicolons in the right place is the easy part.
>> Most people I knew at MIT had a full time offer they were happy with from their Junior year internship by their senior year.
This leads me to think that students at top universities are already "chosen/watched" by companies before the recruitment phase, so both companies and students are much more confident they can work together.
So basically, top companies just recruit people the best way: by looking at students while they are still students. Which is fine, except the top universities are, I guess, the most expensive ones. But if that is true too, then that's another bias: companies have rich employees, so they hire people with the same set of personal values...
Not necessarily, of course, but it'd be pretty surprising if it didn't trend in that direction.
However, the following types of top students are significantly less likely to practice on interviewing.io, for example:
- Seniors who have already done internships at top companies (Google, FB, Stripe, etc)
- People who excelled at programming contests and generally enjoy algorithmic challenges: IOI, TopCoder, Project Euler, etc.
- People who have built awesome open-source projects that got significant recognition. For example, Feross Aboukhadijeh was a Stanford student who built YouTube Instant and immediately received an internship offer from YouTube founder Chad Hurley. I doubt people like him would need to practice on interviewing.io.
- In general, students who are regarded as the top n% of MIT/Stanford/CMU CS are inundated with recruiting pitches. (I speak from experience as a CS graduate from one of these universities. Most of us constantly received an overwhelming number of e-mails from recruiters, many of them offering to get us fast-tracked in the interview process.)
Granted, not every top CS student at MIT/Stanford/CMU has done competitive programming or built really cool stuff that's widely recognized. But a significant percentage of the top n% of students do have this kind of background (admittedly, this is somewhat of a circular definition of "top" student). That's what I mean by selection bias.
When NYU and Arizona State are "Top 50" while Michigan State and Vanderbilt are classified as "no-name schools", I question the meaningfulness of this blog's baseline and what it means for an interview to be technical.
For PhD programs there are some real surprises, but for undergraduates there's not much in the way of major-specific strength differences. That's largely because the strength of an undergraduate program is mostly a function of the strength of the matriculants, not the faculty.
CS alone is far from representative of tech industry in the aggregate.
Furthermore, PayScale's list clearly doesn't control for locality bias--e.g. consider Purdue at #191, UW Madison at #138, UM Ann Arbor at #88, UT Austin at #79, to name a few; or what does it even mean that Duke is #10 while UNC Chapel Hill is #67? How do top programs that feed industry around tech hubs like Austin and Research Triangle meaningfully compare to the Bay Area and NYC if unadjusted gross salary serves as the sole basis of value?
I actually think studying for the SATs is a better comparison - it's a test that doesn't exactly translate to real-world performance, but has huge bearing on prospects, and often the best advice for acing both (tech interviews and the SATs) is to simply do as many practice tests/problems as possible in the areas you're weakest in.
I only got two questions wrong on the entire test, both math problems. Since I was practically unstoppable at math back then, I always suspected that there were errors in grading.
The mathematics questions nearly all use the same basic rules, are presented in the same format, and can easily be learned and trained.
SAT prep courses are ubiquitous and expensive because they work. Do people without training do well? Sure. Do people with training still do poorly? Sure. But a good SAT question would be "does this mean that training doesn't help or that there isn't a correlation between training and success?" because the obvious answer is no.
>The many different kinds of IQ tests include a wide variety of item content. Some test items are visual, while many are verbal. Test items vary from being based on abstract-reasoning problems to concentrating on arithmetic, vocabulary, or general knowledge.
Vocabulary very clearly belongs on an achievement test, not an aptitude test (as I imagined IQ to be).
Can you practice to get good at quickly solving basic algebra problems? Yes. Which means that you can practice to get better at the SAT.
Also, if you're testing only for specific skill sets rather than aptitude and interest then you already failed as a hiring manager.
Ah, the classic confusion between not finding enough evidence to prove there is a difference, and finding enough evidence to prove there is no difference.
The juniors from elite schools in particular have fewer 1s and 2s and more 3s and 4s than the other juniors. Really, you found statistical evidence that they aren't from different distributions? I'd love to see it.
The quality of the cohort and, to a lesser extent, the quality of their DS&A classes matters quite a bit.
Also the comp is a league below the others I mentioned.
The best programmer I ever met was a musician with no college.
Interviewing is expensive, especially if you hire the wrong person, and if a given population tends to cluster around higher scores, that's what you'd pick first.
I've seen a lot of bad devs get hired into places with supposedly high bars, and devs being let go and then ending up at Google next. While some places do let go of good devs, I'm talking about cases where I believe it was justified.
The best jobs I've had, and also the jobs where I performed best, I got through credentials and experience, not scribbling on a whiteboard.
If you went to a top-10 school in the world, you worked hard (at least it used to be this way). I don't need to look at the specifics of what you did, but I know you needed grit, a good work ethic, personal time sacrifice, etc. to get through. Unless you're super smart. In either case, it adds to your personal brand.
I think that's what perpetuates the system. Given unlimited time, I'd be happy to give everyone a shot. But time is precious, so stick to what you know? I know how things are at the uni I went to. If a CV landed on my desk from someone who did the same course, I'd put them at the top of my list.
The point of using an interviewing.io online technical interview as the first step of the process is that it makes your "I'd be happy to give everyone a shot" take far less time than the alternatives. You still can't get unlimited time or perfect fairness, but it's better.
The worst students were always the ones who went to feeder schools and had wealthy parents, which is incidentally the majority of students at top schools.
I wish companies like this didn't exist to enable these types of interviews.
I am all for big data analysis disproving common conceptions, but this feels off.
You're comparing apples to oranges. The worst at MIT is not necessarily any better than the average at some state school. I've seen more than a few people with bachelor's degrees from UCSD and UCLA fall below the class average at Cal Poly during their second bachelor's or master's degree. If there's any metric you can rely on, it's that their average is usually better than the average of another school.
Given that sameness was the point, just make a single graph with 4 colored lines.
Of course there is no real explanation of the method that was used besides the fact that it was some sort of "statistical significance testing". Equivalence testing makes more sense to me if one wants to essentially say that MIT is Aspirin but Ohio State is a generic drug that is similar enough to work just as well (for a reasonable definition of similar enough).
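To make the equivalence-testing suggestion concrete: the standard approach is the two one-sided tests (TOST) procedure, which asks whether the difference between two groups lies significantly inside a chosen equivalence margin. A minimal sketch using a normal approximation (the margin, standard error, and function name are illustrative assumptions, not anything from the article's methodology):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def tost_equivalence(mean1, mean2, se_diff, margin, alpha=0.05):
    """Two one-sided tests: the groups are 'equivalent' if the difference
    is significantly greater than -margin AND significantly less than +margin."""
    diff = mean1 - mean2
    # Test 1: H0 is diff <= -margin, reject when diff is clearly above -margin
    p_lower = 1 - normal_cdf((diff + margin) / se_diff)
    # Test 2: H0 is diff >= +margin, reject when diff is clearly below +margin
    p_upper = normal_cdf((diff - margin) / se_diff)
    return max(p_lower, p_upper) < alpha
```

The key difference from plain significance testing is that TOST can positively support "these score distributions are similar enough", rather than merely failing to show a difference.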
Plus the charts don't even show up, probably due to load.
edit: And the bias of the interviewers who use their platform, who may not be able to attract top-tier talent and are happy to get even barely competent people... It's impossible to say, since the charts don't show up and we know nothing about the methodology and motivations of the different actors.
Re: qualification bias, we call it out explicitly in the article. Without the coding assessment before the real interview, the results may very well have been different.
Lastly, re: the bar, our interviewers come from Google, Facebook, Dropbox, AWS, and so on.
PS. Five years later, I'm now the CTO of my company, managing 50 people. So srs, not srs Googs you missed out =P
In any case, a good(!) whiteboard interview absolutely can focus on all of that broader impact you mentioned and get the interviewee to discuss the UX/efficiency/maintainability/etc of what they're creating.
(I am not sure how one decides if finding shortest substrings is important or not ... perhaps most common usage?)
What is used in divvying up a self-selecting cohort of undergrads performing on technical interviews in 2018 are the US News & World Report rankings for graduate schools, composed in 2014 "based on a survey of academics at peer institutions" (https://www.usnews.com/best-graduate-schools/top-science-sch... ... and I don't think the lack of that link in the article is an accident).
Having a selective grad school doesn't mean much if anything for the standards and teaching in that school's undergrad program, especially for state schools with giant undergrad populations and relatively small grad programs.
For example, Illinois' grad school is selective and highly thought of by professors, and thus it's treated as an "elite" school, while students of UCSD's undergrad CS program are classified as "top 15".
Regardless, my alternate hypothesis would be "students of a specific level of confidence in their ability to pass a technical interview use interview.io for a limited period of time in which it provides value to them … that population of students receives a certain distribution of scores".