Take their Fibonacci exercise as an example. In the first section they say, "The first line of the input will be an integer N (1 <= N <= 100)" which specifies how many test cases follow. Then in the next section they say, "The first line of the input will be an integer N (1 <= N <= 10000), specifying the number of test cases"
So besides the fact that they aren't even consistent about the expected range of N, it isn't clear why you would need to specify the number of test cases in the first place vs just reading one from each line until EOF. That makes you think maybe they want you to put in some basic error checking, which, again, since they aren't consistent about N, turns into a trial-and-error exercise, if it even matters at all.
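For what it's worth, the read-until-EOF approach is simple enough that the leading count adds nothing. A minimal Python sketch (my own illustration, not their reference solution; it assumes one integer per line and simply ignores blank lines -- a real submission would also consume or validate the leading N):

```python
import sys

def fib(n):
    # Iterative Fibonacci; fib(1) == fib(2) == 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def solve(lines):
    # Read one integer per line until EOF -- no leading count required.
    return [fib(int(line)) for line in lines if line.strip()]

if __name__ == "__main__":
    for value in solve(sys.stdin):
        print(value)
```

Reading until EOF sidesteps the inconsistent range of N entirely, which is rather the point.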
There is also no indication of the interpreter or compiler versions being used to check the submissions. If I choose to write my solution in Python, is that Python 2 or 3?
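And the version matters even for toy problems. One well-known difference, as an illustration: the `/` operator performs floor division on integers in Python 2 but true division in Python 3, so a judge running the "wrong" interpreter silently changes a solution's arithmetic.

```python
# In Python 2:  1 / 2 == 0    (floor division on ints)
# In Python 3:  1 / 2 == 0.5  (true division)
def halve(n):
    # Explicit floor division behaves identically under both interpreters,
    # so it is the safe choice when the judge's version is unspecified.
    return n // 2
```

Without knowing the interpreter version, you are left guessing which of these semantics the grader will apply.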
Combine all that with an implied scoring system based on speed or number of tries, and you have to wonder whether this is really measuring anything relevant or just filtering for people whose default assumptions happen to match those of the person who wrote the lame exercise description.
I doubt any serious developers would consider doing this; there's already a way to demonstrate your programming capabilities in the real world, either by contributing to an existing open source project or by maintaining your own projects on GitHub.
I personally wouldn't be motivated to use it either to find work or to hire. I think, when hiring, you really need to do a face-to-face interview. I've found resumes work well enough for pre-screening that it's not a pain point for me. And it's not like there are so many candidates out there looking for work that it's impractical to do face-to-face, or at least telephone/Skype, interviews with all the good candidates. I learned the lesson that face-to-face hiring is needed by hiring someone based on a co-worker's recommendation (even though he did badly in an interview, I looked past it based on the recommendation). It's so costly and painful to make a hiring mistake that there's no way I would deviate from the current model and try a new method or service.
That really depends on your target market. For the most popular technologies (.NET, Java, etc.) in the most popular areas (London, Glasgow, etc.), for low-to-medium positions, you'll get tons of decent candidates.
This isn't an easy problem to solve and is something that companies have been struggling with for a long time, so no disrespect intended. But based on my understanding of how this works I don't think it will be very likely to yield great results.
There are countermeasures. For instance, who said we'll let you see the whole question bank? And if you type a close enough approximation, out come the plagiarism checkers.
But when somebody's gone to the trouble to make a database online of every problem, I'm happy, because if that's so we must be successful enough that we have plenty more resources to game their gaming.
- https://www.hackerrank.com/ (aka https://www.interviewstreet.com a YC company)
- https://www.mindsumo.com/ (college student only)
- http://www.codewars.com/ (in beta)
Edit: hyborg beat me to it :)
As for the answer to the implicit question...the best people are passed by reference when they're already recognized. We're aiming to help people show that their skills have value when they don't have a reference.
The number is just, well, a pointer to a broader portfolio - and we'll only be making that portfolio more expressive over time.
There are also plenty of ways of circumventing this. Since a test taker is evaluated not just on his answers but also on the process of inputting them, that process itself could be faked as well.
While I don't think the point of this idea is to evaluate the candidate's personality, I'm wondering if the founders thought about this. Is this going to be left to the companies to figure out on their own, or are the founders going to assist with it? What I don't like is the idea of encouraging employers to filter candidates by coding skills first, then personality. As a project manager, I've dealt with my fair share of really intelligent but lazy, egotistical engineers, and, thinking back, I wouldn't have hired them had I known about their personalities earlier. I would rather hire engineers with B+ coding skills and A+ personalities than engineers with A+ coding skills but B+ personalities.
You're the hiring manager. I'm a professional programmer who works with you. Let's say I personally know and can vouch for the ability of 100 programmers. There are 7 billion people on the planet. What's more likely? That your best candidate will come from my 100-person address book, or that it will be one of the other 6,999,999,900 people in the world you're not looking at?
Now it is likely that someone from my address book will be, on average, a better candidate than a random selection from the rest of the world, but all that shows is that professional references will produce "above average" candidates.
I suppose the difference is whether you want the "best way to find good people," what the parent said, or a "good way to find the best people" which I argue referrals might not be.
A technical screening can be handled in a ten-minute phone call for free - with the dignity of the applicant intact.
I take issue with any hiring system that attempts to narrow the field of view of candidates. This includes any single-number "scoring" system, as well as hiring cultures that focus on single methods (e.g. "puzzle questions") to the near exclusion of other hiring criteria. If your organization is doing any of the above, it's shooting itself in the collective foot.
The really hackerish stuff can't be tested for in a format like this.
I get that we always complain about the developer interview process, but are these business models actually solving the problem?
Many a job has been won or lost based on the character of a prospect. In fact, I've lost count of how many times I've seen otherwise very technically skilled people lose out over basic interpersonal communication -- above all else, a seeming lack of tact.
I do not think developer grading is impossible, but there is so much more context involved. The "score" depends not just on the company, but on the time and place in both the developer's and the company's own life cycles. The perfect fit is always subjective -- these are organic life forms, not rigid machines.
That also means the recruiter(s) need to understand the questions they're asking. Technical merit is all well and good, but how well can a recruiter process the answers they're getting? This can't be a bullet point questionnaire with blind matching of answer "C" to question "1". Sometimes solutions aren't so black and white.
I don't want to see someone's first solution to a problem. I want to know the solution they came up with after 5 mental iterations and a few hallway discussions with peers.
It sounds about as silly as this scoring does in the first place anyway...
Aren't technical interviews (which are just a politically correct way to test IQ) enough of a humiliation to have to go through?
Interviews are a necessary and evolving part of finding good employees, and we know that, so that's why we subject ourselves to them. It's not a form of humiliation because it's the norm. How else would a manager figure out whether you've got the skills and character to do the job? It's not a perfect process by any means, but it definitely helps.
Here's where this startup tries to take things to a different level. I get the idea behind it, but I cannot fathom why anybody would want to subject themselves to unnecessary testing. What's the ROI? Why give out personal information and have it stored on some database for some undetermined, possibly indefinite amount of time (forever? 1 year? 5 years?)? Am I going to be paid to do this? Can I no longer get a job if I don't do this? If this is purely voluntary, I have absolutely no incentive of taking the test.
Perhaps these questions should be something the founders should consider.
Bcrypt supports up to 72 bytes of input; anything beyond that is silently truncated. If you are going to limit the password length, I'd expect it to be something closer to 72 than to 30.
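A common way around the 72-byte cap, sketched here with the standard library only (the name `prehash_password` is my own; production code would feed this result into an actual bcrypt call):

```python
import base64
import hashlib

BCRYPT_MAX_BYTES = 72

def prehash_password(password: str) -> bytes:
    # SHA-256 then base64 yields a fixed 44-byte value, safely under
    # bcrypt's 72-byte limit regardless of how long the password is.
    digest = hashlib.sha256(password.encode("utf-8")).digest()
    return base64.b64encode(digest)
```

With pre-hashing in place, there's no reason to impose a user-visible limit anywhere near 30 characters.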
Let's not get into a plagiarism-detection arms race; consider automatic refactorings / pretty printers with variations, plus human input to vary variable names.
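Mechanical renaming of that kind is trivial to automate, which is what makes the arms race unwinnable. A deliberately naive sketch using Python's `ast` module (it renames every name it sees, including builtins; a real tool, whether evading or normalizing for detection, would track scopes, attributes, and imports properly):

```python
import ast

class RenameVars(ast.NodeTransformer):
    # Naive renamer: maps every Name node to a generated identifier.
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

def rename_source(source: str) -> str:
    # Parse, rewrite identifiers, and emit equivalent-but-different source.
    tree = RenameVars().visit(ast.parse(source))
    return ast.unparse(tree)
```

Note the same transform serves the detectors too: normalizing both submissions before comparison cancels out cosmetic renames, which is exactly why the race escalates past variable names.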
Just hook it up to the spare PDP we have, then go see Bob and work out how to use it to analyze the data from his experiment tracking high-speed droplets in 3D space.
BTW, I had to write a driver to network with an RTOS (RT-11) just to talk to the device. I was 22 at the time and had joined the company straight from high school.
Thanks. Will be fixed.
If you can root their VM blind, without feedback, you win all the challenges.