He got 1%, we can't hire him (codingjohnson.com)
835 points by gringofyx 1512 days ago | 576 comments

There is absolutely zero reason why HR/recruiting people should have final say on a candidate. None. It should be inconceivable, at a technical company, to hand over that much hiring power to a non-technical person.

HR people are valuable in a company. But their role is to get resumes in front of the real decision makers, and to take care of all the stuff like W-2 forms and whatnot nobody wants to deal with, not make decisions about who to hire.

It seems weird to me that so many tech companies have adopted this particular Big Corp characteristic, because it's certainly not a universal phenomenon. A tech company is not Wal-Mart. Its hiring process should not be structured like Wal-Mart. People are the lifeblood of a technical company. Tech companies should not therefore look to Big Corps who just hire large amounts of unskilled labor. They should look, instead, at how hiring is done at a consulting company or an investment bank, companies that are also reliant on skilled people as their most important asset. At those places, from your screening interview forward, you are only evaluated by someone on the business side of things. It might be a junior analyst or a senior managing director,[1] but it's someone who does the work that makes the business money. HR's job is to get good candidates in front of those people.

At least in my experience, start-ups and very small tech companies do this right, out of necessity. Where I used to work, interviews would involve talking with a few line engineers, then the VP of Engineering, then briefly the CEO (who was a technical person). I think as companies get bigger, they feel like they need to adopt the Big Corp model. But this start-up model of hiring scales just fine to companies of 500-1,000 technical people if you're willing to create a culture where everyone, especially the top technical leadership, is personally invested in hiring and devotes a reasonable amount of time to evaluating people.

[1] Anecdote: I once had a screening interview conducted by the managing partner of the D.C. office of a major law firm. He flew out to Chicago for a day every year to talk to prospective entry-level candidates.

> their role is to get resumes in front of the real decision makers

I worked for a while for Company A, which was founded by a team that included senior engineers and senior management from larger company B, licensed technology from B, did critical contract work for B, and at one point was supposed to become the primary manufacturer of B's products.

Company A shut down. My resumé went into Company B's HR department, never to be heard from again. Eventually I had to pull a string that led to B's senior management. Suddenly an interview process was set up.

I get there, and the first words out of the HR drone's mouth when we sat down, with a deeply confused look on her face, were "What's Company A?". She'd just suffered severe whiplash when a directive from on high had told her that she would be setting up an interview for a candidate she'd ignored. She wasn't new, she'd been there over two years.

Everyone involved in my hiring except her knew what A was, who I was, and why I was there. The interview process was little more than a check for whether I annoyed the hell out of the people I'd have to work with or not. I was being handed to B on a silver platter, a pre-vetted, pre-trained, instantly-productive candidate. The most senior technical person who interviewed me walked into the room, sat down, and said "I really don't have anything to ask you".

If you insist on using recruiters, do not put them in HR. Embed them with the departments they're hiring for, where they might actually learn something about the work the company does and what it needs.

This sounds like a case of bad (Too centralized? Too incompetent?) HR, rather than an issue with having an HR department. You get what you pay for with recruiters, and more broadly with HR. If you go to the lowest-priced resource, whether for Java programming, HR, or accounting, expect to pay the price later.

But you did the right thing too, which is working around HR to get your name at the top of the stack. There isn't anything nefarious about doing this. You're creating your own referral, which is the best source of employees for a company.

"Too centralized" is exactly my point. An "HR department" is a monolithic unit that operates in its own bubble. It doesn't understand what anyone outside its bubble does. It's an inherently poor vehicle for staffing the rest of the company. The people screening resumés should have some meaningful criteria by which to judge them, and it's just impractical when they spend all their time in administrative-land.

At my prior employer I found that I was 10X more efficient than HR at screening resumes, so I would do it after hours rather than trust them. But... This would be inefficient if I were the CEO. And I've worked in situations where the recruiter had done the jobs that they were recruiting for - now that was efficient!

I suppose the HR recruiting unit belongs to the era of Taylor's scientific management, when companies were machines and the recruitment unit would efficiently find resources to fill the openings. Resources were also abundant and job openings scarce, so HR's non-smart screening made some sense.

Now, in the technology world, the valley, or any company that needs people with unique skills, the HR unit struggles to help the company hire the right people. That's why we hear rants everywhere about HR, recruiters, etc. The solution may be to focus on newer tools for hiring, or on the contrary, on pre-modern methods: leveraging your existing connections, using conferences, hiring from competitors, etc.

From a skills perspective HR may be struggling, but recruitment is more than finding the right skills. It includes background checks, psychological screening, screening for fit with the corporate culture, etc. I don't claim all HR people are more skilled than everyone here, but most non-HR people are not experts in all aspects of recruitment either.

Background checks are contracted out to third parties, psychological screening is nonsensical and discriminatory, and HR has not the first clue what corporate culture is like outside their bubble.

> their role is to get resumes in front of the real decision makers

For recruiting technical people, it is ideal for the recruiters to have a relevant technical background. The reality is mostly the opposite, so you never know how many good candidates are filtered out because their resumes lack flashy keywords.

If it doesn't hurt the recruiters' feelings, I prefer to screen the resumes myself. The worst scenario is being scheduled for a phone interview and finding, when you look at the resume, that it is full of flashy keywords but lacks experience in "serious" projects. I fully understand that candidates are eager to get a job, but after you've spent dozens of hours talking to these unqualified candidates with flashy resumes, a flashy resume itself becomes a negative flag. Well, these flashy resumes are a product of the tech-recruiting industry.

And this is why I believe the HR department needs to put less emphasis on the resume and its "flashy keywords" and focus on the projects the candidate has been involved in and what role he or she played in them. Just looking at the portfolio, you can get a pretty good idea of what to expect.

In many places HR is just a dump for unproductive people that can't be fired.

"There is absolutely zero reason why HR/recruiting people should have final say on a candidate. None. It should be inconceivable, at a technical company, to hand over that much hiring power to a non-technical person."

And yet they often do. Consider that for a moment and wonder why it is so common, especially in technical companies.

What I have found over the years is that technical people hire people they think are great without regard to the HR "signals," and then one day they get screwed. Maybe they hire someone with anger management issues, maybe they hire someone whose antics expose the entire company to a crushing lawsuit. Or maybe they hire someone who, in short order, irritates all of the other employees such that there is a huge morale crash and exodus. Basically, their hiring on technical merit and/or the interview produces a very bad outcome. And then someone they know and respect says "Gee, an HR person would have spotted that right away, why didn't you listen to them?" or worse, "Gee, if my HR person had let that person through I would fire them on the spot."

You see, in both cases something a bright technical person might actually be unable to see could be the difference between a "good hire" and a "bad hire." And that is how HR people get into positions of power over hires and fires:

1) The hiring manager has someone else to fire (and blame) when a hire goes badly.

2) The hiring manager has some 'cover' over the things they don't readily see (like emotional issues).

Of course, since the whole emotional/psychology thing is so opaque to some folks, it is really hard to judge whether the person providing that visibility is good or not. It's sort of like someone who knows nothing about technology hiring a consultant, or someone who knows nothing about cars hiring an auto mechanic. One has to take it on faith a bit that the other person knows what they are doing, and try to come up with ways to reassure yourself that this is true.

This sounds good in theory, but in reality HR people are not especially skilled in picking up on the character issues that matter to teams; moreover, when you instruct the HR team that their role in the process is to screen for this kind of stuff, you end up with HR people creating sporadic roadblocks for hiring.

Rayiner is absolutely 100% right about this. The real role of HR people is tax forms and health insurance, and little else.

I don't disagree with Rayiner here, I'm just sharing how I've seen this sort of stuff get put into place. When you ask "How the hell did HR get to have veto power over hires?", I have found more often than not there is a story of "that guy" or "that gal" who got through the "old process" which is now some legendary part of the company history.

One thing I try to determine about startups is their ability to fire people.

Lots of the problems you mentioned sound like they were in part due to the company finding that difficult, and of course it should be noted that any process like this, about humans and done by humans, is going to have errors on occasion. The real trick is realizing and correcting them.

Yes. I've been at start-ups unwilling to fire people and it sucks.

I've had people suggest that "willing to fire" is a bad sign, because we can be even better by just always hiring the right people! Which is a self-reinforcing style, because now you really can't admit you hired the wrong person.

100% agree, but I think the real trick is not taking it personally. Too many people think that having to fire someone reflects badly on them, it doesn't[]. It only means that it didn't work out for some reason. I try to explain to people "work" and "not work" are two different places, like killing you in World of Warcraft has nothing to do with killing you in "real life", and not being able to work with you doesn't mean I don't enjoy or want to hang out in non-work situations. But that is a hard thing to separate for many people, they are their work and that means work is them.

[] The exception is when it does, they clearly hired someone for some bogus reason, but there will always be exceptions.

> The real role of HR people is tax forms and health insurance, and little else.

Well said. And their role definitely should not include administering bullshit voodoo science "psychometric evaluations" and making any decisions based on them.

Typical "those guys know nuthin and we got it all figured out" comment that seems all too common on Hacker News. Calling psychometric evaluations voodoo is akin to doubting climate change or casting a wary eye on vaccines. The evidence, methodological rigor, and results are there for all to see, but it just doesn't mesh with your gut feeling. Of all the things in the field of psychology to call voodoo, this is almost certainly the least deserving.

Like everything, competent implementation and use are key, so I'm sure you have had experiences that give you reasons to doubt their efficacy. But every field suffers from some form of that in some way or another, and the degree of transparency into the process varies greatly.

> Like everything, competent implementation and use are key, so I'm sure you have had experiences that give you reasons to doubt their efficacy. But every field suffers from some form of that in some way or another, and the degree of transparency into the process varies greatly.

Fair enough. BTW, I'm not the one who downvoted you, FWIW.

Psychometric evaluations are not voodoo science.

Honestly, as a PhD psychologist, I'd say they are the only useful (i.e. scientific) part of psychology.

For example, see Hunter and Schmidt (1998) http://mavweb.mnsu.edu/howard/Schmidt%20and%20Hunter%201998%.... This (and if you've been around here a while, you've probably seen this) is a meta-analysis of the utility of particular selection procedures for jobs. That is psychometrics.

Unfortunately, much of the bullshit surrounding the particular psychometric tests used by HR departments has thoroughly debased a discipline that invented cross validation. Smith and Mosier, 1958 (Method 6, I believe).

Another boring comment shitting on psychology - and theoretically by someone who should know better. It's really tiresome.

cut/pasting from another comment of mine:

by saying that psychological research is itself useless, you're also throwing away things like A/B testing, UX testing (including Apple's much-vaunted usability stuff), research into grief management, team-building research, research into cognitive recovery therapy after acquired brain injury, work looking into ameliorating sexism and racism, perception research for HUDs in fighter aircraft (my honours research), some pain management research, research into dealing with PTSD, research into crowd control and management...

Someone with a PhD in the topic should be well aware of the breadth of the field that is 'psychology', and to say that the only useful thing in the field is psychometric testing just displays your myopia.

Half the stuff that HN talks about is psychology, from A/B testing to building staff relationships. It is far from 'useless', particularly given this audience.

It is ironic in the extreme that the people that shit on psychology do so because they see it as a 'pseudoscience' that 'doesn't observe things properly', yet so very few of them actually see psychology for what it is - instead just falling back on their own narrow stereotype of it.

Dude, from my perspective, it's the only subfield that is even halfway rigorous in its statistics. I have read so very many psychology papers in top-ranked journals that commit basic, stupid statistical mistakes, all the time. And no one seems to learn.

If you use a linear regression rather than ANOVA, you will often be asked to change it to an ANOVA.
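The regression/ANOVA point is worth making concrete: a one-way ANOVA is mathematically the same linear model as a regression on group-membership dummies, so asking an author to "change it to an ANOVA" changes presentation, not substance. A minimal sketch (the group names and scores below are invented for illustration):

```python
# Toy demonstration that one-way ANOVA and dummy-coded linear
# regression yield the same F statistic. Data are made up.

groups = {
    "a": [4.0, 5.0, 6.0],
    "b": [7.0, 8.0, 9.0],
    "c": [1.0, 2.0, 3.0],
}

all_scores = [x for g in groups.values() for x in g]
n = len(all_scores)
k = len(groups)
grand_mean = sum(all_scores) / n

# Classic ANOVA decomposition: between-group and within-group sums of squares.
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
)
ss_within = sum(
    (x - sum(g) / len(g)) ** 2 for g in groups.values() for x in g
)
f_anova = (ss_between / (k - 1)) / (ss_within / (n - k))

# Regression view: with dummy coding, the fitted values are the group
# means, so the model sum of squares equals ss_between and the residual
# sum of squares equals ss_within; R^2 and the regression F follow.
ss_total = ss_between + ss_within
r_squared = ss_between / ss_total
f_regression = (r_squared / (k - 1)) / ((1 - r_squared) / (n - k))

print(round(f_anova, 6), round(f_regression, 6))  # → 27.0 27.0
```

The two F statistics agree for any data, because both are testing the same null hypothesis under the same linear model.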

I am well aware of the breadth and depth that is psychology, and most of it is poorly conducted and irritatingly bad. I actually think it has a lot to do with applying a particular experimental model of science developed for non-reflexive systems to reflexive systems, with predictably hilarious results.

Please do not take the other commenters' opinions out on me; it upsets me too when people bash psychology from ignorance. I come from a place of love when I bash it, as I really do adore the subject, but I feel that much of it is so very, very awfully done.

A/B testing - psychology degrees teach pretty good experimental design, but you wouldn't know it to see a lot of published research. Additionally, rigorous experimental design owes a lot more to statistics than psychology.

UX Testing: This is definitely a good area, but again I think we don't control enough for the impact of the researcher(s) - see Rosenthal, 1969 for the problem, and note the complete lack of care regarding these effects in modern psychology.

Grief management? Seriously? I don't really think psychology has added much here, but would be delighted to read some good papers that prove me wrong.

Ameliorating sexism and racism? Good intentions, but given that I read a lot of this stuff I would have to say that they are perhaps the worst for statistical sins and errors (with the notable exception of Brian Nosek).

I know very little about perception research in HUDs; do you have some good papers?

I'm not entirely certain why you felt the need to correct me, when if you look at my comments in the overall thread it can be seen that I am pretty much on the same side as you.

It is possible that I am so sick of people who commit statistical sins for career advancement purposes (something I could never do) that I may be taking it out on the field.

Incidentally, what is psychology? I would be interested to hear your definition (as long as it's not waffle like "the scientific study of the human mind and behaviour," which merely begs far more questions).

I won't question that some psychometric evaluations may reveal something interesting about the person taking it. I have much less confidence in the ability of anybody (HR or otherwise) to accurately map those results to anything related to a hiring decision. I lack ALL confidence that the results of such a test should trump the determination of a group of co-workers who interact with the candidate face to face for a period of time and cover a broad ground of topics.

> Unfortunately, much of the bullshit surrounding the particular psychometric tests used by HR departments has thoroughly debased a discipline that invented cross validation.

Yeah, that's the rub, innit? The times I've worked for companies that did this stuff, and when I've seen results from them, I've seen nothing that leads me to believe in the utility of the tests. Unfortunately I can't recall the specific name-brand of the ones I've been exposed to or I'd criticize them specifically.

Which is why we screen people for submission to authority? I don't doubt that they "work" for certain easily-measurable traits of personality. I'm also certain that (1) the tests will be abused and (2) HR people are screening for attributes that are probably more useful in a factory setting.

To varying extents, submission to authority is why you get a paycheck.

If an organization has a successful process of doing something and they bring somebody in who bucks their methods without regard for the establishment, that can be problematic. Sure folks can come in and disrupt organizations for the positive, but I'd say that's probably not the majority of organizational disruptions. Most of them are just obnoxious and unproductive.

People always doing whatever they think is right works until someone's sense of what is "right" is actually wrong, or harmful to their employer. For example, consider all the people who thought their particular brand of humor or affection was ok, but it triggered a harassment suit.

Loose cannons can destroy more value than they create, even if they are brilliant in certain skills.

Good post. These HR tests favor extroversion and obedience. In my experience, brilliant people are highly unlikely to exhibit these traits.

I work with a guy who pounds on his desk daily, curses at code, has OCD, walks about aimlessly, is anti-authority, depressed, and is a self-described autistic. In other words, he's the type to score 1%. However, if you have any advanced math problem he can solve it in minutes, whereas most people would either take days or never get the right answer. For me at least, I have no problem interacting with the guy and joke around with him often. But I do notice that with other people things will usually end up standoffish or awkward.

Perhaps HR would be better served by using the personality profile to train the other workers in how to interact with a new hire and get the most out of them. The idea that a company should have a singular culture built of a single personality type not only sounds like a flawed plan, but is one that in the end is impossible to achieve.

I think that these HR tests basically ensure uniformity of candidates to a certain extent. But if all people in a workplace are of the same nature, if they wear the same clothes (some companies require that too), and if they think in the same manner, a lot of out-of-the-box thinkers would be left out. And it is a law of nature that the higher the variety, the better the yield. I believe that a lot of differently thinking people can generate great ideas by mutual interaction.

By the way, that's exactly why I left a corporate career at a bank: I was the only person on the entire floor to question the dress code, I was the only person in a 10,000-person center to commute by bike (among other things), and I got tired of being the ugly duckling and having my ideas completely rejected just because they were different.

It would be better, but that would require competence from HR. In other words, impossible.

You basically shift responsibility to HR people (who don't know shit about candidates) and they only hire mediocre bets to avoid blame.

That's because they don't share in the wins from stellar candidates, but they do take the blame for bad ones.

I wonder why blame shifting seems so central to the American way of thinking. You'll give talent away to avoid it.

> I wonder why blame shifting seems so central to the American way of thinking.

Because blame goes down and reward goes up, it's a lot more important to make sure the blame for being wrong doesn't reach you than to make sure you get the credit for being right. The reward for being right is an "attaboy" while those above you reap the substantive benefits; the punishment for being wrong remains substantive.

TL;DR Because capitalism, mostly.

> TL;DR Because capitalism, mostly.

This isn't really unique to capitalism - it's just politics and interpersonal competition.

This kind of thing happened in the Soviet Union, it happened in monarchist France (court of Louis XIV), it happened throughout all of human history.

> they're not sharing wins from stellar candidates but they take blame for bad ones.

It often seems like hiring stellar candidates involves circumventing HR policy. HR responds, instead of asking how it can be part of the solution in the future, by adding new policy. Hiring stellar candidates therefore makes it harder to hire stellar candidates in the future. There is no "win" in that game.

> I wonder why blame shifting seems so central to the American way of thinking.

FWIW, the story the OP told took place in the UK.

Where do you think those Americans come from? ;)

But I expected it to be the US, frankly. Didn't expect "personality tests" from the UK.

I've seen this happen myself as well, in 3 cases at 2 different companies we wound up firing a person due to those sorts of issues. But in my opinion it should not be (and it was not) a big hassle to fire someone. I've also hired some people I was on the fence about from a personality perspective and have been pleasantly surprised at working with some productive, albeit quirky people.

But, underpinning your story is the blame/cover angle which I completely understand, and would consider an antipattern itself. If a few people screen someone and decide to take a bit of a chance on them from a personality perspective and they don't work out, let them go, learn from it, and move on.

Technical people also look for "signals" or "red flags" during interviews. They ask themselves questions like whether the candidate is a good fit for the team, whether they'd want to work with him, etc.

HR people are not necessarily better at identifying these "signals." They might emphasize "signals" that are not a big issue for technical people. Some technical people can be a little quirky; as long as it is not serious, it is not a big deal.

I have a solution: Give HR hires a psychometric test that weeds out ones who like psychometric tests

What is "Big Corp" though? I work at MS and they certainly don't do tests like this - and from my understanding neither does Google, Amazon, Facebook (is fb a "bigcorp"? perhaps not yet..). Dunno about IBM - wouldn't be surprised either way I guess.

Perhaps the bigger lesson here is to not do personality tests. With that said, obviously gauging team fit is super important - but you don't need (and should avoid) a test for that.

On a related note, it seems another lesson to be learned here is to be cautious when searching for a technical job at a non-technical company. We don't know whether this is the case in the OP, but nevertheless the whole "BigCorp" caution you give is likely more applicable when you look at financial companies or other companies that don't focus on making software.

But I have no idea why growing startups would model their HR after them. If they want to model their HR after a "Big Corp" then they might as well model it after a technical Big Corp. Let's not compare apples to oranges.

> What is "Big Corp" though? I work at MS and they certainly don't do tests like this - and from my understanding neither does Google, Amazon, Facebook (is fb a "bigcorp"? perhaps not yet..). Dunno about IBM - wouldn't be surprised either way I guess.

I'm an Amazon employee and I can confirm that I didn't have to take any kind of personality test when I applied here.

I was also encouraged to apply to IBM via a friend of a friend, and while I didn't have to take a personality test for them, I did have to fill out a questionnaire asking me about my years of experience in various topics. I was automatically rejected by their system for not having X years of experience in language/framework Y. I didn't find this quite as bizarre as rejecting a brilliant candidate due to some personality test, but I definitely found it a similarly poor experience. It was weird to run into an issue that seems to be brought up every time developers talk about poor interview/recruiting practices.

The only time I can remember actually filling out any kind of personality test when applying for a job was when I applied to McDonalds as a kid, which echoes the GP's point.

When I was younger I applied to a few places that had those McDonalds-type tests; they are completely trivial to subvert.

The questions were basically a thinly veiled attempt to figure out how pliable and compliant you were.

They should have just had one question, "Do you like having a roof over your head?", because let's be honest, you don't work those kinds of jobs because you enjoy them.

All this psychometric stuff is just a lot of crap.

To be fair, they likely get a lot of applications, and some portion of those applicants may be mentally ill in some way, possibly on drugs, or may otherwise be truly abysmal employees who don't care about anything. Tests like those can weed out people who clearly shouldn't be considered.

They have no place at a serious company hiring for a technical position, though. Usually 10 or so minutes of face-to-face talking is more than enough to tell you if your candidate is mentally ill, on drugs, or a major asshole.

Well, that's the thing. I happily stipulate that anyone who doesn't recognize that "strongly disagree" is the required answer to "I think it is okay to steal office supplies" probably will not perform well on the job. Not so much due to the minor stealing issue, but due to the inability to recognize that any other answer will never, ever get you hired (this is of course confounded with the people who decided the test and job are BS and are willfully answering "wrong").

We were all subjected to a test like this recently here. I'm talking most people have been here 5+ years. And suddenly my boss needs a test like this to figure out that I am introverted, that Bob likes to take charge, and so on??!!?!? Really? Fire that manager, if they are that incompetent. I refused to take the test, and fortunately my boss felt the same way and backed me 100%.

It's outrageous. You've observed my capabilities, you know I am great (or poor, or whatever), and suddenly I have to take a test where I am supposed to tell you if I am lonely, if I make friends, and so on. No. No. No.

the problem is discrimination lawsuits.

an old person contests a no-hire decision with "I wasn't hired because I was old"

option A: Hiring manager says "well I just thought you were a jerk"

option B: An objective and universally applied test scores the applicant as being 95% jerk, which is above the 60% cutoff that the company uses globally

which would you rather defend yourself with?

You might be correct about the reasoning. But those records are likely to work against you if a suit ever hits full speed, because those scores could easily support the plaintiffs' case. "Gee, everyone over 55 you've ever hired has a 75%+ score, but your 25-54s range wildly from 56% and up."

Once the company asserts this defense, the plaintiffs ask for demographic details on every employee in the company, and what the scores were at time of hiring.
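The discovery argument above is easy to illustrate: once the company leans on the score as its defense, the plaintiffs can compare the lowest score that actually got someone hired in each age band. A toy sketch with entirely made-up numbers:

```python
# Toy illustration (invented data) of the disparate-impact pattern a
# plaintiff's expert would look for once hiring scores are produced
# in discovery.

hires = [
    # (age_at_hire, screening_score)
    (28, 58), (31, 62), (45, 57), (52, 71),
    (56, 78), (60, 82), (58, 75),
]

older = [score for age, score in hires if age >= 55]
younger = [score for age, score in hires if age < 55]

# The lowest score that actually got someone hired, per group. A big
# gap here suggests older candidates faced a higher effective bar,
# regardless of how "objective" the test itself is.
print(min(older), min(younger))  # → 75 57
```

The point is that the test's objectivity is no shield if the cutoff was applied unevenly in practice.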

Think of them this way: they don't test your personality, they test whether you know what the obviously correct answers are for a personality screening test.

Now imagine what must be going through someone's head when they answer "never/strongly disagree" to the question "How often do you try to treat customers and coworkers with respect?" and the tests start to make a little more sense as a screening mechanism.

And whether you can lie for them.

I took a PHP test for a recruiter once. Half the questions were "what would you get if you typed this into the interpreter?", which you weren't allowed to do. It seemed to come down to an honesty test.

An anecdote, for sure, but a friend of mine just failed a Google interview for not knowing 'Dalvik' was the name of the Android VM, despite knowing it was stack-based and used, among other things, to avoid licensing fees. The non-technical interviewer saw that his answer to the trivia question was wrong, and that concluded the interview.

It's possible that trivia question was the straw that broke the camel's back, but since she wasn't sure whether or not his additional information was correct and/or sufficient, he 'lost out'.

But isn't Dalvik register-based, not stack-based? Missing the name could be excusable, while getting the architecture wrong might have been a relevant factor.

Yeah, mistype on my end. He answered it correctly :-)

Facebook has 5,000 employees. Target has 350,000. So when I say "Big Corp" I mean the latter. The primary users of these tests are companies like Target, Starbucks, etc., who hire large numbers of relatively unskilled people for jobs where personality factors matter more than anything else.

Fair enough. In that case, my last 'paragraph' is definitely what the takeaway from my comment should be.

I read "Big Corp" as "non-technical Big Corp" in this sense. I think it's pretty well known that the major tech players have difficult technical interviews, but that the people ultimately making the decision are engineers.

IBM uses a test called IPAT for hiring. It's more of an IQ test than personality.

>>But their role is to get resumes in front of the real decision makers, and to take care of all the stuff like W-2 forms and whatnot nobody wants to deal with,

Don't forget enforcing dress codes and other such nonsense policies in the "employee handbook."

> Don't forget enforcing dress codes and other such nonsense policies in the "employee handbook."

I personally don't have a problem with this. Someone has to be responsible for making sure that the office doesn't look like a sty / homeless shelter. When investors/customers/interviewers stop by, there's a certain level of professionalism that needs to be present. So if there's a rule that people need to wear a top, they should be the ones to enforce that, in my opinion.

If I were a business owner and a customer judged me for allowing my employees to wear comfortable clothes around the office, I would not want them as a customer. Same with investors.

Why? Because them caring about outward appearances would signal to me that they have a fundamental lack of understanding about what factors are relevant to running a successful business.

Don't get me wrong: if an employee is visiting a customer or business partner, then yes, they should dress up. But the office should be an environment where people focus on their work instead of how they look. My philosophy: people should be able to wear anything they would feel comfortable wearing around a roommate that they don't know well. I think once you provide that guideline, people will auto-adjust towards a standard that is acceptable to everyone.

I agree with your sentiment, but on the other hand, what's the big deal about putting your best foot forward when a VIP is visiting?

When I was in a small, casual start-up, they asked us to dress nicely for just one day to impress some visiting investors. None of us had a problem with it; we all had a stake in the company's continued health. It seemed like the natural thing to do, and I'd do it again if needed.

I've done that too, in the same situation. But I think it may backfire. Some investors may be nostalgic for their days in a startup. Others are still looking for that "startup edge" which differentiates a startup from stagnant corporations.

If your clothing says "corporate", you may send the wrong message and drain some interest.

Obviously this varies by area and industry.

> Why? Because them caring about outward appearances would signal to me that they have a fundamental lack of understanding about what factors are relevant to running a successful business.

why does it matter to you if your customers understand what factors are relevant to running a business?

What do customers do in your back/development office?

Because like it or not, first impressions are incredibly valuable. If I am looking at giving you, say, $100k worth of business, and I walk in and it smells vaguely terrible, people are disheveled and the office is a mess, then right off the bat you're down a peg.

Assuming you have a competitor that is clean & neat, with employees who are presentable, all things being equal I am more likely to choose them. You & your employees might actually do better work, but you're fighting an uphill battle from the get-go. And that to me is one thing that a successful business doesn't do: immediately put themselves at a disadvantage.

And it's not just customers or investors. It can be vendors, partners and, quite frankly, other employees. Ultimately it's a place of business, and certain standards need to be set because otherwise some employees might not set appropriate ones of their own (whether it's smell, hygiene, shabby dressing, what-have-you).

To clarify, I'm not saying you have to have everyone in suits to have a successful business. I'm just saying that someone should at least be able to enforce a basic level of presentability throughout the workforce. Even if that just means "Hey, ABC, you gotta start wearing deodorant to the office."

How did we get from comfortable to disheveled? No-one dresses up at my office but they do still smell like they take showers and wash their clothes.

Well, HR does have final say-so in terms of logistics. Like does the candidate require sponsorship to work in this country, do they have a criminal record, did their reference say they were an asshole, are they asking for more money than we can afford. Even judging personality and communication skills is not outside their expertise. The lesson of this article is to not use personality tests.

Well there are obviously certain areas where they should have final say (e.g. this candidate is great, but here illegally and not authorized to work in this country).

They should not have final say over a candidate based on personality or communication skills. They should be a source of input (e.g. "this guy is an asshole, because he was very rude to the guy who helped him schedule his callback interview", or, perhaps more importantly, to inject some company-wide context into what can turn into a group-specific analysis), but it's the technical people who know what kind of communication goes on in a technical team, and thus it's the technical people who should evaluate a candidate on that characteristic. Yes, this requires trusting your technical people and investing them with a greater responsibility to understand the dynamics of their work group. That's a good thing.

Even with this, should they have the final say? I don't think so.

They should give you the risks: "He has a criminal record of fraud, and hiring him will expose us to lawsuits that could close the doors. In addition, we will have to quit doing business with our top 5 customers." But that's not a decision. It's information.

They can give you their capabilities: "I am unable to come up with a way for us to legally employ them in the U.S." or "It will cost us approximately $500K to manage the legal end of hiring them." This is not a decision, it's information.

They can give you personality notes: "He was extremely rude and insulting during the initial phone calls, and asked me to perform a sex act for money. I believe he will be a personality cancer in this company." This is also not a decision, it's information.

When you actually trust and value your HR department they no longer feel the need to be gatekeepers. They are a valuable source of information during the hiring process. Of course, you have to trust your hiring managers to make the right decisions based on this information.

Policies that give multiple departments "final say" or veto powers are put in place because the individuals are not trusted... which points back at poor hiring or promoting.

To be fair, you did say:

> There is absolutely zero reason why HR/recruiting people should have final say on a candidate. None.

Now you're saying:

> Well there are obviously certain areas where they should have final say

Fair, if pedantic, criticism.

If you are hyperbolic, then you can expect pedantry in return.

If every hyperbole was met with pedantry then nobody would ever go fishing again, and HN would turn into the worst newsgroup you can remember from the 90s.

This would be good for the fish.

Be pedantic towards hyperbole and save ALL the fish!

When groups designed to serve become self-serving control apparatuses, I think a company is deeply fucked. Probably irrevocably so, although I'd love to learn otherwise. It means people have forgotten the mission, and once that happens I think it's slow death by turf battle.

You also see this happen with IT, Ops, and software development organizations, not just HR. E.g., feature X is the company's highest business priority, but the DBAs don't like it and they control all schema changes, so they just say no.

In my view, HR's job is to help hiring managers find the people they need, and then to support those people once they're there. If hiring managers need help with making hiring decisions, then by all means offer them help. And if HR, or anybody, spots something that concerns them, they should speak up. But I think giving HR a veto is a clear mistake. So no, I'd say that tests just make this an obvious problem, rather than a subtle one.

You explained why they should have input, not why they should have final say.

We tend to say anyone involved in the process can veto and I'd extend that to HR if they'd shown themselves to add to the process.

Work permit issues are entirely a reason to veto, and HR are better placed to spot that sort of thing than the dev team; similarly for issues around references and the like.

The issue here isn't with HR per se, it's with a bad HR department overreaching, but personally I'd be just as skeptical about a technologist pointing to poor performance on a technical test (with good face to face interview results) as I would about this.

If you're trying to separate "face to face" performance and put that in the domain of HR, and "technical test" performance and put that in the domain of the technologists, then you're going to have a bad time. That prospective hire who did great on the face to face isn't going to spend part of every work day chatting with HR...

Between your technical people and your technical management, there should be enough people skills to function coherently and effectively as a team. As such, those people together should have enough people skills to figure out whether a prospective candidate has the people skills to function effectively in the team. If those people see people skills as exclusively the domain of HR, well then you made prior mistakes in hiring that cannot be remedied with further mistakes.

This is exactly the sort of mission confusion that results in HR departments overreaching. If HR sees itself as "the people people" and its role as evaluating candidates for "people skills" and engineering's role as evaluating candidates for "technical skills" that's an unworkable division of labor and will invite improper encroachment from HR into the hiring process.

That's not what I said.

What I'm saying is that putting too much faith in tests isn't just something HR do (I've seen people put far too much faith in technical tests), and that HR having a veto isn't per se a bad thing so long as they exercise it sensibly based on things within their expertise.

> We tend to say anyone involved in the process can veto

Okay, you say that, but is there any empirical evidence that this leads to better decisions? Especially when, as in this article, the only one saying no is the one unable to judge the person's ability to actually do the job?

> We tend to say anyone involved in the process can veto and I'd extend that to HR if they'd shown themselves to add to the process.

IIRC there was a story a few months back about a woman at Valve who couldn't get the staff she needed because of that sort of veto setup.

That may be the case.

It works for us because the people involved all understand the log jam it can cause if people don't behave properly but I can see in another company (or even division) it might not work. You would only need one or two people deciding they wanted to throw their weight around to wreck it.

Maybe I'm lucky but I've worked with some good HR people who understood where they could add value and where they should step back. To me this really feels like a good people / bad people thing rather than an HR people / technical people thing.

> There is absolutely zero reason why HR/recruiting people should have final say on a candidate.

I can think of one. If they are responsible for doing a criminal background check, and have found serious criminal activity in the prospective employee's past.

That's not to say that some prospective hires can't be rehabilitated; but in certain cases, hiring someone with a criminal background is just too much of a risk, and it's not unreasonable for HR to have a final say on this.

Other than that, yeah, you're right.

I used to work for a pre-employment background screening agency. It's illegal to reject someone based on any criminal history as long as the person has not lied about their background. Unfortunately, it's not illegal (federally, with the EEOC; each state may have its own differences) to discriminate by personality test.

This is definitely something I saw as prevalent in the hiring world: if it isn't outright illegal, use it to prevent someone from being hired. In many cases we had clients that'd tiptoe the line and ask if certain reasons would be valid for rejecting a person, and some HR managers were upset when they learned it was against the law to refuse to hire someone who had admitted their criminal history. So they found another reason.

There is still a massive amount of hiring discrimination. It's just not the "official" reason anymore since that'd be illegal. That job was soulbreaking.

> It's illegal to reject someone based on any criminal history as long as the person has not lied about their background.

I'm not sure if you are in the US, but that's not true here. As a specific example: if the crime involved money and the position is with a financial institution. We have a big problem in the US where felons can't get jobs which can obviously lead to a lot of recidivism.

> It's illegal to reject someone based on any criminal history as long as the person has not lied about their background.

Source? As far as I know, it's legal, but you have to show a good reason to do so, and finely tailored policies, that aren't simply a broad "we don't hire anyone with a criminal record."

There are plenty of cases in which you may, or even may be required by law, not hire someone based on particular crimes on their criminal background check, such as sex offenders for jobs involving children or felonies for people who need to get a security clearance.

I was overly broad with that, excuse me. It's much more complicated, but generally you can't exclude someone based on a criminal record alone as you rightly point out, and that's not accounting for other circumstances. So yes, financial will matter (of course, any financial crime means that license will be revoked and that record will be accessible as well), but unrelated crimes are different.


The main reason someone gets rejected for criminal reasons, in my experience, is having an old record that's been cleared or expunged, and then answering "No" when the employment application asks the specific wording "Have you ever been convicted of a crime?" The answer would be yes, and a criminal check would reflect that something had happened, but also that the record had been expunged due to enough time passing for that particular crime. It sounds convoluted, and it is. Not sure it really should be, but I only worked in the industry. Whether or not the industry followed the letter of the law is much, much different.

It's ironic that in an America that lets legal shenanigans turn all kinds of people into criminals, the same legal weaselling lets companies get away with what is completely illegal, just because they say they didn't do it for the illegal reason, but for some other reason they chose at the time.

Even in that case, HR can identify the risk parameters (likelihood of the risk manifesting, expected cost if it manifests, overall expected cost taking all of that into account, etc.), but they can't weigh the cost against the expected benefits.

There are plenty of cases in which the company just needs to have a policy that it's an unacceptable risk. For instance, if the job involves getting a security clearance, a felony on your background check basically means that you are unfit for the job. Or if you are dealing with highly sensitive financial data, there can just be too much of a risk.

I'm not saying that all HR departments should have such a policy (in fact, in some cases it can be illegal to request criminal background information, at least on the initial application form). But there are some cases in which a policy like that could be necessary and final.

> There are plenty of cases in which the company just needs to have a policy that it's an unacceptable risk.

Sure, but HR generally shouldn't be setting that policy, and whoever is setting that policy is logically making the decision, HR is gathering an input to that decision (a potentially decisive input, but that's because of decision criteria set outside of HR.)

I think the appearance of this kind of voodoo in hiring practices is a sign that the company has jumped the shark. It may still have many years of good earnings ahead and still be a lot of fun and profit to work for, but once these symptoms appear, it is not on the way up anymore but on the way down, since it is no longer ruled by common sense but by the black magic of "we do not need to think, we have policies; policies don't have to make sense, they are there to be followed".

> There is absolutely zero reason why HR/recruiting people should have final say on a candidate. None. It should be inconceivable, at a technical company, to hand over that much hiring power to a non-technical person.

Welcome to the world of Asian Bigcos, where HR not only has the final say on the candidate hire, they also have enormous influence on what department/team that candidate ends up in.

I've read a few business-related books/taken several courses, and it's actually taught that HR should have the largest role in hiring.

The rationale: 1) HR is the manager of the human resource. 2) HR should determine and screen employees for possible risks. 3) HR is at fault if the screening is not achieved.

Relying on a test such as a personality test is pretty B.S. though. Most tech guys, compared to the general populace, will receive low scores for being social or extroverted, which supposedly translates directly into being a less productive team member (by being unable to communicate his/her ideas). In actuality, introverted individuals often can communicate well, or in a manner which is suitable to completing a task, such as the man who was not hired in this example.

What HR really needed to do was to just talk with him; clearly he didn't seem like a threat and was capable of working with a team, so it really shouldn't have mattered.

At the previous company I worked for, I was given one of these tests in the interview process. Later I asked the HR Director about the test and she told me: "Well, you did crazy good, 100% accurate. However, the software also factors in when people give answers that we want to hear, and you failed that part because it said you were not being honest. I ignore that one, though, because I think that if you're smart enough to know what the right answers are, then you're most likely smart enough to know what would be best for the company given the chance."

Smart HR Directors will get it right however I feel sorry for any company that uses the results of those tests to actually make a decision.

I understand why you think that way, but there are reasons HR-related folks get involved in these types of things. Note that I cannot speak for this particular company or their processes, but I can speak generally.

Properly validated psychometric tests are valid predictors of job performance (by which I mean objectively and factually correlated with actual work performance), and a properly constructed, work-relevant test will on the whole be more valid than subjective judgments from most technical professionals. Of course there will be hits and misses, but consistent application is the key to its use. If you don't consistently apply it, it becomes worthless.

But, let's throw that out of the window for a second because in this particular anecdote I don't think it's relevant. This test they were using seems more like a cultural fit, specifically a person-organization fit test. These tests are less about predicting job performance than they are about predicting organizational commitment and ultimately tenure [1]. Surely we can agree that those things are important too.

But, let's again say that this candidate's aptitude is so impressive that we don't mind an increased statistical chance that his or her tenure may be short. The real issue is that legally you are obligated to apply the same set of criteria to all of your applicants for a given role (not necessarily the exact same position, but comparable positions). If the company is applying cognitive testing, personality testing, drug testing, person-org fit testing, etc. to one group of applicants, you now cannot fail to adhere to those criteria, because it is an invitation for legal liability, however seemingly spurious.

Having said that, it's important that your selection processes have been validated previously, or at least are in the process of being validated (which provides some protection). I also have a big concern with the linked situation because: 1) it's a culture-related test, which means the potential for cultural bias is high, and 2) they were unwilling to make language accommodations, which seems unnecessarily rigid when native-level English skills are likely not a core component of the job. Point #1 can be accounted for in some shape or form (at a minimum through a low cutoff score), but Point #2 is more problematic.

Anyway, there is a reason companies adopt "Big Corp" characteristics as they scale. The primary reason is legal compliance through standardization of process, and the other is that the data support the validity of objective predictors of job success relative to subjective judgments.

I realize I may be opening myself up to some criticism and scorn from the HN crowd for seemingly representing "Big Corporate" or acting like a Bob from Office Space, but so be it. Despite what it may seem, there are often smart people using evidence-based processes behind seemingly asinine HR processes that drive you crazy. And sometimes there aren't, and it's just a big pain in the ass for poor reasons.


[1] PDF of meta-analytic findings about person-job/org fit: http://nreilly.asp.radford.edu/kristof-brown%20et%20al.pdf

The problem with psychometric tests is that they're easy to game by answering them with the values the company is almost certainly looking for. Any company issuing a psychometric test is looking for obedience, hard work, discipline, profit orientation, civility, non-threatening creativity, etc. etc. etc.

So what you get by discouraging a broad diversity of personalities is a mix of people who actually have those specific characteristics and sociopaths. The latter group ends up running the company, and of course they issue utterly sociopathic "psychometric evaluations", which are basically just encoded discrimination against entire classes of people, regardless of job fitness, because the people up top are such incompetent leaders that they need subservient workers to obey their crazy orders without question.

Actually, spotting and weeding out sociopaths is one of the points. I know that you may believe that you're a good judge of character, but actually a smart sociopath (not the norm, btw) is likely to be able to game you just as easily as the test.

There are three buckets of people you don't want to hire:

  - helpless
  - depressed
  - jerks
In my subjective experience:

The top one has about a %50 shot of being a character flaw, e.g., no amount of confidence-building will help.

The middle one has the most hope, if the talent potential is significant. The best in the field can be painfully shy as well as depressed. The problem is that depression (not shyness) hurts morale. [1]

The last has a %75 chance of being a lost cause. Impossible to manage, as they are incredibly disruptive to morale and productivity. [2] Their survival is often linked to mastery of politics, which entrenches their position.

[1] http://m.fastcoexist.com/1679208/a-sad-worker-is-a-bad-worke...

[2] The No *sshole Rule by Robert Sutton

Your buckets are quite simplistic, and I doubt your company will be successful. First, if this is in the States, you are not allowed to discriminate against clinical depression, so (2) will just lead to a lawsuit eventually. As for (1), if they have the skills, are they helpless? Jerks are quite deplorable, but it's a gradient, right? How much jerkiness do you tolerate, or do you want saints?

I agree. I am working with people who have these 3 types of characteristics. This 3-type categorization is way too simplistic to describe an individual. I find no problem working with them. People are interesting. They all have some kind of quirks and behavior patterns. It's important to be tolerant and adaptive. But people with strong technical depth are really hard to come by.

Duh. Value is holistic. We can find anything we want to find to support an argument.

When you have worked with someone that screams at their coworkers, can barely feed/clothe themselves or refuses to lift a finger to help you... then you will know how much time and emotional effort these behavioral traits can waste.

Toxic employees exist at the extremes of those buckets. Avoid hiring toxic employees, but the filters available to you might not be good enough to screen them out, so fire them quickly. During the interview, you might just be able to observe minor quirks which are hardly indicators either way.

I cringe whenever I read job ads that are aiming for a culture fit: "we only want to hire harmonious, independent, extroverted employees." In my experience, the best employees do not hold strongly to those ideals, but neither do they hang out at the other extremes.

All or nothing hiring is too risky. Gradual formality makes the selection from both sides cooler, you know. We don't do interviews either; hazing sure, but no stupid hypothetical questions to test loyalty. There is no loyalty. Greed is good. =)

There is another option. Make something people want and build it up slowly, on the side. If you do, you may free yourself from being at the beck and call of other masters (other than users). Others have, so it's possible. We are each masters of our own fate.


Hiring is discrimination. That's the point.

Only someone without life experience would not read between the lines that buckets are ideals and need the explanation that hiring is holistic.

To then make a straw man fallacy out of the opposite of an ideal is to be inconsistent: you can't have it both ways.

    - people that put their percentage symbols in the wrong place

    - People who capitalise their sentences.
ps. lol

    - People who spell 'capitalize' as 'capitalise'
pps. lolol

- the colonies

Well, my main objection is to hiring people on the basis of their character, whether it's with a test or not. Assess for whether or not the person can do the job and make you a profit.

Can you spot the person who interviews well, but is a skilled sociopath who will destroy the morale of your team, causing other skilled talent to leave?

That's the value of these tests (when properly applied): They allow an evaluator to make statistically accurate predictions of specific types of behavior. The key is the two words: statistically and specific. A single composite score will not tell you anything, but the idea that there's a 68% chance that the person you're about to hire is a schizophrenic kleptomaniac should give you pause.

> there's a 68% chance that the person you're about to hire is a schizophrenic kleptomaniac

Sorry, I don't believe "psychometric tests" can do that either. At least not at the level that we're talking about: distinguishing an otherwise qualified person from someone who's got some sort of psychopathology.

Then you didn't read the other replies in the original thread from actual research psychologists. They described the process and posted links to peer reviewed academic papers. Your baseless disbelief is equivalent to disavowing climate change or vaccines.

> the idea that there's a 68% chance that the person you're about to hire is a schizophrenic kleptomaniac should give you pause.

I don't believe in employment discrimination on the basis of mental illness. After all, John Nash was schizophrenic.

You can't catch psychopaths using tests like these. The whole point of being a psychopath is that you can change the perception of your personality at will.

So what you're letting in is a subgroup of people who pass, plus psychopaths.

That isn't actually what psychopathy means.

It's not what it means, but it is something they are capable of. They can be charming, and they can be horrible. They decide what they are going to be like, to fit an overall selfish strategy. If being shy is an advantage, they will become that. If being confident is an advantage, they will become that. They can gain your trust, then flip when it is an advantage to do so.

In these tests, they know what to say and what to do to give a certain impression. These tests can't catch them, because they are not honest and lack integrity (no consistent values, only ones that benefit them).

More specifically, the psychopaths that you're likely to run into in a tech company fall into this class.

There are certainly psychopaths who are hopeless at concealing the fact. These are typically the less smart ones, so you wouldn't meet them; they are more likely to be in and out of prison than your office.

There are also psychopaths who are just about perfect at pretending not to be, and have no intention of ever doing otherwise. You can work with one of those for decades, and never notice a thing unless something extreme enough to make 'acting normal' seem a long-term liability happens.

Again, go read the links that the actual PhDs in psychology posted in replies to the parent thread. They provided documentation in the form of peer-reviewed studies that back the claims that this kind of personality is detectable despite your belief that they are not.

If one's character meaningfully predicts job performance on top of whatever other methods you are using, then assessing their character is the same as assessing whether or not they can do the job (or at least, how well they can do it). Integrity tests have demonstrated this in many contexts, even taking into account the possibility of faking.

Except that obviously wasn't the case in the linked article.

In the case of the military it can be the opposite: they weed out applicants who they are not confident will pull the trigger every time they are asked to.

Well, they try.

I don't remember where I got the number, but apparently something like two-thirds of the regular army still end up unwilling/unable to aim properly when a combat situation actually happens. That's from the Great War, though; procedures may have gotten better since.

> The problem with psychometric tests is that they're easy to game by answering them with the values the company is almost certainly looking for.

You'd think that, but a quality psychometric test can't be gamed if it was developed in-house and is only taken once by the candidate. This particular company might not have had one of those, however.

Professional personality tests can't be gamed - you also can't find these on the internet, because they're very strictly copyrighted.

Can you go into more detail? It's hard to imagine how a test could be devised that wouldn't be gameable by giving answers that would be expected of a person with a normal personality.

It's not particularly difficult to devise a personality exam that way - in the best case scenario, the candidate cannot game the test, and in the worst case scenario, they game it but the proctor is fully aware of the gaming, so you simply reject the test (or the candidate).

Instead of trying to get rid of questions that are gameable, the psychometricians design the test for statistical consistency. Something as obscure as, say, which color the candidate picks in a multiple-choice question out of orange, yellow, blue and red can be correlated with other questions (not literally that question, but to a candidate the real items are equally obscure and seemingly innocuous). If a candidate answers one question in what they perceive to be the societal ideal, this will be exposed in other questions, where it's not possible to game the answers cross-consistently unless you've read and thoroughly understood a manual of psychology or psychometrics.

Tests designed this way allow for a certain amount of "gaming" by candidates before it crosses a statistical threshold, at which point it essentially tells the proctor, "The answers are so inconsistent that the candidate wasn't being honest." and you have to throw out the whole test (which in the context of hiring, means rejecting the candidate).

The reason this works is that while people will answer "Would you consider yourself hard working?" with "Always" or some other unrealistically gamed superlative, they won't realize that other, seemingly unrelated questions are highly correlated with that quality. If you answer yes to one and no to another, you probably lied on the transparent one, with high confidence.
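To make the cross-consistency idea concrete, here is a minimal sketch. Everything in it (the item IDs, the agreement table, the threshold) is fabricated for illustration; real instruments like the MMPI's validity scales are norm-referenced and far more elaborate than this.

```python
# Hypothetical sketch: pairs of items that, in honest norming samples,
# tend to be answered in agreement. A candidate who games the
# "transparent" item but not its obscure partner accumulates error.
CORRELATED_PAIRS = [
    ("q_hardworking", "q_obscure_17"),   # made-up item IDs
    ("q_honest", "q_obscure_42"),
]

# Whether each pair is expected to agree (fabricated norming data).
EXPECTED_AGREEMENT = {
    ("q_hardworking", "q_obscure_17"): True,
    ("q_honest", "q_obscure_42"): True,
}

INCONSISTENCY_THRESHOLD = 1  # made-up cutoff


def inconsistency_score(answers):
    """Count item pairs whose answers disagree with the agreement
    pattern observed in honest norming samples."""
    score = 0
    for a, b in CORRELATED_PAIRS:
        should_agree = EXPECTED_AGREEMENT[(a, b)]
        agrees = answers[a] == answers[b]
        if agrees != should_agree:
            score += 1
    return score


def is_profile_valid(answers):
    """True if the answer profile stays under the inconsistency threshold."""
    return inconsistency_score(answers) <= INCONSISTENCY_THRESHOLD
```

A consistent respondent passes; one who games only the transparent items trips the threshold and the whole profile is discarded, which is the "throw out the test" behavior described above.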

Tests like this[1] have been around for so long that they consistently get better, though there are some valid criticisms of them being easier for non-minorities (for a variety of reasons). They are used in clinical and professional contexts, and while a quality personality test will theoretically be consistent across multiple administrations (in other words, results are relatively immutable), in practice you should avoid giving one individual repeated exposure to the exact same test.

Hope that helps.

[1]: http://en.wikipedia.org/wiki/Minnesota_Multiphasic_Personali...

I'm a psychologist, and I game personality tests for fun.

The MMPI is harder than the Myers-Briggs, but not that difficult. Additionally, it's really only optimal for clinical samples, at which it is very good.

However, everyone in the field knows that these tests can be gamed; the only open question is how many people game them consistently.

You can probably estimate the proportion of social desirability exhibited in job interviews by comparing to non-job situations (such as a sample from the general population matched on all relevant covariates, whatever they are).

Nonetheless, believing in personality tests as an accurate indicator of personality is as misguided as believing that Facebook represents the social graph of all its daily active users accurately, i.e., somewhat misguided.

And I am aware of lie scales; they are trivial if you actually read the questions. Protip: if a question says "always" or "never," it's probably designed to trip you up.

I do agree that personality tests are more accurate than this thread makes them out to be, but they are certainly not as useful as your comment implies.

Wow, I have to defer to you then :)

Is there anything you've got offhand that I can read further about this? I didn't know you could actually game them.

All self-report data is inherently flawed, which is why you can't rely on one kind of data while trying to learn something about personality.

Find the scoring manuals, do loads of personality tests, rinse, repeat. It's not particularly difficult.

Despite this, I ran many surveys back when I worked in academia. You can detect some of this stuff with Guttman errors, but they are not often used, and as long as you are consistent, it's very difficult to spot.
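For readers unfamiliar with the Guttman errors mentioned here: on a Guttman-style scale, items are ordered from easiest to hardest to endorse, so endorsing a hard item while rejecting an easier one counts as an error. A minimal sketch (the counting scheme is one common convention, not a specific published implementation):

```python
def guttman_errors(responses):
    """Count Guttman errors in a single respondent's answers.

    responses: list of 0/1 answers, ordered from the easiest-to-endorse
    item to the hardest. An error is any pair where an easier item is
    rejected (0) while a harder item is endorsed (1).
    """
    errors = 0
    n = len(responses)
    for i in range(n):
        for j in range(i + 1, n):
            if responses[i] == 0 and responses[j] == 1:
                errors += 1
    return errors
```

A perfectly scalable response pattern like `[1, 1, 1, 0, 0]` yields zero errors; reversed or erratic patterns pile them up, which is the signal a survey analyst can use to flag inconsistent respondents.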

> The reason why this works is because while people will answer "Would you consider yourself hard working?" with "Always" or some other unrealistically gamed superlative, they won't realize that other questions that are seemingly unrelated are highly correlated with that quality

Can you provide a deeper example? I'm honestly curious - what sort of question is innocuous enough to be answered honestly, yet useful enough to provide information? (i.e., do slackers like the color yellow or something?)

This is actually part of the so-called "bogus pipeline." Test-takers are told that the test can detect any attempt to lie, causing them to answer the questions more honestly than they would otherwise. The MMPI does detect inconsistent and unrealistically extreme answers, but it's not quite as foolproof as claimed above.


Here are some:


By googling those, you can go deeper into the rabbit hole.

Okay, that is the MMPI. That is not evidence that tests other than the MMPI cannot be gamed. The MMPI (and I disagree with it in so many ways) was researched and tested for decades, and perhaps it can detect lying. Perhaps. I don't buy that any lesser test cannot be easily gamed, absent a heck of a lot of experimental evidence for that specific test.

Often, at least in work contexts, it doesn't matter if they can be gamed. Validity testing is done with the effects of gaming built in, and there is some evidence that gaming correlates with performance benefits.

An acquaintance of mine is finishing his dissertation on faking on personality and integrity tests, so I'll know more on that soon.

Yes, but in that context gaming is known and accounted for. My definition of gaming was manipulation without the proctor or the evaluation showing any statistically significant deviation, which is virtually impossible on modern personality tests. The "gaming" becomes transparent and is used to score the candidate, if it's present at all.

I appreciate your input, especially as counterweight to those who underestimate the sophistication of these tests.

However I question your absolute statement. What man can create, man can circumvent. (How's that for an absolute statement?)

Would you say that the people who created the test cannot game it?

No, it can certainly be gamed. It's just so hard without prior knowledge and preparation as to be statistically negligible. This is also why I said you should avoid giving the same test twice. But a single, cold test administration should be very difficult to game.

You should also read upthread, what the actual psychologist said. It clarified my comment really well.

> But a single, cold test administration should be very difficult to game.

Oh come on. This was posted in a different comment:


The following are very obviously the "correct" true or false answers to these questions from the MMPI-2:

  T * My mother is a good woman.
  F * Evil spirits possess me at times.
  F * There seems to be a lump in my throat much of the time.
  T * At times I feel like swearing.
  T * My hands and feet are usually warm enough.
  F * Ghosts or spirits can influence people for good or bad.
  F * Someone has been trying to poison me.
  F * Everything tastes the same.
  F * Someone has been trying to rob me.
  F * Bad words, often terrible words, come into my mind
      and I cannot get rid of them.
  F * Often I feel as if there is a tight band around my head.
  F * Peculiar odors come to me at times.
  F * My soul sometimes leaves my body.
  F * When a man is with a woman he is usually thinking
      about things related to her sex.
  F * I often feel as if things are not real.
  F * Someone has it in for me.
  F * My neck spots with red often.
  T * Once in awhile I laugh at a dirty joke.
  F * I hear strange things when I am alone.
  F * In walking I am very careful to step over sidewalk cracks.
  F * At one or more times in my life, I felt that someone
      was making me do things by hypnotizing me.
I cut out a few because I couldn't see how true or false mattered.

Those are the obvious ones. I've taken the MMPI. There's a lot more than just those. None of these are the seemingly innocuous ones I was talking about.

Again, I'll reiterate: can they be gamed? Yes. Is it likely? Emphatically, no.

Okay, I didn't know you've taken the MMPI, so fair enough. I would be surprised if answers to the innocuous questions were that important though; they strike me more as luring you into a sense of complacency and so that you end up answering honestly when it comes to the important ones.

Now that I've read some more, I'm curious.

Are you referring to the MMPI or something similar? Or to Unicru and Kenexa? Unicru seems to ask much more prosaic questions than MMPI.

And yes, when I speak of gaming, I am assuming prior knowledge and preparation - but not necessarily inside knowledge.

Properly validated psychometric tests are valid predictors of job performance (by which I mean objectively and factually correlated with actual work performance), and a properly constructed, work-relevant test will on the whole be more valid than subjective judgments from most technical professionals.

Do you have any support for this where the employees are both technical and creative? I'm obviously thinking about software development, but it should apply to any kind of engineering or technical job where the employees need to have a deep technical knowledge, know how to apply that knowledge practically, and have to use that knowledge to create new solutions to new problems.

I've been personally involved in validity testing for graphic designers, and while the validity coefficients were reduced they were still of practical significance, and had incremental validity over cognitive ability testing (which is always the best predictor, but tends to show racial bias). I will see if I can find any published research as I've not seen any and am now curious myself.

Is this an accurate description of what you mean by a cognitive ability test? http://en.wikipedia.org/wiki/Cognitive_Abilities_Test

If so, under which circumstances are they used? For a graphic designer, the natural "test" would be for fellow graphic designers and potential managers to look at an applicant's work samples, or to ask them to produce one. This method directly tests the applicant's ability to do the type of job, although there is no objective metric. You are relying on people's subjective assessment. How do cognitive ability tests compare?

Yes, in essence. When I refer to CATs I'm talking about measures testing g (http://en.wikipedia.org/wiki/G_intelligence). And I know it's hard to believe, but g-centric tests like cognitive ability tests do a better job than other, seemingly more relevant selection measures like work sample tests and assessment centers. The benefit of work sample tests, assessment centers, integrity tests, etc. is that their validity is decent and that a significant portion of it is independent of g-centric measures.

Here is a good article you can read on the subject: http://www.unc.edu/~nielsen/soci708/cdocs/Schmidt_Hunter_200...
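The "incremental validity" mentioned a few comments up has a simple operational meaning: how much additional variance in job performance a second predictor explains beyond g. A toy sketch using ordinary least squares (all data fabricated; the predictor names are placeholders, not a real validation study):

```python
import numpy as np


def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept added internally)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)


rng = np.random.default_rng(0)
n = 500
g = rng.normal(size=n)           # cognitive ability score (fabricated)
integrity = rng.normal(size=n)   # second, largely g-independent predictor
# Fabricated performance: both predictors contribute, plus noise.
perf = 0.5 * g + 0.3 * integrity + rng.normal(size=n)

r2_g = r_squared(g.reshape(-1, 1), perf)
r2_both = r_squared(np.column_stack([g, integrity]), perf)
delta_r2 = r2_both - r2_g  # incremental validity of the second predictor
```

The positive `delta_r2` is what's meant by a measure "having incremental validity over" cognitive ability testing: it buys predictive power that g alone doesn't provide.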

The difficulty I have with that article is that I don't know how the jobs in these studies map to engineering or research jobs. I'm thinking of jobs where one has accumulated close to a decade's worth of knowledge before starting.

Related to this, I think it's important to consider that this correlation between g and job performance is conditioned on the fact that the person applied for the job. That sounds trivially true at first, but it means that the applicant felt like they were competent for the job (in the best case; in the worst case, it meant they felt that they had a chance of appearing competent at the job). In other words, what we're saying is, "Of the people who thought they could do the job, the smartest ones tended to do the best."

But if our candidate pool was everyone, I'm skeptical that g would still hold as a good predictor. I think I'm a bright guy, but I'm pretty sure I'd make a terrible nuclear engineer. And with that in mind, we may need to keep the non-g related selection around to prevent such a situation.

You've made a good comment. I can't specifically point you to a study with research or engineering (though I know there has been some that involved academic research performance as an outcome). The finding tends to be that 1) g is more, not less, important for jobs with higher complexity and 2) job knowledge acts as a mediator between the predicted relationship of g and work performance.

You are right to think that the results would be different if the test was just given on the general population. It's an academic consideration that tends to resolve itself in the field.

And you'll never hear me, or anyone else, suggest that g should be the only predictor, for just the reason you describe. Biographical data (e.g., years of experience, work history) is a much better first hurdle.

The Wikipedia article talks a lot about correlation between performance in different subjects in school.

I've also read that two of the strongest predictors of performance in school are your parents' performance in school, and (after controlling for that) your parents' income.

Do studies of general intelligence usually control for the impact of parents' education and income?

> cognitive ability testing (which is always the best predictor, but tends to show racial bias).

I'm intrigued: is the "bias" because the test is unfair to one or more racial sub-groups, or because the test is "fair" and that is how they actually perform, or is it a language thing?

I do reasonably well on standardised IQ tests but I suspect if I did a German one I might struggle.

It's a bit of a mystery, truly, and bias can mean different things (e.g., slope vs. intercept predictive bias). Note also that predictive bias is not the same thing as mean subgroup differences (e.g., mean score differences of White vs. Black candidates).

This is a quick overview of the topic: http://www.siop.org/_Principles/pages31to34.pdf

An interesting thing is that the predictive bias is reduced for open-form response questions vs. select-one tests. It's an indication that there is more at work than just subgroup differences.
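The slope vs. intercept distinction mentioned above (often called the Cleary model) can be illustrated by fitting the performance-on-test-score regression separately per group and comparing the fitted lines. All the data here are fabricated to show intercept bias specifically:

```python
import numpy as np


def fit_line(x, y):
    """Return (slope, intercept) of an OLS line fit."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept


rng = np.random.default_rng(1)
n = 1000
# Group A: performance tracks test score with slope ~1.0.
score_a = rng.normal(50, 10, n)
perf_a = 1.0 * score_a + rng.normal(0, 5, n)
# Group B: same slope, but a shifted intercept. A single pooled
# regression line would then systematically under-predict this group's
# performance: intercept bias without slope bias.
score_b = rng.normal(50, 10, n)
perf_b = 1.0 * score_b + 5 + rng.normal(0, 5, n)

slope_a, int_a = fit_line(score_a, perf_a)
slope_b, int_b = fit_line(score_b, perf_b)
```

Similar fitted slopes with different intercepts indicate intercept bias; different slopes would mean the test predicts performance at a different *rate* per group, a distinct and usually more troubling finding. Note this is separate from mean subgroup score differences, as the comment above points out.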

When my mother was doing sociology at uni, I read one of her texts (as you do), and it had an example of an IQ test being flawed: they gave the same test to two different groups of children and found that the poorer children (grouped as "working class" in the study) consistently performed less well.

One of the people looking at the results then broke the questions down by group and noticed immediately that questions like "The cup goes on the a) saucer b) floor c) table d) shelf" were consistently "wrong" (correct answer was a) for the poorer group, at which point he realised that working-class children drank tea from mugs, and saucers were a middle-class thing.

The story might be apocryphal, but it's stuck with me since I was 12-13, and I think of it whenever I run into any kind of standardised testing.

The latter. The races perform differently on "fair" cognitive tests.

How do we know the tests are fair? For a given test score, life outcomes like income and criminal conviction rate are the same across groups.


As I'm sure you can tell, this isn't specific to technical and creative employees, but in general it appears that structured interviews, IQ tests and work samples significantly outperform unstructured interviews.

> If you don't consistently apply it, it becomes worthless.


Having read Thinking Fast and Slow and been convinced by this book's many useful views, I would agree that a simple questionnaire-based ranking is actually better than any subjective assessment of candidates. And it should be as neutral as possible, ie not letting one's "first impression" influence the ranking (because it is almost always based on irrelevant physical features).

But even then, there is no reason for this neutral evaluation to become worthless if it is not 100% consistently applied. Consider the broken-leg case: to estimate the probability of someone going to see a movie tonight, you base yourself on simple statistical facts (how frequently people in that country go to the movies), and should not try to infer more from subjective context, unless the guy in question broke his leg this morning.

These tests and evaluations help a lot in reducing systematic bias due to the halo effect, the framing effect, and even the time of day for the evaluator (it has been shown that judges are more lenient after lunch!), but they are still only helpers; they do not need to be decision-blocking.

Well, legally, consistent application is very important. If getting 80% is the cutoff for candidate A, but getting 65% is acceptable for candidate B, then you're setting different standards for different candidates. What if those different standards align with something like gender or religion? It may not happen on purpose, but it's a problem.

Statistically, you've created a model premised on consistent application, and if you don't consistently apply it, the anticipated value of the tools you're using is now in question, as the model has a large source of error.

I don't mean to say subjective evaluations have no place in the process. Think of each part of the selection process as an independent stage: candidates can pass the objective tests, perhaps with a minimum cutoff score, then pass the subjective judgment stage. Or vice versa (though that may limit utility further).

I'd like to apologize for my poor grammar in the above comment. I typed it while on a conference call, and am a bit embarrassed by it.

> Anyway, there is a reason companies adopt "Big Corp" characteristics as the scale. The primary reason is for legal compliance through standardization of process, and the other is that data is supportive of the validity of objective predictors of job success relative to subjective judgments.

I can't speak to your second point, but as to your first point, I should note that law firms themselves definitely do not use such HR-driven hiring processes.

If they discard the second point, the first point may become less relevant. Companies that choose not to follow evidence-based hiring practices, focusing instead on pedigree or a few interviews, may do just fine as long as everyone is treated consistently and their processes don't show observable bias. They do so to their own detriment, but legally they can be OK.

Also worth noting: sometimes the less rigorous way is the better way. Developing validated tests doesn't work as well with small numbers, and can be expensive. There are also considerations around the selection ratio (candidates selected over candidates applied). The utility isn't always worth it, and I shouldn't make it out to seem so.

Huh. Hewlett-Packard is as BigCorp as BigCorps come, and we don't use psychometric tests -- at least not here in the software division, and not anywhere else that I've ever heard of.

Here's a very oversimplified model of people that would allow a test to strongly correlate with job performance, but still be wrong.

Let's assume for a moment that when a person is told to do something, and they disagree, 90% of all employees are typically wrong when they disagree and 10% of all employees are typically right when they disagree.

We are considering now just engineering type jobs (since that's what the article is about).

Now consider anti-authoritarianism: 90% of the people who score high on anti-authoritarianism will perform far worse than the typical population (since they will tend to go with their mistaken belief of what is right, or be obstructionist when people want to do things differently, etc.). They will be your least valuable employees.

The remaining 10% of those scoring high on anti-authoritarianism are among the most valuable of your employees, as they won't allow their teammates to go down a dead-end.

So anti-authoritarianism is highly correlated with poor performance (even trouble-makers) but rejecting all applicants based on this will keep you from getting some of the most valuable employees.

>>> Properly validated psychometric tests are valid predictors of job performance

Here I have a problem: which job performance? Different jobs obviously demand different qualities. A highly creative but impatient person may be an asset in a job requiring instant creativity but a liability in a job requiring steady repetitive tasks and constant attention. So the test has to be matched to the position. But in the original article, not only did HR give the same test to everybody, the people who decide on positions don't even seem to have information or input on any correlation between position and test results. Given this, I highly doubt such an application of testing can have any meaningful correlation with job performance.

> Here I have a problem - which job performance?

Job performance can be defined many different ways, and the most common definition in field studies is what is most common in the workplace: supervisory, and possibly peer, evaluations. It's not the perfect criterion, but the "perfect criterion" for many jobs exists only in theory. When appropriate, "work performance" is also defined as sales numbers, widgets produced, time-to-complete, etc. It just depends on the study.

> So the test has to be matched to a position.

In many cases you're right (e.g., work samples), but not always. g-centric tests have demonstrated high validity across jobs and contexts. Tests for cultural fit (in which the desired outcome is typically workplace satisfaction and tenure) are typically validated across a wide range of jobs, and so they are applied across those jobs. Things like integrity tests and personality tests can have meaningful validity across jobs, even without seemingly job-relevant content.

In regards to the role of HR in hiring a candidate, I think it is a bit more complicated than you make it out to be. The problem is that a candidate won't exist in isolation on the team he/she was hired onto. There needs to be some representative of the rest of the company, outside his/her project team and managers, to vouch for him/her. After all, this is a person who may be given building access, and who will interact with other employees at company events, in the cafeteria and in the hallways.

I don't know anything about the details of the test in the article, but I can imagine someone doing well in their technical interviews, but failing miserably on company fit. Maybe he (settling on a 'he' in this case) is a brilliant programmer but showed serious misogynistic tendencies when interviewing with a female HR person. Maybe something came out of his HR interview that revealed him to be a borderline psycho who could code well but was a troublemaker. Considering that it is HR that will have to deal with this person in the future if there are complaints against him, I think it is only fair that they set some baseline level of acceptability for candidates.

I used to work for a company that writes and administers these tests. This company broke the cardinal rule.

#1 - Only give personality tests in their native language.

This is unacceptable; a personality test is NOT a language aptitude test.

Personality tests will have questions like this:

        True or False I am happy when others are taking advantage of me.  
Non-native speakers may not appreciate that "taking advantage" is a negative term and might read it as "I am happy when others are taking advantage of my abilities." Hell, I WORKED for the company and had to double- and triple-take some of the questions.

Some of these personality tests even have separate US-English and UK-English versions because of the nuances involved.

The correct answer, in any case is "True". Employees who object to their bosses taking advantage of them are a liability risk and are automatic no-hires.

(Assuming this is not sarcastic, though I suspect it is)

I have no idea what sort of company hires only people who answer "I would be perfectly happy if my boss took advantage of me," but it sounds like an extremely insipid place full of yes-men that I would not want to work at.

For the sake of argument though, if the point of the test (in OP's case anyways) is to determine culture fit, should it matter what the "right" answer is if the rest of the team got it "wrong"?


This comment conveys the all-too-common reality that "cultural fit" is code for "everyone should be like us." To the extent that cultural fit is a defensible hiring criterion, or even a good one, it doesn't mean pure homogeneity.

There are a lot of ways where it's obvious that you don't want homogeneity. Take the question above. It may be good for the company if all of the individual contributors like being taken advantage of, as they'll work long hours for bad pay or whatever. But if your upper management or sales team likes being taken advantage of, they're going to let the company be taken advantage of, and now it's not so good.

In a lot of things, it's way healthier to have a mix. You want people who like and excel at starting projects and laying good groundwork, and you want others who like refining and maintaining systems that are already defined, because any company that's still building new systems will need people to do both. In most development teams, having all of either will be bad news.

And the same goes for personality issues. It may seem great to build a culture around people coming in at the same abnormal time as you and liking the same kind of fun as you. We're all together all day! We can bond through that fun! But there are a lot of problems that crop up as you grow. Once your culture is defined by, say, being a bunch of 23-year-old men who come in at noon and love video games and drinking, people who don't fit some or all of those characteristics might not feel especially welcome. Hiring people for areas other than engineering gets hard. Hiring senior engineers who are more likely to need to be home by 6 gets hard. Hiring women gets hard. You've just boxed yourself into a pretty small corner of your potential talent pool.

I've seen great companies built out of beautiful mixes of people. A married foodie who lives in SF working alongside a through-and-through Southerner who thinks In-N-Out is exotic. Fresh out of college video game players alongside parents with two decades of experience who like hiking. People still got along great and had a lot of fun together. There were certainly some commonalities, but I'm sure people would answer personality questions in drastically different ways. That doesn't mean they don't "fit" together.

There certainly are some cases where rejecting someone on "fit" reasons makes sense. Say you interview someone who is technically great, seems to be able to generally work well with others, but is terrible at pair programming. Will never cede the keyboard or listen much to their partner, and doesn't seem to be trying to fix it. If your company does a lot of pair programming, don't hire them. If you're a company that doesn't really do pairing, this might be a bit of a red flag, but it's not a deal breaker. There might be some analogs in the personality space.

But culture fit definitely should not mean "we have identical personalities."

This stuff matters. I'm the kind of person who would pass a lot of "culture fit" homogeneity tests (young, white, male, pretty nerdy, elite university), but I hate the homogenous cultures that often crop up at Valley companies and actively avoid them. A major factor in choosing to intern at Matasano this summer instead of some companies that are filled with very happy 22-year-old men is the much more reasonable culture, and all 'tptacek has written on HN about how "culture fit" is often BS.

So if you build a culture where "fit" means "be like us," you won't just cut out a lot of people who aren't like you, you'll cut out people who are like you but prefer communities of diverse people.

I had a dream once that I got a job with a San Francisco startup. All of the people there resembled the sort of characters a dimestore novelist would come up with to fit a "hacker" archetype. There was an Asian guy with blue hair and earrings, a girl who was fond of tight-fitting "punky" clothing, etc.

In the dream I left the job because I felt I didn't fit in with them. Thankfully I've managed to get in with more diverse communities of coworkers in real life.

You're only thinking about the smallest cogs. Do you want a middle manager to be taken advantage of by those under him? Do you want someone in Sales to be taken advantage of by customers?

No, but only because that stops them being taken advantage of fully by their boss/shareholders [categorically not my point of view].

LOL, I have always answered no to this. I guess it's more than coincidence that led me to applying to places that don't administer these tests.

If you don't mind me asking, where would one begin searching for employment at a company such as the one you used to work at?

Would you care to send me an email? I'd love to ask someone who worked at one of these companies a few questions.



Oh god, I am shocked and horrified to hear that these sorts of tests may now be making their way into the tech industry.

I've never encountered one of these personally, but my then-girlfriend who was a recent nursing school graduate looking for her first job ran into them and failed the first few of them. After reading about them online I'm convinced the only way for you to "pass" these tests is to be slightly psychopathic or to simply know the very flawed theory behind the tests and train yourself to take them in which case they of course test nothing but how good you are at adapting to flawed tests.

The specific test she kept running into was the "Unicru" one:


I highly recommend anyone running into this problem with HR in their tech companies do what they can to FIRE HR. This kind of bullshit needs to be nipped in the bud if it threatens to take hold in our industry.

I know that it's hard when you want a job, but personality tests should be declined the same way polygraphs should be (if they were legal to administer). Developers have an advantage right now in the supply and demand department, and this is exactly the way that power should be used.

Talented developers are in short supply in the core tech industry. Outside that, the world is filled with "programmers" and "analysts" struggling to find jobs in enterprise IT and niche-market product development. Post a job to craigslist and you'll be drowned in resumes.

My suspicion (never having seen one of these tests) is that it's being used in these situations. I doubt Apple or Google are going to start using them any time soon. My guess is your advice is pretty bad for the people facing a personality test.

>> ... now be making their way into the tech industry.

I remember failing one such test somewhere around 2001, the reasons given being "... lack of a status-driven momentum ..." and "... lack of aggressivity in pursuing career goals ..." or something to that effect.

Not that I walked out smiling, but they explained to me in two sentences why I would not have liked to work there. So, addressing your quote above, I'm astonished that these tests are still around. Basically this company (and others) are putting their culture into the hands of an external service (Mercury Urval, if I remember correctly, in this case), and they can lose big by handing off influence over the choice.

Wow. That test looks like an excellent way to select for a combination of mindless corporate drones and skilled liars.

I'm confident I would ace this test. All I'd have to do is pick the exact opposite of what my honest answer would be.

Dude, your girlfriend may have been too immature at that point to realize what the right answers to Unicru were, but it's really obvious what the interviewer wants to hear after getting some life experience with assholes: bend over, don't complain, smile, etc...

I never got such a test, but if I did, I would try really hard to score 0%, just because fuck you. The funny thing is that the capable guy in this article probably did just that; after all, if you answer randomly, the score will be higher than 1%.

I think if you haven't worked in a 'real' professional job for several years (i.e., this would be your first), then it can be harder to understand the psychology behind the test implicitly. Or if you've only worked in small businesses with very low levels of politics.

I would think an engineer's personality should be different from a salesperson's personality.

Ironically, if you did that then the score would probably be accurate.

Well yes, looking at the link above, I'm actually agreeing with many points from the "strong disagreement" list.

lol, I like:

"It is maddening when the court lets guilty criminals go free >> SA"

Yeah, what is the point of having a court of law anyway, because the test taker knows all.

There are much worse ones. I seriously hope that the key is fake. You are supposed to Strongly Disagree with:

  "When you are annoyed with something, you say so"
  "When people make mistakes, you correct them"
  "You are not afraid to tell someone off"
... and Strongly Agree with:

  "Any trouble you have is your own fault"
  "You avoid arguments as much as possible"
  "You finish your work no matter what"
This is what the wicked stepmother would ask a candidate for the Cinderella position, with the added benefit that this Cinderella would be too insecure to leave her for Prince Charming.

The key is correct. Remember you are a mindless drone, as such you are not expected to take charge. Them above, they know best.

Anyone remember that case where a prank caller called a McDonald's, telling them that he was management and that they should strip-search a female employee? McDonald's has an employee handbook with all the procedures for the robot, and if you can make it appear that what you ask is in the handbook, you have hacked the robot. This is the background to that story. Excellent robots they have at McDonald's.

The reason for this procedure is that they want to pick up people from the streets and train them in a week. This is how you do it.

I think the caller pretended to be a cop. Pretty sure they made a movie about it recently...

One would think they'd have changed the line in the manual about cooperating with law enforcement to something more detailed.

Note: I'm saying "something more detailed", not "something more appropriate to the situation". The robot must never be required to think.

Not sure about a movie, but it was the lede case in a Law & Order SVU episode: http://www.imdb.com/title/tt1015439/

This is disturbing on many levels to me. Someone who is "maddened" when a guilty criminal goes free demonstrates clear ignorance of the justice system as well as a tendency to be instigated by demagogues.

On the other hand, this test as a whole seems to be selecting for someone who is pliant and believes whatever he is told to believe, the sort of person who actually believes the HR indoctrination and the like. In this regard, I think the question is very revealing - someone who would answer "Disagree" to this question reveals that he understands that our judicial system is not perfect and reacts with skepticism whenever he is told by the media not to value human rights. Such a person would be more likely to stand up for himself in the workplace when his rights or the rights of others are violated.

It's actually rather clever if you're looking at it through the eyes of an asshole.

By the way, your username cracks me up.

Or they could be modestly conversant with the basics of criminal law.

The courts are designed to let some portion of the guilty go free: E.g.: http://en.wikipedia.org/wiki/Blackstone%27s_formulation

For example, in any case where the evidence was obtained illegally, a person who is provably guilty may not be convicted: http://en.wikipedia.org/wiki/Fruit_of_the_poisonous_tree

> in any case where the evidence was obtained illegally

It does make sense. Who's to say that the evidence in question wasn't planted?

I feel like this validates my comment? The two notions regarding criminal law you post are not maddening, but thoroughly reassuring.

They are reassuring from a systemic design perspective, or from the perspective of someone who is accused but innocent.

But in the individual case of an obviously guilty person going free, it is legitimately maddening.

I think with more perspective, there is nothing maddening about it.

It's like a pool filter. Any one particle of feces can float around the pool forever, but in aggregate the feces level is reduced.

I think that's a valid perspective, but that's not the only one.

If you have been a victim of a crime and the perp walks, it's reasonable to say something besides, "Thank goodness our system works to protect the innocent!"

I also think prosecutors should always be a little maddened by it. It's their job to keep the guilty-but-free error rate as low as possible.

Or to put it differently, aggregate feces reduction is great, but you are unlikely to say that if you discover a bit of somebody's poop in your mouth. And you never want the person in charge of cleaning the pool to say, "Fuck it, a little poop never killed anybody."

Yes, that one surprised me. I would think the personality type they want is never maddened.

I worked IT at a bank and they administered a test like this (Myers-Briggs). It wasn't used during the hiring process; it was intended to be used by managers to build teams based on personality quadrant. I'm pretty sure the results were ignored, though.

The Myers-Briggs test isn't recognised as scientifically valid, so it is largely ignored by the field of psychology.

just saying ...


I don't think they're "making their way", I think they're already here in some tech segments. I've only been subjected to a test such as this once, administered by a certain mega-grocery-wholesaler in southern New Hampshire, and I thought it was hilarious. My answers reflected my "take charge, get sh*t done, &etc" startup mentality. They clearly wanted corporate drones who would quietly go about their assigned duties and submit to whatever authoritarianism was in the air at that location. I hope I bent the needle on their personality-o-meter as I sure tried to.

Thought experiment: For those running companies and hiring people. Would you take the time to occasionally run your interview processes over your existing employees? I'm not talking about involving them, I mean actually putting them through it.

It would be one way to sanity-check that the process is selecting for the things you want from those coming from outside. As a company grows, the non-technical systems and policies also need to adapt but without feedback things might get weird (like the OP's example). If an existing high-performer does badly with your interviews/tests at some point then you really, really need to fix something.

How many people trust their employer to take a negative result on such a test as a sign of a poorly functioning test rather than a poorly functioning worker?

But if you've worked at the company for some time, your employer has much better metrics to grade you by than the hiring process.

Sure, as long as the people that know you run the interview and don't share the results with anyone that doesn't it's fine...

Hear, hear; I'll second that. Most management types are disconnected from the processes that their staff are instructed to follow. I've run through application demos and walkthroughs with senior managers before where at the end they ask, "Why do we do that?"

Most technical types are disconnected from the realities of sales, marketing, finance, law, etc, etc

So are most HR types.


By your logic, no position in a business knows about business apart from the oh-so-clever programmers? Oh please. Look across HN at the threads about psychology, depression, and so on. Programmers are some of the most insular people out there. No problem as such, but don't tell me they have some special ability to cover all bases in a business. That has to be a joke.

If you think about it, HR is one of the few departments that is likely to know about the other departments. They have to; it's their job. Do you think marketing has more interest in the various departments than HR does? Developers? Try talking to sales people: they think the idiot developers know nothing about what people want. Try talking to accountants: they think marketing just blows money. And so on. Oh yes, my department is best and everyone else is a monkey. Yeah, I've heard the inter-department politics over and over again. It's tired, boring, and gets in the way of business. If idiots just grew up and respected each other's job functions, productivity would benefit. But, hey.....

A lot of the problem with HR is arrogant people in other departments who don't do the hiring job properly, then spit the dummy when they find out their funky programmer got fired for fraud from a previous job, or something, because the arrogant programmer didn't want HR to do the background checks, because the arrogant programmer thought the new programmer would be one of the boys.

Please, stop this nonsense.

Can you make a company full of HR people succeed? Can you do the same with only programmers?

The author addresses this:

She also told me that if I was re-interviewing at the company then I likely wouldn't get the job based on my psychometric profile, which was actually ironic since I was highly respected in my role and was one of only a handful of people that could convincingly debate technical alternatives with Senior Management.

I think I was in a really fortunate position when I was able to negotiate this at my second company. Interestingly, at my first company, asking an existing employee to take the written tests in the specified time-frame was a standard practice. They were also encouraged to provide subjective evaluation.

On a very positive note, our HR director was someone who had removed psychometric tests at a number of organizations before he came to us.

Why start with the interview? Try to find the job listing and submit a resume. Then ask yourself if that was a good use of your time. This part of the process is horribly broken for a lot of employers and they don't even know it.

> The problem here is that we didn't have the final say.

Yup ...

There is going to be a real big shift in the next twenty years. We look at things like Developer Anarchy and say, "what, let the programmers run the company?" But that's the wrong idea. It's: let those with software literacy run the company, just as 500 years ago those who were literate took over running companies.

For a while we shall see parallel organisations within one company - the illiterate traditional management model, and a more productive, clearly vital org that consists of all those who "get it" - whatever their job title

I wasted too many years trying to join the well-remunerated traditional side, and I regret the half attention I found I could pay to programming. But I have seen the light.

Stop working for companies that are not dedicated to software literacy. Schumpeter will be round soon enough to have a word with them.

I recently spent a year working for an "elite" Silicon Valley startup. The VP of engineering had read enough Joel Spolsky to get through the door, but he didn't practice software literacy.

I would love to work for a shop, even in an underling capacity, that really gets software dev right. It would be worth delaying my startup dreams just so I can do it right at scale.

Count the number of people in senior management, and in all positions, who can do fizzbuzz. Greater than 50% in all cases: great. Greater than 50% overall but less than 20% in senior positions: get out. Anything less: run for the hills.

Saying it's important isn't enough. Everyone in Hollywood says the story is the thing, but only Pixar seems to live by it. See any talk by Ed Catmull.

VP Engineering is a parking title.

In most cases, the person performing the role does nothing even remotely close to engineering. If you sit down and seriously grill the guy, he will have no clue what he is doing, why he is doing it, or whether he is even necessary.

There are a few good people who become VPs, but such people are exceedingly rare. Most of the time, VPs are made and hired through politics, strong friend networks, godfathers, or sometimes sheer luck.

A person I know who has done a few successful startups once told me he purposefully avoids hiring anyone with 'director' or 'VP' titles from big companies. Often, they are the ones who take the highest compensation while actually being the most useless people on the team.

Why do people say "doing fizzbuzz" rather than just programming? Fizzbuzz is just a trick question to see if new programmers know how to use the modulo operator or not.

I have been to student recruitment events at universities hiring software engineers for quite advanced level jobs. We give out a programming quiz (with a nice prize) at these events and we include the fizzbuzz puzzle. We allow solutions in any language and give out style points for nice solutions.

Our experience is that the answers to fizzbuzz are a good classification criterion for vetting who has programming ability and who has not. Most people who actually try can solve it, but the talented ones give either a perfect, simple solution or do something elegant and go for the style points. The ones who aren't talented give a fumbling solution that is too long or shows signs of not being comfortable with the task at hand, even if they manage to write a program that produces the correct results.

As silly as it may sound, the fizzbuzz test is a good classifier for programmers. I didn't believe it until I saw the evidence from the quality of candidates we got.

Knowing how to fizzbuzz doesn't make you a good programmer.

But not being able to do it is a very strong signal that someone can't program in any professional setting, no matter what they pretend. It's a very simple and effective test to rule out people who think/pretend they can program, but really can't.

So... it's basically a bloom filter for developer aptitude? :)

Yes, yes it is.

You can solve fizzbuzz without the modulo operator:

    bool divisible_by(int a, int b) {
      return a == ((a / b) * b);
    }
It doesn't require anything more than knowledge of what division means, and I think it's fair play to require such knowledge from programmers.

Maybe this is a stupid question, but wouldn't that always return true? (a/b)*b just cancels out the b right? Unless there is weird floating point stuff going on.

There's no floating point stuff going on; that's why it works. :)

a and b are ints, so a/b is truncated (rounded down) before applying *b.

It's integer maths. For example, with a = 10 and b = 3: 10/3 = 3, 3 * 3 = 9, and 9 != 10.

Or even if you don't know anything about integer math, you can use repeated subtraction. Or ask for help on finding a multiple and do the rest of the structure yourself. Or keep a counter to 3 and a counter to 5 and reset them when they fill up.

Definitely not a gimmick question about whether you know about modulus.

Fizzbuzz isn't just modulo, as evidenced by the amusing number of incorrect solutions under the original article.

Fizzbuzz is a test of programming ability. It's easy to explain, easy to grade, easy to run. If you want to check whether someone can program, then someone else has already come up with a good test (fizzbuzz) to give them.

I find "fizzbuzz" is also used as slang for "groks simple programming". I would say any fizzbuzz equivalent would be fine too.

"Stop working for companies that are not dedicated to software literacy. Schumpeter will be round soon enough to have a word with them."

You wish. Once a company is a certain size, it can ignore market pressure by colluding with government (the company I work for should have been obliterated long ago, but there are artificial barriers to entry).

This isn't just about developers and hiring practices. This is about systemisation and bullshit metrics being used to make opaque and life-changing decisions.

This is about the same kind of shitty thinking that results in surveillance states.

"We can't employ/feed you/let you travel/let you have a mortgage/let you open a bank account, you are Invalid."

"Invalid, what do you mean?"

"We are not allowed to tell invalids why they are invalid. Report for reprocessing."

Edit: Oh, and psych tests are bull. It's pretty obvious what the "right" answers are - so all you actually succeed in doing is filtering the psychopaths in.

Which by the way is the same process you go through for college admissions. "We're sorry, we can't admit you" "Why?" "This email was sent by an automated system. For questions, please send an email to admissions@uni.edu" OR "Congratulations, you've been admitted" "Nice, but I'd like to know for what" "Well, we decided that way. You're in, why do you care?"

I very much dislike the idea that you can't give special treatment to individual candidates. The job of a recruiter is to find the best candidate for the job, not to treat all candidates equally. Now obviously we can't go around discriminating against candidates based on attributes not relevant to their ability to perform (ie gender, race, religion etc), but that doesn't mean we can't make use of any extra information we might be able to garner about a person.

If a particular candidate, who happens to fail part of the interview, still seems like a good choice, by all means look for additional means of evaluation.

I once had an interview, where I spoke with a few people very positively for about 3 hours, it was going great. There was no sign of a technical test, so I had assumed by this point that they either didn't do them, or they'd be in a second round of interviews. Then they realised they'd forgotten to give me the technical test, which they then gave me and I screwed up for whatever reasons. Upon learning that I'd aced every part of the interview except the programming test, I mentioned they had an employee who'd worked with me on previous projects and could vouch for my coding ability. But they said they couldn't use that information because it would be unfair to other candidates.

This was a long time ago, and I didn't really lose much by not getting the job, but it always felt like the approach they took was wrong.

> The job of a recruiter is to find the best candidate for the job, not to treat all candidates equally.

I won't disagree, but I would like to restate that a little more carefully:

"The job of a recruiter is to measure which candidate is best for this job, fairly and without bias."

As you point out, there are quite a few ways that people try to weasel out of actually hiring the best person, some of them not so savory (a subconscious preference for white, male people, say; but it depends on the company and the culture). It should be hard, but not impossible, to override the default procedure, and for exceptional cases only.

Certainly, in the Article, the flawed psychometric test should not have been grounds to reject the best-performing candidate in the room. But similarly, personality (or appearance) should also not be a reason to promote someone over their ability to do the job.

It's probably true that in my example, I was good but not exceptional enough to justify overriding procedure. Thanks for giving me that perspective!

The problem here is that it's very hard for people not to discriminate at all on attributes not relevant to ability, such as gender, race, religion, etc.

For example, many people have racist tendencies but rationalize them as something else.

"There is a big group of male magentas dressed the same over there, I should avoid them because they are almost certainly a gang"

is rationalized as:

"I don't avoid the group of male magentas dressed the same because I'm racist and assume that most magentas are in a gang, but because I've been taught common sense. Common sense dictates people that look like that must be gangsters."

Perhaps if we interviewed people without being able to see them, their name, or anything about them and using voice obfuscation we could eliminate a lot of bias?

It's indeed very hard to avoid discrimination. I'm not proud to admit that I have racist, and even sexist tendencies. I have just enough self-awareness to recognise that I've done it after the fact, but it's much harder to stop myself in the process of discrimination and behave differently.

I imagine a significant number of people are similar to myself in this regard, I would love to see more discussion about it. Because I feel it's the subtle prejudices we all have that create the real problems, rather than the smaller (but louder) groups of overtly prejudiced people.

That was very possibly a white lie. In this egalitarian, self-esteem is paramount era, many will be more comfortable saying "it wouldn't be fair" as opposed to "you FAILED."

Another employee vouching for you isn't quite the same thing.

Wow. The articles about completely idiotic hiring procedures never cease--because the reality of companies using idiotic hiring procedures never ceases--and each new submission is actively discussed here on Hacker News. How to hire good workers is already a solved problem, but most hiring managers don't do even minimal research to find out what best practice is.

I'm told that this Hacker News community disfavors repeated posting of FAQ posts as comments, but this is a Frequently Asked Question (what's really the best way to hire good workers) and over the last year or two here on Hacker News I've put together a post with a lot of references to the best research. A recent posting of that here on HN


could help companies cure their idiotic hiring procedures. Read the FAQ if you haven't read it before. (Please let me know what you think of it, as I have been doing more research so that I can refine the FAQ and post it on my personal website.) There is no good reason for companies to do anything less than the best when setting up hiring procedures. Protect yourself by knowing what effective hiring procedures look like and how to find companies that use them.

Surprising and disappointing that the best methods were only barely over 50% correlation with performance on the job.

What does the algorithmic whiteboard coding tech interview count as? Does it qualify as work sample and brain teaser combined with interview? I suspect it's too far removed from what software engineering is actually like to be a good work sample, and the prior knowledge required to solve algorithm problems doesn't make it a great brain teaser either.

Surprising and disappointing that the best methods were only barely over 50% correlation with performance on the job.

That's probably about the best we could expect, as job-seeking behavior when a candidate desires a job is different from job-doing behavior over the long term after someone is hired and knows more about the company. The appalling thing is that companies still use procedures (personal unstructured interviews, "personality" tests, and in Europe even handwriting analysis) that are demonstrably much worse than that, rather than the work-sample tests and general cognitive ability tests that at least give companies their best chance to hire workers who will do well on the job.

Isn't 50% just like flipping a coin?

Is a correlation of 0.5 like flipping a coin? No, it isn't. A coin flip would have a correlation of 0. http://en.wikipedia.org/wiki/Correlation_and_dependence

> How to hire good workers is already a solved problem

Oh? I hadn't heard about that.

Read the FAQ


linked in the parent comment to yours, and you can learn about the best trade-off among time, expense, effectiveness, and legal exposure among all hiring procedures used by companies. It's low-hanging fruit to improve competitive position for a company to follow the minority of companies that use effective hiring procedures rather than the majority of companies that use haphazard or demonstrably ineffective hiring procedures.

"The psychometric test was supposed to produce a "true" reflection of how someone saw themselves, and I was told it couldn't change over time - i.e. whatever it determined was fixed, immutable and infallible."

20 years ago I took the Myers-Briggs test multiple times (well, once a year for 3 years). My numbers changed a little bit each time: I was fairly strong I and N, but very weak 'T' and 'J', to the point where sometimes they flipped to 'F' and/or 'P'. I took a similar test again a few weeks ago and 'I' and 'N' were strong; the others were still weak.

I had a couple people tell me though that "it never changes". Which is ridiculous because it's obviously not true, and depends totally on how you interpret the questions, and that's based on how you're feeling when you take it. I don't think people administering tests or interpreting the results always understand what they're actually looking at (which probably makes me a stronger J).

Remember that MBTI types have no evidence base. They can be helpful in allowing someone to "see" jobs they might be happier in, as they did for me, but they don't necessarily measure anything statistically meaningful about a personality.

It's obviously pseudo bullcrap; what's astonishing is that the hiring manager in the article is dogmatically following it and ignoring the advice of the senior technical people involved in the hiring process

That's arguably backward. The MBTI was developed to help place people into jobs, but it has not proven itself to be a useful indicator for that. Conversely, all it is is a personality measurement. When you analyze the type system, it's actually a fairly decent one. (Probably the best we have today, but that doesn't mean much if you think they're all bad.)

Well, really, how different is it from the Chinese Ground/Fire/Water/Wind personality traits system? You can explain anything with it: Roger is Water with some wind, and B-Con is mostly Fire with some ground mixed in. We need more "ground" people for our team!

What does that tell you compared with "hey, this team is pretty pie-in-the-sky and chasing ideas that aren't likely to be monetized. Let's get a product manager in here to give some direction!"? The latter, I argue, is a far superior cognitive model, and has the decided advantage of being based on empirical observations. (I know you weren't arguing for MBTI; I just used your post as a launching-off point since you mentioned the existence of different systems.)

Sure - they're not the be-all answer to everything, but I've found them helpful in (re)assessing myself lately.

I found much more static-ness in my IQ over the last 30+ years. I was actually surprised at how consistent it was over decades and various tests (and, dare I say, slightly disappointed).

I agree about MBTI - until I understood that I was allowed to have a personality preference for perceiving over judging (preferring to live in the moment than plan ahead in detail) I was stressing myself worrying about why I was so rubbish at planning, rather than trying to find a career where it was less important! (the only careers I've come up with so far are stock exchange trading and politics - if anyone has any others I'd be interested to hear. I like coding, especially in sprints, but I am absolutely terrible at estimating how long it will take me to write things.)

I find that every time I take an IQ test I get a higher score - I presumed because I'm learning what sort of questions IQ tests ask and how to answer them faster - so when required I just quote the first ever test I took as my IQ (which was a very unscientific one, unfortunately, by answering questions along with a TV program. I also had a score bump for age, because I was only 16.) I was, at the time, delighted that I got higher than my maths teacher! remember too, though, that IQ is heavily biased towards people with a "western" education.

IQ isn't very important though, once you get above 130 - the differences don't correlate to greater performance in any real test cases. The difference in performance at those levels is to do with attitude, experience, vocabulary (outside of technical fields I've studied/worked in, mine is awful), and all sorts of soft factors.

(apologies if I'm wrong, but I assume that anyone with karma on hacker news is IQ >= 130 or so.)

IQ numbers vary based on tests, to some degree, so "130" doesn't necessarily mean a whole lot. What I'd found is that I was in the same percentile on all tests (took 3 a few years ago) compared to tests from 30 and 20 years ago.

The raw score numbers might have changed up or down slightly, but same percentile in my case.

... IQ is supposed to be a normal distribution with a standard deviation of 15, so ">= 130" is two sigma above the mean, i.e. roughly the top 2%.

(bearing in mind that the test is inescapably biased towards people with our background - US/UK/etc, certainly English speaking, probably university educated.)

To be fair, I wasn't university educated at 9, when I took my first tests, although I was born in the US and spoke English.

fair point indeed - the tests are highly weighted (and thus rather unsatisfactory - the skewing reduces the resolution of the test, if you like) for young people.

I suspect the tests I had at 9 were age-specific, but I've still kept in the same percentile.

What was interesting to me was tests a couple years ago - the questions were sometimes things like

"I'll say a number, you repeat it back to me with the digits reversed".

They kept going until you couldn't do it anymore. I think I conked out around 9 or 10 digits.

"Give me as many names as you can in 10 seconds" (something like that).

"Spot the differences between these two pictures".

I'm wondering in what ways questions like these can be (or are) culturally biased? I can think of ways myself, but curious as to what others think.

Oh, if we are talking about the specifics of IQ testing here, I can link to one of the rare Wikipedia articles


that has actually recently been updated with reliable sources. Some of the comments below this comment of yours make guesses about IQ tests that can be checked against the sources by referring to that article.

I've taken the Myers-Briggs several times and tried to answer in ways that would get different outcomes. I've answered realistically, speculatively, and hopefully. Each time I got the same result.

Interesting. How strong are those results each time?

Each of my 4 scores have had a % strength with them - my I and N have been fairly high (30-60% over time), and the T/J were always < 10%, slipping in to F/P on occasion.

I don't remember. The tests I took were online versions. I took several at different websites. I really don't know their validity or reliability.

There's no way this is possible unless you are selecting different shades of the same category of answer. The overall framework of MBTI is largely binary. To produce different results, all you have to do is choose the complete opposite of answers you chose last time.

"I make lists frequently" is the opposite of "I don't maintain lists at all"

What wouldn't work is substituting "Crowded environments make me tired" with "I like to hang out sometimes, other times alone". The delta isn't wide enough.

When I said I was trying to get different results I didn't mean artificial results. There were times when I answered the questions how I thought I would actually act in a situation. And I've answered the questions in how I would want to act in a situation. I wanted to see if there is a difference between who I am and who I want to be. What is the difference between actual self and idealized self.

If I wanted a different answer then all I could have done was answer the opposite of what I did before.

> I had a couple people tell me though that "it never changes". Which is ridiculous because it's obviously not true, and depends totally on how you interpret the questions, and that's based on how you're feeling when you take it.

The official MB position is that your type itself doesn't change, even if your answers do. Your perception of yourself and of the questions can change, but your type itself does not.

Frankly, I don't buy that, I don't see any strong evidence, from theory or practice, to suggest that it must be that way. However, IMO, it's a pointless subject. Whether you change or not really has no impact on anything. Just use the typing system and take the most accurate results you get. If your score changes, whatever.

Note: I'm a very strong MB enthusiast and I think it's a fantastic personality typing system. Oddly, I've never taken the actual MBTI (that is, the official test), though.

The worst part about companies using Myers-Briggs (other than the pseudoscience beliefs around it) is that it's easy to manipulate. If you study the test and know what the company is looking for, you can easily answer the questions to get the result you want.

