Hacker News

That seems much worse than programming tests IMO. If I have a bachelor's degree, and you're asking me to retake the SAT, you're just insulting my qualifications. At least programming tests have some legitimate rationale. Let's not pretend IBM has any fucking idea what they are doing.



It wasn't an IQ test, and IBM definitely knew what it was doing in ways today's "move fast and break things" culture cannot fathom. Their software was and still is used in many mission-critical applications because the company was rigorously focused on quality. When was the last time you heard of a bug in mainframe software affecting anyone?


While I agree with you conceptually, as a daily DataStage/InfoSphere user, there are plenty of buggy IBM products. I do think a general problem solving test is way more useful for vetting candidates than some stupid algo whiteboard assessment, and my company has been using something like that to great effect to recruit jr devs.


Maybe you're talking about something else, but have you heard about IBM's infamous Phoenix Pay System?

https://en.wikipedia.org/wiki/Phoenix_pay_system

In short, it's costing Canada billions of dollars and has affected countless people's livelihoods.


This was also back when companies were actually willing to train people. Using a test like that, they could try to hire people who had "potential" and then give them a career where they could grow, improve, and bring value to IBM.

That doesn't mesh so well with the modern era of employers hating to spend a dime on training, even if it would save them a dollar, and employees wanting to jump ship every other year for that sweet, sweet 10% compensation bump.


Good point. The first thing they did after hiring was send everyone away to a six-week "boot camp", which was invaluable. Who does that anymore?


Intelligence Is the Best Predictor of Job Performance https://journals.sagepub.com/doi/abs/10.1111/1467-8721.ep107...

Though in the US you will run into problems due to Griggs v. Duke Power Co.


I don't know if people have generally been following Taleb's critique of IQ and other intelligence measures, but there are three tracks to it.

One is purely statistical: if you take any metric of success and any measure of intelligence, even two that are actually uncorrelated, then having dead people in the sample (that is, people who score 0 on both) will inflate the correlation. This matters because when you eliminate the people who score low on both the IQ test and the performance measure from the sample, the correlations in these studies tend to disappear.
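That inflation effect is easy to reproduce. Here is a minimal numpy sketch (not from the thread; the variable names and numbers are made up for illustration): two genuinely uncorrelated samples show a near-zero correlation until a clump of zero-on-both points is appended, after which the correlation coefficient jumps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two genuinely uncorrelated variables, standing in for "IQ" and "job performance"
iq = rng.normal(100, 15, 500)
perf = rng.normal(50, 10, 500)

r_alive = np.corrcoef(iq, perf)[0, 1]  # near zero by construction

# Append "dead people": 200 points scoring 0 on both measures
iq_all = np.concatenate([iq, np.zeros(200)])
perf_all = np.concatenate([perf, np.zeros(200)])

# The clump at the origin, far from the main cloud, manufactures a strong
# positive correlation between two variables that have none
r_with_dead = np.corrcoef(iq_all, perf_all)[0, 1]

print(f"without zeros: {r_alive:.2f}, with zeros: {r_with_dead:.2f}")
```

The effect is purely geometric: two separated clusters line up along the axis between their centers, so the pooled correlation reflects cluster membership, not any within-cluster relationship.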

The second is more intuitive, that intelligence is multifaceted and attempts to project it into a low-dimensional space will mask the inherent complexity, but will be capable of identifying individuals that score low across many dimensions. That leads to the condition in the first issue.
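A toy example of that projection argument (the profiles and the simple mean-as-g scoring are my own illustration, not anything from the studies): a single summary score cannot distinguish a lopsided specialist from an average generalist, but it cleanly flags someone who scores low on every dimension.

```python
import numpy as np

# Hypothetical skill profiles across three dimensions
creative_specialist = np.array([95, 20, 30])   # exceptional in one area, weak elsewhere
steady_generalist   = np.array([48, 49, 48])   # unremarkable everywhere
uniformly_low       = np.array([15, 18, 12])   # weak across the board

def g(profile):
    """A crude one-dimensional projection: the mean of all dimensions."""
    return profile.mean()

# The specialist and the generalist collapse to the same score (~48.3),
# so the projection masks the difference between their profiles...
print(g(creative_specialist), g(steady_generalist))

# ...but the uniformly low profile is cleanly separated (score 15.0)
print(g(uniformly_low))
```

Any low-dimensional projection of a multidimensional quantity behaves this way: distinct profiles with similar aggregates become indistinguishable, while profiles that are low on every axis stay identifiable, which is exactly the condition feeding the statistical issue above.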

The third is probably the most controversial: that IQ tests do not measure generalized intelligence so much as how well you can do "drone work". Where job performance can be objectively measured, it tends to be on tasks that do not require creativity or initiative in unpredictable directions, because those are not easily measured; so the performance studies end up sampling the same sorts of tasks that IQ tests themselves measure.



