
It might even be worse still, since this model seems (I may be wrong) to assume that the probability of finding a usability bug is constant per user. It might be that the share of bugs discovered is skewed towards the first few users, such that the first user finds more than the formula would predict.
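
For what it's worth, here's a toy sketch of that point (the numbers and the decay model are made up; the constant-probability case is the usual 1 - (1 - p)^n formula):

    # Compare the constant-probability model of bug discovery with a skewed
    # one where the first users find disproportionately many bugs.
    def found_constant(n, p=0.31):
        # Each user finds a given bug with the same probability p, so after
        # n users the expected fraction found is 1 - (1 - p)^n.
        return 1 - (1 - p) ** n

    def found_skewed(n, p0=0.5, decay=0.6):
        # Hypothetical alternative: the k-th user's hit probability decays
        # geometrically, so user #1 contributes far more than the formula says.
        remaining = 1.0
        for k in range(n):
            remaining *= 1 - p0 * decay ** k
        return 1 - remaining

    for n in range(1, 6):
        print(n, round(found_constant(n), 2), round(found_skewed(n), 2))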

It's certainly been my observation that cynical developers who test things as they go, deliberately putting silly inputs into the stuff they've just written, seem to get hung up less in testing.

Take the system I inherited at work: the first thing I did when I got an instance spun up was put a negative value in the quote line quantity (which immediately broke... well, almost everything), then decimal values in quantity fields where only integers made sense, then text in number fields, and so on, each time breaking something in a new and interesting way.
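
If I were automating that first pass today it'd look something like this (a sketch with made-up names; the stub mimics a backend that only handles the happy path):

    # Throw the classic silly values at a quantity field and see what sticks.
    def submit_quote_line(quantity):
        # Stand-in for the real system: uses the value without validating it.
        unit_price = 9.99
        return quantity * unit_price

    SILLY_QUANTITIES = [-1, 0, 0.5, 2.75, "ten", "", None, 10**12]
    for qty in SILLY_QUANTITIES:
        try:
            # Negative and decimal quantities sail straight through...
            print(f"{qty!r}: accepted, total={submit_quote_line(qty)}")
        except Exception as e:
            # ...while text and None blow up in a different way each time.
            print(f"{qty!r}: {type(e).__name__}: {e}")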

Sometimes I think it's hard not to be cynical about enterprise systems.

As my old lecturer (somewhat pithily) put it: "Almost all the testing in the world means nothing compared to 15 minutes in the hands of the 17-year-old office junior."




One of my friends was hired as an intern last year literally to try to break software. He loved it and found a ton of bugs, which was really helpful to the company; they eventually paid him a $1,000 bonus for his help over the summer.


It sounds like this guy's employer has taken the first step towards inventing QA.

There are whole classes of highly paid engineers whose job is to do this. But they work for old-fashioned, boring companies.


The problem is QA tends to get too scripted. You need to exercise each corner of the product, so you test A, B, C in that order - and you never find cases where testing C, A, B breaks, or any other permutation. (To be fair, with any complexity it is impossible to test all permutations.)
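
You can at least get the cheap orderings mechanically, though. A sketch (the step functions are hypothetical stand-ins for real scripted steps, and n! orders means this only scales to small n):

    # Run the same test steps in every order instead of only A, B, C.
    from itertools import permutations

    def step_a(state): state["cart"] = []
    def step_b(state): state["cart"].append("item")
    def step_c(state): state["total"] = len(state["cart"])

    for order in permutations([step_a, step_b, step_c]):
        state = {}
        try:
            for step in order:
                step(state)
            print([s.__name__ for s in order], "ok")
        except Exception as e:
            # Orders like C, A, B break because each step assumes the ones
            # before it already ran - exactly what scripted QA never exercises.
            print([s.__name__ for s in order], "broke:", type(e).__name__)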


That's one part of it.

Another part is hiring a breed of test engineers who like breaking stuff and have a knack for it.


We call it exploratory testing.


I can believe it. The best person to test software for the low-hanging fruit of "What happens if I do this thing that no sane person who knows anything about what the software is supposed to do would do?" is someone with zero knowledge of the software.

It's one of the reasons why I don't trust myself to test things fully: we write the software with all sorts of assumptions in our heads and subconsciously steer away from doing silly things. In that context it's really difficult to aim at a point a zero-knowledge user would hit.


I've been building an API security scanner, and one of the things it does is just fuzz every endpoint with garbage in each parameter, looking for stack traces, errors, etc.

More so than any of the security tests I've written, that fuzzing has broken every enterprise API our customers have thrown at it.
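
The core of it is roughly this shape (a sketch; the endpoint, parameter, and garbage list here are made up, and the real scanner does a lot more):

    # Fuzz each known parameter with garbage and flag suspicious responses.
    import requests

    GARBAGE = ["'", "-1", "9" * 100, "<script>", "%00", "{}", "../../etc/passwd"]
    ENDPOINTS = [("GET", "https://api.example.com/quotes", "quantity")]

    for method, url, param in ENDPOINTS:
        for junk in GARBAGE:
            resp = requests.request(method, url, params={param: junk}, timeout=5)
            suspicious = (resp.status_code >= 500
                          or "Traceback" in resp.text
                          or "Exception" in resp.text)
            if suspicious:
                print(f"{method} {url}?{param}={junk!r} -> {resp.status_code}")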


Even a sane person can find a lot if they've never touched the implementation of a specific feature.

On one of my teams, we had a hard rule that all features must be tested by two other team members (and if multiple people worked on a feature, none of them could be the testers). Something like every other case found something questionable, if not an outright broken edge case, that the developer(s) had completely missed.


I remember I implemented 2FA for a previous company, and our QA person managed to lock himself out of his account while enabling it. I asked him how that was possible, and he said, "Well, I went to enable 2FA, it gave me the recovery codes, then it asked for a 2FA code, so I entered one of the recovery ones - but now I've run out."

I thought this was so dumb it was brilliant QA.


I have a friend who does this for free. Every single thing I've ever shown him (both my software and that of others) breaks as soon as he gets his hands on it.

He's the sort to dig as far into the internals of things as he can and then start messing with it (he implemented partial function application for C, for example [1]).

My poor prototypes never stood a chance.

[1] https://github.com/zwimer/C-bind


That's so enjoyable to do. :D


I bombed a job interview by writing a piece of demo software that failed all of your "throw it garbage" tests. I couldn't have hired a professor to teach me a more useful lesson!


Mine was brilliant. Not a famous academic or anything, and it was a college, not a university, but he'd written production systems in the '80s for telecoms and massive supermarkets, so as an instructor for the real world he was really hard to beat. He taught me all sorts of things that stuck, some of which I didn't understand at the time, but 20 years later they make much more sense :).


That seems like a brutally unfair interview practice unless you were told in advance they’d be doing that.


They gave me a small project to do, and told me to do it as though I were building it for a customer. I think that was warning enough that it ought to gracefully handle bad input — any real-world program needs to do the same.


To be honest, unless they explicitly discussed this with you before, or went through it with you afterwards, I'm with empath75 on this one. We used to do something like that at $former_workplace and every once in a while, a candidate would come up with a program that didn't validate (most of) the input or failed in similar trivial ways.

It turned out that some of them, indeed, simply didn't care -- and didn't know, either. We'd explain what the problem was and they'd shrug, or say they'd seen <big name software> breaking like that too - you fix it if it turns out someone actually breaks it.

Others, however, would skip it so that they could focus on stuff that was more complicated or more relevant. They'd validate one set of inputs, just to show that they know it needs to be done and can do it, but not everything. Or they'd throw in a comment like //TODO: Validate this by <insert validation methods here>. Most of the time we'd just ask them to talk us through some of the validations -- and most of the time they could write them on the spot.

You could argue that this is very relevant in real life, and that even if it weren't, what's relevant is the interviewer's choice, not the candidate's (although tbh the latter is one of the reasons why tech interviews suck so much).

But at the end of the day it is an interview setting, not a real-life setting, no matter how much you try to make it seem otherwise. The people doing it are still young candidates trying to impress their interviewers, not company employees working on a company project under the supervision of tech leads. You don't end up with much useful data unless you allow for some flexibility in this sort of experiment.



