Day 1 - 8 legit surveys, each getting a $50 Amazon card.
Day 2 - just over a hundred, with 90+ coming from the same Gmail account.
Day 3 - out of credits; campaign over.
Between this and the Voi write-up, I think it's important to have a dark mind in the room to avoid stuff like this being so trivial. Some hacks you hear about and your first thought is how gifted or smart the hacker was; other times you just think how careless the developer was. I've been on both sides of this, and while I'm not proud of it, I think stories like these keep people on their toes.
Surely that would have taken you ten minutes at most? I can't imagine anything more trivial. I can't imagine that the client wanted multiple survey results from the same person either.
Any feature which fundamentally transforms the final result, say by blocking a transaction, will be one of those cascading features. Sure, just adding something "good enough to be 80% there" could be simple. Yet what happens when the client turns around and says, "You delivered a broken app to me; I tried submitting my survey 3 times and only the first time was it accepted"?
Sure, this feature can be accommodated. The client might ask for special handling of test email accounts. Or an admin panel. Or a report-and-moderate workflow where they can override the block.
Getting to the core issue: the client did not see the value in the proposed work. If you are getting paid, and you spend extra unpaid time to deliver value to clients, you had better be sure the client is actually going to be thankful for it.
Imagine a scenario where the OP ignored the client's wishes, implemented the feature anyway, and absorbed the downsides, all so that the client would not lose money to fraud, thereby preventing the client from ever seeing why the feature was needed in the first place. Lots of work, negative reward for the programmer.
The OP sounds to me like they were trying to extract more money from the client. The email was already stored; a trivial check would have made the exploit far less trivial.
Not to mention that multiple survey results from the same person cannot be assumed to be what the client wanted.
Also, while keeping track of email addresses is simple, making such a system hard to exploit isn't all that trivial, given wildcard and throwaway email addresses and so on.
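To make that concrete for the Gmail case from the story: many distinct-looking addresses route to the same inbox, since Gmail ignores dots in the local part and treats everything after a "+" as a tag. A rough canonicalization sketch (my own illustration, not from the thread; it handles only these two Gmail rules and won't catch throwaway domains):

```python
def canonicalize(email: str) -> str:
    """Collapse common Gmail aliases: strip '+tag' everywhere and,
    for Gmail domains, ignore dots in the local part."""
    local, _, domain = email.lower().strip().partition("@")
    local = local.split("+", 1)[0]          # 'user+promo' -> 'user'
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # 'u.s.e.r' -> 'user'
    return f"{local}@{domain}"
```

Deduplicating on the canonical form instead of the raw string would have blocked the 90+ submissions from one account, though it does nothing against disposable-email services.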
This to me would be a fundamental feature of the system - multiple survey results from the same person are of no value.
But I want to echo that your comment is accurate. Maybe not 10 minutes, but it's a rounding error's worth of time on its own. The results were emailed as soon as they were POSTed, so we didn't keep a database alongside the app. Keeping the list of used emails would have been easy, but it would have meant standing up a log/db/Redis/flat file somewhere. That ends up eating a little more time.
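The flat-file variant of that check really is small. A minimal sketch of what it might look like, assuming a plain text file of seen addresses (the filename and function names are hypothetical, not from the project):

```python
import os

USED_EMAILS_FILE = "used_emails.txt"  # hypothetical path; could be Redis, a DB table, etc.

def is_email_used(email: str) -> bool:
    """Return True if this (lowercased) email has already submitted a survey."""
    if not os.path.exists(USED_EMAILS_FILE):
        return False
    with open(USED_EMAILS_FILE) as f:
        return email.lower().strip() in {line.strip() for line in f}

def record_email(email: str) -> None:
    """Append the email so future submissions from it are rejected."""
    with open(USED_EMAILS_FILE, "a") as f:
        f.write(email.lower().strip() + "\n")
```

The extra time the commenter mentions isn't in this logic; it's in provisioning and operating the storage next to an otherwise stateless email-on-POST app.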
I think what we have illustrated here is exactly what happens in the bigger hacks. "Why didn't they just... it takes 20 minutes to do that" - and yes, it does, but it was probably packaged with other things that got killed.
If you say, "hey, for the cost of a gift card or two we can prevent a single person from disrupting our campaign" it's much more likely to get approval.
And agreed, this should be extremely trivial to implement, and if the project was quoted prior to development, it should have been included in the initial scope.
We can't prevent all malicious behavior but we can prevent the easy stuff.
I've never done it, I've just heard about it, quite a lot.
That sounds an awful lot like the author thinks that what they were doing is fraudulent. But I seem to have misplaced my Law Degree, so I couldn't say.
Personally, I am not sympathetic to attempts to "hack" a business's systems, whether by social engineering, exploiting loopholes, or full-on exploiting security vulnerabilities. It may be legal, I cannot say, but when done in bad faith it is nothing more than parasitism. Or if you prefer, "exploitative."
Now of course, if the author has no intention of using these fraudulently obtained promo codes, and this is supposed to be some kind of white-hat "security research," that's another matter.
But this doesn't really look like it is being conducted for public benefit, e.g. to protect the privacy of users. And unsolicited testing of other people's systems seems like a legally perilous activity.
Really he only managed to hack a small portion of the system and ignored the societal safeguards that are far more robust.
That's the beauty of crowd-sourcing. Feel free to use words that seem right, and if it turns out that there is a slight adjustment to be made, everyone learns for free.
1.9 Unauthorized use, tampering and hacking.
You must not tamper with, attempt to gain unauthorized access to, modify, hack, repair, any Voi material or hardware, source-codes, information, including the Voi App, Website and Scooter, for any purposes. The Voi App, Website and the Scooter may only be used for the purposes pursuant to this Agreement.
If that's valid in court, that's another matter.
It looks like the Swedish Penal Code has a definition of data breach, with jail time attached.
0 - https://www.voiscooters.com/voi-user-agreement-12rr/
1 - https://iclg.com/practice-areas/cybersecurity-laws-and-regul...
EDIT: Not discussing the technical details of this post because they are quite boring, to be honest. That's Web Security 101 and, given so many previous incidents, far from surprising that these companies aren't handling it well.
Prosecutors and police alike certainly don't have a perfect track record in this regard; there's still considerable room for improvement.
Not only did the prosecutor act strangely, but reading accounts of what Assange's lawyer or witnesses have said in court (regarding the extradition request) is cringeworthy more often than not.
There is a mostly Western idea that having money means you have considered everything correctly and are being rewarded for it.
This is such a huge distortion, and I don't understand it. It almost seems like a necessary religious doctrine to make people comfortable with pursuing this system.
The number of painful outages caused by cert pinning versus the actual security benefit isn't worth it, IMO. And the truth is that the random app dev doesn't have their shit together enough to do pinning right.
Don't believe me? AWS recommends the same.
If you're Chase, or PayPal, sure. But if you're a random startup with a crappy app, don't bother, IMO.
In the author's own timeline it's clear he gave Voi 9 days to respond before publishing all the details. This kind of conduct reflects poorly on the whole security community and I assume someone told the author this, resulting in the unpublishing of the post.
- use SSL pinning.
- block different GMail emails from the same Google user.
- prevent different users from using the same credit card.
These are user-hostile. I see no legitimate reason to do so. Though limiting coupon use with the same base email/card/... is reasonable.
- require more information than just an email address.
Maybe you need to in some cases (e.g., theft), but coupon-reuse prevention is overreach.
- don't email the promo codes in plain text. ... It could also be built as a deep link into the app
Is this assuming iOS use and opening the mail on the phone?
If you disagree, please state the reason.
Paying for your kid, spouse or friend from a single card seems reasonable. Not everyone has lots of cards.
As for email canonicalization: what if you lose access to your account, want one for your kids, or want to separate work and personal? Don't do it by default unless you have a good reason. You're just adding minor inconveniences and possibly serious vulnerabilities (e.g., random email forwarders combined with typical email password resets).
Certificate pinning to deny user access: your APIs should be secured by actually making them secure, and that will work. The data itself (also) belongs to the user, not (only) to the service provider. Some may disagree, but laws in some countries say so.
It makes traffic inspection of your own devices hard. If you warn and ask the user, then fine, but I believe it was meant to be mandatory.