I found a security issue on a competitor, got fired and served a summons (accidhacker.wordpress.com)
359 points by accidhacker on March 17, 2022 | 366 comments



So, it sounds like this guy worked for CorpCo, found a security issue with OtherCo's app, and explored their APIs (maybe from CorpCo's network?), including accessing at least one CC number that was not his.

Someone else later discovered the same issue with OtherCo and stole a bunch more card numbers, and used them to commit fraud.

OtherCo looked through their logs, saw the initial exploration coming from this guy at CorpCo, assumed he was also responsible for the subsequent fraud, and contacted CorpCo which ultimately fired him. The fraud was substantial enough for OtherCo to convince authorities to pursue criminal charges.

It's a sad story - but it's not unreasonable behavior from all involved.


You missed the part where he immediately reported the vulnerability to his manager, security team and execs and got the assurance that it was being handled. If after that he is still thrown under the bus and fired, it's clear that someone at his own company dropped the ball and put the blame on him. I wouldn't be surprised if the actual fraud was committed by one of these people as well, using the very hack that he found and disclosed.


Finding a vulnerability is one thing. Using it is another. This person overstepped pretty severely and I'm not shocked at all that they got fired.

And honestly, writing this post at all continues to show bad judgment. I don't think this person understands where white-hat boundaries are, or how to deal with being accused of a crime.

Please reconsider if you think doing something like this to your competitors - or anyone - without some kind of program of consent on their part (like the previously mentioned Project Zero) is a good idea.


How did he "use" the vulnerability? What did he do that qualified as overstepping? He literally logged into a website and made requests to a hidden endpoint with random IDs. When he realized he was looking at production user info he stopped and reported it to higher ups. The process he has described is right out of the white hat guidebook. What would you have done differently?


He didn’t exactly stop. He could have quit poking around once alarm bells started ringing but he wanted to see how bad it got so he continued to poke around. Knowing when to stop should probably be a white-hat quality.


Poking around is exactly how you determine how severe the vulnerability is.

Simple data extraction is hardly outside what a white hat hacker should do. If there's an API endpoint that returns, say, `user.name`, it's reasonable to try other things like `user.email` or `user.credit-card` to see what else could be done BEFORE reporting it.

It might be totally valid to return `user.name` where `name` is simply the first name. But you'd never know unless you actually tried a few more endpoints.
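As a concrete sketch of what that kind of probing looks like (every name here is hypothetical - this is not the API from the article, and per the legal discussion downthread, doing this against a server you don't own or have permission to test can be a CFAA problem):

```python
# Hypothetical illustration only: the endpoint, fields, and auth scheme are made up.
import requests

BASE = "https://api.example.com"  # placeholder, not the real service

def probe_fields(user_id: int, token: str) -> dict:
    """Fetch one user record and report which sensitive fields come back."""
    resp = requests.get(
        f"{BASE}/users/{user_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    record = resp.json()
    # If more than a first name comes back, the endpoint leaks more than it
    # should -- which is exactly the severity question being debated here.
    return {field: field in record for field in ("name", "email", "credit_card")}
```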

I agree with a parent comment in that someone dropped the ball here and threw him under the bus to protect reputations and prevent any nastiness between the two companies.


No. Simple data extraction is what a contracted penetration tester does, because they secured (here's that word again) a contract that limits their liability when doing simple data extraction. Poking around to find out how severe a vulnerability might be is how you manage to get yourself in civil trouble even when you're testing a site that runs a bug bounty; god help you if you're doing it on a site that doesn't have one, or, worse, hasn't really ever heard of one.

People share a lot of really bad advice about security research. The best advice you'll get from people that don't work in the field is "stay away" and "don't try to help"; it's cynical, but at least it's not going to get you sued, like following this kind of "poking around is a white hat norm" advice will.


You can get into legal trouble simply by looking at the source code of a web page you are browsing, as has happened in several famous recent cases. However, the takeaway from that isn't "if you go into Chrome developer tools you totally had it coming".

There are bounds to what is or should be considered reasonable testing, and if you don't press against them then they will keep moving closer and closer to the point where simply using your own computer in ways the manufacturer didn't intend will be illegal (and that is already happening as well).

According to what the author describes he did absolutely nothing wrong morally or legally, and the attitude of victim blaming prevalent in this thread is a huge problem in the security industry. We should be supporting rather than crucifying him, because one of us will surely be next for looking at a website the wrong way.


It is absolutely not the case that the author described something absolutely legal. I don't care if you think it's "victim blaming"; I am making a simple positive statement, not a normative one. People will get hurt buying into what you're saying. There's no "reasonable testing" exception to the CFAA. You're very unlikely to be prosecuted for doing this stuff, but it's not that unlikely that you will get wrecked with legal bills reaching that conclusion.


'tptacek is 100% correct here. Anyone reading this: please listen to these words of wisdom, because not doing so is how you get seriously hurt in the eyes of the law.

Hack, but hack carefully, at least until there are laws that protect you. Today, there aren't.


Considering the author hasn't been pronounced guilty by any court I'm not sure how you can say that with any confidence.


There is a huge gray area between "absolutely legal in all circumstances" and "absolutely not legal under any circumstances". The fact that somebody has not (so far) been found guilty in court doesn't mean their conduct was definitely legal.

If you're trying to discuss whether something is legal in general, looking at a broad spectrum of case law is generally more instructive than focusing on the outcome of one particular case.


1. tptacek is an expert on this subject. He works in security. He is giving good advice, that's not going to land people into a pile of shit.

2. You don't need to be found guilty for a court to ruin your life. If you doubt me, ask anyone who's 'won'[1] a nasty divorce.

[1] And I don't mean the lawyer.


Because the behavior described in the blog post is open and shut CFAA-and-equivalents, regardless of weaseling over words like “authorized” and “access” and “computer”. The author’s own words complete a CFAA case. Not argue for. Complete. The narrative as described is prosecutable.

You could have done everything the author claims to have done in order to stop a Martian invasion or the extinction of every living thing or in the genuine spirit of trying to help and it’s still several prima facie violations of CFAA. It just is. I’m sorry. The why doesn’t matter, barring the contractual scenario tptacek points out (and which STILL requires diligence by both parties to avoid prosecution).

To be clear I think it sucks what happened to the author, but if weev goes down for enumerating primary IDs via a Web browser (and to be fair, also trying to sell the data like a complete tool), setting up an entire technical infrastructure to compromise this app in this way is trivially demonstrable intent. You and I both know what Charles does. Now wait until a prosecutor spins the whole setup as a giant technical hack that shows this person intended to compromise a competitor. I’m not even finished with law school and I’m certain I could prosecute this person successfully, but note that doesn’t mean I’m saying they should be.

Given what you said here and to your point, I’m going to preempt your likely retort and point out that I’ve described the behavior as afoul of CFAA and not the person. You’re right that they are entitled to due process. The blog post is literally evidence is all. I’d bet my Rams tickets next year there’s a subpoena on its way to this post. If not in a theoretical criminal case, then definitely in the civil litigation already underway (again, taking the author at face value).

tptacek is right, and I mean this with respect: you really need to be careful with your opinions on CFAA, particularly when potentially suggesting violation thereof. Your pronouncement that the person didn’t do anything illegal is actionable in a very distant, fucked up world with a bunch of prerequisites, but still a very very possible world. (IANAL/YL, comment is general opinion and not advice, etc)


> You're very unlikely to be prosecuted for doing this stuff, but it's not that unlikely that you will get wrecked with legal bills reaching that conclusion.

What he's saying seems to be exactly in line with what's happening. Even if the author doesn't end up being found guilty by any court, he'll still be wasting a bunch of money on lawyers.


How could you possibly get into legal trouble for viewing the source code of a website? Are we talking about China or North Korea maybe??


See the recent(ish) "but, hacker, prosecute!" case with teacher PII in, um, some US state or another. Where a journalist was looking likely to be put up on charges of "computer hacking" based on exactly just "view source".


> Poking around to find out how severe a vulnerability might be is how you manage to get yourself in civil trouble even when you're testing a site that runs a bug bounty

Suing people doing it is a fantastic way to ensure well-intentioned people will never report vulnerabilities to you anymore.

The same goes for your whole post chain.


You’re a curious person and see that the back door to a closed restaurant was left unlocked. You should let them know, but make sure you find out how big of a screw up the closing manager caused so s/he can be dealt with or the process fixed.

You open the door a little and peek inside and see the office door is open. “This can’t be,” you think as you walk into it.

You bet there’s a safe left unlocked and customer reservation left unprotected on the computer, “how irresponsible can these people be…”

If their security is this bad, you wonder what their food safety processes are.

It’s a slippery slope, and maybe well-intentioned, but that doesn’t change the fact that you’re not allowed to wander in through this restaurant’s back door or be there when it’s closed, and now that you have, how do you prove you didn’t do anything malicious if the only evidence that exists is of you in the restaurant when you’re not supposed to be?

Maybe you can make an appealing public good argument against criminal accusations based on your stellar clean record, but how do you protect yourself from civil suits, which they have every right to spin up if they have damages and can link you to them?


Please don't conflate physical invasion with violation of data privacy. Everyone downloaded a car, 3d printed it, and are now driving around happily in them. This argument is beyond tired.


The laws regarding computer intrusion are stricter than those governing real-world breaking and entering. Trying someone's door isn't a federal crime (though it'll absolutely get you arrested, even though that's the open protocol a doorknob advertises).


Oh, well. I didn't write this law, I just pay attention to how it's used.


Which is exactly the problem - this kind of rule-following behaviour for arbitrary, nonsensical rules allows for more rules to be implemented without pushback.

We as developers and end users have to fight it and not simply argue for the sake of following rules.

It makes zero sense to prevent people from viewing source code of a page when that is how the entire tool chain was built to be used.

They should have made their own native app that couldn't be reverse engineered if they had any mind for real engineering rather than blame 'the web'.


You can be as pissed about it as you like, and I don't have a problem with that. Where I start having a problem is when people say things like "poking around to see what kind of data is exposed by a bug is reasonable testing and not illegal". You wanting something to be legal doesn't make it legal!


Point of order - upstream, what is being discussed is 'whitehat' behaviors and tendencies, and whether a behavior is reasonable. This got transmuted into a question of legality by someone downstream, and is in my opinion a good shout but wasn't really a rebuttal to any of the points raised. It seemed like there was some talking past each other going on and I think that is the core of it.


I think I keyed off of "I agree with a parent comment in that someone dropped the ball here and threw him under the bus to protect reputations and prevent any nastiness between the two companies."


>...and is in my opinion a good shout but wasn't really a rebuttal to any of the points raised.

True, but it was not somehow 'out of order' because it was not a rebuttal of the 'whitehat' claims. Given that the article is not primarily about how the author discovered the vulnerability, but the legal problems that ensued, it was not at all unreasonable to point out that acting in accordance with 'whitehat' behaviors and intent is not enough to shield oneself from legal scrutiny. Furthermore, the ensuing discussion, in which various attempts were made to deny this distinction, shows that the point needed to be made!


Objection sustained!


> poking around to see what kind of data is exposed by a bug is reasonable testing and not illegal

Poking around to see what kind of data is exposed by a bug is reasonable.

I’d stop there, and I’d suggest people do that, because it makes for a better society for all of us, regardless of the law. What’s that called? Civil disobedience?


Why bother stopping there? You'll already have committed a felony. Go as far as you want from there! It's civil disobedience, after all!


Real issues arise when people use "illegal" and "immoral" as if they were interchangeable.


(Note: The post appears to have disappeared, so I don't have the full details of what the author did/didn't do. I have a vague understanding based off the parent conversation.)

I don't want to sound purposefully ignorant here, but is your advice basically that, if you find a vulnerability, the only way to remain a white hat hacker, assuming you were not hired to check for vulnerabilities, is to not report the security flaw?

If not, please explain.

If so, I can certainly understand the perspective, but for me, I feel a sort of obligation to make people aware of serious problems. Whether it be them driving around with a headlight burned out, or their website processing credit card info in an insanely insecure manner. Each are dangerous, and can cause many problems not just for the operator, but for the people around them as well.

To me, it seems ultimately at least as malicious as a grey or black hat if a white hat were to find a basic (or serious) issue and not try to inform the owner (or in this case, their boss) of the issue in some fashion.

Perhaps the law should allow for some kind of "good samaritan" clause for these sorts of situations.


There is no "good samaritan" clause in the CFAA, just for what it's worth to you.

I don't know what this "white hat hacker" stuff is. There's no rule of hats in the law. I know "white hat" mostly as a term of derision, not as a technical or legal term of art.

The problem this person has run into is that they went looking for vulnerabilities in someone else's website without permission to do that. You're not allowed to do that. If you do it, a lot of times you'll run into companies that are cool about it, but a lot of times the companies you run into aren't cool about it. The law is on the side of the people who aren't cool about it.


> The law

US law right? Out of interest, not trying to make a point; I cannot read the article so I do not know.


to the sibling comment, since replies are turned off: yes, US law. I doubt either 'tptacek or I know any detail of significance around non-US computer fraud / security law.


> I don't want to sound purposefully ignorant here, but is your advice basically that, if you find a vulnerability, the only way to remain a white hat hacker, assuming you were not hired to check for vulnerabilities, is to not report the security flaw?

I think the important point is: if you don't want to risk a world of legal trouble, don't look for vulnerabilities in other people's systems in the first place, unless and until you have a contract with them. At the very least, make sure they have a bug bounty program or some other public indication that they are interested in getting vulnerability reports.

Again, the most important point is that you do this before even checking for the vulnerability you think you may have found.



It would be better if we weren't dragging this back onto the thread.


From a legal perspective, as far as I can tell, you're right. From a moral perspective, this only considers the well-being of the company with the security flaw. Limiting security investigations to those that have the full blessing of the company means that security flaws can be swept under the rug.


If someone's left their door unlocked, and you notice, you let them know it.

You don't walk in and rummage around their house to "investigate the severity".


This is a bad analogy - it's more like you request access to an area, there are no locks, you're invited in and told that the content is not restricted.


No, it isn’t. This is a self-serving fiction, and repeating it for the witless is how they end up being prosecuted.

The mitigating circumstance is if you have a written agreement that says, you are authorised to be there, and doing what you’re doing.


This happens a lot! It's gotten people in trouble even in situations like bug bounty programs, where some amount of testing is authorized --- there's a delicate set of norms about stopping your exploration when you find a problem. In a situation where there's no prior authorization --- as is likely the case here --- violating whatever implicit norms exist can easily get you in legal trouble.

At the very least, if you stumble across something that spits out a plaintext credit card number, stop right there and don't do another damn thing with the target. Don't see which credit card numbers you can see, don't change the `/100` in your URL to `/101`, just stop.


...and report it, apologizing for going too far in the first place but explaining how you got there and that you want to cooperate in any way to help them fix it.

And please, don't tell them you'll give them the bug if they pay you or give you a t-shirt - this is blackmail, most likely (IANAL), and sure to get you in trouble. (but it happens)


The crazy thing is how unintuitive the law is here. If we called customer support and asked "can you tell me the card number of [John Doe/customer 314/etc]" without ever claiming to be that person, wouldn't we put 100% of the blame on the company for answering that question? (If the caller then used the number, that'd be a different story.)

Most devs outside security would just assume that unless you're doing something encryption-related or at least trying multiple passwords, you're not hacking. You're just asking nicely for information.

It is amazing how little technical knowledge you need to violate the CFAA.


Oh, the URL change...so many drug test results.


> Knowing when to stop should probably be a white-hat quality.

You don't stop until you've fully explored just how severe the vulnerability is and all affected systems. This isn't rocket science. You have to know just how bad things are if you are to have any hope of fixing them.


Isn't that the job of the person who is responsible for the code, and NOT the responsibility of a well-wishing outsider who stumbled across the bug in the first place? Unless you're actually angling to get the job of fixing the code.


Better to reveal such things anonymously over tor or something similar.


> I don't think this person understands where white-hat boundaries are, or how to deal with being accused of a crime.

OK but where, exactly, ARE "white-hat" boundaries? Is there a rule-book somewhere that he missed? How does one know how to deal with being accused of a crime? Practice? Cop-shows on TV?

IMHO this person is someone who was just too naïve, overly trusting and unlucky. If there's blame here, much of it falls on his organization for failing to handle the initial incident report seriously.


> I don't think this person understands where white-hat boundaries are

Not that ignorance of the law is an excuse, but when are software developers not in infosec supposed to be taught these limits?



Thank you for the link, but is it something you expect all non-infosec engineers to know?


I think the subtext of CFAA is that the law believes it shouldn't have to tell you not to try to pry credit cards out of someone else's website. I'm not speaking normatively, I'm just telling you the way it is.

I think publicized security research --- which I obviously support --- has created expectations among technologists that either weren't intended, or were more wishful than the facts support (lots of researchers, most probably, think the CFAA is a bad law and are happy to set expectations premised on its illegitimacy, but: see Wesley Snipes for how that turns out).

If there weren't lots of public security research, I don't think you'd really have to tell developers not to go looking for vulnerabilities.

Again, and to be clear: if it's something running on your own machine, or a machine you own, go nuts.


This is an important caveat; the issue at hand is looking for vulnerabilities on other people's machines, not in other people's software. If it is running on your machine (read: not SaaS), reverse away.

Releasing the exploit, on the other hand, is a different story (don't do that)


Is releasing the exploit problematic you think? In a legal sense?


It depends on how you got the bug. If you test something on your own machine and find a vulnerability, you're fine publishing it (modulo you may be civilly liable if a binding contract prevents you from doing that kind of research --- a pretty rare case). If you find a vulnerability in someone's SAAS application running on their server and publish it, the publication itself probably isn't unlawful --- the problem you'll have is that you've also published evidence that you committed a federal crime.


To be clear: I meant releasing an exploit, not a bug.

Per my understanding, there is a distinct difference between releasing a bug (that you found in software on your own machine) vs. releasing a tool that exploits that bug.


I don't really think that's true either, though obviously it has to be at least a little riskier to release an exploit; like, a fact pattern that has actually happened before is "someone writes an exploit, and a friend uses that exploit to steal a bunch of credit cards; the exploit writer is culpable as a co-conspirator". But just releasing an exploit by itself is I think --- NOT A LAWYER, TALK TO A LAWYER --- pretty safe.

This also comes up in discussions about exploit markets: you can sell an exploit to a bounty program, or to Zerodium or whatever, but people always think you can make more (for instance, if it's some dumb SQLI, more than the $0 the public exploit markets will pay you for it) by selling "on the dark web". But if you're selling a bug that only lets you exploit some specific SAAS application "on the dark web" to some anonymous criminal (or someone a jury will think you should have known was a criminal), you're taking a chance that a prosecutor will make a case that you took money to facilitate a specific crime that they're now going to charge you with.


Yeah, I’m not totally certain, and I suspect intent matters.

My example was that cracking a game is illegal, but releasing a cracker is much much worse.

And, again, IANAL.


Part of it is that there's a specific law criminalizing copy protection bypasses (the DMCA), but it also includes an explicit exemption for security research.


I am not a lawyer, but for example, cracking a game is against the EULA and likely illegal. Releasing a cracker is much worse.


I work in banking, and when probing (internal) APIs of partners/clients you accidentally find stuff; they are all very broken, and it is normal to find bugs and holes while just probing the use of the API, not even trying to find issues. Not sure how to prevent that, and no, I don’t know how far I can go; I do know I cannot do my integration work if I do not do it. Not sure how others can.

I would never report bugs to them though; I am old and cynical; software is very broken, especially since the Big Move to modern tech. I see more and more ‘outward-facing’ (still internal) APIs in banks written in JS (not blaming the language, but the vast number of cheap and untrained coders in that language niche). We used to write those APIs against the real ‘basement’ backends; now they do it themselves (usually hiring an external consultancy like Capgemini for big $$$, which then has the work done by its offshore teams), and it is a freakin’ horror show; same for insurers. I assume going in that I can somehow query all credit card numbers in there, so no need to tell them that.


Yes, I think it is reasonable to know what's appropriate and what's not. I'm not in infosec.


Great link. I posted it, so if people wants to discuss it: https://news.ycombinator.com/item?id=30710885


I agree in a practical sense. But I think there is a huge social problem here.

Power relations between employers and employees, corporate goons (lawyers), lack of honest communication, lack of trust, lack of collaboration to make things better, general FUD. This all leads to massive inefficiencies; people get hurt, and progress is slowed down or even reverted.


Save those emails, folks... Never take a handshake or a voice confirmation on something that could get you in trouble.


The problem is that many people still insist on doing things in person or via a ‘quick call’. I have been around long enough that I insist on chat and email; voice is fine for fleshing out features and such, but things like this, or basically anything that could come back to you, should be in writing with tons of CCs. I know many people who insist on phone or in person because they are too lazy to read and write, but I definitely also know a % who do it because then there is no verifiable record. And so I err on the side of safety and do everything in writing.


What you do in those cases, though they can deny they received the email or read it, is to send a sort of "meeting notes" email to them, where you say something like "as agreed during our phone discussion earlier today...".

That way you create a sort of trail that a discussion took place and at least that was your understanding at the time.

Even better if it can be proved that they received the email and read it, because in that case, if they don't reply disagreeing, you can argue that their silence was an approval of your account.


> though they can deny they received the email or read it,

That’s why I put many CCs on and ask them to respond whether they agree or not; I make sure they do. At the time, this is almost never an adversarial type of thing, so people generally play ball automatically. When things go rotten, then we have everything to win. This has happened a few times in court.


Then "confirm" it over email :) . This almost always works "Per our discussion I'm getting ready to {bullet_list_of_items}"

Edit: someone mentioned CCing everyone involved in the "plan" as well; that's something I do too but didn't think to mention.


That's an important point, and it will make white hats think twice about whether to report stuff like this in the future or keep it to themselves and deny anything that might circle back.


> it's clear that someone at his own company dropped the ball and put the blame on him

Hard disagree. Why is he reporting it to his own company? It's in his company's best interest for a competitor to have a security issue. He should have gone directly to the competitor's security team. Could have potentially gone anonymous. Could have gone through another person. A reputable security researcher. Lots of other things he could have done, but didn't.


Sure he could, but his "mistake" was to act in good faith. He reported to his own employer to gain points inside his company. He's showing everybody how smart and competent he is. Maybe he will get a promotion.

Moral lesson: your employer isn't your friend. They will throw you under a bus in a blink. Always cover your a*.


Agreed that anyone who knew about the flaw should be as suspect as the employee in question. But what was this guy doing pentesting a competitor instead of performing his job duties - that’s grounds for being fired right there. Even with an “ok” outcome he’d have arguably caused harm to his employer.


Even ignoring that most software jobs understand that a small amount of time just noodling away on tech stuff is to be expected as part of the job, exploring a competitor's public APIs seems very closely aligned with many programmers' priorities.


Maybe, but if his company really had a problem with it he would have been fired, or at least reprimanded, when he initially brought up the problem, not months later.


I know right? I bet he even went to the bathroom and even slept at night. Imagine!


Because that's definitely the same thing.


[flagged]


Could you both maybe stop? We get it, you contemptuously disagree with each other.


> it's not unreasonable behavior from all involved.

Isn't it? I understand OtherCo uses legal means to investigate more effectively, a judge might order ISP logs to be turned over or so, but CorpCo knows this person, knows they disclosed what they saw from the start. It sounds pretty unreasonable to fire someone over hearsay, especially when it's unlikely to be true (what dumbass does card fraud and tells their employer—a bank—and requests they disclose it to OtherCo?).

And even from OtherCo, they're acting like it's the 70s and they've never heard of responsible disclosure. I understand their logic a bit more for the aforementioned reason, but still, reasonable is not the first word that came to mind.


Eh, it's worth looking into; it's not a stretch to think that someone could find a bug without trying to conceal themselves, then return later from a more anonymous client to exploit the bug further.

it's not proof of guilt but a reasonable trail to follow up on


I think it is a stretch. Why would someone who knew enough to disclose immediately bet their freedom on exploiting it further, when their initial disclosure would provide probable cause that they were the exploiter? I can't even think of a movie where that happens, probably because it's impossible to create a suspension of disbelief with that plotline.


> think of a movie where that happens, probably because it's impossible

Always reminds me of a CSI (original series) interview with the actor who played Grissom; the writers contacted the police and forensics labs for story ideas but the real life events were so insane that they would feel too fabricated for television.

People are generally greedy and often dumb; maybe they found an open house door, put a little note in the mailbox, slept on it and thought maybe they could get away with a further peek inside if no one closed the door yet (which makes the perpetrator think they didn’t read the note yet)?


On one hand, someone working infosec might also have access to a spare computer running Tails, which they use to sell the exploit to a third-party buyer, but only after reporting the exploit to the victim company to cover their tracks regarding liability for things like those IP logs. On the other hand, it's not uncommon for a vulnerability to be detected multiple times by different unrelated people, especially if that vulnerability makes itself known via semi-regular use of the product/service.


Aren't you just proving their point?


Following up doesn't necessarily mean suing. But that's the part I can somewhat understand, yes. What's more unreasonable is OP's employer, in my opinion.


OP's employer would likely want to avoid looking complicit in hacking a direct competitor to their line of business. What they did does not look overly unreasonable from that POV; the problem is OP put them in a pretty bad position. It's not just about being factually right either; appearances can matter too.


OP, absolutely, did not, put their employer into any kind of situation. Thanks for F-ing helping.


If the retention of the employee is seen as a liability, then that's it, he's out.

Very unfair from employees perspective, but 'self interest rational' from the Corp.

If it's perceived that the employee 'went too far' in their 'discovery' of the competitor's API, and that could possibly constitute a crime, then it's basically a 'no brainer': they're going to have to let him go.

Since the staffer did actually report it right away, hopefully he will have his own political cover, and can communicate that to future employers, who should 'get it' - though some won't.

It's unfair but these things happen when an incident blows up into something where the stakes are much higher. 'Fairness' at the microlevel goes out the door towards ostensibly bigger objectives.


I do find it unreasonable to assume that OP would have been the only one finding and exploiting such a blatant vulnerability.


It's not unreasonable. It may be stupid, but criminals do stupid things all the time. Someone has to be first. Turning the matter over to the authorities to investigate is also reasonable.

What's disappointing is that his company didn't stand by him during the investigation, especially since he reported it.


To me that is actually grounds for OP to sue the employer for slander/libel/wrongful termination etc. OP did their part and the employer didn't.


Do you have a legal background? I don't; what I know about defamation law I mostly get from religiously following Ken White. This doesn't sound like an especially plausible case of defamation. If it helps: absent a contract saying otherwise (pretty rare) employers in the US can generally fire you because they don't like your new haircut.


(With the exception of firing you for protected reasons/classes, like race, gender, sexuality, etc.)



It is not a smoking gun. Now, if they find unique product obtained through a transaction, it would be.


Also could land them substantial prison time under the CFAA, re: accessing a computer system beyond what was authorized. The above link seems to redirect to a default site now. I’m guessing they yanked the post.

Back before crime it was ok to explore all over town and on private property. When the kidnappings and burglaries started, all that changed. And I’m so very sorry about this because the old days were truly magical.


If they didn't do anything with the information, as seems likely (why assume otherwise, people? we're not detectives!) then no, it's unlikely to result in substantial prison time. The thing to watch out for in these benign cases is doing things that trigger substantial forensic investigations, which easily run into 6 figures. If you don't cause that kind of damage, and you don't in any way attempt to profit from the activity, then there isn't actually much in CFAA that will drive sentences up.

(I'm not a lawyer, but for obvious career reasons I nerd way the hell out on CFAA stuff).


Am I recalling right that much of weev's $73,000 in damages was the cost of AT&T sending physical letters to everyone saying their email addresses were leaked?

Yeah, this supports that:

https://gizmodo.com/exclusive-at-t-hackers-last-bid-to-stay-...


It's also not completely crazy to wonder if this guy discovered the flaw like this, reported it in a way he knew would never actually get to OtherCo, then exploited it himself through a more anonymous channel for real. Then wrote it up like this and made himself sound super-extra-innocent when he found it.


Agreed. This is the most likely scenario. His blog post is literary trash and full of unconvincing self-rationalizations of his every move prior to being caught, while also digging a hole for himself by articulating what kind of industry-insider knowledge was required to exploit that vulnerability. There is, however, a small chance that he is telling the truth, but it’s not enough to get him off the hook in a civil case (at least in the US) because the standard of proof is much lower. Dude’s handle on Twitter is “AccidHacker” and he may claim that’s short for “Accidental Hacker”, but that’s also not believable, from a literal/legal perspective anyway —more likely something an actual hacker would post lamenting how they wound up in such a life.


Yeah, next time report it to the feds/state.

Get protected status as a whistleblower.


There is no protected whistleblower status available for finding web bugs.


If you can pitch it as securities fraud, you might have a very small chance with the SEC. There is a new law protecting whistleblowers who report to the SEC, and people are applying under it for a wide range of reasons. This seems related to the idea that "everything bad a public company does is securities fraud".


It's fun to noodle about! But so far as I know, nothing like this has ever happened: nobody has gotten whistleblower protection from 18 USC 1030 charges, and there is no obvious legal pathway to get it from any agency of the federal government. Sell your bug to a carding ring, then go to the FBI and ask for immunity in exchange for helping them bring down the ring; that's the closest I can think of.


No, do not talk to the police. Anything they need to hear before charging you can be published publicly on the web. Anything they need to hear after charging you will be communicated to them in writing by your defense attorney.


I don’t see how it’d count as a whistleblower if it concerns a company that has no relation to your employer.


TBH I definitely wouldn't want my employees going around and hacking competitors in their free time. Too much potential liability. I can see giving a star performer a chance if it's their first time doing it and if they'd already come clean about it to the other party, but otherwise I'd probably cut my losses.

It's one thing if it's a designated, well-vetted free for all, like Project Zero, with clear legal approval. It's another thing for a regular employee to be hacking competitors -- that begins to look like industrial espionage. And even GPZ doesn't hack banks.


If this (not infosec) person was doing this type of security probing during work hours using work equipment, I could see them being fired even if they are hacking on the company’s own product.

You’re not paid as a competitive intelligence analyst and security researcher, regardless of your “personal interests,” you’re paid to work on a product to make the company more money.

Furthermore, if you start unpacking the mobile app of a bank and doing actual pen testing analysis in the wild hitting their prod servers, even accidentally, you seriously need to understand the situation you are putting yourself in, regardless of where you work.


This is the part I’m stunned about. So many in this thread don’t see the liability of unpacking a competitor’s app and poking around their APIs.

Forget CC fraud. This opens up liability for IP theft. Even if the other company doesn’t win the case it is going to create a shitstorm that might potentially give them free PR. “Look our new product is so good their devs are reverse engineering our innovation to copy us”

Forget what this person found out. Going and unpacking the app and using private APIs itself was a dumb move regardless of what happened after. People can be idealistic and talk about intent but it really is a bad move that opens all kinds of liabilities.


Personally I don't think what he did was wrong.

I think the moral of the story is don't do non-work related things on your work computer. It shifts the liability from the individual to the company.


But also don't do work related things on your non-work computer. Investigating the source code of your company's competitor's products is absolutely work related.


The thing that is kind of baffling to me about so many people in this comment section is that they think their personal opinion about the ethics of what he did has any bearing on anything. That's not how the legal system works at all.

To quote Tom Scott, a youtuber, "I'm not telling you how it should be, I'm just telling you how it is."


Precisely. Any corporate lawyer reading this thread is probably hoping none of the engineers in their company think this way.

In my old company we were told to perform competitor analysis only through publicly available information - blogs, marketing announcements, videos, ads, etc. No creating a free account to poke around. I often wondered what happened in the past to have this rule.


All the actions here were decisions made by a private company.

At the present time the legal system is not involved beyond anything other than the terms of employment in the employee's contract.

Just because a company may have the right to fire a person at will does not prove that the employee did anything wrong. (morally or legally)


> if you start unpacking the mobile app of a bank and doing actual pen testing analysis in the wild hitting their prod servers, even accidentally

That sounds pretty damn incriminating; not sure how the OP thought he could get away with it. The article is not available anymore, but at first I had thought that he might have poked around with the help of Chrome inspect tools on their competitor's web app, seen a call or two being made to their backend API, and then randomly changed the ID in the query being made, or something like that, to see what the response would be. But unpacking the mobile app to see what backend API calls are being made is on a whole other level.

To be fair, nowadays I would not even choose to do the first thing, i.e. playing around with making backend API calls based on what I can see through Chrome inspect tools; I've read about too many cases of people getting in legal trouble over it and now know better. I might have done it 5 or 7 years ago, but definitely not now.


Eh, I agree it was kind of a dumb move, but there's no reason he shouldn't be able to have a snoop around. This wasn't "hacking" in the exploit sense. Web scraping, poking and prodding at APIs (yes even "private" ones), indexing a user ID, looking at source code, injecting your own code (e.g. modifying client js), redirecting some requests, none of these are hacking or exploitation, they are literally how the technologies of TCP/IP, HTTP, the web, and web browsers are meant to work.

I'm not sure where the line of "trying to break security" is, but none of the above count. Even accidentally blasting some endpoint with automated garbage. Intent matters.


There is a reason he shouldn't be able to have a snoop around: it's a violation of federal law. Don't test other people's websites without permission (and even then, be careful).

I worry that people read a lot about vulnerability research conducted on iPhones or Chrome or whatever and assume that it's open season on any kind of application, but the rules for apps running on other people's servers are very different.


Ah, the ol' "press F12 and you're a criminal" argument.

> the rules for apps running on other people's servers are very different

Did the "hacker" ever have access to other people's servers? Or did he merely observe what his own computer was doing and then make some web requests?


I don't understand what you're trying to say here. An argument that your browser is just doing what it's supposed to do when it triggers e.g. a SQL injection on some server somewhere is, I promise you, not going to help you if the government decides to prosecute you.


You're mistaking the easy for the legal. It can definitely be illegal for someone to "observe what his own computer was doing and then make some web requests".


If "observing what your computer was doing and making <internet> requests" was always legal, then there would be no such thing as illegal hacking, because that covers essentially all possible activities one can do with a computer. There may be some who would prefer that world, but it is obviously not the one we live in.


By your logic, someone that successfully guesses a weak password and ssh's into someone else's server and takes all of the data they can find there is doing nothing wrong, as they are "merely observing what their own computer was doing and making some TCP requests".

Obviously, this is not how the law sees it.


I don't think anyone here is disputing the fact that in the eyes of the law, his actions were illegal. But as someone who develops web scrapers/automation for a living, I poke and prod APIs in much the same manner as this guy. I don't feel such exploration should be criminalized. Sadly, it is.

In your SSH scenario, it's completely different - you're literally acting with the intent of accessing someone else's computer to exfiltrate sensitive data. That's not what happened here (according to the author).


Running a crawler and poking around API artifacts manually simply aren't the same thing under the law, even though on the wire what's happening is the same. As long as your crawler isn't programmed to go looking for SQL injection vulnerabilities or whatever, there's no case to be made that you had any intent to gain unauthorized access. That's what matters here: your intent.


If the "hacker" hacked his web browser, this wouldn't be an issue. They didn't. They attacked an application running on a server somewhere that was not their own machine, from a web browser running on their machine. The former is the important part in the eyes of the law, and the latter is irrelevant.


Well... his man-in-the-middle was a false attestation of identity to gain unauthorized access. That is pretty much the definition of hacking. The app would have otherwise encrypted the traffic and they wouldn't have been able to 'poke around'.


It was a false attestation of identity to his own device. Surely you can't be accused of hacking your own phone?


It wasn't the device that was under attack but the communications between the app and server. Sure, there is an idea that everything your phone does ought to be known by its owner but that isn't how the phone or law is set up and the attacker had to find a security vulnerability to gain access.


I don't believe your assertion about how "the law is set up" is correct, if you mean to imply that altering the operation of code running on your device is proscribed.

Where it falls foul of the law is when you start sending requests/commands to other people's servers in excess of your authorisation.


That doesn't sound like they were using SSL for authorization. SSL (in the usual use case) just secures the client-to-server connection. You can e.g. use specially installed certs for accessing VPNs, but I don't think that's the situation here. It just seems like the client was expecting some domain name to serve some endpoints, and he just MITMed them to proxy it through his own servers. That's not hacking, because here SSL isn't being used for authorization.

The bottom line is the resource permissions weren't scoped right, like at all, and no amount of SSL is gonna fix it - that's what logins and tokens and oauth and that whole dance are for.

Sorry, endpoints aren't doors of a house, where "waltzing in just because it's unlocked is breaking and entering." They are protocols. Merely "talking" to open protocols is/should not be a crime.


Whether or not you think it should be a crime has absolutely nothing to do with whether it is. In this case, you're pretty much dead wrong. There's no "this is how the open protocol" works exception to CFAA. A case will turn on your intent, and the standard to which you'll be held is "what a normal person on a jury would think to do with their browser", not "what people on HN think is reasonable to do with a browser".


Incidentally, cases of breaking and entering often turn on your intent as well; that's the whole concept of mens rea.


Right. Let's say you live in a cookie-cutter apartment complex. You go to door 22 instead of 23, and because the landlord uses cheap locks, you are able to get in with your key and a bit of jiggling. You make it as far as the kitchen table with a very surprised family before you realize it's not your apartment. Not BnE.

The CFAA uses "intent" and "defraud" quite a few times. That's not gonna stop some DA from trying to throw it at you, and your life is gonna suck, but state of mind is going to be the most important factor. The disclosure shows you weren't in it to defraud. Obligatory IANAL.


The whole reason CFAA got passed in the first place was that Congress was concerned about crimes involving computers that weren't fraud, and thus weren't chargeable under wire fraud statutes. "Fraud" isn't the threshold act of a CFAA case; unauthorized access is.


I don't think it's pedantic to say it's not "what a normal person on a jury would think to do with their browser".

It is "what a person on a jury would think is legal to do with a browser".

Otherwise any niche activity would be de facto illegal.


No, you're not following the argument. The elements of a 18 USC 1030 charge have to include unauthorized access along with an intent or knowledge that the access was unauthorized. Just doing something silly like changing the colors in your own browser, a niche activity, can't get you charged --- there's no colorable argument that your access was unauthorized. Looking at other peoples' credit card numbers is a much easier case to make, and then the question becomes "did this happen totally accidentally, could I myself as a juror have found myself in this situation?".


I almost completely agree with this comment, so I won't nitpick; I just don't think the statement I commented on earlier was a fair simplification of the jury's job.


>That doesn't sound like they were using SSL for authorization.

Not for authorization. As explained in the article, the man-in-the-middle attack was successful because the app didn't use SSL pinning. This allowed them to decrypt the traffic between the app and the server; they could then view the API calls and get an understanding of how it worked. The traffic would have otherwise been encrypted.
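For anyone unfamiliar, here is a rough sketch of what pinning amounts to (the hostname and fingerprint below are placeholders, and real mobile apps usually do this via platform or HTTP-library configuration rather than by hand):

```python
# Rough sketch of certificate pinning. HOST and PINNED_SHA256 are placeholders.
import hashlib
import socket
import ssl

HOST = "api.example.com"   # placeholder
PINNED_SHA256 = "0" * 64   # placeholder: hex SHA-256 of the server's real leaf cert

def leaf_cert_fingerprint(host: str, port: int = 443) -> str:
    """Connect over TLS and return the SHA-256 hex digest of the leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

# A MITM proxy (Charles, mitmproxy) re-signs traffic with its own CA. If the
# user has installed that CA, the normal TLS handshake succeeds -- but the leaf
# certificate changes, so a pin check like this fails and the app can refuse
# to talk.
if leaf_cert_fingerprint(HOST) != PINNED_SHA256:
    raise ssl.SSLError("leaf certificate does not match pin")
```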


>there's no reason he shouldn't be able to have a snoop around

Copyright, anti-trust, patent, civil and criminal liability beg to differ. It's just a bad idea to snoop around the technical implementation of your competitor's product. Nobody should do it without the explicit prior permission of their employer.


> I can see giving a star performer a chance

No. A million times no. You do not use double standards. It kills morale, because it's immoral. Anyway, a top performer should know when to walk away and contact security and legal.

If I found this, I would immediately have contacted my manager and our security team, and seen if our security would advise legal. Then we'd immediately contact OtherCo's security team.

If you find a security flaw, you have to immediately report. You DO NOT "explore" someone else's machine. It goes sideways too easily, and people always want to get the cops involved. Your number one duty is to protect yourself.


> Your number one duty is to protect yourself.

Which may involve not informing your own manager or security team, depending on context?

I wonder if he would have been caught if he hadn’t told his own company?


If he was doing it from the company network and company devices, almost surely, unless he got extremely lucky and their internal monitoring is pretty bad (unlikely at a bank).


Cover up is never the solution.


It’s literally the least painful solution for the individual involved.

If the guy hadn’t reported it to his company, nobody would have known he was the one exploring their API and he’d be home free.


Until it is uncovered and the individual is now screwed twice; once for doing it, and once for covering it up, at which point the company can now potentially claim gross negligence if they become liable.


It literally just takes someone doing a log inspection, and then he’s super screwed.


I'm not saying I'd give them a chance to hack. I'm saying there's a chance I wouldn't have fired a star performer immediately on hearing they had been hacking. If it was someone I could afford to part with, I would, without a warning. The behavior in this blog post shows extremely poor judgment.


But you have to fire them. You especially have to fire the star performer.

Saying the rules don't apply to certain individuals will destroy your team.

There are plenty of policies that will get you immediately fired at many companies. Inappropriate data access is always one.

And let me tell you from experience, EVERYONE is replaceable. Just do what you'd do if he left, or dropped dead. If you can't handle those contingencies, you have something deeply wrong with the organization of your team.


I'm interested to hear what real-world experiences have led you to think this way.

From what I've seen, exceptions can always be made and people will often put up with it, depending on the circumstances. Look at the number of people in here who don't believe what the OP did should be a crime, and don't consider his actions immoral. You're telling me those people wouldn't accept a little rule-bending if they were on the team? I can't agree.


If your point is that because some people will accept some rule breaking, rule breaking is okay, then I simply don’t agree.

Of course some people will accept rule breaking. There’s always people that will justify it post hoc. There are also others that won’t. The real problem is you’ve now created a culture of rule breaking. Soon everyone will be violating policies as a matter of due course.

This is exactly how you get lawsuits and criminal complaints. If you’re lucky, you just have a toxic culture that hurts recruiting and retention. People here always complain about “politics”, but that’s exactly what this is.

If you need examples of where exceptions came back to bite people and companies in the ass, it’s everywhere. Theranos, Uber, Zenefits, Google, the list goes on and on. Even so, I fully expect advocates for capricious enforcement (ie protecting their friends) will further justify it by saying something about how the fines and settlements were “good business decisions” or something, because self justification knows no bottom.


I get the impression that your argument seems to be strongly motivated by your own moral values.

I'm not saying that rampant rule breaking should be encouraged, but I don't see the slippery slope that you do.

I've always found zero-tolerance policies to be rather unfair.


If I went to my manager telling them I was hacking a competitor's infrastructure and accessing competitor PII & proprietary IP, I'd fully expect to be fired on the spot. And they're the most laissez faire manager I know.


Depends on how you raise it, right? “I was trying to research how X built their app, to see if there was anything we could or should do in there, and I think I found a huge security vulnerability.”


> to see if there was anything we could or should do in there

That right there is where the company lawyers get called, and all of your contributions to the company code base start being reviewed and ways to remove the most recent ones start being contemplated. It is extremely easy to run afoul of copyright if your employees go around reading your competitor's source code. You may well have a dedicated team that does this, but you'll be keeping a very clear paper trail proving they never came anywhere close to your code base directly.


This is still problematic. This runs the risk of IP theft claims.

Additionally, when you download and use any app, there’s probably something in the TOS somewhere about not trying to reverse engineer anything. You’re probably breaking that at minimum.


Well, chances are that I’m not trying to use their app in the first place if I’m working at their competitor, so I’m not sure if that should bother you.


Yeah aside from the hacking there are potential copyright issues if he’s digging too deep in reverse engineering the competitor’s app.


Also when none of it ever leaves their own computer? It seemed to me like they were poking around for curiosity, not to publish any dirty laundry substantial enough to be a reverse engineering or copyright issue.


Yes, also then. If company A's app and company B's app happen to have some code in common, but one of company A's employees is found to have reverse engineered company B's app, then company B has a very nice case for suing company A for copyright violations.

Now, if you do it and no one finds out, that's one thing. But telling your manager (worst case, in an email!) becomes a potential liability, as the company has to keep such records and provide them to the other party in case of a suit.


You're probably correct from a liability perspective, but from a hiring perspective in that space, you probably want the kind of personality that is driven to do such things working for you.


If by "such things" you mean trying to see how your competitor implements their product, no, that is certainly not the kind of personality you want to work for you (assuming you are not some unethical hack like Uber, of course).


And what makes you think you have or should have any control over what your employees do in their free time?


Imagine not wanting curious people on your staff.


Curiosity is very important, but it is not the most important quality a person can have in all contexts. Wisdom comes to mind as one that is more important sometimes.


Wisdom is knowing stuff. Politics is something else.


Unfortunately, you gotta have the wisdom to recognize the politics and thereby protect yourself. To know when it's just not worth it. Sad but true


Knowledge is knowing stuff; wisdom is knowing when and when not to apply that knowledge.


Do you want staff to be curious enough to... Let's see... Run queries on your customer database to find out their friends' account balances?


Why did you immediately jump to a malicious scenario? Is that what you imagine curious people do?


No. My point is that curiosity is valuable, but so is integrity. The scenario I presented is around the same level of malicious as pen testing a competitor's new credit card API. Specifically, both are potentially criminal acts of illegal computer access (by current laws in the US), and both have negligible impact on their victims.


You can be curious without being reckless.


Agreed, use a trusted VPN.


I'm sure you meant it in jest, but yeah, if you are going to do something illegal at least try to cover your tracks. Naivety is not a virtue.


It was only half in jest.


The kinds of assignments involved with GPZ also shy away from server-side applications run by other companies.

Maybe doing approved bug-bounty research in a competitor's webapp is a safer proposition.


> I definitely wouldn't want my employees going around and hacking competitors in their free time.

Where do you work? Just so I know where not to apply.


This is both a conflict of interest and could bring the employer into disrepute. Many employment contracts rightfully forbid such actions, free time or not.


Please name a company where you would expect such behavior to be encouraged. Genuinely curious.


"Encouraged" is different from prohibited / it being a firing offense.

I just checked with my employer. To make sure I represented the situation accurately, here is what I wrote (I applied the situation to our line of business, IT security, and replaced credit cards with another type of authorization token to make it realistic for us):

> if <name of actual competitor> releases a hosted password manager, where you store all your secrets on their servers, would <our company name> condone me looking at the internals of that application in private time, or does <our name> rather think that's a liability risk?

The answer started with, and I quote: "not a problem". My employer did request advance warning (they expected me to actually have such plans it seems; I clarified afterwards this was completely hypothetical for a HN comment) such that, in case it's controversial, they can warn the competitor ahead of time and say it's just my private opinion, but that's just a request and sounds reasonable to me.


Uber? They care little for other laws, so I would be surprised if they cared for this one ethical standard.


Hey now, they said they wouldn't want their employees doing it, not that they'd forbid employees doing what they will with their free time.


Lots of people suggesting that either company was out of line here, but like, CFAA is still a thing (assuming OP is in the USA) and it's still got gnarly teeth. Let alone the possibility of industrial espionage allegations...

If you're going to go hack on a company, make sure you have some legal protection first. Check disclose.io or the company's website (look for a security.txt!) to make sure there's some sort of safe harbor provision, or a pre-existing vulnerability disclosure program or bug bounty program that allows you to do this kind of testing.
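
For reference, a security.txt is just a plain-text file served from /.well-known/security.txt on the company's site. A minimal, made-up example (every value is a placeholder):

    Contact: mailto:security@example.com
    Expires: 2023-01-01T00:00:00Z
    Policy: https://example.com/security-policy
    Preferred-Languages: en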

If you're not going to do that, then disclose the vulnerability anonymously and cover your ass while you're testing, or just don't.

Meanwhile if you're an American please write your local representative and express your displeasure with the antiquated, overly-simplistic CFAA and ask them to support initiatives to have it replaced or removed.


CFAA isn’t going away.

> If you're not going to do that, then disclose the vulnerability anonymously and cover your ass while you're testing, or just don't.

No. Just don’t. You know that video about not talking to the police, because they interrogate people all day long and you’re an amateur in a pro fight? Same thing with infosec. We attribute IOCs to noobs all day long.

You don’t need a criminal record. It’ll ruin many parts of your life. I have friends who can confirm that the record they got in their late teens or early 20s closed many doors. Join a formal bug bounty platform and find legitimate work there.


> CFAA isn't going away

There are some pretty concerted efforts in play to at least have it updated and tempered. I don't hold much hope it'll go away, but I do think some of these efforts to have it replaced could have legs.

> No. Just don’t.

Yeah, fair, I mean I'm all too aware of the consequences myself, but within this setting telling a bunch of people "thou shalt not" seems almost more harmful (IMO it's akin to saying "never roll your own crypto" which someone inevitably ends up taking as a challenge)


Until we fix the laws, I'd suggest just letting the world burn until voters and lawmakers get tired of half the country's personal data being stolen once a month and make a safer landscape for hackers to report vulnerabilities.


I do hope those efforts succeed. I think the parent meant to state "hasn't gone away," but even if they didn't, the point remains if you replace that.

I hate the CFAA, to be clear; it's just definitely still the law.


Industrial espionage is not involved here. This is just reverse engineering that escalated into something that might be misconduct.

Espionage would include things like illegally surveilling the competitor's networks, bribing their employees for information and credentials, using malware to create backdoors, social engineering, blackmail, poaching their talent and incentivizing unethical disclosure of trade secrets, and cracking systems that explicitly bar access.

Reverse engineering their product through public IPs is legally acceptable up to CFAA boundaries, which are fuzzy, and it's not clear what kind of exploits were involved in this situation. They may have been relatively benign reverse engineering, or they may have been something associated with civil and criminal penalties.


From exactly where do you draw the conclusion that "reverse engineering a product through public IPs is legally acceptable up to CFAA boundaries"? What are those "CFAA boundaries"? There is no exception to the CFAA for "reverse engineering"; there is only exceeding your authorization, or not.

There is a lot of authoritative writing about the legality of reverse engineering (long story short: reverse engineering is mostly fine, legally) --- but that writing covers reverse engineering stuff running on your own computer. It categorically does not extend to reverse engineering software running on other people's computers without their permission. You'd easily get into a bunch of trouble assuming otherwise.

A lot of terrifying stuff on this thread! It's good this person already has a lawyer.


I agree with you that reverse engineering does not extend to anything one pleases on the internet.

I also don't see game-modders or game cheaters regularly going to prison even though gaming is an enormous industry.

So clearly there is some tolerance, though ubiquitous connectivity blurs the line a bit. An app I reverse engineer on my device may, as a side effect, make some communications with a third-party asset, though primarily it is all my stuff. The same applies to cars and other items, surely.

That being said, financial account creation is definitely NOT the place to take risks. Same with government systems. Pretty quickly many other laws and regulations in the book come into play. They can be very broad too.


The bright line here is between code running on machines you own, and code running on machines you don't own. It's not complicated.


You personally reverse engineering an app on your phone has been quite well established as legal.

You releasing a competing product after having personally worked on reverse engineering someone's product is a lot murkier, and easily opens you up to copyright lawsuits, which you'll have a hard time fighting if you do happen to have similar code, since in copyright it matters not just if the code was similar, but also whether it's likely that you actually copied it (unlike patent law).

This can and has been done, but normally you want a very clear firewall between the reverse engineering team and the dev team, with lots of paperwork proving that no-one on the dev team ever saw a line of code from the reverse engineering team - they were only told concepts and ideas, which are not copyrightable. This is how the first free Unix was created, for example.


The perception of the possibility of the perception of industrial espionage is usually enough to get a lawyer choked up in cases like this - I wasn't saying there WAS industrial espionage, just that there might have been the possibility of painful allegations thereof...


I hope you are never a victim reading some asshole talk down to you about why it's your fault.


Many moons ago we were scanning ourselves to test changes to our infrastructure, and when testing externally we scanned a /24 instead of the /29 range we had. We found two wide-open SQL servers, listening on public addresses with blank sa passwords (the default on install at the time, oh those innocent days...). Out of curiosity we had a quick look and found one of them contained credit card details. We backed out very quickly... Not wanting to leave such a hole open, but also not wanting to risk accusations of hacking, we worked out who the companies were by other clues and forged emails warning them about the issue (sending them via a wide-open SMTP server, again innocent times!). We never checked to see if anything changed.
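
For anyone wondering how you scan a /24 "instead of" a /29: the difference is just the prefix length, and it's an easy slip to make. A quick illustrative sketch with Python's ipaddress module (addresses are from the reserved documentation range, not anyone's real hosts):

    import ipaddress

    # The block the team actually owned: 8 addresses
    ours = ipaddress.ip_network("198.51.100.8/29")
    # The block accidentally scanned: 256 addresses
    scanned = ipaddress.ip_network("198.51.100.0/24")

    print(ours.num_addresses)     # 8
    print(scanned.num_addresses)  # 256
    # Every host we own sits inside the scanned range, along with 248 strangers
    print(all(host in scanned for host in ours.hosts()))  # True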


It doesn't sound all that unlike doing IT work for your company on your boss's computer, accidentally clicking on a folder, and whoops, you just discovered a lot of photos of a particular sub-class of people. Do you not say anything, and potentially leave yourself liable, or do you say something, and risk the fallout from that?


Shooting the messenger seems to be a common practice when it comes to security issues. We only hear about those situations where someone has gotten nailed because they reported an issue, but the fact that it happens at all is a problem.

I think part of the problem is that fixing a security issue is expensive and usually involves company management that has very little experience with code development, so they think the person who reported the issue is a black hat hacker. It also makes life difficult for everyone involved.

A solution is to have a neutral in-between organization that can better understand how to deal with the problem without having to blame the person that reported it.


That neutral independent organization will know even less about the code base than management and will likely just add more dead weight to the problem.


I have no idea why you'd expect to have the right to probe around APIs like this and not be accused of malicious hacking.

You don't have authorization, you're working for a competitor, and that business decided to throw you to the wolves because it's easy for everyone involved to forget you ever said anything since you didn't leave any paper trail.

If I find you sneaking around in my backyard having managed to open a ground floor window I'm going to probably lose my shit and call the cops, and not thank you for discovering a security flaw.

You need to get permission for this kind of shit, and not go around jiggling other people's locks.

And the level of effort here seems fairly high and a lot more than just "view source".

The author probably deserves to lose the ability to be employed at a bank because this was some pretty bad judgment.


Lol, the audacity to say any of this. The only mistake this dude made was trying to be nice to either of the two shit companies he worked for. A fintech created a shitty, shitty app and released it to the public, and you accuse this guy of “jiggling” the lock? He should have totally released this info on 4chan and let the wolves do their job.


Unpacking a competitor app and poking around private APIs to reverse engineer implementation is a liability landmine. Ask your company’s lawyer.

Finding and disclosing a security flaw isn’t even needed for this to be a problem. You’re immediately a liability to your company. You’re breaking TOS or whatever agreement the competitor app install or usage has. You’re opening up your employer to an espionage or IP theft lawsuit.

There are many easy ways to do compete analysis that don’t involve these liabilities.


> You don't have authorization

Yes, you do. Authorization and Authentication are part of the API. If you want to keep a server private, don't give it a public IP and DNS!


If you don't lock your doors, it doesn't give anyone else the right to come into your house.

A business can be sitting unoccupied with its doors flung wide open, and it is still trespassing if you enter without authorization.

"You were too dumb to stop me, so it's my right to rifle through your shit" is a nerd law that doesn't exist in the real world.

Don't do this kind of jiggling of door handles without authorization, if you don't want to wind up like OP.

Ask a real lawyer for advice.

They'll almost certainly tell you not to do it.


Exactly! When you throw your data on the street, don't assume that no one will read it.


This is less like sneaking around your back yard, and more like going in the front door of a business and looking around, and then finding a bunch of credit card numbers sitting around on the desk behind the counter.


Based on the text of the post, it seems like OP stopped "probing the API" as soon as they had enough information to know that there was a security hole. So is your proposal that any kind of probing of APIs in general should be considered malicious? Like if someone has an image conversion API and I submit a gif, or a 1x1 image to see what happens, that makes me a malicious hacker?

That position seems wrong to me, and the strained physical-world analogies you give don't justify it. Often, testing something directly is the only way to find out how it actually works. Which is necessary in plenty of non-hacking situations.
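
To make that concrete, the kind of benign probing I mean is usually as mundane as the sketch below; the endpoint, parameters, and service are entirely hypothetical:

    import urllib.request

    # A valid 43-byte transparent 1x1 GIF
    ONE_PX_GIF = (
        b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
        b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
        b"\x00\x02\x02D\x01\x00;"
    )

    # Hypothetical image-conversion endpoint, not a real service
    req = urllib.request.Request(
        "https://api.example.com/v1/convert?to=png",
        data=ONE_PX_GIF,
        headers={"Content-Type": "image/gif"},
        method="POST",
    )
    # Whether this comes back as a 1x1 PNG, a 400, or a stack trace tells
    # you how the API actually handles degenerate input.
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read()[:64])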


Maybe you don't understand what happened here. It's not your lock he's jiggling. He just found out that the bank which claims to secure your money with a good lock in a safe room actually stores it in a big pile in the yard, without a fence.


Why would you bring this to your manager instead of trying to find a way to report it directly to the competitor, anonymously or not? If a bug bounty seems like a conflict of interest, refuse it. From the perspective of the competitor, this was an unreported security vulnerability. Of course they were going to find out; if I logged in one morning and the DB table that's supposed to be nearly empty suddenly wasn't nearly empty anymore, that's alarming!

So much here is just an absolute wild failure of judgement.


Seeing that we as a society haven't fixed this problem yet tells me that we deserve the countless security breaches we get. I will continue to let any vulnerability I discover go unreported until the government grants immunity to those who report them.

I want to help others, but not at the risk of destroying my life.


Please report them; they're helpful. But there's a huge difference between finding a vulnerability, stopping, and reporting it, and finding one and continuing to dig.

If you report it immediately, often you'll even get asked to dig further; but doing it on your own without authorization is the problem.


People got into trouble for reporting issues without any further digging. I'm not stupid enough to take the risk. Want good samaritans? Fix your laws first.


> … and have a lawyer working on all this drama.

> But I mostly kept this story to myself and felt it was time to share, even if anonymously.

I’m sure your lawyer would beg to differ that it is “time to share” if you still have a criminal and/or civil case pending.

I’m not sure why you’re publicizing what you have been accused of and providing the level of detail you have provided.

And is a Wordpress-hosted blog truly anonymous?

Anyway, I guess you are here for questions, so here is one: Do you feel like you made an ethical mistake attempting to find security holes in the competitor, or do you think your former employer overreacted and was wrong to fire you?


> And is a Wordpress-hosted blog truly anonymous?

Would not be surprised if someone figured out how to register a free Wordpress account such that any disclosed information is a dead-end trail for any corporate investigators. It takes dedicated discipline and tradecraft.

If I were in OP's shoes, I certainly would not be discussing anything related to my case until its conclusion, or at least not without all publications being reviewed by lawyers.

Then again, I have not been in this situation, but I have seen how things can go south when certain advice is ignored.


I mean, it’s par for the course with someone who thinks it’s just a no-big-deal “personal interest” to MITM a bank’s mobile app in order to start poking around in their production servers, without prior consent from the bank.


If this description gets submitted as evidence, the case will be dismissed. The way it is described, this is 100% legit pen testing with public data and endpoints and no malicious use of the vulnerabilities found. If that is actually what happened, the case will get thrown out.

Is it ethically wrong to check a public product to see if it follows basic security protocols? Especially if you’re an expert having created those security controls previously? Obviously not.

Is it ethically wrong to ask your employer before submitting findings to a competitor? Maybe, since you have an obligation to disclose regardless of what your employer says, but a judge isn’t going to be mad about following the chain of command. It probably helped the bug report get p0 attention from the security team.

The fact of the matter is if the vulnerabilities were real and it sounds like they were, it’s fantastic news that the vulnerable company was able to fix them before the errors were exploited widely.


> The way it is described is 100% legit pen testing with public data and endpoints and no malicious use of vulnerabilities found.

"legit pen testing" requires consent from the owners of the system being tested.


I’m sure the lawyers could figure out how to argue that the MITM attack this person did is some kind of illegal identity fraud.


Unsure why this is being shared with potential criminal and civil charges being involved.

Also, if true, you likely have enough evidence for a stellar lawsuit against the company once all is said and done. You may also have enough evidence for a second lawsuit against your employer depending on exact circumstances. Hopefully your lawyer is knowledgeable enough to navigate these issues.

In the future, don't involve your employer with security disclosures. Ensure you document everything and email or write to the company in question to give them time to fix their issues. What they did is wrong; however, you should have reported it to them in a reasonable amount of time. Not that you are required to, but rather to cover your own butt and prevent this exact type of scenario.


> Also, if true, you likely have enough evidence for a stellar lawsuit against the company once all is said and done.

To what end? If your adversary has attorneys on staff it will be a bigger gamble for you to bring a suit against them.


Employment law attorneys work on contingency, so no expense to you. You start on the wrongful termination suit, get evidence, and then go after the other company based on what you discover from your previous employer.


I guess it depends on where OP is located, but most US states have at-will employment, and being a hacker is not a protected class, so I don’t see how this could be an illegal firing.


shrug that's why one would consult with a contingency-based employment attorney.


Original author seems to have had a change of heart, or valuable legal advice. The blog post has been removed and the Twitter profile locked.

Archives of the first, at least, still exist.


I'm not able to load the page.

"AccidHacker" "Coming Soon"

Is this happening for anyone else? Perhaps it has been deleted?



Likely to be deleted I think. If I was their lawyer I'd insist on taking it down immediately.


Yep, same here...



I think it's been hugged. No luck for me.


That times out as well, never connects...


Who are you using for DNS? Archive doesn't like Cloudflare, and their name servers will often return an invalid reply or an address on the other side of the world.


I think it's great that you shared your story with the world. We can learn something from this. I guess it is that security researchers can easily be accused of all sorts of things.

It is true that exposing a vulnerability in somebody's product can make them mad, since it can harm them. Especially with banks; they are not the most ethical entities out there. Good luck. I wonder if more people would like to offer their advice on what you should have done instead.


It's been my experience as well that exposing vulnerabilities, especially financial ones, makes enemies, not friends.

So basically we learn to do the minimum possible, to not do the right thing so as not to get fired, and to ignore gaping holes, leaving them for someone else to risk their neck on, all so we can pay bills for 50 years and then hopefully retire.

What a life.


Op should really delete this blog post. It has a lot of details that could be used against them in court.


It's already on HN now and the Wayback Machine is a thing. Once it's on the internet, it's pretty much there forever.


you mean destroy evidence to hinder the prosecution?


On one hand we have this poor guy, and on the other hand we have Aubrey Cottle, the guy who leaked personal details of about 100,000 individuals and openly bragged about it on TikTok. [1]

Anarcho-tyranny is truly the mot juste.

[1]: https://thegrayzone.com/2022/02/18/hacking-canadian-trucker-...


This guy has a history of claiming to have performed every hack under the sun for the last fifteen years. He's probably not being prosecuted because this info is very public: he's a well-documented bullshitter.

I saw it happen personally with a bunch of GNAA related hacks. He had befriended the people who actually did them, and within a year or two of finding out more details, was using them to claim he had either masterminded and planned them all or performed them all, alternating between the two based on whatever was 'cooler' in the situation.


Banks, credit card issuers, and many brokerages suck from a tech stack perspective.

During the y2k times I did a lot of contract work porting old COBOL code to be y2k compliant. The number of seriously spooky security things was mind boggling.

Having mocked APIs like you found is not a surprise. The fact they even bothered to use APIs was a step in the positive direction, versus telnet and ssh tunnels piping data around with hard-coded IPs and accounts.


It sucks, but I can totally see the employer's and competitor's viewpoints. OP not only cracked open the app but made requests to their server. OP is not involved with the infosec at their employer and has no business cracking open their competitor's app. The blowback from this could have been huge for the employer.


OP - the people involved only want to cover their arses, and they're all using you to do it. This is unjust, but unfortunately it's also normal. Time to stop talking and focus 100% on avoiding conviction by working carefully with your lawyer.


I'd ask a lawyer about suing the accusers for libel.


That’s a good way to narrow your legal representation to crackpots and scam artists.


Any decent lawyer would look for an opportunity to turn this around on them.

He *may* have broken the law, but if he didn't commit financial crimes with the information he obtained, was falsely accused of them, lost his job over it, and was prosecuted based on the false claims, then he definitely has damages, some of which would fall under libel. This could go not so great for "Whistlr" if the information found in the process shows that they maligned him.


FinTech really embraces the “move fast and break shit” mantra.

I’ve seen so many blatant violations in my short stint it would make your head spin.

Happy to never go back…


Wow, the author is very naive. Did you really think anything good would come out of telling this to your employer? You do not owe them anything you do in your spare time, unless there's some contractual obligation. Companies are just looking after themselves, and will quickly throw you under the bus. I am very paranoid about that.

I used to explore APIs and do this sort of digging for fun, even though I was employed. My approach was the following: if the company had a decent bug bounty program, I'd be happy to report it there; it's the safest way for anyone to do that. If there was not, I'd try to sell it to an "alternate buyer", which is less preferred. I never found many things worth reporting though.


If you are not making money from security you shouldn’t mess around with it at all. Too much downside and too little upside. Let the criminals ransomware the hospitals until society grows up and learns that they need security research.


I can't speak for whether the OP is in violation of any laws or not. I am, however, very curious about the identity of this competitor bank. Yeah, I get why they might be suspicious of OP given his initial access, but my god, that company sounds like it's being run by a bunch of incompetent hotheads. They seem to have absolutely no idea how to investigate a security breach, let alone run a finance company.

Note to OP: if things don't go your way with the lawsuit, do society a favor and let the world know who these guys are so we can all steer clear.


Shooting the messenger is cheaper and maintains the appearance of security which is apparently much more important. The emperor's clothes are particularly fine today, are they not?


They are indeed the finest garments I've had the pleasure of witnessing


Directly informing the affected bank and then informing your then-employer would've been a better solution. I think it has the highest chance of the least negative repercussions. I assume it would still go a bit wild, but have the least negative impact. Obviously I could be wrong.

You need to keep in mind that security disclosures and everything around them (legal, why disclose? etc.) are a black box for most folks.

Also, OP said that after reporting it, OP was fired "months later". This means that in those months anyone could've found what the OP found and abused it. And the affected bank might not have been involved until later? And all those crimes are now on your shoulders. Your then-employer, the bank, would get delayed by the politics in your company (even with good intentions). That happens even for their own security disclosure handling, let alone for a competitor's, because so much legal stuff is involved now.

From the affected bank's POV, you are just a bank employee who did some fraud. Mainly because you didn't inform the affected bank and it's been months. Insider hacks from the same industry are not unheard of.

Like I said, this would've been a different scenario if you had informed them ASAP. And I am sure that when issues hit the roof, your employer would wash their hands of it. It is the easiest thing to do. Unless you are very well connected in the office, even your employer isn't sure about you. They will be asking questions like: why is this person doing this anyway? He is not in security, so why was the OP poking around it? Etc.

Hope this gets sorted out. Interesting case indeed.



> Directly informing the effected bank and then informing your then employer would've been a better solution?

We've seen many cases where if you report a security vulnerability to a company, they blame you as a hacker instead of thanking you.


> I assume it would still go a bit wild, but have the least negative impact.

Which is why I added this. It will not be a smooth experience. But the OP will still be in a better position than the current one IMO.

You tell me who looks MORE suspicious.

- A guy who found a vulnerability and reported it very soon, so the affected bank knows about it soon.

Or

- When the affected bank finds out after A COUPLE OF MONTHS (or even a few weeks) that their competitor's employee found a vulnerability and they still haven't heard about it from him?

Keep in mind the added issue that the employee is from a competitor, and that they might have already seen abuse of the vulnerability, which might not have been done by the OP. But since it is delayed, all of it lands on the OP's shoulders.


No good deed goes unpunished.

I learnt this lesson in high school when I disclosed serious security vulnerabilities in the school's IT systems and the statewide VPN and firewall/censorship solution. I was effectively expelled (technically, "asked to leave").

Unless someone is paying you, just don't bother. Even if they are paying you, stick to the systems in the brief.


The moral of the story is that you should use Tor / a proxy and remain anonymous if you are planning on hacking someone else.


Useful reminder for the armchair lawyering: the only person whose opinion of the law carries any weight is a judge. Lawyers make arguments to sway that opinion.

The stating of opinion as fact here is incredible. The law is the law because that's what the law says it is. This was a stupid thing to do. There's just no arguing that.


I really hope for this person's sake that they kept a log of communication with their manager and higher-ups when initially disclosing the vulnerability. After losing access to company systems (after getting fired) I doubt HR is going to be very helpful in fighting their case.


Surprised to see so much “that’s what happens when you poke around” in this thread after we just had the jvns guide to dumping undocumented APIs on the front page this week without any “breach of CFAA” comments.


404'd. That was quick. Seems like he took advice of some in this thread.


If there is now a pending card fraud case that you are involved in, how did you manage to get another job? Like you said, this kind of thing is a career killer. Does your current employer know about it?



... as it's now offline.

The linked twitter feed at the bottom of the page is also protected/inaccessible.


You were very likely not the only one who found this security issue. But you were probably the only one who disclosed this issue to the bank.

This is ironic, but it is how business people deal with issues.


> Couldn’t help myself but do one more request changing the ID. Brought another card number and name.

weev. If you don't know his story, you should make no such web requests. Or at least you should know what Tor is :)

Interesting possibility here - say somebody intentionally left that as a backdoor for a little side hustle of card fraud for themselves and associates or just to sell it and masqueraded it as just sheer security incompetence, and the author just happened to be a convenient patsy of opportunity.


It seems like that a good alternative would be to anonymously contact the other company via email and just leave it at that.

I wouldn’t involve my employer ever.

I’m confused by his employer’s response, he told them everything, they knew, then the other company said something else happened?

So they took the other company’s word and fired him?

I’m not surprised that could happen, but I'm a little surprised they would just believe the other company, who they already know doesn't know what they're doing.

The story ends abruptly and... seems to be missing things.


The value of the author's continued employment was less than the cost of fighting accusations of corporate espionage/sabotage from a direct competitor.


The other company thought there was corporate espionage… and they think the company spying on them volunteered the information?

And then they wanted just one rando fired?

Not sure that makes sense.


If a company has an endpoint that is not secured by a password, then it's public.

You ask the server permission to access the data, just like you knock on someone's door to come inside.

If they open it up, or return you data then you can only assume they are allowing you access.

When you start guessing passwords or reusing tokens or trying to access something with a different token, that's a bit different.

Lock your door if you want to keep the things inside safe.


TFA seems to be gone now, but here's a mirror: https://web.archive.org/web/20220317003600/https://accidhack...


And it's gone, 404.


That's the reason why you always anonymize your accesses, don't tell anyone about your findings who has nothing to do with them, and sell the stuff in a bug bounty or on the darknet. And if there is no one who will give you money for it, do the most damage you can to make clear that it's a financial risk to deploy insecure software.


I would really like to know what bank this is.


The bank that fired them? Or the not-bank that is presumably taking legal action? I can only speculate that Whistlr means a hip company we all know about that has been doing all sorts of things in the paying-for-things space. This hip company has rather recently started doing card issuance.


This is the greatest fear that legal at my company has. Especially when we -share- source with a number of competitors.


At one of my jobs, we integrate with a lot of APIs. There are background check companies out there that just have a single API for all user data with no restrictions. If you want to check a person's past employment, you get their social security number.

Shit like this is why we keep such outrageous behavior quiet.


Leak anonymously and let nature take its course. Never put your fate in the hands of beancounters.


And of course he doesn’t name and shame them so they have no penalty for this despicable behavior.


Surely this will have a happy ending, right? He specifically didn’t name and shame but he certainly could and that sort of dirt could lose a financial company some serious customers. I imagine they’ll pay him for his silence.


This will end in criminal charges.

Mark my words.

If it gets to trial the judge won't understand anything and all he'll hear is hacking. Same for jury if one is involved.

Most likely a plea with probation and now he has a criminal record (possibly felony).


I think you could be right.

This isn’t some “right click view source” nonsense.

OP put a decent amount of effort into compromising the bank’s system. A judge and jury will not understand this, even if they are computer-literate power users, and even if they work in IT in a non-security role.

I also assume that OP is telling their side of the story in the most favorable way possible, and could be omitting some details.


Not likely. The prosecutor has a giant stack of these types of cases and takes the ones that are slam-dunk frauds or where there is some political capital to be gained. This doesn't sound like one of those.


I'm willing to bet $100 that it will not end up with criminal charges. We'd need to set some sort of time-limit, if you're prepared to take the bet. How about six months?


No, he is just still hopelessly naive and thinks he can win without shaming both enterprises for the shit they've done.


Or simply offer to make the lawsuit go away for his silence.


The story reminds all of us that anything related to banking should be taken very seriously, and unless well prepared, a technical person should stay out of the business described in the story.

Let the dinosaurs have their own ways, good or bad.


The technical person should just anonymize his accesses and do the most damage he can. Let financial loss find its way into the brains of the dinosaurs.


This guy is not very bright.


If you had any written documentation when reporting (at your ex-employer), it might have played out differently, I feel (they would have been on the hook for any legal notice).


Remember, crime is just surprise pentesting with an ad hoc payment model.

But also remember, surprise pentesting with an ad hoc payment model is crime.


Which jurisdiction is this in? I'm fairly sure this wouldn't fly here; afaik you can't be fired if you are doing your job properly, you didn't do something like steal from your employer, and the company isn't in so much trouble that it can't afford you anymore. This is just hearsay, plus they already knew the claim was nonsense because you disclosed the issue up front.

I'm not even sure which company I fault more here, OP's or the one with the security issue.


Not sure where "here" is for you, but in the United States, you can be fired for having a bad haircut. Conducting unauthorized security testing on a company competitor will absolutely get you fired.


> Not sure where "here" is for you, but in the United States

...which is exactly why I was asking which jurisdiction OP is in. Kind of annoying when that information is omitted. I work in Germany but am from the Netherlands; both cultures and the resulting laws are quite similar anyway.

> Conducting unauthorized security testing on a company competitor will absolutely get you fired.

In the US, maybe. If all US companies are like OP's. For a data point from Europe, I actually asked my employer because it seems people here (mostly from the US afaik) thought I had too much confidence in my employer in other comments. I commented here with what I asked and their reply: https://news.ycombinator.com/item?id=30710078

It's also a bit different to look into a product meant for the general public (that's how I understood OP's situation to be) than to actively set out to hack a competitor's infrastructure and mess with them (not what OP was doing, from my understanding).


I'm pretty sure running destructive pen tests on a competitor using work resources without any prior authorization from either company counts as doing your job improperly.


> using work resources

Did OP do that? I thought he said he did it in private time, so I assumed using private resources (the post is deleted now). Using work resources does change the situation, though if it's just about client-side systems and not IP space then it makes very little difference in practice. Especially if your employer lets you use hardware for private purposes (this is apparently very common for company phones and laptops in NL/BE/DE; personally I like to have clear boundaries there...), but we don't know if that was the case here iirc.

> destructive pen tests

That's not really what happened though, if I remember the post correctly. Running a GET without any authorization header whatsoever on a beta application and getting back production data with secret payment tokens in it... you can't make that stuff up.


Clearly life hasn't moved on. The other shoe can drop at any time.

"and have a lawyer working on all this drama."


Yeah, and it's not like lawyers do this as a charity. This is already a financial burden.


Very neat idiom. Is it really used and understood by people in real life?


I've heard it a few times and understood it.


I was reading the article, finished, left, and went back thinking let me check something... it's gone.


Was this post taken down? I get a generic “coming soon” page when I click through it.


tl;dr: Programmer doing competitive analysis on a competitor's new financial product discovers that it has poor security. Unsure of how to correctly report vulnerabilities to the competitor, he raises the question to management inside his own company. His company's management, presumably not being infosec people, gets confused and the vulnerability report is lost in the corporate shuffle. Later, someone else rediscovers the same vulnerability and uses it to steal. The competitor that wrote the vulnerable app also botches the investigation, and blames the programmer, since his account appears in weird log-file entries from back when he was investigating.

I'd say the main mistake here was, after finding the vulnerability, handing off the reporting responsibility to people who couldn't handle that responsibility rather than handling it personally.


Who wants to bet this testimony will be exhibit B?


What’s the company with the flaw that is suing?


is it Stripe?


Mods: please kill this post, for the sake of the poor person going to court soon.


As the saying goes, no good deed goes unpunished.


...and that is why you never try to help businessmen. EVER. You just laugh and move on. At best you tell people privately not to do business with them, but you NEVER stick your neck out. There is absolutely no way this wasn't going to end in a lawsuit. Don't be this guy. Seriously. No matter how much of a duty you think you have to report security matters you find this episode vividly illustrates the fact that no good deed goes unpunished.

And people wonder why even in the future nothing works.


This is why you leak anonymously and let the wolves take care of it as nature intended.


I'm not sure an unprepared one can leak anonymously and safely.


Eh, quick DM to Wall Street Journal or Financial Times and if it's a decently large bank the PR fallout will quickly prevent any potential retaliation.


Probably not. Bugs like these happen all the time at big financial firms.


just post your findings from an internet cafe on 4chan and watch the competitor burn


Serious question: are internet cafes still a thing?

Admittedly this is from a small sample of Seattle-area coffee shops, but most of my experiences with non-home, non-work WiFi are at coffee shops that provide the WiFi but expect you to bring your own device (laptop, phone, etc.).

Is it still common to go someplace and rent time on a computer provided by the owner (i.e., at an internet cafe?)


I don't think places you can rent machines are common (except libraries as noted but these may require ID) but the steps "buy machine with cash, take machine to anonymous cafe outside your area and get wifi password, upload info" still seem reasonably secure. You can throw the machine off a bridge later if you're paranoid - and work hard at sanitizing your info while you're at it.


Can you just spoof your MAC address instead of buying a new machine?


possibly but will you bet jail time on it?


>Serious question: are internet cafes still a thing?

I live in a capital city and am aware of at least three of these still kicking around.


You can do it for free at a library.


>are internet cafes still a thing?

I don't know about internet cafes specifically, but pretty much all cafes have internet these days.


Here in Germany there are still a lot of them around. Though any random WiFi hotspot would probably do the trick.


Public libraries are much better for this and easier to find.


Issue is, he will show up in the logs, having traversed accounts not his.


If you're going spelunking you'll take precautions to mitigate the risk of it being traced back to you.


I think a lot of hackers start with just seeing what’s going on without any intent to dig deeper. But then something too good to pass up gets them high, and with weakened inhibitions, they make bad decisions that require responsible disclosure (in lieu of saying and doing nothing and possibly getting into more trouble). Of course a responsible disclosure is always prudent, and these companies certainly did not deserve one, but the hacker here got too excited. They didn’t cover their tracks first, necessitating a responsible disclosure.


Yeah but that wasn't the situation. Twas a bit late for that by the point OP discovered there existed a security issue in the first place and they had even already "exploited" it.


It's not a leak to publish your own original research.


... or report the vulnerability anonymously?


From what little I know of the world, this kind of litigiousness is more peculiar to the US than the rest of the world (or maybe at least the ones who usually speak English?).


It's not.

Germany is way worse.


Yeah seriously, people talk about the UK's defamation law being bad, but in Germany you can get in trouble simply for insulting someone or making a rude gesture (Beleidigung).


In Korea, defamation is a criminal charge and you may be sentenced to 5 years in jail if guilty. As a lovely cherry on top, truth is not a defense, although it does lower the maximum sentence to 2 years.

https://seoullawgroup.com/defamation-laws-in-korea/


Yes, in the EU he would simply be serving time, with little to no lawyer involvement from the companies - the government would have taken care of the whole thing.


The Dutch public prosecution doesn't prosecute when responsible disclosure guidelines were followed. For the computer fraud part obviously; card fraud would be investigated (and there's no helping that this person will be a suspect, but without evidence or motive that should not be a big deal). That's not "this person would simply be serving time". Quite the opposite.

Can't speak for other European countries, I'm not familiar with their practices, but I haven't heard in recent years that anyone got prosecuted for responsible disclosure. A few months back a company started a civil case: that was unusual enough that it made the news (and it was dropped). About a year or so ago, someone was convicted while claiming to be a white hat, but they had emailed the company something like "pay me and I'll tell you what the problem is. Wouldn't want reputation damage now would we?" Again, unusual enough that it makes the news, and in neither case was a legit white hat "serving time".


This is beyond delusional. Absolutely no evidence or case studies to support this.


I feel that in so many cases as a developer, the best thing for me to do is to let things fail.

Scrum incentivizes waiting for bug reports from production.

Low tenures for employment mean there is no reason to care much about maintainability or security.

The pay difference between excellence and mediocrity is 2%.

What a world we live in.


Which one gets 2% more?


In theory the excellent one. In practice, the one that negotiates better.


> There is absolutely no way this wasn't going to end in a lawsuit.

That’s what you call an inaccurate cynical view.


Not sure why this is (edit: was) downvoted. I do security for a living, this is exactly how I'd have handled it and how responsible disclosure is supposed to work. With perhaps the substantial difference that I don't work for a bank and my employer would not fire me over such hearsay accusations from a competitor. We find security problems all the time, not always on purpose. I'd disclose it myself if I found it during non-work time, but it doesn't really matter either way. Since OP doesn't do security professionally, involving the CISO sounds like a good call honestly.

Using anonymisation proxies just makes it all the more suspicious. By using a normal connection and disclosing any findings, it's quite clear you're using your real name and have no malintent; nobody needs to try and dig for your identity...


> Using anonymisation proxies just makes it all the more suspicious

If you're referring to the author's use of Charles Proxy, it's *not* an anonymizer or anything of the sort, it's just the macOS equivalent of Fiddler: a localhost HTTP/HTTPS proxy that lets you decrypt TLS traffic (provided the client software isn't using certificate-pinning, which it wasn't).

From TFA:

> Wondering how I could feed them to the app, I set up Charles Proxy and pointed my phone to it. I honestly thought this wouldn’t work – who at this time doesn’t have SSL Pinning?

https://www.charlesproxy.com/

https://www.telerik.com/fiddler
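
Since SSL pinning came up: the whole idea is that the client compares the certificate the server presents against a fingerprint shipped inside the app, which is exactly what defeats a local interception proxy like Charles. A rough sketch of the check in Python (host and pinned value are made up):

    import hashlib
    import ssl

    # Fingerprint baked into the client at build time (made-up value)
    PINNED_SHA256 = "aa" * 32

    def presented_fingerprint(host: str, port: int = 443) -> str:
        # Grab whatever leaf certificate the other end presents...
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        # ...and hash it. A MITM proxy presents its own certificate,
        # so the fingerprint changes and the check fails.
        return hashlib.sha256(der).hexdigest()

    if presented_fingerprint("api.example.com") != PINNED_SHA256:
        raise ConnectionError("certificate fingerprint mismatch, possible interception")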

As a curious user of other applications I've done the same thing the author has described, so this kinda stuff isn't uncommon - except in my case I've never had the opportunity to stumble across anything as bad as what the author found.

As for phone apps' insecure back-end web services: that's about par for the course these days, even for banking and other "secure" services. It's depressing. I have plenty of my own pet theories as to why things seem worse today than they were 10 years ago, but that's another discussion.


> If you're referring to the author's use of Charles Proxy, it's *not* an anonymizer

I am not, I was referring to the suggestions broadly shared elsewhere in the thread of doing this anonymously instead.


Doing unsolicited security research against a competitor of your employer just sounds hugely ill-advised to me regardless of the findings.


I guess being in the security business changes my perspective on that a little. I'd not expect a competitor to specifically try and hack us, that would be quite the middle finger from their side, but if they come across something and disclose it then I'd send them a suitable thank-you gift (maybe some beer, depending on what they like) and a nice card.


> and my employer would not fire me over such hearsay accusations from a competitor

So you think but you might be very wrong.


I might be, I suppose. Without going into details since this account is trivially traceable to my name, this wouldn't be the first thing the Internet tells me to never pull on my employer that went swimmingly. I do have some faith in my boss; it's a small company and security is a small world, and rich enough to not need backstabbing and politics a lot, thankfully. And I've got access to sensitive customer information on a daily basis (unpatched vulnerabilities), some trust is required in this field. If that wasn't there, I'd never have been put on projects or at least not lasted for very long.

Also note that I'm not in the USA and it's not this 2 weeks' notice wild west style employment (from my point of view).


> I do security for a living, this is exactly how I'd have handled it and how responsible disclosure is supposed to work.

Assuming you're in the USA (might be an incorrect assumption) I'd strongly recommend familiarizing yourself with the CFAA and how "improper access to a computer" has been legally interpreted.


I've familiarized myself with the relevant computervredebreuk legalese and read that our public prosecutor doesn't prosecute cases where all the responsible disclosure guidelines were followed. (Of course they'll still check that they were, since not every claimed white hat is truly a white hat.)


These cases of just crossing the threshold are not that interesting to prosecutors unless affecting some company with a lot of political capital. If there is no restitution, no safety implications, no children harmed, and the case is a bit confusing, why take it? Prosecutors prefer to take the slam dunk cases where any normal person can agree it was an intrusive or fraudulent access.


Other jurisdictions are not so sensible.


You have a lot of trust in your employer.


One of the reasons that I like working there.


... or report it anonymously to a higher authority like the CERT for your location.


Your IP is still in the access logs. The advice is good and we do this sometimes when not wanting to spend work time to deal with corporate politics around the reporting (though frankly the back-and-forth with the CERT is often a game of its own), but it wouldn't have helped in this case.


This is why it is a good idea to use Tor or a VPN for most things or even just for when you casually go reverse engineering a competitor's product :)


As I wrote in a comment elsewhere in the thread, I'm not sure if that is really the best idea.

> Using anonymisation proxies just makes it all the more suspicious, by using a normal connection and disclosing any findings it's quite clear you're using your real name, have no malintent, nobody needs to try and dig for your identity...

But yeah it all depends on the circumstances. Given how these companies are handling it, it couldn't have gone much worse I suppose. On the other hand, if the company suspects it was a black hat then you have to be very sure you can hide all tracks completely and, if found out, you have a lot more to explain.


I wouldn't even restrict this to security matters; don't stick your neck out for your employer at all for any reason, unless the risk has already been taken or you're a shareholder. The nature of capitalism and employment is one of control and risk management. Unless you're guaranteed to be rewarded (very well), don't do anything beyond your basic responsibilities. More than that costs extra. You either have control, or you don't, and as a normal employee, you don't. This is how I learnt not to care about doing very high-quality work, because it costs me disproportionately more than it costs my employer, and it usually doesn't matter if they're big enough.



Thanks, poor fellow is too clever for his own good.

> I haven’t made any transaction with those card numbers, told anyone specifics on how to get them, nor took any kind of advantage of this data.

Well, he just did. With this sloppy (forgivable if naïve), curiosity-driven approach, I wouldn't be surprised if the ongoing exploitation was simply the result of not cleaning up the test rig. Did he at least clear this with his lawyer before posting? Given the 404, I'm thinking not. Sheesh...


I'm sure someone clued this person in that this blog just gave his adversaries a huge gift in their lawsuit. Not a smart move at all to publicly announce your own guilt in a case you might want to win.


underrated comment.


Maybe someone committed card fraud, maybe an employee, and is pinning it on OP. Or vice versa.

The write-up is pretty confusing and doesn't read like it was written by an expert.


The two parts where this person (I'm guessing you, OP) went wrong:

    A company that wasn’t any kind of direct competitor announced they’d start issuing credit cards (then turning into direct competitors), and I started to became more and more curious about what they were building as I knew some (pretty good) people working there.

    After reading that they had launched some card stuff on their production app, I downloaded it and looked for card-related assets (it was something really simple, unzipping a .ipa file and looking for images/texts)
Step one here essentially throws away any plausible deniability you have that you weren't trying to do competitive analysis. You became toxic the moment you took it upon yourself to do this without any kind of protection in place.

Setting aside the data access issues, at a bare minimum your employer could no longer guarantee that code you wrote wouldn't just get them sued.

    The mock I used specified a card ID… the app then requested the number for that.

    Something that I, logged in as a user without any credit cards, should be denied access to. But… There it was. And… in plaintext.

    Having worked on some projects regarding credit cards and such I first thought they were running some kind of test environment on production, as this was a pilot that only employees (of a small team of them) should have access to.

    “Maybe it’s even a static file being served on this route”. Couldn’t help myself but do one more request changing the ID. Brought another card number and name.

After seeing the plaintext data in step one is where you could have plausibly stopped and maintained some semblance of deniability. It may have still shaken out the same way but it's an easier sell. Continuing to then browse other card data will get you automatically in trouble.
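
As an aside, the bug class described here is a textbook IDOR (insecure direct object reference): the endpoint trusted the client-supplied card ID. The server-side fix is an ownership check before anything is returned; a minimal sketch, with every name illustrative rather than taken from the article:

    class Forbidden(Exception):
        pass

    def get_card_display(cards: dict, card_id: str, requesting_user: str) -> str:
        card = cards.get(card_id)
        if card is None:
            raise KeyError(card_id)
        # The vulnerable endpoint evidently skipped this ownership check:
        if card["owner"] != requesting_user:
            raise Forbidden("card does not belong to the requesting user")
        # Even for the owner, return a masked number, never the full PAN
        return "**** **** **** " + card["number"][-4:]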

Regarding the rest of your outcome, I have two reads.

Assuming you're truthful, it's entirely possible someone else noticed the exact same issue at the same time and exploited this maliciously and it's going to be incredibly difficult for you to convince people it wasn't you, given you already have the track record of doing this same thing.

Assuming you're being deceitful here, there's a few warning signs along the path of the story.

Firstly, you write from the perspective of someone who believes themselves to be superior to the developers of the app you exploited. On the one hand, given what you found, I kind of get it, some of that seems like pretty basic security failings. On the other hand, in my experience this is what people who turn out to be guilty of the thing they're being charged/sued over tend to do as a way of explaining things.

Secondly, that you are communicating this at all suggests you're trying to prove you were "technically right" to do these things. Courts aren't going to care about that. Your lawyer was generally right to advise you against these things. This is now out there and will likely be used against you in a court of law.

I'm passing no judgements on whether or not it is right that you face this kind of litigation for reporting security issues; I work in security doing exactly this kind of work and the thing that often saves me is documentation, documentation, documentation. Agreements, in writing, signed by all parties, etc. Even that doesn't work sometimes.


> Couldn’t help myself but do one more request changing the ID. Brought another card number and name.

Most likely illegal.

Don't do this.


Go on the dark web, sell the exploit, and move on.

Companies, this is what the rest of us recommend when YOU don't have a responsible disclosure program with $$$.



