This is for cases where you want the credit but still want the protections afforded by being somewhat anonymous. Similar to WikiLeaks but more focused on allowing the company or entity to solve their problems and representing fairness on all sides.
The report isn't public yet, but it's on record. You've reported it to the organization funded by Homeland Security to take such reports. In 45 days, CERT will disclose it to the public.
CERT may contact the bank themselves. If you contact the bank yourself, you can cite the CVE number CERT assigns. This gives you some leverage when talking to a bank. "Have your technical people contact Homeland Security's US-CERT at (888) 282-0870 regarding CVE-NNNN" will usually get a bank's people to take you seriously. They can't make the problem disappear.
Yes, reporting to CERT is "safe"; you almost certainly aren't going to get sued for doing it. But don't count on CERT coordinating a fix, or even figuring out who to report the flaw to. It's unlikely that anyone at CERT knows who "Zecco" is.
CERT themselves ask you not to submit to CERT unless your vulnerability fits some specific criteria. "Unresponsive vendor" is one of those, but CERT's fine print says that they prioritize severe, multi-vendor vulnerabilities.
Anyone who runs a bug bounty program can tell you how unrealistic it is to rely on CERT for this stuff: triaging reports for just one vendor is a full-time job. CERT wants to get early warnings of things like OS and platform vulnerabilities. I don't think it's a good idea to report those to CERT either, but regardless, CERT isn't set up to handle your CSRF report in some random website.
There are two types of financial service organizations: the big banks, and random firms (like Zecco was, before Ally bought them).
There's no point in contacting CERT about a Bank of America vulnerability. CERT won't prioritize the report and won't know the right person to talk to, but also, you're a Google search away from finding out who to report to at Bank of America (spoiler: it's HackerOne). These kinds of things don't happen at BofA, not because every vuln sets hair on fire there, but because there's a process in place to handle them.
There's not much point in contacting CERT about a Zecco vulnerability. CERT doesn't know who to contact and doesn't know how to find them and won't spend the time trying. CERT isn't going to publish an unconfirmed report. All CERT is going to do is go to Mitre; you can do that too, and note the guidelines for what will get you a CVE.
The issue here is just TANSTAAFL. It takes a fuckload of effort to triage and confirm vulnerability reports. There's no magic "this is a real vulnerability" certificate you can get from CERT or Mitre --- or really from anyone who doesn't spend a lot of money maintaining that capability for their own products. If there were, HackerOne wouldn't make half their money selling triage services. :)
>For every valid report they get, they get 3 that aren't valid.
Because writing and submitting an invalid report is a total waste of the reporter's time. A triager isn't going to accidentally say "oh, this is a severe vulnerability! here's some cash" just because a researcher submitted bullshit.
So can you talk about "3 that aren't valid" for every valid report? Who makes these? Weird, obscure cranks, of the type who in other industries would be churning out perpetual motion devices? I would expect 80%+ of vulnerability reports to be serious and real - quite different from what you just wrote.
Bug bounties, on average, have a signal:noise ratio that is horrible. I advocate for the programs completely, but they require a lot of planning in order to prevent them from becoming overwhelming. I personally know security engineers at Google and Facebook whose full time job is sorting through nonsense bug bounty reports.
What Tom said about people in Eurasia wanting to cash in on bug bounties is spot on. If you start a bug bounty program, expect to get a deluge of nonsense reports for bullshit like having the TRACE HTTP method allowed on an API endpoint. For every one valid report, three to five will be bullshit. Of those that do not reproduce, half will be incomprehensible, and the other half will blatantly ignore the program guidelines or be low-effort spam ("content spoofing"). If you offer a cash reward, expect the ratio of valid to invalid to be closer to 1 in 10.
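For what it's worth, the TRACE example above is exactly the kind of finding that takes seconds to generate and real effort to triage. A rough sketch of the check (the host is a placeholder, and this is illustrative, not an invitation to scan anything):

```python
# Rough sketch of the low-effort check behind many noise reports:
# does a server answer TRACE at all?
import http.client

# Status codes that typically mean TRACE is refused.
REFUSED = {403, 405, 501}

def classify_trace(status):
    """Map the HTTP status of a TRACE request to a rough finding."""
    if status == 200:
        return "TRACE enabled"
    if status in REFUSED:
        return "TRACE disabled"
    return "inconclusive"

def check_trace(host, path="/"):
    """Issue a single TRACE request and classify the response."""
    conn = http.client.HTTPSConnection(host, timeout=10)
    try:
        conn.request("TRACE", path)
        return classify_trace(conn.getresponse().status)
    finally:
        conn.close()
```

Reports at this level of effort are what triage teams drown in: trivially automated to find, and almost never exploitable on their own.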
It's a numbers game for these people. The security research industry is bifurcated between rare, sophisticated and highly paid freelancers who do it mostly for passion outside of their day jobs, and opportunistic amateurs who couldn't write a curl command.
From there it's pretty easy to see that "vulnerability spam" would be a thing.
HackerOne doesn't have a program for BofA. You probably found this dummy (and slightly misleading) page: https://hackerone.com/bofa
Why not? If anything, the story above shows that a financial institution very much can. Post-2008, I believe there are many things a government cannot do, but there are very few things a bank cannot do.
He went on CNBC to argue that independent security researchers should start a hedge fund that short sells the stocks of companies affected by vulnerabilities. https://youtu.be/jxUWRRDdhVI
He seems to be of the opinion that this would be a less risky strategy than bringing those issues to the attention of many companies. He also believes that profit incentives for researchers will serve the public interest, because it creates economic disincentives against big companies having insecure software.
There are examples of honest people getting fucked over, but this isn't one of them.
As it stands, literally everyone who would have been in a position to be reasonable was someone he'd actively worked at pissing off. Not in the sense of someone he'd pissed off in the past, but as someone he was actively pissing off in conjunction with this issue. The moral of his story is that there's room for aggressive incident reporting, but not if one is going to be an insufferable jerk about it.
That is hilarious.
This was introduced 3 days ago https://twitter.com/martenmickos/status/854321634404061185
HackerOne will work with friendly hackers on a best effort basis to verify the legitimacy of a vulnerability, reach out to and verify the identity of an individual at the affected organization, then share the vulnerability with the organization so it can be resolved.
Seems like it addresses most of the problems: educating the organization so they don't threaten you.
The more realistic concern here is that for these kinds of findings --- CSRFs in random web applications --- there simply isn't going to be a contact at the target company, and H1 isn't going to find one for you. That's why they point out they can't promise a contact.
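One cheap thing a reporter can try before leaning on H1 is checking whether the site publishes a security.txt contact file at the well-known location (now standardized as RFC 9116); small shops frequently have nothing there, which is rather the point. A sketch, with the host as a placeholder:

```python
# Look for a published security contact at the RFC 9116 well-known
# location. Random small companies often have nothing here, which is
# why finding a human contact is the hard part of disclosure.
import urllib.request

def parse_security_txt(body):
    """Pull the Contact: fields out of a security.txt body."""
    contacts = []
    for line in body.splitlines():
        if line.lower().startswith("contact:"):
            contacts.append(line.split(":", 1)[1].strip())
    return contacts

def fetch_contacts(host):
    """Fetch and parse https://<host>/.well-known/security.txt."""
    url = "https://%s/.well-known/security.txt" % host
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_security_txt(resp.read().decode("utf-8", "replace"))
```

If this comes back empty (or 404s), you're back to guessing at security@ aliases or asking a third party to find someone for you.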
You'd be surprised. HackerOne is relatively new, just a few years old. Does everyone know OWASP? Almost certainly yes. Does everyone know the BSides community? No.
Anyway, H1 can act as a shield, in this case. On the other hand, companies like WhiteHat or Rapid7 are probably more well-known since they will probably spam your security team on a regular basis trying to sell their products.
I think the disclosure assistance is a pretty clever idea for generating new sales leads, since by definition they will be talking to companies with an actual zero day situation.
But anyone looking at their homepage would be a lot less concerned. Impressive logos and a clear story for an enterprise audience.
If other people feel the same way, I'll fork and make a repo. EFF is a great starting point, but it's nowhere near usable as a HOWTO.
I'm putting my balls on the line by publishing this blog post. Actually I started this blog 10 years ago just to make this page, here is the original page: https://privacylog.blogspot.com/2008/10/pre-announcement.htm...
WeWork leases offices; they have a shared workspace area in all the buildings, but most of their buildings are private offices.
Apart from the fact that there's a good chance the company you're trying to report to already has an H1 program running, what they're promising to do here is to spend some effort trying to track down security contacts for you. They profit from this, of course: if you give them a good bug, and they facilitate its reporting, the target company is very likely to sign up for H1. But it costs you nothing and might solve a problem for you.
(I'm ambivalent about H1 --- we run a couple H1 bounty programs that existed prior to us taking over security at our clients --- but I don't think it's a good idea to be dismissive of them.)
I wonder if an org like the EFF could add this to their scope.
Maybe if they were required by statute to accept anonymous submissions and make FOIA-style responsible disclosures after a reasonably short period of time, they wouldn't end up colluding right away.
The offensive organization would probably still sit on vulnerabilities only it knew about, but at least this would be better than the current situation.
It's a myth as far as I know. I've studied them a long time and seen much conflicting info about this. A declassified historical document I found at one point said their job was SIGINT and COMSEC (just communications security!) for the U.S. government. A later provision extended this to protecting the COMSEC of defense contractors. The IAD seems to have policy-driven mandates about helping protect INFOSEC in general. There could've been a COTS mandate of some sort at some point, but it was clearly toothless.
The NSA is mandated to protect the communications security of the defense sector. That's it. Even then, the defense sector keeps asking them to downgrade the security to let in more quick-moving commercial products that are hacker fodder. They've since started a program that lets those in after a 90-day evaluation against the lowest standards from Common Criteria. The NSA is the last group that should be responsible for INFOSEC given all this, with the market an utter failure, too.
The groups that have done the most are probably NSF and DARPA for funding strong security, with NIST and DISA (esp. STIGs) at least trying to do something with hardening guides and crypto recommendations. I prefer reputation-driven nonprofits funded with a combo of donations, licensing of quality software, and consulting fees. They can't get acquired or be destroyed by changes in government policy.
Unlikely. They are still there to protect Americans, in a sense. Stealing money from a bank or a regular business is not on their agenda.
Maybe 10% of vulnerabilities have reuse value for intelligence purposes, but it should be fine for the bulk of them.
That may be the charter of the organization. But the goals of the individual people running the FBI are to 1) be reappointed / not get fired and 2) continually expand their budget / power. Given US politics, 1 and 2 are not always congruent with "protecting Americans", especially in the short term.
They aren't like that at all.
2) The Director of the FBI (and other high-level managers) is much more of a political bureaucrat than a LEO.
The EFF seems like a good choice. In general you would need to pick an organization that does not have a vested interest in using exploits.
Indeed. Didn't the FBI effectively purchase a zero day to break into the iPhone of the San Bernardino shooter? Didn't they also then not disclose said zero day to Apple?
There's no way that any LE agency can be trusted with this responsibility; I'm not convinced that it can be done by the federal government at all. EFF seems like a reasonable choice, but even non-profits have the potential to be corrupted/subverted (and operating as a dump for zero days has the power to corrupt, for sure, regardless of how moral your organization claims to be on its website).
This definitely falls under the umbrella of hard-problems-in-politics-that-will-not-be-solved-any-time-soon.
> The National Security Agency is now able to share raw surveillance data with all 16 of the United States government's intelligence groups, including the Central Intelligence Agency, Federal Bureau of Investigation, Department of Homeland Security and Drug Enforcement Administration.
Come up with a good set of guiding principles for members. This would help avoid waiting 7 years and then sticking it online. Not criticising, I'm saying the situation here is pretty screwed up.
Members pay dues, the association provides backing. Company threatens to call the FBI and the association is the one they can deal with.
An organized group can help to provide the needed political pressure so that a properly disclosed vulnerability doesn't ever lead to the FBI and trumped up charges.
A respected group can lend credibility to a researcher. A bank may not give 2 shits about even a well respected member of the community. They will care if it's a group well known for finding and disclosing vulnerabilities.
This seems like an easier problem than the general case of software engineers because the community is smaller and you don't have the conflicting interests of "I can negotiate better on my own". Plus things like membership can be handled more easily, start with a small group of people who absolutely should be members. Extend via application and invite.
I'd like to work in an organization like this. I'm not sure if anyone would want to join. It seems like everyone else is either completely independent, like my own IJDGAF strategy, or full corporate like HackerOne and other brokers.
It's next door to the military intelligence folks.
If you don't care about your reputation, you post anonymously. An anonymous full disclosure post is a good way to report a bug without dealing with drama about your "incentives".
One time I found a photo printing website that made all photos public. They refused to fix it, I fully disclosed, and it made the front page of Slashdot. Then the company had to change its name. Maybe it was fun, maybe I got credit, but most importantly it moved something from my TODO list to my DONE list. That is very important to me.
I have a 0-day on Apple; not very exciting. I reported it in 2015 and they still have not fixed it. Having it sit in my inbox is a waste of my time thinking about it. I will FD it.
My experience is that security researchers do not make money unless you run script-kiddie scans for stupid bounty programs. When I interviewed for a "security" job, all they asked me about was Microsoft certifications and user access testing. I asked if a TLA offer letter counted as a sufficient reference, and he said no. At that point I switched from an MS in CS to an MS in Finance and an MBA, and my life has improved (while still being technically challenging and academic).
So technically my disclosure policy is IJDGAF with two extra weeks as a gentleman's favor. Maybe I'm the bad guy, but that's why I'm here for the lovely discussion on YC. Thanks for sharing.
For a CSRF that you didn't use someone else's account to exploit and that you've told nobody about, and assuming you have no acquaintances who might screw you over by abusing the bug, 30 days and then Pastebin seems like a decent answer.
If any of your friends are shady, just forget about the bug.
The more unpatched vulnerabilities there are in existence, the more lucrative it is to be involved in any part of the computing crimes community.
It's like reglazing a broken window in your neighbor's garage at your own expense, because you don't want burglars to see it and start casing other properties in the same neighborhood based on the conditional probability that a visible broken window indicates a higher incidence of other exploitable vulnerabilities.
It's also important to pursue the very easily exploited vulnerabilities, because when you get rid of all the low-hanging fruit, the people who can't already climb the tree won't survive long enough to learn how. You're cutting a lot of bootstraps so that immature criminals can't pull themselves up by them.
Perhaps there's a breakdown of definitions here. I've lumped bug-bounty hunters and grey hat hackers, along with actual researchers, under "researchers." Stop me now if this isn't who you're referring to.
Now if it is, this course of action goes against the researchers' monetary incentives. It is in their wallets' interest to have criminals validating the existence of their work. And selling the direct findings of one's research, including even minor exploitable issues, is a given.
If researchers were to constantly give away their work (on even little issues), it would directly lower the cumulative value of cyber-security research, i.e. their more expensive projects now sell for less.
The FBI / NCFTA invited me to speak about this vuln because it may have affected many banks at the time. (Please stop laughing.)
They called me to cancel. "Now we're all focused on this big DoS. Do you know anything about the DoS happening today that you can help us with?" I asked if the DoS was affecting the stability of the system or actually breaking anything. And they said yes, it was bringing the banks down and affecting revenue.
You can read into this anecdote as you wish.
New York City based their increased focus on petty crimes on it. I don't think it is useful as the basis for a model of policing, though.
In some ways, it is an embodiment of the slippery slope fallacy, where if security is not perfect, it's worthless, in the same sense that a roof with one leak in it is worthless, because that one leak becomes the beachhead for further damage to the roof.
From the original article in 1982:
> Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it's unoccupied, perhaps become squatters or light fires inside.

> Or consider a pavement. Some litter accumulates. Soon, more litter accumulates. Eventually, people even start leaving bags of refuse from take-out restaurants there or even break into cars.
Broken Windows, The Atlantic Monthly, March 1982
The way I would put it, based on my visits to my home town of Zagreb, Croatia:
Apathy is contagious.
Have you heard of Elizabeth Kubler-Ross's '5 stages of grief' model that summarizes people's typical responses to bereavement? EKR argued that people generally go through a cycle of denial, anger, bargaining, depression and acceptance. IME this is a good rule of thumb for how people typically handle any kind of unwelcome news.
In this case:
o There is no such problem
o Grr why did you hack us I'll call the police
o How about you take this pittance and STFU
o We're just trying to run a business and you ruined everything
o OK we'll fix it and alert our customers
But stipulate that there's some number here, and the answer is: because nobody in management at Zecco ever built a plan for how to handle incoming vulnerability reports, and so nobody who got the report was empowered to do anything but escalate the issue --- and halfheartedly at that, because nobody in management at Zecco ever built a policy that ensures anyone cares about vulnerabilities. So for them, this is the moral equivalent of a WONTFIX.
How diligently would you escalate a WONTFIX?
Large firms wouldn't survive at high enough rates to dominate public life as they do, if they weren't underwritten by the state at every turn.
You can use a lawyer for this; it's a standard piece of advice for other kinds of bounties, e.g. reporting criminal tax evasion.
In a situation like this I'd probably directly ping taviso or someone else from the Google Project Zero team. Their contact information (email, G+, twitter (DM)) is not impossible to get at.
From there, I could get advice about next steps (the Project Zero team are going to know a few people) or maybe they could run with it themselves (depending on the bug; I don't know what the response would have been in this case).
Mind sharing how you've achieved this? I haven't had the same level of success, with my couple of attempts (thus far) to try to resolve various issues falling flat.
I decided to log in today just to see if it's still there (was a couple days ago), and it's finally been patched. If I had used a throwaway I would gladly let you guys know the bank, but I won't since it's trivial to find out who I am from my handle.
When in doubt, people, call your attorney.
More easily, my profile on my firm's website: lawyernamedliberty.com. I'm fairly easy to get in touch with.
Oh, new bank? Just assume it's your bank, and do whatever you would do next.
Hm, I recall the Comodo hack. I think Comodo was hacked twice or more that year. It won many awards and continued leading the CA space. The market did not work, apparently...
The other end is buyers. Most of them don't know what to expect for security or how to evaluate it. Most attempts to solve this have failed. They've been conditioned to expect constant hacks, crashes, or data loss. So they see Comodo et al. get hacked and shrug. They'll usually stay if their end of whatever they bought works. The sector that will pay for highly reliable or secure software is probably under 1% of the market. Enough companies keep forming to do the real thing, but a tiny, tiny few struggle to justify the extra costs or reduced features necessary for higher security.
Although I guess it could help align customer and business goals, since no one wants to lose money.
If you short it, at least you might make some money to offset any pending lawsuit. There are plenty of examples of people doing the same thing to fall back on, such as the guy who found out a newly listed company wasn't actually real.
It's public information.
Now if someone who works at the bank had told you about it, you'd be in a lot of trouble.
I'll admit that viewing the source code and noticing this link would be a stretch, but I wouldn't necessarily expect it to be a slam dunk for the researcher, especially if he had assented to the site's ToS (and since he had an account, it seems that he had).
At this point, I imagine he could be in all sorts of (primarily civil) trouble for the disclosure that he just made. He may be protected under some type of financial whistleblower law, but I wouldn't hold my breath.
BOOM! And they've been harsh on hackers for a long time. So, the vulnerability must not require violating access controls or system integrity to be safest. Hackers should be in the clear if it was simply noticing something in HTML/HTTP or whatever that indicated insecurity. An example might be a breakable cipher-suite or handling sessions improperly.
1. conspiracy to access a computer without authorization
2. fraud in connection with personal information
This is because Goatse Security not only noticed the vulnerability itself but also wrote and executed a script called the "iPad 3G Account Slurper" to iterate over ICC-IDs, returning the associated email address for each one.
Executing the script against AT&T's servers probably is a bona fide violation of the CFAA, not just a conspiracy, but I would guess it's simpler to bring the conspiracy charge since you don't have to get into the nitty gritty of actual requests made, etc.
According to the complaint, they proceeded to email a handful of notable people whose emails had been harvested, including someone on the Board of Directors at News Corp. All of these contacts appear to be media outlets. The Gawker article also lists some of the people whose email addresses were extracted this way (without disclosing their emails).
I'm assuming this direct communication to journalists and/or execs at journalism outlets gives rise to the fraud with personal information charge.
Overall, I don't think weev did anything I wouldn't necessarily have done in that situation (trying to drum up attention and make a name for his consulting firm), but it's different from this disclosure because, as far as we know, this researcher did not actually exploit the vulnerability and has not obtained or disclosed any information from doing so.
Again, not a lawyer.
But would this case with the bank be different because the vulnerability, unlike formaldehyde, could be actively exploited? Encouraging a stock price to fall because of bad practices seems alright (like the Lumber Liquidators example), but if in the process you become an accessory to smaller-scale fraud against individual account owners, is it still "alright"?
That said, technical glitches tend to not affect the fortunes of companies nearly as much as we (the HN crowd) think. Tradeking had the glaring vulnerability outlined in this article for years, and they are doing just fine.
I think the point I'm getting hung up on is that the bank's stock price could drop for two reasons: bad PR due to the glitch, and/or falling financials due to fraud perpetrated as part of the glitch. I can completely understand a hedge fund trading and making money off the bad PR. But if (hypothetically) the bank lost a ton of money by hackers liquidating user accounts or, worse, making leveraged bets [before everyone checked for that sort of thing ;)], and the hedge fund knew there was a reasonable chance that the malicious activity would occur based on the newly disclosed information, would they have liability there? (from the theft/fraud perpetrated against the bank, not the drop in stock price)
But general public disclosure of a vulnerability, and/or trading on the anticipated effects of public disclosure, is not illegal. It likely won't win you friends in the IT community, but it falls short of an indictable offense.
Before writing his blog-post, he short-sold a bunch of Lumber Liquidator stock and made tons of money during the fallout.
I don't have a problem with MedSec making money by shorting St. Jude's stock (that seems to align incentives to take care of security issues as early as possible). But if MedSec publicly disclosed specific, exploitable vulnerabilities (I'm not sure about specifics from the article), they shouldn't be able to hide behind the "doing what is best for the consumer" argument. It's definitely a clever business hack, and that's alright, but the fake sense of moral superiority isn't.
A company discovered vulnerabilities in some medical devices, then shorted the stock of the company before disclosing them.
I'm a happy user of N26. I very, very highly recommend it to all European customers. I'm never dealing with shitty bank service again. https://n26.com/ (Email me if you want a referral invite.)
C-mp-t-rsh-r-: your website's trash and you should be embarrassed with yourselves.
It's people like you who keep companies like that in business and encourage such atrocious activity.
My bank (arguably) condones use from public computers by asking me if they should "trust" the computer I'm on.
Or, you know, poor people.
"Sign this NDA or we will send the FBI to arrest you because you found that our banking website's security was completely fucking broken and told us about it." Jesus fucking christ.
The researcher is lucky that TradeKing believed their NDA trick was sufficient. Even if the case here is weak, and I wouldn't necessarily assume it is, it would still seriously damage the researcher's life.
Here's how it goes when you get sued by a big company. Their lawyers essentially have a field day doing everything possible to obstruct and delay the process so that they can maximize their time on the corporate teat. It will go on for years; they won't mind, because it's business as usual for them, and they're getting paid big bucks to torment you. Your life will be ruined: assets seized pre-emptively, reputation and credit destroyed, inordinate quantities of time consumed by legal research and tedious paperwork, struggling (if not immediately, blatantly failing) to keep your incompetent counsel paid at $250/hr and meet the retainer, and eventually failing to file some document or pay some fee, which will cause the court to enter a default judgment against you and permanently confiscate everything you own, leaving you with the albatross of a massive outstanding judgment waiting to be enforced, bank accounts garnished any time you get any money, etc. And that's the short version!
And then guess what -- if, by some miracle, you don't lose in the first round, this whole process will repeat as they file appeal after appeal. Hunker down because the proceedings will last at least 5 years.
The corporate lawyers will be able to justify all of it to their clients without blinking an eye, who probably forgot that they even asked them to sue you. Everyone at the company and the law firm will go home and sleep soundly on their piles of money, and you'll have learnt your lesson that trying to stop the subterfuge of an online trading platform is a terrible offense.
Good reading: http://www.nissan.com/Lawsuit/The_Story.php
Otherwise, in court I'll be happy to defend myself. If it's necessary to spend time defending yourself, then that is a blessing. I have successfully sued the government (the US Army and Veterans Affairs, no less) http://www.gao.gov/docket/B-413723.2 when they did things wrong. Just be persistent and be right. We came out with a nice settlement. Sorry, GAO used to publish full-text docket outcomes, but I don't see them here.
Fuck Nissan. (Can we curse here on HN?) Because their cars suck and because of this case, which I am well aware of. The sad thing is that Mr. Nissan spent so much money on his defense. I should hope he could have been more effective with less money.
Companies that run formal bug bounty programs (either directly or through a third party like HackerOne) show some recognition of this and some goodwill, especially those that include payouts of five figures or more, but those companies have to be careful that they don't accidentally create an environment where bidding wars between exploiters and companies are legitimized.
Why not? Yes prices can become high, but isn't that the work of the researcher? If the company doesn't want to have to purchase expensive bounties, they can either reduce the exposure (less legacy code, fewer APIs, more firewalls) or use more strict security rules.
I'd feel safer if LastPass' bounty was higher than the value of the assets I put in that vault. If the value of a single vault (mine, actually) is $10,000 and the bug bounty is $2500 (which it is), how can we persuade discoverers to sell to LastPass?
This contract might actually be egregious enough to warrant an unqualified declaration of invalidity, in which case you should go the other direction and overstate your case with a conclusory statement and some word like "clearly" or "patently". "This contract is patently invalid!" --- and then explain why.
This isn't even an IP question!
A contract is what lets you sue someone over a private transaction. That's what it does, that's all it does. If for whatever reason you're not willing to bring a contract dispute to court, then your contract doesn't do anything and you wasted your time writing it. Contract = right to sue for breach of contract.
In order to sue someone, you need to be able to describe what damages have been done to you. The goal of a lawsuit is for the responsible party to 'make you whole,' i.e. pay you back an amount equal to the damages done to you.
In a contract dispute, the 'damages' for breaking the contract are equal to the 'consideration' of fulfilling the contract. In other words, the promised consideration is the actual thing that you can sue over.
If there is no consideration, then there are no potential damages, and there is no potential lawsuit. And since the only point of a contract is to enable a lawsuit, a contract that doesn't do that isn't a contract.
This is categorically incorrect.
Damages for breach of contract are supposed to put you back in the position you'd have been in had the contract been performed. It's not related to the value of the consideration.
Consideration is one of the things needed to make a contract binding in English law (along with offer & acceptance, and "intention to create legal relations").
Jurists still debate the rationale for consideration, but the best answer I've found is that a contract in English law is seen as an exchange or a “bargain”. There is no gratuitous contract; donations are not a contractual right.
By comparison, a contract under French law is based on "consent of the parties" and the theory of individual autonomy. There's no requirement for consideration.
In a "mutual NDA", consideration is easy to find; each party agrees not to disclose confidential information disclosed by the counterparty.
Another way to make an agreement binding without consideration is to sign it as a deed.
I don't think mutual NDAs are typical. Typically, you sign an NDA prior to receiving information. So the consideration for signing the NDA is receiving the information that you agreed to not disclose. If you already have that information, then that's no longer valid consideration.
In this case, the reporter already knew the security vulnerability, so that knowledge could not be considered consideration. The bank would have needed to offer something else.
If I say, "I'm going to give you some apples in six months, after the harvest" and then there's a blight and I don't actually end up with any apples, society (at least in America) decided that I should be able to just say, "Oops, sorry, I'm not going to be able to give you those apples after all" and be done with it.
On the other hand, if I say, "I am going to sell you some apples in six months, in return for $100", American society collectively decided that I'm on the hook to get you those apples, regardless of whatever difficulties should ensue.
If a contract doesn't outline consideration, and the jurisdiction requires consideration, then the lawyer writing the contract was not very good at their job...
Also, you have to ask why someone chose to sign a one-sided contract. Was it signed under duress? The court shouldn’t enforce that. Was it a gift? The court would rather not get involved with enforcing every casual promise!
You, sir, have unfortunately failed that test.
Worth noting that just because it doesn't stand up as a contract doesn't necessarily mean a claim can't be made under breach of confidence (I doubt it would be applicable here, but just pointing out that contracts aren't the only form of legal protection provided to confidential information).
Definitely not. The bank did not disclose the vulnerability to him, he discovered it on his own. He had absolutely no obligation to the bank.
Edit: googled some more, and it appears that whether continued employment counts as sufficient consideration differs on a state-by-state basis and isn't firmly set in stone yet.
In the first case you have extinguished a right that could be used against you. In the second case you have obtained nothing more than the illusion of safety.
Yes, that is exactly right.
> doesn't seem like a gain
Why not? If you don't think that's a gain, why are you wasting your time doing the interview in the first place?
A chance for employment (over an outright dismissal) is a recognizable gain.
You are however, free to decline with the appropriate consequences.
You'll see what a contract needs to be valid during these courses. There is simply no requirement that both parties gain something.
The point stands. Your link doesn't contradict what I said.
A good number of things taught in a contract law class are subtly wrong even for a very similar neighbouring country (in the EU it's now getting a bit better because of harmonization efforts), and common vs. civil law changes pretty much everything.
And a semester in contract law is not really much expertise; any MBA with a semester of US contract law would have much more relevant expertise than us Europeans talking here.
LOL. Res ipsa loquitur.
You think you are qualified to determine ANYTHING about US contract law when you've taken a single semester in contract law related to an entirely different country?
By your logic I am basically an astronomer. Except mine is more relevant, since astronomy is the same regardless of where you take a "whole semester" of it.
There isn't a bright line rule.
But even if we are allowed to infer consideration, and I agree with you that we are, this contract isn't simply lacking the terms of consideration. It doesn't appear to contemplate consideration at all, which in my experience, is unheard of for these types of agreements.
But that would depend on the specific jurisdiction's case law on contracts, and then on how the judges read the contract.
If this were my client and he got some kind of consideration, I'd tell him to treat it like a valid contract, though I'd try to poke holes in it during litigation. But litigation is losing 9 times out of 10, even if you win.
Without going into the extreme details of this case: "consideration" in legal jargon is much more subtle than "both parties have to gain something" in engineering talk. Determining the consideration can be as hard as an NP problem, to speak in engineer :D
Back to my original point: Let's not talk people into signing perfectly valid contracts, hoping for a loophole because it didn't look nice enough to them!
Obvious failure modes are exempted. Anyone can tell you about a bad bridge after it has failed. But it would take a bridge engineer to tell you that before it fails.
Also, your definition includes itself as part of its own definition, which is a circular definition fallacy.
According to what definition?
> Anyone can work in wood long enough to say "That wooden bridge looks like it'll hold X people,"...
I severely doubt that, given the complexity of trussed bridge designs. There's a lot more to it than how much weight a 4-by-4 can support.
> ... and not have any way of conveying how they came to that conclusion...
If you can't transfer knowledge in a way that other people can independently verify, you're working in magic. If such a transfer is possible, but simply not possible for a particular person because they lack the tools, then that's a professional failing. For some reason, this state seems acceptable to you when we're talking about physics and complex loads. But could you imagine a doctor describing the appendix as "that thing sticking out where the long thin squiggly bit meets the short thick squiggly bit"?
> Also, your definition includes itself as part of its own definition, which is a circular definition fallacy.
You can't just throw out "circular definition is fallacy" and dismiss the idea. That itself is a fallacy -- "argument from fallacy". 
Yes, I use the word "professional" twice, but that's not necessarily a circular definition and especially not necessarily a fallacy. First, the two "professionals" are not the same person. The first mention of "professional" is an individual, while the second mention is a group. What I did is tie membership of a group to a conditional ability which is dependent on the group itself.
However, I did cheat a little bit. Because what I did not define is the individual ability necessary to meet that conditional. Because, of course, that changes depending on what group of professionals we are discussing.
For backup, let's look at a definition of malpractice:
> a dereliction of professional duty or a failure to exercise an ordinary degree of professional skill or learning by one (as a physician) rendering professional services which results in injury, loss, or damage
In other words, malpractice is a professional doing something which such a professional should not do... Because the mere fact of a person being a professional implies that they should know better.
It's this same logic that I am using: A professional is someone who acts in a professional capacity, and understands the practices of such profession, and thereby is capable of judging whether another person understands and acts in a professional capacity.
I'm pretty sure the correct reaction on my part here is:
Good day, sir.
PS: and if you want advice from a lawyer who accepts liability for their counsel, just pay for it, because that is the only way you'll get it.
I think it's totally fair to reject an NDA but I don't blame him for fearing an overzealous reaction on their part. Even being on the right side of criminal and civil law, you really do have to be willing to spend time and money to mount an affirmative defense.
Edit: looks like this could be possible without getting into trouble depending on the state you're in: http://lifehacker.com/5491190/is-it-legal-to-record-phone-ca...
A $50 misdemeanor fine for unlawfully recording a phone conversation may well be a small price to pay if the content of that recording can successfully protect you from a potentially bankrupting civil case.
And you always have the option of not disclosing the recording if that is your lawyer's recommended advice.
It bothers me a lot when services, such as Google Voice, announce to all parties that such recording is occurring.
Google is based in California. There is a good probability that the act of recording occurs there. California is an all-party consent state. Also, even if the recording isn't happening in California, it's potentially tricky to be sure that no party to the call is in California (even numbers assigned to landlines don't assure that the person ultimately connecting is in a particular place.)
It's completely legal to record a phone call in Canada as long as you are a party to that conversation. However, I still cannot find an app for my Android phone to do this.
Second, those beeps probably exist to reinforce that the audio is unmolested. A beep every 5 seconds means you would have to cut audio in five-second increments, which is not likely to be convenient to whatever segment of audio you actually want to cut.
I mentioned it because someone working for a big organization and making a lot of interstate calls probably hears these beeps all day and would be less likely to protest than if someone verbally announced that they're recording the call.
If so, as you point out it seems like an interesting way to avoid having to announce the recording to those not knowledgeable.
EDIT: I don't know how reliable this site is, but it seems to indicate the recording beep is sufficient for notification, but not sufficient for consent, which makes sense.
A few months back I did some research on these e-payment APIs and noticed that one of the major banks had a serious flaw in their API implementation. It was possible for the end-user to manipulate the signed API calls to change the payment amount, effectively paying less than the actual price for products they buy.
I reported the issue to the bank and got a swift response where they acknowledged my report and said they were looking into it more closely. A few days later I got another email where they basically said "ok, this looks bad, and we can see it's pretty trivial to exploit, but... it's too expensive to fix, so we won't do anything".
I wasn't comfortable with this, so next I reported it to NCSC-FI/CERT-FI. They also agreed that it looked bad, but said that they had no way of forcing the bank to take action. So that got me nowhere either. I haven't heard from either NCSC-FI or the bank since, but the issue does appear to be partially mitigated now.
I've since found several other issues in the same bank's systems but haven't bothered to report them since they don't really seem to care.
I really take issue with the notion that because security is important, you're fully justified in screwing people and companies over as much as possible to prove a point. That seems to be a common attitude in the security community. I get the frustration people have with the intransigence of corporations and programmers, and with people's general stubborn unwillingness to understand the severe impact of vulnerabilities, but if just security-shaming companies into fixing bugs actually worked, we would have a much more secure internet today than we actually do. Unless you can get regulatory agencies to start holding companies and individuals legally accountable for security issues (that is, making it more expensive not to fix than to fix), nothing will change, even if you have all the technical solutions and social pressure in the world.
The wording you choose should be cognizant of your state's laws and the company's user agreement in such a way that the company is actually at risk if they ignore you.
When talking to people, "Reason is, and ought only to be the slave of the passions".
When talking to companies it is only necessary to discuss the impact on their profit.
So no, publicly exposing an issue does not always work if there are no incentives for anyone to fix it.
The correct solution before this was to make an announcement:
"Here is the announcement I have made disclosing the problem. It is in both our best interest that it get fixed before publication. I have irrevocably given it to a blind drop that will publish it on DATE. And I believe that is a reasonable DATE that you could fix the problem. Let's work together to fix the problem."
What do you think about this type of approach? There is probably a name for it in Art of the Deal. (Whatever you think of the man, the book is worth reading.)
However, if the FBI and NCFTA were /genuinely/ interested in disclosing this in their forum for other banks, then my phone call with them might have been a win-win. But I think they were not genuinely interested.
No bug bounty but oh well.
It doesn't matter how we regard CVEs as a community, this is the truth of the matter outside of it. We're handing them over a bomb, and they want to know why. It feels very Spy vs Spy to me, as silly as that sounds.
I tried reporting it to the credit card, and to the issuing bank, and to the FBI. The only thing I asked was that they cancel the credit card accounts and put a "potential fraud source" note on each customer's account. Each party I called was more concerned with threatening me, and trying to find out what kind of criminal angle I was playing, and what my ulterior motive was, etc etc. I honestly expected to hear "Oh dang, that sucks, we'll close the accounts and contact the victims", and was depressed at the hostility I encountered.
Why should we be strictly ethical in the face of behavior that is unethical? We deserve protection, too.
That's pretty much trying to shut down their business with their customers. You don't see how they'd interpret that as hostile? Future actors would know to apply similar techniques if the outcome were in their favor (e.g. Anonymous suddenly produces a large file of cc#'s and threatens the bank!)
> The only thing I asked..
In fact, why were you making demands about how they handle their customer relationships, instead of simply presenting what you'd found?
That's not how credit cards work. You close that account, transfer the balance to a new card, and issue it to them in the mail. I've done it a half dozen times, and my CC company is only out for odds and ends like postage and stamping a new card.
> In fact, why were you making demands
I wrote "asked", and then you pasted that, and misquoted it as "demanded"? If you hadn't included my quote, I'd accuse you of dishonesty, but now it's just weird.
I asked them to proactively protect their customers, because my grandfather had been through hell after his identity was stolen, and I wanted to do my best to protect other people from the same.
I'd be annoyed if my bank didn't do something.
So again, how is saying "You should take action to protect your customers' data" a threat? How can it be interpreted as a threat? What is threatening about it?
You're basically saying "academics can derive your social security number using public information!" And wondering why they don't reissue all of the SSNs...
>You're basically saying "academics can derive your social security number using public information!"
No, I'm saying that if my name, social, DoB, mother's maiden name, and credit card number appear online in a csv file with 200 other people's personal information, I'd really appreciate it if my credit card company would take proactive steps to keep my accounts secure.
Password + token is a common pattern in systems where hardware/software/OTP tokens were bolted on after the fact.
Not just that, but on certain systems (think a Windows login screen, or a POP3/IMAP login for your e-mail client), you can't have a 3rd "token" field -- they're hardcoded to ask for just a username and password.
So vendors came up with the idea of appending the token value onto the password, and their middleware (say, a PAM module) splits the provided value into password and token and validates both.
EDIT: That's not to say that Schwab is doing it right (in the front-end, seriously???), but just pointing out that it's not as uncommon as you think.
So have it sent with no client-side encryption, and the back-end can pull the password and 2FA code apart and verify both of them, for all kinds of systems that have only a username/password prompt for logins.
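The splitting described above can be sketched in a few lines. The function and field names are hypothetical (real middleware such as a PAM module does this in C), and the scheme assumes a fixed-length TOTP-style 6-digit token appended to the password:

```javascript
// Back-end splitting of a combined "password + OTP" credential.
// Illustrative only; real token lengths and formats vary by vendor.
const TOKEN_LENGTH = 6;

function splitCredential(combined) {
  if (combined.length <= TOKEN_LENGTH) {
    throw new Error('too short to contain both password and token');
  }
  return {
    password: combined.slice(0, -TOKEN_LENGTH), // checked against the password store
    token: combined.slice(-TOKEN_LENGTH),       // checked against the OTP server
  };
}

// A user typing "hunter2123456" into a plain password prompt:
const cred = splitCredential('hunter2123456');
// cred.password === 'hunter2', cred.token === '123456'
```

The appeal of the design is that the login form never changes: Windows, POP3/IMAP, or anything else with exactly two fields keeps working unmodified.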
I no longer hold any assets at Schwab, but I do poke around every now and then, and it's possible they changed things without me noticing.
Don't be so sure. If they didn't disclose this to their buyers they are guilty of fraud. The statute of limitations has probably run out (I don't know which state has jurisdiction here), but delayed discovery rules may apply.
If I were a betting man, I'd bet the buyer knew about the issue and basically didn't care.
This is negligent. If they are running banking e-commerce infrastructure and are unable to deal with security-101 risks, then it is absolutely negligent. "It is too complex for the average person" isn't an adequate defense.
The only thing is that there has to be someone who lost something of real value for it to go to court as negligence, does it not?
In your contact with companies you should say "Failing to fix this issue would be a violation of reasonably assumed security practices as required in LAW..."
Wouldn't the FTC want to know about this though, as this would be a great way to execute a pump-and-dump scam...
I would like to migrate to my own domain with Jekyll or something. But I would not look forward to implementing commenting and trackbacks even though the blog is pretty modest any way in terms of using those features.
What is he trying to say here? How on earth would it be possible to execute the URL in the context of your Zecco cookies unless it's opened in a browser in which you've logged into Zecco?
The pre-fetching will use the user's context (and cookies) because it's executed by the user's web browser.
On a similar note, your web mail could fetch images in emails ahead of time, but that would still be out of your browser's context
Terrible nonetheless. Reminds me of how Mt. Gox used to hand out password resets with plaintext passwords in the query string on their own forums.
Sounds like somebody should write a book about all of the missteps in that debacle.
It's like circumnavigating the globe backwards in order to avoid using a crosswalk.
I'm pretty sure the author wasn't the only guy looking for vulnerabilities. I'm pretty certain criminally minded folks would've already used it... with no way of telling which trades are real and which are manipulated.
Which further raises the question: why would they go to such extreme lengths to cover their tracks? They could've easily saved themselves trouble by coming clean, but the fact that they've gone to such great lengths to hide it and threaten anyone who tries to expose it makes this a Hollywood-type story. It seems so over the top, like they're protecting something much bigger.
It's unlikely, but my point is it's a hole in their system which would allow this to happen and it seems like they've deliberately let it continue. :(
Motivation was clickbait and/or fear that people would not understand the latter.
BUT actually this vuln may have come from upstream, with Penson. And then it may affect many broker-dealers; they have many clients in the US and Canada. (Don't laugh that such a ridiculous vuln could be in so many places.)
At the time, considering this (and that Penson was on the phone), I understood that irresponsible disclosure could have serious consequences. The FBI would have been warranted in knocking on my door.
That's why I'm now publishing 10 years after the fact.
Pardon my ignorance, but how would this work?
"But this only affects people that are logged in, right? Yes ..."
So I suppose what happens is that the user is already logged into the service and thus has a cookie for the service in his browser.
If the user then somehow issues a request to the URL in the article with the same browser (e.g. by viewing a malicious email with the IMG tag in a webmail client), the browser will include the cookie in the header of the request. This makes the request automatically authenticated.
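The cookie behaviour described above can be modelled in a few lines. This is a toy model, not browser code: the broker domain and session value are invented, and real browsers additionally consider path, Secure, and SameSite attributes before attaching a cookie.

```javascript
// Why the request arrives authenticated: browsers attach cookies based on
// the *destination* of a request, not on which page or email created it.
const cookieJar = {
  'broker.example.com': 'session=abc123', // victim is logged in here
};

function buildRequest(url) {
  const host = new URL(url).host;
  return {
    url,
    // attached even when the request comes from an <img> tag on a
    // hostile page or inside a webmail message
    cookie: cookieJar[host] || null,
  };
}

const req = buildRequest('https://broker.example.com/trade?action=buy&qty=100');
// req.cookie is 'session=abc123', so the server sees a logged-in user
```

This is exactly why modern CSRF defenses (per-request tokens, SameSite cookies) exist: the ambient session cookie alone must not be enough to authorize a state-changing request.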
The article mentions it would occur even without opening the email.
You could also abuse Firefox's and Chrome's link prefetching. I'm not sure Gmail, for example, removes prefetching attributes in spam links. They do block images, though.
Anyway, how would the server receive any data from the client just from the link being viewed in your browser?
I am not saying no one has damages, but if 100s of people had damages, I expect something would have happened...
>Also their engineers made it clear that unauthorized transactions like this and later shown below would not be distinguishable from other legitemate transactions.
If you kept buying and selling on that account, including with the supposedly-hacked-purchased shares, you'd need to explain why you didn't bring it up until now.
With sufficient preparation it's likely that the bank (and prosecutors) wouldn't be able to prove that crime beyond all reasonable doubt, and he wouldn't be convicted of it, but there's still a risk that they could prove it (e.g. by forensic analysis of his computer) and he'd go to jail.
Furthermore, even if he manages to prevail in the criminal case, in the civil case (where the standard of proof is lower) it is quite likely that after reviewing all possible evidence they'll manage to reach the correct judgement that the "unauthorised trades" claim was false, thus not getting him anything anyway.
> Also their engineers made it clear that unauthorized transactions like this and later shown below would not be distinguishable from other legitimate transactions.
They don't have to prove that it couldn't have been someone else, they have to convince the court that it's more likely than not. Motive matters a lot - if there's some way how that transaction would have been useful for a fraudster (i.e. if it was a money transfer to them), then it's one thing; but if there's no indication of why someone else would want to make the fraudulent trade (which is the case for most stock purchases/sells) and a clear motive why the claimant would want the trade to be reversed (i.e. the stock buy seemed good on that day but turned out to be bad afterwards) then if there's any technical evidence whatsoever pointing towards the claimant, it's hard to be convinced.
If data shows that the transaction is e.g. done from some Starbucks and local security cameras show the claimant near that Starbucks at that time, it's probably not enough to get a conviction but likely enough to make them lose the civil claim.
The criminal case would be expected to get much more evidence than an ordinary civil claim, so they'd likely wait for its results and use everything that the police/prosecutors gathered to dismiss their civil claim.
Again, the IP address would obviously be associated with him and the browser because that's how the vulnerability works. The attacker just has to get the victim to visit any website with a browser which has the cookies for the bank. So proving that the user's browser/machine/IP made the request does nothing to show that the user did so intentionally.
> Motive matters a lot - if there's some way how that transaction would have been useful for a fraudster (i.e. if it was a money transfer to them), then it's one thing; but if there's no indication of why someone else would want to make the fraudulent trade (which is the case for most stock purchases/sells) and a clear motive why the claimant would want the trade to be reversed (i.e. the stock buy seemed good on that day but turned out to be bad afterwards) then if there's any technical evidence whatsoever pointing towards the claimant, it's hard to be convinced.
It doesn't have to be done by a fraudster. The motive for the attacker could simply be to fuck with people. They don't gain anything but satisfaction from the fact that they were able to successfully exploit this vulnerability.
In general, you make good points, they are believable and likely would be made if such a court case happened. In the absence of hard evidence, if they seem slightly more believable than whatever story the company presents, the claimant would win; if they seem slightly less believable, the claimant would lose. In a civil claim, the company needs to prove that it was authorised only just as much as the claimant needs to prove that it was not, it's a somewhat symmetric contest - simply claiming "I didn't authorise it" is effectively countered by claiming "Yes you did", and simply moves the discussion on to further investigation.
The motive could be just a prankster messing with people, but it's a lot less convincing motive than an obvious benefit. If the transaction is one where you clearly lose money and someone (possibly anonymous) gains it, it's easy to make the case that you were hacked. But, for example, if the claimant had previously unsuccessfully complained to the company about the theoretical possibility of such vulnerability, and then complained that a seemingly random transaction is unauthorized, I'm fairly sure that any decent lawyer would successfully convince the court that "a prankster did it" is comparable to "the dog ate my homework" and it's a bit more likely that they orchestrated the claim themselves to mess with the company. Getting 51% of belief is preponderance of evidence, and sufficient in a civil trial.
And in any case, all this wouldn't be "simply claim" - seriously making such a claim would require a significant investment of time and money from the claimant. It's not something most people would do for fun. Some would do it to make a point, but that's quite a niche hobby.
I would have done the following:
1. Shut down my account.
2. Send the exploit to the company anonymously with a deadline to fix.
3. Upon deadline, post exploit and cc the company.
The inability to publish is a rub, but I think we need a cultural shift to drive back corporate idiocy and protect consumers.
Of course you would. The bank would call the FBI and tell them you're hacking the bank, and the FBI would then knock down your door, tear up your house and drag you away. The system would then do everything it could to represent what you did as a crime, and if you are lucky you get away with only a year in court, many thousands in debt and your name dragged through the mud.
tl;dr The actual legality of an action is only tangentially related to how the legal system will be used against you in response to it.
I still hope this is not the case in most places outside the US - that is, I hope the responsible disclosure is complete proof you are in fact not hacking anyone.
Next time I would change 30 to a reasonable number. In this case (multiple vendors and a large installed base) maybe even 180 days may have been fair. And then I would stick to my guns.
But it seems the reason these cases don't get resolved quickly is purely economic: the perceived cost of fixing the issue (to them) is far greater than the cost of dealing with the (remote?) possibility of the vulnerability being exploited.
I also think security researchers have an 'overgrown' sense of urgency upon discovering such exploits; nothing ever seems to get fixed fast enough from their point of view.
But understanding the forces at play also helps in understanding such an 'irrational' decision. Big institutions are not known for being proactive, and the political climate in such environments doesn't incentivize the 'doers'; it does get people into panic mode, trying to stop the leak rather than the root cause (the exploit).
FIRST, be reasonable. This is a good life axiom. Don't expect a large organization to confirm, engineer, test, certify, and deploy a change that requires external documentation in less than 14 days. Even if the ship's on fire.
SECOND, be valuable. If you are reporting a vuln, that is a bug report. When's the last time you got thanked for /any/ bug report for a non-GitHub project? If your report explains the cost and liability of a lawsuit should they fail to fix the reported vuln, then you are speaking their language.
I have a confirmed vuln reported to Apple under their "responsible disclosure" program since 2015. They have yet to fix it or provide credit as they promised. If you thought Apple was a magic company that "does the right thing", then I hope this dispels that myth.
Lots more information about disclosure:
Now, in 2017, he flouts the NDA and acts in the public interest.
For example, if you'd stolen millions of credit cards in 1983 you'd have a special session of Congress dedicated to going after you, whereas now we (rightly) blame Target.
> In 2017 I have yet to hear from FINRA that any action has been taken. I have yet to hear from ZECCO / TradeKing that the issue has been resolved.
This may be a lot of it. In October 2008, a massive security breach affecting all accounts was maybe a solid #2 on their list of problems.
Imagine this conversation were the user to have discovered a parameter which let the user execute trades on behalf of another user.
For example, a realistic exploit would be to slowly buy up a bunch of a random penny stock, and then post an image link to some forum frequented by users of that software with the order "buy 10000 units of stock_x, okthxbye". The order will be executed by users viewing that forum, bumping up the price so you can dump your shares.
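The forum post in that scenario could be as small as one image tag. Here is a sketch of the payload builder; the broker domain, endpoint, and parameter names are all invented for illustration:

```javascript
// Builds the attacker's forum-post payload: an "image" whose URL is really
// a GET trade order. Hypothetical endpoint -- a CSRF-safe API would use
// POST plus a per-session CSRF token, which this attack cannot forge.
function buyOrderImageTag(symbol, qty) {
  const url = `https://broker.example.com/api/trade?action=buy&symbol=${symbol}&qty=${qty}`;
  return `<img src="${url}" width="1" height="1" alt="">`;
}

const payload = buyOrderImageTag('PENNYX', 10000);
// Every logged-in victim whose browser renders this tag places the order.
```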
The police rappel down the sides of your house in full gear and shoot your dog.
Or, per the article, the company pressures you to sign an NDA, and mentions "FBI" to instill fear of rappelling.
TLS1.2 and proper crypto schemes should be mandatory at this point.
var i = document.createElement('img');
i.src = 'https://news.ycombinator.com/'; // any site you're currently logged into
document.body.appendChild(i); // appending triggers the cross-site GET
Then look at the cookies sent over the network.
> var i = document.createElement('img');
So, I clicked over to the Network tab and viewed the headers. The request headers do not include any cookies. If Hacker News were a broker using GET requests to buy shares, and the image URL was such a request, HN would not have known whose account to buy the shares for, even though I'm logged in in another tab.
So, presumably, the hack does not work in Chrome 57.
Edit: Never mind. It's because I have third-party cookies blocked. If I unblock third-party cookies, my HN cookie does get sent.
Is that a common element of an NDA's term?
I cannot verify that number but I am quoting it from a phone call with a Penson engineer.