What Happens When You Send a Zero-Day to a Bank? (privacylog.blogspot.com)
1493 points by ivank 8 days ago | 438 comments





There needs to exist a legal entity/non-profit or company that acts as a shield and/or escrow for these kinds of situations. Basically, as a researcher you can have them deal with the company/organization for you, including dealing with any threats, collecting any bounties due, and such. The company could have domain expertise of the industry, laws, and generally be a force against these companies -- the analogy would be a lawyer.

This is for cases where you want the credit but still want the protections afforded by being somewhat anonymous. Similar to WikiLeaks but more focused on allowing the company or entity to solve their problems and representing fairness on all sides.


There is. Carnegie-Mellon University's CERT. Here's the form for reporting a vulnerability.[1] For this kind of problem, select "Request Vulnerability Coordination Assistance". You can even do this anonymously.

The report isn't public yet, but it's on record. You've reported it to the organization funded by Homeland Security to take such reports. In 45 days, CERT will disclose it to the public.[2] CERT may contact the bank themselves. If you contact the bank yourself, you can cite the CVE number they give you. This gives you some leverage when talking to a bank. "Have your technical people contact Homeland Security's US-CERT at (888) 282-0870 regarding CVE-NNNN" will usually get through to a bank's people. They can't make the problem disappear.

[1] https://vulcoord.cert.org/VulReport/ [2] http://www.cert.org/vulnerability-analysis/vul-disclosure.cf...


I don't recommend submitting to CERT unless you genuinely don't care about the outcome of reporting.

Yes, reporting to CERT is "safe"; you almost certainly aren't going to get sued for doing it. But don't count on CERT coordinating a fix or even figuring out who to report flaws to. It's unlikely that anyone at CERT knows who "Zecco" is.

CERT themselves ask you not to submit to CERT unless your vulnerability fits some specific criteria. "Unresponsive vendor" is one of those, but CERT's fine print says that they prioritize severe, multi-vendor vulnerabilities.

Anyone who runs a bug bounty program can tell you how unrealistic it is to rely on CERT for this stuff: triaging reports for just one vendor is a full-time job. CERT wants to get early warnings of things like OS and platform vulnerabilities. I don't think it's a good idea to report those to CERT either, but regardless, CERT isn't set up to handle your CSRF report in some random website.


Even if CERT doesn't do much actively, you've put the problem on record and can refer to that record when dealing with vendors. Most companies can ignore security vulnerability reports if they choose, but a bank cannot. They have an obligation to report the vulnerability to their auditors. It triggers certain Sarbanes-Oxley reporting requirements.[1] It's easier to fix the problem than deal with the problems of having a logged, unfixed vulnerability.

[1] https://www.a2q2.com/blog/sox/29-cyber-security-and-sox/


Sure banks can. Source: did software security for a number of banks. Big banks are chock full o' CSRFs, XSS, SSRFs, and SQLIs. They get found all the time. For every valid report they get, they get 3 that aren't valid. Nobody's hair achieves ignition over this stuff.

There are two types of financial service organizations: the big banks, and random firms (like Zecco was, before Ally bought them).

There's no point in contacting CERT about a Bank of America vulnerability. CERT won't prioritize the report and won't know the right person to talk to, but also, you're a Google search away from finding out who to report to at Bank of America (spoiler: it's Hacker One). These kinds of things don't happen at BofA, not because vulns are hair-on-fire there, but because there's a process in place to handle them.

There's not much point in contacting CERT about a Zecco vulnerability. CERT doesn't know who to contact and doesn't know how to find them and won't spend the time trying. CERT isn't going to publish an unconfirmed report. All CERT is going to do is go to Mitre; you can do that too, and note the guidelines for what will get you a CVE.

The issue here is just TANSTAAFL. It takes a fuckload of effort to triage and confirm vulnerability reports. There's no magic "this is a real vulnerability" certificate you can get from CERT or Mitre --- or really from anyone who doesn't spend a lot of money maintaining that capability for their own products. If there were, Hacker One wouldn't make half their money selling triage services. :)


I'm pretty surprised at this:

>For every valid report they get, they get 3 that aren't valid.

Because taking the time to write and submit an invalid report is a total waste of the reporter's time. Reports aren't the kind of thing where someone will accidentally say "oh this is a severe vulnerability! here's some cash" when the researcher has submitted bullshit.

So can you talk about "3 that aren't valid" for every valid report? Who makes these? Weird, obscure cranks, of the type who in other industries would be churning out perpetual motion devices? I would expect 80%+ of vulnerability reports to be serious and real - quite different from what you just wrote.


Whether you run a bug bounty or not, there are now hundreds of people in Asia and Eastern Europe who hope to make $500 every time they find a page without X-Frame-Options set. A lot of them have pirated Burp Suite and are hoping to simply cash in on the scanner output.
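For context, the entire "finding" behind many of these reports is a one-line header check; a minimal sketch in Python (the target URL is hypothetical, and a missing header only matters if there's an actual clickjacking angle):

  import requests  # third-party: pip install requests

  def lacks_framing_protection(url: str) -> bool:
      # True if the response sets neither X-Frame-Options nor a
      # frame-ancestors directive in Content-Security-Policy.
      headers = requests.get(url, timeout=10).headers
      has_xfo = "X-Frame-Options" in headers
      has_csp = "frame-ancestors" in headers.get("Content-Security-Policy", "")
      return not (has_xfo or has_csp)

  print(lacks_framing_protection("https://example.com/login"))  # hypothetical target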

I used to manage a bug bounty for a bank. It's nightmarish how accurate this is.

As someone who has been on the receiving end of a bug bounty's mailbox, 3-to-1 sounds about right. We got a ton of invalid "security vulnerabilities" that were essentially either people reporting OAuth as a vulnerability or not understanding how XSS actually worked. Most of these came from teenagers in southeast Asia.

I've also seen lots of reports of some serious vulnerability - "major product X doesn't validate TLS certificates" - where it looks like the reporter forgot they had added their own cert to the OS's trust store.
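A quick sanity check before filing that kind of report: point the same client at a known-bad endpoint and make sure validation actually fails. A sketch in Python (badssl.com is a public TLS test service) --- if the request below succeeds, the problem is your own trust store, not the product:

  import requests
  from requests.exceptions import SSLError

  # self-signed.badssl.com intentionally serves a self-signed certificate.
  try:
      requests.get("https://self-signed.badssl.com/", timeout=10)
      print("client accepted a self-signed cert -- check your local trust store")
  except SSLError:
      print("certificate validation is working as expected")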

super-interesting - thanks.

Here is some context for what I'm about to write: I managed a bug bounty for a sizable arm of BBVA. I have temporarily managed bug bounties for many smaller tech companies. In 2014, I surveyed the industry as BugCrowd and HackerOne were coming into prominence.

Bug bounties, on average, have a signal:noise ratio that is horrible. I advocate for the programs completely, but they require a lot of planning in order to prevent them from becoming overwhelming. I personally know security engineers at Google and Facebook whose full time job is sorting through nonsense bug bounty reports.

What Tom said about people in Eurasia wanting to cash in on bug bounties is spot on. If you start a bug bounty program, expect to get a deluge of nonsense reports for bullshit like having the TRACE HTTP method allowed on an API endpoint. For every one valid report, three to five will be bullshit. Of those that do not reproduce, half will be incomprehensible, and the other half will blatantly ignore the program guidelines or be low-effort spam ("content spoofing"). If you offer a cash reward, expect the ratio of valid to invalid to be closer to 1 in 10.

It's a numbers game for these people. The security research industry is bifurcated between rare, sophisticated and highly paid freelancers who do it mostly for passion outside of their day jobs, and opportunistic amateurs who couldn't write a curl command.


BBVA? I have been trying to get in touch with them regarding an app of theirs (Access Key Extranet by BBVA Bancomer, S.A. https://appsto.re/us/4QEKfb.i) which does not properly validate TLS certificates.

Thanks for this detailed write-up! Super informative.

A lot of vulnerabilities are basically the same thing. From what I've seen, even legitimate researchers will have form reports. It's just an obvious optimization.

From there it's pretty easy to see that "vulnerability spam" would be a thing.


I guess it's various consulting firms that want to expand their market, e.g. KPMG's cyber security division and the like.

>finding out who to report to at Bank of America (spoiler: it's Hacker One)

Hacker One doesn't have a program for BofA. You probably found this dummy (and slightly misleading) page: https://hackerone.com/bofa


> but a bank cannot.

Why not? If anything the story above shows that a financial institution very much can. Post-2008, I believe there are many things a government cannot do, but there are very few things a bank cannot do.


Who is Zecco?

The financial entity the article is written about.

You're suggesting we send our exploits to the company that hacked Tor for the FBI? How much do they pay you?

There was a guy who got thrown in prison for bringing a vulnerability to the attention of AT&T. He was thrown in solitary confinement for over a year. Then his sentence got vacated. What did he do as soon as he was released from prison?

He went on CNBC to argue that independent security researchers should start a hedge fund that short sells the stocks of companies affected by vulnerabilities. https://youtu.be/jxUWRRDdhVI

He seems to be of the opinion that this would be a less risky strategy than bringing those issues to the attention of many companies. He also believes that profit incentives for researchers will serve the public interest, because it creates economic disincentives against big companies having insecure software.


He didn't "bring the vulnerability to the attention of AT&T" he fucking told Gawker first. Furthermore, he's part of a number of black hat hacker groups, including an anti-semitic one despite the fact that he's Jewish.

There are examples of honest people getting fucked over, but this isn't one of them.


Honestly, that guy went to jail for being an asshole more than anything else. If at any point he had attempted to curry favor or at least neutrality, the outcome likely would have been far different.

As it stands, literally everyone who would have been in a position to be reasonable was someone he'd actively worked at pissing off. Not in the sense of someone he'd pissed off in the past, but as someone he was actively pissing off in conjunction with this issue. The moral of his story is that there's room for aggressive incident reporting, but not if one is going to be an insufferable jerk about it.


This is brilliant. And if companies are going to be so resistant this is the obvious solution.

Ha, true. And the hedge fund should make its trades public. "Sir, the vulnerabilities hedge fund just bought options on our stock." "Fuck, which of our software is it?"

Then the company could contact the fund holders for information and they could tell them "we'll let you know about the vulnerability in a month or two".

Company name is TRO LLC.

That is hilarious.


It sounds like you need HackerOne Disclosure Assistance https://support.hackerone.com/hc/en-us/articles/115001936043...

This was introduced 3 days ago https://twitter.com/martenmickos/status/854321634404061185

HackerOne will work with friendly hackers on a best effort basis to verify the legitimacy of a vulnerability, reach out to and verify the identity of an individual at the affected organization, then share the vulnerability with the organization so it can be resolved.

Seems like it addresses most of the problems of educating the organization so they don't threaten you.


Just that they have a name that will immediately be distrusted at any non-tech company. Basically, mentioning "hacking" will make any non-technical CEO shiver and call the lawyers.

It is unlikely that there is a company in the US that has a security team that hasn't heard of H1. They're kind of a big deal. Since they're the ones handling the first contact in these situations, you can safely let their name be their problem; they know how to explain themselves.

The more realistic concern here is that for these kinds of findings --- CSRFs in random web applications --- there simply isn't going to be a contact at the target company, and H1 isn't going to find one for you. That's why they point out they can't promise a contact.


>It is unlikely that there is a company in the US that has a security team that hasn't heard of H1.

You'd be surprised. HackerOne is relatively new, just several years old. Does everyone know OWASP? Almost certainly yes. Does everyone know the BSides community? No.

Anyway, H1 can act as a shield, in this case. On the other hand, companies like WhiteHat or Rapid7 are probably more well-known since they will probably spam your security team on a regular basis trying to sell their products.


Disagree. Any big company can be sufficiently ignorant, but Hackerone and Marten Mickos both have a brand name associated with them, partially due to their funding: https://www.crunchbase.com/organization/hackerone#/entity

I think the disclosure assistance is a pretty clever idea for generating new sales leads, since by definition they will be talking to companies with an actual zero day situation.


OK, let the lawyers handle it. You don't make progress by catering to other people's ignorance and insecurities.

You are right, that isn't how you make progress: it's how you make money.

The name is not ideal. I've heard a story of it also not being great for employees when talking to customs officers :/

But any person looking at their homepage would be a lot less concerned. Impressive logos and a clear story for an enterprise audience.


I would be more supportive if HackerOne just gave me a form so I can do this myself. I do not need HackerOne to take credit for my work and do not need them to represent me. I'm a big boy and can follow my own ethics and take my own consequences.

If other people feel the same way, I'll fork and make a repo. EFF is a great starting point but it is not nearly usable as a HOWTO.

I'm putting my balls on the line by publishing this blog post. Actually I started this blog 10 years ago just to make this page, here is the original page: https://privacylog.blogspot.com/2008/10/pre-announcement.htm...


That's some startup that's trying to become an intermediary for bug bounty programs. They're in a WeWork shared space at WeWork Transbay on Mission in SF. Unless they're hosting a bug bounty program for the vendor involved, they don't add any value.

They're the largest and most important bug bounty service, and have been for years.

WeWork leases offices; they have a shared workspace area in all the buildings, but most of their buildings are private offices.

Apart from the fact that there's a good chance the company you're trying to report to already has an H1 program running, what they're promising to do here is to spend some effort trying to track down security contacts for you. They profit from this, of course: if you give them a good bug, and they facilitate its reporting, the target company is very likely to sign up for H1. But it costs you nothing and might solve a problem for you.

(I'm ambivalent about H1 --- we run a couple H1 bounty programs that existed prior to us taking over security at our clients --- but I don't think it's a good idea to be dismissive of them.)


Personally I think this is a function the FBI should fill. However, there is a risk they would sit on zero days and weaponize them (or give them to another three-letter agency).

I wonder if an org like the EFF could add this to their scope.


Someone in another forum commented recently that there should be a new top-line federal agency whose mission is to promote the security of America's information infrastructure. They suggested it should be established as an adversarial check/balance against e.g., the CIA and NSA.

https://twitter.com/Snowden/status/839168025517522944

Maybe if they were required by statute to accept anonymous submissions and make FOIA-style responsible disclosures after a reasonably short period of time, they wouldn't end up colluding right away.


There already exists an agency with this mission: the NSA. The problem is that they have two often-conflicting mission statements, with the other being to spy on foreign adversaries. Really what this would entail is splitting the NSA into two bureaucracies, with the information-security one then being able to wholeheartedly pursue vulnerabilities that affect American infrastructure.

The offensive organization would probably then still sit on vulnerabilities only it knew about, but at least this would be better than the current situation.


"The already exists an agency with this mission: the NSA."

It's a myth far as I know. I've studied them a long time seeing much conflicting info about this. A declassified, historical document I found at one point about them said their job was SIGINT and COMSEC (just communications security!) for U.S. government. A later provision extended this to protecting COMSEC of defense contractors. The IAD seems to have policy-driven stuff about helping protect INFOSEC in general. There could've been a COTS mandate of some sort at some point but it was clearly toothless.

The NSA is mandated to protect communication security of the defense sector. That's it. Even then, the defense sector keeps asking them to downgrade the security to let in more quick-moving products from the commercial sector that are hacker fodder. They've since started on a program that lets them in after a 90-day evaluation against the lowest standards from Common Criteria. The NSA is the last group that should be responsible for INFOSEC given all this, with the market an utter failure, too.

The groups that have done the most are probably NSF and DARPA for funding strong security with NIST and DISA (esp STIGS) at least trying to do something with hardening guides and crypto recommendations. I prefer reputation-driven nonprofits that are funded with combo of donations, licensing of quality software, and consulting fees. They can't get acquired or be destroyed by changes in government policy.


The NSA is part of the U.S. Dept of Defense, which generally can't operate domestically. I'm not sure how that applies in this case.

NIST is another.

Whatever we might think about this idea, neither FBI nor NIST has the staffing to handle vulnerability reports. I've heard good things about reporting incidents to FBI, but there's pretty much no chance FBI (or NIST) is going to help you coordinate disclosure. FBI works cases. NIST is, for security stuff, something like 10 people.

> However, there is a risk they would sit on zero days

Unlikely. They are still here to protect Americans, in a sense. Stealing money from a bank or a regular business is not on their agenda.

Maybe 10% of vulnerabilities have reuse value for intelligence purposes, but it should be alright for the bulk of them.


> They are still here to protect Americans

That may be the charter of the organization. But the goals of the individual people running the FBI are to 1) be reappointed / not get fired and 2) continually expand their budget / power. Given US politics, 1 and 2 are not always congruent with "protecting Americans," especially in the short term.


They were perfectly happy to use the vulnerabilities exploited by Stingray after all.

Don't mix up the existence of incentives with the idea that you have insight into what people's goals are.

Have you ever interacted with law enforcement on any professional basis?

They aren't like that at all.


1) yes, but with the "troops" and lower level managers.

2) the Director of the FBI (and other high-level managers) is much more of a political bureaucrat than a LEO.


While it may be true that in this particular instance the FBI might act benevolently, the idea was that it would be nice if there were an organization you could go to with any zero day bug. Even if the FBI is not mismanaged and always tries to protect Americans, you could easily imagine a scenario where someone reports an exploit in an OS that lets anyone remotely install a key logger. The FBI wouldn't be a good organization to report this to because they may want to use it to track down criminals, which, they would no doubt feel, would greatly outweigh the cost to Americans that the zero day represents.

The EFF seems like a good choice. In general you would need to pick an organization that does not have a vested interest in using exploits.


> While it may be true that in this particular instance the FBI might act benevolently

Indeed. Didn't the FBI effectively purchase a zero day to break into the iPhone of the San Bernardino shooter? Didn't they also then not disclose said zero day to Apple?

There's no way that any LE agency can be trusted with this responsibility; I'm not convinced that it can be done by the federal government at all. EFF seems like a reasonable choice, but even non-profits have the potential to be corrupted/subverted (and operating as a dump for zero days has the power to corrupt, for sure, regardless of how moral your organization claims to be on its website).

This definitely falls under the umbrella of hard-problems-in-politics-that-will-not-be-solved-any-time-soon


What if there were multiple non-profits that can keep each other honest?

What makes you say that? Plenty of evidence points to upper management at the FBI and other 3 letter agencies being more interested in power brokering than honesty and serving the public.

So what happens if you find a zero day vulnerability in a Russian bank? Not everyone on the internet is from the US.

Then you don't report that to the FBI, it's not their jurisdiction.

It's good to have this as a general reminder: "Not everyone is from the US", people tend to forget that so often :)

It'd be tough to form an international legal entity anyways.

How about the National Institute of Standards and Technology? My impression is that they are less interested in offense compared to the NSA, and more serious than possible private entities like PCI.

You're thinking CIA and NSA.

And DEA, DHS and many, many more.

> The National Security Agency is now able to share raw surveillance data with all 16 of the United States government's intelligence groups, including the Central Intelligence Agency, Federal Bureau of Investigation, Department of Homeland Security and Drug Enforcement Administration.

refs: https://www.engadget.com/2017/01/12/obama-expands-the-nsas-a... https://en.wikipedia.org/wiki/United_States_Intelligence_Com... http://www.reuters.com/article/us-dea-sod-idUSBRE97409R20130...


Professional association of security researchers?

Come up with a good set of guiding principles for members. This would help avoid waiting 7 years and then sticking it online. Not criticising, I'm saying the situation here is pretty screwed up.

Members pay dues, the association provides backing. Company threatens to call the FBI and the association is the one they can deal with.

An organized group can help to provide the needed political pressure so that a properly disclosed vulnerability doesn't ever lead to the FBI and trumped up charges.

A respected group can lend credibility to a researcher. A bank may not give 2 shits about even a well respected member of the community. They will care if it's a group well known for finding and disclosing vulnerabilities.

This seems like an easier problem than the general case of software engineers because the community is smaller and you don't have the conflicting interests of "I can negotiate better on my own". Plus things like membership can be handled more easily, start with a small group of people who absolutely should be members. Extend via application and invite.


Thank you. This seems like a great idea. I would be more open about it. Rather than fees and membership, I would have a list of contributors and cross-checked publications. Basically like a resume. I.e. if you want to announce a vuln we would help you timestamp it, and optionally have a person you trust vouch for its authenticity. And if having that timestamp and cross check is valuable to you then you can brag about it when you contact the vendor.
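The timestamp half could be as simple as a hash commitment; a minimal sketch in Python, with a hypothetical filename --- you publish only the digest somewhere public and dated (a tweet, a mailing list post), then reveal the full report later, and anyone can verify the two match:

  import hashlib, json, time

  def advisory_digest(report_text: str) -> str:
      # The digest commits you to the report's exact contents
      # without revealing them before disclosure.
      return hashlib.sha256(report_text.encode("utf-8")).hexdigest()

  report = open("zecco-csrf-advisory.txt").read()  # hypothetical filename
  record = {"sha256": advisory_digest(report), "created": int(time.time())}
  print(json.dumps(record))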

I'd like to work in an organization like this. I'm not sure if anyone would want to join. It seems like everyone else is either completely independent like my own IJDGAF strategy or they are full corporate like HackerOne and other brokers.


> Professional association of security researchers

It's next door to the military intelligence folks.

(ba-da-bump)


Or you can just post your findings to full disclosure and call it a day.

What incentive, besides good-boy points and experience/publicity/etc (for more funding), do researchers have to do this?

In this case, the researcher cared because they wanted the bug fixed. Posting the vulnerability publicly risks having it be exploited maliciously, but it also maximizes the likelihood that the bug will actually get fixed, because it's hard to ignore a public vulnerability in your service.

If you don't care about your reputation, you post anonymously. An anonymous full disclosure post is a good way to report a bug without dealing with drama about your "incentives".


I sat on this disclosure for ten years because family told me FBI would go after me. Good advice or bad advice, that was ten years of my life that could have been better spent.

One time I found a photo printing website that made all photos public. They refused to fix it, I fully disclosed, and it made the front page of Slashdot. Then the company had to change its name. Maybe it was fun or maybe I got credit, but most importantly it moved something from my TODO list to my DONE list. This is very important to me.

I have a 0-day on Apple, not very exciting. I reported it in 2015 and they still have not fixed it. Having this in my inbox is a waste of the time I spend thinking about it. I will FD it.

My experience is that security researchers do not make money unless you run script-kiddie tools against stupid bounty programs. When I interviewed for a "security" job all they would ask me about was Microsoft certifications and user access testing. I asked if a TLA offer letter counted as sufficient reference and he said no. At that point I immediately switched from an MS in CS to an MS in Finance and an MBA, and my life has improved (while still being technically challenging and academic).

So technically my disclosure policy is IJDGAF with two extra weeks as a gentleman's favor. Maybe I'm the bad guy, but that's why I'm here for the lovely discussion on YC. Thanks for sharing.


You have never heard me say IDGAF is an unethical policy. If you've paid attention to me here (I don't know why you would), all you've seen me do is point out how Orwellian and coercive the term "responsible disclosure" is.

For a CSRF that you didn't use someone else's account to exploit and that you've told nobody about, and assuming you have no acquaintances who might screw you over by abusing the bug, 30 days and then Pastebin seems like a decent answer.

If any of your friends are shady, just forget about the bug.
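(For reference, the vendor-side fix for this class of bug is cheap. A minimal synchronizer-token sketch in Python --- names hypothetical, framework plumbing omitted:

  import hmac, hashlib, secrets

  SECRET_KEY = secrets.token_bytes(32)  # per-deployment secret

  def csrf_token(session_id: str) -> str:
      # Bind the token to the session so an attacker's page can't guess it.
      return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

  def is_valid_csrf(session_id: str, submitted: str) -> bool:
      return hmac.compare_digest(csrf_token(session_id), submitted)

  # Embed csrf_token(sid) as a hidden field in every state-changing form,
  # and reject any POST whose field fails is_valid_csrf().
  sid = "hypothetical-session-id"
  assert is_valid_csrf(sid, csrf_token(sid))
  assert not is_valid_csrf(sid, "forged-value")

Every mainstream web framework ships some variant of this.)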


It's "broken window" community policing.

The more unpatched vulnerabilities there are in existence, the more lucrative it is to be involved in any part of the computing crimes community.

It's like reglazing a broken window in your neighbor's garage at your own expense, because you don't want burglars to see it and start casing other properties in the same neighborhood based on the conditional probability that a visible broken window indicates a higher incidence of other exploitable vulnerabilities.

It's also important to pursue the very easily exploited vulnerabilities, because when you get rid of all the low-hanging fruit, the people who can't already climb the tree won't survive long enough to learn how. You're cutting a lot of bootstraps so that immature criminals can't pull themselves up by them.


This is correct, and morally the right thing to do if you're interested in cutting down on cyber-crime, but it unfortunately falls under the incorporeal "good-boy points."

Perhaps there's a breakdown of definitions here. I've lumped bug-bounty hunters and grey hat hackers, along with actual researchers, under "researchers." Stop me now if this isn't who you're referring to.

Now if it is, this route of action goes against the researchers' monetary incentives. It is in their wallets' interest to have criminals validating the existence of their work, as well as to sell the direct findings of their research, including even minor exploitabilities, which is a given.

If researchers were to constantly give away their work (on even little issues) it would directly lower the cumulative value of cyber-security research, i.e. their more expensive projects now sell for less.


RE Broken Window.

The FBI / NCFTA invited me to speak about this vuln because it may have affected many banks at the time. (Please stop laughing.)

They called me to cancel. "Now we're all focused on this big DOS. Do you know anything about DOS that's happening today you can help us with?" I asked if the DOS is affecting the stability of the system or actually breaking anything. And they said yes it is bringing the banks down and affecting revenue.

You can read into this anecdote as you wish.


This was fascinating. Can I read more somewhere?

https://en.wikipedia.org/wiki/Broken_windows_theory

New York City based their increased focus on petty crimes on it. I don't think it is useful as the basis for a model of policing, though.

In some ways, it is an embodiment of the slippery slope fallacy, where if security is not perfect, it's worthless, in the same sense that a roof with one leak in it is worthless, because that one leak becomes the beachhead for further damage to the roof.


Ahh! I just realized, I read about the New York broken windows situation in Malcolm Gladwell's excellent book (in my layman opinion) The Tipping Point.

From the original article in 1982:

Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it's unoccupied, perhaps become squatters or light fires inside.

Or consider a pavement. Some litter accumulates. Soon, more litter accumulates. Eventually, people even start leaving bags of refuse from take-out restaurants there or even break into cars.

Broken Windows, The Atlantic Monthly, March 1982

--

The way I would put it, based on my visits to my home town of Zagreb, Croatia:

Apathy is contagious.


There's no analogy - what you're describing is literally a lawyer.

The EFF, ACLU, CBLDF all exist even though they act as lawyers for many people. Safety in numbers, plus trusted reputation.

Why didn't they just give him $10K to shut up and then go fix the issue? It would be cheaper than all the lawyer fees.

Because if the vulnerability turns into a problem down the line and ends up in court, the record of the payment could come out during discovery, which could be made to look bad for the defendant firm.

Have you heard of Elisabeth Kübler-Ross's '5 stages of grief' model that summarizes people's typical responses to bereavement? EKR argued that people generally go through a cycle of denial, anger, bargaining, depression and acceptance. IME this is a good rule of thumb for how people typically handle any kind of unwelcome news.

In this case:

  o There is no such problem
  o Grr why did you hack us I'll call the police
  o How about you take this pittance and STFU
  o We're just trying to run a business and you ruined everything
  o OK we'll fix it and alert our customers

Vulnerabilities like these aren't likely to cost anything close to $10K, even if they stay open for years.

But stipulate that there's some number here, and the answer is: because nobody in management at Zecco ever built a plan for how to handle incoming vulnerability reports, and so nobody who got the report was empowered to do anything but escalate the issue --- and halfheartedly, at that, because nobody in management at Zecco ever built a policy that ensures anyone cares about vulnerabilities, so this is for them the moral equivalent of a WONTFIX.

How diligently would you escalate a WONTFIX?


The person I was in contact with was the CTO, Michael Raneri, and the General Counsel. And the CTO was promoted to CEO while the bug was still unfixed. And then he cashed out and sold the company.

Escalation enough?


What do you want me to tell you? I agree with you: they don't care. It didn't appear to matter for their outcome. They WONTFIXed a CSRF. Are you shocked?

On my phone call it was very awkward because I was spending lots of time helping them, sending multiple PoCs. But I think it improper to ask for money. Maybe it would count as blackmail. So I didn't. Also their ONLY goal, as per the article, was to make me STFU. So this is why the talks quickly got boring.

Scare people away from looking. Security by obscurity.

Spending money is part of the point, since that's the way one gains power in large organizations. Even better if one can spend lots of money doing something self-evidently stupid and also rather mean.

Large firms wouldn't survive at high enough rates to dominate public life as they do, if they weren't underwritten by the state at every turn.


> the analogy would be a lawyer.

You can use a lawyer for this, this is a standard piece of advice for other kinds of bounties-- e.g. reporting criminal tax evasion.


There's a service called Synack that does something similar to what you describe

I was thinking of exactly this as I read the article.

In a situation like this I'd probably directly ping taviso or someone else from the Google Project Zero team. Their contact information (email, G+, twitter (DM)) is not impossible to get at.

From there, I could get advice about next steps (the Project Zero team are going to know a few people) or maybe they could run with it themselves (depending on the bug; I don't know what the response would have been in this case).


Thank you, going forward this looks like a great start. And I have always had great experience reaching directly to Googlers when needing connections or technical advice.

> I have always had great experience reaching directly to Googlers when needing connections or technical advice.

Mind sharing how you've achieved this? I haven't had the same level of success, with my couple of attempts (thus far) to try to resolve various issues falling flat.


CERT and the Zero Day Initiative handle disclosure for you, in some cases.

https://vulcoord.cert.org/VulReport/ http://www.zerodayinitiative.com/about/


Both of these organizations might be "helpful" if you have a new Internet Explorer vulnerability, but neither will likely help you with a CSRF bug in a bank website.

Still, thanks for sharing. Research ethics is always something I'm interested to read.

How about Google? If you discover a big vulnerability, perhaps? Someone ought to have some connection somewhere. Ask around among your colleagues.

What, more government spending and taxes and regulation? That's anti-American! Are you a traitor? Get the government out of our lives! Let the free market fix this. If you don't like it, just don't buy it. We don't need more acronyms! Just a bunch of bureaucrats! Drop more bombs!

/sarc


On a similar, but separate note, my bank launched a new version of its online banking platform. From launch I noticed it opened my accounts in a new tab while leaving my credentials (password and all) in the sign-in form. Not so bad when signing in from home - horrific if you're signing in from a public computer. I tweeted to the bank and spoke to someone on the phone about it. It's been 3 months and the bug is still there.

[EDIT] I decided to log in today just to see if it's still there (was a couple days ago), and it's finally been patched. If I had used a throwaway I would gladly let you guys know the bank, but I won't since it's trivial to find out who I am from my handle.


Tell us what bank so we can avoid them.

So far, I count three separate replies to this article along the lines of "I also found my bank doing so-and-so thing insecurely, but LA LA I'm not going to tell you which bank it is!" These kinds of comments don't help anyone--you might as well not post them.

Yeah I genuinely don't understand the point here. Who is protecting what?

So the article mentions the threat of retaliation against the security researcher, and you are surprised people are afraid to come out publicly?

I read more in the article so I am updating my comment - the FBI's involvement is surprising and alarming.

When in doubt, people, call your attorney.


Who just has an attorney sitting around who is competent to handle such things? I wouldn't know who the fuck to call if I found something on my bank's website.

I am one such attorney.

Can I get your number?

My sn is my name. I am the only lawyer named Liberty that I am aware of.

More easily, my profile on my firm's website: lawyernamedliberty.com. I'm fairly easy to get in touch with.


So just assume it's your bank, and do whatever you would do next.

Oh, new bank? Just assume it's your bank, and do whatever you would do next.


This is still useful. You can go see if your bank does the described behavior. If not, you're not affected by that particular thing.

This deserves more than an upvote. This is exactly the right attitude. It puts the incentives in the right place and will let the market do what she does best: work.

> let the market do what she does best: work.

Hm, I recall the Comodo hack. I think Comodo was hacked twice or more that year. It won many awards and continued leading the CA space. The market did not work, apparently...


Well, in a way, it did: people voted and said "we don't care la la la what did you just say?".

The security market is working exactly as it was designed and evolved to. Back when high-assurance started, the Black Forest Group --- execs of big companies convening on INFOSEC --- told one of the field's founders that they thought vendors would refuse to sell them highly-assured software. The reason: they suspected vendors intended to make extra profit two ways --- cutting QA for immediate profit, and selling the fixes for later profit. This proved true, and combined with lock-in strategies it was essentially checkmate for lots of companies.

The other end is the buyers. Most of them don't know what to expect from security or how to evaluate it. Most attempts to solve this have failed. They've been conditioned to expect constant hacks, crashes, and data loss. So, they see Comodo etc. get hacked and shrug. They'll usually stay if their end of whatever they bought works. The sector that will pay for highly reliable or secure software is probably under 1% of the market. Enough companies keep forming to do the real thing, but they are a tiny, tiny few, struggling to justify the extra costs or reduced features that higher security requires.


Better yet: Short their stock, then write a scary blog post about the problem.

Just curious, what would the legal implications of something like that be? It seems like you're still benefitting from criminal activity that you enable, but what would the specific charge (if any) be? And any examples where people have tried this?

Although I guess it could help align customer and business goals, since no one wants to lose money


Not at all. You're making bets based on public information that only you have realized is meaningful, before informing the rest of the public, to make money off that discovery. Quite a few folks make a lot of money this way and (nearly) everyone benefits: https://www.bloomberg.com/news/articles/2015-03-04/how-a-25-...

Maybe but I, personally, would not want to take the risk that I might need to defend that proposition in court.

Nothing can protect you from the lawsuit being brought, but it will likely be thrown out. That's the same with anything, and whether you short a stock or not.

If you short it, at least you might make some money to offset any pending lawsuit. There's plenty of examples of people doing the same thing to fall back on, such as the guy who found out a newly listed company wasn't actually real[1].

1: http://www.npr.org/2015/01/30/382587945/winning-at-short-sel...


And even more generally: any form of profit will attract the possibility of defending yourself in court.

IANAL but there is no risk that you may have to defend that proposition in court as long as you don't actually exploit the vulnerability and simply point it out.

It's public information.

Now if someone who works at the bank had told you about it, you'd be in a lot of trouble.


IANAL either but my understanding is that you can be prosecuted under U.S. law for poking around on servers in any unconventional way. The text of the CFAA forbids "unauthorized access" or "exceeding authorized access".

I'll admit that viewing the source code and noticing this link would be a stretch, but I wouldn't necessarily expect it to be a slam dunk for the researcher, especially if he had assented to the site's ToS (and since he had an account, it seems that he had).

At this point, I imagine he could be in all sorts of (primarily civil) trouble for the disclosure that he just made. He may be protected under some type of financial whistleblower law, but I wouldn't hold my breath.


"The text of the CFAA forbids "unauthorized access" or "exceeding authorized access"."

BOOM! And they've been harsh on hackers for a long time. So, the vulnerability must not require violating access controls or system integrity to be safest. Hackers should be in the clear if it was simply noticing something in HTML/HTTP or whatever that indicated insecurity. An example might be a breakable cipher-suite or handling sessions improperly.


It sounds awfully close to what got weev sent to jail.

This is a good parallel and you're definitely right. However, weev was charged [0] on 2 counts:

1. conspiracy to access a computer without authorization

2. fraud in connection with personal information

This is because Goatse Security not only noticed the vulnerability itself, but because they wrote and executed a script called the "iPad 3G Account Slurper" to iterate over ICC-IDs, returning the associated email address for each one.

Executing the script against AT&T's servers probably is a bona fide violation of the CFAA, not just a conspiracy, but I would guess it's simpler to bring the conspiracy charge since you don't have to get into the nitty gritty of actual requests made, etc.

According to the complaint, they proceeded to email a handful of notable people whose emails had been harvested, including someone on the Board of Directors at News Corp. All of these contacts appear to be media outlets. The Gawker article also lists some of the people whose email addresses were extracted this way (without disclosing their emails).

I'm assuming this direct communication to journalists and/or execs at journalism outlets gives rise to the fraud with personal information charge.

Overall, I don't think that weev did anything that I wouldn't necessarily have done if I were in that situation (trying to drum up attention and make a name for his consulting firm), but it's different from this disclosure because as far as we know, this researcher did not actually exploit the vulnerability and has not obtained or disclosed any information from doing so.

Again, not a lawyer.

[0] https://www.eff.org/document/criminal-complaint


Would this really be considered public information, since the existence of that vulnerability is not known to the public or literally anyone else until you publish that blog post?

That's not really true; anybody can sue you if they want, whether or not you're in the right.

I agree that making bets by noticing public information earlier is 100% okay (and in the case of Lumber Liquidators, a better outcome for almost everyone).

But would this case with the bank be different because the vulnerability, unlike formaldehyde, could be actively exploited? Encouraging a stock price to fall because of bad practices seems alright (like the Lumber Liquidators example), but if in the process you become an accessory to smaller-scale fraud against individual account owners, is it still "alright"?


That question has nothing to do with shorting stocks and everything to do with vulnerability disclosure: http://www.blackhat.com/presentations/win-usa-04/bh-win-04-g...

There are law firms working with hedge funds that specialize in doing exactly this when they are about to file a class-action suit. It's possible to be criminally charged if you know that the information you are spreading is false. But other than that limited circumstance, you are free to trade on any information you have about a company that you did not illegally obtain from an insider. Even in the case that the information was obtained from an insider, to convict you, the government must be able to prove that you knew that the insider both a) received a benefit (usually money) in exchange for the information, and b) breached their fiduciary duty by disclosing the information.

That said, technical glitches tend to not affect the fortunes of companies nearly as much as we (the HN crowd) think. Tradeking had the glaring vulnerability outlined in this article for years, and they are doing just fine.


Great point, I think the tech crowd may overestimate the cost of glitches, relative to everything else at play in a business.

I think the point I'm getting hung up on is that the bank's stock price could drop for two reasons: bad PR due to the glitch, and/or falling financials due to fraud perpetrated as part of the glitch. I can completely understand a hedge fund trading and making money off the bad PR. But if (hypothetically) the bank lost a ton of money by hackers liquidating user accounts or, worse, making leveraged bets [before everyone checked for that sort of thing ;)], and the hedge fund knew there was a reasonable chance that the malicious activity would occur based on the newly disclosed information, would they have liability there? (from the theft/fraud perpetrated against the bank, not the drop in stock price)


I believe that responsible disclosure is a courtesy to the vendor and its customers. Afaik, there is nothing in the law that requires it. Exploiting vulnerabilities like the one you are discussing here yourself certainly would be illegal, and you could possibly be implicated in a conspiracy if you disclosed the vulnerability solely to one person or group that you knew would exploit it (so "I told my Russian hacker friend about this..let's short the stock before he nails them with it!" would probably be a conspiracy case, whereas a press release or HN posting would not be).

But general public disclosure of a vulnerability, and/or trading on the anticipated effects of public disclosure, is not illegal. It likely won't win you friends in the IT community, but it falls short of an indictable offense.


The Lumber Liquidators short-seller is quite a famous example of this strategy being executed.

Before writing his blog-post, he short-sold a bunch of Lumber Liquidator stock and made tons of money during the fallout.


Martin Shkreli claims to have made a lot of money by shorting pharma companies ahead of their FDA results - he would read their studies and make reasonably accurate predictions as to the outcome.

Shkreli has shuttered two hedge funds (Elea Capital Management & MSMB Capital Management) when he was unable to cover shorts and put options after the stock price moved away from him. He is also currently awaiting trial for securities fraud. So I would take his comments with a grain of salt.

This is why I said "claims". He no doubt failed at some of his shorts. On a livestream he said he made all the money he still has on his companies, not trading. The strategy is still relevant to the discussion, though.


I posted this downstream, but it's happened and there weren't charges filed.

http://www.pcworld.com/article/3155990/security/stock-tankin...


Great link, thanks for sharing. The quote that stood out to me was “My issue was that patient safety wasn’t front and center.”

I don't have a problem with MedSec making money by shorting St. Jude's stock (that seems to align incentives to take care of security issues as early as possible). But if MedSec publicly disclosed specific, exploitable vulnerabilities (I'm not sure about specifics from the article), they shouldn't be able to hide behind the "doing what is best for the consumer" argument. It's definitely a clever business hack, and that's alright, but the fake sense of moral superiority isn't.


Attempted stock manipulation, probably

This has been done!

http://www.pcworld.com/article/3155990/security/stock-tankin...

A company discovered vulnerabilities in some medical devices, then shorted the stock of the company before disclosing them.


Alternatively, publish it in an obscure place online, get proof you published it in an archived medium (e.g. Gmail or Archive.org), short the stock based on that now-public information, and then reveal it again in a way that will get stock-smashing attention. That's the hypothetical model I came up with when trying to figure out how to incentivize apathetic but publicly traded companies to care about security a bit. You can even follow up offering them security consulting, but don't expect a yes haha.

I feel like someone would try to sue over such an action, but would they have any ground to stand on?

And get sued for libel and market manipulation.

The idea of the banking system being subject to market forces is nice.

I don't think it's HSBC, but they do similarly horrific stuff. Almost all banks have a truly terrible online service.

I'm a happy user of N26. I very, very highly recommend it to all european customers. I'm never dealing with shitty bank service again. https://n26.com/ (Email me if you want a referral invite).


N26 had some of the worst security until a researcher came along. See https://media.ccc.de/v/33c3-7969-shut_up_and_take_my_money

People often confuse a lack of published security issues with the existence of strong security. It was the rallying cry all along of techies opposing Apple's security-based advertisements.

I use HSBC for personal and business, can confirm personal is bad but HSBCnet (business) is the worst software application I've ever used, period. http://john.je/k7X2

Wells Fargo and Schwab seem ok in my experience. Wells Fargo even updated their site with slick new UI and menu options are actually findable. Amazing!

It was discovered today that Wells Fargo passwords are case-insensitive:

https://www.reddit.com/r/personalfinance/comments/66n4li/i_j...


Just today...? Chase Bank has been case-insensitive for several years now. I even contacted them about it when I found out and they outright told me they had no plans to fix it.

Tons of companies do this because it substantially diminishes the number of support calls/complaints that they get related to unsuccessful logins.
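A sketch of how that's presumably implemented --- an assumption, since none of these banks publish their scheme: the password is case-folded before hashing, so every case variant verifies, and each alphabetic character gives up a factor of two of keyspace (8 letters means up to a 256x easier brute force).

  import hashlib, hmac, os

  def hash_password(password: str, salt: bytes) -> bytes:
      # Case-folding before hashing is what makes logins case-insensitive.
      return hashlib.pbkdf2_hmac("sha256", password.casefold().encode(), salt, 100_000)

  salt = os.urandom(16)
  stored = hash_password("Tr0ub4dor&3", salt)

  # Every case variant of the password now verifies:
  for attempt in ("tr0ub4dor&3", "TR0UB4DOR&3", "tR0ub4DOR&3"):
      assert hmac.compare_digest(hash_password(attempt, salt), stored)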

It's 2017, and I still have online financial accounts that are "secured" by short numeric PIN, so count yourself lucky that you can at least use some letters in your password.

C-mp-t-rsh-r-: your website's trash and you should be embarrassed with yourselves.


To be fair, until sometime in the last ~2 years, Schwab PWs were alphanum case-insensitive 6-8 characters only.

So I know this also :) I am a techie that made a bit of money, and it was kept at Schwab. I made a big enough issue of it that they arranged a call with their security team. The call was good and they explained the reason why (legacy system) and their plans to update / fix it. They also addressed my questions about password storage (hashing/salting) --- they did it correctly. They showed a great deal of knowledge and competence in their job, such that I was willing to leave my money. I applaud their willingness to have the call. My dealings with them have always been pleasant.

I sent feedback about the WF interface and it was actually addressed within 2 weeks. I was floored.

I clicked the transfer money button from Chrome and it logged me out --- a bug that persisted for months if not years. I called them and it still took them forever.

Wells Fargo seems ok in your experience??!! Is that a joke? Your account is wide open to any of thousands of employees who conspire against you. You're ok with that?

It's people like you who keep companies like that in business and encourage such atrocious activity.


What I love most about n26 is the lack of foreign currency transaction fees.

Then you're going to adore Revolut: http://revolut.com/

Do they have direct debits yet?

Who logs into their bank from a public computer? Genuinely curious.

And the kind of people that don't have access to anything else.

My bank (arguably) condones use from public computers by asking me if they should "trust" the computer I'm on.


Right, most public computers I've seen would be trivial to bug with a key logger. Though with 2fa, I suppose it wouldn't be quite as bad (but then again, those using public computers might not be using 2fa).

When on holiday this is quite common. With 2-factor auth this is fine.

There's plenty of laggards who don't have home internet and only browse through e.g. a library computer. Some of them are probably doing banking too, given the recent trend of preferring online transactions

> laggards

Or, you know, poor people.


For anyone as confused as me by the timeline bit: the author is called Will(iam) Entriken. So "Will," "Entriken," and the first-person and passive-voice sentences all refer to the same person.

Yes, this was needlessly confusing in the blog post.

Thank you, I have posted an update to address this, including credit to you.

Kudos to the author, and hopefully they don't get sued as a result. This bullshit with corporations trying to cover up security vulnerabilities (rather than fix them) needs to stop.

"Sign this NDA or we will send the FBI to arrest you because you found that our banking website's security was completely fucking broken and told us about it." Jesus fucking christ.


No one should independently contact a company about this type of issue without first obtaining competent legal advice. And I do mean competent advice; most lawyers are very technically illiterate and will not be sympathetic, let alone familiar with the relevant areas of law.

The researcher is lucky that TradeKing believed their NDA trick was sufficient. Even if the case here is weak, and I wouldn't necessarily assume it is, it would still seriously damage the researcher's life.

Here's how it goes when you get sued by a big company. Their lawyers essentially have a field day doing everything possible to obstruct and delay the process so that they can maximize their time on the corporate teat. It will go on for years; they won't mind because it's business as usual for them, and they're getting paid big bucks to torment you. Your life will be ruined: assets seized pre-emptively, reputation and credit destroyed, inordinate quantities of time consumed by legal research and tedious paperwork, struggling (if not immediately blatantly failing) to keep your incompetent counsel paid at $250/hr and meet the retainer, and eventually failing to file some document or pay some fee, which will cause the court to enter a default judgment against you and permanently confiscate everything you own, leaving you with the albatross of a massive outstanding judgment waiting to be enforced, bank accounts garnished any time you get any money, etc. And that's the short version!

And then guess what -- if, by some miracle, you don't lose in the first round, this whole process will repeat as they file appeal after appeal. Hunker down because the proceedings will last at least 5 years.

The corporate lawyers will be able to justify all of it to their clients without blinking an eye, who probably forgot that they even asked them to sue you. Everyone at the company and the law firm will go home and sleep soundly on their piles of money, and you'll have learnt your lesson that trying to stop the subterfuge of an online trading platform is a terrible offense.

Good reading: http://www.nissan.com/Lawsuit/The_Story.php

IANAL.


My father is a (technically literate, he used to be a database architect) lawyer, and the general advice he gave me was that if you are in a situation where you have a critical vulnerability you should disclose it through a lawyer anonymously -- your identity is then protected under attorney-client privilege (assuming you haven't just asked your lawyer to commit a crime by disclosing it).

IANAL though.


Interesting idea. Thank you for sharing. Is the goal just anonymity? Technically we already have solutions for anonymous disclosure of documents. Are there other benefits?

A technical solution to the anonymity problem would probably work just as well (assuming it wasn't backdoored), though the protection against a lawyer disclosing their client is legal rather than technical (so the "splatter" from a company's over-reaction is more likely to be smaller). You also get the additional benefit of the company probably taking a disclosure more seriously if it comes on a law firm's letterhead (unfortunately).

Here's what really happened. I talked with my doctors and realized that I only have so much time left to live. Writing this article was one of the items on my list for putting my affairs in order. So short term this was a good decision.

Otherwise, in court I'll be happy to defend myself. If it is necessary to spend time to defend yourself then that is a blessing. I have successfully sued the government (the US Army and Veterans Affairs, no less) http://www.gao.gov/docket/B-413723.2 when they did things wrong. Just be persistent and be right. Then we came out with a nice settlement. Sorry, the GAO used to publish full-text docket outcomes, but I don't see this one there anymore.

Fuck Nissan. (Can we curse here on HN?) Because their cars suck and because of this case, which I am well aware of. The sad thing is that Mr. Nissan spent so much money on his defense. I should hope that he would be able to be more effective with less money.


How much should one spend to report this? Even the first consultation with that type of lawyer would cost a few hundred dollars. If the threat is that real, why not just close the account and go somewhere else?

...I'm worried about how much incentive there is to become a black hat. Either you risk prison, or you earn a lot of bitcoin.

Yeah, the incentives are perverse. This is more a symptom of how our legal apparatus functions than of the law itself, because in theory, going through the legal process should be quick and, if not affordable, at least reasonably doable for the individual or small business.

Companies that run formal bug bounty programs (either directly or through a third party like HackerOne) show some recognition of this and some goodwill, especially those that include payouts of five figures or more, but those companies have to be careful that they don't accidentally create an environment where bidding wars between exploiters and companies are legitimized.


> careful ... bidding wars

Why not? Yes, prices can become high, but isn't that simply what the researcher's work is worth? If the company doesn't want to purchase expensive bounties, they can either reduce their exposure (less legacy code, fewer APIs, more firewalls) or use stricter security rules.

I'd feel safer if LastPass' bounty was higher than the value of the assets I put in that vault. If the value of a single vault (mine, actually) is $10,000 and the bug bounty is $2500 (which it is), how can we persuade discoverers to sell to LastPass?


Yes, this is why I don't use LastPass. As soon as I saw this I realized they must be a joke.

Thanks for the support! So glad to see supportive people that recognize what is going on here.

The NDA is not a valid contract because there is no consideration. For a contract to be valid each party has to gain something. This is why many contracts include a token consideration of $1. This one didn't, so it's invalid.

I definitely think you're correct. In the future you could probably save yourself the hassle of the "Are you a lawyer?" questions by dropping the phrase "almost certainly" right before "not a valid contract". Most attorneys I know are super reluctant to call a contract invalid without some sort of qualifying language.

This contract might actually be egregious enough to warrant an unqualified declaration of invalidity, in which case you should go the other direction and overstate your case with a conclusory statement and some word like "clearly" or "patently". "This contract is patently invalid!" and then explain why.


Despite the fact that I'm not a lawyer, I happen to know quite a lot about contract law because I was once involved in a contract dispute. That provided quite a good education on this particular topic.

What really made me laugh was "Are you an IP lawyer?"

This isn't even an IP question!


Consideration is a common law concept as far as I can tell. As someone unfamiliar with how it came to be: Why was consideration introduced? What's the rationale, the goal behind it?

IANAL

A contract is what lets you sue someone over a private transaction. That's what it does, that's all it does. If for whatever reason you're not willing to bring a contract dispute to court, then your contract doesn't do anything and you wasted your time writing it. Contract = right to sue for breach of contract.

In order to sue someone, you need to be able to describe what damages have been done to you. The goal of a lawsuit is for the responsible party to 'make you whole,' i.e. pay you back an amount equal to the damages done to you.

In a contract dispute, the 'damages' of breaking the contract is equal to the 'consideration' of fulfilling the contract. In other words, the promised consideration is the actual thing that you can sue over.

If there is no consideration, then there are no potential damages, and there is no potential lawsuit. And since the only point of a contract is to enable a lawsuit, a contract that doesn't do that isn't a contract.


"the 'damages' of breaking the contract is equal to the 'consideration' of fulfilling the contract"

This is categorically incorrect.

Damages for breach of contract are supposed to put you back in the position you'd have been in had the contract been performed. It's not related to the value of the consideration.

Consideration is one of the things needed to make a contract binding in English law (along with offer & acceptance, and "intention to create legal relations").

Jurists still debate the rationale for consideration, but the best answer I've found is that a contract in English law is seen as an exchange or a "bargain". There is no such thing as a gratuitous contract; a donation is not contractual.

By comparison, a contract under French law is based on "consent of the parties" and the theory of individual autonomy. There's no requirement for consideration.

In a "mutual NDA", consideration is easy to find; each party agrees not to disclose confidential information disclosed by the counterparty.

Another way to make an agreement binding without consideration is to sign it as a deed.

https://blogs.warwick.ac.uk/anneprudhomme/entry/consequences...


> In a "mutual NDA", consideration is easy to find; each party agrees not to disclose confidential information disclosed by the counterparty.

I don't think mutual NDAs are typical. Typically, you sign an NDA prior to receiving information. So the consideration for signing the NDA is receiving the information that you agreed to not disclose. If you already have that information, then that's no longer valid consideration.

In this case, the reporter already knew the security vulnerability, so that knowledge could not be considered consideration. The bank would have needed to offer something else.


It's what distinguishes a contract from a promise.

If I say, "I'm going to give you some apples in six months, after the harvest" and then there's a blight and I don't actually end up with any apples, society (at least in America) decided that I should be able to just say, "Oops, sorry, I'm not going to be able to give you those apples after all" and be done with it.

On the other hand, if I say, "I am going to sell you some apples in six months, in return for $100", American society collectively decided that I'm on the hook to get you those apples, regardless of whatever difficulties should ensue.


I believe the idea is that nobody would willingly sign a contract that does nothing to benefit themselves, so they must have been misled into the agreement, thus it is invalid. Sort of a rational-actor theory of law.

Isn't the benefit for William that he was provided some confidential information in addition to what he already knew?

Typically yes, access to the information is the proper consideration for agreeing not to further disclose the information. But as lisper says [0], that will also typically be spelled out in the contract.

If a contract doesn't outline consideration, and the jurisdiction requires consideration, then the lawyer writing the contract was not very good at their job...

[0] https://news.ycombinator.com/item?id=14167805


Possibly. But the contract doesn't say so. This is exactly why the consideration has to be explicit, so the judge adjudicating disputes doesn't have to guess about such things.

Basically, per our phone call, my consideration was duress. As in "sign this or else..."

Because it’s much easier for the court to reach a judgement about a contract if both parties are clear about what they were expecting to gain from it.

Also, you have to ask why someone chose to sign a one-sided contract. Was it signed under duress? The court shouldn’t enforce that. Was it a gift? The court would rather not get involved with enforcing every casual promise!


Is this a test? Are we allowed to use Wikipedia? Because Wikipedia has a lot on it.

This was a test of the understanding that a person not familiar with a particular field (e.g. GP and common law) will not easily be able to find a source on a particular aspect of that field while also verifying that the information is more or less complete. Therefore, it's much easier for someone familiar with the field to provide a link to an appropriate source.

You, sir, have unfortunately failed that test.


I thought it was significant that they were able to distinguish it as a common-law concept. Are you implying this was something like a lucky guess on their part?

My guess is that 'beefhash is not from a common law country and was only able to figure out for sure that the topic is a part of common law.

Their other comments cause me to guess differently.

I agree. There's plenty of "Tester agrees"/"Tester shall (not)", but the document provides nothing of value/benefit in return.

Worth noting that just because it doesn't stand up as a contract doesn't necessarily mean a claim can't be made under breach of confidence (I doubt it would be applicable here, but just pointing out that contracts aren't the only form of legal protection provided to confidential information).


> I doubt it would be applicable here

Definitely not. The bank did not disclose the vulnerability to him, he discovered it on his own. He had absolutely no obligation to the bank.


Agreed, that was the first thing I thought when I looked through it. It's a totally one-sided contract, which is invalid for that reason.

Are you saying NDAs without a stated dollar value are unenforceable?

It's not about the dollar value, it's about having a contract. In a contract, the parties usually agree to do something in return for something else. This NDA is completely one-sided. The author gains nothing. So basically it is a gift, and not a contract. And in many legal systems, it is easier to take back a gift than to undo a contract.

Consideration doesn't have to be denominated in dollars.

Consideration can be as minimal as "your continued employment with this company." It does not have to be any sort of additional dollar amount.

I thought continued employment was the textbook example of something that is not consideration, which is why a company can't say they fired you with cause for not signing an NDA that was presented to you after you had already been working there.

Edit: googled some more, and it appears that whether continued employment counts as sufficient consideration varies state by state and isn't firmly set in stone yet.


"We won't sue you", however, is not consideration.

It is, especially (but not necessarily) if the person promising not to sue has a valid claim. This is covered in any contract law textbook, but it's an old common law principle so there are plenty of free online sources, too. For example, in the US, see Bennett, 'Forbearance to Sue' (1898) 10(2) Harvard Law Review 113–118 [1]; in the UK, see Kelly, 'Forbearance to Sue and Forbearance to Defend' (1964) 27(5) Modern Law Review 540–545 [2].

[1] http://www.jstor.org/stable/1321438

[2] http://onlinelibrary.wiley.com/doi/10.1111/j.1468-2230.1964....


I wouldn't be so sure - for example, out of court settlements pretty much amount to "We'll pay you $x without admitting that we ever did something wrong, and you agree not to sue us over that thing that we totally did not do.", and these definitely are valid contracts.

Obtaining an opposing party's waiver of their right to pursue legal remedies is not the same thing as obtaining non-binding indication that they may not pursue you legally.

In the first case you have extinguished a right that could be used against you. In the second case you have obtained nothing more than the illusion of safety.


Not suing in that case is part of the terms of the contract. The consideration is $x for party A, and for party B it's not having to admit wrongdoing. Had the NDA in question given William $x not to disclose the security hole, then he certainly would be in breach of contract. But the NDA gave him nothing.

When they were drafting the "contract", the concept of consideration was very interesting. I had considered that if I were to receive cash for agreeing not to disclose, then this would be blackmail, which apparently is bad.

What about NDAs for interviews? What am I gaining (a chance at getting a job doesn't seem like a gain)?

> a chance at getting a job

Yes, that is exactly right.

> doesn't seem like a gain

Why not? If you don't think that's a gain, why are you wasting your time doing the interview in the first place?


OTOH, I've heard that you can't be forced into an NDA with the consideration of only continued employment.

Consideration is a necessary but not sufficient condition to have a valid contract. There are a lot of other requirements as well. This is why it is best to have a lawyer review any contract you sign, especially when the stakes are high.

The government taxes you for stock options (which are opportunities to gain profits).

A chance for employment (over an outright dismissal) is a recognizable gain.

You are, however, free to decline, with the appropriate consequences.


This was my question: is this NDA even enforceable and why would the author have signed it?

My reading is that he signed it because he was falsely made to believe he may have done something illegal and this would protect him from the FBI. I.e. he was coerced.

Yes.

[flagged]


Care to explain why he's wrong, or are we to assume your expertise, random internet person?

Assume the expertise. A whole semester of university dedicated to contract law: consumer contracts and B2B contracts in national law, then the specifics when dealing with a party in another European country, and internationally.

You'll see what a contract needs to be valid during these courses. There is simply no requirement that both parties gain something.



Too bad, the best schools are free where I come from. A few actually pay you.

The point stands. Your link doesn't infirm what I said.


"the best schools are free where I come from" generally implies living in one of civil law jurisdictions, where many legal principles are quite opposite from USA.

A good bunch of things taught in a contract law class are subtly wrong even for a very similar neighbouring country (in EU it's now getting a bit better because of harmonization efforts) but common vs civil law changes pretty much everything.

And a semester in contract law is not really much expertise - any MBA with a semester in USA contract law would have much more relevant expertise than us Europeans talking.


> Your link doesn't infirm what I said.

LOL. Res ipsa loquitur.


Dura lex sed lex.

> European country and internationally.

You think you are qualified to determine ANYTHING about US contract law when you've taken a single semester of contract law related to an entirely different country?

By your logic I am basically an astronomer. Except mine is more relevant, since astronomy is the same regardless of where you take a "whole semester" of it.


We certainly don't have all the facts surrounding this case, but we definitely have enough to move forward under the assumption that this would be resolved under American and probably Californian law. I'll leave you to research whether California requires consideration for a valid contract.

I have no idea since I haven't reviewed the contract. But consideration can be more than just cash money. In some areas and circumstances, continued employment can be enough consideration. Or getting access to more information might be enough.

There isn't a bright line rule.


We definitely don't have all the facts and learning new facts could definitely change the direction of the conversation. I hope that we all understand that this arm-chair lawyering is, at its core, a hypothetical exercise.

But even if we are allowed to infer consideration, and I agree with you that we are, this contract isn't simply lacking the terms of consideration. It doesn't appear to contemplate consideration at all, which in my experience, is unheard of for these types of agreements.


Normally, you'd be allowed to look at the entire circumstances to determine consideration if it was unstated. However, this contract contains some pretty strong clauses about the document being the entire terms of the agreement. So maybe that would be enough to invalidate it.

But that would depend on the specific jurisdiction's case law on contracts, and then on how the judge reads the contract.

If this were my client and he got some kind of consideration, I'd tell him to treat it like a valid contract, though I'd try to poke holes in it during litigation. But litigation is losing 9 times out of 10, even if you win.


Thank you, and also parents. This discussion is very relieving for me.

Well, the contract has been published, so we could get more facts. (Except it's midnight and I'm not going to read it carefully now.)

Without going into the extreme details of this case: "consideration" in legal jargon is much more subtle than "both parties have to gain something" in engineering talk. Determining the consideration can be as hard as an NP problem, to put it in engineering terms :D

Back to my original point: Let's not talk people into signing perfectly valid contracts, hoping for a loophole because it didn't look nice enough to them!


Came here to say just this.

Are you an IP lawyer?

No. But I can read.

Are you a heart surgeon? No, but I can read. I'll stick to advice from subject-matter experts, not self-appointed ones.

As you wish. My legal advice comes with a free double-your-money-back guarantee if not completely satisfied.

There's a helicopter that crashed into a house. I don't need to be a pilot to know it's not supposed to do that.

This is actually pretty close to how I personally define a "professional": A professional is someone whose work can only be judged by other professionals of the same domain.

Obvious failure modes are exempted. Anyone can tell you about a bad bridge after it has failed. But it would take a bridge engineer to tell you that before it fails.

https://news.ycombinator.com/item?id=8960822#8963307


Your definition would include tradesmen and craftworkers, then, who are not strictly professionals. Anyone can work in wood long enough to say "that wooden bridge looks like it'll hold X people" without having any way of conveying how they came to that conclusion, because they didn't learn by studying a specific body of work that can be measured and accredited. Without this distinction, many professions would not exist.

https://www.designingbuildings.co.uk/wiki/The_architectural_...

Also, your definition includes itself as part of its own definition, which is a circular definition fallacy.


> Your definition would include tradesmen and craftworkers, then, who are not strictly professionals.

According to what definition?

> Anyone can work in wood long enough to say "That wooden bridge looks like it'll hold X people,"...

I severely doubt that, given the complexity of trussed bridge designs [0]. There's a lot more to it than how much weight a 4-by-4 can support.

> ... and not have any way of conveying how they came to that conclusion...

If you can't transfer knowledge in a way that other people can independently verify, you're working in magic. If such a transfer is possible, but simply not possible for a particular person because they lack the tools, then that's a professional failing. For some reason, this state seems acceptable to you when we're talking about physics and complex loads. But could you imagine a doctor describing the appendix as "that thing sticking out where the long thin squiggly bit meets the short thick squiggly bit"?

> Also, your definition includes itself as part of its own definition, which is a circular definition fallacy.

You can't just throw out "circular definition is a fallacy" and dismiss the idea. That itself is a fallacy -- "argument from fallacy". [1]

Yes, I use the word "professional" twice, but that's not necessarily a circular definition and especially not necessarily a fallacy. First, the two "professionals" are not the same person. The first mention of "professional" is an individual, while the second mention is a group. What I did is tie membership of a group to a conditional ability which is dependent on the group itself.

However, I did cheat a little bit. Because what I did not define is the individual ability necessary to meet that conditional. Because, of course, that changes depending on what group of professionals we are discussing.

For backup, let's look at a definition of malpractice [2]:

> a dereliction of professional duty or a failure to exercise an ordinary degree of professional skill or learning by one (as a physician) rendering professional services which results in injury, loss, or damage

In other words, malpractice is a professional doing something which such a professional should not do... Because the mere fact of a person being a professional implies that they should know better.

It's this same logic that I am using: A professional is someone who acts in a professional capacity, and understands the practices of such profession, and thereby is capable of judging whether another person understands and acts in a professional capacity.

[0] https://en.wikipedia.org/wiki/Truss_bridge#Truss_types_used_...

[1] https://en.wikipedia.org/wiki/Argument_from_fallacy

[2] https://www.merriam-webster.com/dictionary/malpractice


[flagged]


Really? I put the effort into that response and this is all you have?

I'm pretty sure the correct reaction on my part here is:

Good day, sir.


I like your definition better than "a trade that requires paying a fee to a local bureaucracy."

And the validity of an NDA is blatantly obvious to any person who can read?

Lawyers tend not to say that they are lawyers on the internet because of potential liability. Thus, when you read IANAL, odds are good that the writer knows about the subject but doesn't want to be liable in any way, so they're just protecting themselves.

PS: If you want advice from a lawyer who accepts liability for their counsel, pay for it, because that is the only way you'll get it.


Lesson learned: when reporting a vulnerability, record all discussions from first contact with the vendor. At least in cases where the vendor doesn't have a clear, easy to find policy and/or bounty for disclosures.

I think it's totally fair to reject an NDA but I don't blame him for fearing an overzealous reaction on their part. Even being on the right side of criminal and civil law, you really do have to be willing to spend time and money to mount an affirmative defense.


I believe that you'd need to tell them that they were being recorded or you could get yourself into trouble.

Edit: looks like this could be possible without getting into trouble depending on the state you're in: http://lifehacker.com/5491190/is-it-legal-to-record-phone-ca...


Just because evidence was not lawfully obtained (i.e. a call recorded without the other party's consent where that is required by state statute) doesn't necessarily mean that evidence can't be used to protect yourself against a more wide-ranging claim. The various precedents against the use of tainted evidence are mostly applied in favor of a defendant and against the state.

A $50 misdemeanor fine for unlawfully recording a phone conversation may well be a small price to pay if the content of that recording can successfully protect you from a potentially bankrupting civil case.

And you always have the option of not disclosing the recording if that is your lawyer's recommended advice.


In some places it's a felony to record someone without that person's knowledge. Just be careful with this.

As someone who lives in Texas, I can confirm that Texas is a one-party state. I specifically do not need to inform people of recording devices if I am a party to the conversation.

It bothers me a lot when services, such as Google Voice, announce to all parties that such recording is occurring.


> It bothers me a lot when services, such as Google Voice, announce to all parties that such recording is occurring.

Google is based in California. There is a good probability that the act of recording occurs there. California is an all-party consent state. Also, even if the recording isn't happening in California, it's potentially tricky to be sure that no party to the call is in California (even numbers assigned to landlines don't assure that the person ultimately connecting is in a particular place.)


From my naive understanding and possible spotty recollection of the law(s) involved: in the US at least, as long as the recording party is in a one-party state then it doesn't matter where the other parties are located.

Obligatory disclaimer: IANAL

It's completely legal to record a phone call in Canada as long as you are a party to that conversation. However I still cannot find an app for my Android phone to do this.


Use a SIP softphone (Zoiper, etc.) and route calls through an Asterisk server you control, which is set to record.

I think it may be enough to play a beep every few seconds to indicate that the call is recorded. At least that's what a bank I used to work for would do when I called offices in a two-party state.

First, IANAL but I would be very surprised if beeps alone would be considered a legal notification of recording.

Second, those beeps probably exist to reinforce that the audio is unmolested. A beep every 5 seconds means you would have to cut audio in five-second increments, which is not likely to be convenient to whatever segment of audio you actually want to cut.


Apparently the legalese is "recorder warning tone" and it should be a 1400 Hz beep every 15 seconds. https://en.wikipedia.org/wiki/Recorder_warning_tone

I mentioned it because someone working for a big organization and making a lot of interstate calls probably hears these beeps all day and would be less likely to protest than if someone verbally announced that they're recording the call.
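For anyone curious what that spec sounds like, here's a rough Python sketch that synthesizes the tone as I understand it. The 1400 Hz frequency and 15-second interval come from the Wikipedia article above; the half-second beep length and the output file name are just my assumptions:

    # Sketch: synthesize the "recorder warning tone" described above.
    # 1400 Hz / 15 s come from the article; the beep length is an assumption.
    import math, struct, wave

    RATE = 8000  # telephone-grade sample rate

    def tone(freq_hz, secs):
        return [int(3000 * math.sin(2 * math.pi * freq_hz * t / RATE))
                for t in range(int(RATE * secs))]

    samples = []
    for _ in range(3):                 # three beep/silence cycles
        samples += tone(1400, 0.5)     # the beep
        samples += [0] * (RATE * 15)   # 15 s of silence between beeps

    with wave.open("warning_tone.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)              # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))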


Interesting... So seeing that it's a federal standard, now I wonder whether it is sufficient notification of recording...

If so, as you point out it seems like an interesting way to avoid having to announce the recording to those not knowledgeable.

EDIT: I don't know how reliable this site is, but it seems to indicate the recording beep is sufficient for notification, but not sufficient for consent, which makes sense.

http://www.justanswer.com/criminal-law/5dj81-question-record...


It looks like the beep is sufficient for recording from a one-party state calling a two-party state, since federal law supersedes the other state's law. Actual consent would be required if the recorder is in a two-party state, even if the other party is in a one-party state. But even if the recording is "technically legal" without consent, using it as evidence in the two-party state could still be problematic. So I guess it wouldn't be a good idea to rely on the beep alone.

All of Canada is single party consent for recording. If you're the person doing the recording, you've consented, so you are good. Don't assume everyone that speaks English is in the US.

Just asking for a friend, but Pennsylvania and California are two-party recording states. Is there a statute of limitations after which releasing an undisclosed recording between a Pennsylvanian and a Californian would no longer be considered an offense?

In Finland, most online stores allow you to pay for your shopping directly using your online bank. The way it works is the online store calls the bank's e-payment API, which in turn lets the user authenticate using their normal online bank credentials and accept the payment.

A few months back I did some research [1] on these e-payment APIs and noticed that one of the major banks had a serious flaw in their API implementation. It was possible for the end-user to manipulate the signed API calls to change the payment amount, effectively paying less than the actual price for products they buy.
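To give a feel for the general class of bug without disclosing anything real (this is only a guess at the mechanism, and every name, field, and key below is made up for illustration): the classic failure mode is a payment signature that doesn't cover the amount field, so the client can change the amount after signing.

    # Illustration only: a payment MAC that fails to cover the amount.
    # All names and values are hypothetical, not any bank's real API.
    import hashlib
    import hmac

    def sign(params, key, signed_fields):
        msg = "&".join(f"{f}={params[f]}" for f in signed_fields).encode()
        return hmac.new(key, msg, hashlib.sha256).hexdigest()

    key = b"merchant-secret"
    params = {"order_id": "123", "amount": "100.00"}

    # Broken: the amount is outside the MAC, so it can be tampered with.
    mac_broken = sign(params, key, ["order_id"])

    # Correct: every security-relevant field is covered by the MAC.
    mac_fixed = sign(params, key, ["order_id", "amount"])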

I reported the issue to the bank and got a swift response where they acknowledged my report and said they were looking into it more closely. A few days later I got another email where they basically said "ok, this looks bad, and we can see it's pretty trivial to exploit, but... it's too expensive to fix, so we won't do anything".

I wasn't comfortable with this, so next I reported it to NCSC-FI/CERT-FI. They also agreed that it looked bad, but said that they had no way of forcing the bank to take action. So that got me nowhere either. I haven't heard from either NCSC-FI or the bank since, but the issue does appear to be partially mitigated now.

I've since found several other issues in the same bank's systems but haven't bothered to report them since they don't really seem to care.

[1] https://www.slideshare.net/JuhoNurminen/the-sorry-state-of-f...


Post them anonymously and see how fast they become too expensive to not fix.

Unless you think this would actually lead to banks taking such vulnerabilities more seriously in general--which I don't believe is the case--taking an action like that is pure spite. Consider the possible outcomes for this particular vulnerability: [1] nothing happens, [2] it gets heavily exploited, customers lose money, and it doesn't get fixed, [3] the same thing happens and it does get fixed. In all three cases, the outcome is at least as bad as it would have been had you done nothing, except possibly earlier and worse.

I really take issue with the notion that security is important, so you're fully justified in screwing people and companies over as much as possible to prove a point. That seems to be a common attitude in the security community. I get the frustration people have with the intransigence of corporations and programmers, and people's general stubborn unwillingness to understand the severe impact of vulnerabilities, but if just security-shaming companies into fixing bugs actually worked we would have a much more secure internet today than we actually do. Unless you can get regulatory agencies to start holding companies and individuals legally accountable for security issues (that is, making it more expensive not to fix than to fix), nothing will change, even if you have all the technical solutions and social pressure in the world.


Also a big issue here, as with many software vulnerabilities, is that the people the public disclosure would actually damage are the users, not the company making the vulnerable software. The bank would only start losing money if the users (personal customers, business customers using their APIs) would notice the hack and start demanding their money back.

It would be very nice if your security disclosure report included a section about how you have provided good-faith upfront notice to the vendor, and that based on research and belief it would be negligent for the company not to fix the issue by X date.

The wording you choose should be cognizant of your state's laws and the company's user agreement in such a way that the company is actually at risk if they ignore you.

When talking to people, "Reason is, and ought only to be the slave of the passions".

When talking to companies it is only necessary to discuss the impact on their profit.


Just to be clear, I haven't really disclosed anything publicly, not regarding the e-payment API issue or any other issues for that matter. The SlideShare from my comment references the e-payment API vulnerability but doesn't disclose any technical details. It's not possible to reproduce the attack based on the slides alone.

This is not spite, please see full reply to parent.

My credit card may be used for an online payment by anyone who knows a few pieces of information (number, CVV, etc.). This is obviously a security problem. Nobody cares: neither me, since any payments which are not mine are immediately reverted (and then, maybe, the bank investigates), nor the bank, for whom it is cheaper to write off this money than to fix the system.

So no, publicly exposing an issue does not always work if there are no incentives for anyone to fix it.


Check the slideshare link OP posted. (Pun intended?)

You have reached zugzwang in game theory parlance.

The correct solution before this was to make an announcement:

"Here is the announcement I have made disclosing the problem. It is in both our best interest that it get fixed before publication. I have irrevocably given it to a blind drop that will publish it on DATE. And I believe that is a reasonable DATE that you could fix the problem. Let's work together to fix the problem."

What do you think about this type of approach? There is probably a name for it in Art of the Deal. (Whatever you think of the man, the book is worth reading.)


The thing about setting deadlines like that (blind drop or not) is that it's very easy to look at it as some form of extortion. "This guy has cyberweapons, and unless we do what he tells us, he's going to release them on DATE. Better call the lawyers."

In Germany you can contact the CCC to walk you through the process of reporting vulns like this one. I'm sure the EFF does similar things for US citizens. A quick Google search brought up this FAQ: Coders' Rights Project Vulnerability Reporting FAQ (https://www.eff.org/issues/coders/vulnerability-reporting-fa...)

There was no value in discussing this over the phone. Clearly their only motivation was to trick him into signing the NDA or foolishly becoming an employee to keep him silenced. Just send in the bug report and empty your account. If the bug persists after 6 months then close the account and go to public disclosure.

Yes, this is my new IJDGAF policy. The phone call was a losing proposition from the beginning.

However, if the FBI and NCFTA were /genuinely/ interested in disclosing this in their forum for other banks, then maybe my phone call with them would have been a win-win. But I think they were not genuinely interested.


About a month ago I noticed that my bank had a vulnerability - I could access the details and photos of every remotely deposited check. I sent them an email, they took the feature offline in about 2 hours.

No bug bounty but oh well.


My bank used to show deposited check photos in a popup with the URL viewable, IIRC, and at some point they switched to a modal window with base64 data as the source instead of a URL that might be manipulated. I wonder how many small banks may still have bugs like that.
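The classic test for that class of bug (an insecure direct object reference) is trivial; something like the sketch below, where the domain, path, IDs, and cookie are all hypothetical placeholders:

    # Hypothetical IDOR probe: sequential check-image IDs behind one session.
    # The domain, path, and cookie are placeholders, not any real bank's API.
    import requests

    s = requests.Session()
    s.cookies.set("session", "any-logged-in-session-token")

    for check_id in range(1000, 1005):
        r = s.get(f"https://bank.example.com/checks/{check_id}.jpg")
        # HTTP 200 for a check you don't own would mean broken access control.
        print(check_id, r.status_code)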

Modifying a data: URL is a feature, not a bug.

which bank?

It was a credit union, not one of the massive ones, but beyond that I'd rather not say.

I think they're regarding these things as weapons, because that's how they or others are using them.

It doesn't matter how we regard CVEs as a community, this is the truth of the matter outside of it. We're handing them over a bomb, and they want to know why. It feels very Spy vs Spy to me, as silly as that sounds.


That was my experience when I stumbled across a text file with several thousand credit card numbers, which included tons of details about each card holder, including SSN.

I tried reporting it to the credit card company, and to the issuing bank, and to the FBI. The only thing I asked was that they cancel the credit card accounts and put a "potential fraud source" note on each customer's account. Each party I called was more concerned with threatening me, and trying to find out what kind of criminal angle I was playing, and what my ulterior motive was, etc etc. I honestly expected to hear "Oh dang, that sucks, we'll close the accounts and contact the victims", and was depressed at the hostility I encountered.


Sad part is that if you had just posted that link somewhere very public anonymously, it'd have been fixed in minutes and everyone put on fraud alert.

You could always send an anonymous, or not, tip to KrebsOnSecurity.com. Brian has the skill to handle this kind of disclosure and the street cred to avoid pitfalls.

Has he said that he's willing to be a liaison like that? If not, you'd be putting an unfair burden on him by doing that.

Krebs writes articles based on tips [0] of this nature all the time. As a reporter I expect he would appreciate the opportunity to break the story.

[0] https://krebsonsecurity.com/about-this-blog/


Krebs has sources contact him all the time.

I don't think you owe them that. Based on their history of behavior in this area, I don't believe the government, or other institutions with a similar record, can be trusted with that kind of kindness.

Why should we be strictly ethical in the face of behavior that is unethical? We deserve protection, too.


Suppose they granted your plan to "cancel the credit card accounts" and "potential fraud source" note on each account.

That's pretty much trying to shut down business with their customers. You don't see how they'd interpret that as hostile? Future actors would know how to apply similar techniques if the outcome was in their favor (e.g. Anonymous suddenly produces a large file of cc#'s and threatens bank!)

> The only thing I asked..

In fact, why were you making demands about how they handle their customer relationships, instead of simply presenting what you'd found?


> That's pretty much trying to shut down business with their customers.

That's not how credit cards work. You close that account, transfer the balance to a new card, and issue it to them in the mail. I've done it a half dozen times, and my CC company is only out for odds and ends like postage and stamping a new card.

> In fact, why were you making demands

I wrote "asked", and then you pasted that, and misquoted it as "demanded"? If you hadn't included my quote, I'd accuse you of dishonesty, but now it's just weird.

I asked them to proactively protect their customers, because my grandfather had been through hell after his identity was stolen, and I wanted to do my best to protect other people from the same.


When you come across a single credit card number (say, by finding someone's card on the ground), the response by most financial institutions is to invalidate that card and mail them a new one. Why shouldn't the response be the same if you come across a stack of 100 credit cards?

The response to invalidate is a choice by the bank, not the person who finds the card. Also, that is a single number. It's suspicious/threatening for a non-trivial number of cards when the presenter also makes demands.

How is "someone has stolen your clients information and likely already sold it to nefarious actors, because otherwise it wouldn't be on the internet anywhere, so you should keep them safe by deactivating those accounts" threatening?

I'd be annoyed if my bank didn't do something.


That is a different point. You are changing the party being considered by re-framing it under yourself, a customer, instead of considering it from the point of view of the bank.

That's ridiculous. If the bank is aware that my credentials are available online somewhere and are taking no action to protect me, they're being complicit in any harm that comes to be, because they have both the responsibility and ability to take action, and refuse to. They're being irresponsible and potentially harming their clients.

So again, how is saying "You should take action to protect your customers' data" a threat? How can it be interpreted as a threat? What is threatening about it?


Because shutting down 100 credit cards carries more reputational and monetary liability than shutting one down?

You're basically saying "academics can derive your social security number using public information!" And wondering why they don't reissue all of the SSNs...


So? When a website has its passwords stolen (even just the hashed, salted ones) the immediate step is to invalidate every password potentially compromised and force users to reset them. Doesn't matter how or why the passwords were lost, you start by mitigating the damage someone can do. Why isn't the response for when credit card information (which is very often more valuable) is stolen similar?

>You're basically saying "academics can derive your social security number using public information!"

No, I'm saying that if my name, social, DoB, mother's maiden name, and credit card number appear online in a csv file with 200 other people's personal information, I'd really appreciate it if my credit card company would take proactive steps to keep my accounts secure.


I wrote this a couple of years ago about Schwab's embarrassing security. Most of the issues are still there.

https://jeremytunnell.com/2014/12/22/swab-password-policies-...


FYI,

Password + token is a common pattern in systems where hardware/software/OTP tokens were bolted on after the fact.

Not just that, but on certain systems (think a Windows login screen, or a POP3/IMAP login for your e-mail client), you can't have a 3rd "token" field -- they're hardcoded to ask for just a username and password.

So vendors came up with the idea of appending the token value onto the password, and their middleware (say, a PAM module) splits the provided value into password and token and validates both.

EDIT: That's not to say that Schwab is doing it right (in the front-end, seriously???), but just pointing out that it's not as uncommon as you think.
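The split itself is trivial. Here's a minimal sketch of what such middleware does, assuming a 6-digit OTP appended to the password; the verifier callbacks stand in for whatever real backends (LDAP, an OTP server, etc.) actually check the two halves:

    # Sketch of the "password + token" split described above. Assumes a
    # 6-digit OTP appended to the password; the verifiers are placeholders.
    def split_and_verify(username, combined, verify_password, verify_otp):
        if len(combined) <= 6 or not combined[-6:].isdigit():
            return False
        password, otp = combined[:-6], combined[-6:]
        return verify_password(username, password) and verify_otp(username, otp)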


RADIUS backends to network equipment as well. Since RADIUS is old, it doesn't have great password hash/encryption support (AFAIK), so you already avoid using it over open networks, and it's well entrenched / widely supported.

So you run it with no extra encryption, and the back-end pulls the password and 2FA code apart and verifies both of them, for all kinds of systems which have only a username/password prompt for logins.


Are most of these actually still there? The password requirements have changed dramatically in the last 6 months.

Well if true, I stand corrected. I have received no communication from Schwab about any changes. I assume that if they had made stronger passwords available, for example, they would notify their customers.

I no longer hold any assets at Schwab, but I do poke around every now and then, and it's possible they changed things without me noticing.


Wow, how has this not gotten more attention?

It has. It's just been like that for years.

Wow. Going on with your life as a C-level executive with this knowledge, as if it's just all good, is just insane. I'm sure they're in the clear personally now, but I can certainly see why they would wanna sell their company fast after gaining this knowledge in 2010.

> I'm sure they're in the clear personally now

Don't be so sure. If they didn't disclose this to their buyers they are guilty of fraud. The statute of limitations has probably run out (I don't know which state has jurisdiction here), but delayed discovery rules may apply.


I'm not so sure it's fraud for 2 reasons: 1) how easy it would/should be for the buyer to discover the issue; 2) these transactions generally have very detailed disclaimers / disclosure -- basically making them 'as-is' transactions.

If I were a betting man, I'd bet the buyer knew about the issue and basically didn't care.


Yet security researchers go to prison for iterating the ID numbers in a URL to access private profile pages :/

This is negligent. If they are running banking e-commerce infrastructure and are unable to deal with security-101 risks, then it is absolutely negligent. "It is too complex for the average person" isn't an adequate defense.

The only thing is that there has to be someone who lost something of real value for it to go to court as negligence, does it not?


This is good thinking. But you need airtight wording when spelling this stuff out.

In your contact with companies you should say "Failing to fix this issue would be a violation of reasonably assumed security practices as required in LAW..."


Given my experiences with C-level executives, it's unlikely the leadership thought this was a "real" security issue - and it is entirely possible that there haven't been any attacks made using this vulnerability - Zecco isn't Fidelity or Morgan Stanley.

Wouldn't the SEC want to know about this though, as this would be a great way to execute a pump-and-dump scam...


Exactly what happened to me with Starbucks (https://sakurity.com/blog/2015/05/21/starbucks.html) - threats, signing NDA, they disappear.

Thanks for sharing, I remember seeing this before. Fun to read again!

Archived copy, which can be read without JS enabled:

https://archive.fo/8ZpDJ


Thank you and I am sorry that my blog has offended your browser. Would you like to recommend a better hosting service I could use instead of the wildly antiquated Blogger?

I would like to migrate to my own domain with Jekyll or something. But I would not look forward to implementing commenting and trackbacks, even though the blog is pretty modest anyway in terms of using those features.


According to https://news.ycombinator.com/item?id=13355531 , JS requirement is not a problem with Blogger per se, but with some of its themes.

> if somebody sent you an email with that code (even if you never open the email)

What is he trying to say here? How on earth would it be possible to execute the URL in the context of your Zecco cookies unless it's opened in a browser in which you've logged into Zecco?


Some webmail clients (and potentially other web communicators, like online chats - FB Messenger, etc.) might pre-fetch all URLs sent in the email or chat.

The pre-fetching will use the user's context (and cookies) because it's executed by the user's web browser.


I'm guessing if you used a popular web-based email service, or any browser email client, then this would be possible.

Possibly, but you'd still have to (try to) "render" it in your browser by opening the mail.

On a similar note, your webmail could fetch images in emails ahead of time, but that would still be outside your browser's context.


Remember... 2008. Many people still had "auto download pictures" enabled in their email.

Note that the URL is inside an <IMG> tag.

I know that MS Messenger used to "pre-fetch" URLs in your system's IE session even if you didn't open the conversation. I presume there was some similar issue with 2008-era email clients (it's a "useful feature" after all).

I'm guessing he meant a webmail client.

Note to self: The right thing to do, if you find a serious vulnerability, apparently, is consult an attorney. Geez, what a world.

>Geez, what a world.

*America


Sheesh... From a personal liability standpoint, better to just post these things to the company anonymously. Give them the standard window (90 days) then go public.

At least it was over HTTPS.

Terrible nonetheless. Reminds me of how Mt. Gox used to hand out password resets with plaintext passwords in the query string on their own forums.


> Reminds me of how Mt. Gox used to hand out password resets with plaintext passwords in the query string on their own forums.

Sounds like somebody should write a book about all of the missteps in that debacle.


The craziest thing is how hard they work to cover it up and not fix it vs how blindingly easy it always seems that fixing it would be.

It's like circumnavigating the globe backwards in order to avoid using a crosswalk.


This is a major bombshell. I'd hate to be those guys running Zecco. They coerced an NDA to hide the fact that millions of customer transactions now have no way of being proven legitimate or not.

I'm pretty sure the author wasn't the only guy looking for vulnerabilities. I'm pretty certain criminal-minded folks would've already used it... with no way of finding out which transactions are real or manipulated.

Which further raises the question: why would they go to such extreme lengths to cover their tracks? They could've easily saved themselves trouble by coming clean, but the fact that they've gone to such great lengths to hide it and threaten anyone who tries to expose it makes this a Hollywood-type story. It seems so over the top, like they are protecting something much bigger.


Worst case (?) scenario, they were abusing the system themselves or were being pressured to allow others to do so.

It's unlikely, but my point is it's a hole in their system which would allow this to happen and it seems like they've deliberately let it continue. :(


Thanks for understanding. I was writing this up and wasn't sure anyone would really connect with the story or care about one line of code against a now-defunct broker. I'm so happy to hear this support.

Nitpick: was this disclosed to a bank or a broker? Not sure it matters tbf

I believe you have picked an actual nit. He reported to Zecco (his actual broker) and Penson (Zecco's clearing firm). Both were SEC-registered broker-dealers at the time, neither were banks.

Yes, you are correct. I am guilty as charged.

Motivation was clickbait and/or fear that people would not understand the latter.


Serious question: Would the FBI actually come to your door if you went full disclosure with a banking zero day? Is there real legal exposure here or was that just bluster from Zecco/TradeKing?

Surely Raneri had no authority to speak for FBI.

BUT actually this vuln may have come from upstream, with Penson. And then it may affect many broker-dealers. They have many clients in the US and Canada. (Don't laugh that such a ridiculous vuln could be in so many places.)

At the time, considering this (and Penson was on the phone), I understood that irresponsible disclosure could have serious consequences. The FBI would have been warranted in knocking on my door.

That's why I'm now publishing 10 years after the fact.


> if somebody sent you an email with that code (even if you never open the email) then you would be the unwitting owner of one share of Krispy Kreme Donuts

Pardon my ignorance, but how would this work?


I felt ignorant at first when reading it as well. But looking at the "FAQ" at the bottom, it says:

"But this only affects people that are logged in, right? Yes ..."

So I suppose what happens is that the user is already logged into the service and thus has a cookie for the service in his browser.

If the user then somehow executes a request to the URL in the article with the same browser (e.g. viewing a malicious email with the IMG tag in a webmail client), the browser will include the cookie in the request headers. This makes the request automatically authenticated.
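To make that concrete, here's a minimal sketch of the vulnerable pattern in Flask-style Python -- placeholder names throughout, nothing to do with Zecco's actual stack. The point is that a GET request triggered by an <IMG> tag carries the victim's session cookie automatically:

    # Sketch of the CSRF-vulnerable pattern; all names are placeholders.
    from flask import Flask, request, session, abort

    app = Flask(__name__)
    app.secret_key = "example-only"

    def execute_trade(user_id, symbol, qty):  # hypothetical stand-in
        print(f"user {user_id} buys {qty} x {symbol}")

    @app.route("/trading/order")              # GET: reachable via <img src=...>
    def place_order():
        if "user_id" not in session:          # the only check is the cookie,
            abort(401)                        # which the browser sends anyway
        execute_trade(session["user_id"],
                      request.args["symbol"],
                      request.args["qty"])
        return "ok"

    # The usual fix: accept only POST, and require a per-session CSRF token
    # that a third-party page can neither read nor forge.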


> e.g. viewing a malicious email with the IMG tag in a webmail client

The article mentions it would occur even without opening the email.


Well, it is possible your email client is doing prefetching. I wouldn't rate it as probable, since you're unlikely to have a client with the same cookies as your web browser, but still.

You could also abuse Firefox and Chrome link prefetching. I'm not sure Gmail, for example, removes prefetching attributes from links in spam. They do block images though.


Good point.

Anyway, how would the server receive any data from the client just from the link being viewed in your browser?


I would not be surprised if this turns into a class action lawsuit. The negligence here is remarkable.

You need damages to have a class action lawsuit. What are your damages?

I am not saying no one has damages, but if 100s of people had damages, I expect something would have happened...


Couldn't anyone who lost money on a stock be able to claim damages? How would the bank prove the purchase order was legitimate seeing as there's basically no security around the endpoint and the bank knew it?

The bank may be able to demonstrate that the vulnerability was not exploited by, e.g., showing that the order preview page was first loaded with the same parameters, or showing a same-domain referer.
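A hedged sketch of what that kind of log check might look like -- the log format, column names, and paths below are all assumptions for illustration, not the broker's real logs:

    # Flag orders whose session never loaded the preview page first.
    # Log format, column names, and paths are assumptions for illustration.
    import csv

    seen_preview = set()
    suspect = []
    with open("access_log.csv") as f:
        for row in csv.DictReader(f):        # assumed columns: session_id, path
            if row["path"].startswith("/trading/preview"):
                seen_preview.add(row["session_id"])
            elif row["path"].startswith("/trading/order"):
                if row["session_id"] not in seen_preview:
                    suspect.append(row)

    print(f"{len(suspect)} orders placed with no prior preview request")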

The article covers this:

> Also their engineers made it clear that unauthorized transactions like this and later shown below would not be distinguishable from other legitimate transactions.


That doesn't really matter that much. The customer would have to show they were harmed. So if you were playing around with certain stocks, decided you didn't like the outcome and are now going to sue, you'll need to provide some proof that you didn't make that transaction.

If you kept buying and selling on that account, including with the supposedly-hacked-purchased shares, you'd need to explain why you didn't bring it up until now.


Maybe the bank should've used this method to prevent the problem in the first place by just checking that the referer request header was from their domain.

You can spoof referrers; you just need some browser extension (or, if using Python and requests: requests.get(url, headers={'referer': my_referer})).

Is it proven anywhere that it wasn't?

The article mentions that unauthorized transactions were indistinguishable from legit ones:

> Also their engineers made it clear that unauthorized transactions like this and later shown below would not be distinguishable from other legitimate transactions.


They may have not been logging referrers.

If it's a publicly traded company, shareholders might be able to sue.

Also, the company was sold. Not class action, but if they did not properly disclose this then there could be liability there.

A strong argument can be made against revealing your name when disclosing information like this - especially if you're dealing with banks (often litigious and technically illiterate), and even more so if you're in the United States. If I were the OP, I would've found a way of reporting this information to the bank anonymously, possibly followed up with a promise of media disclosure if it wasn't fixed in a timely manner.

Ugh, I hate hearing about crap like this. Unfortunately, the incentives at large, public, consumer-facing companies always drives this behavior.

I would have done the following:

1. Shut down my account.
2. Send the exploit to the company anonymously with a deadline to fix.
3. Upon the deadline, post the exploit and cc the company.

The inability to publish is a rub, but I think we need a cultural shift to drive back corporate idiocy and protect consumers.


Would he not have a case of gross negligence against Zecco if he were a customer? Is there something preventing a lawsuit, outside of the possibly non-binding NDA?

No damages, assuming no unauthorized trades were executed in his account as a result of the unpatched vulnerability.

Couldn't he simply claim unauthorized trades were executed? How would the bank be able to prove otherwise? Especially considering the bank knew about this huge security hole.

In order to do so, he would have to actually declare a claim that a particular trade was unauthorized. Assuming that he actually did execute all his trades himself (which, frankly, is quite likely), making that claim in court would be a crime (perjury + fraud), a much more serious issue than the security vulnerability.

With sufficient preparation it's likely that the bank (and prosecutors) wouldn't be able to prove that crime beyond all reasonable doubt, and he wouldn't be convicted of it, but it still carries the risk that they could prove it (e.g. by forensic analysis of his computer) and he'd go to jail.

Furthermore, even if he manages to prevail in the criminal case, in the civil case (where the standard of proof is less strict) it is quite likely that, after reviewing all possible evidence, they'll manage to reach the correct judgement that the "unauthorized trades" claim was false, thus not getting him anything anyway.


How is the bank able to get the correct judgement in the civil case? There's proof the bank knew about the security hole, there is proof that at least one person outside the employment of the bank had discovered this vulnerability (meaning there were likely more), and there is no way for the bank to prove that the transactions were legitimate. The article mentions that unauthorized transactions were indistinguishable from legit ones:

> Also their engineers made it clear that unauthorized transactions like this and later shown below would not be distinguishable from other legitimate transactions.


For starters, all the details on how that particular transaction was performed, timestamps, IP addresses, all the browser fingerprints visible in the logs of that request (they tend to be quite identifying), subpoenaed logs from the claimant's ISP.

They don't have to prove that it couldn't have been someone else; they have to convince the court that it's more likely than not. Motive matters a lot - if there's some way that transaction would have been useful to a fraudster (i.e. if it was a money transfer to them), that's one thing; but if there's no indication of why someone else would want to make the fraudulent trade (which is the case for most stock purchases/sales), and a clear motive why the claimant would want the trade to be reversed (i.e. the stock buy seemed good on that day but turned out to be bad afterwards), then, with any technical evidence whatsoever pointing towards the claimant, the court will find the claim hard to believe.

If data shows that the transaction is e.g. done from some Starbucks and local security cameras show the claimant near that Starbucks at that time, it's probably not enough to get a conviction but likely enough to make them lose the civil claim.

The criminal case would be expected to get much more evidence than an ordinary civil claim, so they'd likely wait for its results and use everything that the police/prosecutors gathered to dismiss their civil claim.


> For starters, all the details on how that particular transaction was performed, timestamps, IP addresses, all the browser fingerprints visible in the logs of that request (they tend to be quite identifying), subpoenaed logs from the claimant's ISP.

Again, the IP address would obviously be associated with him and his browser, because that's how the vulnerability works. The attacker just has to get the victim to visit any website with a browser that has the cookies for the bank. So proving that the user's browser/machine/IP made the request does nothing to show that the user did so intentionally.

> Motive matters a lot - if there's some way how that transaction would have been useful for a fraudster (i.e. if it was a money transfer to them), then it's one thing; but if there's no indication of why someone else would want to make the fraudulent trade (which is the case for most stock purchases/sells) and a clear motive why the claimant would want the trade to be reversed (i.e. the stock buy seemed good on that day but turned out to be bad afterwards) then if there's any technical evidence whatsoever pointing towards the claimant, it's hard to be convinced.

It doesn't have to be done by a fraudster. The motive for the attacker could simply be to fuck with people. They don't gain anything but satisfaction from the fact that they were able to successfully exploit this vulnerability.


The attack would leave traces. Timestamps would show when exactly the request was made; ISP logs or data from the claimant's computer would show other requests in the same seconds (i.e. wherever the victim got served the malicious link). Sending the img link by email would be visible in that email; getting the user to view a malicious post on some webpage/forum/etc. is likely to leave evidence there.

In general, you make good points; they are believable and likely would be made if such a court case happened. In the absence of hard evidence, if they seem slightly more believable than whatever story the company presents, the claimant would win; if they seem slightly less believable, the claimant would lose. In a civil claim, the company needs to prove that it was authorised only as much as the claimant needs to prove that it was not - it's a somewhat symmetric contest. Simply claiming "I didn't authorise it" is effectively countered by claiming "Yes you did", and simply moves the discussion on to further investigation.

The motive could be just a prankster messing with people, but that's a far less convincing motive than an obvious benefit. If the transaction is one where you clearly lose money and someone (possibly anonymous) gains it, it's easy to make the case that you were hacked. But, for example, if the claimant had previously unsuccessfully complained to the company about the theoretical possibility of such a vulnerability, and then complained that a seemingly random transaction was unauthorized, I'm fairly sure that any decent lawyer would successfully convince the court that "a prankster did it" is comparable to "the dog ate my homework", and that it's a bit more likely that they orchestrated the claim themselves to mess with the company. Getting to 51% belief is preponderance of the evidence, and sufficient in a civil trial.

And in any case, all this wouldn't be "simply claim" - seriously making such a claim would require a significant investment of time and money from the claimant. It's not something most people would do for fun. Some would do it to make a point, but that's quite a niche hobby.


Do ISP's keep detailed logs as far back as 2005?

Nope, but if you reported that you just noticed a fake stock deal made 12 years ago on an account that you actively use, you'd have an uphill battle proving that it really was unauthorised, and the lack of logs would only make it harder for you.

Yeah, but presumably he'd claim it on an asset in the red, and for a large enough amount of money to be worth risking lying about under oath. Zecco could have the court subpoena the ISP to prove the IP was in use at the time by the defendant.

Of course it was from his IP; the only way the transaction works is if your browser has the proper cookies. The whole vulnerability is that all someone has to do is put that <img> into ANY webpage you visit, and as long as your browser still had the cookies, the transaction would go through without you needing to do anything.
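
For illustration, a hedged sketch of the kind of attack page being described - the broker endpoint and order parameters are invented, and serving the page locally just makes the request observable:

    # Hypothetical attack page: any logged-in victim who views it fires the
    # GET below with their broker session cookies attached (no click needed).
    ATTACK_PAGE = """\
    <html><body>
      <p>Nothing to see here.</p>
      <img src="https://broker.example.com/trade?symbol=XYZ&qty=10000&side=buy"
           width="1" height="1">
    </body></html>
    """

    if __name__ == "__main__":
        # Serve the page locally so the request can be watched in devtools.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(ATTACK_PAGE.encode())

        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()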

Good point. IANAL, but I would think the burden of proof is still on the claimant to show that the transaction was fraudulent. If it had happened to multiple users around the same time frame who provably visited a similar set of potentially malicious websites, it might work, but then the company could respond that it's strange he conveniently didn't notice anything on the statements or confirmation-of-purchase e-mails until the exploit was publicized.

Seems to me that given the way it's likely to be received, the proper way to disclose a vulnerability like this is anonymously, through a lawyer.

I'm not quite following the timeline: why did he end up under an NDA and the too-long wait to get it fixed? Why not say "I'm publishing this on my blog in 30 days so it better be fixed by then"? Would you risk getting in legal trouble for publishing a way to do bank fraud (for example) - assuming you gave some reasonable timeframe for disclosure?

> Would you risk getting in legal trouble for publishing a way to do bank fraud (for example) - assuming you gave some reasonable timeframe for disclosure?

Of course you would. The bank would call the FBI and tell them you're hacking the bank, and the FBI would then knock down your door, tear up your house and drag you away. The system would then do everything it could to represent what you did as a crime, and if you are lucky you get away with only a year in court, many thousands in debt and your name dragged through the mud.

tl;dr The actual legality of an action is only tangentially related to how the legal system will be used against you in response to it.


So basically what's needed is a place to send these notices anonymously and a place to anonymously publish the exploit after responsible disclosure - at least for countries where legislation works like you describe.

I still hope this is not the case in most places outside the US - that is, I hope that responsible disclosure is complete proof that you are in fact not hacking anyone.


That's how I started the discussion with Zecco. The next phone call had the FBI on the line. Then I signed the NDA.

Next time I would change 30 to a reasonable number. In this case (multiple vendors and a large installed base), maybe even 180 days would have been fair. And then I would stick to my guns.


Thanks for the follow up and good luck next time!

Just to be clear I also find it appalling that any important institution would take their time in fixing such a simple exploit.

But it seems the reason why these cases don't get resolved quickly is purely economic: the perceived cost of fixing the issue seems (to them) far greater than the cost of dealing with the (remote?) possibility of the exploitation of the vulnerability.

I also think security researchers have an 'overgrown' sense of urgency upon having discovered such exploits; it never seems to get fixed fast enough from their point of view.

But understanding the forces that are at play also helps in understanding such an 'irrational' decision. Big institutions are not known for being proactive, and the political climate in such environments does not incentivize the 'doers', but does push people into panic mode trying to stop the leak instead of the root cause (the exploit).


YES. Many authors love to write more than the readers love to read them.

FIRST, be reasonable. This is a good life axiom. Don't expect a large organization to confirm, engineer, test, certify, and deploy a change that requires external documentation in less than 14 days. Even if the ship's on fire.

SECOND, be valuable. If you are reporting a vuln, that is a bug report. When's the last time you got thanked for /any/ bug report for a non-GitHub project? If your report explains the cost and liability for a lawsuit if they fail to fix your reported vuln, then you are speaking their language.

---

I have a confirmed vuln reported to Apple under their "responsible disclosure" program since 2015. They have yet to fix it or provide credit as they promised. If you thought Apple was a magic company that "does the right thing", then I hope this dispels that myth.


Kudos for the balls to come public despite NDA.

Many are questioning the validity of that NDA in an earlier comment due to its complete lack of consideration.

Things have progressed quite a bit since 2008 fortunately. Vulnerability disclosures have become much more acceptable, and are handled in a much better way.

Lots more information about disclosure:

* https://www.ee.oulu.fi/research/ouspg/Disclosure_tracking

* https://www.ntia.doc.gov/blog/2016/improving-cybersecurity-t...

* https://www.thegfce.com/initiatives/r/responsible-disclosure...


That's a lot of errors for one document.

I'm also kinda curious why the author didn't run it through a simple spell checker before posting. I'm grateful for the article, it was an interesting read, but really, why not just paste it into Google Docs real quick or something?

Maybe his editor is on leave.

Thank you for taking the time to reply. @jacquesm, @komali2, @LanceH, I have made many typographical corrections and given you credit where due.

I thought the author was non-native, considering the TLD.

Native English speaker. Very jet-lagged. Posted from Doha airport and blogspot.com kept redirecting to that other domain. Thank you, I have made many corrections now.

Why did the author post in 2017, so long after? He said in early conversations he was going to go public much sooner.

He was afraid that he was bound by the NDA not to disclose it.

Now, in 2017, he flouts the NDA and acts in the public interest.


But why now? What changed?

Perhaps the author gained some age and wits.

I like to think that the world's view on network security has changed, even in just the past few years. Companies seem more educated on proper disclosure, network security is now seen as a part of national security and/or common good, and society seems to be shifting the blame for insecurity away from the exploiter and more toward the exploited.

For example, if you'd stolen millions of credit cards in 1983 you'd have a special session of Congress dedicated to going after you, whereas now we (rightly) blame Target.


I realized that life is shorter than I expected and I started putting some affairs in order. This is something I had wanted to do for a while and now any consequences are less important to me.

The year?

> October 2008

This may be a lot of it. In October 2008, a massive security breach affecting all accounts was maybe a solid #2 on their list of problems.


Very nice LOL here.

Surely one acquisition and 9 years later this isn't still an open vulnerability… right? It'd be nice if the author discussed that.

I have closed my account and do not know the answer. Also, if someone is able to confirm in the affirmative on this or any other Penson site, then please start a new round of responsible disclosure.

Only briefly mentioned in the article:

> 2017 I have yet to hear from FINRA that any action has been taken. I have yet to hear from ZECCO / TradeKing that the issue has been resolved.


Hopefully, he closed his account in that time, so he wouldn't know.

Palo Alto's school district just got hit by something similar.

https://www.paloaltoonline.com/news/2017/04/20/pausd-student...


I think that an important point in this vulnerability is that it does not violate the CFAA. From my, albeit limited, understanding of the CFAA, it requires a breach of access controls.

Imagine this conversation were the user to have discovered a parameter which let the user execute trades on behalf of another user.


This vulnerability does allow executing trades on behalf of another user.

For example, a realistic exploit would be to slowly buy up a bunch of a random penny stock, and then post an image link to some forum frequented by users of that software with the order "buy 10000 units of stock_x, okthxbye". The order will be executed by users viewing that forum and will bump up the price as you dump it.


> What Happens When You Send a Zero-Day to a Bank?

The police rappel down the sides of your house in full gear and shoot your dog.

Or, per the article, the company pressures you to sign an NDA, and mentions "FBI" to instill fear of rappelling.


I recently dropped a credit union because they can't be bothered to secure their mobile app (in the official google play store!) to use anything better than TLS1.0.

TLS1.2 and proper crypto schemes should be mandatory at this point.
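
For what it's worth, enforcing a TLS 1.2 floor is cheap in most stacks. A sketch for a Python client (ssl.TLSVersion needs Python 3.7+; server-side configuration is analogous):

    import ssl
    import urllib.request

    # Build a client context that refuses anything older than TLS 1.2.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    # The handshake fails outright against a TLS 1.0-only server.
    with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
        print(resp.status)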


They were going to be required by 2016, but then PCI-DSS decided to give companies an extra 2 years to implement the changes.

https://cdn2.hubspot.net/hubfs/281302/Resources/Migrating_fr...


Mudge at DefCon, IIRC, had a good point about MIC contractors having intellectual property repeatedly stolen: the government gives them more money each time, which is a disincentive to root-cause analysis and to patching vulns/0-days.

Were cookies shared across sites in 2008? It seems pretty odd.

Images are loaded with the cookies of their own site. Example: go to google.com, then open the console and type the following:

    var i = document.createElement('img');
    i.src = "http://news.ycombinator.com/y18.gif";

Then look at the cookies sent over the network.


I just tried it using Chrome 57:

    var i = document.createElement('img');
    i.src= "https://news.ycombinator.com/y18.gif";
    document.body.insertBefore(i, document.body.firstChild);
The image appeared in the upper left of the Google home page.

So, I clicked over to the Network tab and viewed the headers. The request headers do not include any cookies. If Hacker News were a broker using GET requests to buy shares, and the image URL was such a request, HN would not have known whose account to buy the shares for, even though I'm logged in in another tab.

So, presumably, the hack does not work in Chrome 57.

Edit: Never mind. It's because I have third-party cookies blocked. If I unblock third-party cookies, my HN cookie does get sent.


Imho this is a complete misfeature of the web. It facilitates invasive tracking in exchange for very marginal utility.

This is how FB & others track everyone on the web through ad frames, like buttons, etc.

Just FYI, I use the uBlock Origin extension for Chrome and I believe it solves all this for me.

Do they get info about which page the request came from when one includes just an image?

Yes, it is passed along through the Referer (sic) header.

Yes, via the HTTP Referer header.

Where do I type it?

In Firefox, Ctrl+Shift+K opens the web console.

Ctrl+Shift+J in Chrome.


Cookies are still sent for requests in which the response is opaque to the current document/window (e.g. <img>, <video>, <audio>, <script>, etc). There's no way for the document/window containing these elements to ever access the actual bytes returned by them.

Because most people don't have 3rd party cookies disabled. It's one of the first things you should do when you install a browser. It doesn't break anything worthwhile and protects your privacy (and security).

This is still true in 2017, unless a website is using SameSite cookies, which is a very recent thing. (The first draft is from April 2015.)
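
A sketch of what that looks like on the wire; the cookie name and value are placeholders, and Python's http.cookies only grew samesite support in 3.8:

    from http import cookies

    # Build the Set-Cookie header a server would emit. With SameSite=Lax
    # the browser leaves the cookie off cross-site subresource requests
    # such as the <img> exploit above.
    c = cookies.SimpleCookie()
    c["session"] = "abc123"          # placeholder name/value
    c["session"]["secure"] = True
    c["session"]["httponly"] = True
    c["session"]["samesite"] = "Lax"
    print(c.output())
    # e.g. Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax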

That's how the web works today as well. That's why you need CSRF protection.
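
A minimal sketch of the token approach, assuming the server has a session id to bind against (function names here are invented):

    import hmac
    import secrets

    SECRET = secrets.token_bytes(32)  # server-side key; persisted in practice

    def issue_csrf_token(session_id: str) -> str:
        # Bind the token to the session so an attacker's page cannot know it.
        return hmac.new(SECRET, session_id.encode(), "sha256").hexdigest()

    def check_csrf_token(session_id: str, submitted: str) -> bool:
        # Constant-time compare; reject state-changing requests that lack it.
        return hmac.compare_digest(issue_csrf_token(session_id), submitted)

An attacker's <img> or auto-submitting form can still fire the request, but it can't read or guess the token, so the endpoint refuses it - and state-changing actions shouldn't be plain GETs in the first place.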

With some hindsight, perhaps the term of the NDA could have been improved by adding 'until ten years have passed'.

Is that a common element of an NDA's term?


When you submit a zero-day to a bank, maybe let us know. In this case, you submitted a zero-day to a trader-broker, not a bank.

Or they call the FBI and bust your ass and you have to file for bankruptcy and get probation.

What bank or financial firm has 100k branches? That's a HUGE presence.

Penson created the system that multiple brokers use for multiple branches.

I cannot verify that number but I am quoting it from a phone call with a Penson engineer.


Hasn't this guy ever heard of CERT?

Balls. Made of steel.

Well, good luck.

What about XSRF or CORS protection, …? This was 2008. We still had the <blink> tag back then.

Heh.



