Panerabread.com leaks millions of customer records (krebsonsecurity.com)
403 points by Thrymr 10 months ago | hide | past | web | favorite | 148 comments

Commenting only on the speed of response (or the glacial interpretation of it in Panera's case):

For companies operating in the European Union, the General Data Protection Regulation (GDPR) (1) mandates that such breaches be disclosed within 72 hours. GDPR enforcement begins at the end of May 2018 (~7 weeks to go).

Under Armour, a US-based sports apparel manufacturer that also operates in the EU, recently had a breach affecting 150 million users, and went public within 3 days of discovering it (2).

I believe Under Armour's case is the norm we can expect going forward.

(1)https://en.wikipedia.org/wiki/General_Data_Protection_Regula... (2) http://www.bbc.com/news/technology-43592470

Good read outlining the timeline of events from the person who originally reported the leak: https://medium.com/@djhoulihan/no-panera-bread-doesnt-take-s...

I found his initial interaction with their head of IT Security (their very first response) laughably appalling:

    Dylan Houlihan <dylan@breakingbits.com>
    to Mike, Geri Haight -

    Hello Mike et al,

    Thank you for making yourselves available. There is a security vulnerability on the delivery.panerabread.com website that 
    exposes sensitive information belonging to every customer who has signed up for an account to order Panera Bread online. 
    This shows the customer's full name, email address, phone number and the last four digits of their saved credit card number.
    Moreover, the customers are easily enumerable which means an attacker could crawl through all the records.

    I can provide the specific details of the vulnerability over email once you respond, but if you prefer (for more security), 
    I can also encrypt the information with a PGP key you provide me. Alternatively we can hop on a phone call.

    Best regards,
    Dylan Houlihan
And their response:

    Mike Gustavison <Mike.Gustavison@panerabread.com>
    to dylan


    My team received your emails however it was very suspicious and appeared scam in nature therefore was ignored. If this is
    a sales tactic I would highly recommend a better approach as demanding a PGP key would not be a good way to start off. 
    As a security professional you should be aware that any organization that has a security practice would never respond to
    a request like the one you sent. I am willing to discuss whatever vulnerabilities you believe you have found but I will 
    not be duped, demanded for restitution/bounty or listen to a sales pitch.


"...demanding a PGP key"

This kind of incompetence directly endangers the privacy and security of anyone who does business with Panera. And it's reminiscent of the kind of incompetence that characterized the Equifax breach and other recent high-profile hacks.

Maybe it's time that a subset of IT workers become professionally licensed and liable, like engineers.

>it's reminiscent of the kind of incompetence that characterized the Equifax breach

Go to Mike's LinkedIn and he is the former "ISO - Sr. Director of Security Operations" for Equifax.

I tell you, if I was this incompetent, I'd be homeless. Not in a cushy, high paying corporate job.

Which means he's not incompetent. He's competent, just not at information security.

Competent at MBA-speak, golf, and schmoozing, most likely.

I made this recommendation a couple of years ago when a careless sysadmin left a MySQL dump on a public web share. The response I received is still relevant:

>Requiring a license would wind up making such qualified people more expensive to hire, and companies would ignore it and hire those without licenses to save money.

It would be just about impossible to enforce, naturally, and would be like firing the Senior Developers and hiring fresh graduates.

He has a CISSP (or so says his LinkedIn) though, and on Wikipedia it says DoD, ANSI and NSA value or approve of it and its holders have a higher salary on average.

And we're talking about a director of security with 17 years of security experience (he also spoke at Akamai Edge 2015), not a common programmer or admin. I'd assume he already isn't too cheap to hire with those credentials, and that a company that size doesn't skimp on its directors?

Then again they'll probably lose nothing over this leak and their response.

Being passive-aggressive is even somewhat justifiable if they really get that much scamming, but taking offense at someone asking for a PGP key isn't, nor is repeatedly ignoring Dylan's emails for 6 days when he asked if his encrypted information came through.

Plus the whole "we are working on it" and then not doing anything for 8 months. Did he throw away what Dylan sent him? And then the "fix" that required you to log in (with an ordinary customer account) to get all customers' data, instead of exposing it to the open internet. They also told Fox only 10,000 customers were affected, and treated Krebs like an idiot to the point that he went on a Twitter rant against them; he and others were posting links to other holes, web-accessible admin login panels of various things, etc., and saying their website should be taken down (which it now is).

Dylan also angrily posted this after they said to the press they take security seriously: https://medium.com/@djhoulihan/no-panera-bread-doesnt-take-s...

Ideally you wouldn't even need to ask for it.


On his LinkedIn page it says he has CISSP[0] and has had four security jobs (Panerabread being his fourth) so far between 2000 and now.

He also might have spoken at Akamai Edge 2015 as a security expert (an internal page called 'speaker details' comes up if you Google his name, and the event ID in the URL leads to Akamai Edge 2015).

[0] - I've no idea if it's good for anything, but according to Wikipedia the DoD, NSA and ANSI approve of it and its holders earn higher salaries on average.

> IT workers become professionally licensed and liable, like engineers

Except for software engineers, ironically.

I suppose there are many ways to choose the subset. Maybe software engineers in specific verticals: medical technology, avionics, etc.? Given the number of security breaches lately, CSO seems like a no-brainer, too.

Solution Architecture is where I would go if I wanted to credential IT folks like engineers or CPAs. Require the license if you are accountable for implementing a solution that hits various thresholds.

Infosec is an area where there is already a problem with credential collectors, and in many places it is just a dressed up audit/compliance function. It’s not a standalone vertical imo.

I think the reaction here is harsh. The email did read like a pitch.

>Head of IT Security

>'demanding a PGP key would not be a good way to start off'.

Please tell me this man will be fired.

I've developed a joke law that says that the amount of genuine information in a statement is inversely proportional to how polite and wordy it is. I.e. the "we take security seriously" PR fluffs.

Perhaps Mike knew that law and that's why he took Dylan's email as not genuine. Perhaps "yo fucka, I pwn'd your shit, tomorrow it's on the dark web if u no patch this link" would be the proper way to inform them of a leak.

Then again some people say it's good they didn't try to get Dylan arrested for "hacking".

Jesus Christmas. Honestly, how many more times can they steal my ID?

It's gotten so they have to run a diff to see if there's anything new.

Well, at least they didn't leak their customers' HIV statuses, unlike the other security breach I read about yesterday...

Arguably that wasn't a leak, they're intentionally putting that data into an analytics store. Whether that's okay is a different issue, but it wasn't leaked to the public.

That wasn't a breach; a possible MitM != breach, nor does putting data into an analytics company. I'm not saying they are blameless, but it irks me when things like this are mislabeled; it dilutes real breaches.

Was that the Grindr one? That's a HIPAA violation.. big fines..

HIPAA only applies to health-care providers and related entities, not random other companies.

Related entities includes the broader "clearinghouse" entity, which has been applied to debt collectors.

I agree it would be a stretch to make a claim but I'm not 100% sure it would be fruitless.

A debt collector would apply since they have been contracted by a HIPAA-covered entity and the data they have likely came from that source. Grindr is completely unrelated to another HIPAA-covered entity and any health data you give them is solely your responsibility... so don't.

A fixable oversight, I hope.

The standard "we take security very seriously" is starting to ring a bit hollow.

No worries.... thoughts and prayers will soon replace it.

they should try offering thoughts and prayers to those affected.

Yeah, it's a straight-up lie when they say it after there's a security breach. Panera is basically an example of how not to handle this situation.

Companies that actually take it seriously don't have data breaches.

I'm picturing one of the Pythons saying it emphatically while raising their eyebrows and nodding to the side a bit.

The guy responsible for information security worked at Equifax before: https://www.linkedin.com/in/mike-gustavison-b020426/

Coincidence? Strike two?

And he joined Equifax after jumping ship from A. G. Edwards in 2008, presumably because the company was accused of fraud in that same year.

His first security gig was Senior IT Security Analyst at A. G. Edwards and Sons. His only work experience before that was Supervisor of Branch Installations.

This seems unbelievable, but that senior security position was his first IT experience.

Could this be a scheme to sell customer data?

I assumed for some time that installing backdoors is a good way to sell customer data you otherwise wouldn't be allowed to share.

Equifax didn't fall victim to a backdoor but to an outdated Apache Struts that no one noticed.

I'm not only talking about this particular case, but in general. "Accidental" backdoors let companies share data they legally couldn't share.

Look at Facebook and how their API was surprisingly abused for years until they noticed it.

“The biggest concern is credit card data, a breach occurring on a digital property is devastating to companies.”

Mike Gustavison , Director of Info Sec , Panera Bread

Let me guess. They passed their PCI audits with flying colors.

I’m sure the only reason that only partial credit card numbers were stolen is that PCI makes it very hard for Panera to store complete credit card numbers (with expiration dates and the security code on the back).

> PCI makes it very hard for Panera to store complete credit card numbers (with expiration dates and the security code on the back)

How about impossible. Storing the CVV number is 100% not allowed. Even storing complete card numbers is only allowed under very specific conditions. Any deviation opens them up to liability for related fraud.

Even storing complete card numbers is only allowed under very specific conditions.

We encrypt these at the app, even before putting them into the DB, yada yada. The PCI auditor actually made us restore the DB from backup onto another server and show them the data, to prove that some magical process in the backup program didn't cause them to come un-encrypted. They also wanted us to change all corporate email addresses to random characters, ostensibly to prevent spearphishing (we declined to take this suggestion). My point is that they go to crazy lengths to ensure you're doing this stuff right.
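To make the PCI rule above concrete, here's a stdlib-only sketch of what the auditors are enforcing: persist only the last four digits plus an opaque token, and never persist the CVV after authorization. The function and field names are illustrative assumptions, not any real processor's API:

```python
import secrets

def storable_card_fields(pan: str, cvv: str) -> dict:
    """Return the only card-related fields that should ever hit the DB."""
    # The CVV may be used for the authorization request, but PCI DSS
    # forbids retaining it in any form afterward, encrypted or not.
    del cvv
    return {
        "last4": pan[-4:],               # safe to show: "card ending 1111"
        "token": secrets.token_hex(16),  # opaque reference; the full PAN
                                         # stays with the payment processor
    }
```

A leak like Panera's can then only expose the last four digits, which is presumably why that's all that was exposed here.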

I find PCI compliance annoying mostly due to individual auditor predilections.

I've heard that there's a fair amount of variability. Obviously at least part of our audit team were lunatics.

Thank you. I didn’t remember the details and went with “hard” because whenever I say “impossible” somebody will come up with some exception to the rule.

As long as that Nessus scan passed, they're in the clear, right?

My (potentially limited) understanding: Certification by an ASV doesn't free you from any responsibilities. You could still violate the DSS even if an ASV clears you.

To be fair, auditors can only check what they were hired to. PCI audits are better than having none at all but they clearly can't be the only audit.

A- This is infuriating

B- How can a company have such a bad response? I think just about every big company has put a huge emphasis on data security. But hey, companies are big and technology is complex, so maybe data leaks still happen. But when they do, how can you treat them with such a lack of care? And how can the director of Security be alerted about this and not fix it? Seems potentially criminally negligent?

C- The tweets from Brian Krebs are also infuriating (and hilarious): https://twitter.com/briankrebs

Some highlights:

"Per my last tweet, Panera issued a statement to Fox News saying the breach only impacted 10,000 customer accounts. Interesting that they had no numbers for me, and yet had this 10k number all ready to go on the same day this was "discovered," eight months after it was reported."

"Hey Panera, despite your statements to the contrary, you still haven't fixed this customer info leak. Would you like to revisit the 10k number you just gave to Fox news? https://delivery.panerabread.com/foundation-api/users/12345"

"you know what, let's go for 37M instead of 7M: https://delivery.panerabread.com/foundation-api/users/12345"

"At the risk of making my job harder (or possibly, easier?) it's clear I'm going to have to write an entire series of blog posts about how not to handle a data breach from a PR perspective. I'm sputtering over here. Gave @panerabread every courtesy and they treat me like an idiot"

"Hey @panerabread : before making half-baked statements to the press to downplay the size of a breach, perhaps you should make sure the problem doesn't extend to all other parts of your business, like http://catering.panerabread.com , etc. Only proper response is to deep six entire site"
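The `/foundation-api/users/12345` URLs in the tweets above point at the root cause: an insecure direct object reference, i.e. sequential integer IDs with no ownership check. A minimal sketch of the guard the endpoint apparently lacked; all names here are hypothetical, not from Panera's actual code:

```python
from typing import Optional

# Hypothetical in-memory stand-in for the customer table.
USERS = {
    12345: {"name": "Alice", "email": "alice@example.com", "cc_last4": "1111"},
    12346: {"name": "Bob",   "email": "bob@example.com",   "cc_last4": "2222"},
}

def get_user_record(requested_id: int, session_user_id: Optional[int]) -> dict:
    """Return a customer record only to its authenticated owner."""
    if session_user_id is None:
        raise PermissionError("authentication required")
    if session_user_id != requested_id:
        # Without this second check, any logged-in customer can walk
        # users/12345, users/12346, ... and crawl every record,
        # exactly the enumeration Krebs demonstrated.
        raise PermissionError("not authorized for this record")
    return USERS[requested_id]
```

The reported "fix" (requiring a login) only adds the first check; the enumeration is stopped by the second.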

Krebs doesn't have to write his own blog series on how to handle breaches (although I might be interested in his version as well); Troy wrote a nice post about it.


"@panerabread" ... "half-baked statements"...

Sometimes life imitates art.

Most of the IT Managers / Directors I've worked with were never from developer backgrounds. They were either an "IT guy" who stuck it out or the "network guy" whose extent of knowledge is seemingly plugging in a network cable.

Between that and the fact most established businesses I've been in still treat IT like it's a necessary evil and a waste of money, I'm not remotely surprised when stuff like this happens. My current company had a data breach, and the IT Director swept it under the rug. I contacted my attorney about what I'm required to do (to cover my ass), emailed my managers, and moved on down the road.

I love that the maintenance page has a button labeled "Order Online" (https://delivery.panerabread.com), which is the page/domain broken in the first place!


Aaron Swartz faced 35 years in prison for leaking JSTOR articles.

Instead of fines, the Chief Security Officer should be fully responsible and face 35 years in jail if a breach happens.

You better believe they'll care about security then.

Many companies would also rethink whether they need to track and keep personal information at all.

I'd revise that from "if a breach happens" to "if a breach happens and the CSO demonstrated criminal negligence." The attack surface for security is too large, and it's not fair to hold a CSO of a cafe chain to such a standard when zero-days are also possible. Punish for being negligent, not for being attacked by a zero-day, or something else really obscure.

What if the CSO ignored bug reports about this for a full 8 months? Would that make it negligent?

What if the CSO informed engineering teams, got stonewalled, and, a few weeks later, escalated through the company's risk process (Panera is public, or was before it was bought by a public company, and will have a risk process). What do people here think a CSO does? If your mental model is: "decree that something is safe to deploy publicly, or else forbid its deployment", your model is broken. Most CSOs have an advisory role in the organization, and the real institutional power comes either from engineering or from the CIO.

This security director handled Dylan's bug report badly and deserves the reputation hit he's getting. But if we're going to suggest liability (let alone criminal liability) for security flaws, we should at least have some idea of what it is we're regulating.

Pull the plug.

The final "stick" and reason for a C in the title is the responsibility to shut down the data (and website) until such a point it can be secured.

Protecting data should be considered as much of a fiduciary duty (protecting shareholders and customers) as making the right investment or HR decisions.

"Pulling the plug" is almost never a capability provided to a company security team.

What happens when the CIO plugs it back in?

The CIO then accepts full liability.

If we were running under the liability model, the CSO's final option would be to resign, which sucks. But he is basically in the same situation as any employee who is being forced to do something that is clearly illegal. I guess that is a good argument for why liability might not work: you end up not having a security team, or you put good people into legal dilemmas that they shouldn't have to deal with.

For example (issue may or may not have been legal, but point is the guy resigned): https://arstechnica.com/tech-policy/2016/10/report-fbi-andor...

Which is why we need jail time for execs.

It is very simple: with big $$ there should be a big risk.


Aaron Swartz faced 35 years in prison for breaking and entering and unauthorized access of a computer network / hacking amongst other things.

It's a shame it ended the way it did, but please don't downplay what he did and use his name to push an agenda.

> breaking and entering

Is that true? It was an unlocked closet. The walls were covered in graffiti.

>Is that true? It was an unlocked closet. The walls were covered in graffiti.

So, if your house has the door ajar, and the walls are "covered in graffiti" it's open for all?

It wouldn't be breaking and entering.

And a house is different than a school. MIT has an open campus. MIT has a long history of celebrating students who transgress boundaries and go where it is unexpected[1]. I don't have a history of celebrating people who enter my house uninvited.

> Swartz had connections to [MIT]: "He was a regular visitor to the MIT campus and interacted with MIT people and groups both on campus and off. … He was a member of MIT's Free Culture Group, a regular visitor at MIT's Student Information Processing Board (SIPB), and an active participant in the annual MIT International Puzzle Mystery Hunt Competition. Aaron Swartz's father, Robert Swartz, was (and is) a consultant at the MIT Media Lab. Aaron frequently visited his father there, and his two younger brothers had been Media Lab interns." [2]

If a good friend of mine sees my house has the door ajar, and the walls are "covered in graffiti" it would be perfectly reasonable for him to check inside.

[1] https://en.wikipedia.org/wiki/Hacks_at_the_Massachusetts_Ins...

[2] http://swartz-report.mit.edu/faq.html

>MIT has a long history of celebrating students who transgress boundaries and go where it is unexpected[1]

Only when it's conservative enough and doesn't break the law too much. And not officially. In fact the very wikipedia link says:

"Although the practice is unsanctioned by the university, and students have sometimes been arraigned on trespassing charges for hacking, hacks have substantial significance to MIT's history and student culture".

>If a good friend of mine sees my house has the door ajar, and the walls are "covered in graffiti" it would be perfectly reasonable for him to check inside.

Not really. Especially if they know they're not welcome if found inside, and they have no business there.

MIT's official campus guide brags about the hacks as a way to try to attract students[1]. They are advertising the police car on the dome as a positive thing.

I don't see any reason to think they would be upset about him going in an unlocked closet. The previous quote mentions he was part of a puzzle hunt. If he was creating a part of that hunt and used that closet as a part of a puzzle I would think they would have been ok with it. The walls were covered with graffiti. How many years of prison were the students who drew the graffiti threatened with?

[1] https://institute-events.mit.edu/sites/default/files/documen...

> It wouldn't be breaking and entering.

I think it would qualify for the UK equivalent.


35 years for any victimless crime is ridiculous.

That is a terrible idea. Imagine sentencing programmers to jail for security issues in their code.

Why is a software developer an engineer when it fluffs their ego, but not an engineer when regulation and consequences for failures are necessary?

Yes, if the security failure is grossly negligent, you should face criminal proceedings. As a C level executive, you are responsible for your chain of command.

Is there any evidence that software engineers are protected in some way from criminal negligence cases?

The reality is that it is vanishingly rare for any engineer to face criminal charges for their professional actions. It doesn’t seem to me that software is held to much lower a standard.

Not protected, simply not pursued, although it’s usually outright fraud that is the target of most prosecutions.

Watching the SEC closely to see how many ICOs they prosecute. It was also helpful to see someone involved with their breach response who attempted to profit from non-public material information get prosecuted (although that's tangential to the breach itself).

Someone relatively important is going to have to get burned before more software professionals are pursued for grossly negligent security failings.

You misunderstand my point. Are there examples of other sorts of engineers being brought up on charges?

It only happens in the most egregious of negligence cases as it is and even then convictions are rare.

I'm saying your impression that software engineering is protected is wrong, because essentially no engineers (to a first approximation) are brought up on criminal charges.

Lawsuits are commonplace in civil/geotechnical engineering because faulty work has life-and-death consequences for the general public. To be a certified professional engineer and sign off on design plans in California you need to pass an exam, after which you can be held personally liable. This law practice defends professionals that may be in a dispute [0]. Here's a breakdown of why engineers might get sued [1]. Here's a case where a company was held liable for damages associated with a construction project [2].

The title 'software engineer' without any notion of liability is an exercise in stroking one's ego.

[0] https://mylicenseattorney.com/california-board-for-professio...

[1] https://design.insureon.com/news/3-reasons-engineers-get-sue...

[2] http://caselaw.findlaw.com/ca-supreme-court/1671856.html

He said “criminal” charges. That is a very high bar.

Software engineers can be held liable in civil suits, as can other engineers even if there is no professional accreditation body for their industry.

It is less common in software than civil engineering for a few reasons, one of which is that customers literally have no problem signing away their liability. No one would sign a contract from a bridge designer that said “this might fall over in a stiff breeze” but that happens all the time with software.

By that extension if a McDonald's drive thru employee accidentally spills hot coffee on a customer, the CEO is responsible and should be charged with assault?

If they create a work situation where by cutting corners on container safety, protocols, and employee attentiveness I think they are guilty.

And in the modern security context we're pushing deadlines just to race to the latest features with almost no regard for security in the process.

Something has to change. If this kind of negligence were causing similar problems in physical realms there would be regulations.

The tech companies behind these mistakes won't have that free rein forever. Every major screw-up is a step closer to regulation, and everyone will cry about it when it happens. But so many companies today don't seem like they're ready to behave responsibly.

Is that grossly negligent? No. Is keeping the coffee excessively hot for cost reasons grossly negligent, when it caused the customer third-degree burns on her genitals and a win in court? Yes.


Your culture is set by your leadership. Make good choices.

While I fully understand that without universal insurance in the US, it may be most expedient to go after someone like McDonald's with deep pockets, I am tired of hearing how shocking and unconscionable it is that coffee could be served at a near boiling temperature.

I make coffee nearly every morning by boiling water in a tea kettle and pouring it over coffee grounds in a Melitta filter. If I poured or spilled it on my genitals, that would be bad. Doesn't make an approximately 200F temperature incorrect though.[1]

[1] See the National Coffee Association on how to brew coffee at http://www.ncausa.org/About-Coffee/How-to-Brew-Coffee

I'm familiar with the case, that's why I mentioned it. My point was that although they lost the civil suit, there weren't any criminal proceedings against C-levels. I understand the argument of negligence being as guilty as malicious intent but it creates a sweeping blanket that's hardly fair or enforceable.

I agree with your principles in theory but it's just impractical.

The Department of Justice was able to dismantle Arthur Andersen after their fraudulent audits of Enron. Lots of things that are impractical are possible with sufficient effort. And the government has unlimited resources for those efforts.

You must hold systemic negligence and corruption accountable, or it perpetuates the cycle.

A) The DOJ had been looking at Andersen for years prior to Enron due to irregularities with other major firms like Waste Management Inc. Enron was not an isolated incident.

B) They were prosecuted for the very specific crime of obstruction of justice after they were caught destroying evidence. It wasn't some backlash against a nebulous problem.

C) Their conviction was overturned!

I'm not sure you could have picked a worse example for arguing your point.

They keep the coffee that hot because customers like hot coffee. That's the main reason I get coffee at McDonald's: not because it's great coffee (though it's not bad) but because it's HOT. Half the time I get coffee at Starbucks it's only a little better than piss-warm.

I don't think forbidding hot coffee at drive-thrus is unambiguously in favor of safety, since not-so-hot coffee encourages people to drink while driving, which could cause an accident. Some people want to drink on their way to the office or home, and others want coffee that is still hot when they get there. The consequence of the litigation seems to be that the former group of customers is privileged, but I'm not certain that is an overall social good even if you prioritize safety - and some would of course be happy to trade off others safety for their own hot coffee.

There seems to be an unlimited supply of people always popping up to "debunk" the "myths" about the Liebeck case who seem to deflect from the fact that it is normal for coffee to be brewed at near boiling temperatures[1] that cause the sort of damage that was at issue. I could burn myself severely while draining pasta too, if I pour hot water all over my pants and don't remove them; it doesn't mean boiling water is too hot for cooking nor that say, a manufacturer of a non-defective pot is to blame.

Added reference due to downvoting:

[1] http://www.ncausa.org/About-Coffee/How-to-Brew-Coffee

"Your brewer should maintain a water temperature between 195 to 205 degrees Fahrenheit for optimal extraction."

It's unfortunate, but leaks and breaches happen in programs (which a website is). It's coding; it isn't perfection, and no one should go to jail or be ridiculed because they unintentionally introduced a bug that caused whatever problem arose (WE HAVE ALL DONE IT). This is why it's ideally best to have some sort of peer review, with buddies reviewing our code for things we don't see before they are pushed to production; unfortunately, this doesn't happen in all cases.

The only crime was not fixing the problem and keeping it a secret AFTER IT HAD BEEN DISCOVERED. In this case, it wasn't the mistake that was the crime, it was the cover-up.

Engineers in other disciplines are held liable for their mistakes. Imagine a civil engineer signing off on a building and then having it collapse. If it was found that the engineer was negligent, then you can bet your ass there will be repercussions. As an engineer, you are at the top of your field, and with that comes a professional responsibility that is important to fully realize. Mistakes are mistakes, sure, but if those mistakes end up enabling criminal activity then you're fully responsible. It's why the chain of command exists.

> imagine a civil engineer

But there isn’t an equally trained engineer dedicating his energy to taking down the bridge - it only has to not collapse under normal usage.

When a bridge is intentionally destroyed by enemy action, its engineer is not held liable.

> Engineers in other disciplines are held liable for their mistakes.

To be fair, they have several hundred (if not thousands of) years of trial and error, documentation, etc. behind them to (try and) help people avoid the mistakes.

Computer Science has barely 70 years of half-arsed fumbling about.

Italy jailed scientists for failing to predict an earthquake, despite the fact that it's not possible to predict an earthquake.

They were eventually acquitted, but the very fact that they were even charged in the first place is ridiculous.

Sounds like an excellent idea:

Jr. Developer - automatic pass. Low money

Sr. Developer - likely a pass, provided i's are dotted and t's are crossed - decent money

Tech Lead - no pass unless they tried very hard to get it resolved, big money

Exec - no pass, very big money

Didn't Iran put a developer in jail because some of his open source code was used on a porn site?

True but worth mentioning different forces were at stake there and here (although both very dark).

In Swartz's case, the prosecutor was trying to make an example of him because his public university made/is making tons of money providing information that should be free (or already is).

In this case, I would imagine they want people's info to be leaked and exposed as much as possible, just to have a good reason to fine those for-profit private companies.

Edit: in other words - show me a priest who doesn't want you to sin, or a cop who doesn't want you to break the law, or a doctor who is not fine with people getting sick. Otherwise they would all be out of job.

This is why you should lie as much as you can when dealing with for-profit corporations, especially online. Any information you give them will eventually be available to everyone, because they have no reason to care.

Wow, this story is amazing. The company got notified last August of a 0-day (no authentication required) that let anyone download all customer records, but no action was taken for half a year. Then a very bad PR stunt leading to even more exposure; one can't make this stuff up. It's April 3rd already, right?? Wondering why they couldn't just really fix the problem. It would be interesting to learn more about how they do engineering. E.g., was it all outsourced, and someone else is trying to fix it now? This year is going to be good!

That's not what 0 day means.

It's exactly a 0 day. They were notified last August of a 0 day in their website, and 6 months (6*31 = 186 days, using 31 for simplicity) later it still was not fixed.

Here's the definition:


I think your original statement was confusing because you put 'no authentication' in parentheses, implying that to be the definition of 0-day.

My natural gas provider can't get my bill to print, despite me emailing them for over a year.

So their old 1990s site worked fine. Upgrade to the new whizbang bullshit, and a steady stream of emails still can't get it to simply use a CSS print routine. Outsourcing is glorious!

Cases like this are why I think the general public vastly overestimate the capabilities of government surveillance. These same people work at NSA, CIA, etc.

Not to insult the intelligence of these fine agency folk; my point is security is only as strong as its weakest link. And whether public or private, people can make some very weak choices.

Is there any hope companies like the Y Combinator-backed Request Network can save us from this happening over and over?

A summary of their plan is at https://request.network.

What things would prevent them from implementing this? Seems like a great way to stop losing credit card and identity info in breach after breach.

Maybe in the future, but there are plenty of companies around Panera's size that had to get into the online-ordering space before SaaS was as big as it is today. Thankfully, much smaller eateries can now use Yelp or Seamless to deal with account management instead of rolling their own bespoke systems.

It's not a breach when it's openly accessible. It's a leak and nothing else.

Not sure if you are trying to be sarcastic. As soon as someone downloads the dataset, it's a breach. By-design vulnerabilities are always the best.

If you leave the front door open, your security is still breached when someone breaks in. You're just dumb for leaving the front door open.

When you rush shit out the door and don't support your development team, this is sadly a common occurrence.

And I was worried about the acrylamide.

I didn't even know about acrylamide. Now I am worried.

Their bagels are also pathetic and embarrassing.

So here's a fun note - as it turns out, the Panera Bread Director of Information Security mentioned in that email exchange worked at Equifax from 2009 to 2013. There's a comment mentioning it on that page, but you can find it just by looking at his LinkedIn: https://www.linkedin.com/in/mike-gustavison-b020426/

Time is a flat circle. Everything that has happened before will happen again. Every time it happens, we will hear "Security is our top priority" or "We take security very seriously."

EDIT: This just got more interesting. Turns out that despite taking the site down for an hour earlier today, they didn't fix it: https://twitter.com/briankrebs/status/980944555423002630

Also, based on the vulnerability still working at this endpoint [1], Krebs revised his estimated number to 37 million records: https://twitter.com/briankrebs/status/980949205974953984


1. https://delivery.panerabread.com/foundation-api/users/678141...
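The "enumerable" detail is what made this leak so bad: records apparently sat behind sequential numeric user IDs at an unauthenticated endpoint, so knowing one ID (like the one in the URL above) lets a crawler count its way through every other record. A minimal sketch of the contrast (function names are hypothetical; random identifiers complement, but never replace, a real authorization check):

```python
import uuid

def sequential_ids(start, n):
    """Sequential numeric IDs: seeing one record number lets an
    attacker enumerate every neighbour by simply counting."""
    return list(range(start, start + n))

def random_ids(n):
    """Random 128-bit identifiers: knowing one reveals nothing
    about the others, so the record space can't be crawled."""
    return [uuid.uuid4().hex for _ in range(n)]
```

This is also how Krebs could revise his estimate upward: with sequential IDs, the highest observed ID is itself an estimate of the total record count.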

Well, it costs nothing to put out a press release saying something "is our top priority" and is "being taken seriously," and then not do anything.

Correct me if I'm wrong - it's also NOT illegal to do so. Even if you are lying, it is immoral but not illegal.

I was once told by a cop, when I told them the defendant was lying about not showing up, that he probably had good reasons. Unless you are under oath, or dealing with a very few LE organizations, it's not illegal to lie.

Of course I'm not saying it's a good thing; just pointing out they can say whatever they want to - there is no liability.

Don’t forget, “We’re sorry,” “We’ll do better,” and my personal favorite, “Trust us!”

I’d prefer crippling fines.

Absolutely agreed. It feels like corps are developing their own infosec version of the four dogs defense.

4 DOG DEFENSE:
1. My Dog Does Not Bite.
2. My Dog Bites, But It Didn't Bite You.
3. My Dog Bit You, But It Didn't Hurt You.
4. My Dog Bit You And Hurt You, But It Wasn't My Fault.


This sounds like a dog version of the narcissist prayer.

This was a very interesting read.

"I’d prefer crippling fines"

Probably won't happen until some Senator gets personally burned. Equifax hasn't suffered much, for example, and they released almost all of their info for every adult in the US that ever used a credit card or had a mortgage.

I'm almost wishing some activist hacker would buy the data for the House and Senate reps and go to town... just to get their attention. Purchase pornhub accounts, shady drug site stuff, escorts, etc., and start sharing it publicly.

My guess is that senators that have been burned have been done so secretly and are being blackmailed.

The Equifax dump was apparently huge.

> My guess is that senators that have been burned have been done so secretly and are being blackmailed.

The whole bunch has been blackmailed for decades. Just not "ordinary" blackmailing, but threatening by big funders to cut said funding unless, for example, the politician keeps supporting NRA/BigAg/BigFinance-favorable policies...

Hmm, I like this point, but is that blackmail or more just "the system?"

We just need Pence's Grindr details.

Heh. Fabricated or real, that would get a fair amount of news time and attention. Maybe Romney too.

I know HIBP's Troy Hunt has very carefully detailed his ethical and moral tradeoffs in what he does, and I appreciate that as a benchmark.

But I so want to lose my mind, start getting these breach db's and start emailing Congresscritters with "This email was hacked, you're screwed, we're screwed, and here's legit links to help fix our lives back up... (eff.org) (hibp) etc"

And now I'm on the watch list for when someone crazier than me actually does this. Sigh.

Some Senators might already have such arrangements ;)


I feel like there could be an xkcd-style greasemonkey script that adds a winky face to the end of any of those phrases to make them a little more accurate.

"We take security very seriously ;)"


That’s because businesses in America allow everyone to fail upwards after they hit a certain echelon.

There’s no accountability and it’s about protecting everyone in that class at the expense of all other employees and consumers.

Yay, America!!

Maybe someone could go into business and provide services that would help companies prevent these things from happening?

Security consultants and contractors already exist.

But why would Panera, Equifax, et al bother investing in better security when they face no consequences for these incidents?

Markets can't solve everything

> Security consultants and contractors already exist.

Thanks for that protip.

Because that would cost Panera a lot of money, and the alternative (this) will likely cost them very little money. The choice then becomes clear: it's cheaper to tweet out an "oops" apology than to actually prevent it.

They don't care that your information got leaked, that doesn't enter into the calculation (unless it costs them money, which it doesn't).

Years ago, I was assigned to clean up an office building that had recently been vacated by a government cybersecurity contractor. While throwing away all the trash that had been left behind, I discovered a binder with at least a hundred pages of printouts from MapQuest, the location of the Panera Bread on each circled in pen.

Comments like this tend to weirdly circulate, I've noticed, with people forgetting that originally, it was completely anecdotal and unsourced.

OP, I don't mean to disparage you, but this is the internet, and there just aren't enough grains of salt in the world to allow me to swallow a tale like that without speaking up.

What years?

I'm pretty sure the intersection of time between Panera's spread to the East Coast and MapQuest's prevalence doesn't line up.

That could’ve easily been a really boring office lunch option binder, though. ;)

It was a hundred Panera Breads from all over the East Coast. Something suspicious was going on...

Do you think it could have been some sort of investigation? Like they say, criminals aren't born, they're bread.

I can’t decide whether to flag this or upvote it. I guess I’ll settle for replying.

Come on, let's stay on topic. All I'm trying to figure out is, have I been scwned?

Was that the path of yeast resistance?

At least around here, they were one of the only places to go with free public WiFi for a while. It could be something related to that, just a place to find open internet hotspots.
