
McDonalds India is leaking 2.2M users data - madmax108
https://hackernoon.com/mcdonalds-india-is-leaking-2-2-million-users-data-d5758b2eb3f8#.woaua1qdv
======
jbuzbee
_McDelivery is leaking personal data for more than 2.2 million of its users
which includes name, email address, phone number, home address, accurate home
co-ordinates and social profile links._

I know the world has changed, but many of us remember being given a list of
everyone in our city's name, home address, and phone number. The Phone Book.
And I recall it was unusual not to be listed in the phone book. My father was
prominent in our community, and we were always in it. We'd occasionally get
calls from someone angry at my father, but it wasn't a problem. Different
times...

~~~
killerpopiller
that was before one's personal contacts (and every piece of personal
information one can think of) could end up in silos for spammers, fraudulent
marketers, annoying cold-callers, and malware distributors, who don't have to
pay to contact you and do so with a click - not with a letter or a dial phone.
May I again mention Cambridge Analytica manipulating voters by making
automated use of such silos?

------
brianbreslin
My GF works at Burger King corporate with their digital team, and this is one
of their biggest nightmares. Most of these firms use a company called
Tillster[1], which has the largest market share in apps for the QSR industry.

1.[http://www.tillster.com/](http://www.tillster.com/)

------
koolba
> The McDonald’s India app, McDelivery is leaking personal data for more than
> 2.2 million of its users which includes name, email address, phone number,
> home address, accurate home co-ordinates and social profile links.

I'm sure I'm not the only one wondering where this app was developed...

Marginally on topic: if they do any encryption within the app, I wonder if
they named the variable that stores the authentication code "bigMac".

------
nojvek
Security is expensive, so why pay all that money when you're unlikely to get
sued, being too big to sue with your army of lawyers?

We need more of the class action lawsuits like WeVibe. But even as a user who
got affected, I am unlikely to get compensated.

There needs to be a startup for filing class action suits against companies
that abuse privacy and security, ensuring the average Joe can go up against a
fat corp.

------
dahart
> This is published under our responsible disclosure policy

Honest question because I don't know what responsible disclosure protocols are
-- is it considered responsible to report large vulnerabilities directly to
the public when the company is too slow? Are there no intermediates? No
security organizations or advocates, or even businesses in the vulnerability
market, who can help to escalate an important issue without potentially making
the situation worse temporarily?

I realize this might be a stupid question, and that I need to assume hackers
are already ahead of the public, but my armchair security instincts are that
real true harm could come from announcing unpatched vulnerabilities to the
public, inviting otherwise lazy or opportunist hackers to jump in.

Based only on what I can see here, the timeline in the article is six weeks
from discovery, and it went from reporting directly to the app maker, who did
respond once, straight to the public. It just seems like there are a metric
ton of intermediate steps one could take before announcing publicly.

~~~
anilgulecha
This is the standard: 30-90 days from the initial report with no intent or
comment on fixing.

If they did show intent to fix the issue (by responding to his last 2
requests), then most researchers hold off on full disclosure for some more
time (usually depending on whether they think the issue is being exploited in
the wild).

In this particular case, given the sequence of events, the disclosure would be
considered responsible by the security industry.

~~~
dahart
Thank you. I can accept full public disclosure is currently considered
responsible, and is standard practice. My question might be too presumptuous,
and I didn't mean to suggest hackernoon did anything wrong here.

What I'm really wondering is why there aren't intermediate third party steps
between direct contact and public disclosure? Are there no agencies, public or
private, that can help escalate the response times? Are there reasons that the
security community has come to believe that third parties don't help, or
reasons that public disclosure is seen as the safest alternative?

~~~
ethbro
What would a third party do?

If the company is made aware of the issue, either they're fixing it or they
aren't.

If you read up on the history of responsible disclosure, it's the result of a
long, hard, and occasionally still fought battle with companies who decide
it's cheaper to sue security researchers into silence than patch flaws.

Not all companies, but I'd dare say most. Based on that, they're damned lucky
they're being given the benefit of 30-90 days.

~~~
dahart
> What would a third party do?

Well, I can certainly imagine a lot of possibilities. This question seems to
suggest there are none. If there are none, I'd like to know why, given that
it's easy to imagine some.

A company can be aware of a security flaw and in a state of fixing it, just
not as fast as the reporting party wants, for a variety of both good and bad
reasons. A company can be aware of a security flaw and not understand or agree
with the reporter's assessment of the seriousness. A company can be aware of
the flaw and be planning or executing a fix too slowly, and for whatever
reason be unwilling to communicate intermediate status while nonetheless
taking action.

A third party's role would be exactly the same that the public has in this
situation, namely to scare the company into taking the issue seriously and
fixing it quickly. The benefit of going to a third party that could help,
instead of the public, would be preventing larger scale damage than is already
occurring.

A well respected third party could help the company with the security problem
understand that the issue is serious and could even offer services to fix the
problem, in case the company isn't competent in security matters. Just one
imaginary scenario, the third party could be a government organization or
enforcement agency that is tasked with internet safety issues, and has the
ability to revoke the offending business's right to operate if they fail to
comply.

I know some companies have been incompetent and malicious, I know that some
companies have tried to silence vulnerability reports, but is that reason to
assume they all are? Even if we did speculate that it was most companies, is
that a good reason to always escalate to the maximum first, before there's
evidence of a cover-up attempt?

I just don't see how the threat or the act of making the situation even worse
is the only possibility, or even the best possibility for the affected users.

~~~
ethbro
That's why I was asking. I honestly couldn't think of any.

 _> A company can be aware of a security flaw and in a state of fixing it,
just not as fast as the reporting party wants, for a variety of both good and
bad reasons._

Hence the commonly accepted 30-90 day window, which acts as an impetus for
companies to prioritize fixing vulnerabilities they'd not like made public.
Previous to this, they'd fix them a couple of years later during the next
rewrite. This is also why there's typically an extension available _if the
company has demonstrated in good faith that they're working on a fix._

 _> A company can be aware of a security flaw and not understand or agree with
the reporter's assessment of the seriousness._

All the better reason to disclose it after an acceptable period and make the
company's customers aware. Either they agree with the company's position, or
they can vote with their better-informed dollars.

 _> A company can be aware of the flaw and be planning or executing a fix too
slowly, and for whatever reason be unwilling to communicate intermediate
status while nonetheless taking action._

This is the same as no-action. It's not hard to keep the security researchers
who reported a bug to you in the loop, outside of general corporate
incompetence.

 _> A third party's role would be exactly the same that the public has in this
situation, namely to scare the company into taking the issue seriously and
fixing it quickly. The benefit of going to a third party that could help,
instead of the public, would be preventing larger scale damage than is already
occurring._

This is exactly what responsible disclosure does. During the 30-90 day window,
only the company, the researcher, and any black hats who are aware of the
issue are informed.

 _> A well respected third party could help the company with the security
problem understand that the issue is serious and could even offer services to
fix the problem, in case the company isn't competent in security matters. Just
one imaginary scenario, the third party could be a government organization or
enforcement agency that is tasked with internet safety issues, and has the
ability to revoke the offending business's right to operate if they fail to
comply._

This sounds like an attempt to impose another corruptible organization in the
middle of something the market is currently solving quite well. When your
company shows up on the 8pm local news, customers notice. If it's not a big
deal, then it doesn't make the news. If the result hurts your bottom line or you couldn't
produce a fix by the window, then hire more staff into your security / bug fix
teams or un@$%! the technical debt in your codebase that made delivering
impossible.

At best, it looks like CERT, whose power is essentially limited to naming and
shaming. At worst, it looks like something like the Better Business Bureau
(aka pay to certify).

Furthermore, looking at the track record of regulatory capture, I have less
than zero faith that any organization would ever revoke an offending
business's right to operate over a security matter until there's legislation
passed mandating a better security posture / consequences for the private
sector.

 _> I know some companies have been incompetent and malicious, I know that
some companies have tried to silence vulnerability reports, but is that reason
to assume they all are? Even if we did speculate that it was most companies,
is that a good reason to always escalate to the maximum first, before there's
evidence of a cover-up attempt?_

Yes. Because without a strong default posture, companies don't take security
issues seriously. The 30-90 default provides this encouragement and moreover
places the onus on an offending company to justify why they need longer than
this.

Furthermore, I don't feel like this is the "maximum first." That would be
immediately releasing any vulnerabilities discovered. The currently accepted
methodology is the result of two decades of experience in solving this problem
(look up the history on Bugtraq etc) and seems to be more effective than
anything else that's been attempted.

~~~
dahart
> This sounds like an attempt to impose another corruptible organization in
> the middle of something the market is currently solving quite well.

Whether the market has solved this "quite well" is certainly up for debate. It
could _definitely_ be better than it currently is.

We have oversight agencies that are not known for being corrupt. I'm thinking
of OSHA, the EPA, the CPA and several others. Does any side involved here
deserve due process? It is sad if the default assumption is that everyone is
cheating on all sides, and nobody deserves to be treated as innocent until
proven guilty.

~~~
ethbro
I'd rephrase, in my opinion the market / current system are solving the
problem in the best way we've found so far.

What do you see as the problems of the current system that could be improved
upon?

OSHA I'll give you (but probably because large corporations also have an
incentive to avoid lawsuits, and having and following OSHA rules helps that),
but the EPA has certainly been a political football in terms of both
enforcement (via administration) and science (via funding, see budget cuts to
climate science proposed). Not sure which CPA you're referring to (guessing
the state consumer protection agencies?), but the federal CFPA has had to put
up with astroturfing like this [1] from the unethical businesses they've been
focused on.

I'm not a cynic or a misanthrope, but I do believe that standards developed
with the expectation of bad actions from as many parties as possible are more
robust than those which assume good. If nobody misbehaves, maybe they can take
a more efficient happy path, but it's important to take the pathological case
into account too. Especially when money's involved!

[1] [https://www.forbes.com/sites/realspin/2014/04/06/the-
consume...](https://www.forbes.com/sites/realspin/2014/04/06/the-consumer-
financial-protection-bureau-a-toxic-workplace-begets-toxic-policies-for-
consumers/#52762699556b)

------
r_singh
Three years ago, when I was learning web development, I noticed that
McDelivery in India stores each user's password in plaintext, and it's still
the case.

I know this because you can recover your password from them in plaintext. I
find it hard to believe.

~~~
r_singh
Does anyone know of more APIs in India that are this porous?

~~~
anilgulecha
Yes, Axis Bank's MF. I made a full disclosure here (plaintext passwords).

[https://medium.com/@anilgulecha/axismf-coms-bad-security-
pra...](https://medium.com/@anilgulecha/axismf-coms-bad-security-
practice-c9a1cb60110)

------
jessaustin
We need to get a bit smarter as consumers. Who really thinks it's a good idea
to give _McDonald's_ all her/his PII?

~~~
bluesign
People who order from McDonald's? I guess they have delivery in India.

~~~
jessaustin
As I recall from my travels there, in India it would be very cheap to simply
hire a guy on the street to run down to McDonald's for you. There are always
people hanging around looking for odd jobs.

In general I'm more and more hesitant to assume that since I'm paying someone,
they should know enough about me to be able to take out loans in my name. I'm
resigned to the fact that e.g. Amazon and PayPal know that much about me, but
at this point I would rather not give complete or true details to a
restaurant, hotel, or grocery store.

------
driverdan
Just a warning, gathering user data from an API endpoint like this is illegal
in the US, even if it doesn't require auth. Be safe!

~~~
manquer
Any citation? My understanding was that under the CFAA, unless you are
explicitly circumventing security provisions, it's not illegal?

~~~
rohan404
I suppose that whole incident with Weev and AT&T would serve as legal
precedent.

------
bluedino
Who developed this thing? Who approved it at McDonald's corporate?

~~~
tyingq
Typically in a Fortune 500 company it would be the marketing department, since
it's their money that pays for it. Company culture varies, though, in how much
involvement comes from a privacy or IT security area of the company. In my
experience, very few companies have a privacy or IT security department with
something like veto power. They typically have only limited influence, like
being able to add some terms to a contract before it's signed.

