
Former employees say Lyft staffers spied on passengers - dhruvarora013
https://techcrunch.com/2018/01/25/lyft-god-view/
======
ryan_j_naughton
Having seen this at too many companies, we at fair.com decided to adopt
stronger policies to prevent this, viz:

\- all inbound API requests first go to our API proxy in the secure layer.

\- the API proxy encrypts all PII using the encryption service in the secure
layer

\- then the API proxy sends the request on to the appropriate service, having
swapped all PII for tokens.

\- services in the general layer are not able to talk to the encryption
service to decrypt data.

\- thus all data that would normally be considered very sensitive (e.g. credit
reports) can be stored or passed around the services because values like SSN
and others are tokenized.

\- our services with UIs, like our CRM, wherein the customer service rep needs
to see the data unencrypted, work perfectly bc the outbound response from the
API proxy decrypts the data, but selectively so, based on the person's
permissions through our abstracted auth layer.

Thus in the Lyft example, the majority of employees could have access to "God
view" but with all the PII encrypted (so they couldn't search for their
friend's account by email, for example): they could still look at rides and
transactions, while just those who need to see the decrypted PII could be
given those permissions.
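The flow described above can be sketched roughly like this (a minimal illustration, not actual production code; all names such as `TokenVault`, `handle_inbound`, and `PII_FIELDS` are made up, and a real encryption service would use managed keys rather than an in-memory dict):

```python
# Rough sketch of the tokenize-on-ingress / detokenize-on-egress flow.
import secrets

class TokenVault:
    """Stands in for the encryption service in the secure layer."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str, can_view_pii: bool) -> str:
        # Decryption is selective, based on the caller's permissions.
        return self._store[token] if can_view_pii else token

PII_FIELDS = {"ssn", "email", "phone"}

def handle_inbound(request: dict, vault: TokenVault) -> dict:
    # API proxy: swap PII for tokens before forwarding to general-layer services.
    return {k: (vault.tokenize(v) if k in PII_FIELDS else v)
            for k, v in request.items()}

def handle_outbound(response: dict, vault: TokenVault, can_view_pii: bool) -> dict:
    # Outbound: detokenize only for callers whose auth layer grants PII access.
    return {k: (vault.detokenize(v, can_view_pii)
                if isinstance(v, str) and v.startswith("tok_") else v)
            for k, v in response.items()}
```

A CRM request from a rep with PII permission gets the decrypted values back; everyone else sees only tokens, yet can still store, pass around, and search on them.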

Of course, this assumes that the encrypted PII is sufficient for
anonymization. If you can look for all rides within a block of an address in
their God view, then you could quickly figure out which person was your friend
by narrowing down to rides originating from his house and his work. But again
that comes down to properly limiting certain search capabilities in the UI.

I don't get why more companies don't follow our approach. I've seen way too
much personally sensitive data in plaintext in databases over the years.

~~~
jjeaff
While I like that, and it is a cool approach, what is really the difference
between just storing PII encrypted in the same database and then keeping your
keys locked down based on permission levels?

The API proxy would be great if you have a trusted 3rd party in charge of it
or something. But abstracting it out to a separate layer doesn't seem
necessary since it's all within the same company, and developers will need
access anyway.

~~~
ryan_j_naughton
There is a big difference.

\- Logs: we can log all params within the general layer without worrying about
leaking PII to the logs

\- Monitoring network traffic: if I need to use Wireshark or something similar
in the general layer, all the data there is already encrypted/tokenized, so it
is safe to do so.

\- If you let each service encrypt the data itself, then all of those services
are in scope from a security perspective. Access to those services from
engineering's perspective would have to be considerably more locked down,
potentially preventing engineers from accessing their ENV variables, memory
dumps, sshing into those boxes/containers, etc., for fear they could get the
encryption keys (though of course many of those things should be locked down
anyway).

\- Further, having each service do the encryption itself means you are
duplicating that solution over and over again and introducing more
opportunities for error. Having a single encryption service within the secure
layer allows us to change our approach more cleanly than if it were spread out
everywhere.

And the list goes on and on...

------
mpolichette
When I did an internship at a national lab, a lot of the hard rules about
security relied on the fact that you had gone though their hiring process and
would follow the rules. There were different access levels, for sure, but only
like 2 or 3. You might have "had access" but you shouldn't be anywhere you
didn't have a good reason for being.

Lyft should be checking on this, running audits and whatnot, but they also
should be setting good policy and culture to not abuse access.

Basically, I think it's reasonable both to allow many people access and to
expect them not to abuse it.

~~~
jamestimmins
This was how it worked when I worked in admissions during college. You had
access to every applicant's information, grades, essays, etc., as well as
counselor feedback. But you were told that if you looked up yourself, someone
you knew, or any celebrities, then you could be fired.

I don't know if there were automated checks for that kind of thing, but
everyone knew there was a line you didn't cross.

~~~
dkarl
At Lyft people _did_ think there were automated checks, _did_ know there was a
line that shouldn't be crossed, and yet there was rampant abuse. Don't you
suspect that many of the students in your position abused their access?

I think companies should be responsible for implementing effective security,
whether that means preventing improper access or at least detecting it and
punishing it after the fact, not just establishing a "culture." The most
dangerous people, the ones who commit violent crimes, aren't limited by
culture anyway, because they despise norms and have very different perceptions
of risk compared to most people.

In your case, your fellow student workers might simply have not felt safe
sharing their crimes with you. "Naughty" behavior can be taboo yet widespread.

~~~
ams6110
If you have rules but don't enforce any consequences for breaking them, the
rules pretty quickly get ignored.

If Lyft had fired a few rulebreakers early on, everyone else would know they
were serious.

------
mattmanser
_Lyft tells TechCrunch that staffers in several departments that might need
access to this data for their job have the ability to look up this
information_

See, that's a complete lie, and that's the attitude that needs to stop.

No-one _needed_ access.

Analytics definitely didn't. Engineers _never_ did. Customer service should
have to request permission _from the customer_, with a valid reason, before
accessing sensitive data. Insurance definitely did not. And "trust and
safety", just like customer service, should have had to get customer
permission first.

What's the _need_?

These are the laws that need to come into place: if your company is bigger
than X, safeguards on personal data must be in place to stop anyone from
accessing a customer's personal data without explicit permission from the
customer or a senior (legally culpable) manager, or as needed to fulfil an
order.

~~~
ryacko
[https://danluu.com/wat/](https://danluu.com/wat/) apparently this is normal:
Facebook famously let all employees access everyone’s profile for a long time,
and you can even find HN comments indicating that some recruiters would
explicitly mention that as a perk of working for Facebook. And I can think of
more than one well-regarded unicorn where everyone still has access to
basically everything, even after their first or second bad security breach.
It’s hard to get the political capital to restrict people’s access to what
they believe they need, or are entitled, to know. A lot of trendy startups
have core values like “trust” and “transparency” which make it difficult to
argue against universal access.

~~~
fooblitzky
It doesn't need to be normal. There's no reason companies couldn't build a
system that required approval from your manager before being able to access
customer data. Any time a manager granted access, that could be audited by
some second tier.

~~~
cwkoss
"Okay, stay on the line. My manager went to the bathroom 15 minutes ago, he
should be back any minute, and then we can proceed with..."

After-the-fact accounting for all 'sensitive' actions would probably be more
practical for most business needs.

I'd put a wizard in front of the thing that grants the access token to figure
out the purpose and scope of the token needed.

Information request: "Rider History"

User: current caller

Scope: Between 9 AM and 11AM today

Reason: Lost an item this morning, need to lookup driver

If you were fancy you might even be able to convert the wizard's contained
information into a request against the backend. Select trip.driver, trip.time
from trips where user_id={caller_user_id} and time={9:00-11:00 today}
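Sketched in code, that conversion might look like the following (purely illustrative; the table and column names, and the `build_scoped_query` helper, are made up):

```python
# Convert the wizard's structured request into a scoped, parameterized query,
# so the access token only ever covers the stated user and time window.
from datetime import datetime

def build_scoped_query(request: dict):
    """Limit the lookup to exactly the scope the requester stated."""
    sql = ("SELECT driver_id, start_time FROM trips "
           "WHERE user_id = ? AND start_time BETWEEN ? AND ?")
    params = (request["user_id"], request["scope_start"], request["scope_end"])
    return sql, params

# The wizard's answers, as a structured request:
req = {
    "information": "Rider History",
    "user_id": 12345,  # current caller
    "scope_start": datetime(2018, 1, 25, 9, 0),   # 9 AM today
    "scope_end": datetime(2018, 1, 25, 11, 0),    # 11 AM today
    "reason": "Lost an item this morning, need to look up driver",
}
sql, params = build_scoped_query(req)
```

Because the query is built from the wizard's fields, the agent never gets a free-form search box: the reason and scope are captured up front for any after-the-fact audit.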

------
lhorie
Someone I know was just commenting that from convos w/ people in other
companies, it seems many startups have benefitted from not being under the
limelight, and thus had the chance to quietly clean up their own messes while
Uber was taking all the heat from the media.

~~~
dominotw
I always suspected that uber flames were fanned by its many competitors.

~~~
mrkurt
Uber's dumpster fire didn't need any fanning.

------
throwawaylyfty
I worked there. I was an engineer and definitely needed access to these data.
Fraud and abuse is constantly evolving and touches every part of the business.
Everything was audited and I never saw or heard of a single abuse of access.
Privacy was talked about seriously at onboarding and other trainings. I have
no doubt if somebody was caught abusing this they’d be fired.

~~~
lovich
The article indicates that what you are saying is at least not universally
true at Lyft.

~~~
djsumdog
It's really difficult to audit all this stuff. I was at a health insurance
company where someone used su to switch to a different user on a box they had
root on; it did get picked up by the security team, but only a few weeks after
it happened.

I was offered a security job at one shop and turned it down, keeping my
development role. They had 3 security people for the company (total IT size
was 500) and the job involved a ton of log parsing, DDoS work, and they were
starting to develop an internal application whitelisting tool. They wanted to
bring me on because they desperately needed a developer to add some automation
for separating the important from the chaff. (A younger me would probably have
done this, back when I wanted to be a pen-tester. I only got
interviewed/offered the position because I made the mistake of talking about
going to Defcon on a company Slack channel and the security guy insisted I
interview.)

------
cronjobma
Whether it’s Uber or the NSA, stories of staff spying on people for a variety
of reasons always come down to people who seem to have access to things that
they probably shouldn't have gotten access to in the first place. Users
should be protected by having their data encrypted and anonymized so no other
human being (staffers, governments or hackers) can connect an ID to the data.
This way they can still access the data and use it for whatever work-related
purpose, with less risk of these things happening.

~~~
joshuamorton
(I work at Google, but these views are my own):

This works until you need some kind of ombudsperson. At some level the data
needs to be accessible and audit-able; otherwise what am I to do if my driver
just drops me off at a different place than where I asked, or doesn't pick me
up?

You need to know that I was in their vehicle, otherwise how can they charge me
if I ruin their car. You need to know they were my driver.

There absolutely should be data privacy guarantees that are as strong as
possible. But "encrypt and anonymize everything" doesn't work. (edit: and
note, I think this is an unfortunate truth, but still a truth).

~~~
musage
What do you do when you pay with $20 for something, but get change for $10?
Why are "argue with them", "accept the loss and move on", "karate chop" and
infinite other things not among the options?

> You need to know that I was in their vehicle, otherwise how can they charge
> me if I ruin their car. You need to know they were my driver.

How did taxi drivers handle that for nearly the last 100 years? People before
us managed sticky situations without destroying human civilization, and so can
we.

~~~
Consultant32452
Well, back in the old days if we wanted to complain we'd write it down on a
piece of paper, wrap that piece of paper inside another piece of paper, and
then put that in an unlocked box out in our yard.

That's similarly how we'd order products, out of magazines. Only in that case
on the paper we'd include things like our bank account information that we'd
put in the unlocked box in our yard.

I'm not suggesting that things can't or shouldn't be better. I think it's just
important to have a realistic perspective on where we are in this continuum of
service vs privacy.

~~~
mikeash
I generally agree with you, but I think it should be pointed out that
unauthorized access to that unlocked box carries severe punishment. A big part
of the problem with things like this is that not only are there no controls
preventing access to private info, but there are also few if any consequences.

~~~
Consultant32452
How much control do you think there was in the businesses that got those
wrapped pieces of paper with your banking account information? Sure, there are
penalties while it's in the box in your yard, but not really once it was at
its destination. I bet it's much more strict today than it was a few decades
ago.

~~~
yellow_postit
I'm not so sure. There was a lot less surface area to secure then vs today's
"collect all the data" ethos.

------
mmanfrin
The screenshots from the leaker mention that they are using "Redshift", which
is the name of Amazon's RDB product. That means this is about people who have
access to the database. It's unsurprising that they could access customer
data given access to the database. I'm not sure how you prevent this without
cutting off access to the db (and there are legitimate reasons people within
the company would have access -- DBAs, engineers who are writing direct
queries, etc.).

Not defending people accessing PII, but just saying there are legitimate
reasons why someone _could_ have access.

~~~
gresrun
You can't prevent access, but you can log all access and require a written
reason for it. That, followed up by routine audits of the access logs, will
reduce and discourage abuse like that described in the article.

~~~
cookiecaper
This is about Redshift, Amazon's cloud-based data warehousing tool. Auditing
and logging every individual access made by data analysts, engineers, and
others making use of Redshift would make their jobs impossible. One day of
queries would take weeks to audit and validate a legitimate use case for all
the individual data that got touched, and if you're just going to say "oh they
needed everyone's PII because it was a big analytical query like they do all
day", you're back at square one.

The reality is that some people are going to need wide-reaching access. You
could monitor for certain problematic access patterns, like someone who is
supposed to be doing primarily aggregate queries doing a lot of specific ones,
and I'm sure that'd be a good thing to do, but to be honest there are probably
much higher priorities since employees who need sensitive access are probably
going to be able to avoid that type of detection.
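Monitoring for that kind of problematic pattern could be sketched like this (a crude illustrative heuristic, not a real detection system; the regexes and threshold are assumptions for the example):

```python
# Flag analysts whose workload shifts from aggregate queries to
# row-level lookups of specific users.
import re
from collections import defaultdict

def is_row_level(query: str) -> bool:
    # Crude heuristic: filters on a specific user id with no aggregation.
    has_filter = re.search(r"where\s+.*user_id\s*=", query, re.IGNORECASE)
    has_agg = re.search(r"\b(count|sum|avg|min|max|group by)\b",
                        query, re.IGNORECASE)
    return bool(has_filter) and not has_agg

def flag_analysts(query_log, max_row_level_ratio=0.1):
    """query_log: list of (analyst, query) pairs. Flag anyone whose share
    of row-level lookups exceeds the threshold."""
    totals, row_level = defaultdict(int), defaultdict(int)
    for analyst, query in query_log:
        totals[analyst] += 1
        if is_row_level(query):
            row_level[analyst] += 1
    return [a for a in totals
            if row_level[a] / totals[a] > max_row_level_ratio]
```

As the parent comment notes, someone who genuinely needs sensitive access can likely evade a heuristic like this, so it's a complement to, not a substitute for, limiting who has access at all.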

------
erulabs
I can't even imagine how much PII is sitting in AWS Redshift from various
startups... It's brutally slow if you don't tune the encoding types _extremely
carefully_, which no one ever does. One has to wonder, if this is such a big
deal, what the access rights from AWS' perspective look like xD

Imagine the data team convincing the rest of the company that they're doing
anything to make their redshift _slower_. Encryption? People want their
graphs, damnit!

------
jonknee
What circumstance would be needed to have a view where an employee can find
riders by name and look at their whole history? If there is a complaint it
should allow the customer service agent to see the ride and perhaps some
history (ratings make sense, but including full locations/times seems unwise),
but I can't think of a reason why this would ever need to be a process
_started_ by a Lyft agent and not the customer or driver.

~~~
Theodores
I used to be in the habit of taking my cat to the vet in a regular taxi, with
a local company. I always got the same guy and the same car as they always
'knew' where I was going. The guy I got liked cats, didn't mind waiting around
and made sure everything was looked after. Others were allergic to cats or
only doing airport trips, so I had no problem with them looking at my history
and doing their best for me.

~~~
jonknee
That’s great for you, not great for say someone who’s being stalked by an
abusive ex. It’s such an obvious and foreseeable problem, they need to own it.

~~~
fjsolwmv
Most male employees wouldn't foresee that problem.

------
ilamont
That sure changes the Uber vs. Lyft media narrative.

~~~
djsumdog
Not really. Uber created the Hell Map, Greyball, and had the two harassment
memos. This just seems to be a failure to actively monitor their audits and do
permissions correctly... something that Uber got hit with and fixed too.

Lyft just needs to deal with this issue now that it's public, but it's not
specific to Lyft. It's really difficult to effectively scan for abuse, even
when you audit everything. I've seen similar issues at many companies.

This Lyft situation might be more oversight, whereas Uber's seems to be more
malice.

------
nkkollaw
I was talking about this with my girlfriend at dinner tonight.

While we were eating, I noticed a few cameras that had a view of the whole
restaurant, and wondered: of course filming the restaurant might be useful in
case of a robbery (?) or for insurance, etc., but what are the chances the
minimum-wage employees who check those DON'T use the footage to check out hot
women or the embarrassing stuff that happens from time to time..?

With us being tracked with data and video all the time nowadays, we are safer--
or the companies that have the footage are safer--but of course there is a lot
of room for abuse and improper usage.

~~~
duxup
"the minimum-wage employees who check those"

Does anyone even look at them? I just assume most video gets recorded and then
recorded over or somehow discarded.

Outside of situations like say an airport or casino I doubt anyone is actively
watching most of the cameras unless something happens after the fact.

~~~
nkkollaw
Perhaps not actively, but I bet in case of a hot woman or something weird
going on they'll know where the monitors are.

I also have two separate family members who have businesses with cameras, and
they both look at what's going on from an app on their iPads while they watch
TV in the evening (actually, having seen them do it is probably why I know
video at businesses is not handled properly).

~~~
averagewall
Don't public places like shops have to have a sign saying there's CCTV? So
customers know they're watched just like in the street.

~~~
nkkollaw
Not in Italy nor Poland.

Also, what difference would that make? I already know I'm being filmed, the
problem is that there is no control over how that video is used.

~~~
jaclaz
>Not in Italy nor Poland.

Which does not mean that there isn't a Law (in Italy) mandating it.

[Italian] Articolo 13 del Decreto Legislativo n.196/2003 e Videosorveglianza -
Provvedimento generale - 29 aprile 2004:

[http://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-...](http://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-
display/docweb/1003482) [/Italian]

In a nutshell: [http://www.garanteprivacy.it/Garante-Home-
theme/images/origi...](http://www.garanteprivacy.it/Garante-Home-
theme/images/original/CartelloVideosorveglianza.gif)

Surely it makes no difference whatsoever.

------
msoad
Since when is a Blind post a credible source for news?

~~~
joeblau
Heh, were you around for the Secret days? So much "news" came from that app.

------
OliverJones
Protection of PII has, for a long time, been a central tenet of USA-based
health care IT (due to the HIPAA / ARRA-2009 regulations).

It's possible to do that fairly well and still leave need-to-know exceptions.
(The substitute nurse on the intensive care unit needs to know if a particular
patient has Crohn's disease, for example.)

My point is, PII CAN be protected reasonably well. It takes executive will to
do so, and training, and monitoring.

I worked in a hospital for a while. They had good training on how to avoid
misusing PII. It starts with "don't look up your ex or your senator" and goes
into ways to keep patient data safe.

When there IS a leak, HIPAA-covered operations are obliged to disclose it. See
here for the catalog of recent disclosures:

    https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf
Doing privacy right is systemically possible. But it's a systemwide task, not
just a one-off training or audit.

(Now, we can talk about whether HIPAA's main point--preventing insurers from
abusing patient data--is working or not. Your doc makes you sign a permission
slip letting insurers see your information, so you waive that protection in
return for reimbursement. But that's a different issue.)

~~~
rmc
> _PII CAN be protected reasonably well. It takes executive will to do so_

This is where the EU's new data protection law (GDPR) could help. It has
large fines, and the ability for NGOs to sue you on behalf of users. When the
alternative is a big fine, it's easier to find the will.

------
maybeiambatman
I can completely see this happening. I have interned at small startups in the
past where practices like this were rampant.

~~~
tzahola
Yeah, and it’s usually called “data science” and “personalized targeting”.

------
mys_throwaway
I happen to know that the former country manager of Uber in Malaysia used
driver personal information to bully and then swindle a gig worker at another
(but not competing) gig-economy startup.

Long story short, he pressured the worker into doing a gig for free if he
could guess his birthday. Not quite a fair gamble considering he knew the
guy's birthday from when he'd signed up as a driver for Uber.

This would still be a terrible misuse of confidential information if he'd
stopped at the poorly executed joke, but the highest ranked Uber employee in
Malaysia instead insisted the marginally employed freelancer hold to the
agreement.

~~~
0xB31B1B
What does this have to do with Lyft?

~~~
mys_throwaway
\- The linked article alleges employees at Lyft abused information.

\- Lyft is a direct competitor of Uber.

I think my account of an employee at Uber, and not just any employee, but a
country manager, committing a worse abuse is fairly relevant to the topic.

------
buildbuildbuild
Can unfortunately confirm that friends at both Lyft and Uber have in the past
known my ride history. I admittedly had to push a bit jokingly for either to
look it up, but the fact that it is even possible for insiders to access
internal production databases makes me suspect this problem is far more
widespread than just at ridesharing companies.

I wonder who at Fastmail can read user emails? Who at Heroku can access my
code or ENV secrets? Can bank employees see recent transactions, bypassing ACH
verification deposits’ “Security”?

Sad how rare end-to-end encryption is as a feature in 2018.

~~~
dx034
Many companies have access to user data heavily restricted and audit every
access. It's not that hard to log all queries to production that don't come
from the system itself. And if a developer runs queries against the production
database, that should pop up somewhere immediately.

Not different in banks, any lookup of customer accounts is monitored and
checked. Looking up a friend's account will get you fired immediately (or
worse).

~~~
buildbuildbuild
What I worry about is whether the logging is at the application level or
database level.

Would not be surprised if my banker’s crusty Windows XP teller software’s
auditing could be bypassed by an unexpectedly savvy insider.

~~~
dx034
Logging is done at every level. OS, network and multiple times on a db level
(access log plus execution log).

------
retox
I thought 'staffer' was only used when talking about people working in
government or for a political party. Aren't 'staff' or 'employees' just as
good for the headline?

~~~
dragonwriter
> I thought 'staffer' was only used when talking about people working in
> government or for a political party.

Nope.

> Aren't 'staff' or 'employees' just as good for the headline?

No. "Staff" has a different connotation; as a mass noun, it implies things
that are true generally of the staff. "Employees" would be about as good as
"staffers"; the latter is shorter, though, which is often preferable in
headlines.

~~~
retox
Employees is already used in the headline, so the editor would have been
looking for a different word to use anyway.

------
asow92
Who honestly finds this surprising?

~~~
spydum
Very very few people I hope. There is no economic incentive for companies to
invest in secure design/architecture. The costs of building it right almost
always lose to other product owner features and timelines.

Until we can actually get incentives aligned, companies will continue to build
leaky, poorly designed apps with no layered controls. Yes, someone will jump
on me and say "but my company does it right!". Great for you! Share your
knowledge! You are a snowflake, and probably fortunate to have ethical
leadership! The rest of us are doomed.

------
Avshalom
Just a reminder: "The Kinder, Gentler XXXX" or "XXXX but more Ethical" is not
the same as "Kind and Gentle" or "Ethical".

------
lorddoig
The sixth word in this article is ‘scandal’. Is it just me, or is a cab
company knowing your pick-up and drop-off locations really not that
outrageous?

------
outworlder
From TFA: "I have tried to change that from within"

Uh oh. This will probably narrow down enough so that they can have a pretty
good idea of who posted this...

------
pc2g4d
Strange coincidence that this headline shares the top spot with an
announcement of a new Waze Carpool service... which seems to compete with
Lyft?

------
mattdodge
None of the data that was available sounds like sensitive PII so I'm not sure
why anyone would be surprised by this. I would probably think that
rider/driver feedback isn't PII at all.

I suppose it might be a bit questionable if Lyft was creating and providing
tools to make it easy to look this stuff up and promoting it within the
company but that doesn't sound like the case either.

~~~
ageitgey
> None of the data that was available sounds like sensitive PII so I'm not
> sure why anyone would be surprised by this.

The first sentence says employees would "view the personal contact info and
ride history of the startup’s passengers."

I would consider contact info and exact physical movements to absolutely be
PII and information that is sensitive. If not that, then what?

~~~
mattdodge
Not all PII is inherently "sensitive" though. Meaning not everything that can
be used to actually identify you needs to be encrypted and protected. I don't
know for sure but I don't think names or addresses qualify as that.

~~~
s73ver_
I absolutely would say it does, especially when there's a very good chance
that home and work addresses are part of that list, and someone using a
database like that to spy on an ex and harass and/or assault them is an actual
thing that happens.

------
lakechfoma
This is a shame for obvious reasons, but I don't like the comparison to "god
view". Uber designed a UI around accessing PII for their employees to use
wantonly; Lyft sounds like they need to get their data access under control,
stop making excuses for why anyone "needs" real user data, and actually run
audits.

------
benmmurphy
Requiring a second person to verify/supervise access to confidential data
solves a lot of this problem (at a cost, though). It doesn't prevent a strong
adversary from getting access to data they shouldn't, but it prevents normal
people from abusing power they have stumbled upon. Require two logins plus an
audit and a lot of the abuse goes away. People are less inclined to risk
losing their job because a colleague wants to check up on a girlfriend.
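A minimal sketch of such a two-login-plus-audit gate (all names are hypothetical, and the lookup is a stand-in):

```python
# Access to a sensitive record requires a second employee to co-sign,
# and every grant is written to an audit trail.
class DualApprovalAccess:
    def __init__(self):
        self.audit_trail = []

    def request(self, requester: str, approver: str,
                record_id: str, reason: str):
        if requester == approver:
            raise PermissionError("approver must be a second person")
        self.audit_trail.append({"requester": requester,
                                 "approver": approver,
                                 "record": record_id,
                                 "reason": reason})
        return {"record": record_id}  # stand-in for the real lookup
```

The point is the social friction: abuse now needs an accomplice who is also on the audit trail.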

------
Kiro
And here we go. Hopefully people will finally understand that there's nothing
uniquely evil with Uber and that all players in this sector are rotten.

------
nicky0
A tangent, but why "staffers" and not "staff"?

------
cbsmith
I don't know if I blame the company for this. Anecdotal queries are useful
debugging tools. It makes more sense for employees to just have some sense of
decency.

------
DerBesserWisser
Organisational data protection is no data protection at all.

------
wfbarks
I hate that word "Staffers", immediately makes me think of politics.

~~~
anigbrowl
That's you. It's not particular to any industry.

------
Animats
You used to be able to get info like that by putting a scanner on taxicab
radio channels. Few people bothered.

------
pfarnsworth
This is the biggest point I've been making to all my friends. Whatever they
believe Uber is so evilly doing, it will probably turn out that companies like
Lyft and Didi and Grab, etc. are engaged in the same sorts of actions. Uber
is the whipping boy for ridesharing, but I'm willing to bet that they all do
basically the same thing.

My question for Steve Yegge is:

He seems to somehow believe that Grab has the moral high ground over Uber and
Lyft. What is he going to do when he finds out that Grab behaves in exactly
the same manner as Uber?

