Don't check secrets into VCS, folks!
One snippet of the email the article didn't mention was that Sullivan's firing happened pretty much right after Dara learned of the breach and an investigation was conducted. It definitely inspires more confidence in leadership seeing that the CEO will not tolerate unethical behavior.
Also, Uber has been hiring a lot of new people - the ratio of new people vs old timers is really high. I'm obviously just one anecdata point, but I believe new hires (and a lot of old timers) want Uber to be an ethical company, and many have joined the company specifically to tackle that challenge. One great example that comes to mind was when one board member made a sexist remark at an all-hands meeting a few months ago and, by the end of that same day, Liane Hornsey (who had just joined as the new head of HR) had him give up his seat.
There's a big push towards trying to make things right, with the Holder report, the 180 Days of Change campaign, the implementation of new training courses, an anonymous complaint hotline for employees, etc. And the unspoken message right now is pretty clear: inappropriate conduct _will_ get you fired, even if you are the head of your org.
Obviously there's still a lot of work to be done, but I think we're at least on the right track now.
> it would be a shame for a culture shift to coincide with the realization of one of them
I think everyone at Uber has at least some idea about the P&L situation, but there's no doubt in people's minds that we need to drop the go-fast-and-dubiously culture and embrace a do-things-properly culture. If anything, I think it's more likely that a major crisis would continue to drive home that idea.
Hopefully he will also slowly eradicate the existing unethical behaviour.
... crap. My kids won't be buying a GM car.
I probably should have spoonfed the readers more. They grew up in a world that doesn't need critical thinking anymore so it's probably too much to ask for their brains to activate while reading on a website and have them put distinct ideas together to form a grander one.
Must. Downvote. Comments full of facts but from people I dislike. Must.... errooorrrrroooorrrrrr. 505.
It's okay. Every time I see downvotes here, I know I said something great but I just pissed someone in power off. I'm used to being a minority oppressed by a majority in power. It's no big deal. The system just builds people like that these days.
Either make a valid point or let your comments stand. Leave the /r/iamverysmart tantrums at the door.
It's always a person, it's never an institution or organisation, never a boring measure like bureaucratic oversight or well-made laws.
Good luck and I hope you're doing it for the money, cause nobody should buy the "Uber is an ethical company" bs.
I don't have any context on why someone would have put production secrets in a GH repo. If it had happened in my team, I would definitely have sounded the alarm at code review.
Yeah, I'm totally with you there. Not cool :(
We're an organization of about 200 engineers across various products, with 1000+ GitHub repos and 10 or so different CI systems. We enforce 2FA at GitHub, and I can still easily see how someone could gain access to source code with secrets in it.
Wait, what? That's 5+ repos per engineer. What on earth would warrant that level of granularity? I've only worked once in my career in a place that used more than 2-3 repositories total, and that was a "MegaTechGiant" with thousands of engineers.
- 1 repo for the frontend
- 1 for each api
- 1 for the infrastructure terraform scripts
It's good for CI / CD and general code base organization. Also easier to track changes and handle security. You give devs access only to the repos they need to do their job.
Our team has a product with multiple integrations and internal apis, so we easily have 40+ repos.
I'm surprised you're being so heavily downvoted for your question. Engineering teams (and software companies) come in all shapes and sizes. It is absolutely reasonable for even an experienced engineer to have only worked at companies with a handful of repos.
Rather than downvoting, it would have been helpful to explain why your company has opted for such granularity (perhaps engineers or teams have a high level of autonomy, or your software is highly componentised and built from a great many separately managed parts).
I personally don't have a strong opinion either way - they both have tradeoffs.
I can see that with a company that has grown around GitHub since day 1, especially during early startup stages with a variety of contributors but no formalised "organization".
> Kalanick, Uber’s co-founder and former CEO, learned of the hack in November 2016, a month after it took place, the company said.
I don't know if they were using GHE. If they were, at the time it did not come with a good way for them to enforce 2FA for users.
Well, sort of - at the application level, that's true, but GHE is typically run behind a VPN. Certainly that should be the case for a company the size of Uber.
Even before GHE added 2FA, it shouldn't have been possible for a leaked set of login credentials to be used to access GHE, without some other sort of compromise (VPN cert, physical compromise of hardware, etc.).
Lateral movement by an attacker is a real thing. And while credential reuse is something most security-focused web companies are trying to mitigate, a push for SSO-like account management seemingly undoes most of that effort inside the network if not done properly (specifically, without auditing and monitoring of behavior).
This is why 2FA is important! I worked for a company that had a very similar setup: I essentially had a single "LDAP" password. But: everything web-browser-based went through a single sign-on site, and it required 2FA (and so, you were never entering your password into even random internal applications: there was exactly one page where you should log in). Terminal stuff had a similar flow that also required 2FA (e.g., for SSH). As a user, the experience was not painful at all.
It does seem like, however, from an operations standpoint, getting such a setup in the first place is not trivial.
They don't use GHE, they use Phabricator.
I don't know how to feel knowing that there is even one software-focused company out there that doesn't enforce 2fa on its github accounts. Like... how?! Why?!
Just one of the many ways to bypass it in this case: hack a developer machine and look at the local checkout.
Compromising an individual dev's machine is a very different matter from obtaining the credentials to access code on GitHub - 2FA at least offers some degree of protection in the latter scenario. The scope for attack is extremely different.
They run browsers, communication tools, all sort of product experiments and testbeds, and they even connect to random airport/hotel wifi.
Attack a laptop and all software and hardware 2FA tokens are useless. A backdoor can sit around and wait for the user to press the button.
There exist 2FA protocols that permit tying the 2FA challenge to a particular context: you can't just take the response from the 2FA hardware and use it anywhere. In this regard, the malware doesn't get anything more than what they already have, and the 2FA still adds protection: if the malware is able to compromise your password (e.g., through keylogging) it doesn't immediately get access to everything you have access to. Now, of course, if you 2FA for some resource, then yes, at that point, you're probably doomed, but I don't believe that gets the malware anything new (e.g., once the auth is complete, if that results in a "user is logged in" cookie, the malware could just read that, and go to town.)
Compromise of a local machine is definitely bad, and not what you want, but 2FA tokens are not useless, even in that situation.
If you have an ultra-secure door, the thieves will just enter through your regular window.
Sure, there are only 13 projects on https://uber.github.io/, but there are 169 on https://github.com/uber, and it only takes a short while to scan for access keys. There are plenty of open tools that will scan github for keys.
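As a sketch of how cheap such a scan is: the regex below is the widely published AWS access key ID pattern, while the repo-walking and GitHub API parts are omitted. The helper name is made up for illustration, not any particular tool's API:

```python
import re

# Widely published pattern for AWS access key IDs ("AKIA..." / "ASIA...").
# Real scanners pair patterns like this with entropy checks to cut false positives.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list[str]:
    """Return every substring of `text` that looks like an AWS access key ID."""
    return AWS_KEY_ID.findall(text)

# Example using AWS's own documentation placeholder key:
print(find_leaked_keys("aws_access_key_id = AKIAIOSFODNN7EXAMPLE"))
# -> ['AKIAIOSFODNN7EXAMPLE']
```

Run that over every file of every public repo in an org and you have essentially what the open scanning tools do.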
This may not have been targeted at Uber but a net for all of github with Uber being just one company that was hit up for cash. Unless you're saying that you know the motivations of the attackers.
Do you give every employee a mobile phone, or do you ask your employees to use their own personal phones?
Asking them to use their personal phones seems like a very bad solution. Many software companies do not routinely give developers mobile phones...
This is incorrect.
You only need the ability to generate TOTP or U2F tokens. This is often done using a smartphone app, but can also be done by a desktop app like 1Password or a hardware device like a Yubikey:
It's things like that that make me wonder why TOTP tokens are supposed to be conceptually different from passwords. A TOTP scheme ultimately reduces to knowing a shared master secret, and nothing else.
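For what it's worth, that reduction is easy to see in code. A minimal RFC 6238 sketch using only the standard library (base32 decoding of the provisioning secret and clock-drift windows are omitted): the entire scheme is an HMAC over a time-step counter with a shared secret.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP with the counter derived from wall-clock time."""
    return hotp(secret, unix_time // step, digits)

# RFC test-vector secret; at t=59 the 6-digit code is "287082".
print(totp(b"12345678901234567890", 59))
```

The one real difference from a password is that the secret itself never crosses the wire at login time; only short-lived codes derived from it do.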
Why? You're not any less secure by using a personal phone. What are the odds that an employee is going to be phished and have their phone compromised by the same entity?
I'm already answering emails out of office hours, which is for my employer's benefit, and they want to functionally own my phone because of it?
For companies that don't do that Github also offers the option of FIDO U2F compatible keys.
I've never once worked in a company that permitted source code to leave the company network.
I think you’ve misinterpreted people’s reactions. It’s not at all controversial to use other companies’ services for your most sensitive assets; it’s your opinion that appears controversial to them. If you’re in control of your own servers, what remains is to trust GitHub Enterprise not to literally phone home your source code or to enable remote code execution on your own server. There are myriad information security policies and compliance methodologies for compartmentalizing, quantifying, and sharing that risk.
For what it’s worth, having personally performed security assessments for over 50 different companies across the gamut of size/maturity, nearly all of them use a centralized VCS hosted or produced by GitHub or Bitbucket (and nowadays, occasionally GitLab too).
HN users tend toward a very pro-SaaS stance.
If that were the case, there would be no authentication whatsoever to access the closed-source site; the hacker would have just needed to guess the right url.
Edit: I mean it would surprise me if it wasn't recommended practice, but it would also surprise me if it was somehow strictly enforced.
The attacker can submit your info to GitHub the moment you submit to the malicious site. You receive the token via SMS as expected, enter it on the second page of the malicious site, granting them access.
I'm intrigued. Why would that be a higher-value target?
I am thinking now would be a good time to port it to working with webhooks as well.
The tool would have blocked the aws credentials from being checked in: https://github.com/opnfv/releng-anteater/blob/master/master_...
Or you can use something like KeePass to store a database in a shared location if you don't trust the SaaS offerings.
If you are looking at storing credentials for automation purposes and don't have a secret store built in, you could look at something like HashiCorp Vault to provide this for you.
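Even without a dedicated secret store, the minimal step up from hardcoding is to pull credentials from the environment and fail fast when they're missing. A sketch (the `get_secret` helper name is made up, not any library's API):

```python
import os

def get_secret(name: str) -> str:
    """Read a credential from the environment; refuse to fall back to a default."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# In real use the deploy tooling (or a Vault agent) injects the variable;
# it's simulated here so the sketch is self-contained.
os.environ["DB_PASSWORD"] = "example-only"
print(get_secret("DB_PASSWORD"))  # -> example-only
```

The point is that the repository only ever contains the variable name, never the value.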
The user in question has some specific interest in editing LogMeIn, parent of LastPass, pages: https://en.wikipedia.org/w/index.php?limit=50&title=Special%...
Sometimes I just go on google hangouts and share my screen if I'm feeling lazy.
1 - https://www.envkey.com
Removing the secrets from the repository is nice to have, but not that necessary - what is mandatory is to ensure that the compromised secrets are no longer useful, since they aren't secret any more and won't be ever again.
> Warning: Once you have pushed a commit to GitHub, you should consider any data it contains to be compromised. If you committed a password, change it! If you committed a key, generate a new one.
Is a good argument for why you shouldn't let users erase this data from history: it's already out there, so no matter how painful or convoluted your process for regenerating auth credentials is, you need to do it if you've published them to your SCM. If the process is painful, you might want to simplify it, because you'll probably need it again sometime in the future... yes, even you large-corporation workers who have no control over credential regeneration; an arduous process leads to credential sharing between projects, which is another horrible thing.
There are cases - such as complying with court orders - where removing the data is appropriate (even if a bit futile in the long run).
I suppose? But at this point they have your code base. You are so owned at that point.
I don't think either of those companies would cease to exist if their code bases leaked online today. Sure, someone might get something to build, but there are surely a lot of things around the code bases to support all of this, which means the leaked code would mostly serve as a study of the software in general (and for finding holes, obviously).
GitHub is a bit of an unfair comparison, as their business is literally to keep your code private, so if it leaks then of course it would be a hard hit. For the general company, I think leaking access credentials is a much bigger (but easier to fix) problem than leaking the source code itself.
A serious Photoshop clone that could match PS feature for feature would wipe out Adobe; people cannot wait to get rid of them. 25% of MS revenues come directly from Office and another 25% from Windows or other commercial offerings that are basically driven by Office, so yeah, MS would survive a working Office clone, but they would be deeply wounded; they pulled all the dirty tricks in the book to keep competitors from integrating seamlessly... having the real code responsible for their formats available in the open would hurt them massively.
These companies are as big as they are because they made the right moves at the right time, and now they have spent so many man-decades on their codebases that nobody can realistically hope to catch up starting from scratch; but having a good look at those codebases would likely kickstart scores of competitors with very good chances of replacing them in a very short time.
> For the general company, I think leaking access credentials is a much bigger (but easier to fix) problem than leaking the source code itself.
Credentials are a means to an end: protecting something. If you are Ashley Madison, your valuable IP is your database of users and their preferences; but if you are Microsoft or Adobe, what credentials are protecting is your source code. Adobe survived their user credentials being leaked, like so many other companies. They would have been hurt much more had the entire PS codebase leaked.
Just open a shop in China and obfuscate a bit. Job done.
I hate these types of arguments. Yeah no one said that ever.
Losing your code base is terrible. I view it as losing a journal: what your company tries, tests you run, funny comments, funny mistakes. I mean, attackers could post it on the net, blackmail team members, impersonate team members, mine it for leaks, sell it, push to prod from compromised accounts or CI systems -- seems bad to me. Sure, don't have AWS keys in there.
Also "pushes to prod from compromised accounts, CI systems" seems more related to access keys and account security rather than the actual code base.
But hey, in the end I'm no security expert so what do I know.
Maybe pushing something that was labeled as a "security patch" but was actually a disguised vulnerability? I could see not even checking into that, and just downloading it. But I'm on a small team. Do big companies have procedures to protect against this?
Quick google yielded this https://github.com/awslabs/git-secrets
If someone gains access to a system that uses the credentials, then there is, in principle, no difference between puppeteering that system versus stealing its credentials.
OK, how do you handle the bootstrap problem?
The "I didn't know, I just took a vast salary to play golf" argument should not be any kind of defence. If there is the real prospect of going to jail, golfers will resign, those who take the job would actually take an interest and have the ability to do so.
An idea whose time has come.
No sensible person would sign up for the CSO position if they risked jail time when their company gets hacked. You can't really control it. A random engineer could make a mistake that gets hackers a step closer. Or it could be a zero-day vulnerability that nobody knows how to protect against.
There are millions of motivated adversaries out there and a finite number of employees at your company to outsmart them. It's a game you can't win. The larger your company becomes, the broader your attack surface becomes, and the higher value a target you become.
You just have to hope that when you get hacked, it is a "forgiveable" hack like a zero-day or highly targeted attack.
If CSO's are to be personally accountable for the malicious actions of others, it needs to be due to clear negligence on their part and the responsibilities need to be clearly defined.
We're talking about a cover-up. If you cover up the fact that someone stole private data belonging to people you took responsibility for, if you try to pretend it didn't happen because you might get away with it and then claim you didn't know when it comes out? Then yes, absolutely, you deserve to risk jail time for that. As does your board of directors.
CSOs, senior management, boards of directors should be personally responsible for their own actions. They need to have something at stake that they really dread losing when making the decision "perhaps we can get away with this?"
And how do you make that scale? If I miss a semicolon and leak 5 people's data, then I'd hardly get any jail time. If I miss a semicolon and leak 150,000,000 people's data, I will die in prison. In both scenarios, I made the same error, but the outcomes were insanely different!
So how does one draw the lines between bad luck, reasonable security problems, everyday poor performance, civil liability, and criminal negligence?
> A random engineer could make a mistake that gets hackers a step closer
That could be prevented, to a large extent, with much tighter controls. Of course, those controls would greatly increase the cost of operations and other things.
Is it possible we're all accustomed to the wrong model, that our standard of IT security is like the standard of car safety in the early auto industry (and maybe until the 1970s) - far too lenient? Maybe we should be facing the potential fact that the normal cost of IT should include those controls and other security expenses.
By analyzing how they prepared for the inevitable attack (mitigation), as well as how they respond to it after the fact.
Essentially we need a price tag on personal data. Let's say $1 for each email and password leaked to an unknown number of entities. That would be a $114M incentive for Uber to keep their data secure.
It's a shame this happened pre-GDPR because that has steep fines - 4% of worldwide revenue - which would be north of $260M going off their 2015 numbers. And that's assuming they get off with a single fine.
As CEO, former engineer and customer I really hope this gets some serious traction. IMHO if you are making money from customers, it should be mandatory to follow compliance regulations and protect all data.
Sure they can. It is called "insurance". Sort of like malpractice. CSO wants to get paid millions of dollars? Excellent, either be personally on the hook or have an insurance company that would be willing to underwrite your method of dealing with it, be that having your own crack team of people who get to oversee everything, or relying on Jr system admins from your company or whatever else.
I think that's a very sad commentary on how little your company values security.
OP's a realist. His perspective has nothing to do with how a company values security.
No one in security assumes they won't get hacked; we assume we will be compromised. Our metrics aren't measured on if - our success metrics are:
* How quickly we find out
* How much damage we can mitigate
* How quickly we mitigate the risks and controls for a given vulnerability, and
* How we incorporate our reporting to spot trends and find the event quicker next time
Now, we report on many compromises. I'm not talking just about data breaches here; there's a whole spectrum of compromises that we manage and mitigate.
I don't know anyone who operates in Security who has a different mindset to OP.
Of course it does. The stick is not big enough, so CSOs just do not care enough. Increase the size of the stick and it would split the group of CSOs into two:
1. Those like OP, who will run away saying "I'm not going to put myself in the line of fire if crap gets hacked". We need broomsticks for those.
2. The ones that will say "OK, two years", do their best and probably succeed.
This has nothing to do with not valuing security, it's just about being realistic. Can you guarantee that your company is hacker-proof? No? Then we're on the same page.
It's great that you take all those steps and investment. The fact that you still don't believe you can control whether or not you get hacked is a sad reflection of modern software practices, which are akin to throwing together a house out of plywood, newspaper, and gasoline, then asking the security team to place fire extinguishers.
I believe it's more like getting into a car accident. You can be the best driver in the world, you can always drive under the speed limit and take all precautions but you are bound to be in an accident at one point or another.
You may go decades without incident, but it's almost a certainty that you will find yourself in a situation where another driver collides with you in a way that couldn't have been foreseen. This driver could have hit you accidentally or on purpose; it doesn't matter. You could be teaching another how to drive during the incident, you could have had a momentary lapse in judgment... it doesn't matter. What matters is how you handle the situation after the fact and the steps you took to mitigate the damage.
If you spend enough time on the road the likelihood of an incident approaches 100%.
If you don't assume that you will be hacked, then you won't design in the auditing, alerting and containment that will tell you when you've been hacked, let you determine what data was compromised, and prevent the attacker from having free rein over all of your systems.
Otherwise, you'll be like a former coworker who refused to secure internal systems because "We paid a lot of money for our firewall, it's going to block any hackers". It took me less than 30 minutes on my first day to hack the login passwords of senior executives, because they logged into a non-SSL reporting server (and I did it through a simple MAC overflow attack on a network switch, from a network port in the break room).
I see a big difference between preparing for the event of a hack, and believing that a hack is inevitable no matter what practices are in place.
CSO: We have airtight security, we cannot get hacked.
CSO: Please approve and fund this plan to handle a breach in case we are hacked.
CEO: But you just told me we can't get hacked.
CSO: Right, it's impossible.
CEO: So why do we need to spend money preparing for it?
CSO: Just in case.
CEO: Just in case what? You just told me it can't happen.
That seems a little like asking for money to prepare for an alien invasion or a zombie attack.
Also, personal liability for board members and managers is something that is increasingly pursued by shareholders and creditors (for the financial liability) and prosecutors (for the criminal liability), compared to how it used to be.
I don't think it matters much, though.
You can't just give jail time for data breaches. It would encourage cover-ups and scapegoats. Also, never underestimate just how disorganised large organisations are; incompetence at addressing issues is systemic and goes far beyond data protection. What seems like malice is sometimes just plain stupidity.
It has to be backed by some sort of regulatory framework, just like a fire code or employment rights. But crafted in a way that it doesn't end up like PCI, ratings agencies or financial auditors, i.e. creating an industry that sells compliance and not actual security.
Perhaps something light, like mandatory minimum bug-bounty schemes for all companies, where fines (or more) are imposed for not addressing issues and an independent regulator works with larger companies to resolve issues (or penalises the company severely if they deliberately won't).
The reasonable company director should have known X and when found out was bound to report it. Person Y did not report it, should have known as it was their job to know and there aren't extenuating circumstances. Guilty. 6 months. Next case.
"I don't know anything about this company I accept 7 figure sums to oversee as a director." Should never be any kind of legal defence. If senior management and directors have something personally at risk you'll see vastly improved behavior. Right now we're selecting for the opposite and seeing the inevitable results.
There is a story like this about directors and management cover ups every single day
Who will fill the void? People who are overconfident and people who are not scared of going to jail.
It's much better to impose financial penalties. Should the directors or the shareholders pay? Let them figure it out between themselves!
We do this for CFOs, Chief Compliance Officers and many other roles for many other things.
For example, the Target credit card breach occurred because malware intercepted the credit card information at the Point of Sale appliances before the information was encrypted and transmitted.
Prison time seems extreme, but Congress should absolutely establish statutory fines (for companies) for breaches of PII. Then any company officer can save the company money simply by spending more on prevention, because it will lower breach insurance premiums.
Well, that's already happening without jail time, so maybe give it a whirl. Let's get real here: the idea of suits going to jail is just scary to some people, but it'll be fine.
This happened more than a year ago, and only now are they planning on offering identity theft protection? That's ridiculous.
"Sorry we left uranium in your house a year ago and didn't bother telling you. Here's a coupon for free cancer screenings."
I don't think the average Joe is up to date with this news, or even cares about it.
Just like we don't know anything about the CEO of the company making your detergents, the CEO of the brand of clothes you purchase, the CEO of the maker of your oven at home... Not knowing about CEOs is rather the norm, not the exception, and ultimately if the product/service is good, the CEO does not matter for most people, or they are only going to care about it in passing and then return to their old habits. GoDaddy is still in business.
If everyone in the country was told "write a check to GM for $50 or go to jail," and conservative media wasn't berating Tesla/Musk, public opinion would be a lot different... Take it all with some healthy skepticism.
Personally, I like him quite a bit, but to be fair I know that outside of my own echo chamber of my news and social media feeds, that there are a lot of people who don't like him, and where that negativity is coming from.
HN is a community. If users don't have some consistent identity for others to relate to, we may as well have no usernames and no community at all. That would be a different kind of forum.
Anonymity is fine, and throwaways for a specific purpose are ok. Just not routinely.
There are a couple different things at play.
First, one plank in their infowar strategy is to combat anything that even indirectly propagates any understanding of climate change among the proles. They take positions even against more-efficient-than-incandescent light bulbs, so this line of attack certainly includes targeting electric cars and solar. Musk is obviously a celebrity of sorts in these areas. Any government help to build solar plants or subsidize non-fossil-fuel alternatives (e.g. electric vehicles) is portrayed as deeply corrupt, a betrayal of American values and working families, etc. Ergo, Musk is bad.
Second, Elon Musk and John McCain have a strong association. Musk has supported McCain and in turn McCain has supported Musk and his business ventures. This is the kind of invest-in-politicians-who-can-help-you relationship that is pretty much a fundamental building block of how the American government works, but it always looks bad to somebody inclined to see it that way. (It's probably also objectively bad that this is how the system works, but anyway it is.) So I think a lot of conservative media that doesn't like McCain (because he is too "establishment" or whatever the reason) have repeatedly brought Musk into it, implying corruption on the part of McCain to help Musk use Russian rocket engines at SpaceX, for example. McCain is bad, ergo his sleazy buddy Musk is also bad.
Secondly, SpaceX has been spending millions on political lobbying, and McCain's political campaign (and his own McCain Institute) is among the many that have benefited from such largesse from Musk. Most Americans don't see this kind of lobbying activity, with millions of dollars spent on politicians, as a "fundamental building block" of a well-functioning gov't, but as a corrosive force that serves the interests of a few at the expense of the majority, however well-meaning in the eyes of Musk supporters. I personally don't see any problem with organizing an interest group to better represent their views -- or lobbyists -- but when it involves so much money and the final outcome ends in lopsided legislation favoring one particular individual or company over others, it's probably a good time to question their "invest-in-politicians-who-can-help-you" relationship.
Ideologically, McCain's views are aligned with those of the "neoconservative" wing of the republican party -- he's mostly known for aggressive foreign policies, American democracy everywhere, and subsequently pro-Military Industry Complex (MIC) which inevitably all leads to a bigger gov't. While most conservatives are also for strong national defense, not everyone is necessarily on board with permanent warfare and welfare (and police) state and that's why "other" conservatives are so annoyed with McCain.
So, once you put these together, it's not too difficult to see why the holy alliance between Musk and McCain is criticized by those on the right. They are not necessarily grounded in "anti-facts" or alt-right views, as you mischaracterized here. It's just too bad that your pathetic, uninformed comment had to start with a poisoning-the-well logical fallacy.
Although I do think there tends to be a broader overlap on the "conservative" side, for reasons for that are complicated and don't necessarily have a lot to do with being conservative, the "liberal" side does indeed have its vaccine deniers, MSG paranoiacs, and so on. (However, they don't have TV networks dedicated to these things, available in every hotel and airport in the country...)
I try to judge media organizations (and people) based on their commitment to truth and openness to empirical evidence and new information. Their political leanings may be interesting, but are a (much) less significant data point.
So very recently, and unless you've been to college in those years, you won't be aware of it.
I forget where he said it, or I would link to it. It might have been in a recent conversation he had with Jordan Peterson.
Also, Rush Limbaugh hates Musk (he has the #1 talk show since 1987... Since record-keeping began, so a lot of people are exposed to that negativity)
Funny, they opened a satellite office right near my apartment and I'd considered applying. Then I heard pretty disconcerting stuff about the environment, and now this. Dodged a bullet, I guess.
I would never work as an engineer for a company like that. How can I trust that it will honor any deal I make and not screw me? I have to think about that with every company, but this one in particular can't even spell "integrity".
If the company views engineers as better than other people and someone they wouldn't want to screw with, I'm not working there either on principle.
I expect blowback. I expect negative news. They essentially pulled it off by looking at every day as combat where fighting dirty was rewarded.
The biggest part of the comment was seeing the taxi driver protest in Seattle when I was there on business. My hotel room window had a view of city hall and I watched a bunch of cabs with a news crew pull up for about 45 seconds and start honking their horns. Then they all left and went back to taking fares.
When I watched the local news that night, the broadcast made it look as if they'd blockaded city hall for the day in protest.
It's things like that that give me pause when I see bad press around a company that has upset entrenched interests.
Who are getting screwed.
Having to register for that is quite surreal.
To be fair, they weren't spying on their customers. They were fingerprinting phones which is against the Apple ToS.
"As an online discussion grows longer, the probability of a comparison involving Trump approaches 1"
User starik36's comment was in a downvoted state, which is what prompted me to write that comment. I didn't think what he said deserved a downvote, because from general observation what he stated seems true.
Looks like they fired two people over this, pretty immediately at that. Uncertain if the new CEO was aware of the cover-up until (presumably) contacted for comment by a news org.
The fact that the cover-up persisted this long is bad, but on the other hand the Kalanick-era Uber probably would've gone to war with the journalists breaking the stories rather than admit fault, so there's that.
Edit: allow me to replace the word "found" with "created." I was just using a figure of speech.
I can think of a few: https://en.wikipedia.org/wiki/United_States_presidential_ele...
If you think the current sitting POTUS is an innocent victim of politics, then I have a bridge to sell you. Uber has used similar PR tactics in the past to deflect/detract from their actions.
I think the point that the great great? grandparent top post was making is that whoever is in charge of dealing with the media at Uber is doing a horrible job.
Also, I am sad that we don't talk about the policies and instead focus on personal flaws. I think there would be a chance of compromise if we debated policy. I mean, if we talk about just personality, what makes our Honorable Governor of New Jersey eligible for office? Not a fan of 45, but really I think politics has become too polarized.