I mean, someone actually sat down and said: OK, when, not if, law enforcement comes to raid one of our offices, how can we best block a lawful search warrant?
It's not that simple.
Team A is given the task of writing software that locks down all computers if some alarm goes off (e.g., a door opened without the alarm being disarmed). At the start, this alarm and all workstations are connected to an on-premises server.
Team B is given the task of moving the server into the cloud (or home office) because HR realized they could automatically monitor which employees enter and exit the office. For security auditing.
The system has been moved to the cloud, and team C is given the task of extending the software to make it possible to block certain employees from logging in, e.g., after they've accumulated a certain number of work hours during a week. To prove that the company complies with work regulations.
Now you have built a system, all under plausible and "just" pretenses, that allows you to lock down an office anytime remotely.
Team D (managers) are given the task of paging a number when a raid happens. Team E is instructed to lock down an office when the number is paged (possibly they're told that it's a system test or something else).
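The teams A through E scenario above sketches an architecture where each piece looks innocent in isolation. A minimal illustration of how those layers compose (all class and method names here are invented for illustration; this is not based on any real system):

```python
# Hypothetical sketch of the incremental lockdown system described above.
# Each method corresponds to one team's "innocent" task; composed together,
# they yield a remotely triggerable office lockdown.

class Workstation:
    def __init__(self, user):
        self.user = user
        self.locked = False

    def lock(self):
        self.locked = True


class LockdownServer:
    """Team A/B: central (later cloud-hosted) server controlling workstations."""

    def __init__(self):
        self.workstations = []
        self.blocked_users = set()  # Team C: per-user login blocks

    def register(self, ws):
        self.workstations.append(ws)

    def alarm_triggered(self):
        # Team A: any alarm event locks every connected workstation.
        for ws in self.workstations:
            ws.lock()

    def block_user(self, user):
        # Team C: e.g., enforce weekly work-hour limits.
        self.blocked_users.add(user)

    def can_log_in(self, user):
        return user not in self.blocked_users

    def remote_lockdown(self):
        # Team E: triggered remotely (e.g., after a page); at this layer it is
        # indistinguishable from a routine alarm test.
        self.alarm_triggered()
```

The point the comment makes is visible in the code: `remote_lockdown` is just `alarm_triggered` with a different caller, so no single team's contribution looks like obstruction tooling.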
If anyone is doing anything unethical here, it's the managers and, possibly, team E (executing the lockdown).
So in short: if you trigger this system in reaction to a police visit you are guilty of obstruction of justice and can face up to 10 years in jail (in the US, my state). There is nothing the company can do to shield you from this.
Anyone who ordered you to do this would be an accessory to obstruction of justice, punishable by up to 10 years in jail (not 100% sure).
Neither of those people can defend themselves by saying that it was in the company's interest, or that they were ordered to do so. This is criminal law, and your claim would be akin to saying that your boss asked you to murder someone. If you do it, it's your ass (of course ordering a murder is a crime too, but it doesn't absolve the murderer). This goes to quite extreme lengths and even applies in cases of coercion (so if you murder someone because your boss kidnapped your daughter, you're still a murderer and can still be convicted as such).
The same would apply here.
One hopes Uber is not just throwing employees under the bus (because the IRS, I guarantee you, would get an obstruction of justice prosecution started if Ripley happened to them; they have sent people to jail for far less).
I agree, but in a different way. Who's to say that it's not governments that are the major criminals here? Many westerners would agree regarding China, North Korea, Iran, and so on. But what if the laws being used against Uber are products of corruption? To stifle competition against the taxi industry, which seems likely.
But whatever. This is some damn impressive OPSEC, no? I mean, serious anarchocapitalist OPSEC. And that just brings a smile to my face.
I like to believe that if there's a legitimate use for a tool, then the tool is fine. What's the difference between a lawful search warrant and robbery? Why do we focus on the text rather than the fact that the leadership created a protocol and a scheme to do this? Why does Travis get a pass?
- Nathaniel Borenstein (paraphrased)
Software engineering doesn't even follow that rule.
TL;DR: police had given the materials taken during the police raids to organized criminals, who used them to fraudulently reclaim $230m of the taxes previously paid by the company.
Surely we're all aware that financial avarice is rampant in developing/emerging markets and, when presented with an unaccommodating adversary, these developing/emerging markets will readily resort to "fishing expeditions" and other avenues of blatant extortion.
When writing end-to-end encrypted chat, client-side encrypted hosting/backup, etc. you ask yourself the same question. And yet I consider those ethical.
This is ridiculous, clearly there's a sliding scale of how much morality versus how much immorality you enable. How far along that scale is a tool whose sole purpose is to block lawful warrants?
From the article:
> The three people with knowledge of the program say they believe Ripley’s use was justified in some cases because police outside the U.S. didn’t always come with warrants or relied on broad orders to conduct fishing expeditions.
Talk about misuse, sure. But there's nothing at all about that tool that restricts it to blocking lawful warrants.
There's no hint in the article that this was the sole purpose. From the article:
> Employees aware of its existence eventually took to calling it Ripley
So it was not codenamed from the start, there's an implication that a relatively small circle of users knew it.
> Workers in Uber’s IT department were soon tasked with creating a system to keep internal records hidden from intruders
Doesn't necessarily seem like something an ethical developer should refuse to do - it could easily be "sold" internally as security software protecting the sensitive information in case of network intrusion.
Which is the wrong approach. We don't know enough details to know what they knew or didn't know going into the project. If Uber at the highest level is okay with deceiving the government, then it's not crazy to think they wouldn't inform their developers about the potentially illegal things they were going to do. And this was all in 2015-2016, when Uber was still a "scrappy startup" in people's eyes and not the monster it's since been exposed to be.
However, we 100% know of people who did know about the project, specifically the higher ups. Our anger should be directed at them because they have no excuse.
Morality and legality are both very subjective, either to the person or to the legal system.
Is the use of state power to enforce taxi monopolies immoral?
Many companies openly discuss their countermeasures to protect their customer and internal data from lawful requests from the Chinese government. I'm not saying it's right or wrong, just that it's not a secret.
And then once the programmer writes it, it's a piece of cake to make it do the same thing in Canada.
This seems like a serious counterpoint to "they were using it against lawful warrants therefore it's bad". Arguing that line and simultaneously excusing defensive measures against China, Turkey, or other governments isn't easy.
The obvious resolution is "ethics aren't equivalent to law", and using this tool to dodge ethical requests is still bad. But that's a far more discretionary standard than a lot of the positions I'm seeing here.
Maybe Uber just wants to ensure that discovery proceeds after careful negotiation.
The difference seems obvious to us, but it wouldn't be to someone outside of tech, and similar justifications will be used by the people building the Uber program.
I suspect the large majority of entities who are stymied by Apple's extensive encryption, but wouldn't be stymied by trivial encryption, are law enforcement and other state entities.
I also suspect that Uber's protocol has been invoked against intruders and/or commercial visitors.
So while it's "Apple good and Uber bad," the difference is gray, not black and white.
That wasn't my takeaway from the Apple/FBI interactions.
They said pretty plainly: "We'll help however we can, but even we can't get into our customers' data. We're also not going to break that so that anyone, including you, is able to."
Isn't that a big difference?
Uber is only protecting its profits. The customers are incidental to its blatant lawbreaking.
Apple is not your friend and does not have your best interests at heart. It is a company like Facebook, Microsoft, Ford, Boeing... They all just want the money. If making hard-to-break encryption sells more units, then that is the only motivation.
Is it 'breaking the law' to turn off computers when you see the police knocking at your door (or in the parking lot) but don't know why? Meaning they haven't presented a warrant or even exchanged words with you yet?
Even in the particular event cited: "about 10 investigators for the Quebec tax authority burst into Uber Technologies Inc.’s office in Montreal"
The act of bursting in but not (yet) saying anything? I would argue that no laws have been broken.
Edit: A similar situation might be if you see flashing lights behind you when you're driving. That doesn't mean the police are there to stop you; it could be someone else, or a mistake. You haven't been officially notified of an infraction.
Breaking into an office is not the same as presenting a warrant and then being denied access to the data detailed in that warrant.
If they had simply kicked off users who were likely cops, that would have been legal. That’s what (some) biker bars do.
But they went beyond that and actively misled such users in a way that wasted their time. That amounts to lying to investigators and is illegal.
You can kick people out of your bar for giving off a cop vibe. You can’t keep them around and feed them fake intel on drug deals so they waste their time.
You can pull your shade down to keep people from looking in your window. You can’t make shadow puppets of a homicide with the intent to get the cop outside to investigate a spurious lead.
Any relevant case law on this? Lying to the police isn't allowed, but lying to some new guy at the bar is legal, regardless of your suspicions of who the new guy is.
Believing that you're merely lying to "the new guy" or "an abusive user" isn't enough, as you note, so that part of the program (send spurious data to likely bots) wasn't illegal.
People who grew up in a situation where the law and the institutions that create and enforce it did them little good and plenty of bad (i.e. urban poor) often have little respect for it (though they may follow it as that's usually the least worst option when you're moving up the economic ladder).
I don't see a problem with software that shuffles data and access around to create cross-jurisdiction paperwork headaches for the police. Forcing them to cross their t's and dot their i's before going in makes it much harder for people with political ambitions to go fishing for stuff to put on their resume. We all know that law enforcement practices saying no via similar "oh you have to go get the info from these people" spaghetti fairly often with FOIA reqs.
Does this change if the person is suspected of having pot or of killing children?
> Does this change if the person is suspected of having pot or of killing children?
The U.S. legal system has a presumption of innocence, so merely being suspected of committing a crime shouldn't affect the legal rights you're entitled to. (Many innocent people become suspects in police investigations.)
It just depends on how it was explained to people: why, and what they needed to do. In the end, these engineers might have had no idea that the thing they produced would be used this way.
Additionally, I know a bunch of people who developed a similar system for small companies in my country. However, the situation there is different: law enforcement institutions are often misused to extort bribes from business owners. That means the less they can find in your office, the lower the chance they'll take your money for nothing.
And is the last line necessary?
That’s how the engineers must have been convinced to take ownership of the requirement.
But was that 'someone' a programmer? I'd say whoever programmed this probably did so on order of their boss. Tech isn't immune to the same pressures that exist in every field. Basically, make the boss happy.
They exist because a corporation is inherently coordinated in a way that gives them power over workers. Unions just allow workers to adopt that same inherent coordination to seize power back. Unions aren't a moral force, they're just a counter-balance.
What about all those high-powered MBAs and other finance types? Most anyone who's received an Ivy League education is going to be in demand. Yet would we really expect more ethical behavior from them? Some would say we would expect worse. Programmers are no different from any other worker.
>I'd also assume that a lawsuit in case they were fired for refusing to write software which the purpose to break the law would go pretty well, but I'm not a lawyer.
Yes but if you filed that lawsuit you wouldn't exactly be 'in high demand' anymore, would you?
An MBA largely has value because it acts like a cult promoting networking between MBAs. It's arguably a great hack to extract value from organizations as opposed to customers.
I don't think this applies to the average MBA (full disclosure, I have an MBA). Most MBAs are just normal people trying to do their jobs better.
Having said that, I guess some grads of the "most elite" business schools do that cultish thing you're talking about, but I would argue it has less to do with having an MBA than being a member of that top school. Those people tend to be the most visible representatives of MBAs while only being a very small portion of all people with MBAs.
Sure, they tend to have high earnings, but this is the same pool that both wants to and could get into a 'top' school, which is a rather biased sample.
What you're saying seems akin to saying that a Stanford MBA is in less demand than the average programmer. (1) I think that's wrong. (2) It's really avoiding the point, which is that being in demand doesn't imply that those individuals will act more ethically.
Various euphemisms exist, like taking a break or the ever popular consulting. But, getting a job in school or being poached while employed is the happy path. Get off that path and it can be very hard to get back.
PS: I don't think demand changes the average much, but it's often outliers that stand up in the first place. So, I do think demand is meaningful as it reduces the burden of ethical behavior which impacts people at the edge.
Your assumption ignores the fact that morals are relative and can be built around individual life experiences. There are plenty of segments of the population who don't trust the government or law enforcement organizations because they've seen or experienced corruption.
Your assumption also ignores the fact that some people could have debt and mouths to feed other than their own. People who aren't sitting on a huge rainy day fund could be risk averse when it comes to career changes.
You can make a group of programmers think they're pirates going against the establishment, and they'll do anything under that idea.
Perhaps the features were used inappropriately, but I would hope Uber can remote wipe a laptop, log users out of company systems, or centrally enforce encryption policies.
This line is just silly:

> Later versions of Ripley gave Uber the ability to selectively provide information to government agencies that searched the company’s foreign offices. At the direction of company lawyers, security engineers could select which information to share with officials who had warrants to access Uber’s systems, the people say
What is the alternative? Giving law enforcement access to all data without any discretion? Querying ride data for 1 person is technically "selectively provide information", but that seems perfectly acceptable.
The majority of people on this site are from countries that work under the rule of the law. I understand that they'd feel uneasy by these subterfuges.
But in places like Russia, China, Belarus, most of Africa and parts of Latin America these resources are more than justified. You should fear the police as much as the bad guys.
Actually, as a non-American, if I were to land at a U.S. airport with a computer or cellphone, I'd also take careful measures to avoid abusive searches, even if they're sanctioned by law.
This wouldn't be news if I told you that Intel makes its engineers encrypt their hard disks, requires passwords on wake, and insists no one leave their computer unlocked.
The news here is that they're using this technology specifically for obstruction of justice.
It's not the technology in question, it's the decision to use it for this specific purpose.
Those don't just automatically go hand in hand. And police accountability is not some kind of team sport.
People here are talking about enforcement of actual laws.
I have a strong dislike for companies that willingly flout rules that are meant to level the playing field. And this is not the same as companies that disrupt (in fact, the taxi industry was badly in need of disruption and was NOT a level playing field before ride-sharing).
Same goes for police. If a policeman has murdered someone, they should be held to trial just like a civilian would. But we all know that's not actually what happens.
It's like when the small-time drug dealer and the top bully at your high school get into a fistfight. The ideal outcome is that they both get a savage beating, and then get expelled.
Also, police is fine here and so are cab drivers.
I can easily see a toolkit that makes sure everything is FDE with a distributed key network, and revocation from anywhere if needed. I also see remote distributed shutdown requests, sealed storage locking, remote device nuking, and plenty of other features if a device falls into the wrong hands.... even if that is local law enforcement.
Part of this also feels like the Neuromancer universe, where companies are the state actors, and the real states have only limited jurisdiction.
(And yes, I would help build a set of tools like this. They have multiple purposes, legal and illegal. Not my fault if someone uses them illegally in a jurisdiction. )
Is there an OSS version of this somewhere: contact a server, the server initiates a lockout on all client devices with pre-arranged credentials, and clients can clean caches, wipe partitions, etc., as required?
Most of the time, your adversary is a thief with low-to-mild tech savvy who'd let the device talk to the net. Then it's game over: the data's gone, and you have a portable spy rig if you play your cards right.
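The protocol described above (client contacts a server, receives a lockdown command with pre-arranged credentials, then wipes caches and locks sessions) can be sketched minimally. The URL, endpoint shape, and token are hypothetical placeholders; a real deployment would need TLS, signed commands, and OS-level wipe/lock hooks:

```python
# Minimal sketch of a polling lockdown client, per the protocol described above.
# All names and the server URL are placeholders, not a real project's API.

import json
import urllib.request

LOCKDOWN_URL = "https://example.invalid/lockdown-status"  # placeholder

def fetch_command(url=LOCKDOWN_URL):
    """Ask the server whether this device should lock down."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)  # e.g. {"action": "lockdown", "token": "..."}

def execute(command, expected_token):
    """Carry out a lockdown only if the pre-arranged token matches."""
    if command.get("action") != "lockdown":
        return "idle"
    if command.get("token") != expected_token:
        return "rejected"
    wipe_caches()
    lock_sessions()
    return "locked"

def wipe_caches():
    # Placeholder: delete browser/app caches, temp files, key material.
    pass

def lock_sessions():
    # Placeholder: log out users / lock the screen via the OS.
    pass
```

The pre-arranged token is the load-bearing part: without command authentication, anyone who can reach the client's endpoint can brick the fleet.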
Isn’t this “tipping off”? Which is a crime in itself.
I'm not a lawyer, but unless there's a gag order, I don't think it's a crime to tell someone you're being raided by police, especially if that someone is your security or legal staff.
Where I grew up, yes, that is a crime, at least in the sense that it's something to be ticketed for; they're not going to haul you to jail.
In the UK, in respect to the Proceeds Of Crime Act 2002. Financial regulations tend to be fairly universal in the West so I don’t doubt Canada has them. Start jailing people who send these messages and it will soon stop.
Now, if they're found guilty, I have no problem "executing" the corporate charter as a death sentence. Just laws need to have teeth, and no piddly $10M fine is a deterrent.
Sounds more like Ripley's Believe It or Not!
I for one believe this can be a good thing, since humans are corruptible but computers are strictly deterministic.
Humans, on the other hand, cannot be trusted in the way that a computer can.
These corporations care mainly about profit, so I don't think whatever rules they come up with will be good for society.
Maybe in theory, but I haven't found my computers to behave that way.