I just can’t work up the ability to sympathize with Cellebrite. The law may have something to say about Moxie’s writing, but in my opinion he has the clear ethical high ground in this argument.
> If you work at Cellebrite, on the other hand: get down off your high horse, stop it with the “we’re the good guys” shtick, quit selling to authoritarian governments, and for god’s sake, fix your shit.
> Giving defense attorneys more ammo to push back harder against the use of Cellebrite devices against their clients is Good and Right and Just. The general point that Moxie made — Cellebrite’s tools are buggy A.F. and can be exploited in ways that undermine the reliability of their reports and extractions as evidence, which is the entire point of their existence — is actually more important than the specifics of this exploit
You're kind of missing the point of the article. The article agrees with you that Signal's hack was a net positive and Cellebrite is not a good company.
I also disagree with the notion that it’s good that Cellebrite exists because without them we’d have stronger anti-encryption laws. That’s hypothetical and all we know is what we have today. I’m not thrilled that someone is peeing on my basement carpet instead of peeing in my living room; I’d rather not have someone peeing on any of my rugs.
- The threat is likelier to annoy judges than garner sympathy
- Following through on it is probably illegal
- Worse, following through could put their users in legal (and/or physical) jeopardy
- More generally, Signal should consult with lawyers before doing things like this
This bit is very relevant, and I agree. It’s ethically dubious to put unknowing users at risk in that way, whether from democratic or authoritarian governments.
The other points, though, very much assume that the goal is to change the outcomes of American court processes. The focus is almost entirely on what a judge in the US would think, on evidence rules in American courts, etc. Maybe American law and law enforcement aren’t as relevant as an American lawyer thinks they are, and Signal is betting that the PR and politics game is more important.
If they did make that bet, which I think is likely, then the article has some valid arguments at the end – this hack (or non-hack) may lead politicians to introduce stronger laws – and _that’s_ where the focus should be. Is this a good move, politically?
And, to reiterate from the beginning: Does this put end users in danger? If it does, it’s likely not worth the price even if there was some political victory in the end.
I'm curious how? If they announce publicly that they will place files on devices that may exploit a publicly announced vulnerability in Cellebrite, then it's Cellebrite's prerogative to fix the vulnerability. If they knowingly ignore a publicly disclosed risk, then they have only themselves to blame.
>Uh, is that legal?
>No, intentionally spoiling evidence — or “spoliating,” to use the legal term — is definitely not legal.
>Neither is hacking somebody’s computer, which is what Signal’s blog post is saying a “real exploit payload” could do. It said, “a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.” All of those things are a violation of the federal anti-hacking law known as the Computer Fraud and Abuse Act, or CFAA, and probably also of many state-law versions of the CFAA. (If the computer belongs to a federal law enforcement agency, it’s definitely a CFAA violation. If it’s a state, local, or tribal government law enforcement agency, then, because of how the CFAA defines “protected computers” covered by the Act, it might depend on whether the Windows machine that’s used for Cellebrite extractions is connected to the internet or not. That machine should be segmented apart from the rest of the police department’s network, but if it has an internet connection, the CFAA applies. And even if it doesn’t, I bet there are other ways of easily satisfying the “protected computer” definition.)
What's the principle being applied here? How would the same principle be applied in the case of digital property?
Difficult philosophical questions arise with the phrases “knowingly causes” and “intentionally causes damage,” but a jury can use common sense to resolve them on the evidence in a particular case. The same issues arise when trying to determine intent and causation when someone fires a gun or carries a bag full of white powder. The details matter.
The author also misses the point of the "show me yours, I'll show you mine". Cellebrite is, from what I understand, knowingly leaving everyone's machine vulnerable in order to conduct their business.
This is something that _should_ be illegal. Not disclosing (and actively benefiting from) vulnerabilities in other people's products is what we should have laws against.
> An attacker need not directly send the required transmission to the victim computer in order to violate this statute. In one case, a defendant inserted malicious code into a software program he wrote to run on his employer's computer network. United States v. Sullivan, 40 Fed. Appx. 740 (4th Cir. 2002) (unpublished) . After lying dormant for four months, the malicious code activated and downloaded certain other malicious code to several hundred employee handheld computers, making them unusable. Id. at 741. The court held that the defendant knowingly caused transmission of code in violation of the statute. Id. at 743.
The CFAA is notoriously broad, which is probably why Pfefferkorn didn’t feel the need to undertake a detailed analysis of exactly how it prohibits the deployment of a targeted exploit which would “undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.”
Say I have a USB stick with important data on it. It has a warning label that says "if you plug this in, it may destroy your computer unless you have the correct password file." If you plug it in (and your OS is vulnerable), it wipes all drives it can find (including itself) unless it finds a particular password file.
Is this USB stick illegal?
Signal made it very, very clear that scanning their users' devices with Cellebrite tools might trigger some behavior. If you still go ahead and use the tool, can Signal be blamed, despite having warned you that this will occur?
I find all of this far from obvious. What Signal did is purely defensive _and_ clearly labeled. It's very unlike the examples cited so far.
(And after all we are talking about a scenario where the cops can still get the evidence simply by taking screenshots of the open app, so they are not even preventing cops from getting to the evidence, merely making it more inconvenient.)
What worries me most about this disclosure is the potential for abuse inside law enforcement agencies and departments. What if an evidence-gathering machine is deliberately not patched against this exploit?
If I sold software like Cellebrite, I would have at least attempted to make license revocation enforceable for any out-of-date installation.
What really confuses me is why vendors like Cellebrite don't have a commercial case for at least some level of independent testing of their wares, in order to provide a limited warranty for their operation and results.
Until now I actually thought it was necessary to obtain such independent testing, and to make appropriate assurances to LEOs, in order to legally sell such software in the first place.
The article concludes that the uneasy status quo permits all parties to do their best work respectively. Unmentioned is that this at least pays lip service to the American Way of meritocracy and endeavour, and to the ideal ultimate effect of fairness to all.
This is probably my naivety again, but why can't laws prohibiting the exploitation of 0-days work to the advantage of the law, society, and commerce alike?
If zero-day exploits had to be disclosed to a central independent organisation (comprised of members from LEO and civilian life, working on a mandatory equal-resources footing to enable citizen participants without any need for corporate sponsorship), and there were a defined window permitting the use of exploits that ended with a mandatory tested patch release and public announcement, I don't see how it would be unfair or unreasonable for anyone on either side of the law. I would even consider it not a bad thing for vulns identified through software engineering, but not discovered publicly, to be notified to federal agencies when identified. I actually think we should do this already, for the protection of our diplomats and overseas representatives.
Since we already have the instrumentation to selectively patch individual devices in widespread use, why can't agencies request an exception for devices under surveillance, to enable the security of the general public?
I realise this doesn't work for covert and unlawful intercepts. And there do exist reasons for covert intercepts to be carried out. However every advanced society should be pushing such incidents to the margins with every available force possible.
Security experts are worried about this argument because the global security of the USA is increasingly and credibly threatened. Show me how a well-designed infrastructure for protecting the innocent from unwarranted invasion, as I've outlined here, could possibly be a negative for law enforcement and national security, and I'll eat my hat: the suggestions I'm making entirely reinforce the accessibility of intercept capabilities for lawful deployment, and instrumentation for device-specific code patching only enhances the potential for positive acquisition of intelligence on criminals and foreign agents. The USA should be peeling back the layers of the baseband implementations of 5G and immediately ordering the decommissioning of all 2G installations, which are trivial to abuse.
The faster the USA creates a viable OSS 5G RAN code base, the faster potentially hostile foreign competition is disabled in the race for budget handsets and deployment.
The number of people who have any interest in this field is small enough for background checks to not be prohibitive to open source goals. However serious consideration needs to be given to any blanket release to higher education institutions because the number of overseas students is simply too great to rule out hostile intentions.
Along similar lines, we need to undo academic publishing's hold on legitimate research interests, because only hostile nations are served by keeping publicly funded research from the public.
I mention that last point because I think the most important argument of the article was about the blurring of lines where really sensitive national concerns do exist, concerns that are being trivialized by a leading vendor of personal-privacy communication software touting hacks in a way the author found unbecoming and, unspoken but clear to me at least, dangerous to society as a whole.
Last year I implemented so-called "content protection" software for my company, which enables restrictions on, e.g., sending emails containing sensitive words, or attaching files, plus in-depth classification and full-text inspection tools and services. This is a growth market right now, and I would strongly encourage anyone wanting interesting and well-paid consulting work to study this area, particularly how many new entrants are constantly appearing. My company doesn't expect much benefit from the expensive software installation itself; our purpose is to use the obtained metadata for, e.g., graph-database analysis to assist our own research and development of opportunities from customer-provided documentation and research. We're planning on linking back to raw incorporation-filing feeds on individual parties and even public LinkedIn posts and comments.
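To make the "restriction of sending emails with sensitive words" concrete, here is a minimal sketch of the kind of keyword check such content-protection tools perform on outbound mail. The term list and function name are purely illustrative, not taken from any real product:

```python
# Illustrative term list; real DLP products ship large, configurable
# dictionaries plus classification and full-text inspection rules.
SENSITIVE_TERMS = {"confidential", "internal only", "ssn"}

def flag_outbound_message(body: str) -> set[str]:
    """Return the sensitive terms found in an outgoing email body."""
    lowered = body.lower()
    return {term for term in SENSITIVE_TERMS if term in lowered}

hits = flag_outbound_message("Please keep this INTERNAL ONLY; see attached SSN list.")
print(hits)  # flags "internal only" and "ssn"
```

A real deployment hangs this check off the mail gateway and either blocks the message or logs the hit as metadata; the metadata side is what the comment describes reusing for graph analysis.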
I'm mentioning that because raw captured surveillance data becomes massively more potent information when combined with the associated network of correspondents and individual sources and references.
At one time, when I was young, I thought the price of academic research papers was the cost of government surveillance of the interested parties obtaining advanced insights into technology, analysis, and systems.
The software my company purchased is in theory capable of tracking the lifetime of a document that has been passed through any number of hands.
Obviously it's trivial to air-gap your reading device. But consider the volume of individual papers and documents you consume in any given year; for the HN crowd, that's likely a large number.
Make it difficult for criminals to conceal the pathway that a very large number of information sources take to reach their own devices, and the resulting black hole is a hypothetical perfect telltale snitch.
Conversely, it's perfectly possible to enable free acquisition of research documents by an intermediary, for consumption by a legitimate researcher or team.

I have worked for 30 years in specialist publishing: industry-association member journals paid for by advertising. The Internet allegedly destroyed the viability of my business. What actually happened was that advertising agencies suddenly declared print media dead and ceased operations in my field in almost choreographed unanimity. This was 25 years ago. I actually think it was my field that Google was interested in when it was reported in Advertising Age and other trade media that Google, along with a consortium of the biggest publishing houses, had seen its multi-year, multi-hundred-million-dollar project for trading printed advertising online fail, citing obstacles that included the very problems my company overcame just to start trading in '96. I don't think Google wanted to help anyone sell consumer-targeted advertising; they almost certainly knew, even in '04, that that market would be theirs alone. But highly vertical advertising within industry niches is different: what's being advertised is often incomprehensible without accompanying features commissioned by the publication to cover a niche within a niche and attract everyone in that market as advertisers. Take 200 thousand times 4 for quarterly issues, and 50 thousand average readers by name times 4 as a low "reach" estimate: that gives 1.6*10^11 pairs of eyeballs per year in this forgotten and buried business.
That's who will be only too happy to bear the infrastructure costs of the document-management system necessary for truly global-scale tracking of research dissemination.
Don't dismiss this immediately over privacy concerns: it couldn't fly without a way to give real privacy to researchers needing to avoid any giveaway of their direction and interests. Legally double-blind intermediary agents acting as proxies are not hard to implement, and I know demand for such a proxy exists among some customers of ours, as an additional layer of privacy and discretion for their work.
We've almost forgotten, because of the global economy, how advanced the USA (with critical input from other western nations) is compared to the rest of the world. I personally think the expansion of university campus facilities has been happening because of foreign-student demand, and it is potentially profitable only assuming that zero interest rates continue until the debts are paid, and that this happens before the life expectancy of the buildings erected creates a financial noose around higher education's head. The borrowing I've looked at doesn't have principal-repayment horizons anywhere near early enough.
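The back-of-envelope reach figure quoted above can be sanity-checked; the interpretation of the factors (issues, multipliers) is my own reading of the comment, but multiplied as written they do reproduce the stated 1.6*10^11:

```python
# Reach estimate from the comment: 200 thousand x 4 quarterly issues,
# times 50 thousand named readers x 4 (a low "reach" multiplier).
reach = 200_000 * 4 * 50_000 * 4
print(reach)  # 160000000000, i.e. 1.6e11 pairs of eyeballs per year
```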
Also, if you know someone is stealing your lunch from the shared work fridge, so you add rat poison to your lunch, do you get to walk away scot-free on the theory that it's the thief's fault?
Let's look at another case: I remember some people had USB drivers that detected mouse "jigglers" and shut down the computer in response to one. Would that also be illegal?
If I install anti-scan files and anti-mouse-jiggler drivers when travelling to China, do they become legal then?
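The core of the anti-jiggler defense described above is just noticing hardware that wasn't there before and reacting. Here is a minimal sketch of that logic; the function names are mine, and the `/sys/bus/usb/devices` path is Linux-specific:

```python
import os

def new_devices(baseline: set[str], current: set[str]) -> set[str]:
    """Device IDs present in the current snapshot but not in the baseline."""
    return current - baseline

def snapshot(path: str = "/sys/bus/usb/devices") -> set[str]:
    """On Linux, entries here appear and disappear as USB hardware is attached."""
    return set(os.listdir(path)) if os.path.isdir(path) else set()

# A defensive tool would take a baseline snapshot at boot, poll snapshot()
# periodically, and on any new_devices() hit lock or power off the machine
# (e.g. by invoking the system's poweroff command).
```

Whether triggering a shutdown this way changes the legal analysis is exactly the question the comment raises; the code only shows how mechanically simple the trigger is.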
Presumption of innocence is the most fundamental cornerstone of common law.
Similarly, if your app/device damages government property and tampers with legal evidence, both you and the creators would likely be held responsible. Even if the law is unclear, you will definitely face charges, given how defensive police departments are in these cases (there was one case where a person they beat up had extra charges brought against him for dirtying the officers' uniforms with his blood...).
Furthermore, simply creating exploit code and releasing it into the wild is illegal. So Signal, if it were ever found to have done what they let us believe they could do, could be held legally responsible, even if the code never actually exploited a live system.
You definitely are not allowed to set traps in your house with the intention of hurting potential thieves. So definitely no bear traps, etc.
Permanent ink would probably still fall under that category. And below that it becomes grey area.
So you would not, for example, be allowed to protect your vault from theft by using physical, automated violence against the thief.
My point is Cellebrite/the Cellebrite user would be the one spoiling the evidence. The evidence is sitting there on the device unspoiled, and only if the user decides to charge ahead without heeding the public warning that doing so without the necessary precautions will spoil the evidence will the evidence actually be spoiled.
Signal itself has no knowledge of which files constitute evidence (it applies this completely indiscriminately), so I don't think you could argue that it is knowingly spoiling evidence.
The article, written by a legal scholar with a specialty in precisely these issues, directly contradicts this.
Signal coyly threatened to make their app hack Cellebrite machines with the intent of spoiling evidence. It doesn't matter that they aren't targeting specific evidence. Blanket spoiling all Cellebrite evidence would apparently be enough to get them in legal trouble.
I'm having a hard time imagining this being a viable argument. Seems like the vendor should just fix their software if they expect it to work reliably. Anything else would be too large of a transgression on civil freedom.
Since Signal would be deploying that exploit to millions of devices to combat surveillance tech, I would expect that to at least result in a suit even if they were able to defend themselves successfully. It would be especially interesting to see how Cellebrite’s use by various repressive regimes entered into that: an American court might, for example, be sympathetic to a campaign trying to protect dissidents in China which happens to impact an American police agency using the same tool.
People are looking at Cellebrite wrong because law enforcement uses it. Cellebrite is a set of specialized thieving tools, and those tools can be wielded by anyone. The fact that law enforcement has unwisely and blindly integrated it into their toolchain does not mean the device should be given special protection over anything else. All this does is further cement law enforcement as a special privileged class in the United States, to whom Constitutional boundaries apparently don't apply (the 5th Amendment, which at this point I hold should realistically cover testimony via electronic-device metadata disclosure/compromise when breaking through encryption is involved, and the 4th Amendment when the Third Party Doctrine is taken into account).
I'm not arguing that it should have whatever “special protection” you have in mind. This is why I mentioned the concept of intent: just as having lock picks or a gun isn't automatically a crime, I think having an exploit for Cellebrite would depend on why you were developing and installing it.
If you were, say, helping dissidents in another country, I would expect a judge to be far more supportive than if it came up in the context of a criminal investigation with a lawful search warrant. In the latter case, you are aware of but refusing to comply with the legal system, and, regardless of how any of us personally feel about it, that's just not going to end well in most cases.
In that case, as long as one is not intending to interfere with a search warrant or other legal process, it should be fine for them to deliberately install a Cellebrite hack.
Encryption has no special treatment that would cause the 5th Amendment to apply. The 5th Amendment may apply if they ask you to tell them something (e.g., a password), but if they can break your encryption without your assistance, there's no difference between decrypting a physical letter and an electronic file: if the evidence (that letter or that file) was lawfully obtained, they can do that.
Doesn't mean it isn't net positive, just means the details of how they did it were... maybe not the cleverest. But who knows, one person's opinion, etc.
And that’s not for (American) lawyers and judges to decide; it’s for politicians in all democratic countries :)
Note that this is one of many, many reasons it’s unlikely that I’ll ever be appointed Lord Emperor.
Cellebrite, as I recall hearing (Or was it StingRay?) have pretty strict non-disclosure license terms; I doubt Cellebrite knowingly sold one to Moxie.