I have a lot to say about Signal’s Cellebrite hack (stanford.edu)
367 points by curmudgeon22 on May 16, 2021 | 298 comments



Eh, I can’t be bothered to care. Cellebrite hoards 0-days so they can use them to hack phones. They know about exploitable vulnerabilities but aren’t saying anything about them because they profit from insecurity. Thing is, just because Cellebrite knows about a thing doesn’t mean, say, China’s CCP or the Russian mafia or anyone else doesn’t also know about that thing. You and I are less safe just because Cellebrite wants to profit off of those vulnerabilities.

I just can’t work up the ability to sympathize with Cellebrite. The law may have something to say about Moxie’s writing, but in my opinion he has the clear ethical high ground in this argument.


You're not supposed to sympathize with Cellebrite, according to the article.

> If you work at Cellebrite, on the other hand: get down off your high horse, stop it with the “we’re the good guys” shtick, quit selling to authoritarian governments, and for god’s sake, fix your shit.

> Giving defense attorneys more ammo to push back harder against the use of Cellebrite devices against their clients is Good and Right and Just. The general point that Moxie made — Cellebrite’s tools are buggy A.F. and can be exploited in ways that undermine the reliability of their reports and extractions as evidence, which is the entire point of their existence — is actually more important than the specifics of this exploit

You're kind of missing the point of the article. The article agrees with you that Signal's hack was a net positive and Cellebrite is not a good company.


I saw those parts, but my overall impression was that the author thought Signal was foolish to write up their adventure and they shouldn’t have done it (while conceding that Cellebrite aren’t angels).

I also disagree with the notion that it’s good that Cellebrite exists because without them we’d have stronger anti-encryption laws. That’s hypothetical and all we know is what we have today. I’m not thrilled that someone is peeing on my basement carpet instead of peeing in my living room; I’d rather not have someone peeing on any of my rugs.


I’m not reading the article as a criticism of the work Signal has done, but their “lol u got pwned” way of announcing it — in particular, their coy threat about exploiting the vulnerability. Specifically:

- The threat is likelier to annoy judges than garner sympathy

- Following through on it is probably illegal

- Worse, following through could put their users in legal (and/or physical) jeopardy

- More generally, Signal should consult with lawyers before doing things like this


> - Worse, following through could put their users in legal (and/or physical) jeopardy

This bit is very relevant, and I agree. It’s ethically dubious to put unknowing users at risk in that way, whether from democratic or authoritarian governments.

The other points, though, very much assume that the goal is to change the outcomes of American court processes. The focus is almost entirely on what a judge in the US would think, on evidence rules in American courts, etc. Maybe American law and law enforcement isn’t as relevant as an American lawyer thinks it is, and Signal is betting that the PR and politics game is more important.

If they did make that bet, which I think is likely, then the article has some valid arguments at the end – this hack (or non-hack) may lead politicians to introduce stronger laws – and _that’s_ where the focus should be. Is this a good move, politically?

And, to reiterate from the beginning: Does this put end users in danger? If it does, it’s likely not worth the price even if there was some political victory in the end.


> Following through on it is probably illegal

I'm curious how? If they announce publicly that they will place files on devices that may exploit a publicly announced vulnerability in Cellebrite, then it's Cellebrite's prerogative to fix the vulnerability. If they knowingly ignore a publicly disclosed risk, then they have only themselves to blame.


TFA:

>Uh, is that legal?

>No, intentionally spoiling evidence — or “spoliating,” to use the legal term — is definitely not legal.

>Neither is hacking somebody’s computer, which is what Signal’s blog post is saying a “real exploit payload” could do. It said, “a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.” All of those things are a violation of the federal anti-hacking law known as the Computer Fraud and Abuse Act, or CFAA, and probably also of many state-law versions of the CFAA. (If the computer belongs to a federal law enforcement agency, it’s definitely a CFAA violation. If it’s a state, local, or tribal government law enforcement agency, then, because of how the CFAA defines “protected computers” covered by the Act, it might depend on whether the Windows machine that’s used for Cellebrite extractions is connected to the internet or not. That machine should be segmented apart from the rest of the police department’s network, but if it has an internet connection, the CFAA applies. And even if it doesn’t, I bet there are other ways of easily satisfying the “protected computer” definition.)


Imagine a physical vault in your house. This vault has a mechanism within it such that if anyone forces it open, it will destroy all its contents. It may also have defense mechanisms that act as a deterrent: it may spill/spew a very bad odor and permanent ink onto anything nearby, and that odor and color will be very hard to get rid of. Is such a vault legal? If someone breaks into your house, steals such a vault, and suffers damage in the process of trying to open it, is the owner of the vault liable?

What's the principle being applied here? How would the same principle be applied in the case of digital property?


The parts of the article quoted above suggest that the principle is a CFAA violation – someone who distributes an exploit tailored to destroy evidence captured by Cellebrite probably “knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer.”

Difficult philosophical questions arise with the phrases “knowingly causes” and “intentionally causes damage,” but a jury can use common sense to resolve them on the evidence in a particular case. The same issues arise when trying to determine intent and causation when someone fires a gun or carries a bag full of white powder. The details matter.


I think the crucial point glossed over in the analysis of the blog post is "knowingly causes the transmission". The user of Cellebrite causes the transmission. I would really like to see a proper legal analysis of the situation, and this doesn't seem to be it.

The author also misses the point of the "show me yours, I'll show you mine" argument. Cellebrite is, from what I understand, knowingly leaving everyone's machine vulnerable in order to conduct their business.

This is something that _should_ be illegal. Not disclosing (and actively benefiting from) vulnerabilities in other people's products is what we should have laws against.


The DOJ publishes legal guidance on prosecuting computer crimes [1], which includes this relevant passage:

> An attacker need not directly send the required transmission to the victim computer in order to violate this statute. In one case, a defendant inserted malicious code into a software program he wrote to run on his employer's computer network. United States v. Sullivan, 40 Fed. Appx. 740 (4th Cir. 2002) (unpublished) [2]. After lying dormant for four months, the malicious code activated and downloaded certain other malicious code to several hundred employee handheld computers, making them unusable. Id. at 741. The court held that the defendant knowingly caused transmission of code in violation of the statute. Id. at 743.

The CFAA is notoriously broad, which is probably why Pfefferkorn didn’t feel the need to undertake a detailed analysis of exactly how it prohibits the deployment of a targeted exploit which would “undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.”

[1] https://www.justice.gov/sites/default/files/criminal-ccips/l...

[2] https://www.anylaw.com/case/united-states-v-sullivan/fourth-...


This passage describes a really different situation though.

Say I have a USB stick with important data on it. It has a warning label on it that says "if you plug this in, it may destroy your computer unless you have the correct password file." If you plug it in (and your OS is vulnerable), it wipes every drive it can find (including itself) unless it finds a particular password file.

Is this USB Stick illegal?

Signal made it very, very clear that scanning their users' devices with Cellebrite tools might trigger some behavior. Now if you still go ahead and use this tool, can Signal be blamed, despite having warned you that this would occur?

I find all of this far from obvious. What Signal did is purely defensive _and_ clearly labeled. It's very unlike the examples cited so far.

(And after all we are talking about a scenario where the cops can still get the evidence simply by taking screenshots of the open app, so they are not even preventing cops from getting to the evidence, merely making it more inconvenient.)


I'm inclined to think that any computer used in good faith by law enforcement duly authorized to obtain evidence ought to be considered a "protected computer" if it is specifically targeted, as opposed to being affected by ubiquitous harmful code not distributed with any expectation of causing harm to LEO discovery machines (e.g. the authors of ransomware might reasonably expect law enforcement agencies to bare-metal install known-good OSs).

What worries me most about this disclosure is the potential for abuse inside law enforcement agencies and departments. What if an evidence-gathering machine is deliberately not patched against this exploit?

If I sold software like Cellebrite I would have at least attempted to make the cessation of licenses for any out-of-date installation enforceable.

What really confuses me is why vendors like Cellebrite don't have a commercial case for at least some level of independent testing of their wares in order to provide a limited warranty for the operation and results.

Until now I actually thought it was necessary to obtain such independent testing and make appropriate assurances to LEO to be able to legally sell such software in the first place.

The article concludes that the uneasy status quo permits all parties to do their best work. Unmentioned is that this at least pays lip service to the American way of meritocracy and endeavour, and the ideal ultimate effect of fairness to all.

This is probably my naivety again; but why can't laws prohibiting 0-day exploitation work to the advantage of the law, society, and commerce alike?

If zero-day exploits had to be disclosed to a central independent organisation (comprised of members from LEO and civilian life, working on a mandatory equal-resources footing to enable citizen participants without any need for corporate sponsorship), and there was a definite window permitting the use of exploits that ended with a mandatory tested patch release and public announcement, I don't see how it would be unfair or unreasonable for anyone on either side of the law. I would even consider it not a bad thing for vulns identified by software engineers but not discovered publicly to be notified to federal agencies when identified. I actually think that we should do this already for the protection of our diplomats and overseas representatives.

Since we already have the instrumentation to selectively patch individual devices in widespread use, why can't agencies request exceptions for devices under surveillance, to enable the security of the general public?

I realise this doesn't work for covert and unlawful intercepts. And there do exist reasons for covert intercepts to be carried out. However every advanced society should be pushing such incidents to the margins with every available force possible.

Security experts are worried about this argument because the global security of the USA is increasingly and credibly threatened. Show me how a well-designed infrastructure for the protection of the innocent from unwarranted invasion, as I've outlined here, can possibly be a negative for law enforcement and national security and I'll eat my hat: the suggestions I'm making entirely reinforce the accessibility of intercept capabilities for lawful deployment, and instrumentation for device-specific code patching only enhances the potential for positive acquisition of intelligence on criminals and foreign agents. The USA should be peeling back the layers of the baseband implementations of 5G and immediately order the decommissioning of all 2G installations, which are trivial to abuse.

The faster the USA creates a viable OSS 5G RAN code base, the faster potentially hostile foreign competition is disabled in the race for budget handsets and deployment.

The number of people who have any interest in this field is small enough for background checks to not be prohibitive to open source goals. However serious consideration needs to be given to any blanket release to higher education institutions because the number of overseas students is simply too great to rule out hostile intentions.

Along similar lines, we need to undo academic publishers' holds on legitimate interest in research, because only hostile nations are served when publicly funded research isn't made available to the public.

I mentioned that last point because I think the most important argument of the article was about the blurring of lines where genuinely sensitive national-level concerns do exist, concerns that are being trivialized by a leading vendor of personal privacy communication software touting hacks in a way the author explained she found unbecoming and - unspoken but clear to me at least - dangerous to society as a whole.

Last year I implemented so-called "content protection" software for my company, which enables e.g. restricting the sending of emails containing sensitive words, or the attachment of files, along with in-depth classification and full-text inspection tools and services. This is a growth market right now and I would strongly encourage anyone wanting interesting and well-paid consulting work to study this area, and particularly to spend time looking at how many new entrants are appearing constantly. My company doesn't expect to see much benefit from this expensive software installation, but our purpose is to use the obtained metadata for e.g. graph database analysis, to assist with our own research and development of opportunities from customer-provided documentation and research. We're planning on linking back to raw incorporation filing feeds on individual parties and even public LinkedIn posts and comments.
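
For a sense of what that first layer of such software does, here's a toy sketch of a keyword-based outbound-mail check that also emits the kind of metadata a graph pipeline could ingest later (everything here - names, terms, fields - is made up for illustration, not any vendor's actual API):

    # Toy sketch of a keyword-based "content protection" check on outgoing mail.
    # All identifiers and terms are illustrative, not any real product's API.
    import re
    from datetime import datetime, timezone

    SENSITIVE_TERMS = {"confidential", "merger", "source code", "customer list"}

    def inspect_outgoing(sender, recipients, body):
        lowered = body.lower()
        hits = sorted(t for t in SENSITIVE_TERMS
                      if re.search(r"\b" + re.escape(t) + r"\b", lowered))
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "sender": sender,
            "recipients": recipients,
            "blocked": bool(hits),        # block the send if anything matched
            "matched_terms": hits,        # metadata usable as graph edges later
        }

    if __name__ == "__main__":
        print(inspect_outgoing("alice@example.com", ["bob@example.org"],
                               "Please keep the merger terms confidential."))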

I'm mentioning that because captured surveillance data in the raw becomes massively more potent information when combined with the associated network of correspondents and individual sources and references.

At one time when I was young I thought the cost for academic research papers was the cost of government surveillance of interested parties obtaining advanced insights into technology and analysis and systems.

The software my company purchased is in theory capable of tracking the lifetime of a document that has been passed through any number of hands.

Obviously it's trivial to air-gap your reading device. But consider the volume of individual papers and documents you consume in any given year; for the HN crowd, that's certainly likely to be a large number.

Make it difficult for criminals to conceal the pathway that a very large number of information sources take up to their own devices, and the resulting black hole is a hypothetically perfect telltale snitch.

Conversely, it's perfectly possible to enable free acquisition of research documents by an intermediary for the consumption of a legitimate researcher or team. I have worked for 30 years in specialist publishing, in industry association members' journals paid for by advertising. The Internet allegedly destroyed the viability of my business. What did happen was that advertising agencies suddenly declared print media dead and ceased operations in my field in almost choreographed unanimity. This was 25 years ago.

I actually think it was my field that Google was interested in when they were reported in Advertising Age and other trade media to have declared, along with a consortium of the biggest publishing houses, that their multi-year, multiple-hundreds-of-millions-of-dollars project for trading printed advertising online had failed, mentioning that the particular obstacles included the very problems my company overcame just to start trading in '96. I don't think Google wanted to help anyone sell consumer-targeted advertising; they almost certainly knew, even in '04, that that would be their market to themselves. But highly vertical advertising within industry niches is different: what's being advertised often is incomprehensible without accompanying features commissioned by the publication to cover a niche within a niche and attract everyone in that market as advertisers. Take 200 thousand, times 4 for quarterly issues, times 50 thousand average readers by name, times 4 as a low "reach" estimate: that gives 1.6*10^11 pairs of eyeballs per year in this forgotten and buried business.

That's who will be only too happy to bear the infrastructure costs of the document management system necessary for a truly global scale tracking of research dissemination.

Don't dismiss this immediately only over concerns about privacy: this couldn't fly without a way to give real privacy to protect researchers needing to avoid any giveaway of their direction and interests. Legally double-blind intermediary agents acting as proxies are far from trouble to implement, and I know demand exists for such a proxy among some customers of ours, for an additional layer of privacy and discretion for their work.

We've almost forgotten, because of the global economy, how far advanced the USA (with critical input from other western nations) is compared to the rest of the world. I personally think that the expansion of university campus facilities has been happening because of foreign student demand, and potentially profits from it, assuming that zero interest rates continue until the debts are paid, and assuming that happens before the lifetime expectancy of the buildings erected creates a financial noose around higher education's head. The borrowing I've looked at doesn't have principal repayment horizons early enough, by a very long way.

Such expansion of a surveillance of research rrs


The purported hack here would specifically target Cellebrite, not anyone accessing these files in general.

Also, if you know someone is stealing your lunch from the shared work fridge, so you add rat poison to your lunch, do you get to walk away scot free on the theory that it’s the thief’s fault?


But Cellebrite could be used by an oppressive regime, or criminals that got their hands on it. I'm doubtful such an argument would hold up in court, but I don't think you can honestly say targeting Cellebrite is the same as targeting US law enforcement.


The issue here is not "someone" breaking into your house and stealing it, it is the authorities doing it. Destruction/sabotage of the evidence collection process is very possibly going to be held against you.


It seems like there's a bit of a logical leap in that argument. As the article notes, Cellebrite isn't exactly discerning when it comes to their customer base. It seems like they sell their tools to just about anyone willing to pay their steep fee, not just US law enforcement. I'd argue it's more akin to a specialized crowbar or blowtorch in the safe analogy. Sure, law enforcement might use it to try to crack your safe, but so could various other bad actors. There would be a legitimate non-spoliation purpose in protecting political dissidents who have their phones seized at a foreign border or stolen, for example.


But all Signal is doing (or threatening to do) is blowing up devices that parse all files using insecure software.

Let's look at another case: I remember that some people had USB drivers that detected "wigglers" and shut down the computer in response to such a wiggler. Would that also be illegal?

If I install anti-scan files and anti-mouse-wiggler protections when travelling to China, do they become legal then?
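
To make the "wiggler" case concrete, that kind of defensive tool boils down to roughly this loop (a minimal sketch assuming a Linux host with lsusb available; real tools such as usbkill are far more careful, and the shutdown command is deliberately left commented out):

    # Minimal sketch: watch for USB devices that weren't present at startup
    # and react. Assumes Linux with `lsusb`; illustrative only.
    import subprocess
    import time

    def connected_usb_ids():
        """Return the set of vendor:product IDs currently reported by lsusb."""
        out = subprocess.run(["lsusb"], capture_output=True, text=True, check=True)
        ids = set()
        for line in out.stdout.splitlines():
            parts = line.split()
            if "ID" in parts:
                ids.add(parts[parts.index("ID") + 1])   # e.g. "046d:c52b"
        return ids

    def watch(poll_seconds=1.0):
        baseline = connected_usb_ids()
        while True:
            time.sleep(poll_seconds)
            added = connected_usb_ids() - baseline
            if added:
                print("Unexpected USB device(s) attached:", added)
                # A defensive tool would lock or power off here, e.g.:
                # subprocess.run(["systemctl", "poweroff"])
                break

    if __name__ == "__main__":
        watch()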


The article quotes the part of the Signal blog that said “a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.” A complex exploit like that would say much more about the author’s intent than a driver that shuts down the computer when a “wiggler” is detected.


But what if Signal's (or anyone else's) exploitation simply locked the device when the file was read, preventing further data extraction?


Well if it's the authorities, they can present you with a warrant and request that you disable your defenses. You should not be required to roll over and present your defenseless underbelly to everyone that wants to break in, in case some of them are "the authorities".


You don’t have to help anyone collect evidence against you, you’re innocent until proven guilty, it’s up to others to prove their case- why would you help implicate yourself?

Presumption of innocence is the most fundamental cornerstone of common law.


Yeah, and the authorities have to do it the right way. There's a reason why this is such a big issue and not as straightforward as you make it to be. https://www.vox.com/recode/2020/2/24/21133600/police-fbi-pho...


I agree that the overall situation around evidence recovery from locked devices is not straightforward, but I don't think I referenced this in my comment–I merely provided insight into why the specific actions might be considered to be illegal (using the argument in the blog post, I might add).


Physical booby traps tend to be illegal if they aren't supervised. Not sure if that's only for physical damages / injuries or applies to other damage as well.

https://en.wikipedia.org/wiki/Katko_v._Briney


There's a fellow on YouTube who leaves bait boxes that shoot glitter and spray a stink spray.


Not a lawyer, but if I were a lawyer, I would tell my client to never do that. Should something go wrong and the glitter get into someone's eyes, that's a case. Even worse, if the glitter dispenses while a person is operating a vehicle and pedestrians are injured, that's a case.


If the vault hurts police officers who had a legal warrant for opening it, through a feature that was purposefully designed for this, I would bet that yes, it would be completely illegal, and both the manufacturer AND owner (if it can be proved they were aware) may be held responsible for the injuries.

Similarly, if your app/device damages government property and tampers with legal evidence, both you and the creators would likely be held responsible. Even if the law may be unclear, you will definitely face charges for this, given how defensive police departments are in these cases (there was one case where a person they beat up had extra charges brought against him for dirtying the officers' uniforms with his blood... ).

Furthermore, simply creating exploit code and releasing it into the wild is illegal, so Signal, if it were ever found to have done what they let us believe they could do, could be held legally responsible, even if the code never made it to exploit a live system at all.


"Is such a vault legal? "

You definitely are not allowed to have traps in your house with the intention of hurting potential thieves. So definitely no bear traps etc.

Permanent ink would probably still fall under that category. And below that it becomes grey area.


What about dye packs from banks?


You are probably not a bank, so the same rules do not apply.



That depends on the state in the US. Stand Your Ground and Castle doctrine make it substantially less clear.


Less clear, maybe, but as far as I understand the various regulations, they all refer to personal self defense. So to protect yourself from harm, you may use (deadly) force.

That would not apply to protecting your vault from theft, by using physical - automated - violence against the thief.


>No, intentionally spoiling evidence — or “spoliating,” to use the legal term — is definitely not legal.

My point is Cellebrite/the Cellebrite user would be the one spoiling the evidence. The evidence is sitting there on the device unspoiled, and only if the user decides to charge ahead without heeding the public warning that doing so without the necessary precautions will spoil the evidence will the evidence actually be spoiled.

Signal itself has no knowledge of which files constitute evidence (it applies this completely indiscriminately), so I don't think you could argue that it is knowingly spoiling evidence.


> Signal itself has no knowledge of which files constitute evidence (it applies this completely indiscriminately), so I don't think you could argue that it is knowingly spoiling evidence.

The article, written by a legal scholar with a specialty in precisely these issues, directly contradicts this.

Signal coyly threatened to make their app hack Cellebrite machines with the intent of spoiling evidence. It doesn't matter that they aren't targeting specific evidence. Blanket spoiling all Cellebrite evidence would apparently be enough to get them in legal trouble.


Where is this special status for Cellebrite coming from? Just because they're one of the vendors whose software happens to be used by some governments, I'm suddenly forbidden from having an arbitrary sequence of bytes on my device in case someone else happens to connect and run some arbitrary software on it?

I'm having a hard time imagining this being a viable argument. Seems like the vendor should just fix their software if they expect it to work reliably. Anything else would be too large of a transgression on civil freedom.


There’s no special status for Cellebrite: it comes down to intent and, especially, that judges are not computers rigidly executing code. If you do something which is designed to damage equipment used by law enforcement, a judge is going to ask what your intention was, not just throw up their hands and say anyone could have had those random bytes. As a real-world analogy, consider for the sake of argument how having a trap on your home safe might look if you were a) in a very high-crime neighborhood or b) engaged in criminal activities and had written of your desire to harm cops – even if the action is exactly the same (and illegal in your jurisdiction), I’d expect the latter situation to go a lot worse because you’re knowingly targeting law enforcement engaged in (at least from the court’s perspective) legitimate activities.

Since Signal would be deploying that exploit to millions of devices to combat surveillance tech, I would expect that to at least result in a suit even if they were able to defend themselves successfully. It would be especially interesting to see how Cellebrite’s use by various repressive regimes entered into that: an American court might, for example, be sympathetic to a campaign trying to protect dissidents in China which happens to impact an American police agency using the same tool.


There is still legitimate utility to this behavior in defending against actors other than United States law enforcement.

People are looking at Cellebrite wrong due to law enforcement using it. Cellebrite is a set of specialized thieving tools. Those tools can be wielded by anyone. The fact law enforcement has unwisely and blindly integrated it into their toolchain does not mean the device should be given special protection over anything else. All this does is further cement "law enforcement" as a special privileged class in the United States, to whom Constitutional boundaries seemingly do not apply (the 5th Amendment, which I hold at this point should realistically cover testimony via electronic device metadata disclosure/compromise when breaking through encryption is involved, and the 4th Amendment when the Third Party Doctrine is taken into account).


> The fact law enforcement has unwisely and blindly integrated it into their toolchain does not mean the device should be given special protection over anything else.

I'm not arguing that it should have whatever “special protection” you have in mind. This is why I mentioned the concept of intent: just as having lock picks or a gun isn't automatically a crime, I think having an exploit for Cellebrite would depend on why you were developing and installing it.

If you were, say, helping dissidents in another country I would expect a judge to be far more supportive of that than if it came up in the context of a criminal investigation with a lawful search warrant. In the latter case, you are aware of but refusing to comply with the legal system and, regardless of how any of us personally feel about it, that's just not going to end well in most cases.


> I'm not arguing that it should have whatever “special protection” you have in mind. This is why I mentioned the concept of intent: just as having lock picks or a gun isn't automatically a crime, I think having an exploit for Cellebrite would depend on why you were developing and installing it.

In that case, as long as one is not intending to interfere with a search warrant or other legal process, it should be fine for them to deliberately install a Cellebrite hack.


Constitutional boundaries and the 4th amendment applies. They do need a warrant, but with a warrant they are allowed to go through all your electronic records on your devices just as they are allowed to go through all your written records in your drawers and safes and envelopes.

Encryption has no special treatment that would cause 5th amendment to apply. 5th amendment may apply if they ask you to tell something (e.g. a password), but if they can break your encryption without your assistance, then there's no difference if they're decrypting a physical letter or an electronic file, if the evidence (that letter or that file) was lawfully obtained, they can do that.


My read was closer to what the article stated at the end -- the issue is that it is written for a tech geek audience when the real audience should have been judges and lawyers. So being vague and flippant was why they were foolish, not in saying something at all. And that they should probably not have implied that they were going to break the law, which also seems foolish.

Doesn't mean it isn't net positive, just means the details of how they did it were... maybe not the cleverest. But who knows, one person's opinion, etc.


I think it should have been written for politicians. The big problem isn’t that some American police agencies use these tools, the problem is that it’s legal to make, sell, use, and export them.

And that’s not for (American) lawyers and judges to decide against, it’s for politicians in all democratic countries :)


I think the article was pretty clearly written, and did not in any way appeal to or try to engender sympathy for Cellebrite.


No, the complaint was with how it was written, not what signal did.


Is there a way to further damage its profitability and force it to release 0-day in a legal way?


If/when I am appointed Lord Emperor, I would absolutely enforce the Computer Fraud and Abuse Act against the officers of such companies. Unless an audit could prove that every single one of their customers was a legitimate law enforcement organization, I’d go with the default assumption that they’re black hat hackers who happen to have a couple of legal sales. Let them prove otherwise.

Note that this is one of many, many reasons it’s unlikely that I’ll ever be appointed Lord Emperor.


The fact that Signal got their hands on a copy of the Cellebrite product makes the “let them prove otherwise” disclaimer a moot point ;)


Not necessarily. That could have been an unused loaner from an anonymous law enforcement quartermaster who had a moment of conscience.

Cellebrite, as I recall hearing (or was it StingRay?), has pretty strict non-disclosure license terms; I doubt Cellebrite knowingly sold one to Moxie.


I’m sure they have some sort of a business case lying around somewhere. So they can put a price tag on each 0-day. If they can't be forced, maybe they can be paid.


If Cellebrite was disclosing these vulns when they found them, there would be no business, thus no Cellebrite, thus they wouldn’t have found them. “Destroy Cellebrite” is a possible outcome but “Have Cellebrite release 0days when they find them” isn’t.


Their terrible business model isn’t my concern. And “keep security vulnerabilities secret and hope that we’re the only ones who can use them” is, indeed, terrible.


Right, so your beef with Cellebrite is that they exist (fair) not that they hoard 0days (which is a necessary condition for them to exist).


I disagree with your re-casting of the parent's statement. I believe the parent said that they are in fact opposed to Cellebrite's hoarding of 0-days.


I’m not sure how you came up with that incorrect conclusion.


Any 0-days they find can be used on already confiscated devices even if they report them and the manufacturer issues a fix, so their business could in theory still work.


I disagree with this. Signal isn't hacking Cellebrite by creating a malformed file that causes Cellebrite's software to implode.

I would be interested in seeing this go in front of a court because Signal isn't directly targeting any specific person, and the files are fine until they are processed through a specific broken pipeline.

If I put a fake USB port on my phone that was a USB zapper to kill the device it's connected to, it would not be illegal and it would be on the people seizing my phone to take responsibility for it. You cannot repackage vulnerabilities for police and then turn around and play coy because you're not able to keep your software up to date.

In the defense attorney section, the argument shouldn't be about the PoC but the fact that the PoC shows that Cellebrite's software is outdated and could be compromised. You can specifically ask for the backup that was extracted from the mobile device to be analyzed by third party software.


I'm curious if booby-trap laws would have anything to say about this. If I can distill the arguments down to my (completely abstract) understanding:

1) Cellebrite has to interface with software to extract data.

2) Signal is the software in some scenarios.

3) Signal can alter itself so that if Cellebrite interfaces with it, Cellebrite breaks.

4) If Cellebrite doesn't interface with Signal, Cellebrite is fine, Signal is fine, and no one is hacked.

If I trespass on someone's property, and they have a booby trap that blows my leg off, I believe in most US jurisdictions, I can take them to court and have a good chance of winning.

Isn't this the same type of thing?

On the other hand. If I have a guard dog, and a bunch of "Beware of Dog" signs, and someone trespasses on my property, and the dog attacks them, I don't believe I'm liable. So by publishing this information, has Signal avoided the important nuance of being a trap?


It is not the same type of thing because booby traps are, legally, specific about causing bodily harm to a living thing. In practice the scope and applicability of the CFAA has to be explored.


Also not the same thing because the law also considers the harm caused to innocent 3rd parties from booby traps. No firefighter or EMT is gonna be harmed by the Signal code designed to break the Cellebrite device.


"Police officer executing a warrant" is another typical category of innocent 3rd party that the laws around booby traps think about. If the harm is directed at a police officer using cellebrite -- say by infecting their computer with malware -- that may not be regarded favorably by a court.


Disclosure would be an interesting precedent.

"I have Signal installed on my phone" seems a reasonable disclosure of a known potential trap.

If a police officer chooses to scan said phone with Cellebrite, it feels reasonable that you have discharged your knowledge to the extent possible.


> "I have Signal installed on my phone" seems a reasonable disclosure of a known potential trap.

No, that's a straightforward statement of fact which a software expert might realize implies there's a trap. A police officer could not reasonably be expected to know that.

A reasonable disclosure of a known potential trap is

"I have Signal installed on my phone, so if you use a Cellebrite device to pull my data your Cellebrite device might get hacked by Signal."


Unless you're an information security professional, it seems unreasonable to expect an average Signal user to know more about the security of Cellebrite than Cellebrite's user (the police).


Probably so. Thanks for pointing that out.

If you're using Signal at all, you probably care at least a little about security, so it's not a given, but probably the vast majority of Signal users will never hear about this.

I wasn't trying to say you're automatically guilty if you don't warn the police about what Signal could do.

I was just trying to say that a statement which merely _implies_ a trap exists (if you know enough about a piece of software) is not a reasonable warning that a trap exists.

If you know the trap exists, a case can be made that such an indirect statement is actually baiting the trap, rather than warning about it.


It’s more knowingly destroying evidence than causing bodily harm.


That’s the thing: It’s not evidence at the time you set the trap. There are plenty of legitimate reasons you may want to protect your computer from Cellebrite users.

It’s not just cops that use this software. Bad foreign actors do. Private investigators might. Hell, Signal demonstrated that they’re falling off trucks, maybe you want to protect yourself from Moxie!


I understand the short answer is probably "Judges see it differently so the logic doesn't matter", but I don't get the difference between setting a "booby trap" to wipe a phone and the basic phone-wipe security settings that are already on phones.

In the San Bernardino iPhone case there was a lot of hand-wringing about Apple's password limits, but no one was accusing Apple of purposefully destroying evidence because it has a setting that wipes data after multiple failed login attempts.

Cellebrite does not only sell its software to the US government; one of its chief criticisms is that it doesn't really care who gets its code. So the threat model to end-users is the same, the same fears that would make me want to wipe my phone if someone is trying to get into it might make me want to wipe my phone if someone is trying to automatically pull large amounts of data off of it.

Is the worry here that Cellebrite's vulnerability would need to be executed on a different computer, so it's in a different category? Forget technicalities and cleverness, I don't understand even the basic logical difference between Signal destroying its own data on export and iPhones wiping their data after failed login attempts. I trust the author, but I just don't get it. What security measures are acceptable to build into software?
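
For what it's worth, the on-device measure being compared here amounts to something like the following (a toy sketch of erase-after-N-failed-attempts logic, not how iOS or Signal actually implement anything):

    # Toy sketch: erase this device's own data after too many failed unlocks.
    # The point of the comparison: the "damage" is confined to the device itself.
    MAX_ATTEMPTS = 10

    class Device:
        def __init__(self, passcode):
            self._passcode = passcode
            self._failed = 0
            self.data = {"messages": ["hello"], "photos": ["img001"]}

        def unlock(self, attempt):
            if attempt == self._passcode:
                self._failed = 0
                return True
            self._failed += 1
            if self._failed >= MAX_ATTEMPTS:
                self.data.clear()    # the wipe: only this device's own contents
            return False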


> Is the worry here that Cellebrite's vulnerability would need to be executed on a different computer, so it's in a different category?

Yes, possibly. The legal system is all about trying to establish clear lines dividing the spectrum of obviously OK behavior and obviously unacceptable behavior. It is obviously OK to delete text messages off your phone. It is obviously unacceptable to break into a police station and delete evidence off their computers. Somewhere in between is the dividing line, and if this went before a judge it is entirely plausible that they would draw the line there.


> no one was accusing Apple of purposefully destroying evidence because it has a setting that wipes data after multiple failed login attempts.

That’s because the feature is designed to protect your data and phone after it’s been stolen by a criminal, not to destroy evidence lawfully sought by law enforcement.


> not to destroy evidence lawfully sought by law enforcement.

But Apple's encryption does still destroy evidence lawfully sought by law enforcement. And in fact, Apple has gone out of its way to make sure that its encryption will still destroy evidence even in instances where law enforcement is trying to access it -- that was the entire controversy behind the San Bernardino case.

Apple wasn't willing to put holes in its security even knowing that their position made it harder for police officers to execute a lawful warrant.

> is designed to protect your data and phone after it’s been stolen by a criminal

I think I already talked about this above:

> Cellebrite does not only sell its software to the US government; one of its chief criticisms is that it doesn't really care who gets its code. So the threat model to end-users is the same, the same fears that would make me want to wipe my phone if someone is trying to get into it might make me want to wipe my phone if someone is trying to automatically pull large amounts of data off of it.

It's not clear to me that the threat model from Cellebrite is different than the threat model from a criminal. Cellebrite does not exclusively sell its software to the US government. And if Signal's devs could get their hands on it then there's no reason to believe that criminals couldn't get their hands on it as well. We already know their software has been leaked outside of law enforcement, because Moxie has it right now.

I'm not saying that the author is wrong, I believe them. And I can understand that exploiting a vulnerability might be treated differently than on-device data deletion. But the specific reasoning you give about intent to disrupt an investigation makes no sense to me. In both cases I'm defending against criminals who may have stolen my phone and may be trying to exfiltrate data. The police aren't who I'm worried about here.

It's not even that Apple's and Signal's threat models seem to be 'technically' the same under some kind of narrow criteria or letter of the law; they're basically identical. As a layperson on the street or as a programmer trying to build secure software, I don't know how I would tell the two threat models apart. I don't know what test I can use here to tell when I am and am not allowed to be afraid of criminals stealing my phone and data.


Intent matters a lot in the law.

The fact that evidence is destroyed even though it is lawfully sought by law enforcement can be, as attorneys would say, incidental. The same goes for people like me who regularly shred documents. I don't do it to frustrate law enforcement; I do it to frustrate identity thieves.

BTW, companies destroy what could be future evidence as a routine matter under the guise of data retention policies. Take, for example, a policy that all incoming and outgoing emails are expunged after 90 days to conserve space and help contain the damage caused by corporate espionage. Courts aren't going to hold companies accountable for willful destruction of evidence if some evidence they would have lawfully sought is gone due to the execution of the policy. However, if a company has been given notice that they are subject to a subsequent preservation order, they must suspend the policy to the extent needed to effectuate the warrant or subpoena to avoid getting into trouble.
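
As a concrete illustration of that kind of routine retention policy (purely a sketch; the 90-day window and the preservation-hold flag mirror the example above, and the paths are made up):

    # Sketch of a routine retention job: expunge mail older than 90 days,
    # unless a preservation order has suspended the policy.
    import time
    from pathlib import Path

    RETENTION_DAYS = 90
    PRESERVATION_HOLD = False    # set True when under a preservation order

    def expunge_old_mail(maildir):
        if PRESERVATION_HOLD:
            return 0                         # policy suspended: preserve everything
        cutoff = time.time() - RETENTION_DAYS * 86400
        removed = 0
        for msg in Path(maildir).glob("*.eml"):
            if msg.stat().st_mtime < cutoff:
                msg.unlink()
                removed += 1
        return removed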

> I don't know what test I can use here to tell when I am and am not allowed to be afraid of criminals stealing my phone and data.

That's why we hire attorneys! Legal counsel is here to help. Ain't nothing wrong with relying on subject matter experts. For the same reason, I hire people to crawl through my attics, too.


> Intent matters a lot in the law.

I guess this is what it would come down to (ignoring other liabilities like the CFAA): if Signal actually implemented their feature, would they be able to argue that their intent was to stop criminals abusing Cellebrite software, or would it be possible to argue that their intent was from the start to disrupt police investigations?

With Apple that's probably going to be hard to argue, they'll come out with some basic stats that talk about theft reduction, they'll point at their advertising and messaging around the feature.

I can see a distinction there, even though I don't see a super-clear reason to believe from Signal's one blog post that they're specifically trying to disrupt the police.

> That's why we hire attorneys! Legal counsel is here to help.

This is obviously good advice, and people shouldn't be looking at HN musings to figure out what is and isn't legal. But at the same time, minor sidenote:

I'm not angry at you, and this isn't anyone's fault in specific, but I low-key hate this answer because a metric ton of innovative software gets built by people who do not have the resources to hire attorneys for every decision that they make, and it's really unreasonable on a societal level to expect every Open Source dev, teenager, single-founder entrepreneur, etc... to have the resources to have legal counsel on hand. The majority of people in the US can't just go out and talk to a lawyer whenever they want.

Not really relevant to what we're talking about, and again, nothing to do with you, but I'm still unable to keep myself from ranting about that whenever this topic comes up.


> if Signal actually implemented their feature, would they be able to argue that their intent was to stop criminals abusing Cellebrite software, or would it be possible to argue that their intent was from the start to disrupt police investigations?

You should read the article again if you haven't already. The author discusses the context that would make this a hard sell to a court. Moxie Marlinspike has been around a long time, and he hasn't been particularly discreet about his opinions about law enforcement.

> a metric ton of innovative software gets built by people who do not have the resources to hire attorneys for every decision that they make, and it's really unreasonable on a societal level to expect every Open Source dev, teenager, single-founder entrepreneur, etc... to have the resources to have legal counsel on hand. The majority of people in the US can't just go out and talk to a lawyer whenever they want.

Have you tried? I think there's a misconception that lawyers are resources that will only talk to you if you can verify you have $1M in the bank. Even when I was a poor student I found that I could phone up just about any lawyer and get 15 minutes of their time. This is often enough to get the gist of whether whatever I want to do is going to be legally risky. Most attorneys are nice enough to tell you what you're about to do is incredibly stupid (assuming it is, in fact, incredibly stupid) without charging you for the privilege.

And come on, we're not talking about writing a new game or a container orchestrator or some new ML algorithm here; we're talking about technology that clearly has a strong relationship to law enforcement and has a legacy of adversarial practices. Let's practice a little common sense here.


> Moxie Marlinspike has been around a long time, and he hasn't been particularly discreet about his opinions about law enforcement.

I did read the article, I don't personally see the intent that the author is attributing (although I understand how other people might, and again, this is all separate from the CFAA concerns). The author is claiming that Cellebrite's users are synonymous and interchangeable with law enforcement. But that's clearly not the case, or else Moxie wouldn't have a copy of the software since he's not a cop.

Signal never mentions law enforcement in the original post; the only mention they make to governments are non-US authoritarian countries, which... it's not illegal in the US to build software that Turkey dislikes. The only category of users that Signal's post specifically supports by name are journalists and activists -- in other words, not criminals under US investigation.

And this is part of what weirds me out about this conversation. When you start talking about how obviously Moxie is trying to hack police departments because he's criticized them in the past -- my opinions about law enforcement overall shouldn't block me from writing secure code. I believe in police accountability, I publicly backed Apple's position during the San Bernardino case, which according to multiple police spokespeople apparently means that I love terrorists and hate America. Does their framing of that position mean I'm not allowed to build secure software now?

You don't see it as problematic that being critical of the government would mean that you have less legal leeway to write secure code? "Hasn't been particularly discreet about his opinions of law enforcement" to me reads as "he's going to face increased legal scrutiny and have fewer legal protections purely because of 1st Amendment protected speech."

> Have you tried? I think there's a misconception that lawyers are resources that will only talk to you if you can verify you have $1M in the bank.

It's entirely possible that I'm bad at looking around at stuff like this. I haven't seen specialist law offices that aren't charging more than a hundred dollars an hour, but... I am not going to pretend I'm an expert on this stuff, at all. I'm not an expert on anything we're talking about.

> And come on, we're not talking about writing a new game or a container orchestrator or some new ML algorithm here; we're talking about technology that clearly has a strong relationship to law enforcement and has a legacy of adversarial practices. Let's practice a little common sense here.

I'm not sure I follow, is your argument that security code is in a separate category from other Open Source software? I don't understand what you're implying. Signal is a messaging app, shouldn't ordinary people be able to build those?

There aren't a ton of consumer-facing projects I can build that won't have to care about security and privacy. You don't think that games, or music storage/tagging, or backup systems have to care about this stuff? It's not only banks that do encryption, any system that touches user data should be able to protect that data from criminals.


Is it considered evidence before law enforcement knows the contents of the device?


Yes. At least in a civil context, the doctrine around spoliation requires a party to preserve evidence when they know, or should know, that the evidence is likely to be relevant to pending or future litigation.


"Booby trap" laws aren't relevant because they're specific to physical injury.

However, there are explicit laws against tampering with evidence or things that you'd expect are likely to become evidence.


Blowing your leg off is not the right comparison; this is closer to a lock that breaks your lockpick if you try to pick it.

No one is harmed but the tool being used to break in.


I strongly doubt it's anything to do with booby-trapping. The article was less focused on the physical act of data destruction and more the context of destroying data related to an investigation, which arguably constitutes Destruction, Alteration, or falsification of records as per https://www.law.cornell.edu/uscode/text/18/1519

I say arguably of course as I doubt that this is a scenario that has really been carefully tested in law, but the article seemed more focused on how such an act would be interpreted by the courts if it really came to it. And the important part of such a situation is mentioned here:

"... (if the user gets blamed for what her phone does to a Cellebrite machine, she will be plunged into a world of pain, irrespective of whether she would ultimately be held culpable for the design of an app she had installed on her phone)... "

My layperson read of this situation is that it doesn't really matter whether, at the time such a file exists, you know you're under investigation. If the effect the file will have on a process used by a government agency in a legal search is known, the fear seems to be that even if you ultimately end up inculpable, it's a very long and rocky road to get there legally speaking, as the government will likely not even try to argue about the quality of code or whether it's you or Cellebrite (or whoever) that is responsible, but that you had a file which had a known effect which in turn impeded an investigation.

Remember that criminal cases have specific charges that the prosecution brings and that the defendant defends against; if the charge is that "[you] knowingly put data on your computer which impedes investigations by Agencies of the US Government by means of corrupting data", the mechanism of how the data got corrupted is not nearly as important as the actual act/intention.

If Signal really is doing this, I think their intent was to undermine this with the random-users part (so that users couldn't feasibly know whether they had a trap or not), but personally I think this is kind of weak: with the announcement, there's knowledge that an application you have has a non-zero chance of causing such disruption, and that's probably enough to at least waste a lot of your time and money. (Especially if it really did a number on a Cellebrite device and damaged a lot of other investigations... probably they'd go after a person just out of spite at that point.)


This also highlights what should be considered a disqualifier for overly broad legislative statutes. It seems to me that the CFAA was put into practice with woefully little thought as to how computing systems work and, by extension, how the legal system interacts with them.

This results in an incredibly powerful tool landing in the prosecution's hands that can be used to quite literally drive a person to death by legal system while everyone non-technical has to be brought up to speed on how this stuff works and doesn't work.

Realistically speaking, I don't think the CFAA has actually been driven by real concerted attention from the public. If it had been, I don't think the software industry would be anywhere near as big as it is, because when you really understand what you're doing when accepting a EULA, and then compare that expectation with what programs actually do, there'd be an awful lot more software used for official purposes that actually falls under the CFAA than anyone not in an official capacity would feel comfortable with; never mind software for private use.


I would guess that it is way simpler than that. What will be gauged is the intent.

This file is targeted to damage LE property; it is hard to argue that it is just random bytes.


The intent is to protect the content of the user’s device from unauthorized access by poorly written forensic extraction code. It’s not offensive but defensive, as Cellebrite has to actively read the file, a file which is not theirs. Technically, Signal owns the copyright and does not authorize Cellebrite to read it (I’d assume, legally).


And technically if the LEO user of Cellebrite has a warrant, they are legally authorized to look at that data no matter what Signal or the phone's owner thinks. So, in that sense, it's the ultimate 'authorized access'.


Cellebrite sells their hardware to lots of non-LEO, including corporate security, law firms, private investigators and regimes that have no respect for human rights. Cellebrite's UFED hardware even shows up on eBay and other online sales platforms.

If someone has a Cellebrite, they do not necessarily have a lawful right to access. Could just be a $16/hr Pinkerton (a wholly owned subsidiary of Securitas) using UFED on a phone they stole from an employee or contractor of the organization they contract with.


I feel like this is a key point for the entire discussion. The legality of deliberately foiling lawful investigations is debatable, but protecting yourself against wild-west malware decidedly less so.

That being said - if it's all fine and dandy, I don't see why a Cellebrite-foiler couldn't be a separate app. Moxie (threatening to) piggy-back it onto Signal, purely because it's the app he controls, is a deeply user-hostile move.


> Cellebrite's UFED hardware even shows up on eBay and other online sales platforms.

I'm surprised Cellebrite allows for this. I would have assumed that the sales contract would include "this is a lease, not a purchase" type wording, so that "you may not sell this device to anyone", with a buy-back clause provided instead.


You'd be surprised, but older-generation UFEDs often end up on eBay for under $1000.

I have one at home but yeah, there's no EULA preventing you from selling something that's yours.


I'm not surprised that it is happening at all. Of course someone who paid a large sum of money for something they no longer need/want will try to sell it to recoup some of that money. My surprise is that Cellebrite does not buy them back to keep the demand/supply artificial.


But are you obliged to consent? Do you have to give them all your passwords too? Can they use "data exfiltration tools" on humans? If the focus of your exploit payload is specifically to neutralize the attack and not cause extended or arbitrary damage, could this be counted as "self defense" or just protecting your privacy?


Yes, you're obliged to consent - you don't necessarily have to help them, but you're also prohibited from obstructing or delaying them, and especially from destroying or hiding evidence. A lawful warrant overrides any expectations of privacy. Furthermore, there's no "self-defence" concept in any computer-related statutes; protecting a life can be an excuse for certain otherwise illegal actions, but protecting your data or devices is not; a "hack-back" is a crime on its own even if it would run on a criminal's computer, and in the Cellebrite case it's presumed that when law enforcement runs the data collection, they have full legal rights to access that device and data.

For a physical world analogy, let's suppose that someone gets shot, and you run away with the gun used and throw it into a river. Even if you'd be acquitted for the shooting itself (due to e.g. self-defence, or perhaps someone else did that shooting), you can be convicted for throwing the gun into the river as tampering with physical evidence, as hiding that gun is a crime by itself if the jury assumes that you did it so that it wouldn't get used as evidence. That applies even if there wasn't any warrant yet, as the investigation hadn't yet started; it's sufficient that you would have expected that this might get used as evidence.


I think a better physical analogy is a tamper-proof safe which destroys its contents. With such a safe (or the Signal hack), if I disclosed the presence of the safeguard and LE either 1) failed to open the device without triggering the safeguard or 2) triggered the safeguard unintentionally, is it really any different from not providing a password? The password would be the operative piece of information preventing them from getting the desired data without my consent.


A tamper proof safe which destroys its contents indeed seems like a good analogy.

However, I do think that in the case of a suspect having such a tamper-proof safe, it would be a valid court order to require you to open that safe without destroying the contents. It's not really helpful to look for analogies based on how similar something is to "providing a password", since providing a password is itself a boundary case between the right not to testify against yourself and the duty to provide evidence, so the specifics of each case matter, and in multiple cases people in the USA have gotten jail time for refusing to provide passwords. Locked safes predate passwords, and it's well established that the contents of a safe are fair game with a warrant, so any analogy between safes and passwords is an argument that people should be required to disclose their passwords, not that people get to keep their safes unopened.

And if the safe did actually destroy its contents, they would charge you with tampering with evidence, and in addition to that, the prosecution might be allowed to assert as fact that the destroyed evidence did actually contain all the things harmful to your case that they intended to find there - the "spoliation inference" concept.

Also, your motivation for having such a safeguard matters. If a reasonable person would believe that you chose to use such a safeguard so that it would prevent police from getting to evidence and destroy it, that may be treated as a crime even if they manage to circumvent it and no evidence is destroyed. It's not about any specific method or process, taking any willful action with such an intent itself is a crime - you're not allowed to try to prevent a warrant from getting successfully executed.


I’m not interested in the password angle and more interested in the “is signal culpable” angle. In which case we could just as easily assume there is no known password or combination. If the evidence ends up destroyed, I don’t think the maker of the safeguard is responsible. That’s more my point.

Whether or not Signal is making a valid “safeguard” is an interesting question as you point out. I think you’d be on thin ice, but I’d be curious to see how that plays out in court.


Now you're asking the important questions. We're getting to a point where we have so much of our personal memory, effects, and essence on our phones that there is no reasonable way to treat the data on our devices as separate from our minds. That implies the 5th Amendment should apply to electronic testimony, which the judiciary will fight tooth and nail. This will have to be tested sooner rather than later as we get closer to realistic and functional brain/computer interfaces.


Sure, but the LEO has to ensure that their tools are not broken. This is not anyone's responsibility but theirs.


Yes, I agree that LEO with authorization to seize and search someone’s property (with a warrant or similar due process) is authorized to utilize a forensics tool chain that adheres to application security best practices (without which the chain of custody is tainted). None of my comments should be construed as supporting the evasion of a legal law enforcement act.

I take issue with LEO overreach when it occurs and vendors shoveling garbage (paid for with tax dollars) to the justice system.


IANAL but I’m pretty sure that’s not how it works. A judge won’t throw out evidence just because a tool doesn’t “adhere to application security best practices”. They’ll need to be convinced that the specific evidence they’re reviewing is compromised or unreliable.


Agreed! I’d like to see it tested in court. Tools used in the justice process must be held in the highest regard considering someone’s freedom hangs in the balance.

The courts forced a breathalyzer manufacturer to release their source code, so there is precedent for critical review of such tools.

https://nccriminallaw.sog.unc.edu/breathalyzer-source-code/


I would like to see that as well. But I think that sidesteps what I’m saying, which is that it’s probably not sufficient to show that the tool used to gather evidence might, under certain circumstances, be susceptible to malicious interference.


Nope. As soon as one case has reasonable doubt introduced, it's precedent. Now, judges will balk at backpropagating it to previous cases decided before the exploit possibility was made public knowledge; however, one could make the case that a recheck effort should be initiated, because in the grand scheme of things, Moxie is technically a disinterested party. Those who have a direct stake may have been keeping this potential capability in their back pocket for strategic use.


Well, we won’t know until this actually gets tested in court. But frankly, I’m putting my money on the analysis from the Stanford lawyer with subject matter expertise, not random armchair lawyers on HN.


Maybe in the USA. In Europe, evidence in criminal court cases needs to adhere to standards such as chain of custody. Which Cellebrite obviously fails given this research - their tool's output could be manipulated.


And someone could have opened an evidence bag and resealed it, but my guess is you don’t see courts throwing out evidence unless there’s reason to believe that actually happened.


There is, in fact, a current court case around a Sheriff's deputy accused of planting methamphetamine-related paraphernalia in over 162 cases. See the ongoing Zachary Wester affair.

That's falsely implicating 162 innocent people in felonious crimes, tarnishing their future prospects for life. It resulted in plea bargains being accepted because the accused could not get sufficient legal representation to make a successful defense feasible, and because people completely innocent of the charges didn't want to run the risk of an amplified sentence just for exercising a civil right. Besides being an example of why plea bargains make a mockery of our legal system ("just take this lesser charge that we don't have to really work at proving, so we can be done with it, because think of how bad the sentence will be if you make us work at it"), it demonstrates that even procedures as they stand now are such that an officer/prosecutor can and will exploit their capability to manufacture suffering for those they serve, for personal gain.

The system is currently getting shocked by its vulnerability to the untrustworthy agent. So I wouldn't discount some fundamental reassessments of procedure down the road.


It is a travesty but I think that case works more for the person you’re replying to: it was evidence of misconduct (the prosecutor noticing his body cam footage didn’t match his reports) which called that into question, not just saying someone _could_ have tampered with the evidence.

This will work the same way: if there’s corroborating evidence, it’s likely to be as futile as the original article’s author predicts, but if there is something speculative which has only unconfirmed evidence from Cellebrite it might be enough to get that thrown out.


Actually the opposite: you would expect courts to toss out (and/or: opposition to successfully challenge) all evidence unless chain of custody is guaranteed.

That's a very important part of "innocent till proven guilty".

Your suggestion flips this around, thus leading to "guilty till proven innocent". Perhaps that is how things stand in practice in the USA. I don't think that that is how things should be.


It’s not “guilty until proven innocent” to keep evidence if the defense can’t show it’s been compromised.

As a logical extreme example, you are suggesting that courts forbid eyewitness testimony, since it’s always possible that they are mistaken or lying.


Not GP, but I would highly prefer they give eye witness testimony much less weight than they do today.


> A judge won’t throw out evidence just because a tool doesn’t “adhere to application security best practices”. They’ll need to be convinced that the specific evidence they’re reviewing is compromised or unreliable.

If the tool doesn’t “adhere to application security best practices” then the evidence is compromised and unreliable. Take a wild guess at how many states and other well-funded actors have been quietly deploying their anti-Cellebrite defences in the wild until Signal has made theirs public. If Signal was able to obtain Cellebrite, what is the chance they weren’t?


This could have been a potential line of reasoning, if they would have implemented it, and if Signal were taken to court for it.

The problem is, they wrote this article. Where they say they will put 'aesthetically pleasing' code on installations. But not all installations. So Cellebrite can claim it's a menace (sort of a booby-trap) , and not a protection (sort of a shield).

Another issue: they say they obtained the kit because it 'fell off a truck'. Any judge knows that 'fell off a truck' is a manner of speaking. Thanks to their statement it will be possible for Cellebrite to say "We checked the last 6 months of deliveries, no truck incidents". Cellebrite can try to find civil statutes (fraud?) and ask for discovery. If that is allowed by the judge, Signal will have to show all documents and messages pertaining to the hack, or risk contempt of court or perjury.

This is what the article is about. TL;DR: great work guys, wish you had consulted a lawyer before writing the post though.


Copyright does not prevent lawful seizure of material.


Signal payloads don’t prevent competent, properly implemented seizure and forensic analysis.


Cellebrite sells their hardware to corporate security, law firms, private investigators and despotic regimes and their UFED hardware even shows up on eBay and other platforms for the general public to buy at times.

LEOs are far from the only buyers of this hardware; a good chunk of Cellebrite's user base operates extrajudicially and does not have a lawful right to attempt to access the contents of the phones they use UFED on.


+1. Signal was in exceptional territory with its release, but we've seen an increasing acceptance among sophisticated legal minds of hacking hackers. The blog's analysis seemed pretty half-baked to me.


Cellebrite is not law enforcement property, I'm sure the copyright and code is still owned by Cellebrite. Additionally Cellebrite sells their tools to more than just LE.


In the same way, Linux is not copyrighted to me, but introducing malware onto my computer is a crime against my property, not against the Linux Foundation.


The file doesn't have to damage anything owned by LE/private companies. It could protect files on the device from unauthorized tampering by wiping or encrypting files when it's executed.


> Signal isn't hacking Cellebrite by creating a malformed file that causes Cellebrite's software to implode.

It absolutely is. This is the sort of "technically I didn't break the law!" nonsense that he explicitly called out in the article:

> Trying to find the edges of the law using technology will not make a judge, or prosecutors for that matter, shrug and throw up their hands and say “Wow, that’s so clever! You sure got us.” They won’t reward your cleverness with reddit coins and upvotes and retweets. They will throw the book at you.


This type of revisionist legal power in the judiciary is exactly what Thomas Jefferson and Madison were leery of, as it comes down to legislating from the bench.

When case law can take an inch given in statute, and turn it into a mile or a femtometer, you have a problem.


unfortunately it seems pretty clear to me that it'd be a CFAA violation:

"(A)knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;"

if Signal downloaded Cellebrite-pwning shellcode to its app, that shows intent.. it's "knowingly causing transmission" in the sense that worm authors knowingly cause transmission of their viruses by equipping them with exploits (and are thus responsible for damage wherever the worm goes.)

if that shellcode does anything at all - deleting files, adding files, bricking the device, that falls under the CFAA's definition of "damage," since it affects the integrity of the extracted files.

you could argue that Signal doesn't knowingly intend to infect "protected computers," but unless these updates are, say, geofenced to not hit the US, it'll be obvious to the court that they could anticipate government machines getting hit.

the smartest thing would be for Signal to follow through on delivering these files, but have them be cat pictures. troll Cellebrite and muddy the waters for prosecutors. no CFAA violation, but Cellebrite can't ever be sure (and prosecutors can't ever prove.)


> if Signal downloaded Cellebrite-pwning shellcode to its app

The transmission in this legal context is the other way imo. Cellebrite's device is transmitting Signal data out, and Signal is not intentionally sending data to these devices.


So, let's say for the sake of argument that Signal does download files that are intended to exploit these Cellebrite vulns to users' phones.

The part of the statute we're talking about triggers when someone:

- knowingly causes the transmission of a program, information, code, or command

- and as a result of such conduct, intentionally causes damage without authorization to a protected computer

Notice: this didn't require someone to transmit code to the victim machine: they knowingly cause code to be transmitted somewhere, and that intentionally causes damage as a result. Isn't that what you have here? In our assumed world, Signal's devs have written the app to pull down the exploit to the users' phones, thereby knowingly causing it to be transmitted. I think it'd be hard to claim with a straight face that your Cellebrite-targeting code (that you told the world about) wasn't intentionally targeting Cellebrite.

Under your rule of "you have to intentionally send data to the victim device," what result if you write malware and post it, say, on Facebook: just as you intended, anyone who clicks is infected, but the payload is inert as to Facebook's servers. Are you in the clear because the harmed users all initiated the download themselves?


Are you trying to convince a jury, or just bring charges?


I'm not going to pretend it's a 100% open-and-shut case: the CFAA in its great broadness is a fairly controversial area, and this is a "weird" case. And as always, who knows what a jury will do.

On the other hand, in the hypothetical scenario where this actually happened and damaged some law-enforcement-owned machines, I don't see the average jury being too sympathetic.

It's certainly problematic enough that it's a legitimate concern, I'd say.


But those are two separate actions:

- Signal transmits a code (exploit) and keeps it in its cache, the code is dormant, nothing is being damaged here, it could stay like this forever, no harm.

- Cellebrite transmits Signal's files and cache, including the exploit, and gets hacked by reading it with their scanner.

The key is that the first action is harmless, and the second action is performed by Cellebrite, so you can't blame Signal for it. I don't think these two actions can be considered as one.

And the main difference from the malware scenario is that this Signal code is not meant for reading, it is inaccessible and harmless for anyone except the Cellebrite hackers. The exploit is activated by unauthorized use, unlike the malware.
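
For anyone who wants a concrete picture of that two-step structure, here's a toy Python sketch of the general class of flaw being discussed. To be clear, this is a hypothetical illustration with made-up file names, not Signal's actual payload or Cellebrite's actual vulnerability; it's just the textbook "data gets treated as code when a careless reader parses it" pattern:

    import pickle

    class Surprise:
        # When a pickle of this object is loaded, __reduce__ tells the
        # loader to call print(...) -- i.e. the *reader* executes behavior
        # chosen by whoever authored the file.
        def __reduce__(self):
            return (print, ("the scanner just ran code chosen by the file's author",))

    # Step 1: an app writes the file into its own cache. Nothing runs;
    # on disk it is just inert bytes.
    with open("cache_entry.bin", "wb") as f:
        pickle.dump(Surprise(), f)

    # Step 2: a hypothetical forensic scanner later slurps every file it
    # finds and naively deserializes it. Only at this point does anything
    # execute, inside the scanner's own process, triggered by the
    # scanner's own read.
    def naive_scanner(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # unsafe on untrusted input

    naive_scanner("cache_entry.bin")

Whether the law treats step 1 and step 2 as one act or two is exactly the question being argued here.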


What about the other countries?


A "protected computer" as defined in the CFAA is, to a first approximation, any computer. Particularly if it's connected to the Internet.


Reminds me of "DOS ain't done til Lotus won't run."


This is false, there are laws about booby traps (they’re generally illegal in all jurisdictions). If you plant mines on your property and a thief walks onto your lawn and loses a leg, he can sue you and will likely win with a competent lawyer.

You don’t get to say “well you shouldn’t have been there”


Laws on booby traps specify that the trap be used to cause harm to a living thing. It's not a booby trap if it wasn't meant to harm someone. So the digital booby trap isn't governed by the same laws.


I agree. But that law demonstrates the concept.


However, in many jurisdictions, a trap is only a trap if it is hidden.

For example, you can have barbed or razor-wire fences: it's obviously barbed, and it's your fault if you climb up it and get impaled.

They advertised this, it's not hidden, so even if trap laws governed it (which they don't, because it's not hurting a person), it's not a trap.


Even if it were a trap, it would be a trap for a device, not a person. Like anti-drill pins in a lock, which can damage or destroy a drill bit. I think you could reasonably say anti-drill pins are traps for drill bits, and they're certainly not a violation of anti-boobytrap laws.


booby traps that cause harm are illegal, someone stealing my phone who cuts themselves on a broken piece of glass from the screen can't argue "it's a booby trap"


This is true. Also, if you add hidden buttons that alter the way a data port works to either disable or enable it, or fry a device attached to it, this isn't intentional harm caused to police equipment specifically. It is a deterrent-after-the-fact, a protection, and a personal choice to modify one's own device without targeting anyone in particular and not intentionally, unexpectedly causing bodily harm.


the owner of the device could be charged with tampering with evidence. Signal is, paradoxically, an uninvolved third-party imho (technically uninvolved).


If Signal put this on their home page, in a sort of "download this to fuck with da police" advertising, maybe.

Judges aren't machines that slavishly follow any tenuous reasoning. They're judging; it's in the name. Specifically, they have to judge your intent. In many cases American criminal law requires a 'mens rea', or 'guilty mind'.

https://en.m.wikipedia.org/wiki/Mens_rea


Traps that harm people are illegal. Traps that harm animals are legal under some circumstances (mousetraps are legal basically everywhere afaik.) But traps that harm devices rather than people or animals? Are you sure that is generally illegal?


> If I put a fake USB port on my phone that was a USB zapper to kill the device it's connected to, it would not be illegal and it would be on the people seizing my phone to take responsibility for it.

‘If I put a bunch of laxatives on my sandwich to give the office sandwich thief violent diarrhea, it would not be illegal and it would be on the people stealing my food to take responsibility for it.’

This is the analogy that comes to mind which I’ve always heard to be illegal.


If I put laxatives in my drink because I am having digestive issues, and someone steals my drink I don't think that's illegal. If I choose to store exploits on my own device and someone steals it from me and runs it through a forensic tool that can't handle the files, that's not on me either. At a minimum, the vulnerabilities should introduce doubt as to whether or not the capture is a forensically sound copy. There are too many variables otherwise. For instance, who is to say that some previous device didn't exploit the vulnerabilities?


Intent matters.

If the court (it's not that a laxative issue is likely to come to a full court, but still) believes that you did actually put laxatives in your drink because of your digestive issues, then that's legal; but if they are convinced that you did it with the intent to mess with your coworker, then the exact same action is illegal. In a similar manner, if the court considers it plausible that you did just happen to have that file among various exploits that you store there for a specific reasonable purpose, then that would be legal, but it is illegal if the court considers it likely that you placed the exact same file there with the intent that it will destroy evidence - perhaps based on some other evidence, such as your online discussions on this topic and the timestamps of downloading and placing that file in relation to whatever other crime they are investigating.

Yes, there are many variables in play, but it doesn't make the problem undecidable, if it comes to court, your lawyer and the prosecution will point to them and the jury will decide.


are you arguing about what the law should be or what the law actually is?


What the law is can't be realistically argued. What the law is is a function of what a prosecutor, jury, and judge collectively are willing to let a conviction go forward on.

This is what bugs me about common law. You can't take a statute at face value anymore once a sufficient amount of case law comes into the picture, and there is no active effort to reconcile the original statute with the reality of the case law it spawns.


I have not seen anyone state what the law actually is. Just a lot of FUD about what I am not allowed to do with my device. If you are a lawyer and can break down what I got wrong and why, I am happy to listen.


I feel confident that if you left a conspicuous note on the sandwich that said "Warning! This sandwich may contain laxatives! Do not eat!" and the thief disregarded it and ate the sandwich anyway, then it would be entirely legal.


I am not a lawyer or even particularly well-read on this specific scenario, but my read of the spirit of the law is that (if it indeed contains laxatives) it might still be illegal. What matters is not whether you sprang a surprise to one-up someone who was violating your rights or they had a fair chance to know what exactly the consequences would be, but that you deployed the laxative with the intent of harming the thief, and most modern legal systems do not like vigilante violence (even if it's "cute" vigilante violence like giving someone diarrhea). For an intuition pump, I'm not sure there is a significant difference between the note in question and "Warning! If you take this sandwich, and I know who you are, I will personally hunt you down, wrestle you to the ground and force-feed you laxatives!" as far as the law is concerned, but would you expect to get away with this act just because the target previously stole your sandwich and you warned them?

(On the other hand, I feel like the zappy USB port may actually be easier to get away with, especially if you say your threat model was corporate spies or criminals trying to steal your password, because "violence against tools" does not seem to be put in the vigilante violence box. Those special materials they have for safe doors that are designed to damage angle grinders (https://www.newscientist.com/article/2249275-material-that-c...) are not illegal.)


I know the old laxative-in-the-fridge trick has been a standby of office etiquette enforcement for longer than I've been alive.

The court follows the principle of clean hands. A thief is not going to have a compelling case against someone when they are throwing the first stone by stealing someone's sandwich. It's an interesting take, and I'd have to dive into actual case law to even determine if the test scenario is apocrypha or not. However, I doubt the laxative in sandwich argument would be compelling to a judge when there are much more relevant and easy to reach challenges to overcome.

- I.e., state-mandated weakness to exploitation by Law Enforcement (you can't defend yourself from exploits used by Law Enforcement, which you are also not allowed to know about under Executive Privilege)
- The ability of the Government to ensure their tooling meets chain-of-custody-preserving standards
- Etc...

As usual, not a lawyer, just read a book on legal research and reasoning once.


Yes. I expect that if I made myself a Miralax sandwich and labeled it with a huge caution label that I would be fine if someone decided to eat it anyway. It would be trivial to show that I had no intention of harming someone.


You would probably still be sued anyway, because why would a reasonable person make a laxative sandwich? The note is obviously a joke.


You are discriminating against the illiterate and those who do not read English, or have vision issues.


The analogy breaks down because the USB zapper isn't intended to cause bodily harm, unlike a laxative.


There's a distinction between setting booby traps that fire shotgun shells and modifying one's device to make it trickier to access. In one case, it may be necessary to take shelter in a random person's house (a cabin in the woods during a blizzard), while in the other the point isn't necessarily to destroy police equipment but to prevent personally-unauthorized extraction by anyone.


That's true, but I didn't put the USB zapper to harm anyone. I use it for testing USB grounding and some officer stole my phone.


Blatantly lying about your intent doesn't make you more innocent in court.


I wouldn't find a website where professional lawyers opine on startups and programming especially compelling.

I don't find HN threads where tech folk opine on what their opinion of how the law should be interpreted to be especially compelling either.

This is especially true here, where I note that the author of the post folks are commenting on has incredibly notable credentials, and frankly it's somewhat ridiculous for lay-folk to be arguing with someone with such bona fides:

"Riana [Pfefferkorn] was the Associate Director of Surveillance and Cybersecurity at the Stanford Center for Internet and Society. Prior to joining Stanford, Riana was an associate in the Internet Strategy & Litigation group at the law firm of Wilson Sonsini Goodrich & Rosati, where she worked on litigation and counseling matters involving online privacy, Internet intermediary liability, consumer protection, copyright, trademark, and trade secrets and was actively involved in the firm's pro bono program. Before that, Riana clerked for the Honorable Bruce J. McGiverin of the U.S. District Court for the District of Puerto Rico. She also interned during law school for the Honorable Stephen Reinhardt of the U.S. Court of Appeals for the Ninth Circuit. Riana earned her law degree from the University of Washington School of Law and her undergraduate degree from Whitman College."


Arguments and opinions should be up for discussion regardless of who the author is. No one is questioning the validity of the author's interpretation of the law itself.

It sounds like you're making an appeal to authority rather than an actual point about the article.

The discussion isn't about how a law is being interpreted. This blog article is about how the Signal article can be interpreted by a tech informed lay person vs a judge and the security theater surrounding it.


She framed it as a personal opinion from the very start, where she sought to impress upon us that she may never be hired again by Signal after this post. I thought in this case the in depth legal analysis didn’t add anything to the arguments she was trying to make, though maybe helpful background for some. I don’t think anybody seriously thought Moxie was trying to or had any chance of getting any criminal convictions thrown out, especially not anything concluded before the hack was public! So most of it was pretty moot. HN is well within its lane talking about the substantive points she was going for. And on those, I found her a bit heavy on appeals to “duh” like the following:

> Basically, “I’ll show you mine if you show me yours.” That is not generally how vulnerability disclosure works, and AFAIK, Cellebrite has not taken them up on the offer so far.

This was not an attempt at responsible disclosure, nor was it a specific exploitable disclosure at all. It was a wake up call to everyone, her included, that law enforcement tech is just as shitty as every other kind of tech. Her ideas about how things generally work are not really relevant, but that was literally all she had to say about that. Then back to the perfectly good lawsplainer which formed the vast majority of this opinion piece.

Also, what judges are going around being offended on someone else’s behalf, on the not-court-appropriate cutesy language used outside court in the course of vigorous public debate, by someone who is not even a party to the hypothetical proceedings she discussed? Yes, judges don’t like it when you get cute with them. We get it, you know judges, but this was not the same thing at all, the blog post was not a court filing. It just demonstrated the proposition that Cellebrite evidence was unreliable until proven otherwise. It said: “all ye who are affected by this, start your engines”. It certainly made her run around in circles trying to analyse the implications. That was the point.


Appeal to Authority.

Just because you have credentials does not mean you infallibly know your ass from your elbow. It means you know how to apply a process to an end, and can be relied upon to reproduce someone's idea of that process.

Meanwhile, real and substantive contributions come from those never privileged with having someone else in a position to vouch for them.

Let the facts and results speak for themselves. Which in this case won't happen till the first case gets exercised well. Regardless of how it resolves, everyone else has room to opine, as the law has stake held in it by all of us.


Appeal to Authority - "a form of argument in which the opinion of an authority on a topic is used as evidence to support an argument"

Absolutely, and you realize that when it comes to legal matters that's exactly why we have lawyers (like the OP post author) and why lawyers spend years becoming lawyers so we pay them stupid amounts of money to interpret and opine for us on what a judge (or jury) will think of a given case? And why we don't consult people who flip burgers or drive taxis what their opinion about the same case is.

Where people are getting confused here is the difference between having an opinion on what you think legislation should be around evidence tampering (public policy) vs how a judge or court would decide on this specific issue given the laws as they are on the statute today (law).

What the OP wrote about is this specific case. How lay-people in this thread think a court would decide on Moxie and Signal's actions, if brought to court, is frankly irrelevant, especially when arguing with someone who is highly qualified. The fact that people here don't get this is the very point I'm making - you're not lawyers.

Matters of the law are all about Appeal to Authority, I don't understand what the problem is with that (have you never paid for a lawyer before??). Matters of public policy are for the public, there's a subtle difference.

Sorry to be just replying to your thread salawat but this applies to most of the comments here.


Laws aren't a black box that only lawyers and judges are allowed to discuss and interpret. The difference you're making doesn't make sense especially when you look at criminal law and the concept of having a jury.

The fact that a significant criminal trial involving Signal, Cellebrite, and the CFAA would call in an expert witness also is worth remembering. The kind of expert witness that would be needed to break down and explain the "hack" and who also would visit this thread to read or comment.

The situation hasn't been tested in court and no helpful precedent exists, otherwise it wouldn't really be something that needs a discussion.


I disagree with this. I think it's very important what tech people think about tech laws and, more generally, what people think about laws.

After all, laws are here to protect what the people consider important. Credentials are not necessarily the most important factor here.


I expected the appeal to authority to be followed up with some counterpoint but no...

No one needs to question the law portions of this to question the underlying premise.

Saying things like "this is bad because Cellebrite is currently being used on rioters" right after you claim what Signal may or may not have done will have no effect on evidence is a flimsy argument you don't need a law degree to oppose.

Ditto for implying Cellebrite should somehow be seen in a positive light because by... enabling and normalizing the invasion of privacy it... somehow preserves privacy?

As if politicians aren't more likely to wave the successes of Cellebrite as exactly why backdoors should be required than the opposite? And even worse, wave the failures that naturally occur as reasons for backdoors?


The standard in U.S. criminal courts is "beyond a reasonable doubt."

All I understood Moxie's original article to be doing was sowing that seed of "reasonable doubt." Is it now reasonable, based on Moxie's article, to doubt that information obtained by a Cellebrite device from a device running Signal is reliable? If I were a juror, I would probably think so.

That doesn't at all mean someone couldn't be convicted on the strength of other evidence, but if the primary evidence the prosecution relied on was Cellebrited off a phone running Signal, I'd have some trouble trusting it enough to render a guilty verdict.


Multiple legal experts have chimed in, all suggesting that this is unlikely to impact cases. People on HN might assume that the existence of a vulnerability strongly implies that it will have been exploited in every case where it is relevant, but that's not how normal people think, and in this case, the normal people are closer to the truth than the nerds are.


But what about the vulnerability as an indicator of the general quality and reliability of the tool? Casting doubt as to the ability of the tool to generally be accurate and specifically maintain proper chain of custody documentation would seem to be a reasonable legal defense tactic.

I would expect a defense lawyer to say something like "The tool so confuses its input that it can mistake message data for its own internal instructions. How certain can we be that it has properly analyzed its inputs and maintained the necessary chain-of-custody metadata, and provided adequate protections against evidence tampering? If a police officer were unable to tell the difference between his or her own thoughts and things he or she had read, we would dismiss him or her as a reliable witness."
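
For a concrete sense of what "confuses its input" can look like, here's a minimal, hedged Python sketch of a made-up record format and a naive parser for it (nothing here is Cellebrite's actual format; it's just an illustration of how one malformed field can silently desync everything that follows):

    import struct

    # Hypothetical format: [4-byte big-endian length][payload bytes], repeated.
    def parse_records(blob):
        records, offset = [], 0
        while offset + 4 <= len(blob):
            (length,) = struct.unpack_from(">I", blob, offset)
            offset += 4
            # A robust parser would validate `length` against the bytes
            # remaining; a naive one just trusts it.
            records.append(blob[offset:offset + length])
            offset += length
        return records

    good = struct.pack(">I", 5) + b"hello" + struct.pack(">I", 5) + b"world"
    print(parse_records(good))  # [b'hello', b'world']

    # One record lies about its length; every later record is misread or
    # dropped, and the parser raises no error at all.
    bad = struct.pack(">I", 7) + b"hello" + struct.pack(">I", 5) + b"world"
    print(parse_records(bad))   # [b'hello\x00\x00', b'rld'] -- silently wrong

That's the flavor of the argument: not that any particular message was forged, but that the tool can be led astray without noticing, which is exactly what you don't want in something producing evidence.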


That argument has never had the lightest effect on the (ongoing) use of polygraph testing by law enforcement agencies, nor on the use of polygraph results in obtaining convictions by courts. And yet it is well and truly established that polygraph testing is snake oil.


Polygraphs are not generally admissible in court.

https://www.justice.gov/archives/jm/criminal-resource-manual... (just one example).


I stand corrected. Are they still so widely used on employees by Three Letter Agencies?


If that argument was going to be meaningful in court, it would have applied just as well to EnCase. Never did, though.


>I would expect a defense lawyer to say something like "The tool so confuses its input that it can mistake message data for its own internal instructions. How certain can we be that it has properly analyzed its inputs and maintained the necessary chain-of-custody metadata, and provided adequate protections against evidence tampering?

To be clear, all you accomplish with that statement as a defense attorney is showing that you didn't get a credible enough expert, as any computer scientist would point out that this is the fundamental character of the von Neumann machine architecture, the very model of computing that most computers are designed according to, and that most programs are written to run against. They would then further expound that software development has developed methods to mitigate this problem, which minimize the likelihood of such architecture quirks being exploited, and which most certainly lead to a state of affairs where any such vulnerability could be identified via a source code audit. This would open the door for the defense to require the prosecution to produce source code for their tool, to prove to the court whether the vulnerability exists or not.

A good defense would then follow up by asking whether or not there was some way to detect whether there had been a successful exploitation on a device. "That's where things get tricky," the expert should reply, "because if arbitrary code can be run, given enough time, someone could cover their tracks successfully. It is plausible that a mistake could be made - the implementer of the exploit missing a timestamp, not properly serializing something, not cleaning out a log that could then be reconciled with something else - but the possibility of a completely clean alteration, given enough time and resources, is still on the table."

The prosecution would then endeavor, through chain of custody logs, affidavits, data on the device, and possibly comparisons to other cases, to convince the jury that this is all hogwash and the defense is grasping at straws and ultimately full of shit, without conceding the defense's point that if this case is in question, other cases may be too.

Mind you, the brilliance of Moxie's actions is not that he'd get someone off the hook, but that he's now forced prosecutors into a position where, if they want to rely on Cellebrite data as the linchpin of their case, they have to open the door to public scrutiny of the implementation. Of course, this will just be mitigated by law enforcement ultimately engaging in parallel construction anyway.

Or, Cellebrite updates/audits their software to mitigate the vulnerability, or re-implements it on a non-Von-Neumann computer.
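
To make the "could exploitation even be detected" question a bit more concrete, here's a rough sketch of the kind of integrity manifest a forensic workflow can produce at acquisition time. The layout and function names are assumptions for illustration, not any vendor's actual tooling:

    import hashlib
    import os
    import time

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(extraction_dir):
        # Hash every extracted file at the moment the extraction is made.
        entries = {}
        for root, _, files in os.walk(extraction_dir):
            for name in files:
                path = os.path.join(root, name)
                entries[os.path.relpath(path, extraction_dir)] = sha256_of(path)
        return {"created": time.time(), "files": entries}

    def verify_manifest(extraction_dir, manifest):
        # Returns the files whose contents changed since acquisition.
        return [
            rel for rel, digest in manifest["files"].items()
            if sha256_of(os.path.join(extraction_dir, rel)) != digest
        ]

The catch, and it's the same one the hypothetical expert raises above, is that a manifest like this only proves the data hasn't changed since the hashes were taken. If the extraction process itself was compromised, the hashes faithfully describe data that was already wrong.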

Again, not a lawyer, just read some stuff on how to think like one once.


Again, I suggest that if we want to understand how this stuff plays in reality, we'd do better looking to examples of how arguments like this have fared in previous cases, rather than trying to reconstruct this case from first principles. The idea of arguing against the reliability of computer evidence is not a new one; nor are vulnerabilities in forensic software (or, for that matter, the existence of very important major commercial forensics tools that defendants could know about and do vulnerability research on).

Here, by way of example, is the Grugq talking about this idea twenty years ago (presumably: about EnCase).

https://twitter.com/thegrugq/status/1393941106136543232


It seems to me that "a malicious boobytrapped file did it" is really just a subset of "I was hacked", which is neither a new defense nor (I assume) a particularly successful one.


Reasonable doubt matters to a judge and/or a jury. It is quite unclear that such vulnerabilities will raise reasonable doubt. For this to happen, the defense would need to demonstrate that files were actually damaged or modified.

There is other forensic software used for desktops and servers that is very widely used in court cases. There are significant and substantial vulnerabilities there: https://www.cvedetails.com/vulnerability-list/vendor_id-3015...

This type of issue is not new with Signal.


If the primary evidence relied on Cellebrite at all, it should be doubted, period. The exploits involved weren't difficult to develop, and it is reasonable to assume they are now rapidly spreading as independent implementations to nefarious groups, foreign intelligence, etc., if they weren't already there.


What type of evidence are you thinking of? If they found something like child sex abuse material, or chat logs with gang members about trafficking drugs, how would the defense attorney convince you Signal might have planted that?


It depends on the nature of the material. What if the booby trap went and downloaded random files of 100k+ size from some dark web site?

On the other hand, if it's chat with names in it, it's likely not from the booby trap.


You're still going to have a very hard time convincing the jury that Signal booby trapped your phone to download child porn at random from the dark web.

Law enforcement doesn't confiscate and use Cellebrite on phones at random. If they have a warrant to search your phone, they already have some reason for suspicion.

For example, maybe someone accused the defendant of molesting a child, but the police didn't have hard evidence. They use Cellebrite and find some child porn. The defense argues that Signal might have planted it there with a booby trapped file that downloads stuff at random from the dark web. Do you think the jury will buy that?

At that point, why not just say you left your phone unlocked in public and a stranger probably used it to download the child porn?


There are other uses for data outside court admissible evidence with lower standards. Like probable cause for further warrants.


A somewhat tangential point: I think Signal's overall response was quite poor and somewhat concerning. Putting aside the usual discussion (Cellebrite sketch, Signal secure), I think the fact that this got published is evidence that Signal does not have very good self-control; or, possibly even worse, that Moxie does not have good self-control and Signal can't stop him from making snap decisions. Doing this kind of stunt is cool when you're a sole hacker working on your own, but when you run a company that makes software for many millions of people you cannot be this cavalier. There should be someone at Signal whose job is to moderate these kinds of responses, and obviously they either do not exist or are not able to do their job, and that is deeply problematic for the company. The blog post showed that Moxie (dragging along Signal) will go scorched earth against anyone who slights him–I mean, really, does a lazy PR blog post from Cellebrite really deserve this kind of response? They're living "rent free" in your head, dude.

(And, just to be fully clear, my support for Cellebrite/law enforcement in this situation is approximately zero. I just think that Signal could spend their time in better ways than going full nuclear against anyone who pisses the CEO off, which is what happened here.)


It is incredibly disingenuous to frame this as a personal "slight". This is a real security threat to the users of Signal, not some vendetta. Cellebrite sells their tech to oppressive regimes, and those regimes use it against their opponents, who likely use Signal to communicate.


Cellebrite has been operating with their present capabilities for several years. I can see few reasons why Signal would choose to publish a blog post such as this now, other than the fact that a couple months ago Cellebrite wrote a blog post that specifically mentioned their ability to extract data from Signal (which, to be clear, was not a specific vulnerability in Signal). This prompted a very pointed response from Signal, which you can read here: https://signal.org/blog/cellebrite-and-clickbait/. The timing makes it pretty obvious that Signal/Moxie took this personally (to say nothing of the general atmosphere when that blog post was written) and then spent a few months acquiring a UFED and exploiting it, the results of which we're seeing now.


It's not a vendetta. This blog post got Cellebrite to deploy a more secure update to their software. Law enforcement agencies will be more secure, plus they will probably refrain from hacking Signal, which is a net positive for its users. Publishing that blog post was a sound business decision.


> There should be someone at Signal whose job is to moderate these kinds of responses

The article proposes the correct solution to this.

> Signal doesn’t have their own in-house General Counsel. At this point, with many millions of users around the globe depending upon them for their privacy, security, and even physical safety, they really should.

Even if the founder were the most composed person in the world who never made snap decisions, Signal is doing something a lot of people don't like. Those people have a lot of legal power, and Signal needs to understand the playing field.


> ... Moxie ... will go full scorched earth against anyone who slights him ...

And by "anyone" we mean a billion dollar transnational corporation dedicated to putting his users at risk.


Actually, no! I would be a bit more lenient if this appeared to be the case, although the response would still not be great. As I mentioned in a comment further down in the thread, it seems like the trigger for this was not Cellebrite doing its thing against Signal users (which is nothing new) but rather Cellebrite writing a blog post where they claimed that they could target Signal users (which Signal took to mean a claim that Cellebrite had broken their encryption): https://news.ycombinator.com/item?id=27172104


Actually, yes! I didn't mean to participate in an academic seminar on Jungian psycho-analysis. Let's make do with what we know for a fact.


If Signal weren't centralized, it would just be a server admin/client author you'd be worried about; but now, if he does something really stupid, you'll have to rebuild your social graph somewhere else.

This is why I absolutely refuse to sign up for any more whatsapp/signal style apps.


> but when you run a company that makes software for many millions of people you cannot be this cavalier.

Why? Signal is a nonprofit, has no investors to provide returns unto, has no subscribers or paying customers. Why can't they take a moral stance in the market? Who is it hurting?

I felt their post was entirely fair/fine. It's not like they've shareholders or revenue to worry about. They're free to do what they want. Even the client and server are free software, if Signal itself imploded tomorrow someone else could release a new fork with a different API URL configured and stand up a server somewhere.


> No, intentionally spoiling evidence — or “spoliating,” to use the legal term — is definitely not legal.

> If they’re saying what they’re hinting they’re saying, Signal basically announced that they plan to update their app to hack law enforcement computers and also tamper with and spoliate evidence in criminal cases.

If you set up an anti-hack tool on your phone, you have no way to know if it's going to be the police hacking it.


This seems immensely reasonable, but if this post is to be believed, our legal system values protecting intel-gathering tools more than an individual's expectation of privacy. Depressing but not surprising.


I do not believe the majority of judges would buy this.

At least I sure hope not.


Even something that just corrupts all the data on your own phone?

There are lots of tamper-resistant devices that will self-destruct.


If what Signal is doing is illegal, then how do TV satellite providers get away with pushing malicious updates on their feeds? Before you claim that Cellebrite is only used by law enforcement: Signal got their hands on a device, and Cellebrite sells to other governments besides the US. I should be legally allowed to protect my device from a foreign adversary stealing my company’s trade secrets.


They owned the devices they were hacking with their feeds. That’s why the hack worked to begin with: they controlled the code on the cards.


I’d claim they didn’t own the cards. When they send out new cards to replace the H cards, they didn’t have us send them back. They abandoned the property and no longer can claim ownership.


Since they abandoned the cards in part by disabling them, I'm not sure the Law of the Briny Deep does a lot of lifting for you here.


Besides being abandoned when they upgraded, you actually owned the card, because the cards came with receivers when you purchased them from brick-and-mortar stores. There was no agreement on the purchase; you called to activate them. Satellite providers also fried receivers by burning through all the writes on the flash memory that held the firmware, so that $500 DirecTiVo receiver could get hosed during an exploit.


The cards literally had "this is the property of NDS and must be surrendered upon request" printed on them. You're reaching.


That doesn’t mean anything if I paid $200 for a receiver at Best Buy and it came with a card. It is legally mine.


Turns out it isn't.


1st sale doctrine says it’s mine.


Besides the technical and legal points raised it's in the last paragraphs that the most important point is raised:

> The timing looks kinda fash. I also think the timing of Signal’s blog post was suboptimal. Why? Because Cellebrite devices were used in some of the criminal cases against the Capitol rioters, to extract data from their phones after they were arrested. It’s still early days in those criminal prosecutions, those cases are still ongoing, and there are hundreds of them. (I don’t know how many of them involve Cellebrite evidence.) The DOJ is already stretched extremely thin because of how damn many of these cases there are, and if even a fraction of those defendants got Cellebrited-upon, and they all decide to file a motion to examine the Cellebrite device and throw out the Cellebrite evidence, that will add further strain.

> Now, don’t get me wrong, I’m no fan of the DOJ, as you may have guessed by now. But I also don’t like seditious fascists, and I think the people who tried to violently overthrow our democratically-elected government should be caught and held accountable. And the timing of this blog post kinda makes it look like Moxie — who is famously an anarchist — is giving the fascists ammunition in their legal cases to try to get off the hook. As said, I don’t think it’ll work, and even fascists deserve due process and not to be convicted on the basis of bug-riddled spy machines, but it’s helpful to them nonetheless.

It's the usual knife/gun conversation again, but indeed - as in the author's own words - this likely won't get her any more work with Signal.


I would advise you to consider any BLM protestors as well as the Capitol protestors. You're being a bit one-sided to your countrypeople.

The Law is blind, but the police are not. You can't buy into investigative techniques being employed in some cases and not others. Any type of edge will be exploited as early and as often as possible to build a case.

Regardless, those optics are kind of silly to apply, as they are largely irrelevant to the legal question at hand, even if they might be relevant at higher levels of the political machine.


Signal announced they have the know-how to disrupt any Cellebrite extraction and then likely sprinkled "poison pill" files in their app data. So if a Cellebrite user were to extract data from a Signal user's phone, those files could corrupt the Cellebrite extraction.

This simply disrupts trust in Cellebrite. Nothing illegal. All Moxie is saying is "Don't want potentially corrupted data? Don't use Cellebrite." It absolutely is retribution for Cellebrite coming at Signal.


I guess this is different in the USA, but in a lot of places, spreading FUD in an attempt to harm a business could be considered defamatory. It might not even matter if what is being said is true if there is no positive social purpose to the statements.


There is a positive social purpose to his statements. Innocents may be coerced into confessing or pleading guilty to crimes they didn't commit.

Ensuring not one innocent gets steamrolled by the judicial system is a positive social purpose.

Whether or not you consider it compelling enough to be worth it is another question. I unwaveringly acknowledge the positive value to what he has done. If you can't come out and prove there is nothing dirty or worthy of doubt with the tools you're using to strip someone of their freedom and liberty, you have no business using it. Period. As a society, we've compromised on a high standard for this far too long.


Is it FUD if he shows a PoC?


I disagree and this was far too much writing to get your point across. Signal isn’t Facebook; they don’t have to act (or try to be) politically correct. Cellebrite deserved what they got, and if this writer understood how painful vuln reporting is they would understand why a (semi) full disclosure release works and when to use it.


If it’s illegal to secure your own property, then something has gone badly wrong with society. Time for open resistance and support for regime change I think. Legal scholars can then engage in beard stroking around the new laws. In a democracy, the laws are not king; the people are. Time to re-learn that lesson.

Cellebrite no doubt thought that hoarding vulnerabilities made them super smart, forgetting that everything they need to operate is now riddled with vulnerabilities that someone else has hoarded.

Doesn’t affect their business model though, which just requires bamboozling a jury of people who think the word crypto means ‘pyramid scheme my uncle invested in’.


slightly bizarre to see the author explain that Cellebrite have major contracts w/ US law enforcement & ICE then go on to say "but they have bad clients too!"

& I don't like the Capitol rioters either, but I don't see how you can evince a belief in due process & the "rule of law" then criticise someone for potentially providing exculpatory evidence to a group of defendants you dislike. you can't have it both ways. and the implication that someone being an anarchist makes them more likely to want to help out fascists is odd, to say the least


I am not a fancy legal expert so I only have two things to say:

1. Abolish the CFAA. All of it. It is unsalvageable. Nothing good has ever come from it.

2. I will never listen to Stanford or anyone associated with Stanford about ethics. You profit from patent trolls. You have zero moral high ground.


Jesus, what a depressing post. We must allow the existence of shitty "backup" software because otherwise they'll just mandate backdoors? Have you already given up?

How about: citizens have an expectation of integrity in using their computing devices that the state may not infringe upon. The state buying these tools and using them, in what is often a constitutional gray area, is harming all of us by making our devices less secure.


For real, Stanford should be embarrassed.


The post seems very unprofessionally written. It is unfortunate that they published this trash!


> But Cellebrite has lots of customers besides U.S. law enforcement agencies. And some of them aren’t so nice.

> But a lot of vendors [...] sell not only to the U.S. and other countries that respect the rule of law,

They lost me at the presumption that the USA respects the (international) rule of law and has nice law enforcement.


From my experience, most _independently_ owned cell phone retail stores (Verizon, Sprint, AT&T, etc.) have several Cellebrite devices on site, which are used daily to aid in migrating data from old devices to new ones.

As I understand it, Cellebrite devices are not exactly hard to acquire.


They have multiple devices. The thing T-Mobile buys to help you upgrade your phone isn’t the same one the FBI buys to hack it.


Does the Cellebrite device exploit vulnerabilities in iOS? My understanding is that iOS shouldn't ever allow something plugged in over USB to read data on the device like this. I've been assuming the only reason they continue to work is that they found some unpatched vulnerabilities in iOS, and that Apple hasn't been able to obtain a Cellebrite device to reverse engineer so they can fix the bugs.

But if Signal got one, I'd be surprised if Apple couldn't. (Or if Signal wants to really stick it to Cellebrite, they should loan their device to Apple so Apple can fix the security holes that Cellebrite exploits.)


That's why Apple added Settings -> Face ID & Passcode -> Allow access when locked: USB Accessories. They set it to off by default because they know there is some vulnerability involving USB, but they don't know what it is.


The author mentions that Cellebrite is only able to extract data from the phone as if it was the equivalent of someone taking screenshots of everything while going through an already unlocked phone, just in an automated way. So my immediate guess is that they’re not really exploiting something, but who knows!


Doing just that in an automated fashion requires at least one iOS exploit. Cellebrite, of course, has many.


> “I’ll show you mine if you show me yours.” That is not generally how vulnerability disclosure works

My understanding was that people will responsibly disclose information to protect the public.

Signal disclosing these vulnerabilities would have mostly protected Cellebrite, who have made it abundantly clear that the good of the general population is none of their concern and whose business model is based on keeping everyone insecure for their own profit. Now that is how responsible disclosure doesn't work.


And the Cellebrite/Signal sockpuppet/commercial continues... Can anyone please wake me up when actual (0day) code is reversed? Because so far it's all speculation, theorycrafting, and blah blah - in case somebody didn't notice.


> Plus, admittedly I haven’t actually looked into this at all, but it seems like it could get Signal kicked out of the Apple and Google app stores, if the companies interpret this as a violation of their app store rules against malware.

This is an interesting question, since Apple/Google are actually on the same side as Signal on this one (vis a vis Cellebrite). If Signal is being vague/coy enough about what they're doing, will the app stores overlook the possible bad behavior on the grounds that "the enemy of my enemy is my friend"?


> This blog post was plainly written in order to impress and entertain other hackers and computer people. But other hackers aren’t the real target audience; it’s lawyers and judges and the law enforcement agencies

Says who? The intentional ambiguity may have had multiple audiences, quite possibly including computer people that handle the use of these products, their procurement, or their adversarial study.


> No, intentionally spoiling evidence — or “spoliating,” to use the legal term — is definitely not legal.

> Neither is hacking somebody’s computer, which is what Signal’s blog post is saying a “real exploit payload” could do. It said, “a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.” All of those things are a violation of the federal anti-hacking law known as the Computer Fraud and Abuse Act, or CFAA, and probably also of many state-law versions of the CFAA

I'm not sure if that will hold in court. You can argue that the Signal app has built-in anti-hacking defenses. A more common case would be that the Signal app detects that it is being hacked by Cellebrite and self-destructs (i.e. deletes all data) -- that's what an iPhone does if you make too many passcode attempts. In this case Signal jokes that it might even counter-hack, but since it's a defense to being hacked in the first place, it shouldn't be illegal.
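
For the "self-destruct" comparison, a minimal sketch of what that defensive behavior looks like in code (hypothetical only; this is not how iOS or Signal actually implement it, and the class and names here are invented):

    # Hypothetical sketch of "wipe after too many failed unlock attempts",
    # the same defensive idea as the iPhone passcode behavior mentioned above.
    import secrets

    MAX_ATTEMPTS = 10

    class LockedStore:
        def __init__(self, passcode, data):
            self._passcode = passcode
            self._data = data
            self._failed = 0

        def unlock(self, attempt):
            if self._data is None:
                raise RuntimeError("data already wiped")
            if secrets.compare_digest(attempt, self._passcode):
                self._failed = 0
                return self._data
            self._failed += 1
            if self._failed >= MAX_ATTEMPTS:
                self._data = None  # destroy only our own data, purely defensively
            return None

    store = LockedStore("482913", b"secret messages")
    for guess in ["000000"] * MAX_ATTEMPTS:
        store.unlock(guess)
    print(store._data)  # None: the store erased itself after repeated failures

The legal question in this thread is whether the same defensive framing survives once the "defense" reaches out and modifies someone else's machine, rather than only erasing your own data.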


"might counter hack even, but since it's a defense to being hacked in the first place, it shouldn't illegal. "

Perhaps it shouldn't be illegal, but as of now it very definitely is illegal. There are no 'self-defence' clauses in current computer security laws, and any "counter-hack" is exactly as illegal as an equivalent "direct-hack"; it literally does not matter if you did it as a defense.


If Moxie and team get taken to court, I'll happily donate to their legal fund... and I'm sure a lot of other people here will too.


I do not have to help a third party possibly incriminate myself, in any of the jurisdictions where Signal is used.

I might also be obligated by constitution, law, or international statute to protect information (e.g. as a health or legal professional), something that border officials often like to ignore, possibly even incriminating themselves in third countries and in the US.

I am also not responsible if a tool a third party provides is defective by choice and this defect causes damage - especially if the defect is already known and well documented through responsible disclosure (thx Signal).

If I then warn them (in a general way) and they plug in anyway...

might be an interesting case, but it should be well prepared, ideally with the help of a third country's lawyers' association as a test case ;)


Well now that folks know how the software works, I'm surprised nobody has set up a public repo with similar files-- especially since the exploits for a lot of these older vulnerabilities are already out there.


Placing aesthetic files needn't be illegal; see e.g. https://en.wikipedia.org/wiki/Intelligent_banknote_neutralis... for a similar situation with banks, banknotes, and (presumably aesthetic) dye packs.

This is pretty similar. Only hostile breach attempts are thwarted.

It may need precedent or legislation to be fully legal, however. I would hope for EU-wide legislation to that effect in short order.


It's not pretty similar to banknote neutralisation.

First, the thwarted breach most likely isn't "hostile" - we'd assume that the law enforcement people running the Cellebrite tool are running it on a device in their lawful possession; and second, unlike in the money case, it's quite explicitly a crime to try to thwart their attempts: obstruction of justice is pretty much a universal concept.


I'm not sure that that is a safe assumption.

Do remember that Cellebrite has more customers than just your local domestic (law enforcement) agency. In fact, that would obviously just be one customer. It thus follows logically that all the other customers are not your local domestic law enforcement agency.


>First, the thwarted breach most likely isn't "hostile" - we'd assume that the law enforcement people running the Cellebrite tool are running it on a device in their lawful possession; and second, unlike in the money case, it's quite explicitly a crime to try to thwart their attempts, obstruction of justice is pretty much a universal concept.

Uh huh.

Tell that to the Uyghurs currently lawfully detained in reeducation camps and being systematically genocided. Tell that to any person wrongfully imprisoned because a prosecutor wanted a slam dunk instead of making damn sure that the facts line up. Tell that to those of Jewish lineage, or who were on the wrong side of the legal/political edifice in Germany between the years of 1940 and 1945. The same for those of German or Japanese descent in the United States, those of Native American descent since before the United States was formally a thing, or those dissidents that crossed the ocean to get away from the legal reality that shaped their time.

There is an American cultural value that places that which is ultimately the moral right over that which is legal. I cannot for the life of me figure out how it seems to have gotten so wantonly diluted over the years, but I'm tired of hearing the argument that what is right is somehow constrained by what is legal. It's the other way around, and consequently in flux, resulting in a moral imperative to exploit an inflexible legal system with as much care, due vigilance, and scrutiny on all sides (prosecutorial/law enforcement conduct, criminal activity by those unapprehended, the treatment and rehabilitation of those that were apprehended, and the public's overall safety from the other three) as possible, if the values of Liberty and Freedom mean anything at all.


if you're aware of the details of the situation feel free to skip ahead to part IV, and save yourself a lot of reading. the first 3 parts are mostly just summarizing what happened


It looks like the linked post may have been taken down. There is a mirror here https://web.archive.org/web/20210513030656/https://cyberlaw.....


Regardless of the legal position, I would be very happy if every device I owned would attack, corrupt, disable, etc. any system or software that attempts to use the device or access its contents without my authorisation.


So the government should have the right to rifle through your shit and can deny you access to e2e encryption, and we have to put up with Cellebrite or else they'll just start banning encryption and mandating backdoors, along with the horseshoe theory that the Anarchist is helping out all the Fash. Oh, and he's not directly worried about kiddie porn; he just accepts that governments will get whatever they want in the name of kiddie porn.

This is how moderates get you to give up your rights, because they'll convince you that if you don't give up some of your rights, you'll wind up losing all of them, and nobody wants that to happen. It is very Good Cop / Bad Cop.


Also, the author seems to forget that Cellebrite's market is not just the US but also many other countries, some with far less respect for human rights (not saying the US has a perfect track record on this front). Are all the journalists / activists / opposition in these countries not worth some consideration?


That isn't a moderate take at all in my opinion. That's an extreme authoritarian assuming they can bend anyone to their side. Even a moderate realizes there is a line that should not be crossed.


Where is the ‘software bill of materials’ the US president’s executive order requires of government software vendors, like Cellebrite?

Is this applicable?


Signal hasn't hacked Cellebrite, to the best of my knowledge.

They just pointed out that the software is poorly constructed in a blog post.

Any claims otherwise are premature.

Even their claims that they might put such exploit files on Signal devices were written in such a way as to be plausibly deniable.

Until and unless a Cellebrite device is known to have been exploited by such a file, we are speculating idly.

(FWIW, Signal doesn't even need to deploy the files now to have tainted the evidence that comes out of any Cellebrite device. The blog post was sufficient.)


>My guess is that it’s pretty rare that the Cellebrite evidence is the dispositive crux of a case, meaning, but for evidence pulled from a phone using a Cellebrite device, the jury would not have voted to convict.

Let's also consider cases where warrants would not have been approved if the integrity of the data from a Cellebrite extraction had been questionable. I could see some defense lawyers challenging the validity of warrants on this basis.


Even for warrants based on bad information, as long as the people authorizing the warrant thought the information was true, that's not enough to invalidate the warrant. As long as they thought the warrant was legal at the time, the evidence gathered won't be excluded.

https://en.wikipedia.org/wiki/Good-faith_exception?wprov=sfl...


In the U.S., warrants only require "probable cause"[1], not evidence "beyond a reasonable doubt". The fact that there's a tiny probability that some data could have been corrupted probably wouldn't affect the validity of a warrant.

Cops can get warrants based on much less reliable sources, such as a statement from a witness or an informant.

[1] https://en.wikipedia.org/wiki/Probable_cause


No way. "I put a file on my device whose only purpose is to obstruct justice" does not cancel probable cause for a search.


The issue is that the Cellebrite device may have scanned someone else's device that contained a malicious file, meaning that any forensic evidence collected from your device is questionable.

But IMHO (IANAL) this won't actually have an impact -- the defense would need to have some evidence that the particular Cellebrite machine was hacked and that this had an impact on the data taken from that particular device. "It could've been hacked" is purely speculative and isn't nearly enough to get evidence thrown out. I mean, courts routinely accept phone screenshots as evidence -- "it could be a fake screenshot" is a much more likely thing to happen, and yet you'd still need to provide evidence that the screenshot might be fake.

Not to mention that the bar for warrants is even lower -- you only need "probable cause" in the US.


Reasonable doubt.

The defense doesn't need to prove anything except that the prosecution hasn't done their job of assuring they've chased down everything they have to.

The prosecution, if relying on Cellebrite, can no longer just say "we dumped the contents of the phone" without picking up the additional investigative burden of proving their chain of custody was not successfully tampered with. That means source code audits, admitting knowledge of the tool into the public record, or doing cross-checks with another tool that isn't known to be vulnerable to an undisclosed exploit, which only holds up until the same type of thing happens to the other tool.


> Cellebrite’s products are part of the industry of “mobile device forensics” tools. “The mobile forensics process aims to recover digital evidence or relevant data from a mobile device in a way that will preserve the evidence in a forensically sound condition,” using accepted methods, so that it can later be presented in court.

>“For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question."

A tool with such a vulnerability, one that can affect past, present, and future uses of it, absolutely calls into question the "forensically sound condition" of the data it produces. One wouldn't even need to argue that they or the person they are representing was the one who could have corrupted the data. It could have been any previous device that was scanned.
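
As background on what "forensically sound" usually cashes out to in practice, here is a simplified, hypothetical sketch of hash-manifest-style integrity checking (not Cellebrite's actual format or code; the function names and file names are invented for the sketch):

    # Simplified sketch: record a SHA-256 digest of every extracted artifact at
    # acquisition time, so later tampering with the artifacts is detectable.
    import hashlib
    import pathlib
    import tempfile

    def build_manifest(files):
        return {str(p): hashlib.sha256(p.read_bytes()).hexdigest() for p in files}

    def verify(manifest):
        # Return the artifacts whose current contents no longer match the manifest.
        return [
            name for name, digest in manifest.items()
            if hashlib.sha256(pathlib.Path(name).read_bytes()).hexdigest() != digest
        ]

    # Tiny demo with a throwaway directory standing in for extracted artifacts.
    with tempfile.TemporaryDirectory() as d:
        artifact = pathlib.Path(d) / "chat.db"
        artifact.write_bytes(b"original contents")
        manifest = build_manifest([artifact])
        artifact.write_bytes(b"tampered contents")  # post-acquisition change
        print(verify(manifest))  # reports the mismatch

The quoted claim is precisely that a compromised tool sits upstream of this: if the machine producing both the artifacts and the manifest can be made to lie, the checksums still verify and the tampering stays invisible.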


Obstruct justice? Cellebrite is often used by authoritarian unjust postliberal states.

I have personally spoken to important figures in these countries with great political power or influence in the economy.

Usually they're very anti-privacy except when it comes to their personal shady dealings.

“Obstruction of justice” is a moral imperative when “justice” is defined and administered by those with the moral character of Roland Freisler.


I love the conflation of Cellebrite, a private Israeli business, with "Justice" - I presume around the world?


Anybody got any of that payload software that you could install on your phone to corrupt Cellebrite's data?


as best i can tell, this person wrote too many words to say:

1. cellebrite is ultimately good because it allows governments to spy on, harass, imprison, terrorize, torture, and murder their citizens, esp their journalist citizens, and

2. moxie used the wrong tone in his blog post.

something tells me this person doesn't think of themself as a typical government hack, which is presumably the only reason this blog post would be interesting enough to HN to show up here?

also interesting that this person thinks that cellebrite only sells their tech to 'authoritarian' governments.

which ones are those?


The post quite literally lists these out:

> But Cellebrite has lots of customers besides U.S. law enforcement agencies. And some of them aren’t so nice. As Signal’s blog post notes, “Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere.”


With that argument, how is what Cellebrite is doing legal? Is it just that they are not responsible for actions taken by their users?


I worked with a guy at a startup who designed and wrote a Mac hacking system that exploited FireWire, because it can read memory directly. FireWire is basically a security nightmare, like several other peripheral interfaces (Wikipedia says "PCI Express, PC Card, ExpressCard, FireWire [yep], PCI, and PCI-X").

Thunderbolt 4 allegedly includes mitigations to prevent arbitrary DMA transactions and the Thunderspy attack.

https://en.wikipedia.org/wiki/Thunderbolt_(interface)#Vulner...

Btw, partial list of USB attacks:

https://www.bleepingcomputer.com/news/security/heres-a-list-...

https://www.sciencedirect.com/science/article/pii/S016740481...


As an American, I see this as a right to bear arms in the modern day.


IMO law enforcement as a whole is evil, particularly on a global level. So anything that messes with law enforcement, as a whole, is good with me.

I think it's an opinion that messing with law enforcement is _bad_.


I said this at the time: the things Signal was saying it might do were so clearly illegal that it was more for the naive, star-struck blog reader than anything else. It got a lot of play here and on Reddit because they eat this nonsense up. But any lawyer will tell you that by disclosing this vuln the way they did, Signal only opened themselves up to lawsuits.

If they do hire in-house counsel, the first thing that lawyer would tell them is "call Cellebrite and tell them exactly what the vuln is and how to mitigate it."


Why is it illegal to store a file that may cause a buffer overflow in software that should not be reading my data?


Intent matters.

It's prohibited to alter data on someone else's computer systems (CFAA), so what matters is whether that file was placed there with the hope that it would overwrite someone else's Cellebrite database; it doesn't matter if you placed it on your phone or sent it over a phishing email.

And as a separate crime, it's forbidden to destroy or conceal evidence or things that would be used as evidence, no matter if that evidence is files on your phone protected by such a buffer overflow, or a gun that you throw in a river.

In essence, storing such a file is illegal if the jury gets convinced that you placed it with the goal of having that buffer overflow actually happen on someone else's machine.


I would not be the person destroying evidence. If the examiner used a toolkit not riddled with flaws, the files would mean nothing. Also, if I placed the files on my device as a general deterrent (since Cellebrite's forensic software is not limited to just LE usage), it would be difficult to prove I had a specific target in mind.


If you actually placed the files on your device as a general deterrent, then that would be fine - intent matters. However, if you did place them to try to prevent evidence gathering (this is in the context of someone being detained for some other crime, not just a random person) and merely assert the deterrent claim, realistically it would not be that difficult to convince a jury that you intended them to deter LE and not some hypothetical target.

For an exaggerated example: if someone is tried (among other things) for routinely performing some illicit activity (i.e. they clearly had an expectation that they might be arrested and their phone analysed by LE), has prior arrests in which their phone was actually analysed, which they knew about and didn't like, and after one such arrest they just "place the files", then it would be straightforward to get a conviction. In the opposite direction, if a privacy activist using all kinds of interesting features has had this thing on their phone for years and then gets detained for something unrelated to it (i.e. not a case of, say, putting these files on in the morning before going out to do some activism that's likely to get them arrested, but perhaps before going to Saudi Arabia - deterring foreign agents instead of local LE would be a legitimate purpose), then indeed it would be difficult to prove that they had a specific target in mind.


I still am not convinced and we most likely won't come together on this, but I find your arguments well put and appreciate the discussion.


Because your "should not" is the law's "should", in a lawful search and seizure.


That's just silly. If I made my phone incompatible with forensic software, is that illegal too?


This is the problem with common law. You get multiple-inheritance conundrums given enough time.


If that were the case, why doesn’t the same principle apply to Cellebrite? After all, they exploit vulnerabilities in lots of software to deliver their own services and there’s no indication they share their work with the vendors.


The proper perspective is not to look at "exploiting Cellebrite's software" versus "exploiting Signal's software" but to look at who has lawful possession of the specific devices that software is running on.

It's not a crime to exploit vulnerabilities in software developed by someone else, it's a crime to exploit vulnerabilities to do things on systems owned/run by others.

A LE officer with a proper warrant running Cellebrite extraction tools on your phone has full rights to execute exploits on your phone.

On the other hand, the phone's owner or Signal has no right to execute exploits on that officer's computer with Cellebrite tools. They can get their own computer with Cellebrite tools (as Signal did) and exploit vulnerabilities there as much as they want (as Signal did), and not tell the vendor the details (as Signal did); that's all legal. But deploying an exploit on phones with the intent that it might get executed on someone else's machine is illegal.


You're asking why lawful search and seizure is OK but obstruction of justice is not. Intent matters in the law.


Cellebrite isn't definitionally lawful. It's a private company that sells to other private entities that may or may not choose to use their software lawfully. I don't think they (should) get a pass just because some of their clients happen to be LEO.


You're going to have a long way to go to claim that Signal publishing files onto hundreds or thousands of devices is obstruction of justice.

The law doesn't look kindly on prior restraint.


Because they don’t plant boobytraps in their wake.

Signal isn’t saying “we’re just hacking into property we have a legal right to access” (which is what LE is doing in the US); they’re essentially saying “we’re gonna hack and damage government property and police evidence”.


Sure, that's what the article is claiming, but I don't see how Cellebrite is "government" or "police".


Cellebrite, the company, wouldn't be the one getting hacked. It's the police agencies' equipment that could potentially get hacked.


Who would sue them, and what would the claim be?


If there's an actual case where evidence stored on a law-enforcement-operated Cellebrite machine gets destroyed or corrupted, and if further forensics shows that it happened due to e.g. an aesthetically pleasing file with a specific payload having been deployed on that phone, then the state is likely to charge the people involved in this particular act with tampering with evidence. And if it happens, then Moxie's original blog post would help the prosecution a lot.


Generally, from what I can tell, statutes regarding “tampering with evidence” require both intent (which would likely exist here) and knowledge that the action is interfering with evidence of a crime (which seems pretty implausible).

If Signal set up their phones so the user could click “I’m gonna do crimes, opt me in” and it would add the nefarious files, the risk of prosecution would seem much more likely.


Police or Cellebrite, for hacking and/or boobytrapping.


I feel like you’re conflating “charging” and “suing” here. Cellebrite would sue, and in doing so would need to allege damages. Given that they wrote code to extract data from Signal, and Signal gave public warning that doing so might be dangerous, it seems implausible they’d be able to claim damages. “Boobytrapping” in physical space usually becomes problematic when it’s a surprise: if you hop over my fence to rob me and are injured by land mines in my yard, I’m at fault. If I have signs saying “Warning, this yard is full of dangerous objects, do not hop the fence, you may be harmed”… dramatically less so.

“The police” generally don’t sue; the State charges. And for the state to charge someone with a crime, it can’t just vaguely be “hacking”, it would need to be a violation of a criminal statute. At best, this would perhaps be tampering with evidence? Because I doubt you’d find a jury willing to convict someone of a violation of, for example, the CFAA, for activity that they undertook solely within their own app, which did not initiate any activity until scanned by Cellebrite’s tool.


The article says that "it should be pretty straightforward for law enforcement to disprove an accusation about the Cellebrite machine", because they can perform the same extraction with another vendor's machine and compare the results.

But if some app actually decided to use this hack, then wouldn't it be likely that in addition to modifying the contents of the data dump it would also modify the on-device data? In that case it wouldn't matter if the other vendors have vulnerabilities, since the device itself was already compromised.


The most interesting version of this exploit (to me) would be one that wiped the device being examined.

Edit: changed disruptive to interesting. There could be many many more disruptive versions...


Quite the 'aesthetically pleasing' side-effect for sure.


Tell it to the jury I guess. "Yes, of course there's evidence of a crime on my phone, but actually I put it there just to trick the police."


Or: "If there is any evidence of a crime on my phone, it was probably planted there by a version of Cellebrite that got infected with a virus when you scanned someone else's phone with it."


The sentence would be true iff you replace "probably" with "possibly". But - as the original article states - that's not sufficient. The defence may try to assert that this is the case, which may cause that possibility to be investigated in more detail, but such a statement would not automatically disqualify the evidence without something more substantial; merely asserting that such a possibility exists isn't enough.

E.g. such a claim might result in a forensic analysis of that Cellebrite computer, and if the analysis indicates that it indeed got infected with a virus when scanning someone else's phone, that's likely to cause all the evidence to be questioned. But again, even in that case there may be other ways than the Cellebrite logs to confirm that this evidence was indeed on your phone (the original article asserts this as well).


Most of this was better left unsaid.

So many words to state the obvious, like, for example, that this would be illegal? Did the coy language not tip you off to the fact that they realize that? Then suddenly trying to champion Cellebrite as the reason we don't have something as anti-privacy as backdoor mandates and encryption bans, while at the same time we're already seeing countries inch towards that?

And then, seriously, acting like because Cellebrite is being used against rioters, somehow this was a bad time for Signal to point out the fact that Cellebrite is an insecure POS on top of its dubious intended purpose?? Didn't I just go through 1000 words explaining why what Signal did won't matter anyway?

-

This whole thing just reads like someone who needed to go "well actually". It's not really saying anything novel or interesting, and in the pursuit of defending Cellebrite of all things, it makes some pretty dubious connections.


I am a little confused about why the author makes a distinction between Cellebrite using zero-days to hack a phone to read data and Signal's hack. While the US government might have the framework to not be considered in violation of the CFAA, what about when other governments use Cellebrite? From this point of view, Cellebrite isn't a valve that's stopping back-door decryption in systems; it is the back door. Signal including these files is, in a sense, covering a back door.

Also, is leaving a file that breaks the admissibility of previously gathered evidence considered active hacking? Am I misunderstanding something about the function of the files in Signal? I thought the only way Cellebrite's software is interacted with is if it tries to access Signal on the device. Signal isn't actively searching to hack back. It's triggered by Cellebrite's software, not Signal's.

The straightforward workaround would be to delete Signal before using the Cellebrite software which I think is the real point. Signal isn't trying to protect the end user actively and can't do anything if it's not installed on a phone.


In the US, Cellebrite extractions are done under color of law; for the same reason, the police don’t get in trouble for forcing open locked doors while executing search warrants.


> In the US, Cellebrite extractions are done under color of law

Are you sure that is the case 100% of the time?

Bars on my door keep out burglars as well as hinder police serving a warrant.

Even if Signal does deploy these files (no evidence thus far that they have deployed Cellebrite exploits, as their blog post was careful not to claim that), the vast majority of them (just like deadbolts) will not be used to hinder search warrants.


> Cellebrite extractions are done under color of law

Until they find they're in the wrong house. Or that a warrant is invalid.


> Also, is leaving a file that breaks the admissibility of previously gathered evidence considered active hacking? Am I misunderstanding something about the function of the files in Signal? I thought the only way Cellebrite's software is interacted with is if it tries to access Signal on the device. Signal isn't actively searching to hack back. It's triggered by Cellebrite's software, not Signal's.

Yes, this is called booby-trapping, and at least in the US it is illegal. It's akin to tying a gun's trigger to a door so that whoever opens the door gets shot: even if that person had no right to be there, you can still be held liable for any injury the bullet causes.


I don't think axiomatic derivation from Quora posts about physical booby traps is going to be a reliable way of understanding how the law actually functions here.


Legally, booby-trapping specifies a device that causes bodily harm to a living thing. There is no chance of bodily harm from evidence becoming inadmissible. Whether it would legally be considered a computer virus is my question. Signal isn't actively trying to hack Cellebrite.


In your estimation, is having a zip bomb I'm proud of crafting on my laptop a booby trap because it could cause problems for a malicious actor naively unzipping all my files?
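
For anyone unfamiliar: a zip bomb is a perfectly legal-to-possess archive that expands to an absurd size, so the "harm" to a naive unzipper is resource exhaustion, not code execution. Here is a hedged sketch of the defensive side, using only standard zipfile metadata; the thresholds and the function name are made up for illustration:

    # Hypothetical sketch: refuse to extract archives whose declared expansion
    # looks like a zip bomb. Note: nested archives (zips inside zips) can evade
    # this simple single-layer check.
    import zipfile

    MAX_RATIO = 100       # flag anything claiming >100x expansion
    MAX_TOTAL = 1 << 30   # or more than 1 GiB uncompressed in total

    def looks_like_zip_bomb(path):
        with zipfile.ZipFile(path) as zf:
            infos = zf.infolist()
        total_compressed = sum(i.compress_size for i in infos)
        total_uncompressed = sum(i.file_size for i in infos)
        if total_uncompressed > MAX_TOTAL:
            return True
        if total_compressed and total_uncompressed / total_compressed > MAX_RATIO:
            return True
        return False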


Do such laws apply to digital booby traps though?



