In the future there WILL be brain-computer interfaces: memory offloading, recording of visual and auditory cortex data, and other more mundane uses. This may seem like fantasy/sci-fi, but barring some societal/technological collapse, it will eventually happen. If the precedent is set for breakable encryption/back doors, it will absolutely be used to "subpoena" people's "brain information" in that context. The invasiveness will have no end.
I believe this would be a terrible idea, because people's recollection of events isn't going to magically improve just because someone connects to their brains directly instead of asking them questions under oath. Add dreams and the other literally crazy things that go on in people's minds, and there's a big filtering problem. Judicial systems that take "brain dumps" to be the truth would remove the need for human judges, and so we'd end up with a "Minority Report" plus "1984" plus "Skynet"/"Terminator" situation: people's brains being dumped (or "hacked") all the time and any information in that dump used against them to detect pre-crime and stop it. The logical conclusion is that all humans in the future end up in prisons run by robots, unless they're wiped out like we've seen in many books and movies already.
Australians also don’t think a lot in general (many exist who do, but as a general populace we’re culturally weak and short on history), so the long-term effects of such reassurance just never really make it into the discussion.
Anecdotally there’s been a slight uptick in my social circles recently about people genuinely caring about privacy, so maybe there’s hope
The real question that I would want to see some Congressperson put to the agencies is:
>"Do you believe there should be any inherent limits at all? If we developed the technology someday to read people's minds, should it be permissible to go through their brains with a warrant? It would certainly let you find the guilty of some 'crimes', where for 'crimes' we should keep in mind that gay sex and interracial relations were felonies in the near past."
I mean, that's the real thing: if security agencies could root through people's brains, I see no need to beat around the bush that at least a few truly horrific crimes would likely be stopped or solved. There would be children saved, terrorists stopped, murderers caught. But I think not just the abuse of it, but even the use of it to eliminate any gray area for a human society would be so horrific that it's just plain not worth it. That yes, some children will be abused or killed, some murderers will escape, some terrorists will succeed, and that really is the price we need to pay. That we should try to reduce it as much as possible, but only in balance with strong privacy and an inviolable personal sphere. And that should include artificial augmentations to our minds, which typical mobile devices arguably already are.
The incentive structures right now for law enforcement and intelligence agencies remain geared always towards more, more, more, and towards paying attention to singular big harms rather than small harms across enormous swaths of the population. Arguably that hasn't evolved much from decades and centuries past. I think that's the ground to fight on, though: will they argue that total erasure of the private sphere is worth it? Will the public agree? I think the answer is no, and with that established it's a lot easier to argue back against the typical "think of the children/terrorists/drugs" attack.
This law would rid us of the last safe harbor: your own mind. At that point, allowing some individuals the ability to skirt the law would create a class division of ultimate scale: those who are allowed to keep their own thoughts to themselves, and those who aren't.
There will be no perfect surveillance. Period.
Humans are humans, and inherently imperfect.
Until all humans are eliminated, there will be no perfect ... anything.
> If you base your entire opposition around the key, what if at some point someone does have an entirely formally verified stack and strong measures and can reasonably argue that keys aren't going to leak?
Irrelevant, since formally proving the stack says absolutely nothing about compromising the system at a human level, and 'reasonably argue' does not mean 'proving that the key cannot leak'.
The key will always leak. It's just a matter of when. There will never be any technical solution to that, and it's not a technical problem. It is a hole which fundamentally cannot be plugged, and the hole is people. Technology has nothing to do with this.
>>"Do you believe there should be any inherent limits at all? If we developed the technology someday to read people's minds, should it be permissible to go through their brains with a warrant? It would certainly let you find the guilty of some 'crimes', where for 'crimes' we should keep in mind that gay sex and interracial relations were felonies in the near past."
But that's always a question of legality, not technical capability. Legislation is all about where we draw lines.
However, it is entirely true that legislation (or rather, the 'undefined behaviour' in it) tends to be abused a great deal before it can be shut down.
The argument that the key will leak is not technical, and strictly speaking, nor is the argument that 'technology may one day do this and then where would we be?'. But one is a lot less speculative than the other, and admits no mitigations.
OK, so again, what would be your response if some law enforcement official said "well, what about Apple then?" The iPhone is now 13+ years old; why hasn't their master key leaked? Microsoft or Google signing keys, same boat. Or what about SSL in general, for that matter? The entire HTTPS paradigm depends on master private keys not leaking, or at least not often. It would certainly be worth a lot to certain adversaries if they could simply spoof major banks, not through some CA hack but literally just by getting their keys. Yet broadly, the effort to keep that information protected seems to have been fairly successful, for better and for worse.
I mean, that's kind of my issue: it's not hard to raise plausible counterexamples, and once we get into the weeds about "how likely" and "when" vs. "how many will be saved in that time, huh?", I think we may have already lost. It distracts from the real debate, which is about how valuable a zone of private space is and the harm that comes from infringing upon it. That's the real cost. Just because you can doesn't mean you should.
>But that's always a question of legality, not technical capability.
I don't think that's quite justified by the history of law and privacy. There are plenty of practices law enforcement and intelligence agencies can and do engage in now that were simply inconceivable in the relatively near past, when much of the law that still governs us was written. Not everything automatically adapts; it's worth watching out for things that are simply taken for granted because they're considered inviolable, and in turn lack explicit legal protection. There should be legal protection; what I'm saying is that winning it requires a frank discussion about harm tradeoffs and trying to get people to more generally grasp new kinds of costs, like emergent effects.
>The argument that the key will leak is not technical
I think it really is. I mean, even in your argument where you say "the hole is people", you're making an implicit technological assumption that people remain involved. Do your assumptions make the same sense if we imagine human-equivalent or better AI? I'd rather just talk about why we shouldn't as a matter of ideals, even if we could, and even if it means some things we don't like go unstopped.
For example, I was 100% sure that some event ten years past did happen, because I was there and saw it. Then a friend of mine said that it did not happen. I didn't believe him, so I checked the recording: and indeed, it did not happen. What I thought I remembered actually happened at another time, with a completely different person.
There's a character in there that has his "brain drive" stolen and he's disabled by not being able to access all that memory.
That part of the story got me thinking about how that data would be securely stored.
And then more recently, in "Fall; or, Dodge in Hell", there is a brief discussion about the uploaded people's data being encrypted in such a way that only they would know their own "thoughts", while those on the outside could observe, via metadata, the goings-on in the digital realm.
Step 1. Create a file sharing mechanism. (http / https / sftp / rsyncd / nntp(s) / smtp(s) / whatever)
Step 2. Start Tor.
Step 3. Share link.
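A minimal sketch of Step 1, assuming the simplest option (HTTP) and a typical Tor hidden-service setup; the torrc lines in the comment are assumptions about a standard configuration, not part of this script:

```python
# Serve the current directory over plain HTTP on localhost, so that a
# Tor hidden service can forward to it. Assumed torrc configuration
# (Step 2, handled by Tor itself, not by this script):
#
#   HiddenServiceDir /var/lib/tor/share/
#   HiddenServicePort 80 127.0.0.1:8080
#
import http.server
import socketserver

PORT = 8080  # must match the target of HiddenServicePort above

if __name__ == "__main__":
    handler = http.server.SimpleHTTPRequestHandler  # serves files from the cwd
    with socketserver.TCPServer(("127.0.0.1", PORT), handler) as httpd:
        # Step 3: share the .onion hostname Tor writes into HiddenServiceDir.
        httpd.serve_forever()
```

Binding only to 127.0.0.1 matters: the server should be reachable through the hidden service, not directly from the local network.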
They obviously don't understand the Streisand effect though, because it was completely hamfisted. They should have just arranged to schedule a bunch of other interesting talks at the same time, or put them late in the day in a small and distant meeting room during the cocktail hour. Amateurs.
When I see comments like former Australian PM Malcolm Turnbull's "Well the laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia", I despair. Such profound ignorance should be kept well away from any sort of power. Regrettably, we haven't yet found a modern King Canute who can demonstrate the principle to our politicians and leave them without any doubt.
Another fundamental point that Schneier is very strong on is that in most countries, cybersecurity responsibilities have been co-opted by the military. Being that the military thinks in offensive terms (even when considering defence), that's the position they take here as well, which explains why they would rather hoard zero days to use as weapons than disclose them so that everybody can patch and be secure.
The argument to make to politicians is always: do you trust the other politicians not to spy on you when they are in power? That will always make them think twice.
On that end there's the inherent expectation from law-and-order politics of "if we could do it with analog phones, why can't we do it with digital?" Another big part is the false assumption that people have a perfect right to privacy.
In theory, people may have a right to privacy, but in practice the communication providers that actually facilitate communication are also bound by government actions.
But the counter to that counter-argument is that the traditional methods of interception had a significant practical cost and therefore an incentive to use them only when there was a legitimate interest in doing so. In our digital world, governments are trying to get access to everything on everyone.
Moreover, so far we have mostly been talking about threats due to reading data. To the extent that adverse changes are being effected as a result of attacks, most of them are probably being made using genuine credentials, which were themselves compromised.
However, the kinds of weaknesses we're talking about mandating in encrypted communications would potentially also allow hostile actors to impersonate others and change data directly. It doesn't take a genius to predict that this would create a future where everything in our ever-more-connected world is under threat, from financial transactions to medical instructions, from criminal records and intelligence profiles to control signalling for networked self-driving vehicles and essential utility supplies.
Large-scale surveillance also existed in the analog era; see the GDR, or programs like ECHELON.
That's why, for many politicians, this is supposedly only an issue of "scaling up", which has also been happening, and then got blown wide open when Snowden revealed PRISM and co.
This hostility to cryptography is also anything but new: cryptography in many countries is still subject to strict regulations, and has been for decades, while intelligence agencies like the NSA spend considerable effort undermining even the standardization process.
Let's also not forget that for a really long time the majority of the Internet didn't even use TLS for most of its traffic. Once we got HTTPS rolled out somewhat widely, guess what happened? Heartbleed. Weird how that went down.
Feels more like they want to legalize a whole lot of what's been going on anyway. Maybe I'm just paranoid, but at this point, it's difficult not to be.
It makes the argument extremely ignorant when you realize we didn't have this problem in the past because it was just too difficult.
Now, I disagree with him as to what the law should be. But I think people have been ridiculously uncharitable by implying he believes ... you know, I don't even know what people are implying. They are just pointing and laughing and not making a coherent point.
I prefer to give people the benefit of the doubt unless I have reason not to, but in this case, I did see some of the speeches he was giving at the time, like this one. It was a textbook example of a politician repeatedly quoting the same obvious sound-bites and buzzwords ("leadership", "keep us safe", "rule of law", and so on) without really understanding the issue. Occasionally he seemed to get pushed off-script, like the section where someone asks him to explain what a backdoor actually is, and he is literally hand-waving rapidly as he delivers a superficial explanation that doesn't suggest any sort of real understanding.
I even think he was making a joke.
Again, if you've seen the footage of him making the infamous statement, he might be trying to make a joke, but it's more like he's trying to laugh off the nonsense he's saying as if that makes it OK.
> They are just pointing and laughing and not making a coherent point.
When you're dealing with someone who is demonstrably unable to recognise the basic facts of a situation and make reasonable arguments, sometimes ridicule is all that is left. As I said, sometimes the best we can do is try to keep such profound ignorance away from any sort of power.
Can you clarify? Wouldn't just having a second key which the government keeps in escrow in case they need to decrypt a message achieve exactly what you are saying is impossible?
(Note: I don't agree with the laws being created to undermine encryption ... but I think it undermines the argument against it to overstate the case.)
If we take it to mean all governments, then that means China gets a copy of the key and can use it to spy on the communications of the Australian government.
If we take it to mean just one government (or a few select governments), then what are the criteria for deciding that the government of country X should have a key but country Y should not?
Ultimately of course it comes down to a matter of coercive power and jurisdiction, which particularly complicates efforts trying to build products or services that are intended for global usage. I believe we need to push back against anti-encryption laws in the west so we have some ground to stand on when making a case that they shouldn't be accepted elsewhere.
If you haven’t worked much with a large government, you don’t tend to realize just how fractured it all is.
This is the key, I feel. Just enough CYA while still doing BAU (which includes security agencies snooping on people). Governments think they will be able to secure their important stuff. Everyone else can have the illusion of security, without actual security. When there is a master key leak, it will be someone else's problem, and can be blamed on bad actors.
At this point the anti-encryption spooks and politicians have shown they don't want to listen so reaching them is impossible.
The rational debate has been over for decades and we won. Only irrational remains which means it is time to get downright mean. Instead cut them off and discredit them. Show no mercy to the evil fools.
How would that work? A basic principle of encryption systems is that only relevant parties have access to the secret information. Typically in asymmetric systems they generate their own private key or other secret and never share it with anyone. Moreover, techniques like perfect forward secrecy rely on generating new session keys at each step.
As soon as you mandate a systemic backdoor where someone else's secret can be used to unlock anything, you have undermined the whole premise of the system and created a huge single point of failure.
Not only that, but normally if there is any concern that a key has been compromised then you can change it to at least continue to protect future communications. With this sort of back door, any compromise is permanent.
If you think that's not a serious risk and governments could be trusted to keep such an important back door key secure, let me first ask you this: which government(s) should have access to that key? What happens when, say, someone in the US has family in Iran, and the governments of the US and Iran both insist on being able to intercept messages between them? What happens if a more authoritarian government is elected in Australia and a more liberal one in Canada, and the Five Eyes agreement collapses?
Don't forget that a large part of the reason there is so much public debate about these issues today is because someone walked out of the most powerful intelligence agency in the world with the modern equivalent of a note of its deepest secrets in his back pocket and then shared that note with the world.
Well, the government is now one of the relevant parties. It doesn't change anything fundamental about the encryption. It just adds the government as a party. I get that you think that is bad (and I do too), that you don't trust the government. But it doesn't break the encryption, that keeps working perfectly, just like it did before. Just more parties have access.
The problem here is so many people are going around arguing against this by saying it "breaks" or "backdoors" encryption. And all those people are being completely ignored because the government is getting perfectly reasonable advice that says that it isn't breaking anything.
It does change something fundamental: now you have three parties instead of two. Take for instance the most popular key agreement protocol, Diffie–Hellman, and suppose Alice wants to use it to send a message to Bob. When the only parties are Alice and Bob, she can do some calculations with her private key and Bob's public key, and then she has the shared key used to encrypt and authenticate the message; these calculations can be done fully offline. If you try to add a third party (George), not only are the calculations more complex, but they also need to be online (all parties have to exchange messages before any party knows the shared key), which makes important use cases harder or impossible.
And note that I said above "encrypt and authenticate": possession of the shared key allows one to also forge messages. When there are only two parties, this works fine (Bob knows he hasn't forged anything, so the message can only have come from Alice); with more than two parties, that is no longer the case.
That is: the design of a three-party protocol is very different from and much more complex than a two-party protocol. And it gets even more complicated once you want one of the parties to be able to decrypt and validate but not forge messages.
Or is this really about seeking a means for dragnet surveillance?
In any case, I don't know how to explain this without getting into a lot of technical details, but introducing an extra party who can read everything like that does fundamentally change the nature of the system. If you want a system where data is encrypted such that two independent parties can each decrypt it using only their own secrets, how do you think key exchange, encryption and decryption would work?
And if the government's private key has been leaked? How many parties will have access then?
Lol! No, no, no, no, no! It does entirely!
Others have already pointed out the problems of who has access to that key; I'll simply point out that the system would potentially collapse under its own weight. If there is a single master key, then it's a huge risk. If that risk is recognized and there's a way to invalidate and replace the master key, then suddenly you need an entire additional communications infrastructure for key replacement. If you have multiple separate keys generated at the initiation of each encrypted communication, then there has to be a separate secure infrastructure for transmitting the additional key to some government entity. As a side note, that nominally secure infrastructure would by law probably have to have the same key-sharing requirements. You also run into problems with things like embedded systems; if you don't think that's a problem, look back at the problems in the networking stack used in VxWorks and other embedded systems that are in the field and effectively unpatchable.
Edit: ipnet, URGENT/11, https://www.bleepingcomputer.com/news/security/urgent-11-vxw...
Edit2: I forgot to mention, this is an all or nothing decision. If you're mandating this it has to apply to ALL communications or you end up with things shifting channels. Banking transactions? Master key. Medical patient portals? Master key. VPN to the office? Master key. Any secure website connection? Master key. You likely have to make using any non keyed encryption illegal with severe penalties, and you have to devise a system in which it's possible to identify encrypted communications not encrypted with that key without that key being available.
Wouldn't a similar argument apply to the secret keys at the roots of Microsoft's and Apple's and many other organization's code signing systems? And the secret keys at the root of SSL security?
We've already apparently decided that it is OK to rest our security on the idea that careful organizations can keep secret keys secret.
If a government escrow key were kept in two rooms with air-gapped computers, one on either coast, and anybody hoping to decrypt messages had to mail them in on CD and get a CD with the decrypted messages mailed back, I could see that system actually keeping the key secret. But proponents of key escrow argue that existing decryption techniques are too slow and inconvenient, so I don't think they would be satisfied with this solution. So I'd expect anyone on the IT team at any of the FBI's 50 or so field offices to be able to get access to the key, and for it to leak in short order.
This goes to the heart of the real issue and argument that I think is more powerful ultimately, though it is harder to express.
Authorities argue that they are not getting anything new because they always used to have the ability to intercept phone calls, open mail etc. So they argue that interception of encrypted messaging is just "status quo". But it is NOT status quo, because being able to do it at large scale, using automation, sophisticated data analysis and with an overall order of magnitude less effort actually changes the fundamental nature of it altogether.
The powerful argument to me is to use a judo-style move where opponents of intercepted messaging actually embrace the "status-quo" argument but then use that to re-establish the same high threshold for that interception that used to exist.
For example, if the authorities want access to a specific communication stream, then they should have to involve the messaging provider and they should have to get a one-time key that is set to expire or scoped in some way to specific messages, and they should have to concede to limitations on how secretly that can be done. The most powerful argument of all is that anything that is more invasive will foster a huge ecosystem of end-to-end encrypted services. Encryption is ultimately mathematics and you can't stop people using it. The best you can hope for is to create something that is close enough to OK with everybody that the ecosystem doesn't grow because most people don't care.
I'm not sure that's true. We might tolerate that situation in specific circumstances where we haven't identified any better choice, but that isn't the same as thinking everything is OK.
For example, in the case of certificates used for things like downloading web content over HTTPS, the centralised nature of the CAs and the track record where not all CAs have proved to be trustworthy is a significant concern in the industry. This remains the case even though browsers can revoke the root for any CA that is compromised and despite the increasing use of mitigations like perfect forward secrecy.
The rise of Let's Encrypt in particular has been a boon for moving smaller sites to serving over HTTPS by default, but given how many sites now rely on it, it also creates a huge single point of failure in the security infrastructure of the web. Notwithstanding the precautions taken by the operators to keep the most important secrets secret, if anyone hostile did ever manage to compromise that infrastructure covertly, that would be catastrophic for security on the web.
If Microsoft or Apple leak their keys, they at least will spend a lot of effort on cycling keys to their huge number of deployed products and on the PR fallout. If it's managed poorly, they may lose customers.
If the US leaks their key, tough noogies, it's mandated to use that one. Companies, much less people, will find it hard to move to another country with better key management procedures.
Imagine being an outspoken left-wing blogger in America that votes for this. Imagine then Trump gets elected and 'the government' is now an entity that actively breaks established laws and targets outspoken critics.
Imagine someone even worse than Trump is elected after that.
Imagine the key is leaked.
Imagine China makes a trade with your government for your key.
There is no taking back that key now, whoever becomes 'the government' at any point in the future has it.
It's unfathomable how huge a problem this could cause, and then when it happens it gets swept under the rug as not a big deal. If the PM thinks it's OK to have a key under the mat, then I want to know how much time he should be doing when someone gets my personal information and makes me deal with identity theft. Would 10 seconds of jail time for my lifelong struggle be acceptable? For the Equifax CEO that would have been half a year.
Some people are fundamentally willing to sacrifice rights they don't view as necessary for some temporary gain, and trying to oppose this in any more direct way can result in having one's own moral standing challenged.
I think you meant to say "sacrifice everyone's rights." This isn't an opt in scenario, it would be your right to privacy being forcibly taken away.
The "censored" talks immediately had their slides put online and were extensively advertised with Schneier in particular, stating it's your duty to read them now. Nobody was arrested because the courts would not even allow a prosecution as there's no law that has been broken. The rule of law, while imperfect, means something in Australia.
Let's be very critical of what has happened here by all means, that's how we preserve a rule of law and equality before it.
Conferences are nearly all in-person live ads, cloud ads disguised as talks, or tutorial talks so shallow a Googling is better.
Triple that for infosec industry.
As far as designing in weaknesses and/or golden keys: well, it does not add to security, but assuming the other nation states are already able to break encryption and read what they want, it does not weaken it either for national-security purposes. What it does provide, however, is an easy means to stop terrorists, pedos and drug dealers from conducting business. That power in the right hands is good for society, just like having people with lethal weapons in law enforcement is good for society.
Remains the risk that some bad guys could get their hands on the golden keys. Yes, design to handle that?
The option isn't encryption broken by them or broken by us. It's secure encryption or encryption that can be broken by everybody with sufficient resources.
> Remains the risk that some bad guys could get their hands on the golden keys. Yes, design to handle that?
Yes, because secure systems are so secure that we should be trusting them to base all of the world's encryption on. It takes a single bug or intrusion to undermine everything. There's no way to design for a key that needs to be usable/accessible for when you need to access encrypted anything, while keeping it secure from bad actors.
That's even before considering how the US government and US institutions have, over the course of the past decades, pissed away any and all good will and trust that the public may have had in them. Why would anybody want to give the FBI or NSA free access to all of their data? After seeing enough stories of employees looking at naked pictures, stalking ex-girlfriends, dragnet surveillance or other gross abuses of power, there's no way these people can be trusted.
You are correct in exactly the same way that, if one assumes that the Earth is flat then it is impossible to fly around it in an airplane.
There exist cryptographic security measures that have flaws and can be broken by nation states. But there are also plenty of encryption techniques that will easily resist the efforts of even the most effective nation states (at least for the present day). The question is not whether such codes exist but whether we will pass laws preventing any law-abiding person or company from using them.
If you were right and the NSA could break these codes, then they wouldn't be advocating for back doors; they would be encouraging everyone to encrypt their communications (to protect against everyone ELSE), then using their abilities to stop terrorists, pedos and drug dealers.
You can't. That's pretty simple and solid.
If you design in a master key, then it's the master key. Once it's out, it's out.
To be clear, I am not supporting this. But this argument will be made by the other side, so a good reply should be prepared. NotPetya already demonstrated that malware can come with software updates. But up to now, there is no hard evidence that the keys of big players have leaked.
And even for professionals it's hard to keep up. I use Signal. From time to time, Signal (on Android) reminds me that I need to update, even taking me to an update screen. How would I know that it's genuine? No other app I have does this.
What is a "lawful request"? Does that include legitimate court orders originating from China, Russia, Syria, Sudan, etc?
That's why this will never fly. It's a horribly stupid idea.
FWIW, I am appalled by this constant call to weaken encryption. This is not worthy of any country who deems themselves under the "rule of law". It's even more appalling that they do the dirty work for the countries you listed...
My preference would be to group countries into categories that respect users and their privacy, and those who don't. And then don't pursue selling into countries that don't respect privacy. And no one gets "gold key" or "backdoor access". It is only a legal front door to the data the provider possesses in plaintext. Specifically, data residing SOLELY on the device would NEVER be in said provider's possession in plaintext form, if the user desired that.
But that will never happen. Because, growth markets, amirite? (Sad face)
> Should corporations get to decide what a lawful request is? That's a horrible idea either.
Agreed. Corporations don't get to second-guess the law (ignoring lobbying in this example). They either choose to operate within the laws of the territories they do business in, or they don't do business in said territory. This is my EXACT complaint against Uber, AirBNB, etc.
> FWIW, I am appalled by this constant call to weaken encryption. This is not worthy of any country who deems themselves under the "rule of law". It's even more appalling that they do the dirty work for the countries you listed...
In total agreement. Furthermore, attacking the endpoints is, in my view, extremely easy: of course the NSA/CSS/CIA/ABC/DEF or whatever will always target and crack open the endpoints. Doing double duty and attacking the crypto itself is just fucking annoying to me, due to the collateral damage such efforts bring. They already own the endpoints. Just focus on that. Don't attack the math operations.
> What it does provide however are easy means to stop terrorists, pedos and drug dealers from conducting business.
I can't remember the last time a law passed to hunt down pedos/terrorists/drug dealers wasn't also used to suppress or hurt some part of the population.
Remember when drug dealers just used SMS? When pedos exchanged magazines and floppies? When terrorists just called each other? And how the government, with all its wiretapping power, couldn't stop any of them? Encryption is a nice bonus for these people, but a lack of it is not stopping anyone.
Tangentially, if a country would focus on properly treating addiction instead of making boogeymen out of those darn drug dealers we might actually get anywhere as a species.
How would that even work in a crypto system? If you have the key, you can decrypt. That's the basis of cryptography. Wishing away the glaringly obvious problem with the proposed system by telling people to "design it not to break like that" doesn't work.
That assumption is the weakness in your argument. If you design in a systemic back door, it becomes a known fact that a hostile actor could find a way into your system, because you have created one for them. Not only that, but in the case of a golden key, if it leaks then anyone can get in.
Of course there is no guarantee that any encryption scheme is 100% unbreakable. Such is the nature of mathematics and research. But there is a fundamental difference between needing to discover a systemic weakness in an encryption scheme, say an efficient solution to the discrete logarithm problem or a side channel attack on a given implementation, and merely needing to know some master password that will work even if the mathematics of the encryption method and its practical implementation are sound.
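The gap between those two situations can be made concrete with back-of-envelope arithmetic. The attacker's guess rate below is an assumption chosen to be generous; the point survives any plausible value:

```python
# Recovering a random 128-bit key by search vs. simply knowing the master key.
guesses_per_second = 10**12            # assumed very well funded attacker
keyspace = 2**128                      # e.g. the AES-128 key count
seconds_per_year = 365 * 24 * 60 * 60

years_to_exhaust = keyspace / (guesses_per_second * seconds_per_year)
# years_to_exhaust is on the order of 10**19 years, vastly longer than
# the age of the universe. With a leaked master key, by contrast, the
# same ciphertext falls in a single decryption operation, no matter how
# sound the underlying mathematics and implementation are.
assert years_to_exhaust > 10**18
```

That asymmetry is the whole argument: a golden key replaces a problem that is computationally hopeless with one that is merely organizational, and organizations leak.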
after all, it worked so well for the TSA security key: https://techcrunch.com/2016/07/27/security-experts-have-clon...