
Telegram founder: US intelligence tried to bribe us to weaken encryption - anjalik
https://news.fastcompany.com/telegram-founder-us-intelligence-agencies-tried-to-bribe-us-to-weaken-encryption-4040876
======
otalp
>"It would be naive to think you can run an independent/secure cryptoapp based
in the US."

This seems to be a shot at WhatsApp and Signal, implying that they have
loopholes that allow the FBI to snoop in. I'm not sure how true that is. This
might be an attempt to deflect from the fact that Telegram uses a home-grown
encryption protocol which might be insecure, while WhatsApp uses the Signal
protocol from Open Whisper Systems (OWS).

~~~
permanentdecaf
Pavel Durov wants everyone to think security is about trust in people. Most
companies in that business do the same, because it's easier than building
something that doesn't require trust in people. The way Pavel Durov and others
like him present "trust" is (ironically) shady corporate structures[1], shell
companies, or use of the word "Switzerland."

They want people to think like that because they've built businesses that
require it. Telegram stores the messages you send/receive unencrypted on their
servers. That's not good, unless you've been trained to think that "privacy"
is just about choosing the company, government, or legal jurisdiction that
gets total access to your data.

Security professionals know that's not how we should think about security
(never trust _people_!), because Durov is leaving a lot out: there aren't safe
jurisdictions, servers get hacked, and centralized databases will get
compromised. Logic, though, is probably no match for conspiracy theories.

1: [https://www.washingtonpost.com/news/the-intersect/wp/2015/11/23/the-secret-american-origins-of-telegram-the-encrypted-messaging-app-favored-by-the-islamic-state/](https://www.washingtonpost.com/news/the-intersect/wp/2015/11/23/the-secret-american-origins-of-telegram-the-encrypted-messaging-app-favored-by-the-islamic-state/)

~~~
skinnymuch
I read the WP article you cited, titled, "The secret American origins of
Telegram, the encrypted messaging app favored by the Islamic State".

If Telegram isn't that secure, then why are extremists like IS using it over
Signal or WhatsApp? I know Telegram has better features for big groups and
much better multi-platform support, so is that the reason? I'm legitimately
asking without any snark.

~~~
atmosx
You highly overestimate the people who are in positions of power.

I am talking about politicians, head of terrorist groups, etc.

While saying that top politicians are complete idiots is probably wrong - then
again, maybe not, I'm not sure anymore - the fact that H. Clinton ran the most
expensive campaign in history, with backing from all the major tech corps, AND
her staff didn't bother to use encryption at any scale, let alone run a mail
server properly (God knows what kind of software the server was running,
whether OpenBSD or Windows Server 08), says a lot to me about how flawed these
people's and their consultants' understanding of today's world is.

Watching "House of Cards", everybody seems incredibly smart, driven, etc., but
the politicians I see in real life are, on average, on the not-very-smart
side, and the few I've met in person are clueless beyond salvation.

Ps. Sorry for possible mistakes, I'm reading from mobile.

~~~
jliptzin
Over 99% of people have no clue how computers work in even the most basic
sense of the word "understanding", and most are proud of their ignorance.

------
drawkbox
I am not sure about the claim here, but the FBI has always been all over
cryptography companies and products, and this was well before Snowden - Phil
Zimmermann (PGP) knows about this.

In 2003-2006, we built a financial service that exchanged financial data
through various means, including AS2 EDI over HTTP, with big companies and
government suppliers such as AAFES (Army and Air Force Exchange Service).
Initially we had RSA, PGP, and a custom encryption scheme in there, the latter
two for features besides EDI. We got a letter from the FBI asking us to switch
only to RSA; they wanted to know about our use of PGP and wanted to see our
custom encryption if we continued to use it. Being a small/medium company, we
switched to just RSA to avoid any issues. It was an odd day: when I came into
the office they told me I had an FBI letter on my desk, and you can imagine
what happens around an office when something like that arrives. Very strange
day indeed.

Moral of the story, if you create your own crypto or aren't using the ones you
are supposed to use, in any capacity, expect some knocking.

~~~
ben_w
Interesting. That strongly implies that RSA has a flaw, which is news to me.

~~~
tptacek
If the USG was aware of a secret flaw in RSA, they wouldn't be tipping their
hand about it to random small companies. Come on.

But: avoid RSA anyways. It's inferior to modern curve crypto.

~~~
drawkbox
I think the only reason they contacted us was because we were one of 25-ish
companies certified to communicate with government endpoints early on in
HTTP-based EDI. It was right at the moment that EDI went from faxing to FTP to
HTTP/email in AS2/3/4. Others were Oracle, Microsoft, IBM, EDS, Axway, /n
software, etc.

It wasn't until we had to connect to AAFES that it was a problem. We were
small, but we were sending a good amount of orders and financial data from
Wal-Mart, AAFES, and other gov't sources that had early EDI over HTTP/S going.
They were also recommending us to vendors on their approved EDI software list,
and probably wanted to recommend ones that played nicely.

~~~
tptacek
This is entirely plausible. If you're doing crypto on behalf of USG data, they
have all sorts of dumb CYA rules.

------
angry_octet
Read the replies from all the serious crypto security people on twitter and
you will see the overwhelming consensus is that the FSB/Spetssviaz and FBI/NSA
probably _love_ Telegram for its roll-your-own-crypto and server mediated
group chats.

One also has to wonder if the FBI consider the Telegram team to be essentially
undeclared Russian agents, and hence fair game.

~~~
nf05papsjfVbc
Indeed. Despite what they say, we only have their word for it - no source code
to check whether they accepted these requests from the snoopers in the USA.

~~~
rthomas6
I thought they open sourced the code?

~~~
slipmagic
Yeah, it's open and the encryption is done in the client, so it should be
available on GitHub.

[1] - [https://telegram.org/apps](https://telegram.org/apps)

------
strictnein
Cryptography experts like Matthew Green were having fun with some of Durov's
claims on Twitter a couple of days ago. I would read whatever Durov claims
with a large amount of skepticism.

ex:
[https://twitter.com/matthew_d_green/status/87369621172278476...](https://twitter.com/matthew_d_green/status/873696211722784768)

~~~
tptacek
If there are any professional cryptographic engineers that take this
seriously, I'd be interested in hearing from them. From what I can tell, the
response to this has been pretty much unanimous.

------
throw2016
Governments never believed in privacy. Before, they were opening envelopes and
tapping phones. Now they are trying to keep up with technology, and given the
sheer scale of resources, manpower, and power at hand, operating 24/7, they
will prevail.

A journalist like Poitras is on all sorts of lists and incessantly harassed.
There are secret courts, secret laws and secret processes at play. And beyond
this the power of harassment, intimidation, blackmail and bribery. Individuals
and even organizations cannot prevail against the array of capabilities.

It's nice to think of democratic theory and rights, but these only exist as
talking points, when not exercised. The moment you start exercising them, you
end up on all sorts of lists, marked for harassment, with a target on your
back. Dissent is squashed even before it can form.

------
tuna-piano
Assuming this isn't just PR, in some ways this is scary and disheartening.

But my first reaction was "Cool, our government really cares, is creative and
has the necessary power to get things done."

For those of you who've worked with government, you've seen how insanely
difficult the procurement process is - specific to the point of requiring
competitive bids for toilet paper purchases. So the fact that they could offer
potentially large amounts of bribe money means (a) this goes to high levels in
the organization and (b) they've probably done this before.

I wonder how much they offered?

And I wonder how many other pieces of software have backdoors. I would think
the first things they would try to get access to are (a) certificate issuers
and (b) VPN software.

Do we know that GoDaddy, Let's Encrypt, OpenVPN, Cisco VPN, Juniper, etc.
don't have backdoors?

~~~
falcolas
> Cool, our government really cares

I wonder what they really care about? Liberty, individual rights, security, or
more power? They're people too, so I'm sure they believe in the first three,
but the last one is more seductive and drives a lot more of the bad we see
governments do; worse, the drive for more power is frequently justified as "we
_need_ this power to protect our nation".

> has the necessary power to get things done

This actually frightens me a bit. The US government has the ability (and has
displayed the willingness) to absolutely destroy people's lives in their
pursuit of "national security". What checks and balances are in place to keep
this kind of power in check?

~~~
AmIFirstToThink
One... to see if you are for sale. If you can be bought, other state actors
will try and may succeed, whether the US actually bribes you or not. The US
needs to know whether a successful app is open to bribery - including its top
developers individually - because millions of Americans, including people in
positions of power, might be using that app.

Two... essentially, this is raw power applied. There is a lot of interest in
keeping the status quo of power, of narrative, of money. They want to have
dirt on everyone, and then choose to use it when needed. Whether to apply
pressure can be decided later; the dirt will be collected by default. There's
no point in digging the well when you are thirsty - you need to dig it well
before the need arises.

>What checks and balances are in place to keep this kind of power in check?

Either play the game, or be too big to fail and lobby the government for your
needs. I think the best one can do is choose one's masters: the USA, China,
Russia, or, to some extent, India or the EU's main players.

~~~
falcolas
I think that the first point is a legitimate one - if your own government can
buy you, so can someone else. Of course, when it comes down to it, everyone
has a pressure point. In my thoroughly uninformed opinion, it would be better
to identify and protect against such tactics than to use them yourself and
watch it backfire like this.

The rest, I can't argue with, though it doesn't really make me feel any
better.

------
loceng
The ridiculousness of it all is that it's fundamentally unreasonable to try to
prevent encryption as a means of safety and security against violence.

Sure, you can lock up all communication for privacy reasons, and the
government can spend all kinds of resources trying to control, prevent, or
circumvent encryption - but it's a waste of resources, as it's simply a
band-aid.

If I wanted to do something violent or evil, I/you could simply hold regular
meetings and use paper communication - the old spy-style stuff. Of course,
those networks can be infiltrated by governments with the resources, and they
can maintain that presence by allowing certain acts within the networks to
occur rather than stopping every one; that's how the war against Hitler was
won once Enigma was broken - watch the very well-done The Imitation Game
([http://www.imdb.com/title/tt2084970/](http://www.imdb.com/title/tt2084970/))
for a reference.

The only real solution is dealing with the root causes. I heard an analyst on
TV (a rare occasion for me) mention, after Trump's Saudi visit and speech,
that he didn't say the Saudis should look into the root causes of why
terrorist activity is growing in their countries. Of course a lot of it is
historical karma and rage over violent acts against their families, but a lot
is because people's basic needs aren't being met, which prevents the higher
levels of Maslow's hierarchy of needs from being reached and maintained.

There's a solution, and it requires building real community, locally, where
you are now - and striving for people to become healthy so they don't develop
biases and other coping mechanisms that prevent empathy, understanding, and
therefore compassion. Preventing responsible ownership of weapons isn't useful
either; not developing and supplying weapons en masse would be beneficial,
though most recent attacks have been with vehicles or knives.

Universal Basic Income will also bring us closer to a truly free labor market,
and it can evolve from there, giving people the time to do what they feel is
most important in the moment, without being forced to work in a shitty
environment with shitty managers or co-workers; the health improvement and
increased productivity alone are worth it.

~~~
aeorgnoieang
> however a lot is because people's basic needs aren't being met which
> prevents the higher levels of Maslow's Hierarchy of Needs from being reached
> and maintained.

This is contradicted by a lot of evidence. Terrorists are most commonly
_middle_ class members of their society and often well educated. If anything,
terrorism is a powerful means of satisfying the _higher_ levels of 'needs',
e.g. meaning, purpose, community.

~~~
loceng
Interesting, and perhaps a fair point. We'd have to understand what quality of
life being 'middle class' in each society means, what well-educated means, and
there will be more factors, of course. If you hurt someone enough, either
directly or indirectly, by killing family members and friends' family members,
that will definitely start to outweigh any feeling of being in a safe
environment - which doesn't simply come from being 'middle class' or
well-educated. Safety is the basic need, and if a world power is killing
hundreds of thousands of your fellow citizens, your future isn't going to feel
safe; I could mention PTSD and such, though I think that minimizes and
oversimplifies it.

------
jquast
"a few months later i was offered an interview for a position at the fbi
office for cyber-warfare in nyc who as well offered to fix my immigration
status"

and, "before going to monterey and while exploring the beauty of san francisco
i was contacted once by a us navy intelligence officer who seemingly
unintentionally appeared next to me at the bar"

[http://mickey.lucifier.net/b4ckd00r.html](http://mickey.lucifier.net/b4ckd00r.html)

~~~
groby_b
Would you mind clarifying what your quotes and the linked wall of text have to
do with the story?

~~~
dmix
Seems to be a similar story by a security guy who wrote crypto code for
OpenBSD. The larger quote:

> about the same time at the bazaar show in nyc i was contacted by a
> representative of us-ins and a ukrainian millitary attache at un. both
> investigating my involvement with openbsd. a few months later i was offered
> an interview for a position at the fbi office for cyber-warfare in nyc who
> as well offered to fix my immigration status (or none thereof at the time ;)
> with a greencard. nonetheless for the lack of credibility from the future
> employer i refused to talk to them as for me deportation did not sound like
> an acceptable worst-case scenario.

> before going to monterey and while exploring the beauty of san francisco i
> was contacted once by a us navy intelligence officer who seemingly
> unintentionally appeared next to me at the bar. later on my way back during
> a short stay in chicago also randomly appearing fbi agent. fellow was
> ordering food and beer for me and just like his navy pal gave me a warning
> to keep my mouth shut!

He was a foreign national visiting the US who probably got targeted by various
agencies after attending some security conferences.

~~~
groby_b
Ah, thank you.

------
19eightyfour
But wouldn't it be in the interests of mass surveillance to herd people toward
a chat option that isn't secure, or that the surveillants have a backdoor to?
You get two benefits: 1) chat that people think is secret but you can read;
2) people self-identify as selectors/targets by choosing to try to hide their
communications - which you can actually read.

And if such PR herding worked, wouldn't the surveillants be prepared to pay
for such efforts to make their job easier?

So, what seems readily apparent is: Telegram takes state money to offer an
insecure option, while dissimulating to the world that it's (a) secure and
(b) turning down state money all the time.

I know why this perspective isn't discussed in the MSM. But I don't get why
it's not discussed more here. It seems obvious to me. And personally, I think
that's a good thing: catch more criminals/terrorists.

------
ricksharp
Can someone correct me if I am wrong, but it seems relatively easy to make an
encrypted peer-to-peer messaging system.

I mean, simply use a public/private encryption algorithm that has proven to be
highly secure:

- Share your public key openly

- Anyone can send a message to you using your public key to encrypt the
message

- You decrypt with your private key on device

Do all the encryption/decryption on device and voilà, secure messaging. (This
is basically how HTTPS works.)

Of course this only allows a single device the ability to decrypt the message.

However, if you want to allow multiple devices to share a private key, they
can simply send each other their own private keys using the same encrypted
protocol.

In addition, for super-paranoid use, a master password could be used to
encrypt the private key, so that the password would be required along with the
private key to enable decryption. (Which is similar to how password managers
basically work.)

What am I missing?
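For concreteness, here is roughly what the scheme described above looks like as textbook RSA. This is a toy sketch with tiny fixed primes - unpadded, deterministic, unauthenticated, and therefore completely insecure - meant only to illustrate the mechanics being proposed, not a safe design:

```python
# Textbook RSA with toy parameters: for illustrating the mechanics ONLY.
# Real systems never use raw RSA like this (no padding, no randomness,
# no authentication) and never use primes this small.

def make_keypair():
    p, q = 61, 53                 # fixed tiny primes; real keys are 2048+ bits
    n = p * q                     # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                        # public exponent
    d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)
    return (n, e), (n, d)         # (public key, private key)

def encrypt(pub, m):
    n, e = pub
    assert 0 <= m < n, "plaintext must fit under the modulus"
    return pow(m, e, n)

def decrypt(priv, c):
    n, d = priv
    return pow(c, d, n)

pub, priv = make_keypair()
ciphertext = encrypt(pub, 42)
assert decrypt(priv, ciphertext) == 42
# Problems already visible: identical plaintexts produce identical
# ciphertexts, nothing authenticates the sender, and the message is
# capped by the key size.
```

Even this minimal version surfaces gaps - padding, authentication, key distribution, message size - and those are exactly the kinds of questions raised in the replies.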

~~~
tptacek
Which public key algorithm? In what mode of operation?

What are you going to use to _actually_ encrypt messages? You don't want to
directly use the public key primitives to do this.

In what mode of operation are you going to use that second, bulk encryption
algorithm?

How are you going to authenticate messages?

What will you do to validate the public keys of your peers? When you close the
application, will it forget everyone's keys? How do you prevent MITM on first
contact?

What happens when your peers change devices, and thus public keys? How do you
authenticate those changes? If you get any of this wrong, remote attackers can
MITM messages.

How will you handle file transfers (and images and videos and voice, which
will probably need yet another cryptosystem)? How will you cryptographically
bind those transactions to the (presumably, somehow) authenticated chat
session you set up?

What happens when someone's device is compromised? Is every chat they've ever
sent also compromised?

What happens if someone is briefly compromised? Is every message they send in
the future also necessarily compromised?

How will you handle updating your software when, inevitably, someone finds a
vulnerability in it? What happens if you have to upgrade the whole protocol?

None of this is easy. Most of these problems by themselves are hard in their
own right, but there's a combinatorics to them as well.

~~~
ricksharp
Thanks!

This is an excellent list of potential issues.

"What are you going to use to actually encrypt messages? You don't want to
directly use the public key primitives to do this."

I'm not sure what you mean by this.

Could you explain why there is a need for another encryption protocol beyond a
public/private key encryption?

If the protocol is secure against brute force attack, both the public key and
the encrypted messages could be open and would not create a vulnerability to
the private key.

What am I missing?

~~~
tptacek
* Asymmetric transforms are much, much slower than the AES transform or any other block or stream cipher.

* In most cases, an asymmetric transform gives you a deceptively small amount of headroom within which to fit your data before losing security.

* Asymmetric transforms are less safe to implement than simple authenticated symmetric ciphers.

* For that matter, cost-effectively authenticating messages will require "symmetric" primitives anyway.

* Modern asymmetric algorithms (like Curve25519) don't "directly" support encryption.

That's just off the top of my head. It is hard to think of a single competent
public key cryptosystem that encrypts directly with the asym transform.
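The standard answer to those constraints is hybrid encryption: use the asymmetric primitive only to move a short random session key, then encrypt the bulk data symmetrically. A structural sketch - both primitives here are insecure stand-ins (the asymmetric wrap is left abstract, and the "cipher" is an unauthenticated SHA-256 counter-mode keystream), so this shows the shape of the design, not real crypto:

```python
# Sketch of the hybrid pattern: asymmetric crypto moves only a short
# random key; the bulk data is encrypted symmetrically. Both layers
# here are illustrative stand-ins, NOT secure primitives.
import hashlib
import secrets

def keystream_xor(key, data):
    # Stand-in stream cipher: XOR with a SHA-256 counter-mode keystream.
    # Unauthenticated and illustrative only.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def hybrid_encrypt(wrap_key, message):
    # 1. Fresh random symmetric key per message.
    session_key = secrets.token_bytes(32)
    # 2. Wrap only the short key with the (slow, size-limited)
    #    asymmetric step - e.g. RSA-OAEP or an ECDH exchange in a
    #    real system.
    wrapped = wrap_key(session_key)
    # 3. Encrypt the arbitrarily long message symmetrically.
    ciphertext = keystream_xor(session_key, message)
    return wrapped, ciphertext

def hybrid_decrypt(unwrap_key, wrapped, ciphertext):
    session_key = unwrap_key(wrapped)
    return keystream_xor(session_key, ciphertext)

# Identity functions stand in for the asymmetric wrap/unwrap layer:
wrapped, ct = hybrid_encrypt(lambda k: k, b"a message of any length")
assert hybrid_decrypt(lambda w: w, wrapped, ct) == b"a message of any length"
```

This sidesteps the speed and headroom limits of the asymmetric transform, but note that it still answers none of the authentication, key-validation, or forward-secrecy questions from the earlier reply.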

------
Callmenorm
There aren't a lot of places that are embracing truly end-to-end encryption
for the masses. I think it would be tough in the U.S., but it's not clear to
me where a better place would be.

~~~
steevenwee
Something like Iceland or Sweden?

------
custos
Option 1: Could be Russian/Telegram propaganda.

Option 2: Could be true because seriously, who trusts the FBI/NSA not to
violate our privacy anymore?

Really not sure what to believe about this one.

~~~
mverwijs
A person born in Russia says something and you think it's "Russian
propaganda".

McCarthy would be proud.

~~~
custos
Both the US and Russia are well known for interfering with foreign
governments. Hell, even America has its own foreign propaganda service, the
"Voice of America" radio.

Also I have no problem with communism, I have problems with corruption and
government over-reach/attempts to influence populations.

I'm not new to this. I've worked with people who flood social networks with
bullshit to sway public opinion.

Always be skeptical, of both sides. And at this point I don't trust either the
NSA or the Russian equivalents.

------
prawn
Surely it's not just an issue of location but scale? Unless there is a huge
team reviewing code, an individual or small team could be paid-off by an
agency to provide a backdoor? For the right combination of large scale app by
small team, there'd have to be a price at which many individuals capitulate?
If the backdoor is somehow revealed, "doesn't matter, got my money".

I used to wonder whether some success of social media companies couldn't be
explained by secret payments for backdoor access. You could be operating out
of Europe or Africa and still get offered money, and other pressure carefully
applied.

You might think you'd hold true to your plan of privacy-for-all, but if they
offer $x00m or more?

------
retox
Sounds like a reasonable exit if no one wants to buy your popular E2E
encrypted chat app. Take the bribe, shut down, and move on to the next
iteration.

~~~
ethbro
1) Open source codebase pre-backdoor 2) Take bribe 3) Insert backdoor 4) Close
company

~~~
amq
truecrypt?

~~~
mverwijs
FUD?

[http://istruecryptauditedyet.com](http://istruecryptauditedyet.com)

------
amai
Pavel Durov is the guy who started vk.com, which stored passwords in plain
text: [https://thehackernews.com/2016/06/vk-com-data-breach.html](https://thehackernews.com/2016/06/vk-com-data-breach.html)
He has no clue about security. Whatever he claims, I would never use Telegram.

------
known
"Never do anything against conscience even if the state demands it."
\--Einstein

------
pigeons
There isn't even any need to weaken the homebrew encryption - good luck using
it at all. On the Linux desktop client, at least, I don't even have the
option; the "secret chat" feature isn't available.

------
robert_foss
This is pretty alarming stuff.

Especially considering that competitors like Signal are US-based. Signal is
owned by Twitter, which is by no means a small player, so it isn't likely to
fly under anyone's radar.

~~~
Ar-Curunir
Signal is not owned by Twitter. Moxie works there, but that doesn't mean his
code is owned by Twitter.

~~~
WillyOnWheels
I hate to pedantically bore people, but Moxie does not work at Twitter. He did
for a short period of time after Twitter acquired his two-person startup.

------
Asmod4n
There is no need for US intelligence to do that, looking at the choices
Telegram has made on its own.

------
EternalData
The government has progressed from banning encryption to trying to subvert it
:/

~~~
killjoywashere
That's actually a central tenet of the NSA's mission. But that mission
predates the internet and public-key crypto. Now it's like the gas company
running around drilling holes in gas pipes.

------
known
How is Telegram making money?

------
lngnmn
Good PR.

------
logicallee
The problem I have as an end user is that I want the infrastructure protecting
me to be invisible. Let's return to this after the following paragraphs. I
will make some pretty far-reaching conclusions.

I think we can all agree that if some totally below-the-radar crypto anarchist
who happens to have a few million dollars from bitcoins figured out that they
actually have enough access via the dark web to bribe a few Russian generals
and long story short detonate a nuclear bomb a few miles outside New York
City, just for shits and giggles, then they should be stopped at some point
along the way. This will seem like a made-up example to you but I purposefully
don't want to confuse the issue with practical examples. We can all agree that
at some point this should be stopped.

A reasonable time to stop it might be if intelligence agencies get a literal
screenshot from a darkweb chatroom (from a concerned participant, where the
participant thinks they're really going too far) where this is being planned
in exacting detail, but more information is needed to be precise. (For
example, suppose the source of the nuclear bomb were not Russia, but not
enough information was given to identify it. There are actually quite a few
nuclear states, and many of them are quite corrupt; a short list includes
India, North Korea, and Pakistan.)

I would think that this kind of actionable urgent intelligence should unlock
whatever privacy safeguards are in place, but the issue is that if there is a
correct "technical" solution (if cryptography works 'correctly' and is not
broken, in an academic sense), then there is no technical possibility to
unlock anything. If Tor, crypto currencies, and encryption "work" (in a
binary, yes it works, or no, it's broken sense) then following the receipt of
such a screenshot there is no technical means of any further step.

Here I'm going to be philosophical for a second. The future of technology is
nearly infinite human power. You can already in the next few seconds initiate
a crypto currency transfer to anyone anywhere in the world, who can receive it
without any banking infrastructure or oversight.

The arc of technology has been personal human enablement. When individuals
become nearly God-like and all-powerful, it is dangerous to be in a position
where, like the Muslims reporting the madman banned from his U.K. mosque for
radical insanity, the status quo is that if you report your friend to the
authorities saying, "My online friend, God-like in his powers, is planning to
murder a million people just for shits and giggles, and he's kind of insane.
Unfortunately, I don't know where he is or what he's doing, but I'm pretty
concerned. He has a lot of money from a few ponzi schemes he ran. It's pretty
credible for the following specific reasons (screenshots, quotes, etc)." And
the only response from the authorities is, "Thanks for all this. We don't know
where he is either, in the grand scheme of things a million deaths isn't that
much and if it happens we will look at preventing another such case."

That's a pretty silly response, isn't it? That the only possible response is,
sorry, nothing can be done.

Okay, now I've laid out why there should probably be some infrastructure on
the back-end.

What I don't like is that this translates to humans literally reading people's
private correspondence, web searches, etc. It's not very good.

What is a good middle ground?

Can't the NSA make things that run locally, so that no human is reading your
correspondence or web traffic, but as you start researching nuclear weapons
and making plans on how to murder a million people, and start making those
transactions, all this starts adding up and, to quote the Constitution, its
tools can receive instructions "particularly describing the place to be
searched, and things to be seized", so that after such a report, its
perpetrator can be found, or at least enough information can be collected to
stop it if it is actually taking place?

I think that all of us here could be okay with being stopped at some point
between purchasing a hundred million dollars in anonymous currency, and
detonating a nuclear bomb. It's sensible. That can be part of the social
contract.

It's difficult. Nobody wants to live with a judge, jury, and executioner in
their home looking at everything they are doing in case they break some law.

I am glad that I personally don't have to answer these questions. But we can
all agree on the need for privacy (no human looks at what you're doing), and
also on the reasonableness, as each individual online progresses toward
infinite personal power, for protecting the rest of society from credible and
immediate, specific threats.

I agree with cryptographers who think of cryptography as a tool that is either
working or broken. (If it has a back door, it's 'broken').

Perhaps if tools included a certain portion that runs locally they could
increase the extent to which the tools are not actually 'broken' (i.e. they
are actually working, and actually not backdoored), while also increasing the
safety every single person has from other individuals being able to plan or
pay for their specific death anonymously, and with impunity.

I realize that my suggestions here are not specific enough to be actionable,
they are not clear recommendations. But I don't even see these possibilities
being discussed (at least publicly), so I wanted to at least move the
conversation a bit in this direction.

EDIT:

---

I'm getting downvoted pretty heavily. Let me ask point-blank: are you okay
with someone being able to spend two weeks on the dark web researching how to
make and detonate a bomb using totally innocent chemical purchases, and then
your spouse, parents, relatives, or you, being an innocent victim of the
results exploding - or would you want that person to be stopped at some point
after they started doing that? The future of information is that it is
ubiquitous and easy to access. [I edited this paragraph from first to third
person.]

Actually secure communications would mean that it is technically impossible to
see if someone has started communicating with people at ISIS who have overseen
and helped people explode themselves. I am not saying communication should be
weak and insecure, but should I really practically be able to start doing that
if I want?

This is not some kind of false example, either.

Also, for downvoters: I think it is easier for you to agree with the other
half of my statement, that nobody should be looking at our web traffic and
correspondence, and that it should be actually secure, and also actually
private.

~~~
geofft
> _I think we can all agree that if some totally below-the-radar crypto
> anarchist who happens to have a few million dollars from bitcoins figured
> out that they actually have enough access via the dark web to bribe a few
> Russian generals and long story short detonate a nuclear bomb a few miles
> outside New York City, just for shits and giggles, then they should be
> stopped at some point along the way._

I agree with this, and nuclear disarmament seems to be the best way to stop
it.

Any approach involving changes to electronic communications seems unlikely to
be effective: people have been trying to bribe Russian generals for centuries,
well before the internet.

> _Let me ask point-blank: are you okay with me being able to spend two weeks
> on the dark-web researching how to make and detonate a bomb using totally
> innocent chemical purchases, and then your spouse, parents, relatives, or
> you, being an innocent victim of my exploding the results, or would you want
> me to be stopped at some point after I started doing that?_

How is this different from you spending one day reading the 1971 _Anarchist
Cookbook_? In the almost half century since its release, we don't seem to have
had an epidemic of homemade bombs, so I don't seem to have an evidence-based
reason to object to people being able to read that book. Is the dark web
different?

Also, I live in America. You could just literally go buy a gun at Wal-Mart,
and you have the legal, constitutional right to do everything you do up to the
second where you point it (intentionally or not) at me or one of my loved ones
and fire: you _cannot_ be stopped. Shouldn't I be worried about that instead?

~~~
toss1
It seems the problem logicallee is working on is the massively growing
destructive power available to individuals or small groups.

Technology is accelerating to the point where the destructive power that was
formerly available only to state actors with proper command & control systems
is now available to small states, groups, and even individuals -- chemical,
bioweapons, delivery by drone, etc. It is now possible to mail-order custom
gene sequences for garage bioengineering (yes, they do try to filter the
requests against homebrew bioweapons, but the operative word is 'try'). Even
computing power -- I'd be surprised if a random dozen people on this forum,
properly motivated and funded, could not take down the US power grid within a
year.

This scale of mass destruction in the hands of individuals is a far greater
scale and scope of problem than the ability of any nutjob to go to WalMart and
buy a hunting rifle to point at you, me, or a Congressman.

It is the kind of real problem that keeps serious security pros up at night.
And there are many of these scenarios becoming more real all the time, even if
logicallee's nuke example seems too fictitious for you.

The real question he's posing is whether it's feasible to build an automated
system that's sufficiently private and intelligent that it could scan comms
without violating privacy while alerting only on genuine threats.

I think it's an interesting idea, but even if implementable, it would fall to
the "Who guards the guards?" problem. What is to prevent the people who build,
maintain, and operate the watch-system from abusing it? Nothing but the same
level of ethical training that we have now, so this simply adds one level of
indirection.

~~~
c22
But crypto is built from math, which is available to anyone who possesses a
brain. Even if you locked down all the academic output related to encryption,
you can't ensure no one will discover another way to hide and transmit
secrets, either around or through your
usually-private-except-for-serious-threats communications network. You'd have
better luck trying to lock down harmful bioagents and fissionable materials,
but, as you alluded, as long as these technologies exist the world faces a
security threat. It seems to me the only way to combat these threats is to
construct a society where individuals never feel the need to leverage their
increasing power; a police state where anyone's communications can be
inspected or abused at will by an agent, a hierarchy, or an algorithm seems
antithetical to such a society.

