EU Commission to staff: Switch to Signal messaging app (politico.eu)
495 points by maxbaines on Feb 24, 2020 | 278 comments

I once worked in the Commission, briefly. Technical security seemed to be non-existent as far as I could tell (in an admittedly very junior role).

Once all of the interns got invited to the U.S. embassy to meet the Ambassador (some guy who literally said he got the job because he was friends with Obama). On the way out the nice embassy staff gave us goodie-bags, complete with handy pen drives...

Basically everyone was giving away free pen drives in Brussels then. I would be surprised to find that the U.S. didn't already have access to a large number of EU institution computers.

> some guy who literally said he got the job because he was friends with Obama

There are two kinds of U.S. ambassadors:

1) Career foreign service people

2) Friends of the presidential administration at the time

Examples of the latter aren't hard to find. Off the top of my head I'm familiar with William Timken, a US businessman who was appointed Ambassador to Germany by George W. Bush because he was a huge supporter [0,1]. Colloquially we could say he got the job because "he was a friend of Bush."

I remember being surprised to learn that US Ambassadors were (and maybe still are) individually responsible for paying for a good part of their formal duties as Ambassadors (e.g. throwing parties, travel, etc.). So historically, having wealthy people in the role was a requisite. I gleaned this from reading "In the Garden of Beasts" by Erik Larson [2].

The bits I have cobbled together suggest there is an interesting history surrounding the practice of selecting and acting as a US Ambassador.

[0] https://2001-2009.state.gov/outofdate/bios/t/53349.htm

[1] https://www.spiegel.de/international/spiegel/us-german-relat...

[2] https://www.goodreads.com/book/show/9938498-in-the-garden-of...

Or, to put that another way—there are two kinds of ambassadors/diplomats:

1. People the administration is paying to go out and brown-nose foreign leaders, or those leaders’ own ambassadors/diplomats, in a combination “outbound sales” and “customer service” sort of way;

2. People who presumably have the ear of the administration, where foreign leaders want to pay to impress them in order to access the administration through them.

Whether or not you call someone who’s a friend of the president an “ambassador”, foreign leaders are going to want to treat them as an ambassador, so you may as well give them the protections that playing that role requires—even if they never normally leave the country—on the off-chance that a foreign leader either runs into them or intentionally pursues and {hires through a proxy, seduces, coerces, etc.} them.

In this second sense, “ambassador” is short for “a person a state leader is worried about the welfare and mental sanctity of, in the face of global political machinery trying to manipulate the leader by any means necessary.”

The US typically sends Type 2 ambassadors to plush, comfortable countries, and Type 1 ambassadors to postings in difficult or very poor countries. I knew my small Eastern European country had made it when we started getting incompetent ambassadors.

All US ambassadorships are political appointments, therefore option 2 is the only answer. These positions are almost always ceremonial. They are there to execute the will of the president.

The real work is done by the diplomats who are appointed through the US Foreign Service office. They do the actual work of the embassies.

> All US ambassadorships are political appointments, therefore option 2 is the only answer. These positions are almost always ceremonial. They are there to execute the will of the president.

Not really, a foreign service officer can be appointed by a president to be an ambassador. A recent example from the news is Marie Yovanovitch. In regular times, my impression was that foreign service officers were appointed to the more difficult and critical ambassadorships (e.g. Ambassador to the USSR), and friends of the president were appointed to "easy" ambassadorships (e.g. to a staunch ally with a fashionable capital).

A president can appoint a politically connected friend or choose one from the diplomatic corps. But in any case they are all political appointments selected by the president.

Now, for sensitive countries you want a specialist. You're probably not sending golfing buddies to broker Israel/Palestine peace agreements. But they'll probably do alright as Ambassador to Belgium or Canada. Those embassies are practically on rails.

In this case instead of a golfing buddy, his son-in-law was appointed to deal with Israel/Palestine.

Nepotism is a problem, as is cronyism.

Just to reinforce that this is correct and to add detail - the Deputy Chief of Mission is the senior career foreign service person responsible for actual embassy operations.

That’s interesting. Elsewhere (Eastern Europe during communism) ambassadors were often political opponents of the government - a form of isolation and payoff in a cushier government-supported role abroad. Their job was to parrot the government’s policy instead of spreading their own.

And I work at a European cybersecurity company and we are routinely tasked to inspect USB drives and devices EU politicians receive at conferences, so I guess we've come full circle.

I'm sure the NSA has implants in the EU systems but they're not dumb enough to just hand them out on USB drives that can be traced back to them.

>I'm sure the NSA has implants in the EU systems but they're not dumb enough to just hand them out on USB drives that can be traced back to them.

Why not? The US was found to be spying on Merkel and nothing seemed to come from that.

That's a good point. How much tracing back can be done if, say, I stuck the drive in my workstation and it proliferated over the DG's intranet? Assuming it's at least vaguely sophisticated wouldn't it obfuscate things like how long it's been on the system for?

What kind of tools do you use for that? Would they catch BadUSB-like malware?

Lasers, electron microscopes, and X-ray scanners used for testing and verification in the semiconductor manufacturing industry, plus loads of custom and open source tools.

Yes, X-ray scanners seem to have made big inroads, and only recently I learned that companies use them to verify phones; in the past they would visually check them, but copies have gotten so good that X-ray is the only real way now.

Though even then, you have to have something to compare it with, and also know what you are looking at and be able to eliminate what should and shouldn't be there.

Though when you have multilayer flash, the possibility to have a nefarious layer, sandwiched between good layers, makes things way harder.

All that said, as a rule, I'd just downright ban any non-company/entity USB drive or any tech. Keyboards and Mice, more so.

Can it really be economically viable to have these scanned rather than just toss them out? I would imagine that anyone high enough in the EU to be a target is only using drives provided to them

The economic viability of things is way above my pay grade, but if I were to hazard a guess, it could be just to know who is "out to get you", or it could also be one of many forms of corporate welfare: the same way the US shovels money into the F-35 bonfire to keep its arms industry afloat, the EU could be doing a similar thing to keep its own security-critical industries going, especially since Snowden. My €0.02

It's not so much about being able to use a free USB drive; more about knowing if someone is distributing malware.

I'm surprised they have not developed their own or at least bankrolled the development of one via grants.

Though it would be good if there was an open source communications platform that would allow the public to engage with politicians in a formal and constructive way. Alas, so much disparity in solutions that it often irks me.

https://ec.europa.eu/digital-single-market/en/projects It'd be great if the interface for that site were better, though Google didn't find anything of note.

France developed their own Matrix-protocol-based network called Tchap. Great idea for a nation that wants to stay independent in an increasingly American/Chinese tech-dependent world.

When the government in Germany does IT projects, they usually burn millions of Euros even for simple projects. And then, years after the planned rollout, the project is still unfinished and no politician wants to touch it anymore.

My favorite was the "special email inbox for lawyers".

So, I am somewhat hesitant to let the EU develop a chat client.

The Dutch government is also often terrible at managing IT projects. It seems an industry has evolved to game the public procurement rules required by the EU. Those companies just leech money for never-ending IT projects that tend to be over budget, over time, and working poorly.

Having worked at a company in public procurement I would say that matches my experience. I would say we were relatively honest compared to some of our competitors, but there were almost no incentives for us developers to do a good job other than our personal morals and everyone including us were more or less gaming the system.

I hated working there because I felt like a parasite just making the world worse. The in-house IT of our customer would probably have done a better job if they had just been allowed to hire some guys instead of having to go through an easily gameable procurement procedure. Not because they would have been more competent, but because they would not have had perverse incentives.

This story is fascinating, do you know where I can read more about public procurement?

For a semi-fictional humorous take on American military procurement watch the movie The Pentagon Wars

Is that one more accurate than "Joint Session" (Veep S4E1)?

Never saw that Veep episode. It's based on a book with the same title by a whistleblower, and my understanding is they dramatize the specifics but the fundamental gist is true, if a little more complicated. For example, in the movie the main character thinks for a while and then decides to use the military rules against his opponents, so he asks his secretary to get him the rulebook and figures out a rule with a loophole. I think in reality he used a generally similar loophole to get the word out, in combination with a lot of other strategies, but he didn't figure it out from staring at "the rulebook" for a few hours one night.

I think it's more that one of the consequences of painful public procurement processes is that they incentivise public officials to avoid procurement as much as possible. This leads to larger/maximalist projects being specified and a proliferation of "framework agreements" under which small projects are undertaken, both of which seem to benefit the large incumbents disproportionately.

In Portugal there is this thing called ViaCTT, which is a crappy reinvention of email. The issue is that it is mandated that official communications with the state be done through it. It emulates conventional mail so much that the "letter" contains a picture of the stamps. If you are a company it is the official channel of communication. It was done as part of a modernization initiative, but now everybody just hates it due to its general uselessness and rare usage.


At least the Dutch sites work. Try a post-communist country's gov site and you'll start appreciating it right away.

Have you ever used Estonia's?

If anybody wishes to explore, here's an example: https://www.eesti.ee/en/

I like how in "eesti.ee/en/" more than 50% of the letters are "e". So that's how I picture an Estonian keyboard now.

The EU should just give the keys to the server room to Estonia and let them handle our IT.

I don't doubt there are countries that are worse at it, but "the Dutch sites work" is not always true, or sometimes true for questionable definitions of "work".

I mean, we've got this nice secure login system for government sites: DigiD, which works fine. So you need that to submit your taxes. A few years ago, it turned out you needed any valid DigiD to submit your taxes, not necessarily your own. There are tons of other sites that cost many millions, and resulted in an atrocious user experience. And some projects simply failed after going way over budget.

I wouldn't really call DigiD secure; it doesn't seem to be open source. Plus, rolling your own protocol/crypto instead of using PKI, which has immense effort behind it to keep it secure, is not a good thing.

Estonia has a similar solution, but "Smart-ID" was created after PKI was fully rolled out, it's terrible compared to that. Centralized, unverifiable, stores secrets insecurely, does some cryptographic bullshit. The two solutions somehow seem very related.

Can you explain why Smart-ID is bad? It is terrible compared to what, PKI, our ID card, the mobile-id system? What kind of bullshit does it do?

I haven't heard any such criticism about that system before, so it would be interesting to hear. I've been using it for years, it's easy to use and there haven't been any security problems with it. The company that developed it is trying to export it to other countries as well.

> What kind of bullshit does it do?

It doesn't utilize hardware-backed secure secret storage on Android (~80% of market share in EE, if I remember correctly). This means vital key material is unsafe from any compromise, just a plain file somewhere. It also means they have to go to a lot of effort to detect clones, which is certainly not infallible. Cloning or compromising an ID-card or Mobile-ID is way, way harder, with their audited security.

The technical overview describes how the key is generated on the device and half of it is sent to AS SK's servers. There's no security benefit in that (they can't really verify you've deleted their half of the key from your device), except that now AS SK has half of your private key. In general their technical overview gives the feeling that they want to pull wool over people's eyes, which is IMHO a bad sign.

It also totally lacks any privacy, every login you do is logged and counted. SK AS shouldn't mandatorily have that metadata.

It relies completely on their centralized servers: if they go down, your authentication and signing go totally down. Neither service owners nor their clients can do anything about that. Identity mustn't be like that.

It's totally proprietary, thus their claims about security can't be easily verified, and it's limited to their blessed OSs (and versions). If they go bankrupt or similar, the identities of all those people are in jeopardy.

It's also expensive for server administrators and thus has a high barrier of entry. Identity and security shouldn't be behind a paywall for either side. The web has Let's Encrypt, and now a lot more sites are protected and the web is better off; SK AS has done the opposite.

In comparison, ID-card's PKI is usable for free, supported in every system (that can do TLS basically), entirely FOSS if you wish, the hardware is much more secure (EAL6+), it doesn't mandatorily leak metadata, doesn't rely on a centralized proprietary server to work.
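The half-key arrangement criticized above can be illustrated with a toy additive secret split. This is an assumption-laden sketch for intuition only, not Smart-ID's actual (proprietary) scheme: the private value is the sum of a device share and a server share, so neither party alone holds the key.

```python
import secrets

# Toy split-key illustration (an additive split mod a prime; NOT the real
# Smart-ID construction): the full private scalar is the sum of a device
# share and a server share, so any operation needing the key requires both
# parties, and either share alone is just a uniformly random number.
P = 2**127 - 1  # toy prime modulus

full_key = secrets.randbelow(P)
device_share = secrets.randbelow(P)
server_share = (full_key - device_share) % P

# Together the shares reconstruct the key...
assert (device_share + server_share) % P == full_key
# ...which is also why the commenter's objection holds: if the device ever
# held the full key and merely "promises" to delete the server half, the
# split adds no security against a compromised device.
```

The commenter's point is that this split only helps if the device provably never holds the full key, which is exactly what can't be verified here.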

Possibly. I'm giving it the benefit of the doubt because I haven't heard of any problems with DigiD itself, but you could certainly question whether that benefit is deserved here.

But my point is that even if DigiD is secure, that's still not going to help you when the tax service uses it incorrectly.

On the other hand, German public money was for quite a while the most important sponsor of GPG. Unfortunately that stopped long ago; I guess all the reasons for that can be broken down to "it was too cheap" in one way or another...

When you try to buy software like this through conventional state procurement mechanism you'll inevitably waste a lot of money, but it would be really cheap to "grow" it through supporting an environment where that can happen. Why doesn't Werner Koch have a position in academia? Third party funding of whole departments is commonplace and is always a blind bet trusting that the grantees will come up with something rewarding.

Do consider that you never read an article "obscure government IT project delivered fully working and only slightly delayed" because that isn't attention-grabbing enough for newspapers to bother with, not because it never happens.

I always wonder how these projects can be so expensive. Heck, they are so expensive you could just order two companies to deliver a product and pit them against each other so that there is a strong incentive to deliver a working product.

Such projects are expensive since it's not only development but also rollout including training of users.

The development is also more expensive than in other areas since there is a need for documentation.

And then there is a flaw in the process: producing the software has to be done via some bidding process. For writing the requirements documents they already rely on companies, which can add clauses making it hard for competitors; after that, the lowest bidder wins. In case there is a competitor, and not some big player like T-Systems as the only bidder, the offers are typically "optimistic", meaning that at some point the budget will run out and extra budget has to be granted (or the project would fail, which doesn't look good, and they'd have to restart the whole process, thus wait another two years), and there a "smart" company can find quite a few extra costs. ("Oh wait, you updated from Windows XP to 10, we have to redo a lot and then re-test and re-certify.")

It's a game some companies can play quite well.

That all said: Software is expensive. Gathering requirements is hard. Bureaucracy, which among other things tries to prevent corruption, causes extra work.

That, and they are expensive, because people expect them to be expensive.

When, say, the national police ask companies to come up with software to manage contact details for all employees, you could come up with "install and configure SomeOpenSourceCRM on a VPS" for €3000.

Whereas typically some Enterprise Consultancy quotes a factor hundred of that to "integrate" it in a crappy way.

I've worked on such projects, for governments. The overhead is ridiculous, the NIH syndrome is rampant, and the specs and "but this is how we do things" requirements are set in stone.

Very often an evening of tuning some open source tool would suffice; but either some rule disallows the language/server/deploy speed, or some manager dislikes it because last year they decided that from now on everything must be Java+XML, or you need at least three weeks of meetings before you are allowed to even start.

This is not a joke. I've had a simple off-the-shelf Rails tool cut, because "we don't allow dynamic languages". I've had projects delayed by 3+ weeks because "we require a deploy window to be announced 21 days in advance", and I've worked as the only developer on a project with four(!) managers managing me.

That is why government projects are expensive: a catch-22: "because they are".

Well for many things there are "good" reasons.

> we don't allow dynamic languages

I would translate as "we have no experience in running and maintaining those platforms, which causes issues"

> require a deploy-window to be announced 21 days in advance"

This sounds a bit like ITIL or similar processes in place, which are there to ensure the systems are in a defined state.

> with four(!) managers managing me.

Usually those should have different roles: technical oversight, communicating with the actual users and gathering their requirements, IT management for operations, and then general oversight (variance exists, but there are lots of things and different stakeholders, and as they spend public money they have to do extra documentation of processes for accountability reasons).

The private sector is very much the same, just look at SAP. You only hear less about it because their budgets aren't as detailed and few people care about waste that isn't paid for by their taxes.

While I also feel the temptation to yell "I could do it for 1/100th of that" at a screen when I read/hear about these projects, basic probability suggests that I'm probably wrong. It's me (with as little "enterprise" experience as I could manage) vs a whole lot of people who don't strike me as particularly stupid and who have lots of incentives to keep costs down.

Now that is a good idea!

That's why it should be done like DARPA does it: choosing the best in the end.

Yes, selection should happen after the fact. Build at least an MVP with all critical functionality implemented in working stages.

Then use that for the competition to get the contract.

Preselection is hot garbage, as proven time and time again. Of course mechanism to mitigate the upfront cost of participating in such a competition should be implemented but in the end it should turn out cheaper/more worthwhile since a working MVP already exists by time of selection.

I’m as certain as I can be that few if any companies will be willing to invest (hundreds of) millions into projects to build some custom solution just to risk losing the bid in the end. And on top of that they would have a product that may have zero demand elsewhere. Imagine if your employer told you they’ll hire you after you successfully deliver a project, to test your worth. You’d think it’s a joke.

And anyway most costs are incurred during the lifetime of the project, after you proved yours is the best solution, with endless change requests, delays that require additional money, etc. The companies bidding usually have far more experience at syphoning money than the client has at holding on to it.

This is true. What I see in my country is that the initial product is maybe expensive, but still somehow acceptable. The problem is that, for some reason unknown to me, the government does not gain rights to modify the implemented solution and can't bring in another vendor to implement the next phase or do maintenance. And this introduces many-years-long vendor lock-in that pays off for the vendor.

They don't have to do it. Just pay an established tech startup to do it via EU grants.

The German Bundeswehr also seems interested in using Matrix. [0] A custom, self-hosted Matrix implementation from the EU could be a great IT project to increase security and be less reliant on foreign software. It would require more work than just writing a memo, though.

[0] (In German) https://www.heise.de/newsticker/meldung/Open-Source-Bundeswe...

To be fair, the vulnerability was fixed quickly. And in the end it improved the security of Matrix and the government application. I prefer this to a closed-source application developed internally, without communication about security vulnerabilities.

Yes, the link does cover that, and from my perspective it was a ruddy good response time and well handled.

I can think of many comparable situations in other countries (some EU ones as well) in which the person finding the issue would very easily have been locked up.

To be clear, the vulnerability was with the specific server configuration for the French Matrix deployment (authing people based on email domain), and not a flaw in Matrix itself. We're not aware of anyone else running in that config. https://matrix.org/blog/2019/04/18/security-update-sydent-1-... had the details.

The actual bug was thanks to a long-standing bug in python's standard email.utils library, which finally got fixed: https://bugs.python.org/issue34155, combined with insufficiently-defensive coding and testing on my side. (I wrote the auth code in question).

This is a good story, but the title seems needlessly clickbait. The flaw was fixed rather quickly, the process for communicating the issue did not seem to be a hassle.

It does not really say that the app is "super not secure". Just that people make mistakes, and it's not even shameful the way they reacted to it.

Interesting read. Nice trick with bypassing email domain restriction.

This is for communications to outsiders, who can't be expected to have an EU-internal app installed on their devices. Signal is reasonably widespread, and probably the best such messaging platform out there, although it has serious issues like the requirement for a phone number.

I've read it requires a phone number so that it does not need to collect or store any (other) data on users.

It also very shadily holds that number hostage: you are forced to wait seven days after uninstalling the app to be able to unregister your number from Signal. If you don't, you'll just not get messages from people on Signal. My opinion of them is quite low thanks to this iMessage-tier bullshit.

This certainly isn't the case with any version I've used. It's just Settings -> Delete Account, and it's a large, red, very visible button.

This certainly is the case when you don't have the app any more.

So why couldn't the EU fund the development of a Signal fork that doesn't need phone numbers?

We had 14 standards. With glorious government funding we'll make one standard to replace them all. We will have 15 standards.

Forking a piece of software whose market moat is 100% network effects is not easy.

In theory, they could. But IMHO it would not be wise to overestimate the EU’s competence when it comes to purchasing software projects and writing a sane requirement specification.

As a citizen within EU I would rather see them donate money to the Signal Technology Foundation instead. OTOH that could be controversial, so perhaps it would be better if citizens and companies would donate directly instead.

Perhaps they could establish a satellite office in the EU with developers that are officially [0] hired by the EU but work for Signal? That way there could be a purely EU owned fork of Signal.

[0] Signal conducts the hiring and interviews, EU pays the salary

The EU's budget is (was) just 4% of the US' budget. That's still 148 billion Euro, but the comparison might show why the EU isn't quite as keen to "invest in <x>" as people sometimes seem to think. (Although one might consider these suggestions a reflection of the EU's connection with dreaming big in people's minds)

The EU's structure is also a bit different than that of nation states, with cabinet members and parliament rather limited in their ability to make spending decisions. Anything serious needs approval by the council, i. e. the heads of state.

And just now is probably a bad time for new expenditures, as Brexit will severely cut into revenue.

Why wouldn't EU bureaucrats want to use their phone numbers as the identity?

Why would anyone trust a fork of Signal?

Why reinvent the wheel?

Signal does the job, doesn't store data, not even metadata, and can be used immediately.

> doesn't store data, not even metadata

Isn't that just a promise? Also, even then, it is based on your contacts, which are seen by Google.

Signal's source code is published so you can go look for yourself. If you believe that despite precautions the source code won't match what actually runs on your phone then realistically you've no real option to use any technological artefact and will be obliged to resort to maybe whispering coded messages to close confidants. As a large technocracy this is not a practical option for the EU.

Your phone number is sent to Signal's servers during sign-up, and it uses the conventional SMS service to "close the loop" and prove this number is under your control. Having signed up, you can use a PIN to lock the number to you so that anyone without that PIN can't do the "new phone" dance (this lock expires if you stop answering PIN prompts correctly).

If you choose to do so, a digest of your contacts' phone numbers can be sent to Signal for them to match against the set of (also digested) numbers of Signal users, so they can tell you who has Signal enabled.

Whether you choose to give your contacts to Google, to Facebook, to Apple or whoever is up to you and outside Signal's control.

Signal does let you create an encrypted profile, and then your device can tell other people's devices the keys to look at the profile if you want to allow that. You don't have to use a profile or trust anybody else if you don't want to. Signal doesn't learn the keys (unless I guess you deliberately sent them those keys) so they can't read the profile.

Unlike many of its competitors, Signal's messages can't be read by Signal, and in most cases this includes who sent them (Signal's "Sealed Sender" means that, in most cases, if you correspond with someone, the indication of who sent a message is encrypted such that the recipient can tell you sent it, but Signal only knows it was someone they authorised to send them messages). When you attach images, Signal avoids learning exactly how large they are, and if you use a service like GIPHY to add typical meme images like Stephen Colbert eating popcorn, Signal double-proxies this so that Signal doesn't learn which GIF you used and GIPHY doesn't learn who used it.

Edit: Fixed name of GIPHY. Huh.
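The Sealed Sender idea described above can be sketched with a toy envelope. This is a conceptual illustration under loud assumptions: a throwaway XOR "cipher" stands in for Signal's real public-key cryptography, and the server is modeled as seeing only the opaque envelope plus the recipient's address.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Toy symmetric "cipher" (NOT real cryptography): XOR the data with a
    # SHA-256-derived keystream. XOR is its own inverse, so the same
    # function both seals and opens.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def seal(recipient_key: bytes, sender: str, text: str) -> bytes:
    # The sender's identity travels INSIDE the encrypted envelope, so the
    # server routing the message sees only the recipient, never the sender.
    return toy_cipher(recipient_key, f"{sender}|{text}".encode())

def open_sealed(recipient_key: bytes, envelope: bytes) -> tuple[str, str]:
    sender, _, text = toy_cipher(recipient_key, envelope).decode().partition("|")
    return sender, text

bob_key = b"bob-device-key"  # hypothetical stand-in for Bob's key material
envelope = seal(bob_key, "alice", "hello")
assert open_sealed(bob_key, envelope) == ("alice", "hello")
```

In the real protocol the sender proves legitimacy with a sender certificate inside the envelope, rather than the recipient simply trusting whatever name decrypts out, but the metadata-hiding shape is the same: sender identity is ciphertext to the server.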

> so you can go look for yourself

I can look at it for myself, but there won't be any point, as they can simply run different code on their servers.

> If you believe that despite precautions the source code won't match what actually runs on your phone

On their servers

Also what precautions? As far as I know their binaries are not reproducible.

> this expires if you stop answering PIN questions correctly

After a week if I remember correctly.

> a digest of your contact's phone numbers

> also digested

A hash? This does not protect against anything. There are far fewer than 2^32 active mobile phone numbers per country. It would be trivial to brute-force.

> Whether you choose to give your contacts to Google, to Facebook, to Apple or whoever is up to you and outside Signal's control.

The point is that someone* other than you will be able to see the metadata. It does not matter if it is Signal or not.

> I can look it out for myself but there won't be any point as they can simply run different code on their servers.


That's true for literally all services. Do you expect to be able to walk into the server rooms and dump the binaries to inspect them?

> Also what precautions? As far as I know their binaries are not reproducible.

The client builds are. Reproducible server builds don't tell you anything about what is running.

> That's true for literally all services

The point is that your client should not send any information which you expect to keep private to their services. It is the exact reason that we use e2ee rather than just tls for chats.

> The client builds are

Not fully, see https://signal.org/blog/reproducible-android/

> Reproducible builds for Java are simple, but the Signal Android codebase includes some native shared libraries that we employ for voice calls (WebRTC, etc). At the time this native code was added, there was no Gradle NDK support yet, so the shared libraries aren’t compiled with the project build.

> Getting the Gradle NDK support set up and making its output reproducible will likely be more difficult.

> The point is that your client should not send any information which you expect to keep private to their services. It is the exact reason that we use e2ee rather than just tls for chats.

Yes. And Signal achieves this better than all the other major options, given the number of footguns in the other tools.

If you are concerned about the client builds then run a decompiler. It's not hard. People have been auditing binaries for ages.

It's a promise that got taken to court and survived. https://signal.org/bigbrother/eastern-virginia-grand-jury/

You can use a different contacts app, you don't have to give all your information to Google. My contacts are managed by a Nextcloud instance.

> You can use a different contacts app, you don't have to give all your information to Google

As far as I know OWS does not mention this anywhere on their site or in the app -- aren't the usability problems and lack of sane defaults in other programs (such as gpg) often given by Signal supporters as reasons to prefer it?

That being said, is that even possible? I admit that I am not too familiar with how Android phones work. Signal requires Google Play services in order to work, right? Is this not enough for Google to see your information?

Google Play Services and your contacts are completely different things. I'm confused about what information you think is being sent. As far as I'm aware, all FCM does is provide a push that tells the app to check in with Signal's server. No contact information is in play.

Signal also released a WebRTC version that doesn't depend on Google Play Services if that floats your boat.

You can see it from their court filings. I think it's an amazing technical accomplishment that the only information they have stored in the clear under a user's phone number is the last connection date and account creation date.


You can also disable contact backup on Android.

You can use some of the gapps alternatives. Google only sees contacts if you sync them through your Google account or use Google contact book app, right?

It's a wheel that they do not have any control over, from source, hosting and distribution.

But for an entity like the EU they could have complete control over the lot, with relative ease. The client, server and protocol are all open.

Is the server really open? I was under the impression that it was not. That's my only issue with Signal honestly, you can't rehost/fork easily if you're unhappy with the way the project is going.

Forking Signal isn't practical for the same reason forking the UN wouldn't be practical: network effect.

But the code is right there for you to read it and re-use it, you just won't get far building your own network with one user and demanding everybody else switch over.

Signal does store important metadata: the phone number itself. In many countries, you cannot purchase a SIM card now without showing ID, and the seller takes down your ID details and forwards them to the state. Then your phone number is inextricably linked to your identity. Yes, the contents of messages may be securely encrypted, but many states can easily determine which of their own citizens (and visitors to the country who bought a local SIM) are using Signal.

And in this context it's probably not an issue.

Because only a fool would needlessly rely on a foreign private entity accountable to no one. And while a private entity might trust a foreign government more than their own (or assume that the foreign government doesn't care about them), this doesn't apply to the EC.

While I'm _surprised_ that they didn't choose that approach, I'm glad they decided to do something sensible instead.

Developing a custom app would mean throwing tax money into a black hole, generating an inferior product that nobody would use (among other things due to the complete lack of network effect). Supporting an existing, good open-source product (and possibly spending grant money on _that_) makes a lot more sense, and is a practical solution.

>>Promoting the app, however, could antagonize the law enforcement community.

The margins of law enforcement and intelligence can be blurry, but to the extent that they're antagonistic towards private communication as a whole... "law enforcement" is kind of a euphemism. The article should have said "could antagonize the intelligence community," whether it's police or whatever.

Over time this "antagonism" is growing, because intelligence is increasing its reliance on these data sources.

"What do you mean we can't analyse IM chats. How are we supposed to do our job?"

This probably antagonizes law enforcement more than intelligence.

If you are interesting enough to be a target for the CIAs of the world then they will probably find a way to get at you; if not through your phone then through your spouse's, or maybe your Echo, or whatever. These people would do well to distance themselves from devices in general.

For the masses who do nothing too wrong yet still engage in WrongThink(TM) (relative to the politics of the time and place), these apps are great because they astronomically increase the effort required for law enforcement to rifle through your private communication, which greatly reduces the ability of police to exercise arbitrary enforcement over the common man. A consequence of this is that tracking down drug dealers and other petty criminals requires "good old fashioned police work" as opposed to hooking their phone up to a black box like police have grown accustomed to.

I agree with you that the distinction between law enforcement and intelligence is blurry but it's the folks solidly on the law enforcement side that are hindered by E2E encrypted messaging because it diminishes their power over people who have done nothing wrong and makes their routine work marginally more difficult.

> If you are interesting enough to be a target for the CIAs of the world then they will probably find a way to get at you

The big thing for intelligence is that now they track everybody. It doesn't matter whether they think you are relevant or not; if at some point you become relevant, they already have plenty of material to blackmail you.

In previous discussion here on HN, Wire was claimed to be more secure than Signal (something related to initial key sharing?)

I don't understand why there's so much publicity behind Signal, and Wire is never mentioned. I've been using Wire for years, and it doesn't require a phone number to setup.

From the linked techcrunch article: "new CEO, who joined late 2017, didn’t really care about the free users ... : ‘If Wire works for you, fine, but we don’t really care about what you think about our ownership or funding structure as our corporate clients care about security, not about privacy.'”

I honestly don't see how privacy and security do not go hand-in-hand.

From Wikipedia[0]: "Wire stores unencrypted meta data for every user" ... "Wire changed its privacy policy from "sharing user data when required by law" to "sharing user data when necessary"." ... "Wire did not inform its users about this policy change, which makes it even more suspicious, considering that it is about a tool that promises privacy to its users."

I used to speak highly of Wire a few years ago but have not used their programs in a while. It's pretty clear that non-corporate users and their expectations of privacy are of little importance to them.

[0] https://en.wikipedia.org/wiki/Wire_(software)#Critcism

We started using Wire before Signal, a few years ago. It was light and we loved the dark interface. Then it got bulky and my Android phone killed it too often for it to be any use. Then we switched to Signal; it was bare-bones, but it was light and didn't miss notifications.

Personally, I'm suspicious of Signal because of its publicity, smells like a pot of honey.

The code is open, if you'd like to decide for yourself: https://github.com/signalapp

It is definitely a pot of honey now for the intelligence agencies, which will be working hard to break it since everyone is moving to it.

Still protected by the math?

All the three-letter agencies already knew they had these tools available, GPG, veracrypt, Signal, etc.

Now it's just impossible for governments to wholesale eavesdrop on conversations. If they want access it now has to be targeted, and most likely device-specific, which is harder.

Sure but the math alone won't protect it. I'm thinking more of bugs in the program such as buffer overflows etc. that can potentially be exploited. They don't even have to exist in the Signal code itself but one of the libraries they use. Everything will be scrutinized increasingly heavily now.

Moxie had better preemptively order up some new code reviews.

One reason I stopped using it was that it turned from delightfully light to painfully bulky. Also its deliberate transition (I’m sure they had reasons for that) from a simple chat app to something more Slack-like.

I know Signal is secure and all — and I use it myself — but I can’t help but think: how can we trust that the central servers aren’t wiretapped? It would be the ultimate proof of security if one could transparently verify that the middle man is running the actual code it claims to be running.

Signal uses Intel SGX to give you some assurances about this, at least for parts of their serving stack. You can run the remote attestation tools and get a report back from Intel that says, in effect, "you connected to a genuine CPU and it's running software with this hash". Then you reproduce the build of the open source code and check the hashes match.

I'd be surprised if anyone has ever actually done this. It's a very obscure thing. But to their credit, it's possible.
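The final comparison step is conceptually simple once Intel has vouched for the quote. A minimal Python sketch of the idea (the function name and report field here are hypothetical, not Signal's or Intel's actual tooling, and a real MRENCLAVE is computed over the enclave's initial memory layout rather than being a flat file hash):

```python
import hashlib
import hmac

def verify_enclave(report_mrenclave_hex: str, local_build: bytes) -> bool:
    """Compare the measurement claimed in a (hypothetical) attestation
    report against the measurement of an enclave we built ourselves.

    Stand-in only: it illustrates the "compare attested hash to
    reproduced hash" step, not real SGX measurement rules."""
    local_measurement = hashlib.sha256(local_build).hexdigest()
    # Constant-time comparison, as for any security-sensitive equality check.
    return hmac.compare_digest(report_mrenclave_hex, local_measurement)

# A report attesting exactly the bytes of our reproducible build checks out;
# anything else does not.
build = b"enclave image from our reproducible build"
attested = hashlib.sha256(build).hexdigest()
assert verify_enclave(attested, build)
assert not verify_enclave(attested, b"tampered image")
```

The point being: the attestation only helps if someone actually reproduces the build and does this comparison, which is the obscure part in practice.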

SGX can't actually make you have secure end to end encryption though. The Signal protocol is necessary but not sufficient. The operators can always just push a software update that invalidates all the security guarantees. It's been a problem since the start but they never talk about fixing it, even though the issue is a glaringly obvious one.

What they could do is allow third parties to audit their software updates, and then cross-sign the binaries. Android allows this but there's no UI for it. It'd benefit from a collaboration with Google to allow multi-vendor apps.

After that comes sandboxing. Dalvik/ART doesn't support the Java SecurityManager API. However, for sandboxing software components cheaply it's hard to beat. If a component is sandboxed in a correct manner then the audit costs get much lower. You don't need to re-audit a component that's changed if the sandbox means it can't access keys or message data, for example (and if everything is memory safe: you'd have to do multi-process on iOS which is a lot more expensive).

> Then you reproduce the build of the open source code and check the hashes match.

Assuming that the SGX environment hasn't been tampered with. There have been several flaws in SGX, e.g. https://www.theregister.co.uk/2019/02/12/intel_sgx_hacked/

SGX is patchable via microcode updates (and some parts are software). The remote attestation contains the patchlevel of the system, so, a client can tell if you aren't keeping up to date with security patches.

And so far these are not zero day bugs. The researchers work with Intel to only publish when there are fixes available, usually. It's not much different to any other security system in that sense.

Very interesting, thanks for sharing. I had one idea of extreme transparency. What if the Signal server used a per-user-thread model where each thread was isolated in a sandbox, which only the user has access to. The user can log in with read only access via ssh and certify that nothing is logged, nothing is tampered with etc. A kind of extreme transparency.

You're heading in the right direction but traditional UNIX tools don't work here. That's why Signal needs to use SGX.

Consider: how do you know the SSH server isn't tampered with? It could be feeding you an entirely fake session.

You might say, the sandbox. How do you know the sandbox isn't tampered with?

How do you know the hard disk or the RAM isn't tampered with? The third party owns the machine itself.

SGX is based on the insight that it's very, very difficult to tamper with a physical circuit as small as a CPU. The CPU is the root of trust in any machine. If a CPU can produce a report that says the moral equivalent of "I'm running an OpenSSH server version X" and the CPU itself is preventing the server owner from tampering with the software, then you have the ability to reason about what the server is doing with your data.

This is possible with remote attestation. See the "Trust but verify" paragraph in https://signal.org/blog/private-contact-discovery/ and also https://signal.org/blog/secure-value-recovery/.

> Trust but verify

That phrase needs to be deprecated

It collapses to 'verify'. Just say that.

The point of Signal is that, thanks to e2ee, you do not need to trust the middleman with your actual communication. But this does not include your metadata, where you do need to trust their servers; and there is no way to prove that they run the code that they claim to run.

As for the actual client, see https://signal.org/blog/reproducible-android/

Right now Signal is not fully reproducible, so you can't verify that the binary they distribute isn't built from different code.

I know that the point of Signal is zero-trust, but a savvy middle man could still use large scale information about source and destination of encrypted packets for useful intelligence. Just like NSA would find it useful to inspect email headers despite the email body being encrypted.

They use Intel SGX (a secure enclave) to do exactly this. Although for message contents it doesn't matter anyway, because they are end-to-end encrypted and cannot be read except by the receiving party.

Decentralization would prove a lot more.

That's a tad unfortunate. Signal was just low profile enough that my wife and I could use it in China. This raises the profile of it just a bit higher than I'd like. Further, with the likes of https://news.ycombinator.com/item?id=22202110 having a higher profile makes the organisation more vulnerable to harassment from the government.

That's a pretty selfish view. On the other hand, such a serious backer means political clout and attention in case there are attempts to compromise it. E.g. Open Whisper Systems could quickly relocate to Estonia or Germany if US law becomes untenable.

It's not selfish to want to talk to your s.o. Sure, it's better if the CCP doesn't go around censoring reasonable communications, but given that they are, people are entitled to want whatever reasonable alternatives they can get.

Not so much selfish as realistic: now intelligence agencies like NSA can justify spending much more resources to exploit potential vulnerabilities in Signal, so—with Wikileaks in mind—it practically makes Signal less safe long-term.

I believe Signal has been on the NSA's radar already, since it has been the main topic in secure communications for a few years.

It is realistically selfish. Also, the Signal app/server is open source. Anyone could spin up a server and fly under the radar.

My understanding is that US Senators already use it. At least there was a headline on HN ages back. The main issue with apps like Signal, in my opinion, comes from apps that snoop on your screen. I wouldn't be surprised if there are rogue custom keyboards that do this.

I would be more worried if only communist/socialist nations (you all know the ones I am talking about; not sure of a better name, so calling them what they claim to be) were using it. If the NSA advised against Signal, that might be another concern. It might mean a foreign intelligence service has access somehow.

It would be more likely that the NSA would advise against Signal if it doesn't have access and wants to scare people into using something it does have access to.

> "communist / socialist nations (you all know the ones I am talking about, not sure of a better name so calling them what they claim to be)"

"Totalitarian" would be a better word, I think. This isn't specific to economic systems, it's about governments wanting to control the communication of their citizens (or subjects, I guess).

> "If the NSA advises against Signal that might be another concern. It might mean a foreign intelligence has access somehow."

Or it could mean the NSA does not have access. They do have a history of wanting access to encryption systems.

Well, I'd need to hear their reasons for it, but yeah. I should have phrased it better: in the event that the NSA advises people to upgrade Signal versions or something of the sort. Though this is much less likely than them advising on something much more mainstream, like Windows.

Totalitarian is probably closer to the word I was trying to think of.

Because LEOs are overstepping their mark by a billion each and every fucking day, but the populace is too fucking computer-illiterate to understand that ”data reading” (the Norwegian euphemism for hacking into citizens' computers) is extremely damaging to the fundamental concept of democracy.


You could've put the ACAB at the beginning, so we'd know not to mind the things you say.

I did it on purpose, since you obviously thought the rest was interesting :)

This is a mistake. They should at least compile their own version and not use something that comes from a US-based app store under US law. At any point the US can force a change.

This is as secure as purchasing a machine from Crypto AG. [1]

[1] https://en.wikipedia.org/wiki/Crypto_AG

The OS is US-based too, and they could easily keylog everything. Even if you swap out the OS for a self-compiled one, there are blobs running a full-blown OS in the hardware, with access to every single hardware resource; good luck getting rid of that one without building your own hardware and drivers from scratch.

> good luck getting rid of that one without building your own hardware and drivers from scratch.

For an individual that is obviously infeasible. For the continent of Europe as a whole, it obviously isn't. Why shouldn't they put some money into developing cellphone hardware with open source drivers?

The supply chain required for such an endeavor is so huge, and spread among so many countries and companies worldwide, that this would be a big task even for the EU. I mean, competing with the likes of Broadcom and ARM from scratch? Good luck with that. Even then, the NSA would just need to pay or blackmail a single entity in the supply chain to get their backdoor in. The EU itself would probably implement its own backdoor. Also, open source doesn't mean backdoor-free: https://www.reddit.com/r/linux/comments/54in5s/the_nsa_has_t...

You don't have to refuse to take an ARM license (although RISC-V is also a thing). You may not even have to design your own chips at all -- they could just pay an existing manufacturer for design specifications and then both audit the design and develop and publish open source firmware for it.

Supply chain security is an independent problem. First you have to know exactly what the design is supposed to be, only then can you verify that it actually is.

Signal provides reproducible builds these days: https://signal.org/blog/reproducible-android/

Not sure how feasible this is on iOS (with Bitcode).

This is for external communications, for cases where people would most likely otherwise be using WhatsApp. It's not for official, internal or classified stuff.

Giving this advice has almost no cost and, at least arguably, it is better for staff to use Signal than other messengers. So, IMHO, this is good advice.

Not if the staff are lured by all the E2EE promises and lock icons into overestimating the security of the app and sharing more information than they otherwise would.

Not using something that actually improves security because people might think that it actually improves security is a crappy argument. If your phone is insecure for two separate reasons you should fix both of them instead of using each as an excuse not to fix the other.

Yeah, but the fix should be using Matrix, not Signal. IMHO

Perhaps a better idea would be to fund an audit of the Signal app. (Or has that been done already?)

Doesn't help you figure out whether the Signal update of the day that comes from the store is any good.

That is true. But given how much interest there is to find vulnerabilities in the Signal app, I would be very surprised if anyone would succeed in putting up a compromised Signal app on the App store and also be able to fly under the radar.

The "anyone" in this case would be Signal. No need to fake it when Signal is under US law.

Ok lawyer, explain which US law allows compelled speech considering in djb vs US it was decided code was speech, and protected by the first amendment.

And if Signal did this you think no-one would notice?

With cooperation, you could literally target just certain devices with the compromised version.

IMHO, the cost of that attack is pretty darn high.

An audit would only prove that the current code is secure.

Does F-Droid support signed releases, and does Signal sign their APKs? In fact, aren't all APKs signed?

Signal is not on F-Droid, because m0xie does not want that[1].

Some forks of Signal are, though[2].

[1] https://community.signalusers.org/t/how-to-get-signal-apks-o... [2] https://forum.f-droid.org/t/signal-in-f-droid-in-2018/2847

Ah, yes, I forgot about that. Too bad they don't even publish the APK on signal.org.

Thanks, I missed that.

It's not as though staff will be installing it from the App store on their phones.

How else would they install it on their phones?

If it's Android the apk can be side loaded.
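Sideloading still gets you signature protection: Android refuses to install an unsigned APK, and refuses updates signed with a different key than the already-installed app. Before installing you can run `apksigner verify --print-certs Signal.apk` (from the Android SDK build-tools) and compare the signing certificate with the fingerprint Signal publishes. As a rough illustrative sketch in Python — not a substitute for apksigner, since it only detects JAR-style v1 signature entries, while modern APKs may carry v2/v3 signatures in a binary block outside the zip listing:

```python
import zipfile

def has_v1_signature(apk_path: str) -> bool:
    """Report whether an APK contains JAR-style (v1) signature entries.

    Caveat: APK signature schemes v2/v3 live in the APK Signing Block,
    outside the zip central directory, so False here does not prove the
    APK is unsigned -- use apksigner for real verification."""
    with zipfile.ZipFile(apk_path) as apk:
        return any(
            name.startswith("META-INF/") and name.endswith((".RSA", ".DSA", ".EC"))
            for name in apk.namelist()
        )
```

An APK is just a zip, so the v1 signature block (CERT.RSA etc.) is visible as ordinary archive entries under META-INF/.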

Their IT department will handle it

Crypto AG was an intel op the entire time. Where is your proof that Signal works for, or is owned by, the American government?

As an EU citizen I think that's a great decision.

I don't think so.

Signal might be open source but the servers are still run in the USA and thus are subject to the US "legal system" (which in turn is subject to the mood of its president).

Signal servers only know who talked to whom, but otherwise are physically not able to see even a smattering of the contents of the communication.

Correction: The signal servers don't even have that bit of metadata. See [1], they only store the last time that a user connected to the server.

[1] https://en.wikipedia.org/wiki/Signal_(software)#Servers

> The signal servers don't even have that bit of metadata. See [1], they only store the last time that a user connected to the server.

Note that this is what they claim, at least. It's not verifiable client-side, and to be honest it's hard to come up with a scalable protocol where it would be, but you should still not repeat their claim as a matter of fact when in reality we only have their word that the code actually matches what's deployed. And even if they don't store anything, AWS could still give interested entities access to the infrastructure to capture what Signal doesn't want to capture. Yes, features like sealed sender are awesome and an important step, but the service still gets IP addresses, which do provide hints about the sender. Again, Signal likely doesn't store IP addresses, but people with access to their infrastructure could.

Furthermore, Signal's encryption doesn't help against people storing all of Signal's traffic and waiting until attacks on crypto algorithms become practical (quantum computers, theoretical progress on attacks). Some secrets become irrelevant with time, others increase in value. The best defense is never having the message leave your country's network in the first place.

And there's the DoS problem. What happens if the American president decides that the EU should be cut off from all US network connections? The EU parliament members couldn't even organize a good response to this, because they use an American service...

Signal app is also canonically distributed by Google Play/Apple Store, which are US entities under US law. When push comes to shove, an app update may get distributed to select individuals that will happily gather and send all their conversation histories and more.

As an EU citizen, I'm half puzzled and half horrified at how happy the EU institutions are to rely on foreign products: especially coming from a country that has a history of being trigger-happy and cutting people off in the name of a "trade war".

I compiled Signal for iOS and monitored the sent data through a proxy. Both behave identically. There could be a hidden switch in the distributed binaries that triggers other behavior, but I really doubt it. For Android there are reproducible builds, so you can actually check the code is the same. For iOS, reproducible builds are harder but should still be possible.

Can I verify that the build installed on my Android phone[] is identical to the one that I compiled? For instance, if I mount the device in Linux I can only see /mnt/sdcard, not /, so I can't copy the binaries off.

[] i.e. the build installed on my phone, not the build available on Google's server to download.
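To get the installed build off the device: `adb shell pm path org.thoughtcrime.securesms` prints where the installed APK lives, and `adb pull <that path>` usually copies it off without root. A byte-for-byte diff against your own build will fail by design, though, because the official APK carries Signal's signature and yours carries your own; Signal's reproducible-build instructions ship an apkdiff script that instead compares archive entries while skipping the signature files. A minimal sketch of that idea (using the CRC-32 values the zip index already stores, whereas the real script checks the actual contents):

```python
import zipfile

def apk_entries(path: str) -> dict:
    """Map entry name -> CRC-32 for every file in an APK, skipping the
    META-INF/ signature files, which legitimately differ between the
    official build and a self-signed rebuild."""
    with zipfile.ZipFile(path) as apk:
        return {
            info.filename: info.CRC
            for info in apk.infolist()
            if not info.filename.startswith("META-INF/")
        }

def same_build(official_apk: str, local_apk: str) -> bool:
    """True if both APKs contain identical entries outside META-INF/."""
    return apk_entries(official_apk) == apk_entries(local_apk)
```

If `same_build` holds for the pulled APK and your reproducible build, the distributed code matches yours, modulo the signing metadata.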

What's the alternative? Private closed-source apps like Threema?

This also is not for official communication; it's just for any case where staff would currently use WhatsApp or similar spyware.

I do not think that anyone is suggesting proprietary alternatives. Instead, it seems the posters in this thread would be happier if the EU were more independent from the US, for example by hosting their own Signal servers and forking the client.

Matrix, which is already used by the French government.

Under your threat model no internet connected smartphone is safe. Google can just push any arbitrary software to run on your phone and this includes spyware created by governments.

Can one really trust that they don't store more, if they physically have the information at some point in time? Or their upstream connectivity provider could do that metadata scraping.

I will link to this each and every time this comes up:


Signal turned over everything they had on this user (which was two time stamps: user creation and last access), and fought the gag order to be able to publish the subpoena and the response. Signal would have to be pretty stupid to lie to a federal court.

Think what you want, but Signal doesn’t have any metadata to turn over.

If I worked for the intelligence agencies I would be capturing all the info going in and out of the signal servers at the infrastructure level.

Even if I couldn't break the encryption I'd have timing and connectivity data.

So, if I were a user, I would always operate on the assumption that info would leak.

In this threat model, the only defense you would have would be an overlay network resistant to correlation attacks where all nodes are involved in routing traffic (like I2P), or a mixnet like Katzenpost.

Getting people to use Tor for everything is hard enough, good luck getting people to use stuff even more obscure.

And how often were they silenced by US law and not even allowed to mention such a thing? We will never know.

Even so, by EU rules, I'd expect them to be required to store the data in the EU.

Do you have an alternative where this isn't true? Honestly, the complaints I see about Signal here (the servers, being distributed through the iOS/Google stores) seem extremely low priority, to the point where no true Scotsman exists. Is Signal just not supposed to have servers?

Is there no open source or European alternative?

Just keep relying on some Californian dude who insists I give him my and my friends' phone numbers?

And harasses me so I give him more info to “personalize my profile”!?

This is the “perfect solution fallacy”; just because Signal doesn’t check all the boxes doesn’t mean it isn’t an improvement over the existing situation. Secure communications is the immediate goal; national sovereignty is secondary.

Looking at the other comments, the European alternatives don’t look very secure, so it looks like they prioritized security here.

> Looking at the other comments, the European alternatives don’t look very secure, so it looks like they prioritized security here.

In what way are Wire and Matrix not very secure, though?

To be secure step one should always be "Pick a service not controlled by your adversaries' law or intelligence community". Skipping step one has nothing to do with a perfect solution fallacy but is more like "Pissing in your pants for warmth".

A half-broken service would be better than one under US control, as the US is the one doing the most snooping on the EU. Getting everyone to switch if Signal ever stops being secure would be way harder than picking a better service now and helping secure it.

Matrix would be a better choice, for one. Saying EU services aren't secure is hyperbole.

There's Matrix (https://matrix.org/).

How does he harass you?!

- I added a first name to my conversation contacts, but it keeps showing the phone number. In the app and in notifications.

- There is a fixed pop-under at the bottom of the screen asking me to confirm my profile name, with “Get Started” or “remind me later” (which apparently means every time I come back to the app), and no “never” option. I had already put in a first name, but that’s not good enough. Had to do it again to confirm I don’t want to share a last name with him / them.

- Why does he care about the distinction between a “first name” (mandatory) and a “Last Name” (optional)?

-> Only the user should care about the name.

=> Why should the app / he / them get access to my contacts list to update a contact name? I’m perfectly willing to have duplicate contacts siloed in a “secure messaging app”.

Also, could someone explain why the app needs phone numbers? They only send an initial token over SMS, which can be spoofed.

The phone number deregister flow is such a dark pattern it's incredible. Not something a chat app should ever do to be reputable.

How so?

If you uninstall the app, there's a forced seven-day delay before your phone number is released; it isn't disclosed anywhere, and the form just silently errors out when seven days haven't passed.

wire: https://github.com/wireapp

session (very bleeding-edge p2p signal fork) https://github.com/Loki-project / https://getsession.org/

There is always boring old XMPP, as commonly used for corporate stuff. But then they would have to explain to people what a protocol is. That ends up being a problem with IM, where people think of the client as the service.

There are, but it should be simple enough for non-techies (politicians) to understand. They could use Threema (Swiss) or Wire, but either way it's better than plain SMS.

Signal is open source

Well they could use xmpp with omemo.

There is Matrix, which is as far as I know backed by a British company.

There is Threema. Does it have drawbacks?

wire.com is a good option. Open source, chats, calls, video calls, web app, phone app, etc.

wire.com moved HQ to US and “Individual consumers are no longer part of Wire’s strategy.”

I think it is a great decision, and it will hopefully persuade other people in public bodies to follow suit, such as my friends who work in senior positions in a hospital and use WhatsApp, a practice that started when ransomware took down the NHS e-mail.

I hope nobody tells them that WhatsApp uses the same protocol. The argument for Signal would not be weaker but more complicated.

It's in the article

Main argument is that it's also open source.

They should use Threema [1], which is based in Switzerland. Even though Switzerland is not in the EU, it's not as bad as the US (from an EU standpoint), and since Switzerland is heavily dependent on the EU, the likelihood of them spying on the EU is pretty small. Apart from that, Threema publishes a transparency report [2] listing all requests from governmental authorities.

[1]: https://threema.ch/en

[2]: https://threema.ch/en/transparencyreport

Edit: Formatting

Threema is not safe either, and its audit is years old.

The BÜPF law in Switzerland requires any company that receives more than 100 requests per year to retain data. Threema reached this threshold in 2019.

   "Der Dienst ÜPF erklärt eine Anbieterin abgeleiteter Kommunikationsdienste als eine mit weitergehenden Auskunftspflichten (Art. 22 Abs. 4 BÜPF), wenn sie eine der nachstehenden Grössen erreicht hat:

   a. 100 Auskunftsgesuche in den letzten 12 Monaten (Stichtag: 30. Juni);

   b. Jahresumsatz in der Schweiz von 100 Millionen Franken in zwei aufeinander folgenden Geschäftsjahren, wobei ein grosser Teil ihrer Geschäftstätigkeit im Anbieten abgeleiteter Kommunikationsdienste besteht, und 5000 Teilnehmende, die die Dienste der Anbieterin in Anspruch nehmen." [1]
[1] https://www.admin.ch/opc/de/classified-compilation/20172173/...

Have you heard of Crypto AG ? It's a Swiss Company setup by the CIA that provided intentionally flawed crypto to a long list of European governments.

1. The last time people bought non-transparent crypto from Switzerland it turned out decades later the company was secretly owned by the CIA. https://www.theguardian.com/us-news/2020/feb/11/crypto-ag-ci...

2. Threema is proprietary code wrapped around an open source library. It's trivial to add a backdoor after any audit, and it's trivial to lie in your transparency report. Open source stuff is the _obvious_ choice. There's no reason not to open the entire source for Threema (no one's making a profit copying other messengers) unless they're hiding something.

I’d say it’s another missed opportunity for decentralised communication apps and services. None of them are in a state to be adopted by mass and for critical use-cases.

The staff will be using Whatsapp. Signal has its issues but it's more secure than Whatsapp.

The benefit is, once you've got to have Signal installed, you'll probably want to talk to your friends over it as well to save time switching between the apps.

Our top political campaigns use Wickr (it's pushed down from on high) but I have a hard time believing it adds any more security than simply mandating 2fa security keys w/ google advanced protection.

Given the problems I've experienced with the Wickr app and the lack of basic phone security I've seen, this feels LESS secure to me.

Google put out research saying they had 0 successful phishing cases after mandating fobs. I guess there's a concern about forwards, and that Wickr shows when someone screenshots, but that doesn't stop anything...

I have seen lots of campaign staff who don't have passwords on their phone, or weak 4-digit ones. I use a password manager to store a long Wickr pass, but I think most just use a simple pass or re-use a password...

Wickr on my phone has render problems all the time and it has shown messages without me logging in at least twice.

It's also super inconvenient. If they really care about E2E - which doesn't even feel like the actual problem they are trying to fix (phishing / the ability to read past messages when an account is compromised) - I'd rather have some enthusiastic outsiders develop an open source basic PGP Chrome extension to sit on top of Gmail or something (maybe that already exists).
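(For illustration only: the core idea such an extension would implement is that the mail provider only ever carries opaque ciphertext, while encryption and decryption happen client-side. This toy sketch uses stdlib encrypt-then-MAC with a SHA-256 counter-mode keystream - it is NOT real PGP and NOT production crypto, just a picture of the layering.)

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key+nonce (toy counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: the mail body becomes nonce || ciphertext || tag."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Verify the MAC before decrypting; reject anything tampered in transit."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message tampered with in transit")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The point is the shape, not the cipher: Gmail (or any hostile relay) sees only the sealed blob, and key exchange is the hard part a real extension would have to solve, which is exactly what PGP's web of trust or Signal's key agreement is for.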

Wickr is proprietary. That alone is a reason to abandon it ASAP.

How is something that's tied to your phone number "secure"?

The communications are encrypted, but my identity is public.

Security is not the same thing as anonymity. Maybe Signal's design goals just don't align with your requirements, and that's okay. But that doesn't mean that it can't be the right tool for people with a different set of requirements.

Well, besides anonymity, my requirements for a reliable communication method include not being tied to my phone.

Yes, and the huge majority of people don't have that requirement. Signal's devs have been very clear for years and years that they are optimizing for getting as many people as possible to use functioning e2e encrypted messaging rather than to focus on features that a subset of techies in the west care about.

I don't know about you, but I as a 'techie in the east-west' have a bunch of people who ask me or follow my choices wrt tech stuff.

I might recommend Signal to them if they specifically ask for something encrypted, but if they just follow what I use they'll see no Signal.

Sure. But the fact that the huge majority of messaging users use their phones is not exactly controversial.

Heck, whatsapp has gotten several orders of magnitude more people to use encrypted messaging than any other software, and techies hate it.

You are absolutely right. People on here love to crap on good solutions in search for the perfect solution, whether or not that actually solves users' problems. Case in point: your comment being greyed out. WhatsApp adopting Signal protocol probably accomplished several orders of magnitude more than anything else the Signal team has done. Of course Signal remains better for privacy, and yes, there are other messengers that don't require you to use a phone number. But for most people, using phone numbers solves so many more problems than it causes. Use the most secure messenger that works for you.

Sim-jacking is a thing that could damage security of future messages in the case of signal.

If you are concerned about this you can PIN lock Signal. It will prompt you periodically to confirm you still remember your PIN. When you get a new phone (or somebody tries to SIM-jack you) the new phone needs the PIN which you clearly remember because you kept entering it before.

If you stop answering the PIN confirmations eventually it expires and somebody with that number can sign up (and if they want, set a new PIN).

Regardless of whether you use PIN locks your contacts will be shown that something about the other party in the conversation changed, if they use in-person confirmations of identity they'll be invited to perform that over again.
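(To make the expiry behaviour described above concrete, here's a hypothetical toy model of a registration-lock policy. The 7-day inactivity window is an illustrative assumption, not necessarily Signal's actual value.)

```python
from datetime import datetime, timedelta

# Assumed expiry window for the toy model; the real client's value may differ.
REGISTRATION_LOCK_EXPIRY = timedelta(days=7)

def can_register_without_pin(last_pin_confirmation: datetime,
                             now: datetime) -> bool:
    """A new device (legitimate or a SIM-jacker) may claim the number
    without the PIN only once the lock has lapsed through inactivity.
    Each periodic PIN confirmation on the old device resets the clock."""
    return now - last_pin_confirmation > REGISTRATION_LOCK_EXPIRY
```

So as long as you keep answering the periodic PIN prompts, a SIM-jacker who controls your number still can't re-register without the PIN.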

This is about official communications in an organization with public lists of work phone numbers, not our underground cypherpunk community where we have no names maan we are nameless.

There's no requirement to be anonymous in this context. Metadata can be reduced via tech but it's not the top-priority -- quick fix to confidentiality problems OTOH is.

Right, and how do I communicate when my phone battery is dead? Or the phone breaks?

Looks like it wasn't the transport method that was vulnerable. Doesn't matter if you use Signal if the "vault" where you keep the messages can be broken into.

The EU budget is 148 Billion Euros. Surely, if they have a requirement for a secure communication app, they have the wherewithal to build one for their needs, that EU citizens and others could then use if they so chose? Essential knowledge is definitely public.

This is not about 'Signal' it's about why governments can't/won't deliver on so many issues they themselves deem to be very materially important to them, particularly in the area of IT.

As a citizen of a EU member state I would rather see them use Signal than try to come up with their own solution.

It is true that they could throw a lot of money at the problem. But that in itself does not guarantee that the outcome would be a world-class communication system. So given that Signal already exists and is the gold standard for secure communications, IMHO it’s better to use that.

What the EU can do however is to fund cryptography research. (And it would not surprise me if they already do.)

I'd like to see the EU contribute to the Signal (or Keybase!) ecosystem. A rising tide lifts all boats.

From various comments in this discussion, I've discovered that Signal is open source and has reproducible builds, and the server is open source too. It would be possible for the EU to create its own "EU-Certified" Signal environment for use by people who wish to see their data remain in the EU.
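(The core of reproducible-build verification is just "build it yourself and compare artifacts". A minimal sketch: in practice Signal's check normalizes signing metadata before comparing rather than hashing raw files, so treat this as the simplified idea, not the actual tooling.)

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in chunks, so large APKs
    never have to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(official_artifact: str, locally_built_artifact: str) -> bool:
    """If the independent build produces byte-identical output,
    the published binary corresponds to the published source."""
    return sha256_of(official_artifact) == sha256_of(locally_built_artifact)
```

An "EU-Certified" environment could run exactly this kind of check in CI against each upstream release before approving it for staff devices.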

148 Billion Euros might sound large, but in terms of governmental spending it's pretty small (especially on a continental scale), and there are, of course, other things competing for this money.

Chances are, if a government made a "secure communication app", it would suck (unlike private companies, governments have no interest in making applications usable because they usually have no competition - if you have to use them, you have to use them). The projects are usually done by external contractors, who will implement an unusable product that technically meets the requirements (implementing more is a waste of money). Add cryptography into the equation (cryptography is notoriously difficult to get right), and you have a recipe for disaster.

It's not for official or internal communications, just to replace any use case currently covered by the likes of WhatsApp.

EU projects are often a bit of a mess; design by committee taken to the extreme, with each committee member being a nation.

That is not how the EU works, at least not the parts which I have some insight into. There is indeed a lot of design by committee but the countries have little to no say in the day to day work of the EU agencies.

The EU has 27 (?) member states (plus others). All those governments have intelligence agencies. All of those want to know what MEPs are thinking etc. All of them have veto powers...

Nobody cares what MEPs are thinking.

But what should I use if I am a politician and fundamentally believe that the government should be able to read my communications?

Does such a person exist? And if so, then just do all your communication using existing means like email and blogs.

Personally I believe that if you become an elected representative of the public you should become a public person where all your in-person meetings and all your phone calls and messages are public for the period you are elected. If you meet someone without disclosing it, it should be a criminal offence.

I know this will never happen in real life but I think this is the only way to solve the problem of corrupt officials and revolving doors / lobbyist problems.

Edit: the book "Haze" by L.E. Modesitt Jr. has a good take on this.

Also: The Circle by Dave Eggers.

Thank you, I haven't read that one. It's going to be read within the next few weeks :)

As long as the people you are emailing with agree with that position as well.


Nothing stops you from explicitly publishing your communications. And whether you do it explicitly, or implicitly by using an insecure communication method, it's always a choice which you can revoke at any time.

>it's always a choice which you can revoke at any time.

It looks like the replies to this comment misunderstand the point.

It's not about one person choosing to make their communication public, but that there should be a record of government communication, period. Have we not seen enough examples of our elected officials' corruption in media that isn't encrypted? Now we want them to be able to communicate in all manner with no accountability?

> Now we want them to be able to communicate in all manner with no accountability?

I think you misunderstood my point, which was that they can always do that. You can't stop them from using Signal, if they want to do something nefarious. Hell, they can just meet in a dark alley if they want. Sure, you can make it against the rules, but we are already presupposing rule-breaking (or else we would not need oversight).

So, if there's going to be some kind of disclosure system, let's make it structured and explicit. Just requesting that they use unencrypted communications, and then having some sort of unaccountable gray-hat institution (i.e. intelligence services) snoop on those communications, is a bad system.

SMS and phone calls. The infrastructure is all there already.

Interestingly, the EU's position on this looks really confused. SMS messages should (in theory) only transit their own local telcos. The USA doesn't get a look-in unless it hacks the telcos themselves.

What the EU is doing here is routing all Commission traffic through US based server farms and roots of trust. The phones are controlled from the USA, the comms services are too. So their own local firms can no longer see the traffic but US firms can (Signal claim this isn't the case but people are wising up to the fact that this can't be true until more infrastructure is in place).

What actual threat are they trying to block here?

> What actual threat are they trying to block here?

A large powerful country ruled by a megalomaniac who has proved he isn't afraid to ruin his country's reputation by abusing his power.

Assuming you mean the USA, best way to solve that is for the data to never cross into US controlled facilities or equipment at all. Using apps made in California on operating systems also made in California to send messages to servers controlled from California and then trusting that the encryption works the way they say it works makes much less sense than just sending messages through EU carriers.

Consider, the Signal threat model assumes that you can't trust telcos. That's usually based on the assumption that you're some ordinary grassroots citizen who might be spied on by the government. But the EU Commission is the government. It seems a bit odd for them to implicitly assert the national European telecoms companies are untrustworthy.

Then make them public or submit them to whoever "the government" is, in that context.

Then post your communications to your official web page.

Depends on your jurisdiction. In the US, probably Facebook Messenger.

Just use email. Or whatsapp. Or iMessage with backups turned on.

Well, given how one political remit uses Twitter, I'd counter that not everything should be written down in the first place, let alone read.

Dumb question: is your comment meant to be sarcastic? I initially read it as such but now I'm not sure, given all the other replies.

It was sarcastic, but I just wondered what such politicians think when they get this advice... Like: Hell no! What if I'm a criminal some day!


Yes! Can somebody please think of the children?!
