
It seems that most people, myself included, are completely in the dark when it comes to security, but there are some principles that should be unwavering yet get ignored anew with every iteration of "secure" software:

* If there is a weak layer in the stack, from the physical layer to the UI, then the system is not secure. Even if your messaging app is secure, your messages are not secure if your OS is not secure.

* If the source code is not available for review, the software is not secure

* If you or someone you trust has not done a full and thorough review of all components of the stack you are using, the software is not secure

* Even if the source code is available, the runtime activity must be audited, as the software could download binaries, take unsavory actions, or make unexpected connections.

* On the same note, if you do not have a mechanism for verifying the authenticity of the entire stack, the software is not secure (a minimal sketch of such a check follows this list).

* If any part of the stack has ever been compromised, including leaving your device unlocked for five minutes in a public place, the software is not secure.
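
A minimal sketch of the verification point above, assuming the publisher posts a SHA-256 digest out of band. The filename and digest are illustrative placeholders, and a real setup would also verify a signature over the digest (e.g. with GPG), since a bare checksum only proves integrity, not authenticity:

    import hashlib

    def sha256_of(path, bufsize=1 << 16):
        # Stream the file so large binaries don't have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder: the digest the vendor published over a separate,
    # trusted channel (their HTTPS site, a signed release note, ...).
    EXPECTED = "0" * 64

    if sha256_of("messenger.apk") != EXPECTED:
        raise SystemExit("checksum mismatch -- do not install")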

I could go on, and I'm FAR from a security expert. People compromise way too much on security, and make all kinds of wrong assumptions when some new organization comes out and claims that their software is the "secure" option. We see this with apps like Telegram and Signal, where everyone thinks they are secure, but if you really dig down, most people believe it for the wrong reasons:

* The dev team seems like honest and capable people

* Someone I trust or some famous person said this software is secure

* They have a home page full of buzzwords and crypto jargon

* They threw some code up on GitHub

* I heard they are secure in half a dozen tweets and media channels




I think you are being too strict in your definition of 'secure'. 99.99% of devices run Android, iOS or Windows, which are closed source and therefore not 'secure'.

To me, security is not a binary property but rather a sliding scale. WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be telling the truth. No hacker has demonstrated that WhatsApp are lying and the Wikileaks dump suggests the CIA has been unable to intercept messages in transit. Given this information I would rate WhatsApp at least 'reasonably secure'.


> WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be telling the truth.

I don't give much credence to the various "WhatsApp backdoor" allegations, but I'm curious as to why they'd have a financial incentive to provide privacy.

Most of their userbase likely still doesn't care about security and they do belong to Facebook - so if anything, they'd have a financial incentive not to use effective crypto.


I guess I would point to the Yahoo security breach as an example of how poor security can have an adverse financial effect on a company's valuation and its CEO's remuneration. The Sony hack also comes to mind as something which damaged its reputation and had an indirect effect on their financial position.

If hackers could get access to WhatsApp and dump all messages to Wikileaks it would make the company look very bad and a significant number of users would switch to something else. If security is not that important to users, why pretend to add end-to-end encryption at all?


And even if Facebook / WhatsApp's customers don't care about security, their hired geeks do; and being seen as caring about having top-notch security might help with attracting and retaining talent.


> Most of their userbase likely still doesn't care about security

Teen Vogue just suggested people should use WhatsApp instead of Snapchat because it does end-to-end crypto. I don't think it's true any more that the general public doesn't care about security, if it was ever true.

http://www.teenvogue.com/story/how-to-keep-messages-secure


Based on the behavior after these leaks, there's moderate evidence to suggest that users care somewhat about privacy if it can be accomplished without any sacrifices in existing ease-of-use.

Facebook cares mostly about penetration for WhatsApp, to ensure that no other messaging app takes over.


HN, CNET, Motherboard, Slashdot, Ars, TC, NYT tech etc.

Free product advertising worth $N targeted to the more influential product adopters, who will then amplify said advertisements.

That's my guess, anyway.


That's the point. Android and iOS are not secure. Now the question is always: secure against what? Against an average attacker, they are secure enough. Against the CIA, they are not. The CIA can just threaten several people inside Google or Apple, get a backdoor, and you can't check whether they did.


The source code for Android is open under the Apache 2.0 license (https://source.android.com/). Of course, iOS and Windows are closed source.


You're ignoring that most drivers for Android phones are proprietary, the baseband is entirely proprietary (required by law), the Google services are proprietary (which many apps use), and most apps are proprietary (including Google's replacements for the AOSP apps).

If you want a completely free software smartphone experience, it is simply not possible at the moment. Even Replicant[1] still hasn't cracked the baseband puzzle (and is still struggling with the firmware for a couple of phones).

So no, Android is definitely proprietary -- even if some parts are not.

[1]: http://www.replicant.us


Basebands fall under exactly which entity's jurisdiction, such that it can regulate a baseband to be 'entirely proprietary'? I mean, BB + superhet. mixer => IF => carrier wave envelope containing your data. How do you even regulate a concept of physics? If you're paying the proper fees as a subscriber to $provider_foo you could even design your own receiver off the public standards documents... (it used to be a popular project for 4th-year undergrads to do on FPGAs, for the CEs who wanted to get closer to the silicon, but MOSIS project space was reserved for the EEs only).

If you want a completely 'free' (as in GPL) cell phone experience, you can set up an OpenBTS transmitter and transmit in the 900 MHz range, which is commons property. To stay legal in the US, your antenna has to put out less than a watt, but the setup allows you to use off-the-shelf phones and trunk into normal phone lines via standard POTS software. Your device would have to be something à la http://alumni.media.mit.edu/~mellis/cellphone/ (just a janky proof-of-concept -- you can patch together components from DigiKey pretty easily these days; if you want free silicon, I think the closest you're going to get is https://en.wikipedia.org/wiki/OsmocomBB or maybe some soft cores, but if you're actually going to take that soft core to tape-out, you're probably going to be running six figures just for masks...)


On the hardware side, there is a project "Free Calypso" to produce a completely libre (software, firmware, baseband, & hardware) "dumbphone" using the Calypso chipset.

Initially looking to reuse old phones with the Calypso chipsets, the project is now working on producing their own. Design files are completed; funding for the dev boards is about 66% complete.

https://www.freecalypso.org/fcdev3b.html

Mailing list is fairly active too.


What is the regulation that mandates a proprietary baseband?


The FCC has requirements for manufacturers to make sure that their radios output to-spec EMR. In addition to this, they've been working toward trying to stop people from being able to arbitrarily modify their radios.[1]

While (AFAIK) there isn't a regulation stopping someone from selling radios that have completely free software basebands, you can bet that the manufacturer will be prosecuted if users suddenly start outputting radio waves that don't follow regulations (suing users is harder than suing a manufacturer). As a result, there's a disincentive for manufacturers to ever sell free software radios (because by definition they would have to allow modification).

[1] https://www.infoq.com/news/2015/07/FCC-Blocks-Open-Source


Google has moved a lot of the platform code into its closed-source components, so for the purposes of security your typical Android phone is just as closed source as Windows. These days, unless you're running a classic Linux distro or BSD, the OS code isn't auditable. And even if you were running Ubuntu on the phone, the baseband is completely closed and almost always has direct memory access with no MMU, so you should never trust a phone with anything important.


That is only the base code. The manufacturer modifies it when building a ROM and can add anything. They should provide the modified sources, but many Chinese vendors do not.

Even if the manufacturer provides the code, it can preinstall additional closed-source programs: for example, the Facebook app or some "telemetry" app. My Chinese no-name phone contained an app that was trying to send my phone number and other identifiers to China as part of a "sales report" (the exact URL was http://bigdata.adfuture.cn/reboot/salesCountInterface.do). And one can only guess how much data Facebook collects.

What the end user gets is a phone with a binary blob inside.
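
(For anyone who wants to check their own device, this is roughly how I found it: route the phone's traffic through a mitmproxy instance, trust its CA, and grep every request for your identifiers. A rough sketch only; the keywords are placeholders you'd replace with your real phone number, IMEI, etc.:)

    # spot_exfil.py -- run with: mitmproxy -s spot_exfil.py
    from mitmproxy import http

    # Placeholders -- use your actual identifiers, lowercase, since we
    # lowercase the request before matching.
    SUSPECT = [b"<phone-number>", b"<imei>", b"salescount"]

    def request(flow: http.HTTPFlow) -> None:
        blob = flow.request.pretty_url.encode() + (flow.request.content or b"")
        if any(s in blob.lower() for s in SUSPECT):
            print("possible exfiltration:", flow.request.pretty_url)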

I think there should be a strict requirement banning the collection of any data without consent from the user. No "anonymous" "analytics" or telemetry, no crash reporting, no advertising IDs, no checkboxes checked by default. Only a legal solution can address the problem of mass surveillance by software companies. Every byte your device sends to the network can end up in the hands of hackers from developing countries or the NSA.


The modifications installed by your phone company, etc. are not open source. The baseband chip's firmware is not open sourced. I've even heard of DMA being allowed over baseband as part of the Lawful Intercept Protocol.


Well my keyboard app is closed source, and I even imported the binary from the US to my country. Naturally I consider my phone compromised.


Large parts of iOS are available at opensource.apple.com, including the kernel, Objective-C runtime, and CoreFoundation. LLVM, Clang, and Swift are also open source.


Right. Security threats are never eliminated, only mitigated, relative to the cost of mitigation.


Based on my (admittedly limited) understanding of the human condition, it seems like it would be more accurate to say "WhatsApp say they use end-to-end encryption and they have a strong financial incentive to be _lying_."


The Signal protocol (https://en.wikipedia.org/wiki/Signal_Protocol) has been vetted, and the code is available online to be audited:

- Signal code: https://github.com/whispersystems/

Telegram has had known flaws, which have been discussed in part here:

- Telegram protocol defeated. Authors are going to modify crypto-algorithm https://news.ycombinator.com/item?id=6948742

- A Crypto Challenge For The Telegram Developers https://news.ycombinator.com/item?id=6936539

- Telegram (initial discussion) https://news.ycombinator.com/item?id=6913456

https://hn.algolia.com/?query=telegram&sort=byPopularity&pre...
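
For intuition about what the vetted core actually is: a toy sketch of the ephemeral-DH-plus-AEAD pattern the protocol builds on, using the Python cryptography package. This is NOT the Signal Protocol itself -- no ratchet, no prekeys, no identity verification -- just the primitive it starts from:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates a keypair and swaps only the public halves.
    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

    # Both ends derive the same shared secret; the server never sees it.
    shared = alice.exchange(bob.public_key())
    assert shared == bob.exchange(alice.public_key())

    # Stretch the raw secret into a symmetric key...
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"toy-e2e").derive(shared)

    # ...and seal messages with an AEAD cipher, end to end;
    # the relaying server only ever handles ciphertext.
    box, nonce = ChaCha20Poly1305(key), os.urandom(12)
    ct = box.encrypt(nonce, b"hello bob", None)
    assert box.decrypt(nonce, ct, None) == b"hello bob"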


How much of that Telegram info applies to anything in the last 2 years or so?


They still haven't open-sourced the server, afaik. The problem with Telegram is that they market themselves on security, but it's not secure by default, nor in group chats. Moreover, it stores chat histories, and if you add someone to a group chat they get access to the historical data. A lot of folks use it without thinking about this, or understanding the implications.


The protocol is secure, but we have no idea if the implementations are secure (except Signal itself), because we can't audit them.

In fact, Facebook Messenger's implementation of Signal has very questionable security right out of the box, because if one party "reports" an encrypted conversation, the whole thing is decrypted and sent to Facebook support staff.


So someone reporting a message to Facebook is equivalent to one party (either Alice or Bob) forwarding the content of the other person's encrypted conversation to a third party.

The Signal Protocol provides end-to-end encryption so you don't have to trust the intermediate parties/servers relaying the message (e.g. you don't have to trust Facebook's servers). To protect against the other person revealing your conversation to someone else, it also provides message repudiation [1], which effectively gives the sender plausible deniability: the receiving party cannot prove to a third party that a message came from you.

[1] https://en.wikipedia.org/wiki/Signal_Protocol#Properties
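
The repudiation property has a simple intuition: messages are authenticated with MAC-style constructions rather than signatures, and anyone who can verify a MAC necessarily holds the key to forge one. A toy illustration in Python (not Signal's actual construction):

    import hashlib, hmac

    # Alice and Bob share this key after their DH handshake (placeholder).
    shared_key = b"\x00" * 32

    def tag(msg: bytes) -> bytes:
        return hmac.new(shared_key, msg, hashlib.sha256).digest()

    # Bob verifies Alice's message came over the authenticated channel...
    alice_tag = tag(b"meet at noon")
    assert hmac.compare_digest(alice_tag, tag(b"meet at noon"))

    # ...but since Bob holds the same key, he could have minted any
    # (message, tag) pair himself. A leaked transcript therefore proves
    # nothing to a third party about who wrote what.
    forged = (b"something incriminating", tag(b"something incriminating"))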


Yes, my concern is that this functionality is baked into the client and is at high risk of being remotely triggerable.


It's not just about the protocol though, it's the whole stack, and the OSes that it runs on are frequently not secure. Also, the Signal app on Google Play requires your phone number and a Twilio API call to function. No thanks.


That's something we can't do anything about though.


We can't do anything about the phone number requirement?


The phone number is not required by the Signal protocol -- it's an implementation detail; another token could be used. At this time, phone number verification is used in the initial authentication flow to protect against someone else spoofing your phone number and pretending to be you.

See https://github.com/WhisperSystems/Signal-Android/issues/1181


Actually, a magic link sent to your email would be MUCH better. It's also a central authority, but Gmail >>> any telecom.
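
Roughly: the server mints a single-use random token, mails you a link carrying it, and clicking it proves control of the inbox. A hand-wavy sketch (all names made up):

    import secrets, time

    PENDING = {}  # token -> (email, expiry); use a real store in production

    def start_login(email: str) -> str:
        token = secrets.token_urlsafe(32)            # unguessable
        PENDING[token] = (email, time.time() + 900)  # 15-minute window
        return f"https://example.org/login?token={token}"  # emailed to user

    def finish_login(token: str):
        email, expiry = PENDING.pop(token, (None, 0))  # pop => single use
        return email if email and time.time() < expiry else None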


It also doesn't have to be a central authority, since "anyone" (meaning: anyone who can afford to operate a mailserver, which is actually a surprisingly high number) can be such an authority for one's own mail.


Practically nobody is running their own mail server these days. Email is extremely centralized.


one single actor < a few large actors you can choose from < lots of actors to choose from < medium-size organizations can and do often run their own < individuals can run their own (Email Is Here) < individuals commonly run their own

Let's not make the perfect the enemy of the good.


I still run my own for personal email (postfix + dovecot). Runs off a Linode VM atm and has no problem getting through to Gmail, Hotmail etc. users.


"Most people don't know rocket science" "Actually I know some rocket science!"


Just because practically nobody does doesn't mean practically nobody can, which was my point.


That's true, but not a given.


I used to run my own but I can't find anyone to relay the mail anymore.


> It's also a central authority, but Gmail >>> any telecom

There are few big corps I trust as little as Google.


Source code isn't required to study what software may do.

If you were really worried about what a particular binary would do, trusting that the binary matched the source and studying runtime behavior would both be a waste of time compared to fully analyzing the binary in question.

If you treat the software as a black box and only study run time behavior, you have no idea if you have tripped a countermeasure that silences the malicious behavior; if you study the control flow directly, you can look for such countermeasures.
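
Even a pass as crude as pulling printable strings out of the binary beats pure black-box observation, since a hardcoded endpoint can't hide behind a runtime countermeasure. A toy sketch (a real analysis would disassemble and follow control flow):

    import re, sys

    # Crude `strings`-style pass: printable ASCII runs of 6+ characters.
    data = open(sys.argv[1], "rb").read()
    found = re.findall(rb"[\x20-\x7e]{6,}", data)

    # Flag anything that smells like a phone-home endpoint or credential.
    for s in found:
        if re.search(rb"https?://|api[_-]?key|/upload", s, re.IGNORECASE):
            print(s.decode("ascii", "replace"))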


>If you treat the software as a black box and only study run time behavior, you have no idea if you have tripped a countermeasure that silences the malicious behavior; if you study the control flow directly, you can look for such countermeasures.

It would be great to find such a countermeasure, and be able to trigger it reliably, or assert the behavior on a permanent basis. Considering that particular weakness of such countermeasures though, wouldn't the safest [for the attacker] default countermeasure likely be to simply crash the device?


Could be.

A user who knows about malicious code in a binary (which you would have to, in order to trigger it to go silent) just shouldn't use the binary at all, though.

The broader point is more important: compiled software isn't a black box, treating it like a black box is not the only or best way to analyze it.


Security is not a binary. "Secure" is not something that can be evaluated without context, i.e., a threat model. Security is something that comes with tradeoffs. If you require 100% certainty that no adversary, no matter how well resourced, can obtain electronic communications from you when conducting active surveillance, your only defense is to stop using computers. Most of us don't live with that sort of a constraint, and thus can evaluate things that increase our relative security given a threat model of mostly passive surveillance by state actors and active malicious attacks from private parties that mostly just want our credit card numbers.


This post is grey, and I'm not quite sure why. It's a bit on the "pessimistic" side, but... that philosophy is actually spot on IMO when it comes to security. So why downvote this? I'm honestly a bit new to this community, but to me this skeptical perspective as it pertains to software security is... well, actually it isn't even enough. Is this a weakness w/HN where even justified pessimism is eschewed?


Oh, HN is plenty pessimistic...

But this sort of pessimism isn't really useful. The attitude that "anything is insecure if there is any closed source software anywhere in the stack" means that it's impossible to advance security, because it's almost impossible to avoid binaries (e.g. firmware).

Apple, for example, has done a few things that are laudable in this field – e.g. risking a public court fight with the FBI to keep the iPhone secure. If we say that such actions are meaningless because they ship binaries, they have no incentive to do such things. Just rolling over and giving the US gov big-pipe access to everything like Yahoo did becomes the better business proposition.

Similarly, what do you answer when a friend who works at the EPA asks you how to securely contact a journalist? If it starts with ordering a custom open-firmware mainboard from somewhere in China, your advice will be ignored.


Practical security is all about risk management. And the first step is understanding what your risks are - not assuming or pretending they don't exist. Depending on the nature of the secrets your friend wants to share and who they are trying to hide from, advising them to avoid phones altogether might not be a bad idea. And falsely assuring them something is secure when that can't be confirmed could cause somebody a world of hurt.



