I see a lot of complaints about closed code. That is the first thing people bring up with Apple and security. But how is open source changing things here? No company open sources its server-side components. Even if Google released its server code, we would have no confirmation that it is deploying the same code on its servers. They are not vouching for that. We have a company here that really seems to want to do well on security and privacy. Immediately reaching for the closed-source argument is just lazy and not helping.

Of course, the good part about a crowd is that all views come out, so the closed-source argument has its place, but we should at least give them their due and some kudos. We know people will try to evaluate the implementation, and we'll see what happens. In this case it's just a PR article. Let's wait for them to release details and see if it stands up. Maybe the protocol alone is enough to give us confidence that their claim is true. We don't know yet.


Signal has open source clients and an open source server. Matrix has crypto support now (it needs more auditing, but you can turn it on, and nobody has cracked it yet to my knowledge), and that's fully open source.

No, open source does not guarantee they are running the secure algorithms advertised on their servers. But what open source does do is let you run your own server, which you can put much more trust in. People spin up their own Signal and Matrix servers all the time now for just that reason.
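For example (the homeserver domain here is hypothetical), once you self-host a Matrix homeserver you can at least confirm that the instance you deployed is up and answering the client API:

    # Sketch: query a self-hosted Matrix homeserver's versions endpoint.
    # The domain is hypothetical; point it at your own deployment.
    import json
    from urllib.request import urlopen

    with urlopen("https://matrix.example.org/_matrix/client/versions") as resp:
        print(json.load(resp)["versions"])  # client-spec versions the server supports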


But Apple isn't in that business; they are trying to make the common user's communications private. Open sourcing only helps the power user (auditing, running their own server), as you described.

Signal and iMessage both don't guarantee true privacy as we can't see the servers.


Any system where you don't control every component is inherently insecure. I find it laughable that so many (especially on HN) will evangelize security while evangelizing Google, a company that literally wrote the book on harvesting user data and selling it to advertisers, and tear down Apple just because Apple refuses to let us see their code.


No service that uses a third party server can guarantee privacy unless they let you see the servers.

For common users, the question is whether they trust the server operator; if not, they should consider their communications insecure. What they do about that is up to them. And it is up to these providers to earn that trust.


The problem with this is that it eliminates the value proposition of E2EE, which is designed specifically to protect you when you don't trust the server operator. If you already trust the server operator, you've obviated the issue that E2EE ostensibly solves.
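To make that concrete, here is a minimal sketch of the E2EE idea using PyNaCl's public-key Box (an illustration of the concept only, not Apple's or Signal's actual protocol). The relay only ever handles ciphertext, which is exactly why a trusted server shouldn't be required for content confidentiality:

    # Minimal E2EE sketch using PyNaCl (pip install pynacl).
    # Conceptual illustration only; not any vendor's real protocol.
    from nacl.public import PrivateKey, Box

    # Each party generates a keypair on their own device. Private keys
    # never leave the device, so the server has nothing useful to leak.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts to Bob's public key; the server merely relays the blob.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

    # The server can store and forward the ciphertext but cannot read it.
    assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == b"meet at noon"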


> No service that uses a third party server can guarantee privacy unless they let you see the servers.

And there are no services that do that, since it doesn't seem to be possible. Open sourcing the backend doesn't help here either; you still have to blindly trust the server operators.


Personally I can't trust Apple for a specific reason... On iOS they've made it quite difficult to turn Location Services off unless you jailbreak the device itself. You have to go into Settings -> Privacy -> Location Services -> toggle off -> "Are you sure you want to turn this off?" -> yes...

I mean, when a company makes it that annoying to turn off, it's pretty clear they do want to monitor your location.


How is that difficult to turn off?


The steps. I'm a privacy junkie myself and want Location Services off most of the time, apart from when I'm going to use Uber or Google Maps. Even for me, someone willing to go the extra mile to turn it off, it's off-putting to have to go through so many steps to do it.


Not sure what exactly you want from Apple here. Nobody other than a handful of people regularly switches Location Services on or off, especially at a global level as opposed to per app.

And even then it's only four taps, which is one more tap than changing Wi-Fi networks, something a lot of people do.


Well, on Android it's just a swipe down on the system bar (or whatever it's called) and a tap, from anywhere. So yes, of course people are constantly turning it on and off, mainly because it saves loads of battery, but also for the obvious privacy reasons.


Open sourcing helps everybody. Power users make sure it's safe for them; if it is, it is safe for 'normal' users as well.

I don't need to read the Signal code myself to have confidence that it works.


OK, so if your tech-inclined friend sets up a Signal server, for little to nothing in return, that's automatically more trustworthy than a multi-billion-dollar corporation with an ingrained and (seemingly) solid attitude about NOT selling users' data?

I run an OwnCloud server for me and my family, and I know that I'm not invading their privacy, but I could very easily do so without leaving a trace that even a pro user could find, let alone a layman. To say that Apple is less trustworthy than a given power user solely because they use closed source is ridiculous; there is zero correlation there.


> Power users make sure it's safe for them; if it is, it is safe for 'normal' users as well.

Power users can make sure the code they're running is safe for them. There's no guarantee that Signal, for example, is running the same code it releases to others.


Signal for Android has reproducible builds, so you can verify that the version on the Play Store matches the code you audited.
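For instance, the core of that check is just comparing digests of your local build against the store binary (file names here are hypothetical, and in practice the APK signing block has to be stripped first, which is what Signal's apkdiff tooling handles, since only the vendor holds the signing key):

    # Sketch of a reproducible-build check: hash a locally built APK
    # against the one pulled from the Play Store. Names are hypothetical,
    # and real comparisons must exclude the vendor-only signing block.
    import hashlib

    def sha256(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256("local-build.apk") == sha256("play-store.apk"):
        print("match: the store binary was built from the audited source")
    else:
        print("mismatch: diff the contents with signature blocks stripped")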


How does that help you verify what is running on the server?


The whole point of E2E is that you don't have to trust the server (for content; you do have to trust that they aren't saving metadata).


They can use trusted computing or HSM tech for that. They have the money but probably won't spend it.


None of that addresses the point. Without third-party, independent verification of their servers and the code that is running, open source provides limited (if any) improvement to privacy and security.


That's part of the point rather than separate from it. A closed system, with no way to prove what it's running, often can't get third-party verification or consistently believable user trust. An open-source, tamper-resistant system can. Quite a bit of difference. Once verifications come in, reputation effects then allow users with less technical knowledge to learn what's trustworthy.


But they don't, so why do they get points for having an open source server?


For a full solution, it would be a start toward what's needed. The main benefit of an open-source server is that people can audit it for vulnerabilities; the vendor gets free-to-cheap labor to reduce its liabilities. People might also submit extensions they find useful. The usual benefits of FOSS for suppliers.


"...if it is, it is save for 'normal' users as well..."

Not sure about that. I think even if a power user were to say "this is safe", you still have a boatload of integrity problems with the software.

- How do we know the power users can be trusted?

- Even if they can, what guarantee do we have that the service provider we use is using the same open source code that the power user validated?

- Etc., etc., etc. Basically, a lot of trust issues.

I agree with asadlionpk: open source can generally only be proven to help power users in a trustless environment. And where security is concerned, we cannot ascribe the "safe" attribute to any system whose safety cannot be proven.


> both don't guarantee true privacy as we can't see the servers.

What do you define as true privacy? Why isn't other privacy "true"?

What do you mean by "see the servers"? Surely you can see them as computers at the other end of a TCP connection, and the server cannot read the cleartext of an E2E encrypted message.


Helping the power user is the whole point, imo: let those who know what they're doing see and run the code themselves. As such, it doesn't matter whether Apple is in some business or not; it's simply a security measure.


That's what I am trying to say: "helping the power user" is not the business Apple is in.


I guess we're going to find out soon enough what Apple thinks of that demographic considering their WWDC announcements.


And how is open source a guarantee of better security? The OpenSSL Heartbleed bug was in the wild for over a year.


If you try to make a Signal-compatible server, though, you'll be pushed out of business. It's shared source, not open source.


In a system that tries to be secure against server attacks, nobody cares about the server-side code, because we can't trust that the deployed code is the exact same code (i.e., without backdoors) anyway. Therefore, it would be possible to make assertions about how secure iMessage is from the client code alone.


To some extent the same problem applies to the application code. As long as the hardware and platform are closed, we don't really know what is going on when the application is executed. Despite this, opening up the application code would of course be a step in the right direction. Not because it would guarantee that Apple is not doing bad things, but because it would help others spot problems that may be there by accident.


Yes, I agree with all of that.


Even if the server code is good, it's frequently hosted on machines that have other vulnerabilities anyway. Because we're not the sysop, we have no idea what it's running on or whether it's patched properly and up to date. The only way to be safe is to become an expert in digital security, an expert in systems management, an expert in reading and mitigating security problems in open source code, and an expert in securely deploying secure communications systems, and to host it yourself.

For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.


> Even if the server code is good, it's frequently hosted on machines that have other vulnerabilities anyway. Because we're not the sysop, we have no idea what it's running on or whether it's patched properly and up to date. The only way to be safe is to become an expert in digital security, an expert in systems management, an expert in reading and mitigating security problems in open source code, and an expert in securely deploying secure communications systems, and to host it yourself.

If the server is assumed to be compromised by the threat model, then none of this matters.

> For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.

It is better to put your trust in any number of independent auditors than in a single corporation.


Still, the other problems the parent mentioned apply. Can you verify the build against the source? Can you verify the binary on your phone against the build?


No, of course not. I also can't build an iPhone from scratch or look at the hardware design or firmware. It's especially bad when you consider the Qualcomm baseband and the problems it has.

Hell, I can't even trust that with my desktop computer. Further, how do I know the light in front of me isn't fabricated and that I'm not a brain floating in fluids connected to a simulation?

Realistically, I believe Apple's intentions are in the right place. And I believe that, for the most part, iPhone backdoors are not a thing yet. Being able to look at the client code is not something I believe will happen, but it would be good if it did, because then the security of iMessage could be independently verified much more easily. It is true that Apple could just publish different code and lie about it, but assuming their intentions are in the right place, it seems like a win-win for everyone.


You can't build an iPhone from scratch, but you can build one from parts. Of course, it would need the magic signature from Apple in order to make Touch ID work unless the home button and system board were bought together.


Client software (anything that runs on the user's device) should be open source for two reasons:

1. It's easier to audit, although binary analysis is still useful (and reverse engineers are often better at finding security holes than someone doing a source code review).

2. Reproducible builds.

Server software doesn't need to be open source. If you have E2E in your open source software, you don't need to trust the servers at all: https://paragonie.com/blog/2016/03/client-authenticity-is-no...
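One caveat worth adding: an untrusted server could still hand out the wrong public keys, which is why E2E clients expose key fingerprints for out-of-band comparison (Signal calls these safety numbers). A rough sketch of that idea, mimicking the concept rather than any real client's exact format:

    # Sketch of out-of-band key verification. If both users see the same
    # short fingerprint, the server can't have swapped keys in a MITM.
    # Mimics the idea behind Signal's safety numbers, not the real format.
    import hashlib

    def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
        # Sort so both parties derive the same string regardless of order.
        digest = hashlib.sha256(b"".join(sorted((pub_a, pub_b)))).hexdigest()
        # Group the first 20 hex characters for readability.
        return " ".join(digest[:20][i:i + 5] for i in range(0, 20, 5))

    # Each client computes this locally from the keys it believes it has;
    # users compare the result over a channel the server doesn't control.
    print(fingerprint(b"alice-public-key", b"bob-public-key"))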


That seems like a very weak "should" and more of a "nice when it happens"


My previous comment is RFC 2119 compliant.


That's 90's RFC technology! WOULD BE NICE is an unfortunate omission.



