Apple, Google ban location tracking in apps using their contact-tracing system (reuters.com)
701 points by ingve on May 4, 2020 | 536 comments



I have to say, working at Apple and knowing all the hard work that goes into making sure your data stays private while still being able to combat this disease, it's very frustrating to read a lot of the comments here. I can understand why the public is skeptical, but I feel like as a society we've swung so far away from institutional trust that now nothing good can actually emerge. The anti-vax movement is a perfect example, where the collective work of thousands of people over decades to save millions of lives just gets tossed aside because some celebrity 'feels' like there's a connection that isn't there, and in the process the level of public harm becomes severe.

Note: All opinions (in this comment and all of mine on HN) are my own.


I think the notion of trusting any company over a certain size is crazy - I wish people would stop anthropomorphizing companies that way. Basically, I can trust specific people if I feel that I know their character, and by proxy I might trust a company that these people have significant control over. Once a company reaches a certain size, any power that individual employees wield will have been diluted to microscopic levels, and even founders' altruistic motivations tend to fade or get drowned out by the basic profit motives of an increasingly large group of shareholders. At that point, it's not a matter of trusting; at best it's a matter of recognizing whether the strategic goals of the company align with my interests in specific ways. For example, Apple has made a strategic choice to double down on privacy, because invading privacy is not core to their business the way it is to Google and Facebook, so this allows them to gain an edge over their rivals. Great, that's something. That doesn't mean I "trust" Apple the way I would a person - it's more akin to how I'd recognize when a wild animal doesn't view humans as prey, so I'm willing to get relatively close to it without protective equipment. I don't trust them in general, I don't expect them to care about me at all, but I trust them to follow their instinct, which is to not attack me unprovoked.


I’ve thought for a while that trademarks should be non-transferrable, and should be voided on change-of-control of a company. Their primary purpose is to let the consumer know the origin of goods, and to allow companies to build a reputation around intangible properties of their products, like longevity, quality, and craftsmanship. Selling a trademark is fundamentally betraying that trust in exchange for a monetary reward, which wouldn’t be acceptable behavior in other settings.


Very interesting. Is this your own idea or has it been more widely discussed?


I don't recall hearing it from anywhere, but it's the sort of idea I'd expect to crop up from time to time. I haven't done any research to find out if anyone else has seriously suggested or investigated it.


Would you consider a company going public as a change-of-control event that should trigger voiding its trademarks?


Defining what represents a change-of-control is probably the trickiest part of moving this from an interesting idea to a real proposal. Probably, the key factor should be whether or not the senior management and/or the board membership stays essentially the same; new management should be required to earn their own trust from customers that might not follow company news.


Realistically that means no brand lasting longer than about ten years, and the total elimination of lots of century-old brands that people have fond relationships with. It's a stupidly excessive hammer that fails to address the real problems. And it's probably going to suffer the obvious free speech challenges.


> lots of century-old brands that people have fond relationships with.

That's the problem being discussed--the fond relationships are built with entities that no longer exist, and it is argued that current brand practices are deceptive on the consumer side of the relationship and abusive on the producer side.


"Free speech" is a ridiculous charge to level at this. The concept of a government-protected trademark is itself a restriction on free speech in the first place, as are all consumer-protection labeling laws.


> senior management and/or the board membership stays essentially the same

Defining "essentially the same" sounds hard, especially if a series of "small" changes happen over a period of time. Ship of Theseus, anyone?


A series of small changes gives consumers time to react and the value of the brand to naturally evolve, so arguably that's not a problem.


>so arguably that's not a problem.

As soon as you try defining "small changes" and the period of time on which they can occur, it is arguably a problem that will lead to loopholes easily abused by the legal departments of any big corp.


The purpose here is also likely to be very different for different use cases. You might not want the Apple trademark to be transferable as the management changes because you care about their approach to privacy. On the other hand, a consumer who thinks Diet Coke is tasty probably doesn't want to have to look up what brand name they were forced to change to when the CEO quit just to keep buying their favorite soda at the grocery store.


The purpose of trademarks is to capture value for their holders, not consumers. That is why it is the holders who are awarded damages.


That's a brilliant proposal.


I kind of disagree with this. If a company publicly claims to follow certain principles and vows to have mechanisms in place to protect user privacy, people on the inside are able to judge if those principles are being followed. The larger a company is, the lower the chances that everyone would stay silent in case of a (systematic) breach of those principles. So in a sense, it's much harder for larger companies to secretly do things, simply because larger groups of people are worse at keeping secrets.


If we were to outlaw NDAs and dramatically beef up protections for corporate whistleblowers, that might work. But those are wildly implausible proposals! And while people are bad at keeping nefarious conspiracies secret, they're quite capable of keeping quiet in general - which happens to protect the less egregious but more relevant kind of grey-area boundary-pushing that slowly evolves. Just look at all kinds of corporate and governmental malfeasance stories to see that it's pretty normal for ethically dubious behavior to go on for years if not decades before it comes out. Most of it probably never comes out.

People can't keep crazy conspiracies secret very well, but are completely capable of keeping eroding standards quiet.


But history has proven this wrong. For example, how did the entire German population willfully engage in something (WWII) that they now consider very wrong and are ashamed of? (Quite a generalisation, I know, but it's an example of a lot of people going along with something they may not entirely agree with. I'm not comparing Apple to the Nazis, BTW, nor am I saying that every single person went along with it.)

I am merely saying that if it is possible at a national level, it is possible at a company level. If it is a culture where you work you can quickly become swept along with it.


WWII is somewhat different in that it was not secret what was happening. Speaking up on the inside is a much bigger step than just talking to outsiders.


What? The Holocaust was definitely kept under wraps from the general public until the concentration camps finally got liberated. Sure, some members of the public heard rumors, but it wasn't like Nazi Germany was doing it completely out in the open.


> The Holocaust was definitely kept under wraps from the general public

https://www.theguardian.com/uk/2001/feb/17/johnezard suggests otherwise.


For example, in a huge company like Boeing it would be impossible to have a coverup of gross violations of the corporate principle of safety first, and as a consequence no one will ever be killed by a 737 MAX 8 simply falling out of the sky, taking hundreds of lives.


> The larger a company is, the lower are chances that everyone would stay silent in case of a (systematic) breach of those principles

That's a wildly wrong hypothesis. The complete opposite of this idea is uncontested knowledge in academia, and in popular culture as well.

see Hannah Arendt's work. (which doesn't even include profit and personal gains!)


It just strikes me as funny that in response to an argument against trusting large companies with sensitive information you say that "larger groups of people are worse at keeping secrets". I mean, isn't that just corroborating GP's point?



But the proposed solution in the U.K. and Australia is worse for privacy and is only subject to the scrutiny of the citizens of that country, compared to the scrutiny of the entire planet for Apple and Google. Ultimately, if you don’t trust them, don’t take your phone out with you. Because any one of Apple, Google, Amazon, Huawei, Samsung or whoever could start tracking you with a simple software update at any time. Likewise, don’t drive a car made in the last few years.


Correct. I'm an avid Apple product user (a dozen Apple products in the house actively used at the moment), but it's obvious that "privacy" was a PR push against rivals that had little or no impact on their bottom line.

If Apple relied on private info for revenue, the warm smile would be the same but the wording would be very different.

Still, the problem is more along the axis of technology. Even if Apple open-sourced their entire stack, we'd still have to take their word for what was running on our phones at any given moment. Encryption protects us but also ensures we can't know what information is sent from our phones to Apple. Hardware security, which protects us as well, turns our phones into a black box.

Are Apple good guys? Well, we could have a whistleblower come forward with evidence that they're not but a lack of whistleblowers isn't evidence that they are. If the entire stack isn't auditable by 3rd parties then we're just going on faith.

Personally, I'm ok with it. I like the aesthetics of Apple products more than their rivals. The threat to me, if it exists, isn't large enough for me to worry about.

If I were a criminal, I wouldn't trust my iPhone for a second.


>a lack of whistleblowers isn't evidence that they are.

It is, actually. It reduces the number of possible worlds in which Apple is an evil company, while leaving intact the number of possible worlds in which it is saintly.

It's only weak evidence, to be sure.
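
A toy Bayesian update makes the "weak evidence" point concrete. A minimal sketch, with made-up numbers (the prior and the leak probability are illustrative assumptions, nothing more):

```python
# P(evil | no whistleblower), assuming "good" companies never leak wrongdoing
# (there is none to leak) and "evil" ones leak with some probability.
def p_evil_given_silence(prior_evil, p_silence_if_evil, p_silence_if_good=1.0):
    evil = prior_evil * p_silence_if_evil
    good = (1.0 - prior_evil) * p_silence_if_good
    return evil / (evil + good)

# Prior belief of 10% "evil"; wrongdoing leaks 30% of the time:
print(p_evil_given_silence(0.10, p_silence_if_evil=0.70))  # ~0.072, down from 0.10
```

Silence nudges the posterior down, but only by as much as you believe wrongdoing tends to leak.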


+1

Well put. Corporations aren’t people no matter what Citizens United “says.” Their goal is maximizing shareholder profit at any cost. They do nothing out of the goodness of the hearts they don’t have and they certainly are not guided by ethics the way individuals are.

As you said, if Apple “cares” about privacy more than Google or Facebook it is not because it is a more “moral” or “just” company, it is simply because its business model is not, and never was, based on hoovering up and monetizing personal data.

It’s not like a conscious decision was made by Tim Cook or Apple’s shareholders to do things differently than the aforementioned two companies.

Pointing out how Apple “cares about your privacy” is simply a PR (read: propaganda) move designed to gain some extra positive publicity which, of course, helps the bottom line. It’s the kind of thing companies do all the time. Taking these pronouncements at face value requires a certain naïveté (or a wilful suspension of disbelief).

And while Apple might not traffic in its customers’ personal data, the company recently revealed that it scans content stored on its servers, e.g. your iCloud Photo Library, because of “concerns” over child pornography.

This suggests privacy isn’t quite as sacred to Apple as its PR department would have one believe.

(Never mind that there is no evidence whatsoever that mass surveillance reduces the incidence of child sexual abuse or decreases the production and distribution of child porn. Which raises the question: why <i>is</i> Apple spying on its paying customers’ personal content?)

Whenever a company, or a government, deploys the cliché “think about the children!” argument as a pretext for increasing internet surveillance, you can bet this reason was chosen because if anyone objects on privacy grounds they can say, “what, so you’re on the side of the child pornographers/suicide trolls/cyber bullies?” It’s a discussion stopping tactic.

Yeah, corporations are definitely not people, they are not your friends and they do not care about you as a person. That in 2020 this even needs to be said is a testament to how thoroughly corporate propaganda has been woven into the fabric of western culture.


> Once a company reaches a certain size, any power that individual employees wield will have been diluted to microscopic levels, and even founders' altruistic motivations tend to fade or get drowned out by the basic profit motives of an increasingly large group of shareholders.

I couldn't agree more. But at the same time, I'm completely baffled that while this sentiment about companies is widely agreed upon on HN, people around here don't expand this from companies to much more powerful and dangerous orgs: governments.


Regarding Apple's relationship with privacy, I implore you to take a gander at these pages:

https://apple.com/privacy (45 second read)

https://apple.com/privacy/features (many minute read with links to whitepapers)

I believe this goes well beyond following instinct, and at least for me does lead to a strong semblance of trust.

They really did themselves a disservice by not being upfront about Siri's contractors (I already knew, so it didn't affect me personally), and they're hemming and hawing on E2E iCloud backups (perhaps for Availability, perhaps for China), but they continue to push the envelope on what usable security and usable privacy look like.

In addition, the iPhone 5s from 2013 is still getting security patches, as recently as March 24th.


> I think the notion of trusting any company over a certain size is crazy

I'd go the reverse. The bigger they are, the more I trust them.

> I wish people would stop anthropomorphizing companies that way.

But anybody who trusts them for that reason is indeed crazy.

Maybe our definitions of trust are different. For me, trust means predictability. Do I trust the sun will rise in the east, or that a stone will roll down the hill? Do I trust my mother will tell me off if I growl at my sister? Do I trust my father will ask me how the finances are going? These things are all predictable - so I trust them. To a lesser extent, do I trust these people to keep their word? The answer is sometimes - if my mother promises to look after the kids then definitely, but if she promises not to buy them presents then I'll trust her a little less.

Companies are also predictable in some ways. Bigger companies more so than smaller ones, simply because it takes a lot longer to change the direction of a bigger ship. In my book predictable means trustworthy.

One of the things you can absolutely have faith in is that a bigger company is driven by money. (A small company is more likely to be driven by the idealism of its owners.) Money means users, so if a good record on privacy attracts users, you can be pretty sure they will do their best to have a good record. Not surprisingly, Apple and Google do have an extraordinarily good record on keeping the data they store secure - far better than most governments, for example.

It does not surprise me at all both companies are making a song and dance about how they are safeguarding your location data, and I trust them to do everything within their power to make good on that promise. Unfortunately they aren't gods - if they receive a court order to cough it up, they will. So trust has limits.


> I'd go the reverse. The bigger they are, the more I trust them.

that is just silly.

> bigger company is driven by money

Follow that money. Figure out who pays. Are you the customer?


That's a good writeup and a good practice.


Agreed. When it comes to corporations and privacy the rule of thumb is guilty until proven innocent.


[flagged]


There's a world of difference between projecting onto a non-living entity the properties of an animal and projecting those of a human - so I don't understand why you think what I did was anthropomorphism. And I find the analogy to animals quite apt, to be honest: humans can have all sorts of motivations, including morality, and if you know that about a person, you can trust them. By comparison, wild animals (at least excluding primates, whales and dolphins) are driven almost exclusively by much simpler motivations, such as getting food or procreating, much the same way large companies are driven simply to generate profit and shareholder value. I don't need to trust a shark, but I can trust it to follow its shark instincts. I don't need to trust companies, or expect them to care about me, but I trust them to care about their bottom line.

So to be clear: I'm glad Apple and Google are going this route, rather than allow governments to massively track their citizens. And I don't need to trust Apple in order for that to happen, and that's good.


I totally missed your point in the original post. Apologies for that! It seems others didn't. I agree with you.


I've long felt like Apple is uniquely easy to distrust, due to its almost total lack of visible employees. I assume there are actual people working there, but I've never seen one, say, speak (openly) at a meetup or answer a stackoverflow question, or the like. ~Everything I know about Apple either comes from an advertisement or from a guy who talked to a guy who talked to Gruber, and I have a feeling that that makes it easier for people to assume nefarious behavior behind the scenes.

Actually, with the exception of people working on standards committees, I think your comment may be the first time I've ever seen an Apple employee openly comment in a public forum.


Yep, I think this is quite right. In fact my first thought after seeing this guy self-identify was "welp, that guy's getting fired"


Yep, same thoughts.

Most of the open-source contributors in my projects that get hit by a bus get hit by an Apple bus (they start working at Apple, and cannot say anything publicly anymore or contribute to any project).


I've met an Apple employee at a conference, just once. Talking to him was reminiscent of talking to an employee of GCHQ (which I have also done). Relaxed and affable, but wielded the phrase "I can't tell you that" with the familiarity of one who says it several times a day.


Sounds about right. A pretty good friend of mine worked at Apple for a year or two, and I once asked if he enjoyed his time there. He answered: "no comment".


The general populace doesn’t pay any attention to statements from employees of any company. Those in the tech industry don’t pay any attention to statements from employees of companies outside the tech industry. Being aware of statements from employees does not appear to be a factor in trust in a company.


I read GP's post (and hence my reply) as being about sentiment here on HN. Obviously I agree that the general public's opinion of Apple isn't heavily influenced by meetup speeches and Gruber's blog :P


I’ve seen an Apple engineer in a non-GUI or hardware job talking on his blog about curvatures he encounters over the course of his job, so that definitely happens sometimes.


Their infamous secrecy doesn't exactly help either.


Well there's WWDC of course. But I suppose that's not really "out in the world".


Do you find companies with public facing employees that give talks at meetups and conferences easier to trust?

In other words, you trust Facebook, Google, NSA, and Congress more than Apple?

Seems like an odd standard.


I said P⇒Q; you're asking if that means I believe !P⇒!Q.

That is: I'm not saying a company is trustworthy if-and-only-if its employees speak publicly. I'm saying that my mental model of what a company would or wouldn't do internally is strongly influenced by what I know about its employees.


The alternative is that you believe !P⇒Q, in which case you could have easily just said Q


What if there's an R?


As an Australian who has privacy concerns about their government's COVIDSafe app (see https://github.com/vteague/contactTracing), and hence is not installing it, I'm really thankful that Apple and Google are pushing this model of contact tracing. We still don't know if digital contact tracing is effective in practice, but it's important to try, and we can do it in a way that avoids giving governments with worrying authoritarian tendencies another tool.


Having Google/Apple develop a tracking technology is the same as the US government having it. If you don't believe that, read again what Snowden revealed several years ago.


What? I don't think this is accurate.

From what I recall, the US was/is spying on the major tech companies and would regularly demand data and place gag orders on those companies.

Neither action is a willful form of data transfer. The first is actually an eternal game of cat and mouse: the NSA finds a leak for some data, Google fixes it, new leak, etc. The second is a targeted handover of data, and only affects a few individuals.

Equating these with the US government having full access to everyone's data is misleading. If you think otherwise, please provide more detail.


Exactly why they aren’t collecting GPS data and why the system is built using anonymous Bluetooth keys, similar to the anonymous Find My iPhone network. Can’t hand over what you don’t have.


How does this jibe with Google, for instance, using Bluetooth scanning (enabled by default) for high location accuracy? This has been enabled for years, and most people (outside of HN) are simply unaware of it being enabled.

Google has effectively been using Bluetooth scanning and contact tracing (of sorts) as part of their location tracking feature... Now they're turning around and saying they won't track location from Bluetooth scanning? Seems like a BS PR move.


Apple and google are working together on this. So it’s an agreement between them both to have a more privacy-centered protocol for this purpose.

Google lets you do a lot more privacy violation on Android, but Apple has been building their brand around privacy and wouldn’t participate in that.


>Equating these with the US government having full access to everyone's data is misleading. If you think otherwise, please provide more detail.

By everyone you mean "US citizens", because from my understanding non-US citizens are fair game and it is legal to spy on them.


No, they aren't allowed to hand over data on EU citizens either, that would break GDPR. USA wouldn't go after them for this but EU definitely would.


What about non-EU citizens? Do you think the NSA won't try to get the realtime location of EU and non-EU politicians? They can claim it is for national security; would GDPR stop that?


You should maybe read Permanent Record from Snowden. Everything you describe is outdated since his whistleblowing in 2013.


> is spying on the major tech companies and would regularly demand data and place gag orders on those companies.

So you agree.

> Neither actions are willful forms of data transfer.

What’s that got to do with it?

> Equating these with the US government having full access to everyone's data is misleading.

If the data exists, the only prudent approach is to assume state-level actors, at least, can get access to it.


The discussion is beyond "if the data exists": it will be gathered, and some people seem to prefer yelling at clouds instead of looking at the technical implementation.

Even nation state actors will have a harder time gathering data that only exists locally on a bunch of smartphones, separate from geolocation as proposed here, versus a centralised database lacking comprehensive oversight.

The rest is pretty irrelevant, we're talking about data collection using phones that already have an OS from both of these vendors. "But Snowden" is really no argument anybody in these discussions will listen to (and I'm not convinced they should if it's used in a way to imply that you shouldn't use the internet for anything). If you have a problem with data collection for contact tracing please be specific why and optimally provide what you feel would be a better alternative.


Google and Apple are the ones pushing for isolating where this data lives and how it can be used/abused here. They are doing this in an effort to curb far more dangerous data collection on the very same devices in architectures that infer location or send off all data that is gathered on central servers. Your comment boils down to "Gapple is evil because Snowden" and seems to be disconnected from the specific issue at hand. They are OS manufacturers, if they wanted to get malicious access to all kinds of tracing data they would have had to do exactly nothing.


> They are doing this in an effort to curb far more dangerous data collection

Don't be naïve. This is not an NGO or an institution. They are doing this so that they, and no one else, own the data.


Or maybe so that ordinary people will continue to trust them and people like me will start trusting them.

They have a long way to go in my case but every journey starts with a single step, and this seems like the twelfth or so step from Googles side towards becoming trustworthy (but they still have a long way to go!)

I think one shouldn't underestimate the business value of actually being a trustworthy vendor/business partner/SaaS company and while there are a few contenders that niche isn't too crowded for now :-)


> Don't be naïve. This is not an NGO or an institution. They are doing this so that they, and no one else, own the data.

Even if this "do your research" level talking point were true, they don't own the data in this proposal; the end user's device does. The device you trust and use anyway, the device that has your geolocation and access to far more data Gapple could abuse at all times. Which is better than what COVIDSafe/NHSX/ROBERT put forward for the specific topic of digital contact tracing.


Google/Apple developed this particular contact tracing technology such that they don't have any of the data nor any of the control, so it is not the same as the US government having it.


> Having Google/Apple develop a tracking technology is the same as the US government having it.

This is why you want an API that never uses any data you don't want anyone else to have. That's what this API is.

The "trust" here isn't about whether they'll keep your data safe from third parties including state level actors. Your "trust" only needs to be that the API does what it says on the tin.

Which leads to this conclusion: either you A) trust this API to be what it says: something that doesn't ever deal with any sensitive data. Basically an exchange of random numbers.

Or B) you think that there is something nefarious here and the API might associate who you are or where you are, and store or distribute that data.

If it's A) then you should be fine. If you think B) then you shouldn't use a phone from Apple or Google. Because as far as you are aware, they share your location and personal information.

As far as integrity goes, I can't see a situation where you would both accept running an iOS or Android phone but at the same time avoid apps with this API out of privacy concerns!


> then you shouldn't use a phone from Apple or Google

Because you have another choice...


I read Snowden's biography - Permanent Record. Your claims are untrue. The sibling comment has it right - the US govt could steal data from the tech giants but they gradually got better at plugging such leaks. For example, in response to the Snowden revelations, all data in transit between datacenters is now encrypted. On the other hand, the govt would also request the data of a few people, which was generally granted.

It's not correct that the US govt has root access to all systems and data.


That seems extraordinarily naive. The NSA would not give up access like that unless forced at gunpoint, and the US gov has clearly demonstrated it doesn't care (and actually quite likes) this sort of gross privacy invasion.

The only (in)tangible difference between 2005 and today is the presumed existence of national security letters and other warrants that compel these companies to provide access.


You must be ignorant of how these agencies work. All they want is plausible deniability. The spy agencies have all the technology needed to access any phone. For example, it is widely known that Israel's agencies have the ability to enter any phone, iOS or Android, and get the information they want. They are now OPENLY using this technology to track coronavirus cases:

https://www.cnn.com/2020/03/18/tech/israel-coronavirus-techn...

https://www.timesofisrael.com/israeli-tech-company-says-it-c...


Do you recall the news that the NSA (US intel agency) paid RSA (US security vendor) $10m to backdoor encryption libraries?

If this was exposed once, it's happening elsewhere.

There was also the case of the NIST elliptic curve standard (Dual_EC_DRBG) being subverted to include an NSA backdoor.

They’ve got a job to do. They’re doing it. But worth noting that a vendor could claim be pro privacy while also cooperating with their government.


And we also have examples of companies refusing to comply. And we all use djb’s curves rather than the nist curves now.


Another consideration that seems to have been obscured by the debate on privacy and the narrow focus on the particular client implementation of the app is the significant problem of false positives and negatives.

A lot of voices have spoken out about this issue overseas (particularly in the US) while many local tech voices have skipped considering this at all.

See:

* Previous FTC CTO / Obama Whitehouse senior adviser: https://twitter.com/ashk4n/status/1248659875669798912

* Brookings Institute article: https://www.brookings.edu/techstream/inaccurate-and-insecure...

* Margolis Center for Health Policy at Duke University (pdf report): https://healthpolicy.duke.edu/sites/default/files/atoms/file...

* Bruce Schneier: https://www.schneier.com/blog/archives/2020/05/me_on_covad-1...


We need to be driving test numbers up until only 3-5% are showing positive in order to be confident about low prevalence in an area. I don't see a problem with false positives encouraging asymptomatic people to get tested - it's as good of a sub-population as any.
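
For anyone who wants the arithmetic behind the false-positive concern: at low prevalence, even an accurate test returns mostly false positives. A minimal sketch via Bayes' rule, with illustrative sensitivity/specificity values rather than figures for any real COVID-19 test:

```python
# P(infected | positive test) at a few prevalence levels.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.001, 0.01, 0.05):
    ppv = positive_predictive_value(prevalence, sensitivity=0.90, specificity=0.98)
    print(f"prevalence {prevalence:.1%} -> P(infected | positive) = {ppv:.0%}")
# prevalence 0.1% -> ~4%; 1.0% -> ~31%; 5.0% -> ~70%
```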


The project lead of Singapore's TraceTogether initiative goes into detail about the problems with an automated system, and why a human-in-the-loop is ideally required to evaluate the type of contact and make a determination.

A determination around being a close contact results in 14-day isolation regardless of symptoms, presumably because you may initially test negative before moving into an infectious and asymptomatic or symptomatic phase.

https://blog.gds-gov.tech/automated-contact-tracing-is-not-a...


It's worth re-iterating how unreliable bluetooth signal strength is in estimating proximity.

One recent data point using the CovidSafe app is here: https://twitter.com/jim_mussared/status/1256199078314078210

Exploration around the defects in that app is ongoing here: https://docs.google.com/document/d/1u5a5ersKBH6eG362atALrzuX...
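
To see just how sensitive the estimate is, invert the standard log-distance path-loss model. A minimal sketch, assuming illustrative calibration values (the 1 m reference power and path-loss exponent vary per device, body position, and environment, which is exactly the problem):

```python
# Invert rssi = TX_POWER - 10 * n * log10(d) to estimate distance from RSSI.
TX_POWER = -59.0   # assumed RSSI at 1 m, in dBm (device-specific)
PATH_LOSS_N = 2.0  # free-space exponent; indoors it ranges roughly 1.5 to 4

def estimate_distance_m(rssi_dbm):
    return 10 ** ((TX_POWER - rssi_dbm) / (10 * PATH_LOSS_N))

for rssi in (-65, -70, -75):  # a plausible fading spread at one true distance
    print(f"RSSI {rssi} dBm -> ~{estimate_distance_m(rssi):.1f} m")
# ~2.0 m, ~3.5 m, ~6.3 m: a 10 dB swing turns "close contact" into "across the room"
```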


14-day isolation was a policy choice. There is no reason other municipalities must do the same. We can discover a balance that keeps the local r < 1.0.


I assume that the use model is as a way to help human contact tracing, not to replace it. Admittedly, the benefit is smaller then, but maybe it's still worth it?


The assumptions of benefit appear to rely on a naive theory of instantaneous contact and isolation, when in practice the entire process is unavoidably manual, requiring human intelligence to ascertain environmental factors with which to make a determination about whether someone is a close contact.

When you consider that close contacts include anyone you've spent more than 2 hours with in a room, it becomes clear that most life situations are not handled by estimates of proximity using bluetooth: home, family & friend visits, workplaces.

You can find a deep-dive into these issues here:

https://blog.crushthecurve.today/why-should-you-install-the-...


There are two sides to trusting the covid-19 app. One is the technical side those people are commenting on. Technical deficiencies can be fixed, and more to the point will be fixed if you just keep shining some light on them as they are doing.

The other side is trusting the government to keep its promises. During this covid-19 crisis I do trust them, but in the longer term their record of keeping promises has been less than stellar. Frankly, keeping this app or any app of theirs installed over a few years on the basis of them promising not to misuse the data is downright foolish given their history. Such promises tend to become null and void at the next election.

But right now we have no choice - it's either take them at their word, or don't install the app. Yes, we can do what the gang of four above have done and de-compile it, but that takes a huge amount of effort that has to be repeated for every new release. That effort isn't going to continue. If it doesn't continue, the light doesn't continue to shine on its technical deficiencies, and so they won't be fixed.

But - that can change with a few simple and cheap changes to the way the government does things. All they have to do is release the source to a public repository before they release the binary, and have a reproducible build. Do that, and lots of things become much easier: checking what the commented source does (as opposed to decompiled output) is much easier, checking just the differences in source between one version and the next is much, much easier than checking the entire thing, and using the reproducible build to confirm the shipped binary matches that source means no one has to audit decompiled output at all. Do that, and the light on the technical deficiencies will stay on forever.
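
The verification step a reproducible build enables is trivial; all the hard work is in making the build deterministic. A minimal sketch, with hypothetical file names:

```python
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical names: the APK pulled from the store vs. one you built yourself
# from the published source at the matching release tag.
store = sha256_of("covidsafe-store.apk")
rebuilt = sha256_of("covidsafe-rebuilt.apk")
print("binary matches published source" if store == rebuilt else "MISMATCH")
```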

Implementing those inexpensive and straightforward things has another wonderful emergent property aside from the technical deficiencies being fixed: you suddenly don't have to trust the government; you can trust the code instead.

But no one seems to focus on changes to the overall process. Instead it's essentially nitpicking on how the app does things today. It's an unfortunate focus.


COVIDSafe will be using these new APIs.


Have you got a link / source for this? Genuinely interested, not being a dick.


From what I heard it works fine on Android, but on iOS it works ok... as long as you have the app running in the foreground, and your phone unlocked, while out and about.

The only way to fix that would be to use this new API on iOS.


Since people seem to be downvoting without commenting: https://www.gizmodo.com.au/2020/05/covidsafe-issues-ios-ipho...


I wasn't one of those who downvoted, but the parent post was asking for a source on whether this app is going to utilize the new APIs, specifically. According to your subsequent link, near the bottom, currently it's unclear:

"The Government will work with Google and Apple to investigate whether the new functionality announced by Google and Apple partnership is beneficial for the app performance"


The COVID Safe app was already released (although not fully functional on the server side yet) but the Google and Apple APIs are not available yet, so doesn't that mean that it isn't using the Google and Apple APIs? At least currently, anyway.

According to the Australian COVIDSafe app's privacy policy, when you register for the app, after you successfully enter a PIN sent by SMS, it transmits the following info to the Australian authorities: your mobile phone number, the name you enter, the age range you enter, and the postcode you enter. The reasons for each are explained in the policy. This data is stored in the cloud. I don't see why the registration info (I'm just talking about the registration info, not the Bluetooth-related data) can't simply be entered later, or stored locally on the device and not uploaded until, when and if, the user volunteers to share their registration info with health authorities, e.g. as a result of being notified that someone who was near them has tested positive for COVID-19. If this info wasn't transmitted as part of setting up the app, I expect the uptake of the Australian COVIDSafe app would be significantly higher.

I am also still waiting for Australia to publish the source code for their COVIDSafe app...


As a lifelong public servant, welcome to the ungrateful side of life. If you're providing a service to the public with the hopes of their gratitude, you will be sad. Do it for the same reason you change your baby's diaper: you love them and it needs doing. They might take pleasure in pissing on you while you're doing it, but this thing you love will be better off afterward.


Let's not compare Apple to a public institution. What Apple does is in the interest of the shareholders and nothing else.


>What apple does is in the interest of the shareholders and nothing else

Honestly, what is with this attitude? Yeah, I get it... you took a business class and learned that everything a company does should be to maximize shareholder value.

But welcome to the real world. Sure the company wants to make money, but they also want to provide a valuable service. Yeah, sometimes making money gets in the way of the best product they could offer.

But it's people making the decisions of what to make. It's people who use the software (let's say osx/ios) that want to make it better. It's developers who sit there and think about how they can improve a feature so it's great for people.


Perhaps the institutional shareholders have Apple holdings precisely because their privacy-centric practices increase their stability, independence, and profit—without having to sell out their customers.


No, they hold it just to get a return on their investment.


Actually I think the comparison is a good one, but mostly due to governments acting in their own interest rather than the interest of the people :)


It's nice to have some people like you, but a significant number of public servants I encountered visibly took pleasure in making the lives of others miserable, and in making the processes as convoluted as possible.


Some people in government want the government to succeed and make people's lives better. Some people in government want the government to fail in all ways besides keeping them and the people they like safe so they can personally benefit and pocket the tax savings on top.


> Some people in government want the government to fail in all ways besides keeping them and the people they like safe so they can personally benefit and pocket the tax savings on top.

and it's much easier to make something fail than to succeed, which is why voting for (and keeping) the right people is so important


Were they aware that the process was convoluted, or did they simply not realise how obtuse they were making it?

I have worked with people who "designed" terrible systems or couldn't see that a system would never work, yet still attempted to operate it. It was like they were unaware of the dysfunctional state of it, yet it had to exist because their job was to "design".


[flagged]


This kind of dismissal of an analogy bothers me. It's quite common, most often just using the word comparing: "Are you seriously comparing this to ...???"

Analogies are a great tool for clarifying similarities and no, that doesn't mean that situation A shares every virtue or vice with situation B. If so, it wouldn't be an analogy, it would be the same.


Apple is a for profit corporation. They are literally the exact opposite of public servants.


Apple isn't a public institution. Apple lives for profit. Apple is positioning themselves strategically. Contact tracing is a vector for profit.

If the government doesn't misuse your data, Apple will, by proxy, by training their AIs on your data. Sure, the AIs will start out "for the purpose of good" but at some point they will be leveraged for the "purpose of profit".


The anti-vax movement challenges science itself; the anti-Google/Apple movement challenges corporate-level ethics and human control over identity. You can independently reproduce experiments demonstrating the efficacy of medicine. You can’t reproduce the faith that you place in your own corporation.


But in this case the mechanics at play are the same: people can read the documentation and arrive at the idea that this protocol is actually good and much better than some alternatives.

Instead, they form an opinion based on generic distrust of Apple and Google, just like anti-vax people do out of generic mistrust of institutions and big pharma.


While the protocol may be good, the app implementing it may not be. Google and Apple have gone to great lengths to ensure that you cannot control what's running on your device. The next update can be nefarious and there is little you can do about it. Just have a look at the Australian assistance laws. The distrust is well earned.


But the next OS update can do whatever it wants too - how is this a change? We will always need independent verification and observation of the platform behaviour from the outside.

The distrust _is_ justified, and we should be wary. What is not justified is dismissing the CTF based only on that, imo.


While that might be an argument in general, the protocol Google and Apple are working on is cryptographically secure. They provably can't back out data about you.
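
For the curious, the published Exposure Notification cryptography spec (v1.2) is specific about this. A minimal sketch of my reading of the key schedule, using Python's `cryptography` package; this reflects the documented design, not Apple's or Google's shipped code:

```python
import os, struct
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

tek = os.urandom(16)  # Temporary Exposure Key: fresh random bytes each day

def derive_rpik(tek):
    """Rolling Proximity Identifier Key, derived once per daily TEK."""
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"EN-RPIK").derive(tek)

def rolling_proximity_identifier(rpik, interval_number):
    """The identifier actually broadcast over BLE; rotates every ~10-15 minutes."""
    padded = b"EN-RPI" + b"\x00" * 6 + struct.pack("<I", interval_number)
    enc = Cipher(algorithms.AES(rpik), modes.ECB()).encryptor()
    return enc.update(padded) + enc.finalize()

print(rolling_proximity_identifier(derive_rpik(tek), 2656134).hex())
```

Without the TEK, successive identifiers look like unrelated random tokens, and TEKs only leave the phone if the user tests positive and explicitly consents to upload them.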


Is their code open source? How do you know this?



A PDF document with a specification is not even close to "open source code". There is no way to know whether the specification actually matches the implementation.


> There is no way to know whether the specification actually matches the implementation.

That would be true even if Apple suddenly decided "oh these folks on HN want to see the code for that tracing component, better open source our complete OS". I've seen assurances that parts of this will be open sourced as far as they can but I honestly don't see how that matters. You have, as is a general tradition, a description of what this system does cryptographically. The rest is a matter of reverse engineering that hardly benefits from having the code. It's not like Play store updates or whatever Apple uses to push this are reproducible builds the end user can verify. And you always have the closed source blob of whatever BTLE chipset your device uses which you can mistrust.

edit: Let me qualify "but I honestly don't see how that matters" for the question of trust in what runs on your phone here. If you read a dig at OSS into this you're generalizing far over what I'm addressing here.


There are definitely benefits from having the source code when it comes to being able to verify that the implementation does in fact match the specification. Sure, reverse engineering the binary would be possible, but far more difficult; reading open source code is far easier than reverse engineering. (Added in edit: From what I've read, the spec itself doesn't seem bad. The point is, assurance that does not require trust in another entity that the spec was implemented as written would drastically improve confidence in the system.)

I have compiled some of the apps on my phone myself so I know that the result does in fact come from the same source code. That may not work in the case of iOS devices, though.

> And you always have the closed source blob of whatever BTLE chipset your device uses which you can mistrust.

That is a good point and it'd be great if that chipset could also be open sourced but one step at a time.

> That would be true even if Apple suddenly decided "oh these folks on HN want to see the code for that tracing component, better open source our complete OS".

You are correct if it was simply open sourcing the OS. If it was possible to compile and install the OS ourselves, that would completely change the game.


> but I honestly don't see how that matters

https://www.gnu.org/philosophy/free-sw.html


There is no code available at that link, and no indication whatsoever that Google and Apple’s implementation of the algorithms and protocols described in those documents will be open source.

I believe your claim is false.


Realistically, those APIs will be analysed soon after they're available in the same way COVIDSafe was. If they do something different than what's in the design, we'll know pretty quickly.


As a former Apple employee, it's gutsy for you to post this, or anything, certainly on a controversial topic.

You don't work for the kind of employer that values someone sticking their head out and saying ... anything. They want to control the message, and saying anything may be worse for both you, and for Apple, than saying nothing.

Just be careful. Apple's got the people to manage this PR situation.


As someone else commented, this is one of the reasons why trusting companies like Apple is a bad idea. Even if employees had real concerns about a technology, or wanted to discuss the pros/cons, we would never hear about it. It should be obvious why a strong focus on secrecy and "controlling the message" breeds distrust.


The appropriate venue to discuss concerns with technology in a forum that will actually impact it is rarely a public message board.


That does not instill more trust in me regarding the corporate world, Apple this time. Big corporates have done many things to erode it, just like governments did. As we all know, trust leaves on horseback and comes back on foot. And in the current 'distrust-first' society, with populism feeding the gut, it may not come back at all for quite some time.


> it's gutsy for you to post this,

That comment seems completely neutral. Not being able to post that comment without worrying about consequences sounds absolutely dystopian.


It's Apple, that's how it works. You've frequently got no idea what the person down the hallway is working on.


Yeah, there’s so little critical thinking going on in general.

I wish people really tried to understand rather than just give knee-jerk negative responses to go along with their existing world view.

A lot of people are trying to do the right thing and working hard to do good. The cynical HN default response just feels like a lazy way to try and signal intelligence.


Edward Snowden, gag laws and the history of the FBI/CIA are all an educated, critically thinking person needs to understand why these apps should not be trusted.

Apple and Google may have good intentions. But if there is one thing that is clear from history, it is that if anything can be abused, it will be. Maybe not now, but certainly eventually.


Sure - and there are nuanced/interesting discussions that can happen about the risks, but that’s not what most of this commentary is.

Most of the comments are empty of any actual content from people who haven’t even tried to read about the protocol they’re working on.

The protocol is designed by people aware and concerned about these risks (which is why the way it works is pretty interesting).

A lot of interesting work happens in the challenging areas where it’s critical to get things right, there are real risks, and the answers aren’t obvious. Engaging in those areas where things are complex is important.

It's easy to just remove yourself from the problem, say something can't be trusted, and then feel good about your moral purity, but that doesn't mean the problem doesn't exist. It does, and there is real value in putting work in to solve it.

If you care about these issues then you're the type of person that should be involved in solving it exactly because you're concerned about the risks.

Jeff Hammerbacher's quote from working at Facebook reminds me of this, “The best minds of my generation are thinking about how to make people click ads,” he says. “That sucks.” I think he's right, and working on these bigger non-ad software problems is where the effort should be, because they are hard, and because the outcome is important.

The governments of the west need to be capable, but still protect the privacy of their people - that isn't an easy problem.


>The governments of the west need to be capable, but still protect the privacy of their people - that isn't an easy problem

You're right, it's a hard problem. And until these companies or institutions can prove that they are doing this, why should we trust them?

>who haven’t even tried to read about the protocol they’re working on.

The companies will collect and store the tracking data, and the government will, one way or another, have access to it. What is so difficult to understand about that?

"It will only be used for good" is laughably naive, as history had demonstrated repeatedly.

>If you care about these issues then you're the type of person that should be involved in solving it

Protesting and making your opinions known is a way to help solve the problem.


> The companies will collect and store the tracking data, and the government will, one way or another, have access to it. What is so difficult to understand about that?

It's difficult to understand because it's false (and this statement shows you haven't tried to understand the solution they are proposing): The data doesn't leave the phone without the user's consent, the consent is only asked for if the user contracts the disease, and the data uploaded when consent is given is only useful to those who have been in the vicinity of that user: the companies developing the system and the government do not get a list of users and their contacts (they get no data at all from anyone who does not consent, and they still don't get contact data from those who do).

The only issue of trust is whether the implementation on the phones matches their statements: but this is already a general problem with phones; if you are trusting them to run the OS, you are already trusting them not to exfiltrate this and more data. I see no actual expansion of their powers through adopting this approach, and I see them making every effort to protect privacy while achieving the requirements, which is in stark contrast to government approaches.
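
A sketch of the matching step, to make that concrete. Names here are illustrative, not the real API; `derive_rpis` stands in for the HKDF/AES key schedule in the published spec:

```python
def derive_rpis(diagnosis_key):
    """Re-derive the ~144 identifiers a published daily key would have
    broadcast. Stand-in for the real key schedule from the spec."""
    return {f"rpi:{diagnosis_key}:{i}" for i in range(144)}

def exposed(published_keys, locally_heard_rpis):
    """Runs on the phone: intersect re-derived identifiers with what this
    device's own radio heard. No contact graph ever leaves the device."""
    return any(derive_rpis(k) & locally_heard_rpis for k in published_keys)

# The phone periodically downloads keys uploaded by consenting positive users:
print(exposed(published_keys=["key17", "key42"],
              locally_heard_rpis={"rpi:key42:7", "rpi:key99:3"}))  # True
```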


Thank you - the comment you replied to is basically a case in point of what I’m talking about.

> Protesting and making your opinions known is way to help solve the problem.

Not when the protesting and opinions are based on a false understanding that comes from motivated reasoning, knee-jerk negative responses, and not trying to understand the issue deeply.

Then it’s just noise that confuses the issue and makes things worse.

Most of the time it’s not helping to solve the real problem anyway. It just makes people feel good about themselves without actually engaging in the work.


> I feel like as a society we've swung so far away from institutional trust that now nothing good can actually emerge. The anti-vax movement is a perfect example where the collective work of thousands of people over decades to save millions of lives just gets tossed aside because some celebrity 'feels' like there's a connection that isn't there, and in the process, the level of public harm becomes severe

This is not an accident. A great deal of work has gone into destroying public trust and the capacity of institutions to make impartial decisions based on data. Partly from the profit motive, partly for destructive political purposes. And a lot of people in the tech industry and on here actively support the right to destroy that work by posting the most thoroughly debunked and dangerous lies on popular tech platforms - again for their own ideological purposes.


Release the source code and provide some mechanism for users to have verifiable proof that the published code matches the application running on the device and we can start talking about trust.

Given past track record of both companies, this is the only way I would ever have of not being skeptical.


Do you have any evidence that Apple and Google run privacy backdoors on iOS or Android devices? You speak about the track record of both companies; surely you should be able to point out several examples of this.

The spec seems pretty clear - it's not possible. If you feel that you don't trust them to implement the spec correctly due to incompetence or malice on their part - you need to supply the proof. If we're relying on your gut feel to be skeptical, you just look like an anti-vaxxer saying "I don't trust Big Pharma's vaccines based on my gut feel."


There's arguably a lot more evidence against Google than Apple here. But Apple did lie about their involvement in, or even knowledge of, the PRISM program. Snowden leaks. I highly recommend Glenn Greenwald's book No Place to Hide if you're not interested in digging through sources yourself.

Even against their will, companies with capabilities can and have been compelled by government agencies to go against their promises and lie about it.

Additionally, with organizations of this size, it's very plausible that the vast majority of employees and even executives are blissfully unaware of wrongdoings, abuse and involvement in conspiracy by individual employees or decision makers.

All it takes is a single compromised or malicious person to add new capabilities in an update to an already rolled-out and well-intentioned software. Possibly a handful if you add strongly enforced and audited mechanisms for co-signing releases and whatnot.


I've read Permanent Record by Edward Snowden. It doesn't get more detailed than that. There is nothing in the book that indicates that Google's implementation of the contact tracing spec would deviate from the spec either through incompetence or malice.

You're making vague, unsubstantiated claims about how this could be compromised. Even if there is a bug that allows such apps to collect location info, that has nothing to do with Snowden leaks.

If a government is keen on collecting such information, they would simply not use Google/Apple's APIs and instead roll their own using bluetooth and location services. This was pointed out in the article. Now could you explain to us why you're promoting a conspiracy theory that the government will lean on Apple and Google to introduce malicious bugs into their APIs when they don't need the APIs at all?


> There is nothing in the book that indicates that Google's implementation of the contact tracing spec would deviate from the spec either through incompetence or malice.

I agree I'd trust Google over the government and many others. But the OP you are replying to called for releasing the source code and reproducible builds. If you do that, you don't have to trust somebody's word or the laws of the land; you have the laws of mathematics in your court.

Surely you are not saying we should be using what is clearly an inferior solution.


I'm not arguing for the existence of any specific deliberate conspiracy. So please don't misrepresent what I'm saying as conspiracy theory.

Still, there are multiple plausible ways in which these apps and systems can be compromised by malicious actors (of any nationality). The risks are significantly larger than they need to be, to a large degree due to the lack of transparency and verifiability, and the significant trust elements.

These risks are not inherent in the design of the contact-tracing system or the specific companies themselves, but they are inherent in the way smartphone apps are typically developed and distributed to end-users on mainstream platforms today.

Then on top of that, past and ongoing incidents obliterates my personal confidence that large U.S. tech companies will live up to their promises.


Institutional trust has been eroding since Watergate and has been almost non-existent since the Snowden leaks.

Seeing Apple's name on the PRISM slide was a gut punch; you don't and will never get blind trust anymore.

Saying this is similar to the anti-vax movement is so frustrating for me to read when special closed courts are set up in my country to deliver verdicts on the legality of data acquisition, and they STILL found that GCHQ overreached and illegally spied on citizens.

The UK government has even decided not to use the API provided by Apple here, so even if I could trust Apple (which I don't), that's undermined by the fact that the government brought Palantir on to help build it, and it looks like they are getting involved in the US effort too.


Yeah but it is still Palantir, so maybe don't trust the government's judgement that much.


If you want to restore trust then stop fighting with users and give them control over their devices.

iOS is full of dark patterns and a few absolutely awful restrictions that are very anti user. It shouldn’t be surprising that the users don’t trust Apple.

(Hopefully this doesn’t come across as disrespectful, I think the work you guys have done on contact tracing is awesome.)


In your example, institutions built up trust and it was broken by some third party. Public health was compromised as a result.

In the case of privacy, the trust was broken by the institutions themselves (Facebook, Apple, Google, Microsoft, US gov obviously as well). Public privacy was eroded as a result.


Yeah, it's really self-righteous of an Apple employee to compare their employer to the entire international, multi-decade effort to develop and distribute life-saving and civilization-changing vaccines.


Apple is not an "institution". An arm of government or a long-standing NGO are institutions. Apple is a business; it shouldn't get institutional trust.


> An arm of government or a long-standing NGO are institutions

The New York Times is neither of those, yet many people would characterize it, alongside the Fourth Estate, as an institution. The term "institution" connotes depth beyond legal form [1].

[1] https://en.wikipedia.org/wiki/Institution


Pertaining to 'institutional trust': one ought to be able to trust, e.g., government. Alas, you cannot (I am a public servant).

One should never unequivocally trust for-profit companies.

Also: https://www.bbc.co.uk/programmes/w3cszcms


Making such a big distinction between government and (big) companies seems naive. One should trust neither government nor companies. They are not that different; both are power structures with vastly more power and with different concerns and priorities than an individual.

There are many parallels between them. Companies run on economic power; governments run on political (and also economic and hard) power. Companies are kept at bay by market forces, governments by the democratic process. Both of these checks may work or fail (monopolies in markets, various kinds of identity politics in the democratic process). Both companies and governments can do extraordinary good (when incentives are well aligned and leaders are competent) and extraordinary bad (when either is not true). Both can empower petty tyrants inside their hierarchies. The reasonable way is to keep both dependent on each other and to limit each to its own domain.


Apple doesn't make it easy to be liked overall. For example, taking 30% of a company's revenue to publish an app on the iOS store, and pushing the arrogance all the way to asking a company to change its business model, is insane. I'm not even going to ask to be able to build or test things in containers, but at least let me do it easily and legally in virtual machines without having to buy or rent Apple hardware for no good reason. And why can't you build iOS apps outside of macOS? There's no technical reason for that either.

For me, the discrepancy between Apple's marketing around its products and how stuck in the past it feels as soon as you have to build and support anything Apple-related makes the whole company neither likeable nor trustworthy.


We are culturally obsessed with "Luke, trust your feelings." There is something extremely wrong and dangerous with Star Wars and that idea of putting away reason and just trusting our feelings. Joseph Campbell, what did you do?


Maybe not everything can be explained with reason, the unconscious mind can play its part.


Give me an example from history where everyday 'papers please' has worked out well.

There are many reasons to challenge broad authoritarian policy. Don't pollute the dialog with anti-vaxers, that shuts down a completely legitimate conversation that should happen in a healthy society.

Accepting wide-spread social tracking unchallenged is worrisome. I should not have to explain to intelligent minds why. The paths you open up when you allow something like social compliance tracking and scoring are just terrifying. Way more terrifying for our children than the immediate health problems.


> I feel like as a society we've swung so far away from institutional trust that now nothing good can actually emerge.

You frame that like society itself or the people that make up society are somehow at fault. Perhaps the issue is that elements of society like Apple have taken actions or have failed to take actions which would have engendered the sort of reaction that you would prefer to see.

If Apple wants my trust they need to earn it.

Why should I give Apple my trust again after being burned by them repeatedly?

Does Apple even trust me?


> feel like as a society we've swung so far away from institutional trust

The main mistake is that you compare Apple, emotionally at least, to an institution. But it's not one; it's a commercial company. "We" never had, nor should have had, any trust in commercial companies - except maybe small ones, as someone else noted here.

The problem is that as a society we allowed a few commercial companies to become such an inherent part of our lives.


The size of the company doesn’t matter. Small companies are not inherently less evil. You can have a company with no VC backing or shareholders and screw over your customers at every turn. The idea that the size of the organization is directly related to whether or not it is worth trusting is ridiculous. Just because I don’t know everyone at a hospital doesn’t mean they aren’t worth trusting.


I was referring to this comment, I agree with what's written there.

https://news.ycombinator.com/item?id=23077187


I think I trust Apple more than any other FAANG company, but that's like saying you're the cleanest hog in the pen. Even if it's true, it doesn't necessarily mean much and you're still tarred by association. The tech industry has burned through trust at a blistering pace in the last decade and it's hard not to be cynical.


> ...but I feel like as a society we’ve swung so far away from institutional trust that now nothing good can actually emerge.

I understand and completely empathize with your concern. I do think this lack of trust is causing, and is going to cause, very serious problems. However, this mistrust has been earned.

The incentives of large companies seem to be at odds with what many people consider good for themselves or their communities.

We like to tell ourselves that a company's goals must align with a community's concerns or another company will come along and replace it. But this doesn't seem to happen in reality, and when it does, it can take decades, by which time the damage is already done. This is particularly true if the company is above a certain size or retains a certain share of its market. At scale, companies can and regularly do act almost antagonistically towards their customers and the communities in which they live.

During some college research I found an older article from years ago, and while I've looked repeatedly, I haven't been able to find it again (if anyone knows it, please, please let me know). I'm running off of memory here, so forgive me for the weak description. The author had researched and discovered that most very large companies used to require that every move they made place a very high value first on how it would impact their local community, and then on how the decision would impact their wider customers' communities. These considerations played a massive role in whether or not they would move forward with an idea. If I'm remembering correctly, this was written into these massive companies' bylaws. Their first concern wasn't profit; it was community impact, and profit came later in the decision.

I think we’ve gotten too far away from this and it’s going to cause even more damage than we’ve already seen. We’re so far away from this ideal and now an awful lot of companies will first decide if they can get away with something and if it will cause quarterly growth, then it’s worth moving forward on with little thought into wider impacts. This change has led to little if any thought going into the community wide second and third order effects. The fallout means we’re now dealing with a drastic diminishing trust in companies.

I share your concerns because I don't think large organizations are inherently untrustworthy, and I know many (if not most) people inside these organizations have higher ideals and many companies have the best of intentions, but I think their hands are tied by systemic problems, and these problems are eroding fundamental societal trust systems.


Companies shouldn't be allowed to become as rich and powerful as Apple. We're supposed to live in a democratic system. Corporate power on the scale of FAANG is the antithesis of democracy.


The problem with Apple is not that it is big, but that it uses its power in one market segment (phones) to control other market segments (phone apps). And not just to gain dominance, but to have total market control through App Store.

If anti-monopoly offices were able to force Microsoft to offer competing browsers after OS installation, then they should force Apple and Google to offer competing App Stores, each with its own app policies.


Thought experiment: couldn't people willingly handing this power over to an alternative group of people (a company, in this case) be considered a democratic vote by the populace, provided the power is willingly given through the group's actions?


At best people are voting for who has the best product or service to solve their immediate needs. I don't buy Apple products (and I'm fully in the Apple ecosystem) because of Apple's great vision for how society should develop. Democracy operates on a different plane.


So a democratic system that ignores property rights when it suits the mob is not actually democratic.


property rights of companies or individuals?

democracy is about individual freedoms and rights, not necessarily those of profit-seeking entities


And are not individual workers "profit-seeking entities"?

Btw, I do own Apple shares, so what about my right not to have those expropriated?


Corporations are just legal constructs to conveniently define the rights and powers of their owners. So the property rights of companies are just the property rights of their owners.


I do think that open sourcing and transparency matter a lot in this era. Do that first before claiming credit for the hard work; otherwise, what's the difference between a mad scientist and a doctor?


I don't doubt that the people doing this are competent, doing their best and are aiming for a privacy preserving solution.

But that is irrelevant.

1. Neither Google nor Apple deserves the trust this requires. Seriously!

2. The road to hell is paved with good intentions. You are arguably pushing tons of resources toward something that will directly harm all current and future human societies. It is not about what this precedent will do for us during this epidemic, but afterwards. Also, normalizing tracking is not at all self-serving for Google and Apple, no, not at all...

Thanks for all your hard work?


> but I feel like as a society we've swung so far away from institutional trust that now nothing good can actually emerge

Why do you assume we ever had that trust? Corporations and governments have never had personal data on such a scale before, and we've never had so little control over this data. Especially with Google: you're expecting us to trust a spyware company.

> The anti-vax movement is a perfect example where the collective work of thousands of people over decades to save millions of lives just gets tossed aside because some celebrity 'feels' like there's a connection that isn't there, and in the process

Anti-vaxers have been around as long as vaccinations have: https://en.wikipedia.org/wiki/Vaccine_hesitancy#History

I'd say this trust has never existed; there was just an illusion that it did, because we couldn't see into others' bubbles so easily.


> I feel like as a society we've swung so far away from institutional trust that now nothing good can actually emerge.

Good. That took almost 10 millennia of human development to achieve. Let's not regress.


Unfortunately, these companies earned our distrust and cynicism. I'm not an anti-vaxxer, but I can't bring myself to believe that Google and - to a lesser extent - Apple are not going to record this data. I also don't believe that American companies have the liberty to disobey the US government and its three-letter agencies if they request this data be recorded for them to abuse later. And for that reason, I will be disabling any tracking on my phone.


>just gets tossed aside because some celebrity 'feels' like there's a connection that isn't there,

The anti-vaxxers may be wrong, but boiling it down to a "celebrity's feels" seriously underestimates the movement, and likely adds to the reasons these people don't trust the institutions.


Equating anti vaxxers with corporate skeptics is a surefire way to build trust /s


Distrust and caution are the parents of security.

- Benjamin Franklin


Institutions earned their current distrust.


> I can understand why the public is skeptical

You can understand the skepticism and yet equate the skeptical people to "anti-vax" movement? That's very sneaky of you.


"It is difficult to get a man to understand something when his salary depends upon his not understanding it." Upton Sinclair.


Both movements disregard evidence and reality to fit their feelings.


Please show the scientific consensus that corporations are good, have almost no side effects and we should trust them.


Please show me where I made such a blanket, absolute statement.


What evidence is being disregarded here?


[flagged]


I can’t wrap my head around this kind of hate towards the Apple/Google contact tracing. If Apple or Google wanted to track your location they would just do it using the GPS on your phone. Short of getting rid of your phone there would be nothing you could do to stop it. If they wanted to allow specific apps to track your location they would just give those apps access to your location in the background.

It seems like the governments that you seem to trust would much rather take that much simpler approach. Instead, we have these two companies building a system that is more restrictive than location services in every way.

If you own a phone you are already trusting Apple, Google, or both. Let’s not pretend otherwise. The contact tracing system just means you don’t also need to trust the government or what ever entity is building contact tracing apps.


It is completely outrageous for you to characterize my critical views as "hate" for anything.

It doesn't "seem". It is clearly my preference to have any sort of surveillance be done by a government that answers to ME, the citizen. EULA vs Constitution.

And no, I do not trust them with anything.

And also, an interesting little fact about "governments": they change. People have been known to have substantial say in their behavior. And there is that little ("hateful") matter of the law of the land.


[flagged]


Disgusting comment. A person is genuinely invested and passionate about privacy (and we know Apple’s stance on privacy, from Steve Jobs, to their stance with FBI, privacy tsars, keeping data on the phone, Secure Enclave, etc) and you call their work garbage.


> A person is genuinely invested and passionate about privacy...and you call their work garbage.

I'm referring to the output of Google and Apple as garbage. You're the one trying to make this personal. I don't really care that an individual cares about privacy. It has nothing to do with the machine they are a part of.

> and we know Apple’s stance on privacy

No we don't. We don't "know" anything. Are you serious?


To be able to install anything that uses the new contact tracing stuff, you must own a phone that is based either on iOS or Android. I'm honestly curious: if you are using any of those already, is that not even worse than this "garbage" you talk about?


I actually posted my comment from my Android phone. And yes, it is garbage.


Amazing. A pretentious, emotionally manipulative version of "if you don't trust closed-source public corporation #43214 you're basically an anti-vaxxer!" is the most upvoted comment in this HN thread. Hilarious.


Comparing privacy advocates to anti-vaxxers is a bad-faith tactic that steers the conversation away from the technology.


How's anti-vaccination in any way comparable to lack of trust in corporations? We have decades of evidence showing the efficacy of vaccines. Ask any old pediatrician what hospitals were like before vaccines. Can Apple and Google produce equally strong evidence that their new surveillance system will reduce mortality and be resistant to abuse?


More to the point, we have decades of evidence showing the efficacy of vaccines, and we have decades of evidence showing that corporations cannot be trusted with anything and that it's always a matter of when, not if, they go back on any promise they've made.


Is the private data safe from PRISM?


All your data stays on your device. You only upload your random identifiers if you test positive, as confirmed by a trusted health provider. If I were you, I'd worry much more about the virus than about them. Personally, I'm far more concerned about dying from COVID-19 than about a government that already knows much about me abusing my health status with respect to this virus.


No data is. You can't judge the security or privacy of a design based on people selling it out in secret.


I know people at Facebook who say the same thing, but at the end of the day, both companies only pay lip service to privacy in the way their products and services actually operate.


If we paint privacy as simply black or white, sure. But if there exist shades of grey in privacy, then Apple and Facebook are on vastly different ends of the spectrum.


One hands all your data over to the Chinese government if you are Chinese. The other gives your data to Cambridge Analytica. I wouldn't say they're vastly different.


Why are you so sure the data remains private? Has Apple never handed data over to the government for investigations, criminal and otherwise? Some of the genetics companies hand data to the FBI. The reason is great and moral - tracking down serial killers and the like - but still, regardless of how good the proximate reason, the overarching trend is not so great.

And yes, I'm sure all the telecoms, hospitals, etc. also hand over our data. Adding one more, much more comprehensive data source to the pile doesn't help anything.


Because this data doesn't exist on Apple servers, and what does exist are a bunch of randomly generated identifiers. Have you read the proposal at all [1]?

[1] https://www.apple.com/covid19/contacttracing


It's not clear from these specifications how the rolling IDs are associated with a single user after a diagnosis is made, or whether those keys are interlinked. While this explains the phone-to-phone model, it does not explain the other half of the system, from what I can tell.


I think by design they're not meant to be associated with a single user. A hospital would publish all the IDs of users that have tested positive for COVID, and each individual user's device would check whether it has recently been in contact with one, since it has saved all the IDs it has seen recently.
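Roughly, per the published crypto spec, it's a one-way key schedule. A loose Python sketch (simplified: the real spec derives the daily key with HKDF and encodes day/interval numbers precisely, so treat this as illustrative, not normative):

    import hmac, hashlib, os

    tracing_key = os.urandom(32)      # generated once, never leaves the device

    def daily_key(tk, day):           # simplified stand-in for the spec's HKDF
        return hmac.new(tk, b"CT-DTK" + day.to_bytes(4, "little"),
                        hashlib.sha256).digest()[:16]

    def rolling_id(dtk, interval):    # one of 144 ten-minute slots per day
        return hmac.new(dtk, b"CT-RPI" + bytes([interval]),
                        hashlib.sha256).digest()[:16]

    # Matching is local: the authority publishes the daily keys of
    # confirmed-positive users; each phone re-derives those keys' rolling IDs
    # and compares them against the IDs it overheard via Bluetooth.
    overheard = {rolling_id(daily_key(tracing_key, 18386), 42)}
    published = [daily_key(tracing_key, 18386)]
    exposed = any(rolling_id(k, i) in overheard
                  for k in published for i in range(144))
    print(exposed)   # True here only because we reused our own key

Without a published daily key, the broadcast IDs are just unlinkable 16-byte strings.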


Could network analysis be used to deanonymize these rolling IDs? I remember reading that network analysis easily deanonymizes blockchains. It seems the same would apply to rolling IDs.


Apple complies with warrants, yes. Do you think they should not?


This just sounds like ad hominem to me. How is this relevant to the truth of the statements in the parent comment?


Sounds like they answered my question.


Show me any institution that has maintained trustworthiness for all of history, and I'll consider trusting Apple/Google/our govt. with the ability to tell where every citizen is and who they associate with for the rest of the future.


I'm just not sure why we're more trusting of our wireless service providers who have explicitly given personal location data away to not only the government, but also to advertisers, than we are of Apple who would love to sell us another $1200 phone.


Convenience.

Convenience always wins.


Read the fine specification.


Orrrrr don't install it at all, or delete the app when COVID-19 is no longer a threat, like I plan to do.


I find it disingenuous that you use anti-vax in your argument. First of all, the tech industry has been thriving on selling out and exploiting personal data for decades, and this 'business model' has become practically unavoidable in the online world. And while big pharma has many deep flaws, deliberately making people ill through vaccination programs isn't one of them.

Secondly, everyone who understands low-powered radio signals and keeps abreast of COVID-19 infection vectors can plainly see that any correlation between the two is going to be extremely weak at best, and extremely easy to game.
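To put rough numbers on that: BLE proximity is usually estimated by inverting a log-distance path-loss model, and the constants are guesses that vary with walls, pockets, bodies and antenna orientation. A toy Python example with assumed, not measured, constants:

    # RSSI falls off with log10(distance); invert the model to estimate range.
    def estimated_distance_m(rssi_dbm, rssi_at_1m=-60.0, path_loss_exp=2.5):
        return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

    print(estimated_distance_m(-65))   # ~1.6 m: "close contact"
    print(estimated_distance_m(-70))   # ~2.5 m: a mere 5 dB swing (a wall, a
                                       # pocket) crosses the 2-metre line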

No vaccine with such poor results would be allowed on the market, let alone prescribed.

This is a bad case of tech-utopian solutionism gone awry.


If you don't trust Google and Apple when they say their API does what they say it does, then you can't use an iPhone or Android phone.

I understand that people are conscious about these things but what I don't understand is how people say "oh I'd never run an app using an API from google/apple, they have a horrible track record" and then carry a phone in their pocket with an OS from either Google or Apple, where those companies can basically do whatever they want. If you carry such a phone you already trust them. If this API does what it says it does (basically exchange random numbers) then how is that worse than what you already trust your phone to do?


> If you carry such a phone you already trust them.

I don't. But not because I think my dumbphone isn't also basically a mic I carry around, but because mobile devices are a joke, one and all. It's like moving back to communicating with infant sounds after I learned how to walk, form sentences and use tools -- why would I? Because so many people do, that they already assume everybody does?

I don't need e.g. Whatsapp to stay in contact with people I care about and who care about me, if I was only reachable by mail they'd send me letters. Anyone incapable of that I would surely dislike for dozens of other reasons already, so the question of "how to stay in contact even though I don't have a smartphone" literally didn't come up once since Apple "changed the world" in 2007, heh.

This stuff is just a very recent blip in our evolution as a species, and the phase we're in is about as impressive as the lies people told each other once they invented writing. It too shall pass.


> if I was only reachable by mail they'd send me letters

Based on your comments above, I'd bet a lot of money that they wouldn't...


Because something you can't even specify rubbed you the wrong way, no person in the world loves me? That's just wishful thinking on your part.


The title of this article is incorrect. FTA: “Apple […] and […] Google on Monday said they would ban the use of location tracking in apps that use a new contact tracing system the two are building”.

So, contact tracing apps that don’t use that system, such as the one from the UK (https://www.bbc.com/news/technology-52441428) still would be allowed to do location tracking.


But will anyone use them? I wouldn't trust an app that has been outsourced and feeds data to Palantir:

https://eandt.theiet.org/content/articles/2020/04/nhs-opts-f...

https://www.hsj.co.uk/technology-and-innovation/exclusive-wo...

The Apple/Google solution sounds much better to me - no central datastore.


Oh the UK government has got a great plan for that - they're going to stick the NHS logo on it. That way people will incorrectly think it's trustworthy, and when they find out it wasn't it'll damage the reputation of the NHS - which is a key goal of the current government.


Ok, I've tried to fit that in the title above.


It will be interesting to see if they can ban the Indian government's app[1], which needs full location access (clarified)[3]. A lot of people (including me) like this app, but they also know the government does not have a good track record of securing private data.

Previously, Apple was made to bend its rules when India threatened to ban Apple devices if it didn't allow the TRAI Do Not Disturb app in 2018.[2]

[1] https://play.google.com/store/apps/details?id=nic.goi.aarogy...

[2] https://9to5mac.com/2018/11/30/apple-approves-india-dnd-app/

[3] https://paste.gg/p/anonymous/b7c95d3967514e78a652840b5b666d5...


As another commenter pointed out, if the app is not using the contact tracing framework developed by Google and Apple, then the app can basically do whatever it wants (mandate continuous location access etc.).


I don't think apps are allowed to do this on Android any more. You can designate that an app can only have location access while the app is open and in use.


Right, for sure, that’s a choice users have. (On the iphone too, fwiw). Sadly, outside of tech circles very few people would have the knowledge or even motivation to do this. The attitudes of the populace to online privacy are frighteningly callous.


Since Android 10, the system will prompt you once an app tries to get your location in the background, and you can opt out with one click.


I've been astonished at the number of apps that are grabbing my location in the background. Especially since I previously considered myself to be quite on top of the permissions I'd granted my apps.


Android for Work can reduce privacy leaks; get Island or another app to set up a work profile on your Android phone, and sign into a secondary account on it so it doesn't have your contacts. Now go to Settings and disable location access for work apps. I've stopped paying attention to whether or not apps request location data or contacts access thanks to this. To prevent the apps from running in the background, just turn off the work profile when not using it (or look up Greenify's deep hibernation).

Being on a stock, unrooted phone, the thing I miss most is xprivacy's prompts whenever an app wants to use a permission. I just make sure to check my permissions list every month or so to make sure Google hasn't silently allowed an app update to enable new permissions.


Sadly, the ability to detect WiFi networks falls under the background location tracking permission.


I believe Singapore's app for iPhone instructed users to keep their phone unlocked with the app open because of this.


And it may prevent the app from functioning properly, but the point is that Apple/Google will not allow you to publish an app on their stores which utilises both location services and the contact tracing APIs.


Does the Indian app use the contact tracing API? I was under the assumption that it used the regular location/Bluetooth APIs in android.


The contact tracing API hasn't been released yet

It's scheduled for "mid-May"


Thanks, just looked it up as well. But to the GP’s point, I do hope Google/Apple start banning apps (especially released by governments) even if they don’t use the contact tracing API.


Not sure about Android, but if you're on iOS you have the choice to disable "full location access". In case it's needed to run a certain feature in the app, you can choose the "Allow while using" option.

I do that with all the apps that require location access: local food delivery, cab services, vehicle rental and what not.


We can, and for most apps I do this too. But some apps, like this one, require location access or they won't even start. Also, at least on Android (which also has fine-grained permissions now), this app specifically requires ongoing (background?) location access.

Recently this has also been made mandatory for employees, public and private. So organizations have to ensure all employees have this app on their smartphones. We will see how much this is enforced.


You’re better off with Android then because there you can have extensions that send a fake location to specific apps.


If everyone else with the app has location access enabled, disabling it yourself is redundant.

App knows their location, and app knows you’re near them. Location access by proxy.
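A toy illustration of that proxy effect, with entirely made-up data: if other devices report a GPS fix alongside each identifier they overhear, whoever collects the reports can place you without ever asking you. (The Apple/Google design avoids exactly this by keeping overheard IDs on-device rather than centralising them.)

    # Each report: where the reporter was when it overheard an identifier.
    sightings = [
        {"fix": (51.5074, -0.1278), "heard": "a1b2"},
        {"fix": (51.5079, -0.1281), "heard": "a1b2"},
    ]

    def locate(target, sightings):
        fixes = [s["fix"] for s in sightings if s["heard"] == target]
        lat = sum(f[0] for f in fixes) / len(fixes)
        lon = sum(f[1] for f in fixes) / len(fixes)
        return lat, lon        # BLE range bounds the target to tens of metres

    print(locate("a1b2", sightings))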


If you turn off your Bluetooth entirely that should fix it, shouldn't it? Or are they using WiFi as well as Bluetooth?


My Android device allows for completely disabling location access (regardless of whether the app requires it) or for allowing access while the app is in use, as you mentioned. I'm never sure if these features are OEM additions or features in standard/AOSP Android versions, so I guess if you're using an Android device YMMV.


Apple made a new SMS reporting API, just so the app doesn't get permissions for reading all SMS, as they wanted.


If they ban it, my next phone will be an iPhone.


Apple is known for being far more heavy-handed with its banning of apps in their store, so I wish you luck on that endeavour.


The location permission is required to scan for Bluetooth devices because scan results can be used to work out your location.


Well, not once you use the new API specifically made for contact tracing, right? Since the Bluetooth scanning will be done by a lower-level system run by Play Services, the app itself doesn't need the permissions.


Why do you like it?


Mostly because it was released very early. And for a country like India, it may prove advantageous in the coming months. We do not have any vaccine for COVID-19 and can't be in lockdown forever. We have to learn to live with the virus for a few months at least. Apps like these can help with contact tracing while allowing many to live normally.

Also, I personally think the permissions (location and Bluetooth) are fine for an app like this to really function. I read someone mention on HN that these platforms prove their worth when >60% of people are using them (I may be remembering wrong, though).


I've been wondering about the UK contact tracing app, because they seem to be deliberately misleading in saying that data is secure on the phone, yet it's a centralised model, and they are using bullshit terms like "clinically secure algorithm" to describe the one-time codes.

Is the source code of these apps something that could be FOI-requested from NHSX, seeing as it is publicly funded by the taxpayer?

Also they've already started moving the goal posts; https://www.theregister.co.uk/2020/05/04/uk_covid_app_human_...

This* came from NCSC - that image about the NHS version worries me greatly.

* https://www.ncsc.gov.uk/blog-post/security-behind-nhs-contac...


Given that Palantir is involved, I'd not touch it with a long stick.

https://tech.newstatesman.com/coronavirus/palantir-covid19-d...


Oh wow, I had no idea Palantir was involved. I feel like I've finally turned into the paranoid old fool that people laugh at, but this really upsets me: to think of the millions of people who are going to install it without so much as a thought.


The thing about those paranoid old fools is that some of them are right. But where they are stuck at the level of trying to convince others to just "look with your eyes, see the truth right in front of you!", those with actual experience with praxis know it's much more fruitful to put your efforts toward political campaigns. Grassroots is a thing, but it's always some weird combination of coercion and rhetoric about morals and ethics, because most people don't care, so you have to make them care before the movement picks up steam.


Talk is cheap. Have vegetarians converted a lot of people to give up meat?

No, you need viable alternatives for people to switch. Like the Impossible Burger and so on. Until then talking won’t change anything. Not even Snowden level revelations would change it. Sure you can try to use government to fight government. Or... just build the alternative.

We built the Web. We killed AOL, MSN and CompuServe.

Build open-source, end-to-end encrypted, self-healing and rebalancing networks that use a version of the Kademlia DHT that removes IP addresses from each hop. And also run consensus in small groups about stuff.
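(For anyone who hasn't met Kademlia: its core is an XOR distance metric over node IDs, so a lookup can always hop to a known peer strictly closer to the target and converge in O(log n) steps. A toy sketch; real nodes use 160-bit IDs and k-buckets:)

    def xor_distance(a: int, b: int) -> int:
        return a ^ b                      # Kademlia's symmetric distance metric

    def next_hop(known_peers, target):
        return min(known_peers, key=lambda p: xor_distance(p, target))

    peers = [0b1010, 0b0110, 0b1100]
    print(bin(next_hop(peers, 0b1000)))   # 0b1010: longest shared prefix wins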

That’s the future right there. Trouble is, we are only seeing its infancy. I can think of about 2-3 projects that are pulling it off. And they have been working for years.

PS: When this happens, user accounts and quotas will be replaced with crypto, and centralized servers and databases will be opt-in only. They wouldn't automatically be a thing just because they provide the infrastructure. Infra should become completely commoditized.

PPS: But. You'll have to worry massively about botnets, Sybil attacks, massive disinfo campaigns and reputational attacks by sleeper AI bots. "Fun" times ahead...


  P A L A N T I R.
I would treat this app as advanced kleptographic malware after learning that they're developing it with the NHS to contact-trace millions of people, to 'stop the spread', on a centralised server.


Palantir involvement?

Yup ... I'll take the coronavirus, please.


I'll take 2 coronavirus.


> Palantir is involved

Holy shit, and I didn't think that this whole contact tracing bullshit could look even worse.


Contact tracing isn't bullshit. It is how they dealt with sexually transmitted diseases four decades ago...it works.

The issue is that the UK govt is composed of a combination of incompetent and corrupt people (one of the companies involved in this is owned by a minister). And they are trying to go with the low-cost option (whilst the govt is subsidising 1/4 of salaries paid in the whole country) rather than the old-school, proven way.

It is kind of shocking but it is the result of a political system (and culture) that rewards incompetence and sloth.


> It is how they dealt with sexually transmitted diseases four decades ago

That's not untrue, but that's clearly not the same kind of contact tracing, much less was-in-the-same-room tracing.


Contact tracing is also, in general, a lot easier to do for STIs.


Can someone please explain to me how a “libertarian” like Peter Thiel could start and run one of the most anti-libertarian companies? What are his beliefs, exactly?

I mean besides “zero to one” and “competition is for losers, build a monopoly”?

I wonder how much effect he had on Zuck early on. He was the first big check in. Seems it was a great match there.


There's libertarians, and then there's libertarians. Thiel is a huge fan of Hans-Hermann Hoppe, who writes things like:

"In a covenant concluded among proprietor and community tenants for the purpose of protecting their private property, no such thing as a right to free (unlimited) speech exists, not even to unlimited speech on one's own tenant-property. One may say innumerable things and promote almost any idea under the sun, but naturally no one is permitted to advocate ideas contrary to the very purpose of the covenant of preserving and protecting private property, such as democracy and communism. There can be no tolerance toward democrats and communists in a libertarian social order. They will have to be physically separated and expelled from society. Likewise, in a covenant founded for the purpose of protecting family and kin, there can be no tolerance toward those habitually promoting lifestyles incompatible with this goal. They – the advocates of alternative, non-family and kin-centered lifestyles such as, for instance, individual hedonism, parasitism, nature-environment worship, homosexuality, or communism – will have to be physically removed from society, too, if one is to maintain a libertarian order."

This quote is from Hoppe's "Democracy: The God That Failed", which Thiel has referenced before. The overall thesis there is that economic freedom is the cornerstone of libertarianism, and it's fundamentally incompatible with democracy, so libertarians have to explore other options. It specifically promotes monarchy, on the basis that a monarch is more like a business owner with a vested long-term interest.

You might argue that this isn't really libertarianism - indeed, many libertarians don't consider Hoppe to be one. This is basically proto-neo-reaction, at the point where it still hasn't explicitly rejected libertarianism altogether. But regardless of where you draw the line, this all is the evolution of Murray Rothbard's strain of libertarianism, after it moved away from social liberalism towards paleoconservatism - it's not just some random people coming and declaring their politics to be "libertarian" out of the blue. Nor is there any shortage of ex-libertarians in NRx circles, so it's clearly a common path, not unique to Thiel.

Needless to say, the libertarian paradise described above could use something like Palantir to implement the "covenant".


Sounds more like fascism with the alignment of political and corporate power.

But the label is irrelevant: Thiel's and Palantir's actions are anti-democratic, and their involvement in any government activity immediately raises privacy and safety concerns.


Just because Thiel has referenced this speaker before doesn’t mean he agrees with everything he says. It’s a bit suspicious Thiel would agree that talk of “promoting“ homosexuality should get someone expelled from society.

Do you have any references showing Thiel believes these words you quoted?


Thiel didn't just reference Hoppe - he praised him and spoke at his conferences.

In general, Thiel's political opinions are hardly a secret; you can read his own words at https://www.cato-unbound.org/2009/04/13/peter-thiel/educatio...:

"I no longer believe that freedom and democracy are compatible"

This is literally the primary thesis of Hoppe's book.

And note that Hoppe isn't saying that promoting homosexuality should get one expelled from society, but rather from communities with "covenants" that restrict it. This arrangement is all about one-dollar-one-vote, so rich people needn't worry - they can live however they want, even establishing their own "covenant" if need be.


Thanks, I will take a look.

That explanation is much more reasonable. If people can choose which covenant to belong to, that's better than some larger collective (like a federal government) imposing its morals on smaller groups.


I will venture a guess.

Principles don't matter if you don't have the money to put them into practice.

If Palantir is what it takes for him to fund/back a visionary company or leader that will create a truly freer society, then that investment was well worth it.


>Can someone please explain to me how a “libertarian” like Peter Thiel could start and run one of the most anti-libertarian companies? What are his beliefs, exactly?

None. He is a slimy character with no principles or morals besides his own selfish drive. He wants to make money, period. Other considerations don't factor into it. Make no mistake, he (and people like him, the capitalist sociopaths that hold power in our society) would kill your entire family for a handful of pennies if they would get away with it.

If there were any justice in the world these kinds of people would be shunned and censored by society. Yet it's precisely the opposite. These are the behaviours which are encouraged and rewarded.


Because his wealth isn't enough and he can still insert himself into other policies to obtain more influence, more power, and more money?

My question to you is why you care what Thiel thinks and why you're disappointed. As far as I'm concerned, these guys are the lowest of the low, and they only hold their position through gross exploitation.

Nobody should be amassing this impossible wealth and seeking to gain more and these CEOs are basically economic despots to my mind.


Can anyone point to some articles about how Palantir is abusing data?

The reason I ask is that I also thought it was "just supreme evil" until I recently listened to a podcast with Thiel (the Dave Rubin podcast)... In it, he said that he started Palantir as an experiment in what can be done against organised crime/terrorism while respecting civil liberties and privacy. His argument was that we do need to stop terrorism, because people (not the HN crowd, but the public at large) are obviously not willing to compromise safety for privacy, so each terrorist attack results in privacy- and civil-liberties-curtailing laws - so the question is, can we use data/ML/etc. to safeguard our society while preserving as much privacy & liberty as we can?

But I've almost no real (and insider) info about Palantir to judge whether that's actually what the organisation is, 10 years or so later.


That's called PR and spin.

Having said that I know nothing of Palantir, today was the first time I heard of it.

But I would never, ever believe anything a CEO says in the media that can't be validated. Their very job is to cast the company in a positive light above all else, especially when it comes to vague claims about company ideals that can't be proven. And 10 years is an eternity in IT. Remember Google had their motto "Don't be evil". Look how that turned out.



Seems like many countries have contact tracing apps (Australia is one); are they all based on Palantir?


The Australian app is reportedly based on an open source app from Singapore. The Australian government said it would release the source for its own version in a couple of weeks. The code is GPL, if it's this one: https://github.com/OpenTrace-community.


Having worked at Palantir, I'm curious which outdated misconceptions you're still harboring.

It’s a giant federated search engine... that’s about it. They don’t even hold any data themselves, it all stays on site with the customer.


I'll use an analogy: they might not be the warlords, but they are the weapon traffickers.


I interned in St. Louis one summer and my neighbor worked at Boeing. The only reason I remember him to this day is because he introduced himself as a programmer working on smart bombs. He immediately said it was okay because he didn't fire the bombs, he just made them smarter so they didn't kill innocent people.


Is it a realistic expectation that we have no one working on smart bombs/weapons technology in general?

I'd love to be in the utopian world where there is no war and the entire planet gets along. But even the US has political bipolar disorder, so we're a looooong way from global peace.

I do think we spend an absurd and inappropriate amount on defense, but as long as we are forced to develop weapons due to the global geopolitical climate, I’d consider it preferable to work on making them more targeted.

My problem is I don’t like or agree with the people picking the targets, but that’s a whole other argument.


I don't believe it is unethical to work on weapons technology any more than I believe it is unethical to be member of the armed forces. We live in a world with bad actors, so some amount of work on defense is necessary for survival.

However, I also believe you are responsible for the consequences of the technology you create. If you decide to work on smart bombs and because of a bad government those end up killing innocent civilians, some of that blood is on your hands. If you're willing to accept the risk of that, I respect your choice.

Personally, I would rather avoid that by not working on that category of technology at all. It's not the kind of mark I want to leave on the world. I do appreciate that I only have the luxury of this choice because I live in a country where others do choose to work on defense so that I can be protected.


It seems to me that your second paragraph directly and immediately contradicts your first.

I don’t think both of those can be true simultaneously, unless you somehow consider “having blood on your hands” to be ethical.


The "if" is significant. If you work on weapons that turn out to be used to stop legitimately bad actors with minimal collateral damage, then your hands are as clean as anyone's can be in war.

I don't generally believe in black and white, so I'm making no claims that anything is 100% ethical, 100% unethical, 100% blood on your hands, etc. I think it's possible for good people to work on weapons and still sleep soundly at night. I also think it's possible for good people to work on weapons and end up regretting the consequences of that choice.


If you're working on weapon systems that will be used to attack, say, Nazi Germany, and you know or expect that they will also kill innocent civilians, then you are de facto endorsing that murdering[0] innocent civilians is worth it to stop Nazi Germany, which is a reasonable, if debatable, position.

If you're working on weapon systems that (you know or expect) will just be directly used for murdering[1] innocent civilians, then you're endorsing murdering innocent civilians.

If you don't know where on that scale things are, then you're reasoning under uncertainty, and your ethics are going to have to deal with that.

0: technically most of those would actually be manslaughter, going by the usual definitions, but a: not all, b: that's not a verb.

1: nope, just murder


> However, I also believe you are responsible for the consequences of the technology you create.

I agree with this to an extent, but at the same time, technologies you develop can go well beyond what you intended them to be. Do you think Ritchie knew C would be used the way it is today? It is used in UAVs, smart missiles, and more. Building faster algorithms also means faster weapons. Everyone working at SpaceX is directly or indirectly working on missile technology. You can say this about practically any technology, even something as simple as building a better screw. Building materials that are stronger, lighter, and cheaper reduce the price of homes, but they also have applications in war. Studying diseases and vaccines directly impacts biological warfare research.

It is easy to say you're responsible when you're working on things like smart bombs. But it isn't easy if you're researching fertilizer - the thing that has arguably saved the most lives. Yet how many lives have bombs and bullets taken?

> Personally, I would rather avoid that by not working on that category of technology at all.

So what I'm saying is that you're already working on that tech; the question is just how removed you are. I also don't think there's a cognitive dissonance. You can be pro putting nitrogen in soil and anti explosive nitrates. But saying the two aren't related is naive.


It's easy to extend this further.

Engaging in any kind of commerce (i.e. beyond growing food for yourself and your family) means paying taxes, which means funding governments and armies.



Going even further, those researchers are at ORNL, which is a DOE lab. (ORNL itself isn't a weapons lab, but other DOE labs are)


Maybe you can’t convince everyone else to behave ethically, but you can hold yourself to that standard. Working on bomb tech is over a line for me personally.


Sure, I get that. And I think it’s a position shared by a lot of people. Probably the majority of people. But it’s an ugly necessity of modern politics.

We could absolutely stop all weapons production in the US, I’d just ask for some time to learn Mandarin first.


As someone involved with the industry, this is my take.

The work defence contractors do and will continue to do is a measurable positive in the world. This work generally falls into three categories:

First is projects that make the lives of armed forces personnel easier. There is no moral or ethical hazard with these projects. They simply are making it so that pre-existing work, often non-combat work, is less tedious.

Second is projects that actively protect the lives of armed forces personnel and assets. Why would somebody feel anything but pride in their work when they know that the only thing it could ever do is save lives?

Third is the arguably morally shaky stuff. Weapons technology would fall under this category. The thing people seem to miss about this category however is what new projects' goals are. Very rarely is it ever to create a more lethal weapon. Almost all development of this kind focuses on either making existing technology cheaper, more precise, or have increased range.

In the first case, you could argue that cheaper weapons are more likely to be used, which is fair; however, if a weapon system is in active use already, it is unlikely that a cheaper version would do anything other than bring down operating costs. In the second case, these projects are actively making these weapons safer. Look at the Hellfire R9X as a prime example: this weapon's sole purpose is to minimise casualties and damage to the surrounding area. Finally, the third case I mentioned: increasing range is important for force-projection reasons, but the most immediate benefit is that it allows armed forces personnel to be further from danger. These people would be in this fight regardless, but now they are able to operate from a safer distance instead.

Weapons tech gets a bad rap for obvious reasons but I see that as mostly having been a relic of a past era. Nowadays there isn't really any reason to make more lethal weapons. We already have those and you can see a steady trend in new military technology towards lower risks, costs, damage to the surroundings, and minimising civilian casualties.

Additionally, in my experience, you generally have a choice whether you work on a project or not. Leadership fully understands what they do as an organisation and get that some people may not be comfortable with doing certain projects. If you can't conscientiously work on a certain type of project, simply say so and leadership will take that into account when planning personnel assignments for projects.

TL;DR Armed conflict and existing weapons will never go away. What we as a society (namely, via the modern purpose of defence contractors) can do instead is make armed conflict less dangerous to those caught in the crossfire and to those who risk life and limb to defend our countries.


Even those first two can have fairly foreseeable negative effects, though. Not necessarily net negative, mind you, but negative. If (e.g.) American soldiers have less tedium in their downtime (e.g., because paperwork is automated) and are safer (e.g. because their body armor is better), it could very reasonably lead to missions or even whole wars being undertaken that would otherwise be unconscionable. It's kind of like boxing gloves or football pads - on the face of it, it makes things safer, but because everyone knows it's "safer", they're willing to push it further.

Not judging your decision (although I personally prefer not working for defense/defense contractors), just pointing out that "it only makes soldiers safer!" isn't an ironclad rebuttal.


That's the point of armies, to be as lethal and effective as possible. A protracted war benefits no one, and a powerful army acts as a deterrent. I think that if you think your country has the right to self-defense, you have a moral obligation to make it as lethal as possible. Ethics is not an excuse for inaction or abandonment of responsibility.


I think this calculus becomes slightly different when you're talking about a nation that will never be in an existential defensive war. A protracted war benefits no one, but knowing that a war would be protracted might benefit the side that would otherwise be the loser by serving as a deterrent to the winner.


A protracted war benefits the military of the weaker side but I doubt their people would like it. Also, I can't recall the last time the United States was deterred from entering war because it would last too long.


I generally agree with your comment but this part is IMO delusional:

> These people would be in this fight regardless but now they are able to operate from a safer distance instead.

I sincerely doubt the US would be fighting five wars, with boots on the ground in Afghanistan, Pakistan, Somalia, Yemen and Iraq, if it weren't for remotely-operated drones.


If one considers the whole operation of armed forces to be unethical, it's easy to find ethical and moral hazards with the first two points as well.

Analogy: Would you consider it unethical to provide tools and services that make the lives of human traffickers easier? To actively protect assets of drug lords distributing contaminated opiates and meth?

Slave- and drug trade are about as likely to go away as armed conflict.


I think this argument is a bit unfair. As much as I would love for our armed forces, and the concept of armed forces as a whole, to be unnecessary, the reality is that there is no realistic way for nation states to coexist without some type of armed forces as a deterrent to conflict, even if they were never to be put into action. Because of that, I don't think it is possible for the simple existence and operation of armed forces to be unethical. Sure, armed forces can be used unethically, and individuals or units can do unethical things, but that is true of all organisations.

This is in the same way that corrupt charity organisations can defraud the causes they claim to support and corporations can knowingly risk the lives of their workers due to either negligence, mismanagement, or for the sole purpose of saving costs or making more money. This doesn't make charities or corporations unethical solely because they have the potential to be.

Additionally, armed forces can and do maintain their operational fitness by using their immense logistics networks and pools of human labour to do very real quantifiable good in the world. Outside of their use as a deterrent and show of power, the US armed forces spends a large chunk of their time and effort responding to natural disasters, taking part in humanitarian efforts, and building infrastructure.

---

On to the analogy, I would like to reframe both examples a bit.

For the purpose of this discussion, illegal transportation of individuals falls into two categories: Human trafficking and human smuggling. Human trafficking is done without consent of the individuals being transported. Conversely, human smuggling is done with consent. Despite being illegal, human smuggling provides a very important service to individuals. It may be against the law but it can be argued that it can provide a net benefit.

Mind you that it comes with severe downsides and by no means am I trying to minimise those. I'm just trying to keep this from turning into a 20 page essay on the topic. One of those downsides namely is that it increases the volume of people being transported which makes it cheaper and easier for human trafficking rings to operate. Another downside is that a large number of people die during transportation, both those consenting and non-consenting.

Now if you move on to the less clear-cut outcomes: human smuggling allows individuals to enter a country without going through the vetting process. This opens a vector for criminals and terrorists to enter a country, but at the same time it provides a means for people who either aren't willing or aren't able to wait untold years to get a visa, only for it then not to be renewed a few months or years down the road. These may be individuals or families that want to be productive members of society in a country with a higher standard of living, or who are trying to move somewhere they can be treated for medical issues that can't be treated at home. Another common case is individuals using human smuggling to cross countries that have closed their borders to them, so that they can get to a country that will accept them as refugees. The list of reasons somebody would illegally enter a country goes on quite a bit, and it is full of both morally sound and morally reprehensible ones.

With that in mind, while I personally wouldn't participate in any type of business connected to the illegal movement of people across national borders, I see no moral or ethical issues with someone who does so for the sole purpose of making the process safer. Despite the fact that it is illegal, the way I see it is that you are reducing the risk that people lose life and limb.

---

Moving on to the other example: my opinion is that most drugs should be legalised and regulated. By moving to a legitimate market and starving these organisations of their source of income, we should see drug lords replaced by corporations. You could argue that the cartels would just find a new market, but I see giving them one less black market to deal in as a plus. If this were to happen, in the US those drugs would be required to meet FDA & ATF standards. Sure, those that don't would still exist, much like improperly distilled/methylated moonshine is still an issue, but I could only hope that deaths and injuries due to contamination would plummet compared to where we are now. Why would you buy contaminated drugs when cheaper, pure, regulated, and mass-produced variants exist legally? At this point, wouldn't working to improve safety standards and effective distribution in this industry be no more morally or ethically problematic than working for existing alcohol and tobacco corporations?

Now, past the hypothetical, I would argue that making things less dangerous is tangential to your example. Making the drugs themselves less dangerous would mean working to increase production standards and decrease contamination and impurities. With regard to the transportation aspect, drug trafficking is extraordinarily dangerous for those involved. Drug mules expose themselves to immense risk when carrying large doses within their bodies across borders. This is by no means the only way drugs are transported across the border, but if somebody was already involved and wanted to make a difference, making this safer would be one way. Drug mules are often doing so as a way to support themselves financially, as part of a deal to be smuggled across the border themselves, or in some cases against their will due to human trafficking or blackmail. In most of those cases, the mule is a more or less unwilling participant. Lowering the risk for these individuals is, in my eyes, both morally and ethically valid.

---

Ultimately, I think it all comes down to this: if you dedicate yourself to work that reduces the loss of life and limb, I generally find that to be a just cause. Defence contractors do this by developing weapon systems that operate at long range with surgical precision to minimise casualties. Armed forces generally do this by serving as a deterrent to violence and by responding to natural disasters and humanitarian crises. In the same sense, there can be people whose work with drug cartels and organisations facilitating illegal human transportation results in a positive impact on people.

Another way to put it: I would prefer a drug cartel making a pure product (of a quality meeting FDA standards) that is transported without risking people's lives over what we have now. In the same way, I would rather have an organisation facilitating human smuggling and trafficking where nobody died or was permanently injured in transit than one where people died and were injured. Obviously, in an ideal world there wouldn't be a reason for either to exist, but like you said, they aren't going away any time soon, and just because the organisation might be unethical, that shouldn't make the people trying to improve the situation from the inside unethical.

Note: This post is a bit rushed and quite long-winded, as I wrote it up while taking a break, and I've spent more time on this than I probably should have already. I would like to write up a proper, concise response with citations to back up my assertions; hopefully this gets enough of my point across.


> My problem is I don’t like or agree with the people picking the targets, but that’s a whole other argument.

People pretend they can't possibly know how things will get used, but this has never been true, and never been less true than today.

http://tech.mit.edu/V105/N16/weisen.16n.html


Reminds me of a song (https://www.youtube.com/watch?v=ajVIiS65TFY) "Don't ya know that the smart bombs are so clever? They only kill bad people now."

It's rather amazing the mental gymnastics people perform to get out of cognitive dissonance. I'm much more sympathetic to people who know exactly what they're working on, the full effects of that work, and why some will find it distasteful, but who can nonetheless make a case for why those critics are wrong and the work important, rather than agreeing with the critics but hastily adding some transparent excuse to avoid being lumped in the distasteful category.


I mean, that seems legitimate to me. Various governments are going to continue dropping bombs. If there's a market for PGMs for U.S. operations, that means less collateral damage, whether you like to think of it that way or not.


Yeah, and the commenter upthread didn't personally saw Jamal Khashoggi up alive, and he's got a whole headful of justifications as to why he couldn't possibly be complicit in that. When he is. He doesn't have a single atom in his body with the ethics and morals of Tim Bray...


I disagree with that analogy. Palantir isn’t getting them more data. It’s helping them use it more effectively. The organizations already have all the data recorded.

If you don’t like the data that organizations record, I’m right there with you, but demonizing the tools that allow them to access data more effectively is an absurd stance to take. Their system also does things like add an immutable audit log, or enforce requirements on how data can be accessed.

Working with law enforcement, before Palantir there was nothing preventing an officer from looking up their ex husband or wife, or celebrities, or anyone at random. With it, there’s an audit trail of every search, and most of them need a case number as justification. Even then, certain searches would automatically get flagged for review.

We can go back to giving people direct and unaudited SQL access if you’d like, but I’d prefer some more accountability, even if the downside is organizations being able to use data _they already have_ more effectively.
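
To make that concrete, here's a minimal sketch of what such an audit gate can look like. All names here are hypothetical; this is the shape of the mechanism, not Palantir's actual API:

    import datetime

    FLAGGED_TERMS = {"celebrity", "mayor"}  # hypothetical auto-review triggers

    class AuditedSearch:
        """Gates raw database access: every query must carry a case number
        and is written to an append-only log before results are returned."""

        def __init__(self, db, audit_log):
            self.db = db                 # object exposing execute(query)
            self.audit_log = audit_log   # append-only, list-like store

        def search(self, officer_id, case_number, query):
            if not case_number:
                raise PermissionError("A case number is required to search.")
            self.audit_log.append({      # log first, then run the query
                "when": datetime.datetime.utcnow().isoformat(),
                "officer": officer_id,
                "case": case_number,
                "query": query,
                "flagged": any(t in query.lower() for t in FLAGGED_TERMS),
            })
            return self.db.execute(query)

The point is that the gate sits between the user and the data: you can't get results without leaving a record tied to a case.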


I'm curious if "accountability" actually means anything in practice, or if it's just some of the Palantir corporate kool-aid.

I don't think everyone associates government agencies with pristine records of holding themselves accountable.


I’ve only worked with a handful of LEOs in my time there, but the ones I did work with were genuinely concerned about it.

If you’ve never worked with a government agency before, let me tell you, Hanlon’s Razor is in full effect. Never attribute to malice that which is adequately explained by stupidity.

The higher-ups know that free rein for their officers to look up anything is a recipe for legal nightmares. The individual officers are often just too dumb to realize that they shouldn't search for something, and do so out of curiosity. Having blocks in the way, like requiring a case number to run a search, and knowing that that search and its results are forever associated with that case, cut down a lot on abuse.

Are all forms of abuse the same, or so easily mitigated? No, but I’d like for some system to audit that and at least try to enforce it.


> The individual officers are often just too dumb

You know, my dad always told me to just listen to people... eventually they'll tell you who they truly are.


During a training session demonstrating how to do geo searches on license plate data from ALPRs, a detective insisted we look up their car.

The hits were all more or less expected, a bunch around their home and the precinct, local supermarkets, etc. But then a weird grouping way out of town. Detective insisted it was a mistake and wanted more data, thinking someone stole his license plate (insert eye roll here).

Anyways, zooming in and looking at the highest density of hits, it ended up being a strip club that he frequented. He got bright red and his buddies didn't stop giving him shit for it all day.

And this was a detective, not a beat officer.


This right here is a great argument for why "helping them use [the data they already have] more effectively" (as you've said) is probably not as great a mission as you're making it out to be.


I mean, I left?

I’m not saying it’s perfect, but scapegoating the company is avoiding the real issue, which is the government policies or the enactment thereof.


Not sure why you're being downvoted. All your comments have been very well and politely argued (whether people agree with them or not) and this is the first time I got a peek at what Palantir is actually doing. Thanks for the info.


Meh, it’s fine, downvotes don’t bother me. Especially when you take the position that goes against common opinion, it’s to be expected.

The irony to me is that when I was at Palantir they talked extremely openly about all their technology. A ton of it was open sourced, and in depth tech talks were posted to YouTube, but almost no one watched them.

I think people just like writing them off as shady or sketchy because it’s easier than needing to reason about the clear ethical nuance that exists in the government contractor space.


It prevents crimes of opportunity, much like leaving a brand new macbook pro in the driveway is a bad idea.


So governments already have all the potassium nitrate, Palantir is just helping them make bullets?


Sure, if you think querying a database is the same as shooting someone.

If so, please don’t tell me you’re a DBA.


I think this position is a bit naive. Like saying "but child porn is just bits, like any other type of file".

A crucial step in shooting someone (or a drone strike) is figuring out who to target and where they are.

The fact that this is an important use case for Palantir makes it not hyperbole to talk about "weaponising information".


> I think this position is a bit naive. Like saying "but child porn is just bits, like any other type of file".

Everything has a domain it operates in, so I believe that statement has nuance. In particular when you look at how general or specific the domain is.

From the perspective of something uploaded to S3? It sure is like any other file.

From the perspective of a website dedicated to the dissemination of child porn? It clearly is more than "just bits" in that context.

How about from the perspective of a search engine, where that site may be indexed? Welcome to the grey area that all of these debates are rooted in. Technically it's just indexed strings or ngrams, so it is like any other type of file. But there's an argument that a search engine should "know more" than just the raw data, and should be able to understand that context somehow.

Palantir is in this grey area. The software isn't built for spying. It's built for managing and understanding vast amounts of data. Can this be used for spying? Yes. It can also be used for double blind clinical trials. Or maintaining insurance systems. Or coordinating disaster relief efforts.

It operates technologically in a very general domain, but their flexible user-defined ontology makes it very powerful at operating in more specific domains. So it's a lot less cut and dry than the dissenters make it seem, IMHO.


The example given was hyperbolic but I do think it’s true that a database can be “weaponised” once it’s got the power of state violence behind it.


I'd rather we not help our own government "more effectively" spy on American citizens, thanks.


Oof, you’re going to have a hard time using pretty much any technology then. Especially if the EARN IT act passes.


Well, in the context of this post, that is irrelevant at best, or whataboutism at worst. I notice you don't offer a defense of Palantir here. Why is that?


> Well, in the context of this post, that is irrelevant at best, or whataboutism at worst. I notice you don't offer a defense of Palantir here. Why is that?

HN won't let me reply to that, so I'll do it here. I didn't feel the need to offer a "defense" of Palantir there since I already outlined it above and don't feel like there's value in repeating myself.

But you asked, here you go - The government is going to use the data regardless, so long as they have it. The only effective recourse is to make them collecting or storing the data illegal, but good luck with that, seeing as even when it is illegal they do it anyways (see: Snowden).

Do you think if Palantir weren't there they'd just say "Ah well, I guess that's that" and be done with it? Nope; they'd go to Lockheed, Raytheon, IBM, or another long-time contractor to build a replacement almost immediately. The technology is genuinely nothing special. It federates searches across disparate databases and stitches the results together.

But for me, having worked at Palantir, I know what mechanisms are provided to attempt to mitigate abuse. So as long as that data does exist and is going to be used, I'd prefer to have it be as well controlled and audited as possible. And sure, you can argue that the government may forego auditing, or be entirely corrupt, but that doesn't seem to me to be a good reason to not use tools that at least provide that capability.


> The only effective recourse is to make them collecting or storing the data illegal, but good luck with that, seeing as even when it is illegal they do it anyways (see: Snowden).

So, basically "screw rule of law, people are gonna do it anyway, so let's make money off it!"


And we would think the same about Lockheed, Raytheon or IBM. Just because there'll always be someone without morals doesn't mean we just shrug and say anything goes.


I'm not saying we should shrug and say anything goes. I'm saying all these companies share one commonality: they are building things for the US government.

If you want to see change that is more than just superficial, that’s where you need to make it.


Sure. That doesn't mean that what Palantir are doing isn't wrong - selling tools to an entity which is going to use them for harm is wrong, even if they might result in slightly less harm than selling them some other tool.

If someone comes into a gun shop and tells you they want to shoot up a classroom, selling them a pistol instead of a machine gun doesn't make you immune from judgement.


What makes Palantir different than any other software vendor that sells to the government?

Want to know how a lot of that data is generated? Microsoft excel. Arguably without excel there would be less data to act abusively with. If only Microsoft just refused to sell excel to the government.

People are happy to accept that excel is general enough that it isn’t made for that one purpose, but refuse to apply the same reasoning to Palantir; that may seem like whataboutism, but they genuinely are comparable tools if you take away the marketing and fear mongering.

As for your analogy, it's more like you own a shop that makes metalworking tools, and someone buys some. You don't know what they are going to do with them, and (here's where it gets opinionated) you shouldn't need to care. You sold a tool to someone. Can the tool be used to make a gun? Sure. But it can also be used for anything else involving metalwork. At some point we need to accept that the responsibility for how a tool is used needs to be placed on the person using it.


Palantir develops tools to be better at tasks that are pitched to them in some degree of detail through the RFP process and similar processes. Palantir as a company would not exist if it were not for the less savoury of those tasks.

We do not hear, "Government X purchases a couple thousand Excel licenses to keep a database on people it'd like to kill", and I imagine that Microsoft does not sell Excel to Government X in response to an RFP for tooling to keep track of people it'd like to kill.

(Of course, it'd probably stick its metaphorical fingers in its ears if it did hear about Excel being used for that purpose.)


Sometimes it's easier to make the change by going after people whose actions implement the policy, rather than those who establish it.


That's just it: I don't want to see that kind of change.

Edit: Changed "we don't" to "I don't". I don't want to claim to speak for anyone else but myself.


You don't want to see what kind of change?

I'm talking about neutering the surveillance state. That's not something that demonizing individual companies is ever going to do. You need to push for that change at the government level.

Unless you're pro-surveillance, but your previous comments made it seem like the opposite.


I misread, then. I assumed by "change" you meant what Palantir was doing.


The analogy seems apt; weapons dealers don't give warlords armies (if they did, they'd be called mercenaries.) They give the armies of warlords the tools they need to be more effective. The violent acts, killing people or holding PII, are still performed by the warlord's organization, but does that excuse the people selling the tools used?


I guess we’ll just have to agree to disagree. I just don’t think that complaining about a tech company when what you actually dislike is what governments or other organizations are doing is going to impact anything.

But Palantir has been the “tech boogieman” since before I even left in 2014, so it probably always will be. It is admittedly easier to blame a smaller tech company as a scapegoat than try to address the larger governmental or organizational actors that you actually have grievances against.

Just know it won’t do anything. The people who complain online aren’t their target market and never will be. And the paranoia lends way more credence to their dressed up search engine than it really deserves on technical merit alone.

So in a way the people complaining about it are making the issue they’re so upset about worse.


The reason why Palantir is the "tech boogieman" is because it willingly and enthusiastically sells its tech to governments specifically for surveillance reasons, and even consults them as to how best integrate it all. And then we look at the politics of its owner, and it's explicitly anti-democratic - so it's not a coincidence.

And yes, of course, complaining about Palantir online isn't going to change the government policy. But if working for Palantir means that no other self-respecting software engineer will want to shake your hand, then fewer people will want to work there, and their surveillance tech will be lower quality and have more holes in it.


> It is admittedly easier to blame a smaller tech company as a scapegoat than try to address the larger governmental or organizational actors that you actually have grievances against.

Or you can do both. You might as well suppose that people are critical of the mercenary company Blackwater because they want to avoid criticizing American foreign policy and military exploits. I don't think that's accurate at all; it's been my experience that people who are critical of one are very often critical of the other as well.

Similarly, it's been my experience that people critical of Palantir are also critical of the organizations and governments who use the services of Palantir.


A rationalization at best. Facilitating an immoral act is active participation.


Do you buy things from amazon? They treat their warehouse employees terribly, so I hope not, lest you facilitate an immoral act.

Do you ever use Uber? They work to keep their drivers legally classified as contractors, despite treating them as employees in all other regards. That's pretty immoral, so I hope you don't encourage it by supplying them with business.


I don't think the warehouse employee situation is immoral, so I'm good on that. Nor the Uber case. I'm safe!


I don't agree entirely.

Machine Learning/AI adds an entirely new perspective on big data. It can see patterns in data that would be meaningless without it, correlations that can't be found in other ways. Some data is OK for a government to have if they can just look it up in exceptional cases, but not if they act on every single bit of data collected.

In essence, using ML creates new data (or at least: information derived from the raw data). This information can be very revealing and not at all in line with the scope the data was initially collected under, and purpose limitation is one of the principles of the GDPR.

You can't really view data as static once it's collected. Combining with other existing data and new methods of data analysis make it a totally different picture in terms of privacy.


They create tools that enable the government to infringe on the privacy rights of their citizens.


I could argue the exact same about literally any database used by the government. Their tools are just easier to use and more expressive.

If you don’t like the bullshit the government does, that seems like an issue you should take up with the government, no? You’re using a tech company as a scapegoat.


> Their tools are just easier to use and more expressive.

And that's the problem – they're giving governments tools that let them more easily and effectively infringe on the privacy rights of their citizens. Privacy isn't invaded when the data is collected, although that's a necessary step; it's invaded when a cop or bureaucrat queries that data to see what a citizen has done. Giving them that tool is morally complicit.

~ "That's not my department," said Wernher von Braun. ~


As soon as the data is collected it is going to be used. That is where the privacy invasion is. If a massive data set on citizens leaks and becomes available online, people will still consider it an invasion of privacy, even if it was never "used" prior.


This is a ridiculous stance to take. If true, then open sourcing any form of software is morally corrupt since the government could then use it for evil.

Quite frankly, I don't think any reasonable person would consider selling software to the government to be morally corrupt unless they were reasonably confident the government would use it for evil. Right now, nobody has made a compelling-enough argument to make me believe the government will use this software for evil.


Intent matters. Palantir specifically makes surveillance tech, and specifically markets it to governments.

As for compelling arguments... is the history of most of the world's governments not enough for you? In the US, you can look at the census for a case in point: that data was used to chase draft dodgers during WW1, and to compile lists of Japanese Americans for internment during WW2. Curiously, by WW2 the federal census law had specific provisions preventing the Bureau from disclosing that information to any other government agencies, precisely to prevent this kind of use - Congress simply repealed those provisions. Nevertheless, the Bureau subsequently denied sharing that data and buried any leads, so we didn't have definitive proof until this century.

Or we could talk about COINTELPRO, PRISM etc.


> This is a ridiculous stance to take. If true, then open sourcing any form of software is morally corrupt since the government could then use it for evil.

There are popular licenses with such clauses.



A technology company that takes advantage of holes in the law, and actively lobbies to keep those holes from being closed…


A company then...


Some databases used by governments are for things like making sure people driving cars have received the required training/certifications to share public roads, or that their vehicles carry the necessary insurance.

Other databases get used to track down, torture, and murder critical journalists - by dismembering them with a bone saw while they're still alive.

Remind me again which kind of database Palantir staff work on?


Well, when I was at Palantir I worked with police departments to integrate a dozen or so databases, so looking someone up took one query rather than several, drastically cutting down on the time needed for traffic stops.

The software was also used to run double blind clinical drug trials - the ACL granularity was great for that; you could have different roles for drug manufacturers, doctors, and patients, none of which had absolute information. Then trial supervisors could open up the data for analysis at the end of the trial.

Around the same time, the ability to use GPS-enabled phones to upload data over unreliable networks was used when they teamed up with the Clinton Foundation and Team Rubicon to improve disaster relief coordination for hurricanes.

Myself or coworkers (“Palantir staff”) worked on all of these.
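
To give a flavour of the ACL model in the trials case, a toy sketch (illustrative only, not the actual product): no single role can see both a patient's identity and their treatment arm until the trial closes.

    # Per-role field visibility over a single trial record.
    VISIBLE_FIELDS = {
        "manufacturer": {"arm", "outcome"},         # which arm did what, not who
        "doctor":       {"patient_id", "outcome"},  # who and how, not which arm
        "patient":      {"outcome"},
    }

    def view(record, role, trial_closed=False):
        if role == "supervisor":
            # Supervisors see nothing until the trial closes, then everything.
            return dict(record) if trial_closed else {}
        allowed = VISIBLE_FIELDS.get(role, set())
        return {k: v for k, v in record.items() if k in allowed}

    record = {"patient_id": "P-042", "arm": "placebo", "outcome": "improved"}
    print(view(record, "doctor"))            # identity and outcome, no arm
    print(view(record, "supervisor", True))  # full unblinded record at the end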


It isn't simply a database, as I'm sure you know. There are multiple pipelines of information which need to be glued together, cross correlated, etc. They (Palantir) provide services to make policing citizens easier and allow police departments to overstep their boundaries.

Also, I'd like to point out that this isn't an "either or" situation. Obviously the government is involved, as they no doubt put out the contracts, but Palantir is the one cashing the check. They've created their entire business model around big secretive government contracts.


The same argument is actually used w.r.t. DHS/ICE using GitHub to work on software to better track illegal immigrants.

This is software that will ostensibly cut down on errors and speed up processing times for things like asylum claims, but for some reason Github was called out as being complicit.

I think the argument against Palantir is better formed than an argument against Github based on specificity of the tool, but I think both are driven more by viral outrage than careful consideration of any moral calculus or actual hands-on knowledge of how the tools are used.


One is explicit, the other implicit.

Palantir is _explicitly_ contracted by a government to create a tool for combining surveillance information to make tracking individuals more effective.

Github is providing tools (Github Enterprise, IIRC) which are presumably being used to create tools which are used to track and deport individuals.


> Is the source code of these apps something that could be FOI requested

There's no need for such a request. They themselves say "We intend to open source our codebase once the design is finalised" here: https://www.ncsc.gov.uk/report/nhs-covid-19-app-privacy-secu...


This is why I mentioned the time frame - this goes into trial EDIT: TOMORROW* (still don't think that gives much time to audit the source code) on the Isle of Wight, and ITV news said it is going out to the rest of the UK Thursday - so it's being installed without any code being shown yet.

https://www.bbc.com/news/explainers-52442754


"Stay Clam and Work on Your Social Credit."


Do you know where the app can be downloaded? I'm interested in decompiling it.


Yeah, that's what the Australian government said. That's what their own Privacy Impact Assessment document recommended.

That's what they recently told us they were not going to do after all. Because security.

:sigh:


Of course it's bad, it's run by the intersection of incompetent charlatans and the lawful-evil surveillance state.


It looks like you won't have to petition for it. According to https://www.ncsc.gov.uk/files/NHS-app-security-paper%20V0.1....,

"We intend to open source our codebase once the first release is finalised. The documentation accompanying that release will supersede this paper."

and from https://www.ncsc.gov.uk/blog-post/security-behind-nhs-contac...,

"The Secretary of State has also committed to making the source code open source. That may not happen immediately on release because the fabulous NHSX development team are all working hard on getting the product ready. But it will happen."

Sounds like there is an intention to deliver on the source. Bigger question is whether the server side is open sourced (as clearly that has limited value for verifying what is happening remotely).


> That may not happen immediately on release because the fabulous NHSX development team are all working hard on getting the product ready. But it will happen.

That's just annoying nonsense though isn't it?

There are reasonable (well 'ok they happen, whatever') causes for delay in open-sourcing something that didn't start out that way, but... Do any of them take more than seconds of developer time? They mean it 'may not happen immediately, because it's stuck in legal', surely?


The UK has passed the point of parody. If you want to justify any action, just say something about how great the NHS is before you do it.


It's the equivalent of saying "9/11" in the US.


Sounds like there is no intention to deliver the source on release, and once the release has been out for a couple of weeks, the benefit of public review of the source code drastically diminishes.

The source code should be released before the public release of the binaries, not after, and not doing so means they're trying to hide what the app does from timely public discourse.

Releasing a tarball (even if it lacks the tools to build it) isn't that hard, is it?


Just to update any time travellers that arrive here in the future: the source came out on the day of the binary "going live" to the public.

https://github.com/nhsx/COVID-19-app-Android-BETA

https://github.com/nhsx/COVID-19-app-iOS-BETA


Australia has deployed their app but hasn't released the source code yet, and may not release all of it, so until it actually happens, who knows.

Also, while the app on the phone has been released, the actual data won't be sent on to contact tracers until more testing is done and privacy policies are finalised. [0]

[0] https://www.abc.net.au/news/2020-05-02/coronavirus-app-curre...


Contact tracing is a wet dream for five-eyes.

I don't think it will even be effective given the anticipated number of cases.


You don't think the 5 eyes get handed mobile location data directly from the carriers? I am more concerned about a database that will inevitably be accessed by lots of researchers, one of whom will of course think it's a good idea to store a copy on a USB key or a public S3 bucket, etc. It's the same story happening again and again and again...


This is a legal framework; that is what is happening now. I noted this a while back in a flippant comment, "Capitalism with Chinese Characteristics" (which didn't go down well :).

A contact tracing system, a virus with n days of non-symptomatic incubation, and international borders. Let's think this through a bit. Of what benefit to the admitting/host country (H) is a note on the traveler's "medical passport" issued by origin country (O)? Obviously, for this thing to even work on paper, countries need to exchange massive amounts of information.

The fact stands [challenge it!] that the socio-political regime being rolled out in the guise of defense against Covid-19 and future viral friends aligns perfectly with a specific ideological position. It really was 'Nature's gift'.

What used to be accomplished via violent revolution is being accomplished with 'scientific' efficiency and very little blood. Oh, that figurative blood in the streets of middle and lower classes? "The Virus does not discriminate!".


But imagine the new frontiers it opens! Countries would exchange and trade massive amounts of user data like oil. Want to participate in air travel? Better sign treaties to buy tracing data, as without it your country won't be compliant with international safety laws. I see trillions in new wealth being built there. On a serious note, this reminds me of a quote from one book: "the fourth round will be marked by a fierce battle between materialistic and spiritual forces in humanity". We're being subjected to high-voltage electrolysis now: those with morals are being dragged to the right and those without - to the left.


I just got an urge to resurrect Frank Herbert to see what he has to say about all this. Wonder who is our dear Leto the God Emperor, the benevolent monster who is dragging an un-surprisingly quiet and docile humanity to its designated future. One hears the monster has 7 heads.

I'm not sure about the hand me down binary model conclusion. We're in agreement regarding the inevitability of this turn of events, but then again it was apparently clear as day thousands of years ago to "self isolating" monks.

But the clarity of the choice is stark, and like a blast of cold air invigorates one's morals and character.

Good luck!


That's true...

In a way I think this will actually hamper their efforts.

After all: With the corona tracing apps, people are becoming much more aware that their data is used to trace them. A lot of people I know are talking about leaving their phones at home when they go out.

Considering the carriers have had this data forever and are probably sharing it, this awareness is a good thing for privacy.


Stop spreading FUD. Governments have had access to your location and associates for decades via the mobile phone network.

I've worked at a telco where we were analysing customers' behaviours, and almost everyone follows a pattern at some point, e.g. going from home, via public transport, to work. The resolution isn't high, but over time you build up enough samples to accurately determine exactly where they live, work, etc.


Collecting vast swaths of data to know where I live and work, which are literally the first two questions on my income tax return btw, is very different from knowing the name of every person I've been near and the location where I'm near them. We've all seen how these "geofence warrants" have gone -- a lot of innocent people being dragged through the court system for absolutely no reason.

If we do end up implementing a centralized contact tracing system, the results will be very interesting. I would love to know the name and location of every lobbyist that my elected representatives have met with, and I'm sure that in the name of public health they won't make any efforts to thwart this tracking.


> Stop spreading FUD. Governments have had access to your location and associates for decades via the mobile phone network.

In that case the governments do not need this silly contact tracing app. They can just use the data they already have.


This is exactly correct. I helped create the system used by US telcos to track users. We had GPS-precision tracking on all users and used big data to geolocate that data against other users, retail establishments, and road locations. This was not nefarious. We used it to find and fix tower and highway issues, and to market to companies. Users agreed to it as part of their service contract.

We were required to sell the data to the government on demand due to CALEA and similar legislation.

Those who complain endlessly about AANG largely seem to have no idea of how our communications networks actually work. Your location, activities, banking, etc. are not private, will never be private, have never been private, and it has nothing to do with AANG. The three of AANG that I have worked with are woolly lambs. That F guy is a bit suspicious, but I don't have first hand knowledge.


It’s much worse than FUD: reality. Mapping known associates is significantly more invasive than location privacy.


Yeah, but that is a small part of contact tracing (with law changes needed in some countries). The next step is to have personnel to notify suspected contacts, tell them to isolate themselves and then test them.

You need to have enough people for that, you need to have the testing capacity and you need to have people following that order.

It will work well if you have very low community spread and can start tracing on cluster level. If you are not there it can still be a good tool but you would probably still need a lot of other restrictions as well.


This is of little interest to 5-eyes.

For a start, installation of the app is voluntary so if you're planning something you really don't want the government to discover, you wouldn't install it.

Also, intelligence services already get call metadata so they know which mobile towers you are using and probably lots else.

As for effectiveness, I think it will be useful. There have been several papers crunching the numbers and it looks like it will have an effect. It doesn't need to be perfect - any non-trivial reduction in R is helpful.


They mentioned a few weeks back that the end goal would be that, if you had the app, it could act as a virtual health passport and you'd be allowed to go out shopping, go to work and the pub, etc. And without it? Stay at home. So yeah, technically voluntary, but, y'know...


Install it on a secondary phone? Of course it all depends on how serious the government is and how hard you are willing to work to circumvent controls.


Just a minor annoyance except of course for those not rich enough to own two phones, but I suppose if you didn't want to be oppressed you should have thought of that before you decided to be born poor, right?


You can get a cheap android phone and a faraday cage bag for the cost of a night out at the pub


Voluntary for now


Perhaps not, but it would be useful for planning and prep for wave 2 onwards.


The HSJ have a story; they agree with you.

https://www.hsj.co.uk/technology-and-innovation/exclusive-wo...

If this was written by NHSX I would install it, but given the dodgy provenance and lack of open source there's no way I'm going to install it.



I think this is actually quite innovative from the UK government: take the existing NHS brand, which has been a thorn in the side of the Conservatives for decades, and co-opt it for dodgy spy software that not only tarnishes the NHS brand but also continues the UK government's attempt at tracking every step every citizen takes.


I don't think that's what they're doing. The intersection of people who care about privacy and people who wouldn't see through that tactic is tiny.

They are using the NHS brand to get high adoption.

Plus the UK govt slogan right now is literally "protect the NHS", the existing Conservative party opposition to the NHS is basically gone now, hopefully for good.


> Plus the UK govt slogan right now is literally "protect the NHS", the existing Conservative party opposition to the NHS is basically gone now, hopefully for good.

One does not follow from the other. Slogans substitute for competency and funding. The opposition is both ideological and directly financially motivated, so I expect the fragmentary privatization to continue.

Watch out for a "now we must pay for coronavirus" user surcharge appearing ...


I remember the desperate efforts of Theresa May to pass the snoopers charter over such a long time. And when she became PM it was at the top of her list, and she did it. And now with that unethical filthy blob as PM, I only expect the worst of him and his lackeys...

IMHO Apple and Google should have finished the job and "volunteered" to make the app as well as "donate" some of the storage for the data. Not that I trust US companies to respect anyone's privacy; a gag order stapled to a subpoena would give all that data to the "5 eyes" (now 14-15) anyway.

I like my health as much as the next living human. But there are so many governments that will jump on this opportunity that it makes me want to avoid this app.


> I remember the desperate efforts of Theresa May to pass the snoopers charter for such a long time. And when she became a PM it was on the top of her list, and she did it.

RIPA was passed in the year 2000 under a Labour government when Jack Straw was Home Secretary.


Are we talking about the same snoopers charter?

From the independent's website dated 19/11/2016:

The Snooper's Charter passed into law this week – say goodbye to your privacy

https://www.independent.co.uk/voices/snoopers-charter-theres...


Btw, the petition needs to be on the parliament petitions site for a chance to have it debated.


Does anyone else find this a little ironic, given Google is a company that has been logging people's locations in very fine detail for years?


I have four hypotheses as to why Google is behaving this way with regard to this data. I've no idea which, if any, they operate by:

1. It’s actually crap and they don’t want their advertisers to know that their ads are sold based on crap quality data/don’t want the backlash of doing a poor job helping

2. Most people at google care a lot about privacy (either in the secret kept between you and google sense or the more common sense definition people seem to use on hn) and they don’t really think about this data as something the firm has/should release

3. They are afraid that if governments realised they had this data then google would be regulated or every minor security agency/random government department would be demanding access to it by law.

4. They strongly feel that surveillance by (well intentioned?) private companies is OK, but that surveillance by governments is not.


> 2. Most people at google care a lot about privacy (either in the secret kept between you and google sense or the more common sense definition people seem to use on hn)

When I worked at Google, I saw a lot of earnest efforts to keep data private, in a way that really was sufficient (k-anonymized with large k, for example), but in ways that have no outward proof that it was happening. Send all possible data to the server, then make sure it's properly clustered and scrubbed before it gets stored or analyzed. And it's not easy to explain to someone who can see the whole system that from an end-user's view this is identical to just scooping up everything.
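
For anyone unfamiliar, k-anonymity is a checkable property: every combination of quasi-identifying attributes must be shared by at least k records. A sketch of the invariant (not Google's actual pipeline; the field names are made up):

    from collections import Counter

    def is_k_anonymous(rows, quasi_identifiers, k):
        """True if every quasi-identifier combination occurs in at least
        k rows, i.e. nobody is distinguishable from fewer than k-1 others."""
        groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
        return all(count >= k for count in groups.values())

    rows = [{"cell": "c1", "hour": 9}, {"cell": "c1", "hour": 9},
            {"cell": "c2", "hour": 9}]
    print(is_k_anonymous(rows, ["cell", "hour"], k=2))  # False: the c2 group has 1 row

In practice you'd drop or generalise rows (coarser cells, wider time buckets) until the check passes before anything is stored or analyzed.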


I can verify that, from an outsider's perspective, this is indistinguishable from just scooping up everything and saying "trust us, we care about your privacy".


Doesn't this mean other people who might MITM the data flow (like NSA) would get the whole lot instead?


I work at Google but don't have any visibility into this project, nor do I speak for Google on this. My uninformed guess is that the answer is more precisely:

5. They very much want this project to succeed so that it can save lives. They know privacy concerns are (rightly) an existential threat to the success of the project, so they are trying to address those by drawing this firm line.

It's not 1 or 3 because anyone can look at the location data Google gathers on them and judge for themselves if it's crap. Just search for "view google location history" and it will take you right there.

I think 2 is pretty close to the mark. Despite HN's understandably cynical take, everyone I talk to at Google cares a lot about user privacy.


"Just search for "view google location history" and it will take you right there."

Is that really all there is? Are they showing you everything? There's no shadow profile?


Why do people/google suddenly start caring about privacy now?

You can only see the tiny fraction of data that relates to you, and only the data that has been tied to your identity. This says very little about the quality of the rest of the data Google has. Doing the double slit experiment once with a single electron will not tell you much about quantum mechanics.


Re: #1 The quality of their location data is pretty easy to audit personally by looking at one's location history.


I expect Google buys data from data brokers, and this is counted as sufficiently anonymous not to be included in the location history Google shows. But this misses the point, because looking at your location history shows you the quality of a tiny fraction of Google's location data and doesn't tell you much about the rest.


And it's fantastic. I love the location timeline feature they build for you from it.


3. It's worth considering that governments may well have, or are likely to have, people within these big firms.

Also, Google and Apple were part of PRISM, so there is a history of cooperation.


And we'll see how this plays out, perhaps it will be tragically ironic. The majority of people don't ever think about this and are happy to give their location info to Google constantly all day long. This protocol is in almost all ways much less intrusive and more protective of privacy, and the upside for society is potentially high.

It should be a complete no-brainer that everyone who is OK using Google or Apple Maps, i.e. probably 95+% of the country, should be OK downloading their local health department's contact tracing app if it uses these APIs. But we'll see how it plays out; in the end it will surely come down to how this is politicized, and not actually be based on the technical merits of this protocol at all.


It all seemed so innocent and helpful a decade ago...

Free GPS and navigation services have a price. I doubt many people are going back to Garmin anytime soon. Just the way it is now.


Any time I've used Garmin over the years, it has been only slightly better than useless.

There are other options though. I've been using OpenStreetMap for about 4 years and have only found the need to use Google once in all that time. Although, with OpenStreetMap being crowd sourced, it really depends on how much work has been done in a particular area, but in my country it's pretty decent.


And Apple.


Apple logs location history? Other than the wifi location log from iOS 4 and the Significant Locations feature (opt in, device only), I don't know how Apple is logging locations.


On your iPhone you can go to Settings -> Privacy -> Location Services -> System Services -> Significant Locations and you will see that Apple does keep track of at least some of your location history and tries to analyze it if you don't actively turn it off.


On my iPhone running iOS 13, I had to go a level deeper to find that: Settings -> Privacy -> Location Services -> System Services -> Significant Locations.

To view Significant Locations, I have to provide my Touch ID again. Also, the fine print says "Significant Locations are end-to-end encrypted and cannot be read by Apple." Text repeated in "Location Services & Privacy"[0].

[0] https://support.apple.com/en-us/HT207056


Are there any negative consequences to turning that off? What are those locations used for?


If you click on the link in the post you respond to, it explains what the locations are used for.


Thanks, but that's not very specific.

> It is used to provide you with personalized services, such as predictive traffic routing, and to build better Memories in Photos.


> you will see that Apple does keep track of at least some of your location history

That's not Apple keeping the data, that's your phone keeping the data. And that data isn't accessible to Apple - only you and your device.

From page 6 of https://www.apple.com/privacy/docs/Location_Services_White_P...

"This data is not shared with third parties, is fully encrypted, and can’t be read by Apple."


That claim is worthless when the software is closed source. It's impossible to verify.


Claims aren't worthless; they'd be breaking the law if they lied. Apple has a pretty decent incentive not to break the law (PR mostly, but also fines), so I would bet they're telling the truth here.


And as we all know, and history shows, those incentives ensure corporations never break the law.


If security researchers couldn't audit closed source software they would be pretty rubbish at their jobs. It's a lot easier to audit open source of course, but researchers do reverse engineer closed source stuff all the time.


Well you can always look at the network packets, most easily accomplished from jailbroken devices. There are many people who look at such matters. It's far from worthless.


Can't Apple push silent code updates to specific phones? That would mean they can access that data whenever they choose to. Even if they don't do it right now, they retain that ability.


Actually the claim is worth a big payday if you can prove Apple lied. One leak of customer data and Apple is screwed.


>One leak of customer data and Apple is screwed.

Yes, the Equifax incident really showed us how data leaks ruin companies.


Not true. The iOS location history is encrypted and stored in your iCloud account with a key that never leaves your device. So it's quite different from Google's Location History: Apple stores it, but they cannot read it.

And if you disable iCloud, the data remains exclusively on your device.


That iCloud key only prevents man-in-the-middle attacks. You can still use the key to decrypt the data at rest on the cloud servers, and Apple stores both.

And if you're in China, Apple stores both on Chinese government owned servers: https://www.amnesty.org/en/latest/news/2018/02/5-things-you-...


Not only does it, but I even remember a story of a crime in Germany a few years ago that was solved just by requesting the location history from Apple.


"I remember a story that is the entire opposite of what you said". If you have an argument to make based on a specific event, you're going to have to do better and provide some sources. "I remember a story", are we going to throw the full privacy history of Apple away based on your "history" that you remember? Please provide sources or do some research before you comment.


You can’t put Apple and Google in the same bucket when it comes to privacy.

Sure, neither is probably perfect, but only one of them is an egregious violator whose business model depends on surveillance capitalism


Apple is closed source and extremely secretive, so I will keep putting them in the same bucket, thanks.


That seems like a different topic than privacy’s relationship to their business model...


Are you talking about the one that won't let you install apps on your own phone without telling it who you are? They're the same. If anything, Apple is worse.


It is very valuable information; why would they let you access it if they don't have to?


I mean, imagine the repercussions of allowing third party apps to do this. It's a tragedy of the commons for health info.

i.e. a crappy mobile app that spams notifications when you're around someone who was in contact with a Covid-infected person, one which doesn't have any oversight or motivation other than mobile ad views.


The repercussion is... you know someone near you was in contact with Covid (a good thing to know), and it has ads (devs need to eat too)?


It means scammy apps will spam contact notifications without proof just to get eyeballs, and that's on top of the ones using covid fears to phish for info. Bad actors will exploit this.


Google have flat out banned any apps that have anything to do with combating COVID-19 if they aren't either funded by their government or from a registered health company. I tried releasing a symptom tracker app before Zoe released theirs, and it was rejected for this reason.


I was lucky enough to discuss it with google people.

There has been a huge influx of people trying to upload scammy covid apps. As a result the mere mention of covid is enough to get your app flagged automatically.

I don't currently work on a covid app, but I know somebody who does. Their app was flagged but they were able to resolve it by contacting google.


>they were able to resolve it by contacting google.

Do they happen to be government-funded, or represent a health company as stated above? To be fair, I didn't follow up the rejection, I just assumed the rule would be set in stone and they wouldn't budge. The app was free, no ads or any method to profit from covid so I was pretty shocked.


Not at all; it couldn't have been smaller: a personal app from an unknown dev.

I don't remember the specifics since I thankfully don't get my apps rejected often, but there should be a button to contact the play store support somewhere in the play store UI.

Unfortunately in this kind of situation, the play store handling, while understandable, does not make it easy for legitimate covid apps to be posted.


They will ban and flag your account if your app just mentions the word Covid. It is absolutely absurd. Only Google is allowed to make a Covid app.


It may be a bit harsh, but it makes sense. Too many people are trying to take advantage of the situation to pull scams.

It's a lot easier for Google to operate on a whitelist model than a blacklist model.


No, it doesn’t. They can do the exact thing Apple is doing which is to accept or reject each app on an individual basis instead of blanket refusing every app that is even related to Covid.

What they’re effectively doing is creating a monopoly in the private sector. No other private company is allowed.


Then Google would be building a whitelist. They aren't doing that, they are using a position of power to isolate the market for themselves. That is a monopoly.


They are building a whitelist. They even made the criteria public. You have to be a government or health entity to publish a Covid related app.


That isn't a whitelist; there is no way onto it with a legitimate app when the entry criterion is being a state.


That is a whitelist. They don't have to be easy to get on, or, indeed, to even allow any new entries.


With the sheer amount of scam, privacy invading apps that are preying on people’s fear, can you blame them?


Yes, I can. They should do their job of vetting the apps. They have a disclaimer that it will take 7 days then they just reject it for being tangentially related to Covid even if it could have a positive impact.


A silver lining for the failure of PWAs: they can't really be used to get around the appstore requirements and defraud people.


Download an APK if you want. Google doesn't have a complete monopoly on Android app marketplaces.


Sure. It's maybe a bit unsettling to see a global corporate monopoly telling countries what they're allowed to do, but in this case it's the right thing.


I think it’s unsettling that were so used to companies and people with actual power doing fuck all to stand up to government abuses that it’s weird when it actually happens.


Apple makes the news periodically for telling the gov't to piss off.


Yeah, in this case, I think it's clear that if GPS data was attached, user adoption would be ruined.

The most important thing is to get absolutely as many people possible using the technology.

Besides, for their own selfish reasons, neither Google or Apple want to be so blatantly attached to something that could be so easily abused.


OK, so devil's advocate here.

Google and Apple need to advocate privacy for various reasons. But they also want to help the government (PRISM, for example). So they carefully craft a bug that only the government knows about and that allows the extraction of location data. If it's ever discovered, it was "not intentional". The NSA backdoor at RSA is an example [1].

[1] https://www.reuters.com/article/us-usa-security-nsa-rsa/excl...


I trust a democratically elected government above a duopoly personally, and I'm not convinced the decision of how far to trade off privacy concerns against public health is one those corporations are competent to make.


Contact tracing with location tracking must surely be more effective than contact tracing without location tracking.

I'd quite like my government to decide how contact tracing should work in my country, instead of two companies making that decision. I cannot vote against Apple or Google if I don't like what they do.

Also, I'd like to leave my home without worrying about catching a dangerous and possibly fatal disease, and if I have to sacrifice some privacy to do this then that's OK.

Privacy is important, and it's lovely that IT people care so much about it, but all the people on zero-hour contracts and with underlying health conditions would probably rather that we prioritise the most effective approach to eliminating the virus.


It's interesting that they call this a "Contact Tracing app" even after changing the name to ExposureNotification.framework.

I think these restrictions are meant to win confidence with a somewhat skeptical public. This will also confine the apps to be single purpose for contact tracing only.


What is interesting about that?


"We have established the person over there has been exposed to Bad Ideas on May 22 2020. Let us find all those people that were exposed to him. We trigger the exposure event via ExposureNotification.framework"


Why would Reuters follow Apple's branding?


The title says "ban use of location tracking", but the article says "will not allow use of GPS data". There are many ways to extrapolate a user's precise location from non-GPS data.


That kind of data, for instance visible access points or cell towers, is not available for apps on iOS.

If you disagree, could you please list ways you think this can be done?


As an app developer that sounds like disallowing precise location but allowing general location.


Can anyone show me an app in the wild that uses Bluetooth like, or close to, what the Covid apps want to?

Why isn't HN talking about the technical side at all?

We know Bluetooth on phones can't do what the governments say it can.

We've all gone through the stage of "what if we used Bluetooth to track people indoors and do cool stuff!" Then we realise you can't. The best we see is advertising maybe doing low quality beacons.

It's like we think C19 makes the impossible possible.


The Australian one seems to follow the spec pretty well. It uses rolling random IDs and BT RSSI to check for proximity to infected people. "Infected" is declared by the patient getting the hospital to input a private key when they're diagnosed, which then uploads their last x days of random IDs to declare them as infected.

Source code is public and has been shared/audited on twitter etc but no formal audits that I’ve seen yet.
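
For context on the RSSI part, here's a rough sketch of the standard log-distance path-loss model (not this app's actual code; the calibration constants are assumptions, and the fact that they vary wildly by phone model, pocket, body, and walls is exactly why BLE proximity is so noisy):

    def estimate_distance_m(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
        # Log-distance model: distance = 10^((RSSI@1m - RSSI) / (10 * n))
        return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

    def looks_like_close_contact(rssi_samples, threshold_m=1.5, min_frac=0.5):
        # Call it "close" only if most samples estimate within ~1.5 metres.
        close = sum(1 for r in rssi_samples if estimate_distance_m(r) <= threshold_m)
        return close / len(rssi_samples) >= min_frac

    print(estimate_distance_m(-59))                         # 1.0 m at the calibration point
    print(looks_like_close_contact([-60, -62, -80, -61]))   # True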


> Source code is public and has been shared/audited on twitter etc but no formal audits that I’ve seen yet.

I don't believe that's true. There is certainly decompiled code floating around, but release of the code has been delayed whilst the Signals Directorate investigate the app. [0]

Worth noting that decompiling the app to see if it actually does what it says it does is a crime under the legislation backing it.

> Agreed. The PIA and source code will be released subject to consultation with the Australian Signals Directorate’s Australian Cyber Security Centre.

[0] https://www.health.gov.au/sites/default/files/documents/2020...


I've decompiled the app for android and had a look over the source and what you say isn't true.

The rolling ID doesn't work, so third parties can track you.

Source code has not been released at all.

I think it's fair to say it does not work on iOS. It won't until Apple ships the update.

It does not check the 'proximity to infected people'.

All data processing is done by a human. The app just dumps all your info, and all your interactions with other phones running the app, to a human, who then works out the times and whether the person was 'close'.

So this is why I've asked the question.

Surely someone on HN has made an Android/iOS app and can comment. It would have to be about phone-to-phone Bluetooth.


We probably shouldn't call them "contact tracing" apps, since what they plan to do is so different from manual contact tracing. "Exposure notification" is a better term.

Nothing prevents anyone from using their phone's location history to remember what to tell the contact tracing people.


What exactly is the difference?


Say Alice tests positive.

Contact tracing: Alice is able to say, “I was in contact with Bob and Carol.” Then authorities can talk to Bob and Carol, and have them trigger their phones to see who they’ve been near. But because that’s slow, most plans would upload the lists of who’s been near whom to a central server. Then the authorities can do a simple query to see who’s been near whom.

Exposure notification: Alice enters a code that she got with her positive test result in to the app. The app has been continually broadcasting rotating, random identifiers which it then uploads to the central service. The code she entered verifies to the central service that she has a legitimate positive result. Bob and Carol’s phones periodically check with the central server for the list of positive IDs. Their phones stored one of the IDs from Alice’s phone when they were near each other earlier. Once they get the latest list of infected IDs, their phones will alert them that they have been exposed and should be tested.

In CT, the central service has all the data, and you can trace contacts without the knowledge of the users. In EN, the service has a list of infected people, and everyone needs to check that list periodically.

Pretty sure there’s some subtlety with the IDs being a cryptographic sequence or something so there isn’t a gigantic list of IDs everybody is constantly pulling down, but this is the gist of it.

ETA: The FAQ from Apple+Google is a pretty quick rundown of where exactly each part of the data is stored and when it leaves your device. https://blog.google/documents/73/Exposure_Notification_-_FAQ...
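
For the curious, here's a toy sketch of the EN idea in Python (hypothetical names, and a simplified HMAC-based derivation standing in for the spec's actual HKDF/AES scheme): each phone derives its rotating broadcast IDs from a small daily key, so a positive user only publishes a handful of keys, and the matching happens entirely on-device.

    import hmac, hashlib, secrets

    def rolling_ids(daily_key, intervals=96):
        # Derive one day's rotating broadcast IDs from a single 16-byte key,
        # so an infected user publishes ~14 daily keys, not every ID.
        # (Simplified stand-in for the real HKDF/AES derivation.)
        return [hmac.new(daily_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
                for i in range(intervals)]

    # Alice's phone broadcasts IDs derived from her secret daily key.
    alice_key = secrets.token_bytes(16)
    alice_ids = rolling_ids(alice_key)

    # Bob's phone remembers whatever IDs it heard nearby.
    bob_heard = {alice_ids[40], secrets.token_bytes(16)}

    # Alice tests positive and (after entering her verification code)
    # publishes her daily key. Bob's phone re-derives her IDs locally
    # and checks for a match -- the server never learns who met whom.
    print(any(rid in bob_heard for rid in rolling_ids(alice_key)))  # True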


COVID-19 has become a buzzword factory. Politicians popularize terms like "herd immunity", "crush the curve", "testing", "ventilators", "PPE" etc. to appear to be doing something. "Tracing" is the next in line, but it's a total sham. No country has been able to contain the epidemic with Bluetooth. All the countries that are managing the epidemic first waited until they had very few cases, which could be traced manually, and they did isolation well. As long as there is a high number of active cases, tracing won't work.

So it's good that Apple and Google are banning those apps, because they would be useless and a damn spying vector.


Bluetooth is only really an option for Apple itself, not for any iOS developer. It's slightly different for Google, where Android lets background apps use Bluetooth. So no country has tried to use Bluetooth in their apps. Other than South Korea, it sounded like contact tracing had been abandoned.


In South Korea, I suspect that the big difference is the specialist tracing teams they use, combined with random testing of the public to find new potential sources really quickly. I am sure the app helps, but it is probably not the route to normality. It requires thousands of contact tracing teams and high amounts of random testing on very low case numbers to work; it is a small piece of a combined set of policies that work together.


Would you rather nothing be done? This is potentially a big deal: a way to automate contact tracing at scale, which has no precedent. It's impossible to say whether this will or won't work, as it's never been tried.

Perfect is the enemy of progress.


This is not the opposite of nothing. It's not that phone tracking hasn't been tried; it did not work where it was tried, e.g. in Singapore and Korea, and no other country has made it work. What did work, in multiple countries, is "bring the cases down to very low numbers, interview new cases, isolate aggressively". Perhaps they should focus on the latter two.

> to automate at scale

If you have infections "at scale" you have already failed, as the apps are going to be popping up false positives left and right, and you end up with the entire population individually quarantining themselves.


You seem to agree with Schneier! I'm with you, the whole premise is utterly useless. https://www.schneier.com/blog/archives/2020/05/me_on_covad-1...


Schneier's a security expert, not an epidemiology expert. I don't think I'm going to put much weight in his opinion. The UK's epidemiology and behavioural teams have some confidence that this app will have an effect.

It doesn't need to be perfectly effective to be useful. Even a small reduction in R is very helpful.
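
To put rough numbers on that (purely illustrative): infections compound per transmission generation, so even a modest reduction in R changes the trajectory a lot.

    # Illustrative only: 100 starting cases over 10 transmission generations
    for R in (1.3, 1.1, 0.9):
        print(R, round(100 * R ** 10))
    # 1.3 -> 1379, 1.1 -> 259, 0.9 -> 35: small changes in R compound quickly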


To be clear: Sage thinks it will work, but we're not told who is on Sage, nor which of them think this will work.

For all we know this is Cummings not understanding any of the science.


For what it's worth, I was thinking of developing an app on similar principles a couple of months ago, and I talked to one of the Sage people. They were enthusiastic about it. There have also been papers modelling the effect.

I don't like Cummings' politics but he is a smart guy. I think he'd follow the science.


We took a deeper look in an article linked in another comment on this thread.

One thing that's interesting to think about more deeply is how difficult it is to estimate proximity, given the combinatorial explosion of different hardware, individual device peculiarities, battery levels and environmental factors (walls, glass windows, partitions, ventilation).

The very limited data published looks like interesting preliminary field work: it finds significant variability in signal strength across hardware, and rather than concluding that proximity estimates are useful, it ends in a plea for OEMs to release factory calibration data for their BLE implementations:

https://github.com/opentrace-community/opentrace-calibration...
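
To illustrate why calibration matters, here's a back-of-the-envelope sketch using the common log-distance path-loss model (the reference power and exponent are illustrative assumptions, not calibrated values): the same RSSI reading implies very different distances depending on the device-specific reference power.

    # Log-distance path-loss model: rssi = ref_rssi_1m - 10 * n * log10(d),
    # rearranged to estimate distance from a BLE RSSI reading (in dBm).
    def estimate_distance_m(rssi, ref_rssi_1m=-60.0, path_loss_exp=2.0):
        return 10 ** ((ref_rssi_1m - rssi) / (10 * path_loss_exp))

    # The same -75 dBm reading, under three plausible per-device references:
    for ref in (-55.0, -60.0, -65.0):
        print(ref, round(estimate_distance_m(-75.0, ref_rssi_1m=ref), 1))
    # -55 -> 10.0 m, -60 -> 5.6 m, -65 -> 3.2 m: hence the plea for OEM data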


We've been looking at digital contact tracing from the perspective of Australia, as we see a huge push for the COVIDSafe app, based on Singapore's TraceTogether app.

It seems the sensible order of questions is:

1) Do we have a contact tracing problem?

2) Does digital contact tracing generally solve it?

3) Is the specific app / implementation useful / safe / privacy-respecting?

It appears the national conversation almost entirely skips thinking about 1) and 2) and gets lost in the limited analysis of 3).

We had a deeper look at 2) in this recent piece:

https://blog.crushthecurve.today/why-should-you-install-the-...


The post has a very city-specific view (and it may have good points about that environment). Compare it to a regional town: minimal public transport, few/no dense residential buildings, no large offices. For me the app is literally a "does anyone I stood next to in the shop / petrol station test positive" indicator.


In a regional context a Bluetooth proximity app offers even less theoretical value.

A contact is only registered after 15 minutes of time spent within an estimated proximity of 1.5 metres (itself a primitive model of infectious disease transmission based on a 1942 paper). Note other countries set the distance at 2 metres.

As outlined in other comments in this thread, close-contact rules include anyone in a room with you for more than 2 hours, and all close contacts still have to be manually interviewed (there is no instantaneous notification and isolation). For most social situations (home, family and friends, work) close contacts won't be registered through crude estimates of proximity; they require thinking about and naming contacts yourself.

When you strip out all the situations where it isn't beneficial, that mostly leaves public transport in major metro areas where commutes are longer than 15 minutes.

Keep in mind that also implies the end of any social distancing (as otherwise no contacts are registered). That seems obvious, as public transport becomes overwhelmed if capacity is significantly reduced.


Location information, political preferences, socioeconomic status, photos, emails, search locations, to name a few. It is very dangerous that they are openly monopolizing access to everybody's life without regulations in place. I personally believe that engineering teams do their best to anonymize individuals, but the reality is that other product teams, such as Ads or Growth, don't.



Where are the bug bounty programs? That's easy: there are none, because it's a crime to look at how any of the app runs. Reporting a security flaw would likely see you receive a $5000 fine, and potential jail time on top of that.

They didn't bother to get the servers running before pushing out a gigantic advertising campaign shaming anyone for not using it.

... Despite it having obvious flaws from day one that showed it was mostly a cut-'n'-paste of Singapore's GPL app. (Though you can't access the source. National security trumps freedom of information and promises.)


> It's a crime to see how any of the app is running.

Have you got something supporting this? Here's a panel of Australian-based security people decompiling and discussing the details of the app: https://www.youtube.com/watch?v=U3dN99ljgD4 Are you saying they've all publicly admitted to committing a crime and are unaware of those laws? Are all the editors here https://docs.google.com/document/d/17GuApb1fG3Bn0_DVgDQgrtnd... criminals?

The only serious analysis I can find is in http://www.austlii.edu.au/au/journals/JlLawInfoSci/2003/2.ht... and it's "kinda depends why you're doing it, but either way it's largely untested".


It isn't actually a law yet - that happens later this month. Instead, we've received a determination by the minister [0], which will act as a kind of back-date for when those laws are passed.

> A person must not decrypt encrypted COVID app data that is stored on a mobile telecommunications device.

That video shows them looking into how the data bundle is assembled, but I don't believe they actually touch it or run it in an emulator, which would very much breach the determination - because unless you're one of the exceptions, you're not legally allowed to run the software outside of tracing.

Exceptions are given for those in the employ of the health department or other government bodies.

Whilst that might arguably not cover decompiling the app, the minister's own press conference is clearer on the intent [1]:

> It cannot leave the country, it cannot be accessed by anybody other than a state public health official, it cannot be used for any purpose other than the provision of data for the purposes of finding people with whom you have been in close contact, and it is punishable by jail if there is a breach of that.

Decompiling the app steps outside the provisions for looking at the data, and yes, you don't have permission to look at your own data.

[0] https://www.legislation.gov.au/Details/F2020L00480/Html/Text

[1] https://www.health.gov.au/ministers/the-hon-greg-hunt-mp/med...


IANAL, but I think your interpretation goes beyond what they're after.

The determination is clearly aimed at data usage and prevents people from trying to decrypt the reports from other users. The whole fragment of the interview is about the data produced by the app and how it should be protected as sensitive information. I can't see anything there that would prevent you from reverse engineering "to see how any of the app is running."

It's not even obfuscated or protected from decompilation in any way, so it's trivial to look at with static analysis tools. (i.e. without trying to run it)

Even the headings don't mention the code: "Collection, use or disclosure of COVID app data", "Treatment of COVID app data", "Decrypting COVID app data", "Coercing the use of COVIDSafe".


> IANAL, but I think your interpretation goes beyond what they're after.

Perhaps more than the intent, but this is a government that doesn't deserve the benefit of the doubt.

Circumventing any "access control technical protection measures" is currently a crime under Australian law (Section 116, Copyright Act). They may well consider any decompiling tools to fall under that particular law, as well as use of said tools.

In March, they pressured a university into firing someone who was researching their own data breach to see how bad it was. [0] There isn't a law against re-identifying, especially when it is in the public interest, but they went ahead and threatened severe legal action anyway, whilst simultaneously claiming that said data breach doesn't contain any personally identifiable information.

They had to be taken to the High Court to be shown that an algorithm cannot be used as evidence that a debt exists, and that decision makers actually need to do more than just trust the system. [1]

If it embarrasses them in any way, then they are not above twisting laws to suit them. [2]

[0] https://www.theguardian.com/australia-news/2020/mar/08/melbo...

[1] https://www.theguardian.com/australia-news/2019/nov/28/robod...

[2] https://www.abc.net.au/news/2020-02-28/abc-not-appealing-fed...


So far as I can tell, the legal instrument that sets the requirements for the app is the Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Emergency Requirements—Public Health Contact Information) Determination 2020:

https://webcache.googleusercontent.com/search?q=cache:EJWgZa...

Reading through its text, I can't find any clause having the effect that you describe.


The idea that this won't be abused is nuts after seeing what's already been done by the likes of Facebook, Zoom, Microsoft and others. Privacy is privacy. No one has the right to take that away.


I have two questions.

1. Why are location data needed? What difference could it make vs just contacts?

2. Couldn't this ban be easily circumvented by telling people to install another app that shares location data?


I can't imagine that working out in China.

EDIT: Nevermind. Looks like this only applies to apps using their new contact-tracing framework.


Here in Australia our Covid tracing app is already out.

It uses Bluetooth proximity logs and local storage unless you test positive and are asked to upload.

I had a look at what's in the app's DB, and it seems it just keeps a log of unique IDs it bumps into. When you test positive and upload, I'm guessing it publishes those unique IDs if the logs show over 15 minutes of contact.

The government already have our locations via cell towers, so they would just have to match the data if they really wanted to.
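
If that's right, flagging contacts from such a log is conceptually just a scan for long-enough encounters. A toy sketch (the schema and the 15-minute rule are my assumptions from public descriptions, not the app's actual code):

    from datetime import datetime, timedelta

    # Hypothetical encounter log: (peer_id, first_seen, last_seen) rows,
    # as the app might accumulate them from repeated Bluetooth sightings.
    log = [
        ("id-A", datetime(2020, 5, 4, 9, 0), datetime(2020, 5, 4, 9, 20)),
        ("id-B", datetime(2020, 5, 4, 9, 5), datetime(2020, 5, 4, 9, 10)),
    ]

    def close_contacts(log, min_duration=timedelta(minutes=15)):
        # Only encounters lasting at least 15 minutes count as contacts.
        return [peer for peer, start, end in log if end - start >= min_duration]

    print(close_contacts(log))  # ['id-A']; id-B was only nearby for 5 minutes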


A common misunderstanding is that this system results in instantaneous contact notification and isolation (which is what one of the original papers on digital contact tracing efficacy assumes).

The reality is the entire 'human in the loop' contact tracing process is still manual. Health staff working in state contact tracing teams still need to call and interview every contact to determine whether they could be considered a close contact from an epidemiological perspective.

Also consider that close contacts include anyone you've spent more than 2 hours in an enclosed room with.

So that immediately limits the utility of any bluetooth proximity app: you still need an entirely manual process for home, work, social gatherings - any situation where you are in an enclosed room.

We've taken a deeper look at the assumptions and expert opinion, from both individuals and institutions, which appear to discount this kind of system's ability to provide significant value.

Even the product lead of Singapore's TraceTogether app, as a natural advocate for this kind of initiative, admits the technology is oversold and is only an additional tool (due to the significant potential for false positives / negatives):

https://blog.gds-gov.tech/automated-contact-tracing-is-not-a...


Thanks for the insight.

Why wouldn't they just auto-alert anyone who came within close enough proximity to get tested, though? Surely there is some benefit to letting people know they need to be extra careful now and get tested ASAP.

All the manual human work can continue as normal.


If someone is determined as a close contact they have to legally self-isolate for 14 days regardless of symptoms or test results.

The policy there probably reflects the understanding that people in the early stages of infection won't test positive or exhibit symptoms.


Couldn't the GPS data be "rounded" to have less precision, depending on population density?


Wouldn't that sort of defeat the point here?

Contact tracing apps want to use location data to know when I might have been in contact with someone. If you do something like round the data to a suburban neighborhood or a densely populated city block in Manhattan, you can't effectively contact trace.

Any granularity of data that is useful for contact tracing would seemingly raise the same concerns that are leading to them banning this.
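
Back-of-the-envelope arithmetic makes the mismatch clear: one degree of latitude is roughly 111 km, so rounding coordinates to k decimal places gives cells of about 111,000 / 10^k metres, orders of magnitude coarser than the ~1.5-2 m that matters for transmission.

    # Cell size produced by rounding latitude to k decimal places
    # (~111 km per degree of latitude; longitude shrinks with latitude).
    for k in range(1, 6):
        print(k, "decimals ->", 111_000 / 10 ** k, "m")
    # Even 4 decimals (~11 m cells) is far too coarse for contact tracing,
    # while still fine-grained enough to reveal home and work locations.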


Does anyone know if the original statements from Apple and/or Google are available online?


I believe it comes from here:

https://blog.google/documents/72/Exposure_Notifications_Serv...

Section 3.c.i


Thanks.

It sounds like this only applies to Google's own exposure notification service and would not apply to standalone contact-tracing apps such as the one being proposed by the UK government.


Somewhat inevitable that the briefing against Apple and Google will begin tomorrow: that American big tech firms are deliberately frustrating attempts to fight COVID, and wouldn't it be better if we just taxed them out of existence.

Shame, as a UK citizen I’m entirely supportive of the stance Apple and Google are taking.


I'm not so sure that this type of framing will work so well in the UK. I suspect a lot of the press will jump straight to "it's because Dominic Cummings wants to benefit his mates who do dodgy data mining". Even if that's demonstrably totally untrue, enough people will believe that narrative for it to become mainstream.

The fact that there are plenty of other countries who are content to go with the Google/Apple API will also neutralize a lot of this type of criticism.


I have no idea how to predict the UK public anymore. The snoopers' charter went by with barely a whisper from the populace, and yet something like 93% of people opted out of the 18+ filter on their home broadband. It's exceptionally rare for so many people to do the opposite of the default. I can't work out whether the UK public care about privacy and security or not, but they surprise me with their actions at times.


Interesting. I wonder if countries will try to force them to re-enable it


What is the status of contact tracing apps in the United States?


Not launched yet.


Under the cover of coronavirus, governments punish adversaries and reward friends https://archive.vn/Ea2qr


Will they boot existing contact tracing apps with GPS from their app stores?


this is a textbook cluster fack


I’m so confused. Can you opt out of this?


Can you opt out of installing an app? Is that actually what you're asking here? If so, yes..


If Apple wanted to, they could force your phone to install an unremovable app, or just add the software in an update.


This is not just a straw man it's a straw giant.


True. I just haven’t been following this. I’m glad to hear it’s completely voluntary.


Which is very unfortunate, because it means it'll be far less effective.


Awesome, but not yet launched!


Yes. I believe them. Totally. The borg has my best interests at heart. My big brothers Apple and Google will look after me.


Horrible.


If the goal is 'privacy', I'd think contact tracing is every bit as much of a privacy concern as location, if not more.


How so? You're an anonymous identifier in this system.


Even without location tracking it's creepy. What happens when the govt contact-traces a man with his mistress and then blackmails him?


I laugh at conspiracy theories which revolve around the Government paying special attention to normal individuals.

Dude, Governments have the power to tax millions of people. They don’t need your piddling bribery money. It’s too much work for too little reward.


It’s not about the money and it’s not about “normal individuals”, whatever that means.

Law enforcement and intelligence agencies find this sort of information useful to further their agendas.


Then (in your absurd hypothetical scenario) those agencies are being incredibly stupid; they're wasting their time on high-risk / low-reward strategies.

Oh wait, you're probably talking about the United States? It's scary how fucked up that place is just millimetres below the surface.


I'm talking about what they did to MLK Jr. Even if it doesn't affect me, it still matters.

https://en.wikipedia.org/wiki/FBI%E2%80%93King_suicide_lette...


They have to protect their valuable behavioral surplus.


I am sure most people would agree life > privacy any day.

Edit: But then again, if there was an option to opt out that would be great too. Got to give people options!


Good luck implementing it on the other side of the world (India). You can't refuse if the government mandates it. So what's the point of Apple and Google trying to control privacy? I can trust the government more than greedy private companies.


Just curious: why do you feel you trust your government more than "greedy" private companies?


So GPS for me but not thee?

"Privacy experts have warned that any cache of location data related to health issues could make businesses and individuals vulnerable to being ostracized if the data is exposed."

Apple and Google already have that data in-house; it's just as vulnerable as any other data.

How is this not an absurd stance?


I can't speak for Google, but I know that Apple goes out of their way not to have any clue where you are. For example, when you request mapping directions, you get issued multiple single-use codes that are refreshed during the trip, so not only does Apple not know what directions you asked for, they can't even associate the trip with a single entity.
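
A minimal sketch of that idea as I understand the public description (not Apple's actual protocol; the names are made up): the trip is split into legs, and each leg is requested under a fresh, unlinkable token.

    import secrets

    def route_requests(waypoints):
        # Each leg goes to the server under a fresh single-use token, so no
        # one identifier ties the whole journey to a single entity.
        return [{"token": secrets.token_hex(16), "from": a, "to": b}
                for a, b in zip(waypoints, waypoints[1:])]

    for req in route_requests(["home", "exit 12", "office"]):
        print(req)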

When you say, "Apple has the data in-house" - what are you referring to?


> they can't even associate the trip

That's assuming the single-use codes are not persisted on the other side. Maybe they aren't.

"Apple has the ability to have the data in-house" / "Apple promises not to have the data in-house" would be more precise. The rest relies on the current behaviour and T&C.


It’s only absurd when you assume that access implies blanket authorization.

Apple Maps is allowed to use my location for the purpose of providing me directions. Sure, privacy policies are usually overly broad, but "to allow Mark in accounting to track his exes" isn't part of them.

I don’t see a contradiction in two large players refusing to cooperate with governments that want to slurp up people’s location data. They might lose the court case and be forced to allow it but I see no absurdity in the fight.


It's the reasoning they give -- that it might be compromised and used for bad ends. That's already possible. If anyone genuinely thinks that partitioning data for different purposes provides any real privacy protection, I've got a LifeLock subscription to sell them.


In terms of technical competence, I trust Google and Apple 10x more than I trust most governments.


Lemme guess, libertarian? Or at least "fiscally conservative, socially liberal"? Blind faith in the Free Market Fairy (https://bitworking.org/news/2008/01/The-Free-Market-Fairy) is a big part of how we got here. As Einstein said, "We can't solve problems by using the same kind of thinking we used when we created them".


I’m genuinely surprised that an implementation came out so quickly ... one would think the pandemic must have been well advanced before the government even considered doing this ... and here we are with Android- and iOS-approved apps, with Bluetooth and permission solutions made by a team of available engineers, implementing government-documented requirements ... hmmm


There is not a single app using this API in "approved" state, let alone active use. The API is kept very simple, its implementation has just entered a beta stage, and the Bluetooth tech it uses is almost a decade old already. And the entire idea of contact tracing via BT has been discussed for months already by several teams worldwide and already been implemented a few times in various ways, so it's not exactly new either.

But don't worry, just go on with your demonstrative "critical thinking"...



