You can see a summary on https://www.eff.org/deeplinks/2012/02/what-actually-changed-...
For completeness sake, the diff from the changes on June 28th: https://www.google.de/intl/en/policies/privacy/archive/20160...
Disclaimer: I work at Google, but not on anything related to this. I speak for myself, not for the company.
> Google's servers also automatically collect telephony log information (including calling-party number, forwarding numbers, time and date of calls, duration of calls, SMS routing information, and types of calls).
But then a couple of years later they turn on that data collection without anyone realizing, and it may take a few years before some do.
When Verizon did their supercookie, everyone went crazy (as they should).
Now that it's Google (doing this since before Verizon, it seems), the top comment is a Googler throwing the issue to the side in a Hitchhiker's-Guide-to-the-Galaxy way ("you should have checked the Alpha Centauri records in 2012").
I was quite sceptical about Android, partly because it has a foreign userspace and partly because there were no nice vanilla Android forks without Google code and with a focus on privacy.
I'm now tired of waiting for any Maemo successor to catch up and I reckon F-droid has become a pretty nice ecosystem. Also the announcement of CopperheadOS, Guardian Project and F-droid joining forces is exciting.
Otherwise, in case you do try it, here's some things that I found useful:
If the apps on F-Droid aren't enough for your taste, you can also use Aptoide to access a big portion of the apps that are available on the Play Store.
The Amazon Store is also an option, although their store is also not really the greatest when it comes to not invading your privacy.
And if you cannot live without some of the apps on the Play Store, or if you've paid for some apps there and would prefer to not lose access to them, I can recommend Raccoon for accessing them.
There are also some online services, for example Evozi, which can download unpaid apps from the Play Store for you. But they usually handle only unpaid apps, and I have yet to find one of those services that is not hopelessly overloaded.
I think, if you look more into app-piracy tools, you might also find a web service which lets you download paid apps, but I've personally just stuck to Raccoon, so I can't make any recommendations there.
Finally, if one of your apps really does need Google Play Services, there's a project aiming to reimplement the functionality without the privacy problems, called microG.
I haven't personally tried it yet, but I think it's somewhat hit-and-miss, so while it's probably worth trying, you shouldn't expect too much from it...
Default language on their page is in German but everything is available in English (including support).
to be fair, one was a thing that happened, the other is a "by my reading of the ToS, they could do this"
Admittedly it is not as easy to spin this as a more explicit ToS, but I am sure that with a sufficient number of legal experts, this has considerable leeway in terms of interpretation.
because they did change that, according to this very thread. With nice diffs and all.
But the surprise twist is that Apple does the same thing!
To disable it on iPhones go to: Settings -> Privacy -> Advertising -> Limit Ad Tracking
That is absurd revisionism: iAd died because Apple stipulated a minimum ad spend of $1 million. They later dropped the minimum spend to $50, but the damage was already done and the platform was in a death-spiral.
iAd died because Apple unsuccessfully tried to capture the 'premium' end of the ad market, not because it "respected user privacy too much".
...is false. They are loath to allow anyone else access to it, and sold ads as many exchanges (like Facebook) do - not sharing user info, but promising they can target the demographic you want with precision.
And just to drive home the point, they will still be using targeted data when they start showing App Store ads in iOS 10. If you think there is no targeting there, you're hiding your head in the sand.
My theory goes like this - Apple gets to still advertise, lose its full iAd sales team, has a captive audience and ad platform and so can set pricing however they want, use native (better) ads instead of banner (worse and dying) ads, and so on.
It's important to note that it also says "...if you clear your cache, you will lose your opt-out setting." What cache?
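For context on the "cache" wording: ad-network opt-outs of that era were typically stored client-side (as a browser cookie), so clearing cookies/site data wipes the opt-out along with everything else. A minimal sketch, assuming a simple cookie jar (the cookie names are invented):

```python
# Illustrative sketch, not Google's actual mechanism: the opt-out
# lives client-side, so wiping browser data also wipes the opt-out.

def clear_browser_data(cookies):
    """Simulate 'clear cache/cookies': everything goes, opt-out included."""
    cookies.clear()

cookies = {"session": "abc123", "ad_opt_out": "1"}  # user has opted out
assert cookies.get("ad_opt_out") == "1"

clear_browser_data(cookies)
# The opt-out flag is gone; tracking defaults back to on.
assert cookies.get("ad_opt_out") is None
```

This is why opt-out schemes that depend on a cookie quietly revert whenever the user cleans up their browser.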
You can control a lot of what you see from that page (which has been around for at least 6 years).
It Shouldn't Take a Letter from Congress for Google to Give Straight Answers
Unfortunately, while the policy might be easier to understand, Google did a
less impressive job of publicly explaining what in the policy had actually
been changed. In fact, it took a letter from eight Representatives to
persuade them to provide straightforward answers to the public about their
Unfortunately, Google’s original explanation left much to be desired. The
policy’s overview page said nothing about the substantive changes that were
occurring in the policy, and the FAQ was equally vague:
Here's notification as written by someone who actually wants to notify rather than do the minimum possible to cover their ass while simultaneously obscuring what they actually collect:
We will now record and store every phone number you call.
PS -- under substantial changes, nothing about telephony. So apparently the EFF either didn't see or didn't understand the change.
Anonymity on the internet is dead for the regular user and it sucks. I wish something like Freenet was better and had more users/content.
The issue is not in any sense limited to "free" things.
Don't forget about all of your packets being sniffed at every hop as they go through all those telco routers you pay to use. Dead there too, unless you use Tor, but then you just get flagged for using Tor.
All thanks to advertising companies. What a rotten state of affairs.
At some point, society came to the conclusion that the workers and factory owners were not on equal footing, and so rules were instated, despite strong protests from the factory owners that contract law was sacrosanct and workers should be free to agree to anything.
Perhaps it's time for regulation to solve this dilemma?
Yet, people buy iPhones and Androids instead. They wanted those features over privacy or security. Next step was people posting hardening guides plus making private apps for these. Most people still didn't use them. Next step, given that and low sales of "crypto phones," was to make new crypto phones & mobile solutions that pre-hardened Android, pre-supplied key apps we needed, and provided things like remote wipe. Most people and businesses don't buy those even if the price gets down to a normal smartphone.
I mean, what else is even left to do to appeal to majority who won't buy a fully-featured, privacy-enabled, Android phone for Android prices? At this point, I feel comfortable saying the buyers are the problem or (said differently) they have a clear preference against any private phone companies produce. They're for existing UX, tons of apps, more tracking for app's features, cloud backups providers can read, faster, prettier, and so on. Everything that enables hackers.
So, I suggest companies just say "Screw it! I'm just going to do a marginal improvement on whatever customers want while making excuses when problems happen." Since telling them it's their own choice doing it or offering them a secure phone will both lead to financial losses.
That being said, could you give us a good example of these privacy-enabled phones and apps? I am willing to make a collection out of these and write a guide on hardening Android. And I am not just talking about apps that are incredibly hard to use; I mean the better generation of them that are actually mass-audience-friendly.
Of course we must not forget that Android phones could have a backdoor at the kernel or even at the hardware level. But I still think we should do the best that we can. As many security researchers say, you're never absolutely breach-proof, but if you work hard enough you're not a target that's worth the effort, especially keeping in mind you're not a legitimate threat to any government.
I view this sort of like the people who got away from the Matrix in the movies; as the Architect and the Oracle implied, as long as these people aren't an escalating threat for the entire system, they're allowed to live however they choose.
What's your opinion on the Turing Phone and Sailfish OS in general, by the way? Do you think that it gives us a fair progress in the direction of the more snoop-proof end-user tech?
Those were usually very simple. A good thing compared to modern ones. They all cost in the $1,000-3,000 per unit range due to extra costs and low volume. Sectera Edge was probably the most secure and rugged. Cryptophone was easy to use plus had nice features like hardened Windows and published source for the crypto. You basically called the person, read out what was on your screen, listened to them do the same, and listened to each other's voices to make sure you recognized them. It was a favorite outside of just defense use. It switched to Android later. That's the demo I found.
Note: The letters you see are the codes you read.
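The read-aloud codes work roughly like a short authentication string: both ends independently derive a short code from the negotiated secret, and matching codes imply no man-in-the-middle swapped keys. A sketch of the idea, purely illustrative (real protocols like ZRTP use a specific KDF; the alphabet and length here are my own choices):

```python
import hashlib

def short_auth_string(shared_secret: bytes, length: int = 4) -> str:
    """Derive a short letter code from the call's negotiated secret.
    Both parties compute it independently and read it aloud; if the
    codes match, the two ends share the same secret (no MITM).
    Illustrative only, not the Cryptophone/ZRTP construction."""
    digest = hashlib.sha256(shared_secret).digest()
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    return "".join(alphabet[b % 26] for b in digest[:length])

# Both parties derive the same code from the same negotiated secret.
assert short_auth_string(b"negotiated-key") == short_auth_string(b"negotiated-key")
# A MITM would leave each side with a different secret, so the codes
# would (with overwhelming probability) differ when read aloud.
```

Recognizing each other's voices while reading the codes is what stops an attacker from simply relaying the right code.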
These were pretty expensive. So, companies started developing software for regular phones... often one or two models... that turned them into encrypted phones, optionally with hardening. The prior list had some. SecureStar (PhoneCrypt), SecureGSM, and Cellcrypt come to mind. Eventually, recognizing encryption wasn't enough, this segment sort of combined with Android and other software to produce dedicated phones that were cheaper than older cryptophones. Well, some of them haha. Two examples, with the second being the open Redphone.
Examples of the phones produced include Boeing Black, Bull Hoox, the Cryptophones, and recently the Blackphones w/ Silent Circle. Blackphone was among the cheapest we saw at regular, smartphone prices. It was common for crypto phones to come with voice and SMS at least. Blackphone added quite a few privacy-oriented apps over most to be all-in-one solution. I remember that as an advantage.
As far as messengers go, we have good open ones these days, so I mostly forgot the others outside cryptophones and the above. Signal is super easy, free, and quite secure. Main recommendation. There was also ChatSecure and TextSecure. Given the open ones, there's no reason to trust commercial ones, since subversion and BS are high in this industry. Still worth looking at them for how they do the usability aspect to increase adoption. I know Threema got significant adoption. Worth looking at. I'm open to others' suggestions here on crypto apps with good security protocols that also have great usability. Thing is, if it's really end-to-end, usability is inherently lower than a centralized one due to the verification aspect. Anything truly frictionless is suspect in my view, with Signal representing the high end of what I'm expecting.
Bruce Schneier, for Congressional submission, did ask us all to list as many crypto products as possible for him. You might find something of interest there. Here's that thread:
Note: Also, the original way we did this outside expensive cryptophones is called Voice over Secure IP (VoSIP). That means you set up the strongest VPN (or link encryptor) between two points that are communicating. Then, you force a normal app to go through it. One can automate this process so it's painless for users. Often stronger than average secure voice app given what scrutiny goes into some implementations of transport-level security. Or existence of dedicated lines between branches.
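The core policy in that VoSIP setup is "fail closed": voice traffic is only allowed out when it would actually traverse the tunnel. A toy sketch of that check (interface names and the routing-table shape are invented for illustration):

```python
# Toy illustration of the "force everything through the tunnel" policy
# behind VoSIP: permit traffic only when the default route points at
# the VPN interface; otherwise block, so the call never leaks in clear.

def traffic_allowed(routes, vpn_iface="tun0"):
    """Fail closed: True only when the default route uses the VPN.
    'routes' is a simplified {destination: interface} mapping."""
    return routes.get("default") == vpn_iface

assert traffic_allowed({"default": "tun0", "10.0.0.0/8": "eth0"})
assert not traffic_allowed({"default": "eth0"})  # VPN down: block the call
```

Real deployments enforce this in the OS routing/firewall layer rather than in the app, which is why it can be made painless for users.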
"I view this sort of like the people who got away from the Matrix in the movies; as the Architect and the Oracle implied, as long as these people aren't an escalating threat for the entire system, they're allowed to live however they choose."
Possibly but don't count on it. Depends where you live. The U.S. increasingly targets harmless citizens with anything it can up to and including just stealing their money without charges under civil forfeiture laws. Just using Tor or crypto is grounds for NSA to put increased scrutiny on you per the leaks. So, this isn't guaranteed. Keep real secrets off online or wireless devices period. Face-to-face only. The rest we have to keep doing more and more to protect. Can incrementally deploy it, though, where sales drive increases in not just features but assurance of more of the stack. My recommendation.
"What's your opinion on the Turing Phone and Sailfish OS in general, by the way? Do you think that it gives us a fair progress in the direction of the more snoop-proof end-user tech?"
Let me help you out by showing you what all they have to protect. You can look at this list, look at the marketing/technical material, and usually tell if it's going to be victim to future attacks.
By those standards, the above aren't even close. I haven't studied these phones enough to say much more, though. I do like aspects of Sailfish in terms of a more open phone, but it's still owned by one company, per Wikipedia's description. That one also licenses key I.P. in proprietary fashion. So, there is risk of it being another Google Android situation. The Turing Phone article I read on Wired sounds like a pile of marketing BS plus lock-in waiting to happen. People are better off using apps like Signal, Redphone, Cryptophone, or Silent Circle that at least come from people who know what they're doing. Who we know have a track record. That's my (common) initial impression.
Sadly, on the topic of the Turing Phone, I suspected as much. I really like to believe but yes, they're quite new to the market and are still closed in terms of what they use for this alleged "more secure" phone/OS. I'm still interested but my enthusiasm is not so high compared to the time of the original announcement...
I wanted to use Signal several times but I have to admit, its use-case and convenience story isn't looking great. I'll take a more serious look, though.
You can't use it without installing closed-source Google Apps (Play Services for GCM at minimum), which means you agree to hand over your phone metadata to Google (per the OP's top thread). Moxie has stated he is open to considering high-quality PRs to add WebSocket functionality. (Removing closed-source binary blobs would be a prerequisite to distributing on anything other than Google Play too, though, which Moxie has also said isn't on the roadmap; I assume primarily because of resources.)
In the meantime, Conversations.IM has OMEMO and Vector.IM has Olm/Megolm.
There aren't a lot of good voice options. Vector.IM just added WebRTC, which is meant to be DTLS-secured. CSipSimple does ZRTP, but it hasn't been updated in a long time.
None of the apps mentioned above have been audited and scrutinised to the extent Signal has.
If you really need privacy & security, CopperheadOS is the only Android distro AFAIAA that fits the bill at the moment.
Perhaps some more volunteers putting effort in could remedy the situation.
I wouldn't pin too much hope on having a high quality PR written and integrated back to Signal soon. It doesn't look like a top priority for them. OWS also like the telemetry that Play gives them for diagnostics and have stated they won't be looking at FDroid unless someone can replace that.
Fixing the funding model is an independent problem.
I'm not saying that advertising and software development aren't presently linked through funding. But the process is _fundamentally_ unbundled, through the vehicle of Free Software development and licensing.
The costs for advertising are approximately $500 per year per capita in the developed world (~1 billion persons), plus all the associated privacy, surveillance, security, and chilling-effects risks.
A shift of that basis from advertising to a syndicated content-and-development support tax would cut the cord between both advertising and content, and advertising and software development. Treating both writing of content (fiction, nonfiction, journalism, research) and software development as public goods could achieve some very strong social benefits (or at least present a different set of problems for us to jaw over on HN, though the concept of a technology startup incubator might also see some ground shifts).
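Taking the quoted figures at face value, the back-of-envelope on what that advertising basis costs in aggregate:

```python
# Using the numbers quoted above: ~$500/year per capita across
# ~1 billion people in the developed world.
per_capita_usd = 500
population = 1_000_000_000

total_ad_cost = per_capita_usd * population
assert total_ad_cost == 500_000_000_000  # ~$500B/year funding ad-supported media
```

That is the pool a content-and-development tax would be redirecting, before counting the privacy and surveillance externalities.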
Bust out of that box a bit, Nick.
Detachable headphones and mics.
Oh no, it's a very intertwined problem. One is only likely to succeed in business using models that are shown to work. A browser, secure platform, and so on usually costs tens to hundreds of millions in labor. The only companies that have pulled that off either used premium, licensed software (e.g. Microsoft) or advertising (the rest). There are scores of attempts at alternative business models to break into all these ad-dominated markets. Almost none of them work. So, this is important.
"through the vehicle of Free Software development and licensing."
It's possible but few have been built that way. There's a bunch of small players trying in the secure collaboration space. Making almost no money. People used iMessage, Facebook Messenger, WhatsApp, and SMS instead. The ones making it, almost none in privacy, seem to be outliers that have lasted quite a while or VC-funded ones that we shouldn't trust due to sell-out risks.
"The costs for advertising are approximately $500 per year per capita in the developed world (~1 billion persons), plus all the associated privacy, surveillance, security, and chilling-effects risks."
Better to look at what it costs to develop top-notch products vs what they make on advertising, proprietary licensing, and FOSS licensing/support. What they make and if it justifies continuing the offering matter more than the $500/yr they probably haven't heard of. It's new to me, too.
"Treating both writing of content (fiction, nonfiction, journalism, research) and software development as public goods could achieve some very strong social benefits (or at least present a different set of problems for us to jaw over on HN, though the concept of a technology startup incubator might also see some ground shifts)."
I agree. It's just that 90-99% of buyers haven't acted like it, consistently, for a decade or so.
"Detachable headphones and mics."
Mine and some others' designs call for simple switches that cut power or connection to the mic, camera, and so on. That's doable if people want it. I'm also for jumper- and/or crypto-enabled updates of firmware that are tamper-evident. Also doable.
"Bust out of that box a bit, Nick."
The model I've been considering is to get proprietary vendors to each contribute a bit of their revenue to the development of OSS dependencies. These might even be new proprietary vendors that are starting for the purpose of pushing better, paid software plus new models. The idea is that each of them contribute to say a mobile OS, a SSL library, a Linux/BSD, disk encryption, secure backup, and so on. As they improve and succeed, so do the critical components they are sponsoring. They can include it in the marketing material for customers along with examples like Heartbleed that came from supporting competition that didn't invest in critical infrastructure. So, there's immediate benefit, long-term maintenance, and public good all in one package. Selling the participants is the hard part here.
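The pooled-contribution model above can be sketched with a toy calculation; all vendor names, revenue figures, the 1% pledge rate, and the component list are invented for illustration:

```python
# Hypothetical numbers: each proprietary vendor pledges a small slice
# of revenue to shared OSS dependencies they all rely on.
vendors = {"VendorA": 40_000_000, "VendorB": 12_000_000, "VendorC": 3_500_000}
pledge_rate = 0.01  # 1% of revenue

pool = sum(revenue * pledge_rate for revenue in vendors.values())
# Split evenly across the sponsored components for simplicity.
components = ["mobile OS", "TLS library", "disk encryption", "secure backup"]
per_component = pool / len(components)

assert pool == 555_000.0
assert per_component == 138_750.0
```

The point of the sketch: even modest pledge rates across a handful of mid-size vendors fund sustained maintenance of the kind of critical infrastructure that Heartbleed showed was starved.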
Note: I also thought getting CompSci people developing bug hunting or code generation tools to use them on these projects with their grant money would be nice, too.
Note 2: We haven't even gotten to the risk of patent suits against these companies, whose business models have nowhere near as much money for lawyers or buying patents as those suing. That makes the situation more dire, at least in the U.S.
Your point that many Free Software projects make little money (at least as Free Software projects) misses the more salient point that they generally don't need to. Scratching itches, low barriers to collaboration, and solving problems in other application spaces makes this possible.
The giants' very reliance on vast revenue flows is also a vulnerability.
And no, I'm not arguing that costs disappear (though efficiencies do appear), but rather that they're distributed and loaded throughout numerous other organisations and activities making money on their own.
The "90-99% of buyers" problem is precisely why you look at funding alternatives which bypass per-copy market sales. I've been looking into the history of publishing and creative works, it's an interesting space. Patronage, busking/performing, crown sanction (essentially a content tax), BBC tax, etc. Information goods and markets interact very poorly: https://redd.it/2vm2da
(UC Berkeley / Google economist Hal Varian has an extremely similar treatment which I ran across recently.)
The detachable headphones and mics comment was a reference to an earlier exchange we had, I thought you might recall it.
Your bust-out-of-the-box model isn't too far from what I'm suggesting for content tax/syndication. Whether voluntary (free-rider problem) or compulsory (politically difficult but possibly inevitable) you're distributing contribution to a shared resource. Much as, say, we ended up with language or the law.
Patent threats are also somewhat diminished through small pockets (less attractive target), or might be addressed specifically via legislation and/or international treaty. Say, something with a different philosophy than the TPP, TTIP, TiSA, and BITs crud being shoved down our throats by Google, Apple, Amazon, Microsoft, IBM, AT&T, et al.
That would make more sense if various companies hadn't sunk $1+ billion into more robust systems and software over the decades that most users & developers intentionally avoided in favor of what was faster, cheaper, supported buzzword/feature X, and/or was known to be unreliable/insecure. They make the same tradeoffs today. Look at the use & tradeoffs of Facebook Messenger vs WhatsApp vs Signal vs Threema. Even when cheap/free and easy, the vast majority will not make the slightest effort for increased security and privacy.
Not endorsing Threema so much as to say it was quite marketable & a good example of what should get more adoption if users aren't to blame.
Itanium got posted here recently, too. It had enhanced reliability, stack protection, read/write/execute per page, and memory key isolation if one wanted it. Most server software that was mission critical continued to run on Xeon x86 instead of leveraging improvements in Itanium even when portable source was available for mere re-compile to Itanium. Why? Xeon was cheaper and took no extra effort.
Rinse, repeat for all kinds of software and tooling. There's often supply but little demand that pays. Which sucks extra here given high reliability or security costs more not less for producers. There has to be sustained demand that will pay at least 30-50% premium to develop each component or app. Since there's not, the companies are entirely justified in producing unreliable, insecure or surveillance-oriented garbage for people that exclusively use or buy such things. And the buyers are to blame wherever there were clear alternatives that were inexpensive given their choices collectively decide the issue.
"Perhaps it's time for regulation to solve this dilemma?"
I supported that on Schneier's blog with specific points showing where market consistently fails (even for itself) and how regulation would help:
Despite low demand, I also actually wrote a counterpoint justifying continued focus on the niche that cares about quality/security, with recommendations for "low-cost but high-value" practices to deliver that profitably as a differentiator. What's in this post is an example of how my software-liability argument in the other one might play out. Regulations would enforce stuff known to knock out problems consistently, to form a better baseline without driving up cost or having vague stuff prone to frivolous lawsuits.
No, it's all thanks to the availability of cheap storage and video cameras and fast internet, driving the price of recording almost to zero. Couple that with the love people have for mobile phones, which are spying devices.
This would maintain the illusion of things happening without actually allowing them to come to fruition.
The sequence is also important: first they build services and products, sell (or give away) them on their merits, and then turn the analytics to 11. Doing both at the same time is a no-no, but once a user is dragged in, natural laziness and other migration costs kick in.
They are long cons, and to be fair, the products are pretty nice.
I understand their need for data. It makes business and technical sense. I just wish they made more of an effort to anonymize it. (But that won't happen as long as ads are the primary market of the web.)
With very good cause.
There is a significant portion of the population that doesn't know what metadata even is, much less that a company they use collects it and why that might be a problem.
Apple's push for "privacy" has only one purpose: to cash in on the information from the government. Right now the government gets it for free; once there is "privacy", they will need to pay Apple to read it. If there were no profit in it, Apple wouldn't push for any "privacy".
Besides all of that, the older I get, the more I realise the following two things: #1, that we're entirely in the dark ages of technology (and science to a lesser extent), regardless of the technology we have acquired, because society, governance and the public discourses all seem so very crude - we don't know how to handle our newfound powers, and dogmas are still very much with us: they've just moved from religion to science and technology; and #2, that the entire ordeal we find ourselves in is wholly inevitable and, for the most part, much less important than it appears to be, perhaps even insignificant in the grand scheme of things. I don't believe that there is going to be a global doom scenario, or at least not more than medieval witch hunts and the rise (and 'fall' of sorts) of the Roman Catholic power and influence structure.
Historically speaking I'm inclined to think that the only element of humanity that survives everything that's happened, is human bonding and the forming of relationships and communities. In Roman Catholic wording, I'd say that everything else is a lot more temporal, and also very volatile and subject to change. If that comes across as indifference, so be it.
But that's just how I see things now, so feel free to poke holes in my theory.
From those days, I am not surprised by this practice.
Well, yes because that's the accusation, but no because you're correct in the sense that you have to assume the worst if they aren't making it clear. Once the catch-all is in place they can do what they want, and there doesn't have to be a clear indication that anything has changed.
I attempted to find something that would indicate they had to suddenly include that clause, but the closest I found was the introduction of Google Voice to the iPhone a month before the SMS was first mentioned. Apple's site makes no mention of metadata collection so I assume this has nothing to do with it.
The more I try to defend it the less convinced I am.
: https://en.wikipedia.org/wiki/Google_Voice#Rejection_from_th... Mentions Sept. 2010 and Nov. 2010 as being important dates around the iPhone release.
Disclaimer: These views are my own.
Source: I was working on https://github.com/moezbhatti/qksms when Messenger was released.
Why would you have such doubts? Companies are not in the business of making you comfortable and ensuring you have the privacy that you value; they're there to maximize shareholder value. Why would you trust a corporation not to make money off of you by personalizing even further, if they can?
Why? What incentive is there to not collect that data if they've deemed it valuable enough to adjust their policies to collect it?
> during each phone call, Android tries to connect to a Google server
I've always considered Google to be evil unless proven otherwise. It is a simple conflict of interest to trust Google with the user's privacy. They make money by creating profiles on users and serving relevant ads. I'd be an idiot to keep on asking them to play nice and respect my privacy.
Apple on the other hand seems to be aligned to respect privacy in principle, and publicly claims it doesn't violate user's privacy. But, there is no way to verify their system because of the proprietary nature. So I treat them as less evil.
Before you suggest something like Replicant: they are only slightly better than feature phones because of the lack of new and fancy apps, and they come with a label that reads "If you need serious privacy stay away from any telephony-enabled device" http://www.replicant.us/freedom-privacy-security-issues.php
So, I guess the only choice is to stay paranoid and not trust your phone with any sensitive private data.
Your metadata is sensitive private data: who you call, when you call them, and how often you call them is sensitive.
Suicide hotlines, charities for LGBT people, medical numbers, etc. all build a picture about you, and that goes beyond personal contacts.
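To make concrete how bare metadata builds that picture, here's a minimal sketch; the directory of labeled numbers and the call log are invented for illustration:

```python
# Illustrative only: profiling a person from nothing but (number,
# duration) pairs. No call content is needed.
SENSITIVE_DIRECTORY = {
    "+1-555-010-0100": "suicide hotline",
    "+1-555-010-0150": "LGBT help line",
    "+1-555-010-0199": "oncology clinic",
}

def profile_from_metadata(call_log):
    """Return the sorted sensitive categories implied by who was called."""
    return sorted({SENSITIVE_DIRECTORY[number]
                   for number, _duration in call_log
                   if number in SENSITIVE_DIRECTORY})

calls = [("+1-555-010-0100", 1260),   # a 21-minute call says a lot
         ("+1-555-867-5309", 45),     # an ordinary contact
         ("+1-555-010-0199", 300)]
assert profile_from_metadata(calls) == ["oncology clinic", "suicide hotline"]
```

Anyone holding the telephony log can run exactly this kind of join against a directory of known numbers, which is why "just metadata" is a misleading phrase.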
"When you use our services or view content provided by Google, we automatically collect and store certain information in server logs. This includes:"
I am not sure this applies to regular phone calls, no? Can someone explain here because I am certainly not understanding. Is this only for, say, Google Voice or related services that they have?
In all honesty though, you are prompted about whether you want to use this feature.
From time to time I wonder why the EU accepts foreign companies stealing so much personal data. China did the right thing. We should learn from them. India, afaik, also has better protection.
I have a lot of friends internationally and I feel I can't exit FB, Google Hangouts, etc. The idea of federated social networking never really made it. :(
I don't think China did the right thing. Their citizens were almost certainly better off with Google holding their meta-data and private email communications.
Isn't the EU trying to fix this by tearing up safe harbour and instituting new legislation?
Use apps from other app stores, in my case F-Droid.
Cyanogenmod's Privacy Guard is easy enough to use though its protections don't go as far as I'd like. Pdroid seemed to be a reasonable compromise in ease of use and protection between the two, though that seems no longer to be available.
I do not think the UI is great, but articles like this are not either. I made my choice.
* By using and selling user data and profiles.
There's a reason people still refer to them as M$.
And then Microsoft acquired LinkedIn.
Microsoft should have recognized they were considered more trustworthy, in one sector anyways, than Google or Facebook. Building on that trust by building a Windows 10 with great privacy would have helped their reputation and stemmed the bleeding a bit. At any rate, I have no idea why MS doesn't just allow a "disable all tracking and telemetry" option on all versions of the OS. Would kill most of the complaints immediately, and most people wouldn't bother opting out so Microsoft would still have the data they think they need.
Google is kinda like having a psycho girl/boy-friend that wants to know your every move. Creepy.
You can't buy an open source 2d printer, but can buy and build 3d OSS ones no problem. In like kind, I don't think people find dumbphones interesting anymore, sadly. Actually, why print at all when you can "Google Cloud Print"?
I am astonished and disappointed that Apple saw fit not to warn the user this could happen. Even Google gets this right; when you connect an Android device, you get a warning and are required to confirm before proceeding. It's inexcusable that Apple doesn't do the same - while I understand and agree with the reasoning behind the existence of the capability, the fact that it's silently enabled is appalling.
 Yes, I know it's hardly news, but (I flatter myself that) I'm generally reasonably savvy, and if I only just found out about it, then there's probably someone else reading this thread who could benefit from it being mentioned. So I mention it.
Is all this stuff only a concern when the device is enrolled via MDM? That seems improbable; the (several) sources I've found on the subject don't mention MDM in any context, but just warn that Exchange connection = remote wipe capability. Similarly, while I had my phone connected via Exchange, I visited the devices page in Outlook Web Access and saw that the "Wipe" option was enabled for the device.
As for ways to use the same device for work and personal, newer Android versions (I believe from Lollipop onwards) have the option to create different user profiles. I think that would work, although I've never actually used them myself...
It's 2016, device manufacturers should have gotten this right already instead of focusing on wallpaper.
If you want a compromise, like me, run Cyanogen without GApps. What other options do you have?
The more I got into Android, the more I realized it was a crap shoot when it came to your question and my core interest in hobbyist computing on phones, just like on computers.
The Play Store alone has enough permissions to collect call metadata.
So even if you find a relatively safe Android version to use, you're getting a heavily gimped experience where a very limited selection of apps work. (Generally, what you find on F-Droid.)
You could try this:
And even though it uses Linux as its kernel, even that doesn't matter.
The API surface that NDK code is allowed to use is so constrained that they could replace the kernel with something else and only OEMs would notice.
I guess the question is how you define a platform or an operating system, but I think that the platform apps on it run off of is a major part of it. And the difference in what will run on AOSP and what will run on Google Play flavored Android is staggering.
Google could be more open. But there isn't any evidence that the parts of Android which are closed are closed in order to stop open, or even competitive semi-closed, Android derivatives.
They are open enough to enable competitors to use the same technology platform, including some very valuable technologies like the ART compilers. If they intended to lock out competitors, they could easily have done so.
Samsung can't compete because there is no viable Samsung ecosystem, and Samsung has not aligned with an ecosystem partner other than Google.
The problem is that Google is creating a walled garden, to prevent competing app stores. Apps developed on Android could be easily listed, downloaded, and installed on any flavor of Android. Samsung, Amazon, etc. could all have their app stores, and developers could script a simple command to update their apps everywhere.
However, by creating a proprietary layer like Play Services, Google has effectively made a proprietary fork of Android the dominant one, requiring different levels of effort to support non-Google phones.
If I correctly understand the "Last modified: June 28, 2016" note they are writing, this feature should not have been available before that date, so it comes either as an update to the GApps or to Android itself.
Who will be the Snowden of the advertising industry?
> Unfortunately, the code in your browser that powers Facebook still knows what you typed—even if you decide not to publish it.* It turns out that the things you explicitly choose not to share aren't entirely private.
Petraeus and Broadwell used fake names to create free webmail accounts, exchanging messages without encryption tools. They would share an email account, with one saving a message in the drafts folder and the other deleting it after reading it.
> your phone number, calling-party number, forwarding numbers, time and date of calls, duration of calls, SMS routing information and types of calls.
that I have made over the past 6 years, 10 months, and 20-something days.
This is a service they provided that I'm using. If the data were not there, I would have gotten rid of my GVoice number, found a service that preserved my data like any self-respecting service provider (I'm really at a loss here: is this not available on iOS?), and complained loudly that Google Caused Data Loss For Users and should probably get hit with a lawsuit for damages.
I cannot believe how one-sided the discussion here is.
The claim that GOOG is capturing all this data on all calls and texts on all Android devices is -completely- unsubstantiated. There's one link to a German-language blog, whose (admittedly, Google-) translated claim seems to be that they tested - a - phone. One. With phone calls. Did they test SMS too? Was there a network connection to Google's servers? The omission could just be the translation, but right now I think this is actually a story with 181 comments about, at most, a phone that also has Hangouts, which pings the server to tell Google to mark its user as being in a call.
My only affiliation with Google is as a user.
Oh, and let's not forget here, that if Google were trying to subtly reinterpret "your use of our services" as "we capture everything from all Android devices", why on earth would they do a time-synchronized, easily-reproducible log when the call starts? (And how would they already have the call duration, blah blah blah at that time?) I can assure you, they're quite familiar with queuing data on the device to be sent later.
The most irritating comment thread I've read in a long time. None of what is being claimed is even remotely cogent, on the face of it, and the reaction is just an endless woe-is-me of "what did you expect, we all saw this coming, what can ya do, the companies these days, they're just not like they used to be" I mean, is this a brexit hangover or what?
Just sprinkle on a little "the NSA does exactly this tho omg" and "metadata === murder" (yup, I think of you as JS fanboys, that's how bad it is) and... yeah, the expletive is necessary. Completely fucking ridiculous.
The software for this project can be found on github:
Candidly, the code there looks pretty simple. Most of it is using pygame for the graphics and then just sending serial commands to the modem. It looks like most of the complicated functionality is just part of the chip.
Anyway, yes, I've thought about doing a phone implementation on top of some kind of Linux distribution and Pyra was high on that list. If someone else has some information or knows of other projects that have done this, please let us know.
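For anyone curious what "sending serial commands to the modem" amounts to, here's a minimal Python sketch. The command strings follow the standard GSM AT command set, but the wrapper functions are my own illustration, not taken from the project's actual code; in the real thing these bytes would be written to the modem over a serial port (e.g. with pyserial), which I've only noted in comments.

```python
# Toy sketch of the AT-command layer a phone UI like that sits on.
# The AT commands are standard GSM; these helper functions are
# hypothetical, not from the actual repo.

def dial_command(number):
    # ATD<number>; -- the trailing semicolon requests a voice call
    # rather than a data call.
    return ("ATD%s;\r" % number).encode("ascii")

def hangup_command():
    # ATH hangs up the current call.
    return b"ATH\r"

def sms_commands(number, text):
    # AT+CMGF=1 selects SMS text mode; AT+CMGS starts a message to
    # the given number; the body is terminated with Ctrl-Z (0x1A).
    return [
        b"AT+CMGF=1\r",
        ('AT+CMGS="%s"\r' % number).encode("ascii"),
        text.encode("ascii") + b"\x1a",
    ]

# With real hardware you'd write these bytes via pyserial, e.g.:
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as modem:
#       modem.write(dial_command("+15551234567"))
```

Most of the heavy lifting (radio, SIM, call audio) really is inside the chip; the host side is mostly string plumbing like this plus a UI.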
While BB gave Canadian authorities access... Google by default does not...
Not to be a tinfoil-hatter but can you be 100% sure of that?
The fact that it had remained undetected because no one had taken a closer look at it for several years does not mean that the questions we ask are unwarranted.
Peter Schaar, former German federal commissioner for data security and - at the time - chairman of the ARTICLE 29 Data Protection Working Party of the EU, says Google’s practices may even violate fundamental rights under German and European law.
The current German federal commissioner for data security is looking into it because of our reporting, as is the commissioner for data security of the German federal state of Hamburg, which has authority over Google because the company's German headquarters are located there.
We will hopefully hear from them shortly on how they assess the situation. Also, a request for a statement from the Irish DPC (Google’s EU headquarters are located in Dublin) is under way.
Researching our original story, we of course asked Google to answer our questions on whose data exactly is collected under the provisions of the privacy policy, what data is collected, for how long, where it is stored, and why Google thinks it can justify this data collection and processing on consent to their terms of service alone.
Now, after the story was picked up by dpa, Germany's largest national news agency, and several influential tech news sites (German language only, Golem: http://www.golem.de/news/ueberwachung-google-sammelt-gesprae... | heise: http://www.heise.de/newsticker/meldung/Google-Wirbel-um-priv...), Google has issued a statement that poses more questions than it answers. We published our story today (https://mobilsicher.de/aktuelles/google-speichert-telefondat...) and also provided an English language version (Google admits it collects telephony log information, doesn't specify which exactly - https://mobilsicher.de/uncategorized/google-admits-it-collec...) because we think this discussion is very relevant for an international community.
We are a small team but will keep reporting on this, trying to clarify the legal situation as well as the technical details. For this we'll be doing more of our own analysis. In case any of you has helpful information, e.g. logs showing telephony data being transmitted to servers, please let us know (m.spielkamp at mobilsicher.de). But please make sure it can be reproduced by us, otherwise we'll have a hard time using it.
I mean, the scale of the violations, it boggles the mind.
But I guess this is a nasty way to implement some unpopular policies, then point the finger at the user, saying he consented to them.
 And by fixing this I mean make it a law.
Unfortunately, the Google Maps app won't remember searches unless you're logged in and share your entire location history. There's just no way to keep a local history of places in Maps.
I'll get rid of Google Maps as soon as Maps.me lets me stream OSM maps instead of downloading packages one by one.
Same for email/calendar/contacts, 10 GB hosting + own domain = couple less beers per year and don't miss Gmail a single bit.
Machine learning/AI active on the phone? By someone with policies like this? I'd rather use a flip phone and carry a small laptop around. Oh wait, maybe all we'll be left with will be Chromebooks sigh...
We detached this subthread from https://news.ycombinator.com/item?id=12016316 and marked it off-topic.
There's nothing that matters here that requires personal disrespect to point out.
Neo900 & Sailfish broke my heart.
But looking at how much people are willing to pay for smart watches, perhaps not.
"Starting with iOS 10, Apple is using Differential Privacy technology to help discover the usage patterns of a large number of users without compromising individual privacy."
To paraphrase, Apple has been collecting telemetry all along and is announcing that they're starting to use a new proprietary algorithm to obfuscate it a bit.
Their privacy page seems to agree with my interpretation:
Though since iOS is closed source, we can't know for certain.
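For anyone wondering what differential privacy actually buys you, the classic building block is randomized response. To be clear, this is only a toy illustration of the general idea; Apple's actual mechanism is proprietary and far more sophisticated, and everything below (names, parameters) is my own sketch.

```python
import random

def randomized_response(truth, p=0.5):
    # With probability p, report the truth; otherwise report a fair
    # coin flip. Any single answer is plausibly deniable.
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_rate(responses, p=0.5):
    # The aggregate is still recoverable, because
    #   observed = p * true_rate + (1 - p) * 0.5
    # so we can invert the noise to estimate true_rate.
    observed = sum(responses) / len(responses)
    return (observed - (1 - p) * 0.5) / p

random.seed(0)
true_rate = 0.3  # fraction of users with some sensitive attribute
reports = [randomized_response(random.random() < true_rate)
           for _ in range(100_000)]
print(round(estimate_rate(reports), 2))  # lands very close to 0.3
```

The point is that whoever aggregates the reports can learn population-level usage patterns while no individual report is trustworthy on its own. Whether that makes the underlying collection acceptable is, of course, the actual debate here.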
Possible discussion from 2011, cannot open the page:
Links from the past:
I am open to argument, but it seems to me that most experiences of terrible problems on the internet come from malware, browser hijacks, spam, basically the actions of criminals.
IMO, the only hope for a "civilian" in maintaining a semblance of online security is to rely upon the large providers of mass computing services.
I do not know of anyone who has had a negative experience as the result of not having a high level of online privacy, at least not from actions undertaken by the government or business.
The closest I know of is cases where individuals have been arrested for possessing or accessing child porn. There are also numerous examples in the media of people involved in the drug trade or sex industries.
I don't think your average citizen is going to be swayed by such accounts that they should undertake costly efforts to protect themselves from such activities. On the contrary, I would hazard a guess that they would see the surrender of such "liberties" to be a net benefit to society.
I'm not saying this is the correct view; I could be persuaded that it's not, but I have yet to be so persuaded. I'm befuddled by the apparent disconnect here.