
I have to admit, I'm really starting to like the direction that Apple is heading despite being previously disenchanted. I only wish that they would go ahead and put everything under a free software license, since they're in the business of selling hardware that's coincidentally bundled with their software.



> I only wish that they would go ahead and put everything under a free software license, since they're in the business of selling hardware that's coincidentally bundled with their software.

That's never going to happen. Apple sells a 'User Experience', not just hardware; having a complete and mostly closed product is an inevitable consequence of that, and the number of Linux users who would buy a MacBook isn't a large enough part of the market for them to worry about.

With that said, I've had a MacBook Pro, and it was pretty much a better piece of hardware (at least as far as build quality goes) than any other notebook I've used.


In what way does changing the software license impede the user experience? No other company would have the proverbial balls to straight up copy their software either.


Have you heard of a company called Xiaomi? They have the balls to straight up copy Apple's industrial design, so I'm pretty sure they would have no problem copying the software.


>No other company would have the proverbial balls to straight up copy their software either.

How can you say this looking at the hardware landscape?


To paint another viewpoint, Apple initially went all gung ho about privacy and wanted to make not collecting data a big play (and fairly so, full respect to them).

The recent WWDC obviously shows a big shift towards AI and ML applications within the company. Some things are possible on the device, but many neural nets just cannot reasonably be served from an iPhone. Hence the move towards more data collection. I really wish they'd give out more information here. Until then, I'm not sure how much they are actually collecting after their realization that they need the data to do AI well.


They talked a bit more about differential privacy in the State of the Union. Basically, they hash the data and add noise, and by collecting data from a bunch of people, that noise gets averaged out. They also limit the number of samples (over a relatively short period of time) they can get from a single person, so they won't be able to identify them.
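
For illustration, here's a toy randomized-response sketch of that local-noise idea (the mechanism and parameters here are my guesses at the flavor of technique, not Apple's actual scheme):

    import random

    def randomize(bit, p_truth=0.75):
        # Report the true bit with probability p_truth; otherwise flip a coin.
        # Any single report is plausibly deniable.
        if random.random() < p_truth:
            return bit
        return random.random() < 0.5

    def estimate_rate(reports, p_truth=0.75):
        # E[report] = p_truth * true_rate + (1 - p_truth) * 0.5, so invert it.
        observed = sum(reports) / len(reports)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    # 100,000 users, 30% of whom actually have some attribute.
    truth = [random.random() < 0.3 for _ in range(100_000)]
    reports = [randomize(b) for b in truth]
    print(estimate_rate(reports))  # ~0.3: the noise averages out across users

The aggregate rate comes out accurately even though no individual report can be trusted.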


Interesting. That's a smart way to collect data without too much noise flooding the dataset. I need to watch this State of the Union.


State of the Union is what the WWDC keynote used to be before the keynote started being watched by the press and the public: much more technical detail, and information on the underlying frameworks rather than user-visible features.


https://developer.apple.com/videos/play/wwdc2016/102/

You can find all the videos from WWDC 2016 some time after the session is done. I usually check the next day. They have the videos for several previous WWDCs up as well.


At some point, regardless of adding noise, you're definitely losing your privacy. I'd be happy with an "opt-out" feature that I knew worked (which, as far as I can see, requires it to be open source). I didn't watch WWDC; perhaps they mentioned this.


I agree opt-out is definitely something that should be deployed alongside differential privacy, but what makes you so sure that it stops working "at some point"? If the noise means a specific query against one user's information has a significant chance of being wrong, how does that not equate to privacy? You can add a lot more noise than you might imagine if you know the kind of analysis you'll be doing with the data; for example, a lot of statistical techniques are constructed to be mostly immune to Gaussian noise, since it's very common in some kinds of data.
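
To make that noise-immunity point concrete, here's a minimal sketch (my own toy example, not anything Apple described): even heavy zero-mean Gaussian noise on each individual value barely moves the population mean.

    import random
    import statistics

    true_values = [random.uniform(0, 10) for _ in range(50_000)]
    # Zero-mean Gaussian noise with a standard deviation far larger than
    # the data's actual spread.
    noisy = [v + random.gauss(0, 25) for v in true_values]

    print(statistics.mean(true_values))  # ~5.0
    print(statistics.mean(noisy))        # still ~5.0: the noise cancels out
    print(true_values[0], noisy[0])      # but one user's value is unrecognizable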


The whole point of collecting the data is to predict the actions or information needs of individual users. That in itself is a privacy issue.

If a recommender system for iTunes can predict the likelihood of me appreciating movies that contain violence against women, that information could be subpoenaed when I am falsely accused of having strangled my girlfriend.

I appreciate that Apple is trying to protect our privacy where they can. But if we want them to make predictions about our behavior, we have to be aware of the fact that we are necessarily giving up some privacy.


You're misunderstanding where this is to be used. It is specifically not for things like iTunes suggestions, where it would be useless. It's for situations where they want to get aggregated metrics without collecting identifiable information. The obfuscation can be performed by the client so that they never have a database on the server with accurate (at the specific user level) data.


I don't think I am misunderstanding (although I'm not completely sure about that). My point isn't about iTunes. My point is about the purpose of data collection. If that purpose is predicting our actions, then that in itself is a privacy issue.

I understand that the database Apple wants to build does not contain accurate information about individual users. But if that database allows them to make predictions of our behavior, then there is a privacy issue. If the purpose is not prediction, then what is it?


It could be a number of things, but one possibility is identifying broad correlations between metrics. Since you can't trust the accuracy of the individual metrics, you will have a limited ability to apply the correlation to individual users, but if you use the right kind of noise, aggregated conditional probabilities may survive.

So Apple can (for example) predict that listening to band A means you are likely to like band C, and then send a list of correlations to your device so the predictions can be made there by examining your library locally. A more probable use is analytics for marketing purposes. Another is selling just these correlations and other aggregate statistics to other parties; that's actually how Mint makes money.
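
As a toy illustration of a correlation surviving per-user noise (assuming a randomized-response-style mechanism as a stand-in, not necessarily what Apple ships):

    import random

    P_TRUTH = 0.75  # probability a client reports a true bit

    def randomize(bit):
        return bit if random.random() < P_TRUTH else random.random() < 0.5

    def debias(observed_mean):
        # E[report] = P_TRUTH * true_mean + (1 - P_TRUTH) * 0.5
        return (observed_mean - (1 - P_TRUTH) * 0.5) / P_TRUTH

    # 40% listen to band A; 70% of those also like band C, vs. 20% of the rest.
    users = []
    for _ in range(200_000):
        a = random.random() < 0.4
        c = random.random() < (0.7 if a else 0.2)
        users.append((randomize(a), randomize(c)))

    n = len(users)
    mean_a = debias(sum(a for a, _ in users) / n)
    mean_c = debias(sum(c for _, c in users) / n)

    # The two bits are randomized independently, so with p = P_TRUTH, q = 1 - p:
    # E[ra*rc] = p^2*E[ac] + (p*q/2)*(E[a] + E[c]) + q^2/4. Invert for E[ac].
    p, q = P_TRUTH, 1 - P_TRUTH
    joint = sum(a and c for a, c in users) / n
    mean_ac = (joint - p * q / 2 * (mean_a + mean_c) - q * q / 4) / (p * p)

    print(mean_ac / mean_a)  # estimate of P(likes C | listens to A), ~0.7

The server recovers the aggregate conditional probability without ever trusting any individual's bits.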


>So Apple can (for example) predict that listening to band A means you are likely to like band C

And how is that different from my iTunes example?


I used "you" incorrectly, my bad. They can predict that people who listen to band A are likely to like band C, but their data for whether you listen to band A still has a significant chance of being wrong.


Yes, the data has a significant chance of being wrong. But it is useful only insofar as it supports a prediction with a probability of being right that is greater than 0.5.

That's what makes the data useful and that's what makes it a privacy issue at the same time.


It doesn't have to support that prediction in specific instances, just in a general trend, and random noise tends to average itself out in a lot of cases. There are lots of different distributions with the same averages, the same conditional probabilities, etc., built from wildly different data. If you have some mathematical proof that you cannot reach one of these other distributions by injecting random noise to mask individual contributions, then please write a paper on it! But to my knowledge, Cynthia Dwork's work, and that of others, still stands. There is definitely no simple, common-sense reason that it doesn't work.

How does sending the same list of conditional probabilities for liking pairs of bands to everyone's device and then having the device pick out the ones actually pertinent to your library compromise your privacy?
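
For what it's worth, that on-device step could be as simple as this sketch (the correlation list and its format are invented for illustration):

    # Aggregate correlations computed server-side from noisy population data,
    # then shipped identically to every device: (band you listen to, suggestion).
    correlations = [
        ("Band A", "Band C"),
        ("Band B", "Band D"),
    ]

    # The library never leaves the device; matching happens locally.
    local_library = {"Band A", "Band E"}

    suggestions = [rec for band, rec in correlations if band in local_library]
    print(suggestions)  # ['Band C'] -- derived locally, nothing sent back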


I don't doubt the validity of Dwork's work. I think we're talking past each other.

What I'm saying is that if Apple keeps data on its servers that is sufficient to predict some of my actions or likes with any accuracy greater than 50%, then that is a privacy concern.

But if you're saying that the data in Apple's database does not have any predictive power on its own, then I agree that it is not a privacy concern.

In that case, my device would have to download some of Apple's data and combine it with data that resides only on my device in order to make a prediction locally on my device.

If that's how it works then I have no concerns.


I believe that's how it works.

They even limit the number of samples they get from a specific person so they can't filter out the noise for that person and get their individual response.
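
A client-side contribution budget for that could look like this sketch (the cap and window are invented numbers, just to show the idea):

    import time

    MAX_REPORTS = 5                  # hypothetical cap on contributions...
    WINDOW_SECONDS = 7 * 24 * 3600   # ...per rolling week

    sent_at = []  # timestamps of reports already submitted by this device

    def may_submit(now=None):
        # Forget timestamps outside the window, then check the budget.
        now = time.time() if now is None else now
        sent_at[:] = [t for t in sent_at if now - t < WINDOW_SECONDS]
        if len(sent_at) >= MAX_REPORTS:
            return False
        sent_at.append(now)
        return True

With too few reports from any one person inside a window, the server can't average away that person's noise.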

But keep in mind that Apple will have records of all your iTunes rentals and purchases, at least for billing purposes. However, at least in the US there's a law about keeping that data private (because of Robert Bork).

https://epic.org/privacy/vppa/


Differential Privacy is now $AAPL's licence to collect more "anonymised" personal data and benefit from it. Such personal data collection is not an evil anymore. I'll have some of the "PR" they're having!


> Apple initially went all gung ho about privacy and wanted to make not collecting data a big play

My impression has always been that Apple does not collect data that can lead to you being personally identified. I never got the impression that "Apple does not collect data" at all.


They know it's one of the ways to differentiate themselves from the other big companies: offer privacy, and market it in such a way that it's impossible for the other big players to pull the classic "me too" approach.


Who knows, maybe one day that will be the reality. They are taking a lot of steps in the right direction towards facilitating development on their platform; in my opinion, it doesn't seem too far-fetched to believe that one day they might open up their software too.


Fuck Apple. The only thing they are programmed to care about is the bottom line and getting more users. That programming is complicated and unlikely to change to suit the consumer's needs or desires.


You say that despite this privacy push being a direct response to consumer sentiment. Apple knows well that as tech moves into wearables and the Internet of Things, privacy concerns skyrocket. My grandparents won't buy things online. Soon my generation will be those old and anxious curmudgeons, unless our concerns are eased.

Do they care about their bottom line? Of course. It's for that very reason they are investing the time now to secure the trust of generations of consumers.


This 'push' is just more unprovable, unverifiable marketing BS, just enough for the type of audience they attract. Until it's all opened up under free software licenses, and the hardware is made to accept any software the user wants to run on it (like it should be if you bought it), it's nothing more than that.


Agreed. Apple is only doing as much security as it reasonably needs to satisfy customers. Most customers don't understand the various types of suffering (and suffering risk, I might add) that software can cause, but large companies do. Large companies end up "making decisions" for users en masse. That's fine for button placement, but it falls apart logically when the discussion is about trusted infrastructure. You will note that liquidse's response is conflicted; those rationalizations are the indicators of dissonance in the privacy debate.



