

On Privacy (or: what Buzz failed to learn from Newsfeed) - prosa
http://blog.rosania.org/on-privacy-or-what-buzz-failed-to-learn-from

======
ekiru
The article claims that "Companies don't understand privacy." It then goes on
to explain how _users_ don't understand privacy. If you are publishing
something in public on the Internet, you should expect people to be able to
see it. Perhaps companies like Google and Facebook should be doing a better
job of educating users that, yes, if you publish something publicly on
Facebook, people will be able to see it, but I don't think it is the companies
who lack the understanding of privacy.

------
calcnerd256
The article is about the problem caused by not being explicit with our privacy
intentions and expecting software to figure it out. If we keep catering to the
expectation that software will solve this problem, we'll miss out on lots of
opportunities, including the opportunity to develop a privacy-aware society.
People are able to learn these things. They will, if you let them. Human
nature is not a fixed value in this problem. It will grow around the best
tools.

~~~
prosa
That's a great point. One thing that I did not explore in the article is what
the implications are for product marketing and adoption. By launching Buzz
with preliminary connections already baked in, Google is really boosting the
chances for uptake.

I agree that people can learn to use the tools, but can we design go-to-market
strategies that do not depend on software continuing to guess who wants to
talk to whom, about what?

------
jasonlotito
Highlight of the article for me:

"* Computers suck at intent. Inferring privacy preferences for new software,
based on prior actions in old software, is a recipe for failure, and a PR
nightmare.

* People assume computers are great at intent. We publish things to much wider
contexts than we intend, and don't notice or care until new products and
features make incorrect inferences based on that."

I think that best describes the problem. General privacy policies might make
sense and, in truth, might be correct. Indeed, Buzz might be following the
policies as set, but at the same time it makes the first mistake. Couple that
with people's assumption of the status quo, and their reluctance to really
work at understanding what they use, and you have a volatile mix.

In the end, this is probably the most important thing of all:

"Inferring privacy preferences for new software, based on prior actions in old
software, is a recipe for failure."

~~~
frossie
_"Inferring privacy preferences for new software, based on prior actions in
old software, is a recipe for failure."_

What I find so _very_ strange is that this isn't obvious by now. What
happened here?

Nobody at Google thought of this?

Someone thought of this but was too scared to mention this?

Someone did mention it but was overruled?

I really can't decide which one of those is worst.

~~~
jasonlotito
Maybe the people at Google didn't see it this way.

"Inferring privacy preferences for a new feature, based on privacy choices in
the existing software, is a recipe for failure."

That would be how to apply this to Buzz.

