
The Filter Bubble: Algorithms as Gatekeepers - acrum
http://www.bigspaceship.com/blog/think/the-filter-bubble-algorithms-as-gatekeepers/
======
mhb
Pariser's best point is that we should be aware that what we are seeing is
actually being filtered, but I think that he is overstating the cost of
personalized filtering and understating its benefit.

There is too much information available to view it unfiltered, and it's unclear
what that would even mean. If Google returned the same search results to
everyone, that wouldn't mean they're unfiltered - it determined what to return based on what
other people wanted to see. You're just seeing things with the same filter as
everyone else. Why is that better? If you think it's better, consider that the
reason there is so much trivial crap on TV is because the same people who
watch it are the ones generating the aggregate filter.

Old media was already filtered. Just because everyone reads the same NY Times
doesn't mean that it doesn't have a ____ slant and that all its subscribers
aren't being affected by that confirmation bias. How many people who subscribe
to the NYT also subscribe to Reason magazine to get the libertarian viewpoint
on current events?

A way this could become better is for the personalizing algorithm to become
smarter. Instead of showing you conservative viewpoints because you click on
conservative opinions more than liberal ones, a smarter filter would make a
higher level assessment of you. So if you click on some liberal ones, maybe it
concludes something about you instead of something more trivial about what you
look at - maybe it concludes that you like to hear other viewpoints and
modifies your filter to better suit that conclusion.
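A minimal sketch of that distinction (the viewpoint labels and the entropy threshold are made up for illustration): a naive filter just serves whichever viewpoint the user clicks most, while a "higher-level" filter first asks whether the click history itself shows a taste for varied viewpoints.

```python
from collections import Counter
import math

def naive_filter(clicks):
    """Serve only the viewpoint the user clicks on most often."""
    return Counter(clicks).most_common(1)[0][0]

def diversity_aware_filter(clicks, threshold=0.8):
    """Make a higher-level assessment: does this user seek out varied views?

    If the entropy of the click history is high relative to the maximum
    possible, conclude the user likes hearing other viewpoints and keep
    the mix broad; otherwise fall back to the naive dominant-view filter.
    """
    counts = Counter(clicks)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    if entropy / max_entropy >= threshold:
        return sorted(counts)       # serve every viewpoint seen
    return [naive_filter(clicks)]   # serve only the dominant viewpoint
```

With a roughly even history (3 conservative clicks, 2 liberal) the entropy ratio is about 0.97, so both sides stay in the mix; with a 9-to-1 history it drops to about 0.47 and only the dominant side survives.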

------
mhb
Suggest reading the original instead of this commentary on it:
<http://www.nytimes.com/2011/05/23/opinion/23pariser.html>

------
spacemanaki
I'm really curious to read any discussion about this on HN, because I think
the other times Eli Pariser's TED talk and book have been posted, there hasn't
been much discussion. It's been circulating in my family (mostly among non-
technical members) and stirred up some amount of alarm. I'm
skeptical of this just because the evidence doesn't seem that damning.

Has anyone read the book? Is there significant evidence that this is a serious
issue? I completely believe that this is beginning to happen, and would
believe that FB does it, since I don't spend enough time on the news feed to
notice if it were happening, but the example in the TED talk and in the NYTimes
column linked to by mhb is a Google search for "Egypt" earlier in the year,
and two people seeing drastically different results: "Two people who each
search on Google for “Egypt” may get significantly different results, based on
their past clicks."

Aren't there other reasons for two people being served different results by
Google? I've often read that they do heavy A/B style testing and stuff like
that, which seems like it could explain some discrepancies.

So what does HN think? Is there stronger evidence here than comparing a few
Google searches?

~~~
jerf
The problem with the problem is that it just... is. You can't consume all the
information you could possibly come into contact with. You must filter it
somehow. All filters will induce biases. Any filter that claims to somehow be
unbiased just means that it matches the biases of whoever created the filter
so thoroughly they can't see it, but it's still a filter. If you decide to
periodically randomize things, that's part of your filter. If you give up,
that just means you're filtering everything out.

You can discuss desirable characteristics of filters, but you rapidly run up
against the twin problems of how your characteristics are subjective, and the
fact that you don't really fully understand how filters will affect you anyhow
so it's all speculation. Perhaps that latter point will go away as we have more
experience but it's going to be hard to share those experiences with each
other; they serialize into English poorly.

I think it seems like an interesting problem at first, it's something you
should know about and it's worth upvoting the occasional mention of it... but
there is really very little to say that isn't _really_ about some other topic
entirely (epistemology, ethics, validity of other philosophical or political
viewpoints, etc.). There's not all that much interesting stuff to say about
it, in the end.

~~~
spacemanaki
I guess I was approaching it from a technical debunking perspective, but what
you say makes a lot of sense. I'll ponder this.

------
ericd
Hm, if people estimate the popularity of something by the relative frequency
with which they encounter it, and if they end up seeing only stuff that's
tailored to them, then they may start believing that everything they like is
mainstream and "correct" as a result. People may become less tolerant of stuff
far outside of their comfort zone and end up becoming more polarized, which is
not what I had imagined would be the effect of the internet.
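A toy simulation of that feedback (the 50/50 split, the bias factor, and the feed model are all assumptions for illustration): the population is split evenly, but a personalized feed over-samples items matching the user's own side, so a frequency-based popularity estimate drifts away from the truth.

```python
import random

def perceived_mainstream_share(feed_bias, actual_share=0.5, n_items=10000, seed=0):
    """Fraction of a personalized feed that matches the user's own view.

    The true population holds the user's view with probability actual_share,
    but the feed over-samples matching items by a factor of feed_bias. A user
    who estimates popularity from what they encounter reads this fraction as
    how "mainstream" their own view is.
    """
    rng = random.Random(seed)
    # probability a shown item matches the user's view, after reweighting
    p_match = (actual_share * feed_bias) / (
        actual_share * feed_bias + (1 - actual_share))
    seen_matching = sum(1 for _ in range(n_items) if rng.random() < p_match)
    return seen_matching / n_items
```

With no bias the estimate stays near 50%; with a 3x bias the same user sees their own side about 75% of the time and, by the frequency heuristic, concludes their view is the mainstream one.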

