
Alex Payne — Fever and the Future of Feed Readers - GVRV
http://al3x.net/2009/07/18/fever-and-the-future-of-feed-readers.html
======
DannoHung
Here's my rule: if a feed emits more than about 5 items per day, it's probably
a shitty feed. More importantly, anything that is actually interesting on such
feeds will be repeated here on Hacker News, or on Digg, or reddit, or on one of
the feeds that I _do_ follow.

I _largely_ use my feed reader to make sure I don't miss stuff from the sites
that publish much more irregularly (and so that I don't have to periodically
check them).
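
That rule is easy to automate. A minimal sketch in Python, using only the standard library; the sample feed and the 5-items/day threshold are invented for illustration, and real code would fetch the XML over HTTP:

```python
import email.utils
import xml.etree.ElementTree as ET

# Hypothetical sample feed; in practice you would fetch the real XML.
SAMPLE_RSS = """<rss version="2.0"><channel>
<item><pubDate>Sat, 18 Jul 2009 09:00:00 GMT</pubDate></item>
<item><pubDate>Sat, 18 Jul 2009 12:00:00 GMT</pubDate></item>
<item><pubDate>Sun, 12 Jul 2009 08:00:00 GMT</pubDate></item>
</channel></rss>"""

def items_per_day(rss_xml, threshold=5.0):
    """Return (rate, is_noisy): average items/day over the feed's window,
    and whether it exceeds the 'probably a shitty feed' threshold."""
    dates = [email.utils.parsedate_to_datetime(e.text)
             for e in ET.fromstring(rss_xml).iter("pubDate")]
    if len(dates) < 2:
        return 0.0, False
    span_days = max((max(dates) - min(dates)).total_seconds() / 86400, 1.0)
    rate = len(dates) / span_days
    return rate, rate > threshold

rate, noisy = items_per_day(SAMPLE_RSS)
print(round(rate, 2), noisy)  # 3 items over ~6 days: well under the threshold
```

A real reader would run this over each subscription's recent history and flag the noisy ones for unsubscribing.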

~~~
TomOfTTB
I disagree to a certain extent. I see your point on using a feed reader for
irregular sites, but I think you're just as likely, if not more likely, to
accidentally miss something on a site like HN or Digg because there's such a
high volume of posts.

The fact that you're trusting "the crowd" to do your filtering at all presents
a problem, because even in a community of like-minded people like HN we all
have our own special interests which aren't necessarily shared by the crowd.
(For example, I program in .Net, but most HN readers don't, so a lot of good
.Net stories never make it out of the "New" section.)

So in the end that's the problem the author is laying out: how do you take a
large number of messages and make the feed reader as effective as it is with
the small sites? I don't know the answer either, but I think it's still a huge
problem left on the table.

~~~
DannoHung
My rationale is this: high volume means general interest, and I do not want to
be the filter feeder for general interest. Importantly, I don't care if I miss
a few things; I'm eliminating the vast majority of false positives. I run the
risk of a few false negatives, but I'm already roughly at my limit for
dealing with the true positives anyhow, so it doesn't really matter: nothing I
miss is going to cause me to lose a hojillion dollars or anything serious.

------
yason
Aside from some special-interest forums, I've ended up reading just HN.

(Warm thanks to you guys for being my filtering function.)

One by one, I dropped various news sites from my Mozilla toolbar. I must say I
never really bought into the RSS boom, as the best sites I read were pretty
simple and already looked much like a beautified RSS layout.

First went Slashdot. This was years ago, about the time Reddit started. Reddit
lost it last year: too congested with useless links and a low signal-to-noise
ratio. I've been dropping many of my national news sites, too -- tech news and
otherwise -- leaving only those of the best quality. Then I dropped them as
well, because not much in the mainstream really interests me. This was a big
thing I had to accept. I really don't have to know the details of some crisis
5000 km away, no matter how civilized or educated knowing them may be
considered; only if I hear about a major change in the situation will I look
up more news.

Last spring I dropped reading the website of the main newspaper in Finland.
The news was mostly uninteresting except maybe for some local things. I still
check their news listing maybe once a week, but I rarely read any items. I
realized that I'll hear about anything important from other people anyway. So
I'm effectively using my friends and co-workers as a live filter.

TV and TV news I dropped 10 years ago.

So HN it is. That, plus a few web forums and mailing lists, but those provide a
mix of both information and connections to people. People are usually more
interesting than the latest "facts" about random subjects.

------
yurylifshits
There is another issue: a lot of quality content is not available in the form
of feeds: event listings, shopping, modern art, or the latest episodes of LOST
on Hulu.

Thus, an interesting opportunity is extraction + feed reader: you first
extract and format quality content, then serve it to the user. A few days ago
I released a demo: Photo Reader. Go check it out: <http://semabox.com>

I am actively working in the area of "semantic news". Talk to me if you are in
the same field.

Yury Lifshits yury@yury.name

~~~
jrussino
If you want the most recent episodes of LOST, this should work:

<http://bit.ly/gJtxL>

It may not be so useful right now, since the season is over.

DISCLAIMER: I currently work for this company (www.truveo.com). We run a video
search engine.

AFAIK, Hulu doesn't provide episodes of LOST; they're only available on ABC's
website.

~~~
yurylifshits
Cool!

Does Truveo have well-formatted results in XML (title-link-thumbnail-
embedcode)?

~~~
jrussino
Yes, I believe so. We use Yahoo's media RSS (MRSS) standard for the feeds on
the site: <http://video.search.yahoo.com/mrss>

You should always get title/link/thumbnail/source, plus embed code and other
metadata (description, tags, runtime, etc.) if they're available.
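
Media RSS items in that shape can be parsed with nothing but the standard library. A sketch, where the sample item is invented for illustration rather than taken from a real Truveo feed:

```python
import xml.etree.ElementTree as ET

# Yahoo's Media RSS namespace.
NS = {"media": "http://search.yahoo.com/mrss/"}

# Hypothetical MRSS item; a real feed would follow the same shape.
SAMPLE = """<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
<channel><item>
  <title>LOST S05E16</title>
  <link>http://example.com/watch/123</link>
  <media:thumbnail url="http://example.com/thumb/123.jpg"/>
  <media:content url="http://example.com/embed/123"/>
</item></channel></rss>"""

def parse_mrss(xml_text):
    """Pull title/link/thumbnail/embed URL out of each MRSS <item>."""
    items = []
    for item in ET.fromstring(xml_text).iter("item"):
        thumb = item.find("media:thumbnail", NS)
        content = item.find("media:content", NS)
        items.append({
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "thumbnail": thumb.get("url") if thumb is not None else None,
            "embed": content.get("url") if content is not None else None,
        })
    return items

print(parse_mrss(SAMPLE))
```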

------
csbartus
Sooner or later you'll have to create Your Personal News Agency. A reader or
any aggregator is just the INPUT part of the job, but how do you process and
make available the generated knowledge, the OUTPUT? In a wiki or a blog post?
Neither is semantic, so they're inappropriate for storing knowledge.

Future Readers must focus on the output part of the process otherwise there
will be so much input without proper output that everything will look like
noise.

Signal vs. noise is, in fact, your I/O ratio.

~~~
hachiya
When reading information on the web or through my feed reader, newsbeuter, I
use Zim (<http://zim-wiki.org>) to take notes on whatever I think I may want
to refer back to in the future. I place notes in an appropriate category,
apply a datestamp, and optionally add some tags.

Zim automatically indexes everything, so when I need to find out how to apply
that cool Rails trick I read about 3 weeks ago, it's easily found.

So, yes, output is very important, and taking notes helps me get a lot more
out of my online reading than if I simply tried to make my brain act as a
sponge.

------
aj
I started using Snackr a few months ago and have been extremely pleased with
it. It shows items from a list of feeds I can set up, but the items are
displayed randomly. It has a ticker-style display, no unread counter, and a
nice clean interface.

The only drawback? Development seems to have died on the project, which sucks.
So I'm just carrying on with the last stable release, which creaks along.

------
rythie
I would agree there is not much point subscribing to TechCrunch, Mashable, et
al., but there are other types of content, like:

- friends' blogs

- vanity/competitor searches

- niche blogs that you follow

[Warning: blatant plug coming] I use <http://friendbinder.com> to follow my
friends' RSS, Twitter, Facebook, etc. in the same place. I don't use a feed
reader for big news sites anymore; they are just too noisy.

------
nir
I've written a similar URL-matching-between-feeds system for my own use:
<http://crowdwhisper.com/crowd/1/items> (the output is also fed to Twitter:
<http://twitter.com/crowds>)

The code is in Rails, and could use some bugfixing, refactoring and general
TLC :) If anyone is interested, email niryariv@gmail.com for SVN access.
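
The core idea of a URL-matching-between-feeds system is small enough to sketch in a few lines. This is a Python illustration of the same concept, not nir's Rails code; the normalization rules and the two-feed threshold are assumptions:

```python
from collections import defaultdict
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Strip query string, fragment, and trailing slash so minor
    variants of the same link match."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc,
                       parts.path.rstrip("/"), "", ""))

def cross_feed_links(feeds, min_feeds=2):
    """feeds: {feed_name: [item URLs]}.
    Return URLs that appear in at least min_feeds different feeds."""
    seen = defaultdict(set)
    for name, urls in feeds.items():
        for url in urls:
            seen[normalize(url)].add(name)
    return {u: sorted(names) for u, names in seen.items()
            if len(names) >= min_feeds}

# Invented example data: one story shared between two feeds.
feeds = {
    "hn": ["http://example.com/post/1?ref=hn", "http://example.com/post/2"],
    "reddit": ["http://example.com/post/1", "http://other.org/a"],
}
print(cross_feed_links(feeds))  # only post/1 appears in both feeds
```

Links surfaced by multiple independent feeds are exactly the "crowd agrees this matters" signal the article is after.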

------
aneesh
One of my biggest problems with Google Reader is those annoying unread counts.
They make it feel like email you have to get through ... so I quit Google
Reader about a year ago and started using Twitter as my new "Feed Reader". A
stream you can check out when you want, and ignore when you don't, is really
nice. Plus, it's easier to whip through 140-character blurbs than paragraphs
of text.

~~~
lunchbox
To solve your problem: in Google Reader, click the dropdown in the
Subscriptions pane and click "Hide unread counts".

~~~
smhinsey
Nice tip, thanks.

You'll probably want to do this for the "All items" section as well; that'll
take care of the unread count in the page's title.

------
rythie
RSS seems well suited to feeds that update less than once a week, since I'd
probably forget to check those sites manually. For example, my blog's posting
rate is about one post every two months.

If a site updates at least once a day I'll probably remember to check it.

