*edited for spelling >_>
Also: any time I consider using identi.ca again, it's with this view. Even if it doesn't gain popularity, I can at least provide my own guarantees that it will be around (so long as some few others use it) even when Facebook or Twitter go away.
That's exactly the idea. It's a bootstrap. You use the systems that are in place now to boot up the successor.
It's never either/or. You use everything that works, that has people on it that you need to reach, as long as they welcome you.
This has been the problem with Google Plus: they don't have an API that lets you post to it. But Twitter does. The people who follow me on Twitter have no idea that I'm not really on their network. In every sense that matters, I am.
But when Twitter goes down, I keep posting, and people who are hooked in my feed still get the new stuff.
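The cross-posting setup described above can be sketched as a small relay: your own feed is the source of truth, and each new entry is mirrored into Twitter via its v1.1 statuses/update endpoint. The entry data below is made up, and the OAuth signing and actual HTTP call are omitted:

```python
from urllib.parse import urlencode

# Real v1.1 REST endpoint; the OAuth 1.0a signature it requires
# is left out of this sketch.
TWITTER_UPDATE = "https://api.twitter.com/1.1/statuses/update.json"

def mirror_entry(entry_title, entry_url, limit=140):
    # Build the tweet body from a (hypothetical) feed entry,
    # truncating to the classic 140-character limit.
    text = "%s %s" % (entry_title, entry_url)
    if len(text) > limit:
        text = text[:limit - 1] + "…"
    return TWITTER_UPDATE, urlencode({"status": text})

url, body = mirror_entry("New post: why feeds matter", "http://example.org/p/42")
print(url)
print(body)
```

The point is that the relay is a one-way valve: when Twitter is up, followers there see everything; when it's down, the feed itself keeps publishing.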
Isn't this exactly the problem? Keeping a service alive in perpetuity is missing the point because it's the communication that's important, not the service upon which the communication flows... Human communication is inherently temporally limited. If you ignore that, you're not gonna be heard now, which makes you even less likely to be heard in the future.
EDIT: I guess I'm just saying: nobody reads old messages on this sort of service. If you're not targeting "now", you're starting off with an immense disability.
 - A microblogging site that uses OStatus; we integrate with identi.ca. Code here: https://github.com/hotsh/rstat.us
"8. BTW, it has to hook into Twitter. Key point. The thing that's kept the other networks from working is that they don't peer with Twitter. Luckily this is in keeping with the new Twitter mandate of putting stuff in but not taking stuff out. Great. If you want to read what someone says on Twitter you have to use Twitter. Not a big deal it turns out."
I don't use Twitter but I see celebrities and T&A a lot more when I log out of my yahoo webmail than when I'm on Facebook. Real, honest to god porn, for that matter, has stayed on the "open web". There are still, uh, forces pushing people that way.
Also, as far as I know the masses have never been on Twitter.
I don't know if you're going to entice many developers with that combination. What we need is a simple protocol (not an API), maybe JSON/MessagePack based with UDP signaling, that makes it easy to build distributed Twitter-like services, while also reachable by HTTP. The developer experience needs to be easy enough so that a distributed "hello world" service can be built in less than 5 minutes. It needs to come with a cross-platform P2P server component, and client libraries for a few popular languages. Make the barrier to entry so low that any dev can do "apt-get install <fancy-distributed-system>" to get the server/client bits.
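A minimal sketch of what that "hello world" could look like, assuming a made-up wire format (one JSON object per UDP datagram; the field names here are illustrative, not any real spec):

```python
import json
import socket

def make_status(user, text):
    # Hypothetical wire format: one JSON object per UDP datagram.
    return json.dumps({"type": "status", "user": user, "text": text}).encode("utf-8")

def parse_status(datagram):
    msg = json.loads(datagram.decode("utf-8"))
    if msg.get("type") != "status":
        raise ValueError("not a status message")
    return msg["user"], msg["text"]

# Loopback round trip: one node signals another over UDP.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2)
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(make_status("alice", "hello world"), ("127.0.0.1", port))

user, text = parse_status(recv.recvfrom(4096)[0])
print(user, text)  # alice hello world
```

A real service would still need HTTP reachability and discovery on top of this, but the datagram layer itself fits in a screenful of code, which is roughly the barrier to entry being argued for.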
The average user doesn't know DNS (username.twitter.com) as well as they do email addresses (email@example.com) and URIs (twitter.com/username). If this is going to gain adoption, it needs to prioritize UX familiarity over technical superiority. Everyone has an email address, so use that for identification, but don't clutter people's inboxes by using them to transport or store app data.
Make it super easy to federate with existing walled gardens by providing open-source implementations of server components so Twitter, Facebook, Google, etc. can get up and running quickly.
This package, and the underlying OStatus protocol, is where organizations that want to retain control over their own reliability and namespace should be looking.
Using DNS to identify users is unwise, in my opinion, because it means that people won't own their own on-line identities -- they'll have to rent them and for real money, too. And if some users are assigned a sub-domain on a shared domain, their identity won't be portable.
I think it is worth doing a little extra work to make a user name system that doesn't have those problems.
DNS has the virtue of being here now, being tested and refined over multiple decades, and offering a choice between subdomains for free or portability for a nominal annual cost. It's not perfect but it's good.
send email to firstname.lastname@example.org
=> look up example.org
=> pull its MX record
=> route the email to the mail host that record names
 - https://code.google.com/p/webfinger/wiki/WebFingerProtocol
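The WebFinger side works the same way: an email-shaped identifier resolves to a discovery URL on the user's own domain. A sketch of the lookup-URL construction, using the `/.well-known/webfinger` form later standardized in RFC 7033 (the linked wiki describes an earlier host-meta flow):

```python
from urllib.parse import quote

def webfinger_url(acct):
    # "bob@example.org" -> discovery URL on example.org
    user, _, host = acct.partition("@")
    return "https://%s/.well-known/webfinger?resource=acct:%s" % (host, quote(acct))

print(webfinger_url("bob@example.org"))
# https://example.org/.well-known/webfinger?resource=acct:bob%40example.org
```

The response is a JSON document of links (profile page, feed URL, and so on), which is exactly the "use the email address for identification, but not for transport" split described above.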
In this context we're talking about what it takes to avoid relying on DNS (because DNS is a centralized, highly politicized system). Your solution would still rely on DNS.
In a truly decentralized system, you're not going to be able to have readable unique names without collisions. Why? Consider what happens when the network splits and people on either side of the split set up the same username. How do you reconcile this when the network rejoins? How do you know the network has split, versus a single node going offline (if you wanted to do something like suspend new usernames until the network was whole again)?
The concern here is that whoever is currently leasing the domain name has authority over users' identities. A better system would let users own their identities outright.
"In a truly decentralized system, you're not going to be able to have readable unique names without collisions. Why? [....]"
This is a well explored topic. A good place to start might be to look up "Zooko's Triangle" and then go forward from there towards various ways people have figured out to deal with such problems. (Zooko's wasn't the last word.)
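One corner of that design space, sketched here with stand-in key material: make the global name self-certifying (derived from the user's public key, so there's no registry and no cross-partition collision), and keep human-readable names as purely local "petnames":

```python
import hashlib

def global_id(public_key_bytes):
    # Self-certifying identifier: a hash of the key, not a registered name.
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

# Human-readable names live only in each user's local petname table.
petnames = {"alice": global_id(b"---alice's public key (stand-in)---")}

# A network split and rejoin cannot create a collision: the same key
# always yields the same id, and distinct keys (almost surely) do not.
assert global_id(b"key-a") != global_id(b"key-b")
print(petnames["alice"])
```

This trades global readability for global uniqueness, which is precisely the tension Zooko's Triangle names.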
Besides, the general public can't even use HTML well, so what chance do they have with XML?
The other compelling thing about Twitter is the 140-character limit. Blogs let people train-of-thought-rant for pages before making their point (if at all). Tweets, on the other hand, force people to think and condense before writing. That's an awesome feature for readers. Also, Twitter makes it very easy to follow and unfollow.
I agree that expecting people to muck around with DNS and even RSS wouldn't gain wide-scale adoption.
Many of the same businesses are using Twitter to get their message out far more effectively.
What if the user wants to have more than one feed? Or wants, sometime down the road, to have routable resources that are not feeds?
Wouldn't it be better to say that a user name is a user name and that a default feed name can be automatically built given just the user name?
In the Twitter API, can't you get something like, say, a user's avatar image by keying off the user name?
So, even on Twitter, a user name maps to multiple different things -- not just a single feed.
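That mapping could indeed be mechanical. A sketch of the convention suggested above, where the user name alone determines a default feed plus the other routable resources (all hosts and paths here are hypothetical):

```python
def resources_for(user, host="example.org"):
    # The user name is the only input; everything else is derived.
    base = "https://%s/%s" % (host, user)
    return {
        "profile": base,
        "feed":    base + "/feed.atom",   # the auto-built default feed
        "avatar":  base + "/avatar.png",
    }

print(resources_for("alice")["feed"])
# https://example.org/alice/feed.atom
```

A second feed, or a non-feed resource added later, is then just another derived path rather than a change to the naming scheme.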
Are you just trying to post/read feeds as an individual? If that is the case the open alternative does not provide you a benefit, and you will probably find it less convenient.
On the other hand, what if you have many users under one org? Or you're just a member of a division of another org, which has many divisions and subdivisions, and they really want to be sure that your message gets out there without relying on a third party. The public relies on those messages -- for example, a fire department posting about wildfires. Those are the use cases that will probably see benefits.
If you have the infrastructure to handle a setup like that, you should already have an internal email server.
EDIT: After your edit, I think I've found where the disconnect is from my perspective. The problem with this method is that discovery (arguably the most important part of what Twitter provides) is still reliant on a third party's index.
From a user's perspective, Twitter provides 3 key services from one URL:
1. A unified feed for everyone you follow (this proposal also does this).
2. An easy way to post/host content (this proposal does not deal with this).
3. An easy way to discover new people to follow (this proposal also does not deal with this).
Out of those three, I would argue that the second and third are the most important. The problem isn't getting the message out to people that are already subscribed with Twitter, email, or a hosted website. The problem is discovery, and giving people an easy way to actually find the information that they're looking for.
The only way to handle discovery in a system like this is to have some hosted, third-party method of searching through the users to find the ones you want to follow.
People won't be using this new decentralized service. They'll be using Twitter. I guess you can feed into Twitter, but the last mile is still Twitter.
Twitter will not make or break most organizations. If it were to disappear tomorrow, almost any organization would have plenty of time to look for alternatives. In the meantime Twitter wins. Always. By a mile.
I'm not really sure it matters where the users are. Ideally in a well designed system/protocol, it shouldn't matter. Email solved that problem 30 years ago.
This is something no single organization can solve. It's impossible. And email is about the absolute worst example. No, it absolutely didn't solve the problem. First the users had to come. Before that it was worthless.
I think many here are delusional about how this works.
But in the long run you won't beat Facebook and Twitter by merely replicating their functionality.
If you look at tech industry cycles the leaders don't get beat, they run out of room to grow, or evolve into something less monolithic.
Hegemony is always short lived. Once you get on top of the heap it's usually a short time before there's a new generation rising up.
It seems like the internet is still very young, and people jump ship or move on to a new thing very often, without much sentiment, like a child on a playground going from one toy to another.
Twitter introduced photo sharing a long time ago, but that didn't stop the Instagram guys from ripping it off (adding follows and similar features). Where one founder thought "well, I can't build a photo-sharing service because the giant in the room (Twitter) just introduced it," the others just kept pushing code and eventually exited for $1B.
So you want to build a Facebook killer? No problem -- just build a system (whether automated or manual) where users can take real advantage of their network: never miss an important update from an important friend, and skip the garbage they aren't interested in. Don't be fooled by "oh, they won't come because they did all this work building up their Facebook profiles." Not only is there a Facebook API you can connect to and download all of a user's data in minutes, but even if Facebook closed that gate, you could still write a scraper to do the same.
Bottom line: Facebook won't be here forever, and the next stage of social is a million connected friends, but with only the relevant information rendered.
Twitter, Facebook, and the like should be like email: if you send me an email from Hotmail, I can read it on Gmail or any other mail client. I should be able to subscribe to friends, interesting people, or other social content providers and consume that content from the client of my choice. Something Google Reader was not far from providing.
Wrote something about this view some time ago: http://www.douban.com/note/174513094/
More here: http://GoPalmetto.com/
Probably the best candidates would be all the Twitter client apps that are getting burned by their API lockdown
I have to go to all the trouble of maintaining my web server and setting up DNS, then go to Twitter to read others? Why is this not a big deal?
Do RSS titles have character limits?
It's great, except it's completely unusable: a huge download of a page full of disparate links, with no organization other than by date.
The trick is not in writing something like this, it's in making the information aggregated and easily organized.
A simple, open, server-to-server web-hook-based pubsub (publish/subscribe) protocol as an extension to Atom and RSS.
Parties (servers) speaking the PubSubHubbub protocol can get near-instant notifications (via webhook callbacks) when a topic (feed URL) they're interested in is updated.
The protocol in a nutshell is as follows:
- A feed URL (a "topic") declares its Hub server(s) in its Atom or RSS XML file, via <link rel="hub" ...>. The hub(s) can be run by the publisher of the feed, or can be a community hub that anybody can use. (Both Atom and RSS feeds are supported.)
- A subscriber (a server that's interested in a topic), initially fetches the Atom URL as normal. If the Atom file declares its hubs, the subscriber can then avoid lame, repeated polling of the URL and can instead register with the feed's hub(s) and subscribe to updates.
- The subscriber subscribes to the Topic URL from the Topic URL's declared Hub(s).
- When the Publisher next updates the Topic URL, the publisher software pings the Hub(s) saying that there's an update.
- The hub efficiently fetches the published feed and multicasts the new/changed content out to all registered subscribers.
The protocol is decentralized and free. No company is at the center of this controlling it. Anybody can run a hub, or anybody can ping (publish) or subscribe using open hubs.
To bootstrap this, we've provided an open source reference implementation of the hub (the hard part of the protocol) that runs on Google App Engine, and is open for anybody to use.
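Concretely, the subscription half of the flow above looks like this. The hub, feed, and callback URLs are examples; the parameter names are from the PubSubHubbub 0.3 spec:

```python
from urllib.parse import urlencode

# The feed declares its hub in its Atom XML:
#   <link rel="hub"  href="https://pubsubhubbub.appspot.com/"/>
#   <link rel="self" href="https://example.org/alice/feed.atom"/>
#
# A subscriber then POSTs a form-encoded request to that hub:
params = urlencode({
    "hub.mode":     "subscribe",
    "hub.topic":    "https://example.org/alice/feed.atom",
    "hub.callback": "https://subscriber.example.net/push-callback",
    "hub.verify":   "async",
})
print(params)
```

The hub then verifies the callback URL with a GET challenge, and afterwards POSTs new or changed entries to it whenever the publisher pings.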
There's been some drama between the developers of the two systems in the past:
(Drama being nothing new to the RSS world, alas. Sigh)
What I wasn't sure of: when he mentions polling, is that instead of PuSH (PubSubHubbub), or would it work alongside it?
Like others, I'm allergic to DNS for identity. Yes, the UX is atrocious (see: OpenID).
But, what about getting away from a global namespace?
Let people refer to their friends however they want! Use marked up links to reference the underlying feeds.
(Not all, but some of them follow a similar principle)
Why not email (SMTP/IMAP)? Deployed, standardized, widely supported.
Big statement. How do you get critical mass when practically every journalist, celebrity, and person you went to school with uses Twitter or Facebook and not your new open system?
While I definitely think an open solution could eventually "take over the space" and leave Twitter and Facebook as "AOL-like also rans," it's far from obvious to me how one would do it in a year.