The idea seems pretty good. It's just the basics: you follow people and receive their content (text, images, whatever), and people follow you and you share content with them. The most amazing thing is that, using just these simple concepts, the possibilities are infinite. As they said, every social network out there can be implemented this way. Twitter? It's trivial, just write the format description and you're ready. Facebook? The only thing you'd need to specify is that relationships are symmetric (you can follow me only if I decide to follow you too, that is, we are mutual friends).
To me, the idea seems absolutely great. The problem will be execution: what apps get built on this protocol. I also wonder whether apps will be interoperable. Example: I build a Twitter-like app named Foo and another guy builds another Twitter-like app named Bar. Both use similar formats, so can a user of Foo see the content posted with Bar? I imagine this will be possible as long as they share the same post format, but I'm not sure.
Anyways, good work. I would really like to see Tent expand and grow.
Good luck to them!
On the positive side, I can see this catching on because it's easy for existing social networks to integrate; it could almost become do-or-die forevermore if a network that does integrate it catches on.
Definitely a good idea, and if the setup for a Tent service isn't too cumbersome I'll make sure to set one up for testing.
They never replied, I guess they were well out of their depth as it was...
Of course, as ideas go, I allowed myself to think further into the possibilities, and found some interesting avenues.
For instance, why allow the facebooks, twitters, etc to own domain over our content? Let people store their own data, and offer API endpoints giving facebook, twitter, etc access. They essentially become frontends and search engines to our shared content. We get control of our own data (and privacy therein), they get to provide an interface to that data in a way that fits what they're trying to offer their "customers".
And then if you take that even further, why allow anyone control over your data? Why not store all my purchase data and credit info on my own servers, and allow authorized companies access as needed? Census time? A popup shows up on my phone asking if I'd like to allow the government access to some of my data for the census: I pick what data is allowed, and it's done.
Electric company's system automatically logs in to get my electric usage. Phone provider does the same. Publishing a book literally allows access by readers to your own servers. Releasing an album - same deal. We still have "stores", but those stores are merely search engines offering a service to both the content creators and consumers.
It went further, and weirder (in interesting ways). I'm not sure such a system would truly be beneficial, but I love the idea of allowing people to Truly Own their own data.
Apologies for the tangent. Good luck to you. I'm a fan of the idea as it's presented and I hope you're successful.
If I want to change my email address? Change it in one place. If I want to let my friends know something? Update it in one place. It's very DRY you know?
But then I think, why even let Facebook use my data? If we have these data stores let's build a peer to peer network to let trusted friends access our information. In real life, if I want to tell my friend something I don't pass it through a 3rd party first, I tell them directly! This can let us have really fine grained control over who we share with by authenticating the users that request the information. And with the authentication can come encryption.
This move to the cloud is, frankly, annoying. Why should we have to trust our data to all these people we don't know? They pretend that they give us "free" service, while actually using that data for profit. Good UI shouldn't have to come at such a cost.
I have been thinking about this for a while, if there are other people interested in building such a web I would love to know.
1.) Go to web page and use application, or
2.) Download app and use application
This is how Facebook / Twitter / etc. do it, and they're clearly doing it right. But there's no law that says the application needs to keep its data on a third-party server. Why not have it run as a peer-to-peer service?
In fact, I think that peer-to-peer is especially suitable for social networking, given that social networks are typically highly clustered (e.g., people who are my friends are much more likely to be each other's friends as well). This means that if every client application is able to redistribute copies of friends' status updates as well as its own, then a sync between any given pair of clients is likely to bring a large subset of their friends' statuses up to date.
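To illustrate that sync idea (a toy sketch of my own, not anything from Tent): if each client stores the latest sequence-numbered status per author, a pairwise sync is just a merge that keeps whichever side has the higher sequence number.

```python
# Toy gossip sync: each store maps author -> (seq, text), holding the
# newest status that node has seen. A sync merges the two stores so
# both sides end up with the freshest update from every author.

def merge(store_a, store_b):
    """Mutually update two status stores in place."""
    for author in set(store_a) | set(store_b):
        a = store_a.get(author, (0, None))
        b = store_b.get(author, (0, None))
        newest = a if a[0] >= b[0] else b
        store_a[author] = newest
        store_b[author] = newest

alice = {"alice": (3, "at the beach"), "carol": (1, "hello")}
bob = {"bob": (2, "lunch"), "carol": (4, "new job!")}
merge(alice, bob)
# Alice now carries carol's latest update without ever contacting carol.
```

In a clustered network this is why one sync with a well-connected friend refreshes a large slice of your whole social graph at once.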
Of course, if you're rebroadcasting your friends' status updates as a P2P host, then the question of spoofing/authentication becomes paramount. Obviously that would have to be solved by having all status updates signed (and, where privacy is needed, encrypted) so they can be authenticated against the author's public key.
Oh, and once you've got dual-key authentication embedded in widely-used social trust networks, you could probably solve a lot of other authentication problems while you're at it.
All of this would need to be completely transparent to the user, of course. If it's any more difficult to set up and use than standard Facebook / Twitter, it will fail. But I don't see any fundamental obstacles to making it that easy.
Unfortunately I don't have time to do anything about this!
As much as I would love to use and have a network like this, until all the people I keep in touch with through facebook use it, I'll still have a facebook.
I wonder if it's plausible, even possible, to have all those P2P benefits (encryption, authentication, privacy) and somehow also have interoperability with Facebook?
FB and Twitter have APIs. Of course, they can revoke your key. But they have web clients, so they can be scraped.
Faced with a genuinely distributed opponent, there's no way the existing behemoths can keep your data in their silos.
For instance, FB can block tent.is at the API level or even at the IP level. But if Tent hosts pop up all around the internets, and if they are general-purpose enough that users can install their own scraping gateways, which can't be attacked centrally using technical or legal means... it's game over. To me this is one of the less recognized advantages of a distributed service.
That is the goal of unhosted. http://unhosted.org/
I really wish their RemoteStorage protocol would gain traction.
The Locker Project is heading in the same direction, although with a different approach: posting content to Facebook et al. and then recollecting it into your locker. http://www.lockerproject.org/
Granted, the issue can be mitigated with trust networks, social conventions, or laws, but it comes down to the same issue faced by DRM'd media formats: if someone can consume the content, they can find a way to duplicate it.
Using known cryptographic methods, we can construct a system that works as follows:
1) Alice looks up Bob's public key (or gets it from Bob to avoid needing a trusted key service)
2) Using Bob's public key, Alice encrypts a message
3) Alice sends the encrypted message into the network, addressed to Bob
4) Using his private key, Bob decrypts the message
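The flow above can be demonstrated with textbook RSA and deliberately tiny primes. This is an illustration only, not anything the system above would actually ship; real implementations use vetted libraries and padding schemes.

```python
# Textbook RSA with tiny primes -- illustration only, NOT secure.
p, q = 61, 53
n = p * q              # public modulus
e = 17                 # public exponent; Bob publishes (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # private exponent; Bob keeps d secret

# 1-2) Alice looks up Bob's public key (n, e) and encrypts a message.
message = 42
ciphertext = pow(message, e, n)

# 3) The ciphertext travels through the network addressed to Bob;
#    intermediaries see only `ciphertext`, which is useless without d.
# 4) Bob decrypts with his private key.
recovered = pow(ciphertext, d, n)
print(recovered)  # 42
```

The point of step 1's parenthetical is that if Alice gets the key directly from Bob, no trusted key server ever needs to see the message or the key.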
In the second example, the recipient (Bob) is the only one who can release the data. In the first example, a number of potentially malicious parties have access to the data without cooperation from Alice or Bob.
I used a direct message for simplicity, but we do have cryptographic methods to encrypt a message such that an arbitrary group of people can decrypt it.
This doesn't solve the problem you're responding to: "once data is given out, it can never be retracted". That remains the case. If you encrypt your photo so that each of your 770 friends can decrypt it, but then you unfriend Bob for being a jerk, he's still got the key and the encrypted data. So he still sees that photo.
In contrast, although you could theoretically save every photo and update anyone ever lets you see on Facebook, it would be difficult enough that in practice no one does.
That particular photo, yes. But Bob could also have saved that photo as soon as he saw it, or maybe he's got an eidetic memory, so crypto can't help with that anymore.
However, any new photos shared through the same channel can't be seen by Bob anymore because I assume the key changes as the members of the access group change.
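One way to picture that rotation (my own toy sketch; a real design would wrap a fresh symmetric key with each remaining member's public key rather than hand it out directly):

```python
import secrets

class SharedAlbum:
    """Toy model: content is encrypted under a group key, and the key
    is re-issued to the remaining members whenever someone is removed."""

    def __init__(self, members):
        self.keys = {}        # member -> current group key they hold
        self.group_key = None
        self.rekey(members)

    def rekey(self, members):
        self.group_key = secrets.token_bytes(16)
        self.keys = {m: self.group_key for m in members}

    def unfriend(self, member):
        remaining = [m for m in self.keys if m != member]
        self.rekey(remaining)   # old key still opens OLD photos Bob saw

album = SharedAlbum(["alice", "bob", "carol"])
old_key = album.group_key
album.unfriend("bob")
# Bob keeps old_key (and anything he already decrypted), but new photos
# are encrypted under album.group_key, which he never receives.
```

This captures both halves of the argument: rotation protects future content, but nothing can claw back what Bob already holds.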
Still, while you can't solve the original problem (Bob can leak everything), you can do slightly better by properly implementing Off-the-Record messaging. This adds perfect forward secrecy, as well as deniability. The latter is very interesting: in OTR, both parties can authenticate received messages to be certain of the sender's identity, but each can also FORGE any message to look like it was signed by the other. This means that even if Bob decides to publish your private messages to him, he can never prove you were the one who wrote and signed them, because he could have forged them himself.
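The forgeability trick rests on the fact that OTR authenticates messages with a MAC keyed by a secret *both* parties share, so either side could have produced any given tag. A minimal sketch of that one property with HMAC (not the actual OTR protocol; the key value here is a placeholder):

```python
import hashlib
import hmac

# Both Alice and Bob derive the same shared MAC key from their session.
shared_mac_key = b"derived-from-dh-handshake"  # placeholder value

def tag(message: bytes) -> bytes:
    return hmac.new(shared_mac_key, message, hashlib.sha256).digest()

# Alice sends a message; Bob verifies it came over their session.
msg = b"meet at noon"
alice_tag = tag(msg)
assert hmac.compare_digest(alice_tag, tag(msg))  # Bob: authentic

# But Bob holds the same key, so he can mint a valid tag for ANY
# message, indistinguishable from one Alice made -- hence deniability.
forged = b"I never liked Carol"
forged_tag = tag(forged)
assert hmac.compare_digest(forged_tag, tag(forged))
```

Bob knows the real messages came from Alice (he didn't forge them himself), but he can't prove that to a third party.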
Two problems with that: I'm not sure how the OTR protocol extends to multiple recipients (but I bet there's some research on it), and while this "deniability" might be enough for private (text) messages, it's not much use for photographs in many cases. If Bob decides to publish an embarrassing photo of you that he once had access to, it's not going to help much to argue "ha, but you can't prove I sent you that photo!".
Still, for textual communication, it adds a (thin) layer of extra security even though you can never beat Eidetic Bob.
 http://www.cypherpunks.ca/otr/index.php#docs and in particular http://www.cypherpunks.ca/otr/otr-codecon.pdf
OTR is pointless for a medium that's mostly about photo sharing. For the Facebook use case, what's far more important than crypto is the set of social norms the site establishes via what's easy to do (share) and what's hard, e.g., archiving everything your friends share as it comes in. Not only would that be tricky to do without getting detected as a bot, but 99.9% of users would never think to try it. Such archiving is inherently easier with a distributed system, and that could be bad.
Actually, facebook does, and that information is (potentially) available to third parties. The intent of my system is that for information to leak, one of your trusted recipients would need to be compromised.
For the rest of us, though, someone we unfriend (an ex-partner, for example) being able to archive and still access all of our old photos/updates is a much bigger concern. I argue that distributed social software should follow Facebook's lead and offer a sane default of not having such an archiving feature, and not going out of its way to make it particularly easy to add one, either. Otherwise the "creepy" aspect of not being able to rescind access to your photos/updates is going to be a serious downside to this new network, and could keep people from using it.
Edited to add:
Also, what's more secure in theory isn't always more secure in practice. Moxie Marlinspike observed that the original version of Cryptocat (which did end-to-end encryption in JS) was potentially less safe than Gchat. Why? Because if you compromised Cryptocat's server, you could make it serve JS with a hidden backdoor. And Cryptocat, being a one-man shop, would likely be a softer target than a company like Google, which has had plenty of time and expertise hardening their systems. How much do you trust the guy behind Cryptocat, versus Google with their reputation to protect?
Along the same lines, imagine you use a hosted Tent server for social networking — you don't have time to bother running your own server, but you've heard Tent keeps your data safe from Facebook, so it'll protect your privacy better, right? But then the random guy hosting your Tent account turns around and leaks your info. Or he's running old software with a vulnerability in it, and gets pwned. Suddenly hackers have all your data. Would that have happened on Facebook?
When I was curious about signing up with Diaspora, one of the open community pods had a cheeky note from the server admin saying basically that he would peek in and read your stuff if he felt like it. So this is not entirely hypothetical. I'm a huge fan of efforts like Tent, but let's not forget there are upsides to Facebook and Google's stewardship, and an alternative system can easily have as many privacy/security cons as it does pros. Tread carefully.
I guess the deniability part is sort of out the window, seeing as how facebook will be logging the traffic pattern(s) -- but you might at least claim that "no, that wasn't what I said".
Because you shouldn't be trying to close the analog hole; it's essentially impossible (just look at sites like failbook).
Instead you should be trying to be about as secure as email, and allow other people to secure it if that's what they want to try and do.
Operating with these assumptions means that the things that you make public are totally and incontrovertibly public. It is, in a manner of speaking, playing for keeps.
Isn't Twitter the same though? Deleted tweets have a way of staying around... :P
Why not? It will get to that point eventually.
An analogue would be the naming distinction between HTTP, the protocol, and httpd, the first Web server (http://en.wikipedia.org/wiki/CERN_httpd). That naming split made it easier for people to understand what part of the system others were talking about, and helped make it clear that the two pieces were not tightly coupled to each other.
Maybe you're already planning on doing this when you release the server, it's not clear from the web site. If that's the case, feel free to ignore...
edit: when the hosted version launches it will be on a different domain, to make clear tent.io is the protocol alone and we aren't monopolizing the hosting space (we're doing it more as a community service than as a business).
Yurt, Caravan, Convoy?
Another issue is that this assumes that the web will be the client of choice in the future... with mobile apps being as big as they are in the social space, this seems a bit shortsighted.
Don't get me wrong, I like the idea behind having a "social server", but I don't necessarily think that starting with HTTP is the way to go.
I don't have any particular argument with using JSON for data transfer though... I think that is probably a good choice. Also using SSL for all connections is probably a good call too.
Developers may implement other protocols in the future, but we are targeting HTTP as an accessible starting point.
We are definitely anticipating heavy mobile use, both through mobile web apps as well as native. There will eventually be iOS and Android frameworks to handle all of the communication with Tent servers.
As a mobile/desktop/server engineer, I would love the opportunity to work with other server-side teams that aren't wedded to the web/HTTP via historical accident and thus don't force us to use HTTP.
Do you have any suggestion that provides the same features, or should we forgo them because HTTP is "hefty"?
That's all HTTP really is, but it's dressed up in a bunch of historical complexity and inefficiency centered around supporting web browsers.
Load balancers know how to load balance straight TCP. HTTP caching servers are an HTTP-centric idea.
The 'libraries' you'll need can be much, much smaller when all you need is a bit of framing and serialization, instead of a complete complex RFC compliant HTTP client stack.
It's not writing the protocol that I find the most difficult. It's reimplementing everything that uses the protocol.
Load balancers know how to load balance straight TCP.
Which is only useful if all the nodes are exactly the same; but that prevents you from distributing the data across them based on user profiles and then load balancing according to the user id, as (if I'm not mistaken) Netflix does. Since they're using subdomains as user identifiers, you'd get that for free using an existing, well-tested HTTP load balancer.
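As a sketch of the sharding idea (hostnames and the hashing scheme are my own invention, not anything Tent or Netflix actually specifies), pinning a user's subdomain to one backend takes only a few lines:

```python
import hashlib

BACKENDS = ["srv1.example.net", "srv2.example.net", "srv3.example.net"]

def backend_for(host: str) -> str:
    """Pin each user subdomain (e.g. 'alice.tent.example') to one
    backend, so that user's data only needs to live on that shard."""
    user = host.split(".")[0]
    digest = hashlib.sha256(user.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

# The same subdomain always routes to the same shard:
assert backend_for("alice.tent.example") == backend_for("alice.tent.example")
```

An off-the-shelf HTTP balancer can do the equivalent declaratively by matching on the Host header, which is the "for free" part of the argument.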
HTTP caching servers are an HTTP-centric idea.
That's a tautology. The question is: are they a useful idea? Is being able to take advantage of existing and deployed solutions like CDNs useful? Seems to me like it would be.
I think you underestimate the advantages that some of the core HTTP concepts provide.
I'm not sure what you think makes that complicated to implement without HTTP, or why you consider it 'free'. Netflix had to write custom code to support that, and could have just as easily done so on top of a message passing architecture ala ZeroMQ or even AMQP.
> That's a tautology. The question is: are they a useful idea? Is being able to take advantage of existing and deployed solutions like CDNs useful? Seems to me like it would be.
Not really, no -- neither a tautology nor are they particularly useful for API implementation. Their primary value is in caching resources for HTTP requests in a way that meshes well with the complexity of HTTP.
If you need geographically distributed resource distribution, then HTTP may be a good idea simply because:
- There's widespread standardized support for HTTP resource distribution.
- Its inefficiencies are easily outweighed by the simple transit costs of a large file transfer.
We're largely talking about server "API", however.
> I think you underestimate the advantages that some of the core HTTP concepts provide.
No, the core concepts are more-or-less fine. It's the stack that's inefficient and grossly complex, largely due to browser constraints and historical limitations.
It's free because it already exists. Load balancers for hypothetical protocols don't.
Isn't the whole point of this system to transfer people's content - posts, pictures, videos, etc - between servers? I would think pure API "calls" would be a small part of the whole traffic.
But to implement them, you need more than "a bit of framing and serialization".
I posit you're still grossly overestimating complexity based on your own experience with HTTP, coupled with grossly underestimating the complexity, time costs, and efficiency costs of the stack HTTP weds you to.
A TCP stream is simple. It's as simple as it gets. Load balancing it requires a few hundred lines of code, at most. It only gets complicated when you start layering on a protocol stack that is targeted at web browsers, has grown over the past 20 years, requires all sorts of hoop-jumping for efficiency (keep-alive, WebSockets, long-polling), and demands a slew of text parsing and escaping (percent-escapes, URL encoding, Base64 HTTP Basic auth, OAuth, ...), plus cookies, MIME parsing/encoding, and so on.
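To make the "few hundred lines at most" claim concrete, here is a sketch of the selection half of a plain TCP balancer (the forwarding half is just two socket copy loops). Backends and ports are made up for illustration:

```python
import itertools
import socket

class TcpBalancer:
    """Minimal round-robin picker for raw TCP backends."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

    def connect(self):
        # Forwarding a client is then: connect upstream and shuttle
        # bytes both ways until either side closes.
        host, port = self.pick()
        return socket.create_connection((host, port))

lb = TcpBalancer([("10.0.0.1", 5000), ("10.0.0.2", 5000)])
first = lb.pick()  # ('10.0.0.1', 5000)
```

Health checks, connection draining, and the copy loops add code, but nothing on the order of a full HTTP stack.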
All this complexity is targeted at web browsers, introduces significant inefficiencies, and requires huge libraries to make it accessible to application/server engineers.
What's the gain? Nothing other than familiarity, as evidenced by your belief that the core of what HTTP provides is so incredibly complicated that you couldn't possibly replace it.
No -- it's the complexity of HTTP that's complicated, not the concepts that underlie it. Drop the HTTP legacy and things get a heck of a lot simpler.
Furthermore, I think that even if the developers of this project could replace the required tools and forgo the rest, I doubt it'd make sense.
Frankly, you'd need a working prototype to convince me of the contrary, so I guess we'll have to leave it at that. I'm a stubborn man ;)
By the way, the AMQP spec is roughly the same size as the HTTP spec, and the latter spends a lot of pages listing just status codes.
And of course, AMQP uses a model based on Sessions, which is great if the components of the system are static, but not so great if you're talking to a lot of nodes that come and go, since you'll end up with uneven load distribution on your servers.
Regardless of HTTP as a particular implementation, I think statelessness makes perfect sense in an unreliable network of nodes.
To test this, we implemented fallback-to-HTTPS behavior in a very widely used previously non-HTTP client. We then observed the number of clients that failed to connect via our custom protocol, but succeeded in falling back to HTTPS.
The numbers were negligible.
It's ridiculous that we'd seriously believe we can't trust that TCP works on the internet. We joke about it being the "interweb", but I see no reason to sow fear, uncertainty, and doubt, and thus actually turn the interweb into reality.
I don't see that we should model the internet architecture on bad technical choices made on a limited number of open wifi networks.
Or, we just frame our standard protocol over websockets as an (unfortunate) fallback, if it ever is revealed to be a real problem.
Given how many other things are broken by networks that foolishly only open port 80 and 443, and their (in my experience) relative rarity, I'd suggest that it's not worth bothering with, except possibly as a fall-back to measure the actual number of people trying to use your service behind such a network.
It's starting to sound like you've never used CORBA.
You really should also think about two separate protocols... one server-to-server, the other server-to-client.
At least, that's how I've been working on it :)
Our protocol has two distinct parts, one server-server, the other server-app. It's just easier to manage a single "API" than having some support server-server but not server-app--we're hoping to avoid ecosystem fragmentation into those who don't want to support apps or vice versa.
I also don't understand the concern over "inventing" a new binary protocol. It's not like it's any more complicated or difficult than "inventing" a non-binary protocol. The framing rules just don't use ASCII/UTF-8 field delimiters. There are plenty of existing encodings to use, no more difficult than JSON.
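As an example of how little "framing and serialization" it takes (a sketch, not Tent's actual wire format): a 4-byte big-endian length prefix in front of a JSON body is a complete binary framing scheme.

```python
import json
import struct

def pack_frame(obj) -> bytes:
    """Serialize obj and prepend a 4-byte big-endian length prefix."""
    body = json.dumps(obj).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def unpack_frames(buf: bytes):
    """Yield objects from a byte buffer of concatenated frames."""
    while buf:
        (length,) = struct.unpack_from(">I", buf)
        body, buf = buf[4:4 + length], buf[4 + length:]
        yield json.loads(body)

wire = pack_frame({"type": "status", "text": "hi"}) + pack_frame({"type": "ping"})
messages = list(unpack_frames(wire))
```

The delimiter is a length field rather than ASCII separators, which is the entire difference being argued about.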
I'm not sure exactly what you mean by maintaining state, but if you mean long-lived connections: this already exists in HTTP with chunked-encoding and the new web socket extensions. Stateful protocols tend to be more difficult to write programs for and load balance.
For the same reason cars all want to drive on the same roads.
Relying on HTTP, SSL, Ruby, and JSON does not inspire confidence. When I see "Ruby 100%" on the GitHub page for something that aims to move us forward out of centralisation, it makes me shudder. You need to think beyond the web and get lower level (Tor hidden services, being outside the centralised web, are the right idea, but Tor has a certain stigma). You need to get below the level of partisan languages, for a number of reasons, one of which is to make it open and easy for anyone to build on, not just web developers and people who know certain languages. That's a given. Get people connected (stateful) and all the rest will follow.
Something more like cjdns. It's the connection that's important, not the web or whatever else you chose to use the connection for. A webserver is one of many things that can be offered over a connection. I don't want to make the connection through an all-purpose application like a web browser, written by some company (Chrome in all its complex glory is trying to supplant your OS's DNS fer chrissakes, wake up- in the end, it's all about control). I want a very small and simple open source app that handles the connection to my peers and which works via the OS, not "the web". It makes the connection, keeps it alive and otherwise stays out of the way.
I think these kids need to go back to the drawing board. But anyone working on stuff like this should not give up!
Whatever the successor to centralised social networking is, it will largely be a matter of timing. The best solution might not prevail. Instead it might be the one that hits the news at exactly the right time and catches on for some unexplainable reason.
There will be more to come. That's a promise.
Tor hidden services, being outside the centralised web, are the right idea
But most Tor hidden services are as much a part of the web as normal web services! They talk in a client-server model using HTTP - that's the web, right there.
I don't want to make the connection through an all-purpose application like a web browser
Using the web doesn't mean using a web browser.
written by some company (Chrome in all its complex glory is trying to supplant your OS's DNS fer chrissakes, wake up- in the end, it's all about control).
...and it certainly doesn't mean using Chrome or another browser written by some big company; there's Amaya, Arora, Camino, Dillo, Dooble, ELinks, Flock, Galeon, GNU IceCat, K-Meleon, Konqueror, Links, Lynx, Midori, Firefox, SeaMonkey, Shiira, Uzbl, Luakit and more.
I want a very small and simple open source app that handles the connection to my peers and which works via the OS, not "the web". It makes the connection, keeps it alive and otherwise stays out of the way.
Which is completely feasible using HTTP - the web - as a core protocol.
But the point is the same - the control is concentrated in middlemen (e.g. DNS, Hosting - why do I need them?) - and you articulated this correctly: client-server. Calf-cow. Not peer-to-peer. That's what I'm keen to get past.
1. That's my point about Tor's hidden services. You need Tor's help, of course, but those services are free from the need for DNS and hosting - free from the middlemen that control "the web" (not to mention controlling email - does anyone send email using IP numbers anymore? The spam-fighting fanatics think everything revolves around DNS and domain names; your mail might well get rejected because you lack a "domain name"). I must confess I've never actually used the hidden services. I've only read the docs and source code.
"Using the web doesn't mean using a web browser"
Not sure what you mean here. You need to use HTTP. So you need an http client. Chances are you'll be fed heaps of html and other garbage. Parsing it is a PITA. And eventually, if you want to view tables and such, you'll be using a "browser".
Then there's the matter of state. So you're saying HTTP but not in a RESTful way? You just like HTTP headers, chunking and what not? HTTP is popular but it is not exactly unique. There are hundreds of other protocols in the RFC's, all of which would probably work just as well. HTTP is aimed at the client-server concept. That's fine. But it's a limited use for a network with so much potential. Something like a telephone conversation is not "client-server".
I'm talking about making connections that are application agnostic. Like Ethernet. If you're suggesting tunneling everything in HTTP I think that's unnecessary. There are ways to deal with firewalls. HTTP tunneling is a last resort.
So why HTTP? Why does it have to be at that layer? Why something that tied to specific applications that presume so much about what I want to do?
I want freedom from applications. I can write my own apps, thanks.
I want freedom to create new protocols, just for me and my friends. I want my own network, that we control. [This is possible using stuff that's been around for many years, and I have a working prototype. You folks are a bit too cynical to be beta testers, it's command line driven. Maybe someday.]
I've tried nearly all those "browser" options you mentioned, believe it or not. All except for one suck.
And if I had my way I'd extract the html table parser from it and have that as a standalone filter. I'd make it a UNIX utility.
The whole "browser" concept is outdated. People want to watch video, listen to audio, look at photos and read plain TEXT (with great fonts of course). I don't need html to do any of those things. And I don't need html or other HTTP junk to do search to find video, audio or images. I need a tcp (or other) connection and a video player/audio player/image viewer/typesetter, as the task requires.
All that said, hypertext is neat. But it's not world-changing. I can still do great research without "hyperlinks". Hypertext is the great benefit of HTTP. But at this stage, it is so weighed down with cruft and used in so many silly ways, making everything dependent on a monstrous abomination of a program called a "browser" (Firefox is freakin HUGE), it has become more of a burden than a benefit. It is a limitation, not a path to the future.
But the client-server is only for a single request, it's not a static property of a node. You can be both a client and a server simultaneously, making and accepting requests at all times.
Not sure what you mean here. You need to use HTTP. So you need an http client.
Chances are you'll be fed heaps of html and other garbage.
No, HTML is definitely not necessary. Think of what are commonly called "Web Services": they're often only available through structured data encoding formats like JSON, and they serve a whole lot of applications that are not web browsers: native mobile apps, for example.
HTML is just one of the many formats that can be transported by HTTP, nothing forces anyone to use it in order to use the web.
Then there's the matter of state. So you're saying HTTP but not in a RESTful way?
REST doesn't prevent state. It just prevents session/context state from being stored on the server. You can still store it on the client (which, again, would just be one of the roles of a node) and permanent state on both.
If you're suggesting tunneling everything in HTTP I think that's unnecessary.
If you mean tunneling as in SOAP, definitely not. I'm suggesting using HTTP as it's supposed to be used. Something similar would have to be reimplemented anyway, and HTTP is already there, supported by plenty of tools, services, etc.
Why does it have to be at that layer? Why applications? (...) I want freedom from applications.
I don't get what you're saying.
And if I had my way I'd extract the html table parser from it and have that as a standalone filter.
The whole "browser" concept is outdated. People want to watch video, listen to audio, look at photos and read TEXT. I don't need html to do any of those things. And I don't need html or other HTTP junk to do search to find video. I need a tcp (or other) connection and a video player.
Again, no HTML is needed. And no, a TCP connection and a video player is definitely not enough. You need some way to identify the video you want to watch, to request the part of the video you want (e.g. if you already watched the first half), to know whether that video still exists on the server, and possibly some way to authenticate yourself (I don't want to share my personal videos with the world). It would also be nice if the video player could advertise what formats it supports, so the server could send the right version, or say that none exists, without a wasted download.
HTTP is well known, ubiquitous, provides all that and it certainly doesn't need HTML for it.
If firewalls were actually an issue, you could always run the service on port 443. I don't know how you'd tell anyone else that you're running on a non-standard port, but it would be possible. That's actually another potential problem... this is a server that is masquerading as an HTTPS server... I hope that it plays nice with non-Tent HTTP clients. It isn't nice to hijack ports like that.
They seem to be assuming a web-based client, so that traffic (which does need to worry about firewalls) should pass through a firewall without issue.
Edit: parent got deleted, it was
One word: Firewalls
Here is a use case scenario I am imagining. I define two servers for myself: home.me.com and cloud.me.com, where home.me.com is a dyndns pointer to the freedombox. Dyndns being unreliable, if a Tent message cannot get to my home server, it is sent to cloud.me.com instead and then pushed to home.me.com when it comes back online (think POP mail).
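The fallback flow described above can be sketched as follows. This is a minimal sketch, not a Tent implementation: the server names come from the scenario, and the `send` callable and the replay queue are assumptions standing in for real HTTP delivery.

```python
# Hypothetical store-and-forward delivery: try the flaky home server
# first, park posts on the always-on cloud server when it's down, and
# replay them (POP-style) once home.me.com is reachable again.

PRIMARY = "home.me.com"
FALLBACK = "cloud.me.com"

def deliver(post, send):
    """send(server, post) returns True on acceptance and raises
    ConnectionError when the server is unreachable.  Returns the
    server that accepted the post, or None if both are down."""
    for server in (PRIMARY, FALLBACK):
        try:
            if send(server, post):
                return server
        except ConnectionError:
            continue  # home server offline -> fall through to cloud
    return None

def replay(queued, send):
    """Push posts parked on the fallback back to the primary;
    returns whatever is still undeliverable."""
    remaining = []
    for post in queued:
        try:
            send(PRIMARY, post)
        except ConnectionError:
            remaining.append(post)
    return remaining
```

With a `send` that raises for the home server, `deliver` returns `"cloud.me.com"`; once home is back, `replay` drains the parked queue.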
The facebook killer then, is a hosted service like cloud.me.com for non-tech people, but a seamless transition to the hosted at home service as soon as you buy a freedombox. This way you have the best of both worlds. Your face in the cloud, and long term storage at home.
Other app wishlist: Tent-to-SMTP and SMTP-to-Tent adapters for Gmail killing
Do you have an IRC channel? Feel free to also join our channel at #unhosted. :)
As it is the FAQ reads like "those are old and busted, we wanted something new and hot," which gives off an aura of NIH syndrome.
• no support for private messages (PubSubHubbub, anything Atom-based)
• inability to move relationships when changing service
• no standard API for application interaction
By leaving each of these (and others) out of scope (see: http://ostatus.org/sites/default/files/ostatus-1.0-draft-2-s...), they have created an ecosystem that is unfriendly to developers (who have to approach each provider separately to work out auth schemes and APIs) and likely to lead to vendor lock-in (because relationships can't be transferred and basic features are implemented differently in each system).
First, welcome to the community of people working on this important problem. I highly recommend you join this group to stay informed about what's going on elsewhere:
It's an "issue" group, not dedicated to any one protocol, service, or software package. Please, make sure you're a part of it!
A couple of notes:
1. We're working on including private messages in PubSubHubbub 0.4 and thus into the next version of OStatus. Understood that it's a big deal.
2. You're right, there's no standard API. ActivityPub is an attempt at that; see here: http://www.w3.org/community/activitypub/
Thanks for considering. Let's make sure we interoperate!
* For messaging, why would you not use XMPP and/or SMTP?
As for things that have been tried before, we have pingbacks -- which IMNSHO never really worked. And there's Diaspora, which has yet to come up with a stable protocol -- and has an implementation that is pretty badly broken.
It's also a good illustration of going the "full http" route: publishing becomes easy; interaction (server to server) becomes hard if you want to have any kind of security in place.
The federated social net was a great first step, but the lack of support for access control/private messages and account portability means that you end up with lots of proprietary implementations of basic features which can create vendor lock-in.
Alice gets her updates from Bob via something like:
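One plausible shape for that request is a simple authenticated pull against Bob's server. Everything here is an assumption for illustration -- Bob's hostname, the endpoint path, the `since` parameter, and the bearer-token scheme:

```python
import urllib.request

def updates_request(since_id: str, token: str) -> urllib.request.Request:
    """Build an authenticated pull of Bob's posts newer than since_id.
    Endpoint, query parameter, and token scheme are all hypothetical."""
    return urllib.request.Request(
        "https://bob.example.com/posts?since=" + since_id,
        headers={"Authorization": "Bearer " + token})

# urllib.request.urlopen(updates_request("42", alice_token)) would
# then fetch the new posts.
```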
You'd of course need to synchronize access passwords/keys/tokens somehow -- but that could be part of "friending" someone?
Integrate with something like cacert.org so you don't have to manage certs (as part of this project). A friend request includes the requesters cert (could be self signed, or via a trusted authority, like cacert), encrypted with the public cert of whomever the request is sent to.
When a friendship is accepted on the other end, store the cert, and use that for authentication. Add your own authorization rules (Alice is a close friend).
It might be a benefit to set it up as follows: everyone has a personal cert. They generate and sign a proxy cert for their tent server. The public "top" cert is used for user management and federation -- numerous such "downstream" certs could be generated, along with revocation certs.
Question: what features that are taken for granted on today's popular social networks are difficult/impossible in this kind of distributed system? for example, i suspect something like "friend suggestions" might be difficult, since you only have access to a part of the network. Auto-friend tagging in pictures would be tough too. I'm seeing a lot of upsides listed, but there must be some things you just can't do. A candid discussion of the drawbacks would be helpful.
(1) I don't see how one Tent entity contacts another proactively - it looks like A can't message B unless B has already chosen to follow A. If this is so, is it an anti-spam measure? Given that StatusNet etc. are infested with spam, it seems like a very wise one :-) On the other hand, it is rather limiting as compared to centralized platforms, don't you think?
(2) I don't see how you maintain the promise of a portable identity when your identities are hosted URIs. Eg, if my identity is tent.is/foobar, and I want to move to a different host, how do I do that? I can download my data, sure. It looks like I can even bounce from tent.is to another data server. But unless I want to break all my social connections, don't I remain at the mercy of tent.is? This strikes me as a rather unsolvable problem, but it would be reassuring to clarify that you're not solving it :-)
2) Every piece of data in the system will be available via the API including negotiated app and follow credentials, moving will consist of authorizing an importer app to have read access to everything, and then pushing a post that tells all the servers to check the profile again for updated entity and server details. It should be a very simple process.
After all, there's a reason social services are centralized on today's Internets. The reason (IMHO) is that the Internets since 1992 or so have been an antisocial network, and anything worth attacking that lacks a centralized defense command is rapidly overrun by digital Huns. For instance, SMTP exists today because it existed before eternal September, and being valuable was (barely) defended; but if it didn't exist as a legacy from the old, social Internet, it would be very difficult to create it in the new antisocial one. If not impossible.
I mean, it's certainly not that some of us rotting old neckbeards weren't using finger and talk on the firewall-free Internet in 1989. So we know how cool it would be if some bright young whippersnapper could solve the problem...
(2) This is useful but inevitably imperfect, as forcing every interlocutor to equate the old and new names is of course impossible. Eg, HTTP redirects make it possible to change your DNS identity - but hardly trivial, though the redirect itself is trivial.
And of course it's a process that your existing host could easily frustrate, though that would be very ill-mannered. Not saying there are any perfect solutions here.
1. Just because it is allowed by the protocol doesn't mean any given client needs to pay any attention. Just like email, I can filter out any messages from people not in my contacts. I may choose not to and instead run each one of those messages through a spam filter. In this respect it really seems no different than email. Individual clients/servers can choose to be as strict as they like (but servers are servers, and they are sitting on the internet, so spammers can see them and send messages that will be ignored if they like).
2. Since connections with most of your contacts are theoretically maintained so you can push out new data, updating is more akin to propagating a new ip through the DNS system than using a redirect. Yes a DNS server can misbehave, but that can only screw up a network of well behaved servers for so long.
There must be some reason we haven't seen successful new decentralized service protocols on the Internet since the early '90s. I don't know of a more obvious one.
You can see the issues with StatusNet and spam:
2. The problem is that contact names propagate outward from the master state where a push will update them. For instance, they get written down on business cards. They also get cached, imprudently but inevitably, in forms that are still digital but don't update properly.
Imagine a protocol that you could use to update your email address this way, and you'll see the problem. In theory, you could design a special SMTP message that would cause all clients to update their address books. In reality this would scale quite poorly and be quite unreliable, leading people to avoid it, leading it to be even more unreliable, etc. Of course, your chances are much better with a bright, shiny new protocol... but still.
This would be especially bad if the server you're using is hacked and they stop everyone from leaving :/
The best solution that comes to mind now is building a server with a database of all the users and their respective servers. But, of course, that would break the decentralized purpose of Tent. It's an interesting problem to solve.
EDIT: Another big problem: how do I search for users? Would a DNS-like solution be feasible?
I think a better system could be publishing a triple (name, keyfingerprint, current-server) on a shared datastore (e.g. DHT). The user doing the search would still need to find some out-of-protocol way to identify the right person, though.
EDIT: Kademlia on eMule is a working implementation of a similar system to what I described, but for files instead of users.
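The (name, keyfingerprint, current-server) triple described above can be sketched with a plain dict standing in for the shared datastore. This is a toy model only -- a real deployment would publish into a Kademlia-style DHT, and the fingerprint length here is arbitrary:

```python
import hashlib

# Toy stand-in for the shared datastore: names map to a list of
# (fingerprint, server) pairs, since names are not unique and the
# key fingerprint is what disambiguates people.
dht = {}

def fingerprint(pubkey: bytes) -> str:
    """Short hex fingerprint of a public key (length is illustrative)."""
    return hashlib.sha256(pubkey).hexdigest()[:16]

def publish(name: str, pubkey: bytes, server: str) -> None:
    dht.setdefault(name, []).append((fingerprint(pubkey), server))

def lookup(name: str, fpr: str):
    """Resolve a (name, fingerprint) pair to the user's current server."""
    for f, server in dht.get(name, []):
        if f == fpr:
            return server
    return None
```

As the comment notes, finding the *right* fingerprint for a person still needs some out-of-protocol channel (a business card, an in-person exchange).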
cjdns has a simple approach: your TCP/IPv6 address is your public key. This is the right idea. Leverage what you already have that is unique (not to imply an IP address is necessarily truly unique, e.g., anycast).
But in general, to be on the network you have at least one unique item: your network address. So leverage that to make other unique identifiers.
To encrypt communications, you may need to maintain encryption keys. So leverage them to be part of each peer's unique identifier on the network.
Connecting to strangers, and only using the network to get each and every bit of information from the outside world, is all fine and good, but this sort of peer-to-peer networking is much more valuable with peers who you can identify in person, without using a computer. You can exchange all the above numbers (identifiers) on business cards.
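Deriving an identifier from a key can be sketched in a few lines. This is loosely in the spirit of cjdns (which double-SHA-512s the public key and keeps addresses in fc00::/8); the construction below is illustrative only, not the actual cjdns algorithm:

```python
import hashlib
import ipaddress

def key_to_address(pubkey: bytes) -> str:
    """Derive a stable IPv6-shaped identifier from a public key.
    Illustrative only: real cjdns additionally requires the result
    to fall in fc00::/8, regenerating keys until it does."""
    digest = hashlib.sha512(hashlib.sha512(pubkey).digest()).digest()
    return str(ipaddress.IPv6Address(digest[:16]))
```

The point of the construction: anyone holding your key can recompute your address, so the address itself proves which key it belongs to.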
Secondly, it ties you to them; what happens if you switch servers, or ISP, or whatever? I mean, right now I'm planning on switching to a cheaper, faster offer from another ISP, but I wouldn't do it if I were to lose all access to my accounts on the different services.
I think the id should be both controlled by the user and portable; that mostly leaves public keys and their fingerprints as ids.
They could shut you down, but I don't think they could pretend to be you since they don't have your private key..
For me, the ultimate social network would be just blogs, RSS and a feed reader, with people either managing the blog themselves or using a third party to do it for them -- the point is it doesn't matter.
The problem is that blogging is complicated, anything with multiple options is complicated, and discovery is complicated. I know where to look to find a friend on facebook, I don't know where to look to find his blog.
I don't have time right now (work) to look into Tent in more detail, but it sounds like it's a definite step in the right direction.
If a million people decide to 'camp in my tent' (?), my server is suddenly pushing out gigs of data every time I make a post.
Followers and followed accounts gradually increased over that period.
I have this number handy since the connection is reverse-proxied via pagekite.net, which is metered.
Over the last few months, I've had all static content offloaded to another server, which reduced the bandwidth used, though I'm not sure by how much.
Thank you for the numbers!
We also have a setting in subscriber settings that controls which types of posts are pushed in their entirety to different followers, vs. just a notification being sent, vs. nothing at all. So blog entries and status updates might get pushed to all 1M, but you probably wouldn't push an HD video update to all of them without some more serious server architecture.
This way you could have a path with all the videos on it that you could proxy off to a server with more bandwidth. It would give the user a lot more control over how their content is accessed.
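The per-follower setting described above can be sketched as a lookup table. This is a hypothetical model, not Tent's actual schema -- the post-type names, the three levels, and the override mechanism are all assumptions:

```python
# Three delivery levels, per the comment: push the full post,
# send only a notification (with a URL to fetch the content),
# or send nothing at all.
FULL, NOTIFY, DROP = "full", "notify", "drop"

DEFAULT_POLICY = {
    "status": FULL,      # small posts: push in their entirety
    "essay": FULL,
    "video_hd": NOTIFY,  # large posts: notification with a URL instead
}

def delivery(post_type: str, follower_overrides=None) -> str:
    """Resolve the delivery level for one post type, letting a
    per-follower override win over the defaults."""
    policy = dict(DEFAULT_POLICY)
    policy.update(follower_overrides or {})
    return policy.get(post_type, NOTIFY)
```

So `delivery("video_hd")` yields a notification-only push, while a close friend with `{"video_hd": "full"}` gets the whole file.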
Edit: And what if my server is down when one of my friends makes a post? Will I never see it?
Posts have 'views', so a video or photo post would be pushed out with the 'meta' view which would include a URL to the content instead of the content inline.
This is decentralized. My expectation is that this network is not about a few celebs having millions of followers. I'd love to see it become peer-to-peer on the human level: friends who are actually real friends, or at least people you have met and had a real human interaction with.
Repo starred, eagerly awaiting runnable stuff.
From what I can tell, the idea is to create a standard set of objects and rules for interacting with these objects. Of course that is how protocols tend to look.
What are some of the new objects/concepts proposed by Tent? For example, is there a distinction between "home" and "users" akin to server/client? Are there several types of messages, compared to email? Is there a standard cookie-like object? What is the conceptual model for sharing? Any insight would be appreciated.
The protocol seems to have some fundamental limitations.
For my money I'd rather go with FETHR (see http://dsandler.org/brdfdr/ and this paper: http://dsandler.org/brdfdr/doc/iptps-fethr/) and its implementation - which has code available right now (https://bitbucket.org/dsandler/brdfdr/).
> What is wrong with other social services?
Centralized Social Service Providers limit what you can share and who you can share with. They only allow users to interact with other users on the same network. Because their products are centralized and maintained by a company, users are left in the cold when the company changes its products or shuts down. There's nothing wrong with a company offering users social services. But users shouldn't be limited by those companies. Imagine if you could only email other customers of your Internet Service Provider. Unfortunately Centralized Social Service Providers have done just that. You can only communicate directly with other users of their closed network.
> If you don't like a bank you can withdraw your money and deposit it somewhere else, including your own home. You could even start a new bank where you and your friends felt safe. You can still pay your bills and maintain your financial relationships, just tell them about your new account. We aren't talking about money. Your data is far more valuable -- your family and friends' photos, locations, and private communications. You should be able to store them somewhere you trust, move them when you want, control who can and can't see them.
It's not an easy problem to solve when it comes to privacy and security: http://www.faqs.org/patents/app/20120110469#b
Eventually to arrive at this: http://myownstream.com
I am curious as to how retaining copyright will help them prevent fragmentation?
Can they not elect themselves as project leaders of the opensource project and prevent fragmentation?
Project leader is not a formalized position (or even a meaningful concept, really) in free software and doesn't come with the power to prevent forks or fragmentation. I guess their thinking is once a community of users and developers is brought together, they can be trusted to establish a model that retains compatibility since it is in everybody's interest. While at the moment, an incompatible fork would have the same "network effect" as the original.
Either way, the copyright only covers the software, not the protocol.
It is possible that they could craft a license such that protocol-incompatible changes are not allowed, but I don't think that would be in their best interest. This is a starting point. The risk isn't fragmentation, it's crickets (ie nobody cares) and an untested protocol that may need multiple versions to get right.
It's really nice to see people are working on ways to sort of "replace" the current centralized services out there.
Let us hope they are attractive enough to developers and users.
NB: I am the technical cofounder of elgg.org, and I believe software like Tent is (part of) the future of the social web.
Also, I really think they're making a mistake by not using secure WebSockets for their protocol. Plain HTTP has too much overhead for what needs to be an efficient messaging protocol, and it doesn't address the potential need for persistent connections.
A P2P camping site could spring up while you're waiting for the bus, sitting through a boring meeting, or camping on the river bank. I tried to use Wi-Fi Direct/Bluetooth, but I found that iPhone and Android set devices to non-discoverable by default for security reasons. I did, however, find a lot of Nokia/LG phones in discoverable mode on the subway.
I hope Tent will be successful.
It could be an application that helps strangers get to know each other and gets people to go outdoors.
"There are four issues with decentralized, large-scale sharing of content between non-technical people..."
and you claim those four issues are
1. Identity Persistence
2. Data Persistence
4. Access Restrictions
I do not see why this is so. But even if it is, it is not even clear if the main point, at this stage, is to identify the atomic components of the "social web" -- perhaps what is sought after now is a slightly higher-level conceptual framework which puts these together.
For example, "publishing" may be missing the point - perhaps we need a finer grained concept: different kinds of collaboration, delegation of authority, distribution of tasks, self-communication, electronic cloud prayer...
Figuring out how to bootstrap network effects is critical.
edit: wow, wrong link.
The culture of the decentralized web doesn't bode well -- not enough capitalism -- which might in turn affect the quality of the product.
I can't help but read that and think, "terrorists." Then again, there will always be that tradeoff and you are probably on the right side.
How so? It looks quite decentralized to me.
1. signing data so that the source can be verified
2. encrypting messages so that they can only be read by particular recipients
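Point 1 can be illustrated in a few lines. This sketch uses an HMAC with a shared secret as a stand-in for a real public-key signature (an actual deployment would sign with the author's private key and verify with the published public key; the stdlib has no asymmetric crypto, and point 2 likewise needs a proper crypto library):

```python
import hashlib
import hmac

def sign(message: bytes, secret: bytes) -> str:
    """Produce a tag that only holders of the secret can compute."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, secret: bytes) -> bool:
    """Check the tag in constant time; fails if the message or the
    signature was tampered with."""
    return hmac.compare_digest(sign(message, secret), signature)
```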
Where provider.tld publishes a specification/API -- like robots.txt, a tent.json that would specify the actual API endpoints for a given user.
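The discovery idea can be sketched as follows. The document shape, the `users` key, and the endpoint names are all assumptions -- there is no actual tent.json spec here:

```python
import json

# Hypothetical well-known discovery document served at
# https://provider.tld/tent.json (shape invented for illustration).
EXAMPLE = """{
  "users": {
    "alice": {
      "posts":   "https://provider.tld/u/alice/posts",
      "profile": "https://provider.tld/u/alice/profile"
    }
  }
}"""

def endpoints_for(discovery_doc: str, user: str):
    """Return the user's API endpoints, or None if unknown."""
    doc = json.loads(discovery_doc)
    return doc.get("users", {}).get(user)
```

A client would fetch the document once per provider, then talk to each user's endpoints directly.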
I can swear I saw a section with names on the site, but can't seem to find it now. It looks like it was taken out.
"If the app does not respond with 2XX, then the server should try again later."
"Shirley has her client in maintenence mode; jerrold.me will attempt to deliver the notification later using an exponential backoff algorithm."
This effectively makes each person an island unto him- or herself, and hence the model of the social web breaks down. It wouldn't work, and people know it.
In order to liberate the data, you're throwing the baby out with the bath water.
The real social-web aspect of the Facebook is in their Groups, or Pages, where users collaborate, converge "together" on a single entity (of any particular Group or Page).
Do you follow my logic now?
I am against the idea of making users stand independent on their own, because they then become islands in themselves. It is fine if you are making them a product, a brand, or a portfolio, but none of this is helpful for authentic social interaction. You don't need a brand or a portfolio to be social; that's not its original purpose. The original purpose of the user (an individual human being) is to communicate, be social, and create communities.