From what I remember, Mastodon switched from OStatus to ActivityPub a few releases ago. In the past they had to extend the OStatus protocol for private messages, which meant some clients might not honor the private status of these posts. ActivityPub has private messages defined properly in the spec.
It seems PeerTube (https://github.com/Chocobozzz/PeerTube) will also use ActivityPub.
For anyone who's similarly curious or confused, LWN published a pretty great overview last month—their "Federation in social networks" article:
The ideas about using DIDs (https://w3c-ccg.github.io/did-spec/) and moving off a common transport like HTTP were also interesting.
If building the next version of internet discussion and sharing, we really need first-class support for encryption.
My more private messages getting out would hurt me more than my twitter-style communication would; but it would be worse for me personally if all the twitter-style communication got out than if only my more private messages did.
You have to account for manipulative big data risks in your analysis, thinking only about your personal data is an outdated approach.
Doing it the other way around? We literally have 50 years of experience of that being a terrible, terrible idea.
The biggest advantages of ActivityPub over OStatus are: JSON instead of Atom XML; a defined client-to-server API; and private distribution, including to contact lists (like Diaspora's aspects).
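To make the private-distribution point concrete, here's a toy Python sketch (the actor URLs are made up) of how ActivityPub addressing works: a post is public only if it's addressed to the special Public collection, otherwise it's delivered only to the listed recipients' inboxes.

```python
import json

# The special Public collection defined by ActivityStreams; omitting it
# from "to"/"cc" keeps a post out of public timelines.
PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

# A hypothetical direct message from alice to bob (URLs are made up).
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.com/users/alice",
    "to": ["https://example.org/users/bob"],  # direct recipient only
    "object": {
        "type": "Note",
        "content": "A direct message",
        "to": ["https://example.org/users/bob"],
    },
}

def is_public(activity):
    """An activity is public only if addressed to the Public collection."""
    recipients = activity.get("to", []) + activity.get("cc", [])
    return PUBLIC in recipients

print(is_public(note))  # False: delivered only to the listed inboxes
```

The nice part is that private distribution falls out of the same addressing model as public posts, instead of being bolted on the way it was in OStatus.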
As someone who has worked on this type of standard for over a decade, I can say that it's the best federation standard I've ever seen, and I've seen them all.
Does this mean they may have accidentally published a message intended as private, to the public?
If so, I don’t think I can imagine a more efficient way to rob me of any confidence I ever had in Mastodon…
All of this without breaking compatibility across instances running different versions; quite a nice piece of engineering IMHO.
This solution was discussed at length with Mastodon devs before the implementation of private messages. It was ignored. Now we have a situation where Mastodon is likely to switch off OStatus soon, leaving behind all those projects that don't have the dev resources to rewrite their core federation systems every few years.
The OStatus/AP dual stack is also pretty hacky and not even valid according to the AP spec, although it's getting better all the time.
If you want to keep something private, don't post it on a social media platform -- or at least, encrypt it out of band before you do.
Really, the only time I should expect the content I share on a social media platform to be public is if I shared it as public content.
[Note that I am talking about technical capabilities and privacy here. Of course somebody with whom I shared private information can re-post it publicly, but solving that is out of scope for a technical solution - and is also true outside of social media platforms]
If they are incompatible, they aren't orthogonal, they are opposed.
For some people, this is strictly limited to the technical capability to prevent (or allow) the flow of information. I'd long subscribed to this point of view, but have found it inadequate, if only because capabilities have been changing so dramatically on this point.
Rather (and this is something of a personal definition, though I'm not certain it's specific or original to me), privacy is the ability to define and defend limits and boundaries on information sharing.
You might ask "why use that definition?"
My principal answer is that if it isn't possible to share information in a limited context, then something immense and profound is lost. Two people, or twenty, or even several tens of thousands (say: a large company or government bureaucracy) might need to share information without disclosure.
I've been following the ICIJ's multiple projects on various data leaks, which are particularly poignant in revealing both sides of the argument. On the one hand, the ICIJ itself farms out its data to several hundred journalists from numerous organisations, and keeps a lid on the project until an agreed-on release date. (The group has produced a video showing just this process.)
On the other, ICIJ's stock in trade has been leaks of information from other organisations whose privacy has been breached.
The question of "which of these is right?" should very much arise. And it's not an easy question to answer, though I believe there are some guides which can be used.
As such, privacy is not an inviolable right, but it is a crucial element of social organisation.
The definition draws in part on Jeffrey Rosen's The Unwanted Gaze, published in the late 1990s, and apparently quite underappreciated and/or little known, though it's been a while since I've read it.
But it's like saying that publishing is opposite to privacy. Yeah... It is so by design and definition.
If person email@example.com mentions firstname.lastname@example.org, a salmon is sent from example.com to example.org containing the message. The endpoint that this is posted to is the 'salmon endpoint'.
Using a second endpoint for privacy-enhanced messages would have been the way to go. Old servers would never have received private messages, while new servers that understood the extension could have kept 90% of their old code and infrastructure.
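Roughly what that would have looked like, as a toy Python sketch (the endpoint names are hypothetical, not from any spec): private messages only ever go to servers that advertise the new endpoint, and delivery fails closed otherwise.

```python
# Sketch of the "second endpoint" idea. The "private-salmon" name is
# made up for illustration; the point is that a sender never falls back
# to the public salmon endpoint for a private message.

def deliver(message, server_endpoints):
    """Pick the delivery endpoint for a message, failing closed for
    private messages when the receiver doesn't support them."""
    if message["private"]:
        endpoint = server_endpoints.get("private-salmon")
        if endpoint is None:
            # Old server: it never even learns the private message exists.
            raise RuntimeError("receiver does not support private delivery")
        return endpoint
    return server_endpoints["salmon"]

old_server = {"salmon": "https://old.example/salmon"}
new_server = {"salmon": "https://new.example/salmon",
              "private-salmon": "https://new.example/private-salmon"}

print(deliver({"private": False}, old_server))  # public path still works
print(deliver({"private": True}, new_server))   # new servers get the new endpoint
```

Old servers keep working unmodified for public traffic, which is the backward-compatibility property the dual-stack approach gave up.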
The other question is whether we want server-to-server federation at all instead of a P2P network like bittorrent. With the decreasing cost of compute, I am also optimistic of everyone being able to run a little sandbox on the cloud to exchange data with others, instead of routing it via FB/Twitter.
This is exactly what namecoin tried to solve. Buy an identifier, put whatever you want in it. Example: https://nameid.org/?name=rakoo
> The other question is whether we want server-to-server federation at all instead of a P2P network like bittorrent.
We want something in between. The best model I know is the one of scuttlebutt (http://scuttlebot.io/more/protocols/secure-scuttlebutt.html): Each peer is identified by an asymmetric keypair, and writes posts/comments/photos to a local ledger. Everything is signed and potentially encrypted if the message is to be read only by select people. Diffusion follows the peers' connections: stuff is sent from peer to peer as they connect together, friends can be used as a third leg (i.e. a common friend can carry stuff even if it's not for them), and you also have pubs where more people can connect and get more stuff faster. This system is different from bittorrent in that everything depends on the human connections: information spreads along human acquaintances, names aren't globally unique, they depend on how your friends agree to name you, etc.
It's better than naive server-to-server federation because pubs can be simple, stupid message forwarders yet still have all the advantages of being always-up servers.
The only issue is client support and the fact that I don't have a good sense of the state of development/stability/soundness for SSB.
If you (or anyone) has experience actually using SSB or similar true P2P platforms, I'd be very interested to hear what did or didn't work well.
On the other hand, the underlying libraries are opinionated in that they usually use depject and a handful of other tools/techniques that aren't super familiar to those outside the SSB community, which makes it harder to write brand new clients.
I'd use SSB exclusively if they offered a web client (as I'm on a Chromebook), but we're slowly getting there.
you do need to host your `sbot` somewhere tho
Very interesting, and maybe better than SSB.
It has hypercore (raw distributed streams) and hyperdrive (a file abstraction on top), plus a whole bunch of related projects, like the Beaker browser.
The protocol could be turned into a generic federated messaging framework.
There was a discussion about that somewhere, but I can't find it anymore.
- Mobile users content should remain available when they are disconnected.
- Participating in a federated environment still involves plenty of technical challenges; if we want non-tech-savvy users to participate, we need admins.
I personally would like to see servers for web applications, be they Sandstorm.io-type app platforms or Mastodon-type social networks, in a family-and-friends scale operation. I'd happily host resources for family and friends, for instance. While most people may not be able to run their own server, almost everyone is either family or friends with someone who can, and corporate offerings will fill in the gaps.
Currently ActivityPub servers in practice use HTTP Signatures and Linked Data Signatures, so there's a certain amount of proof of the origin of messages there. But in moving towards a much more peer to peer system, we can do even better by stripping out SSL Certificate Authorities and DNS altogether. The paper linked above discusses one path to accomplishing that in ActivityPub using DIDs. Hope that's interesting to you!
> Unfortunately at the time of standardization, there are no strongly agreed upon mechanisms for authentication. Some possible directions for authentication are laid out in the Social Web Community Group Authentication and Authorization best practices report.
We have two different p2p social feed apps that are compatible, but as we plan for future development we're considering ActivityPub/ActivityStreams.
Same with federation: It's what you need right now for "normal" users (e.g. compare user numbers of Mastodon etc to purely p2p alternatives). While I personally self-host a lot of stuff and would be willing to run a Mastodon instance for friends and family, I can't expect or even recommend they run something themselves individually. Thanks to federation, nothing is stopping those that can to run their own single-user software speaking the protocol, or for instances to only serve very small groups.
I guess there could be some sort of "super node" solution, where a known Mastodon server would just forward you the "what happened while I was asleep" stuff. People could encrypt their messages to you with your advertised public key, so that you don't have to trust the super nodes. But I'm not sure about the additional complexity of implementing that vs. current Mastodon using ActivityPub, nor about the usability challenges involved.
It's actually remarkably easy to onboard to SSB with Patchwork. The problems set in when you start considering mobile devices or multi-device support (the latter is currently possible but risky).
For example, in Freenet you don't control what gets downloaded and served (which also helps to hide what you're fetching on purpose).
There is a way to do this! Check out https://blockstack.org. We are working on decentralized identity and key management.
Blockchain. Namecoin, in particular.
What's interesting is that ActivityPub requires signing the data twice, first on a HTTP request level, then the JSON-LD itself. It seems like the designers tried to make it simple (let's use JSON-LD and HTTP) but after several edge cases the standard got out of control.
"Linked Data Notifications also supports a number of RDF serializations which are not required for ActivityPub implementations. However, ActivityPub implementations which wish to be more broadly compatible with Linked Data Notifications implementations may wish to support other RDF representations." source: https://www.w3.org/TR/activitypub/
Server authentication: For example XMPP uses client certificates to authenticate servers to other servers (TLS server certificates usually can be used for client authentication too).
Message authentication: maybe exchanging OpenPGP messages would be easier? They already are signed and the payload can be anything. Of course it's not as easy to list a collection of these messages as it is with JSON (just collect into an array).
> If you avoid LD-signatures then you don't have to mess with JSON-LD canonicalization either,
I want to be interoperable with existing software so avoiding it is not a practical option (although allowed by the ActivityPub spec). Just like having non encrypted HTTP2 is not a practical option (although allowed in the spec).
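For anyone wondering why canonicalization comes up at all: the same JSON object can be serialized with keys in different orders, so hashing or signing the raw bytes breaks as soon as a server re-serializes the payload. A toy Python sketch, using sorted-keys serialization as a deliberately simplified stand-in for the real RDF canonicalization (URDNA2015), which additionally has to deal with @context expansion and blank nodes:

```python
import json
import hashlib

# Two servers re-serialize the same activity with different key order.
a = '{"type": "Note", "content": "hi"}'
b = '{"content": "hi", "type": "Note"}'

def naive_digest(raw):
    """Hash the raw bytes: breaks whenever the payload is re-serialized."""
    return hashlib.sha256(raw.encode()).hexdigest()

print(naive_digest(a) == naive_digest(b))  # False

def canonical_digest(raw):
    """Parse, then re-serialize deterministically before hashing.
    (Simplified stand-in for URDNA2015, NOT the real algorithm.)"""
    obj = json.loads(raw)
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

print(canonical_digest(a) == canonical_digest(b))  # True
```

The real LD-signatures spec solves the same problem, just with a much heavier mechanism, which is where most of the implementation pain comes from.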
Look in particular at that cool video in it. Someone posts a video to Peertube, which gets federated to Mastodon. People comment on Mastodon and the comment gets federated back into Peertube. People reply on Peertube, and the comment goes back to Mastodon. Kinda nifty:
Btw, I have been greatly enjoying Mastodon. I'm @JordiGH@mathstodon.xyz in case anyone feels like chatting to me there.
Here is a spec for something no one currently uses, please take a risk and invest your time to understand ActivityPub, build an impl of ActivityPub, and then build your thing on top of it. Then, if many others also do this, it might have a chance at success.
Heed the lesson of OStatus. Open source has killed RFC-style development. If the backers of ActivityPub genuinely want people to use it, they should publish a reference gRPC microservice porcelain on Docker so people can use it from any language without thinking about it. Otherwise the cost to implement ActivityPub is just too high.
I tried out Mastodon but didn't become a regular user. So I'm just curious, are you claiming a million monthly active users or a million accounts created since inception?
A million a month is roughly what serverfault.com gets. If you have 1M MAU, that's a nice achievement, though it's still 1/130th the market share of Twitter.
I hope Mastodon keeps growing to level out the playing field. That's why I suggested a way to make it easier for other sw engineers to support it.
RFCs are a sign of maturity of open source projects.
If your goal is actually different implementations and not just "an open source project", standards are important. The space was quite fragmented, ActivityPub is a chance to have more stuff talk with each other.
At Hathi (https://newguard.icei.org/projects/index.html) we're using ActivityPub as our data exchange protocol.
It's well thought out, and saves us a lot of trouble. It's easier to discuss architecture with something on hand to refer to.
As for the investment of time.. yes, but we figure it'll be worth it. We're not satisfied with any of the current platforms. We want to be able to collaborate without distractions, and we figure others will join us along the way.
If you feel this is a small point, please consider the centralization which has happened with email, jabber, and other federated systems.
It would seem to be a good base to build something like ActivityPub on top of.
It's difficult to tell what requirements they have considered, and what they have forgotten without reading (and re-reading) the entire spec.
(although, that now just sounds like 'drinking alone')
But I believe that, at the end of the day, mass adoption can only happen when there is a popular platform. Whether it's closed source like Facebook or Twitter, or open source like Qbix, it needs to be actually used. That's how, for instance, OAuth came to be so widely adopted, among many others.
Here is an overview of what I mean:
Protocols like Scuttlebutt or ActivityPub can be bolted on after the actual infrastructure is there.
If we didn't have these restrictions, this would be a purely academic debate, because there are already many practical ways to extract information from a resource and transmit it to other computers.
"Federation protocols" are a way of volunteering this information in a common language, but there's no real technical reason that's necessary to get "decentralized social media" or other such things. They're only considered potential solutions because effective scraping is not allowed.
Decentralized / democratized information access is not an issue that can be solved technically. It needs to be solved socially (primarily because the necessary technical infrastructure already exists, and is just held back by legal anti-solutions).
It was the only thing missing from Google+ IMO.
and the older https://www.w3.org/wiki/Activity_Streams/Examples
SSB uses an append-only log similar to a blockchain, but it only contains things your identity knows about (you and your friends and channels).
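A toy sketch of that kind of per-identity append-only log in Python. Real SSB signs each entry with the feed's ed25519 key; the HMAC here is only a standard-library stand-in so the sketch stays self-contained.

```python
import hashlib
import hmac
import json

# Stand-in for the identity's private signing key (real SSB: ed25519).
SECRET = b"feed-signing-key"

def append(log, content):
    """Append an entry that chains to the previous entry's hash."""
    prev = log[-1]["key"] if log else None
    entry = {"previous": prev, "sequence": len(log) + 1, "content": content}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["key"] = hashlib.sha256(payload).hexdigest()
    entry["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Check the chain pointers, sequence numbers, and content hashes."""
    prev = None
    for i, entry in enumerate(log, start=1):
        payload = json.dumps({"previous": entry["previous"],
                              "sequence": entry["sequence"],
                              "content": entry["content"]},
                             sort_keys=True).encode()
        if entry["previous"] != prev or entry["sequence"] != i:
            return False
        if entry["key"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = entry["key"]
    return True

log = []
append(log, {"type": "post", "text": "hello"})
append(log, {"type": "post", "text": "world"})
print(verify(log))  # True: each entry chains to the previous one
```

Because each entry commits to its predecessor, a peer replicating your feed can detect any tampering or reordering, which is what lets SSB gossip feeds through untrusted friends and pubs.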
As much as people far too freely giving up information about themselves online. Decentralisation can work as an anti-censorship measure, but I wouldn't expect something I send out over BitTorrent (for example) to remain my personal private info.
For the record, I'm the main developer of an XMPP-based "social network" project (Salut à Toi), and we have been able to communicate natively with other XMPP projects (e.g. Movim) for years (we can share comments like in the video; it's the basis of a common standard).
The "many extensions" thing is commonly misunderstood by non-XMPP people as making software more complicated or hard to maintain. But that's not true: extensions are a strength of XMPP, allowing you to concentrate on one feature at a time, evolve it, and change it if needed, and there is a very good negotiation mechanism. Software evolves, and it's normal that different clients/servers have different features, but even so they can still communicate together.
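That negotiation mechanism is service discovery (XEP-0030): a client asks a peer which extensions it supports and degrades gracefully for the rest. A minimal Python sketch parsing a hypothetical disco#info result (the advertised feature set is made up for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical XEP-0030 disco#info result from a server, advertising
# message archiving (MAM) and carbons, but not OMEMO.
response = """
<iq type='result' from='example.org'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <feature var='urn:xmpp:mam:2'/>
    <feature var='urn:xmpp:carbons:2'/>
  </query>
</iq>
"""

def supported_features(xml_text):
    """Collect the namespace URIs the peer advertises support for."""
    root = ET.fromstring(xml_text)
    ns = "{http://jabber.org/protocol/disco#info}"
    return {f.get("var") for f in root.iter(ns + "feature")}

features = supported_features(response)
print("urn:xmpp:mam:2" in features)    # True: archive supported, use it
print("urn:xmpp:omemo:1" in features)  # False: not advertised, fall back
```

This is why mixed deployments keep working: a client only enables what both sides actually advertise, instead of assuming one monolithic protocol version.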
XMPP is not a single technology but a base to support many coherent technologies, for many different use cases.
To go back to ActivityPub: while I'm a bit annoyed that nobody tried to contact us to join our efforts on XMPP, resulting in yet another standard, at least if it's adopted by some platforms it may simplify the creation of gateways. I've looked at the specs; I don't think that putting "likes" and "followers" at the center of a social network is a good idea. At first sight, it doesn't seem too difficult to translate to XMPP.
If you really insist on doing it all by hand, you can start by checking the compliance suites (XEP-0375) or https://xmpp.org/about/technology-overview.html. Those two links are actually at the top of the XEP list (https://xmpp.org/extensions/).
If your development workflow uses sprints in any form, the format of XMPP documentation is perfect. If you want to implement everything at once, good luck.
I know there's a big scary "Rejected" warning at the top, but in this case it's okay to disregard that. It was rejected for bike-sheddy reasons about it not being perfect yet and is still a good starting place. The warning should go away sometime in the next few weeks (the problems that it was rejected for have been addressed, so the next time the council meets it will likely be accepted).
Exactly my feeling when trying to read ActivityPub and other "federated social network" specs.
Nevertheless, I agree that it would not fit the XMPP ecosystem to be used as a micro blogging service. It's like using e-mail for IM: Possible, but not recommended.
I don't think there is a point in getting it complete. You choose the XEPs you want by looking at what your client/server goals are.
Also if you need help in choosing, there are pointer XEPs, like the Compliance Suites, or the mailing list, or the chatrooms. See https://xmpp.org/community/
> Getting it 'right' seems to be a problem when the XEP definition is open to interpretation.
When in doubt, ask the author/council.
For example, how many desktop clients support OMEMO, MAM and Carbons? Those are just 3 XEPs, all related to secure multi-device communication. Yes, they haven't reached "recommended" status yet, but I think that is more a symptom than the root of the problem.
I love the XMPP and Golang stuff you are doing.
And as far as I can see, Dino seems to have a pretty good XEP collection nowadays:
Dino's still relatively new software so it may not be all that stable yet (I don't actually use it enough to know). I hope it works out for you though; it looks nice if nothing else!
> I love the XMPP and Golang stuff you are doing.
Thanks! It's still very early days and I don't get a lot of time to work on it, but I'm glad someone has seen it and found it useful.
Check out Movim for an example: https://movim.eu/
Each actor has an RSA keypair. Server-to-server requests are authenticated using HTTP Signatures (that's a different spec), so you know a delivery is legit and on behalf of which actor it is. For messages that are passed directly between origin and destination, there is no reason to believe the sender has tampered with the message (SSL is a prerequisite anyway). For messages that are about other messages, such as reblogs, you would not trust the sender; you would look up the message by its ID on its origin server instead. This is always possible because ActivityPub prescribes IDs to be resolvable URIs. There's also another method to verify the authenticity of a passed message, called Linked Data Signatures (that's a different spec). It's a way of signing JSON inside JSON; it's a bit of a bother to implement though (to make sure the signature works regardless of how the payload is re-formatted or re-ordered in transit, the JSON is "canonicalized" using RDF).
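To illustrate the HTTP Signatures part, here's a Python sketch of building the signing string and Signature header in the style of the draft spec. Mastodon actually signs with the actor's RSA private key (rsa-sha256); the HMAC below is only a standard-library stand-in, and the key URL is made up.

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

def signing_string(method, path, host, date):
    """Build the string-to-sign. The header order here must match the
    "headers" parameter in the Signature header below."""
    return "\n".join([
        f"(request-target): {method.lower()} {path}",
        f"host: {host}",
        f"date: {date}",
    ])

def signature_header(key_id, secret, method, path, host, date):
    """Produce a Signature header value (HMAC stand-in for RSA)."""
    mac = hmac.new(secret, signing_string(method, path, host, date).encode(),
                   hashlib.sha256).digest()
    sig = base64.b64encode(mac).decode()
    return (f'keyId="{key_id}",algorithm="hs2019",'
            f'headers="(request-target) host date",signature="{sig}"')

date = formatdate(usegmt=True)
header = signature_header("https://example.com/users/alice#main-key",
                          b"shared-secret", "POST", "/inbox",
                          "example.org", date)
print(header.startswith('keyId='))  # True
```

The receiving server resolves `keyId` to the actor's public key, rebuilds the same signing string from the request it received, and verifies the signature, which is how a delivery gets tied to a specific actor without any shared state.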