Hacker News
Twitter Could Have Become a Protocol (austingardnersmith.me)
222 points by gardnersmitha on Nov 7, 2016 | 131 comments



They built it.

They had an experimental project called 'annotations' where you could attach 1k of json to each tweet, like a DIY microformat.

I got onto the beta and created a prototype Twitter client where you could attach mini 'apps' to tweets based on the payload type. E.g. you could tweet out a poll, an invitation to play a game, a job advert, or whatever, and you could attach your own app as a listener.

I think it could have been pretty amazing, basically A Message Bus For Everyone. If you build this yourself, you run into a chicken-and-egg problem; Twitter could have pulled it off, as they had the eyeballs and the developers.
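To make the "message bus" idea concrete, here is a minimal sketch of what such a client could look like: tweets carry a small JSON annotation with a type field, and apps register themselves as listeners for that type. All names and the payload shape are invented for illustration; the real annotations beta may have looked quite different.

```python
import json

# Hypothetical sketch: each tweet carries a ~1 KB JSON annotation with a
# "type" field, and client apps register listeners keyed on that type.
listeners = {}

def register(payload_type, handler):
    """Attach an app as a listener for one annotation type."""
    listeners.setdefault(payload_type, []).append(handler)

def dispatch(tweet):
    """Route a tweet's annotation to every listener for its type."""
    annotation = json.loads(tweet["annotation"])
    results = []
    for handler in listeners.get(annotation.get("type"), []):
        results.append(handler(annotation))
    return results

# Example: a poll app listening for "poll" annotations.
register("poll", lambda a: f"Poll with {len(a['options'])} options")

tweet = {"text": "Which db?", "annotation": json.dumps(
    {"type": "poll", "options": ["postgres", "sqlite"]})}
print(dispatch(tweet))  # ['Poll with 2 options']
```

Tweets with no registered listener simply fall through, which is what makes the bus open-ended: new payload types cost the platform nothing.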

Unfortunately they pulled the plug on it, around the time they started to close down the ecosystem.


It's not surprising really: is being a centralized message bus really attractive when it just means you'll be responsible for policing it?

If you run a central message bus off which people can produce their own applications, you'll find yourself having to deal with everything that ISPs and mail providers have to deal with. That means being asked to keep logs for years, dealing with DMCA takedowns, filtering the worst of the internet, etc.

And for what? As soon as anyone relying on it achieves scale they'll re-implement it themselves.

Besides, what you've described is the opposite of a protocol: an inner platform.


I disagree that anyone else could re-implement it. Once you have a critical mass of listeners, it would become the first port of call for any message you wanted to distribute.

As for policing, I think that would probably be where the money would be made - reputation/quality of the messages being posted.


Why would people pay additionally when they don't need to pay now (relatively) to distribute/post messages?


Suppose twitter had become a giant json firehose, and you could listen to it and, for example, build a jobs listings website from the jobadvert microformatted tweets.

It would end up full of fake job spam, and the listeners might want quality filters. I can imagine that you might pay to validate your identity as not-a-spammer.
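A listener-side quality filter of the sort described might look like the following sketch. The payload shape and the "verified senders" set are invented; the point is just that identity validation becomes a filter predicate the consumer applies to the firehose.

```python
# Hypothetical sketch: a firehose consumer building a jobs site keeps only
# "jobadvert" payloads whose sender has validated their identity
# (i.e. paid to prove they are not-a-spammer).
verified_senders = {"acme_hr", "bigco_jobs"}

def job_listings(firehose):
    """Yield job adverts from verified senders only."""
    for msg in firehose:
        if msg.get("type") == "jobadvert" and msg.get("sender") in verified_senders:
            yield msg["title"]

firehose = [
    {"type": "jobadvert", "sender": "acme_hr", "title": "Backend engineer"},
    {"type": "jobadvert", "sender": "spam4u", "title": "WORK FROM HOME $$$"},
    {"type": "poll", "sender": "acme_hr", "title": "irrelevant"},
]
print(list(job_listings(firehose)))  # ['Backend engineer']
```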


They could add several seconds of latency by default and sell high speed premium to trading bots. Humans don't care about a few seconds.


Humans definitely care about a few seconds, interesting strategy though.


How would "Twitter" capture that value, and not some third-party platform that offers those filter services? The value and revenue isn't with Twitter then, so how does Twitter pay the bills? Or am I seeing this in a different light than you do?


As far as I can tell a simple freemium model would have worked. You charge for the firehose and anything high volume, but you give a generous free allowance to keep the grassroots ecosystem healthy. They could have played all the same pricing games that Facebook is now playing, except across a much wider range of channels.
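The freemium split described above reduces to very simple metering. This is a toy sketch with an invented price and allowance, just to show the shape: a generous free tier, with billing that only kicks in at firehose-scale volume.

```python
# Invented numbers, purely illustrative of a freemium API pricing model.
FREE_CALLS_PER_MONTH = 100_000
CENTS_PER_1000_EXTRA = 10  # $0.10 per 1,000 calls beyond the free tier

def monthly_bill_cents(calls):
    """Bill (in cents) for one month of API usage under the freemium model."""
    billable = max(0, calls - FREE_CALLS_PER_MONTH)
    return billable // 1000 * CENTS_PER_1000_EXTRA

print(monthly_bill_cents(50_000))      # 0 -> grassroots app stays free
print(monthly_bill_cents(10_000_000))  # 99000 cents ($990) -> firehose consumer pays
```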

I'm not saying it was a slam dunk, but there was a lot of talk about this back in 2007/2008, and a palpable excitement about Twitter apps. Within a few years the media-company mindset had taken hold, the valuations and expectations exploded, and now we have boring old Twitter, which is still a damn cool thing in its own right, but which is seen as a failure because of a combination of pedestrian vision and mismanaged expectations.


I'm not saying it would have been a good business decision for Twitter, but it would have been pretty neat for developers and users.


Wouldn't spammers (assuming they're spamming to make money, and therefore likely make more than $0 per spam) also just pay money to be validated in the same way?


Interesting thought. Legitimate posters could hopefully outprice spammers.


Why would you want to build a service that distributes applications to mobile phones? There are many ways to make money.


I was thinking of a service that distributes applications to tweets.


What is the competitive differentiation and defensibility then in your mind?


"They had an experimental project called 'annotations' where you could attach 1k of json to each tweet, like a DIY microformat."

That's what we're doing with "Oh By"[1]. When you speak of a "message bus for everyone", that's our goal.

You are correct, however, that there is a chicken and egg problem - Oh By Codes are interesting if everyone knows what Oh By Codes are. Otherwise, people just wonder why a weird code is written in chalk on the sidewalk ...

[1] https://0x.co


A similar complaint came up here recently on a thread called Next steps for Gmane [1], starting with "I really miss the newsgroups that focused on just the messages, and could be consumed by any NNTP client, stored offline, searched, etc."

Quoting a portion of a great answer [2] from HN user niftich, that feels very appropriate here:

> "Not enough people make new running-on-TCP or running-on-UDP protocols because new protocols are hard to design, they don't work with the one application where everyone spends 70+% of their time (the web browser), and they probably get blocked on a middlebox except if you use port 80 or 443 and fake being HTTP anyway. For all but very specialized use-cases, vomiting blobs of JSON (or if you want to feel extra good, some custom binary serialization format like protobuf or Thrift or Cap'nProto or MessagePack) across HTTP endpoints is pretty okay."

[1] https://news.ycombinator.com/item?id=12440230

[2] https://news.ycombinator.com/item?id=12440783


Running over 80 and 443 using an arbitrary protocol often doesn't even work with next gen firewalls because they don't recognize the protocol as HTTP.


Curious how that would work on 443, given that the whole point of SSL is that you can't recognize the payload as...well, anything.


Payload aside, SSL is identifiable as SSL. The host name is transmitted in clear text (SNI) as well as the protocol version and cipher negotiation.

Just because the traffic is encrypted, it doesn't mean the connection type isn't identifiable.
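The cleartext SNI point can be made concrete. The sketch below builds and parses just the SNI extension of a ClientHello (following the RFC 6066 wire layout), not a full handshake; it shows that the host name sits in plain bytes that any middlebox can read before encryption starts.

```python
import struct

# Simplified sketch: the TLS ClientHello's server_name (SNI) extension
# carries the host name in clear text. RFC 6066 layout:
#   ext type (2) | ext len (2) | list len (2) | name type (1) | name len (2) | name
def build_sni_extension(hostname: str) -> bytes:
    name = hostname.encode("ascii")
    entry = b"\x00" + struct.pack("!H", len(name)) + name  # type 0 = host_name
    server_name_list = struct.pack("!H", len(entry)) + entry
    return struct.pack("!HH", 0x0000, len(server_name_list)) + server_name_list

def parse_sni(ext: bytes) -> str:
    ext_type, _ = struct.unpack("!HH", ext[:4])
    assert ext_type == 0x0000                    # server_name extension
    (name_len,) = struct.unpack("!H", ext[7:9])  # skip list length + name type
    return ext[9:9 + name_len].decode("ascii")   # the clear-text host name

wire_bytes = build_sni_extension("example.com")
print(parse_sni(wire_bytes))  # example.com
```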


Right, but presumably any arbitrary protocol that plans to tunnel over port 443 would use a standard SSL negotiation, standard HTTPS connection, and then send the application-level protocol as part of the encrypted payload.

My very first job (back in 2000) actually did exactly this - we tunneled arbitrary TCP/IP traffic over HTTPS. The startup died over various management & investor relations fuck-ups, but the product was working - you could do things like make a SourceSafe connection across a firewall without exposing the server to the Internet. There was some concern about whether firewalls would kill the connection if it was open for too long, but it turned out that none of them actually did this (which unfortunately we didn't figure out until we were almost out of money and had wasted several months handling this case; see again re: management fuck-ups).
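The tunneling idea is simple to sketch. The endpoint, session framing, and base64 encoding below are all invented for illustration (the actual product's wire format isn't described), but they show the trick: arbitrary TCP payload bytes wrapped in an ordinary-looking HTTP POST, so a firewall that only permits web traffic passes it through.

```python
import base64

# Hypothetical framing for tunneling arbitrary TCP bytes over HTTP(S):
# the relay host, URL scheme, and body encoding are invented.
def wrap(payload: bytes, session_id: str) -> bytes:
    """Wrap one chunk of tunneled traffic as an HTTP POST request."""
    body = base64.b64encode(payload)
    headers = (
        f"POST /tunnel/{session_id} HTTP/1.1\r\n"
        f"Host: relay.example.com\r\n"
        f"Content-Type: application/octet-stream\r\n"
        f"Content-Length: {len(body)}\r\n\r\n"
    )
    return headers.encode("ascii") + body

def unwrap(request: bytes) -> bytes:
    """Recover the tunneled payload on the relay side."""
    _head, _, body = request.partition(b"\r\n\r\n")
    return base64.b64decode(body)

chunk = b"\x05SourceSafe handshake bytes"
print(unwrap(wrap(chunk, "abc123")) == chunk)  # True
```

In practice the whole request would itself travel inside a TLS connection on port 443, which is what makes it indistinguishable from regular HTTPS to most firewalls.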


That's a different point though, as you're now effectively using HTTP as your transport layer rather than using your proprietary protocol "naked" on port 443. So whether you send your protocol in a proprietary binary format over HTTP or via JSON APIs, it's still just HTTP over TLS. You're still sending HTTP headers, just with your own proprietary content as the body. The real problem is if you want persistent types of connections. As websockets have taught us: a great deal of equipment that supports HTTP doesn't support websockets, which is why even websites are still often stuck with RESTful APIs even when that doesn't fit their problem quite right (that situation is improving, however).

If one is using HTTP to transport your protocol (and the aforementioned websockets point isn't an issue), then your protocol should work over port 80 as well as 443. Barring any corporate / nationwide web filtering; but as others have already pointed out, that could also be an issue with 443 even with the advantage of TLS.

By the way, was it Socks2HTTP you worked on? I remember that project back around the time you described and was quite fascinated by it.


> That's a different point though as you're now effectively using HTTP as your transport layer rather than using your proprietary protocol "naked" on port 443.

HTTPS is built on top of TLS, not the other way around. You can't (passively) tell if a 443 TLS connection is HTTPS or a proprietary stream. You can take a guess based on statistics (which is what the Chinese firewall does to detect tunneling) but that's about it.


I'm aware HTTP is built on top of TLS in HTTPS. I think what you're discussing now is a little different from the HTTPS as a transport vs bespoke protocols running "naked" over TCP/IP (unless I've badly misunderstood any of the previous posts?).

Warning: nonsensical brain dump follows:

However, to address your point, you might be able to use SNI (which is sent from the client before the TLS connection is encrypted) to make some assumptions about the content. Granted, this would be more in the realm of web filtering, where you'd blacklist suspect domains or, in extreme cases, banned terms within hostnames. I wouldn't be surprised if SNI is one of the "statistics" the Chinese firewall uses (I'm not familiar with the implementation details of the Great Firewall of China).


A middlebox has no way[1] to tell what protocol is inside of a TLS connection unless it terminates TLS. It could be HTTP, it could be telnet, it could be whatever. So no, you're not effectively using HTTP as your transport.

1. Almost no way. The great firewall of china has traffic models of what payload sizes should look like for upload and download traffic of standard HTTP over TLS. If you start to tunnel TCP/IP over it, the pattern changes enough (small payloads for TCP ACKs, etc) that they will inject a TCP RST into the stream to screw up the connection. It's impressive and super frustrating.


> A middlebox has no way to tell what protocol is inside of a TLS connection unless it terminates TLS. It could be HTTP, it could be telnet, it could be whatever. So no, you're not effectively using HTTP as your transport.

While you're right that you could just use TLS without HTTP (as a great many services already do), the comment I was replying to was talking about running over HTTPS (he specifically stated HTTPS), ie TLS + HTTP. Which is effectively just using HTTP as your transport.

Like everything in IT, there's multiple ways one can approach this problem and it's probably fair to say that using TLS without wrapping your data inside a HTTP body makes greater sense if you're writing your own protocol from scratch. But on this occasion the post I was replying to - and many of the posts that preceded it - did make frequent references to HTTP and HTTPS.

> The great firewall of china has traffic models of what payload sizes should look like for upload and download traffic of standard HTTP over TLS. If you start to tunnel TCP/IP over it, the pattern changes enough (small payloads for TCP ACKs, etc) that they will inject a TCP RST into the stream to screw up the connection. It's impressive and super frustrating.

Ouch. Impressive though. Could one change the payload sizes of the TLS connection to make it look more like HTTP traffic? It shouldn't be that hard to do, as most of the time you'd probably just need to add junk to the end of the server replies (assuming you have a client / server relationship with your TLS protocol). You'd probably need to make the TLS connection RESTful as well, to further mimic HTTPS and the limited connection times. Though before long you've just reinvented HTTP....
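The "add junk" padding idea can be sketched simply: grow every outbound record to one of a few plausible "HTTP-looking" sizes so a traffic-model classifier sees fewer telltale tiny payloads. The bucket sizes and the 2-byte length-prefix framing are invented; real implementations would pad inside the TLS record layer.

```python
# Invented bucket sizes, roughly mimicking common web-traffic record sizes.
BUCKETS = [512, 1460, 4096, 16384]

def pad_to_bucket(payload: bytes) -> bytes:
    """Pad a payload up to the smallest bucket size (2-byte length prefix)."""
    size = next(b for b in BUCKETS if b >= len(payload) + 2)
    junk = size - len(payload) - 2
    return len(payload).to_bytes(2, "big") + payload + b"\x00" * junk

def unpad(record: bytes) -> bytes:
    """Strip the junk again using the length prefix."""
    n = int.from_bytes(record[:2], "big")
    return record[2:2 + n]

ack_like = b"\x10\x02"  # a tiny, TCP-ACK-sized tunneled payload
padded = pad_to_bucket(ack_like)
print(len(padded), unpad(padded) == ack_like)  # 512 True
```

The cost is obvious: a 2-byte payload now consumes 512 bytes on the wire, which is exactly the bandwidth-vs-stealth trade-off the parent comment hints at.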

By the way, how well do websockets work over TLS? Do they throw up false positives on the Chinese firewall?


I've used sshuttle once, which is an IP-over-SSH program. Great stuff but it's kind of sad how much extra work firewalls create.


> Great stuff but it's kind of sad how much extra work firewalls create.

That's a weird statement to make, given the point of firewalls is to limit one's access to a particular resource. If you're the systems / network administrator, then the extra work is part of the job of securing your infrastructure. If you're not an administrator, then you're bypassing the security measures put in place by your administrators, which may well be in breach of your employment contract (as they often outline IT policies). Worse yet, if you're not even hired by whatever company or individual owns that infrastructure, then what you are doing is illegal.


Some companies MitM the traffic (installing their cert on all computers) so there's that.


Companies have the privilege of blocking traffic on their network.

If they agree on letting employees use a service they'll let the protocol go through the MitM. Not only that, they'll let it go through the firewall on its native port without the need of tunneling it into https.

But if a company blocks a service, employees should not circumvent the block. That would be risky.


That view may be ethically valid, but anecdotally, normal employees are more creative in their pursuit to get the job done than most hackers.


I know and you can just use your phone nowadays. But be careful at doing what you shouldn't contractually do. If your boss is looking for a way to fire you, you're helping him even in those countries with strong protections for the employees.


Depends on whether the network you're on has state-level and/or corporate device-ownership-level MitM or not.


And as my view in that post foretells, I believe Twitter is already a protocol, we just call it the Twitter API. It's not an open protocol, of course, and its domain model is fairly anemic [1], especially compared to that of Facebook [2][3][4]. But in my opinion, this is what the author is trying to say: Twitter could have been a place where people interact with structured resources created and consumed by other apps.

It's a possibility, especially since Facebook became that place instead. Ultimately, Twitter didn't because they wanted to focus on driving more traffic through their first-party app (presumably as a captive client on which they can eventually display ads), and because they focused on cultivating a community (like Medium, Tumblr, LiveJournal) rather than a resource ecosystem.

[1] https://dev.twitter.com/rest/public [2] https://developers.facebook.com/docs/graph-api/reference [3] https://developers.facebook.com/docs/sharing/opengraph/using... [4] https://developers.facebook.com/docs/sharing/opengraph/objec...


> "and they probably get blocked on a middlebox except if you use port 80 or 443 and fake being HTTP anyway"

Curiously, games using UDP still work fine. Stop justifying bad protocols with the proxy straw man.


> Twitter had a chance to become a sort of de-facto API for lots of other applications.

The problem I see is that there doesn't seem to be any way to make money from this, for Twitter.

And really if everyone started using it this way, the privacy concerns would be even greater than the concerns people have about Facebook.


Yes there is. The same way there is money to be made in providing premium content. You could pay for extra capacity, bandwidth, fault tolerance, indexing, archiving, and many other desirable properties you'd need out of your messaging layer.

Instead the convoluted strategy by the higher-ups destroyed the entire thing. They thought they could make money with ads and "promoted content" but they somehow managed to fumble the execution of that one as well.


I think this is the Medium strategy. I think there is a mid size business in it, but one that's hard to defend and will see margins erode with time.

There is absolutely no "moat" in Twitter being a protocol.


You'd be surprised what happens when you become a hub and start seeing the effects of 3rd party integrations. That aspect can be turned into a moat if managed properly.


I think you are right but there's a time and place for it. I don't think it can save Twitter at this point but could have created a healthy ecosystem if timed well.

But it's very hard to time this well. Even Facebook and their apps ecosystem didn't create enough of a moat.


Well, Facebook's "app ecosystem" was dominated by Zynga. For a long time, Farmville was the top app and it was the only app that Facebook's highest engagement users interacted with. As such, when Zynga went downhill, it took the entire ecosystem with it.


If it was a truly open API layer, I could get these things from third parties. If it's not truly open, nobody would commit to using it.


The only people who win with ads are those who make money already and don't really need the ads. It's not worthwhile.


Well, where's the money in email? Services, right? The reason Twitter didn't go the same way as email is because it's centrally controlled and hostile to services, in my opinion.


Are there any pure-play free email providers?


Yes, the ones where you pay with your content.


I can think of at least a couple of services that do it out of sheer altruism or for political reasons (like Autistici or Riseup) and don't mine your content.


Interesting, I never heard of them.


Degooglisons-internet [1] is an initiative (broader than just email) to promote free services without selling your data. It aims at giving easily deployable apps so more hosters can appear.

[1] https://degooglisons-internet.org/


Are these stand-alone, or in a company with other service offerings?


If they control the pipe?! For sure there's ways to make money


Exactly, you can't put ads in a protocol.



Twitter was too busy navel gazing and pushing PR about how they had the ability to incite revolutions.

They were afraid to grow beyond tweets. In life you need to grow or die. Ten years from now, they'll be fondly remembered as the AOL Instant Messenger of the 2010s.


Infinite growth is infeasible. Perhaps Twitter will be one of the few tech companies that will stabilize. Or get killed by a competitor.


>>In life you need to grow or die.

Says who? Plenty of businesses reach a certain size and maintain healthy levels of profitability over the long term.

"Grow or die" only holds true if you receive venture capital.


Not just grow in terms of dollars, grow in function and purpose.

If someone knocked you on the head in 2011 and you woke up today and logged into Twitter, you'd find that little has changed.


So what?


I think that is being unfair to AIM, which was vastly more useful at the time.


Didn't App.net try to be more of a protocol than twitter? How's that going for them?


It worked out great for Dalton Caldwell, who failed upwards to become a YC partner.


Yup, and they bungled that one pretty well when they shut down the entire ecosystem. They never recovered after that. They lost all the developer goodwill, and any chance of becoming a platform was vaporized at that exact instant.


See reddit.com/r/rad_decentralization

All of these technopolies will eventually be superseded by protocols. It just doesn't make sense to continue to rely on monopoly companies to provide core services. That's not to say there can't be variations or companies that build on top of core protocols etc. Just more than one, and not for the most common aspects.


I think the author has some good ideas about the things Twitter can leverage once it becomes the de facto method of broadcasting in the world.

However, becoming a protocol would not have helped Twitter get there.

I think the better path is to build and then protect a captive audience. Instead Twitter saw its audience base erode away not once, but multiple times: with Facebook News Feed (Twitter for news), then Instagram Video (Vine).

Twitter once had a massive captive audience and unique data. Now their competitive advantages have all disappeared. Opening up more won't help them gain an audience.


This is where I think people are still missing the boat on twitter.

Twitter is not an aggregator of data to be passed along. Twitter is a massive un-walled global community, though people may want access to the data as a byproduct of that. The future is in cultivating community, not protocols.


I respectfully disagree as to your note about Twitter being "un-walled". The easiest test would be: if Twitter shut down right now, and its infrastructure went dark as a consequence, could your/our/any community on Twitter continue? I believe the answer is "no", because it's centralized (and proprietary). I do agree with you that cultivating communities is important... though we do need some universally accepted, open, and available protocol to allow for cultivating of communities, sort of like what legacy protocols have done (e.g. email, IRC, etc.).


I see the power of Twitter in its brevity, not its depth. It's a content gateway.

I think legacy protocols such as email, IRC, SOAP...have proven time after time that there is no need for an information specific protocol.

Twitter could be more open, but I think they realized that would be giving away the keys to the castle.


I think one of the more insidious problems with many of these unicorn startups is that there is no longer the fiduciary freedom to make "modest" plays like becoming a protocol.

A company with $1B in venture funding isn't allowed to aspire to being a $500M business. It must grow as big as the sun, or die trying.


I think there's an even more general disdain for commodity offerings, at least in American culture. There's probably a guy who's in the .01% of his ZIP code because the regional store brand paper towels needed some of those cardboard tubes. But "success" looks like Zuckerberg or Jobs, not like that guy.


The v1 API was awesome, and partly why I started to love programming. V2 was already a pain: it simply took away the simplicity, and now I can't even create unlimited accounts anymore, let alone use the API with them.

Twitter lost big in that regard imo


Indeed, my weather station used to sit and tweet happily all day and all night to all four followers.

Then they introduced rate-limiting (60 per hour, when the station generated a tweet every 48 seconds), application validation, phone-number verification for accounts... I gave up after that and just resorted to RSS.


Wasn't there already Jabber or IRC? But they don't make money. Sad world where we prefer badly coded shit that makes money to awesome tech that doesn't.


badly coded _fancy/shiny_ shit with a web interface ;)

Apart from the sarcasm, I'm serious on the web part, which is terrible: all around the world people are "porting" protocols to JSON+HTTP (example: JMAP[1]). IRC is awesome[2], but it's not GET & POST, and the new kids on the block run away if it's a real protocol instead of an HTTP hack.

[1]: http://jmap.io/

[2]: https://aaronparecki.com/2015/08/29/8/why-i-live-in-irc


It's because of firewalls.


Sorry, but this is bs.

You could run any application on any port: IRC on port 80 with SSL would still run just fine.


It'll run fine, but I won't be able to connect to it from the inside of a corporate firewall, which doesn't actually grant me access to the internet, but redirects me transparently to an HTTP proxy that only permits HTTP requests to tcp/80 and tcp/443, with SSL interception on the proxy.

You can run whatever you want however you want, but that doesn't change the reality of the enterprise environment.


The thing is, you really shouldn't be using your corporate network for non-work-related things. Use your own network service for that kind of thing!

Do you want your boss to see your reddit history?


Some of us don't have Reddit histories we'd be ashamed of others seeing.


That's a real problem which I forgot about. Sorry.


lol Twitter doesn't make money


I've been thinking along the same lines. I really wish there were an open protocol that provided all of Twitter's functionality (and no more, to avoid scope creep).

But as much as I hate to admit it, Twitter's value is Protocol + Moderation. I'm not savvy with their operations, but having worked with other cloud platforms, I know that any platform with even a modicum of visibility is immediately abused in many ways that are hard to foresee. Malicious attacks against the platform itself are also an issue.

Email is a good example of open and federated platform, which unfortunately carries more abuse & noise than actual signal.

I still think building a protocol that incorporates a form of moderation is possible, but I'm not sure how to solve this.


> Email ... carries more abuse & noise than actual signal.

Really? That is not my experience at all. There was a span of years where spam got bad, early-2000s IIRC, but my email is now mostly signal and has been since at least 2010. I think this is partially due to better spam filters, e.g., Gmail was better than average when it went public in 2007ish, but also due to the rising popularity of DKIM and SPF.

I see the abuse and noise out there on twitter and in comments sections, etc. I don't see it with email. I know people with big public personalities do get hate email and whatnot, but I thought for the average person, email was kind of a solved problem.


That is indeed due to spam filters. Look at what a typical MTA gets before the filter kicks in.

But to your point, maybe the same mitigation technique can be applied.


I was under the impression that the intention was that Twitter would be the largest message bus in the world, but management decided it would become a media platform instead. Clearly as a media platform it has failed and now it's neither.


Twitter has restrictions that make perfect sense given its use, but would be really strange decisions for a protocol: e.g. the 140-character limit.


The 140 character limit is so that Tweets can fit in a 160 character SMS, with space left over for a username. SMS was originally a popular way to use Twitter. (And I think you still can?)


SMS is actually 140 octets, but the GSM 7-bit encoding that's typically used means you can get 160 characters. If you use characters outside that set, it switches to 16-bit UCS-2, so you are limited to 70 characters.

In reality you can send messages longer than this and they will be split into multiple messages over the wire - however in the US where you had (have?) to pay to receive messages it meant you would be charged for each individual message.

TL;DR: These limits may have made sense for the MVP, but as soon as most people moved to IP clients they were obsolete.
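The arithmetic above can be sketched as a segment calculator. This is a simplification: plain ASCII stands in for the GSM 7-bit default alphabet (the real set differs slightly, e.g. it includes some accented characters), and concatenated messages lose 6 octets per part to the UDH header, leaving 153 septets (or 67 UCS-2 characters) per part.

```python
def sms_segments(text: str) -> int:
    """Rough count of SMS parts needed, using ASCII as a GSM 7-bit proxy."""
    if text.isascii():
        per_msg, per_part = 160, 153  # 140 octets at 7 bits/char; 153 with UDH
    else:
        per_msg, per_part = 70, 67    # 140 octets at 16 bits/char (UCS-2)
    if len(text) <= per_msg:
        return 1
    return -(-len(text) // per_part)  # ceiling division

print(sms_segments("x" * 140))  # 1 -> a 140-char tweet fits, with room for a username
print(sms_segments("x" * 161))  # 2 -> one char over the limit doubles the cost
```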


"SMS was originally a popular way to use Twitter."

I use SMS->tweet all the time. There is also a set of commands you can use over SMS to talk to twitter. [0] Doing so makes more sense (to me) as SMS is a reliable protocol over mobile networks.

[0] https://support.twitter.com/articles/14020


Well, one of the big benefits of SMS twitter was you could subscribe with no account. We used this for our systems notification twitter account to send out status updates during downtime or any time when we were pretty sure our user base couldn't access our status page through normal means.

Mildly annoying was at the time there wasn't a way to see how many SMS subscribers you had. Don't know if that has changed or not, but it left us constantly wondering how many we had outside of our IRL headcount.


DNS was limited to 512 bytes for a long time. Not necessarily a strange decision at all.


It's curious but IRC servers also seem to have that limit (for a total message payload size).

I can't seem to track down a /reason/ for this common limit. Systems were a lot smaller back in the day, but 512 is fairly easy to hit and I'd honestly expect something in the range of 1-8 KB to be the actual limit.


It's in the spec. From RFC 791, "Internet Protocol version 4", page 13 (https://tools.ietf.org/html/rfc791):

    Total Length is the length of the datagram, measured in octets,
    including internet header and data.  This field allows the length of
    a datagram to be up to 65,535 octets.  Such long datagrams are
    impractical for most hosts and networks.  All hosts must be prepared
    to accept datagrams of up to 576 octets (whether they arrive whole
    or in fragments).  It is recommended that hosts only send datagrams
    larger than 576 octets if they have assurance that the destination
    is prepared to accept the larger datagrams.

    The number 576 is selected to allow a reasonable sized data block to
    be transmitted in addition to the required header information.  For
    example, this size allows a data block of 512 octets plus 64 header
    octets to fit in a datagram.  The maximal internet header is 60
    octets, and a typical internet header is 20 octets, allowing a
    margin for headers of higher level protocols.
Note: That was published in 1981, when internetwork speeds were likely around 1 Mbps or lower.


well, DNS now supports EDNS, which if specified, allows for packets of up to max UDP packets sizes (though in practice this isn't larger than 4096). The larger a UDP packet, the more likely there will be fragmentation at each hop, thus increasing the risk of losing the packet in transit. To reduce this, staying under the MTU of the network is desirable, 1400-1500 bytes for most people.

Though, b/c of jerks DDOSing systems, and reflection/amplification attacks, some DNS servers are requiring TCP for any packets larger than 512.


Keep in mind that operating Twitter, even if you eliminated all the stuff that's there to support advertisers, requires (bare minimum) roughly 2K servers just to handle the runtime request path. Monitoring those minimally requires another ~500 servers to ingest and aggregate all the log data and metrics. This wouldn't support any analytics platform, wouldn't allow for spam identification (because you wouldn't be able to build any spam models), so if you care about that stuff you need to add another 5K servers or so for the Hadoop ecosystem.

Essentially you can't run an advertising-less Twitter under roughly 10K servers (order of magnitude accurate -- people will want to quibble over these numbers but they won't be able to push it below 5K). I may well be forgetting something important that pushes it higher!

End result: there's no way to monetize Twitter at the scale it operates at without decentralizing it. Many of you will go "yeah, of course -- it shouldn't be a centrally controlled system in the first place." That's a fine sentiment, but many problems you can solve in a straightforward (not easy, but doable) way in a centralized system immediately become much, much harder, and now you are asking ISPs and individual users to operate these resources for free.

Take spam as one example. The main "solution" to spam we use these days is to all use a system that sees enough email to power machine models that identify and filter spam out before we have to deal with it. IOW, we use centralized systems. This gets paid for with advertising!

You can fractally reinvent the system or you can just have Twitter.


Blaine Cook used to talk this way. Then he "left" Twitter; rumor has it that it was (among other reasons, less interesting but dreadfully predictable) because the management didn't want to try to make Twitter a hybrid of a human-to-human and machine-to-machine network.

Blaine even gave a really interesting presentation I remember watching on the subject, about how very challenging such a backplane is compared to a more simple human-human messaging network.


I remember a past boss of mine said one time "it's hard to monetize protocols".

It's really the only thing he said I thought was smart, so it stuck with me.


So where is the open source Twitter alternative?


Mastodon [1][2]

[1]: https://mastodon.social [2]: https://github.com/Gargron/mastodon

Disclaimer: I'm the developer


There are a lot of them.

e.g. https://www.gnu.org/software/social/


The biggest problem with creating a free software social service is that you can't just apply the old method that GNU has used historically: Just mirror the interface and you're done.

That method works fine for many different tools. But with social apps you need to be able to federate, which means you need to have a proprietary vendor who wants to help, which isn't going to happen.

I'm hoping Matrix will provide the bridging required to be able to send notifications from Diaspora to Twitter (for example). It will always be a hack though. :/


I think there's also a whole lot of other issues. The technical part is one thing, but people's feelings are a lot harder to deal with.

We hackers think that everything is so sexy with P2P, federation and decentralization in general, but I don't think 'normal' people share that sentiment.

People love brands, they're surrounded by them, and they feel loyalty towards them. If there's no company behind something, it just won't feel right. You won't feel that push towards a thing. The reason why people use Snapchat isn't just because it's a good tool, it's also because it's cool.

I just don't see how something like GNU Social will ever become cool, if there isn't some timely and powerful brand to push it.


Good points... tech is no longer enough... we have to have a brand as well as an attraction point; that is, some "place" where people think "the party" is happening (alternatively not a party but, for example, where stuff or news happens). I think I've come to the point where - right now - we don't need everyone to adopt decentralized platforms like GNU Social. But I do feel that if there is at least a non-trivial percentage of users who can choose not to be limited to interaction by way of silos, then that's a pretty good start. And who knows, maybe in the future, when there is enough to attract the casual users (and the tech is easier to set up and use), more folks will slip over to something decentralized, while still being able to interact with their communities.


Yes, I totally agree with this. We need to improve the tech, to the point of it being accessible and as good as the mainstream offerings.

The thing is also that we don't have any clear incentives to use P2P services right now, because the silos aren't posing any clear threats to us as individuals. If we see something like a major data breach, or a "Snowden for social networks" that changes the way we relate to these behemoths, we might just see users getting ready to give them up.


GNU Social is the seventh level of usability hell.


Proprietary services don't even federate with each other so why hold free software products to a higher standard? The problem isn't that they can't (often the free tools can federate with each other, which is more than the proprietary tools can do.) It's that, as you mentioned, they copy something existing and notable, so practically by definition they are already too late to be the popular thing everybody uses.


One of the things currently regarded as a serious problem for Twitter is abuse. Federation doesn't make dealing with this easier, it makes it harder - look at what happened to USENET.


IRC is a protocol. Slack is where the money is at.

(Don't become a software architecture astronaut. IRC is also a terrible protocol. It was still successful.)


If you feel that it could improve, contribute to the IRCv3 working group! http://ircv3.net/


Why is IRCv3 still relegating chat history to "IRC bouncers" instead of making it a feature of the protocol?

Chat history makes chat more usable, and it makes it usable at all on intermittent connections.
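
For what it's worth, what bouncers do today (and what a server-side history feature would standardize) isn't conceptually complicated. Here's a minimal sketch of a bounded per-channel backlog with replay; all names are illustrative, not taken from any real implementation:

```python
from collections import deque

class ChannelHistory:
    def __init__(self, maxlen=500):
        # A bounded deque caps memory per channel, which is how a server
        # could offer history without unbounded storage overhead.
        self.backlog = deque(maxlen=maxlen)

    def record(self, sender, text):
        self.backlog.append((sender, text))

    def replay(self, limit=50):
        # Return up to `limit` of the most recent messages, oldest first.
        return list(self.backlog)[-limit:]

hist = ChannelHistory(maxlen=3)
for i in range(5):
    hist.record("alice", f"msg {i}")
print(hist.replay())  # only the last 3 messages survive
```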


Why would they include that? (Seriously curious)

As far as I can tell it would add no benefit, only overhead to the protocol.


My opinion is because it's the vital feature, after sending and receiving messages itself, that IRC needs.

It's like asking why the HTTP protocol would let users send data. Receiving is enough.


Regulation. Having a full record of important conversations to fall back on for legal reasons is a big part of both corporate and governmental policies. Clinton's deleted emails are an example of why ephemeral messages are not taken lightly.


Are you okay with missing messages, or do you only ever communicate with people over an always-on wired connection?


IMO that's the magic of IRC. Channels I care about, I bounce; channels I don't care about, I don't. If messages had to wait for me, that would mean so much overhead for the protocol.

I like IRC because it's simple. I've built an IRC server out of boredom, and a bouncer because the one I used was missing a feature I wanted. Please, nobody take that simplicity away :/


I just thought about the server implementation. This is simply not doable. IRC servers are meant to serve thousands of users; caching messages would be awful.


What's the timeline for getting IRCv3 ratified?


Ratified with/by who?


Perhaps ratified is the wrong word. When is v3 going to be solidified where we'll see production implementation of the spec/new features?


As far as I know, v3 features are already in use on some servers.

e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=687798

There's also an IRCv3 section on blog.irccloud.com but I can't access it from here.


Whatever relevant working group would be responsible for standard ratification within the IETF, much like RFCs 1459 and 2812?


I'm sorry, I was only able to read about a third of the way through. Around that point, my screen got taken over by something suggesting I subscribe to something. The experience was so jarring that I just stopped.

Please, don't do that! Besides, I'm not going to make a subscribe decision until I've read the whole thing.


There was OStatus:

https://en.wikipedia.org/wiki/OStatus https://www.w3.org/community/ostatus/

Never went anywhere though, AFAICT


Was? GNU Social is a viable and actively developed and maintained OStatus implementation:

https://git.gnu.io/gnu/gnu-social/commits/nightly


We've even seen botnets in the wild, using Twitter for their command & control.

http://www.welivesecurity.com/2016/08/24/first-twitter-contr...


Whenever I go to twitter, I think "I wish there were MORE robots and MORE automatically sent spam".

Also tweets from my fridge.


Twitter is infrastructure, not a protocol. Anybody can copy Twitter, but it's a bit more expensive to actually run it at scale. It took VC money, and a lot of it, which expects a lot of returns. There is already a "Twitter protocol": it's called RSS.


The basic problem for Twitter was that they opened up their API for third parties before they had any clue about how to monetize the service. This then shifted the focus from SMS to apps and the web virtually overnight.


It was, they called it IRC.


As far as I can tell, the biggest users of IRC in this fashion have been malware and bitcoin (I think bitcoin daemon finally removed this method of peer discovery.)


Well, venture capitalists typically do not invest in the hopes of building a free, open protocol for all to use. That's... not really what they're trying to accomplish.


Thank the gods for small blessings.



