Hacker News new | past | comments | ask | show | jobs | submit login
Twitter cut off the ability to read a tweet by fetching its URL with a HTTP GET (twitter.com/zarfeblong)
326 points by LyalinDotCom on Dec 18, 2020 | hide | past | favorite | 189 comments

> We've detected that JavaScript is disabled in this browser. Please enable JavaScript or switch to a supported browser to continue using twitter.com. You can see a list of supported browsers in our Help Center.

So long then, Twitter.

Consider for a minute the number of services/programs that just got screwed, the number of people on slow internet connections they just cut off, the number of people on low-end mobile devices that just lost access to local community discussions - the list goes on.

Was it really so much to ask for a basic HTML interface to display text and images? In terms of resources, surely this is quite easy to support? I've been using mbasic.facebook.com without JS for years, for example. Sure, some features are screwed, but for the most part it works.

I've been using nitter.net to read tweets without javascript.

I just append everything after the twitter.com domain name to https://nitter.net to read the tweet in question.
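The rewrite is just a host swap, so it's easy to script. A minimal sketch with standard `sed` (nitter.net is one instance among many; the function name is my own):

```shell
# Rewrite a twitter.com URL to its Nitter equivalent, keeping the path intact.
nitterize() {
  printf '%s\n' "$1" | sed -E 's#^https?://(www\.|mobile\.)?twitter\.com#https://nitter.net#'
}

nitterize "https://twitter.com/zarfeblong/status/1339742840142872577"
# -> https://nitter.net/zarfeblong/status/1339742840142872577
```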

For example, the tweet this HN post links to could be read without javascript here:


You can use this [0] browser plugin to automatically redirect to Nitter. You can also choose which instance to redirect to, or to choose an instance randomly every time (for load balancing). Also supports:

YouTube -> Invidio.us

Instagram -> Bibliogram

Google Maps -> OpenStreetMap

reddit.com -> old.reddit.com or i.reddit.com

[0]: https://addons.mozilla.org/en-GB/firefox/addon/privacy-redir...

Open Street Map is not a replacement for Google Maps. Not even close. It's a different product with overlapping features. As a detailed map, it's vastly superior. As a way to find locations and information about them, it's not nearly as good.

For reddit, there's also teddit: https://teddit.net/

> Invidio.us

This domain is dead; I'm currently using invidious.xyz

The plugin lets you choose between 12 alternatives, or choose randomly every time.

AFAIK the extension HTTPS Everywhere can do this too, but its UI for doing this is kind of haphazard.

+1 for nitter.net. In recent years, Twitter has started to block public proxies and Tor aggressively: if an IP address is rate-limited, all access is denied. Worse, because of JavaScript, it only denies access after the script has given up, sixty seconds later. And even when I'm not blocked, the script is still slow, and if you accidentally refresh the page in the middle of an endless scroll, you have to start over. Nitter.net solves all these problems at once. I don't use Twitter, but I do maintain a list of people in tech, which I check from time to time.

You don't even have to be using Tor to get blocked like this; I get rate limited fairly regularly just following Twitter links on my phone while on mobile data. I suspect it's because my carrier is doing CGNAT and I share an IP with several other users.

Makes it annoying when someone links to Twitter, as I usually have to sit there refreshing a few times before I can actually see it.

I have no proof but I always assumed they do this to push phone users towards downloading the app.

I run into issues like this running the app on my mobile.

There are a bunch of different instances running the nitter software too, so please consider choosing another one to avoid hugging nitter.net too much. Here's the list: https://github.com/zedeus/nitter/wiki/Instances

nitter is really the way I want to consume tweets. (I'm not even on twitter, but I still have bookmarks to topics - e.g. twitter lists and users I tune into, users I used to follow when I was there. I used to follow hundreds, but those that keep me coming back number no more than 5.) twitter without an account is twitter without the noise, or the risk of getting sucked into "contributing" (e.g. riling myself up lol)

Sadly, when you submit links from nitter to HN they get automatically demoted (they won't even have a discuss link and will never see the frontpage).

I don't know the reasoning but I guess the argument could be that

- it violates original content rule (that official sources must be used).

- some engineering just went into HN to show the user-name part of twitter posts in submitted links. (I took that as a sign that HN prefers twitter links over nitter ones - though I suppose it could easily be expanded.)

- potential legal reasons (? I can't think of any but maybe HN respects the claim of twitter that the data belongs to them etc ...)

Just speculation, but the fact is there is no nitter support on HN.

I love Nitter, but I can't help but feel like it's on borrowed time. Eventually Twitter is going to decide it's a problem.

The day Nitter is killed is likely the day I stop using Twitter. Yes, I know you can host your own, but if Twitter decides to actively fight against it the best case scenario is an annoying cat and mouse game that makes the service unreliable.

My favorite part of nitter.net is the RSS feeds. Stick them in your reader of choice, refresh them with an hourly cron job, and you can keep up with all your favorite twatterers with no need for an account.
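For anyone wiring this up: a Nitter instance exposes a feed per account at /<username>/rss (the username and instance below are just the ones from this thread), so the cron side is a single config line - adjust the output path for your reader:

```
# crontab entry: refresh one feed at the top of every hour
0 * * * * curl -fsS "https://nitter.net/zarfeblong/rss" -o "$HOME/feeds/zarfeblong.xml"
```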

I also use nitter for that reason. I was under the impression that nitter screen-scrapes, so it's interesting that it's still working unchanged.

That's not fair. Someone put a lot of time and effort into that spinning wheel animation. By removing the need for JS the tweet now loads too quickly for it to be seen!

By Jove, is this interface snappier and more convenient.

I'm using the privacy redirect extension which does this for all twitter links. It has equivalents for Instagram, maps and reddit too

Does somebody know if something like this exists for Safari on macOS? Or is that still not possible with the new web extension framework?

TIL! This is great. Thanks for posting.

BTW, Messenger over mbasic.facebook.com stopped working for me last week and throws errors. Am I the only one? The only way to use FB messages on the web for me now is the bloated desktop experience (which takes 10s+ to load).

You are not the only one. I've used other browsers on my Android to use Facebook over m.facebook.com, but about a week ago Facebook Messenger messages were disabled with an error message, and the alternative, mbasic.facebook.com, was crippled for me as well. So anyone messaging me has to wait until I'm at home. Their ordinary web experience since the latest(?) change of UI is horrible; notifications take forever to render, forcing a full page refresh to get things working again.

https://mbasic.facebook.com/messages/ works for me in Chrome on Android 11. If it stops working, I am done with FB messaging forever.

Thanks for the data point. Android 11, Chrome 87, "content not found" for me at the same URL.

I'm connecting from France, I'm wondering if it's maybe EU-only thing related to FB's latest rollout of "we disable some stuff for EU users to be GDPR compliant"...

I have been using mbasic on iOS for a long time, I will never install the app. Unfortunately it stopped working about a week ago for me too.

There's Messenger Lite, I think meant for low internet connection speeds and free of a lot of bloat. Hopefully the antitrust lawsuit will keep Zuck from getting too egregious and getting rid of that too, but who knows.

mbasic not working here either. I suspect it's connected to the "Some features are not available" shown on the desktop-browser messages UI. Something to do with complying with European legislation.

We all saw this coming. Even my fairly normal friends tell each other to leave Facebook at this point, so thankfully I'm not impacted.

Same happens to me too

mbasic.facebook.com still works perfectly fine for me.

The website continues to work, but when you go to "Messages" tab, it shows an error.

Not for me? Seems to work as per normal in my case. I can't tell you exactly why that is though.

Are you based in EU or outside? Perhaps it's an EU-only bug/feature after latest changes to be compliant with EU privacy rules.

I'm currently based outside the EU and US, so perhaps it is as you say. I'm not entirely sure why the privacy rules would change things so drastically on an HTML-based page, though...

> Was it really so much to ask for a basic HTML interface to display text and images?

Yes; it allows automated mass aggregation of hard, accurate, and current information, which is non-ideal not only from a profit perspective but also for keeping the community active and engaged.

Guess it's time for me to uninstall the app. That is entirely detestable behavior in my eyes. If you want me to use your products, don't cripple them.

Can you use chromium cli?


I don’t think this would really count as a CLI; you still load up a full browser.

I remember there was a TUI Python client for a number of such sites, like reddit and so on.

Twitter doesn't make any money off those people.

You can HTTP GET tweets again by changing your useragent to Googlebot.

curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://google.com/bot.html)" "https://twitter.com/zarfeblong/status/1339742840142872577"

Peak SEO when users are faced with more friction than Googlebots and crawlers.

In 2020, the only way for netizens to get what they naturally deserve is by hacking.

The original web browser, NCSA Mosaic, encouraged users to change their User-Agent string, so-called "spoofing" or "masquerading".


The User-Agent header is not mandatory and was never intended to be used by tech companies for denying access or fingerprinting. It was supposed to be used, at the user's discretion, to help with interoperability problems. RFC7231 specifically refers to user-agent masquerading by the user as a useful practice. It explicitly discourages using this header as a means of supposed user identification, e.g., fingerprinting.


Setting your user agent would only be considered hacking by the same people who think the Internet is a series of pipes. The browsers themselves copy each other's user agents for interoperability, so it's far past the point that changing it to look like another agent would be considered devious.

Yeah, but from the POV of whoever runs the network, circumventing such blocks is "abuse"

How do you “naturally deserve” to access the contents of the Twitter website?

Since it became a dissemination service for public officials. The moment it became illegal for the US President to block people on Twitter, it should have become illegal for Twitter to restrict access to information to the public, for the same reason.

I agree, that's just cruel. No one deserves being subjected to Twitter.

Given that this trick is spreading across several sites now, it won't last long. Google could, for example, generate secret unique user agents for the biggest players; those players would then only allow requests from that secret unique UA.

I think Google shares IP range blocks so you could implement a check like "if(isGooglebot(user_agent) && isGooglebotIp(ip_addr))" in your system.

Edit: ah no, they stopped https://developers.google.com/search/docs/advanced/crawling/.... I don't think 2 DNS lookups are acceptable overhead to block a GET request, but it can be done out of band: the isGooglebotIp function can fire off a Redis query, and if nothing is found, put the IP into a DNS-verification queue. A few requests later, the bot gets banned thanks to the new record in Redis.
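The two lookups described above can be sketched in shell, assuming `dig` is available (the hostname suffixes are the ones Google documents for verifying its crawlers; function names are my own):

```shell
# Does a reverse-DNS hostname belong to Google's crawler domains?
googlebot_host() {
  case "$1" in
    *.googlebot.com|*.google.com) return 0 ;;
    *) return 1 ;;
  esac
}

# Full check, sketched: PTR lookup on the client IP, suffix check,
# then a forward lookup to confirm the name maps back to the same IP.
is_googlebot_ip() {
  name=$(dig +short -x "$1" | sed 's/\.$//')
  googlebot_host "$name" || return 1
  [ "$(dig +short "$name" | head -n1)" = "$1" ]
}
```

The suffix check alone is not enough - anyone can publish a PTR record pointing at googlebot.com - which is why the forward-confirm step matters.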

No need to use a Googlebot UA string. Others will work. Such as

Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0

No need for a secret unique UA; just PTR the IP and check the host.

This method is already used by stackoverflow to hide the sitemap.


Things that would be inadvisable while most of these companies are actively under suit for antitrust issues.

It already happens; try to create your own search engine and index Amazon. Captchas everywhere - their robots.txt is just for show.

This trick is almost older than the internet, so if anyone cared they would have blocked it already. Sending google.com as the Referer is another variant. Before the stackoverflow days this was very useful for getting past the paywall on expertsexchange, for example.

I was under the impression that serving other content to google would greatly punish your pagerank and even pull you off the search results completely.

So much for google embracing the open web.

When Twitter announced they were going to stop supporting browsers not on their list of supported browsers, I figured their attempts to block would be something more than just checking the value of the user-agent header.

They should just announce that users must use a particular user-agent header value and provide a list of approved values. If no one else compiles a list of acceptable user-agent header values for Twitter, I might have to do it.

Every user should just use the same user-agent header value. That would negate any utility of the user-agent header.

It’s been received wisdom until now that Google penalizes websites which behave differently when scraped by the Googlebot. Is that no longer the case?

Pinterest has proven, by spamming SERPs for years, that if you're big enough Google will turn a blind eye.

That applies to more than just google

If you are big enough, there are separate rules for you (or no rules)

They will serve the same content to users with JS enabled and to legit Googlebots, while blocking clients without JS and other bots. I don't think it violates Google's rules, but it is of course of questionable decency.

And Google can verify by crawling the site with and without JavaScript.

Also, google has a commercial relationship with them: https://www.convinceandconvert.com/social-media-research/twi...

You just need the word "Bot" in your user agent. It's required for fetching Twitter cards for link previews, too. This changed earlier this year.

Speaking of Twitter UI, am I the only one having to hard refresh every other Twitter page to get rid of the “This is not available for you” message?

You need to clear your service workers for twitter


- on chrom(e|ium) based browsers: chrome://serviceworker-internals/

- on firefox: about:debugging and scroll to service workers section

1. Why do I have to do that; what is the underlying bug?

2. How comes such a bug is still in production for months?

They deployed a bugged service worker long ago (beginning of 2020 or something), but service workers can be hard to replace because they tamper with requests and cache. "Technically" it's been fixed in production for a while, but the fix doesn't propagate, so many users still have the bug. It still seems very strange to me that they allowed this situation to happen for such a huge (in traffic) website.

Not sure how exactly the service worker was broken (I didn't have the curiosity to check in devtools, and now it's gone), but I had this exact same issue and saw other people having it right here on HN, and this was the way to solve it too.

I see. Thanks!

Or use incognito for Twitter, and you'll get fresh service workers every time.

Not possible in Safari I guess?


Now I'm trying to figure out how to do it on Mobile Safari, where this bug always hits me.

Go to settings/safari/advanced/website data. At least clearing there worked for me.

Thanks, if that works that would be amazing. I'm most often getting it inside in-app browsers, e.g. when opening a Twitter link in Tweetbot, Apollo or similar apps. Not sure if they share the same central settings/service workers.

I'm afraid that does not solve this particular problem; it just happened again on the very first try. I still think this is intentional, or at least happily accepted on Twitter's part, with the happy side effect of making the experience as shitty as possible when you're not using their app.

Should be possible, I just don't know how

about:serviceworkers is also a way to view them on firefox

No; I, too, always use uMatrix's reload-with-cache-bypass to circumvent the block.

I also got blocked a few hours after registering and had to give my phone number to get unblocked. I did nothing suspicious: filled out my profile in a normal way with nothing political and followed a few researchers but nothing en masse. I wrote to support to ask why this is, but no reply of course.

I still gave it, because I'm a little guy and want the benefits of being tapped into my professional community: knowing what's up, being visible myself, etc. Such individual decisions add up to this.

We users have a hard time coordinating as we have individual incentives while Twitter is one entity and can do whatever it decides.

I had the same thing happen: I created an account and it was locked the same day, only to be unlocked when I provided a phone number. I wrote them an email telling them I refuse to give them my number for privacy reasons, and they unlocked my account the next day.

Which email address did you use to contact them?

I've had this exact same problem about half a dozen times, and every time I have difficulty getting in contact with a real person, and not being led through an automated system that just ends up asking me for a phone number, or getting a reply hours later about how that inbox isn't monitored.

Sorry, it's been a while, I don't quite remember, but I think I either replied to the mail they sent informing me of the lock or it was a support contact address in that email. And they never replied, I just logged into my account the next day and the lock was gone. I wonder if a real person ever read my mail or if I simply solved some sort of CAPTCHA by writing that mail.

I'm pretty sure asking for the phone number is standard

But then why not ask for it when registering?

Well, of course: to decrease friction and avoid turning people away at the entrance. You get smacked in the face with the request for your phone number once you've followed some interesting people, interacted with the site a bit, and formed some attachment. Then you are more likely to enter your number to keep using it.

There's a psychological effect of feeling more strongly about retaining a given amount of already owned value compared to gaining an equivalent amount of stuff you don't have yet.

You wouldn't pay with your phone number to get a Twitter account, but you would pay with it to keep an account you already have and started using.

Dark patterns.

I'll tell them mine if they tell me theirs.

Twitter once inflamed my enthusiasm for API programming. Today I can barely make an account without getting blocked. It's sad how it has changed.

This is a recurring pattern:

Startup = Good Open API

Middle = Start Closing API for "Quality, Security and Privacy"

Large = Block all access to API unless you are another Big corporation...

Every platform has gone through this life cycle, it is predictable at this point

Edit: And don't forget the new excuse - "bots are everywhere" - the go-to red herring used to justify anti-user, anti-consumer actions in 2020.

They have a bot issue that's currently out of control.

Thought experiment:

How hard would it be to write a centralized 1-to-many message pipe that replicates Twitters core functionality?

How hard would it be to run such a thing? The moral and legal need to censor content would probably be a bigger task than the technical implementation.

Mastodon (and other fediverse FLOSS) already have millions of active users across thousands of providers. https://the-federation.info/mastodon

It's easy to set up your presence and start small, e.g. with a web app which syncs your Mastodon and Twitter accounts. https://moa.party/

Is it really "millions of active users"? (I don't doubt it's possible, but the incentives are very much aligned with "let's count this bot activity as a real user." It doesn't even need to be malicious -- I fell into that trap myself.)

EDIT: Odd, I'd like to report a bug in that site. https://imgur.com/EsGyvnT It doesn't seem to load. Or rather, I saw it briefly flash the statistics, once, before it went back to loading. The 0.3 seconds I saw the stats looked impressive though. :)

The only thing in dev console is a warning that seems unrelated: "<ApolloProvider>.provide() is deprecated. Use the 'apolloProvider' option instead with the provider object directly."

Ah, the root cause is that the graphql fetch takes ~32sec: https://imgur.com/q769ozR So, never mind! Just a bit slow.


Anyway, it claims 400,000 active users in the last month. What counts as an active Mastodon user? It also claims 400 million Mastodon posts (total?) which I suppose are tweets.

Interesting... Twitter has about ~17M tweets per day, iirc. If this growth is legit – a big if – then Mastodon may be gaining more momentum than it seemed.

Actually, twitter gets 431M tweets/day: https://twitter.com/jasonbaumgartne/status/13320033501922713...

Hmm. I wonder how much exponential growth would be required for Mastodon to catch up to twitter...

Does it matter whether the aggregate Mastodon user count ever catches up to Twitter? Is Twitter today actually better for its users than the Twitter of yesteryear that only had ~17 million tweets per day?

I have met many people on the fediverse, and at least two friends of mine have moved thousands of miles due to relationships and opportunities that likely would not have occurred if the fediverse (Mastodon, Pleroma, etc) and its variety of platforms and userbase did not congregate in this manner.

One friend even bummed a plane ride on a tiny general aviation plane by serendipitous happenstance through the fediverse to make the multi-state move out of an abusive living situation.

Unbounded growth is not always good, and many fediverse instances have limited or closed new account registrations to ensure their community doesn't become the next Voat, FreeSpeechExtremist (which I was banned from for fairly benign speech) or /r/TheDonald.

I agree in spirit, but a community that isn’t growing is probably dead.

Growth Hacking is a thing. Whether that counts as real growth or not I don’t know. But metrics can become suspect when they become a target.

There are quite a number of people using Mastodon, and it is quite nice, but it does feel a bit echoey all the same. At the same time, there is a lot of garbage on twitter that you really couldn't call "active users" and keep a straight face. As usual, the truth is somewhere in the middle...

In my experience bots are not that common, but it's very common for the same person to have multiple accounts.

> It's easy to set up your presence and start small, e.g. with a web app which syncs your Mastodon and Twitter accounts. https://moa.party/

True, but some of the more avidly pro-Mastodon and anti-Twitter regulars take it upon themselves to be gatekeepers (some nice, some nasty) if you try the dual-platform method. Just noting this for the sake of those who haven't yet done so. Feel free to give it a go, but don't be surprised if you get smacked in the face fairly early in the experience.

> moral and legal needs to censor content

I for one would love for there to be a decentralized system that by design makes it impossible to censor any content. If the content were stored in a decentralized, encrypted manner, it would also be impossible to legally go after, as it would not even be easy to know where it was stored at all.

Think of, perhaps, a network that retrieves its content via an onion-routing-esque mechanism: information, once published, cannot feasibly be altered; its origins and locations are computationally infeasible to trace; and since it's stored in an encrypted fashion, the parts of the network that store it do not even know what they are storing.

This exists; it's called Freenet: https://freenetproject.org

Not a particularly new concept, Freenet has been around for over 20 years now. Tor and I2P have more traction in this segment.

Sounds like an interesting thing; let's see whether there's some interesting content on it amidst all the child porn.

And sure enough, I already found a website that says: “Please note that the Freenet network (much like Tor) attracts pedophiles and a large amount of sites contain child pornography. Some sites jokingly add a disclaimer saying This site does not contain child pornography. click here to continue.”

Mhmm, hence why the Freenet proposition is a hard sell. The average person doesn't want to devote half their computer's storage space to storing encrypted data they have no insight into, especially when that data could be considered illicit in some jurisdictions when decrypted.

I suppose the problem with such ideas is that they will generally be filled with content that is illegal in at least some jurisdictions, will mostly be used by those who need them to evade the law, and the amount of legal content on them will be minimal.

So the end result is that all one finds on it is child pornography, controversial opinions, leaks of government data, and the lot.

I suppose my ideal wish was a platform such as twitter that intermixes legal and illegal content.

That is what stopped me, but it is also a check on society. If something else ends up on freenet that is not cp, it is an indicator of something having gone terribly wrong.

I didn't realize it then, but if Hunter Bidens data had been put there they would have been basically impossible to censor and it might have swayed the election.

Freenet is designed for data storage, while I2P (https://geti2p.net) provides any service on top of the anonymizing network.

While those do exist (and plenty of them), many people avoid them precisely because content moderation isn't possible or regularly done in those communities. From people who prefer not to sift through egregious amounts of bigotry/spam, to people who are at very real risk of having their physical safety or livelihoods damaged by online abuse (e.g. children), there are plenty of reasons to avoid those communities--reasons why those platforms are often ghost towns or toxic echo chambers.

Like, this isn't a "think of the children" argument that those communities shouldn't be allowed to exist. Of course they should. But don't be surprised when people prefer more centralized and "censored" (policed/moderated) communities. For the vast majority of people, that is a feature, not a bug--and they tend to realize that and boomerang back to Twitter etc. when they try out decentralized/anticensorship-oriented alternatives.

Usually when a man speaks out against censorship, he tends to still want to censor whatever he finds abhorrent himself.

A rather big difference is also the relatively lower level of activity on those alternatives.

In the recent past, I've seen Twitter harshly ban accounts I randomly make. Sometimes it's weeks before they ask for a phone number, and if you don't have a real number (IIRC they check against VoIP), they won't unrestrict you. If you share an IP with other accounts, I think they might also auto-restrict you.

They banned my account for unnamed terms of use violations after my first post saying “well, I finally made a Twitter account.” Their support replied with “queries will not be responded to.” It was for the best

When they locked my freshly created account and didn't let me use it unless I gave them my phone number I mailed them and told them I would not do so for privacy reasons. My account was unlocked the next day.

Several years ago I tried to make a Twitter account. It got banned before I posted anything. I hadn't even subscribed to any users. I was going through the settings, and then *bam*, banned for being a bot or some such nonsense. Nothing of value was lost.

Twitter aggressively tries to get phone numbers from new users - providing it is the main way to get unbanned in cases like this.

Unless you're my mother, or maybe if you're part of my immediate family, you're not getting my phone number.

American giant corporation with questionable ethics: hard no.

Yeah, I had to get a separate number just to please twitter's automation.

Their approach is somewhat understandable - stopping bots is a hard problem, and a phone number in theory represents a material cost and a hurdle to someone who wants to rapidly register thousands of new accounts.

Don't you have a burner phone/sim for these types of requests?

This is brought up every time this subject is discussed. Is a burner phone a normal thing that regular people have?

But no, I don't have a burner phone. I'm not going to buy a phone and a SIM just as some kind of non-solution to tech companies' hard-on for invading people's privacy.

I have an old phone (who doesn't?) which I leave plugged in on my desk, with a £5 PAYG SIM card.

Many alternatives exist but none reach critical mass because twitter has buy in from the major media outlets and the journalists they employ. Unlike other social networks that grew spontaneously, Twitter was effectively crowned.

Every twitter clone that tries to compete without first having the approval of the major media outlets will either be publicly ignored, belittled, or outright attacked.

This is a very conspiratorial vision of things.

Really, it's just that social networks work by having a lot of people, and lots of people are on Twitter because other people are on Twitter (I think there's an audience thing as well).

Though, to your point, I think Twitter has outsized influence for the same reason Google Reader did: loud nerds and journalists were always using it. Though now the politicians using it probably count for a lot too

I don’t think it’s a conspiracy theory to acknowledge that the exposure twitter got from media outlets universally promoting their twitter handles next to their core brands had a significant impact on twitter’s early growth, as well as a reinforcing effect to this day. Not every small social network gets that type of treatment.

Facebook did not grow this way, even though their relationship with the media is nearly indistinguishable from twitter’s today.

> This is a very conspiratorial vision of things.

History is full of conspiracies.

> Really, it's just that social networks work by having a lot of people, and lots of people are on Twitter because other people are on Twitter (I think there's an audience thing as well).

The network effect is real, but it is not the only effect. Twitter and other tech giants now think that their role is to protect the world from information they deem wrong. As well, prominent people in media and politics are urging them to take on that role more expansively. Naturally, therefore, they will oppose competitors whose distinguishing feature is to have less or no censorship.

The media establishment does not look kindly upon those who seek to deconstruct it.

> The network effect is real, but it is not the only effect. Twitter and other tech giants now think that their role is to protect the world from information they deem wrong

Do you have any examples of anything "they" "deemed wrong" which wasn't factually wrong? And not "alternative facts" bullshit, regular, proven facts.

If you care about the truth, it's your responsibility to find it, not to demand that I feed it to you. You are on the Internet, after all, and since DNS has yet to be censored, you can still seek alternative sources.

Now, you place the burden of proof on me, assuming that Twitter is in the right by censoring information and demanding that I prove that their censorship is wrong. But why don't you place the burden of proof on Twitter? Why do you apologize for censorship?

In your other comments, you say that "Only China can stop China," and you say that the U.S. mustn't respond to cyber attacks by China, Russia, and Iran, or those nations might escalate, so we must look the other way when these nations infiltrate and attack us.

One wonders what you're after.

When it comes to publishing, it doesn't matter where you publish, only that you can put your words out there. If someone has a web browser they can read what you say. The only thing that suffers is engagement, which you'll have a hard time getting regardless what website you use to publish if you're just starting out. But engagement is either meaningful or noise, and meaningful engagement comes from thoughtful words, not the size of the website.

So don't worry about critical mass. Pick where you want to publish based on what is important to you.

I almost exclusively use FOSS, federated networks to publish my thoughts, the only exception being this site. I engage here pretty often because I like the quality of the discussion and the rule of only contributing if you have something constructive to contribute, but ideally this website wouldn't be a silo.

I made no judgments on the value or importance of critical mass.

Critical mass seems important for exposure, and personally I don’t think exposure is important unless you are seeking to maximize something that comes from it, usually profit.

My comment was mainly to bring attention to what I consider the primary causal factor behind twitter’s critical mass.

Twitter got popular because early adopters used it, then celebrities, then Oprah and then media journalists and then Arab Spring and then news journalists and then early adopters left.

Do you have more information and / or evidence of this? I was an early Twitter user (2008) and I don’t recall things playing out this way. I’m not suggesting you’re wrong, and it’s entirely possible I was (am?) being naive about how things played out and would love to learn more.

My recollection matches yours. Twitter didn’t have an incumbent to overcome (insofar as it was, and largely continues to be, complementary to the other social networks). The early days of Twitter were a wasteland as far as old media engagement is concerned. It was dominated by TechCrunch, Scoble, etc.

And then in 2008 it ramped up for a multitude of reasons. A politician nobody had heard of somehow managed to mobilise a lot of support via a tech platform many hadn’t heard of, and get to a point where it looked like he had a legitimate shot at becoming US President. By early 2009 it was popular enough that Facebook updated their timeline to try and combat it, everyone was talking about their growth, and Ashton Kutcher was goading CNN into a competition to be the first account on the platform to have 1M followers (he was the first btw).

Old media was late to the game here and had to play catch up to stay relevant. I suspect that was also the tipping point though which changed journalism forever into this clickbait and eyeball driven economy. If anything Twitter inadvertently controlled old media, not the other way.

Can you name some alternatives? I am not aware of any.

Please note that I explicitly said "centralized". Because decentralized solutions like Mastodon have not resulted in viable alternatives for the average user so far.

ActivityPub and the Fediverse of which Mastodon is part have absolutely resulted in viable alternatives. I use the network for probably 90% of my online social interaction. It is far superior to any alternative I've found so far, mainly because I can publish freely, cannot be banned from the network, nobody has to have an account to see what I say, and it is actually a network, not a website calling itself a network. I can think of no way to reach millions of people that is less restrictive than this network.

What is the difference? Centralization applies only to moderation policy.

Minds, gab, and Parler come to mind. There are certainly others.

Parler is a no go for me partly because I don't like ideological monoculture, but mainly because users have to have an account to even see what you say. IMO this makes it worse than Twitter, you shouldn't publish your thoughts somewhere with conditions on who can see what you say.

I've never used Minds, and I dabbled with Gab when it first came online only to find that it is as noisy and worthless a place to engage as Twitter is, possibly more so.

> users have to have an account to even see what you say

fwiw, I thought the same until recently but discovered they are visible with the following format:



Looking at Minds, I already see a big flaw: All posts are under the /newsfeed/ namespace.

So instead of

your posts are here:

I don't think users want to get "robbed" of their posts like this.

There also is no embed functionality. For Twitter type shoutouts it is absolutely vital that they can be embedded around the web.

Fun fact: the user part of a Twitter URL isn’t important. You can replace it with any other user.

Username in the status URL and “embeddability” are good features to have, but I don’t think they are causal in determining whether or not a Twitter clone reaches critical mass. Maybe they are though. You should build an alternative!

The latter two are dominated by people who've managed to be too fascist for Twitter, which is quite a feat.

> Every twitter clone that tries to compete without first having the approval of the major media outlets will either be publicly ignored, belittled, or outright attacked.

You proved that point

Is the person you're responding to wrong? You can't just slap a conspiracy label on it as if that somehow makes what they said wrong.

Yes they are wrong. There’s no evidence that either of those platforms are dominated by fascists. The claim is as frivolous as the claim that HN is dominated by fascists.

How so?


https://twtxt.readthedocs.io/en/latest/ is an interesting way to do it.

RSS with comments can work. Mastodon, Pleroma, misskey, gnusocial all do it. They have the added benefit that a user cannot be censored from the network entirely, only from servers by their admins, and blocked by users.

You can't even get people to write their stories on their blogs and then link to Twitter; instead, somebody created a Twitter bot that will do that, but it has to be summoned in the comments.

That is a software repo. Not a public centralized service that lets users communicate with each other.

One more click leads to https://joinmastodon.org/

> ... centralized ...

That was not in the GP’s requirements.

Edit: plus: GP explicitly asked for software.

Why do you want it to be centralised? Surely a federated system like email is just as fine, if it provides a good user experience?

If. But so far, no federated system has resulted in good user experience.

Twitter tried to build one of those back in 2007.

You can go back and read tech media from the time about all the engineering challenges they ran into and the regular downtimes from trying to scale the thing to a tiny fraction of its current volume.

Turns out to be hard.

That’s because they wrote it all in ruby, which don’t get me wrong is a fine language but did have scaling issues back in the day. As it happens this was a growth experience for ruby too.

However limited their technology choice might have been from a performance standpoint, it did give them the adaptability and flexibility to provide the fun and entertaining product that got them off the ground in the first place.

It did inevitably lead to the growing pains we discuss but I doubt anybody expected it to get as big as it did.

WRT the original comment here ... the main limiting factor nowadays is just getting people to use it. The network effect. Mastodon is a thing, and people do use it but it’s a minnow relatively speaking.

The next big challenge I think will be some kind of decentralised twitter based on some kind of blockchain (by which I mean progressive content hashing and consensus) and I don’t mean federated, like mastodon. The mastodon name is ridiculous too .. I’m not going to go into why but if you know you know, and stupid as it is it does matter.

Thanks for the nitter.net pointer -- I wasn't aware of that. That will be helpful.

I know (in principle) how to use curl to send an authenticated API request via HTTP. That's authenticated; you have to log in. I already have a twitter client for that use case.

There's also unauthenticated API access (read-only public tweets only, as you'd expect). (I assume this is what nitter.net is doing.) Less tracking, but you still have to request an API key, so it is under Twitter's control and they could in theory cut it off. It's still not supporting the basic web principle of "you can GET the flippin' data".
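As a rough sketch, assuming you've registered an app and been issued a bearer token (TWEET_ID and BEARER_TOKEN below are placeholders, not real credentials), an app-auth lookup of a public tweet against the v1.1 REST API would look something like:

```shell
# Hypothetical app-auth lookup of a public tweet via the v1.1 REST API.
# TWEET_ID and BEARER_TOKEN are placeholders, not real credentials.
TWEET_ID=1339742840142872577
API_URL="https://api.twitter.com/1.1/statuses/show.json?id=${TWEET_ID}"
echo "$API_URL"
# With a real token in hand you would run:
#   curl -s -H "Authorization: Bearer $BEARER_TOKEN" "$API_URL"
```

Which only underlines the point: even this path is gated behind a key Twitter issues, so they can revoke it whenever they like.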

Anyhow, I'm glad this is getting some public attention. I'm a little surprised that I was the first one to make a fuss about it.

It's really annoying that I can't consume information from social media sites without also opting into all of their slot-machine like mechanisms. Twitter is designed to pull people in and the main website contains a lot of gamification mechanisms that are designed to keep people on the site.

For a while already I had the idea of building a "diet social media" software that would display information from e.g. Twitter, LinkedIn, Facebook and other platforms without gamified addiction mechanisms. Turns out this is impossible to do as all platforms have drastically restricted the way you can consume their data.

> For a while already I had the idea of building a "diet social media" software that would display information from e.g. Twitter, LinkedIn, Facebook and other platforms without gamified addiction mechanisms.

This already exists. It's called RSS.


The only major issue with RSS is a dearth of decent feed-reader software. I personally use newsboat[0], but elfeed[1] also looks pretty good if you're an Emacs Chad.

[0] https://newsboat.org/ [1] https://github.com/skeeto/elfeed

Disclosure: author of Bubble

Bubble [1] is a new kind of VPN that does just about exactly what you are looking for. On FB, Twitter, LinkedIn, Insta, etc, you can scroll through your feed and:

* No ads!

* No tracking! (except what you click, can't avoid this)

* Auto-block posts matching keywords you don't like (politics/etc)

* Auto-block posts that contain links to news websites (I use this on FB, it's great)

* Block specific users whether you're connected to them or not.

* Blocked users don't know you've blocked them, because the block is at the network-level, not in the social media service.

Bubble is open source [2], our commercial offering is currently in beta.

How does it work? It's basically a WireGuard VPN with a tightly-integrated mitmproxy. So it works on any device, but the social media blocking only works on the web (native apps are cert pinned, which breaks mitm). AMA.

Oh did I mention it works on HN too? I rarely need to block a user here, but when I do, it sure is nice to have that feature!

[1] https://getbubblenow.com

[2] https://github.com/getbubblenow/bubble-docs

Sorry, but this sounds a bit terrifying. All your traffic is going through Bubble's servers where it is TLS-intercepted and accessible as plaintext. Compared to a browser extension like uBlock, you don't see what is happening on Bubble's servers. You essentially need to trust a startup with all your internet traffic.

The claim that it's "open source" is a bit disingenuous after looking through your GitHub repositories. It's not like you could test Bubble locally and then decide to use your hosted service.

I think the overall use case is solid, but would be much better served with a browser extension. Also, very likely mitmproxy won't really scale that well to lots of users.

Please understand that you always run your own Bubble, it generates its own certificate, no one else can see your traffic.

Even if you launch using our paid service, it’s the same- we don’t have access to the system we launch on your behalf.

Bubble is as private as running your own WireGuard VPN, it’s just a lot easier to set up and comes with some cool features.

You can absolutely try the open source version right now: https://git.bubblev.org/bubblev/bubble

This is cool in a sense, but I don't really trust anybody enough to let them MITM all of my internet traffic through a remote server.

See my reply above. You are mitm-ing yourself.

We can’t see your traffic. We can’t login to your Bubble— but you can install an ssh key and root around all you want.

We are anti-SaaS

>VPN with a tightly-integrated mitmproxy

Are you for real?! This is basically the polar opposite of what people should be looking for, no?

Bubble is open source and self hosted. You should never trust anyone to touch your plaintext traffic but you.

> It's really annoying that I can't consume information from social media sites without also opting into all of their slot-machine like mechanisms

The content was only ever the cheese to draw us into the advertising trap. They're not in the business of giving us free content, they're in the business of selling our attention as we gorge ourselves on "free" content.

You can't even link to a tweet that is part of a thread, only to the top tweet. It takes effort to make such blunders.

So they joined the ranks of websites that can't be used without JS. Websites that basically just display text. This is breaking the web happening right in front of us.

This explains why my broken link checker bot is blowing up for every single twitter URL.

I guess this means a lot of malware/botnets that uses Twitter to communicate just broke today.

Twitter previously didn't serve up a preview with mobile.twitter.com links. I noticed it in Whatsapp, so not sure whether it's twitter's fault or Whatsapp's fault. Changing mobile.twitter.com to twitter.com enabled the tweet preview.

“We found site visitation increased 1% after removing the preview, ship it”

I mean for any network with that many users if you can get a 1% increase for any single change that's pretty impressive.

I mean Google has invented whole protocols and formats to save small percentages of traffic.

This is an issue I've noticed too... when I open a new tab (in the background) to Twitter, Safari won't execute the JavaScript until I go to the tab and hit refresh.

It sucks, because it adds a new refresh cycle just to see the tweet.

I don't use Safari but there are possibly 3 issues behind this:

- Twitter having a broken service worker (saw this a lot on Twitter discussions in last weeks)

- Safari perhaps lowering the JS execution priority in background tabs

- Twitter trying to be nice and only exec JS while their page is in foreground, and hitting a bug in Safari regarding visibilitychange callback not firing (which is fixed in Safari 14 by the way)

I'm running Safari 14.0.2 on Mac OS 11.1. Safari does lower background javascript execution, it allows it to save on battery life for background tabs.

I want this. I want it to save battery life.

So Twitter should make sure their javascript executes correctly when it is foregrounded and Safari does start executing it.

Either way, I honestly don't care too much why it is broken, Twitter could have sent a pre-rendered page down.

> Twitter having a broken service worker (saw this a lot on Twitter discussions in last weeks)

Any references/links for that sort of breakage? I'm looking at using service workers for a personal project (mainly of offlining, but partly for background processing). Being throttled in a background tab is fine, but I'd like it to work fully without manual interaction (a refresh) once the tab has focus.

I don't know the details unfortunately, other than "force-reload fixes the issue".

So what’s the alternative? Is there some easy scripting tool to automate a browser to login (?) and issue requests to read tweets? I’m not very clear on what exactly is blocked here and what the workarounds are.

I have never understood how to use Twitter. I simply cannot follow "the flow".

I expect an RSS style newsreader. I just want to see what's new, in order.

IMHO John Carmack's old-school finger file updates were the pinnacle of human achievement. Just aggregate everyone's finger updates into a swanky offline mail-reader-style UI. Heaven.

I don't understand why links aren't links. eg Open this in a new tab.

I don't understand why pull quotes are images.

And tellingly, I don't care enough to figure it out, find a better way.

PS- One thing missing from Twitter is the drive-by downvote. Too bad.

Hmm, so I guess Twitter just feels big enough to leave the World Wide Web?

I'm surprised this has been noticed now. This has been happening to me with Firefox on and off for about a year, until I decided to switch to nitter.

Twitter recently moved to AWS; could this be related?

nitter.net still allows you to view tweets without JavaScript.

Just replace the twitter.com part of the tweet URL.
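As a small sketch (assuming the public nitter.net instance is reachable), the rewrite is just a host swap:

```shell
# Swap twitter.com for nitter.net in a tweet URL (sed keeps this POSIX-sh safe).
tweet_url="https://twitter.com/zarfeblong/status/1339742840142872577"
nitter_url=$(echo "$tweet_url" | sed 's/twitter\.com/nitter.net/')
echo "$nitter_url"
# Then fetch plain HTML, no JavaScript needed:
#   curl -s "$nitter_url"
```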

Where is the evidence/documentation for this claim, i.e. that it's a general policy? I'm not disputing it, but it's just a tweet that doesn't provide context.

Here is how you verify it yourself,

    curl https://twitter.com/zarfeblong/status/1339742840142872577

As pointed out by https://twitter.com/magusnn/status/1339833122456662017, you can impersonate GoogleBot and get an eye-watering 500kb of HTML to read a single tweet.
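A rough sketch of that trick, taking on faith that the UA string below is the one Googlebot actually presents:

```shell
# Spoof Googlebot's user-agent; Twitter then serves server-rendered HTML.
# The UA string is the commonly cited Googlebot one (an assumption here).
UA="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
echo "$UA"
# curl -s -A "$UA" https://twitter.com/zarfeblong/status/1339742840142872577
```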

Still possible to curl; it just looks like it is a bit more obnoxious now and requires parameters like your User-Agent and cookie to be set. Look in your network tab for requests to an endpoint like this[1] with a lot of query parameters after.

1- https://twitter.com/i/api/2/timeline/conversation/<twitter status id>.json

Any HTTP request you can make with a browser can in principle be made with curl. The issue is that seeing content now requires you to jump through lots of hoops, whereas it was previously openly accessible by default.

That's true. I agree it is a worse change and obnoxious for twitter to do (unless there is some easy way I/we just aren't aware of that lets you easily get the tweets as before).

Turn off JS then go to twitter.com. There's the evidence/docs.

Everyone is missing the question asked here. The poster isn't asking for evidence of the bug. The poster is asking about the assumption that this is Twitter's policy. This could also be a bug in how user-agents are handled.
