Well, with SO, at least you can search on Google and view the version cached by Google just fine.
With Reddit however, these days almost all comments are locked behind “view entire discussion” or “continue this thread”. In fact, just now I searched for something for which the most relevant discussion was on Reddit; Reddit was down so I opened the cached version, and was literally greeted by five “continue this thread”s and nothing else. What a joke.
Reddit's attempts at dark patterns are embarrassing from every perspective. If you design dark patterns yourself, it's a laughably abysmal implementation; if you abhor dark patterns, it's a frustration.
They've actually done a masterful job of finding this balance. I've been on reddit for 15 years and would have quit if they didn't leave the old interface available.
I honestly thought Reddit would die when they introduced Reddit awards; it seemed like such an obvious cash grab. You shouldn't underestimate the amount of community momentum the site has, though.
Yeah, it's crazy how user-hostile reddit.com has become. Fortunately old.reddit.com is still available, but for how long? If only Javascript did not exist, it would be impossible for UX people to come up with something that bad.
> If only Javascript did not exist, it would be impossible for UX people to come up with something that bad.
Arrange the HTML so that the list of comments is at the end (via CSS). Keep the HTTP connection open, have the "show more" button send some sort of request, and when you receive that request, send the rest of the page over the original HTTP connection.
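For the curious, here's a minimal sketch of the server side of that trick in Python (the port, page id, and page contents are all made up). It leans on chunked transfer encoding, so the first response can be left unfinished, and on the fact that browsers stay on the current page when a link returns 204 No Content (the ② option mentioned below):

    import socket
    import threading

    pending = {}  # page id -> socket still waiting for the rest of the page

    HEAD = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/html\r\n"
            b"Transfer-Encoding: chunked\r\n\r\n")

    def chunk(data: bytes) -> bytes:
        # Wrap a payload in HTTP/1.1 chunked framing.
        return b"%x\r\n%s\r\n" % (len(data), data)

    def handle(conn: socket.socket) -> None:
        request = conn.recv(4096).decode(errors="replace")
        path = request.split(" ")[1] if " " in request else "/"
        if path.startswith("/more/"):
            # The "show more" link hit this URL: flush the rest of the
            # page down the original, still-open connection.
            waiting = pending.pop(path[len("/more/"):], None)
            if waiting:
                waiting.sendall(chunk(b"<li>remaining comments...</li></ul>"))
                waiting.sendall(b"0\r\n\r\n")  # end of chunked body
                waiting.close()
            # 204 keeps the browser on the page it is already viewing.
            conn.sendall(b"HTTP/1.1 204 No Content\r\n\r\n")
            conn.close()
        else:
            # First request: send everything except the tail of the
            # comment list, then simply stop writing. The connection
            # stays open, so the browser keeps "loading" the page.
            conn.sendall(HEAD)
            conn.sendall(chunk(
                b"<ul><li>first comments...</li>"
                b'<li><a href="/more/demo">show more</a></li>'))
            pending["demo"] = conn  # a real server would use a unique id

    srv = socket.create_server(("127.0.0.1", 8080))
    while True:
        c, _ = srv.accept()
        threading.Thread(target=handle, args=(c,), daemon=True).start()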
As usual, solve people problems via people, not tech.
② A submit button or link to a URL that returns status 204 No Content.
(CSS image loading in any form is not as robust because some clients will have images disabled. background-image is probably (unverified claim!) less robust than pseudoelement content as accessibility modes (like high contrast) are more likely to strip background images, though I’m not sure if they are skipped outright or load and aren’t shown. :active is neither robust nor correct: it doesn’t respond to keyboard activation, and it’s triggered on mouse down rather than mouse up. Little tip here for a thing that people often get wrong: mouse things activate on mouseup, keyboard things on keydown.)
“Continue this thread” links don’t depend on JavaScript at all.
“View entire discussion” couldn’t be implemented perfectly with <details> in its present form, but you can get quite close to it with a couple of different approaches.
I think the infinite scrolling of subreddits is about the only thing that would really be lost by shedding JavaScript. Even inline replies can be implemented quite successfully with <details> if you really want.
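To illustrate the <details> point, here's a toy sketch (in Python, with made-up comment data) that renders a comment tree as nested <details>/<summary> elements; the browser then handles expanding and collapsing replies with zero JavaScript:

    from html import escape

    def render(comment: dict) -> str:
        # Leaf comments are plain paragraphs; comments with replies
        # become <details> so the subtree can be toggled natively.
        children = "".join(render(c) for c in comment.get("replies", []))
        if not children:
            return f"<p>{escape(comment['text'])}</p>"
        return (f"<details open><summary>{escape(comment['text'])}"
                f"</summary>{children}</details>")

    thread = {
        "text": "Top-level comment",
        "replies": [
            {"text": "A reply",
             "replies": [{"text": "A nested reply"}]},
        ],
    }
    print(render(thread))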
Why wait? Teddit has been a great substitute for reading in a mobile browser, and making an iOS shortcut for transforming Reddit links was pretty straightforward.
Impossible? Man, it's crazy how fast people forget things like good old-fashioned <form> GETs and POSTs. It would obviously be a full page refresh, but other than that, the same awful UX could still be implemented.
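As a reminder of how that looks, a minimal sketch with Python's standard http.server (the URL and page contents are invented): a plain <form> GET carries the "show more" state in the query string, at the cost of a full page refresh:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    PAGE = """<html><body>
    <p>Showing {n} comments.</p>
    <form method="get" action="/">
      <input type="hidden" name="show" value="{more}">
      <button type="submit">View entire discussion</button>
    </form>
    </body></html>"""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Read the requested comment count from the query string;
            # submitting the form reloads the whole page with more.
            qs = parse_qs(urlparse(self.path).query)
            n = int(qs.get("show", ["10"])[0])
            body = PAGE.format(n=n, more=n + 100).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()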
I guess there is a market for a search engine (maybe accessed through Tor) that does not care about robots.txt, DMCA takedowns, the right to be forgotten, etc. Bootstrapping it should not be that hard, since it can also provide better results for some queries: nobody is fighting over ranking until it's widely known.
I'm not sure how far we are from being able to do full-text internet search. Or rather even exact-quote search, preferably with some fuzziness options. That would be cool; Google's quotation-mark search was really neat back when it worked.
Those are the good old Easter eggs, perhaps a memory from when Reddit was a nice place. They stop appearing and get replaced by dark patterns once a site jumps the shark.
I read some people use false slugs in the robots.txt as a honeypot of sorts. IPs that actually read the robots.txt, ignore the disallow, and still access the URI are outright banned.
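If anyone wants to try it, a rough sketch of that honeypot in Python (the path and log format are hypothetical; assumes a standard combined access log): robots.txt disallows a path that nothing links to, so any IP requesting it must have read the file and ignored the Disallow:

    import re

    # Listed as "Disallow: /secret-admin-area/" in robots.txt, and
    # linked from nowhere, so only rule-ignoring crawlers find it.
    HONEYPOT = "/secret-admin-area/"
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

    def ips_to_ban(log_path: str) -> set:
        banned = set()
        with open(log_path) as log:
            for line in log:
                m = LOG_LINE.match(line)
                if m and m.group(2).startswith(HONEYPOT):
                    banned.add(m.group(1))  # walked into the trap
        return banned

    print(ips_to_ban("access.log"))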
It might be related to the time a few years ago when Google added robots.txt exclusions for the T-1000 user agent in regard to its founders. Gort seems to be a robot from old sci-fi, and Bender might be something similar.
> I guess there is a market for a search engine (maybe accessed through Tor) that does not care about robots.txt, DMCA takedowns, the right to be forgotten, etc. Bootstrapping it should not be that hard, since it can also provide better results for some queries: nobody is fighting over ranking until it's widely known.
There is a solution for all this mess, and I'm blocking HN and a few other domains until I implement at least the first step, after which I can share it here.
Also, even if search engines are allowed, old.reddit.com pages are not canonical (<link rel="canonical"> points to the www.reddit.com version, which is actually reasonable behavior), so pages there would not be crawled as often or at all.
Is this a call for competition? I regard Cloudflare as state-of-the-art in terms of security and ease of use. I certainly hope their knowledge replicates across other organizations. As of now, they're still building highly impactful tools that are easy to use and that no one else quite provides. I don't really expect another organization to match them, given the strength of their current leadership. I think they've built themselves a head start for a while.
> Cloudflare as state-of-the-art in terms of security and ease-of-use
Depends on whose security. I value my security dearly, and that's why I use the Tor Browser. Cloudflare has decided I cannot browse any of their websites if I care about my security (they filter out Tor users and archiving bots aggressively), so I'm not using any Cloudflare-powered website. Is it good for security that we prevent people from using security-oriented tooling, and let a single multinational corporation decide who gets to enter a website? In my book, creating a SPOF is already bad practice, but having it filter who gets in is even worse.
Also, are all of these CDNs and other cloud providers solving the right problems?
If you want your service to be resilient against DDOS attacks, you don't need such huge infrastructure. I've seen WP site operators move to Cloudflare because they had no caching in place, let alone a static site.
If you want better connectivity in remote places where our optic fiber overlords haven't invested yet, P2P technology has much better guarantees than a CDN (content-addressing, no SPOF). IPFS/dat/Freenet/Bittorrent... even multicast can be used for spreading content far and wide.
Why do sysadmins want/use CDNs? Can't we find better solutions? Solutions that are more respectful to spiders and to privacy-minded folks with NoScript and/or Tor Browser?
Speaking for myself here, I don't see how people can use the web without JavaScript. As for Tor, you're routing other people's traffic while they route yours, so I can understand why such connections would be blocked: blocking IPs is still a method for mitigating security issues, and you can't determine the real IP of a Tor user.
I prefer tech that I can use both at work and on hobby projects at home.
To that end I've only used cloudflare and netlify. The others have too much friction to try out. I expect I would get experience on the job if necessary.
Fair point. Maybe Fastly is more akin to Akamai given it seems to be more enterprise-y. By market cap, Cloudflare is 26 billion, Akamai is 18, and Fastly is 6.
Fastly's free offering gives you "$50 worth of traffic" whereas Cloudflare has a perpetually free option. And for Akamai you have to apply for a free trial.
Akamai is balls deep in video streaming, which is probably the most bandwidth-intensive thing for a CDN to dabble in. My guess is that CF has much more diverse traffic, hence the fallout from an interruption would be quite different.
Not quite. Akamai is more large-corp centric (they don't serve the average Joe), and besides that, they also do security. If it went down, a lot of DDoS attacks would suddenly become possible.
That doesn't take away their embarrassment. It's insane how many websites rely on Fastly. Twitter hasn't been loading emojis for a while, and I believe it's for the same reason.
We use Fastly (and our site is down too) but I asked them about this a couple of years ago.
It is deliberate.
They said it was so they can tell whether it is their Varnish service or the customer's Varnish service that is down.
Fastly modified the Varnish error message so that it's clear whether an error was returned by Fastly's Varnish or by the origin's, should the customer run their own Varnish on the origin.
Someone needs to change that (I can't, unfortunately, due to an IP block). The part about the spelling is false; apparently [1] it's an intentional change by Fastly so that they can tell whether it's their own Varnish or a customer's Varnish that is throwing an error.
CloudFront, by Amazon's own admission, specialises in high-bandwidth delivery (i.e. huge videos). Fastly has consistently better performance as a small-object cache, which makes it the choice for web assets.
I imagine it works well for the whole business that they allow product teams to use the best cloud tools for the job rather than requiring them to use AWS for everything. If AWS is forced to compete even for Amazon.com's custom, that should make the whole company more resilient to long term technical stagnation.
Really, m.media-amazon.com seems to have a very short TTL (showing 37 seconds right now) and has been weighted towards CloudFront now.
Amazon is also known to use Akamai. Sure, Amazon relies heavily on AWS, but why should it surprise anyone that a retail website obsessed with instant page loads decides to use non-AWS CDNs if the performance is better?
Even if CloudFront became the default, I'm certain amazon.com would keep contracts with Fastly and Akamai just so they can weight traffic away from CloudFront in an outage.
Their CSS and JS were down for a few minutes. I was able to log in to Amazon, but the entire site was in Times New Roman; it was fixed a few minutes later.
Good thing we use CloudFront and Cloudflare where I work.
> Statuspage Automation updated third-party component Spreedly Core from Operational to Major Outage.
> Statuspage Automation updated third-party component Filestack API from Operational to Degraded Performance.
Oh, right. :-D
Don't get me wrong, I love the proliferation of APIs and easily-integrated services over the past 20 years. We're all one interdependent family, for better and for worse.
Yikes, seeing just a "connection failure" on PayPal is something else.
edit: PayPal looks to be back up, at least in US East, but when I turn off my VPN and access from Asia I get "Fastly error: unknown domain: www.paypal.com."
> Monitoring
> The issue has been identified and a fix has been applied. Customers may experience increased origin load as global services return.
> Posted 4 minutes ago. Jun 08, 2021 - 10:57 UTC
Vendors don't even agree on whether :gun: is a revolver, an automatic, a space ray gun, or even a water gun. BTW, it's a 1911 in the original DoCoMo emoji set.
Sure, that's a benefit of emojis being semantic. If you want 'SFW' emojis, you can get them. Converting them to images makes that impossible. And uses vastly more bandwidth, makes them impossible to copy+paste, probably has accessibility issues, etc.
Same reason Gmail uses its own emojis rather than the system ones: (as said above) branding. When you send a tweet, Twitter wants it to look identical across all devices. The classic native UI vs cross-platform UI debate in a nutshell.
Cool, so instead of actually serving text, they could also just serve up little SVGs for each letter. Because god forbid the recipient chooses a different font than Gmail!
Twitter is a medium between people. Removing emoji representation differences across user devices is a way to hopefully reduce misunderstandings between users.
What was far worse than half of the internet being down was that Hacker News also had problems. If I waited long enough on a comments page, I got an error message. I don't quite understand what happened there. The communication between my system and HN must have been working, otherwise I would never have gotten an error message, so it must have been some internal HN problem. But since HN should only need its own internal "database" to generate comment pages, I don't understand why it would be impacted by the Fastly problems.
I could not tell from the Fastly status page: what caused the fault? Could anyone point to any past stories of a similar nature, other than DDoS?
Please don't call it a lie. That would mean they knowingly presented something they knew to be false as the truth. So far I have seen no evidence to support that.
It is definitely a lie, but it's the same lie sold by all cloud offerings. Can you name a single cloud/CDN operator without downtimes?
It's normal to have downtimes, but they are usually scheduled and quick (think <10 minutes per month, which still works out to roughly 99.98% availability, for reboots and/or hardware replacement). I'm pretty sure most non-profit hosts like disroot.org or globenet.org have similar or better 9's than all these fancy cloud services.
How is having a large chunk of the internet using the same CDN provider not "centralizing"? It's not a hard monopoly obviously but still it meets the definition of centralization.
How is private companies choosing to use a common supplier in a competitive market centralization? Monopolies are not centralization either. You need to read a better book.
How is a market competitive when there's a quasi-monopoly on infrastructure? When public money is used to shower the same corporations with huge $$$, while non-profit network operators are left to rot?
It's centralization because they all use the same provider. Why do you care about incentives here? The result is the same, just as capitalism and the free market tend toward monopolies in the long run.
For what it's worth, I'm having these problems also with cnn.com, Reddit, and many others; however, when I switch away from WiFi to my cell provider's network, they work fine.
If you aren't prepared to do CDN changes on a whim when something like this happens, it's often better to wait for the problem to be resolved instead of making things worse for yourself due to misconfigurations, revealing your origin IPs, etc.
Can always improve the process for the next outage.
For sure; as in other industries, all changes come after big troubles like this. But it would be interesting to hear how they (PayPal) deal with that.
Is there anything these big sites could do in this situation, or must they choose between running and maintaining all of their own infra and relying on a single CDN?
If you have absolutely vanilla CDN requirements, you can run multiple CDNs and fail-over or load balance between them using DNS.
Quite a few Fastly customers have more than vanilla requirements though, and may have a lot of business logic performed within the CDN itself. That Fastly is "just Varnish" and lets you perform powerful traffic manipulation is one of its main selling points.
I suppose it’s still a bad experience for the user if some % of attempts to connect fail or if some % of scripts/styles/images fail to load. So I think that means dns information about failures needs to somehow be propagated quickly. Not sure how well that works in practice.
Use two CDNs and DNS providers for redundancy. Gets expensive, but at scale, probably doesn't make a huge difference. More complexity for the site operators to manage, however.
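A toy sketch of what the health-check side of that could look like in Python (the edge hostnames and /health path are invented): probe each CDN and publish only the healthy ones to DNS, with a short TTL so the failover actually takes effect:

    import urllib.request

    # Hypothetical edge hostnames for the same site on two CDNs.
    CDN_EDGES = {
        "fastly": "example-site.global.ssl.fastly.net",
        "cloudfront": "d1234example.cloudfront.net",
    }

    def healthy(host: str) -> bool:
        # A single health probe against the edge; a real check would
        # hit several regions and require consecutive failures.
        try:
            with urllib.request.urlopen(f"https://{host}/health",
                                        timeout=3) as resp:
                return resp.status == 200
        except OSError:
            return False

    alive = [host for host in CDN_EDGES.values() if healthy(host)]
    # Feed this to your DNS provider's API as the CNAME/weighted
    # record set; fall back to the origin if every CDN is down.
    print("Publish:", alive or ["origin.example-site.com"])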
That's the problem with these black-box cloud offerings, that you can never know what will work (or not) and from where. You get semi-random, pseudo-localized outages that are not accounted for in all the 9's of availability.
With a standard TCP/UDP session, it mostly just works or doesn't and you can get a proper traceroute to know what's up. With these fancy CDNs, there's a whole new can of worms to deal with and from a client's perspective you have no clue what's happening because it's all taking place in their private network space where we have no "looking glass".
Same here in central Poland (Łódź area), no problems with any of the linked websites.
edit: My whole Twitter timeline is full of posts saying "Twitter outage? what outage?". Same on Reddit and Twitch chat, feels like for a short time I was invited into some exclusive circle lmao. StackOverflow and other StackExchange sites also work so I can look stuff up for you.
>At the core of Fastly is Varnish, an open source web accelerator that’s designed for high-performance content delivery. Varnish is the key to being able to accelerate dynamic content, APIs, and logic at the edge.
I think Fastly is the one having problems (they happen to use varnish but I haven't seen anything which says varnish is the root cause) - so all sites using it are down.
It's OK though, because large swathes of this discussion seem to have turned HN into reddit, at least temporarily. Normal service will no doubt resume in due course.
Edit: I didn't mean anything negative here! Just slightly shocked that while the UK is opening up under-30 vaccinations, the US is struggling to find any more willing takers. It's probably a sign that there are fewer anti-vaxxers in the UK more than anything. And that universal healthcare is more efficient at distribution than an inherently for-profit system. I don't know, but I just didn't realize it was so different in the UK.
I think this may be because we've had much higher uptake as far as I know, so getting down the age ranges has been slower. By which I mean: yes, maybe the US has made it available to all adults, but how many, as a proportion, have taken it up?
I have seen the argument made that one of the reasons for high vaccine confidence in the UK is as a result of Andrew Wakefield's MMR fraud, which was perhaps debunked more effectively in the UK than the US.
US and UK have very similar vaccination rates despite the US being open to more age ranges. This indicates that a higher percentage of eligible people have gotten the vaccine in the UK, and the US has somewhat hit a wall in terms of vaccinations (though there is the concern that the rates will slow down in the UK also).
I must admit, it has been strange seeing my US peers getting the vaccine months before I can in the UK, but I guess I take comfort knowing that both countries are still doing pretty well!
Fascinating. So those rates are including only ages 30+, which means that once it’s unrestricted the UK should have a very high vaccination rate while ~15-25% of the US will still remain unvaccinated entirely by choice. Wow. So you’re absolutely right, the UK is in reality far far ahead and the US is completely broken as far as public health is concerned because of willing ignorance.
[0] https://www.gov.uk/
https://m.media-amazon.com/
https://pages.github.com/
https://www.paypal.com/
https://stackoverflow.com/
https://nytimes.com/
Edit:
Fastly's incident report status page: https://status.fastly.com/incidents/vpk0ssybt3bj