Good to see Sarah mentioned here; it's the same for me. Her music has been the one consistently good thing I've been looking forward to over the last year. Highly recommended.
The metareddit monitor doesn't search through an index. It's a crawler that constantly fetches all new comments and submissions and looks for the (several thousand) keywords in each of them.
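That matching step can be sketched roughly like this (a minimal illustration, not metareddit's actual implementation, assuming keywords are single tokens): build a lowercased keyword set once, then scan each new comment or submission in time proportional to its length rather than to the number of keywords.

```typescript
// Hypothetical sketch: check each incoming text against several
// thousand keywords by tokenizing it and probing a Set, so cost
// scales with the text, not with the keyword count.
function buildMatcher(keywords: string[]): (text: string) => string[] {
  const wanted = new Set(keywords.map((k) => k.toLowerCase()));
  return (text: string) => {
    const hits = new Set<string>();
    for (const token of text.toLowerCase().split(/\W+/)) {
      if (wanted.has(token)) hits.add(token);
    }
    return [...hits];
  };
}

const match = buildMatcher(["metareddit", "squid", "cloudfront"]);
// Matches "cloudfront" and "squid" in this sample sentence:
console.log(match("CloudFront still returns HTTP/1.0, per the Squid thread"));
```

Multi-word keywords would need something like Aho-Corasick instead of a token set, but the idea is the same: one pass per item, all keywords checked at once.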
If that is the only problem, flipping one bit in every response seems like a really simple solution. Why hasn't CloudFront fixed it yet? Do they know about this?
Yes, they know about it. Below is the response from Amazon. The logic they employ is that since it is broken in an old version of Squid, it is fine for it to be broken on CloudFront.
While we are aware of the issue with HTTP/1.0 206 responses to range requests and Chrome, we cannot provide an ETA for a fix. Since this issue is specific to range requests, an immediate workaround is to disable range requests on your origin server if this is possible for your use case.
It is also worth mentioning that multiple web proxy and cache application vendors have been using HTTP/1.0 as a de facto standard for many years, so you will probably sporadically get similar reports from your end users using Chrome, but not other browsers such as Firefox or Safari. For example, here is a discussion with a Chrome developer on the mailing list for the popular Squid web cache about a similar report:
http://www.squid-cache.org/mail-archive/squid-dev/201204/011...
I am not saying that always returning HTTP/1.0 will stick around forever, but it is fairly common in real world situations today.
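The workaround Amazon suggests above ("disable range requests on your origin server") can be sketched as an origin that simply refuses range semantics: whatever Range header arrives, it answers 200 with the full body, so the CDN never caches a 206. This is a hypothetical Node/TypeScript illustration, not Amazon's or CloudFront's code; the `respond` helper and its body are made up.

```typescript
// Hedged sketch of an origin that ignores range requests entirely:
// no 206, no Content-Range, always the complete payload.
function respond(rangeHeader: string | undefined, body: string) {
  // rangeHeader is deliberately unused: ranges are not honored.
  return {
    status: 200,
    headers: {
      "Accept-Ranges": "none", // advertise that ranges are unsupported
      "Content-Length": String(Buffer.byteLength(body)),
    },
    body, // always the full body, never a partial slice
  };
}

// Same answer whether or not the client asked for a range:
console.log(respond("bytes=0-99", "full body").status); // 200
console.log(respond(undefined, "full body").status);    // 200
```

Advertising `Accept-Ranges: none` also discourages well-behaved clients (Chrome included) from issuing range requests in the first place.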
I am affiliated with one of the sites JDownloader supports, and they certainly did not ask us for approval. Whatever you try in order to stop them is futile, since they can react quickly.
JDownloader breaks our business model since we don't have premium accounts, but in the end you have no choice but to accept that some people are leechers.
So, please don't think you're doing good when using tools like that and advocating their use.
Remote: Yes (preferred)
Willing to relocate: Depends
Technologies: TypeScript, Node.js, React, GraphQL, Python, Kubernetes, Helm, ArgoCD, Docker, Terraform, AWS (EKS, S3), GitLab CI, OpenTelemetry, Datadog, PostgreSQL/PostGIS
Résumé/CV: https://misera.org/cv
Email: thomas[@]misera.org
Full-stack engineer turned engineering leader, 13+ years building and scaling web platforms. Most recently modernized a biodiversity monitoring platform at Helmholtz (UFZ) — consolidated scattered repos into a monorepo, built the entire CI/CD pipeline, designed Helm umbrella charts, and deployed observability from zero. Before that, led backend and frontend teams (7 engineers) at a location-based mobile startup — designed the GraphQL architecture from scratch, single-handedly executed the AWS EKS migration, and built the push notification system. Equally comfortable writing resolvers and writing Helm charts. TypeScript and Kubernetes are home turf. Fluent in German and English.