breakingcups's comments | Hacker News

I'm not familiar with how this process went in this case, but the blog post seems to suggest that feedback from other vendors was incorporated. Can you elaborate on the outstanding issues?

If you follow the links to the browser vendors' positions, there are still discussions about various concerns that don't seem to have been addressed.

What I like about exe.dev is that you only need SSH to access it. Is something like that under consideration for Sprites.dev?

Additionally, is Tailscale/Wireguard connectivity something you'd consider?


Nope, re: SSH. Tailscale should already work on a Sprite. Everything we do at Fly.io is connected by WireGuard, so it's just a question of whether we want to expose that to users.

Those aren't mutually exclusive

"VM creation is temporarily unavailable. Our apologies!"


Breaking Windows Updates is a bad thing, security-wise.


Is there a better solution for self-healing S3 storage that you could recommend? I'm also curious what will make a Rook cluster croak after some time, and what kind of maintenance is required in your experience.


I have unfortunately gotten a Ceph cluster into a bad enough state that I just had to delete the pools and start from scratch. It was due to improper sequencing when removing OSDs, but that is kind of the point: you have to know what you are doing to know how to do things safely. For the most part I have so far learned by blundering and learning hard lessons. Ceph clusters, when mistreated, can get into death spirals that only an experienced practitioner can avert by very carefully modifying cluster state through things like upmaps.

You also need to make sure you understand your failure domains and how to spread mons and OSDs across those domains to properly handle failure. Lots of people don't think about this, and then one day a rack goes poof, you didn't replicate your data across racks, and you have data loss. Same thing with mons: you should be deploying them across at least 3 failure domains (ideally 3 different datacenters) to maintain quorum during an outage.
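
For illustration, the kind of sequencing I mean when pulling an OSD is roughly the following. This is a minimal sketch driving the stock ceph CLI from Python; it assumes the ceph binary is on PATH with admin credentials, and the remove_osd wrapper is my own illustration, not anything Rook or Ceph ships:

    # Drain and remove one OSD in the order Ceph expects.
    import subprocess, sys, time

    def ceph(*args):
        # Thin wrapper around the ceph CLI; a non-zero return code means "not safe yet".
        return subprocess.run(["ceph", *args], capture_output=True, text=True)

    def remove_osd(osd_id: int) -> None:
        osd = f"osd.{osd_id}"
        # Refuse to proceed if stopping this OSD would make PGs unavailable.
        if ceph("osd", "ok-to-stop", osd).returncode != 0:
            sys.exit(f"{osd} is not safe to stop yet")
        # Mark it out so data rebalances onto the remaining OSDs.
        ceph("osd", "out", str(osd_id))
        # Wait until Ceph confirms destroying it would lose no data.
        while ceph("osd", "safe-to-destroy", osd).returncode != 0:
            time.sleep(30)
        # Only now remove it from the CRUSH map, auth, and the OSD map.
        ceph("osd", "purge", str(osd_id), "--yes-i-really-mean-it")

The checks and the waiting are the whole point; skipping them is exactly the kind of improper sequencing that got me into trouble.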


I haven't used it yet, but RustFS sounds like it has self-healing:

https://docs.rustfs.com/troubleshooting/healing.html


ceph?


Rook is Ceph.


Can't wait for this to be wrapped up in a neat detection algorithm, with about the same efficacy rate as the college plagiarism detectors and the LLM writing detectors.


Unless you run FreeBSD, apparently


Define "some"


Oh, I don't know, 2? Or 3?


2 or 3 what?


> Conclusion

> A production-grade WAL isn't just code, it's a contract.

I hate that I'm now suspicious of this formulation.


You’re not insane. This is definitely AI.


In what sense? The phrasing is just a generalization; production-grade anything needs consideration of the needs and goals of the project.


“<x> isn’t just <y>, it’s <z>” is an AI smell.


It is, but partly because it is a common form in the training data. LLM output seems to use the form more than people do, presumably either due to some bias in the training data (or the way it is tokenised), or due to other common token sequences leading into it (remember: it isn't an official acronym, but Glorified Predictive Text is an accurate description). While it is a smell, it certainly isn't a reliable marker; there needs to be more evidence than that.


Wouldn't that just be because the construction is common in the training materials, which means it's a common construction in human writing?


It must be, but any given article is unlikely to match the average of the training material, and thus the expected frequency of such a construction differs.


https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

The language technique of negative parallel construction is a classic signal of AI writing.

