What would be a use case for `os.Root`? Based on my understanding ( https://github.com/golang/go/issues/67002 ), it is related to security. However, under the hood it doesn't use `chroot`, so I could imagine that eventually someone finds a way to escape from the Root.
chroot only makes sense for applications which can commit to exclusively operating out of a single directory, ever. (It also requires the process to have superuser privileges, so it can't be used by applications run as regular users.)
`os.Root` is more about putting a "seatbelt" on filesystem operations - like restricting operations related to an application's cache to its cache directory, or restricting a file server to serving files from the appropriate shared directory. It's not the same kind of ironclad guarantee as chroot, but it'll still protect an application from simple directory traversals.
After this dance, you can call chroot from within the new namespace. It's often also possible to use unprivileged bind-mounts of /dev, /sys, and /proc for a more regular execution environment (although some container runtimes unfortunately block this).
Yeah, I like your examples. In such scenarios it makes sense when we're just trying to protect against our own bugs, rather than against a user deliberately sending a path that leads to the password.txt file.
Why would it use `chroot`? Combined with a sandboxing facility like Capsicum, you can open a directory before entering capability mode and later use `os.Root` to open files in the filesystem tree under the opened directory.
I am not sure whether this custom `os.Root` implementation is good enough to rely on. I see that it is based on openat and validation of paths/symlinks, but should we expect CVEs that will break this protection layer?
OrioleDB continues to be fully open source and liberally licensed. We're working with the OrioleDB team to provide an initial distribution channel so they can focus on the storage engine vs hosting + providing lots of user feedback/bug reports. Our shared goal is to advance OrioleDB until it becomes the go-to storage engine for Postgres, both on Supabase and everywhere else.
I don't want to hijack Datadog's + Quickwit's post's comment section with unrelated promotional-looking info. Quick summary below, but if you have any other questions pls tag olirice in a Supabase GH discussion.
The OrioleDB storage engine for Postgres is a drop-in replacement for the default heap method. It takes advantage of modern hardware (e.g. SSDs) and cloud infrastructure. The most basic benefit is that throughput at scale is > 5x higher than heap [1], but it is also architected for a bunch of other cool stuff [2]. Copy-on-write unblocks branching; row-level WAL enables an S3 backend and scale-to-zero compute. The combination of those two makes it a suitable target for multi-master.
So yes, given that it could greatly improve performance on the platform, it is a goal to release it in Supabase's primary image once everything is buttoned up. Note that an OrioleDB release doesn't take away any of your existing options. It's implemented as an extension, so users would be able to optionally create all heap tables, all orioledb tables, or a mix of both.
Makes sense, perhaps the previous commenter thought OrioleDB was itself a database rather than an implementation-detail alternative inside an existing database. That's what I thought before I went to their site.
Acquisitions don't necessarily mean the end of innovation. Sometimes an acquisition lets a team take innovations they've worked hard on for years and expand their reach to a significantly larger audience :)
I have met the founders of all 3 of these companies and can assure you they all care tremendously about bringing their work to the world.
ParadeDB is independent and without plans to sell anytime soon, though :)
Not hating on Quickwit, but in the modern era an acquisition almost never means continued innovation. Most acquired companies are subordinated to a greater purpose, and the deal almost never drives them to keep building the best version of what they have - their independent lifecycle has effectively ended. Nobody is going to buy them back from DD, and DD's quality/dev process, that of a decent-sized corporation, will dominate.
It also looks like most of DD's observability acquisitions are either integrated directly (seemingly with a full rewrite) or look a lot like acqui-hires for senior folks, so I wouldn't hold my breath here.
> Here's where Rivet's architecture gets fun – we don't rely on a traditional orchestrator like Kubernetes or Nomad for our runtime. Instead, our orchestrator is powered by an in-house actor-like workflow engine – similar to how FoundationDB is powered by their own actor library (Flow [4]) internally. It lets us reliably & efficiently build complex logic – like our orchestrator – that would normally be incredibly difficult to build correctly. For example, here's the logic that powers Rivet Actors themselves with complex mechanisms like retry upgrades, retry backoffs, and draining [2].
It is a bit unclear to me: do you use actors themselves to develop Rivet Actors, or is it a separate actor-like workflow engine rather than the final product?
(I would be super happy to read an article that explains the architecture and the main blocks of the system, and gives some examples.)
I really like the idea of solving the scheduling problem via compiling to WASI. Many months ago I had a conversation with friends about how to implement deterministic testing in Go without a custom IO runtime (the common approach in Scala/Rust/C++). We were talking about a few random things, all of which require a lot of effort (compared to WASI):