Hacker News | jzelinskie's comments

Just wanted to say thanks for such a good write-up and the great work on Otter over the years. We've used Ristretto since the beginning of building SpiceDB and have been watching a lot of the progress in this space over time. We carved out an interface for our cache usage a while back so that we could experiment with Theine, but it just hasn't been a priority. Some of these new features are exciting enough that I could justify an evaluation for Otter v2.

Another major benefit of on-heap caches that wasn't mentioned is their portability: for us that matters because they can compile to WebAssembly.


I actually modified SpiceDB to inject a groupcache and Redis cache implementation. My PoC was trying to build a leopard index that could materialize tuples into Redis and then serve them via the dispatch API. I found it easier to just use the aforementioned cache interface and have it delegate to Redis.
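To make the "cache interface" idea concrete, here's a minimal sketch of what such an abstraction can look like. The names (`Cache`, `mapCache`) are hypothetical, not SpiceDB's actual code; a Redis-backed implementation would satisfy the same interface and delegate each call to a Redis client.

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a hypothetical interface carved out so the backing store
// (Ristretto, Theine, Otter, Redis, ...) becomes swappable.
type Cache interface {
	Get(key string) (value any, ok bool)
	Set(key string, value any)
}

// mapCache is a trivial in-memory implementation guarding a plain map
// with an RWMutex. A Redis-backed type would implement the same two
// methods and forward them over the network instead.
type mapCache struct {
	mu sync.RWMutex
	m  map[string]any
}

func newMapCache() *mapCache { return &mapCache{m: make(map[string]any)} }

func (c *mapCache) Get(key string) (any, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

func (c *mapCache) Set(key string, value any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}

func main() {
	var c Cache = newMapCache()
	c.Set("tuple:doc1#viewer", "user:alice")
	v, ok := c.Get("tuple:doc1#viewer")
	fmt.Println(v, ok)
}
```

Once callers only see the interface, swapping the delegate (or stacking a remote tier behind a local one) is a constructor change rather than a refactor.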


Another good comparison would be against https://pkg.go.dev/github.com/puzpuzpuz/xsync/v3#Map


Definitely; I can expand my comparisons and benchmarks.


Honestly, if you're better than the rest, I would suggest collaborating with the existing solutions.


I recommend folks check out the linked paper -- it's discussing more than just confidentiality tests as a benchmark for being ready for B2B AI usage.

But when it comes to confidentiality, having fine-grained authorization securing your RAG layer is the only valid solution that I've seen in use in industry. Injecting data into the context window and relying on prompting will never be secure.
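A minimal sketch of what "authorization securing your RAG layer" means in practice: retrieved documents are filtered by a permission check before they ever reach the model's context window. Everything here is illustrative; `checkPermission` stands in for a real fine-grained authorization call (e.g. a check RPC against a dedicated service), not any particular product's API.

```go
package main

import "fmt"

// checkPermission is a stand-in for a real fine-grained authorization
// check; here it's just a hardcoded allowlist for the example.
func checkPermission(subject, permission, resource string) bool {
	allowed := map[string]bool{
		"user:alice|view|doc:quarterly-report": true,
		"user:alice|view|doc:public-faq":       true,
		"user:bob|view|doc:public-faq":         true,
	}
	return allowed[subject+"|"+permission+"|"+resource]
}

// filterForSubject drops retrieved documents the subject cannot view
// *before* they are injected into the prompt, so the model never sees
// data the user isn't authorized to see.
func filterForSubject(subject string, retrieved []string) []string {
	var visible []string
	for _, doc := range retrieved {
		if checkPermission(subject, "view", doc) {
			visible = append(visible, doc)
		}
	}
	return visible
}

func main() {
	retrieved := []string{"doc:quarterly-report", "doc:public-faq", "doc:hr-salaries"}
	fmt.Println(filterForSubject("user:bob", retrieved))
}
```

The key property is that the filter runs with the end user's identity, not the model's: no amount of prompt injection can surface a document that was never placed in the context.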


Is that sufficient? I'm not very adept at modern AI, but it feels to me like the only reliable solution is to not have the data in the model at all. Is that what you're saying this accomplishes?


Yes. It's basically a "treat the model as another frontend" approach - that way the model has the same scopes as any frontend app would.


Why wouldn't the human mind have the same problem? Hell, it's ironic because one thing ML is pretty damn good at is to get humans to violate their prompting, and, frankly, basic rational thought:

https://www.ic3.gov/PSA/2024/PSA241203

Or, more concretely:

https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-ho...


Container runs OCI- (Docker-) compatible containers by creating lightweight VMs.

This repository houses the command-line interface which is powered by containerization[0], the Swift framework wrapping Virtualization.framework to implement an OCI runtime.

[0]: https://github.com/apple/containerization


I am going to show my ineptitude by admitting this: for the life of me, I couldn't get around to implementing the macOS-native way to run Linux VMs and used VMware Fusion instead. [0]

I’m glad this more accessible package is available vs. Docker Desktop on macOS or the aforementioned, likely-to-be-abandoned VMware non-enterprise license.

[0] https://developer.apple.com/documentation/virtualization/cre...


Lima makes this really straightforward and supports vz virtualization. I particularly like that you can run x86 containers through rosetta2 via those Linux VMs with nerdctl. If you want to implement it yourself of course you can, but I appreciate the work from this project so far and have used it for a couple of years.

https://lima-vm.io/
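For anyone who hasn't tried it, the workflow described above looks roughly like this with recent Lima versions (flag names may vary by release; check `limactl start --help`):

```shell
# Create a VM using Apple's Virtualization.framework backend, with
# Rosetta 2 available inside the guest for x86_64 binaries.
limactl start --vm-type=vz --rosetta default

# Run an amd64 container inside the ARM Linux VM via nerdctl;
# Rosetta handles the x86 emulation.
lima nerdctl run --rm --platform=linux/amd64 alpine uname -m
```

The second command should report an x86_64 machine type from inside the container even on Apple Silicon, which is the Rosetta trick the parent comment is describing.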


And you also have `colima`[0] on top of it that makes working with Docker on the command-line a seamless experience.

[0] https://github.com/abiosoft/colima


VMware Fusion is a perfectly good way of running VMs, and IMO has a better and more native UI than any other solution (Parallels, UTM, etc)


This is a weird take to me.

VMware Fusion very much feels like a cheap one-time port of VMware Workstation to macOS. On a modern macOS it stands out very clearly, with numerous elements that are reminiscent of the Aqua days: icon styles, the tabs-within-tabs structure, etc.

Fusion also has had some pretty horrific bugs related to guest networking causing indefinite hangs in the VM(s) at startup.

Parallels isn't always smooth sailing, but put it this way: I have had a paid license for both (and VirtualBox installed) for many years to build Vagrant images, but when it comes to actually running a VM for purposes other than building an image, I almost exclusively turn to Parallels.


> reminiscent of the Aqua days

Maybe early Aqua. We're still in the Aqua days, if you don't count yesterday's Liquid Glass announcement. :)


Not on Apple Silicon it's not. In the Intel days, sure it was great.


I can still run the latest ARM Fedora Workstation on Apple Silicon with Fusion, and similar distros straight from the ISO, without having to tweak stuff around or having problems with 3D acceleration, unlike UTM.


The screenshot in TFA pretty clearly shows docker-like workflows pulling images, showing tags and digests and running what looks to be the official Docker library version of Postgres.


Every container system is "docker-like". Some (like Podman) even have a drop-in replacement for the Docker CLI. Ultimately there are always subtle differences which make swapping between Docker <> Podman <> LXC or whatever else impossible without introducing messy bugs in your workflow, so you need to pick one and stick to it.


If you've not tried it recently, I suggest giving the latest version of Podman another shot. I'm currently using it over Docker, and a lot of the compatibility problems are gone. They've put massive effort into compatibility, including Docker Compose support.
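As a sketch of how close the compatibility has gotten: Podman can expose a Docker-compatible API socket, so existing tooling that speaks to the Docker daemon keeps working (exact paths assume a systemd-based Linux setup).

```shell
# Enable Podman's Docker-compatible API socket (rootless).
systemctl --user enable --now podman.socket

# Point Docker-API clients (docker CLI, compose, SDKs) at Podman.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# Compose files work via the `podman compose` wrapper.
podman compose up -d
```

With the socket in place, most workflows built around the Docker CLI or SDKs run unmodified, which is what makes the "drop-in replacement" claim mostly hold these days.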



Yeah, from a quick glance the options are 1:1 mapped, so an

  alias docker='container'

should work, at least for basic and common operations.


The Brooklyn Public Library in Greenpoint has a tool library, although it isn't very large, if that's close to you at all. I'm not sure if it's available at any other library locations, but the one in Greenpoint is fairly new and has great programming.


Thanks for sharing this! I think we'll add a list of existing tool libraries so it will be easier to find the ones near you.


No commentary on your latter points about Oxide's compensation structure, but I fundamentally don't share the same sentiment you have about the dynamics of cash flow for venture-backed startups.

Maybe there are still VC-backed companies having catered food, but I think they're by far the exception and not the rule. ZIRP is long over and a decent portion of this generation of startups began in COVID and subsequently don't even have an office. Maybe I'm the one that's in the bubble, but when you take VC money you're on the line to hit growth numbers in a way that you aren't when you bootstrap and can take your time to slowly grow once you've hit ramen profitability.


A separate policy language is explicitly useful for those that want to be able to reuse policies in programs written in different languages. It's a part of the best practice (for larger orgs/products) for decoupling authorization logic and data from your application codebases.

When you're just hacking something together, you're totally right, it might as well be Rust!


That’s fair. Another pro is the flexibility that comes from being able to store policies in a database and manage them as data instead of code, e.g. rolling your own IAM.

A good problem to solve when you need to, but for many of my projects, which admittedly don’t grow into big organizations, I find myself valuing the simplicity of the reduced toolkit.


This project looks like a very nice lightweight way to implement policy in a Rust application; I really like the ergonomics of the builder. Despite being very different systems, the core permissions check being the same signature as a call to SpiceDB[0] (e.g. the subject, action, resource, and context) shows the beauty of the authorization problem-domain regardless of the implementation.

I would like to add some color that a policy engine is not all you need to implement authorization for your applications. Without data, there's nothing for a policy engine to execute a policy against, and not all data is going to be conveniently in the request context to pass along. I'd like to see more policy engines take stances on how their users should get that data to their applications to improve the DX. Without doing so, you get the OPA[1] ecosystem, where there are a bunch of implementations filling the gap as an afterthought, which is great, but doesn't give a first-class experience.

[0] https://spicedb.io

[1] https://openpolicyagent.org
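The shared "check" shape mentioned above (subject, action, resource, context) can be sketched in a few lines. This is illustrative only; the rule body is a toy, not Gatehouse's or SpiceDB's actual logic.

```go
package main

import "fmt"

// Context carries request-time attributes that aren't stored anywhere
// centrally (IP, time of day, ownership data fetched by the caller...).
type Context map[string]any

// Check mirrors the signature shared by most authorization systems:
// may this subject perform this action on this resource, given context?
func Check(subject, action, resource string, ctx Context) bool {
	// Toy rules: admins can do anything; owners can edit their own resources.
	if subject == "user:admin" {
		return true
	}
	return action == "edit" && ctx["owner"] == subject
}

func main() {
	fmt.Println(Check("user:alice", "edit", "doc:1", Context{"owner": "user:alice"}))
	fmt.Println(Check("user:bob", "edit", "doc:1", Context{"owner": "user:alice"}))
}
```

The point of the comment stands out in the second rule: the engine can only evaluate `ctx["owner"]` if something fed that datum in, and *how* it gets fed is the part most policy engines leave unspecified.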


Agreed! But you're glossing over the Zanzibar point of view on this topic, which falls back to dual-writes. That approach has a lot of downsides: "Unfortunately, when making writes to multiple systems, there are no easy answers."[0]

Having spoken with the actual creators of Zanzibar, they lament the massive challenge this design presents and the heroics they undertook over 7+ years at Google to overcome them.

By contrast, we're seeing lots of the best tech companies opt for approaches that let them leave the data in their source database whenever and as much as possible. [1]

[0] https://authzed.com/blog/the-dual-write-problem

[1] https://www.osohq.com/post/local-authorization

I'm the founder of Oso, btw.


Yeah a big motivation for us was avoiding the need to keep another system up to date. Gatehouse basically sits at the execute policy layer, and we let the application code decide how to unify the data (or not).

Oso local authorization looks like a fantastic solution.


Thanks for the response. In my opinion, Oso does the best job of any policy engine at prescribing how input data to their system is fed (and your linked blog demonstrates it well!).

I do think you might have pivoted the conversation, though. My post was purely about federation strategies and policy engines, but you appear to be discussing consistency and Zanzibar, which is only tangentially related. Federation and consistency aren't necessarily coupled. Oso would also require a complex scheme for achieving strict serializability, but it instead chooses to trade off consistency of the centralized data in favor of the local data.


AuthZed (W21) | Multiple Roles | Remote | Full-time | $150-270k

AuthZed Cloud is a fully managed database-as-a-service platform built to be the foundation for authorization across product suites and oceans. At the core of AuthZed Cloud is SpiceDB, the premier open source database designed specifically to store and query access control data.

For years, there have been libraries to help developers build authorization systems, but that hasn't stopped broken access control from becoming a substantial threat to the internet. The core hypothesis behind SpiceDB is that the best foundation for authorization systems is one that is centralized rather than implemented ad hoc in each application. By providing a system designed to be run centrally by a platform team for an entire organization, developers can standardize workflows, testing flows, and how their applications interoperate.

You can find all of our open roles at the bottom of this page: https://www.workatastartup.com/companies/authzed

- Sr. Site Reliability Engineer (Remote U.S. - Eastern or EU)

- Enterprise Sales Manager - NY or Eastern U.S. (Remote)

- Senior Software Engineer - U.S. or EU (Remote)

- Enterprise Customer Success Engineer (United States - Remote)


Interested, but don't want to accumulate another online account to apply.

