Hacker News
Show HN: Riza - Safely run untrusted code from your app (riza.io)
24 points by conroy 8 months ago | 5 comments
Hi HN, I’m Kyle and together with Andrew (https://news.ycombinator.com/user?id=stanleydrew) we’ve been working on Riza (https://riza.io), a project to make WASM sandboxing more approachable. We’re excited to share a developer preview of our code interpreter API with HN.

There’s a bit of a backstory here. A few months ago, an old coworker reached out asking how to execute untrusted code generated by an LLM. Based on our experience building a plugin system for sqlc (https://sqlc.dev), we thought a sandboxed WASM runtime would be a good fit. A bit of hacking later, we got everything wired up to solve his issue. Now the API is ready for other developers to try out.

The Riza Code Interpreter API is an HTTP interface to various dynamic language interpreters, each running inside a WASM sandbox with no access to the outside world (for now). We modeled the API on a POSIX shell-style interface.
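To make the shell-style shape concrete, here is a minimal sketch of what a request body to such an API might look like. The field names (`language`, `code`, `stdin`, `args`) and payload shape are assumptions for illustration, not Riza's documented API; see https://docs.riza.io for the real interface and authentication scheme.

```python
import json

# Hypothetical payload for a POSIX shell-style execution API: pick an
# interpreter, supply the program source, and optionally pass stdin and
# argv, mirroring how a shell invokes an interpreter. Field names here
# are illustrative assumptions.
payload = {
    "language": "python",
    "code": "import sys\nprint(sys.stdin.read().upper())",
    "stdin": "hello from the sandbox",
    "args": [],
}

# Serialize as the body of an HTTP POST. Actually sending the request
# (endpoint URL, API key header) is omitted; consult the docs for those.
body = json.dumps(payload)
print(body)
```

The appeal of the shell-style model is that the caller reasons about one process invocation at a time: program in, stdout/stderr and an exit code out, with no persistent state to manage between calls.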

We made a playground so you can try it out without signing up: https://riza.io

The API documentation lives here: https://docs.riza.io

There are many limitations at the moment, but we expect to rapidly expand capabilities so that programs can e.g. access the network and filesystem. Our roadmap has more details: https://docs.riza.io/reference/roadmap

If you need to execute LLM-generated code, we'd love to have you try the API and let us know if you run into any issues. You can email us directly at founders@riza.io.




Been using Riza for the past few months at our startup to execute code generated by GPT-4.

We use it for local dev, for running model evals (when changing prompts), in CI, and for production workloads.

- It's the easiest to set up. It took us just a few minutes to execute our first function call.

- Multiple language support - we use both JS and Python for LLM-generated code, and Riza works great with both out of the box.

- No cold start - this is important because latency matters in our product.

- No infra management - even with AWS Lambda or a similar serverless product, we felt we still needed to do a bunch of setup to make sure it's fast and secure.

Congrats on launching!


Hey Kyle, and congrats on the launch! Riza looks great!

Looks like we're building in the same space. I'm Vasek, co-founder of E2B [0]. We recently launched our Code Interpreter SDK [1].

We think safe code execution for AI-generated code has big future potential. I'd love to chat sometime if you're up for it! Maybe there's a way we could join forces and build something great.

[0] https://e2b.dev/

[1] https://github.com/e2b-dev/code-interpreter


Hi Vasek, happy to chat sometime soon. Send us an email or DM on Discord?

We'll check out the E2B code interpreter SDK. Looks interesting!


Really interesting direction! My company, Assembled, builds software for customer support teams. I could imagine being on either end of this. On the one hand, we've built applications to run in the curated Zendesk/Salesforce sandbox. On the other hand, we get tons of requests to incorporate custom workflows or metrics natively in our application.


Will its security ever be defeated?




