oliverrice's comments | Hacker News

OrioleDB maintains a very small set of Postgres patches that are targeted for upstream. The storage engine that mitigates the need for vacuuming [1] is implemented as a standard Postgres extension, so it will still need to be installed by the Postgres host in order to take advantage of it.

But yeah, looking forward to that day too!

[1] https://www.orioledb.com/blog/no-more-vacuum-in-postgresql


(disclaimer: supabase employee)

OrioleDB continues to be fully open source and liberally licensed. We're working with the OrioleDB team to provide an initial distribution channel so they can focus on the storage engine rather than hosting, while we provide lots of user feedback/bug reports. Our shared goal is to advance OrioleDB until it becomes the go-to storage engine for Postgres, both on Supabase and everywhere else.

Happy to hear any concerns you have


Please forgive and help remedy my ignorance: is it a coherent goal to want OrioleDB to be the go-to storage engine for Postgres, on Supabase?


I don't want to hijack Datadog + Quickwit's post's comment section with unrelated promotional-looking info. Quick summary below, but if you have any other questions please tag olirice in a Supabase GH discussion.

The OrioleDB storage engine for Postgres is a drop-in replacement for the default heap method. It takes advantage of modern hardware (e.g. SSDs) and cloud infrastructure. The most basic benefit is that throughput at scale is > 5x higher than heap [1], but it's also architected for a bunch of other cool stuff [2]: copy-on-write unblocks branching, and row-level WAL enables an S3 backend and scale-to-zero compute. The combination of those two makes it a suitable target for multi-master.

So yes, given that it could greatly improve performance on the platform, it is a goal to release it in Supabase's primary image once everything is buttoned up. Note that an OrioleDB release doesn't take away any of your existing options. It's implemented as an extension, so users would be able to create all heap tables, all orioledb tables, or a mix of both (see the sketch below).

[1] https://www.orioledb.com/blog/orioledb-beta7-benchmarks

[2] https://www.orioledb.com/docs
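
To make the heap/orioledb mix concrete, here's a rough sketch of what per-table storage selection looks like once the extension is available on the host (the table definitions are illustrative):

```sql
-- requires the orioledb shared library to be installed on the Postgres host
create extension if not exists orioledb;

-- the storage engine is chosen per table via the access method clause,
-- so heap and orioledb tables can live side by side in the same database
create table events_heap   (id bigint primary key, payload jsonb) using heap;
create table events_oriole (id bigint primary key, payload jsonb) using orioledb;
```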


Makes sense. Perhaps the previous commenter thought OrioleDB was itself a database rather than an alternative storage implementation underneath an existing one. That's what I thought before I went to their site.


Yes, exactly. :)


Thanks! TIL


what's your use-case for multi-master that would not be supported by something like regionally routed read-replicas with high availability? i.e. do you have a specific need for low global write latency, or is it motivated by something else?


> do you have a specific need for low global write latency, or is it motivated by something else?

It's motivated by a liking for reliable systems that keep working when a node fails.

There are a bunch of non-webscale companies that pick MySQL over PostgreSQL because they can use either Percona XtraDB Cluster or Galera Cluster.

Having an open source multi-master solution would mean that PostgreSQL could finally be used over there as well.


Thank you, your feedback is appreciated.

The important design issue when building active-active multi-master on top of the Raft protocol is being able to apply changes locally without immediately putting them into the log, without sacrificing durability. MySQL implements a binlog, separate from the storage engine's own log, to ensure durability. OrioleDB implements copy-on-write checkpoints and row-level WAL, which gives us a chance to implement multi-master and durability using a single log.


yeah, fair shout. The `columnar index` (for hybrid analytical workloads) and `multi-master` roadmap items haven't been put in an order on the schedule yet, so maybe multi-master makes sense to do first. Both align with webscale requirements though. Columnar would move into the AlloyDB space and multi-master would be more like PlanetScale (although not quite the same audience in the latter case).


At Supabase we also recently switched to Nix for packaging our Postgres+extensions bundle

https://github.com/supabase/postgres/blob/develop/flake.nix


In Studio you'd need to click on a query to get the recommendations, but the most important queries to optimize are surfaced by several metrics like "Most time consuming", "Most frequent", and "Slowest execution", sorted worst-to-best in tab views.

But if you do want to scan all of your queries at once for possible indexes you can do it in SQL with

```sql
-- install the advisor into the extensions schema, pulling in any dependencies
create extension if not exists index_advisor schema extensions cascade;

-- run the advisor against every statement captured by pg_stat_statements,
-- with the largest estimated cost reductions first
select ia.*, pss.query
from pg_stat_statements pss,
  lateral (select * from extensions.index_advisor(pss.query)) ia
order by (total_cost_after::numeric - total_cost_before::numeric)
limit 100;
```


> I'm concerned about getting locked in

Supabase is one of the most portable platforms out there.

The whole stack is self-hostable and open source. All of the data are contained in Postgres. You're one pg_dump away from being able to switch to a different Postgres host. Or if you're switching to something else entirely, you can export the data to CSVs and take it anywhere. But we're confident you won't want to :)
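
To make "one pg_dump away" concrete, a minimal sketch of the export path (the table name is hypothetical):

```sql
-- full logical dump for restoring into another Postgres host (run from a shell):
--   pg_dump "$DATABASE_URL" > backup.sql
--
-- or export a single table to CSV from psql; \copy runs client-side,
-- so no server filesystem access is needed
\copy (select * from public.orders) to 'orders.csv' with (format csv, header)
```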

disclaimer - Supabase employee


> Supabase is one of the most portable platforms out there

Not in my experience. The documentation and infra are just not there to make it easy to use an external Postgres db.


we know of many, many companies using Supabase self-hosted or with an external database.

if you have any problems, feel free to reach out to me directly. We want this to be simple (and you can see that there are non-Supabase commenters in this thread who are self-hosting, so it's not just lip service)


I self host a couple supabase instances.

That said, it feels like every other week there's an update that breaks the self-hosted compose setup. Looking through the GitHub issue tracker shows a few issues where the suggestion is "oh yeah the latest X image doesn't work, regress the version to get it running".

I really like Supabase, but stability of the self-hosted images is my biggest gripe with it currently.


That does not track with anything I've ever seen out of using Supabase

If you target Postgres, just about any Postgres instance works the moment you enter a connection URL.

You can hop from one managed Postgres offering to another in 10 minutes and lose no functionality. Everything from auth to programming (and even RLS implemented in the query space) will work instantly: no additional software needed.

Are you claiming that's the case with Supabase and its JWT auth/RLS entanglement?


What additional tooling do you think would be needed to turn this concept into a maintainable stack with good UX for mid-to-large sized applications?


I've been using Astro (https://astro.build) for static site serving + PostgREST + htmx for simple data-centric components.
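
In case it helps picture the data-centric components: PostgREST exposes tables, views, and functions over HTTP, so an htmx snippet can fetch something like the view below directly (a sketch; all names are illustrative):

```sql
-- PostgREST serves this view at GET /recent_posts as JSON;
-- table and column names are hypothetical
create view recent_posts as
  select id, title, created_at
  from posts
  order by created_at desc
  limit 20;
```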


For anything mid-to-large size, I believe a separate abstraction layer is needed, which would be an API. I also tried to build something similar on top of SQL; the tech stack is Jinja-templated SQL with an OpenAPI layer implemented in YAML, but I would still scope it to internal tooling. Here it is: https://jinj.at


hey, author here (although not of this feature)

Exposing postgres functions through the GraphQL API has been one of our most frequent requests. I'm really looking forward to seeing what people are able to build with this flexibility. The coolest one I've come across so far is a full text search entrypoint on the Query object.
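
As a rough sketch of the SQL side of such an entrypoint (table, column, and function names here are made up for illustration): a stable function returning rows of a table type is the kind of thing that gets surfaced as a field on the Query object.

```sql
-- hypothetical table holding searchable content
create table posts (
  id bigint generated always as identity primary key,
  title text not null,
  body text not null
);

-- a stable set-returning function over that table; functions like this
-- are what get exposed through the GraphQL Query object
create function search_posts(term text)
returns setof posts
language sql
stable
as $$
  select *
  from posts
  where to_tsvector('english', body) @@ websearch_to_tsquery('english', term);
$$;
```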

Happy to answer any questions


hi, author here & happy to answer any questions

Really looking forward to the next steps with vecs and would love to hear any thoughts about the `future ideas`.

Mainly, what kind of interface would you like to see for adapters/transforms so collections can feel like you're inserting text/images/video/whatever rather than passing vectors around?


PostGIS is a native extension mostly written in C. Since C is not a trusted language, it would be hard to fit PostGIS into the TLE paradigm.

It's less of a problem for well-known and trusted extensions like PostGIS though, because they come pre-installed on most hosted providers (Supabase, RDS, etc).
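
To illustrate the contrast (a sketch; the first block uses the pg_tle API with illustrative names): a trusted language extension is defined entirely in SQL/PL/pgSQL and installed without touching the host filesystem, while PostGIS needs its compiled libraries shipped on the server before it can be enabled.

```sql
-- trusted language extension: registered via pg_tle, no host access needed
select pgtle.install_extension(
  'tle_demo',
  '1.0',
  'tiny demo extension',
$_tle_$
  create function demo_add(a int, b int) returns int
  language sql immutable
  as $$ select a + b $$;
$_tle_$
);
create extension tle_demo;

-- PostGIS can't take that path: its C code must already be installed on the
-- host, after which enabling it per database is a one-liner
create extension if not exists postgis;
```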

