Hacker News
Postgres.new: In-browser Postgres with an AI interface (supabase.com)
366 points by kiwicopple 26 days ago | 106 comments
hey HN, supabase ceo here

This is a new service that we're experimenting with that uses PGlite [0], a WASM build of Postgres that runs in the browser. You might remember an earlier WASM build [1] that was around 30MB. The Electric team [2] have gone one step further and created a complete build of Postgres that's under 3MB.

Their implementation is technically interesting. Postgres is normally multi-process: each client connection is handed to a child process by the postmaster process. In WASM there's limited or no support for process forking and threads. Fortunately, Postgres has a relatively unknown built-in "single-user mode" [3], primarily designed for bootstrapping a new database and for disaster recovery. Single-user mode only supports a minimal REPL, so PGlite adds wire-protocol support, which enables parametrised queries etc.
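To make the constraint concrete: a REPL-style backend consumes one finished SQL string at a time, so parameter binding has to happen in a layer above it. A minimal sketch of that idea, with sqlite3 standing in for the string-only backend (this is an illustration, not PGlite's actual implementation, and the quoting is deliberately simplistic):

```python
# Sketch of layering parametrised queries on top of a backend that,
# like single-user mode, only accepts a complete SQL string at a time.
# sqlite3 stands in for that backend; real drivers quote far more carefully.
import sqlite3

raw = sqlite3.connect(":memory:")  # the "string-only" backend

def quote(value):
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def query(sql, params=()):
    # Bind $1, $2, ... client-side, then hand over one finished string.
    for i, p in enumerate(params, 1):
        sql = sql.replace(f"${i}", quote(p))
    return raw.execute(sql).fetchall()

rows = query("SELECT $1 || ' ' || $2", ("hello", "world"))
print(rows)  # [('hello world',)]
```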

We have created https://postgres.new as an experiment. You can think of it as a love child between Postgres and ChatGPT: an in-browser Postgres sandbox with AI assistance. You can spin up as many new Postgres databases as you want because they all live inside your browser. We pair PGlite with an LLM (currently GPT-4o) and give it free rein over the database, with unrestricted permissions. This is an important detail: giving the LLM full autonomy means that it can run multiple operations back-to-back. Any SQL errors from Postgres are fed back to the language model so that it can make a few more attempts to solve the problem. Since it's all in-browser, it's low risk.
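The error-feedback loop is simple to picture. A minimal sketch, with sqlite3 standing in for PGlite and a canned list of attempts standing in for the model's successive answers:

```python
# Sketch of the "feed errors back to the model" loop described above.
# sqlite3 stands in for PGlite, and the attempts list stands in for
# successive LLM answers; the real service calls GPT-4o instead.
import sqlite3

def run_with_retries(db, attempts):
    last_error = None
    for sql in attempts:              # each attempt comes from the model
        try:
            return db.execute(sql).fetchall()
        except sqlite3.Error as err:
            last_error = str(err)     # fed back into the next prompt
    raise RuntimeError(f"gave up: {last_error}")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('ada')")

rows = run_with_retries(db, [
    "SELECT username FROM users",  # first attempt fails: no such column
    "SELECT name FROM users",      # the "retry" succeeds
])
print(rows)  # [('ada',)]
```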

Some other features include:

    - CSV upload: you can upload a CSV and it will automatically create a Postgres table which you can query with natural language.

    - Charts: you can ask the LLM to create a chart with the data and change the colors of the charts.

    - RAG / pgvector: PGLite supports pgvector, so you can ask the LLM to create embeddings for RAG. The site uses transformers.js [4] to create embeddings inside the browser.
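The CSV feature above presumably involves a type-inference step over the parsed rows before issuing DDL. A toy sketch of that step (a rough heuristic for illustration, not the actual postgres.new code):

```python
# Toy sketch: guess Postgres column types from a CSV sample, then
# emit a CREATE TABLE statement. Not the postgres.new implementation.
import csv, io

def infer_create_table(name, csv_text):
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, sample = rows[0], rows[1:]

    def col_type(i):
        values = [r[i] for r in sample]
        if all(v.lstrip('-').isdigit() for v in values):
            return "bigint"
        try:
            [float(v) for v in values]
            return "double precision"
        except ValueError:
            return "text"

    cols = ", ".join(f'"{h}" {col_type(i)}' for i, h in enumerate(header))
    return f'CREATE TABLE "{name}" ({cols})'

ddl = infer_create_table("products", "sku,price\nA1,9.99\nB2,12.50")
print(ddl)  # CREATE TABLE "products" ("sku" text, "price" double precision)
```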

We're working on an update to deploy your databases and serve them from S3 using pg-gateway [5]. We expect to have read-only deployments ready by the end of the week. You'll be able to access them using any Postgres-compatible tool (e.g. psql).

Everything is open source. A huge shout-out to the Electric team who have been a pleasure to build with.

[0] PGLite: https://github.com/electric-sql/pglite

[1] Postgres-wasm: https://supabase.com/blog/postgres-wasm

[2] Electric: https://electric-sql.com/

[3] Single user mode: https://www.postgresql.org/docs/current/app-postgres.html#AP...

[4] transformers.js: https://github.com/xenova/transformers.js

[5] pg-gateway: https://github.com/supabase-community/pg-gateway




This is seriously impressive. I asked it to create 3 different databases:

- a customer orders database with products with a timeseries of prices, and multiple fulfilments per order

- an issue tracking system with a reflexive user/manager database

- a family relationship model

In each case I got it to insert sample data and then asked postgres.new to answer some questions about the data it had inserted. I thought the family model would trip it up, especially when I told it to put in cousins and uncles. But no, it was pretty bang on.

The only thing it didn't quite manage is that some of the relationships are reciprocal (i.e. my sibling also has me as a sibling).

I asked postgres.new to review the data and it fixed some, and I asked it to check again and it fixed the rest. This is a very useful tool that I can see myself using!


It is a neat tech demo but it clearly shows the limits of AI:

- I got it to generate invalid SQL, resulting in errors

- it merely generates reasonable SQL; in my case it generated two disjoint sets of tables

- in practice you have to review all the code

- it can point you in the wrong direction. Novel systems often have something smart/abstract in them; this system creates mostly straightforward, simple designs. That's not where the value is

All in all, it’s not worth it to me. Writing code myself is easier than having to review LLM code

Within our organization we have forbidden fully LLM-generated merge requests because more often than not the code was suboptimal, and had sneaky bugs/mistakes.

I'm not saying these can't be overcome. But not with current LLM design. They mostly generate stuff they have seen and are bad at truly novel stuff.


I have had tremendous success using LLMs to generate SQL. In my use, the majority of the time, ChatGPT gets things spot-on. Even for really sophisticated queries that are going to inspect and aggregate multiple tables into one single output.

I do agree it is not perfect - and things need to be reviewed - but I rarely get the sort of gobbledygook that "resembles" valid SQL and is in fact meaningless.


Meaningless is good. You can take one look and discard it. The problem is that you can't trust there aren't subtle errors. So you still need to go over everything with a fine-toothed comb. If you don't, you're just sitting on a ticking time bomb. "It seems to be working" is very different from "this does what I had in mind and it will work for all inputs".


Clicking "New database" doesn't do anything for me...? No changes in the UI, and no messages in the console. Admittedly I'm not signed in with Github, but isn't that only for the AI thing (that really I don't want to use).

Edit: Okay, reading kiwicopple's comment makes it clearer that ChatGPT is not optional. I'm... not enthused by this. Why would you take something that's local-first and gatekeep it with something as useless as an AI-for-your-db?

https://pglite.dev/repl/ is available as a more barebones browser-pglite playground.


The design does not communicate this well; you need to sign in with GitHub to create a new database or even type into the input field. In fact it says "To prevent abuse we ask you to sign in before chatting with AI." But it never asks you; you have to figure that out yourself. Why show the "New database" button and input field before the user signs in?


We tested several models and GPT-4o was the most accurate, so it makes the most sense for our initial launch.

That said: it's 100% our intention to add local models. This is just our v1.


The point is that the verbiage ("To prevent abuse we ask you to sign in before chatting with AI.") implies that only the AI won't work if you don't sign in, not the entire product/site.


good feedback, thanks

we'll change the verbiage for now, and look at ways we can provide a 100% local experience without logins


I think the point is that many of us would like this without any AI in it, just a simpler Postgres playground in the browser, much like the Rust or Go playgrounds.


> just a simpler Postgres playground in the browser...

Then you want pglite. This project uses it and provides a link to it.


I obviously saw that since it's linked in the very first message in the chain of messages I responded to.

pglite.dev/repl does not have the same level of visualizations as postgres.new.

What I 'want' is exactly what I described in my previous message, postgres.new but without the required LLM integration, the same sentiment others had in this thread.


we have something like that from an earlier launch:

https://wasm.supabase.com/

Be aware that the WASM file is 30MB, but I think it fits what you're describing. Details here: https://supabase.com/blog/postgres-wasm


Benchmarks suggest otherwise. Toqan's SQL benchmark shows other models way up in the ranking compared to GPT-4o [1].

Open-weight models specifically fine-tuned on SQL generation and modification also rank pretty well compared to SOTA proprietary models. If you want to eval alternative models, check out sql-eval [2]

[1] https://prollm.toqan.ai/leaderboard/stack-unseen?type=concep...

[2] https://github.com/defog-ai/sql-eval


Yeah, perhaps I came in with the wrong expectations. The UI/URL made me expect a Postgres playground, when instead I got this. Perhaps postgres.ai?

Also... the page title is "Postgres Sandbox". This is, at best, misleading.


> Clicking "New database" doesn't do anything for me...?

You need to start typing after clicking the button. It needs some UX rethinking, but it's not a bug; just start typing in the chat.


(So uh, just to clarify for posterity, when I originally made this comment it was just a link to https://postgres.new/ without any description, and kiwicopple's comment was just a link to another comment of his on a related post. Obviously now there are more words, everywhere, explaining everything.)

(I still hate LLMs in my DB. I KNOW SQL LET ME WRITE IT.)


There's a video overview here: https://www.youtube.com/watch?v=ooWaPVvljlU

Really impressive stuff - congratulations!


Basic question. Is there a service out there where I can easily link my database to an LLM to do this exact same type of analysis, except on one of my own Postgres DBs instead of one backed by PGlite? My org has several non-technical people who would greatly benefit from being able to interact with our DB via an LLM rather than writing SQL 101 queries. The PostgreSQL Explorer extension for VS Code helps some, but doesn't quite make it as seamless as this.


If you want to use an LLM to type queries in English, Visual DB can do it: https://visualdb.com/


Another possibility might be to export the database (or a subset of it) to be loaded in a more ephemeral environment like PGlite so that you don't have to worry about non technical users running inefficient/unindexed queries taking down the prod DB.


Mine has been a little bit more along the lines of helping them understand how everything is linked. They can't really even understand the power of JOIN, which greatly reduces the power of what I set up. Basically, I'm sort of stuck in a place where I built something that's too powerful for them to use, and I can see an LLM finally bridging that gap


I'm in the middle of building something like this but it's not ready yet.

You'll just provide an OpenAI/Anthropic API key and connection details for the db/schema. I intend for it to work a lot like postgres.new, but with regular Postgres instances.


I presume you've done your competitor analysis because it's a really crowded space.

https://news.ycombinator.com/item?id=41227656


Apparently it's already so common, it's become a pedagogical task for learning to code with LLMs:

https://aws.amazon.com/blogs/machine-learning/build-a-robust...


Well this is the second time Supabase has had a product announcement very similar to mine.

My implementation is more focused on data analysis and visualization via a natural language interface. However, the straightforward database operations that postgres.new tries to tackle are included.


I was mainly referring to the plethora of other services doing exactly that: analysis and visualization via natural language. Supabase is different since they are doing it locally in the browser but just regular text-to-SQL is one of the most common applications of LLMs. How will you differentiate yourself from the rest of the "chat with your database" services?


I don't know enough to endorse a specific one but there are probably hundreds of services doing text-to-SQL using LLMs. "Chat with your database" is one of the most popular products in the space.


It would be cool to have this without the AI stuff.

Also, does the WASM build perhaps enable using Postgres as an embedded db, where typically SQlite would be used?


Check out https://github.com/electric-sql/pglite, which this is built with.


Normally I'm not a fan of the "AI/LLM + {existing workflow}" headlines that companies have been pumping out, but honestly, this might be a decent case. In my experience, LLMs are pretty good at generating on-the-fly data for inserting into databases. So instead of hand-rolling or building a query to insert data, it would be easier to ask the LLM.

Overall, looks pretty good. I’m on mobile but stumbled upon the blog post in comments.

Ask: I understand why it won't run on mobile, but at least give mobile users a synopsis of what it's supposed to do. I would have ignored this if it weren't for the luck of seeing your comment.


> at least give mobile a synopsis of what it's supposed to do

agreed - we rushed this one a bit. we're working on some updates now for mobile users

edit: we've shipped some changes for mobile which embed the video and link to the blog post so that it's a bit clearer. Thanks for the feedback


This tool is amazing for us. There are so many pieces to this - the capability is a big step forward for architecting databases.

Rudimentary compared to what you've done, but is it possible to take an existing database schema, developed either in the Supabase migrations style or with another tool like Flyway, and draw the diagram? That alone is huge for us in an enterprise setting, and would allow us to communicate the structure of the database to business folks (especially for large databases). How does the tool build that from migrations currently?


> take an existing database schema, developed either in the Supabase migrations ... and draw the diagram?

It might not be so obvious, but this diagram tool is already built into the Supabase Dashboard (under "Database")

I can see some value in providing this as a generic service for any PG database - I'll look into it - but I know that DBeaver already has something like this built in (and I think pgAdmin does too)


Our scenario is trying to produce a diagram during CICD. We have ongoing changes that happen to our SQL codebase, and it's a constant problem to align our documentation and diagrams to that. If we could see the current ER diagram of 'main' branch, and ideally also any given historic commit, it would be huge for our product owners, as they could consume the diagram and understand the current schema without disrupting the engineers. The CICD pipeline would ideally spit out a static page of the ER diagram during build, either as SVG or HTML.

DBeaver's and pgAdmin's diagramming functions aren't anywhere near as advanced as what you've built here, and they fall apart for large diagrams. Your team has a lot of smarts!


This one isn’t available in that format, but it looks like this open source tool could help:

https://github.com/KarnerTh/mermerd
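For a CI step like the one described above, emitting Mermaid's `erDiagram` syntax is one lightweight option (it's what mermerd produces). A sketch with hard-coded metadata, where a real pipeline would query `information_schema` instead:

```python
# Rough sketch: emit a Mermaid ER diagram from schema metadata.
# A real CI job would pull tables/foreign keys from information_schema
# or pg_catalog; here the metadata is hard-coded for illustration.
tables = {
    "orders": ["id", "customer_id", "placed_at"],
    "customers": ["id", "name"],
}
foreign_keys = [("orders", "customer_id", "customers", "id")]

lines = ["erDiagram"]
for table, cols in tables.items():
    lines.append(f"    {table} {{")
    lines.extend(f"        text {c}" for c in cols)  # types omitted for brevity
    lines.append("    }")
for src, _, dst, _ in foreign_keys:
    lines.append(f"    {dst} ||--o{{ {src} : has")   # one-to-many edge
diagram = "\n".join(lines)
print(diagram.splitlines()[0])  # erDiagram
```

The resulting text file can be committed or rendered to SVG as a build artifact.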


Supabase engineer here. This was a lot of fun to build with the Electric team. There were a lot of technical hurdles to jump over - I’m happy to cover any questions. We’ll continue shipping throughout the week. I see a lot of feedback in this thread already which we’ll get started with.


Is there a way to query the local instance without using the AI input?


at this stage there is not, but you can just write SQL and the LLM is smart enough to run it directly on the database.

I can see some situations where you might want direct access to Postgres, so we'll jot that down as something to look at too


I'm using it, but keep getting

       POST https://postgres.new/api/chat 500 (Internal Server Error)
I've tried reloading, creating a new project and going back; nothing works. db id oy6g6g6o5qbt78fo


if your session is completely broken you might need to clear your localstorage (in the browser devtools)

I'll flag this with the team but we can't actually access your "db id oy6g6g6o5qbt78fo" as it's run directly in your browser and doesn't touch our servers at all


Hey my guy, just letting you know that I'm pretty excited about the whole PGLite project (sorry if my comments above sounded overly negative >_>, managing expectations and all).


> Please connect from a laptop or desktop to use postgres.new.

There's nothing wrong using Webkit / Safari on your laptop or desktop. There are dozens of us, DOZENS!


Why not just display a warning and let me try anyway? How sure are they that they're filtering out absolutely every device that can't run it successfully? Why risk denying it unnecessarily?


I think it checks for the APIs it needs, and bails when they are unavailable. At least, that's my impression given the brief flash of the application before the warning replaces it.


There are way more than dozens. In fact, it looks like Safari is (probably) ahead of Firefox on desktop: https://gs.statcounter.com/browser-market-share/desktop/worl... (9.1% Safari, 6.6% Firefox), https://www.similarweb.com/browsers/worldwide/desktop/ (7.6% Safari, 5.7% Firefox), and https://radar.cloudflare.com/reports/browser-market-share-20... (scroll to "Market Share by OS" and select "Desktop"; Cloudflare is the one that shows Safari behind Firefox, at 6.6% market share vs 7.2%). Also, 39% of Mac users are on Safari, a highly valuable demographic given the higher-income skew of Mac users and the fact that a third of professional developers are on Mac (and only 47% on Windows, according to Stack Overflow's survey).


fwiw this will show for 2 reasons:

1/ you're using a small window (even on Chrome/FF). We don't support mobile yet, so if you're seeing this on a desktop just expand the window

2/ you're on safari, which doesn't have support for OPFS yet: https://github.com/electric-sql/pglite/pull/130


The website works fine for me on Safari. Had it generate a 3-table schema no issues.


my bad - it has been a crazy week with lots of messages flying around. You're right about Safari


It would be worth spending a few minutes improving this notice. Give more context for what the heck this is, and distinguish between "your screen is too small" and "you're using a browser that doesn't have the API we need".


I was getting that in Firefox on my desktop as well. I had it in a half-sized window, but making it full screen let me move past.

It throws an error when it's full screen though. There is more to the error, but it won't let me copy it.

    indexDB.databases is not a function.


It's working fine from Safari for me



Immediately closed when you require me to connect to GitHub just to run a query without the use of AI.


I like the UI. The chat interface is a good fit for this task. How do you prevent or should you prevent users from entering "write me a fib(n) in python" in the chat? To me the chat is solely designed for table creation directives.


From a quick play this is pretty cool! The design choices it made all seemed pretty sensible to me. For example I asked it to create a schema to support a chat application and it came up with something that works pretty well. I then asked it to modify the schema to support various different bits of functionality (e.g. adding support for images in messages, and soft deleting participants from chats) and it was able to handle all those. In addition it was suggesting sensible constraints (foreign keys, nullable, unique) where you'd expect them.

Good work


Pretty much everything you described is a standard ChatGPT feature.


From the blog's section on semantic search:

  Under the hood, we store the embeddings in a meta.embeddings table then pass back to AI the resulting IDs for each embedding. We do this because embedding vectors are big, and sending these back and forth to the model is not only expensive, but also error prone. Instead, the language model is aware of the meta.embeddings table and simply subqueries to it when it needs access to an embedding.
A couple Qs if anyone knows:

1. What does it mean to "pass back to AI the resulting IDs for each embedding" but not the table rows corresponding to the matched vectors?

2. Does "the language model is aware of the meta.embedding table" mean Supabase has deployed a fine-tuned GPT-4o?


1. You can see the relevant code block here[0]. To summarise, instead of storing the embeddings "next to" the data that you provide, we create a table that can be referenced. This is because often we need to send the LLM some data from the table (like creating a chart). If the LLM sees a reference to `meta.embeddings` then it knows it can "fetch" that data later if it's needed (for RAG etc)

2. We haven't fine-tuned, it's all just detailed prompts which you can find in the code. We might need to fine tune later for local-only models, but for now GPT-4o is solid.

[0] https://github.com/supabase-community/postgres-new/blob/4d6c...
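The "pass IDs, not vectors" idea can be pictured like this. In the sketch below a plain dict plays the role of the `meta.embeddings` table and the cosine similarity is computed by hand, where pgvector would do it in SQL (illustration only, not the postgres.new code):

```python
# Sketch of the id-indirection described above: vectors stay in a
# table keyed by id, and only the ids are ever shown to the model.
import math

embeddings = {}          # stand-in for the meta.embeddings table

def store(vec):
    eid = len(embeddings) + 1
    embeddings[eid] = vec
    return eid           # only this id travels to/from the LLM

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

doc_id = store([1.0, 0.0])
query_id = store([0.9, 0.1])
# The "subquery": ids are resolved back to vectors only when needed.
sim = cosine(embeddings[query_id], embeddings[doc_id])
print(round(sim, 3))
```

This keeps the large vectors out of the prompt entirely, which is both cheaper and less error-prone.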


Hey Sam from Electric here, I work on PGlite

What the team at Supabase have built with postgres.new is incredible, and it has been a lot of fun to work with them on it over the last month or so. It's fed directly back into the development of PGlite and helped us iron out some bugs and finish off the required feature list.

SQL databases and LLMs go together incredibly well; the structured data and schema enable an LLM to infer a lot of context about the database and the underlying data. This exploration into a UI over this pairing is going to be incredibly influential; it opens up what has traditionally been a complex technical problem to everyone. That's the true power of LLMs.

I'm not going to go on about PGlite here as postgres.new really deserves the limelight!


Which mobile browsers can this run on? It seems like it's not happening in Safari on iPhone. If it did, that would be a game changer. Any status on whether ElectricSQL can run on mobile devices in general?

Also, is the UI tool open source? I'd prefer to run it locally

edit: this is an incredible tool. It's going to eat a lot of backend engineers' billing hours.

suggestion: add a copy button for the generated SQLs


Everything is open source with the exception of the LLM at the moment, which we'll get to. Here are all the tools:

* postgres-new (https://github.com/supabase-community/postgres-new): The frontend. Apache 2.0

* PGlite (https://github.com/electric-sql/pglite): A WASM build of Postgres. Apache 2.0

* pg-gateway (https://github.com/supabase-community/pg-gateway): Postgres wire protocol for the server-side. MIT

* transformers.js (https://github.com/xenova/transformers.js): Run Transformers directly in your browser


Utilizes PGLite

2 hours ago | 41 comments: https://news.ycombinator.com/item?id=41224689


Very cool! What about projects/integrations with

- duckdb, which also supports WASM

- UWData's Mosaic (https://github.com/uwdata/mosaic), which supports real-time plots

would be really nice to have a kind of "drop-in" page that we could add to any intranet where people could just retrieve an export of some database, and plot it with your code


This is so cool, "just because we could", but I am curious about possible use cases for Postgres in the browser?


There are a lot of reasons that you might want Postgres in the browser, but one of my favorite possibilities is to use it as a local data store (similar to MobX or Redux)

The Electric team have built this extension which will fit that use-case very well: https://pglite.dev/extensions/#live


Playgrounds like this which don't need server resources.

Adding offline support (PWA) to an application which was primarily designed for use on a server. Though I'm not sure how well that'll work in practice, since the rest of the application will be designed to run on a server as well.


Nice! If it’s WASM it should also work on iOS, so why can’t I connect? Pg.dev works though. I wonder if postgresql wasm can be made to run on A-Shell, even though there is a working SQLite wasm port running on A-Shell, which should be enough


It can be annoying on iOS. For example, OPFS only works inside worker threads, so you may design your app to access OPFS from the app + OPFS from Postgres and realize OPFS isn't working in half your app... Happened to me when I built https://cluttr.ai which uses SQLite WASM


we're working on this right now - I don't know if we'll get it out for today, but it will certainly be available this week


Super cool. Great work team! Love the deploy feature to deploy the entire playground to the cloud and get a connection string. Helps devs get started with Postgres projects very quickly.


> create a partitioned table for events with a unique constraint on the event id

infinite error loop


this looks awesome. Is it possible to create a database and load it with data and then share it with others? Would be amazing for teaching SQL, but also just many data collaboration tasks


Yes, we are working on it. It should be available very soon!


Nothing a decent SQL training course could not teach.


what kind of data load are we talking here? I know it gets stored locally. So, is it limited by my local disk size?

How will it perform if I have 1TB of data?


1TB might be a struggle :) Yes, this is currently limited by both browser storage (IndexedDB) and memory. The reason it is also memory-bound is that Emscripten's IndexedDB VFS is actually implemented as an in-memory VFS which syncs files to IndexedDB every once in a while (due to an async limitation).

PGlite is working on an OPFS backend, which will likely increase the max data size quite a bit.
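The memory-bound behaviour described here is essentially a write-back cache: files live in memory and only reach durable storage on an explicit sync. A toy model of that shape (illustrative only, not Emscripten's code):

```python
# Toy model of the IndexedDB VFS behaviour described above: the hot
# copy of every file lives in memory (bounded by RAM), and a periodic
# sync flushes it to durable storage.
class WriteBackFS:
    def __init__(self):
        self.memory = {}   # hot copy, bounded by RAM
        self.durable = {}  # stand-in for IndexedDB

    def write(self, path, data):
        self.memory[path] = data          # fast, but volatile

    def sync(self):
        self.durable.update(self.memory)  # the occasional async flush

fs = WriteBackFS()
fs.write("/pgdata/base/1", b"tuple bytes")
assert "/pgdata/base/1" not in fs.durable  # lost if the tab dies now
fs.sync()
print(sorted(fs.durable))  # ['/pgdata/base/1']
```

This is why database size is capped by memory, not just by browser storage, and why an OPFS backend lifts the ceiling.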


I really do think that software engineering as we know it is ending. It will take 3-5 years for tools like this to mature. But it will happen, and hard fought skill sets like SQL database design, query design, and maybe even ORMs will become obsolete.

My biggest prediction is that ORMs will not be necessary when LLMs can generate SQL. Low level SQL is a better abstraction for LLMs than ORMs, and as people are removed from the equation, so too will abstractions built to help them craft code.


Giving everyone a smartphone with a great video camera built in didn't obsolete the field of cinematography. I don't think giving everyone tools to help them build software will obsolete software engineering.


Are you sure? Most of the popular videos today do not have what one would call great cinematography, but it doesn't seem to matter. No one cares. Sure, movies still use cinematographers, but movie watching time is getting eaten up with Instagram/TikTok, where cinematography doesn't matter.

I fear applications will suffer the same fate. "Good enough" will take over "well-architected".


I think you are 100% spot-on. Good enough has always been fine for the vast majority of people and the vast majority of use-cases.

Couple this with decreasing costs of storage (and ideally compute), and it doesn't matter if the data model is garbage, people can still get something workable that's better than the awful Excel files they curate now. It will still make errors, but eventually fewer than their spreadsheets.


> it doesn't matter if the data model is garbage

There is no "good enough" for data modeling. There is correct, and there is "this works, but it has latent bugs that will eventually surface." You either have referential integrity, or you don't.


LLMs don't have the context to make good decisions though. You need all of those hard-fought skills to make those decisions. And people are the only ones who can have enough context to actually make decisions.

Not only that, but AI is way more expensive than we think. We're currently in a hype bubble funded by last ditch effort VC money. When that money runs out, and it will eventually, AI is going to get WAY more expensive.


I personally think LLMs make much better decisions than me. I often have a design in mind and then when I prompt Claude it gives me a much better one, and it also takes a lot of edge cases into account that I didn't even think of. Maybe I'm a useless programmer but I'm sure I'm not the only one.


Even when I write working code, I prompt claude, and it adds a bunch of stuff I would never have added. It astonishes me how good it is


Do you use Claude in the web interface/directly prompt it or some tools for that?


> It will take 3-5 years for tools like this to mature. But it will happen, and hard fought skill sets like SQL database design, query design, and maybe even ORMs will become obsolete.

Until a query becomes a bottleneck and no one knows why because no one knows how databases work anymore.


How does that make any of the skills obsolete? If anything, it makes them even more important.

In the 20th century you got away with knowing the syntax and hacking away. Now you really need to have a deep understanding of relational algebra, since the LLM is doing the typing for you.


We already had SQL model and code generators well before LLMs. What does adding in random output do to improve that?


You haven't been using these tools if you think that


Yes, I don't need to use these tools because I already have code generators. Wondering about config options, I use documentation or a search engine. It's cute to put them together in a single UI but it doesn't make these tools inherently more intelligent. It just saves me a few alt-tabs.

An LLM is just taking prefabbed templates and swapping the possibilities in the answer for a statistically relevant solution. My code generator outputs a prefabbed template with a deterministic solution, no statistical guessing required.


What kind of code generators are you talking about? The ones I have are just templates with macros for scaffolding boilerplate, but they are not even remotely comparable to how I use LLMs and definitely not a substitute.


You don't understand how powerful LLMs are. Go use Claude Sonnet 3.5. Paste in 1000 lines of code, and describe a code change you want to make. Iterate on the solution it gives. Do this for a week.


If I did that I would waste so much time. I know what code is in my codebase. Maybe if I was a novice this would be effective to help me learn it. Is the point of the exercise to wow myself for a week that an LLM can spit out solutions?


No, it's that it will code a day's worth of work in a few minutes


I don't get the exercise here though. If I have a 10K LOC file, then why would I iterate over the file to make changes? It's a bad code base. Why wouldn't I have my LLM break down the file into smaller components, so it's not so daunting every time I need to make a change, rather than requiring an LLM to save a day's worth of work?

Let’s say there is a reason to keep this 10K LOC together in a single file. I have never had work in SWE that involved making minor iterations to a file over a week where the work took a whole day to complete. I can see how that could happen but requiring a day to change code seems like there are bigger issues than a 10K LOC file. Unless I’m a complete amateur that thinks they’ve always been not an amateur, and needs an LLM to make a minor change. I just don’t see the point a lot of the times.

What do I do with all this extra time I’m saving? Retire early because I’m getting paid more for doing less right (and I saved all that time)?

What about when the LLM doesn't work right? If I'm a junior engineer who lets a computer write everything for me, how much time do I spend hacking at a prompt to get what I want vs just writing the damn code?


10x LoC is going to require more automation to manage the sheer mass of it, which means more tools/money/layers of abstraction. AI coders need AI testers and AI peer reviewers, and need to iterate over and over to compensate for incorrectness to produce a working feature. That sounds hellishly inefficient (but all it has to be is cheaper, I suppose).


You're speaking theoretically but we're already using it like this and it's not hellish or inefficient, or I wouldn't use it. Granted, it fits certain tasks better than others but when it does it's a massive relief and I can't imagine going back.


Wow!


I have, and I (mostly) agree with GP's point.

The utility of LLMs with code generation varies widely with the problem domain and the amount of experience the developer has.


Maybe for apps with a handful of users.

Why wouldn’t the LLM use an ORM?


Reducing barrier to entry is not a bad thing


> In-browser Postgres

NICE!

> with an AI interface

... uhh


[deleted]


[flagged]


Pot, meet kettle.


[flagged]


Most people aren't bothered by that because you're completely over-interpreting this clause.

In the clause, "you" refers to Supabase. This is a Terms of Use, not a copyright license. ChatGPT text is not copyright protected and the protection of that clause does not extend to the text or to downstream consumers of the text.

The clause is entirely to prevent people asking ChatGPT to generate training data they feed into another model.


It’s asking me to install Flash. Any ideas?



