It also adds a lot of visual clutter between the lines of code. Individual preference, I'm sure, but I'd prefer fewer comments, I think. Or maybe my IDE just needs to collapse comments inside functions automatically.
If this were commonly-used code at the core of a project, I would definitely agree. However, this is the kind of thing that gets stuck into a CI pipeline and not looked at for months or years, and it's written in two languages (GraphViz and very SQLite-specific SQL) that probably most people don't use regularly. I gave it the comments I'll wish it had the next time I come back to it, to figure out how it works.
This is very cool. Reminds me of my days 13 years ago using dot to draw complex planarized graph diagrams before switching to physics/springs models, graph embeddings, and other cool things.
I hope you don't mind? If you don't want your code there, let me know and I'll sadly but obediently take it down and just link to it from someplace on there I can readily find. :)
Likewise, the niche pressure for me came from SQLite being agnostic to a canonical form for SQL `.schema`. I did not need to get into parsing every flavor.
My opinion will probably not be popular, but by making the mirror you are helping to create this monopoly.
I see the solution as creating a small landing site of one or a few pages and linking to the code and releases, be it on self-hosted Gitea, Forgejo, GitLab, GitHub...
It's not unpopular: I know. I mean, I mentioned the monopoly in my comment for a reason.
But for this niche purpose, GitHub is my (last) social media, and GitHub stars are my bookmarks.
So, yeah, I agree, but your suggestion does little for me to not forget it when I'm looking for something SQLite related, and definitely doesn't help me follow project updates (like a proper GitHub mirror would).
Somebody upthread[1] also made a GitHub mirror. I appreciate that different people have different comfort levels with the centralisation of services like GitHub, but luckily it's really easy for people to copy a Git repo to a host they're more comfortable with, like GitHub or SourceHut, or even to make a local clone.
I have not considered outputting to MermaidJS, but (from a quick glance at that documentation) it looks like the same "SQL template" technique should work. Actually doing it is left as an exercise for the reader. :)
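For what it's worth, here's a rough sketch of what that exercise might look like, reshaping the same pragma data into Mermaid's erDiagram syntax. This uses Python's stdlib sqlite3 module rather than the project's pure-SQL approach, and the `}o--||` cardinality is a simplifying assumption, since the pragmas don't tell you whether a key column is nullable or unique:

```python
import sqlite3

def schema_to_mermaid(con: sqlite3.Connection) -> str:
    """Render a SQLite database's foreign keys as a Mermaid erDiagram."""
    lines = ["erDiagram"]
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master"
        " WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
    for table in tables:
        # pragma_foreign_key_list() is the table-valued form of the pragma
        # (available since SQLite 3.16.0)
        fks = con.execute(
            'SELECT "table", "from" FROM pragma_foreign_key_list(?)', (table,))
        for parent, child_col in fks:
            # }o--|| : many child rows point at one parent row (an assumption)
            lines.append(f"    {table} }}o--|| {parent} : {child_col}")
    return "\n".join(lines)
```

Feed the result to the Mermaid live editor (or mermaid-cli) to see the diagram.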
A place I worked at during the dot-com era had a large format printer[0] and the DBAs would occasionally print database schema diagram posters that they would hang on the walls. It was amazingly useful, especially as we staffed up and had a lot of new employees.
@thristian - can you specify a paper size?
[0] Which, once the marketing department found out about it, was always out of ink.
So far as I can tell, GraphViz does not allow you to specify a paper size. However, if you render to SVG, you can open the result in Inkscape and rearrange things fairly easily. That's not quite as convenient as having it done automatically, but GraphViz can struggle with laying out a complex schema even when assuming infinite space - some amount of hand-tweaking is going to be necessary regardless.
1. Takes in a .dot file
2. Presents a simple UI for selecting which tables/relationships you want in the final diagram
3. Lets you highlight a table and add all directly related tables to the selected tables
4. Lets you select two tables and adds the tables for the shortest route between the tables
5. Lets you assign colors to tables/relationships for the final diagram
6. Optionally shows only key fields in the final diagram
7. Generates the necessary graph source and copies it to the clipboard, and loads either of two GraphViz pages to let you paste the source and see the graph.
If that would be of interest to anyone I'd be happy to post it.
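For anyone curious, point 4 above (the shortest route between two tables) needs nothing fancier than a breadth-first search over the foreign-key graph. A sketch in Python, with a made-up edge list (this is a guess at the approach, not the tool's actual code):

```python
from collections import deque

def shortest_route(edges, start, goal):
    """BFS over an undirected foreign-key graph.

    edges: iterable of (table, table) pairs.
    Returns the list of tables on one shortest path, or None if unconnected.
    """
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

All the tables on the returned path would then be added to the selection.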
Azimutt is an online-only, non-free tool. If you don't want to publish your design, or need to analyze someone else's design, it's a no-go for me. (Skimming it, it promises to be able to work entirely in the browser if you want; personally, I wouldn't risk it.)
Check out https://schemaspy.org/, which generates documentation locally, if the original project here doesn't work for you.
Tried it on SQLite's own Fossil repo, which is a kind of SQLite db too.
The resulting diagram shows no relationship arrows.
Turns out Fossil's schema uses the REFERENCES clause with a table name only; I guess this points to the table's primary key by default. Apparently, the diagram generator requires explicit column names.
Huh. The syntax diagram in the documentation[1] suggests this is possible, but the documentation on foreign keys[2] does not mention that syntax, or how it's interpreted.
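A quick experiment suggests how a generator could cope with this: when the REFERENCES clause names only a table, pragma_foreign_key_list reports the "to" column as NULL, so the generator could fall back to the parent table's declared primary key. A Python sketch with invented table names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (uid INTEGER PRIMARY KEY);
-- table-name-only REFERENCES, as in Fossil's schema
CREATE TABLE event (objid INTEGER REFERENCES users);
""")

fk = con.execute(
    'SELECT "table", "from", "to" FROM pragma_foreign_key_list(\'event\')'
).fetchone()
# fk is ('users', 'objid', None): the "to" column comes back NULL

if fk[2] is None:
    # fall back to the parent table's declared primary key
    # (table_info's 6th column is the pk flag)
    pk_cols = [r[1] for r in con.execute("PRAGMA table_info(users)") if r[5]]
    target = pk_cols[0]
else:
    target = fk[2]
```

The diagram query could do the same fallback in SQL with a correlated subquery against pragma_table_info.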
Thanks for the idea! I have a repo that (ab)uses GitLab CI to periodically produce an SQLite database from a bunch of other data sources, and this is a great addition to the README.
Haha, I'm abusing GitLab pipeline minutes to run a periodic Cypress task to test signup+login in production on my pet project :) Scheduled pipelines for the win!
No, in my case only when the Migrations/ folder changes (you can specify that in .gitlab-ci.yml or with some command-line-fu). I'm using EF Core as an ORM; that's why it's also easy to create an empty SQLite DB from the sources.
This seems very clever. I've enjoyed abusing SQL, too. And note that "abuse" is the developer's own term for what he's doing in sqlite-schema-diagram.sql. I'm not trying to be insulting; I actually do like it.
What I like here is the Unix-style "do one thing" approach: a simple ("simple") SQL script that pipes into GraphViz, which does all of the heavy lifting.
I remember writing a script for doing this, more than 10 years ago. I haven't used it much, and not for many years.
The problem is that a fully automatic schema is only readable for very small databases. So small that very soon you can keep the structure in your head. For larger databases, the automatic schema will be awful. Even with just 20 tables, graphviz (dot | neato) will make a mess when some tables are referenced everywhere (think of `created_by` and `updated_by` pointing to `user_id`).
When I need a map of a large database, I usually create a few specialized diagrams with dbeaver, then combine them into a PDF file. Each diagram is a careful selection of tables.
You might try https://schemaspy.org/ - it generates a website with ER diagrams that only go one or two relationships out, but they have clickable table names to get to the next diagram
> ER diagrams that only go one or two relationships out
Actually, SchemaSpy gives you a full diagram of the entire schema as well: once with a truncated column list and once with a full column list per table. The "Relationships" option at the top of the page is where the full diagram is accessed.
The limited one- and two-relations-out views appear when you reach the diagram from the scope of a specific table: from that perspective it shows you the tables one and two relations away from the current one. And, as you say, you can navigate the relationships that way.
What I really like about SchemaSpy (I use it with PostgreSQL) is that I can `COMMENT ON` database objects like tables and columns using Markdown, and SchemaSpy will render the Markdown in its output. Simple Markdown still looks decent when viewed from something like psql, too, so it's a nice way to have documentation carried with the database.
You can do this a little more easily using SQLPage [1], without GraphViz, and you'll get an always up-to-date schema:
    select 'table' as component, 'Foreign keys' as markdown;
    select *, (
        select group_concat(
            printf('[%s.%s](?table=%s)', fk."table", fk."to", fk."table"),
            ', '
        )
        from pragma_foreign_key_list($table) fk
        where col.name = fk."from"
    ) as "Foreign keys"
    from pragma_table_info($table) col;
I don't see how this is in any way comparable - it looks like it'd just produce a table rather than a diagram? You can indeed do that with a single sqlite query as well, if it's not the diagram you want.
Nor do I see how running some other tool that runs a web service qualifies as "easier" than running a query using sqlite itself plus a command-line tool that's trivially scriptable.
I've used this in the past, and it's one of my first ways of approaching a new codebase. It's great at loading a schema and letting me lay out the tables. I'll sometimes make many different subset diagrams. I hacked it a bit for working with MySQL schemas, inferring "foreign keys" by naming convention, as they are often not enforced by the db schema.
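The naming-convention inference described above might look something like this in Python. The `<table>_id` convention and the sample schema are assumptions for illustration, not how the parent commenter's hack actually works:

```python
def infer_foreign_keys(schema):
    """Guess foreign keys from a `<table>_id` naming convention.

    schema: dict mapping table name -> list of column names.
    Returns (child_table, column, parent_table) triples.
    """
    guesses = []
    for table, columns in schema.items():
        for col in columns:
            if col.endswith("_id"):
                parent = col[:-3]  # e.g. orders.customer_id -> customer
                if parent in schema and parent != table:
                    guesses.append((table, col, parent))
    return guesses
```

Real schemas need more care (plural table names, prefixes, self-references), but this covers the common case.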
What output do you get when you run these commands?
$ sqlite3 --version
-- Loading resources from /home/st/.sqliterc
3.45.1 2024-01-30 16:01:20 e876e51a0ed5c5b3126f52e532044363a014bc594cfefa87ffb5b82257ccalt1 (64-bit)
$ sqlite3 :memory: -init /dev/null "select * from pragma_table_list();"
-- Loading resources from /dev/null
main|sqlite_schema|table|5|0|0
temp|sqlite_temp_schema|table|5|0|0
EDIT: The ability to use pragmas as table-valued functions was added in version 3.16.0[1], but the table_list pragma was first added in 3.37.0[2], which is newer than your sqlite3 version.
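If you're scripting around this, Python's stdlib exposes the version of the SQLite library it was compiled against, so you can guard the newer pragma. A small sketch:

```python
import sqlite3

# sqlite3.sqlite_version_info is the SQLite library Python was built
# against, which may differ from the standalone sqlite3 binary on PATH
if sqlite3.sqlite_version_info >= (3, 37, 0):
    con = sqlite3.connect(":memory:")
    tables = con.execute("SELECT * FROM pragma_table_list()").fetchall()
else:
    tables = None  # pragma_table_list() is unavailable before 3.37.0
```

The same check against the command-line tool would be `sqlite3 --version` plus a string compare in the shell.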
Such an awesome find, I'm thinking of sticking this in my CI pipeline now! :D
I use SQLite for a game server, with three different databases for different things. This would be a lifesaver for others working on anything involving the main database, which has a lot of relations thanks to normalization, with a lot of different but related data.
Thank you for this!
Years ago I wrote something similar for PostgreSQL. Unlike SQLite, it supports[1] the much richer "information_schema" database schema that's defined by the SQL standard. As long as you can figure out how it represents tables, columns, and primary and foreign keys, it shouldn't be too difficult to adapt this SQL query to fit. After all, reshaping relational data to extract the information you need is what SQL is for.
My project pdot^1 has a full-ERD mode but it's honestly less useful than the semi-interactive/contextual mode of navigating schema subgraphs in a database of any size. pdot can output mermaid and render other graphs too, like trigger cascades and grant inheritance.
> Lots of database management tools include some kind of schema diagram view, either automatically generated or manually editable so you can get the layout just right. But it's usually part of a much bigger suite of tools, and sometimes I don't want to install a tool, I just want to get a basic overview quickly.
An old colleague of mine created an interactive web app that does this. We use it internally and I find it super useful. Supports SQLite, among others: https://azimutt.app/
I did something related to this: https://github.com/wallymathieu/mejram
The main reason I did it was that I've worked on some old databases that do not have a nice normalised schema; some of the foreign keys were missing. A dot render can give you nicer graphs compared to some of the built-in tools like SQL Server Management Studio.
Cool, I recently made a similar tool for generating diagrams like that for SQLalchemy data models. Can definitely be useful for understanding a complex schema.
This seems really dangerous to use given its AGPL license. IANAL, but besides the inherent infectious nature of the .sql file itself, wouldn't the output .svg (or whatever) files that you produce by running this code _also_ be AGPL licensed?
Google (as in, the corporation, not the search results) says anything derived from AGPL code may be covered by the AGPL license. It's enough of a threat that internal use of AGPL-licensed tools is forbidden at both Google and Apple, and probably others.
I was just looking for something like this. Ended up using DbVisualiser, which is far too heavy and complex for the simple task I wanted it for. This looks much neater.
> A properly normalised database can wind up with a lot of small tables connected by a complex network of foreign key references
I think the last time I properly normalized a database was at university. Avoiding lots of small tables and complex networks would be the main reason.
Your message is ambiguous. One can read it as "don't give up on normalisation", but one can also read it as "don't normalise". Which of the meanings did you intend?
As long as you are doing OLTP using an RDBMS, I believe the proper way to "denormalize" is to just use materialized views and therefore sacrifice a bit of write performance in order to gain read performance. For the OLAP scenario you are ingesting data from the OLTP which is normalized therefore it's materialized views with extra steps.
If you are forced to use a document database you have to denormalise because joining is hard.
So if by scale you mean using a document database, sure, but otherwise, especially on SSDs, RDBMSs usually benefit from normalization, by having less data to read, especially if old features (by today's standards) like join elimination are implemented. Normalization also enables vertical partitioning.
There was an argument to be had about RDBMSs on HDDs because HDDs heavily favour sequential reads rather than random reads. But that was really the consequence of the RDBMS being a leaky abstraction over the hardware.
Document databases have a better scalability story but not because of denormalization. Instead it's usually because of sacrificing ACID guarantees, choosing Availability and Lower Latency over Consistency from the CAP (PACELC) theorem.
CAP has to do with distributed systems, not necessarily databases (unless they are also distributed)
Document databases/KV stores had a reputation for scalability/speed primarily because of the way they were used (key querying), and also because popular ones such as MongoDB can do automatic horizontal sharding, which is not available in most freely available RDBMSs. However, you can also treat an RDBMS as a KV store these days (with JSONB and a simple primary key/index) if you want, and there are distributed RDBMSs such as Cockroach and Yugabyte.