I have used this as well as many of the other lower-level db drivers (which don't check your SQL at compile time) and I can say I much prefer the latter.
My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend. I expect those issues are fixed now, but for some simple apps it's nice to be able to start with SQLite and then swap in Postgres later.
Then at one point I wanted to dockerize an SQLx app, and it all became a hassle: you need Postgres running at compile time, and trying to integrate that with docker compose was a real chore.
Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.
SQLx is a nice idea but too cumbersome in my experience.
I have no experience with abstracting away the backend, but Dockerizing is actually pretty easy now: there's an offline mode[1] where you can have sqlx generate some files that let it compile when there's no DB running.
It's definitely not perfect, but I think both of those issues are better now, if not fully solved.
As for needing a DB at compile time, there's an option to have it produce artefacts on demand that stand in for the DB, although you'll need to connect to a real DB again each time your queries change. Even that is optional, though; you only need any of it if you want your queries checked at compile time.
I know it's annoying (and apparently there is a solution for generating the required files before the build), but in these kinds of situations Go and Rust are great: do a static build on the build system, then copy the binary into a scratch image.
Python and Node, by contrast, often need to be properly linked against the system they'll actually be running on.
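For anyone who hasn't tried the scratch-image pattern, a minimal two-stage Dockerfile sketch looks roughly like this (the crate name `myapp` and the musl target are assumptions for illustration, not from the thread):

```dockerfile
# Build stage: statically link against musl so the binary has no runtime deps.
FROM rust:1-alpine AS build
RUN apk add --no-cache musl-dev
WORKDIR /src
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# Final stage: an empty image containing only the static binary.
FROM scratch
COPY --from=build /src/target/x86_64-unknown-linux-musl/release/myapp /myapp
ENTRYPOINT ["/myapp"]
```

The resulting image is a few megabytes and has no libc to link against at runtime, which is exactly why dynamically linked Python/Node deployments can't use this trick as easily.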
Why would you want to abstract away the underlying database?
Wouldn't it be better to use the target DB from the start, to catch potential issues earlier? It would also avoid creating another layer of indirection, which can complicate the codebase and reduce performance.
Primarily for libraries and deployment environments that aren't fully in your control, which is still pretty common once you get to B2B interactions; SaaS is not something you can easily sell into certain environments. Depending on the assurance you need, you might even need to mock out the database entirely to test that certain classes of database errors are recoverable or fail in a consistent state.
Even in SaaS systems, once you get large enough, with a large enough test suite, you'll want to tier those tests: start with a lowest common denominator (SQLite) that doesn't incur network latency before getting into the serious integration tests.
Thanks, interesting experience - so much depends on getting developer ergonomics right. There is something to be said for checking the SQL at compile-time, though - esp. if trying to ORM to a typesafe language.
How long ago did you try SQLx? Not necessarily promoting SQLx, but `query_as`, which lets one make queries without the live-database macro, has been around for 5 years [1].
For lower-level libraries there is also the more widely downloaded SQLite library, rusqlite [2], whose author is also the maintainer of libsqlite3-sys, which is what the sqlite library wraps.
The most pleasant ORM experience, when you want one, is IMO the SeaQL ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM, I don't try to make databases swappable via the ORM, so I can support database-specific enhancements.
The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code, which uses macros to define the schema types used for type/member checking of the row structs. If the auto-detection does not work, you have to use its patch_file system, which can't be maintained automatically through Cargo alone [4] (I wrote a Makefile scheme for myself). You will most likely have to use the patch_file if you want chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g. Timestamp -> Timestamptz for Postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. It may also not be the best library for you if you want large denormalized tables [6], because of compile times, and because a database that is not normalized [7] is considered an anti-pattern by the project.
If you are just starting out with Rust, I'd recommend checking out SeaQL. Then, if you can benchmark that you need more performance, swap in one of the lower-level libraries for the affected methods/services.
Interesting, as I was researching this recently and was certainly not impressed with the quality of the Readability implementations in various languages. Although Readability.js was clearly the best, its being JavaScript didn't suit my project.
In the end I found the Python trafilatura library to extract the best-quality content with accurate metadata.
You might want to compare your implementation to trafilatura to see if there is room for improvement.
If you're using Go, I maintain Go ports of Readability[0] and Trafilatura[1]. They're actively maintained, and for Trafilatura, the extraction performance is comparable to the Python version.
for the curious: Trafilatura means "extrusion" in Italian.
| This method creates a porous surface that distinguishes pasta trafilata for its extraordinary way of holding the sauce. search maccheroni trafilati vs maccheroni lisci :)
This is very interesting. Are there any examples of interacting with LLMs? If the queries are compiled and loaded into the database ahead of time, the pattern of asking an LLM to generate a query from a natural-language request seems difficult: current LLMs aren't going to know your query language yet, and compiling each query for each prompt would add unnecessary overhead.
This is definitely a problem we want to fix quickly. We're currently planning an MCP tool that can traverse the graph and decide for itself at each step where to go next, as opposed to having to generate actual written-out queries.
I mentioned in another comment that you can provide a grammar with constrained decoding to force the LLM to generate tokens that comply with the grammar. This ensures that only valid syntactic constructs are produced.
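As a toy illustration of constrained decoding: at each step, tokens the grammar does not allow in the current state are filtered out before sampling. The hand-rolled finite-state "grammar" and the fake model ranking below are both hypothetical sketches; real implementations hook into the model's logits, not a token list.

```python
# Toy finite-state "grammar": maps the current state to the tokens
# that may legally follow it. (Hypothetical Cypher-like fragment.)
ALLOWED = {
    "start": ["MATCH"],
    "MATCH": ["(n)"],
    "(n)": ["RETURN"],
    "RETURN": ["n"],
}

def constrained_decode(model_ranking):
    """Greedily pick the model's top-ranked token among those the
    grammar permits, until the grammar has no continuation."""
    state, out = "start", []
    while state in ALLOWED:
        allowed = ALLOWED[state]
        tok = next(t for t in model_ranking(state) if t in allowed)
        out.append(tok)
        state = tok
    return " ".join(out)

# A fake "model" that always ranks tokens in the same order; note the
# invalid "SELECT" is filtered out by the grammar at every step.
result = constrained_decode(lambda s: ["SELECT", "MATCH", "(n)", "RETURN", "n"])
print(result)  # MATCH (n) RETURN n
```

The point is that even a model that prefers an invalid token ("SELECT" here) can only ever emit syntactically valid output.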
As somebody with a wooden house and the urge to learn carpentry and spend less time programming, I think this is brilliant: it combines minimal design with a hacker and DIY ethos. Kudos, bookmarked; I hope I can find the time to tinker with the designs.
I recommend making time to build at least one piece of furniture. I did not use hyperwood principles, but I built my own computer desk and workbench to my specifications, and I cannot imagine ever buying a premade work surface again. It is rewarding, helps you think about a project from both a production and a use-case perspective, and unlike my programming/tech troubleshooting efforts, the results are very tangible, something I can touch and see every day, which lends a lasting sense of accomplishment.
Good article, but a minor nitpick: port zero is not strictly an invalid port, as it's often used to let the OS pick an available port for you.
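You can see this with a few lines of stdlib Python: binding to port 0 asks the OS to assign any free ephemeral port.

```python
import socket

# Bind to port 0; the OS substitutes an available ephemeral port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]  # the port the OS actually chose
print(port)
sock.close()
```

The printed port will be a real, nonzero port number, typically from the ephemeral range.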
The problem is that the requirements can be vastly different. A collaborative editor is very different from, say, syncing encrypted blobs. Perhaps there is a one-size-fits-all solution, but I doubt it.
I've been working on sync for the latter use case for a while and CRDTs would definitely be overkill.
Really hope this project succeeds. As somebody heavily invested in a Flutter app with a Rust backend, Dioxus could be great for us, so we will continue to follow it closely.
Flutter's hot reloading is awesome, but some days working with Flutter feels like death by a thousand cuts (so many unresolved bugs), so a cross-platform framework that doesn't require Flutter and Dart would be great.
Same! I inherited a Flutter app from a previous team, and while it's OK, it feels kind of like developing on a Galápagos island — by which I mean, we'll never use Dart in any other context, and Flutter's web story doesn't really work for most kinds of apps.
I think Dioxus actually fits into a fairly unique set of slots. There's Tauri, with which it shares a lot of stuff, but Tauri's web story is mostly "build it yourself". There's Leptos, which arguably has a better web app story, but lacks most of the rest.
It is also heartening to see how these projects really do share a lot of the building blocks and don't seem to be overly competitive.
The likelihood that someone would be able to do this in 50 years' time, without your company still being around? Close to zero.
Passwords, even ssh keys and passkeys, are little pieces of plain text. If you think needing a specialised sdk or cli to retrieve plain text is a good software architecture, I think we see the world quite differently.
That's the exact reason it's open source, so it would still be possible to access your data in such an event.
We clearly see things differently, but I think using computers to make our lives easier is worthwhile, and storing and managing our secrets securely, effectively and conveniently is better handled by software than by some ad-hoc setup.
Nitpick: passkeys are not text, they are binary blobs.
Yes, a screenshot would be good to get a visual on this.
I have built a simple shell using Rustyline and Clap, and this could be something I'd be interested in, but it's hard to say without a visual idea; asciinema would be perfect!
asciinema is a killer idea! But I am still chasing my perfect shell. If you end up implementing an insane-looking shell with this, contributions are always open.
I don't have the time to look deeper into it right now, as I am quite happy with my current setup, but if you do want to make an asciinema recording, here is a little tool I wrote to help with testing and recording CLIs.
I suspect that it's perfectly valid to name a column "from" or any other SQL keyword, so allowing a trailing comma would make the grammar ambiguous.
Personally I agree with the sentiment: I find it annoying to have to juggle the commas on the last column name, but there is likely a valid reason, to do with keeping the parsing easier to reason about.
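For what it's worth, SQLite (via Python's stdlib `sqlite3`) rejects a trailing comma in the select list today; the parser expects another expression after the comma, and an unquoted FROM cannot start one. A quick check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")

def trailing_comma_accepted():
    """Return True iff SQLite accepts a trailing comma before FROM."""
    try:
        conn.execute("SELECT a, b, FROM t")
        return True
    except sqlite3.OperationalError:
        return False

print(trailing_comma_accepted())  # False
```

Other engines may word the error differently, but I'd expect the same rejection from any mainstream SQL parser.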
Most SQL implementations do seem to allow you to name a column or a table with a keyword, but to refer to it you may need to put it in quotes or backticks.
I'm not sure if this is a solved problem at the level of the ANSI SQL spec or if every vendor does their own thing, but there's definitely plenty of precedent that ambiguous grammar is allowed and can be resolved.
I know that MySQL uses backticks, Postgres and SQLite use double quotes, and MS SQL Server uses square brackets when using keywords as column or table names.
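The double-quote style is easy to demonstrate with SQLite through Python's stdlib `sqlite3` module: quoting turns a keyword into an ordinary identifier.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Double-quoted, the keywords "from" and "select" become plain column names.
conn.execute('CREATE TABLE t ("from" TEXT, "select" INTEGER)')
conn.execute('INSERT INTO t ("from", "select") VALUES (?, ?)', ("a", 1))
row = conn.execute('SELECT "from", "select" FROM t').fetchone()
print(row)  # ('a', 1)
```

The same statements with the quotes removed fail to parse, which is exactly the ambiguity trailing commas would make worse.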
[1]: https://docs.rs/sqlite/latest/sqlite/
[2]: https://docs.rs/postgres/latest/postgres/