leononame's comments | Hacker News

A Boatmurdered read is one of the few times I actually burst out laughing in front of my screen - multiple times.

Every time I read it I want to get back into dwarf fortress, but this damn game introduces so much new stuff over time that I can't shake off the feeling it's just too much for me. I _loved_ that game until about ten years back but haven't played in ages.


Congratulations! The local API client space urgently needs some Open Source tools that don't suck. I've ditched Postman and Insomnia for Bruno, but since this supports gRPC I really want to give it a try.

Python as a scripting language sounds nice, I don't particularly enjoy working with JS and I think it'd be a nice addition for people like me who prefer something else. Not having to install an electron app is also huge for me.

For me, installation failed with "Package xkbcommon-x11 was not found in the pkg-config search path.". I needed to install libxkbcommon-x11-devel on fedora.

It definitely is super snappy and lightweight.

Is there an option to use a dedicated directory for a workspace so you can share it via git? That'd be huge for me; being able to commit the request YAML files to git is super nice.


Hi, thanks for your feedback. All the configs are stored as files, and of course you can share them anywhere that works for you.

On Linux you can find the files in ~/.config/chapar

And each workspace is a directory there.


What I mean is that it'd be nice if you could store them somewhere else explicitly. Create a new workspace and store its contents in your code repository so that you can commit it and share it with your team directly through git.


Right, sounds good. I'll put it on my todo list. Thanks for your feedback.


I'd be super interested in your writeup, I'd like to build a homemade Tonie for my kids as well. Any way you could contact me when you're finished or do you have a blog I could regularly check out?


It definitely does not show that, it's just a number and you're interpreting it. But other interpretations are just as valid, e.g.

- Germans founding less because they're more risk-averse
- Having fewer AI startups because other types of companies are founded more often
- Less desire to found because of a strong job market that gives you excellent jobs with decision power/money/whatever it is you seek in founding yourself
- People having more families and not wanting to quit their current jobs

I could probably come up with more. Is it a hassle to found a new company in Germany? Sure. But I've found that for people who do want to start something, that's not a showstopper. Personally, I would attribute the low startup numbers to the cultural risk aversion of Germans.


> I would attribute low startup numbers to cultural risk aversion of Germans

Which is literally one aspect of how easy or hard it is to create a startup and get it somewhere. Can't fire people quickly enough when the market is changing daily? Well, it will be a nightmare for the owners to become or remain relevant enough to stay afloat.

I am not bashing the EU, just to be clear. Having more job stability has tons of long-term positive consequences for the population's mental state, plus 2nd- and 3rd-order positive effects (the usage of various 'mental' medications in the US seems ridiculously high compared to the 3 European countries I've lived in, for example), but a good environment for agile fail-fast startups it ain't.


In Germany specifically, but I'd guess it applies to most other countries as well, these strict rules apply once you have 11 employees. A lot of startups work with contractors anyways. And you can always fire people if your company needs it to survive.

As far as I'm aware, California has some of the strongest labor protection laws in the US, yet lots of startups. There's clearly more factors involved than labor protection laws, e.g. VC money, culture, etc.

Sure, stronger labor protection might be a result of cultural risk aversion. But I'd be very hesitant to call it "literally one aspect of how easy or hard it is to create a startup and get it somewhere"


I didn't say it's the only aspect, just one of the key ones. Bureaucracy must be another critical aspect, one reason why e.g. France or Italy are literally centuries behind, with very little to no hope of ever catching up given their populations' priorities.

I agree with what you say; SV culture and the sheer amount of startup-compatible talent play a massive role too. It's a momentum that would take half a century of dedicated effort to even catch up with.


The thesis of the article is broadly correct but the reasoning it uses to get there is questionable.

Every country has its quirks. The US has a very high cost of living and consequent high salaries. No wonder it’s important to be able to fire workers quickly. The high wages are also more than balanced by the amount of capital available.

In Germany it’s harder to fire workers but they also get paid a fraction of their counterparts in the US, so there’s not such a rush.

I wouldn’t start a startup in Germany either, because of the insane dysfunction of German bureaucracy. The money is not even a factor when dealing with the system is already such a pain.


For me, it's the opposite. The type system is decent, but its generics can get extremely out of hand, it's not sound, and I run into weird type errors with libraries more often than not.

Having no integer types other than BigInt (ok, this isn't something TypeScript could just implement) is another big one for me.

That you can just do `as unknown as T` is an endless source of pain and sometimes it doesn't help that JS just does whatever it wants with type coercion. I've seen React libraries with number inputs that actually returned string values in their callbacks, but you wouldn't notice this until you tried doing addition and ended up with concatenation. Have fun finding out where your number turned into a string.
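A minimal sketch of that failure mode (the names are made up; the cast stands in for the library's incorrect typings):

  // Typed as taking a number, like the callback's declared signature.
  function handleAmount(amount: number) {
    const withFee = amount + 10;
    console.log(withFee); // expecting 25
  }

  // Simulating the buggy library: the typings promise a number,
  // but the runtime value is the raw input string.
  const fromLibrary = "15" as unknown as number;
  handleAmount(fromLibrary); // logs "1510": string concatenation, not addition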

The number of times I've read `... does not exist on type undefined` reaches trauma-inducing levels.

TypeScript is as good as it can get for being a superset of JS. But it's not a language I have fun writing stuff in (and I even fear it on the backend). It has its place, and it's definitely a decent language, but I would choose something else for frontend if I could, and wouldn't use it on the backend at all. I somehow don't trust it. I know people write amazing software with it, but YMMV I guess.


> it's not sound

This comes up in every one of these threads and I always wonder: do you actually experience problems with soundness in your regular coding? (Aside from `as unknown`, which as others have noted just means you need a linter to stop the bad practices.)

It feels like such a theoretical issue to me, and I've yet to see anyone cite an example of where it came up in production code. The complaint comes off as being more a general sense of ickiness than a practical concern.


Soundness is a constraint more than a feature. Having it limits what is possible in the language, forcing you into a smaller set of available solutions.

So for me it's not about running into unsound types regularly but how much complexity is possible and needs to be carved away to get at a working idea because of it. In TS I spend relatively a lot of time thinking about and tweaking minute mechanics of the types as I code. Where by comparison in ocaml (or more relevantly rescript) I just declare some straightforward type constructors up front and everything else is inferred as I go. When I get the logic complete I can go back and formalize the signatures.

Because of the unsoundness (I think? I'm not a type systems expert) TS's inference isn't as good, and you lose this practical distinction between type and logic. And more subtly and subjectively, you lose the temporal distinction. Nothing comes first: you have to work it all out at once, solving the type problems as you construct the logic, holding both models as you work through the details.


Yes, all the time. It's more an issue of having no runtime type safety, which makes it a poor choice for many backend projects. There are workarounds, but it's silly when there are many better alternatives.


> It's more of an issue of no runtime type safety...

This is a completely orthogonal question to soundness.


In this case, not really. TypeScript can't be sound because there is zero runtime type safety in JS. That you are able to do `as unknown as T` makes TypeScript unsound, but it's also an escape hatch often needed to interact with JavaScript's dynamic typing.


> often needed

It's never needed, it's just often convenient for something quick and dirty. You can always write a guard to accomplish the same thing—roll your own runtime safety. If you want to avoid doing it manually there's Zod. It's not that much different than writing a binding for a C library in another language in that you're manually enforcing the constraints of your app at the boundaries.
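For what it's worth, a minimal sketch of such a guard (the `User` shape here is hypothetical); Zod's `z.object({...}).parse()` automates the same idea:

  interface User {
    id: number;
    name: string;
  }

  // A hand-rolled runtime guard: verifies the shape instead of asserting it.
  function isUser(value: unknown): value is User {
    if (typeof value !== "object" || value === null) return false;
    const v = value as Record<string, unknown>;
    return typeof v.id === "number" && typeof v.name === "string";
  }

  const data: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
  if (isUser(data)) {
    // `data` is narrowed to `User` here, and the shape was actually checked.
    console.log(data.name.toUpperCase());
  }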


> an escape hatch often needed

The only time you need that is in tests.

Otherwise you're better off using guards than blindly casting.


The code base I work on has more test code than production code.


You're blaming TypeScript for self-inflicted wounds.

Don't blame the type system you banished (with `as unknown as T`) for not catching you, or some React library for having bugs (e.g. an incorrect interface), or your own code for not defining types and then Pikachu-facing when things are `undefined`. These traumas are fixed by using TypeScript, not ignoring it.


These issues don't exist in languages that aren't built on a marsh.

More specifically though, I feel like the way JavaScript libraries with types work is often painful to use, and that's why people reach for TS's escape hatches, whereas the same thing doesn't happen in languages where everything is built on types from the get-go.

The same friction is true for example of using C libraries from other languages.


The rest of the world is an equally muddy marsh. C++: static_cast; C: void pointers and unions; Java/C#: casting back from (object) or IAbstractWidget.

If anything, typescript has the most expressive, accurate and powerful type system of the lot, letting you be a lot more precise in your generics and then get the exact type back in your callback, rather than some abstract IWidget that needs to be cast back to the more concrete class. The structural typing also makes it much easier to be lax and let the type engine deduce correctness, rather than manually casting things around.


C is famously unsafe. But in Java/C#, you do have runtime type safety. You can't just cast something to something else and call it a day. In the worst case, you'll get an exception telling you that this does not work. In TypeScript, the "cast" succeeds, but then 200 lines later, you get some weird runtime error like "does not exist on type undefined" which doesn't tell you at all what the source of the error is.

In TypeScript, a variable can have a different runtime type from its declared type entirely, that's just not true for many other languages
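A minimal sketch of that divergence (types and values are illustrative):

  interface User {
    profile: { name: string };
  }

  const user = JSON.parse('{"id": 1}') as User; // compiles, no runtime check
  // ...200 lines later, nowhere near the cast:
  console.log(user.profile.name);
  // TypeError: Cannot read properties of undefined (reading 'name')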


I'm not sure about the CLR, but the JVM has no type safety whatsoever. Java the language does, but all of that goes away once you are in the JVM bytecode.


> the JVM has no type safety whatsoever.

This is just partially true (or completely untrue in the mathematical sense since your statement is "_no_ type safety _whatsoever_" :P ). The whole purpose of the bytecode verifier is to ensure that the bytecode accesses objects according to their known type and this is enforced at classloading time. I think you meant type erasure which is related - generic types do not exist on the bytecode level, their type parameters are treated as Object. This does not violate the bytecode verifier's static guarantees of memory safety (since those Objects will be cast and the cast will be verified at runtime), but indeed, it is not a 100% mapping of the Java types - nevertheless, it is still a mapping and it is typed, just on a reduced subset.


"don't exist in languages that aren't built on a marsh"

Sure. Last time I checked, JavaScript is the language that actually powers the web. If you can get a language that isn't built on a marsh along with all the libraries to run the web, I'll switch in a second.

In other words, the criticism is simply irrelevant. If it works, it works. We don't talk about technologies that don't exist.


We do in fact talk about technologies that don't exist. Creating new technologies would be rather difficult otherwise.


As long as they don't have to interface with JS APIs (e.g. for the DOM), this is true for libraries in other languages built for WASM.


Not run on the web, "to run the web". Maybe someday WASM will be complete enough to actually run the web, but JavaScript is what we have right now and it's done a pretty okay job so far.


> These issues don't exist in languages that aren't built on a marsh.

Unsafe casts exist in almost any GCed strongly typed language. You can do unsafe things even in Rust, if you want to. The author of this code deliberately made a choice to circumvent the language's limitations and forgo its guarantees. We have been doing that since undefined behaviour in C; how is it TypeScript's fault?


> Unsafe casts exist in almost any GCed strongly typed language.

Well, and not GCed languages too. There are even weakly typed associative arrays.

  std::map<std::string, std::any>


Totally! I really wonder what these libraries people are complaining about that have such bad type definitions. In my experience TS definitions on the average NPM package are generally fairly decent.


Well, the reality of the situation still is that there are libraries with incorrect or low quality typings that blow up in your face. Me using TypeScript will not make that library better, but this problem is still the daily reality of using TypeScript. It's not the fault of TS, but still a pain you encounter when working with it.

I haven't worked with a language where you can statically cast invalid types that easily since C, a language not exactly famously known for its safety.

There's a reason `as unknown as T` exists, and it's JavaScript's highly dynamic nature and absence of runtime types (ignoring classes and primitives). It's an escape hatch that is needed sometimes. Sure, within your own codebase everything will be fine if you forbid its use, but every library call is an API boundary that you potentially have to check types for. That's just not great DX


> I haven't worked with a language where you can statically cast invalid types that easily since C, a language not exactly famously known for its safety.

But it’s not the same situation at all, is it? If you make an invalid cast in C, your program will crash or behave in bizarre ways.

If you make an invalid cast in TS, that doesn’t affect the JS code at all. The results will be consistent and perfectly well-defined. (It probably won’t do what you wanted, of course, but you can’t have everything.)

TS is much more like Java than it is like C (but with a much nicer type system than either).


Meh, in Java (afaik) you'll get exceptions when you cast incorrectly. In JS and C, it just gets allowed and you get some runtime error later on and have to make your way back to find your incorrect cast.


I tend to agree with you, but for problems like this one:

> That you can just do `as unknown as T` is an endless source of pain

You should be using strict type-checking/linting rules somewhere in your pipeline to make these illegal (or at least carefully scrutinised and documented).
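For example, a sketch of an eslint.config.mjs with typescript-eslint (the rule names are real typescript-eslint rules; the surrounding config shape will depend on your setup):

  import tseslint from "typescript-eslint";

  export default tseslint.config(...tseslint.configs.recommended, {
    rules: {
      // Forbid `x as T` / `<T>x` assertions outright.
      "@typescript-eslint/consistent-type-assertions": [
        "error",
        { assertionStyle: "never" },
      ],
      // Forbid `any` and `@ts-ignore` comments.
      "@typescript-eslint/no-explicit-any": "error",
      "@typescript-eslint/ban-ts-comment": "error",
    },
  });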


Sure, I agree in general, but I've found that:

1. If someone is willing to do `as unknown as T`, they're probably also just as willing to do `// @ts-ignore`.

2. It's not only your own code, it's the libraries you use as well. Typings can often be slightly incorrect and you have to work around that occasionally.


Popular libraries tend to get type hygiene issues ironed out rather quickly for 90% of the API surface area. For this reason, I find that lib selection from npm is much easier these days. The heuristic is simple:

1) Has types?
2) Has a (large) download count?
3) Has docs?

After that it's generally smooth sailing. Of course this doesn't at all apply to the application codebase itself, but one of the parent/sibling remarks emphasized "madness" and I seek to smooth that over.

Noisy? Yes. Madness? Nah.


Don't you care about whether it's an abandoned project, how many dependencies it has or its license?

Your simple heuristic seems like a bad approach to me.


Then fail builds on @ts-ignore.

"But the bad dev will just open up your build config and remove that restriction."

Okay fine. At some point you gotta establish a floor on the insolence of your teammates.


For #1, this is literally what PRs are for. Someone might be willing to do it, but it should be stopped before merge. If it isn’t, you have bigger problems to solve than type coercion.

For #2, if it’s open source you’re welcome to change the source or its typings.


You can also turn off all warnings in C and C++ (and C#?). That's not a flaw in the language; it's a flaw in the code bases and programmers that turn them off.


Those rules should be enabled by default.


ESLint rules that require type information (not just stripping types) are prohibitively expensive for larger code bases.

As far as I know, there isn't any kind of tsconfig rule to disallow this (please correct me if I'm missing something here!). So unless you're using tools I don't know about, this is kind of a mandatory last bastion of "any".

You can disallow `any` and enable the strictest possible null/undefined checks (including noUncheckedIndexedAccess). And there's also TypeScript's built-in assertion check, which normally rejects obviously erroneous type assertions.

But "as unknown as MyType" is not preventable by means of tsc, as far as I know. Unless there's an option I don't know do disable this kind of assertion (or even all assertions).


How large is too large and what counts as prohibitive? We're using lints with types on over a million lines of TypeScript and the lints are instant inside of the editors. They take a good 10 minutes to run in CI across the whole project, but that's shorter than the tests which are running in parallel.


Good point, I was talking about similarly sized code bases, yes.

Because of the hefty CI runtime increase, I myself was opposed to adding it. We have lots of parallel branches that people work on and many code reviews every day, where the CI just needs to run from a clean slate, so to speak.

But in my case, most of the current long CI run penalty in the frontend comes from tsc, not ESLint.

I might look into it again.

The project already has all kinds of optimizations (caching to speed up full CI runs with common isolated parts of a monorepo).

And for TS, project references + incremental builds are used for development; tsc in a monorepo would be miserable without them.


I think it depends on your code and dependencies. At work, the time between making a change in our codebase (which is much smaller than a million LOC) to having ESLint update itself in the IDE can take 5+ seconds, depending on what you changed. But we also use some pretty over-engineered, generic-heavy dependencies across our entire codebase.


lint-staged on pre-commit and full lint in CI solves this problem very well.


> Having no integer types (ok, this isn't something typescript could just implement) other than BigInt is another big one for me.

Is that a performance thing? I believe JavaScript VMs can specialize/optimize Numbers and BigInts that only contain valid 32-bit int values.


None of these are performance concerns. Modern JS engines are plenty fast for most of my use cases.

It irks me that I can't trust it to be an integer within a given range. Especially with Number, I often have the sensation that the type system just doesn't have my back. Sure, I can be careful and make sure it's always an integer, and I've got 53 bits of integer precision, which is plenty. But I've shot myself in the foot too many times, and I just don't trust it to be an integer even if I know it is.

As for BigInt, I default to it by now and I've not found my performance noticeably worse. But it irks me that I can get a number that's out of range of an actual int32 or int64, especially when working with databases. Will I get to that point? Probably not, but it's a potential error waiting to be overlooked that could be so easily solved if JS had int32/int64 data types.


Do you have any more specific examples where not having an int type specifically has caused a problem?

I can't really remember having a problem with it myself, but maybe your use cases are different.


Sound currency arithmetic is a lot harder when you have to constantly watch out for the accidental introduction of a fractional part that the type system can't warn you about, and that can never be safe with IEEE 754 floats. (This doesn't just bite in and near finance: go use floating-point math to compute sales tax, you'll find out soon enough what I mean.)
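A minimal sketch of that foot-gun (the classic half-cent case):

  // 1.005 has no exact IEEE 754 representation; it is stored as
  // 1.00499999999999989..., so rounding to cents goes the wrong way.
  console.log(Math.round(1.005 * 100)); // 100, not the expected 101
  console.log((1.005).toPrecision(20)); // "1.0049999999999998934"

  // Keeping money in integer minor units (cents) from the start means
  // the fractional part never exists to begin with.
  const cents = 101n; // $1.01, exact by construction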

Bigints solve that problem, but can't be natively represented by JSON, so there tends to be a lot of resistance to their use.


I’m converting a billing system from js to ts right now.

Looking for a therapist.


Not really. In my parent comment I tried to make clear that it's not a limitation for me in real-world scenarios I encounter, but it still feels like a potential class of problems that could be so easily solved.

When I've really needed dedicated integer types of a specific size, e.g. for encoding/decoding binary data, so far I've been successful using something like Uint8Array.


> Especially with Number, I often have the sensation that the type system just doesn't have my back.

That's sounding dangerously close to dependent types, which are awesome but barely exist in any programming languages, let alone mainstream general purpose programming languages.

You could do this with a branded type. The downside will be ergonomics, since you can't safely use e.g. the normal arithmetic operators on these restricted integer types.
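A minimal sketch of the branded-type approach (all names hypothetical), including where the ergonomics hurt:

  // The brand exists only at compile time; the runtime value is a plain number.
  type Int32 = number & { readonly __brand: "Int32" };

  function toInt32(n: number): Int32 {
    if (!Number.isInteger(n) || n < -(2 ** 31) || n > 2 ** 31 - 1) {
      throw new RangeError(`${n} is not a valid int32`);
    }
    return n as Int32;
  }

  // The ergonomic cost: `a + b` is inferred as plain `number`, so every
  // arithmetic result has to be re-validated and re-branded.
  function addInt32(a: Int32, b: Int32): Int32 {
    return toInt32(a + b); // throws on overflow instead of wrapping
  }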


> As for BigInt, I default to it by now and I've not found my performance noticeably worse. But it irks me that I can get a number that's out of range of an actual int32 or int64, especially when working with databases. Will I get to that point? Probably not, but it's a potential error waiting to be overlooked that could be so easily solved if JS had int32/int64 data types.

If your numbers can get out of the range of 32 or 64 bits then representing them as int32 or int64 will not solve your problems, it will just give you other problems instead ;)

If you want integers in JS/TS I think using bigint is a great option. The performance cost is completely negligible, the literal syntax is concise, and plenty of other languages (Python, etc.) have gotten away with using arbitrary precision bignums for their integers without any trouble. One could even do `type Int = bigint` to make it clear in code that the "big" part is not why the type is used.


> That you can just do `as unknown as T`

I mean, yeah... if you're using escape hatches all the time I can see why you are having a bad time.


How is the Dart backend story? I've not heard much about people using Dart on the server. Are there mature frameworks out there?


I love it personally and with gRPC I don’t really have much of a need for any kind of backend framework at all. I just follow the guidance at aip.dev and raw dog it using language primitives for the most part.


But they only ask you for two digits of the PIN on each try, and they'll probably lock your account after three incorrect attempts. Not saying 6 digits is secure, but it's better than everyone using "password", provided they have a strict policy on incorrect attempts.

And don't they have 2FA for executing transactions?

I'm pretty sure banks are some of the most targeted IT systems. I don't trust them blindly, but when it comes to online security, I trust that they've built a system that's reasonably well secured, and in other cases I'd get my money back, similar to credit cards.


Am I blind? I don't seem to find the email address at all on that page


The only things I can find are office emails, which look more like a trash bin than a mailbox anyone would respond from. Also not where I'd look for a contact email.

They seem to only want you to connect via social media (which is a poor choice for primary contact IMO).


I have a feeling GDPR is often used as an excuse in these cases, while there is little evidence that it's actually slowing anything down. Especially for government: they have the data already, and the GDPR applies to the data itself, not to whether you put a fancy frontend on it or not.

Government departments tend to be slow to adopt - again, based on feeling more than hard evidence - especially in Germany. They'll just try to find some scapegoat for why they're failing, and GDPR is perfect. I've seen the same in businesses as well, where I've been told numerous times that they're behind schedule because of GDPR, or that they can't do something because of GDPR, and it's just not true most of the time. People just like to hide their incompetence.

I don't know anything about EVB-IT, so I'll shut up about that part


I know about EVB-IT and GDPR, and it is actually slowing things down a lot. While each of these things can be managed on its own, the combination is a productivity killer. You will understand this if you've ever worked as an IT project manager in the German government. There is a representative for everything, and each one is just doing his part and blocking everything outside his field of work. It's not something like GDPR alone, but the combination and the handling of all these aspects. And legal aspects always get the highest priority.


Thanks for the interesting article. Lots of things seem to happen in SQLite land at the moment and I appreciate that the SQLite team documents their quirks so openly, it gives great confidence.

Since I don't know where else to ask, maybe this is a good place: How do async wrappers around SQLite (e.g. for Node or Python) work? SQLite only uses synchronous I/O if I'm not mistaken. Is it just a pretend async function with only synchronous code?

And, as a follow-up: If I have a server with say 100 incoming connections that will all read from the database, I've got 100 readers. No problem in WAL mode. However, I still could get congested by file I/O, right? Because every time a reader is waiting for data from disk, I can't execute the application code of another connection in a different thread since execution is blocked on my current thread. Is there any benefit to having a thread pool with a limit of more than $NUM_CPU readers?

And one more: Would you recommend actually pooling connections or just opening/closing the database for each request as needed? Could keeping a file handle open prevent SQLite from checkpointing under certain conditions?


You get concurrency in SQLite by using multiple connections - and typically a dedicated thread per connection.

When using async wrappers, a good solution is connection pooling like you mentioned - exactly the same concept as used by client->server database drivers. So you can have 5 or 10 read connections serving those 100 connections, with a statement/transaction queue to manage spikes in load. It's probably not worth having more connections than CPUs, but it depends a little on whether your queries are limited by I/O or CPU, and whether you have other delays in your transactions (each transaction requires exclusive use of one connection while it's running).

SQLite maintains an in-memory cache of recently-accessed pages of data. However, this gets cleared on all other connections whenever you write to the database, so is not that efficient when you have high write loads. But the OS filesystem cache will still make a massive difference here - in many cases your connections will just read from the filesystem cache, which is much faster than the underlying storage.

Open connections don't block checkpointing in SQLite. The main case I'm aware of that does block it, is always having one or more active transactions. I believe that's quite rare in practice unless you have really high and continuous load, but if you do then the WAL2 branch may be for you.

I feel connection pooling is much more rare in SQLite libraries than it should be. I'm maintaining one implementation (sqlite_async for Dart), but feel like this should be the standard for all languages with async/await support.
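For what it's worth, a minimal sketch of that pattern in TypeScript, assuming better-sqlite3 (a synchronous driver) and hypothetical class names. A real async wrapper would also move the synchronous calls onto worker threads; this queue only serializes access per connection:

  import Database from "better-sqlite3";

  type DB = InstanceType<typeof Database>;

  class ConnectionQueue {
    private tail: Promise<unknown> = Promise.resolve();
    constructor(readonly db: DB) {}

    // Run jobs one at a time on this connection, in arrival order.
    run<T>(job: (db: DB) => T): Promise<T> {
      const next = this.tail.then(() => job(this.db));
      this.tail = next.catch(() => undefined); // keep the chain alive on errors
      return next;
    }
  }

  class Pool {
    private writer: ConnectionQueue;
    private readers: ConnectionQueue[];
    private rr = 0; // round-robin cursor over the readers

    constructor(path: string, nReaders: number) {
      const w = new Database(path);
      w.pragma("journal_mode = WAL");
      this.writer = new ConnectionQueue(w);
      this.readers = Array.from(
        { length: nReaders },
        () => new ConnectionQueue(new Database(path, { readonly: true })),
      );
    }

    read<T>(job: (db: DB) => T): Promise<T> {
      return this.readers[this.rr++ % this.readers.length].run(job);
    }

    write<T>(job: (db: DB) => T): Promise<T> {
      return this.writer.run(job);
    }
  }

Usage would then look like `await pool.read((db) => db.prepare("SELECT 1").get())`, with every write funneled through the single writer connection.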


> I feel connection pooling is much more rare in SQLite libraries than it should be. I'm maintaining one implementation (sqlite_async for Dart), but feel like this should be the standard for all languages with async/await support.

I completely agree. But I simply have no reference / good-practice implementations to take inspiration from. I'd be more than willing to build an Elixir FFI bridge to a Rust library (and write both in the process) that actually makes full use of parallelism to fully utilize SQLite's strengths, but again, I've got nowhere to steal from. :) Or I'm not aware of where to look.


The libsql fork has good Rust <-> Node async bindings; you could look at them for inspiration. It's maintained by Turso.


Thank you. Is this the one you are talking about?

https://github.com/tursodatabase/libsql


Yes, also the node bindings https://github.com/tursodatabase/libsql-js


Thanks.

All good and valid questions.

1. I work mostly in Rust, so I'll answer in those terms regarding async. This library [0] uses queues to manage the workload. I run a modified version [1] which creates 1 writer and n reader connections to a WAL-backed SQLite database and dispatches async transactions against them. The n readers pull work from a shared common queue.

2. Yes, there is not much you can do about file IO, but SQLite is still a full database engine with caching. You could use this benchmarking tool to help understand where your limits would be (you can do a run against a ramdisk, then against your real storage).

3. As per #1, I keep connections open and distribute transactions across them myself. Checkpointing will only be a problem under considerable sustained write load, but you should be able to simulate your load and observe the behavior. The WAL2 branch of SQLite is intended to prevent sustained-load problems.

[0]: https://github.com/programatik29/tokio-rusqlite
[1]: https://github.com/seddonm1/s3ite/blob/0.5.0/src/database.rs


Thanks for your answer.

For 1, what is a good n? More than NUM_CPU probably does not make sense, right? But would I want to keep it lower?

Also, you dispatch transactions in your queue? You define your whole workload upfront, send it to the queue and wait for it to finish?


I went through the same mental process as you and also use num_cpus [0], but this is based only on intuition that is likely wrong. More benchmarking is needed, as my benchmarks show that more parallelism only helps up to a point.

You can see how the transactions work in this example [1]. I have connection `.write()` or `.read()` methods which decide which queue to use. I am in the process [2] of trying to do a PR against rusqlite to set the default transaction behavior as a result of this benchmarking, so hopefully `write()` will default to IMMEDIATE and `read()` will remain DEFERRED.

[0] https://docs.rs/num_cpus/latest/num_cpus/
[1] https://github.com/seddonm1/s3ite/blob/0.5.0/src/s3.rs#L147
[2] https://github.com/rusqlite/rusqlite/pull/1532


Valuable info and links, instant bookmarks, thank you!

If you don't mind me asking, why did you go with rusqlite + a tokio wrapper for it and not go with sqlx?


Whilst I love the idea of SQLX's compile-time checked queries, in my experience it is not always practical to need a database connection to compile the code. If it works for you then that's great, but we hit a few tricky edge cases when dealing with migrations etc.

Also, and more fundamentally, your application state is the most valuable thing you have. Do whatever makes you most comfortable to ensure that state (and its transitions) is as well understood as possible. rusqlite is that for me.


Thank you, good perspective.

Weren't the compile-time connections to the DB optional, btw? They could be turned off, I think (last I checked, which was admittedly last year).

My question was more about the fact that sqlx is integrated with tokio out of the box and does not need an extra crate like rusqlite does. But I am guessing you don't mind that.


SQLX has an offline mode where it saves the metadata of the SQL database structure, but then you run the risk of that being out of sync with the database?

Yeah, I just drop this one file [0] into my Tokio projects and I have SQLite with a single-writer/multi-reader pool done in a few seconds.

[0]: https://github.com/seddonm1/s3ite/blob/0.5.0/src/database.rs


Thanks again!

I'll be resuming my effort to build an Elixir <-> Rust SQLite bridge in the next several months. Hope you won't mind some questions.


I wrote an async wrapper around SQLite in Python - I'm using a thread pool: https://github.com/simonw/datasette/blob/main/datasette/data...

I have multiple threads for reads and a single dedicated thread for writes, which I send operations to via a queue. That way I avoid ever having two writes against the same connection at the same time.


If you have a server with 100 cores to serve 100 connections simultaneously (and really need this setup), you should probably be using Postgres or something else.


It's a made up example to clarify whether I understand potential congestion scenarios and limitations correctly, not my actual situation.

If I had a server with 100 cores to serve 100 connections, but each query took only 5ms, SQLite might be totally viable. There's no blanket solution.

Edit: More importantly, SQLite async limitations come into play when I have only 12 cores but 100 incoming connections, and on top of querying data from SQLite, I do have other CPU bound work to do with the results. If I had 100 cores, 100 connections to the database would be no problem at all since each core could hold a connection and block without problem.


You can make SQLite scale way beyond the limitations of WAL mode or even Begin Concurrent mode, all while doing synchronous writes

https://oldmoe.blog/2024/07/08/the-write-stuff-concurrent-wr...


If synchronous IO is blocking your CPU bound application code, this won't help you. My made up example was not about concurrent writes, and the concurrent reads I mentioned were not my main point. For all I care, you could have 100 different databases or even normal files in this scenario and you read them.

I was wondering how the async wrappers around SQLite work when SQLite itself only has synchronous IO. At least for the Rust example by OP, the async part is only used when awaiting a queue, but the IO itself still has the potential of blocking all your application code while idling.


How did you come to that conclusion? No, the synchronous IO is not blocking the application because the committer that actually does the writing to disk lives in an external process.

This implementation turns synchronous IO 100% async while still maintaining the chatty transaction API and the illusion of serial execution on the client side.


> 12 cores but 100 incoming connections

Especially when using a modern storage medium, which most servers nowadays use, I doubt that filesystem I/O will be a bottleneck for the vast majority of use cases.

I/O is extremely fast and will be negligible compared to other stuff going on to serve those requests, even running queries themselves.

The CPU work done by SQLite will vastly outshine the time it takes to read/write to disk.

It might be a bottleneck to reading if you have a very large database file, though.

