Wanting SQLite in Go touches on something that I think is quite a waste in modern Go circles, but happens everywhere to varying degrees.
There's often (for instance, in Go projects wanting to avoid cgo) a desire for everything to be in the single source language - Go. In what resembles NIH syndrome, there will be clones of existing libraries, offering little over the original except being "Written in Go". From experience this often makes for more bugs, as the Go version is commonly much younger and lesser used than the existing non-Go library.
The Python world does it a lot less; perhaps the slowness of Python helps encourage using non-Python libraries in Python modules. But that sure does make building and distributing Python projects "fun".
What I'm trying to say is that:
A world where every language community has its own SQLite project because the communities shun code written in other languages just feels like a profound waste.
It's not as simple as NIH. Using cgo means you have to build and link external dependencies in another language. Compared to pure Go, this makes it quite a bit more difficult to build and distribute a single binary. But I think the biggest reason is that cgo is slow. Unlike Rust, which uses the C calling convention (correct me if I'm wrong) and has no garbage collector or coroutine stacks to worry about, Go pays a hefty price when calling a C function. For something like SQLite that causes a noticeable slowdown. Typically what I like to do in this case is write a C wrapper around the slow API that lets me batch work or combine multiple calls into one. With SQLite, instead of fetching rows one at a time, I would write a wrapper that takes some arrays and/or buffers to fetch multiple rows at a time. That amortizes the overhead and makes it less significant. When that's not possible, I use unsafe and/or assembly to lift just the problematic C calls into Go. That can sometimes work wonders, but it's also not a magic bullet.
Rust can use the C calling convention to call C functions or export functions to C code, but this requires extra annotations. By default, Rust uses its own unstable ABI.
Interesting! I've never considered batching my calls to SQLite from Go that way. Do you have any numbers you can share about performance when doing that?
I don't, but in general it matters most for the cheapest C calls, so the functions doing the least work. Batching those somehow can give big speedups, over 2x, depending on how much of the total time was going into cgo overhead.
To be honest the reason why Go developers want "pure Go" libraries is simply because they can be statically + cross compiled and used without having to carry around an additional library, especially in environments where you hardly have anything other than the binary itself (e.g: Docker containers "FROM scratch" or Gokrazy)
In this case sqlite is bundled as a single C source file. You could just use Zig as your C cross compiler to cross compile alongside Go for almost any platform[1].
It is a bug, and we will fix it, but keep in mind the scope - this is something that affects old versions of glibc. Newer versions of glibc are not affected, and neither is musl libc (often preferable for cross compiling to Linux).
You can target a newer glibc like this: -target x86_64-linux-gnu.2.28
You can target musl libc like this: -target x86_64-linux-musl
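Putting those targets together with a Go build, the invocation might look like this (the binary name and package path are placeholders):

```shell
# Cross-compile a cgo-dependent Go program to Linux/amd64 using zig
# as the C cross compiler; swap the -target triple for other platforms.
CGO_ENABLED=1 \
  CC="zig cc -target x86_64-linux-musl" \
  CXX="zig c++ -target x86_64-linux-musl" \
  GOOS=linux GOARCH=amd64 \
  go build -o myapp .
```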
I just can't agree with this. It's true that one piece, the compiling, is less painful. But the entire rest of the system, from developing to testing to rooting out implementation bugs, is way, way more painful.
It's ridiculously more painful for a single one-off by a single dev for a single tool, but an actual ecosystem of pure Go reimplementations has popped up where that load can become collectively shared over time, and ultimately using those native implementations doesn't carry that burden to the end developer. It has to start somewhere though.
I've seen the same thing in the Java world, I presume for the same reasons as in Go: calling native code in Java is painful, and traditionally Java always had an emphasis on "write once, run everywhere". So the Java community tends to reimplement C code in Java, even when limitations of the language make the code slower and/or more complex (see the classic post "Re: Why Git is so fast" aka "Why is C faster than Java" at https://marc.info/?l=git&m=124111702609723&w=2 or https://public-inbox.org/git/20090430184319.GP23604@spearce.... which mentions things like the lack of unsigned types or value types).
It created a lot of churn, but I think it has been net positive for Go because they now have a huge ecosystem of stuff that can be installed without the black hole of C/C++ dependency installation.
Each call out to a C library consumes 1 OS thread (max 10s of Ks of threads before terrible performance/scheduling issues); each call out to a Go library consumes a go routine, of which you can have 100s of Ks without much problem.
For SQLite it seems it would be ok, as there's no network traffic, but I've had issues where network glitches (Kafka publisher C library) would cause unrecoverable CPU spikes and an increase in OS threads that never recovered.
So that's the functional reason behind the Go community's desire to write everything in Go. Plus a lot of the people who love Go also tend to be the sort who would enjoy rewriting C libraries into a nice new language.
Rewriting things in Go not only makes using them nicer, it often is a way to get a more correct and stable program. However, we are talking about sqlite here, one of the best tested and stable C programs out there. Rewriting it in Go rather raises the chance of a bug and that would be counter-productive.
It still can be an interesting project and if it proves to be correct and eventually shows some advantages vs. using the C version, it might become a nice alternative.
I guess the reason is that it is easier to cross compile by keeping everything in go? I have no knowledge of Sqlite cross compilation (on Linux targeting windows for instance) but I guess it's also possible but it makes build process a little more complex.
Node.js can take advantage of WASM which is pretty handy in some cases.
Getting cgo to cross-compile while targeting less popular architectures can be a royal pain: I was trying to use cgo to add official SQLite to a Go app that I had running on a long-abandoned (by the OEM) mips/linux 2.x kernel "IoT" device with an equally ancient libc. It was a sisyphean task that absolutely nerd-sniped me; I spent way too much time building toolchains and trying to get them to work with cgo. I ended up going with a Go version of SQLite.
Yes this is the reason. Using cgo to link in c libraries for example is slower and also brings up other difficulties if you want to cross compile. Here's an old link (may be out of date) outlining some of them:
The main reason why I personally try to avoid adding non-go languages to my go projects is because it tends to make profiling / debugging a bit of a pain. pprof has limited vision into any external C threads so all you can see is the function call and the time goroutines spent off-cpu while waiting for it to finish. You can obviously supplement some of that with other tools (perf), but sometimes accepting the tradeoffs and using a Go implementation of the package instead just makes more sense.
> a desire for everything to be in the single source language
And that is good.
In particular, this is driven by how TERRIBLE all the dancing around C is. SQLite is among the easiest, yet it still causes trouble: suddenly, you need to bring in a certain LLVM, Visual Studio Tools, etc. And then you HOPE all the other tools use the correct env vars, settings, etc.
And then, you hit a snag, and waste time dancing around C.
A big part of my pain, and the pain I've observed in 15 years of industry, is programming language silos. Too much time is spent on "How do I do X in language Y?" rather than just "How do I do X?"
For example, people want a web socket server, or a syntax highlighting library, in pure Python, or Go, or JavaScript, etc. It's repetitive and drastically increases the amount of code that has to be maintained, and reduces the overall quality of each solution (e.g. think e-mail parsers, video codecs, spam filters, information retrieval libraries, etc.).
There's this tendency of languages to want to be the be-all end-all, i.e. to pretend that they are at the center of the universe. Instead, they should focus on interoperating with other languages (as in the Unix philosophy).
One reason I left Google over 6 years ago was the constant code churn without user visible progress. Somebody wrote a Google+ rant about how Python services should be rewritten in Go so that IDEs would work better. I posted something like <troll> ... Meanwhile other companies are shipping features that users care about </troll>. Google+ itself is probably another example of that inward looking, out of touch view. (which was of course not universal at Google, but definitely there)
I think you need to look deeper - one of the strengths of Go is the runtime and everything they do there to support their internal threading model. When you are calling out to an external language you have memory being allocated and managed outside the Go runtime, and you have opaque blocks of code that aren't going to let the Go runtime do anything else on the same CPU until they exit. Those are more the considerations behind wanting Go-native implementations. Even with SQLite, which is probably one of the most solid and thoroughly debugged pieces of code written since the Apollo program, it would be desirable to minimize the amount of data being copied across the runtime interface, and to allow other goroutines to run while I/O operations are in progress.
It's especially helpful to be pure Go when targeting both iOS and Android (in addition to Linux, Mac, and Windows) with https://github.com/fyne-io/fyne#about
I was a beginner in Go when I wanted to use SQLite with it, and I wanted an easy way to build without a lot of hassle; it looked like cgo was the only solution back then. I really wished there was something that I could easily use from Go rather than building with cgo.
I don't think the Go community is particularly susceptible to this. You mention Python; as you say, Python and the dynamic scripting languages are particularly "OK" with having things that are backed by C, because of the huge performance improvements you get from doing as much as possible in C in those languages. Dynamic scripting languages are slow. But these are the exceptions, not the rule.
Most other languages want native libraries not because of some bizarre fear of C, but because of the semantics. Native libraries work with all the features of the language, whatever they may be. A naive native binding to SQLite in Rust may be functional, but it will not, for instance, support Rust's iterators. That's kind of a bummer. Any real Rust library for something as big as SQLite will of course provide them, but as you go further down the list of "popular libraries" the bindings will get more and more foreign.
Also, the design of these dynamic scripting languages was non-trivially bent around treating the ability to bind to C as a first-class concern. I think if they were never designed around that, there are many things that would not look the same. One big one is that Python would be cleanly multithreaded today if it didn't consider that a big deal, because the primary problem with the GIL isn't Python itself, but the C bindings you'd leave behind if you removed it. Go's issue is mostly that it came far enough into C's extremely slow, but steady, decline that it was able to make it a second-class concern instead of a first, and not force the entire language's design and runtime to bend around making C happy.
As it happens, in my other window, I'm writing Go code using GraphViz bindings, and I'm experiencing exactly this problem. It works, yes. But it's very non-idiomatic. I've had to penetrate the abstraction a couple of times to pass parameters down to GraphViz that the wrapper didn't directly support. (Fortunately, it also provided the capability to do so, but that doesn't always happen.) There's a function I have to call to indicate that a particular section is using the HTML-like label support GraphViz has, which in Go, takes a string and appears to return the exact same string, but the second string is magical and if used as the label will be HTML-like.
This is not special to Go, I've encountered this problem in Python (the Tkinter bindings are a ton of "fun"; the foreign language in this case is Tcl, and if you want to get fancy you'll end up learning some Tcl too!), Perl, several other places. A native library would be much nicer.
Finally, the Go SQLite project isn't its own SQLite. It's actually a C-to-Go translation, as I understand it. That's not really a separate project.
There are good reasons to avoid C dependencies in Go though. C code is essentially a big black box to the runtime, so you lose some benefits when you go there (no pun intended).
The original argument was that reimplementing everything in Go is better and avoids a bunch of trade-offs, mainly: more complicated and slower builds, loss of cross-compilation, tooling. But it completely ignores the real cost of maintaining software, even if it's 'just a translation'. Having the same underlying libraries interoperating with Go, Node, Java, etc. is a massive advantage: it gives you predictable performance, memory usage, and known reliability regardless of the host language. Who would ever port and maintain a beast such as libvips or IM, and all the 25 other image libraries they depend on?
What really should be addressed is precisely that pain, so users of modules using CGO don't have to worry about it. Fix it, don't abandon it.
I agree with everything you write, but the opposition to "too much cgo" has another reason too: what's the point of using a memory safe language if you tie it to a large body of code that is not memory safe? Rust applications using C libraries have this issue too, although the performance penalty of calling C from Rust is less than for Go.
Of course, there are still valid reasons for using cgo. But if you are building a library, you should ask yourself the "cgo or not cgo" question even more seriously, because you will be forcing everybody who uses your library to use cgo too...
Yes, memory safety certainly would be an argument for rewriting a library in Go. But we are talking here about an extremely well tested and robust C program which is actively maintained. In most cases, you are more likely to end up with a correct program by calling the C version than by attempting a port, which can introduce bugs of its own.
Yes, in this particular case, SQLite has a stellar reputation for stability, test coverage etc. But for other C codebases, even if they have been around for a long time and are stable and well maintained, exploits have still been found...
Yes – Go is generally memory safe, with the exception of the `unsafe` package which is (obviously) not. Outside of this, there is no access to raw pointers or pointer arithmetic. The use of goroutines does not affect memory safety.
None of this means that you can't make an absolute mess of concurrency, but that's not a memory safety concern.
https://research.swtch.com/gorace describes a loss of memory safety through data races, but I note it says "In the current Go implementations" and was written in 2010.
I never heard of news that the situation had changed, but if it has I'm most interested in when it did! :)
It really depends on where you draw the line. This is an obviously-incorrect program (`++` isn't atomic):
    package main

    import (
        "fmt"
        "sync/atomic"
    )

    func main() {
        var x, y int64
        doneCh := make(chan struct{})
        inc := func() {
            for i := 0; i < 2<<20; i++ {
                x++ // line 13 in test/main.go
                atomic.AddInt64(&y, 1)
            }
            doneCh <- struct{}{}
            return
        }
        go inc()
        go inc()
        <-doneCh
        <-doneCh
        fmt.Printf("x, y = %v, %v\n", x, y)
    }
This prints something like:
$ go run main.go
x, y = 3482626, 4194304
If you run it with the race detector the problem is clear:
$ go run -race main.go
==================
WARNING: DATA RACE
Read at 0x00c0001bc008 by goroutine 7:
main.main.func1()
/home/jrockway/test/main.go:13 +0x50
Previous write at 0x00c0001bc008 by goroutine 8:
main.main.func1()
/home/jrockway/test/main.go:13 +0x64
Goroutine 7 (running) created at:
main.main()
/home/jrockway/test/main.go:19 +0x176
Goroutine 8 (running) created at:
main.main()
/home/jrockway/test/main.go:20 +0x184
==================
x, y = 4189962, 4194304
Found 1 data race(s)
exit status 66
This is not strictly a memory safety problem (this can't crash the runtime), but a program that returns the wrong answer is pretty useless, so there is that. Obviously if x were going to be used as a pointer through the `unsafe` package, you could have problems. (Though I think that x will always be less than y, so if you have proved that memory[base+y] is safe to read, then memory[base+x] is safe to read. But it's easy to imagine a scenario where you overcount instead of undercount.)
I'm very confused by this comment. Memory safety doesn't mean "can't crash the runtime"[0]. That would exclude the most basic, canonical examples of memory unsafety: use after free, buffer overwrite/overread, double free, race conditions, etc. The erroneous Go exemplar you posted is literally one of the first examples of memory unsafety on Wikipedia: https://en.wikipedia.org/wiki/Memory_safety. Go is not memory safe by any stretch of the imagination. It doesn't even claim to be. At most it incidentally prevents a narrow subset of unsafe memory uses, compared with C (not a high standard), in the sense of providing an array type (and concomitant string type) rather than just pointer arithmetic.
I think the underlying mantra is actually "if I lose compilation speed and simplicity, go is no longer the correct language choice for this problem" which is why cgo can be a non-starter for many.
Mandatory plug of this article[0] describing how to use the Zig language toolchain to get easy cross-compilation with CGO.
It's really cool because cross-compilation is, imo, the most painful part of CGO - right after being incompatible with the whole runtime model, being blocking, and being hard to profile with Go profiling tools.
Compile in Debian.
It has a full cross compile tool chain and is as simple as adding the target architecture to dpkg to get the libraries you need (assuming you don't mind the versions packaged for Debian).
Getting the right libs is really the hardest part of cross compiling, and zig can't really help there.
Not to crap on zig, it's pretty badass, just offering a well battle tested alternative.
Wow this is awesome, thanks! I'm working on a Go project that uses sqlite right now, and I thought I was going to have to use xgo[1], which is cool and all, but it's like an 8 GB docker container, and I'd still be worried about glibc issues.
The default go sqlite driver is https://github.com/mattn/go-sqlite3, which is quite lovely, but I ran into issues with concurrency on read only databases.
I'm now using https://github.com/crawshaw/sqlite and it seems to address those issues (but I haven't gotten around to setting up a proper test to confirm). It may be worth perusing if you do run into performance problems. It does come with the caveat of not being a database/sql driver though.
Considering the great lengths the SQLite source goes through to validate correctness (code that rides in critical airplane systems gets no free lunch), I'd be immediately concerned that a "SQLite in ______" lacks remotely the same degree of rigor in ensuring implementation correctness. The overhead of using cgo feels like a small price to not have to worry whether "SQLite in Go" will remain maintained to the same degree the base SQLite project is.
They have `0 errors out of 928271 tests`. The code is mostly auto-generated if I recall, so it's essentially just using translated C from the original SQLite project.
I switched from mattn to modernc for my ws4sqlite project and could not measure a difference; really interesting though. Of course I made the switch out of cross-compilation considerations, not for performance.
I've always wondered how that worked. Is it simply a statistical side effect of people who have read about one thing posting & liking other posts about the same topic?
My guess was always that a post on a particular topic might make someone interested to learn more about that topic. If that person then finds a related link interesting, they are more likely to share it if the original post did well enough to indicate that other members of the community are interested in the topic. In some ways it feels a bit like the meme culture of other communities, where users are more likely to post items that are related to previous successful posts.
The main interesting thing to me is when the "topic du jour" has very little to do with what I'd normally associate with HN (language genealogy, Neanderthals, and ancient currencies are some examples from recent memory).
In this case, the author was messing around with trying to get DuckDB & SQLite to build/link statically in Go yesterday after the post about it; this is probably related to that:
TL;DR this article compares the performance of mattn/sqlite3 (a wrapper around sqlite's C code, requiring `cgo` to compile and embed) with modernc.org/sqlite, an automatic conversion of C to Go code, and finds that the "native" Go code is half as fast as the cgo version.
Which brings us to the obvious question, what improvements can be made? What if the Go code was handcrafted instead of automatically generated?
https://gitlab.com/cznic/sqlite/-/issues/39 is one issue from a year ago where they talk about insert performance and some optimizations as well, so I believe the author is aware of it.
Anyway, I hope a native Go version does pick up (for everything currently depending on cgo for that matter), it makes cross-compilation a lot easier.
Yeah, this seemed fishy to me too (I made a similar comment here: https://www.reddit.com/r/golang/comments/uo5mix/comment/i8dn...). It seems the mattn/cgo version is doing a constant amount of work, but the modernc/non-cgo version is doing an amount of work linear(ish) to the number of rows. The latter makes more sense for this query, so I wonder if there's something wrong here?
How could one be doing constant work and the other O(rows) work if it's the same code (just compiled from one language to another)?
I also thought the sub-ms numbers can't be right — SQLite is fast but not that fast on millions of rows, but didn't look into it until now.
Turns out the benchmarking code is wrong: it didn't read the rows returned from db.Query, so the mattn version simply didn't wait for the results to arrive. Once you apply this patch:
diff --git a/cgo/main.go b/cgo/main.go
index 8796b3d..9a74a2f 100644
--- a/cgo/main.go
+++ b/cgo/main.go
@@ -82,11 +82,15 @@ CREATE TABLE people (
panic(err)
}
}
- fmt.Printf("%f,%d,insert,cgo\n", float64(time.Now().Sub(t1)) / 1e9, rows)
+ fmt.Printf("%f,%d,insert,cgo\n", float64(time.Now().Sub(t1))/1e9, rows)
t1 = time.Now()
- _, err = db.Query("SELECT COUNT(1), age FROM people GROUP BY age ORDER BY COUNT(1) DESC")
- fmt.Printf("%f,%d,group_by,cgo\n", float64(time.Now().Sub(t1)) / 1e9, rows)
+ res, _ := db.Query("SELECT COUNT(1), age FROM people GROUP BY age ORDER BY COUNT(1) DESC")
+ for res.Next() {
+ var count, age int
+ _ = res.Scan(&count, &age)
+ }
+ fmt.Printf("%f,%d,group_by,cgo\n", float64(time.Now().Sub(t1))/1e9, rows)
}
}
}
modernc SELECT performance becomes pretty comparable, actually a little bit faster than mattn on my Intel Mac with high row count.
Not only that, modernc INSERT is noticeably faster on my Intel Mac...
Good point, thanks for checking out the code! I replaced db.Query with a db.Exec for the SELECT count (just to avoid the row iteration/deserialization) and I'm seeing closer performance.
I do hope no one attempts to rewrite sqlite in Go. The C version is very well tested. A rewrite can only add bugs. Automatic transcription is the best way forward, the performance hit (when you want to avoid cgo) be damned.
I do suppose that the group_by performance can be improved by working on the c->go compiler, though. I think that's a more worthwhile effort than rewriting sqlite.
> Automatic transcription is the best way forward, the performance hit (when you want to avoid cgo) be damned
But if performance is worse than the C wrapper, there is surely no point whatsoever for such a native version to exist? Isn't the whole point of a native Go/whatever version to avoid the interop penalty?
> The C version is very well tested
Couldn't the same test suite be used for a Go/whatever rewrite, or at least be ported to Go too?
> Couldn't the same test suite be used for a Go/whatever rewrite, or at least be ported to Go too?
And then port each and every update as well? The only use of such a rewrite is performance improvement in go programs. Surely improving the c-to-go compiler is a better investment of effort? It would benefit other projects too. Rewriting sqlite in go will only lead to less effort on the c-to-go compiler.
Dumb question: on which system with a Go compiler is a C compiler hard to install?
Most *nix distros either come with gcc (and/ or clang) already or have it easily available as a package. AFAIK the same is applicable for MacOS with homebrew.
Is this a problem for Windows users, or am I overlooking someone? I'm genuinely interested since I've recently started with Go and would like to use cgo in the future.
My primary use case is Windows and embedded.
Also, even when the compiler itself is easy to install, making C compilation work seamlessly across multiple machines is another matter: unreliable vendor toolchains, or weird path issues where the compiler can't find the correct headers/libraries, are not that uncommon.
Thank you for your response and perspective. I didn't consider the various toolchains and environments from this POV. Your points make sense and are very helpful.
I agree. And I've been working on https://github.com/goplus/c2go recently. Its goal is converting any C project into Go without any human intervention, keeping performance close to C.
The thing that scared me off this option is that the incredible SQLite test suite can't be run against the Go translation, IIRC.
But I also really don't like using cgo, for all the reasons stated elsewhere. So these days I generally avoid SQLite in my Go programs, which is unfortunate.
What I think I'd like best of all is if there was an out-of-process SQLite network daemon mode that could run over Unix sockets. So one could run `sqlite3 daemon unix:///run/sqlite.sock` and then communicate with it like a traditional SQL database, but with most of the simplicity and power of SQLite intact.
I know there are at least some attempts at creating network daemons for SQLite, but my impression was that none of them seemed up to the (very high) bar set by SQLite itself. Also, there's no SQLite network driver for Go. Please someone correct me if I'm wrong about these though.
With modernc version there are also a lot less compiling hurdles for niche devices, I have a bunch of apps that use the modernc version because it was the easiest solution to compile the binary for a 32-bit ARM device like the Raspberry Pi.
Yep! This is pretty much the main reason I don't use SQLite in Go unless I know that the project won't ever be compiled on anything other than what the devs and the server run. Simple and easy cross-compilation is one of Go's often forgotten selling points, and depending on how you want to use SQLite, using CGo may or may not be a good solution for you.
If you do decide to use "modernc.org/sqlite" you should make it switchable with mattn's behind a build tag, for those of us who prefer to use the system libsqlite3. I've noticed that the auto-converted package seems to have different behavior in its handling of concurrent updates, which makes me concerned.
I don't think you ever want to make the database backend switchable at compile time. Every database engine has slight semantic differences and it's crucial that your program handle that correctly. If there's a difference, like you mention, you want to tell the user to get the one that you've tested.
Something that has caused me a lot of trouble in the past and is now on my "never do" list is using SQLite for tests against an app that runs Postgres in production. It's pretty similar, but passing tests can easily mean failing production. (The last straw for me was SQLite considering "f" true, while Postgres considers it false.) Your suggestion is a little different than that, but the same principle applies. Test your app against the database you're going to use. If you must support two database engines, then you need a test matrix that runs tests against both.
For an application being developed professionally I agree. At least until you have a good enough reason to deviate from that course.
I wrote my comment based on recent personal experience deploying open source software. They already had support for multiple database backends, with SQLite being one of them. They already had test suites, and while adding support for another SQLite implementation isn't free, it did only take a few minutes.
Can this be generalized to say, as a rule of thumb, that an automatic translation of code from C to Go produces code that is twice as slow? Are there other projects in which the automatic translation halves performance?
I have absolutely zero problems using mattn/sqlite3 while cross compiling from Linux to windows.
Zero.
I’m not some deeply knowledgeable go-internals core team committer. I’m a pretty mediocre go dev who spends most days fighting silly fires in the 6-7 other languages any project ends up having to deal with.
Zero problems with cross compiling the cgo version of SQLite for 3 years now.
I have been using SQLite in Go projects for a few years now. During early stages of development I always start with SQLite as the main database, then when the project matures, I usually add support for PostgreSQL.
(I usually make a Store interface which is application specific and doesn't even assume there is an SQL database underneath. Then I make "driver" packages for each storage system - be it PostgreSQL, SQLite, flat files, timeseries etc. I have only one set of unit tests that is then run against all drivers. And when I have a caching layer, I also run all the unit tests with or without caching. The cache is usually just an adapter that wraps a Store type. I maintain separate schemas and drivers for each "driver" because I have found that this is actually faster and easier than trying to make generic SQL drivers for instance.)
However, I always keep the SQLite support and it is usually the default when you start up the application without explicitly specifying a database. This means that it is easy for other developers to do ad-hoc experiments or even create integration tests without having to fire up a database, which even when you are able to do it quickly, still takes time and effort. In production you usually want to point to a PostgreSQL (or other) database. Usually, but not always.
I also use it extensively in unit tests (often creating and destroying in-memory databases hundreds of times during just a couple of seconds of tests). I run all my tests on every build while developing, and then speed matters a lot. When testing with PostgreSQL I usually set a build tag that specifies that I want to run the tests against PostgreSQL as well. I always want to run all the database tests - I don't always need to run them against PostgreSQL.
(Actually, I made a quick hack called Drydock which takes care of creating a PostgreSQL instance and creates one database per test. This is experimental, but I've gotten a lot of use out of it: https://github.com/borud/drydock)
The reason I do this is that it results in much quicker turnaround during the initial phase when the data model may go through several complete rewrites. The lack of friction is significant.
SQLite has actually surprised me. I use it in a project where I routinely have tens of millions of rows in the biggest table. And it still performs well enough at well north of 100M rows. I wouldn't recommend it in production, but for a surprising number of systems you could if you wanted to.
The transpiled SQLite is very interesting to me for two reasons. It makes cross compiling a lot less complex. I make extensive use of Go and SQLite on embedded ARM platforms, and then you have to choose between compiling on the target platform or messing around with C libraries. It also eliminates the need to do two-stage Docker builds (which cuts down building Docker images from 50+ seconds to perhaps 4-5 seconds).
The transpiled version is slower by quite a lot. I haven't done a systematic benchmark, but I noticed that a server that stores 30-40 datapoints per second went from 0.5% average CPU load to about 2% average CPU load. I'm not terribly worried about it, but it does mean that when I increase the influx of data I'm most likely going to hit a wall sooner.
I'll be using the transpiled SQLite a lot more in the coming year and I'll be on the Gophers Slack so if anyone is interested in sharing experiences, discussing SQLite in Go, please don't be shy.