A MySQL compatible database engine written in pure Go (github.com/dolthub)
405 points by mliezun 7 months ago | 80 comments



Hi, this is my project :)

For us this package is most important as the query engine that powers Dolt:

https://github.com/dolthub/dolt

We aren't the original authors but have contributed the vast majority of its code at this point. Here's the origin story if you're interested:

https://www.dolthub.com/blog/2020-05-04-adopting-go-mysql-se...


This is very cool! Couple of suggestions:

- Don't use "mysql" in the name: it's a trademark of Oracle Corporation, and they can very easily sue you personally if they want to, especially since you're using it to develop a competing database product. Other products getting away with it doesn't mean they won't set their sights on you. This is just my suggestion and you can ignore it if you want to.

- Postgres wire/SQL compatibility. Postgres is for some reason becoming the relational king, so implementing some support sooner rather than later increases your project's relevance.


PostgreSQL support here

https://github.com/dolthub/doltgresql

Background and architecture discussion here

https://dolthub.com/blog/2023-11-01-announcing-doltgresql/


What's the replication story currently like?


The vanilla package can replicate to or from MySQL via binlog replication. But since it's memory-only, that's probably not what you want. You probably want to supply the library with a backend that has persistence, not the built-in memory-only one.

Dolt can do the same two directions of MySQL binlog replication, and also has its own native replication options:

https://docs.dolthub.com/sql-reference/server/replication


Interesting!

> If you have an existing MySQL or MariaDB server, you can configure Dolt as a read-replica. As the Dolt read-replica consumes data changes from the primary server, it creates Dolt commits, giving you a read-replica with a versioned history of your data changes.

This is really cool.


Thanks!

Let us know if you try it out, we're always interested in feedback.

https://discord.com/invite/RFwfYpu


Coming soon we'll have the ability to replicate a branch HEAD to MySQL as well.


Have you benchmarked the replication? Or do you know of anyone who's running it against a primary with a couple of tens of thousands of writes per second?


That's a lot. With Percona clusters I started having issues requiring fine-tuning at around a third of that, with quite short peak loads: maybe ten minutes of sustained high load topping out at 6-10k writes/s. Something like 24 cores, 192 GB RAM on the main node.

Not sure how GC works in Golang, but if you see 20k writes/s sustained, that's what I'd be nervous about. If every write is 4 kB, 20k writes/s works out to about 80 MB/s, roughly a quarter of a TB per hour, and probably closer to a full TB at the edge due to HTTP overhead. So, yeah, a lot to handle on a single node.

Maybe there are performance tricks I don't know about that make 20k sustained a breeze; I just know that I had to spend time tuning RAM usage and whatnot for peaks at quite a bit lower load, and already at that load I was planning to shard the traffic.


I don't think we have any benchmarks of replication from MySQL, but I am positive there's no chance it can handle 10,000 TPS.


Missed an opportunity to call this uSql!


I always found the idea behind dolt to be very enticing.

Not enticing enough to build a business around, due to it being that bit too different and the persistence layer being that bit too important. But the sort of thing that I'd love it if the mainstream DBs would adopt.

I didn't realise the engine was written in Go, and honestly the first place my mind wanders is to performance.


If you like the idea of the Dolt prolly trees[1], I'm building a database[2] that uses them for indexing, (eventually) allowing for shared index updates across actors. Our core uses open-source JavaScript[3], but there are a few other implementations including RhizomeDB in Rust[4]. I'm excited about the research in this area.

[1] https://docs.dolthub.com/architecture/storage-engine/prolly-...

[2] https://fireproof.storage

[3] https://github.com/mikeal/prolly-trees

[4] https://jzhao.xyz/thoughts/Prolly-Trees


We haven't benchmarked the in-memory database implementation bundled in go-mysql-server in a while, but I would be surprised if it's any slower than MySQL, considering that Dolt runs on the same engine and is ~2x slower than MySQL including disk-access.

https://docs.dolthub.com/sql-reference/benchmarks/latency


Version Control is not the type of thing "mainstream DBs would adopt".

We needed to build a custom storage engine to make querying and diffing work at scale:

https://docs.dolthub.com/architecture/storage-engine

It's based on the work of Noms, including the data structure they invented, Prolly Trees.

https://docs.dolthub.com/architecture/storage-engine/prolly-...


This seems to be a wire-protocol proxy for mysql -> SQL.

The default proxied database is dolt. I'm guessing this is extracted from dolt itself as that claims to be wire-compatible with mysql. Which all makes total sense.


Not a proxy in the traditional sense, no. go-mysql-server is a set of libraries that implement a SQL query engine and server in the abstract. When provided with a compatible database implementation using the provided interfaces, it becomes a MySQL compatible database server. Dolt [1] is the most complete implementation, but the in-memory database implementation the package ships with is suitable for testing.
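For a sense of what that looks like in practice, here's a rough sketch based on the project's README example. The constructor names (memory.NewDatabase, memory.NewDBProvider, sqle.NewDefault, memory.NewSessionBuilder, server.NewServer) have shifted between releases, so treat the exact signatures as approximate and check the README for your version:

    package main

    import (
        sqle "github.com/dolthub/go-mysql-server"
        "github.com/dolthub/go-mysql-server/memory"
        "github.com/dolthub/go-mysql-server/server"
    )

    func main() {
        // The bundled in-memory backend; any implementation of the engine's
        // database/provider interfaces could be plugged in here instead.
        db := memory.NewDatabase("mydb")
        provider := memory.NewDBProvider(db)

        // The engine is the analyzer/planner/executor; the server layers the
        // MySQL wire protocol on top of it.
        engine := sqle.NewDefault(provider)
        cfg := server.Config{Protocol: "tcp", Address: "localhost:3306"}

        // NOTE: the NewServer signature has changed across releases; this is
        // the shape from the README at roughly the time of this thread.
        srv, err := server.NewServer(cfg, engine, memory.NewSessionBuilder(provider), nil)
        if err != nil {
            panic(err)
        }

        // Any MySQL client or driver can now connect to localhost:3306.
        if err := srv.Start(); err != nil {
            panic(err)
        }
    }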

We didn't extract go-mysql-server from Dolt. We found it sitting around as abandonware, adopted it, and used it to build Dolt's SQL engine on top of the existing storage engine and command line [2]. We decided to keep it a separate package, and implementation agnostic, in the hopes of getting contributions from other people building their own database implementations on top of it.

[1] https://github.com/dolthub/dolt [2] https://www.dolthub.com/blog/2020-05-04-adopting-go-mysql-se...


Really excellent work! For the curious, would you all be creating an in-memory database implementation that is postgres compatible for the doltgres project?


We are moving in that direction but it's not a primary goal at this point. Once we have more basic functionality working correctly in doltgres we will examine splitting off a separate package for it. The in memory implementation has a bunch of MySQL specific stuff in it right now and we're still learning what pieces need to be generalized to share code.


The compatibility (and functionality in general) is severely limited, not usable in production:

> No transaction support. Statements like START TRANSACTION, ROLLBACK, and COMMIT are no-ops.

> Non-performant index implementation. Indexed lookups and joins perform full table scans on the underlying tables.

I actually wonder if they support triggers, stored procedures etc.


Yes, triggers and stored procedures are supported. Concurrency is the only real limitation in terms of functionality.

The bundled in-memory database implementation is mostly for use in testing, for people who run against mysql in prod and want a fast compatible go library to test against.

For a production-ready database that uses this engine, see Dolt:

https://github.com/dolthub/dolt


Only for the in-memory implementation. It is also specifically stated that you shouldn’t use the in-memory stub in production


I suspect Go is probably better, but as a long-time C# developer I cringe at the idea of implementing a DB in a GC language. It seems that you would be fighting the GC all the time and have to write a lot of non-obvious low-allocation code, using unmanaged structures, unsafe, etc. All doable of course, but it seems like starting on the wrong foot. Maybe fine for a very small team, but onboarding new devs with the right skill set would be hard.


There are quite a few database products and other data-intensive systems written in Go, Java, and many other languages. Generally this is much less of an issue than you think. And it's offset by several benefits: nice primitives for things like concurrency, and a nice language to work with.

On the JVM you have things like Cassandra, Elasticsearch, Kafka, etc., each of which offers performance and scale. There are lots more examples. As far as I know they don't do any of the things you mention, at least not a lot. And you can use memory-mapped files on the JVM, which helps as well. Elasticsearch uses this a lot, and I imagine Kafka and Cassandra do similar things.

As for skillset, you definitely need to know what you are doing if you are going to write a database. But that would be true regardless of the language.


While it is true that Cassandra and Kafka are great software that countless developers rely on to handle massive scale...

It is also true that the JVM and the GC are a bottleneck in what they are able to offer. Scylla and Redpanda's pitch is "we are like this essential piece of software, but without the JVM and GC".

Of course, having a database written in Go still has its pros and cons, so to each their own.


The JVM and GC have a learning curve for the people implementing the database. But most users wouldn't get exposed to any of that. And if you manage it properly, things work fine. I've used Elasticsearch (and Opensearch) for many years. This is not really an issue these days. It was 10 years ago when JVM garbage collection was a lot less advanced than it is these days. These days, that stuff just works. I haven't had to tune GC for at least half a decade on the JVM. It's become a complete and utter non issue.

There are many valid reasons to pick other products but Elasticsearch is pretty good and fast at what it does. I've seen it ingest content at half a million documents per second. No stutters. Nothing. Just throw data at it and watch it keep up with that for an hour sustained (i.e. a bit over a billion documents). CPUs maxed out. This was about as fast as it went. We threw more data at it and it slowed down but it didn't die.

That data of course came from kafka being passed through a bunch of docker processes (Scala). All JVM based. Zero GC tuning needed. There was lots of other tuning we did. But the JVM wasn't a concern.


I think this depends on the level of optimization you go for. At the extreme end, you’re not gonna use “vanilla” anything, even in C or Rust. So I doubt that you’ll get that smooth onboarding experience.

In Go, I've found that with a little bit of awareness, and a small bag of tricks, you can get very low allocations on hot paths (where they matter). This comes down to using sync.Pool and being clever with slices to avoid copying (a rough sketch below). That's a footgun-performance tradeoff that's well worth it, and can get you really far quickly.
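A minimal sketch of the sync.Pool pattern being described (illustrative only, not code from this project):

    package hotpath

    import (
        "bytes"
        "sync"
    )

    // bufPool reuses buffers across calls so the hot path allocates
    // (almost) nothing once it has warmed up.
    var bufPool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    // encode serializes rec into a pooled buffer, hands the bytes to sink,
    // and returns the buffer to the pool. sink must not retain the slice
    // past the call, which is the footgun part of the tradeoff.
    func encode(rec []byte, sink func([]byte)) {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        buf.Write(rec) // in real code this would be the actual encoding work
        sink(buf.Bytes())
        bufPool.Put(buf)
    }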


Well, with a manually managed language you have to do those things pretty much all the time, whereas with a GC you can pick which parts are manually managed.

Also I suspect this project isn't for holding hundreds of GB of stuff in memory all the time, but I could be wrong.


You would be surprised by performance of modern .NET :)

Writing no-alloc is oftentimes done by reducing complexity and not doing "stupid" tricks that work against JIT and CoreLib features.

For databases specifically, .NET might actually be positioned very well with its low-level features (intrinsics incl. SIMD, FFI, struct generics, though not entirely low-level) and high-throughput GC.

An interesting example of this applied in practice is Garnet[0]/FASTER[1]. Keep in mind that its codebase still has many instances of un-idiomatic C# and you can do way better by further simplification, but it already does the job well enough.

[0] https://github.com/microsoft/garnet

[1] https://github.com/microsoft/FASTER


Using .NET 6 here. I agree, performance is generally great, just as fast as its peers (i.e. Java and Go). However, if you need to think about memory a lot, GCed runtimes are an odd choice.


Why is that?


Primarily because you not only need to think about memory, but you also need to think about the general care and feeding of the GC as well (the behavior of which can be rather opaque). To each their own, but based on my own (fairly extensive) experience, I would not create a new DB project in a GCed runtime given the choice. That being said, I do think C# is a very nice language with a high performance runtime, has a wonderful ecosystem and is great choice for many / most projects.


Isn't it characteristic of languages like C++ or Rust that you have to think way more about managing memory (and even more so if you use C)?

Borrow checker based .drop/deallocation is very, very convenient for data with linear or otherwise trivial lifetime, but for complex cases you still end up paying with either your effort, Arc/Arc<Mutex, or both. Where it does help is knowing that your code is thread-safe, something Rust is unmatched at.

But otherwise, C# is a quite unique GC-based language since it offers a full set of tools for low-level control when you do need it. You don't have to fight the GC, because once you use e.g. struct generics for abstractions and stack-allocated/pooled buffers or native memory for data neatly wrapped into spans, you get something close to C but in a much nicer-looking and more convenient package (the GC is also an optimization since it acts like an arena allocator, and each individual object has less cost than with a pure reference-counted approach).

Another language, albeit with small adoption, is D, which also has a GC when you want that escape hatch while offering features to compete with C++ (can't attest to its GC performance, however).


That is the ideal: profit from the productivity of automatic resource management, and only write low-allocation code on the paths that actually matter, as the ASP.NET team has been doing since .NET Core was introduced, with great results in performance.


There are already MySQL-like databases written in non-GC'ed languages, such as MySQL itself.

Odds are, if you need one written in Go then your requirements are somewhat different. For example, the need for a stub for testing.


Yup, it's bad, and even if you "do everything right" minimization-wise, if you're still using the heap then eventually fragmentation will come for you too.


Languages using moving garbage collectors, like C# and Java, are particularly good at not having to deal with fragmentation at all, or only marginally at most.


Yup, and exposing pointers in a GC'd language was a mistake, as it blocks this. It limits (efficient) applications to small deployments.


Why would it impose such a restriction?


In order to move objects you need to stop the world and update everything that will ever point to them: pointers, pointer aliases, arithmetic bases, arithmetic offsets. This quickly becomes intractable. It's not strictly speaking just pointers themselves, but the fact that pointers can be used arithmetically in various ways, even though the most obvious ways are disallowed. The obvious example is unsafe.Pointer and uintptr, but that has some guards; for example, go vet will flag converting a uintptr variable back to an unsafe.Pointer. But mix in some slices or reflect usage and you can quickly get into shaky territory. I believe you can achieve badness even without using the unsafe package.
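To make that concrete, a tiny (contrived) illustration of the kind of pointer laundering that is legal in Go today; Go's GC currently doesn't move heap objects, which is part of why this is even expressible:

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        buf := make([]byte, 16)

        // Launder an element's address through a uintptr. A uintptr is just an
        // integer: the GC has no idea it refers to buf, so if the runtime ever
        // moved heap objects, nothing would update addr to follow the data.
        addr := uintptr(unsafe.Pointer(&buf[0]))
        addr += 8 // arbitrary pointer arithmetic

        // Converting a stored uintptr back into a pointer is exactly the
        // pattern go vet's unsafeptr check complains about.
        p := (*byte)(unsafe.Pointer(addr))
        *p = 42

        fmt.Println(buf[8]) // prints 42, but only because nothing moved underneath us
    }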


Interesting. I did not realize it is such a problem in the JVM. EDIT: in Go, not the JVM; somehow it is always Go having trouble in the systems programming domain.

In .NET, this does not seem to be a performance issue; in fact, it is used by almost all performance-sensitive code to match or outperform C++ when e.g. writing SIMD code.

There are roughly three ways to pass data by reference:

- Object references (which work the same way as in JVM, compressed pointers notwithstanding)

- Unsafe pointers

- Byref pointers

While object references don't need explanation, the way the last two work is important.

Unsafe pointers (T*) are plain C pointers and are ignored by the GC. They can originate from unmanaged memory (FFI, NativeMemory.Alloc, stack, etc.) or from objects (fixed statement and/or other unsafe means). When a pointer into an object's interior (e.g. byte* from byte[], or MyStruct* from a MyStruct field in an object) is required, the object is pinned by setting a bit in the object header, the same bit the GC uses during the mark phase (concurrent or otherwise) to indicate live objects. When an object is pinned in this way, the GC will not move it during the relocation and/or compaction phase (again, concurrent or otherwise; not every GC is stop-the-world). In fact, objects are moved when the GC deems it profitable, either by moving them to a heap belonging to a different generation or by performing heap compaction when a combination of heuristics prompts it to reduce fragmentation. Over the years, numerous improvements have been made to reduce the cost of object pinning, to the point that it is never a concern today. Partly because of those improvements, partly because of the next feature.

Byref pointers (ref T) are like regular unsafe pointers except that they are specially tracked by the GC, which lets them point to object interiors without writing unsafe code or having to pin the object. You can still write unsafe code with them by doing "byref arithmetic", which CoreLib and other performance-sensitive code does (I'm writing a UTF-8 string library which heavily relies on that for performance), and they can also point to arbitrary non-object memory like regular unsafe pointers do: stack, FFI, NativeMemory.Alloc, etc. They are what Span<T> is based on (internally it is a ref T plus an int length), allowing arbitrary memory ranges to be wrapped in a slice-like type which can then be safely consumed by all standard library APIs (you can int.Parse a Span<byte> slice from a stack-allocated buffer, an FFI call, or a byte[] array without any overhead). Byref pointers can also be pinned by being stored in a stack frame location for pinned addresses, which the GC is aware of. For stack and unmanaged memory this is a no-op, and for object memory it only matters during GC itself. Of course nothing is ever free in software engineering, but the overhead of byrefs is considered insignificant compared to object pinning.


Java's behavior is, in general/in the abstract, similar to .NET's: there are types and APIs to pin memory for FFI and aliasing use cases, but normal references are abstracted, enabling the GC to perform compaction/moves for non-pinned objects. The behavior I described in the parent post was Go's, which exposes pointers directly rather than references.

It is still possible to end up with fragmentation challenges with both the JVM and .NET under specific circumstances, but there are also a lot of tunable parameters and so on to help address the problem without substantially modifying the program design to fit an under-specified contract with the memory subsystem. In Go there are few tunables, and the behavior of the GC and its interaction with application code at this level of concern is even less specified.


It would be great if this evolved to support MySQL-to-PostgreSQL and MySQL-to-SQLite translation.

Then we can finally have multiple database engine support for WordPress and others.


It is always the edge cases that will kill you. In the case of WP on PostgreSQL, the reason you want WP in the first place is the plugins and those will be hit or miss on PostgreSQL. Just give up on the combination of those two.


Isn't there an adapter from mysql-to-postgres which would essentially mimic all the quirks in mysql onto an actual postgres?


I believe this is what Janus (NEXTGRES) does.

To clarify, the wire protocol is the easy part; the semantic differences in how each database does things are a whole other can of worms. Such emulation will never be 100%, quirks and all.


TiDB has been around for a while; it is distributed, written in Go and Rust, and MySQL compatible. https://github.com/pingcap/tidb

Somewhat relatedly, StarRocks is also MySQL compatible, written in Java and C++, but it's tackling OLAP use-cases. https://github.com/StarRocks/starrocks

But maybe this project is tackling a different angle. Vitess's MySQL library is kind of hard to use. Maybe this could be used to build an ORM-like abstraction layer?


Dolt supports Git-like semantics, so you can commit, pull, merge, etc...


I always look at these implementations and go wow! But then I think, is there any real use for this?


If your program integrates with mysql in production, you can use this for much faster local tests. It doesn't have to be a go program, although that makes it easier.


The readme mentions at least one interesting use which presumably is the impetus for its creation: https://github.com/dolthub/dolt


If you want to run arbitrary queries on structured data then SQL is a good language to do it in. This library gives you the opportunity to build such a SQL layer on top of your own custom structured data sources, whatever they may be.


Interesting, another project implemented in Go that is compatible with MySQL server, alongside others like Vitess and TiDB.


Is this for integration/smoke testing?


Most direct users of go-mysql-server use it to test Golang <> MySQL interactions without needing a running server.
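For example, a test might look roughly like this. startInMemoryServer is a hypothetical helper standing in for the engine/server wiring in the README; the client side is just the standard go-sql-driver/mysql:

    package myapp_test

    import (
        "database/sql"
        "testing"

        _ "github.com/go-sql-driver/mysql" // the ordinary MySQL driver works against the embedded server
    )

    func TestUserQueries(t *testing.T) {
        // startInMemoryServer is a hypothetical helper: it would build a
        // go-mysql-server engine over the bundled in-memory database, start it
        // on a local port, and return its address (see the README for the wiring).
        addr := startInMemoryServer(t)

        db, err := sql.Open("mysql", "root@tcp("+addr+")/mydb")
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(64))`); err != nil {
            t.Fatal(err)
        }
        if _, err := db.Exec(`INSERT INTO users VALUES (1, 'ada')`); err != nil {
            t.Fatal(err)
        }

        var name string
        if err := db.QueryRow(`SELECT name FROM users WHERE id = 1`).Scan(&name); err != nil {
            t.Fatal(err)
        }
        if name != "ada" {
            t.Fatalf("got %q, want %q", name, "ada")
        }
    }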

We here at DoltHub use it to provide SQL to Dolt.


How hard would it be to use this as an in-memory replacement for MySQL for testing, let's say, a Rails project?

Given how important the DB layer is I would be careful to use something like this in production, but if it allows speeding up the test suite it could be really interesting.


I know it is a matter of choice, but why was MySQL chosen instead of PostgreSQL? The latter seems to be more popular on Hacker News.


Typically, things that are popular on Hacker News are not the most popular with the rest of the world.


> Why is Dolt MySQL flavored?

TLDR; Because go-mysql-server existed.

https://www.dolthub.com/blog/2022-03-28-have-postgres-want-d...

We have a Postgres version of Dolt in the works called Doltgres.

https://github.com/dolthub/doltgresql

We might have a go-postgres-server package factored out eventually.


Could this be used as a kind of connection proxy to allow more clients to share a single pool of database servers?


Is there anything like this for postgres?


cockroachdb might be close: https://github.com/cockroachdb/cockroach


What's the purpose of this idea? Snapshotted mysql server? Who uses that and for what purpose?


Congrats, looks like a lot of hard work!

Could I swap the storage engine for my own key-value storage, e.g. RocksDB or similar?


Yes, that's the idea. Writing a simple read-only database backend is not too tough.


Why read only? What's stopping this engine from using (for example) FoundationDB as storage?


Nothing, it's just quite a bit more complicated to implement a backend that supports writes.


"Compatible" has many aspects. I'd be interested in the replication protocols.


With Vitess likely consolidating its runtimes (vtgate, vtctl, vttablet, etc) into a single unified binary: https://github.com/vitessio/vitess/issues/7471#issuecomment-...

... it would be a wild future if Vitess replaced the underlying MySQL engine with this (assuming the performance is good enough for Vitess).


I don't think this is in the cards for vitess, their whole architecture is built around managing sharded mysql instances.


Why not standard conforming SQL instead of MySQL?


Shouldn't these projects have a perf comparison table? There was a post a couple days ago about an in-memory Postgres, but it had the same problem on the perf front.

If someone is considering running it, they're probably considering it against the actual thing. And I would think the main decision criterion is: _how much faster tho?_


This is a reasonable point, we'll run some benchmarks and publish them.

We expect that it's faster than MySQL for small scale. Dolt is only 2x slower than MySQL and that includes disk access.

https://docs.dolthub.com/sql-reference/benchmarks/latency


Thanks! Appreciate your response.

In dynamic language land, we tend to use real DBs for test runs. So having a faster DB wouldn't hurt!


Isn't that........ Vitess?


Vitess (a fork of it anyway) provides the parser and the server. The query engine is all custom go code.


Ah, cool, that makes sense. Thanks for clarifying


TiDB!


Performance comparison against the original?



