I really appreciate the trend of rewriting C/C++ libraries in Go. It has always been really frustrating that hacky library wrappers for other languages leave performance and features on the table because they're either incomplete or just too hard to implement in the host language. For most of the languages out there, it is always better to have a native implementation.
There are now a number of great embedded key/value stores for Go, which make it really easy to create simple, high performance stateful services that don't need the features provided by SQL.
> For most of the languages out there, it is always better to have a native implementation.
I'm not sure that's true...imagine a K/V store written natively in Ruby, Python, JS... People have done what DGraph did before, in other languages -- writing log-structured databases in Haskell, &c -- but without a very large community you didn't get the level of improvement and testing that you see with something written in C and used across many languages. The JVM is an example of an environment where the "native" approach has worked out well; but is that "most of the languages"?
go build -o libWithExport.so -buildmode=c-shared myProject
There exist libraries that do provide an ORM, but since the declarative side of golang is limited compared to languages like Python, I find them unwieldy. Also, the current trend shuns the use of an ORM in golang and encourages, directly or indirectly, writing SQL queries and interacting with the database through database/sql and the libraries extending it, like jmoiron/sqlx.
Naturally, it's possible to build a solution to persist Go maps and to make them safe to access across goroutines, but by that point you've built a persistent key/value store.
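To illustrate how little is involved, here's a hedged sketch of that "accidental key/value store": a mutex-guarded map plus a gob snapshot for persistence. All names here are made up for illustration.

```go
package main

// Toy illustration: a goroutine-safe map with a gob snapshot for
// persistence — the point being that this is already most of a
// naive key/value store.

import (
	"bytes"
	"encoding/gob"
	"sync"
)

type Store struct {
	mu   sync.RWMutex
	data map[string]string
}

func NewStore() *Store { return &Store{data: make(map[string]string)} }

func (s *Store) Set(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[k] = v
}

func (s *Store) Get(k string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[k]
	return v, ok
}

// Snapshot encodes the map; writing the bytes to disk (and fsyncing)
// is what would turn this into a persistent store.
func (s *Store) Snapshot() ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	var buf bytes.Buffer
	err := gob.NewEncoder(&buf).Encode(s.data)
	return buf.Bytes(), err
}

func main() {
	s := NewStore()
	s.Set("k", "v")
}
```

Everything a real store adds on top — crash safety, compaction, iteration order, not holding the whole dataset in RAM — is exactly the hard part.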
If we're going to rewrite libraries which may be used by other applications, which get deployed on servers, where people will need to update things in the name of security...
It would be nice if those libraries weren't statically linked, and thus wouldn't need to be rebuilt, pushed and redeployed for every one of their dependent libraries which had a security issue.
The only place I can imagine Go-based software being useful is for code you've built yourself, for use in an always-rolling-forward cloud environment you maintain yourself, where updating dependencies and rebuilding is part of the day-to-day operations.
Trusting Go-code written by other people for anything you deploy publicly sounds like something I'd rather never do. It sounds like a security nightmare.
Which is, you will notice, the core of where Go comes from.
Also, you may notice that "server" software is generally moving in this direction very quickly. "Containers" are basically static linking writ large. We're still early in the curve on this but I expect in the next few years we're going to see more people start pointing out that dynamic linking is basically obsolete nowadays (the first few brave souls have already started mumbling it), and even the only putative reason that people keep citing for it being a good idea, the ability to deploy security updates (but not ones that require ABI changes), is fading fast and was less important than people made it out to be anyhow. First, lots of people still basically don't do those updates. Second, even if you deploy the updates you still have to take the service down to restart it. Third, by the time you've instantiated the architecture that lets you manage your dynamic library updates at scale you've also instantiated the architecture you will need to simply rebuild your static-everything containers and redeploy them.
In another ten years I expect it to be simply common knowledge that if you can't completely rebuild and redeploy everything easily for a fix, you're not really operating at a professional level.
I'm curious: is it a requirement to have types in order for QuickCheck to make sense? So that you know what type of data to hammer a function with, for example.
And possibly Go has the right approach! Is it better to make C integration as simple and smooth as possible, to leverage existing libraries, or is it better to encourage people to ditch all that unsafe C code and write everything in Go?
The reason calling C code from Go is slow is mostly that Go has a different ABI: it does weird things with the stack, calling conventions, etc. But that doesn't stop you from calling assembly written to that ABI directly from Go. In fact, that is done a lot in the standard library. It would be possible to create a wonky, C-like, low-level language that compiles into machine code with the Go ABI. Let's call this language Go-- :). With something like that, it might be possible to translate existing C code into Go--. However, the most likely use for Go-- would be in performance-critical sections of your Go code.
If the workflow requires you to modify or annotate the C code, that's a lot of work (and risky) and you might as well just rewrite it in Go.
If the workflow is totally automatic, so you can just use off-the-shelf C code, that's great! But in that case it's effectively just a C compiler, the "Go--" bit seems like a red herring.
It reminds me a bit of "C--", Simon Peyton Jones' suggestion for a low-level target for Haskell and similar languages. It would remove some C functionality that language runtimes don't really need, and add some extra low-level stuff like register globals and tail calls. I don't think that got much traction, but it may have influenced the design of LLVM.
It would also be great to have some comparison of write performance with fsync enabled.
Overall, very interesting project, bookmarked, will definitely follow the development!
fio can easily do this because it spawns a number of threads.
While working with RocksDB we also found that range iteration latency was very bad compared to a B+-tree, and that RocksDB gets good read performance mostly on random reads because it uses Bloom filters.
Does anyone know if this got fixed somehow recently?
If someone would like to contribute to Badger, I'm happy to help them dig deeper in this direction.
sudo apt-get install libaio1 libaio-dev
I think the best bet is to build a fio equivalent in Go (shouldn't take more than a couple of hours), and see if it can achieve the same throughput as fio itself. That can help figure out how slow Go is compared to using libaio directly via C.
See this issue:
I think the real advantage of Badger is that it's not as sophisticated as Redis, i.e. you can have Redis-like functionality compiled into your application, with less faffing about setting up another daemon / cloud micro-service inside your NAT / VPC / whatever.
If you want persistence, then I'd recommend persisting to disk. While I've not had this fun with Redis, I've written code that took out an entire Cassandra ring. Had that data only been in memory, it would not have been pretty. Just because something is distributed doesn't mean it's guaranteed to never go completely down.
(That said, if you're using Redis as an in-memory cache, this is a potentially acceptable tradeoff.)
An application developer would not choose Badger, but instead would pick a DBMS such as Redis.
A database engineer would use Badger to develop a DBMS that the application engineer could use. If the database engineer so chooses they could expose a Redis compatible API.
Cassandra, for example, has its own storage engine that's responsible for writing bits to disk.
I know rocksdb is LSM-based and was built by FB to address write amplification on SSDs.
Badger, RocksDB, LMDB, etc. are storage engines. A process uses these storage engines to write data to memory (note: the storage engine may support persistent, volatile, or both types of memory).
A database management system (DBMS) is a higher level concept that often has multiple processes either on a single server or distributed across multiple servers. Simply stated, each process within a DBMS uses the storage engine to read/write to/from memory (persistent or volatile).
It's important to note that storage engines are a specialized area and require different skills from writing a DBMS. It's a big deal when other people write high quality storage engines because it makes it a lot easier to write a DBMS.
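To make the separation concrete, here's a purely illustrative sketch of the kind of narrow interface a DBMS layer might program against, with the storage engine behind it. The method names are made up, not taken from Badger or RocksDB.

```go
package main

// Illustrative only: a minimal storage-engine interface, plus a trivial
// volatile implementation standing in for a real LSM tree or B+-tree.

import "errors"

var ErrKeyNotFound = errors.New("key not found")

// StorageEngine is the boundary between a DBMS and its storage layer.
type StorageEngine interface {
	Get(key []byte) ([]byte, error)
	Set(key, value []byte) error
	Delete(key []byte) error
	Close() error
}

type memEngine struct{ m map[string][]byte }

func newMemEngine() *memEngine { return &memEngine{m: make(map[string][]byte)} }

func (e *memEngine) Get(key []byte) ([]byte, error) {
	v, ok := e.m[string(key)]
	if !ok {
		return nil, ErrKeyNotFound
	}
	return v, nil
}

func (e *memEngine) Set(key, value []byte) error {
	e.m[string(key)] = value
	return nil
}

func (e *memEngine) Delete(key []byte) error {
	delete(e.m, string(key))
	return nil
}

func (e *memEngine) Close() error { return nil }

func main() {
	var eng StorageEngine = newMemEngine()
	_ = eng.Set([]byte("k"), []byte("v"))
}
```

Everything above the interface (query language, replication, networking) is the DBMS; everything below it is the storage engine's problem.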
It seems that databases often want to persist/flush data in unfinished (4KB or 8KB) pages while they're being built; once the pages are full, they don't change much and can be persisted normally. Another kind of page is one that changes very frequently - e.g. a single "root" page which keeps counters or other frequently updated state.
It seems a bit wasteful that multiple "checkpoints" (flushes/fsyncs) on partial blocks trigger whole-block rewrites at the hardware level. Similarly, "root"/"meta" pages that track just a few frequently changing bytes trigger the same whole-page rewrites.
To be honest, even some kind of PCI card with a little battery and a slot for DDR4 would probably do the trick, no? The rest could be implemented in software - as long as you had access to fast flushing with battery-backed memory that survives a hard crash, it should be fine.
Is this a silly idea?
- Does this have a log file? If so, what does it log and can it be disabled?
- How is the data stored? (Single file, multiple files, etc.)
- What is the RAM usage like?
Shameless plug: if you have a write-once (or seldom) and read-often kind of access pattern for JSON documents, I wrote a simple (619 LOC), pure Go key-value store that supports this use case: microblob. It logs in common format, uses a single-file backend, and scales up and down with RAM.
(https://open.dgraph.io/licensing is a dead link)
AGPLv3 is just a license; it is not "draconic", nor a "cancer" (despite what Ballmer wants you to believe). It simply represents an ethical/moral agreement with the ideology of software freedom, and since it is their code, they are free to express what they believe via the appropriate license. Nobody is making them do it, nobody is forcing you to use it, and nobody is demanding you agree with it. It was a free choice and is therefore as far from "draconic" as one can possibly be, unless you believe that everybody who doesn't subscribe to your worldview is "draconic" by definition.
As an aside, one can certainly describe something as draconic(or whatever else) if one views it as such; it's just an opinion.
Can you give me any more info on your sources?
I would also be surprised if Apple (being so allergic to the GPLv3) used AGPLv3 software, though that's just speculation.
Why should that be a laudable goal? Not all projects are megalomaniac.
1) copyleft versus non-copyleft
2) which copyleft license to choose
3) the strategy of the dgraph
Regarding 1), non-copyleft leads to higher short-term adoption, but copyleft is often the better choice long-term. Moreover, history has shown that if you switch from a non-copyleft to a copyleft license, people will feel tricked. So the "early stage" argument doesn't hold. If you want to use copyleft long-term, better be honest and do so upfront.
Regarding 2), whenever you ask a lawyer in that field, they usually tell you that AGPLv3 is almost always what you want, preferable to GPLv3 and most other alternatives. So the "draconic open source license" argument doesn't hold. AGPLv3 just closes large holes which GPLv3 left open, to ensure that people actually stick to the copyleft principle. So if you want copyleft, choose AGPLv3 unless you have a very compelling reason not to.
Regarding 3), the dgraph people seem to see it in a similar way:
It makes sense from the host business's point of view. If you are the sole contributor, then you're entitled to do what you like. Moreover, you're also free to charge for commercial licences.
This licencing model wouldn't be appropriate for a community-centric database, such as PostgreSQL. With many contributors to core, no one would be able to arbitrage the situation.
And Mongo. Also RethinkDB until the parent company folded.
> This licencing model wouldn't be appropriate for a community-centric database, such as PostgreSQL.
I don't disagree.
It's interesting, though, how counterintuitive this is. I would think that GPL wouldn't be a problem for individual contributors (the types of participants I imagine when I think of a "community"), but for business contributors who don't want competitors to take advantage of their modifications. And yet, anything a business contributes back to an MIT/Apache project is actually less protected than a contribution to a GPL project.
> I don't disagree.
> It's interesting, though, how counterintuitive this is. I would think that GPL wouldn't be a problem for individual contributors (the types of participants I imagine when I think of a "community"), but for business contributors who don't want competitors to take advantage of their modifications. And yet, anything a business contributes back to an MIT/Apache project is actually less protected than a contribution to a GPL project.
Well, a lot of them want to, at least temporarily, distribute some features without releasing them. And that simply doesn't work for GPL projects, unless there's a sole owner and all external contributions are made under some form of CLA. There's a lot of open-core type projects, but in my experience they're on average less healthy than projects with multiple contributing entities.
For PostgreSQL there've been a lot of closed source forks, but a lot of them folded and/or couldn't keep up with the amount of changes and thus are based on some super old version (hello Redshift, hello Greenplum). The only ones that appear to be able to keep up are ones 1) that move more invasive changes upstream after a while and religiously rebase after every release, never delaying, or 2) move their modifications into extensions, possibly adding the necessary extension APIs to core PostgreSQL.
In my personal opinion, AGPL is largely chosen to prevent various cloud providers from profiting significantly from $product without ever giving back. That's control, but a very specific form of it. I don't personally like AGPL's legalese; it's very, very imprecise.
Maybe it just seems imprecise to a layperson because they don't know the well-defined meaning of various legal terms? (... and confuse these with their fuzzy meaning in ordinary language)
You're sure they were talking AGPLv3 and not [L]GPLv3?
The definition of what constitutes an interactive program is quite vague (sect. 0 and 13). Let's say you have a database server under AGPL (Mongo, or say Citus). Clearly they support interactive access in some form, but from the perspective of a user of an application using said database, access is not interactive, nor is it clear how the database could provide such an interactive notice. Various vendors have addressed that issue with clarifying notes about their understanding, but that definitely increases the doubts of possible users, including their lawyers.
I agree 'written purely in Go' would have been a better choice of words, though.
- Either you have a lot of random new writes and not so many updates, in which case LSM trees will ingest new data as fast as the disk can store them
- Or you care more about read performance, and LMDB and BoltDB will have more predictable (and arguably better) performance
InfluxDB is a timeseries database, so they write a lot of stuff and don't even read all of it. As data gets older it can be pruned efficiently with an LSM-based design, not so easily with B+tree. Dgraph on the other hand seems to sit right in the middle as it wants to be a general purpose database so there's no easy winner here. Hopefully the choice was correct for most use cases.
Here is a pretty good writeup: https://docs.influxdata.com/influxdb/v0.9/concepts/storage_e...
Bolt also uses insane amounts of RAM and writes get slower and slower as the size of the database increases. (Personal experience, don't have benchmarks. Take this at face value.)
If huge, long-running transactions aren't needed, then it may be easier to just let transactions happen in RAM, with a cache to enforce isolation and emulate atomicity. So for example, while a transaction is committing, the system would need to cache the previous (that is, currently committed) version of every key, to avoid having other transactions read the on-disk data, which is in the process of being updated. It would also need either a redo log or an undo log so that, in the event of a crash, any half-written transactions can be either replayed or rolled back.
RocksDB performs much better and is the most popular and efficient KV store in the market, being used at both Google (LevelDB) and Facebook. Therefore, the benchmarks are against that. Without spending time generating benchmarks, I'll bet that Badger would beat BoltDB any day.
The idea of using B+-trees with a value log has potential. One would need to do some obvious optimizations to decrease the frequency of writes to disk, because SSDs have to run garbage collection cycles, which can affect write performance in a long-running task.
But, I think it would make a great research project. And if it comes out to be better than our current approach of LSM tree in read-write performance, I'd switch in a heartbeat; because I think B+-tree might be a simpler design.
BoltDB author here. I agree with you that Badger would beat Bolt in many performance benchmarks but it's an apples and oranges comparison. LSM tree key/value stores are write-optimized and generally lack transactional support. BoltDB is read-optimized and supports ACID transactions with serializable isolation. Transactions come with a cost but they're really important for most applications.
Regarding benchmarks, LSMs typically excel in random and sequential write performance and do OK with random read performance. LSMs tend to be terrible for sequential scans since levels have to be merged at query time. B+trees are usually terrible at random write performance but can do well with sequential writes that are batched. They tend to have good random read performance and awesome sequential scan performance.
It comes down to using the right tool for the job. If you don't need transactions, Bolt is probably overkill. There's a whole section on the BoltDB README about when not to use Bolt:
> It is just badly implemented, acquires a global mutex lock across all reads and writes.
You're welcome to your opinion about it being "badly implemented" but the global mutex lock across reads and writes is simply untrue. Writes are serialized so those have a database-wide lock. However, read transactions only briefly take a lock when they start and again when they stop so they can obtain a snapshot of the root node. That gives the transaction a point-in-time snapshot of the entire database for the length of the transaction without blocking other read or write transactions.
So, you should really test your read/write ratio with the size of the keys and payloads before selecting one solution or another. Even on the Badger tests, you can see that it can vary a lot.
Anyone aware of such a thing?