Ask HN: What features would you like to see in the next generation of DBMSs? - ajz
======
whatnotests
Having used RethinkDB[0] for a few things over the last year or so, I'm
convinced that it represents "what's next" for DBMSs.

* The community is great.

* Documentation is amazing, and up to date.

* Examples actually work.

* Installation is simple. Runs on multiple platforms.

* Clustering is easy. Sharding is easy. Management is easy.

* Built for today's needs, not for what we were doing 30 years ago.

If you haven't yet taken a look at RethinkDB, do yourself a favor and spend a
couple hours dinking around with it. You may just be impressed.

[0] [http://rethinkdb.com/](http://rethinkdb.com/)

------
kristianp
How about one that supports a fast serialisation format such as FlatBuffers
[1] in the communication protocol? JSON is so 2015 ;).

[1]
[https://google.github.io/flatbuffers/](https://google.github.io/flatbuffers/)
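A real FlatBuffers example needs a schema compiled with `flatc`, which is out of scope for a comment, but the size win of a fixed binary layout over JSON is easy to sketch. This is plain struct packing, not FlatBuffers itself (which additionally gives you zero-copy reads); the field names are made up:

```python
import json
import struct

# A row as a DBMS wire protocol might send it.
row = {"id": 42, "score": 3.14, "active": True}

json_bytes = json.dumps(row).encode("utf-8")

# A packed binary encoding of the same fields:
# unsigned 64-bit id, 64-bit float score, 1-byte bool.
packed = struct.pack("<Qd?", row["id"], row["score"], row["active"])

# The binary form is a fraction of the JSON size, since field
# names and number formatting are gone from the wire format.
```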

------
PaulHoule
High flexibility. I want to have just the indexes I need to run my queries
with insane speed, plus effective compression of all data. More of a "database
construction set" than an actual database.

Oh yeah, and something that is a cross between SPARQL and SQL 1999.

~~~
brickcap
>I want to have just indexes to do the queries I need to do with insane speed

CouchDB seems to fit this requirement really well. You explicitly create
indexes on the fields that you want to query. The index creation part is slow,
but once it's done your queries are really fast. All data is compressed with
snappy[1].

CouchDB also encourages you to split your data across multiple databases.
Effectively you can have thousands of databases all managed by a single
CouchDB server instance. You can move your data into temporary databases and
"purge" them when they are no longer required. It's all really cool once you
get the hang of how to use this feature, although if you query across
databases you'll have to "join" the result sets within your application.

You should give it a try, you'll really like it :)

[1][https://github.com/google/snappy](https://github.com/google/snappy)
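For context, index creation in CouchDB happens through design documents containing JavaScript map functions. A minimal sketch of building such a document client-side — the database, view and field names here are made up:

```python
import json

# A CouchDB design document: the "map" function decides which
# fields get indexed; CouchDB builds the index lazily, on first query.
design_doc = {
    "_id": "_design/by_field",
    "views": {
        "by_name": {
            "map": "function (doc) { if (doc.name) { emit(doc.name, null); } }"
        }
    },
}

# This JSON body would be PUT to http://localhost:5984/<db>/_design/by_field
body = json.dumps(design_doc)
```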

~~~
eecks
So is the index cached then?

~~~
brickcap
Indexes are stored on disk. As new data comes in that satisfies the view
function you have written, it is automatically appended to the index that has
already been built up to that point.

The query results are returned with ETags that you can cache like any other
HTTP resource.

[http://stackoverflow.com/questions/4952429/couchdb-supports-caching](http://stackoverflow.com/questions/4952429/couchdb-supports-caching)
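The ETag flow is ordinary HTTP conditional caching, nothing CouchDB-specific. A minimal client-side sketch of the idea (no real server involved, URLs and tokens invented):

```python
# ETag-based caching for a view response: the client replays the
# last ETag in If-None-Match; a 304 means the cached copy is valid.

cache = {}  # url -> (etag, body)

def conditional_headers(url):
    """Headers for a follow-up request to a previously seen URL."""
    if url in cache:
        etag, _ = cache[url]
        return {"If-None-Match": etag}
    return {}

def handle_response(url, status, etag, body):
    """Update the cache; on 304 Not Modified, serve the stored body."""
    if status == 304:
        return cache[url][1]
    cache[url] = (etag, body)
    return body

# First fetch populates the cache; a later 304 reuses it.
first = handle_response("/db/_design/d/_view/v", 200, '"1-abc"', b"rows")
again = handle_response("/db/_design/d/_view/v", 304, None, None)
```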

------
LarryMade2
Some tool for upgrading/rolling back data when doing updates. We've got so
many nice tools for programs; it's about time we had a DB that could do the
same... and versioning?

~~~
brickcap
The documents that are stored in CouchDB are versioned _as long as_ you don't
replicate or compact the database. You can query for all the _revs of a
document, and you can make API calls to retrieve an older _rev of the
document.

The _revs themselves follow a semi-human-readable pattern, like so:

`1-some_uuid`, `2-some_uuid`, `3-some_uuid` and so on.

The versioning functionality is not comprehensive but you certainly have the
building blocks.
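That rev pattern (a generation counter, a dash, then a uuid-like token) is easy to work with programmatically; a small sketch, with invented token values:

```python
# CouchDB revision ids look like "<generation>-<token>": the number
# before the dash increments on every update to the document.
revs = ["1-some_uuid", "2-some_uuid", "3-some_uuid"]

def generation(rev):
    """Extract the update count from a revision id."""
    return int(rev.split("-", 1)[0])

latest = max(revs, key=generation)
```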

------
nonuby
Automatic RESTful API endpoints, ideally with watch (long-poll and WebSocket)
paths too.

~~~
brickcap
Have you looked at CouchDB? It has an HTTP API, and you can "watch"
modifications made to the database using the "_changes" feed, which has
longpoll, eventsource and continuous modes.

WebSockets are not supported out of the box yet, but you can always put a
proxy in front of CouchDB and forward the changes feed over the socket
connection once it's established.
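The feed styles are just query parameters on the `_changes` endpoint; a sketch of building such a request URL, assuming a local server and a database called `mydb`:

```python
from urllib.parse import urlencode

# Feed styles CouchDB supports: "normal", "longpoll",
# "continuous" and "eventsource".
params = {
    "feed": "longpoll",
    "since": "now",          # only report changes from this point on
    "include_docs": "true",  # inline the changed documents in the feed
}

url = "http://localhost:5984/mydb/_changes?" + urlencode(params)
```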

