

You can use CitusDB (PostgreSQL, columnar). Not the same, but close.

Build a way to backup our brain+body. Do backups daily/weekly/monthly and store them in various locations. If you die, tell your family to restore from backup.

I was just looking at the wiki page https://wiki.postgresql.org/wiki/Grouping_Sets yesterday hoping that one day it would be in.
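For anyone who hasn't used it: GROUPING SETS computes several GROUP BY aggregations in one pass. A rough sketch of what `GROUPING SETS ((region), (product), ())` means, emulated with the UNION ALL you had to write before PostgreSQL got native support (table and column names here are made up for illustration, and SQLite is used only because it's in the stdlib):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT, amount INT);
    INSERT INTO sales VALUES
        ('east', 'a', 10), ('east', 'b', 20), ('west', 'a', 5);
""")

# Equivalent of GROUP BY GROUPING SETS ((region), (product), ()):
# one result set containing per-region totals, per-product totals,
# and the grand total, with NULLs in the columns not being grouped.
rows = conn.execute("""
    SELECT region, NULL AS product, SUM(amount) FROM sales GROUP BY region
    UNION ALL
    SELECT NULL, product, SUM(amount) FROM sales GROUP BY product
    UNION ALL
    SELECT NULL, NULL, SUM(amount) FROM sales
""").fetchall()
```

The native syntax lets the planner compute all three groupings from a single scan instead of three.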

I now see that PostgreSQL is the one true database. Everyone should abandon all other databases that don't have special features (ex: sqlite embedded).

> Everyone should abandon all other databases that don't have special features (ex: sqlite embedded).

SQLite embedded does have special features. It's embedded.

I believe the GP used that as an example of what not to abandon.

Is "ex" short for "example" or "except"?

I use it for both. In that case it was 'example'.

Just added a line to that page saying it's in :)

Can you explain HOW you are compressing the JSON data? Ex: is it just block-pgzip compression? Or are you exploding each jsonb field into a separate file, like with normal columns?

I think I've read an identical copy of your comment on other threads from this domain.

That only happens if you add a non-nullable column.


I think the MySQL tests aren't open-source anymore (a gift from Oracle).


columnstores generally don't support regular indexes (although they have different structures, like zone maps)
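A toy sketch of the zone-map idea (not any real engine's format): store per-block min/max so a scan can skip blocks that cannot match a predicate, without maintaining a per-row index.

```python
# Hypothetical data split into fixed-size blocks.
data = list(range(1000))
BLOCK = 100
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# Zone map: (min, max) per block, kept alongside the column.
zone_map = [(min(b), max(b)) for b in blocks]

def scan_eq(value):
    """Equality scan that skips blocks whose [min, max] excludes value."""
    hits, blocks_read = [], 0
    for (lo, hi), b in zip(zone_map, blocks):
        if lo <= value <= hi:          # block *might* contain the value
            blocks_read += 1
            hits.extend(x for x in b if x == value)
    return hits, blocks_read

# On this sorted data, scan_eq(250) reads 1 of 10 blocks.
```

Zone maps work best when the column is sorted or clustered; on random data every block's min/max range overlaps the predicate and nothing gets skipped.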

each column is stored separately on disk, so only the requested columns' values are read from disk

This makes it slow to select/update/delete a single row (OLTP), since it needs to fetch multiple pages (one per column). And it makes queries over large amounts of data (OLAP) fast, via vectorized query execution and sequential reads from disk.
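The row-vs-column trade-off above can be sketched in a few lines (a toy in-memory model, not how any real engine lays out pages):

```python
# Three rows of (id, name, age).
rows = [(1, "alice", 30), (2, "bob", 25), (3, "carol", 41)]

# Row store: each tuple is contiguous, so one row = one read.
row_store = list(rows)

# Column store: one contiguous array per column.
col_store = {
    "id":   [r[0] for r in rows],
    "name": [r[1] for r in rows],
    "age":  [r[2] for r in rows],
}

# OLAP-style aggregate: scans a single contiguous column array.
total_age = sum(col_store["age"])                 # touches 1 of 3 columns

# OLTP-style point lookup: must reassemble the row from every column,
# i.e. one fetch per column instead of one fetch total.
i = col_store["id"].index(2)
row = tuple(col_store[c][i] for c in ("id", "name", "age"))
```

The aggregate touches one array; the point lookup touches all three, which is the "multiple pages per row" cost at disk scale.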


Note, updatable columnstores suck everywhere, so I don't think so. They're only for OLAP, not OLTP (Parse is OLTP).

What may help @azinman is group compression (ex: bigger pages of rows, compressing the whole page; each JSON key can be repeated many times within a page since each page holds multiple rows). This is what happens in TokuDB, HBase, Hypertable, and Cassandra.
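A quick demonstration of why page-level compression beats per-row compression for JSON with repeated keys (the row shape is invented; zlib stands in for whatever codec an engine actually uses):

```python
import json
import zlib

# 100 JSON rows sharing the same keys, as a jsonb-heavy table might.
rows = [{"user_id": i, "event_type": "click", "ts": 1700000000 + i}
        for i in range(100)]

# Per-row compression: each tiny row is compressed alone, so the
# repeated keys ("user_id", "event_type", "ts") cost full price per row.
per_row = sum(len(zlib.compress(json.dumps(r).encode())) for r in rows)

# Page-level compression: one compress call over the whole page, so the
# compressor sees each key 100 times inside one window and dedups it.
per_page = len(zlib.compress(json.dumps(rows).encode()))
```

On this toy data `per_page` comes out far smaller than `per_row`; the gap grows with rows per page and with key length.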


