It is strange that there aren't more writeups on PostgreSQL migrations.
Yandex had a good one. Posted here a long while ago: https://news.ycombinator.com/item?id=12489055
It's probably because switching databases is very painful and rare.
Speaking of economic incentives.
That could be a thesis topic for my hitherto unwritten PhD in Behavioral Economics:
"Choosing to pay: on the behavioral aspects of top leadership, and why people still choose proprietary. A quantitative case study."
Right now anyone can afford a DB server with 128 GB of RAM, enough to solve most problems. #YourDataFitsInRam
Could you explain?
Personally I use Oracle (and hopefully one day PG) because it gives me guarantees on data integrity and these guarantees help me to think about my programs more easily.
In the early 2000s, a serious database machine -- say, a Sun E10000 -- had up to 64 CPUs running at 400-500MHz each, 64 GB of RAM, and a huge cabinet full of disks.
Now you can call up a SuperMicro VAR and order a 64-core AMD EPYC server with 2 TB of RAM and 4 TB of NVMe drives that talks to other machines over 100Gb/s Ethernet, for around the price of a used Tesla S85.
If your database is under 2TB, it all fits in RAM on commodity hardware, where commodity is defined as "anyone with a credit card can order it without even talking to sales".
I didn't know that. At that time I was discovering Applesoft basic :-)
sudo apt-get install postgresql
was amazing to me!
I took the chance to upgrade our PostgreSQL instance. It took less than an hour. It was a smaller database, but the difference in complexity was still impressive.
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=password' -p 1433:1433 -d microsoft/mssql-server-linux
sudo apt-get install -y mssql-server
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo add-apt-repository "$(curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list)"
sudo apt-get update
sudo apt-get install -y mssql-server
sudo /opt/mssql/bin/mssql-conf setup
sudo apt-get install -y mssql-tools unixodbc-dev
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
All of that is on top of how heavyweight it is, since it contains a platform abstraction layer consisting of a very large amount of Windows running in user mode, as discussed by others in this thread.
I'm surprised at how many people seem to have taken issue with a simple comment. Postgres is easy to install... and now so is MSSQL compared to the complex slow GUI installer they had before. Maybe it'll help the original commenter, maybe not. Let's just leave it at that.
> I'm surprised at how many people seem to have taken issue with a simple comment.
The thing is that you must migrate your database between major PostgreSQL releases, and you don't want your distro to force it on you. The community PostgreSQL repo follows the same rule: there is a separate repo for each major release, so no surprises there either.
To add insult to injury, if you're using Docker in a VM (as you must on a Mac), the instructions on that Docker Hub page say you need a minimum of 3.25GB of RAM dedicated to the VM. That's quite a heavy lift if all you want is a database instance for local development.
And then sure, you could run SQL Server on Linux for your production instances, but you're well off the beaten path. You're going to have to roll your own infrastructure rather than relying on AWS RDS, Google Cloud SQL, Heroku, etc etc. And the moment you have a performance problem the first advice you'll hear seems likely to be "try it on Windows and see if it reproduces there".
(I'm a huge pg fan if that isn't clear - just ...)
It seems you're just guessing on that last part though - SQL Server has been in production for decades, already has lots of tooling, and there are even new CLIs available for the Linux environment. Production SQL Server comes with support from MS; you're definitely not on your own, nor will they just tell you to run it on Windows. That is simply not how enterprise support works, and they would have a hard time staying in business with answers like that.
We can discuss support issues but it's best to leave the assumptions and hyperbole out of it.
I'm sure you're right that MS would be happy to sell support. What I don't see is a reason to care, unless I run into a Windows app that can't use another DB and that I suddenly need to run under Linux.
> ...SQL Server on Linux for your production instances, but you're well off the beaten path...first advice you'll hear seems likely to be "try it on Windows and see if it reproduces there"...
You don't think this is hyperbole? Have you actually used SQL Server or Microsoft support before? They do not treat enterprise customers this way, and both Linux and MSSQL have plenty of tooling available. There's nothing challenging here.
> What I don't see is a reason to care
You're not the target market then. MSSQL has some compelling features and a big ecosystem, adding cross-platform support and a much easier installation process only helps customers with more options for deployment.
sudo apt-get install mssql-server
* Native partitioning
* Parallel query
Honestly, some very welcome quality of life improvements for use cases even outside of what I would consider "Big Data".
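For anyone curious what the parallel query side looks like in practice, here's a minimal sketch (the table name is made up; the plan output should show a Gather node and the number of workers launched):

SET max_parallel_workers_per_gather = 4;
EXPLAIN ANALYZE SELECT count(*) FROM big_table;
-- look for "Gather" and "Workers Launched" in the resulting plan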
> This means that users no longer need to create triggers for routing data; it's all handled by the system.
Trigger routing has always been a performance footgun... to the point that it's sometimes better to handle table routing in application logic. I wonder how the performance of this new "native" partitioning compares to the existing methods (triggers and rules), and whether it makes application routing always inferior.
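For context, a hedged sketch of the two approaches (all names made up). Pre-10 routing meant a trigger like:

CREATE FUNCTION route_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.logged_at >= '2017-10-01' AND NEW.logged_at < '2017-11-01' THEN
        INSERT INTO measurements_2017_10 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for %', NEW.logged_at;
    END IF;
    RETURN NULL;  -- cancel the insert into the parent table
END;
$$ LANGUAGE plpgsql;

whereas in 10 the routing is declarative and handled by the system:

CREATE TABLE measurements (
    logged_at timestamptz NOT NULL,
    value     double precision
) PARTITION BY RANGE (logged_at);

CREATE TABLE measurements_2017_10 PARTITION OF measurements
    FOR VALUES FROM ('2017-10-01') TO ('2017-11-01');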
where ((@foo is null) or (foo = @foo))
and ((@bar is null) or (bar = @bar))
Depending on statistics, order of execution, the indexes in place, and the phase of the moon, this could produce fast results or fall back to a full table scan (because it reuses a cached query plan with the wrong index).
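FWIW, the usual workaround for this catch-all pattern (in SQL Server's dialect, which the @-parameters above come from) is to trade the cached plan for one compiled against the actual parameter values; a sketch with a made-up table:

SELECT *
FROM things
WHERE (@foo IS NULL OR foo = @foo)
  AND (@bar IS NULL OR bar = @bar)
OPTION (RECOMPILE);  -- build a fresh plan for these specific values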
Devs still need some awareness of how queries translate to actual running instructions, and should measure that the results are in line with what is expected.
(I made it happen with an older version of things and ... yes, argh)
There are also still some performance kinks to work out with the current hash index implementation, particularly when growing the index quickly or under very high concurrency.
Edit: not seriously asking you to stop, but the interview question part is true
DBAs are expected to understand the crappy inefficient row-at-a-time ORM in use by the devs, which implies understanding a particular programming language, but not the other way around? There’s nothing magical about SQL databases that a decent developer can’t understand.
You mean all their tools? Like how modern processors handle branch prediction in the face of out-of-order execution, or how differences in server cooling affect throughput or peak performance?
Do you have a deep understanding of all the tools you use? Because I honestly doubt it.
Specifically with ORMs, it does seem that a lot of devs don't understand or care about the generated SQL... but that question was way above the knowledge level I would expect/want out of someone, IMO.
I guess it would be helpful for this feature to also include some documentation about if/when to use hash indexes, since up to this point the docs have carried a warning basically saying "don't use these." Given the feature's history, I'd imagine there will be a lot of confusion around it, and people will simply not consider it.
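For reference, the feature itself is a one-liner (table and column made up); what's new in 10 is that the index is WAL-logged, so it's crash-safe and gets replicated:

CREATE INDEX sessions_token_hash ON sessions USING hash (token);
-- only helps equality lookups (WHERE token = $1); it cannot do range scans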
This tiny little mention is a massive win! We now have access to the "und-x-icu" collation. As the postgres docs put it: "ICU root collation. Use this to get a reasonable language-agnostic sort order."
I've been waiting on this forever. Columns containing strings from any/all languages can now be sorted well enough to use postgres in truly multi-language applications. If your application has users writing in English, German, Russian, Japanese, et al, using the "und-x-icu" collation should make your application much more user-friendly (as opposed to using the en_US or C locale).
The ICU root collation is far from a perfect system, but it's the best option available today. I've been tied to MySQL purely for its similarly-purposed utf8mb4_unicode_ci collation. Now I can give postgres another very serious look.
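In case it helps anyone else evaluating this, a minimal sketch of using it (requires a server built with ICU support; names made up):

CREATE TABLE messages (
    body text COLLATE "und-x-icu"
);

-- or applied per query:
SELECT body FROM messages ORDER BY body COLLATE "und-x-icu";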
You get great performance for the simple "dumb" use case (relational data in the DB's case, a key/value store in Redis's case), and lots of awesome additional features on top of it for more complex situations.
I love it.
That's like calling Swift "the Ruby of programming languages".
That's like saying <good truck> is a "Prius of 8 wheelers".
That's a really weird comparison, especially since PostgreSQL is more of a database than Redis is.
For anyone else using them: I just did a bit of reading, and it looks like they are inferior to the default 8-byte integer timestamps. Floating-point timestamps have microsecond precision around the year 2000, but it drops off as you move away from that point.
I was using them to match some GPS timing code, but it seems this isn't needed or desirable.
Holy crap, I didn't even know this was feasible, much less in development! Can't wait to test this out, as a significant amount of my data sets have these kinds of relationships.
Parallel query will also be great for certain queries I do regularly. Been looking forward to this!
Thank you, thank you, to all the devs who had a part in this!
Cross-column statistics are a great feature nevertheless.
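For the curious, the new syntax is roughly this (names made up; the "dependencies" kind tells the planner the columns are functionally related, so it stops multiplying their selectivities together):

CREATE STATISTICS city_zip_stats (dependencies) ON city, zip FROM addresses;
ANALYZE addresses;  -- collect the extended statistics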
- Checkbox item for people not wanting MD5 anymore.
- Storing passwords on the server in a securely hashed way, so the admin won’t know your password.
- PostgreSQL developers getting out of the roll-your-own-crypto game.
I don't think they've implemented certificate validation since I left, though.
My naive hope, going on many years now, has been that SCRAM with channel binding would land (the first versions of the patch began to show up years ago), making client-side certificate checking obsolete. Let's get real: it's hard enough to use that many people will not validate when developing from their laptops, simply backspacing out the optional cert validation connection option -- an elision that is invisible to the server. It should be possible to modify the definitions in pg_hba.conf to require a channel-bound SCRAM connection, which would mean that the client is certain to have checked for an untampered certificate.
This implementation of SCRAM doesn't have that yet, but it's been an ambition of the author for some time to do so.
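For anyone wanting to try it, the opt-in looks roughly like this in 10 (a sketch; the role name is made up, and the last line belongs in pg_hba.conf rather than SQL):

ALTER SYSTEM SET password_encryption = 'scram-sha-256';
SELECT pg_reload_conf();
ALTER ROLE app_user PASSWORD 'changeme';  -- re-hashes the stored verifier as SCRAM-SHA-256
-- pg_hba.conf:  host  all  all  0.0.0.0/0  scram-sha-256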
So yes they're public but their addresses are basically impossible to guess.
It's still not good practice, since most systems treat addresses with far less care than passwords and often save and/or transmit them unencrypted.
I still think it's a low-friction solution. But a secure one -- hardly.
IPv6 only, then?
As we work in efficient C/C++, our internal time to parse the query set is closer to psql's than to pgcli's, so we tend to see the Postgres server as fast; but benchmarks run through pgcli did not clearly show the best times.
Also, when you are testing a query on one machine, you have to carefully monitor CPU usage to differentiate CPU pegging by postgres from pegging by pgcli itself.
All those issues were gone when I went back to standard psql.
So if there is a failure, you're far less likely to learn that all your data is gone because some important block on disk was not replicated and the database is not readable. With repmgr, the worst that could happen is that you might lose the few latest transactions, but your data should always be consistent and in a working state.
Edit to add: I'm also interested to hear the approaches people use here. Today we've been setting up WAL-E to run our db backups so we're in that headspace. I'd actually like to decommission my main db server and provision another one to take its place. Getting the new one up and running from the wal-e archives is easy enough, the switch-over is a little more complex.
Postgres fans will hate this, but if you really need something with automated failover, you might have to take a look at better databases: either Postgres/Oracle in the paid SQL world, or Elasticsearch/Cassandra/Riak in the free NoSQL world.
Add Linux-HA for health monitoring and automatic failover. When failed over, the standby machine just starts up PostgreSQL and recovers from the transaction log. Uncommitted transactions on the primary machine that haven't been written to disk and replicated will be lost, which is consistent with the transaction semantics. Committed transactions are replicated and recovered from the transaction log at the standby.
Depending on the heartbeat interval and how fast PostgreSQL starts up and recovers from the log, the automatic failover can happen within seconds.
Configure a virtual IP for the cluster and let the standby machine take over the IP at failover. Linux-HA does all that for you. The clients talk to the virtual IP and don't have to know the primary or standby machine; they just need to reconnect when disconnected at failover.
The nice thing about DRBD and Linux-HA is that they're not just for PostgreSQL. Any disk-based application can benefit.
One of the nightmares of replication is that there's latency, and data never makes it instantly to the other host. This means there is a possibility that, once the master fails, the database files on the other host are in a corrupted state.
If your data is important, you should use the replication mechanisms provided by the database. Repmgr makes it easier to set up replication and perform failover. It also has mechanisms for automatic failover, but I would be careful with that as well.
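The SQL-visible part of native streaming replication setup is small, for what it's worth; a sketch (slot name made up):

SELECT pg_create_physical_replication_slot('standby1');
-- the standby points at this via primary_slot_name in recovery.conf,
-- so the primary retains WAL until the standby has consumed it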
In case the secondary is not available, the data is queued up on the primary and synced to the secondary when it comes back online. If the primary's disk is destroyed at that time, the failure case and data loss are the same as with the database's native replication.
DRBD is much simpler and less error-prone precisely because it does not care to understand the application data; it just deals with disk blocks. The simplicity also helps performance greatly: benchmarks have shown there's negligible impact when doing synchronous replication.
I still think using binlogs is safer and more efficient, though, since they transfer only what's needed to make an update.
DRBD replication is fast because it intercepts the write call and then issues the network send and the write to the underlying disk in parallel. Gigabit or 10G network IO is magnitudes faster than disk seek/write. The write to disk on the standby server can happen in parallel with the primary's write to its underlying disk.
Seriously good work. I'm not sure why my firm still buys Oracle licenses.
Also, some shops are heavily invested in PL/SQL code.
The price of migration may far outweigh the license savings.
Definitely. The cost of the work is big compared to licensing. Now, PG has a "cool factor", so I bet some people would migrate to it off-hours just for fun :-)
Thank you to all of the team members who put your time and effort into this project. It's not only a wonderful tool that I use every day, it's also a model of how to manage a project.
?? What am I missing? I dread having to look up how to do things in Postgres because their documentation is so horrible to find things in.
Why do people still choose MySQL/MariaDB/Oracle over PostgreSQL at all?
The strategy used by PostgreSQL is simpler, more flexible, and cleaner, but there are some cases where MySQL's way of storing things can be significantly faster.
There has been talk of PostgreSQL supporting clustered indexes, but that would probably require reimplementing a ton of stuff to support both storage formats efficiently. Also, it turns out that there are a bunch of ways in which PostgreSQL can cheat such that in most workloads there's no difference at all - though not all of these cheats have been implemented.
Note that I didn't say MySQL is faster or slower than PostgreSQL. Such statements don't make sense, because it depends entirely on your workload, and as long as you aren't literally using PostgreSQL as a key/value store instead of a relational database, you can probably always find a way to make it as fast as necessary.
Heh. It's funny how "the biggest thing" is different for a lot of users. I can't count the number of times I heard "if you only had XXX, I'd migrate / be happy / ..." for vastly different features. If you then tell them what other people think the most critical bit is they look at you with some surprise.
That's not to say these people, including you, are wrong, just that it's not always that clear what the highest priority items are.
It doesn't get easier when there's no coherent direction for the entire project, due to all the different directions various companies and people want to go in -- although that has a lot of advantages too.
WRT clustered tables: I personally think it's a good feature, but there's other more pressing concerns. Since my employer thinks that as well, I'm for now working on other things...
Oracle: because the enterprise is its own unique beast. Companies frequently stick to what they know and have invested into, so long as it still works. Enterprise customers buy/adopt software in a very different way than everyone else.
MySQL: if you've got a history with the product and it works well for what you're doing, there is absolutely no necessity-based reason to switch to PostgreSQL. You should switch from MySQL to PostgreSQL only if there is a very good reason to and a meaningful gain to be had; it's that simple. Otherwise, it's unnecessary optimization. There are a vast number of considerations when operating a business, and upgrading to the latest & greatest (speaking very broadly) just to do it is not a good reason most of the time. There should be a specific prompt to action based on a gain or capability that PostgreSQL provides you, in your use case, over e.g. MySQL.
This. For example, I'm still on Java 1.6 because the cost of migrating to 1.7 (Java is not the problem; the libraries we use are) is big. I'm still on Solaris because moving to Linux (for example) would imply redeploying thousands of programs, which translates to tens of thousands of man-hours.
Big corps go at geological speed :-)
Obviously I don't know what your cost or performance considerations are. If your organization can save a meaningful amount of money and operational headache by switching to Postgres, that can be a big deal for any business. If cost isn't much of a consideration in your operation (that is, what sticking with Oracle is costing you), stick with Oracle for data warehousing over Postgres - at least for now, IMO.
With Oracle, it's the same story it has been for a long time (and probably will remain): it's expensive and a pain in the ass to get up and running the way you want. Postgres has everything to gain vs. Oracle from here forward; it seems very likely to bite into some of Oracle's well-guarded, high-value territory in the coming years.
On the other hand, sometimes you just have to. SQL Server . . . I won't say it makes it easy, but it doesn't make it so goddam impossible.
If I lived in a world without Postgres, I would use SQL Server without thinking twice. MySQL/Maria I'm not fond of, and I kind of don't want to touch them. Oracle has some cute features and language extensions, but you have to deal with Oracle.
I can't think of a use case where Postgres falls down compared to other RDBMSs.
What we are seeing is Postgres competing with NoSQL and Mongo-type stuff. With full-text search support for JSONB types, we're looking at Postgres moving into ElasticSearch territory with a vastly more simple deployment model and still carrying ACID guarantees.
It's pretty remarkable.
Or am I not understanding what you're asking for?
Here's some more detail:
On the flip side, PostgreSQL's approach is good when you query secondary indexes, since these can point directly to the heap rather than to a primary key, removing the cost of long primary keys and allowing the heap to be scanned in physical order after some kinds of secondary key lookups. It is also cheaper to sequentially scan a heap than to sequentially scan a clustered index.
There are also some differences on the write side, but the gist of it is that both models have their own strengths and weaknesses.
Also, initial setup is easier with MySQL. It is installed everywhere, and setting it up involves less messing about. (I would imagine there are far more insecure MySQL installs around as a result.)
At least PostgreSQL allows you to wrap schema changes in BEGIN/COMMIT/ROLLBACK transactions, unlike MySQL.
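A quick sketch of what that buys you (table name made up):

BEGIN;
ALTER TABLE users ADD COLUMN nickname text;
CREATE INDEX users_nickname_idx ON users (nickname);
-- any error here, or an explicit ROLLBACK, undoes both statements atomically:
ROLLBACK;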
So something like this will rewrite the whole table because of MVCC. MySQL will update the record in place without rewriting the whole table.
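(The statement being referred to isn't quoted above; presumably it's a whole-column UPDATE along these hypothetical lines:)

UPDATE events SET status = 'archived';
-- MVCC writes a new version of every row, while InnoDB can often update in place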
Thanks for reminding HN of this point.
They're talking about an in-place rewrite of the value of a single column, which, yes, InnoDB will do with way less write load than postgresql.
Is this essentially a pipe dream?
The PostgreSQL 11 roadmap also mentions a "multi-model" database.
Postgres - Solid single-node relational database with SQL standards support, JSON, fulltext search, custom data types, ACID transactions, foreign data access, and more. HA/scaling through replication and some 3rd party offerings for automatic sharding, failover and multi-master if you need it. Great as the operational source-of-truth for your core data.
Cassandra - Distributed wide-column (basically advanced key/value) meant to run as a cluster of machines with native support for multiple data centers. Limited "SQL" support which is really only used for defining tables. Data access is very different (everything is an upsert or delete) with variable per-query consistency settings. Great if you need data spread globally, 100% availability, eventual consistency fast key/value, and some interesting data access patterns.
If you want high-performance Cassandra, look at ScyllaDB first. If you need something in between Cassandra and Postgres, look at CockroachDB.
Cassandra is an eventually consistent database.
Postgres is ACID.
My 2c: you don’t need Cassandra or other BigData databases. If you did, you probably wouldn’t ask the question.
...and by the time you do, Postgres improvements will have rendered your Big Data medium-sized anyway :-)
Of course, that's in a Spark cluster running massively parallel queries and not on a single (however large) node.
That’s no criticism at all. I wouldn’t use Spark for a transactional DB because it’s not the best tool for the job. Same for PostgreSQL when the dataset grows far beyond what a single instance can reasonably process.
So, that person above may not need it today, but they may need it sooner rather than later.
If one is starting a new project, and hence contemplating what DB to use, starting with an ACID DB is a safe bet in most cases (unless, of course, they are already starting with a huge amount of data).
By the time they outgrow the ACID database, they will have a better idea of what exactly they need, and more importantly, they will have the resources to make the switch.
BTW: Riak is hopefully not dead yet; let's see what bet365 can do with it.
I'm co-founder of a company, doing all of the technical work, and I too am using a very vanilla stack: Postgres, Elasticsearch, Redis, Memcache, Rails. However, having worked on massive scale-out projects before, I recognize where my pain points are going to be with my tech choices.
I'm storing calendar data in Postgres. This data is primarily key/value based, needs to be persistent, and on a per-user basis is potentially tens of thousands of records. This is a medium-term problem, and I'll likely move this data out of Postgres and into a scale-out solution sooner rather than later. I need relatively fast lookup times, but this is primarily a "high write, low read" scenario that Cassandra is well suited for.
I'm storing a bunch of text from blog posts, news articles, etc., in Postgres. I really don't need the data to be in Postgres; it's just a handy place to store it to synchronize with Algolia & Elasticsearch, where the true work happens. However, I don't want to hold onto TBs and TBs of news articles in a Postgres system. As that system scales out, it'll be unnecessarily expensive, as I don't need all the bells and whistles I'm getting from Postgres for what is effectively a form of cold storage. This is a long-term problem, and I likely won't tackle it for well over a year.
So, you're right, I'm not contemplating what DB to use, but I'm definitely aware of why I won't just leverage the same database in perpetuity. Being focused on delivering customer value is vital, and that's my intent; however, scalability and cost structure do matter over the long term, so an understanding of where one will go is good as well.
Citus is more focused on scalable OLTP rather than OLAP scenarios, giving you essentially an automatically sharding postgres database. Heap does use it for analytics but that's more so because of the JSON support and indexing in postgres itself.
Can you share some docs with me about how Citus is more OLTP-focused? Looking to be educated.
BigQuery streaming ingest also uses in-memory buffering into BigTable until it's written to disk, just like Cassandra with memtables flushed into on-disk SSTables.
Citus is more manual work than MemSQL and doesn't support columnstores natively, although you can use it with their cstore extension to get both. It's just a different design, giving you a Postgres database that automatically shards, vs. MemSQL's many optimizations around HA, replication, and very fast analytics performance.
Try the features page: http://www.memsql.com/features/
This article has a nice overview. Somewhat dated, but mostly still accurate. https://lnkd.in/g4anCf5
I'll say it seems precarious to me to mix and match row-store in-memory data with a columnar store. Is it never a performance issue to have these two systems potentially in contention with one another for resources?
Version 6 is even faster, you can try out the beta: http://www1.memsql.com/beta-6.3-features.html
Based on what? There is no evidence or logic behind this.
If you know that you are producing, say, 1GB a day of data that you must store, then it is sensible and prudent to select a technology that can support your needs a year or two from now.
And this idea that you always have the resources to make a switch later on is rarely the case. The more common scenario is that a team is established to implement something, it moves into production, and then the team either disbands or moves on to something else. Why do you think so many people try to plan for the future?
In most enterprise companies at least it would be considered a failure to have a project that in a year's time required another business case just to replace the database that the team should've appropriately selected in the first place.
Designing for the future is a horrible choice because where do you stop?
E.g., why are you sure that Cassandra, or whatever you choose, will be good enough for the future? Maybe you will need a Spanner-type database, or something even more big/consistent/feature-full.
Yes, I understand this. I would not want to get bogged down on infrastructure when trying to deliver early versions of a product. At the same time, I think database choice is one of the more important tech stack decisions you should make early on.
Cassandra also sounds like a good bet for the type of data I'm interested in storing (ML model inputs, so lots of key -> numerical value data). But my experience is mostly around RDBMS and some NoSQL. Hence the question. Thanks for the answer!
How big are the numerical values? AFAIK Cassandra doesn't like large amounts of data per value.
Also what do you expect the total amount of data to be?
BTW, I just recommended starting with PostgreSQL because it just works and will have your back no matter what ways you like to retrieve your data.
However, if you are absolutely sure you don't require secondary indexes, joins, etc., and just need a KV database, Cassandra will work fine too.
Citus specifically cites analytics and write throughput as its strengths. I believe them. They do seem to charge for shard rebalancing and some other features.
Greenplum comes with an inbuilt data science toolkit (MADlib) and has a long history of dealing with intensive workloads. I don't know if there is a commercial version with extra features; I don't think so, but I haven't looked closely.
Greenplum also gave birth to HAWQ, which uses the Greenplum distributed/parallel query planner to plan queries over Hadoop.
Disclosure: I work for Pivotal, we do most Greenplum development.
As for Elasticsearch, I have found it to be temperamental. The "out of the box" experience was usually unhappy. For logging, I've seen people move to other tools (Splunk, Stackdriver) more than once.
2 TB/day is fairly easy... as long as you have a bit of experience AND one of the databases that can scale horizontally AND the company buys the amount of hardware that is required.
StackDriver/Sumologic/logentries are cloud solutions, so obviously one doesn't have to deal with the database ever.
On the other hand, if I have structured-enough data, I'm going to prefer relational tools for it. Hammers and algebraic nails and so on.
Did you miss this: "(unless of course they are already starting with a huge amount of data)"?
No. Cassandra is a database with selectable consistency.
You can set it to be strongly consistent (ALL or QUORUM) with the tradeoff being worse performance.
If it fits into 1 machine or a few, it isn't "big data".
"big data" usually doesn't require transactions though or at least no transactions across servers so you could probably get away with sharding postgresql via citus.
Besides, if we picked machine size as _the_ definition, it would change every year. And that feels like a bad way to talk about big data.
(BTW, I'm not a huge fan of the term in general: I've seen a lot of companies sold a bill of goods because some sales guy convinced them they needed a Big Data solution instead of something like Postgres.)
If I have a 12TB dataset, and I need to be able to find a single record in that dataset, and return the results to a process in less than one second, I don't believe that needs to be considered "big data". The dataset is "large", but the latency / access pattern surrounding that data is low enough that you can solve that with a handful of hard drives and your data sorted on disk.
If I have a 12TB dataset and I need < 10ms latency on random lookups, that is now probably big data. To guarantee those sort of lookup times for random IO, I either need very expensive hardware, or a software based solution to distribute my data to multiple machines.
If I have 10GB of census data, and I need to run an analysis that shows what are the 100 most similar cities (non approximation) to Los Angeles in terms of demographic make up, and I need an answer in less than 1 second, that is also probably big data. The analysis is easily parallelized, but if done serially on a single machine is a "slow" process.
I realize this is a moderately unconventional way of approaching Big Data, but it's how I've started to structure some of my thinking and explanations around why it's not just size, but size + latency (which is inherently a question of what you're trying to do with the data).
My team lead did a talk at NGINX Conf 2017 which touched on how we tuned Postgres to handle 1 billion metrics per day for NGINX Amplify (https://www.nginx.com/products/nginx-amplify/). I would link it, but it seems it hasn't been published by our Conf team yet.
We did this on 9.4 so the changes outlined here regarding native partitioning, parallel queries, replication improvements, etc. would likely make doing a similar scaling structure easier and just straight up more performant.
I wouldn't classify 1 billion metrics a day as "Big Data". I would also point out that our use case is heavily biased toward "recent" data rather than "all historical" data... which allows us to make some query optimizations that wouldn't be acceptable in the latter use case.
But we are using Postgres to host a constant, write-heavy workload that conventional wisdom dictates it is unsuited for. Moreover, it was/is fairly simple to implement, understand, and scale indefinitely. Personally, I think both Redshift and CitusData are various degrees of proof that scaling Postgres to handle these types of workloads is possible.
For me, the reason you want to consider using a Big Data focused tool (such as Cassandra) is at some point you end up fighting your tool to make a square peg fit in a round hole. At some point it makes more sense to adopt a specialized tool rather than work to specialize your implementation of a specific tool.
In all fairness, we are considering just that. Our road map actually has Scylla in the near future. It's not that we don't think Postgres could handle the load, but rather we think it will be easier to move specific data to Scylla and in the long run will save us resources (primarily in man hours and potentially in compute requirements as well).
Even if we move all our timeseries data to a specialized tool, we won't be getting rid of Postgres. For most/if not all our other persistence requirements Postgres is more than adequate. This is one of the reasons we chose to start with Postgres: even once we hit a scale that specialized tools made sense, Postgres would still be useful for everything else that doesn't require a specialized tool.
Moral of the story: Postgres will get you a lot farther than many give it credit for...and if you are just starting out I would highly encourage you to just pick Postgres and see how far it can take you. If you hit the limit, replace that specific part.
Edit: Also, there appears to be some movement on Columnar indexes which is worth keeping an eye on:
Cassandra's CQL is very limited in terms of types of queries it supports.
I couldn't tell from the write-up if this is one of the things addressed by native partitioning.
Last time I got to build something new the time pressure was so great I had to use MySQL, because it was there, set up and working and I knew it. I wanted to use Postgres, but at the end of the day the extra time setting it up wouldn't really have been much benefit as it was a fairly standard Django app, and didn't need any of Postgres's extra features. (I quit that job shortly afterwards).
For example, data is partitioned by month, and I have no table for the month in the data I'm trying to insert.
† The MySQL reference manual is very useful, particularly when you're looking to compare feature differences between versions.
They are pretty good.
Does anyone know if they have something planned in this department?
Can you get back to xml from the table?
To get XML from a table, functions already existed in previous releases (XMLELEMENT etc.).
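Right - for example (made-up table; both functions predate 10):

SELECT xmlelement(name employee, xmlattributes(id), name)
FROM employees;

SELECT table_to_xml('employees', true, false, '');  -- the whole table as one XML document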
- You can partition by, for example, month, and cheaply delete or archive old data by dropping the entire partition rather than deleting all its rows, which is much more expensive (see the sketch after this list).
- You can have different indexes on different partitions. This is commonly used when some indexes are only used by queries which operate on recent data.
- You can put different partitions and their indexes in different table spaces. E.g. you can put your older and less frequently accessed data on cheaper and slower disks.
- Random inserts into B-tree indexes are expensive, and when you have, for example, time series data in the format (device_id, timestamp, v1, v2, ...), it can be useful to partition on the device id so that all rows are inserted in strictly increasing timestamp order in each partition. This way you can get more write performance out of your hardware.
- Most databases have a maximum table size (PostgreSQL's maximum is 32 TB) and you can get around that by splitting your huge table into multiple smaller partitions which all fit in the maximum table size.
- The partitioning condition can be used as a very coarse index to entirely skip looking at some of the partitions (even at plan time for some queries). This does not always give you much over just having one big table with one index over all of it, but in some cases it can be a big win.
- For PostgreSQL specifically, it looks like you can have some of your partitions be foreign tables, meaning some of your data can live on an entirely different server - not necessarily PostgreSQL, or even a relational database system at all. I am not sure how useful this will be in practice.
Partitioning has a maintenance overhead and adds a bit of extra work to query execution and planning so it is not always worth it.
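As a sketch of the cheap-archival point from the list above (names made up, assuming a range-partitioned table):

ALTER TABLE measurements DETACH PARTITION measurements_2017_10;
-- keeps the old data around as a plain table for archiving; or just:
DROP TABLE measurements_2017_10;
-- either way, far cheaper than DELETE plus the VACUUM work that follows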
(disclaimer: worked for me, may not for anybody else)