Why CockroachDB and PostgreSQL Are Compatible (cockroachlabs.com)
160 points by gesaint on Dec 17, 2020 | hide | past | favorite | 57 comments



How compatible?

Can I just expect things like full text search to work? https://www.postgresql.org/docs/13/textsearch.html

What about additional but supplied modules like ltree? https://www.postgresql.org/docs/current/ltree.html

I ask as I saw a related article https://www.cockroachlabs.com/blog/full-text-indexing-search... recently and damn it looks close... has anyone migrated a prod system that uses the above? What did you encounter?


We migrated a medium-to-large application (in terms of model complexity, not rows) from PG to CR about a year ago. It went pretty smoothly, but you'll definitely run into issues. (Very happy overall, btw.)

Every release has improved the compatibility story at an impressive rate.

The 20.2 release added partial indexes and enums, which helped close the gap in our app (though enums don't support binary encoding yet, so they might not work with your pg driver (1)).

Things that are still an issue for us (all can be worked around):

1 - Can't defer foreign key checks

2 - No pg_trgm

3 - Can't tell if an upsert was an insert or an update

4 - No triggers (this is pretty huge)

(1) https://github.com/cockroachdb/cockroach/issues/57348
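
For reference, here is what items 1 and 3 look like on stock PostgreSQL (a sketch: the table name is made up, and the `xmax = 0` trick is a PostgreSQL implementation detail rather than standard SQL, which is exactly why CockroachDB has no direct equivalent):

```sql
-- 1: defer foreign key checks until COMMIT
-- (requires the constraints to be declared DEFERRABLE)
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- ... inserts that temporarily violate FK ordering ...
COMMIT;

-- 3: detect whether an upsert inserted or updated, via the
-- system column xmax (0 on freshly inserted rows in PostgreSQL)
INSERT INTO users (id, name) VALUES (1, 'alice')
ON CONFLICT (id) DO UPDATE SET name = excluded.name
RETURNING (xmax = 0) AS was_insert;
```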


PostgreSQL enums don't have a binary encoding AFAIK. This is partially why they aren't considered that useful in many contexts vs a fact table.

From memory, this is because there is no natural binary encoding: you could use the integer index of the member within the enum, but then you would need to communicate the enum members to the client in some way, and the PostgreSQL network protocol currently doesn't support that kind of out-of-band information/metadata.
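
A minimal illustration of the point (a sketch against a stock PostgreSQL server; the type name is made up): the text protocol just ships the label, and any index-based binary format would need the member list pushed to the client somehow:

```sql
-- The client receives the label as text; an index-based binary
-- encoding would need the member list (and its ordering)
-- communicated out of band.
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
SELECT 'ok'::mood;                  -- wire value: the label 'ok'
SELECT 'ok'::mood < 'happy'::mood;  -- ordering comes from member position
```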


Whatever the issue, it's compatibility related, as Elixir's PostgreSQL driver (Postgrex) works with PostgreSQL enums but not CockroachDB's enums, and it has to do with how the driver encodes the values.


Are Postgrex/Ecto usable with cockroach now?

I’m about to start a new project, and would very much like to utilize an existing cluster.


We just use Postgrex directly. Not sure about Ecto.


I'd love to know why you decided to make the move if you have the energy to summarise it.


Sure. I'm a developer and also do our devops. We're baremetal. Setting up an ELK stack, log ingestion, openresty, gitlab, CI, rabbitmq, etc: no problem. Making our applications hot-deploy and HA: no problem.

When we ran PostgreSQL, I was using barman for PITR and that was fine. But getting PostgreSQL into HA, given how critical that piece is? I didn't have the confidence. Even just trying to handle upgrades with no downtime scared me. If I had to do it, I'd look at repmgr.

The story is completely different with CockroachDB. We have 3 instances and taking down an instance (or the server it's on) for maintenance is no problem. The setup also probably couldn't be easier. Most importantly, if an instance goes down in the middle of the night: our app keeps working.


Hi, regarding number 3, I have filed https://github.com/cockroachdb/cockroach/issues/58032. I'd love to get your feedback on the proposal: does it address your need? Do you have a different idea for how it would work?


Be very careful - even if things are compatible, in my experience some things do not perform as well in CockroachDB, which seems a bit counterintuitive but can be very true... We had problems making date range queries fast with tiny amounts of data, for example.

I would seriously consider whether you need CockroachDB - if you need that level of scaling you should also consider Cassandra (or Cassandra-like) solutions, as they will scale better, but you will have to architect your app to think in this way (i.e. your app generates the views of the data it needs).

If you don't need to scale yet, use Postgres. Every piece of complexity has a cost, and Cockroach, while incredibly clever, adds complexity that you might not understand or want to manage over time. A nice interface won't help you when replication between clusters breaks in production.


It's not just about (big) scalability, it's also about redundancy. A Cockroach 3-node cluster is a no-brainer to set up; it just runs. I find Cockroach far easier to work with than Cassandra, and more easily expanded than Postgres - it sits very well between the two.


True, and you can go really far with PostgreSQL alone, like Zalando (Germany's biggest e-commerce platform) did:

https://github.com/zalando/patroni


> How compatible?

That is literally explained in more than 40% of the article.


They used to have a huge caveat on the compatibility page that CockroachDB only runs serializable transaction isolation. That caveat appears to be gone from the page, but I don't believe it's gone from the product. The default PostgreSQL isolation level is famously READ COMMITTED.
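
You can check this from any client; assuming a 2020-era CockroachDB and a default PostgreSQL config, something like:

```sql
-- On PostgreSQL (default config):
SHOW default_transaction_isolation;   -- read committed

-- On CockroachDB (at the time of this thread):
SHOW default_transaction_isolation;   -- serializable

-- PostgreSQL also lets you opt in per transaction:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
COMMIT;
```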


I am truly looking forward to CockroachDB becoming the next PostgreSQL for planet-spanning database workloads. In our ecosystem we get more and more requests for CRDB integration.

Generally though, I would not say that full compatibility should even be desired. A k/v database simply works differently from a strictly relational database. There are things like shard IDs, avoiding hot spots, deciding on how to paginate data.

Going too far toward being a "PostgreSQL replacement" will eventually hurt CRDB, because too much focus will go into making legacy enterprise SQL (along the lines of https://news.ycombinator.com/item?id=25454635) work on this system. It's the forklift approach of moving to the cloud. This becomes clear when skimming through the docs:

- https://www.cockroachlabs.com/blog/how-to-choose-db-index-ke...

- https://www.cockroachlabs.com/docs/stable/performance-best-p...

- https://www.cockroachlabs.com/docs/stable/limit-offset.html

- https://www.cockroachlabs.com/docs/v20.2/selection-queries#p...

I think many of the "light SQL" patterns are really great - things like SELECT or WHERE, which e.g. DynamoDB simply cannot offer without Elasticsearch. But all in all, I am very excited for CRDB to gain more industry acceptance, and I think their cloud offering could also become very interesting - a competitor to Google Bigtable / Cloud Spanner, AWS DynamoDB or Cloud SQL.


CockroachDB is not a key-value store though.


Unless I misunderstood the docs, it is a KV store underneath the abstraction layers:

> At the highest level, CockroachDB converts clients' SQL statements into key-value (KV) data, which is distributed among nodes and written to disk.

https://www.cockroachlabs.com/docs/stable/architecture/overv...


Sure, but this is an implementation detail. It is designed to be consistent in the style of and support the affordances of more traditional single-node RDBMS.


You are mistaken; it is a key-value store. It just adds a ton of layers on top of the K/V layer to offer SQL capabilities.


While that's technically true, it isn't in any practical way. What K/V API does CockroachDB offer you as a user? The closest you can get is:

CREATE TABLE kv (key STRING PRIMARY KEY, value STRING);

But calling that a "key-value" store is disingenuous at best.


Hi Taylor!

What are the salient differences in your mind? Under the hood, CockroachDB executes writes to and reads from such a table in the same way that you would against a key-value store. You can explore this for yourself with the "kv trace" functionality of CockroachDB's SQL shell, which logs all of the KV API calls that a SQL query emits:

  $ ./cockroach demo
  # Welcome to the CockroachDB demo database!
  #
  # You are connected to a temporary, in-memory CockroachDB cluster of 1 node.
  # ...
  #
  demo@127.0.0.1:26257/test> CREATE TABLE kv (k STRING PRIMARY KEY, v STRING);
  CREATE TABLE
  
  Time: 5ms total (execution 5ms / network 0ms)
  
  demo@127.0.0.1:26257/test> \set auto_trace=on,kv
  demo@127.0.0.1:26257/test> INSERT INTO kv VALUES('a', 'b');
  INSERT 1
  
  Time: 2ms total (execution 2ms / network 0ms)
  
                 timestamp              |       age       |                     message                      |                            tag                             |                location                 |    operation     | span
  --------------------------------------+-----------------+--------------------------------------------------+------------------------------------------------------------+-----------------------------------------+------------------+-------
    2020-12-17 23:17:46.626696+00:00:00 | 00:00:00.001123 | CPut /Table/53/1/"a"/0 -> /TUPLE/2:2:Bytes/b     | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/row/writer.go:207                   | flow             |    6
    2020-12-17 23:17:46.626754+00:00:00 | 00:00:00.001181 | querying next range at /Table/53/1/"a"/0         | [n1,client=127.0.0.1:49216,hostssl,user=demo,txn=dcce3954] | kv/kvclient/kvcoord/range_iter.go:159   | dist sender send |    8
    2020-12-17 23:17:46.626792+00:00:00 | 00:00:00.001219 | r36: sending batch 1 CPut, 1 EndTxn to (n1,s1):1 | [n1,client=127.0.0.1:49216,hostssl,user=demo,txn=dcce3954] | kv/kvclient/kvcoord/dist_sender.go:1851 | dist sender send |    8
    2020-12-17 23:17:46.627281+00:00:00 | 00:00:00.001708 | fast path completed                              | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/plan_node_to_row_source.go:145      | flow             |    6
    2020-12-17 23:17:46.627322+00:00:00 | 00:00:00.001749 | rows affected: 1                                 | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/conn_executor_exec.go:622           | exec stmt        |    4
  (5 rows)
  
  Time: 1ms total (execution 1ms / network 0ms)
  
  demo@127.0.0.1:26257/test> SELECT * FROM kv WHERE k = 'a';
    k | v
  ----+----
    a | b
  (1 row)
  
  Time: 6ms total (execution 6ms / network 0ms)
  
                 timestamp              |       age       |                message                 |                            tag                             |                location                 |    operation     | span
  --------------------------------------+-----------------+----------------------------------------+------------------------------------------------------------+-----------------------------------------+------------------+-------
    2020-12-17 23:17:54.402735+00:00:00 | 00:00:00.003116 | Scan /Table/53/1/"a"{-/#}              | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/row/kv_batch_fetcher.go:337         | materializer     |    7
    2020-12-17 23:17:54.402763+00:00:00 | 00:00:00.003144 | querying next range at /Table/53/1/"a" | [n1,client=127.0.0.1:49216,hostssl,user=demo,txn=d30bcbc9] | kv/kvclient/kvcoord/range_iter.go:159   | dist sender send |    9
    2020-12-17 23:17:54.404565+00:00:00 | 00:00:00.004946 | r36: sending batch 1 Scan to (n1,s1):1 | [n1,client=127.0.0.1:49216,hostssl,user=demo,txn=d30bcbc9] | kv/kvclient/kvcoord/dist_sender.go:1851 | dist sender send |    9
    2020-12-17 23:17:54.405091+00:00:00 | 00:00:00.005472 | fetched: /kv/primary/'a'/v -> /'b'     | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/colfetcher/cfetcher.go:888          | materializer     |    7
    2020-12-17 23:17:54.405895+00:00:00 | 00:00:00.006276 | rows affected: 1                       | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/conn_executor_exec.go:622           | exec stmt        |    4
  (5 rows)
  
  Time: 1ms total (execution 1ms / network 0ms)
  
  demo@127.0.0.1:26257/test>
I'll draw your attention to two lines in particular. Here's the put:

  2020-12-17 23:17:46.626696+00:00:00 | 00:00:00.001123 | CPut /Table/53/1/"a"/0 -> /TUPLE/2:2:Bytes/b     | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/row/writer.go:207                   | flow             |    6
And here's the get:

  2020-12-17 23:17:54.402735+00:00:00 | 00:00:00.003116 | Scan /Table/53/1/"a"{-/#}              | [n1,client=127.0.0.1:49216,hostssl,user=demo]              | sql/row/kv_batch_fetcher.go:337         | materializer     |    7
These operations (`CPut` and `Scan`) are KV operations that you'd be able to run yourself against any key-value store. CockroachDB doesn't give you access to those operations directly, but crafting your queries in this way is really not significantly different.


I think it’s the “under the hood” that’s the important part here. The salient difference is that calling it a key-value store eliminates the business reason for using it.


In that sense, all SQL databases are just KV stores. They compile SQL into actions on various internal KV stores. CRDB's approach just makes the separation more explicit than most, since it's designed to run as a cluster.


> As the team learned the hard way in the ramp-up to CockroachDB 1.0, many developers in the ecosystems that CockroachDB wants to enter do not write their own SQL queries any more—as opposed to, e.g., ten or twenty years ago.

I cannot help but be saddened by this. It is really hard to understand how not knowing how to interface directly with the part of the system that holds the ultimate reason we write a program (handling data), and which is probably the performance bottleneck of that system when it scales, is beneficial to a professional developer.


You assume that all those people don't know and don't understand how to interface directly.

Maybe they do know, but a different approach turns out to be more productive for them.

For such a "no true Scotsman" argument that "real developers write their own SQL", I really like the "ad absurdum" that "real developers write in machine code".


TL;DR: Because the Postgres folks have outstanding taste, even better documentation, a good community, and a compatible license.

Which are all the same reasons that it's my go-to database.

I don't build software anymore, but the last project I worked on for money was a query engine, and the project founders were pretty open about the fact that whenever they were unclear about the behavior required by the SQL spec, they looked at the PostgreSQL docs and behavior to get pointed in the right direction.


OT: your woodworking is amazing, always enjoyable to see someone transition to a more fulfilling line of work.


Thanks!


If you want to use CockroachDB, be aware of the license:

https://www.cockroachlabs.com/docs/stable/licensing-faqs.htm...


(From the FAQs) How does the change to the BSL affect me as a CockroachDB user?

It likely does not. As a CockroachDB user, you can freely use CockroachDB or embed it in your applications (irrespective of whether you ship those applications to customers or run them as a service). The only thing you cannot do is offer CockroachDB as a service without buying a license.

- - - - -

Cockroach is asking people not to take the open source code and start offering it as a service. Which sounds like a completely reasonable thing to do!

Of late there has been plenty of fear mongering when it comes to licensing. I wonder whether it comes from paid blog posts by cloud companies that help shape opinions, or from the age-old open source licenses and principles that hold little water with predatory cloud companies around.


>Cockroach is asking not to take open source and start offering it as a service. Which sounds like completely reasonable thing to do!

But I just wrote "be aware of it"; with that license I don't consider it "Free Software". Just imagine if Apache or Nginx did that. On the other hand, I'm totally fine if they want to protect themselves from leeches like Amazon/Google/Oracle/IBM.


> imagine Apache or Nginx would do that.

Nginx will likely do something similar, very soon. It wasn't that long ago that F5 bought them, with clear plans to offer paid-only features while the core itself remains open.

Give it another 5 years and I expect nginx to propagate some of those special features to the open-source version, but with the restriction that you can't offer them as a service vendor without a specific license.


I never understood the appeal of Nginx. I mean, I understand the web server: it's much easier to set up than Apache, and before Apache introduced the event-based MPM it was also much faster.

What I mean is F5's interest in the load-balancing functionality. It's in the same league as proxy support in Apache[1], hardly comparable to what HAProxy provides, for example, yet for some reason F5 (a leader in hardware load balancing, though with public cloud that market is sadly shrinking) found it interesting. Is it the name recognition, and do they plan on turning Nginx into a proper load balancer? Because in terms of functionality Nginx is not even close to their LTM (even HAProxy is missing a lot of its features).

[1] I'm talking about the non-paid version; there are some health checks in the paid one, although the support is still rather weak.


>Nginx will likely do something similar, very soon.

Ahh, the oracle of Delphi... if they did that, everyone would switch to Caddy or Apache, and that's exactly why they won't.

Amazon/Google etc. couldn't care less about Nginx; if they change the licenses they will just hurt themselves.


> Cockroach is asking not to take open source and start offering it as a service. Which sounds like completely reasonable thing to do!

The BSL is not an open source license, but I can see where they are coming from, and I don't have a problem with it. With the one exception that I bet it will slow adoption, which is really just selfish on my part, because I want to use it and it will be an easier sell the more people are aware of it. Honestly, I'm hoping that Postgres just adopts the best parts of Cockroach and I won't need to keep bringing it up to grossed-out managers.


>> the BSL is not an open source license, but I can see where they are coming from

I wish more people could pause and appreciate this.

I understand little about how an open source license gets approved by the OSI body etc. But I'm sure there is a need to mitigate threats from AWS and the like.

We can't be far from the day when AWS takes a GitHub URL as input and starts offering it as its own service!


So if I write OfftopAPI as a service which basically deploys Cockroach for clients, but adds a thin Node.js API on top, is that OK?

Feels like you could play a game of chicken with them. I know many big companies would probably stick to Postgres with this in mind.

It also makes it feel like they might just change the license later.


Welcome to the "no free software world"; now you are in the same position as if you gave your customers access to an Oracle DB.


Is CRDB that much better, to justify all of this?

Getting your boss to buy into a new tech is hard enough.


I just know that I don't like proprietary software and crazy licenses.


There was a different licensing issue around backups, where only the paid version had a reasonable way to back up and restore your data. Has that improved?


Yes, the distributed backup/restore functionality was folded into the free version in our last release. See https://www.cockroachlabs.com/blog/distributed-backup-restor....


Incremental backups are a feature of the Enterprise version. Full data backups are supported with the free version.
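
With the caveat that the syntax varies by version, a full backup on the free version (20.2-era syntax, as a sketch) looks roughly like:

```sql
-- Sketch, CockroachDB 20.2-era syntax. nodelocal stores the backup on
-- a node's local disk; cloud storage URLs (s3://, gs://, ...) also work.
BACKUP TO 'nodelocal://1/backups/2020-12-17';

-- Incremental backups (Enterprise at the time) layer on a prior full one:
-- BACKUP TO 'nodelocal://1/backups/2020-12-18'
--   INCREMENTAL FROM 'nodelocal://1/backups/2020-12-17';
```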


I know how their license negatively impacts me; I don't need some company to tell me that "all is well". I know that their license means that if my requirements and Cockroach Labs' business model diverge, I will be unable to fork the code and build a proper open source community around it. I know that I will (likely) never be able to buy a managed CockroachDB service from any vendor other than Cockroach Labs, no matter who acquires them, how badly they manage their service, or how poorly their managed service and term offerings match what I need. I know that no matter how much I may want to pool resources with someone else to build features I need or fix bugs that I need fixed, I have only one viable option, and that is to go through Cockroach, because there is no way to create a viable community fork of the project.

Them pretending that their use of the BSL doesn't impact end users and end-user freedom is marketing department BS.

They are free to license the software they develop under whatever license they want. I appreciate their past and future open source contributions and the technical achievements they have made, but their use of the BSL license rather than an open source license makes their product highly undesirable in my business.


I don't think your points are totally wrong. I especially think that your points hold water if you were to feel the need to set out to build your community fork or competing hosted offering in the next couple of years. Secondarily, I think that if your company had the resources to staff developers to build out a whole hosted offering, CRL would love to be a partner and understand what's going wrong. Maybe that's unacceptable for some other reason, but it's not obvious to me what that is. Yes, the license means that real-time development of the project, on some level, relies on coordinating with its developers, who do, on the whole, work for a company.

All this being said, it sounds like you've got a better vision for how projects like this should get funded and built. How should society fund projects like CRDB? I think there's a very wide range of reasonable answers that fall in different parts of the fantasy spectrum. For now, in 2020s American capitalism, the BSL approach, at least to my tastes (which are hecka biased), feels pretty good.


I appreciate your thoughts.

A company doesn't have to have the resources to run a managed service in order to benefit from the guaranteed possibility for such an alternative to exist. I don't need to fork a project to benefit from the guarantee that I can fork it and do whatever I like with it.

There is a concept in many open source communities of a "benevolent dictator" project leader. It doesn't matter if the project leader is an individual or a company. Many open source projects are led by a single company that dictates how they are run.

The thing that keeps such dictators benevolent is the Freedom To Fork. Without the Freedom To Fork, how do we guarantee benevolence? The current CRL folks could be the nicest, most wonderful people in the world, but there is no guarantee that the company won't change hands, that the current folks won't retire or move on, or that their motivations won't change and they'll begin to lose their benevolence.

Sure, CRL may be interested in partnering and playing nice- now. But they hold the keys to the kingdom, and can lock the door anytime they want to.

I simply want the possibility, that if things don't work out, I have options.

But you are right- I do have ideals about how projects like this should get funded and built. And you are right, there are a wide range of reasonable answers. I disagree with them being a fantasy though.

I think Postgres is a great example! There is a long list of companies that are "significant" contributors to Postgres. https://www.postgresql.org/about/sponsors/

This list includes consultants, hosting providers (even the much-maligned AWS is a sponsor), and companies that use Postgres.

WordPress/Automattic is another successful example, one with a strong business as community leader. They do well with their consulting and hosting business.

The Linux Foundation umbrella also has a variety of supporters and manages a lot of open source projects.

There are a variety of open source projects that are developed by the companies that use them; many of the Apache Foundation projects fall into this category.

Another model that works well for many (I'm not a huge fan of this one) is the open core model. If, as a user, I stick to relying only upon the open source editions of projects like GitLab, I get all the benefits of using open source. (This does not prevent me from buying a commercial license from the company to gain support, if they are willing to support my use of their open source edition in an acceptable manner.)

RedHat is pretty successful at selling support and consulting.

Other open source developers do well selling a hosted managed service of their product.

There are a lot of different ways to fund open source projects.


> There is a long list of companies that are "significant" contributors to Postgres. https://www.postgresql.org/about/sponsors/

While I agree that that can work for maintaining existing and valuable projects, I have some doubts that it can work to fund projects getting to that point. Postgres, additionally, grew out of a proprietary commercial enterprise in Ingres and then became usable with a sizable payroll from the University of California (as far as I understand it). I do think that public investment, like large grants to fund teams to work on open source software, would be a great thing.

I think another factor here is the timing and the context of the moment. There is a turning tide in the data systems world whereby just having something you can run isn't good enough because the overhead to figure out the operations just isn't in the budget when there are hosted solutions out there.

> Sure, CRL may be interested in partnering and playing nice- now. But they hold the keys to the kingdom, and can lock the door anytime they want to.

This is only somewhat true. The BSL license Cockroach uses converts all code to Apache after 3 years. While in the short term this likely means that CRL holds the keys, if this investment builds the quality of product we hope and believe it will, that corpus of code will be available for decades to come. I do appreciate the quality product that is Postgres, but I can also say that I've built services that have worked quite well on Postgres 9, which was released over 10 years ago. I'm not saying that good stuff hasn't happened since, but that if Cockroach is able to fund its way to a somewhat finished product that proves its worth in enterprise deployments, it will have had to be successful and valuable for more than 3 years.

Open core can be okay. CRL engages in some of that too. It's hard to know where to draw the line. I much prefer a 3 year, permissive BSL to the open core enterprise code. Maybe that's just me.

> RedHat is pretty successful at selling support and consulting.

Was pretty successful at it. It's an IBM brand now. Also, it grew up and thrived in the era when you needed a lot of investment and expertise to build, run, and manage datacenters. That world is ending.

Another thing I'll note is that Postgres is fundamentally simpler than CRDB, or at least than Cockroach would ultimately like to be. WordPress is way, way simpler than both of them. Hundreds of engineer-years is a pretty steep price to get something to that bar of really being valuable. The opportunity-cost landscape for software developers alone has shifted what it might take to make a Postgres-scale database happen again: my guess is that if the same caliber of programmers from the 90s at Berkeley were tinkering on systems for public-university wages today, they'd end up in jobs elsewhere pretty quickly.

So yeah, I'd love for it to make sense for everything every company did to be at least big O Open Source, if not even big F Free Software. That'd be a cool world. Imagine the world where everything running Google, Amazon, Microsoft were building blocks we could all learn from and shape to our needs. That'd be sweet. In that world, crdb and crl might not need to exist, and that'd be totally cool too.

Note also that Google and Amazon not only have hosted versions of Postgres; they have also adapted it into new products, and they make a heck of a lot of money from those products without sharing any of the tech.

It's not fair to draw direct comparisons from what has worked to what might work today. A 3 year delay for fully permissive licensing is something that lets me sleep pretty well while still leaving me way way more privileged than sometimes feels reasonable.


I certainly don't begrudge CRL making the decision that they have. It is their code, they are free to license it however they want to. I love that they have already contributed open source software, and that they have committed to continuing to contribute CRDB after three years! I probably won't be using the BSL licensed version anytime soon, just like I avoid using the "enterprise" licensed versions of open core products. But every person/business has to decide what their business model is going to be.

I don't think they should be stating that their license decision (versus the open source license they used before) doesn't have any impact on most users, because I don't believe that is a truthful statement. It has pros and cons, like the various open source licenses themselves have.

I don't think it is fair to wave away the countless examples of profitable open source companies. Sure, CRL and CRDB are unique.

I hope CRL is successful! Perhaps one day they will figure out how to make their CRDB open source from day one again. Perhaps one day I will be able to run a large, successful venture using open source software (and supporting it!).

No doubt business and software development are both challenging, regardless of what license or business strategy you choose! I think that is part of what makes them fun!


BSL is "delayed free software". Take the code, wait 3 years, then it's entirely free software.


>wait X years

It's three years, not X.


I wasn't sure. Thanks for checking. Three years is a negligible amount of time.


This may be the wrong place to ask, but something about databases and Pebble has been making me curious, and I've been loving reading what CRDB puts out.

For a single-node, strictly K/V workload, does CRDB offer any advantages over Pebble? Does Pebble have concurrent-write issues like SQLite does? I wouldn't think so, because it's split into multiple files.

Then one step further, again for a purely KV workload, if all I need to do is add/delete/update/find a key, would a solution like Couchbase be useful compared to Pebble?

If you could even link an article, I would love a starting point.


I wonder if it's better, in a non-sharded use of something like Cockroach, to push the replication and HA outside the database: let the control plane (e.g. Kubernetes) handle failure and restarts on different nodes, and let something like Ceph, with its efficient write path and replication, handle data durability.


You need to integrate transaction coordination with replication to ensure that the replication respects transaction atomicity (so that cross-shard queries get all their reads and writes isolated from each other, and rolled back atomically when a txn is aborted).

So separating the layers like that is only possible if there is an XA-style protocol between the layers. Neither K8s nor Ceph supports that.


The title is deceitful since it is not really compatible.


Not every PostgreSQL installation can perform every PostgreSQL feature (via extensions). That doesn't make PostgreSQL incompatible with itself.

I use a PostgreSQL connector to execute PostgreSQL-specific SQL statements on CockroachDB. That's a baseline qualification for "compatible".
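
For instance (a sketch; the exact output differs between the two systems), these PostgreSQL-flavored statements run unchanged through any PostgreSQL driver:

```sql
-- Both systems answer these over the same PostgreSQL wire protocol:
SHOW server_version;
SELECT now();
SELECT count(*) FROM information_schema.tables;
```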


We do have compatibility gaps -- some big, some small. But I would still call it compatible because it's definitely close enough to use a PostgreSQL driver with it in production. I would be curious to hear your opinion on which incompatibilities are most important to address.



