Hacker News | std_reply's comments

Using Python syntax makes it more accessible.


Thanks. Please give it a try, and let me know if you have any issues. I'd be happy to help.


Have you looked at tiup?

https://tiup.io


Go is used for the SQL layer, which has a modern optimizer that can do distributed joins, etc., i.e., issue parallel reads to the storage nodes.

Additionally, it can push down the DAG to the TiKV storage nodes, written in Rust, to reduce data movement and keep the work closer to the physical data.
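A hedged sketch of what that looks like from the client side (table and column names hypothetical): TiDB's EXPLAIN labels each plan operator with where it runs, so the pushdown is directly visible.

    -- Hypothetical table; EXPLAIN shows where each operator executes.
    EXPLAIN SELECT COUNT(*) FROM orders WHERE status = 'shipped';
    -- The "task" column distinguishes root (the Go SQL layer) from
    -- cop[tikv] (pushed down to the Rust storage nodes), so the filter
    -- and partial aggregation run next to the physical data.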


I would argue the opposite: distributed databases are much easier to operate at large scale. Truly online distributed DDL (at least in TiDB), strong consistency, etc.

People who bang on about Postgres replication have rarely set up replication in Postgres themselves, let alone at the 100s-of-PB scale.

MySQL replication works well and can be scaled more easily (relative to Postgres), but it has its own problems: e.g., DDL is still a nightmare, and lag is a real problem, usually masked by async replication. But then eventual consistency makes the application developer's life more complicated.
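To make the lag point concrete, a minimal sketch, assuming MySQL 8.0.22+ naming (older versions use SHOW SLAVE STATUS and Seconds_Behind_Master):

    -- On an async MySQL replica, lag is observable but not prevented:
    SHOW REPLICA STATUS\G
    -- Seconds_Behind_Source > 0 means reads on this replica can return
    -- stale rows; the application has to tolerate that staleness.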


TiKV, the core scalable storage component, is a CNCF graduated project. PingCAP cannot change the license even if it wanted to.


Thanks for the info! I wasn't aware of this and didn't see any mention of it in their GitHub docs. It means that at least you can use that part of the project without worry. They should promote this aspect of the project more prominently; it should be of great interest to anyone thinking about adopting the technology.

Do you have any links to the core storage CNCF project? I couldn't find it on the CNCF website under the Database and Cloud Native Storage sections.


If you’re looking for the TiKV project link: https://github.com/tikv/tikv


Thanks.


TiDB is running core banking services too.

I think people’s idea of scale and operating at scale is limited to their experience.

You can get MySQL to run at any scale, look at Meta and Shopify. Operational complexity at that scale is a different story.

Distributed databases reduce a lot of the operational complexity. To take one example:

Try a DDL on a 5 TB table in any replicated MySQL topology of your choice and compare it with TiDB's distributed execution framework.
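A hedged sketch of the TiDB side (table and index names hypothetical; the system variable is per recent TiDB releases, v7.1+). On the MySQL side you would typically reach for an external tool such as gh-ost or pt-online-schema-change instead.

    -- Opt in to the distributed execution framework, which spreads the
    -- index backfill across nodes (variable name per recent releases):
    SET GLOBAL tidb_enable_dist_task = ON;
    -- The ALTER is online; reads and writes continue during backfill.
    ALTER TABLE big_table ADD INDEX idx_created (created_at);
    -- Progress is observable while the job runs:
    ADMIN SHOW DDL JOBS;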


> Distributed databases reduce a lot of the operational complexity

That’s a new one to me; never have I ever heard someone claim that making a system distributed reduces complexity.

I’ve operated EDB’s Distributed Postgres at a decent scale (< 100 TB of unique data); in no way did it enable reduced complexity. For the devs, maybe – “chuck whatever data you want wherever you want, it’ll show up.” But for Ops? Absolutely not, it was a nightmare.


I’m contrasting a single node database with bolted on replication vs a distributed SQL database that gives strong consistency out of the box.


EDB Distributed Postgres under the hood is a single node database with bolted on replication.


This is quite an ignorant comment. TiDB routinely handles 100s of TB of data. Go watch LinkedIn's presentation on why they chose TiDB as their strategic DB going forward.



TiDB has four main components:

1. SQL front-end nodes
2. Distributed shared-nothing storage (TiKV)
3. Metadata server (PD)
4. TiFlash column store

1 and 3 are written in Go; 2 is written in Rust and uses RocksDB; 4 is written in C++.

2 & 3 are graduated CNCF projects maintained by PingCAP.

Disclaimer: I work for PingCAP


My guess: the benefits are SQL at scale and reduced maintenance burden. That's what the article seems to hint at.


Airbnb, Databricks, Flipkart, 3 of the largest banks in the world, some of the largest logistics companies in the world; at least 2K seriously large installations.


Very cool. So many upstart SQL players: TiDB, PlanetScale, Crunchy, Cockroach, Neon, Supabase, Nile, Yugabyte, Aiven, let alone the cloud provider options (AlloyDB, Spanner, Aurora, Cosmos). Kind of mind-boggling that there's room for all these players.

