olluk's comments

That's all true, but at some point the combinations of paths explode. It's not possible to write tests for all the combinations, yet it is possible to cover them eventually with some probability. Fuzzing covers more execution path combinations over time.
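
To make the explosion concrete, here's a toy Python sketch (the function and its four branches are invented for illustration, not taken from any real fuzzer): with n independent branches there are 2^n path combinations, and random inputs keep hitting new combinations over time without anyone enumerating them by hand.

    import random

    def handle(a, b, c, d):
        # four independent branch conditions -> 2**4 = 16 path combinations
        return (a > 0, b % 2 == 0, c is None, d.startswith("x"))

    seen = set()
    for i in range(10_000):
        args = (random.randint(-5, 5),
                random.randint(0, 9),
                random.choice([None, object()]),
                random.choice(["x1", "y1"]))
        seen.add(handle(*args))
        if len(seen) == 16:
            print(f"all 16 path combinations hit after {i + 1} random inputs")
            break

A handful of hand-written tests covers a few of those 16 combinations; the random driver eventually covers them all, which is the same probabilistic argument in miniature.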


Perhaps the multi-master approach is an example of a system where incoherence does not mean a terminal illness.


We write to the WAL and then register the transaction in the transaction sequence registry. If a concurrent transaction was registered between the start and the end of our transaction, we update the current uncommitted transaction data with the concurrent transactions and retry registering it in the sequencer. To scale to multi-master, we will move the transaction sequence registry to a service backed by a consensus algorithm.
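
For the curious, a minimal sketch of that commit path in Python, with invented names (Sequencer, register, rebase are illustrative, not the actual QuestDB API): append to the WAL, try to claim the next sequence number, and if concurrent transactions got in first, fold them into the uncommitted data and retry.

    import threading

    class Sequencer:
        # Toy in-process transaction sequence registry. A multi-master setup
        # would replace this with a service backed by a consensus algorithm.
        def __init__(self):
            self._lock = threading.Lock()
            self._log = []  # registered transactions, in sequence order

        def current_seq(self):
            with self._lock:
                return len(self._log)

        def register(self, expected_seq, txn):
            # Register txn only if nothing was registered since expected_seq.
            # Returns (True, assigned_seq) or (False, concurrent_txns).
            with self._lock:
                if len(self._log) == expected_seq:
                    self._log.append(txn)
                    return True, expected_seq
                return False, list(self._log[expected_seq:])

    def rebase(txn, concurrent_txns):
        # Toy merge: fold the concurrent changes under our uncommitted writes.
        merged = {}
        for other in concurrent_txns:
            merged.update(other)
        merged.update(txn)
        return merged

    def commit(sequencer, wal, txn, start_seq):
        wal.append(txn)  # 1. write the transaction to the WAL
        while True:
            ok, result = sequencer.register(start_seq, txn)
            if ok:
                return result  # 2. registered at this sequence number
            # 3. concurrent transactions registered first: update our
            #    uncommitted data with them and retry in the sequencer
            txn = rebase(txn, result)
            start_seq += len(result)

    seq, wal = Sequencer(), []
    print(commit(seq, wal, {"k": 1}, seq.current_seq()))  # -> 0
    print(commit(seq, wal, {"k": 2}, 0))  # loses the race, rebases, -> 1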


A comparison with the old version is actually in the article for the patient reader. It could go at the top, but I don't think it would make a difference. At the end of the day, it's an article on the official QuestDB website, which gives the reader a spoiler about the bias.

I am intrigued to see what Timescale is going to publish next.


Right. The data is there, but it comes with a clickbait title that distracts from the awesome performance improvements the QuestDB guys achieved.


There were 2 queries in the QuestDB benchmark over the same table. ClickHouse didn't even try to match both of them, picking one to sacrifice instead. I guess that's what happens when you optimise the data storage for one query.


No, there are no indexes in QuestDB in the article. None. Zero. That's a glaring mistake in the ClickHouse article. It should be titled "Yes, QuestDB is Faster".


Why does the lack of indexes matter? Especially when the size on disk is so much higher? Defining a sensible index isn't an unreasonable or daunting task, and minimal effort in CH got a 4x speedup over QuestDB. "It's faster if you invest literally zero time making it efficient" doesn't offer any practical benefit to anyone.

If it was demonstrated that Quest did a better job overall in the majority of cases where an optimization would have been missed, that's one thing. But this feels awfully nitpicky.


The article is not _just adding an index_. They are embedding one of the search fields in the table's _primary key_. That likely means the whole physical table layout is tailored for that single specific query.

While it can help win this particular benchmark, it's questionable whether it's usable in practice. Chances are an analytical database serves queries of various shapes. If you only need to run a single query over and over again, you might be better off with a stream processing engine anyway.


The primary key is, in effect, an index. Specializing on the latitude field of a table of geographic data seems like an incredibly small thing to nitpick.


Yeah, I've read it more carefully and it seems they're doing a full scan.


Looks like QuestDB is faster if you don't optimize your table storage for a single query.

But if you're okay with only a limited set of columns being scanned faster than the rest, ClickHouse comes first.


What if the purpose of the article is to compare queries without indexes?


Doesn't matter, since that clearly wasn't the purpose of the article. After all, they were totally happy to add an index for another competing DB as long as they happened to win that comparison. Then they crow about how they beat having an index.

Pretty sleazy.


So, maybe don't create specific scenarios for corner cases and then generalize the outcome? And write articles about common scenarios that are important for the people who will use the technology on a daily basis.


My personal view is that having fast queries without indexes is quite a general outcome.

