
For a similar use case of ours on ClickHouse:

We load data from Postgres tables that are used to build ClickHouse dictionaries (in-memory hash tables used for JOIN-like lookups).
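For reference, such a dictionary can be declared directly in ClickHouse with a Postgres source. This is a minimal sketch; the host, credentials, and the `users` table/columns are placeholders, not the commenter's actual setup:

```sql
-- Hypothetical hashed dictionary backed by a Postgres table.
CREATE DICTIONARY users_dict
(
    id   UInt64,
    name String
)
PRIMARY KEY id
SOURCE(POSTGRESQL(
    host 'pg-host' port 5432
    user 'reader' password 'secret'
    db 'app' table 'users'
))
LAYOUT(HASHED())            -- in-memory hash table keyed by id
LIFETIME(MIN 300 MAX 600);  -- re-pull from Postgres every 5-10 minutes

-- Lookup instead of a JOIN:
SELECT dictGet('users_dict', 'name', toUInt64(42));
```

`LIFETIME` controls how often ClickHouse refreshes the dictionary from the source, which is what makes this pattern work for slowly changing reference data.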

The big tables do not arrive via a near-real-time sync from Postgres; they are bulk-appended using a separate infrastructure.
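The comment doesn't say what that separate infrastructure is, but a generic bulk-append path from Postgres into ClickHouse (table names, hosts, and credentials below are placeholders) looks like:

```sql
-- Generic bulk-append pattern (an assumption, not necessarily what the commenter runs):
-- Option A: stage a file, then append it.
--   psql -c "\copy big_table TO 'chunk.csv' WITH CSV"
--   clickhouse-client --query "INSERT INTO big_table FORMAT CSV" < chunk.csv

-- Option B: pull directly with ClickHouse's postgresql() table function.
INSERT INTO big_table
SELECT *
FROM postgresql('pg-host:5432', 'app', 'big_table', 'reader', 'secret');
```

Either way, appends land as new parts in a MergeTree table, which is why periodic bulk loads tend to be cheaper than a row-by-row sync.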




Would you be able to share how you implemented "bulk-appended using a separate infrastructure" at a high level?



