
It depends -- mostly on whether the vendor (the company receiving the data) is comfortable requiring the source to map some fields.

For low-volume cases, we can operate with zero field mapping. In those cases, we run every transfer as a full refresh.
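To make that concrete, a full refresh just re-copies the whole table every run. A minimal sketch (this isn't our actual code; the table and column names are made up, and SQLite stands in for the source and destination):

    import sqlite3

    def full_refresh(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
        # No field mapping needed: wipe the target and re-copy everything.
        rows = src.execute("SELECT id, name, last_updated_at FROM orders").fetchall()
        dst.execute("DELETE FROM orders")
        dst.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
        dst.commit()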

If the volumes are higher, then we'll typically ask the source to expose a primary key and a last_updated_at timestamp column. In those cases, we run incremental transfers: the last_updated_at tells us which rows to transfer, and the primary key lets us merge them into the destination table without creating dupes.
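Roughly, one incremental run then looks like this (again an illustrative sketch with made-up names, continuing the SQLite stand-ins above, not our actual implementation):

    def incremental_run(src: sqlite3.Connection, dst: sqlite3.Connection,
                        watermark: str) -> str:
        # Pull only rows touched since the last run.
        rows = src.execute(
            "SELECT id, name, last_updated_at FROM orders"
            " WHERE last_updated_at > ?",
            (watermark,),
        ).fetchall()
        # Upsert on the primary key so replaying a window never creates dupes
        # (assumes id is the PRIMARY KEY of the destination table).
        dst.executemany(
            "INSERT INTO orders (id, name, last_updated_at) VALUES (?, ?, ?)"
            " ON CONFLICT(id) DO UPDATE SET"
            " name = excluded.name, last_updated_at = excluded.last_updated_at",
            rows,
        )
        dst.commit()
        # The new watermark is the newest timestamp seen this run.
        return max((r[2] for r in rows), default=watermark)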

Thanks! Can you detect deleted rows as part of that?

Do you support maintaining history in the target table via record effective and termination dates with a current-record indicator, or do you only support maintaining current state at the target?

Can the target be a cloud file store or an old-school SFTP site?


We can detect deleted rows in incremental transfers (and propagate those deletes) if they're soft-deleted in the source, whether via a deleted_at column or an is_deleted column.
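In sketch form (illustrative names, not our actual code; assumes setting deleted_at also bumps last_updated_at in the source, so deletes land in the incremental window):

    import sqlite3

    def propagate_deletes(src: sqlite3.Connection, dst: sqlite3.Connection,
                          watermark: str) -> None:
        # Soft-deleted rows show up in the incremental window because the
        # delete bumped last_updated_at; here the marker is a deleted_at column.
        gone = src.execute(
            "SELECT id FROM orders"
            " WHERE last_updated_at > ? AND deleted_at IS NOT NULL",
            (watermark,),
        ).fetchall()
        # Mirror them as hard deletes in the destination.
        dst.executemany("DELETE FROM orders WHERE id = ?", gone)
        dst.commit()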

For now, we only support maintaining current state in the target.

Yup! We support all the common cloud object stores as destinations (S3, R2, GCS, Azure Blob Storage), as well as vanilla SFTP servers.
