Hacker News | conormccarter's comments

Hydrargyrum medium-arc iodide (HMI) lamps are an alternative (they're used as artificial sunlight on movie sets), but you'll want a good AC unit in your office

I thought hydrargyrum was a made-up word, but it's the Latin word for mercury, which explains the Hg chemical symbol. (Just in case anyone finds this interesting.)

Congrats on the launch! I'm one of the cofounders of Prequel (I saw our name in the feature grid - small nit: we do support self-hosting). This is definitely a problem worth solving - the market is still early and I'd bet the rising tide will help all of us convince more teams to support this capability. I'm not a lawyer, but the latest EU Data Act might even make it an obligation for some software vendors?

Maybe I can save you a headache: Snowflake is actively deprecating single-factor username/password auth in favor of key pair auth, so the faster you support that, the fewer mandatory migrations you'll be emailing users about.
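For reference, Snowflake's key-pair setup boils down to a couple of openssl commands along these lines (the user name in the ALTER USER statement is a placeholder):

```shell
# Generate an unencrypted PKCS#8 private key (Snowflake also accepts encrypted keys)
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt

# Derive the matching public key to register with the Snowflake user
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub

# Then, in Snowflake, paste the base64 body of rsa_key.pub (without the
# BEGIN/END header and footer lines):
#   ALTER USER my_service_user SET RSA_PUBLIC_KEY='MIIBIjANBgkq...';
```

After that, the connector authenticates with the private key instead of a password.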


Thanks! Kalan here, I appreciate the nit! PR is already merged. Definitely agreed on the market, it seems like there's a ton of opportunity. And thanks for the heads-up re Snowflake auth! We're actively working on that one, plus a few other auth modes for Redshift and BQ as well.


Prequel | https://prequel.co | Senior Software Engineer | Full Time | GoLang, Postgres, Typescript, React, K8s | $150k-$180k + equity | ONSITE in NYC. Prequel is an API that makes it easy for B2B companies to connect and sync data directly to their customers' data warehouses.

We're a small team of five engineers based in NYC. We're solving a number of hard technical problems that come with syncing tens of billions of rows of data every day with perfect data integrity: building reliable & scalable infrastructure, making data pipelines manageable without domain expertise, and creating a UX that abstracts out the underlying complexity to let the user share or receive data. We're powering this feature at companies like LogRocket, Modern Treasury, Postscript, and Metronome.

Our stack is primarily K8s/Postgres/DuckDB/Golang/React/TypeScript and we support deployments in both our public cloud as well as our customers' clouds. Due to the nature of the product, we work with nearly every data warehouse product and most of the popular RDBMSs.

We're looking for a full stack engineer who can run the gamut from CI to UI. If you are interested in scaling infrastructure, distributed systems, developer tools, or relational databases, we have a lot of greenfield projects in these domains. We want someone who can humbly, but effectively, help us keep pushing our level of engineering excellence. We're open to those who don't already know our stack, but have the talent and drive to learn.

// Full job posting here -- https://prequelco.notion.site/Senior-Software-Engineer-Prequ...

// To apply -- email jobs at prequel.co and include [HN] in the subject line


Hey HN -- Conor here, aka the guy from the demo. Happy to answer any questions you have!


Hey Ian, great to hear from you – thanks for the note, and hope you're doing well!


Just saw and sent you a note – would love to hear more!


Snowflake has made it really easy to share to other Snowflake instances, and the other major cloud data warehouses are working on similar warehouse-specific features as well. This makes sense because it can be a major driver of growth for them. The way we see the space playing out is that every cloud warehouse develops some version of same-vendor sharing, while neglecting competitive warehouse support.

Long term, we'd like to be the interoperable player focused purely on data sharing that plays nicely with any of the upstream sources, but also facilitates connecting to any data destination. (This also means we can spend more time building thoughtful interfaces – API and UI – for onboarding and managing source and destination connections)


Got it. Makes sense.

However, don't you think it makes sense for Snowflake to support "Replicate my Postgres/MySQL/Oracle" to Snowflake? Given how much they are investing in making it easier to get data into Snowflake.


Oh, yeah, it probably does make sense for the warehouses to make that part easier, at least for the more popular transactional db choices. You may have seen Google/BigQuery recently announced their off-the-shelf replication service for Oracle and MySQL. As far as Prequel goes, we connect to either kind of source (transactional db or data warehouse), so we're largely agnostic to how the data moves around internally before it gets sent to customers.


Thanks for watching and for the kind words! Re 1. – it’s definitely on the roadmap – we’re planning on getting to it in Q4/Q1, but we can move it up depending on customer need. Re 2. – for tables without a tenant ID column, we suggest creating a view on top of that table that performs the join to add the tenant column (e.g., "school_id") – it's a pretty common pre-req.
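To make the view pre-req concrete, here's a self-contained sketch using sqlite3 from the Python standard library; the table and column names (assignments, classrooms, school_id) are made up for illustration, not Prequel's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- A table that lacks a tenant column...
    CREATE TABLE assignments (id INTEGER PRIMARY KEY, classroom_id INTEGER, title TEXT);
    -- ...and a related table that has one.
    CREATE TABLE classrooms (id INTEGER PRIMARY KEY, school_id INTEGER);

    INSERT INTO classrooms VALUES (1, 100), (2, 200);
    INSERT INTO assignments VALUES (10, 1, 'Essay'), (11, 2, 'Quiz');

    -- The view joins the tenant column in, so each row can be
    -- filtered per customer during a sync.
    CREATE VIEW assignments_with_tenant AS
    SELECT a.id, a.title, c.school_id
    FROM assignments a JOIN classrooms c ON a.classroom_id = c.id;
""")

rows = conn.execute(
    "SELECT id, title, school_id FROM assignments_with_tenant WHERE school_id = 100"
).fetchall()
print(rows)  # [(10, 'Essay', 100)]
```

The sync then reads from the view instead of the base table, and the tenant filter works the same as on any table that already has the column.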


Re: scale – we do handle billions of rows! As you can imagine, exact throughput depends on the source and destination database (as well as the width of the rows) but to give you a rough sense – on most warehouses, we can sync on the order of 5M rows per minute for each customer that you want to send data to. In practice, for a source with billions of rows, the initial backfill might take a few hours, and each incremental sync thereafter will be much faster. We can hook you up with a sandbox account if you want to run your own speed test!
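As a back-of-envelope check on those numbers (the 5M rows/minute figure is the rough per-destination throughput mentioned above; the 2B-row input is just an example):

```python
ROWS_PER_MINUTE = 5_000_000  # rough per-destination sync throughput

def backfill_hours(total_rows: int) -> float:
    """Estimated wall-clock hours for an initial backfill at steady throughput."""
    return total_rows / ROWS_PER_MINUTE / 60

print(f"{backfill_hours(2_000_000_000):.1f} h")  # 2B rows -> ~6.7 h
```

which lines up with "a few hours" for a billions-of-rows backfill.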

Re: configuration – you would create a config file for each "source" table that you want to make available to customers, including which columns should be sent over. Then at the destination level, you can specify the subset of tables you'd like to sync. This could be a single common schema for all customers, or different schemas based on the products the customer uses.
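A hypothetical sketch of what such a per-table config might look like (this is an illustration of the shape described above, not Prequel's actual file format; all field names are invented):

```yaml
# invoices.yaml -- one config per source table made available to customers
table: invoices
columns:
  - id
  - amount_cents
  - issued_at
tenant_column: customer_id   # used to scope rows per destination
```

At the destination level you'd then pick which of these configured tables a given customer receives.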


Thank you! And exactly right – Postgres, Snowflake, BigQuery, Redshift, and Databricks are the sources we support today. We also have some streaming sources (like Kafka) in beta with a couple of pilot users. At this point, it's fairly negligible work for us to add support for new SQL-based sources, so we can add new sources quickly as needed.

