this one is a long time coming and it's a continuation of our acquisition of Logflare[0]. Since the acquisition we've been open sourcing the server, which you can find here: https://github.com/Logflare/logflare
Logflare handles about 1.5 billion log events every day on Supabase. It's built with Elixir and has no problems with that workload.
This is really just the start of the Logflare updates. All logs are currently ingested into BigQuery, and we are adding support for Clickhouse and other backends (focusing primarily on open source backends). Over time Logflare will function very much like an open source Sentry alternative, where you can ingest data from various sources.
The team will be around if you have any questions about the technical implementation
The tools written in Elixir are this logging server, our Realtime server, and a Postgres pooler which we launched on HN this weekend [0]
The decision is centered more around the BEAM [1], the virtual machine which powers Elixir/Erlang. It was designed for high-availability systems and it's very efficient at managing large numbers of processes and threads.
In the context of these tools, it's useful for handling open connections and spiky workloads.
edit: Also worth mentioning - we don't care too much about the language. We aim to use the right tool for the job, or preferably an existing open source tool. A good example of this is PostgREST which is written in Haskell. The only person on our team who knows Haskell is Steve (the maintainer). We don't mind because it's the right tool for the job, it's open source, and it has an amazing community.
But we also sponsor other members of the community to work on PostgREST[0]. We don't promote the paid sponsorship much. Instead, we find people who are already doing the work and then offer them sponsorship to continue doing it.
There's more than one developer out there who knows Haskell and SQL. You just run into issues if you want those developers to sit in your office next to you.
your comment is probably metaphorical, but it's worth pointing out how we have de-risked this: Supabase is fully distributed. We look for the right person no matter where they live. We're currently 65 people in more than 20 countries
Simply drawing a comparison with that comment, so that readers know the direction of the product.
I'll leave my comment above unedited, so that your comment makes sense - but perhaps I shouldn't have said "alternative", and simply left it as "Over time Logflare will function very much like Sentry."
Sorry for off topic question, but do you plan on doing anything about the egress prices?
I do understand that you are hosting your stuff on AWS, which is subject to AWS egress fees, but the egress fee is just way too much for any serious project (imho)
At this stage we're only using AWS for our infra. I wish we could lower the costs, but it would be expensive for us to cover the AWS egress charges for all of our customers.
> but the egress fee is just way too much for any serious project
I assume you're not hosting your serious projects on AWS? Do you host them in another cloud provider?
we are definitely looking at other cloud providers, and slowly moving our infra away from AWS. I don't have any timeline on that - it's hard to migrate all of our databases while also keeping up with the features that our community is demanding.
thank you thank you. the longer i spend as a dev the more i have infinite appreciation for better logging.
this is asking a lot but could we get a breakdown as to what you think are relevant decision factors for choice of logging server? (eg vs logstash/kibana, splunk and i guess people DIY with clickhouse but we can effectively ignore that)
Hi I’m one of the logflare devs and I work on observability at Supabase.
Great question. To directly address some of the tools you mentioned:
- Logstash is the transport and transformation portion (along with Filebeat) of the Elastic stack, and it performs the same functions as Vector. It is out of scope for Logflare, which focuses on acting as a centralised server to point all your logging pipelines to.
- Kibana is the visualisation layer of the Elastic stack. For Supabase, this functionality is taken over by the Supabase Studio, and the reporting capabilities will eventually converge to match APM services like Sentry etc.
- Splunk's core is not open source, and it is very much geared towards large-contract enterprise customers. Their main product is also much more focused on visualisation as opposed to bare log analysis.
When it comes to a logging server/service, you’d consider the following factors:
- Cost. Logging is quite expensive, and the way that Logflare leverages BQ (and in the future, other OLAP engines) cuts down storage costs greatly
- Reliability. The last thing that you would want is for your application to take high load and go down, but you’re unable to debug it because the high traffic led to high log load and subsequently took down your o11y server. Logflare is built on the BEAM and can handle high loads without breaking a sweat. We’ve handled over 10x average load for ingestion spikes and Logflare just chugs along.
- Querying capabilities. Storing logs isn’t enough, you need to effectively debug and aggregate your logs for insights. This incurs both querying costs and additional complexity in the sense that your storage mechanism must be able to handle such complex queries without breaking the bank. Logflare performs optimisations for these, performing table partitioning and caching to make sure costs are kept low. This allows Supabase to expose all logging data to users and perform joins and filters within the Logs Explorer to their hearts’ content.
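To make the querying side concrete, a Logs Explorer query might look something like the sketch below - the table and column names are made up for illustration, not the actual Supabase schema. Filtering on the partition column (the timestamp) is what keeps the scanned bytes, and therefore the cost, low:

    select timestamp, level, path, event_message
    from edge_logs
    where timestamp > timestamp_sub(current_timestamp(), interval 6 hour)
      and path like '/api/%'
      and level = 'error'
    order by timestamp desc
    limit 100;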
> - Reliability. The last thing that you would want is for your application to take high load and go down, but you’re unable to debug it because the high traffic led to high log load and subsequently took down your o11y server. Logflare is built on the BEAM and can handle high loads without breaking a sweat. We’ve handled over 10x average load for ingestion spikes and Logflare just chugs along.
Can logflare scale out into multiple containers / vms / machines? Is Supabase currently deploying something like autoscaling with kubernetes or something?
For the Logflare infra, we manage a cluster of around 6 VMs. Not a very complex setup, and no need for k8s as we have a monolith architecture. We also don't scale horizontally much, since cross-node chatter increases with each additional node.
> Logflare was available under a BSL license prior to joining Supabase. We’ve since changed the license to Apache 2.0, aligning it with our open source philosophy.
It would be awesome if this could use Quickwit as a backend which is a new promising alternative to Elasticsearch, I’ve been using it internally and it’s much more lightweight and easier to run.
Wow man Quickwit looks amazing. Probably would have reached for it when I started Logflare if it was around.
I have not seen anything open source that really separates compute and storage like that in one package. You can set up Clickhouse to use an object store but there's a bunch of nuance there. People are starting to hack together something similar using DuckDB on Lambda, basically as a query engine on top of S3. I wonder if you can use DuckDB to get a SQL interface going?
Going to have to take a close look for sure. SQL is definitely a blocker. With Supabase we give people the ability to query their logs with full SQL. And with Logflare Endpoints you create an API endpoint with a SQL query.
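To give a feel for Endpoints: you save a SQL query and Logflare serves its results over HTTP. A sketch of the kind of aggregate you might put behind one (names are illustrative, and this isn't the exact Endpoints syntax):

    select
      timestamp_trunc(timestamp, hour) as hour,
      count(*) as error_count
    from my_app_logs
    where level = 'error'
      and timestamp > timestamp_sub(current_timestamp(), interval 1 day)
    group by 1
    order by 1;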
I'm convinced that a combo supabase + quickwit can be quite powerful.
It's possible to make Quickwit support simple SQL queries like "SELECT xxx FROM yyy GROUP BY something" quite fast.
If you are eager to explore a POC with quickwit, let's have a quick chat, here is my email francois [at] quickwit [dot] io.
Supporting quickwit and its query language is definitely feasible and would not require quickwit to have SQL support. However, we've got SQL DBs on our roadmap at the moment, so it might be a while until we get to quickwit
> Simply provide your BigQuery credentials and we stream logs into your BigQuery table while automatically managing the schema
I didn't know BigQuery was capable of accepting streaming log data - in my mental model of the world it was the kind of database that you update using the occasional batch job, not from a streaming source of data.
It's pretty awesome & very cost effective! When I was at Twilio we streamed logstash/fluentd (forget which) -> kinesis -> BigQuery and it worked great - certainly better than the days we were trying to manage ES ourselves
is kinesis in the middle for more durability/being able to handle spikes? when and how does one make that sort of decision to insert kinesis into things? (esp since you were gonna get the data from aws -> gcp so you don't save anything staying within aws anyway)
I tried LogFlare (which is now Supabase Logs) in January, but it didn't work well for what I wanted.
Supabase Logs / Logflare seems primarily interested in creating graphs from logs rather than using logs for diagnostic purposes.
I've been looking for a log solution that's good for the use case of high retention but low volume.
I have a few small apps that generate a few MB of logs per month, so basically nothing. But I still want to have all my logs searchable in one place.
Most logging solutions set retention based on time rather than data size. So regardless of how much you're logging, they throw away your logs after somewhere between 7 and 30 days unless you're on an insane Enterprise plan.
I was excited about LogFlare because it supports unlimited retention, but I ran into too many issues and had to cancel my subscription:
* To search your logs, you need to write a SQL-like query in LogFlare's DSL. You can't just put in a route (e.g. /api/auth) like you can with other log analytics.
* Search only shows the matching lines. Usually, what I want to see is the log line in context. For example, if I search "catastrophic error" I want to see the log lines leading up to that, not just that specific line.
* Search is limited to a maximum of 100 results. If you want to see more results, you need to rewrite your query rather than just scroll up or hit a "load more" button.
* When you do adjust the query to a larger time window, the query will fail because it can't generate a graph unless you also adjust the group_by in your query to match the new time window's limits. This is an annoying obstacle if you don't care about graphing the results and are just trying to diagnose an issue in your logs.
I found support lacking as well. I emailed support to ask if I was misunderstanding how to use Logflare or if it was just designed for a different use case. I was on a paid plan, but I still had to wait 3 business days for a response. When the response came, they just said that it was designed for me but didn't address any of the issues I brought up.
I do like that Logflare/Supabase let you bring your own BigQuery. That's nice for customers like me who want low-volume, high retention. I hope they continue iterating because it has potential.
In the meantime, I've found LogTail to be a pretty good alternative, but they're limited to 30 days of retention even on the highest tier plan.
Firebase uses Google Cloud Logging. Taking a quick look at the blog post here, Google Cloud Logging already seems to support everything it describes.
Is there something in it that makes it a better solution in some way than what Google is already providing? (Note that Supabase Logs appears to rely on Google BigQuery so you'll be running on Google either way.)
> Logflare currently supports a BigQuery backend. We plan to add support for other analytics-optimized databases, like Clickhouse. We will also support pushing data to other web services, making Logflare a good fit for any data pipeline.
>
> This will benefit the Supabase CLI: once Postgres support is available, Logflare will be able to integrate seamlessly, without the BigQuery requirement.
You mentioned above that BigQuery reduces cost. I am surprised by that assertion, tbh. Can you point out ways in which Logflare uses it that makes it so (for ex, is it tiered-storage with a BQ front-end)?
How does Logflare's approach contrast with other entrants like axiom.co/99 who are leveraging blob stores (Cloudflare R2) for storage and serverless for querying for lower costs?
Multiple pluggable storage/query backends (like Clickhouse) is all good, but is there a default that Logflare is going to recommend / settle on?
Are there plans to go beyond just APM with Logflare (like metrics and traces, for instance)?
I guess, at some level, this product signals a move away from Postgres-for-everything stance?
> Can you point out ways in which Logflare uses it that makes it so (for ex, is it tiered-storage with a BQ front-end)?
After 3 months BigQuery storage ends up being about half the cost of object storage if you use partitioned tables and don't edit the data.
> How does Logflare's approach contrast with other entrants like axiom.co/99 who are leveraging blob stores (Cloudflare R2) for storage and serverless for querying for lower costs?
Haven't really looked at their arch but BigQuery kind of does that for us.
> Multiple pluggable storage/query backends (like Clickhouse) is all good, but is there a default that Logflare is going to recommend / settle on?
tbd
> Are there plans to go beyond just APM with Logflare (like metrics and traces, for instance)?
Yes. You can send any JSON payload to Logflare and it will simply handle it. Official OpenTelemetry support is coming, but it should just work if your library can send it over as JSON. You can also send it metrics.
> I guess, at some level, this product signals a move away from Postgres-for-everything stance?
Postgres will usually last you a very long time, but at some point, with lots of this kind of data, you'll really want to use an OLAP store.
With Supabase Wrappers you'll be able to easily access your analytics store from Postgres.
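As a rough sketch of what that could look like from the Postgres side - this is just standard foreign data wrapper SQL; the wrapper, server, and table names are placeholders, not the actual Wrappers API:

    create server analytics_srv
      foreign data wrapper analytics_wrapper
      options (project 'my_project', dataset 'logs');

    create foreign table log_events (
      "timestamp" timestamptz,
      path text,
      status_code int
    ) server analytics_srv;

    -- the analytics store is now queryable next to your application tables
    select path, count(*) as hits
    from log_events
    where "timestamp" > now() - interval '1 day'
    group by path
    order by hits desc;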
> After 3 months BigQuery storage ends up being about half the cost of object storage if you use partitioned tables and don't edit the data.
It sounds like it's just standard GCS nearline pricing plus BigQuery margin fee. Raw nearline is cheaper to store, but has a per-GB access fee that'll hurt if you re-process old logs.
Interestingly, BigQuery's streaming read free tier of 300 TB/mo makes BigQuery a fun hack for storing old-but-often-read data, even if it's e.g. backup blobs.
BigQuery reduces storage costs: even before their recent pricing change, the cost per GB [0] is on par with (and slightly lower than) S3 storage costs [0], which we can use as an estimated market price for data storage. BigQuery makes its money off querying costs, which we take steps to optimize and minimize with table partitioning, caching, and UI design, both client side and server side. Table partitioning also helps cut per-GB storage costs in half by switching to long-term logical storage after 90 days.
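For reference, the partitioning piece is just BigQuery's native time partitioning - roughly something like this, with an illustrative schema:

    -- day-partitioned table: partitions untouched for 90 days are billed
    -- at BigQuery's long-term storage rate (roughly half the active rate)
    create table logs.log_events (
      timestamp timestamp,
      event_message string,
      metadata json
    )
    partition by timestamp_trunc(timestamp, day);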
Of course, using blob storage might result in comparable storage costs. However, relying on blob storage would likely increase querying costs (in terms of GET requests) as well as the complexity of querying across multiple stored objects/buckets, as opposed to relying on BQ to handle the querying.
In the long term, we would likely continue using BQ for our platform infrastructure, unless GCP changes their pricing in a way that adversely affects us. When it comes to self-hosting, it would of course depend on how much complexity one would like to take on, and out-sourcing the storage and querying management is a better option in most cases.
We would not rule out such features, but we consider them nice-to-haves and they are very far down the priority list. At the moment we're mostly focused on improving integration with the Supabase products and platform. It is actually possible to log stack traces, and this is supported out of the box for certain integrations such as the Logflare Logger Backend [2].
Postgres without any extensions is not optimized for columnar storage, and would not be an optimal experience for such large-scale data storage and querying. It is also not advisable to use the same production datastore for both your application and your observability; it is better to keep them separate to avoid coupling. If one really wants to use the same Postgres server for everything, there are extensions that allow Postgres to work as a columnar store, such as citus[3] and timescaledb[4], and we have not ruled out supporting these extensions as Logflare backends.
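For example, with TimescaleDB the logs table would just be declared as a hypertable, and with Citus you could use its columnar access method - a rough sketch, with an illustrative schema:

    create table log_events (
      "timestamp" timestamptz not null,
      level text,
      event_message text
    );

    -- TimescaleDB: chunk the table by time
    select create_hypertable('log_events', 'timestamp');

    -- Citus columnar (alternative): create the table with the columnar access method
    -- create table log_events (...) using columnar;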
What are some of the largest production applications built using Supabase? I know it's popular for whipping something up for a hackathon, but how battle tested is it?
Also does anybody know what they're doing behind the scenes with the database? I know their storage uses s3, functions (I think) use Deno, this uses BigQuery. Is their db on RDS/Aurora? If so how do they claim max DB size of 1024 TB while Aurora is 128 TB?
We have tens of thousands of applications using Supabase in production. "Size" is a hard one to answer because it means different things to different people. We have projects with databases and storage in the hundreds of terabytes, projects making millions of API requests, and logos from Fortune 100 companies.
If sentry.io has a self-hosted version and Logflare has one too, why would I pick Logflare? Are there any differences? I tried sentry.io and it's really convenient, at least the hosted version.
Depends on your use case. If the volume of data you're looking at is modest, Sentry is more feature-rich and refined.
If you want to expose a logging interface to your customers and/or easily integrate large volumes of structured event type data into the rest of your infra, then maybe look at Logflare.
One caveat: the only backend store supported right now is BigQuery. We will be releasing support for Clickhouse and Postgres in the coming months; we just couldn't fit it into this Launch Week
Supabase-specific SDKs are still in the works. However, if you're using the Logflare service as is, there is a pino transport[0] available for sending events directly to Logflare.
[0] acquisition: https://supabase.com/blog/supabase-acquires-logflare