I remember Microsoft being a huge marketing proponent of the 3-tier architecture in the late 90's, particularly after the release of ASP. The model was promoted everywhere - MSDN, books, blogs, conferences. At this point COM was out of the picture and ASP served as both front-end (serving the HTML) and back-end (handling the server-side logic).
I think it's a perfectly fine approach in 2025 now that CI/CD has proliferated and you're less likely to run into a human DBA arbitrarily blocking deployments. It was always the feudal-lord mentality of DB operations that made relying so much on stored procedures undesirable.
Mozilla XULRunner, which was the thing that happened before Electron, wasn't too bad though. It had way more traction than Positron. It died because Mozilla didn't care about it.
By that point Electron had gained too much momentum and public mindshare to compete with without a big marketing push and flagship project at least as significant as Atom and VSCode were. Positron never got either.
Electron apps were popular, not Electron itself. Even now there are only 4-5 dozen popular apps using it, while I probably run across that number of Qt/GTK apps a month. It's a sort of weird dichotomy it finds itself in.
Index-backed point look-ups are not the problem for analytical queries; the problem is minimizing disk I/O for large scans with high column or predicate selectivity.
Once you've optimized for the more obvious filters like timestamp and primary key - say by partitioning so you avoid needing an index in the first place - you're left having to aggregate over many gigabytes of data, and an index doesn't help since the query is probably going to touch every page within the filtered partitions.
You can solve some of these problems in Postgres, partitioning among them, but you're still stuck with random I/O within each page and non-SIMD aggregation in a loop. That approach has a ceiling that implementations like ClickHouse don't have.
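To make that ceiling concrete, here's a toy Python sketch (nothing to do with either engine's actual internals, just an illustration of row-at-a-time vs. columnar aggregation over the same values):

    import time
    import numpy as np

    N = 2_000_000
    rows = [{"revenue": float(i % 100)} for i in range(N)]  # row-oriented records
    col = np.asarray([r["revenue"] for r in rows])          # same data as one contiguous column

    t0 = time.perf_counter()
    total = 0.0
    for r in rows:              # one value per iteration, lots of pointer chasing
        total += r["revenue"]
    t1 = time.perf_counter()

    col.sum()                   # contiguous memory, vectorized under the hood
    t2 = time.perf_counter()

    print(f"row-at-a-time: {t1 - t0:.3f}s  columnar: {t2 - t1:.4f}s")

The row loop is roughly the shape of work a row-store executor does per tuple; the columnar sum is the shape of work a vectorized engine gets to do.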
Cedar uses a PAX storage layout (hybrid row/column) so I imagine 64KB optimizes well for OLAP workloads while keeping excess disk I/O for point look-ups manageable. Depending on your OLAP load that "excess" I/O could either be a rounding error or a bottleneck.
> You are paying enormous ingress/egress fees to do this.
It looks like their offering runs on the same cloud provider as the client, so no bandwidth fees. Right now that looks to be AWS, but they mention Azure/GCP/self-hosted as well.
As I pointed out, the 10,000x speed-up claim is smoke and mirrors, and anyone on your team could have spent 10 minutes profiling the code and figured that out. It's a silly claim that doesn't hold up to scrutiny and detracts from your project by setting off the bullshit detector most programmers have for marketing hyperbole. It's not the flex you think it is.
This "10,000x" faster claim is specific to how long it takes to instantiate a client object, before actually interacting with it.
Turns out the LangGraph code uses the official OpenAI library, which eagerly instantiates an HTTPS transport, and when I profiled it with pyinstrument, 65% of the runtime was spent in ssl.create_default_context (SSLContext.load_verify_locations). This overhead is further exacerbated by the fact that it happens twice - once for the sync client, and a second time for the async client. The rest of the overhead seems to be Pydantic and setting up the initial state/graph.
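For reference, this is roughly the 10-minute check I mean - a minimal sketch assuming the openai and pyinstrument packages are installed; the API key is a dummy since nothing is sent over the network:

    from pyinstrument import Profiler
    from openai import OpenAI

    profiler = Profiler()
    profiler.start()
    client = OpenAI(api_key="sk-dummy")   # construction only, no request is made
    profiler.stop()

    # look for ssl.create_default_context / load_verify_locations in the output
    print(profiler.output_text(unicode=True, color=False))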
Agno wrote their own OpenAI wrapper and defers setting up the HTTPS transport instead of doing it at agent creation, so that cost still exists; it's just not accounted for in this "benchmark". Agno still seems to be slightly faster when you control for this, but amortized over a couple of requests it's not even a rounding error.
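If you want to control for it yourself, the openai client accepts a pre-built httpx client, so you can pay the SSL setup once up front and share it - a sketch:

    import httpx
    from openai import OpenAI

    shared_http = httpx.Client()          # SSL context gets built here, once
    a = OpenAI(api_key="sk-dummy", http_client=shared_http)
    b = OpenAI(api_key="sk-dummy", http_client=shared_http)  # near-free after the first

Amortized like that, the difference between the two frameworks' client setup is noise.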
I hope the developers get rid of this "claim" and focus on other merits.