Hacker News | tomnipotent's comments

> one of the groups that was motivated was MS

I remember Microsoft being a huge marketing proponent of the 3-tier architecture in the late '90s, particularly after the release of ASP. The model was promoted everywhere: MSDN, books, blogs, conferences. At this point COM was out of the picture and ASP served as both front-end (rendering HTML) and back-end (handling requests).


The Percona founders worked at MySQL AB previously.

I think it's a perfectly fine approach in 2025 now that CI/CD has proliferated and you're less likely to run into a human DBA arbitrarily blocking deployments. It was always the feudal-lord mentality of DB operations that made relying so heavily on stored procedures undesirable.

The author works on Cody at Sourcegraph so I'll give him the benefit of the doubt that he's tried all the major players in the game.

He's also author of one of the most legendary posts about programming language design of all time, Execution in the Kingdom of Nouns.

http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...


Mozilla Positron, their attempt at an Electron-compatible runtime on top of Gecko, did not pick up any traction and was cancelled.


Mozilla XULRunner, which was the thing that happened before Electron, wasn't too bad though. It had way more traction than Positron. It died because Mozilla didn't care about it.


Well, and XULRunner was based on XUL, something Mozilla was trying to get rid of for very good reasons.


It did work with HTML though IIRC. (Well, given Firefox UI is now mostly HTML with some XUL sprinkled on top.)

That was quite late in the game, though. Kind of like closing the barn doors after the horses have already bolted.


It was released in 2016, only two years after Atom and Slack and a year after VSCode. Not sure I'd call that late.


By that point Electron had gained too much momentum and public mindshare to compete with without a big marketing push and flagship project at least as significant as Atom and VSCode were. Positron never got either.


Electron apps were popular, not Electron itself. Even now there are only 4-5 dozen popular apps using it, while I probably run across that many Qt/GTK apps in a month. It's a weird dichotomy it finds itself in.


For analytical queries, the problem isn't index-backed point look-ups; it's minimizing disk I/O for large scans with high column or predicate selectivity.

Once you've optimized the more obvious filters like timestamp and primary key (e.g. using partitions to avoid indexes in the first place), you're left needing to aggregate over many gigabytes of data, and an index doesn't help since the query is probably going to touch every page within the filtered partitions.

Postgres solves some of these problems (partitioning, for instance), but you're still stuck doing random I/O within each page to perform non-SIMD aggregations in a loop. That approach has a ceiling that implementations like ClickHouse do not.
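To make the layout point concrete, here's a toy Python sketch (purely illustrative, not Postgres or ClickHouse internals): the same "price" values aggregated from a row-oriented store (one dict per row, values scattered across the heap) versus a contiguous column. The contiguous layout is what makes SIMD-style tight scans possible in the first place.

```python
import array
import random
import time

# Hypothetical data: one numeric column across a million rows.
N = 1_000_000
random.seed(0)
prices = [random.random() for _ in range(N)]

rows = [{"id": i, "price": p} for i, p in enumerate(prices)]  # row store
price_col = array.array("d", prices)                          # column store

t0 = time.perf_counter()
row_total = sum(r["price"] for r in rows)   # pointer-chasing, one tuple at a time
t1 = time.perf_counter()
col_total = sum(price_col)                  # tight scan over contiguous memory
t2 = time.perf_counter()

assert abs(row_total - col_total) < 1e-6
print(f"row-wise:    {t1 - t0:.3f}s")
print(f"column-wise: {t2 - t1:.3f}s")
```

Even without real SIMD (plain CPython here), the contiguous scan is noticeably faster; a vectorized engine widens that gap further.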


Cedar uses a PAX storage layout (hybrid row/column) so I imagine 64KB optimizes well for OLAP workloads while keeping excess disk I/O for point look-ups manageable. Depending on your OLAP load that "excess" I/O could either be a rounding error or a bottleneck.
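As a sketch of what PAX means here (names and structure are my illustration, not Cedar's actual format): rows are assigned to a fixed-size page, but within the page each column's values are grouped into a "minipage", so a column scan touches one contiguous region while a point look-up gathers one slot from each minipage.

```python
# Toy PAX (Partition Attributes Across) page. The 64KB figure is from the
# comment above; this sketch ignores real encoding, null handling, etc.
PAGE_SIZE = 64 * 1024

class PaxPage:
    def __init__(self, columns):
        self.columns = columns                      # ordered column names
        self.minipages = {c: [] for c in columns}   # per-column value groups

    def append_row(self, row):
        for c in self.columns:
            self.minipages[c].append(row[c])

    def scan_column(self, name):
        # OLAP scan: read a single minipage, never the other columns
        return self.minipages[name]

    def point_lookup(self, i):
        # Point look-up: gather the i-th slot from every minipage
        return {c: self.minipages[c][i] for c in self.columns}

page = PaxPage(["id", "price"])
page.append_row({"id": 1, "price": 9.5})
page.append_row({"id": 2, "price": 4.0})
print(page.scan_column("price"))   # [9.5, 4.0]
print(page.point_lookup(0))        # {'id': 1, 'price': 9.5}
```

The "excess" I/O for point look-ups comes from the fact that a single row's values are spread across every minipage in the page.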


> You are paying enormous ingress/egress fees to do this.

It looks like their offering runs on the same cloud provider as the client, so no bandwidth fees. Right now that's AWS, though they mention Azure/GCP/self-hosted.


As I pointed out, the 10,000x speed-up claim is smoke and mirrors, and anyone on your team could have spent 10 minutes profiling the code to figure that out. It's a silly claim that doesn't hold up to scrutiny, and it detracts from your project by setting off the bullshit detector most programmers have for marketing hyperbole. It's not the flex you think it is.


This "10,000x faster" claim is specific to how long it takes to instantiate a client object, before actually interacting with it.

Turns out the LangGraph code uses the official OpenAI library, which eagerly instantiates an HTTPS transport; when I tested with pyinstrument, 65% of the runtime was spent in ssl.create_default_context (SSLContext.load_verify_locations). The overhead is further exacerbated by the fact that it happens twice: once for the sync client and a second time for the async client. The rest of the overhead seems to be Pydantic and setting up the initial state/graph.
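You can reproduce the dominant cost yourself; ssl.create_default_context() loads and parses the system CA bundle, which is exactly the load_verify_locations work the profile showed. (Timing will vary by machine; this just demonstrates it's a measurable one-time cost paid per client.)

```python
import ssl
import time

# Time the CA-bundle load that dominates eager HTTPS-client construction.
# An SDK that builds both a sync and an async client pays this twice.
t0 = time.perf_counter()
ctx = ssl.create_default_context()   # calls load_verify_locations internally
elapsed = time.perf_counter() - t0

assert isinstance(ctx, ssl.SSLContext)
print(f"CA bundle load: {elapsed * 1000:.1f} ms")
```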

Agno wrote their own OpenAI wrapper and defers setting up the HTTPS transport until after agent creation, so that cost still exists; it's just not accounted for in this "benchmark". Agno still seems to be slightly faster when you control for this, but amortized over a couple of requests the difference isn't even a rounding error.
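The deferral pattern is trivial to sketch (this is my illustration, not Agno's actual code): push the expensive TLS setup behind first use, and a benchmark that only times construction looks spectacular.

```python
import ssl

class LazyClient:
    """Client whose expensive TLS setup is deferred to first use."""

    def __init__(self):
        self._ctx = None                 # construction is nearly free

    @property
    def ctx(self):
        if self._ctx is None:            # pay the CA-bundle cost lazily,
            self._ctx = ssl.create_default_context()  # on first request
        return self._ctx

client = LazyClient()        # a construction-only "benchmark" times this line
assert client._ctx is None   # nothing expensive has happened yet
_ = client.ctx               # first real request triggers the actual cost
assert client._ctx is not None
```

Both designs do the same total work over the life of the process; only where the cost lands differs, which is why the per-request amortized difference is negligible.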

I hope the developers get rid of this "claim" and focus on other merits.


I mean, both companies are just things I could have Cursor do in a few hours. So probably not.


LangChain has several products and has been building in the agent space for years.

I'm a fan of vibe coding, but that's kind of a stretch.

lmfao

