nindalf's comments | Hacker News

The CEO said

> It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.

"From scratch" sounds very impressive. "custom JS VM" is as well. So let's take a look at the dependencies [1], where we find

- html5ever

- cssparser

- rquickjs

That's just Servo [2], a Rust-based browser initially built by Mozilla (and now maintained by Igalia [3]), but with extra steps. So this supposed "from scratch" browser is just calling out to code written by humans. And after all that it doesn't even compile! It's just plain slop.

[1] - https://github.com/wilsonzlin/fastrender/blob/main/Cargo.tom...

[2] - https://github.com/servo/servo

[3] - https://blogs.igalia.com/mrego/servo-2025-stats/


Why would they think it's a great idea to claim they implemented CSS and JS from scratch when the first thing any programmer would do is to look at the code and immediately find out that they're just using libraries for all of that?! Can they really be dumb enough to think no one would notice?!

I guess the answer is that most people will see the claim, read a couple of comments about "how AI can now write browsers, and probably anything else" from people who are happy to take anything at face value if it supports their view (or business), and move on without seeing any of the later commotion. This happens all the time with the news. No one bothers to check later whether the claims were true; they may live their whole lives believing things that were later disproved.


I mean... Cursor is the CEO's first non-internship job. And it was a VSCode Extension that caught fire atop the largest technological groundswell in a few decades.

The default assumption should be that this is a moderately bright, very inexperienced person who has been put way out over his skis.


> Why would they think it's a great idea to claim they implemented CSS and JS from scratch when the first thing any programmer would do is to look at the code and immediately find out that they're just using libraries for all of that?!

Programmers were not the target audience for this announcement. I don’t 100% know who was, but you can kind of guess that it was a mix of VC types for funding, other CEOs for clout, and AI influencers to hype Cursor.

Over-hyping a broken demo for funding is a tale as old as time.

That there’s a bit of a fuck-you to us pleb programmers is probably a bonus.


I don't think he intentionally lied. He just didn't know how to check, and the AI wrote

   - [tick mark emoji] implemented CSS and JS rendering from scratch - **no dependencies**.

I'm actually impressed by their ignorance. I could never sleep at night knowing my product is built on such brazen lies.

Bullshitting and fleecing investors is a skill that needs to be nurtured and perfected over the years.

I wonder how long this can go on.

Who is the dumb money here? Are VCs fleecing "stupid" pension funds until they go under?

Or is it a symptom of a larger grifting economy in the US where even the president sells vaporware, and people are just emulating him, trying to get a piece of the cake?


I'm reminded of the viral tweet along the lines of "Claude just one-shotted a 10k LOC web app from scratch, 10+ independent modules and full test coverage. None of it works, but it was beautiful nonetheless."

Yeah, it's

- Servo's HTML parser

- Servo's CSS parser

- QuickJS for JS

- selectors for CSS selector matching

- resvg for SVG rendering

- egui, wgpu, and tiny-skia for rendering

- tungstenite for WebSocket support

And all of that adds up to 3M lines!


Also selectors and taffy.

It's also using weirdly old versions of some dependencies (e.g. wgpu 0.17 from June 2023, when the latest is 28, released in December 2025).


That is because I've noticed the AI just edits the version management files (package.json, Cargo.toml, etc.) directly instead of using the build tool (npm add, cargo add), so it always hallucinates a random old version found in its training set. I explicitly have to tell the AI to use the build tool whenever I use AI.

I was LITERALLY thinking the other day about a niche tool to help engineers discover and fix this, because at the rate I've seen models version-lock dependencies, I think it's going to be a big problem in the future.
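Roughly something like this, for an npm project - just a sketch, not an existing tool: the endpoint is the public npm registry, the version comparison is deliberately naive, and real tools (npm outdated, Dependi, etc.) already do this properly.

    // check-latest.ts - flag package.json deps that lag behind the npm registry.
    // Assumes Node 18+ (global fetch) and an ESM context (top-level await).
    import { readFileSync } from "node:fs";

    type Manifest = {
      dependencies?: Record<string, string>;
      devDependencies?: Record<string, string>;
    };

    const manifest: Manifest = JSON.parse(readFileSync("package.json", "utf8"));
    const deps = { ...manifest.dependencies, ...manifest.devDependencies };

    for (const [name, requested] of Object.entries(deps)) {
      // The public registry serves the latest published version at /<name>/latest.
      const res = await fetch(`https://registry.npmjs.org/${name}/latest`);
      if (!res.ok) continue;
      const { version: latest } = (await res.json()) as { version: string };
      // Naive string check; a real tool would parse the semver range properly.
      if (!requested.includes(latest)) {
        console.log(`${name}: manifest pins ${requested}, registry latest is ${latest}`);
      }
    }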

You can do prompt injection through versions. The LLM would go back to GitHub in its endless attempt to people please, but dependency managers would ignore it for being invalid.

Bigger companies have vulnerability and version management toolsets like Snyk, Cycode, etc. to help keep things up to date at scale across lots of repos.

No need to build a tool for it; engineers can avoid the whole issue by simply avoiding slop-spewing code generation tools. Hell, just never allow an LLM to modify the dependency configuration - if you want to use a library, choose and import it yourself. Like an engineer.

Proposal to not tarnish the good name of actual engineers: slopgineers.

Maybe LLemgineers? Slopgrammers?


Just use Dependi or similar VSCode extensions; they'll tell you if dependencies are outdated.

It’s interesting that they don’t even know this

I assume lock and dependency files are in the training data, so the version-number tokens seen there have high probabilities associated with them.

Is it using Servo's layout code or did Cursor write its own layout? That's one of the hardest parts.

It seemingly did, but after I saw it define VerticalAlign twice in different files [1][2][3], I concluded that it's probably not coherent enough to be worth checking for correctness.

Would be interesting if someone who has managed to run it tried it on some genuinely complicated text layout edge cases (like an RTL line break that splits a ligature and forces re-shaping; add some right-padding in there to spice things up).

[1] https://github.com/wilsonzlin/fastrender/blob/main/src/layou...

[2] https://github.com/wilsonzlin/fastrender/blob/main/src/layou...

[3] Neither being the right place for defining a struct that should go into computed style imo.


It's using layout code from my library (Taffy) for Flexbox and CSS Grid. Servo uses Taffy for CSS Grid, and another open source engine that I work on (Blitz) uses it for Flexbox, CSS Grid, Block and float layout.

The older block/inline layout modes seem to be custom code that looks similar to, but not exactly the same as, Servo's code. But I haven't compared them closely.

I would note that the AI does not seem to have matched either Servo or Blitz in terms of layout: both can lay out Google.com better than the posted screenshot.


Cursor didn't write anything; they used ChatGPT 5.2.

> is just calling out to code written by humans

Well, at least it's not outright ripping them off like it usually does.


> The JS engine used a custom JS VM being developed in vendor/ecma-rs as part of the browser, which is a copy of my personal JS parser project vendored to make it easier to commit to.

https://news.ycombinator.com/item?id=46650998


It looks like there are two JS backends: quickjs and vm-js (vendor/ecma-rs/vm-js), based on a brief skim of the code. There is some logic to select between the two. I have no idea if either or both of them work.

Honestly, as soon as I saw a browser in Rust I assumed it had just reproduced the Servo source code in part, or utilised its libraries.

I thought they'd plagiarise, not import. Importing Servo's code would make it obvious because it's so easy to look at their dependencies file. And yet ... they did. I really think they thought no one would check?

> And yet ... they did. I really think they thought no one would check?

I doubt even they checked, given they say they just let the agents run autonomously.


Hypothetically: what if they did check, only in order to ‘check’ they asked the LLM instead of verifying manually, and were told a story? Or perhaps they did check manually, but sometime afterwards the files were subtly changed, with no incentive or reason to do so beyond a passing test? …

Humans who are bad, and also bad at coding, have predictable, comprehensible failure modes. They don’t spontaneously sabotage their career and your project because Lord Markov twitched one of its many tails. They also lie for comprehensible reasons, with attempts at logical manipulations of fact. They don’t spontaneously lie about not having a nose, apologize for lying and promise to never do it again, then swear they have no nose in the next breath while maintaining eye contact.

Semi-autonomous to autonomous is a doozy of a step.


You know, a good test would be to tell it to write a browser using a custom programming language, or at least some language in which no web browsers have been written.

Write a browser without any access to the internet is what I'd attempt if I were running this experiment. Just seed it with a bunch of local HTML, CSS and JS files from the various testing suites that exist.

I think that's too restrictive; agents should be allowed to reference the internet like we do.

You would want to download all the W3C and WHATWG specifications first.

Fortran 90 should fit the bill nicely.

Very sad to see Paul Graham boosting this slop on X.

To be fair, even if "from scratch" means "download and build Chromium", that's still nontrivial to accomplish. And with how complicated a modern browser is, you can get into Ship of Theseus philosophy pretty fast.

I wouldn't particularly care what code the agents copied; the bigger indictment is that the code doesn't work.

So really, they failed to meet the bar of "download and build Chromium", and there's no point in talking about the code at all.


The whole point of Servo is that it's not impossible to use, unlike Chromium.

None of us have access to Cloudflare's internal data. But a reasonable guess is that enough of their current and future paying customers use Astro? I'm one of those - Astro hosted on Cloudflare.

The top comment doesn't even make sense. SQLite actually had to get funding to continue operating! They weren't immune to worries like paying rent or buying groceries.

It's just an ad that people are upvoting uncritically.


SQLite made and makes a lot of money from a lot of the people who use it. It's free for us to use, but it wasn't free for Motorola and AOL and Nokia (and later Google, Apple and Adobe), who contracted the team to build it out, add features, and fix bugs. This wasn't FOSS funded by a few people's free time. It was a commercial business that made money by finding product-market fit - the best embedded database in the world. Their scale then allowed them to find more bugs, fix them, and become more reliable than anything else.

The whole story (https://corecursive.com/066-sqlite-with-richard-hipp/) is fascinating, but here are a couple of interesting excerpts:

> I scrambled around and came up with some pricing strategy. [Motorola] wanted some enhancements to it so it could go in their phones, and I gave them a quote and at the time, I thought this was a quote for all the money in the world. It was just huge. ($80k)

> [Nokia] flew me over and said, “Hey, yeah, this is great. We want this but we need some enhancements.” I [Richard Hipp] said, “Great,” and we cut a contract to do some development work for them.

> We were going around boasting to everybody naively that SQLite didn’t have any bugs in it, or no serious bugs, but Android definitely proved us wrong. Look, I used to think that I could write software with no bugs in it. It’s amazing how many bugs will crop up when your software suddenly gets shipped on millions of devices.

If you can find paying customers that can fund your development, then it's fantastic. It's even better if those contracts give you scale that none of your competitors have. You don't need VC money if that's the case. But let's not pretend that Astro were in that situation. No one was paying for a web framework.


> If you can find paying customers that can fund your development, then it's fantastic. It's even better if those contracts give you scale that none of your competitors have. You don't need VC money if that's the case. But let's not pretend that Astro were in that situation. No one was paying for a web framework.

Didn't this just happen right now, that Astro got acquired by Cloudflare? I'm sure Cloudflare has got tons of money right now, so Astro got an offer too good to refuse. But worst case scenario they could've still partnered up with Cloudflare, Netlify, Vercel, etc., or with companies who deploy Astro (even Google deploys Astro pages).

Plus, Astro has a very strong focus on being performant/fast (getting a 100 Lighthouse score), so they could've definitely focused on consultancy as well - offering the people who literally know Astro inside out to take a look and help you get that score.

That being said, the question is: could they have survived long enough to reach sustainability without VC money, or could they have been sustainable from the start? And if so, what path could they have taken so that they didn't need VC money (i.e. been day-1 profitable)?


> they could've still partnered up with Cloudflare, Netlify, Vercel, etc.

What makes you think they aren't? https://docs.astro.build/en/getting-started/ says on the bottom left: Sponsored by Cloudflare, Netlify, Webflow, MUX.

> consultancy to get a 100 Lighthouse score

Problem is, it was possible to get there with minimal effort. The default config of Astro was 100. I know absolutely nothing about web dev and my personal website was all 100s.

And in any case, consultancy doesn't scale. Interestingly Tailwind has that kind of model - free software, pay for beautifully crafted components. And their business isn't doing well.

We don't know what would have happened in an alternate universe. But it's hard out there building businesses on FOSS. Can't blame anyone for trying - VC or otherwise.


I’ve used Astro on Cloudflare for a few years for my personal website (username.com). They’ve both been absolutely fantastic, I can’t say enough good things about both of them. My website has all 100s on PageSpeed/Lighthouse, and that’s because of the performance focus of both Astro and Cloudflare. No credit to me at all. It was mainly because Astro prioritised shipping 0 JS unless it was absolutely necessary and Cloudflare is exceedingly good at serving static HTML.

But I also see the difficulty that Astro faced here. Despite being happy with the framework, I never paid for it. The paid offerings didn’t strike a chord with me. And it was partly because whatever they offered, Cloudflare already offered on a very generous free tier.

I'm glad the team have got a second life within Cloudflare. I'm happy for the people who've given me such excellent software for free for years. Thanks folks!


Out of curiosity, how do you become ‘exceedingly good’ at serving static HTML?

By all accounts, they’ve centralised the delivery of this static HTML at several layers of the network stack, and you’re not getting static HTML anymore because some other part of the business fucked it up.

The World Wide Web was serving static HTML for decades before Cloudflare came along. Open an FTP client, drag and drop, and boom - new HTML is served.


Likewise! I built my personal blog with Astro and host on Cloudflare (username.dev), and feel guilty about taking advantage of such excellent software and free tier. Here’s hoping they find a way to take my money soon.

I appreciate your honest testimonial. It's so rare these days to read a sentence like, "No credit to me at all" haha

Amazing performance.

https://nindalf.com. I've written a few posts that have made it to HN [1].

I strongly oppose writing with LLMs and think it's more important than ever to write in our own words. If my writing is to be better than an LLM's, I need to hone it by writing more.

I'm proud of the website as well. I have used LLMs to assist with the UI dev. It has all 100s from PageSpeed. I've made it easy to add pages within it: all the books I've read in the last few years [2] and a minimalist gym tracker I use myself [3] (and anyone else can too!).

[1] - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[2] - https://nindalf.com/books

[3] - https://nindalf.com/gym


One of my major goals for 2026 was zero LLM use for writing; however, I've found it a bit hard at times because LLMs are exceptional for research. Oftentimes, when reading an explanation or report that ChatGPT gives me about a topic, I find small turns of phrase or even whole sentences that capture a concept way better than I can. I then feel obstinate for not using a clearly superior option, so I'm curious if you've run into that tension and, if so, how you navigate it.

Honestly, I don't have a good answer. I guess I know the stuff I write about well enough to get a good first draft out, preserving my voice. But if I read a first draft by an AI, I think I would be influenced.

I just saw GPT 5.2 do something absurd. It had a crazy amount of money ($26k) but folded a 4-pair before the flop. That's insanely conservative, when it would have cost just $20 to see the flop. But even worse, on the very next hand it decided to put $20 down with a 5 and 4 of different suits.

In fact, all of them love folding before the flop. Most of the hands I'm seeing go like: $10 (small blind), $20 (big blind), fold, $70 bet, everyone folds. The site says "won $100", but that pot includes the winner's own $70 bet - in most of these cases the LLM is really just picking up the blinds, $30. Chump change.

This is illuminating, but not a resource for learning poker.


Modern poker (and tbf I'm not sure whether these LLMs are acting according to modern GTO or not) is highly dependent on position. Things change a lot too when/if you are in the SB/BB.

Yes, the prompt tries to get them to play GTO. I do think their preflop play comes closest to mirroring this, compared to their postflop behavior.

Is this tuned to tournament or cash GTO? As to the OP's shock about pocket 4s (I think this is what they meant by "4-pair"), folding 4s preflop in early position to no raise would be fairly standard in tournament GTO (although the stage of the tournament and the number of BBs can change things up significantly), but less standard for sure in cash (almost never, probably).

SQLite's test suite is closed source, so no one other than the SQLite authors can attempt that. That said, you may be interested in this attempt by Turso to rewrite SQLite in Rust (https://turso.tech/blog/we-will-rewrite-sqlite-and-we-are-go...). They're not using AI, but they are using some novel ways to test their code.

I miss Terry Pratchett. Just a good guy, writing joyful books. None of that "gritty realism" here. There are only about 40 books by him, so I read 2 a year. By the time I get to 40, I figure I'll have forgotten the first few and can start again.

My blog has had this header since the day he died.


The Night Watch seems pretty gritty to me. And Small Gods. And Vimes' escape from the werewolves in The Fifth Elephant.

Night Watch is my favourite book of his, as it turns out. He is capable of exploring serious themes while still maintaining some whimsy. That's why I love him so much.

This is such a coincidence. I had the same idea a few days ago and also vibe-coded a library using Claude: https://nindalf.com/books. The original version of this was meant to encourage me to read more, and I'm pleased to say it succeeded - I hit my goal for the year after a couple of lean years. I also like looking at my highlights and notes, and this UI makes it easier to read them.

My experience with Claude was mostly very good. Certainly the UI is far better than what I'd come up with myself. The backend is close to what I'd write myself. When I'm unhappy I'm able to explain the shortcomings and it's able to mostly fix itself. This sort of small-scale, self-contained project was made possible thanks to Claude.

Other times it just couldn't. For the start and end date validation it decided on z.string().or(z.date()).optional().transform((val) => (val ? new Date(val) : undefined)). It looked way too complex. I asked if it could be simplified; Claude said no. I suggested z.date().optional(); Claude patiently explained this was impossible. I tried it anyway, and it worked. Claude said "you're absolutely right!". But this behaviour was the exception rather than the rule.
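For anyone curious, here's a minimal sketch of the two schemas side by side - just an illustration, assuming zod and that the dates arrive as real Date objects (as they do with YAML frontmatter); the field name is made up:

    import { z } from "zod";

    // What Claude generated: accept a string or a Date, then coerce to a Date.
    const verbose = z.object({
      started: z
        .string()
        .or(z.date())
        .optional()
        .transform((val) => (val ? new Date(val) : undefined)),
    });

    // The simplification: if the value is already a Date,
    // plain z.date().optional() is enough.
    const simple = z.object({
      started: z.date().optional(),
    });

    const entry = { started: new Date("2024-01-15") };
    console.log(verbose.parse(entry).started); // Date object
    console.log(simple.parse(entry).started);  // same Date object

Both accept the same input; the simpler one just refuses plain strings, which never showed up in practice.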


This is neat. How do you fill this in? I assume the highlights and notes come from a Kindle or similar reader.

I also have a bookshelf (fully manual, in another comment), and I’m looking for better ways to improve retention. I do highlights, but I rarely take notes. Also, with audiobooks I have yet to find a good way to do this outside of keeping a text note for each book I’m reading/listening to.


Yes, that's right. I mainly read on the Kindle, and I make a lot of notes and highlights while reading. These notes are synced to Amazon's cloud, which I can access by scraping read.amazon.com. I read paperbacks too, but I just add those to the library manually. No highlights from those.

I think highlighting is why I strongly prefer ebooks nowadays.


I've been iterating on something very similar as well :D Started in September and give it 30-60 mins here and there. I ended up with rows instead of a horizontal scroll. There definitely have been a handful of times Claude made terrible decisions and described them as brilliant, but with some very heavy guidance and worktrees it's still quicker (or at least feels quicker) than if I wrote it out myself.

Cool to check out your version as well, thanks for sharing.


Do you have the code for your book library? I wanted to do something similar to help me remember the books I read in a year too.


Which part? I have a Python codebase (https://github.com/nindalf/kindle-highlight-sync) that scrapes read.amazon.com for my book highlights. It then exports the data into markdown files that are imported by my website.

