You don't really need it anymore - CitC let you do views (mapping just part of the monorepo into your filesystem via FUSE) since about 2013, and then that functionality just got built into Piper. When I returned in 2020 you'd have a file at the top of your source tree that included all the relevant file mappings as well as any Blaze flags needed to build the project, and you could just point your IDE at that and it'd map in just what you need.
The history of Google's relationship to version control is even more interesting than editors. It went from CVS in 1998 to Perforce (P4) in 2000, then gcheckout and g4 in ~2006; OverlayFS was invented in 2008, git5 came out in 2009, CitC obsoleted OverlayFS in ~2012, and Piper built all of this into the VCS in ~2013-2014. While I was gone from 2014-2020 we apparently got hg and jujutsu frontends, and when I got back in 2020 you'd just check out a .blazeproject from your IDE and everything would magically work. Many of these started as 20% projects (I used to have lunch with the guy who invented OverlayFS; interesting character and one of the best programmers I knew) and then got folded into the "official" way of doing things once grassroots adoption showed the execs that this was how people really wanted to work.
Haven't tried. But there are "IDEs" (Unity) that are really hostile to the idea that your project directory is not a fast local disk that they can both fill with garbage and use fsnotify on everything.
When I joined in 2016, CitC would make it look (and still does) like you had the entire monorepo on your machine's local filesystem.
Git5 would copy some directories, but builds would still fall back to files from the monorepo if you didn't track them. It was convenient for me since I could just grep and do fuzzy matching from my editor. Now I have to do some extra work to avoid grepping the entire monorepo. LLMs sometimes still try to grep the entire repo lol.
Now, you can use a Perforce, Mercurial, or jj interface and it works fine.
How's the performance? My Wasm-under-Wasm tests got slow when I transitioned to the wasm2go transpiler, but I wonder if that's an artifact of repeatedly slow compilation (rather than execution) of tests.
For my use case (small datasets), query execution feels instant. No perceptible lag between user action and result. Though I haven't done formal benchmarking.
The initial load time is the bottleneck, not query execution. Whether that's the transpiler or just the Go runtime overhead I can't say without proper benchmarking.
Unfortunately I'm not knowledgeable enough on the browser side of things to advance something like that, and have quite a bit on my plate already. But if there's interest I can help demystify the SQLite/VFS side of things. Feel free to (e.g.) open a GitHub discussion on my repo to track this.
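To give a flavor of what "the VFS side" means: SQLite routes all file I/O through a VFS layer, so any backend that can serve random-access reads can host a database. This is a toy sketch of that idea in Go (the interface names here are made up for illustration; the real SQLite VFS API has many more methods, for locking, syncing, etc.):

```go
package main

import (
	"fmt"
	"io"
)

// VFS is a hypothetical, stripped-down stand-in for SQLite's VFS
// concept: the engine asks for "bytes at offset N of file X" and the
// backend decides how to get them (disk, memory, HTTP ranges, ...).
type VFS interface {
	Open(name string) (File, error)
}

// File is the read-only subset a query-only database needs.
type File interface {
	io.ReaderAt
	Size() (int64, error)
}

// memVFS serves files from an in-memory map. An HTTP-backed VFS would
// implement the same interface using range requests instead.
type memVFS map[string][]byte

type memFile []byte

func (v memVFS) Open(name string) (File, error) {
	data, ok := v[name]
	if !ok {
		return nil, fmt.Errorf("no such file: %s", name)
	}
	return memFile(data), nil
}

func (f memFile) ReadAt(p []byte, off int64) (int, error) {
	if off >= int64(len(f)) {
		return 0, io.EOF
	}
	return copy(p, f[off:]), nil
}

func (f memFile) Size() (int64, error) { return int64(len(f)), nil }

func main() {
	// Every SQLite database file starts with this 16-byte header string.
	vfs := memVFS{"test.db": []byte("SQLite format 3\x00")}
	f, err := vfs.Open("test.db")
	if err != nil {
		panic(err)
	}
	buf := make([]byte, 15)
	if _, err := f.ReadAt(buf, 0); err != nil {
		panic(err)
	}
	fmt.Println(string(buf)) // prints "SQLite format 3"
}
```

The point of the indirection is that the engine never touches the filesystem directly, which is exactly what makes Wasm and browser deployments workable.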
A year ago I would've told my boss “can't be done” about the work I did today. I'd tell him to get me the right person to talk to (our partner, not an alien) who could give me some insight into what the hell I'm supposed to be doing to consume their API. Or to at least explain why it is that this can't be done.
Instead, I spent a couple of weeks reverse engineering their terrible ideas. Yeah, it worked. But it's a complete waste of my time, and tokens, energy, chips and RAM. And worst of all, it will lead to a terrible design.
That will work, but will eventually collapse under its own weight, as we use our increased power to increase our sloppiness and take it a little further. Because we can manage it. For now.
I don't disagree, but whoever has never put math they don't fully understand in their code gets to throw the first stone.
I've reached solutions by trial and error too, and tried to rationalize them later, quite a few times. And it's easier to rationalize a working solution, however adversarial you claim to be in your rationalization.
I don't see using gen AI for the (not so) “brute force” exploration of the solution space as all that different from trial and error and after-the-fact rationalization.
Since I started it a couple of months ago, I've used it to transpile SQLite to Go, and some other folks have used it to transpile other C, C++, Zig and even Perl libraries to Go.
Also, one thing the numbers they published show is that the bits that are growing 10x YoY (and which they expect to get “worse”) are all the things that you get “unlimited” mileage out of (even if you're a paying customer): repos, commits, PRs.
Things that have “usage based billing” (like Actions minutes) grow closer to 2x YoY.
When there's a dollar amount attached, people don't 10x, because it's not worth it. They splurge when it's cheap and unlimited.