Your impressions are not correct. Jujutsu's creation and its Git support both predate Google's direct involvement or its anointment as the eventual successor to Fig (Google's internal fork of Mercurial). Martin just happened to work there at the time he started jj, and only later did it become his full-time job to work on it. Though the occasional Googler who isn't Martin will contribute a patch or two when they want to fix something, Google otherwise isn't really involved.
Large file handling has improved in recent versions, FWIW; files over the size limit are left untracked rather than auto-tracked, and you have to add them selectively at that point. Note that if you accidentally add some blobs, you can unbloat your local copy by pruning the operation log and then running jj's GC; if you've already pushed the blobs somewhere you obviously can't undo that so easily, but that's no different from Git.
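Roughly, the cleanup looks like this (going from memory, so double-check the exact subcommands against your jj version):

    jj op log                  # find the operation(s) that snapshotted the big blobs
    jj op abandon <operation>  # drop them from the operation history
    jj util gc                 # garbage-collect the now-unreachable objects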
Git's underlying storage format just isn't a very good fit for any kind of "large-ish file" storage; Git LFS is mostly just a hack, and it's unlikely to be supported anytime soon. Our hands are a bit tied on that front.
My impression is that most of the interest and momentum for solving the "large files problem" would be better invested in a native non-Git backend for Jujutsu.
It supports standard UEFI boot, Ubuntu has a WIP image you can use that carries some extra patches, and with some fiddling you can get other distros going, assuming you use a really recent kernel with a supported device tree; you may also need some firmware updates and a kernel patch or two (search for 'x1e' in the Linux tree to see if your machine is in there). Wifi, USB-C, etc. worked last I checked. Not sure about BT or general sound support or anything like that.
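For example, from a recent kernel checkout, something along these lines will show which X Elite machines already have a device tree (they live under Qualcomm's dts directory):

    ls arch/arm64/boot/dts/qcom/ | grep -i x1e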
Overall though, things "work", but as far as day-to-day usability goes it seems to remain in a pretty poor state, especially given the lack of accelerated graphics combined with no NUC/SFF options. I guess WSL2 is probably okay. But it's a shame, because the hardware does seem pretty nice.
When some NUC options come out you'll at least be able to use it in a headless form factor, I guess.
The real reason to use those languages is that they often lend themselves to using the meta-language for other purposes, like writing flexible, open-ended test suites, or using some form of code generation/metaprogramming to generate parts of the design from other things. That's very useful and one of the attractive properties of systems like Amaranth or Clash, and one of the downsides (IMO) of approaches like Filament or Bluespec. That said, the most important bits of Filament and Bluespec are their high-level concepts (like guarded actions and timeline types), which could be adopted by other HDLs as well.
At the end of the day though, sometimes you just have to debug a netlist, and that will probably remain true of Filament too. (Any language with higher-order applicative concepts will eventually run into some issues with wire names, etc.; that's just unavoidable.) I think the SystemVerilog or whatever is just a red herring at that point; the tools for doing netlist debugging all feel like the equivalent of having to debug a compiler's assembly output with no debug symbols. Making it so you need to reach for the debugger much less often is a good first step, but I'm not sure how to improve this part.
Yeah I agree that debugging generated SV is like debugging assembly and you rarely have to do that when writing normal programs. I think the difference is tooling. I can fire up an IDE debugger and have full access to all the relevant information and controls without seeing any assembly.
I don't know of any compile-to-SV tools that have a debugger anywhere near as capable as that. They definitely should! But they don't right now, so we're stuck at debugging RTL.
For the most part, yes, it should be very achievable. Embedded Swift basically just produces an object file that looks like any object file from a C compiler. The objects mostly rely on very basic primitives like malloc/memcpy so it's pretty freestanding (you can turn off allocations, too). It also has very good support for importing C headers into Swift code so you can interop easily.
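As a rough sketch of what the C-interop side can look like (the CFreeRTOS module and the gpio call below are illustrative placeholders, not a real API; the @_cdecl export and the C-header import mechanism are the actual Swift features being shown):

    // CFreeRTOS stands for a hypothetical Clang module exposing FreeRTOS.h
    // (plus a C GPIO helper) to Swift via a modulemap.
    import CFreeRTOS

    // Exported with a C ABI so xTaskCreate() on the C side can use it as a task entry point.
    @_cdecl("blink_task")
    func blinkTask(_ arg: UnsafeMutableRawPointer?) {
        while true {
            gpio_toggle_led()   // plain C function imported from the header
            vTaskDelay(500)     // FreeRTOS call, also imported from C
        }
    }

Once the headers are visible through a module map, the C functions read like ordinary Swift calls.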
Probably the biggest road bump for something like FreeRTOS is the asynchronous support, though. Embedded Swift's async support is still extremely rudimentary, and I didn't find much about how to extend it or attach it to other control loops. I think it only supports single-threaded execution right now as well.
Hashes of the tarballs are recorded in the package-lock.json of downstream dependents, so recompressing the files in place would cause the hashes to change and break everyone. It has to be done at upload time.
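For context, a lockfile entry looks roughly like this (the package name, version, and hash are placeholders); the integrity value covers the compressed .tgz exactly as served by the registry, so recompressing it changes the hash even if the contents inside are identical:

    "node_modules/some-dep": {
      "version": "1.2.3",
      "resolved": "https://registry.npmjs.org/some-dep/-/some-dep-1.2.3.tgz",
      "integrity": "sha512-…"
    }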
The hashes of the uncompressed tarballs would be great. Then the HTTP connection can negotiate a compression format for transfer (which can change over time as HTTP itself changes) rather than baking it into the NPM package standard (which is incredibly inflexible).
They're overall more prevalent in the FPGA world, I think. I've used them and done several jobs with them (Clash/Haskell, Bluespec, etc.) and know others who have, too. But you basically need to know someone or do it yourself. Pretty marginal overall, but IME the results have basically been good (and more fun to write, too).
The official specs for the 5090 have been out for days on nvidia.com, and they explicitly state it's 32GB of GDDR7 with a 512-bit bus, for a total of 1.8TB/s of bandwidth.
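For what it's worth, the bandwidth figure is just the memory spec multiplied out (assuming the published 28 Gbps GDDR7 data rate): 512 bits × 28 Gbps ÷ 8 = 1,792 GB/s, i.e. roughly 1.8 TB/s.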
This feels like a weird complaint, given you started by saying it was 24GB, and then argued that the person who told you it was actually 32GB was making that up.
All else equal, this means that price per GB of VRAM stayed the same. But in reality, other things improved too (like the bandwidth) which I appreciate.
I just think that for home AI use, 32GB isn't that helpful. In my experience, and especially for agents, models only start to be useful at around 32B parameters; below that, they're useful only for simple tasks.
Yes, home / hobbyist LLM users are not overly excited about this, but
a) they are never going to be happy,
b) it's actually a significant step up given that the bulk of them are dual-card users anyway, so this bumps them from (at the high end of the consumer segment) 48GB to 64GB of VRAM, which _is_ pretty significant given the prevalence of larger models / quants in that space, and
c) vendors really don't care terribly much about the home / hobbyist LLM market, no matter how much people in that market wish otherwise.
Well, it looks like Quickwit was going to add an Enterprise license as of earlier this year (PR #5529), which I had been keeping an eye on, but this announcement says they're instead going to relicense as Apache 2.0 so the "community can continue on":
> We will be focused on building a new product with Datadog, and to ensure our open-source community can continue, we will soon release a major update of both Quickwit with a relicense to Apache License 2.0 and tantivy.
So, it looks like we'll get a more liberally licensed Quickwit, but reading between the lines suggests development of it might otherwise be winding down? It has been pretty nice and stable in my experience, so I can't really complain much. But I was really looking forward to what else it could bring.
"So, it looks like we'll get a more liberally licensed Quickwit, but reading between the lines suggests development of it is might otherwise be winding down?"
They will stop putting full-time, day-to-day effort into it themselves, probably because they've been reassigned to building a similar service that's closed and integrated into DD, but it seems they want to open-source the current product under an OSI-compliant license in the hopes that the community picks up the tab.
I think that's a nice trade. Could have been much worse.
By the way, also note that DD is not a total stranger in the OSS space. They actually open-sourced their observability pipeline tooling for general use as Vector, which is a rock-solid product. - https://vector.dev/
Yes, I've been using Vector since very early on, long before Datadog acquired it, and Datadog have continued ongoing maintenance and feature additions at a slow-but-steady pace, which I think is good. Like Quickwit, Vector is very stable and already quite complete. So I'm not too unhappy.
But Vector is something that complements Datadog's offering very well, so it makes sense for them to be good stewards of it. Quickwit is something that somewhat actively competes with them, which is a big difference. I suspect that, unlike Vector, Quickwit is probably going to stop seeing any development in pretty short order, unless the devs consciously go out of their way to dedicate extra hours to it.
To be clear, I think that the relicense is great, and I think it's very possible that Quickwit will be picked up/forked by someone and maintenance will continue, because it's very good, and I'd really love to see someone do metrics for it as well. So, I'm not all gloomy or anything like that.
The Orin series and later use UEFI, and you can apparently run upstream, non-GPU-enabled kernels on them; there's a user guide page documenting it. So I think it's gotten a lot better, but it's sort of moot, because the "non-GPU" part is due to the JetPack Linux fork carrying a specific 'nvgpu' driver for Tegra devices that has never been upstreamed out of that tree. So you can buy better alternatives, unless you're explicitly doing the robotics + AI inference edge stuff.
But the impression I get from this device is that it's closer in spirit to the Grace Hopper/datacenter designs than to the Tegra designs, given the naming, the design (DGX style), and the software (DGX OS?), which is what ships on their workstation/server machines. Those are also UEFI, and in those scenarios you can (I believe?) use the upstream Linux kernel with the open-source NVIDIA driver on whatever distro you like. In that case, this would be a much more "familiar" machine with a much more ordinary Linux experience. But who knows; maybe GH200/GB200 need custom patches, too.
Time will tell, but if this is a good GPU paired with a good ARM Cortex design, and it works more like a traditional Linux box than the Jetson series, it may be a great local AI inference machine.
AGX also has UEFI firmware, which allows you to install ESXi. Then you can install any generic EFI arm64 ISO in a VM with no problems, including Windows.