I’m working on an R7RS-small Scheme called Sable. The focus is on having good Windows support, VS Code support, and LSP and Debug Adapter Protocol support. It is closer to SBCL in that it is image-based, and it builds with just the platform's native C compiler.
Yes, Wasm is basically a Rust thing at this point, unfortunately. And if you want to write a WASI 0.2 component, you need a bunch of Rust tooling to do it properly.
Only partially correct IMHO: the *WASM Component Model* is mostly a Rust thing (at least it's very apparent that it was designed by Rust people). Thankfully WASM itself is independent of that overengineered boondoggle.
Not at all. It's much more efficient to implement a GC on x86 or ARM than it is on Wasm 1.0/2.0, because you control the stack layout, and you don't have an impenetrable security boundary with the JS runtime that your GC needs to interop with.
Not to mention the issue that bundling a GC implementation as part of your web page can be prohibitive in terms of download size.
Tbh I never understood nor cared why people would want to use a garbage-collected language with WASM in browsers when there's already JavaScript. One of the main points of WASM was to avoid GC overhead altogether (hence the GC-free asm.js subset of JavaScript, which then became WASM).
WASM is not nearly as capable as either architecture.
But… they would certainly be much more useful architectures and devices if they catered more to actual needs rather than to performance under C/C++.
The existence of permissive licenses like BSD or MIT does not show that copyleft was unimportant. Those licenses allowed code to remain open, but they also allowed it to be absorbed into proprietary products.
The GPL’s significance was that it changed the default outcome. At a time when software was overwhelmingly proprietary, it created a mechanism that required improvements to remain available to users and developers downstream.
GCC was a massive deal; it's a big part of why compilers are free today, for example.
Any tips for working with Gemini through its chat interface? I’ve worked with ChatGPT and Claude and I’ve generally found them pleasant to work with, but every time I use Gemini the output is straight dookie.
Even though I don't like the privacy implications, make sure you use the option to save and use past chats for context. After a few months of back and forth (hundreds of 'chat' sessions), the responses are much higher quality. It sometimes does 'callbacks' to things discussed in past chats, which are typically awkward non-sequiturs, but it does improve it overall.
When I play with it in 'temporary chat' mode that ignores past chats and personal context directives, the responses are the typical slop littered with emojis, worthless lists, and platitudes/sycophancy. It's as jarring as turning off your adblocker and seeing the garish ad trash everywhere.
You must be joking. I turned that off after the first month of use. It’s unbearable. “Oh, since you are in {place I mentioned a week ago while planning a trip I ultimately didn't take}, the Home Assistant integration question changes completely.” Or ending every answer with “Since you are a Salesforce consultant, would you like to learn more about iron smelting?”
I told Gemini I'm a software engineer and now it explains absolutely everything in programming metaphors. I think it's way undertrained on personalization.
Gemini has always felt like someone who was book smart to me. It knows a lot of things. But if you ask it to do anything off-script, it completely falls apart.
I strongly suspect there's a major component of this type of experience being that people develop a way of talking to a particular LLM that's very efficient and works well for them with it, but is in many respects non-transferable to rival models. For instance, in my experience, OpenAI models are remarkably worse than Google models in basically any criterion I could imagine; however, I've spent most of my time using the Google ones and it's only during this time that the differences became apparent and, over time, much more pronounced. I would not be surprised at all to learn that people who chose to primarily use Anthropic or OpenAI models during that time had an exactly analogous experience that convinced them their model was the best.
I'd rather say it has a mind of its own; it does things its way. But I have not tested this model, so they might have improved its instruction following.
I too have felt these feelings (though I'm much younger than the author). I think as I've grown older I have to remind myself
1. I shouldn't be so tied to what other people think of me (craftsman, programmer, low level developer)
2. I shouldn't measure my satisfaction by comparing my work to others'. Quality still matters, especially in shared systems, but my responsibility is to the standards I choose to hold, not to whether others meet them. Plus there are still communities of people who care about this (Handmade Network, OpenBSD devs, languages like Odin) that I can be part of if I want to.
3. If my values are not being met either in my work or personal life I need to take ownership of that myself. The magic is still there, I just have to go looking for it
I’m always amazed and slightly envious of what programming languages with large developer bases can do. I mean if a language is Turing complete it can do anything, but JavaScript takes this to the extreme.
Mind you, I never said anything about quality or performance; obviously doing everything in JavaScript comes with its own issues. But if you told me someone got JavaScript running in the Linux kernel as a PoC, I wouldn’t even be surprised.