NobodyWho is making developer tools for running small language models in local-first applications.
Our core principle is to ship the model weights along with the application, and then do efficient inference locally and offline, on any device. We run fast on Linux, macOS, Windows, Android and iOS.
The main product is an inference library that wraps llama.cpp, written in Rust. We provide bindings for Python, Godot (the game engine), and will be releasing a Flutter plugin soon. It's all licensed under EUPL 1.2. Repo here: https://github.com/nobodywho-ooo/nobodywho/
We're hiring people who are comfortable building highly cross-platform FFI applications in Rust (with C++ dependencies), and people who are deeply familiar with language models and the open standards around them, as well as fine-tuning and evaluating models. We're also looking for a technical DevRel profile.
If any of that sounds relevant to you, feel free to email me: a>at<nobodywho.ooo
I'm working on a plugin[1] that runs local LLMs from the Godot game engine. The optimal model sizes seem to be 2B-7B ish, since those will run fast enough on most computers. We recommend that people try it out with Gemma 2 2B (but it will work with any model that works with llama.cpp)
At those sizes, it's great for generating non-repetitive flavortext for NPCs. No more "I took an arrow to the knee".
Models at around the 2B size aren't really capable enough to act as a competent adversary - but they are great for something like bargaining with a shopkeeper, or some other role where natural language can let players do a bit more immersive roleplay.
What makes you say that these are all Stenberg's creations?
Could it be that these are just projects that use libcurl in some way?
I'm having trouble finding any sources that say that Daniel Stenberg actually worked on Spotify, uTorrent or OpenTTD directly - just to pick three of them.
The MNT Reform[1] is a pretty sexy open hardware laptop with a mechanical keyboard, optional trackball, and a small ARM processor.
A cursory glance at Geekbench shows that you can get around 50% of the performance of a 2020 MacBook Air M1, if you get the upgraded CPU option.[2][3] I have no idea how useful those benchmarks are, though.
I also learned Nim in 2018 with AoC. I found one of the best things was coming up with my own solution, then looking at other people's solutions (the Nim community usually does Advent of Nim where a bunch of people post their solution repos on the nim forum) for more idiomatic approaches or stdlib functions I didn't know about.
I highly recommend it for exploring new programming styles in general. Like some others here, I use it as an excuse to learn a new language each year.
While I think python is a really great language for advent of code, I'm not sure I'd recommend going for an OOP-heavy style. Although that might just be a matter of personal taste - I think OOP is a poor strategy for most problems.
Yup. Very much a chance to learn a new (for me) language. Looking to do Golang this year - brushing up on the basics of the language as we roll into a long turkey weekend.
I'm something of an elixir zealot, but I have to say that Clojure has the most pleasurable syntax for this kind of thing out of any programming language.
I actually have played around with it! I think Clojure feels a lot more polished, but it's also a lot more rigid, which I could definitely see someone disliking. LFE almost feels a little more like Common Lisp, although that's more to do with Erlang and the BEAM than any specific syntax choice. Plus, of course, you get to work with all that OTP goodness. If I had to choose between LFE and Clojure as the only language I ever programmed in again, it would be a really hard tossup. Clojure might have a relatively larger community, but I don't want to give up the BEAM!
The reason LFE didn't do it for me was that Elixir is relatively painless syntax-wise, and has really nice high-level libraries like Phoenix and Ecto that simplify a lot of work. If I don't need the BEAM, then I'd rather have a more strictly functional language like Clojure, or something more low-level like Common Lisp. LFE doesn't occupy a niche in my personal ecosystem. That being said, I'd definitely give it another try sometime, and I'd recommend it to anyone else who wants to feel the power of the BEAM.
Clojure is the most practically expressive language I've ever had the pleasure of working with. For some reason the example that immediately comes to mind is how you can use a collection as an accessor for itself, so ({:a 2 :b 3} :a) returns 2. Just a neat bit of syntactic sugar, but now if 'blacklist' is a set, you can just do (remove blacklist guests) to remove everything in blacklist from guests. There's a hundred little things like that which combine to make it a really fun and concise language to work with.
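To make that concrete, here's a small sketch of both tricks (the names 'scores', 'blacklist' and 'guests' are just illustrative):

```clojure
;; Maps are functions of their keys:
(def scores {:a 2 :b 3})
(scores :a)                              ;; => 2
({:a 2 :b 3} :a)                         ;; => 2

;; Sets are functions of their elements, so a set doubles as a predicate.
;; (remove pred coll) drops every item for which pred returns truthy;
;; a set returns the element itself (truthy) when it contains it, nil otherwise.
(def blacklist #{"mallory" "trudy"})
(remove blacklist ["alice" "mallory" "bob"])
;; => ("alice" "bob")
```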
Babashka is fast-starting but slow-running (compared to JVM Clojure). AOC puzzles typically contain a lot of loops so you will likely run into the slow-running-ness. But it shouldn't be much trouble to move to Clojure when you hit this.
Clojure is fun to try out. I tried it last year and had a lot of fun combining and finding the right function from its huge expressive vocabulary. It inspires you to treat code as data.
I've had a couple of Clojure books I've meant to work through over the past year. Where I'd get stuck is in the development workflow more so than in the language features or syntax. With Clojure (and Lisp-like languages in general), my usual edit, save, run loop doesn't seem like the most effective way to work (especially since the JVM needs to start each time if you run Clojure cold).
I like OOP approaches a fair bit, but only for relatively large, long-lived systems, especially when working with a team. E.g., the Domain-Driven Design approach can provide a ton of shared clarity.
For something like Advent of Code, though, I'd avoid it. It's just me and it's not for long; there's no need for me to be explicit about how I'm thinking about the problem.