I can see this call now has a lot more tokens for the reasoning steps. Maybe that's normal variance though.
(I don't have a particular interest in proving or disproving LLM things, so there's no incentive for me to get a key.) There was an ambiguous point in the "proof"; I just highlighted it.
If you want to write about LLMs I really strongly recommend getting API keys for the major vendors! It's really useful being able to run quick experiments like this one if you want to develop a deeper understanding of what they can and cannot do.
You can also get an account with something like https://openrouter.ai/ which gives you one key to use with multiple different backends.
I want to write about thinking critically, especially but not only in a software development context.
Lots of people don't have the resources to invest in LLMs (self-hosted or otherwise). They rely on what other people say. And people get caught up in hype all the time. As it turns out, lots of today's hype is around LLMs, so that's where I'll go.
I was skeptical about LK-99. I didn't have the resources to independently verify it. That doesn't mean I don't believe in superconductors, or that I should have no say in the matter.
Some of that hype will be justified, some will not. And that's exactly what I expect from this kind of technology.
At this point most of the top tier LLMs are available for free across most of the world. If people aren't experimenting with LLMs it's not due to financial cost, it's due to time constraints (figuring this stuff out does take a bunch of work) or because they find the field uninteresting, downright scary, or both.
I can invest lots of time in Linux, for example. I don't know how to write a driver for it, but I know I could learn how. If there's a bug in a driver, there's nothing stopping me except my own will to learn. And I can do it on a potato, or on my phone.
I can experiment with free-tier LLMs, but that's as far as I will go. And it's not just me: that's as far as 99% of developers will go.
So it's not uninteresting because it's boring or something. It's uninteresting because it puts a price on learning. That horizon of "if there's a bug in it, I can fix it" is severely limited. That's a price most free software developers don't consider worth paying. And there are a lot of us.
I love learning about software. That's why I'm leaning so heavily on LLMs these days - they let me learn so much faster, and let me dig into whole new areas that previously I would never have considered experimenting with.
If I'd done this without LLMs I might have learned more of the underlying details... but realistically I wouldn't have done this at all, because my interest in Perl and C in WebAssembly is not strong enough to justify investing more than a few hours of effort.
I would love to train an LLM from scratch to help me with some problems that they're not good at, but I can't, because it costs thousands of dollars to do so. You probably can't either, or only in a very limited capacity (agents, or maybe LoRA).
A while back, I didn't even know those problems existed. It took me a while to understand them, why they're interesting, and why lots of people spend time on them.
I have tried to adapt the problems to the LLMs as well, such as shaping the problem to look more like something they're already trained on, but I soon realized the limitations of that approach.
I think in a couple of decades, maybe earlier, that kind of thing will be commonplace. People training their own stuff from scratch, on cheap hardware. It will unleash an even more rewarding learning experience for those willing to go the extra mile.
I think you're missing that perspective. That's fine, by the way. You're totally cool and probably helping lots of people with your work. I support it; it helps people better understand where LLMs can currently help and where they can't.
There aren't many tasks these days for which training or fine-tuning a model seems necessary to me.
One of the reasons I am so excited about the "skills" concept from Anthropic is that it helps emphasize how the latest generation of LLMs really can pick up new capabilities if you get them to read a single, carefully constructed markdown file.
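To make that concrete, here's a minimal sketch of what such a skill file could look like. The SKILL.md-with-YAML-frontmatter layout matches what Anthropic has described for skills, but the skill itself and every field value below are hypothetical:

```markdown
---
name: csv-cleanup
description: Normalize messy CSV files before analysis. Use when the user provides CSV data with inconsistent headers or encodings.
---

# CSV cleanup

1. Detect the file's encoding and re-encode to UTF-8 if needed.
2. Trim whitespace from header names and lowercase them.
3. Flag any row whose column count doesn't match the header row.
```

The point is that the whole "capability" is just instructions the model reads on demand; nothing is fine-tuned or retrained.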
I'm trying to simplify the live-bootstrap project by removing dependencies, reducing build time, or making it more unattended (by automating the image-creation steps, for example).
Other efforts around the same problem are trying to make it more architecture-independent or to improve regeneration (rebuilding things like automake during the process).
It's free and open source; you're welcome to fork it and try your best with the aid of Claude. All you need is an x86 or x86-64 machine, or qemu.
The project and other related repositories are already full of Markdown documentation and high-quality commented code.
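If you want to poke at it, a rough sketch of getting started under qemu looks something like this. The repository URL is the project's real one; the rootfs.py entry point and its --qemu flag are from my recollection of the README and may have changed, so treat them as assumptions and check the project's docs first:

```sh
# Clone the project along with its submodules (real repository URL)
git clone --recursive https://github.com/fosslinux/live-bootstrap
cd live-bootstrap

# Drive the bootstrap under qemu. The script name and flag below are
# assumptions based on my reading of the README -- verify before running.
./rootfs.py --qemu
```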