...until you learn to trust the system and free up mental capacity for more useful thinking. At some point compilers became better at writing assembly than humans; it seems inevitable the same will happen here. Caring about the details and knowing the details are two different things.
I've had the LLM "lie" to me about the code it wrote many times. But "lie" and "hallucinate" are incorrect anthropomorphisms commonly used to describe LLM output. The more appropriate term would be garbage.
Just a basic sanity check: did the LLM have the tools to check its output for lies, hallucinations and garbage? Could it compile, run tests, linters etc and still managed to produce something that doesn't work?
I've frankly given up on LLMs for most programming tasks. It takes just as much time (if not more) to coddle it to produce anything useful, with the addition of frustration, and I could have just written far better code myself in the time it takes to get the LLM to produce anything useful. I already have 40 years experience programming, so I don't really need a tin-can to do it for me. YMMV.
Compilers are deterministic tools. AI is not deterministic; it will tell you this itself if you ask. AI, then, is not a tool. It is an aide. It is not a tool like a compiler, IDE, editor, etc.
Interesting. If you probe it for its assumptions you get more clarity. I think this is much like those tricky "who is buried in Grant's Tomb?" phrasings that are not good-faith interactions.
This article highlights how experts disagree on the meaning of (non-human) intelligence, but it dismisses the core problem a bit too quickly, imo:
“LLMs only predict what a human would say, rather than predicting the actual consequences of an action or engaging with the real world. This is the core deficiency: intelligence requires not just mimicking patterns, but acting, observing real outcomes, and adjusting behavior based on those outcomes — a cycle Sutton sees as central to reinforcement learning.” [1]
An LLM itself is a form of crystallized intelligence, but it does not learn and adapt without a human driver, and that to me is a key component of intelligent behavior.
Not a lawyer, but my understanding is civilian casualties are not unlawful (according to international law) when the target is legitimate (on the theory that it is otherwise impossible to legally fight a war with an enemy that hides behind its citizens). To be clear this is not to say war crimes are not also happening.
I noticed you interface with the native code via ctypes. I think cffi is generally preferred (eg, https://cffi.readthedocs.io/en/stable/overview.html#api-mode...). Although you'd have more flexibility if you built your own Python extension module (eg using pybind), which would free you from a simple/strict ABI. Curious if this strict separation of C & Python was a deliberate design choice.
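For context, the ctypes route looks roughly like this — a minimal sketch calling into libc rather than your project's library (which I don't have handy), just to show the ABI-level style I mean:

```python
import ctypes
import ctypes.util

# Locate and load the C standard library; the exact filename differs
# per platform, so ctypes.util.find_library hedges that detail.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature so ctypes marshals arguments correctly.
# Without this, ctypes guesses, which is a common source of subtle bugs.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-5))  # prints 5
```

The appeal is zero build step; the downside is that the signatures live only in Python, unchecked against the C headers — which is exactly what cffi's API mode or a compiled extension module gets you away from.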
Yes, when I designed the API I wanted to keep a clear distinction between Python and C. At some point I had two APIs, one in Python and the other in high-level C++, and they both shared the same low-level C API. I find this design quite clean and easy to work with when multiple languages are involved. When I get to performance work I plan to experiment a bit with nanobind (https://github.com/wjakob/nanobind) and see if there's a noticeable difference wrt ctypes.
Thanks for pointing this out! I'll definitely have to investigate other approaches. nanobind looks interesting, but I don't need to expose complex C++ objects; I just need the 'fastest' way of calling into a C API. I guess the go-to for this is CFFI?
It's the same thing — both nanobind and cffi compile the binding. The fact that nanobind lets you expose C++ doesn't prevent you from exposing only C. And IMHO nanobind is better because you don't need to learn another language to use it (i.e. you don't need to learn cffi's DSL).
Nowadays, with more and more economic output coming from knowledge work (IP), this depreciation and amortization approach feels hopelessly out of date. I don't know what a good replacement is, but I know software and IP more generally shouldn't be treated at all like a material good.
Model Context Protocol. It is a way to give an LLM access to an API. There's a lot of hype about it right now, and, thus, a great many half-baked articles floating around. https://www.anthropic.com/news/model-context-protocol
TBH, I don’t really understand why this is a problem. An explanation is not an experience; it cannot provide the human brain with the same information. Suppose you neurologically induced an experience of seeing that new color without “really” seeing it — surely this would be sufficient to communicate qualia? (And if not, surely it’s just a matter of adjusting the inducement to some degree.)
Isn't "an explanation is not an experience" basically the problem? Like, if you could perfectly describe all the physical conditions to induce the experience of a color, there would still be something missing from that description which you can't get without consciousness in the loop. You can't communicate or describe it without the actual experience part.
Most (all?) of our "science" doesn't require any sort of notion of consciousness to work, we can describe the motion of a projectile or an orbit in a way that doesn't depend on having an "experiencer." But there's this weird category of stuff for which that isn't true. (At least, for now).
Doesn't that just point to the fact that our ability to describe is limited and lossy? In the color example, we're trying to convey information about the effects of one of the senses without using that sense. It could very well be that without using that particular sense, the brain just isn't stimulated the same way.
Would there still be something missing if you could “perfectly describe all the physical conditions to induce the experience of a color”? I don’t see any reason to assume that’s self-evidently true, and it’s not something that we have the ability to test. Obviously if you start from the assumption that qualia exists then you will conclude that it must exist.
This sort of philosophical question becomes more important when you think of second-order effects and beyond. In this case, color is just one manifestation of experience; the same applies to, for example, the beautiful smells that a rose throws out into the world. To a degree, the redness has an effect on the world, one that is separate for each person.
My theory is that these things are just questions of resolution at varying latencies.