This is such an interesting thing to read about!! Thank you for posting it! I have epilepsy following hemorrhaging in the parietal area, affecting my motor movement, planning, and sensation. Much like the patient described, specific things in cognitive 'space' will cause me focal sensations. Full tonic-clonic seizures are quite rare for me, but those smaller focal moments still occur. The feeling is one of my left side moving away from me: my arm no longer feels mine, I cannot move it as easily, and the limb fades to a blur in my head, moving in slow motion out of sync with my good side. These auras/focals/feelings occur most intensely when I'm overextending my brain cognitively, usually because I'm having a high-bandwidth conversation or am trying to solve a complex programming problem. When these onsets occur I know I need to stop and step away. Floppy arm = time to stop.
Thanks for posting this - the high-bandwidth tasks bit rings very true for me too. Certain patterns of thought, situations, or locations will bring it on, and that's when I know I need to go do something else.
Have you tried to do those high-bandwidth activities under the effects of nootropics or similar substances, e.g. modafinil or Adderall? If so, do they make things worse? Better? Make no difference?
I haven't tried, and tbh I'd be very scared of messing with the complete cocktail of meds (~7) I'm on as it is. Risky! I'd rather maintain a knowable stability even if it's non-optimal.
Keep that up. I'm sure you've been told, but balances like that often only occur once; if you ever drop off the cocktail, it might not work again when you try to recover.
Amphetamines are not nootropic, but even leaving that aside you shouldn't be suggesting someone with what is clearly a brain injury take fun new drugs.
Niiice! I really like it. The spatial approach is cool, though labelling/annotations/axes would help.
I share the frustration with getting book covers for my project ablf.io. Amazon used to make this much easier, but they've locked it down recently, so you have to jump through affiliate hoops. I ended up implementing my own thing and storing thousands of images myself on S3. If you have the goodreads IDs, feel free to use:
assets.abooklike.foo/covers/{goodreads id}.jpg
N.B. The actual goodreads website itself makes it hard as well, since they have an additional UUID in their img URIs, so it's not deterministic; that's why I created this.
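In case it saves anyone a minute, here's a rough sketch of pulling a cover down (Python with `requests`; the https scheme, the placeholder ID, and the assumption that a missing cover returns a non-200 status are all just for illustration):

```python
# Rough sketch: fetch a cover by goodreads ID from the endpoint above.
# "12345" is a placeholder ID; the non-200-on-missing-cover behaviour
# is an assumption, so check it before relying on this.
import requests

def fetch_cover(goodreads_id: str) -> bytes | None:
    url = f"https://assets.abooklike.foo/covers/{goodreads_id}.jpg"
    resp = requests.get(url, timeout=10)
    return resp.content if resp.status_code == 200 else None

cover = fetch_cover("12345")
if cover:
    with open("cover.jpg", "wb") as f:
        f.write(cover)
```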
Nice site! I like that I can filter results by fiction or non-fiction. Interesting to enter my favourite novels and see the non-fiction that's recommended. Some surprisingly good picks!
I mean, I'd like at least a brief blurb about their entire premise of safety. Maybe a definition, or an indication of a public consultation, or... something... Otherwise the insinuation is that these three dudes are gonna sit around defining it on instinct, as if it's not a ludicrously hard human problem.
Technically it's acting on behalf of a proactive user in Chrome, so IMHO it's non-"robotic". But heh, tbf this was also the excuse of Perplexity, who argued they are a legitimate non-robotic user-agent (and thus don't need to respect robots.txt) because they only make requests at the time of a user query. We need a new way of understanding what it even means to be a legitimate human user-agent. The presence of AIs as client-side catalysts will only grow.
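For context, the mechanics being argued over are tiny: robots.txt is purely advisory, and honouring it is a choice the client makes. A sketch of the check an agent would run (Python stdlib; the user-agent string is made up):

```python
# What "respecting robots.txt" amounts to mechanically.
# "ExampleAgent/1.0" is a made-up user-agent string for illustration.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

if rp.can_fetch("ExampleAgent/1.0", "https://example.com/some/page"):
    print("allowed: go ahead and fetch")
else:
    print("disallowed: a 'legitimate' agent would stop here")
```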
I really love this kind of thing. I made one called 'redoku' a long long time ago. It generates random challenges. Try it here: https://padolsey.github.io/redoku/
I've found Claude to be way too congratulatory and apologetic. I think they've observed this too and have tried to counter it by placing instructions like that in the system prompt. I think Anthropic are running other experiments as well, around "lobotomizing" out the pathways of sycophancy. I can't remember where I saw that, but it's pretty cool. In the end, the system prompts become pretty moot, as the precise behaviours and ethics will become more embedded in the models themselves.
> No matter how fast Searle is, he won't be able to come up with a beautiful and original Chinese poem that has the creative spark special to humans
Why not?
> Of course, at some level of complexity, it will be stuck in a local maximum of work quality simply because the book has no guide on how to solve the problem at hand.
I find this a pretty pessimistic view, especially from someone building a coding autopilot. Having used LLMs for a bunch of software development myself over the last year, it seems their 'local maximum' is no different from a developer's _if_ you split the process up appropriately. The author alludes to this when they mention 'workflow'.
Everyone is trying to use LLMs in a 'single inference pass', assuming that's as good as it gets, but that's like trying to find human creativity in a single cascading activation of neurons. A brain doesn't fit on an axon. So I kinda think the author should be less shy about their optimism. Inference is soon ~free, as they say, so to me, naive as I might be, the future of AI coding agents is not limited to grunt tasks; it is as creative and exploratory as any human coder.
PS: Fume looks cool. I'd suggest people take a look at aider.chat and claude-engineer too (on GitHub).
Unsure if this is a useful answer, but: Searle/the LLM could make something that looks like it has a creative spark, and that's it.
Why I think that's different: in the case of a human artist, they create something because they have something they want to say. Whatever they produce is a way of saying 'this is what the world feels like to me - is it the same for you?'. And if it is, it resonates.
But I cannot see how an LLM would 'want' to say anything. If we're talking psychoanalytically about where wanting comes from, and call it a desire to fill the void of how incoherent you actually are, then an LLM doesn't go through that process.
Maybe Searle does, and still wants the characters to make you feel a certain way, in which case the comparison doesn't fit.
> If we're talking psychoanalytically about where wanting comes from, and call it a desire to fill the void of how incoherent you actually are, then an LLM doesn't go through that process.
Ironically, many people complain LLMs are too incoherent, with all their confabulations and hallucinations.
But I agree. Desire is a good verb. I think that's what differentiates us from the 'machines'. In art, we try to create meaning. From our lives. From our discontents. Even a million LLMs cannot be in deficit of meaning; they are precisely tuned to their own capacity. Whereas something strange about humans is our endless desire for 'more'.
I'm not convinced we do "want" to say anything, though. The combination of physical inputs (which mostly translate to hormones, I imagine?) and data inputs seems to drive my behavior to such a degree that I question whether I could really do anything else at any given moment.
The whole free will debate seems a bit out of scope (and out of my reach, hah), but nonetheless it feels interesting in the LLM context.
Edit: Note that I don't necessarily think LLMs are there, or even can be. We seem too technologically small to produce the complexity in ourselves. Nonetheless I'm always interested in how far reduced complexity can take us.
> Why not?
The 'original' part is more important than the 'beautiful' part - which should have been clearer in my writing. This argument also triggers the question "is true originality even possible?", but I think the difference for LLMs at the moment is their inability to build non-obvious analogies. I've yet to be inspired by something written by an AI, and I don't think simply overfitting a model on all human-generated data is enough for that. As I also mentioned in the blog, I would be happy to be proven wrong in future.
> _if_ you split the process up appropriately
I believe this prerequisite is very important. LLMs are still terrible at planning and splitting a complex task into simpler steps. This might be a natural limitation of `next token prediction`: for complex planning, each step should be the result of both the previous steps and speculative future steps. We try to tackle this by dividing a plan into two - a macro and a micro plan - but there's still a lot to improve there.
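Very roughly, the shape of it looks something like this (an illustrative sketch, not our actual code; `complete()` stands in for whatever LLM call you use):

```python
# Illustrative sketch of macro/micro planning - not Fume's actual implementation.

def complete(prompt: str) -> str:
    # Stub - swap in a real LLM client (OpenAI, Anthropic, etc.).
    raise NotImplementedError

def make_macro_plan(task: str) -> list[str]:
    # One pass produces only the high-level steps.
    text = complete(f"Break this task into 3-7 high-level steps, one per line:\n{task}")
    return [line.strip() for line in text.splitlines() if line.strip()]

def make_micro_plan(task: str, step: str, done: list[str]) -> str:
    # A separate pass expands a single step, conditioned on what's already
    # been planned, so later steps can react to earlier ones instead of
    # everything being fixed in one upfront inference.
    context = "\n".join(done) or "(nothing yet)"
    return complete(
        f"Task: {task}\nPlanned so far:\n{context}\n"
        f"Write a concrete, detailed plan for this step only: {step}"
    )

def plan(task: str) -> list[str]:
    micro_plans: list[str] = []
    for step in make_macro_plan(task):
        micro_plans.append(make_micro_plan(task, step, micro_plans))
    return micro_plans
```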
An LLM, certainly by itself, can't be "as creative and exploratory as any human coder", because it's limited by an inability to reason other than by training-data mashup, has no curiosity, no ability to learn from its exploratory mistakes and successes (were it to make them), etc, etc.
It seems we've reached the point that understanding of LLMs would be a great candidate for the beginner/intermediate/expert meme. "It's just autocomplete" -> "It's got a world model, it's thinking for itself" -> "It's just autocomplete".
Looks cool. To help the pitch, I think you should show the equivalent price on Anyscale/Together for the 10B tokens. Also, is there a reason you're not selling in more granular amounts? Why $400? Just speaking personally, I reckon I'd rather invest in a 3090 so I have more security/privacy. I can't think of many genuinely _useful_ 10B+ uses that won't include a bunch of private data. But nonetheless, I reckon there's a niche here, so nice work.
It’s unlikely an individual would need this much capacity. Folks who need tokens at this level are apps with lots of users that don’t have their own GPUs. Think character.ai type apps.
I don't much mind the popularization of these terms, or making people think in more novel ways. Even if it's seen as BS by those more informed, I'm glad such books are written and that communicators like Taleb exist. Without him, I wouldn't have discovered a bunch of tangential things. I will admit it gives my brain a satisfying itch too, as I realise that academia is often just the refined encoding of pretty mundane everyday truths; when someone is able to come in, re-extract that, and share it widely, even with a bit of gentle re-branding, I think it's still net-positive.
As you point out, "making people think in more novel ways" (or making them think at all, given the amount of parroting in the education system) is the USP of Taleb's books. People need to stop focusing on the attributes of the author (arrogance etc.) and instead learn to focus on the content of his writings.