I'm curious to see the follow-up post from Yegge, because this post is worthless without one. Great, Claude Code seems to be churning out bug fixes. Let's see if it actually passes tests, deploys, and works as expected in production for a few days, if not weeks, before we celebrate.
When a toddler can pull the trigger and kill someone, you may argue guns are pretty good at killing. Key point being, people don't have to be good at guns to be good at killing with guns. Pulling the trigger is accessible to anyone.
How often does that actually happen? It only happens when a gun owner was irresponsible, leaving a loaded gun in a place accessible to the toddler.
Similarly, AI can easily sound smart when directed to do so. It typically doesn't actually take action unless authorized by a person. We're entering a time where people may soon be willing to give that permission on a more permanent basis, which I would argue is still the fault of the person making that decision.
Whether you choose to have AI identify illegal immigrants, or you simply decide all immigrants are illegal, the decision is made by you, the human, not by a machine.
Not the OP, but my best guess is it’s an alignment problem, just like a gun killing someone the owner didn’t intend to. So the power of AI to make decisions that are out of alignment with society’s needs is the “something, something.” As in the above healthcare examples, it can be efficient at denying healthcare claims. The lack of good validation can obscure alignment with bad incentives.
I guess it depends on what you see as the purpose of AI. If the purpose is to be smart, it’s not doing very well. (Yet?) If the purpose is to deflect responsibility, it’s working great.
People have a performative mode and an authentic mode (oversimplifying), probably including you. If you're at home talking to your parents or spouse, and then suddenly realize your boss is in the next room listening, does your voice change?
Point being, this demo voice is in performative mode, and I think sounds fairly natural based on that. Would you rather it not?
That doesn't make sense. Requiring exponentially more resources is not surprising. Resources getting exponentially cheaper? Like, falling by more than half each cycle? That doesn't happen.
That would be the case if this were actually AI. But LLMs actually do procedurally generate language in a fairly human way, so using human languages makes sense to me.
The problem with this is that we can't get rid of the baggage of higher and higher levels of libraries and languages that exist to accommodate humans.
I agree that it makes sense to use these currently, but IMHO the ultimate form of programming will be free of human-readable code; instead, the AI will create a representation of the algorithm we need and execute around it.
Like having an idea: for example, if you need to program a robot vacuum cleaner, you should be able to describe how you want it to behave, and the AI will create an algorithm (an idea, like "let's turn when we bump into a wall, then try again") and constantly tweak and tend it. We wouldn't be directly looking at the code the AI wrote; instead, we could test it and find edge cases that a machine maybe wouldn't predict (e.g. the cat sits on the robot, blocking the sensors).
What makes the AI context window immune to the same issues that plague us humans? I think they will still benefit from high level languages and libraries. AIs that can use them will be able to manage larger and more complex systems than the ones that only use low level languages.
At first, err on the lower side with counts like 4dd. Over time, you'll start viewing things in chunks naturally, which probably only helps your code-scanning ability, which is probably half the benefit of learning Vim. You get really good at looking at code as building blocks.
And undo is an easy one... Just type 'u'!
Like any skill or tool, it often seems not worth it before you know how to do it. But every time you learn a new skill (like you just learned undo!), it makes you more efficient for the rest of your life! How amazing is that?
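For anyone following along, here's a small, illustrative cheat sheet of the normal-mode commands being talked about in this thread (not exhaustive; :help has the full story):

    u       undo the last change
    Ctrl-r  redo
    dd      delete the current line (4dd deletes four lines)
    .       repeat the last change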
Why is it that now of all times, when we could actually make it useful, Clippy has not returned to ask, "It looks like you're trying to X, would you like help with that?"
I'm not sure if I agree with that. Given enough context, tools should be able to get a sense of what we're trying to do. Even suggested next steps / things to ask, which we're already seeing in tools like Cove, ChatGPT, etc., go a long way in guiding users. Guiding people to what they likely want to do is great product design when accurate and could be minimally painful if it's easy to ignore the suggestions.
I agree that tools should, but what I've seen of AI so far has not impressed me at all. It's constantly getting things wrong by either making something up or selectively ignoring parts of what I said to it. Worse, one of the biggest reasons people hated Clippy was that it was obtrusive. Much to Microsoft's surprise, people didn't actually need any help writing a letter, but that didn't stop Clippy from interrupting you while you were trying to work. Writers in particular hate distractions. They'll occasionally go to some fairly extreme lengths to get a distraction free environment (sometimes even sticking to pencils or typewriters).
Considering how hard AI is being rammed down our throats everywhere already, I have zero confidence that AI Clippy would be anything but obtrusive and obnoxious. At best, it'd be a feature that people turn off as soon as they see it's been forced on them.
Maybe one day we'll actually see AGI and instead of Clippy we'll get a system wide, entirely local, fully open, privacy protecting virtual assistant that's worth a damn. I can't say it wouldn't be nice if it worked like science fiction. My bet is that we're far more likely to get stuck with a bunch of annoying spyware programs, Bonzi Buddy style, using an LLM to fake intelligence, push ads/manipulate users, and deflect accountability.
He has started by going after the groups that are investigating his companies. USAID investigated overpaying for Starlink in Ukraine. The FDA doesn't like Neuralink. The FAA investigates every time he blows up a rocket and showers debris into the airspace. The IRS has audited his tax filings before, and he expressed frustration about it. He hasn't done anything the FTC cares about yet, though.