How is it bad? Imagine a world where, instead of sending hundreds of thousands of young men to die, countries would just launch targeted attacks on the enemy's head of state.
It's the stated reason why the United States has an impeachment process: so that there's a way to remove undesirable heads of state without resorting to assassination.
Sorry I phrased that poorly. I meant countries have no means to impeach a foreign head of state so impeachment only serves as a power check if the citizenry disapproves.
Not that I think heads of state fearing for their lives from airstrikes is necessarily good, but being able to act with impunity is certainly bad.
Of course it's performative. They're all presumably on Apple devices. They literally went into their settings to disable auto-capitalization to make some kind of ridiculous point, i.e. "I'm too important to think about capital letters".
So you approve the secret's use once. How can you be sure it wasn't sent someplace else or persisted somehow for future sessions?
Say you gave it access to Gmail for the sole purpose of emailing your mom. Are you sure the email it sent didn’t contain a hidden pixel from totally-harmless-site.com/your-token-here.gif?
I don't have one yet, but I would just give it access to function calling for things like communication.
Then I can surveil and route the messages at my own discretion.
If I gave it access to email my mom (I did this with an assistant I built after the ChatGPT launch, actually), I would be giving it access to a function I wrote that results in an email.
The function can handle the data any way it pleases, for instance by stripping HTML.
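For what it's worth, the wrapper can be tiny. A minimal sketch of what I mean, where `send_raw_email` is a made-up stand-in for whatever transport you actually use:

```python
from html.parser import HTMLParser

class _TextOnly(HTMLParser):
    """Keeps text content and drops every tag (and with them any tracking pixels)."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(body: str) -> str:
    parser = _TextOnly()
    parser.feed(body)
    return "".join(parser.parts)

def send_raw_email(to: str, subject: str, body: str) -> None:
    # Stand-in stub for illustration; in practice this would be smtplib or an email API.
    print(f"To: {to}\nSubject: {subject}\n\n{body}")

def send_email(to: str, subject: str, body: str) -> None:
    # The model only ever calls this function, never the transport directly,
    # so I get to sanitize (and log) everything it sends.
    send_raw_email(to, subject, strip_html(body))
```

The point is just that the choke point is mine, not the model's.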
The access to the secret, the long-term persisting/reasoning and the posting should all be done by separate subagents, and all exchange of data among them should be monitored. But this is easy in principle, since the data is just a plain-text context.
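To sketch what "monitored plain-text exchange" could mean here (the names and the secret value below are all invented for illustration):

```python
SECRET = "sk-made-up-example"  # the credential only the secret-holding subagent should see

def monitor(message: str) -> str:
    """Every inter-agent message passes through here; redact raw secret leakage."""
    return message.replace(SECRET, "[REDACTED]")

def route(sender: str, receiver: str, message: str, inboxes: dict) -> None:
    # Subagents never talk directly; everything crosses this monitored channel.
    inboxes.setdefault(receiver, []).append((sender, monitor(message)))
```

Since the channel is just plain text, the monitor can be arbitrarily strict.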
Easy in principle is doing a lot of work here. Splitting things into subagents sounds good in theory, but if a malicious prompt flows through your plain-text context stream, nothing fundamental has changed. If the outward-facing agent gets injected and passes along a reasonable-looking instruction to the agent holding secrets, you haven't improved security at all.
I had it coding autonomously for about an hour (including lots of tool wait time) on a difficult task, and it actually produced good results.
What’s most surprising is that I had it follow a strict loop/workflow and it did that perfectly. Normally these things go off the rails after a while with complex workflows. It’s something I usually have to enforce with some orchestration script and multiple agents, but this time it was just one session meticulously following orders.
Impressive, and saves a lot of time on building the orchestration glue.
Whenever I see instances like this I can’t help but think a human is just trolling (I think that’s the case for like 90% of “interesting” posts on Moltbook).
Are we simply supposed to accept this as fact because some random account said so?
I don’t see it as the author being lazy; quite the opposite, I see it as performative and tryhard. Either way it’s annoying and doesn’t make me want to read it.
After looking into it, as I suspected, the author seems to make his living by selling people the feeling that they’re on the cutting edge of the AI world. Whether or not the feeling is true I don’t know, but with this in mind the performance makes sense.