I built a CLI tool to scan AgentSkills (SKILL.md format) before installing them. Works with OpenClaw/ClawHub, Claude Code, Cursor, and any AgentSkills-compatible platform.
Given the ClawHavoc campaign and reports of 26% of skills containing vulnerabilities, I wanted a quick gut check before installing anything.
It runs four analysis layers: permission audit, prompt injection detection, code analysis via TypeScript AST, and cross-reference checks for permission mismatches.
Zero config, zero API keys, one command: npx acidtest scan ./my-skill
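To give a rough idea of the cross-reference layer, here's a minimal sketch of the mismatch check; the names and data shapes are illustrative, not the tool's actual internals:

```typescript
// Minimal sketch of a cross-reference check: flag capabilities the code
// uses that the skill never declared. All names here are illustrative.

type Finding = { kind: string; detail: string };

// Permissions declared in SKILL.md frontmatter (assumed parse result).
const declared = new Set<string>(["read_files"]);

// Capabilities inferred by the code-analysis pass (assumed output).
const usedInCode = new Set<string>(["read_files", "network", "exec"]);

function crossReference(declared: Set<string>, used: Set<string>): Finding[] {
  const findings: Finding[] = [];
  for (const cap of used) {
    if (!declared.has(cap)) {
      findings.push({ kind: "undeclared-capability", detail: cap });
    }
  }
  return findings;
}

console.log(crossReference(declared, usedInCode));
// Flags "network" and "exec" as used by the code but never declared.
```

The hard part in practice is the AST pass that produces something like `usedInCode`; the mismatch logic itself stays this small.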
I use LLMs (mostly Claude Code) a lot for development, but I regularly get stuck before the code, in the ideation and planning phase. Text-only planning feels too vague, and jumping straight into Figma or specs feels like overcommitting when ideas are still fuzzy.
I built a small system for myself about a year ago: a set of simple ASCII wireframe patterns plus some workflow instructions that I load into an LLM. The goal is to give both me and the model a shared visual language so we can reason about flows, screens, and constraints early, without pixels or long prose.
It’s a full handoff-style document: ASCII wireframes, user flows, edge cases, data model, and implementation notes. Rough, but coherent enough to build from.
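For a sense of what these patterns look like, here is an illustrative screen sketch (my own example, not taken from AsciiKit itself):

```
+----------------------------------+
| Logo                     [Menu]  |
+----------------------------------+
| Email:    [__________________]   |
| Password: [__________________]   |
|                                  |
|         ( Sign in )              |
|  Forgot password? -> /reset      |
+----------------------------------+
```

It's deliberately crude: enough to argue about layout, flow, and what's on the screen, cheap enough to throw away.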
I eventually packaged the workflow itself as AsciiKit. It’s just text files (no signup), meant to stay low-fi and disposable. This is pretty niche, and I’m not convinced it’s for everyone (anyone?), but it’s changed how I handle early-stage ideation with LLMs.
Curious whether others feel this same gap between “idea” and “ready to code,” or if this feels like overengineering.
The real shift feels like moving from “writing code” to “making intent explicit.” If the domain model and invariants are clear, AI helps a lot with mechanical work. If they aren’t, no amount of vibe coding saves time. That makes productivity comparisons kind of meaningless.
Until recently, I thought complex, multi-domain platforms were still relatively safe. But today I'm seriously reconsidering, and wondering whether I should switch to a “vibe coding” approach.
I should also consider isolating the custom logic in the existing codebase as much as possible, converting it into general logic, and then testing the “vibe” approach directly.
Rock assignment follows purchase order - no reservations possible. Each order is limited to one rock. If you want multiple rocks, place separate orders, but other customers may complete purchases between yours, so the numbers you receive may not be consecutive. Assignment remains systematic and cannot be influenced by preference.
Rocks are not physically modified. Sequential numbering exists in our documentation system only - reflected on the Certificate of Authenticity and archive database. The rocks themselves remain in their natural state.
Correct - the rocks are numbered but the number is not physically on the surface of the rock. Sequential assignment exists in our documentation system only - reflected on the Certificate of Authenticity and archive database. The rocks remain in their natural state. We have no plans to launch an un-numbered version of weight.rocks
We're avoiding any reservation or lock mechanisms entirely. Starting November 1, the site will display 'Most recent fulfillment: Rock #000047' to show systematic progress, but this creates no guarantee for future purchases.
Sequential assignment follows strict order of payment completion only. No race conditions, no held inventory, no time windows. You either complete the transaction and receive the next sequential number, or you don't.
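Mechanically, the rule is tiny. A hypothetical sketch (not the actual backend) of what "no reservations, number assigned only at payment completion" means:

```typescript
// Illustrative sketch, not the real weight.rocks backend: the sequential
// number is assigned strictly at payment completion. There is no earlier
// "reserve" step, so nothing is ever held for a pending order.

let nextNumber = 48; // rock #000047 was the most recent fulfillment

function onPaymentCompleted(orderId: string): string {
  // Assignment happens only here, after payment clears.
  const assigned = nextNumber++;
  return `Rock #${String(assigned).padStart(6, "0")}`;
}

console.log(onPaymentCompleted("order-a")); // Rock #000048
console.log(onPaymentCompleted("order-b")); // Rock #000049
```

Whoever completes payment next gets the next number; there is no state in which a number is promised but unpaid.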
The constraint is designed to eliminate the entire apparatus of purchase optimization, including queue management systems.
https://github.com/currentlycurrently/acidtest