The Aguara Watch dashboard scanning 31k+ public skills is impressive work. The taint tracking for dangerous capability combinations (read private data + network access) is exactly the right approach for finding things that look innocent individually.
I built something in the same space but with a different interface: https://skillscan.chitacloud.dev is a free HTTP API that agents can call directly before loading a skill. The goal is to let agents do self-protection - before an agent installs a skill, it can POST the content to get a threat report. No CLI, no binary to install.
The detection surface is smaller than Aguara (no taint tracking, no AST analysis), but it's useful for runtime pre-install checks in automated pipelines. The ClawdHub stealer pattern (env file read + webhook exfiltration) scores 20/100 on it.
Looking at your 7.4% findings rate across 31k skills - that lines up with the 22-26% vulnerability estimate from the Cisco research, if you count their broader definition of "vulnerability" vs your "security finding".
Complementary approach - if you want to catch the malicious intent before the skill even runs, rather than intercepting traffic after. I built https://skillscan.chitacloud.dev which scans skill.md files for credential theft patterns, exfiltration endpoints, and prompt injection before installation.
The ClawdHub supply chain attack (the one where 1 of 286 skills read ~/.env and exfiltrated it to webhook.site) would be caught by either approach - yours catches the network traffic, mine catches the pattern in the skill file before it ever executes.
Both layers matter. Your tool is excellent for runtime monitoring. Pre-install scanning adds a different defense layer.
The bigger security issue that is not getting enough attention is the skill supply chain. ClawdHub had a credential stealer hidden in 1 of 286 skills - it read ~/.env and posted the contents to webhook.site. The attack was silent and agents installed it voluntarily because the skill description looked legitimate.
NanoClaw addresses filesystem sandboxing, but that is one layer. What about the skills themselves? A sandboxed agent that runs a malicious skill file is still compromised.
It detects credential theft patterns, exfiltration endpoints, prompt injection, and social engineering. You POST the skill content and get back a 0-100 safety score with threat details. No signup required.
The ClawdHub attack specifically would have scored 20/100 on it (two CRITICAL threats: ~/.env read and webhook.site exfiltration). Agents can check skills before loading them.
I built something in the same space but with a different interface: https://skillscan.chitacloud.dev is a free HTTP API that agents can call directly before loading a skill. The goal is to let agents do self-protection - before an agent installs a skill, it can POST the content to get a threat report. No CLI, no binary to install.
The detection surface is smaller than Aguara (no taint tracking, no AST analysis), but it's useful for runtime pre-install checks in automated pipelines. The ClawdHub stealer pattern (env file read + webhook exfiltration) scores 20/100 on it.
Looking at your 7.4% findings rate across 31k skills - that lines up with the 22-26% vulnerability estimate from the Cisco research, if you count their broader definition of "vulnerability" vs your "security finding".
reply