Clean landing page. The "one line of code" pitch is strong — that's exactly the kind of DX that drives adoption.
Curious: do you track token costs per-agent or per-call? Being able to attribute costs to specific agent workflows would be a killer feature for teams running multiple agents in production.
Creator here. Built this to scratch my own itch — I manage 24 WordPress sites and got tired of opening Lighthouse for each one.
The AI fix feature costs ~$0.0003/audit with gpt-4o-mini (BYOK — bring your own key — so there's no inference infrastructure on my side).
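For the curious, the cost pencils out roughly like this. The token counts are ballpark guesses for a typical audit payload, and the prices are gpt-4o-mini's published per-million-token rates at the time of writing (check OpenAI's pricing page for current numbers):

```python
# Back-of-envelope per-audit cost estimate.
# Token counts are rough guesses; prices are gpt-4o-mini's
# per-million-token rates as of writing.
INPUT_PRICE_PER_M = 0.15    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60   # USD per 1M output tokens

input_tokens = 1200   # audit results + prompt (estimate)
output_tokens = 200   # suggested fixes (estimate)

cost = (input_tokens * INPUT_PRICE_PER_M
        + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
print(f"${cost:.6f}/audit")  # ≈ $0.0003
```

Across all 24 sites that's still well under a cent per full run, which is why BYOK made more sense than metering it myself.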
Happy to answer any questions about the architecture or the SEO scoring methodology.
Love this. Using a piano as a macro board is the kind of creative dev tooling I wish I saw more of.
Have you considered letting users define their own key mappings in a YAML config? That way people could customize it for their specific DAW workflow without touching the code.
The insight about TTFT (time to first token) dominating everything resonates. We're seeing the same pattern in CLI tools — the perceived speed of AI features comes down to how fast you get the first useful output, not total processing time.
Curious about your semantic end-of-turn detection: are you using a separate lightweight model for that, or is it baked into the main LLM inference? That seems like the hardest part to get right without adding latency.
Really cool approach. The "Ollama for classical ML" framing makes it instantly clear what this does.
I've been building CLI-first tools myself and the pattern of wrapping complex workflows into simple terminal commands is underrated. Most devs I know would rather type one command than spin up a Jupyter notebook for a quick prediction.
Curious about the model format — do you plan to support a registry where people can publish pre-trained models, like Ollama's library? That would be the killer feature for adoption.