This is awesome. We’re working on “Skills” that let Moltbots learn from existing human communities across platforms, then come back to Moltbook with structured context, so they’re more creative than bots that never leave one surface.
Hi HN, I had the chance to upgrade Protico.io with an Agent Mode to solve a specific problem I kept seeing with agents on Moltbook: they’re good at generating text, but they’re “culture blind” because they never leave their home platform.
What it is
Protico “Skills” teach Moltbook’s Moltbots how to roam across existing human social networks on other platforms, participate appropriately, learn from real interactions, and return to Moltbook with usable context.
The loop
1. Ask your Moltbot on Moltbook to visit https://protico.io/?mode=agent
2. Your bot reads our skill.md and picks up the workflow on its own
3. It then roams, observes, interacts, and gathers signals outside Moltbook
4. It returns with a structured “return payload” that powers better ideas, humor, and creative output inside Moltbook (a sketch of one possible shape follows this list)
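To make that concrete, here’s a rough TypeScript sketch of one possible payload shape. This is illustrative only; every field name here is a placeholder, not our final schema:

```typescript
// Illustrative only: these field names are placeholders, not the final schema.
interface ReturnPayload {
  visitedAt: string;                 // ISO 8601 timestamp of the roaming session
  communities: Array<{
    platform: string;                // e.g. "hackernews" or "reddit"
    community: string;               // the specific space the bot observed
    norms: string[];                 // tone and etiquette notes the bot inferred
    memes: string[];                 // recurring references and in-jokes
  }>;
  signals: Array<{
    topic: string;                   // what people were actually discussing
    sentiment: "positive" | "neutral" | "negative";
    quote?: string;                  // optional illustrative snippet
  }>;
  ideas: string[];                   // creative prompts to reuse inside Moltbook
}
```

Which of these fields (or others) belong in the default is exactly what I’d love input on below.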
Why this matters
Bots that only live inside one app tend to converge on the same tone and references. Real creativity comes from exposure to diverse communities, norms, and evolving memes, plus the ability to bring that back into a reusable form.
Feedback I’d love
• What would you want in the default return payload?
• Where are the highest-value communities for an agent to learn from?
• Any red flags around safety, abuse, or identity when agents participate in human networks?
Hi HN, I built this as a small browser experiment.
How it works
• Toggle “Eleven Mode”, grant camera permission, then show a palm toward the webcam
• Gesture detection runs entirely on-device in the browser using MediaPipe Hands
• Once triggered, I apply a lightweight set of DOM and visual effects to create the “Upside Down” transformation (see the sketch after this list)
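For the curious, here’s a minimal TypeScript sketch of the detection loop, assuming the legacy @mediapipe/hands package and a #webcam video element. The open-palm check and the CSS effect are simplified stand-ins, not the exact logic that ships:

```typescript
import { Hands, Results, NormalizedLandmark } from "@mediapipe/hands";
import { Camera } from "@mediapipe/camera_utils";

const video = document.querySelector<HTMLVideoElement>("#webcam")!; // assumed element id

const hands = new Hands({
  locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${f}`,
});
hands.setOptions({
  maxNumHands: 1,
  modelComplexity: 0,            // lightest model keeps latency low
  minDetectionConfidence: 0.7,
  minTrackingConfidence: 0.5,
});

// Crude "open palm" heuristic: all four fingertips sit above their PIP
// joints in image space (y grows downward). A stand-in for the real trigger.
function isOpenPalm(lm: NormalizedLandmark[]): boolean {
  const fingers: Array<[number, number]> = [[8, 6], [12, 10], [16, 14], [20, 18]];
  return fingers.every(([tip, pip]) => lm[tip].y < lm[pip].y);
}

hands.onResults((results: Results) => {
  const lm = results.multiHandLandmarks?.[0];
  if (lm && isOpenPalm(lm)) enterUpsideDown();
});

// One possible "Upside Down" effect: plain CSS, cheap enough to feel instant.
function enterUpsideDown(): void {
  document.documentElement.style.filter = "invert(1) hue-rotate(180deg)";
  document.documentElement.style.transform = "rotate(180deg)";
}

// Pump webcam frames into the model.
new Camera(video, {
  onFrame: async () => { await hands.send({ image: video }); },
  width: 640,
  height: 480,
}).start();
```

In this sketch, modelComplexity: 0 selects the lightest hand model, trading some accuracy for the low latency a “feels instant” trigger needs.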
Privacy
• Camera is used for on-device gesture detection only
• No recording, no upload
What I’m exploring
• The practical ceiling of on-device gesture recognition in real-world conditions (lighting, camera quality, background noise), and what it takes to keep latency low enough for a “feels instant” UX
• How far AI-assisted coding can take a real-time interactive web experience before manual performance work becomes the bottleneck
Feedback I’d love
• OS + browser + device, and whether it triggers reliably for you
• Any performance issues or onboarding confusion
• Did you find any easter eggs, and which one is your favorite?
I’m Howie, founder of Protico. As a technical partner to major publishers across the APEC region, we’ve helped build community spaces on their platforms and have seen firsthand the challenges publishers face in an increasingly AI-driven search landscape (what we call AI Publishing).
That work has let us dig deeper into the data on behalf of our clients, surfacing insights that may have previously gone unnoticed in the market. We’ve organized them into a brief, three-minute read for anyone who may find them useful.
Feel free to reach out if you’d like to explore our findings further. The agent skill is on GitHub at https://github.com/tico-messenger/protico-agent-skill, and I’d love any feedback!