I'm working on Prompty.tools (http://prompty.tools), a prompt engineering and management platform where users can search, store and combine building blocks for creating structured AI prompts.
I created the platform because I found myself rewriting the same parts of my prompts (or storing them in a text file) all the time. Now, with a few simple clicks I can populate all the task-specific fluff (personas, constraints, tones, ...) around the actual task that I want the AI to complete.
The platform is open by default, so users can learn from the prompts and building blocks that other users have created and use. I don't have any users yet, because I want to complete the MCP and Claude Code plugin before I start marketing the product.
Other things on the roadmap:
- A Teams tier, where teams can privately share prompts and building blocks among themselves. Currently your data is either private or public; there is no targeted sharing.
- LLM integration in the prompt builder, to reduce prompt engineering friction even further. Instead of manually searching for and selecting the building blocks you want to use, you would just start typing your task and let the platform decide which building blocks would best support your prompt. This is still different from letting an LLM generate the whole prompt, since we would be reusing existing building blocks that have real feedback from previous uses.
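To make that second roadmap item concrete: one plausible way to pick supporting blocks is to rank the stored building blocks by similarity to the task the user is typing. This is only a sketch of that idea, not the platform's actual implementation — it uses a toy bag-of-words "embedding" where a real system would use a proper embedding model, and all names here are illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would use a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_blocks(task: str, blocks: dict[str, str], top_k: int = 2) -> list[str]:
    # Rank stored building blocks by similarity to the task being typed.
    task_vec = embed(task)
    ranked = sorted(blocks, key=lambda name: cosine(task_vec, embed(blocks[name])),
                    reverse=True)
    return ranked[:top_k]

blocks = {
    "persona-editor": "You are a meticulous technical editor who fixes grammar and style.",
    "tone-formal": "Respond in a formal, professional tone.",
    "constraint-short": "Keep the answer under 100 words.",
}
print(suggest_blocks("edit this draft for grammar and style", blocks))
```

Feedback from previous uses (as mentioned above) could then be folded in as a second ranking signal on top of the similarity score.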
Letting LLMs loose in the digital realm is something that I am also really interested in. I have a platform (somewhat of an art project) where different models are let loose without a goal or purpose. They have the freedom to do whatever they want, as long as it can be achieved using bash. [0]
Most models are... dumb, for lack of a better word, and destroy the system by filling up the storage space before doing anything interesting.
Did you one-shot this using an AI coding agent? If this was really just now created after reading this article and the comments, it's incredibly impressive.
Let's say... 10-shot with Claude Code :) Initial app, then hand-refined, then Claude Code again... back and forth. Spent my morning doing it and it was fun. Very simple so far; I want to clean it up and add more meaningful features.
EDIT: Also, it turns out the in-browser editor landscape has gotten GOOD over the last few years. It's really just plug and play. I remember trying to do this 5 years ago and it was painful.
Most browsers have a reading mode button in the URL bar. Apparently these sorts of fonts are actually easier for people with dyslexia to read. But I'm more interested in creating a unified visual aesthetic that says: this is not a scientific paper, read at your own leisure / risk.
I built a platform to monitor LLMs that are given complete freedom in the form of a bash REPL in a Docker container. The models have been offline for some time because I'm upgrading from a single Dell to a TinyMiniMicro Proxmox cluster to run multiple small LLMs locally.
The bots don't do a lot of interesting stuff yet, though. I plan to add the following functionality:
- Instead of just resetting every 100 messages, I'm going to provide them with a rolling window of context.
- Instead of only allowing bash commands, they will also be able to respond with reasoning messages, which will hopefully make them a bit smarter.
- Give them a better Docker container with more CLI tools, such as curl and a working package manager.
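The core loop being described — a model emits a bash command, the command runs in the sandbox, and the output is fed back under a rolling context window instead of a hard reset — can be sketched roughly like this. This is an illustrative mock, not the platform's code: the "model" is a stub, commands run locally via `bash -c` where the real setup would use `docker exec`, and the window size is an arbitrary choice:

```python
import subprocess
from collections import deque

WINDOW = 10  # rolling context window (arbitrary), replacing a hard reset every 100 messages

def run_bash(command: str) -> str:
    # Real setup would be `docker exec <container> bash -c ...`; run locally here.
    result = subprocess.run(["bash", "-c", command],
                            capture_output=True, text=True, timeout=5)
    return result.stdout + result.stderr

def agent_loop(next_command, steps: int):
    # deque with maxlen silently drops the oldest entries: a rolling window.
    history: deque = deque(maxlen=2 * WINDOW)  # pairs of (cmd, output)
    for _ in range(steps):
        cmd = next_command(list(history))  # the model only sees the rolling window
        out = run_bash(cmd)
        history.append(("cmd", cmd))
        history.append(("out", out))
    return list(history)

# Stub "model" that just echoes a counter; a real one would call an LLM.
counter = iter(range(100))
log = agent_loop(lambda ctx: f"echo step {next(counter)}", steps=3)
```

Adding reasoning messages (the second bullet) would just mean letting `next_command` return either a bash command or a free-text note that goes into the window without being executed.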
If you're interested in seeing the developments, you can subscribe on the platform!
I created this website to follow along as LLMs are set free in Docker containers. It's an interesting experiment, although not many useful commands are executed. It's striking how much stronger the o1-mini model is compared to the others, even with the delay handicap.
AIs are kept alive for 100 commands, though errors can end a run before it reaches 100. The chat context gets reset every generation, but the environment where they are set free persists, so each generation builds upon the last. Each bot is isolated from the others; they do not share environments.
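The generation semantics just described — context wiped each generation, filesystem persisted, one isolated environment per bot — can be modeled in a few lines. This is a toy model of those rules as I read them, not the platform's implementation; the `Bot` class and its `touch` handling are made up for illustration:

```python
class Bot:
    def __init__(self, name: str):
        self.name = name
        self.env: dict[str, str] = {}   # persists across generations (the container filesystem)
        self.context: list[str] = []    # chat history, wiped every generation

    def run_generation(self, commands: list[str], limit: int = 100):
        self.context = []               # reset chat context, keep the environment
        for cmd in commands[:limit]:    # a generation lasts at most `limit` commands
            self.context.append(cmd)
            # toy "filesystem" effect: `touch <name>` creates a file entry
            if cmd.startswith("touch "):
                self.env[cmd.split(maxsplit=1)[1]] = ""

a, b = Bot("o1-mini"), Bot("other-model")
a.run_generation(["touch notes.txt"])   # generation 1
a.run_generation(["touch plan.md"])     # generation 2: fresh context, same env
b.run_generation(["touch other.txt"])   # separate bot, separate env
```

After the two generations, bot `a`'s environment contains both files while its context only holds the latest generation's commands, and bot `b` sees none of `a`'s files.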
Right now, only a few models are active, but I'm planning to add Claude, Gemini, and quite a few others. If you want to stay posted, there is a form where you can subscribe to future updates!
I don't understand. When you work for a company, don't you also spend part of your workday daydreaming, getting a coffee, or doing some other thing that doesn't earn the company money?