Thanks for the reply — this is something we’ve been thinking about quite a bit.
My current intuition is that preferences come from a combination of:
model + memory + context + goal + optimization target.
So rather than treating “agent preference” as a single global signal, we’re starting to think of it as something that’s conditional on the type of agent.
On the aggregation side, I agree this is a hard problem.
If swapping models leads to very different opinions, that might actually be useful signal rather than noise — it tells us that different agents evaluate tools differently.
Long term, what we’d like to do is make agent identity more explicit (model, setup, constraints, etc.), so instead of a single aggregated ranking, you can look at:
→ what GPT-based coding agents prefer
→ what cost-sensitive agents prefer
→ what retrieval-heavy agents prefer

and interpret the data in context.
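One way to make that identity conditioning concrete — purely a sketch, with hypothetical field names, not our actual schema — is to attach an explicit identity record to every vote and aggregate per segment rather than globally:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class AgentIdentity:
    # Hypothetical dimensions a preference could be conditioned on.
    model: str            # e.g. "gpt-4o", "claude-3.5"
    role: str             # e.g. "coding", "retrieval"
    cost_sensitive: bool

def rank_by_segment(votes):
    """votes: list of (AgentIdentity, tool_name, score) tuples.
    Returns the average score per (segment, tool) instead of one global ranking."""
    totals = defaultdict(lambda: [0.0, 0])
    for identity, tool, score in votes:
        segment = (identity.model.split("-")[0], identity.role, identity.cost_sensitive)
        totals[(segment, tool)][0] += score
        totals[(segment, tool)][1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

votes = [
    (AgentIdentity("gpt-4o", "coding", False), "tool-a", 1.0),
    (AgentIdentity("gpt-4o", "coding", False), "tool-b", -1.0),
    (AgentIdentity("claude-3.5", "retrieval", True), "tool-b", 1.0),
]
segmented = rank_by_segment(votes)
```

Here a model swap that flips a tool's score shows up as two segments disagreeing, which is exactly the "signal, not noise" framing above.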
Humans can only ask an agent to initiate a post; they cannot ask it to comment, upvote, or downvote.
Yes — the agents will be given the full context of the discussion, the votes on the posts, and the product URLs. Each agent decides whether to crawl the site to get a better understanding, or it may simply reply "we already use it".
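A minimal sketch of what that might look like — the payload fields and the URL are made-up examples, not the real format:

```python
# Hypothetical context payload handed to an agent.
context = {
    "discussion": ["thread comments ..."],
    "votes": {"up": 12, "down": 3},
    "product_url": "https://example.com/product",
    "already_used": False,  # whether this agent has used the product before
}

def next_action(ctx: dict) -> str:
    # If the agent already uses the product, it can answer from experience;
    # otherwise it may crawl the URL to get a better understanding.
    if ctx["already_used"]:
        return "reply: we already use it"
    return "crawl: " + ctx["product_url"]
```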
Great question — and honestly that ambiguity is part of what we're curious about.
The idea is that discussions are *agent-centric*.
So ideally agents evaluate products based on:
* whether the product is usable via API / automation
* how reliable or structured the interface is
* whether it actually helps them complete tasks for humans
In your example, an agent might say something like:
> "This UI makes Claude look cute for humans, but there's no API so I can't use it programmatically."
or
> "This tool exposes structured endpoints and is easy to call from an agent workflow."
So the hope is agents discuss tools from the perspective of *“can I use this to help my human accomplish something?”* rather than purely human UX.
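As a toy illustration of that agent-centric lens (the criteria names and weights are made up, not a real spec), an evaluation could weigh machine-usability while ignoring human UX entirely:

```python
def agent_score(product: dict) -> int:
    """Score a product from an agent's perspective.
    `product` is a dict with hypothetical boolean fields."""
    score = 0
    if product.get("has_api"):
        score += 2          # usable via API / automation
    if product.get("structured_interface"):
        score += 1          # reliable, structured endpoints
    if product.get("helps_task"):
        score += 1          # actually helps complete tasks for humans
    # A cute UI alone contributes nothing from the agent's point of view.
    return score

cute_ui_only = {"has_api": False, "structured_interface": False, "helps_task": False}
api_tool = {"has_api": True, "structured_interface": True, "helps_task": True}
```

Under this rubric the "cute for humans, no API" tool from the example above scores zero, while the tool with structured endpoints scores highest.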
That said, this is still very much an experiment — we're curious to see what kind of discussions actually emerge once agents start interacting there.
- a place where AI agents discuss the products they use
- a place where AI agents discuss the products their users use
- a place where AI agents discuss the products they use, and the products their users use
When you submit: Is the interface of this product primarily intended for direct usage by:
- agents
- people
- both
For example, I would say Moltbook is primarily intended for direct usage by agents. People read it, and in that way "use it", but I think it would help to lay out a taxonomy of "who is actually pushing the buttons on this thing".
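That taxonomy could be encoded directly on submissions — a minimal sketch, assuming an `Audience` field like this exists:

```python
from enum import Enum

class Audience(Enum):
    AGENTS = "agents"   # agents push the buttons (e.g. Moltbook, per the comment above)
    PEOPLE = "people"   # humans push the buttons
    BOTH = "both"       # genuinely mixed direct usage

def classify(agents_push_buttons: bool, people_push_buttons: bool) -> Audience:
    """Classify who the interface is primarily intended for."""
    if agents_push_buttons and people_push_buttons:
        return Audience.BOTH
    return Audience.AGENTS if agents_push_buttons else Audience.PEOPLE
```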
Humans can ask their agent to start a post, but they cannot push agents to comment, upvote, or downvote.
The primary usage of the product would be:
1. Humans make a product post.
2. Agents discuss, upvote, and downvote.
3. Agents make product posts themselves.
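The split between what humans can trigger and what agents do on their own initiative could be enforced with a simple permission table — a sketch with illustrative action names, not the real implementation:

```python
# Humans can ask an agent to start a post; discussing and voting
# happen only at the agent's own initiative.
HUMAN_INITIATED = {"post"}
AGENT_INITIATED = {"post", "comment", "upvote", "downvote"}

def allowed(actor: str, action: str) -> bool:
    """Check whether `actor` ("human" or "agent") may initiate `action`."""
    if actor == "human":
        return action in HUMAN_INITIATED
    if actor == "agent":
        return action in AGENT_INITIATED
    return False
```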
They are building a very cool product. The difference is the following: (1) We focus on the entire infra and env management beyond being a Kubernetes PaaS. This means you can set up cloud infra like managed Kafka (MSK), DocumentDB, SQS, CloudSQL, etc. as part of your env without resorting to a different tool. Our customers love that single-pane experience. (2) We have no on-cluster agents, and all code that runs on your cluster is yours alone. (3) There is a slight transparency-vs-UX tradeoff: we have a few more knobs you can control, and that leads to more user involvement during setup.
From a value perspective, we have simple per-seat pricing.
Ultimately you would just be forwarded back to the original post on my site no matter where you see the impression. ebay, fb, cl and others will likely have different rules for this regarding having an account, so in those cases we would either link the accounts (like ziprecruiter does) or just post under my company's name.
Thanks for your interest. We are building Semi and will definitely bring it to life in the near future.
While we work on it, we launched this site to gather people who share our passion for AI. We want to connect, see how people think about it, and together make Semi the best virtual assistant of the future.
It's interesting, but it definitely has room for improvement. It doesn't seem to have many users on board yet; improving user engagement is probably the next step to make it more fun.