Hacker News | thutch76's comments

I'm very wary of this request, though I understand it. I've been reading HN daily since around 2014. My involvement was purely passive (i.e., I've been a lurker) because I really didn't think I had much to contribute that wasn't already stated better by others.

I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week.

While I think a minimum post count or reputation metric could perhaps reduce the AI generated posts, introducing friction also makes it harder for real people to contribute anything meaningful.

Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?

I made a Show post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?


> Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?

If a human put effort into it, is proud of it, and wants to show it to the world, I'm happy to invest some time to have a look and maybe provide some helpful feedback.

I'm not willing to invest my time evaluating the more-or-less-correct-sounding ideas of an ML model.


I don't care if the code is generated; I care if the content is. I don't want to read another "No complexity. No fuss. No buzzwords." or "It's not just a tool, it's a lifestyle." It's sooooo boring...

If you're going to spend 3 hours making a post, why not just write it yourself in the first place and avoid the issue and the reputational damage?

This is awfully narrow-minded. I had Claude give me an initial framework, based on many hours of chat context across many different documents. It helped me organize my thoughts.

Some of us need assistance to communicate effectively. And for me, yes, that took 3 hours even with this assistance.


Just write the text yourself, not many people enjoy reading AI-generated posts, even edited.

Builder here, happy to answer questions about any of the internals.

If you want to dig deeper on the architecture, I wrote up the dual-LLM design and the memory system on my engineering blog:

- https://www.conecrows.com/blog/augur-soft-launch — covers the dual-LLM turn loop, perception gating, and why the single-LLM approach failed

- https://www.conecrows.com/blog/augur-memory-v1 — covers impression extraction, vector embeddings, and lossy synthesis

The stack is Fastify, Next.js, Supabase (Postgres + Auth), and both OpenAI and Anthropic models for the engine and analysis. Encounters cost roughly $0.08–$0.15 per turn (internal cost) depending on state complexity and encounter length.


The dual-LLM split is the interesting part. By separating the engine model (game state) from the architect model (perception/strategy), you're giving each a scoped context instead of one giant prompt that tries to do both. That's where single-LLM approaches tend to collapse: the role bleeds across the context and you lose the behavioral boundary.
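To make the scoping concrete, here's a minimal sketch of the idea in TypeScript. This is an invented illustration, not the author's actual code: the types, function names, and the split between "perception summary" and "mechanical state" are all assumptions about how such a turn loop might be wired.

```typescript
// Hypothetical sketch: each model sees only the slice of state relevant
// to its role, instead of one prompt carrying both game state and strategy.
type GameState = { location: string; hp: number };

// Invented stand-in for an LLM call; a real version would hit an API.
type LLMCall = (systemPrompt: string, input: string) => string;

function runTurn(
  state: GameState,
  playerInput: string,
  architect: LLMCall,
  engine: LLMCall,
): string {
  // Architect model: sees only a perception-level view, decides strategy.
  const directive = architect(
    "You plan narrative strategy.",
    JSON.stringify({ location: state.location, input: playerInput }),
  );
  // Engine model: sees only mechanical state plus the architect's directive,
  // never the raw strategic context. The boundary is structural, not prompted.
  return engine(
    "You resolve game mechanics.",
    JSON.stringify({ hp: state.hp, directive }),
  );
}
```

Because neither call ever receives the other's full context, the role boundary can't bleed the way it does when one model is asked to hold both hats in a single prompt.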

The same scoping problem shows up in prompt design more generally. When role, context, and output instructions all live in one flat string, the model treats them as a gradient, not distinct signals. Typed blocks with explicit labels keep them isolated.
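A toy sketch of what "typed blocks with explicit labels" can mean in practice, assuming XML-style tags as the label format (the block names here are made up, not any particular tool's schema):

```typescript
// Hypothetical illustration: labeled blocks compiled into tagged sections
// instead of one flat string.
type PromptBlock = { tag: string; content: string };

function compilePrompt(blocks: PromptBlock[]): string {
  return blocks
    .map(({ tag, content }) => `<${tag}>\n${content}\n</${tag}>`)
    .join("\n\n");
}

const prompt = compilePrompt([
  { tag: "role", content: "You are a dungeon master." },
  { tag: "context", content: "The party stands at the gate." },
  { tag: "output_format", content: "Second person, max 3 sentences." },
]);
// Each instruction lands in its own tagged section rather than blending
// into an undifferentiated gradient of prose.
```

The point isn't the XML itself but that the boundaries are explicit, so the model can treat role, context, and output rules as distinct signals.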

I've been building flompt (github.com/Nyrok/flompt) for exactly this: a visual prompt builder that decomposes prompts into 12 semantic blocks and compiles to Claude-optimized XML. Different scale than your system, but the same underlying idea. Open source.


