Hacker News | SteveJS's comments

I am using Lean as part of the prd.md description handed to a coding agent. The definitions in Lean compile and mean exactly what I want them to say. The implementation I want to build is in Rust.

HOWEVER … I hit something I now call a McLuhan vortex error: “When a tool, language, or abstraction smuggles in an implied purpose at odds with your intended goal.”

Using Lean implies to the coding agent that ‘proven’ is a pervasive goal.

I wanted to use Lean to be more articulate about the goal. Instead, using Lean smuggled in a difficult-to-remove implicit requirement that everything, everywhere, must be proven.

This was obvious because the definitions I made in Lean imply the exact opposite of "everything needs to be proven." When I say morphism, I mean anything that is a morphism, not only things proven to be morphisms.

A coding agent driven by an LLM needs a huge amount of structure to use what the math actually says, rather than take on the implication that, because it is using a proof system, everything everywhere is better if proven.

The initial way I used Lean poisoned the satisficing structure that unfolds during a coding pass.
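To make the distinction concrete, here is a minimal Lean sketch of what I mean (the names are illustrative, not the actual definitions from my prd.md): the morphism-like thing is plain data, and the proven version is a separate, opt-in refinement rather than a pervasive requirement.

```lean
-- A "morphism" as plain data: anything with the right shape qualifies.
-- Nothing here obligates the agent to prove anything about it.
structure RawMap (α β : Type) where
  map : α → β

-- The proven-lawful version is a separate, opt-in refinement.
-- Only code that explicitly asks for ProvenInvolution owes the proof.
structure ProvenInvolution (α : Type) where
  map : α → α
  invol : ∀ x, map (map x) = x
```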


Could you put that distinction into the AGENTS.md file so it will understand and follow that nuance?


I have several techniques queued up that attempt to counter it. The distinction in AGENTS.md is definitely part of it.

Not sure if they will work yet.
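For what it's worth, a sketch of how that distinction might be worded in AGENTS.md (the exact wording here is hypothetical):

```markdown
## Lean usage

- The Lean definitions in prd.md are a *specification vocabulary*, not a proof
  obligation.
- "Morphism" means anything that is a morphism, not only things proven in Lean
  to be morphisms.
- Do NOT add proofs, invariant checks, or verification scaffolding to the Rust
  implementation unless a task explicitly asks for them.
```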


Amusing that this got downvoted when it points directly to the word used for the phenomenon: https://en.wikipedia.org/wiki/Blinkenlights

I seem to recall an Intel i960 was used to drive the LEDs on at least one model.


Ada Palmer has a great blog post on why Stoicism is so appealing to rich folks. Her writing is always excellent.

https://www.exurbe.com/stoicisms-appeal-to-the-rich-and-powe...


Wife’s comment: “Cherries? It needs to be an apple.”


This gets those cases right.

https://github.com/KnowSeams/KnowSeams

(On a beefy machine) it gets ~1 GB/s throughput, including all IO and position mapping back to the original text location. I used it to split Project Gutenberg novels; it does 20k+ novels in about 7 seconds.

Note it keeps all dialogue together, which may not be what others want, but was what I wanted.
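The repo's actual algorithm is different (and much faster), but the "keep dialogue together" idea can be sketched in a few lines: don't treat punctuation inside double quotes as a sentence boundary.

```python
def split_sentences(text):
    """Toy sentence splitter: break on ., !, or ?, but never inside double
    quotes, so a quoted stretch of dialogue stays one segment.
    (A sketch of the idea only, not KnowSeams' real algorithm.)"""
    parts, buf, in_quote = [], [], False
    for ch in text:
        if ch == '"':
            if in_quote and buf and buf[-1] in '.!?':
                # Closing quote right after terminal punctuation ends the sentence.
                buf.append(ch)
                parts.append(''.join(buf).strip())
                buf, in_quote = [], False
            else:
                in_quote = not in_quote
                buf.append(ch)
            continue
        buf.append(ch)
        if ch in '.!?' and not in_quote:
            parts.append(''.join(buf).strip())
            buf = []
    tail = ''.join(buf).strip()
    if tail:
        parts.append(tail)
    return parts

# The quoted dialogue survives as one chunk despite its internal ! and .
chunks = split_sentences('He said, "Stop! Now." Then he left.')
```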


My grandmother loved clippy.

Melinda French Gates, back when she was Melinda French, had a part in Clippy's origin.

“Melinda French (then the fiancée of Bill Gates) was the project manager of Microsoft Bob”

Microsoft Bob is where Clippy was born.

Reference: https://www.artsy.net/article/artsy-editorial-life-death-mic...


I spent a lot of time playing with Microsoft Bob.

I don't know what you were actually supposed to do with it, but in real life I spent a lot of time building houses/forts, so I did that in Bob too. In a different era I'd've just done all that in Minecraft.


Also Matryoshka embeddings, and the ability to guide matches by using prefix instructions on the query.

I have ~50 million sentences from English Project Gutenberg novels embedded with this.
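Matryoshka-trained embeddings are front-loaded, so at ~50M-sentence scale you can keep just a prefix of each vector and re-normalize it, trading a little accuracy for a lot of storage. A minimal sketch of the truncation step (pure Python; the dimensions are toy values, and the model-specific query prefixes are omitted):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` coordinates of a Matryoshka-style embedding and
    re-normalize to unit length so cosine similarity remains meaningful."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# e.g. shrink a toy 8-d vector down to its first 4 dimensions
full = [0.5, 0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 0.1]
small = truncate_embedding(full, 4)
```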


Why would you do that? I'd love to know more.


The larger project is to allow analyzing stories for developmental editing.

Back in June and August I wrote some LLM-assisted blog posts about a few of the experiments.

They are here: sjsteiner.substack.com


What are you using those embeddings for, if you don't mind me asking? I'd love to know more about the workflow and what the prefix instructions are like.


The summer before I took 6.001, I read “The Little LISPer”. It is a good intro.

This is the version I read:

https://www.abebooks.com/9780023397639/Little-LISPer-Third-E...



Here is a (3-month-old) repo where I did something like that, and all the tasks are checked into the linear git history: https://github.com/KnowSeams/KnowSeams


