Hacker News | rando77's comments

It's worth taking large-scale job losses from AI into account when deciding what to do now, even if those losses haven't arrived yet.

It's important to have good feedback at each stage of iteration

I've thought of a service that scans new websites and GitHub repos, looks for things that don't look like anything else (using something like HDBSCAN for outlier detection), and creates a feed for people to follow.
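As a rough sketch of the outlier-detection part, here is a stdlib-only stand-in for HDBSCAN's outlier scores: score each project embedding by its distance to its k-th nearest neighbour, so things far from everything else float to the top. The toy embeddings and the `novelty_scores` helper are illustrative, not from any real service.

```python
# Hedged sketch: k-NN distance as a cheap stand-in for HDBSCAN outlier scores.
from math import dist

def novelty_scores(vectors, k=2):
    """Score each item by its distance to its k-th nearest neighbour.
    Items far from everything else score high -> candidates for the feed."""
    scores = []
    for i, v in enumerate(vectors):
        others = sorted(dist(v, w) for j, w in enumerate(vectors) if j != i)
        scores.append(others[k - 1])
    return scores

# Toy embeddings: three similar projects and one that looks like nothing else.
repos = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05), (5.0, 5.0)]
scores = novelty_scores(repos, k=2)
print(max(range(len(repos)), key=scores.__getitem__))  # prints 3, the outlier
```

In practice you'd feed HDBSCAN real embeddings of READMEs or page text; this just shows the shape of "unlike anything else" as a score you can rank a feed by.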

One example of something that might fit the loose "forest civilisation" mold is a social network faceted on ML-derived topics.

This would allow niches to exist, without people being able to post off-topic things to your feed.


Or maybe it won't. If it can be made as efficient as humans, it might run mainly at the edge.


I've been leaning towards multi-agent, because the sub-agent pattern relies on the main agent holding all the power and using it responsibly.


What does that mean?


Have you come across the lethal trifecta [1]? I'm interested in decomposing tasks so that no single agent does all three, with limited data pipes between them.

[1] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
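A minimal sketch of what that decomposition check could look like: tag each agent with the trifecta legs it holds (private data, untrusted content, external communication), and reject any pipeline where one agent holds all three. The capability names and `check_pipeline` helper are my own invention, not from Willison's article.

```python
# Hedged sketch: reject any agent that holds all three legs of the
# lethal trifecta. Capability names are illustrative.
PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMM = "private", "untrusted", "external"
TRIFECTA = {PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMM}

def check_pipeline(agents):
    """agents: dict of name -> set of capabilities.
    Raise if any single agent combines all three trifecta legs."""
    for name, caps in agents.items():
        if TRIFECTA <= caps:
            raise ValueError(f"{name} holds the full trifecta")
    return True

# A split that keeps the legs apart: the searcher never sees private data,
# the coder never talks to the network.
check_pipeline({
    "searcher": {UNTRUSTED_CONTENT, EXTERNAL_COMM},
    "coder":    {PRIVATE_DATA, UNTRUSTED_CONTENT},
})
```

The interesting part is then the data pipes between the agents, which is where the "limited" bit has to do the work.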


You might want some compute in space that you know is very hard to physically interfere with.

But for general-purpose compute, no.


I think it's great for experimenting and proving concepts: alphas and personal projects, not shipped code.

I've been working on wasm sandboxing and automatic verification that code doesn't exhibit the lethal trifecta, and got something working in a couple of days.

I'd like to do a clean rewrite at some point.


I'm interested in capability based software, with tools to identify the lethal trifecta.

This seems like a very hard problem for coding specifically, as you want untrusted content (web search results) to be able to influence sensitive things (code).
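One hedged way to square that circle is a narrow pipe: untrusted search results reach the coding agent only through an allow-listed, size-limited schema, so they can inform code without carrying an arbitrary payload. The field names and limits below are illustrative assumptions, not a real API.

```python
# Hedged sketch: a limited data pipe between a web-search agent and a
# coding agent. Only allow-listed, size-capped fields pass through, and
# URLs are stripped so results can't smuggle links onward.
import re

MAX_SNIPPET = 500

def narrow_pipe(result: dict) -> dict:
    """Pass only an allow-listed, size-limited view of a search result."""
    snippet = result.get("snippet", "")[:MAX_SNIPPET]
    snippet = re.sub(r"https?://\S+", "[link removed]", snippet)
    return {"title": result.get("title", "")[:120], "snippet": snippet}

raw = {"title": "Example", "snippet": "See https://evil.example/x for payload",
       "tracking_id": "abc123"}
print(narrow_pipe(raw))
```

This doesn't stop prompt injection inside the snippet text itself, which is exactly why the problem stays hard; it only narrows what the untrusted side can carry across.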

I'd love to find people to talk to about this stuff.


I've wondered if LLMs can help match people: each person gives their LLM some public context about their life, and the two LLMs have a chat about availability and world views.

Use AI to scaffold relationships not replace them.


Scam technology like LLMs aren't going to solve the loneliness epidemic.

