Hacker News | esafak's comments

You need to be able to at least control things that interact with the world to learn from it.

An agent need not have wants, so why would it try to increase its efficiency to obtain things?

Just put "keep yourself alive" in the SOUL.md. Might be all that it takes.

I don't think that was the intent of the comment; rather, that true AGI should be so useful and transformative that it unlocks enough value and efficiency to boost GDP, much like the Industrial Revolution or the harnessing of electricity, instead of being a fancy chatbot.

Increased productivity is not equivalent to intelligence.

Not equivalent, but I do think a necessary byproduct of actual AGI is that it will be able to solve actual problems in the real world in a way that generates positive value on a large enough scale that it shows up in GDP.

No one said it is. Sometimes correlation does equal causation.

But it's already like that; models are better than many workers, and I'm supervising agents. I'd rather have the model than numerous juniors, especially the kind who can't identify the model's mistakes.

The problem becomes your retirement. Sure, you've earned "expert" status, but all the junior developers won't be hired, so they'll never learn from junior mistakes. They'll blindly trust agents and not know deeper techniques.

You can get experience without an actual job.

Can I rephrase this as "you can get experience without any experience"? Certainly, there's stuff you can learn that's adjacent to doing the thing; that's what happens when juniors graduate with CS degrees. But the lack of doing the thing is what makes them juniors.

>that's what happens when juniors graduate with CS degrees

A CS degree is going to give you much less experience than building projects and businesses yourself.


This is my greatest cause for alarm regarding LLM adoption. I am not yet sure AI will ever be good enough to use without experts watching them carefully; but they are certainly good enough that non-experts cannot tell the difference.

From my experience, if you think AI is better than most workers, you're probably just generating a whole bunch of semi-working garbage, accepting that output as good enough, and will likely learn the hardware your software is full of bugs and incorrect logic.

hardware / hard way, auto-correct is a thing of beauty sometimes :)

Imagine putting it in a robot with arms and legs, and letting it loose in your house, or your neighborhood. Oh, the possibilities!

Heck, go the next step and put a knife in one hand and a loaded gun in the other!

If it replaces SaaS it will replace you too; how else will you collaborate?

Are any of these brand bibles public?

If you ever do creative work for a company, they usually hand you brand guidelines in some form or fashion: colors, fonts, how to display their name, what you can/can't do with their logo, etc. It's boring.

Some companies put up “press kits” on their site for public use, but it's usually logos and just basic info/stats.


It seems people are forgetting that companies should develop their differentiators and pay for the rest.

That's just a contrived example. Every application involves a million subjective decisions, from the architecture and algorithms to the UI/UX.

You could assign the cluster based on what the k nearest neighbors are, if there is a clear majority. The quality will depend on the suitability of your embeddings.
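A minimal sketch of that majority-vote idea, using stdlib Python on toy 2-D "embeddings" (all names and thresholds here are illustrative, not from any particular library):

```python
from collections import Counter
import math

def assign_cluster(point, labeled, k=5, min_majority=0.6):
    """Assign `point` to a cluster via its k nearest labeled neighbors.

    `labeled` is a list of (embedding, cluster) pairs. Returns the
    majority cluster, or None when there is no clear majority.
    """
    # Sort labeled points by Euclidean distance and keep the k nearest.
    nearest = sorted(labeled, key=lambda pc: math.dist(point, pc[0]))[:k]
    counts = Counter(cluster for _, cluster in nearest)
    cluster, votes = counts.most_common(1)[0]
    # Only assign when the winner holds a clear majority of the votes.
    return cluster if votes / k >= min_majority else None

# Toy embeddings: two tight clusters around (0, 0) and (10, 10).
data = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
        ((10, 10), "b"), ((10, 11), "b")]
print(assign_cluster((0.5, 0.5), data, k=3))  # → a
```

With real embeddings you'd likely use cosine distance and a vector index instead of a full sort, but the majority-vote logic is the same.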

Not knowing much about this space, isn't this something you could do in TLA+?

I too am still investigating the space, but what's attractive to me about CPNs is that they can be both the specification and the implementation. How you describe the CPN in code matters, but I'm toying with a Rust + SQL-macros version that makes describing invariants etc. natural. My understanding is that for TLA+ you'd need to write the spec and then write an implementation for it. This might be another path for "describe a formally verifiable shape, then agentically code it", but it smells to me like it wouldn't be doing as much work as it could.

I think there's an opportunity here to create a "state store" where the network topology and invariants ensure the consistency of the "marking" (i.e., the state of the database) and that it's in a valid state based on the current network definition. You could say "well, SQL databases already have check constraints", and we'd probably use those under the hood, but I'm betting on the ergonomics of putting the constraints right next to the things/actions relevant to them.
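To make the spec-is-the-implementation idea concrete, here is a toy Petri-net-style state store in Python (not the Rust + SQL-macro design described above, just a hypothetical illustration): a transition fires only when the resulting marking satisfies the net's invariant, so the state can never leave the valid set, much like a SQL CHECK constraint.

```python
class Net:
    """Toy Petri-net-style state store: places hold token counts,
    and a transition fires only if the resulting marking satisfies
    the invariant. Purely illustrative."""

    def __init__(self, marking, invariant):
        self.marking = marking        # place name -> token count
        self.invariant = invariant    # predicate over a marking

    def fire(self, consume, produce):
        # Transition is enabled only if there are enough tokens to consume.
        if any(self.marking.get(p, 0) < n for p, n in consume.items()):
            return False
        candidate = dict(self.marking)
        for p, n in consume.items():
            candidate[p] -= n
        for p, n in produce.items():
            candidate[p] = candidate.get(p, 0) + n
        # The invariant acts like a CHECK constraint: a firing that
        # would break it is rejected and the marking is unchanged.
        if not self.invariant(candidate):
            return False
        self.marking = candidate
        return True

# Invariant: the total number of tokens is conserved at 3.
net = Net({"free": 3, "busy": 0}, lambda m: m["free"] + m["busy"] == 3)
net.fire(consume={"free": 1}, produce={"busy": 1})  # fires: 2 free, 1 busy
net.fire(consume={"free": 1}, produce={})           # rejected: breaks invariant
print(net.marking)  # → {'free': 2, 'busy': 1}
```

Real CPNs also attach colors (typed data) to tokens and guards to transitions; here the invariant stands in for both to keep the sketch short.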

Yeah, as far as I can tell TLA+ can accomplish more or less the same stuff as Colored Petri Nets. You get a pretty graph with CPNs and it can be interesting to watch the data flow around in the animators, but I've had trouble doing anything terribly useful with Petri Nets.

I haven't really done anything with it, but I've heard Alloy gives you a graphical animation while giving you similar utility to TLA+.


There were a number of methods for doing TLA-like stuff. Others included SPIN/Promela, Pi Calculus (IIRC), and Event-B.
