I don't think that was the intent of the comment, more that true AGI should be so useful and transformative that it unlocks enough value and efficiencies to boost GDP. Much like the Industrial Revolution or harnessing electricity, instead of a fancy chatbot.
Not equivalent, but I do think a necessary byproduct of actual AGI is that it will be able to solve actual problems in the real world in a way that generates positive value on a large enough scale that it will show up in GDP
But it's already like that; models are better than many workers, and I'm supervising agents. I'd rather have the model than numerous juniors; esp. the kind that can't identify the model's mistakes.
The problem comes when you retire. Sure, you've earned "expert" status, but junior developers won't be hired, so they'll never learn from junior mistakes. They'll blindly trust agents and never pick up the deeper techniques.
Can I rephrase this as "you can get experience without any experience"? Certainly, there's stuff you can learn that's adjacent to doing the thing; that's what happens when juniors graduate with CS degrees. But the lack of doing the thing is what makes them juniors.
This is my greatest cause for alarm regarding LLM adoption. I am not yet sure AI will ever be good enough to use without experts watching them carefully; but they are certainly good enough that non-experts cannot tell the difference.
From my experience, if you think AI is better than most workers, you're probably just generating a whole bunch of semi-working garbage, accepting that output as good enough, and will likely learn the hard way that your software is full of bugs and incorrect logic.
If you ever do creative work for a company they usually hand you brand guidelines in some form or fashion. Colors, fonts, how to display their name, what you can/can't do with their logo, etc. It's boring.
Some companies put up "press kits" on their site for public use, but it's usually logos and just basic info/stats.
You could assign the cluster based on what the k nearest neighbors are, if there is a clear majority. The quality will depend on the suitability of your embeddings.
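A minimal sketch of that idea in plain Python (function and parameter names here are illustrative, and a real pipeline would use a vector index rather than a linear scan): take the k nearest labeled neighbors by cosine distance and assign the label only when a clear majority agrees.

```python
from collections import Counter
import math

def cosine_dist(a, b):
    # 1 - cosine similarity; assumes non-zero vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def assign_cluster(embedding, labeled, k=5, min_majority=0.6):
    """Assign a cluster only if a clear majority of the k nearest
    labeled neighbors agree; otherwise return None (abstain)."""
    neighbors = sorted(labeled, key=lambda item: cosine_dist(embedding, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    return label if count / k >= min_majority else None

labeled = [([1.0, 0.0], "a"), ([0.9, 0.1], "a"), ([0.95, 0.05], "a"),
           ([0.0, 1.0], "b"), ([0.1, 0.9], "b")]
print(assign_cluster([1.0, 0.05], labeled, k=3))  # "a"
```

The abstain path is the point: if no label reaches the majority threshold, you leave the item unassigned rather than forcing it into a cluster, which matters exactly because the quality hinges on the embeddings.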
I too am still investigating the space, but what's attractive to me about CPNs is that they can be both the specification and the implementation. How you describe the CPN in code matters, but I'm toying with a rust + SQL-macros version that makes describing invariants etc. natural. My understanding is that for TLA+ you'd need to write the spec, and then write an implementation for it.

This might be another path for "describe a formally verifiable shape, then agentically code it", but it smells to me a little like it wouldn't be doing as much work as it could. I think there's an opportunity here to create a "state store" where the network topology and invariants ensure the consistency of the "marking" (e.g. the state of the database here) and that it's in a valid state based on the current network definition.

You could say "well, SQL databases already have check constraints", and we'd probably use those under the hood, but I am betting on the ergonomics of putting the constraints right next to the things/actions relevant to them.
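To make the "spec is the implementation" idea concrete, here's a toy Python sketch (not the rust + SQL-macros version; all names are illustrative): places hold tokens, transitions have guards, and every firing re-checks the declared invariants, so the marking can never drift into an invalid state.

```python
class Net:
    """Toy Petri-net-ish state store: invariants are declared once and
    asserted after every transition firing."""

    def __init__(self, invariants):
        self.places = {}              # place name -> list of tokens
        self.invariants = invariants  # name -> predicate over all places

    def add_place(self, name, tokens=()):
        self.places[name] = list(tokens)

    def fire(self, consume, produce, guard=lambda tokens: True):
        """consume/produce: {place: token}. Fails if the transition is
        not enabled, the guard rejects, or an invariant would break."""
        if any(tok not in self.places[p] for p, tok in consume.items()):
            raise ValueError("transition not enabled")
        if not guard(dict(consume)):
            raise ValueError("guard rejected tokens")
        for p, tok in consume.items():
            self.places[p].remove(tok)
        for p, tok in produce.items():
            self.places[p].append(tok)
        for name, check in self.invariants.items():
            assert check(self.places), f"invariant violated: {name}"

# Invariant lives next to the topology: total value is conserved.
net = Net({"conservation": lambda ps: sum(sum(t) for t in ps.values()) == 10})
net.add_place("a", [10])
net.add_place("b")
net.fire(consume={"a": 10}, produce={"b": 10}, guard=lambda t: t["a"] > 0)
```

In the rust + SQL version the invariants would presumably compile down to check constraints, as the comment above suggests; the ergonomic bet is that they're declared beside the transitions they govern.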
Yeah, as far as I can tell TLA+ can accomplish more or less the same stuff as Colored Petri Nets. You get a pretty graph with CPNs and it can be interesting to watch the data flow around in the animators, but I've had trouble doing anything terribly useful with Petri Nets.
I haven't really done anything with it, but I've heard Alloy gives you a graphical animation while giving you similar utility to TLA+.