For sure! UIs have also been the main way to interact with a computer, past and present, offline or online. Even Hacker News - which is mostly text - has some UI to vote, navigate, flag…
Imagine the mess of a text-field-only interface where you had to type "upvote the upper ActionHank message" or "open the third article's comments on the front page, the one that talks about on-demand UI generation…" and then press enter.
Don’t get me wrong: LLMs are great and it’s fascinating to see experiments with them. Kudos to the author.
I thought you'd say the bad part is not being able to reload the form later from the same URL. This would be a "quantum UI", slightly different every time you load it.
If you look at many of the current innovations around working with LLMs and agents, they are largely about constraining and tracking context in a structured way. Emergent patterns for this will likely appear over time; for now I am implementing my own approach, hopefully with abstractions good enough to allow future portability.
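To make that concrete, here is a minimal sketch of what I mean by constraining and tracking context in a structured way (all names here are mine, not from any real library):

    from dataclasses import dataclass, field

    @dataclass
    class ContextWindow:
        budget_tokens: int
        entries: list = field(default_factory=list)

        def add(self, role: str, text: str, tokens: int):
            # Keep the prompt bounded: drop the oldest entries once
            # the token budget is exceeded.
            self.entries.append((role, text, tokens))
            while sum(t for _, _, t in self.entries) > self.budget_tokens:
                self.entries.pop(0)

        def render(self) -> str:
            # Everything that reaches the model passes through here,
            # so the context stays auditable.
            return "\n".join(f"{r}: {t}" for r, t, _ in self.entries)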
Exactly! LLMs can generate UIs according to user needs, e.g. simplified or translated ones, on demand. No need for preset forms or long ones. Just the required fields.
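As a rough sketch of what on-demand generation could look like - `complete` stands in for whatever LLM call you use, it is not a real API:

    import json

    # Hypothetical sketch: ask a model for a JSON form schema tailored
    # to the task and the user's language, then render it client-side.
    def generate_form(task: str, language: str, complete) -> list[dict]:
        prompt = (
            f"Return a JSON array of form fields (name, label, type) "
            f"needed to {task}. Labels must be in {language}. "
            f"Include only fields that are strictly required."
        )
        return json.loads(complete(prompt))

    # e.g. generate_form("book a flight", "French", complete) might yield
    # [{"name": "origin", "label": "Départ", "type": "text"}, ...]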
Yes and no. The problem with not expecting a prominent project to follow the rules is that it makes it easier, and more likely, that no one will follow them.
The few times I've tried to use an agent for anything slightly complex, or on a moderately large code base, it just proceeds to smear poop all over the floor, eventually backing itself into a corner.
Assuming this means copyright is dead, companies will be very upset, and patents will likely follow.
The hold US companies have on the world will be dead too.
I also suspect that media piracy will be labelled as the only reason we need copyright, an existing agency will be bolstered to address this concern and then twisted into a censorship bureau.
I think that everyone is misjudging what will improve.
There is no doubt it will improve, but if you look at a car today, it still has the same fundamental "shape" as a Model T.
There are niceties and conveniences, efficiency went way up, but we don't have flying cars.
I think we are going to land somewhere in the middle: AI features will eventually find their niche, and people will continue to leverage whatever tools and products are available to build the best thing they can.
I believe that a future of self-writing code pooping out products, AI doing all the other white-collar jobs, and robots doing the rest cannot work. Fundamentally, there is no "business" without customers, and no customers if no one is earning.
You cannot build a tractor unit (the engine-cab half of a tractor-trailer) with Model T technology, even if they are close.
And the changes will be in the auxiliary features. We will figure out ways to have LLMs understand APIs better without additional training. We will figure out ways to better focus their context. We will chain LLM requests and contexts in ways that help solve problems better. We will figure out ways to pass context from session to session so that an LLM can effectively have a learning memory. And we will figure out our own best practices to emphasize their strengths and minimize their weaknesses. (We will build better roads.)
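For instance, a rough sketch of the session-to-session memory idea, with `complete` again standing in for a hypothetical LLM call:

    import json, pathlib

    MEMORY = pathlib.Path("memory.json")  # hypothetical note store

    def run_session(task: str, complete) -> str:
        # Prepend distilled notes from earlier sessions to the context.
        notes = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
        answer = complete("Known project facts:\n" + "\n".join(notes)
                          + f"\n\nTask: {task}")
        # Ask the model to distill what it learned into one reusable
        # note, then persist it for the next session.
        notes.append(complete(
            f"Summarize in one line what you learned doing: {task}"))
        MEMORY.write_text(json.dumps(notes))
        return answer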
And as much as you want to say that - a Model T was uncomfortable, had a range of about 150 miles between fill-ups, and maxed out at 40-45 mph. It also broke down frequently and required significant maintenance. It might take 13-14 days to get a Model T from New York to Los Angeles today, maintenance issues notwithstanding, while a modern car could make it reliably in 4-5 days if you are driving legally and not pushing more than 10 hours a day.
I too think that self-writing code is not going to happen, but I do think there is a lot of efficiency to be gained.