I recently had to install Windows for the first time in ages because reasons, and it really wasn’t very hard. The setup presents just two options at a time: the cloudy option and the other option; if in doubt, the flashy one is the cloudy one. I kept selecting the non-cloudy option and got to the desktop without signing up for anything. Sure, it took more clicking than the last time I went through this, but it really wasn’t nearly as bad as people say, and it didn’t take any Windows know-how or googling. Might be very different between editions and regions though…
Edit: ofc we all agree local accounts need to be a supported option, but perhaps we should be more careful about yelling from the rooftops that it’s practically impossible. I’ve been told for years now that it’s really hard or impossible, and it really was not that hard (yet…)
Similar background here. I think what made it «click» into place for me was the notion of metaphors and mental models.
When someone (remember to define who!) looks at your app, they’ll subconsciously build a map/model of how your system works. This model doesn’t have to be accurate or complete, and very often it won’t match your system model and architecture diagrams at all. But it needs to be good enough for the user to get their task done. Example: many devs think of a git repo as a tree of commits, and they mostly get the job done even though that’s not at all how git actually works.
The challenge then becomes to communicate a sufficient mental model with the minimal amount of effort on the user’s part. (You could write a user manual or an interactive onboarding tutorial, but let’s be honest nobody is going to read that.)
How? Reduce the burden by explaining in terms of things they already understand. Examples:
* Practically all UI toolkits come with buttons that resemble real-life physical buttons. You don’t need to read the label or anything else to recognise that pressing it will trigger an immediate action.
* A bus ticket app will visually display the ticket in a design that resembles a paper ticket. This helps drive home the point that «this box with a QR code in it is just like having a ticket in your hand».
A lot of design work can be seen as first mapping out what the users need, and then working out how the UI should communicate how your system can help them achieve it. This involves finding users and asking them a boatload of questions to understand how they think and what they really need. It also means checking whether your UI communicates a sufficient mental model (don’t contaminate their minds by explaining your UI!) and figuring out how to fix it when it’s inevitably not right on the first attempt.
After all that, you can start worrying about colors. There are lots of good recommendations here already; I highly recommend both reading some of the books and then observing some real users trying real systems for the first time.
I do agree with the other commenters about this being better solved with a <link rel="llm"> or just an Accept: text/markdown; profile=llm header.
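To make the `<link rel="llm">` idea concrete, here is a minimal sketch of the client side. The `rel="llm"` value is hypothetical (nothing standardizes it today), but if it existed, Python’s stdlib `html.parser` would be enough to discover the advertised document:

```python
from html.parser import HTMLParser

class LLMLinkFinder(HTMLParser):
    """Collects href values of <link rel="llm"> elements in a page's <head>."""
    def __init__(self):
        super().__init__()
        self.llm_links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "llm":
            self.llm_links.append(a.get("href"))

# Hypothetical page advertising an LLM-oriented version of itself.
html = (
    '<head>'
    '<link rel="llm" href="/docs/for-llms.md">'
    '<link rel="stylesheet" href="/style.css">'
    '</head>'
)
finder = LLMLinkFinder()
finder.feed(html)
print(finder.llm_links)  # → ['/docs/for-llms.md']
```

Unlike a site-wide /llms.txt, this discovery happens per page, so every URL can point at its own machine-friendly representation.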
It's not a given that a site only contains a single "thing" that LLMs are interested in. To continue your dev-doc example, many projects use GitHub instead of their own website. GitHub's /llms.txt wouldn't contain anything at all about your FastHTML project, but rather instructions on how to use GitHub. That is not useful for people who asked Cursor about your library.
Slightly off topic: An alternative approach to making sites more accessible to LLMs would be to revive the original interpretation of REST (markup with affordances for available actions).
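In that original reading of REST, each response carries links describing the actions currently available, so a client (human tool or LLM agent) can discover what it may do next without out-of-band docs. A minimal sketch, with entirely hypothetical resource and link names:

```python
# Hypothetical ticket resource in a HAL-ish style: the "_links" object
# advertises the affordances (actions) valid in the resource's current state.
ticket = {
    "id": "t-123",
    "status": "unused",
    "_links": {
        "self":     {"href": "/tickets/t-123"},
        "activate": {"href": "/tickets/t-123/activate", "method": "POST"},
        "refund":   {"href": "/tickets/t-123/refund",   "method": "POST"},
    },
}

def available_actions(resource):
    """List the affordances a client can follow from this resource,
    excluding the self link."""
    return sorted(k for k in resource.get("_links", {}) if k != "self")

print(available_actions(ticket))  # → ['activate', 'refund']
```

Once the ticket is used, the server would simply stop including "activate" and "refund", and any client reading the links would know those actions are gone — no scraping or guessing required.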
Try checking for software updates? My RM2 got infinite canvas in an update some time ago.
That said, I don't really use that feature and find it annoying when I accidentally move the canvas instead of turning to the next page.
This whole paper tablet space looks like a place full of tradeoffs where it's hard to please everyone... IMHO, overall, reMarkable is doing that balancing act quite well.
Yes, in notes themselves there's an infinite canvas towards the bottom. I meant a canvas around a PDF to take notes alongside the paper - which is 50% of my use case.
I don't feel it balances the features well; just today they released another update for their keyboard support. To me, that looks like the wrong priorities for a device designed for handwriting.