
Unlike Windows, Apple and Android have managed to publish modern UI libraries for their own platforms - but in both cases, this involved migrating the whole platform to a new programming language (Swift and Kotlin, respectively). If I remember right, C# also has a few features which were only added to make Windows Forms and WPF more ergonomic.

I'm starting to suspect that general-purpose languages just aren't flexible enough to keep up with changing fashions in UI development. Whenever somebody comes up with a new UI paradigm, you're not going to be able to take full advantage of it until somebody designs a new programming language with that paradigm in mind.

If so, this might explain why the Windows team keeps pushing new desktop UI libraries, only to abandon them within a few years - they just don't have the willingness or resources to migrate away from C# and C++, and so the quality of their UI libraries is stuck in 2006.


The issue is the React paradigm, which is a bit questionable to begin with. It's a sort of fake OOP in a trenchcoat pretending to be something else, whilst abusing the language in ways that require compiler plugins. My experiences with Compose and SwiftUI have been ... not that great. I kinda wish people had just kept investing in their OOP toolkits.


You're right that reactive UI is a poor fit for most languages (especially JavaScript!), but I think the problem is more general than that.

Good UI architecture needs some convenient and efficient way to propagate state changes between different parts of the UI framework [0]. This requirement sits in an awkward place, halfway between imperative programming and functional programming. It just isn't in the day-to-day vocabulary of any mainstream language, not even modern imperative languages which have a bit of functional programming mixed in.

I don't think OOP is any better at fulfilling this requirement. Being able to offload half of your program into a visual editor is nice, but it's cold comfort if the other half of your program ends up being a tangled mess of callbacks and data binding.

[0]: https://raphlinus.github.io/ui/druid/2019/11/22/reactive-ui....


Yes, you're absolutely right that these language dialects keep emerging because regular languages don't have quite the right features for reactive computations. Still, OOP was originally designed for GUIs and is a great fit for them; it's easier to build functional stuff on top of OOP than the other way around (apparently, judging from my experience of having used both).

The closest thing I've found to what I mean is JavaFX, which has a whole observables framework, with ReactFX building a lot of stuff on top of it. With a small change that I prototyped in the past, you can do what React Compiler is trying to do: run code, record which properties are read, then register dependencies and re-run on change. It's quite a natural fit.
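
Something like this sketch, in TypeScript for brevity (the names are made up - this isn't the actual JavaFX/ReactFX API): while a computation runs, every observable it reads registers it as a dependent, and any later write re-runs it.

    type Effect = () => void;
    let activeEffect: Effect | null = null;

    class Observable<T> {
      private dependents = new Set<Effect>();
      constructor(private value: T) {}

      get(): T {
        // Record the dependency: the currently-running effect read us.
        if (activeEffect !== null) this.dependents.add(activeEffect);
        return this.value;
      }

      set(next: T): void {
        this.value = next;
        // Re-run every effect that previously read this observable.
        for (const effect of [...this.dependents]) effect();
      }
    }

    // Run `fn` once, recording what it reads; re-run it on any change.
    function autorun(fn: () => void): void {
      const effect = () => {
        const previous = activeEffect;
        activeEffect = effect;
        try { fn(); } finally { activeEffect = previous; }
      };
      effect();
    }

    // Usage: the "label" re-renders only when `name` changes.
    const name = new Observable("world");
    autorun(() => console.log(`Hello, ${name.get()}!`));
    name.set("HN"); // prints "Hello, HN!"

One thing the sketch glosses over: dependencies are never un-registered, so a real implementation would clear and re-collect them on each re-run.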


We already have a solution for most UI, presented in this paper [1]. Functional programming lets you represent the solution in a much nicer way after you've hidden the OOP/imperative machinery away. But it's a complete package where the declarative part is only the shell. The UI part of any application should be considered an external module (like the data access layer), and the code architecture should reflect this. To put it in an extreme way: if you can't create a telnet interface to your GUI software, your interface is already too coupled to the rest of the code.

[1]: https://dl.acm.org/doi/10.1145/62402.62404


Kotlin wasn’t designed with Compose in mind. They shoehorned it in via a compiler plugin.


The explanation is easy: the usual WinDev versus DevDiv politics.


I also admire the simplicity of this approach, but it has several downsides:

- Code-based UIs tend to work better with existing tooling, like comments and source control. You can freely customise your development environment, rather than being at the mercy of a single GUI app.

- Most serious UIs need code-like features, such as "for-each" to render a list of objects, or "if" to reveal a form when a checkbox is ticked (see the sketch after this list). The easiest way to access code-like features is to write your UIs in code.

- You'll need to write some backing code in any case. Defining the UI tree and its "code-behind" in the same file could be considered more DRY.

- Live preview (or alternatively, hot reloading) will give you very quick iteration when writing a UI in code. It's not quite as good as drag-to-resize, but it's close.

- Automatic layout (which is non-optional nowadays) isn't necessarily a great fit for WYSIWYG editing.

- As React has demonstrated, the ability to quickly throw together custom components is extremely useful. Visual editors tend to make it more complicated and bureaucratic to define custom components.
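
For instance, the "for-each" and "if" points above might look like this in React-style TypeScript (a sketch - the `User` type, `UserForm` component and props are all made up):

    import React from "react";

    // Hypothetical types/components, just to keep the sketch self-contained.
    type User = { id: number; name: string };
    const UserForm = () => <form><input placeholder="Name" /></form>;

    function UserPanel(props: { users: User[]; showForm: boolean }) {
      return (
        <div>
          {/* "for-each": render a list of objects */}
          <ul>
            {props.users.map((user) => (
              <li key={user.id}>{user.name}</li>
            ))}
          </ul>
          {/* "if": reveal the form only while the checkbox is ticked */}
          {props.showForm && <UserForm />}
        </div>
      );
    }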


In addition:

- WYSIWYG editors are bad about making small unintended changes that can easily slip by undetected (Interface Builder in Xcode for example often changes XIBs/storyboards just by viewing them)

- WYSIWYG editors bury bits of configuration in inspectors that are only visible in certain conditions, e.g. a control being selected, making them less evident and more difficult to find

There are circumstances where I think they still work alright — for instance, I still enjoy building Mac apps with XIBs in Cocoa, but there's a reason for that: traditional desktop UI has much less need for flexibility and, generally speaking, far fewer moving parts, since it doesn't have to hide things as a result of limited screen real estate. Additionally, these apps will only ever run on Macs, which further reduces the need for flexibility/adaptivity.

For mobile and multiplatform on the other hand, I strongly prefer code. It just works better.


I finally grokked `view = fn(state)` once I understood that the `state` argument encompasses all possible inputs. Your global store, each component's internal state, the text selection, the mouse cursor position, the user's cookies, the progress of an animation - everything listed in the original blog post is just state, and the `fn()` doesn't necessarily need to care where it comes from.
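
As a sketch (every field name here is invented), the point is that `fn()` takes all of it as one argument and doesn't care which subsystem each piece came from:

    // Global store, text selection, cursor position, animation progress -
    // it's all just fields on the one `state` argument.
    interface State {
      todos: string[];
      selection: { start: number; end: number };
      mouse: { x: number; y: number };
      animation: number; // progress, 0..1
    }

    // view = fn(state): a pure function from the whole state to markup.
    function view(state: State): string {
      const items = state.todos.map((t) => `<li>${t}</li>`).join("");
      return `<ul style="opacity:${state.animation}">${items}</ul>
              <p>cursor at ${state.mouse.x},${state.mouse.y};
                 selected chars ${state.selection.start}-${state.selection.end}</p>`;
    }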

The reason this is counterintuitive is that UI frameworks insulate you from some types of state. When you write your own `fn()`, it's only receiving part of the state, and it's only defining part of the view. In the browser, animations are a good example of this; other examples include keyboard focus, the size of the browser window, and most of the behaviour of native form controls.


The purpose of `view = fn(state)` is to protect you from O(n*m) complexity scaling if you handle each event in isolation.

For any given part of your UI, you'll have n events to handle ("UI started up for the first time", "x changed", "y changed", "checkbox toggled", "undo", "redo", "file loaded"), and m invariants to uphold ("while the checkbox is checked, the user cannot edit y", "while x is less than y, the UI should display a warning", "this label displays the current value of y as a localised string"). If you try to manually maintain each invariant when handling each event, you'll find it works for simple cases, but it falls apart for larger values of n and m.
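
A rough sketch of what this buys you (the element ids and the particular invariants are invented): each of the m invariants is written exactly once, in a single render function, and each of the n event handlers shrinks to "update the state, then re-render".

    interface FormState { x: number; y: number; locked: boolean }

    const yInput = document.querySelector<HTMLInputElement>("#y")!;
    const warning = document.querySelector<HTMLElement>("#warning")!;
    const yLabel = document.querySelector<HTMLElement>("#y-label")!;

    // Each invariant appears exactly once.
    function render(s: FormState): void {
      yInput.disabled = s.locked;                // checkbox locks y
      warning.hidden = !(s.x < s.y);             // warn while x < y
      yLabel.textContent = s.y.toLocaleString(); // localised label
    }

    // Each event handler just updates state and re-renders - no handler
    // needs to know about any individual invariant.
    function onCheckboxToggled(s: FormState, checked: boolean): void {
      s.locked = checked;
      render(s);
    }
    function onUndo(s: FormState, previous: FormState): void {
      Object.assign(s, previous);
      render(s);
    }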


Sensitivity and specificity [0] seem relevant here.

Every interview stage which is designed to reject bad candidates (true negatives) will also accidentally reject some good candidates (false negatives). The more eagerly you filter out bad candidates, the more you'll also filter out good candidates; sensitivity decreases as specificity increases.

This means that building more pass/fail tests into your interview pipeline may produce worse results - even if you have an enormous number of applicants, all with infinite time and patience! At each interview stage, there's a risk of accidentally rejecting the best remaining applicant, the one who would have outperformed all others if they were hired. For example, the article mentions "values mismatch" as a particularly good reason to hard-reject a candidate, but several of the company's values seem like they ought to be optional. Surely some truly excellent software developers lack courage, humour, and thriftiness? Could those qualities just as well be taught on-the-job? Could a non-courageous hire have useful things to teach you about caution?
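
A toy calculation makes the compounding concrete (the 10% false-negative rate per stage is invented):

    // If each pass/fail stage wrongly rejects a good candidate 10% of
    // the time, the chance a good candidate survives k stages is 0.9^k.
    const falseNegativeRate = 0.1;
    for (const k of [1, 3, 5, 7]) {
      const survives = (1 - falseNegativeRate) ** k;
      console.log(`${k} stages: ${(survives * 100).toFixed(0)}% survive`);
    }
    // 1 stage: 90%, 3 stages: 73%, 5 stages: 59%, 7 stages: 48%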

Good hiring processes should draw a clear distinction between "must-have" qualities and "nice-to-have" qualities. Beyond a certain point, the problem stops being "filter out bad candidates", and starts being "out of several great candidates, make sure we choose the best one"; to solve those two problems, different tools are required.

[0] https://en.wikipedia.org/wiki/Sensitivity_and_specificity


Can you recommend any games in the first category?

I surveyed the indie-game landscape a couple of years back, specifically looking for games in your first category, but struggled to find any recent examples. 2D transforms, alpha-blending and lighting effects seem to be the standard now.


Empire Strikes Back remake for the C64: https://megastyle.itch.io/esb-by-megastyle


Celeste uses high resolution dialogue text, but its gameplay is strict about the pixel grid.

Loop Hero is strict about its pixel art and uses a fixed 16-colour palette that is reminiscent of the Commodore 64.

Baba Is You has visuals that you could basically render on a ZX Spectrum.

And in addition to doing pixel art well, these are all pretty great games.


There are a bunch of retro-inspired games (e.g. Shovel Knight) that do the retro-inspired thing. More interesting is a game like Celeste, which has a 3D level select, but during the actual gameplay the characters move in integer coordinates and the screen buffer is still fixed at a low resolution.


Shovel Knight actually has a high-resolution render buffer and "subpixel" movement, but nobody seems to notice, because the rules are mostly held in place.


Cave Story, Shovel Knight?


I share your preferences, but it's a choice between a rock and a hard place. Performing any of the tasks you listed by hand is extremely labour-intensive. Doing without those features is a harsh creative limitation - in my experience, the result isn't "this game feels retro", but instead "this game feels oddly flat, repetitive and static", which is much less forgivable.

I've worked on a game which aimed for authentically retro pixel art, but in hindsight I think it was a silly misallocation of resources. There's a reason it's so uncommon nowadays. If I were to try again, I'd at least include high-resolution 2D transforms in the engine, and possibly dynamic lighting too.

I briefly explored the possibility of developing a style-preserving realtime rotation algorithm for sprites, but it's a hard problem.


How exactly is it a hard problem? You don’t have to rotate the sprite by sining and cosining pixels by hand; you can just render at the same resolution as your sprites and it’ll look fine.

The problem is that most indie games render at 2x or 3x or more natively.


Nearest-neighbour, low-resolution rotation produces unpleasant results when the sprite contains fine details with a thickness of one or two pixels, e.g. outlines: https://imgur.com/a/UiAZ49z

Artistic preferences are subjective, but I expect most players would find that rotated sprite unappealing. This is especially true when the rotation is animated - the aliasing causes a distracting, staticky, random-noise effect which gives the impression that the sprite's fine details are chaotically changing. It's almost the aesthetic opposite of what most pixel artists are aiming for.
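
For the curious, this is the standard inverse-mapping form of nearest-neighbour rotation (a generic sketch, nothing engine-specific). Each destination pixel snaps to exactly one source pixel, which is why one-pixel details get dropped or doubled wherever the rotated grid happens to land:

    // Rotate a square RGBA sprite (one packed u32 per pixel) about its
    // centre, sampling nearest-neighbour.
    function rotateNearest(src: Uint32Array, size: number, angle: number): Uint32Array {
      const dst = new Uint32Array(size * size);
      const cos = Math.cos(angle), sin = Math.sin(angle);
      const c = (size - 1) / 2;
      for (let y = 0; y < size; y++) {
        for (let x = 0; x < size; x++) {
          // Inverse-rotate the destination pixel into source space...
          const sx = Math.round(cos * (x - c) + sin * (y - c) + c);
          const sy = Math.round(-sin * (x - c) + cos * (y - c) + c);
          // ...then snap to the nearest source pixel (transparent outside).
          dst[y * size + x] =
            (sx >= 0 && sx < size && sy >= 0 && sy < size) ? src[sy * size + sx] : 0;
        }
      }
      return dst;
    }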


> Nearest-neighbour, low-resolution rotation produces unpleasant results…

That’s the reason why some games use versions of the rotated sprites that have been tweaked by hand. See also tools like RotSprite (http://info.sonicretro.org/RotSprite).


Back in the day, I found Privateer's in-game rotations jarring and hideous. Modern games and remasters should at least use higher-resolution frame buffers, even if they downscale back to the original resolution - ideally using high color and alpha channels. And I think higher-resolution rendering is fine too.


Careful difficulty tuning is essential for fun gameplay. If an open-world game provides traditional exponential power and difficulty curves, and the game fails to guide you to the right content at the right time, you'll end up facing challenges which are too easy or too hard to be fun. Hollow Knight navigated this problem quite skillfully, but even in that game, I ended up effortlessly skimming through a few areas because I happened to leave them too late. What a waste of hours of content!

I'd like to see more open-world games experiment with a flat power curve: simply throw away the idea that the player character's numerical power should increase as the game goes on. If Super Mario World had increased the player's jump height or hit points based on the number of levels completed, it would have made the game much worse - so why do we tolerate the same thing from Castlevania and Final Fantasy?

Players might miss the (artificial) feeling of mastery, but I don't think it's worth the cost. Finding better ways to achieve that same feeling would be an interesting design challenge!


Zelda: OoT and MM did a great job of this. It was mostly about getting new equipment, not necessarily stronger equipment. Aside from health (and doubling the magic bar in OoT), there’s no real upgrading in them; grinding isn’t even an option.


Thanks for writing these updates, Raph - back in 2019, your early piet-gpu blog posts helped to reignite my interest in 2D rendering. I recently learned that you also invented Cairo's trapezoidal-coverage algorithm, which was my original introduction to anti-aliased 2D rendering ten years ago!

Over the last few weeks, this interest has finally spiralled into an attempt to develop a renderer of my own. I've come up with a design which seems promising, but I do have one question for which I haven't yet found a good answer:

It seems to be common wisdom that 16xMSAA has unacceptable quality for 2D rendering. In practice, though, I'm struggling to find any test-cases where I can differentiate 16xMSAA from 256-shade analytical AA with the naked eye. The only exception has been fine strokes (e.g. downscaled text, or the whiskers on this Ghostscript tiger [0]), which tend to "shimmer" or "sparkle" as they move. Does MSAA have any other common failure modes that I should be aware of?

[0]: https://threejs.org/examples/webgl_loader_svg.html


Thanks for your response! I write these blog posts so that you and others can understand the technology better.

I didn't invent the Cairo trapezoidal algorithm - I think that was mostly Carl Worth, maybe Keith Packard - but I certainly did do one of the earliest analytical area algorithms in the free software space, which I know inspired other work. It's been quite a ride; I went back and looked at my early work [1], and thought it was impressive that I could render the tiger in 1.5s. Now I'm seeing times in the 300µs range on discrete graphics cards.

16xMSAA is pretty good, and I would say is definitely good enough for most vector art. I consider it not quite good enough for text scaling, and anything less than 16x not acceptable. Desktop cards shouldn't have much trouble with it, but mobile devices might not be capable of it or able to deliver adequate performance.

Of course, the upside of MSAA is that it's much easier to solve conflation artifacts, so I imagine we'll end up with some variant of it in addition to the analytical AA, and the latter will continue to be used for almost all glyph rendering.

[1]: https://www.levien.com/svg/


And it's all MIDI, running on a 24-voice synthesiser with a 470-kilobyte sample bank.

Whenever I'm composing, and I'm tempted to be perfectionist about instrument fidelity or mixing quality, I listen to some Uematsu tracks then go back to my piano roll.

