
> `addSubview`

That's been my revised working assumption, but it's almost completely unclear from your writings, and also comes from a fairly deep (but understandable!) misunderstanding of GUI frameworks.

addSubview is not a defining feature of most GUI frameworks. drawRect is.

addSubview is used once during construction, and then you're done. And a lot (if not most) of the time it is hidden, because you just load a GUI definition. For example, in Cocoa you define your GUI in Interface Builder. IB saves a nib/xib, which is a serialised object graph with parameters. You then load that nib and voilà, there's your GUI!
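In code, that load is a single call. A minimal sketch in Swift, assuming a nib named "MainWindow" in the main bundle (the name is made up):

    import AppKit

    // Instantiate the archived object graph (windows, views, outlets,
    // actions) from the nib. No addSubview in application code.
    var topLevelObjects: NSArray?
    let loaded = Bundle.main.loadNibNamed("MainWindow",
                                          owner: nil,
                                          topLevelObjects: &topLevelObjects)
    assert(loaded, "nib not found")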

The GUI is fully static/declarative. It reacts to dynamic content by drawing it in its "drawRect" method.
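A minimal sketch of that model in Swift (ClockView and timeString are invented; draw(_:) is the Swift spelling of drawRect:):

    import AppKit

    // A view in an otherwise static hierarchy that shows dynamic
    // content by drawing it, not by adding/removing subviews.
    final class ClockView: NSView {
        var timeString = "00:00" {
            didSet { needsDisplay = true }  // just request a redraw
        }

        override func draw(_ dirtyRect: NSRect) {
            (timeString as NSString).draw(
                at: NSPoint(x: 10, y: 10),
                withAttributes: [.font: NSFont.systemFont(ofSize: 24)])
        }
    }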

So where does the misunderstanding come from? It comes from recent changes in how developers (ab-)use GUI frameworks. I haven't fully grokked how this came about, but it seems to be due in part to (a) widget toolkits, (b) horrible drawing APIs, and (c) the iPhone with LayerKit/CoreAnimation.

The change is that, for example, when there is just some dynamic text to draw, people have been using text-label widgets instead of just drawing the damn text in drawRect. So suddenly you have people constructing, adding and removing views at runtime as a matter of course, rather than as something done in exceptional circumstances, which I gather is what you (rightly) object to.

However, this is not the "programming model" of GUI frameworks, it is an abuse of those GUI frameworks. Which is why your idea that the difference is about programming model, while understandable and somewhat defensible, is ultimately mistaken.

To put it succinctly, people are "drawing with widgets" instead of drawing in drawRect: like they're supposed to. So instead of drawRect, they are using addSubview to draw. However, widgets were not meant as a medium for drawing; they were meant as a mechanism for constructing the (mostly) static UI that then draws itself, including the dynamic parts. As this is not really the supported way, it is cumbersome and error-prone.

If you were to actually adapt the framework APIs to a "drawing with widgets" model, every view would have a "drawSubviews" method in addition to or in lieu of the "drawRect" method.
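Purely as a sketch of that hypothetical API, in Swift (nothing here is real AppKit beyond NSView and NSTextField):

    import AppKit

    // Hypothetical "drawing with widgets": rebuild the subview tree
    // each pass, the way drawRect rebuilds pixels.
    class WidgetDrawingView: NSView {
        var items: [String] = []

        func drawSubviews() {
            subviews.forEach { $0.removeFromSuperview() }
            for (i, item) in items.enumerated() {
                let label = NSTextField(labelWithString: item)
                label.frame.origin = NSPoint(x: 0, y: CGFloat(i) * 20)
                addSubview(label)
            }
        }
    }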

See also: UIs Are Not Pure Functions of the Model - React.js and Cocoa Side by Side

https://blog.metaobject.com/2018/12/uis-are-not-pure-functio...

There is also a deeper pattern there, which goes back all the way to the beginnings of computer graphics: the back-and-forth between "object-oriented" graphics (see GKS[1], PHIGS[2]) and immediate-mode graphics. (Note that this is not OO in the computer language sense, but in the computer graphics sense).

Everybody, it seems, has this idea that it would be nice to have a declarative tree of your graphics (it also arose historically from display lists for vector graphics). It would also be nice to have a reusable library/API for this. Enter GKS/PHIGS. But then it turns out that things don't quite match up, so you end up having to express your domain graphics as complex (sub-)trees. So you need to imperatively edit the shape tree/database. Which is complex, painful and error-prone. In the end, it becomes easier to just drop the entire shape database and re-create it from scratch every time. At which point the whole idea of a shape database becomes somewhat moot.

Enter immediate mode graphics. See OpenGL, Postscript, Quartz, etc.

However, drawing everything procedurally is cumbersome. So you add back structure and composable elements. So let's have them be domain-/application-specific, draw themselves in immediate mode and also handle interaction. We might call them "Views". What's neat about Views is that they straddle the "object-oriented" and "immediate" graphics divide, and you can decide yourself where you want to be. You can shift more to the object/widget side and use object-composition, or you can shift towards the immediate-mode side and use procedural drawing. Best of both worlds, at least in theory.

And then things happen that make people shift towards the object-graphics side (sucky graphics APIs, phone UIs etc.) and lo-and-behold, we have the same problems that we used to have with GKS/PHIGS! And then we propose the same solution (modulo environmental and accidental differences).

And round and round we go.

[1] https://en.wikipedia.org/wiki/Graphical_Kernel_System

[2] https://en.wikipedia.org/wiki/PHIGS




Ah ok I see what you mean. Well yeah, I’m talking about how it’s being used in practice.

drawRect is a primitive, but once you start dealing with layout and text measurement I think it can get hairy, and at that point you might end up with imperative subview soup again. Somehow people using React don't fall into that.

drawRect is low level because it only specifies rendering. But UIs usually care about local state and when to destroy or create it. Especially in lists. That's something I mention in the post, which React has a solution for, but I don't think drawRect is sufficient for expressing this generally. See the ShoppingList reordering example.


> being used in practice.

And that's great. But then please argue/describe from practice, and not from some largely mythical fundamental differences in programming model that only confuse. That would be really helpful, thanks.

> layout and text measurement

Yup, as I mentioned, the text APIs in Cocoa/CocoaTouch/Quartz are so rancid that just slapping on a TextLabel is incredibly more convenient, despite the fact that you get horrible messy subview soup (I like that term, can I borrow it?).

The solution would probably be better text APIs. Which are actually not that hard to build.

https://github.com/mpw/DrawingContext

(Alas, the text stuff in particular is only partially complete; I had more important projects. The very rough idea is to be able to essentially printf() into a view.)

> drawRect [..] only specifies rendering.

Yep.

> But UIs usually care about local state

Right, that's why drawRect is embedded into these things called Views, which have local state.

> Especially in lists.

Right. And you have easy-to-use Views like NSTableView that handle lists beautifully, without you having to worry about the active set of subviews. Essentially you give it a data source, and it will ask that data source for what it needs, when it needs it. Meaning it can handle arbitrarily large/infinite lists without problems. There are layers of customisability: from just delivering data, via specifying cells to customise drawing/interaction, all the way to having the NSTableView use arbitrary subviews to represent rows/columns.

https://developer.apple.com/documentation/appkit/nstableview...
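The minimal data source is just two methods. A sketch in Swift (NamesDataSource and names are invented):

    import AppKit

    // NSTableView pulls rows on demand from its data source; the
    // programmer never manages the set of visible subviews.
    final class NamesDataSource: NSObject, NSTableViewDataSource {
        var names: [String] = []

        func numberOfRows(in tableView: NSTableView) -> Int {
            names.count
        }

        func tableView(_ tableView: NSTableView,
                       objectValueFor tableColumn: NSTableColumn?,
                       row: Int) -> Any? {
            names[row]  // asked only for rows that are actually needed
        }
    }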

No new programming model required, just a view within the current programming model.

And of course if you create your own views, they handle both the drawing (drawRect) and the interaction.

https://developer.apple.com/documentation/appkit/nsview?lang...


>So where does the misunderstanding come from? It comes from recent changes in how developers (ab-)use GUI frameworks. I haven't fully grokked how this came about, but it seems to be due in part to (a) widget toolkits, (b) horrible drawing APIs, and (c) the iPhone with LayerKit/CoreAnimation. The change is that, for example, when there is just some dynamic text to draw, people have been using text-label widgets instead of just drawing the damn text in drawRect. So suddenly you have people constructing, adding and removing views at runtime as a matter of course, rather than as something done in exceptional circumstances, which I gather is what you (rightly) object to. However, this is not the "programming model" of GUI frameworks, it is an abuse of those GUI frameworks.

Not sure what period your "recent" (in "recent changes") refers to.

That's how GUI frameworks have worked at least since they've provided a widget hierarchy (with labels, containers, buttons, and so on). Delphi was like that, Swing was like that, Qt was like that, GTK was like that, the NeXT GUI libs were like that, Cocoa was like that, the old Mac OS libs up to Mac OS 8 were like that, and so on. Heck, even Athena was like that.

The GUI programmers and frameworks that have been "drawing the damn text in drawRect" are in the absolute minority, not since CoreAnimation, but since forever.

In fact, you even mention: "I haven't fully grokked how this came about, but it seems to be due in part to (a) widget toolkits, (b) horrible drawing APIs, and (c) the iPhone with LayerKit/CoreAnimation."

The first of those (widget toolkits) is 30+ years old, and has been synonymous with GUI development since forever, at least in the desktop application space.

>However, this is not the "programming model" of GUI frameworks, it is an abuse of those GUI frameworks.

Yeah, not really. Not only is this the prevalent, common-sense understanding of "GUI framework" of the last 3+ decades, but merely having drawRect and co (without a widget set) wouldn't even qualify as a "GUI framework" at all; people call that "a graphics library".

"drawRect" has not been the main GUI programming tool since forever, except when a developer wanted to make their own custom widgets. Whole GUI apps never once call drawRect (or its equivalent in their lib) directly.


Thanks for some good points; as I wrote before, I haven't fully grokked this yet.

However, I am not sure where you got the idea that I denied the existence or use of widget toolkits, since they are central to this whole development. I just don't buy your claim that the existence of widgets meant that nobody ever implemented drawRect:. That's a false dichotomy.

For example, I just googled "open source Mac app", then went to the source for the first entry, Adium (https://github.com/adium/adium/tree/master/Source), and the first 3 implementation files I looked at all had an implementation of drawRect:. The second entry is Disk Inventory X. It includes a TreeMap framework of 5 classes, one of which is a view with a drawRect:.

In general, my experience is that you typically use a custom view for whatever your app is centrally about. For example, a drawing app has a custom Drawing view. A word processor has a view for the text, a spreadsheet for the central table. At the very least. Around the central view you would arrange tools and other chrome built out of the widgets of the toolkit.

The widgets are, however, not really part of the MVC pattern, they are tools you use to interact with the model, they rarely reflect the model itself (except maybe for being enabled/disabled).

In terms of horrible text drawing API, I don't know about other platforms, but for NeXTstep/Cocoa that happened with the transition away from DisplayPostscript. With DPS, text drawing was trivial and malleable. With the OSX/Quartz transition, text-drawing was delegated to ATS, with some of the most arcane, inconsistent and difficult to use APIs I've had the displeasure to use. And alas these were not built on top of the much saner Quartz APIs, which were bottom-most for everything else, but instead the Quartz text APIs were trivial and very limited convenience wrappers for underlying ATS calls. Sigh.

(And I realise that this is quite a while ago. (a) Yes, I'm old (b) I don't think the text APIs becoming horrible was a trigger, they already were when things changed)

The type of app that only used the widget set definitely also existed: these were the business/database apps or the like that just interfaced with textual/numerical data/tables. Those you often could build using just the widgets as-is, without ever creating a custom view. Apple concentrated a lot on those use-cases in their public communication, because NeXT's focus had been business apps and they made for great "look ma, no code!" demos.

Of course, these widgets aren't really connected to a wider model, they contain their own little model and MVC triad. In the case of Apple, they tried to fix that with bindings[1], but that was only a partial success. So the ViewControllers (which already existed, I think) jumped in and the "update view" part of MVC became "set the content object of this widget". This can actually work fairly well, if you really treat the ViewController as a View (this is entirely permissible, MVC describes roles, not objects) and really, really only do that update when you get a notification that the model has changed. Alas, that isn't enforced or even supported much, so you get arbitrary cross-view modification. Sigh. Slightly better support would probably help here, for example Notification Protocols[2].
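A sketch of that discipline in Swift (the notification name, outlet and payload are all invented): the ViewController touches its widget only when the model posts a change.

    import AppKit

    extension Notification.Name {
        static let modelDidChange = Notification.Name("ModelDidChange")
    }

    final class CounterViewController: NSViewController {
        @IBOutlet var countField: NSTextField!
        private var token: NSObjectProtocol?

        override func viewDidLoad() {
            super.viewDidLoad()
            // Update the widget only in response to a model
            // notification; never poke at other views directly.
            token = NotificationCenter.default.addObserver(
                forName: .modelDidChange, object: nil, queue: .main
            ) { [weak self] note in
                guard let count = note.userInfo?["count"] as? Int else { return }
                self?.countField.integerValue = count
            }
        }
    }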

So that leaves addSubview, adding and removing subviews to deal with dynamic data. I'd still maintain that this is a fairly recent development as a major way of achieving UI dynamism, and that its rise roughly coincides with the rise of the iPhone. I also think that, even though this technique is now widely used, the basic widget sets aren't really well equipped to deal with that way of working, or to help developers not make a hash of things. Because that's not how they were designed. They were designed to deal with fairly static hierarchies of views that manifest themselves, and any dynamic content, on the display using drawRect:.

[1] https://blog.metaobject.com/2014/03/the-siren-call-of-kvo-an...

[2] https://blog.metaobject.com/2018/04/notification-protocols.h...


Just checked how many implementations of drawRect there are in different apps:

Pages: 63

Keynote: 81

Numbers: 61


Compared to how many uses of ordinary widgets though?

And are those uses because that's how they draw their overall UI -- e.g. do they use drawRect as the main paradigm, or do they merely create new widget looks and behaviors (that they then treat the same as Cocoa ready-made widgets, append to parent, etc)?

E.g. do they draw the UI, or some large part of it, that way? Or is drawRect just used to create some custom-looking derivative of Button, Label and so on?


There were actually 130 (I had forgotten CALayer's drawInContext:). Of these, 100 were either direct NSView subclasses or CALayer subclasses. Of the remaining 30, a quick scan indicates around 20 are direct or indirect subclasses of NSControl.


The problem with drawRect is that texture upload is too slow. So instead of redrawing your text each frame and uploading the resulting bitmap to the GPU, you upload it once and then only change its shader's uniform parameters, which are cheap to vary (e.g. position, alpha, etc.). The text-label object is nothing but a handle to this pre-rendered texture through which we can vary the shader parameters.
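Core Animation makes that trade explicit. A sketch in Swift: the text is rasterised once into the layer's backing texture; after that, position and opacity changes only adjust compositing parameters.

    import QuartzCore

    // Rasterised once into a GPU-backed texture:
    let label = CATextLayer()
    label.string = "Hello"
    label.frame = CGRect(x: 0, y: 0, width: 120, height: 24)

    // Cheap per-frame changes: only compositing parameters vary;
    // the text bitmap is never redrawn.
    label.position = CGPoint(x: 200, y: 300)
    label.opacity = 0.5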


> The problem with drawRect is that texture upload is too slow.

You have a font-render shader that renders into the texture; what's there to upload?


Yes and no.

You are right in that changes to drawing induced by the original iPhone are responsible for at least part of the widgetization of CocoaTouch. The first iPhone(s) had a really, really slow CPU but somewhat decent GPU, so moving more rendering functions to the GPU made sense.

Originally, Cocoa as well as its NeXTstep predecessor did essentially all drawing on the CPU (some blitting on the NeXTdimension notwithstanding). And this was usually fast enough. At some point, window compositing was moved to the GPU (Quartz Compositor). With the phone, animations were both wanted for "haptics" and needed in order to cover for the slowness of the device (distract the monkey by animating things into place while we catch up... *g*), and the CPU was also rather slow.

So instead of just compositing the contents of windows, CocoaTouch (via CoreAnimation) now could and would also composite the contents of views. But that's somewhat in conflict with the drawing model, and the conflict was never fully resolved.

> texture upload is too slow

First, you don't have to have separate textures for every bit of text. You can also just draw the text into a bigger view.

> redrawing your text each frame

Second, Cocoa does not redraw the entire screen each time, and does not have to redraw/re-upload the texture each time (if it is using textures). It keeps track of damaged regions quite meticulously and only draws the parts that have changed, down to partial-view precision (if the views co-operate). Views that intersect the damage get their drawRect: method invoked, and that method gets a damage list so it can also optimise its drawing.
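The co-operation looks like this in Swift (CounterView and its geometry are invented): invalidate only the rect that changed, and repaint only the dirty rect you are handed.

    import AppKit

    final class CounterView: NSView {
        var counter = 0 {
            didSet {
                // Damage only the region that actually changed.
                setNeedsDisplay(NSRect(x: 0, y: 0, width: 80, height: 20))
            }
        }

        override func draw(_ dirtyRect: NSRect) {
            // AppKit asks only for the damaged area; restrict work to it.
            NSColor.windowBackgroundColor.setFill()
            dirtyRect.fill()
            ("\(counter)" as NSString).draw(
                at: NSPoint(x: 4, y: 2),
                withAttributes: [.font: NSFont.systemFont(ofSize: 14)])
        }
    }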

Now if you actually have a texture living on the GPU and you are incapable of drawing into that texture, then you must replace the texture wholesale, and the rectangle/view-based optimisations won't work. However, I know that they do work, at least to some extent, because we were able to optimise animations in an iOS app by switching from layer-based drawing to a view with drawRect: and carefully computing and honouring the damage rect. It went from using 100% CPU for 2-6 fps to 2% CPU at 60 fps. (Discussed in more detail, with other examples, in my book: iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift, https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...)

Third, if your text does change, you have to redraw everything from scratch anyway.

Fourth, while the original phone was too slow for this and lots of other things, modern phones and computers are easily capable of doing that sort of drawing. The performance can sometimes be better using a pure texture approach and sometimes it is (much) better using a more drawing-centred approach (see above).



