WebGL allows you to build completely custom UIs, and these are well suited for 3D apps (which typically use OpenGL on the desktop, so transferring knowledge is trivial).

However, between custom OpenGL UIs and HTML-based UIs, there is an entire spectrum of native apps (mobile and desktop) that make use of platform conventions and standard functionality. This space is difficult to cover with web technologies because you have to simulate the look & feel and make the site feel like an app.

Many people have tried to build "just like native" toolkits from WebGL and DOM, but it's pretty tough. WebGL is very low level for this, and DOM is much too high level for many things.

Now that you mention it, scrapping the DOM and using my own GUI has also crossed my mind. It has lots of advantages, but I'm worried it would slow down development. Indeed, the whole timeline editor is already a canvas. It's worth considering.

Doing your own GUI affords a lot of advantages in terms of how you structure your code. As soon as you decide to render every frame, you can take advantage of immediate mode techniques, which work very well in editors IMO (unfortunately, many of the examples of using and implementing these online are not very good and imply restrictions that are not inherent to the technique...).
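To make the immediate mode idea concrete, here is a minimal sketch of the core of an IM button. All names (`UIState`, `button`, `activeId`) are hypothetical, not from any particular library: the widget is just a function you call every frame, and the only retained state is the input snapshot plus an id for the widget currently being pressed.

```typescript
// Per-frame input snapshot plus the one piece of retained interaction state.
interface UIState {
  mouseX: number;
  mouseY: number;
  mouseDown: boolean;
  activeId: string | null; // widget the press started on, if any
}

// Immediate-mode button: call once per frame; returns true only on the
// frame where the mouse is released inside the rect after pressing it.
function button(ui: UIState, id: string,
                x: number, y: number, w: number, h: number): boolean {
  const hover = ui.mouseX >= x && ui.mouseX < x + w &&
                ui.mouseY >= y && ui.mouseY < y + h;
  let clicked = false;
  if (ui.activeId === id) {
    if (!ui.mouseDown) {        // released this frame
      clicked = hover;          // click counts only if released inside
      ui.activeId = null;
    }
  } else if (hover && ui.mouseDown && ui.activeId === null) {
    ui.activeId = id;           // press started on this widget
  }
  // Drawing would happen here every frame (e.g. a canvas 2D fillRect),
  // which is why no per-widget objects need to be retained.
  return clicked;
}
```

The appeal for editors is that widget existence is just control flow: an `if` around the `button(...)` call shows or hides it, with no create/destroy bookkeeping.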

That said, you seem to be far enough along that redoing the GUI would probably not be a good idea.

Also the DOM is a fairly okay retained mode UI, so if that is your preference I would just use it.

But I have stumbled into so many problems thanks to DOM inconsistencies. I've spent more time on DOM problems than on WebGL problems.

As a user, I have to say it doesn't matter to me whether your app looks native or follows the HIG. That's just a convention used to make apps behave predictably and smoothly. If you can accomplish that with your own sensibilities, that works too.

The common user interface/HIG idea makes less and less sense as software begins to do more and more things.

IRL, we don't expect toilets, hammers, bulldozers, shotguns, and surgical scalpels to have the same user interface features, and it would suck mightily if they did. Unfortunately, as those things get computerized we wind up getting the same crappy touch-screen menus and whatnot bolted on to them (imagine what using a hammer with a touch screen interface would be like).

I don't know how it could be brought about, but we definitely need some radical thinking in UI design. 3D libraries seem to at least provide the tools to make some new tools.

Those real-life tools have a far simpler interface than, say, a mail client. As software becomes more complex, HIGs increase in importance, because there is more to learn and a HIG enables us to transfer knowledge across apps.

The actions you perform with a scalpel (or even a bulldozer) are far more complex than sending email, so arguing that they have a simpler interface doesn't really support your case. Quite the opposite, in fact. I would argue that they have simpler interfaces because those interfaces are actually optimized for the task, rather than being some jack-of-all-trades compromise.

But by having a shared convention, we avoid having to relearn a new one for every app.

You really wouldn't mind if the app used control-p for delete, and control-z to quit?
