PS4 UX is powered by WebGL (plus.google.com)
259 points by sciwiz on Nov 15, 2013 | 150 comments

There's a bit of irony in that the strengths of HTML5 shine through when confined to a single environment. If the best experience comes from targeting a single environment, then why go web at all? Further, HTML is considered a strong contender for true cross-platform app support, which is where it fails the hardest. In my experience, it's less effort to target native apps per platform than to try to use something like PhoneGap for apps of reasonable quality and complexity.

The single-environment HTML5 showcase is also often supported by another development team that is actively trying to support its specific use cases: Microsoft with IE/WinJS, Mozilla with Firefox/Firefox OS, Sony with a PS4-optimized WebGL implementation, etc.

When the big players hit roadblocks during the development of something as high profile as their UI for their next-gen console, the browser can be changed on-the-fly to overcome them. That option isn't available to the rest of the world, and "audio doesn't work like we need it to" being a solvable problem can certainly influence whether or not you believe HTML5 is a suitable app platform.

HTML5 and WebGL are two entirely different things. If anything this would be evidence that HTML5 isn't a suitable app platform, because even though they spin up a browser they decided to not use it and do everything with custom OpenGL ES instead.

> it's less effort to target native apps per platform than to try and use something like PhoneGap for apps of reasonable quality and complexity.

Sorry, are you saying that the effort for you to build a native Android app, a native iOS app, a native Windows Phone app, and a mobile web app combined adds up to more than the effort it takes to make a single web app and get it to run well on PhoneGap on multiple platforms?

That's a pretty bold statement. Are you on a big team? I would expect you'd need at least a half dozen developers to pull off four separate apps, while I can easily imagine a single person building a web app that could run well in PhoneGap.

I just spent six months building a quite complex single page web app with the hope that I could in the future package it in PhoneGap (or similar), so if what you're saying is true then I'm fucked, because it would take me easily a full year to build 3 native apps of the same complexity. Probably much longer.

In my experience, implementing a sufficiently complex app in PhoneGap targeting Android, Windows Phone, iPhone and iPad is absolutely more work with lower quality result than doing native versions of each app through Xamarin.

HTML apps have a tremendous hurdle to overcome. Customer expectations are different between something opened in a web browser and something downloaded from the app store. Apps that are amazing as web pages can be awful as apps. The amount of effort required to get a web application to perform in a way that matches user expectations for native UI cues, animations, quality and responsiveness across platforms is enormous and incredibly more difficult than your standard web app.

Of course, if you target just one platform, things get a lot better, but at that point, why not just use the native tooling?

This has been my experience, having tried both approaches over the course of several years, across team sizes ranging between one and ten developers.

Netflix has been using HTML5 for their mobile apps for years and the vast majority of people you will ask about it, developers or otherwise, probably don't realize it. It can be done, and is being done by many apps out there. And no, it does not take as much effort as writing N native apps for N platforms.


There's no argument that it can or can't be done, only that the time and cost to implement is equal to or greater than implementing N native apps. Wouldn't mind a contradictory citation for apps that have taken the plunge and found it to be worthwhile, and that the write-once-run-anywhere promise was fulfilled to their satisfaction.

You'll notice that the Windows Phone Netflix app certainly isn't using the same approach as the iOS and Android counterparts, and the iOS/Android versions use a non-trivial amount of native UI components for their respective platforms.

Ah, I see. All that matters for me is that it works and it looks good. I constantly modify my designs so that they are as easy as possible to implement, without losing any usability. There are no fancy transitions, no fancy widgets, just a handful of animations where they are necessary for comprehensibility. My focus as a designer is on what users are able to do with my app. All my design effort is in service to that.

I don't have the resources to care about nativeness. I think you're definitely right that if you want something that behaves like a native app you're much better off just building a native app. I suspect for most people that level of nativeness doesn't materially affect their business though.

One thing to keep in mind is that targeting a browser is not the same as targeting an API level. API levels guarantee consistent behaviors across different targets, but if you use the WebView, you'll encounter different behaviors across the same version of Android across different hardware vendors. Debugging PhoneGap apps isn't exactly painless, either.

It's a bit of magic to see a game work reliably across Android devices dating back to Froyo with excellent performance. Running your web app on the same hardware is not quite such a delight.

It's also interesting to note the bigger players that have recently made the transition from HTML5 to native apps. That said, the good enough bar may be good enough for many people.

> Sorry, are you saying that the effort for you to build...

Native development takes advantage of SDKs you won't find on your HTML platform (which is really a hybrid app, since you depend on the PhoneGap container, so it's still native somehow, you've just abstracted it away, so it's not an HTML app). So you are just building a packaged website.

> I just spent six months building a quite complex single page web app

Define complex.

> while I can easily imagine a single person building a web app that could run well in PhoneGap

Except your app won't have a native feel, won't follow the platform guidelines regarding UX since you are not using native widgets, and will feel like a second-rate UX on the platform you are developing for.

Let's not even talk about performance. Is your app multithreaded? Can it run in the background and do some stuff while the user is in another app? Can it display anything on the user's home screen? See, unless you go native, your capabilities are limited to the least common denominator instead of embracing the platform; maybe that's why you spent six months building it.

Of course, since PhoneGap is just a hybrid app container, someone had to write that Java and Objective-C code for you.

Why must it have been "someone" else who wrote it for me? I wrote it.

> Sorry, are you saying that the effort for you to build a native Android app, a native iOS app, a native Windows Phone app, and a mobile web app combined adds up to more than the effort it takes to make a single web app and get it to run well on PhoneGap on multiple platforms?

Actually, depending on the level of quality you need, yes. A very basic form-based app can be better served by a PhoneGap app, and be of better general quality.

But the cases where you won't be helped by a cross-platform solution are numerous. It's obvious if you heavily need performance, or rely on a lot of native APIs where you want very fine-grained control (you might not even want feature parity between platforms, if some platforms give better tools than others), or use native third-party libraries.

But even in cases where you require near pixel-perfect layouts for a wide range of devices, have unusual sets of rules to position or move your elements, or handle device rotation in non-standard ways, you might be better off doing native apps as many times as you need to. It seems extremely wasteful, but dealing with hacks on top of hacks in a single HTML+CSS+JS codebase relying on an extra abstraction layer enclosing wildly different platforms is a hell in itself.

edit: spelling

As a developer who has worked on and in the browser platform for a long time, I totally agree that the irony of stories like this is thick.

On the other hand, watching browser technology evolve from whatever IE6 was into the sole interface presentation layer for top tier operating systems has been fascinating. Browsers may have failed to bring us a "one true platform" by themselves, but the technologies involved are quickly reaching ubiquity.

On a long enough timeline, HTML wins and posts like mine look dated and short-sighted. There's no doubt that things are getting better, and I can't look at an in-browser demo of Epic Citadel and say that HTML isn't good enough for native apps. Once the tooling catches up, it'll be a moot point.

That said, things are also getting worse.

Fragmentation on the browser and device side is worse than ever. Implementation starts with caniuse.com, followed by ripping out the feature later when browsers support (or partially support) features differently across tablet, desktop and mobile form factors. Mobile browsers tend to be an absolute disaster, in which something as simple as a div with overflow-y requires Herculean effort to implement across devices spanning Android 2 to Windows Phone 8. That's something that should be brain-dead simple, and things don't get much better once you move beyond that.

One popular train of thought I've had enough of is "consistent implementation of X isn't a problem, just include library or polyfill Y and things will be great." That mindset tends to defer problems rather than solve them, until the solution eventually tips over and requires in-depth bug fixing or a from-scratch reimplementation.
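That mindset in concrete form, for anyone who hasn't lived it (a hypothetical shim; requestAnimationFrame is a real browser API, and setTimeout-based fallbacks like this one were a common way to polyfill it):

```javascript
// The "just include polyfill Y" pattern: detect a missing feature,
// patch in an approximation, and hope nobody notices the difference.
if (typeof globalThis.requestAnimationFrame !== 'function') {
  globalThis.requestAnimationFrame = function (callback) {
    // A ~16ms tick approximates 60fps, but it isn't synced to the
    // display refresh -- exactly the kind of gap that surfaces later
    // as jank and ends in in-depth bug fixing.
    return setTimeout(() => callback(Date.now()), 1000 / 60);
  };
}
```

It works until it doesn't, which is the point.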

Epic Citadel is not implemented using HTML - it's WebGL and JS. Furthermore it's asm.js. And asm.js is an output of C++ transpilation. Why on earth would you run it on tablets instead of just using existing C++ code base to get native performance on all devices?

And let's not forget it is based on the 2006 release of Unreal Engine.

So in 2013, WebGL offers 2006-level 3D GPU performance.

>So in 2013, WebGL offers 2006-level 3D GPU performance.

I hope you don't say that dismissively, because to me it's amazing already (given all the other conveniences it gives).

Did you mean feature?

Performance comes from the GPU device, not the interface.

An interface like GL just defines features, not how it should perform.

Didn't you hear that Javascript is now faster than C?

Nope. I heard LuaJIT does in some cases. So bring some numbers. I want to check it out.

In some sort of carefully "crafted" regex tests? Also, a language can't be faster; did you mean that a JIT is faster than native code?

> On a long enough timeline, HTML wins

As a developer never ever take any technology for granted, ever.

> I can't look at at an in-browser demo of Epic Citadel

I have a pretty decent laptop, and this demo won't run well in any browser. The performance just isn't there; it's nowhere near native even on desktop.

It depends on the drivers and such as well, though. I have a Sapphire HD5750, which was a low-end card two years ago, and the demo works fine, with the CPU barely hitting 30%.

You're missing the point.

The value judgement for using a web rendering engine on your _closed_ system barely touches on "cross platform" benefits. The primary concern is whether or not the rendering engine saves you time and effort.

People seem to forget that browser engines are fantastic at a wide variety of tasks:

* Best in class UI layouting.

* A great scripting engine.

* Very performant rendering pipelines (for what they're doing).

* Easy extension points when integrating with your native layer (e.g. exposing your own 'native' JS objects).

* Fantastic dev tools.

* Etc.

Browser engines are a _platform_. Why reinvent the wheel?

> Best in class UI layouting

XAML is best in class, HTML is legacy and illogical

> A great scripting engine

yet a crappy scripting language

If the sole purported benefit of HTML5/WebGL was cross platform then you may (debatable still) have a point. Cross-platform or no, these tools can be incredibly efficient for developing UI.

I disagree. I work at a bank on a complex mobile investment application and we're building it on top of PhoneGap. The barrier to entry is low; people can pick up HTML5/JS/CSS in no time. We were building it in Objective-C before, and while it wasn't a disaster, the people we hired ended up costing over 8 million more than our current team because it's very hard to find real specialists in Objective-C.

The beauty is we have one responsive code base that works PERFECTLY on over 20 devices. It's pretty awesome.

Are you trying to compare UX of banking app to game console? Seriously?

Am I supposed to sit here and make your point for you or are you actually bringing something to the table?

So they have a web browser for the sole purpose of setting up a GLES context for their UI? On a video game console? I guess that's what you do when you suddenly have 8GB of RAM, you piss it away on useless shit...

Why is this pissing away performance? A web browser isn't some luxury. It affords them flexibility with ease of development and being compatible with the entire www. I really don't see the old days of proprietary junky interfaces being superior, especially on hardware that is close to what a modern desktop PC is today.

Not to mention, my crappy phone runs Chrome. If it can do that, certainly a device that uses up to 100 watts and has 16x the RAM and 100x the graphics/CPU can too, without it being considered a drain.

I never said anything about performance? I specifically mentioned RAM. Although it is pissing away performance, because WebGL is slower than running GLES directly, thanks to the sanitization of GLES shaders that the browser does.

But this is still a proprietary, "junky" interface. If it's WebGL, it's still entirely custom, with entirely custom event handling, and entirely custom accessibility (or lack thereof), and entirely custom everything. If you are just using WebGL, which is what they said, then the browser is a massive chunk of dead weight that's doing jack shit. All the browser did is set up the EGL context, create a GLES context from that, and then hand it to you. This is basic hello-world stuff; that's the entirety of the browser's involvement in this scenario.

The G+ comments seem to indicate that they're using the browser's event handling in some way.

Plus the browser runtime still gives you things even if you're going the pure-GL UI route: networking, a JSON parser, a database (Local Storage), cookies and session management, an (arguably awful) security model... how is the browser "dead weight"?

All those things are easily provided by libraries for every language with every API imaginable. The strength of a browser is HTML & CSS.

And I called it dead weight because we're talking client-side offline UI here, so things like cookies and session management are irrelevant. But the dead weight is going to come from the bulk of the browser code that is HTML/CSS parsing, WebKit/Blink, the graphics model that's not being used, font support that's not being used, etc... The things you list are all really small pieces.

I strongly suspect the PS4's UI is neither truly client-side nor offline.

I don't disagree that there's a lot wasted but I'd probably have made the same trade-off given the circumstances.

The PS4's UI isn't particularly resource-constrained as far as I know and it's often easier to start with a familiar environment with everything and the kitchen sink than to start with nothing and hand-pick blocks along the way. Plus based on Sony's past in-house UI performance, the "glue libraries together from floor 0" approach wasn't working out for them.

So far the PS4's UI is getting good reviews and is generally described as "responsive" so it sounds like they didn't really lose - in a modern device where you'll end up porting a browser anyway I don't see why using it for UI is such a sin.

> I really don't see the old days of proprietary junky interfaces being superior, especially on hardware that is close to what a modern desktop PC is today.

As a cynic, I would say web apps are all about proprietary junky interfaces instead of using my platforms standard GUI. Reminds me a bit of the VB6 days where every program used to define its own true button color... (except the modern web apps look a lot nicer, of course).

Well, there's an actual user-accessible web browser as well.

JS+WebGL may make the store's development more agile compared to C/C++.

They could still use JS and not use WebGL. WebGL is just GLES in a web browser, slower than GLES running natively thanks to sandboxing overhead, for what it's worth. But I was more talking about the RAM overhead of using the browser just to create a window to render GLES into. You can easily achieve the same thing with EGL or SDL or whatever the PS4 natively provides to games. It's trivial to set up a GLES context.

PS4 doesn't have any form of OpenGL. In this case, WebGL is probably translated on the fly to calls to the proprietary low level graphics API.

Is there any more public information available on this? I know the PS3 used a highly custom API that was nontheless based on OpenGL ES (and added stuff that was missing like shaders), it seems like with more standard hardware and the more mature OpenGL ES standards available today, they'd stick with something more standards-compatible.

The PS3's API (LibGCM) had nothing to do with OpenGL. What Sony did however was provide an OpenGL ES 1.1+extensions implementation on top of the low level API. You can probably count on one hand the number of games that made use of it because it was too slow.

This is the typical information the FOSS community spreads about consoles.

The PS3 has two graphics APIs: OpenGL ES 1.x with Cg for shaders, and LibGCM.

Most game studios never used the OpenGL ES API.

The game industry does not care that much about open standards like the FOSS community does.

What counts is getting a game idea sponsored and into the hands of paying customers, regardless of any tool religion.

Porting to multiple platforms is an outsourcing business that is part of the industry since the early days.

Isn't that what game consoles are for? =) Pissing things away like your time, money & health?

If they're using nothing but a WebGL context filled with custom code for text, layout, etc. then you have a fair point. I doubt that's the case, though. It seems likely that they're taking advantage of HTML/CSS on top of whatever WebGL parts they're using, allowing system UI work to be done by more general web design people rather than involving lower-level developers just to implement a new kind of dialog.

How else are you going to get ads in to the UI easily?

But if you are already bundling the browser, is it really a penalty to utilize it, especially since both the ps and xbox reserve space for the UI thread?

WebGL might be less performant but honestly, for a menu system it's plenty fast, so what's the big deal here? Honest question.

You're assuming too many things.

You assume the renderer they're using uses a large chunk of RAM. You assume they use exactly the same browser for the home interface as the normal web browsing. You assume that they're keeping the home interface in memory (and wasting resources) when you're playing a game.

> You assume that they're keeping the home interface in memory (and wasting resources) when you're playing a game.

The console has 8GB of RAM. Games are only guaranteed 4.5GB of that. You do the math.

I'm going to go out on a limb here and guess that you're not a game console developer. It's really not as simple as you're trying to make it out to be.

Counter-argument: the PS4's desktop is the one place where performance least matters. But they would like to add new UX features there, over time, with less dev effort. Especially basic UI content. That can be fetched remotely or locally, relatively seamlessly. Which is what browsers already do well.

It bothers me how little people care about responsive UI. I haven't used a PS4 yet, so this is more just a general comment than a criticism of the decision - but the PS3 is a bit of a clunker in its cross-media bar. There are spinning placeholders for a couple of seconds every time the icons load. It doesn't exactly scream raw power - why aren't devs embarrassed by these things?

Have you read the Polygon PS4 review? The review absolutely gushes about how responsive the PS4 interface is, in general.


FWIW, Ars' follow-up review of the PS4 talks about how there's still loading indicators in the main menu UI's, which should not happen IMO when you're essentially talking about icons and you have 8GB of RAM to work with.


This is particularly impressive, seeing as it came from Polygon, who are notoriously pro-MS.

Aren't they funded by Microsoft, or something?

The issue with the PS3's UI is that it's seen a marked increase in complexity since the initial release, coupled with less resource access. As it stands, the PS3's XMB has access to about 80MB of RAM; the rest is reserved for games. Also, everything I've heard about the PS4 makes it sound like the UI is incredibly fast and responsive.

UI responsiveness in WebGL ought to be way better than anything you'll see with regular HTML, CSS and JS DOM mutation.

On a machine with the sort of power that the PS4 has, how does "running a browser" need to be equal to "non-responsive UI?"

It doesn't, but the OP said "the PS4's desktop is the one place where performance least matters"

He's right, though. On the PS4 desktop, you're not rendering tens of thousands of particles that interact with each other at 60fps. You're not going through 100 passes with different shaders making the perfect experience.

You're rendering maybe 1,000 particles or 50,000 polygons. Much lighter weight. It doesn't /need/ the performance that a game does.

When the PS3 isn't running anything, the icons don't spin or need to load in. When in a game or app, however, they do. Why? Because the majority of CPU and memory is reserved by the game; the icons will not be in memory; you have to async load them and the game might be using some streaming resources.

Wait, why doesn't performance matter on the desktop? What happens when you hit the PS4 button (or whatever it is) in a game to bring up the UI overlay? What happens when you're in the desktop itself, panning around? What happens when you install updates? Do we have to sit through flickering screens as update after update is installed?!

Interface performance is really fucking important, and I sure hope people like Sony and Microsoft care when it comes to their consoles.

People like EA and the notoriously terrible Battlefield* interfaces obviously do not care. (The original Battlefield 1942 would switch away from your native resolution, refresh rate, and color depth, and put the screen in 800x600, 16-bit, 60Hz every single time you hit the escape button.)

Doesn't matter != matters least.

The performance of the desktop on the PS3 and PS4 is less important to the gamer than the performance of the games, also known as the primary purpose of the device.

However, that doesn't mean that it is unimportant.

> the PS4's desktop is the one place where performance least matters

Performance in general still matters a lot on the PS desktop. Case in point, my PS3, which takes a godawful amount of time to load all the screen icons.

It's not like any of that is difficult to do when your ecosystem is closed and you are the only one in control of it regardless of how you decide to execute it.

And (memory) performance is of critical importance if the PS4 supports multitasking as the Xbox One does (I don't know if it does).

Well, PS4 has 3.5GB allocated for system memory. Plenty for running lots of things in the background.

Oh, when people raged their heads off because Xbox had 3 GB of memory dedicated to the system I kind of imagined that the PS4 didn't.

Platform has a web browser anyways. Web technologies are brilliant for laying out user interfaces.

Complain about it being useless shit.

Where have all of these cynical people come from today? Every discussion I've wandered into today has had a good showing of judgmental, know-better, seemingly angry people.

The difference is the UI runs continuously and is RAM that a game cannot use. The web browser, by contrast, does not. It stops running when you leave it. Or it could, anyway, if it wasn't also being used as a GL container for the UI.

They're all wondering the same thing that you are.

Both the Steam and Mac App Stores are HTML as well. It really makes sense given how easy it is to create fluid layouts in browsers, as well as how much easier it is for designers to prototype new changes.

I find both Steam and the Mac app store a huge pain in the ass to use in their native clients.

Fine in a real browser though.

For my experience with the Steam store at least, I think it's a difference in expectations. The Steam store isn't a single-page app, it requires constant page reloads and navigates like a traditional web site. That kind of click-wait-click-wait is jarring when you're moving between more fluid navigation in the native applications running on your desktop (or the Steam wrapper around the store to launch games, etc.).

To me that's why it's clunky in the application, but fine in a browser. It's par for the course in the HTML/native hybrid application world.

There is definitely a _huge_ difference in performance for the same Steam Store pages between the app and looking at the same page in a browser. Seems like something is poorly optimized in the application.

Not to forget, in Steam you can't open new tabs which is extremely annoying.

For some reason I can't reply to winslow, so this goes here: Yes, you can open tabs. But that's rather useless (unless you want to use it as a general browser). I phrased that badly. What I meant was that you can't open links in new tabs. Which is what I'd like to do when browsing through sales or genre lists.

If you are talking about the in-game Steam browser (based on Chrome), yes, you can open new tabs with Ctrl+T. Their desktop version isn't meant to be a browser, though, hence the limit to the Steam store and one tab.

Really? I find the Steam app on OSX awful - super slow and unresponsive. They do this in some of their games too - DotA 2 comes to mind - and it's so obvious that's not native when you're using it.

Oh I agree, the steam client is one of the worst pieces of software I have on my mac, but I don't think using html to layout the store is the source of their problems. The mac app store exhibits none of the issues I see with steam.

Not just OSX, it's sloppy on Windows too. I'm not sure what UI Toolkit they're using, but they should do a complete overhaul.

Steam is based on Valve's own custom UI Engine, the same one used by most of their games.

Something in-house, which they have no choice but to use because of the in-game overlay.

That's because it has to load content remotely. Any UI is shitty when trapped behind a huge network latency.

Especially when you are targeting a single environment with a relatively rich feature set. You can build some pretty awesome UI with keyframe animations and constraint-based (flex box) layouts if you don't have to worry about legacy desktop browsers.

What about Steam's Big Picture mode?

In my experience both have been less than optimal in terms of performance and fluidity. HTML is getting better, but it isn't anywhere near native at this point.

Most game companies and associates now use an embedded WebKit, which makes all this possible. Apple's WebKit investment is still paying dividends and has benefited so many areas, including the desktop browsers (Chrome, now Opera) that run on it.

EA's open source initiatives almost all use an embedded WebKit lib/browser to render UI content (some also use Scaleform (Flash); Skate 3 uses it a bunch).


Back in the day EA did this more often; they also had EASTL for game-optimized STL containers/usage: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n227...

> Back in the day EA did this more often, they also had an EASTL for game optimized STL containers/usage ...

There are a few examples of such optimizations in the Game Programming Gems series

EASTL on GitHub, pulled out of the EA GPL projects: https://github.com/paulhodge/EASTL

Makes sense. HTML/CSS/JS is not a bad way to build UI and if you're going to have an HTML rendering engine and JS VM on your machine, you might as well use it.

When WebView on iOS and Android finally gets WebGL support (what's taking so long?!), native app development with native SDKs will plummet.

No, it won't.

As an iOS developer, I write native because I want to deliver the best possible user experience. I've done HTML5 with PhoneGap and it does not deliver the same experience. WebGL on iOS will not change that at all.

You may be surprised to know that most in game interfaces are powered by Flash/ActionScript.

Would you build a website in Flash? I doubt it - you'd probably tell me it was slow and didn't deliver the same experience.

Many in-game interfaces are powered by Flash/ActionScript, running in things like Scaleform or various open source offshoots of gameswf but they aren't running in Adobe's actual Flash virtual machine, so any performance comparisons are pretty moot.

In any case, anyone who didn't build websites in Flash because it was "slow" was misinformed. Flash/AS, especially with AS3/AVM2 was significantly faster than JavaScript (at the time Flash was relevant) if doing anything that required "Web App" style functionality, at least if the ActionScript code was written by an actual programmer and not a designer copy-pasta-ing scripts to make something work. There are/were very valid arguments against Flash (security, un-indexability, etc), but performance compared to JS wasn't really one of them.

Source: used to write ActionScript apps that ran on the 350 MHz i.MX21 ARM Chumby device with 64 megabytes of memory, and they had better performance than a lot of "native" mobile apps have today, let alone web-based mobile apps.

Games frequently have terrible UIs that have clearly had far more effort put into making them look good than being usable, especially for things not part of the actual game play.

Games typically have a very different set of expectations in terms of their UI than "regular" apps. So, that works for games but not in general.

Agreed, not to mention if you want to do _anything_ truly concurrent, as in running non-bullshit threads that actually run on available cores, you need native.

Web Workers can use multiple cores, can't they?

See Scaleform. In-game interfaces… powered by… Flash! ActionScript!

WebGL is a natural progression on this.

nit: you want to deliver the best possible ui experience. UX encompasses more than just the ui.

Agreed. One of the primary annoyances of HTML5/PhoneGap (as of a year ago or so), was that load times were slow and the UI wasn't as responsive as it should be. It never felt awesome to me.

For me, and maybe it's just me, I can build a much better experience using native code. I also can tweak some very small, but important details that HTML5 doesn't let me tweak.

If someone can create the same experience better, cheaper, and faster using HTML5, more power to them, but I can't, so I go native.

I haven't seen anything in the last year to say that a whole lot has changed. If anything, a lot of major projects have tried HTML5 and have "gone native". I realize there are plenty of projects that might prove the opposite, but when you are just one or two guys cranking out an app on iOS or Android, well you make the best decisions you can and move on.

Exactly, and you don't want to use a document mark-up language patched up with scripts for your UX.

If UX encompasses UI, how is saying that he wants 'the best possible user experience' incorrect? UI performance and bugginess has an effect on user experience.

UX is a superset of UI. That native delivers a superior UI doesn't mean it delivers a superior UX.

I'm just counter nitpicking at this point which isn't much better.

> That native delivers a superior UI doesn't mean it delivers a superior UX.

Sure, it doesn't necessitate a superior UX, but it certainly has an effect on UX. Here 'programminggeek' has tried building apps using native and using web technologies and has found that, for him at least, native provides a better UX. Are you saying that's not possible? That the choice between native and web is absolutely not a UX one but a UI one?

I'm saying that the web has UX benefits as well. The implication of the parent's post was that if you value UX native is the only choice.

As a sane developer, I'd like something that can compile to multiple platforms from a single code-base. I can't imagine targeting only iOS.

I understand the sentiment, but as a sane product maker, I can't imagine providing a customer with a subpar user experience.

> When WebView on iOS and Android finally get WebGL support, native app development will plummet.

WebGL just hands you a GLES context. Nobody writing 2D native apps deals directly with GL at all, and nobody wants to. WebGL will do squat to replace native app development as a result; it's completely unrelated.

The only ones dealing with GL directly do so for performance, and they sure as hell aren't going to be tempted by HTML/CSS/JS because that's all anti-performance.

>Nobody writing 2D native apps deals directly with GL at all, and nobody wants to.

Who says you need to work directly with WebGL? There are, and will be, some wonderful 2D frameworks that wrap WebGL. That's exactly what happened with Flash's Stage3D: Starling and the Feathers UI toolkit are great at creating smooth 60fps animation effects in AS3.
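For what it's worth, the core of such a wrapper is small. A rough sketch of the retained-mode idea (the class and method names here are hypothetical, not Starling's actual API), with a fake draw context standing in for WebGL so it runs anywhere:

```javascript
// Sketch of the Starling-style idea: app code builds a scene graph of
// display objects; only the renderer translates them into low-level
// draw calls (in the browser these would be batched WebGL calls).
class DisplayObject {
  constructor(x = 0, y = 0) { this.x = x; this.y = y; this.children = []; }
  addChild(child) { this.children.push(child); return child; }
  // Walk the tree, accumulating parent offsets, and emit draw commands.
  render(ctx, offsetX = 0, offsetY = 0) {
    const wx = offsetX + this.x, wy = offsetY + this.y;
    this.draw(ctx, wx, wy);
    for (const c of this.children) c.render(ctx, wx, wy);
  }
  draw() {} // plain containers draw nothing themselves
}

class Quad extends DisplayObject {
  constructor(x, y, w, h, color) {
    super(x, y); this.w = w; this.h = h; this.color = color;
  }
  draw(ctx, wx, wy) {
    // A real WebGL wrapper would append vertices to a batched buffer here.
    ctx.drawQuad(wx, wy, this.w, this.h, this.color);
  }
}

// A fake context stands in for WebGL so the sketch runs anywhere.
const commands = [];
const fakeCtx = { drawQuad: (...args) => commands.push(args) };

const stage = new DisplayObject();
const panel = stage.addChild(new DisplayObject(10, 10));
panel.addChild(new Quad(5, 5, 100, 20, '#336699'));
stage.render(fakeCtx);
console.log(commands); // → [[15, 15, 100, 20, '#336699']]
```

The application author positions quads and containers; only the `draw` layer ever touches the graphics API, which is the whole point of the "nobody wants raw GL" objection above.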

I'm not aware of any production-quality UI toolkits for WebGL, or even Canvas, despite canvas being around for many years now. The problem is that you have to reproduce a lot of what you already get with HTML elements and CSS.

You don't even need WebGL; plain HTML + CSS, when coupled with a JS framework like Angular, can produce very nice results. I've built Aether[1] that way.

Edit: The product is not the page; Aether is a desktop application. The page crashes devices low on memory because of the retina screenshots, and yes, I'll have a new site up soon.

[1] http://www.getaether.net

This page is an excellent counter-point. Crashes the iPhone 4S browser and grinds Chrome beta on Android to a halt.

The product isn't the page, Aether is an anonymous, peer to peer reddit-like desktop application whose UI is running on Webkit.

Your link crashes the browser on my ipad mini. Not what I would consider a nice result.

Sorry about that. Aether isn't a website, it's a desktop application. It crashes your iPad mini because it's running out of memory caused by retina screenshots. I'm planning a new website soon.

Doesn't work on latest stable Chrome on Windows 7.

Plain HTML (without javascript or CSS) can produce very nice results. No need for fancy animations or generated graphics.

Interesting. I'm scrapping the entire thing and building from scratch, so it's long in the tooth anyway. The only animation I have in it is keyframe animation which changes the screenshot on the computer's desktop in intervals. I believe retina images are the problem rather than the transitions.

Yep, those retina images don't look too hot on Chrome/Macbook Air/OSX 10.9: http://i.imgur.com/bzwF86C.png

Just curious, is that image really what a screenshot on a Mac produces?

The screenshot doesn't add the laptop, if that's what you're thinking of (that's already there on the webpage). But the font in the screenshot-of-a-screenshot is completely unreadable.

'retina' is a layer at the back of the eyeball that contains cells sensitive to light. What you have are unoptimized high-resolution screenshots.

There will be a whole wave of apps that are not just crappy-looking but glitchy as well.

Many commenters here may be surprised to learn that it is very common for game developers to use Flash/ActionScript to build both game interfaces and game logic. See Scaleform.

WebGL is the natural next step.

The reason game devs use Scaleform is its great vector graphics authoring tool. WebGL is just a GL context in JavaScript and gives you nothing more than before. No, WebXXX isn't always the natural next step, HN.

I suspect it may be rendered in WebKit (http://www.scei.co.jp/ps4-license/webkit.html)

Actually, the whole list of open source licenses is interesting: jQuery, Lua, Mono, Protobufs.

They are already using Mono on the PS Vita, and there will be a commercial version of MonoGame for the PS4.

Awesome, that site confirms that the PS4 has Opus support [1]. I was hoping it would, since its low latency makes it ideal for gaming.

[1] http://www.scei.co.jp/ps4-license/opus.html

That makes perfect sense; WebKit is the most embeddable of the open source browser engines. You can embed Gecko and Blink as well, but WebKit seems to care the most about embeddability and customizability.

It's actually a pretty small part of the UX, as the guy pointed out here: https://plus.google.com/113371030751322342143/posts/5akNbY6A...

More interesting is it is running FreeBSD: http://www.scei.co.jp/ps4-license/

That would be on the devkit most likely.

PS3, PS4 and PS Vita run FreeBSD

PS3 for sure does not run FreeBSD. It does use a modified version of UFS. The lv1 hypervisor is likely derived from IBM's reference implementation. I'm not sure about the lv2 OS (GameOS) but I believe it is 100% Sony code.

So that's why PS4 is using 70-90W idle?

Despite all the hate, they've done it and it looks rock solid. Great job guys!

Hate it when people use UX as a cool new buzzword for UI. UX is not UI. Read:


Any video of what it looks like?

Here are some screenshots from the personal site of the author:


I really enjoy getting to "peek behind the curtain" of embedded systems and other such appliances, especially game consoles. They've always held a certain mystique. Sometimes a beautiful, glossy UI running on top of a well-thought-out, logical and high-performance system can be just as exciting as the games they were designed to deliver.

(At least to someone stuck doing LoB/ERP work and CMS development.)

WebGL is just an interface to low-level graphics. This just means HTML was inappropriate for any UX in PS4.

So, why didn't they use just native GL code? Because of sandboxing limitation?

Yeah, Netflix thought it was a good idea too, until they ditched their web UI for a native one:


That's completely different. Netflix had to work across low-end devices like Roku boxes, smart TVs and Blu-ray players with very limited embedded CPUs and memory counted in the dozens of megabytes. Sony only needs this to run on a latest-gen game console with an 8-core x86 CPU, 3 GB of RAM and a screaming GPU. Netflix never had the option of using WebGL.

No surprise. Here in SF, Sony has been aggressively searching for WebGL and JS hackers since last year.

UX => UI


UI != UX.

UX is a side effect of UI execution.
