The single-environment HTML5 showcase is also often backed by a development team actively supporting its specific use cases: Microsoft with IE/WinJS, Mozilla with Firefox/Firefox OS, Sony with a PS4-optimized WebGL implementation, etc.
When the big players hit roadblocks during the development of something as high profile as their UI for their next-gen console, the browser can be changed on-the-fly to overcome them. That option isn't available to the rest of the world, and "audio doesn't work like we need it to" being a solvable problem can certainly influence whether or not you believe HTML5 is a suitable app platform.
Sorry, are you saying that the effort of making a single web app and getting it to run well in PhoneGap on multiple platforms adds up to more than building a native Android app, a native iOS app, a native Windows Phone app, and a mobile web app combined?
That's a pretty bold statement. Are you on a big team? I would expect you'd need at least a half dozen developers to pull off four separate apps, while I can easily imagine a single person building a web app that could run well in PhoneGap.
I just spent six months building a quite complex single page web app with the hope that I could in the future package it in PhoneGap (or similar), so if what you're saying is true then I'm fucked, because it would take me easily a full year to build 3 native apps of the same complexity. Probably much longer.
HTML apps have a tremendous hurdle to overcome. Customer expectations are different between something opened in a web browser and something downloaded from the app store. Apps that are amazing as web pages can be awful as apps. The amount of effort required to get a web application to match user expectations for native UI cues, animations, quality, and responsiveness across platforms is enormous, and far more difficult than building a standard web app.
Of course, if you target just one platform, things get a lot better, but at that point, why not just use the native tooling?
This has been my experience, having tried both approaches over the course of several years, across team sizes ranging from one to ten developers.
I'm not arguing about whether it can or can't be done, only that the time and cost to implement it equal or exceed the cost of implementing N native apps. I wouldn't mind a citation to the contrary: apps that took the plunge and found it worthwhile, with the write-once-run-anywhere promise fulfilled to their satisfaction.
You'll notice that the Windows Phone Netflix app certainly isn't using the same approach as the iOS and Android counterparts, and the iOS/Android versions use a non-trivial amount of native UI components for their respective platforms.
I don't have the resources to care about nativeness. I think you're definitely right that if you want something that behaves like a native app, you're much better off just building a native app. I suspect that for most people, though, that level of nativeness doesn't materially affect their business.
It's a bit of magic to see a game work reliably across Android devices dating back to Froyo with excellent performance. Running your web app on the same hardware is not quite such a delight.
It's also interesting to note the bigger players that have recently made the transition from HTML5 to native apps. That said, the good enough bar may be good enough for many people.
Native development takes advantage of SDKs you won't find on your HTML platform (which is really a hybrid app, since you depend on the PhoneGap container, so it's still native in a sense; you've just abstracted that away, so it's not really an HTML app). You're just building a packaged website.
> I just spent six months building a quite complex single page web app
> while I can easily imagine a single person building a web app that could run well in PhoneGap
Except your app won't have a native feel, won't follow the platform's UX guidelines since you aren't using native widgets, and will feel like a second-rate experience on the platform you're developing for.
Let's not even talk about performance. Is your app multithreaded? Can it run in the background and do work while the user is in another app? Can it display anything on the user's home screen? See, unless you go native, your capabilities are limited to the lowest common denominator instead of embracing the platform. Maybe that's why you spent six months building it.
Actually, depending on the level of quality you need, yes.
A very basic form-based app can be better served by a PhoneGap app, and be of better overall quality.
But the cases where you won't be helped by a cross-platform solution are numerous. It's obvious if you heavily depend on performance, or rely on a lot of native APIs where you want very fine-grained control (you might not even want feature parity between platforms, if one platform offers better tools than another), or use native third-party libraries.
The same goes for cases where you require near-pixel-perfect layouts across a wide range of devices, have an unusual set of rules for positioning or moving your elements, or handle device rotation in non-standard ways. In those cases you may be better off building native apps as many times as you need to.
That seems extremely wasteful, but dealing with hacks on top of hacks in a single HTML+CSS+JS codebase, relying on an extra abstraction layer wrapping wildly different platforms, is its own kind of hell.
On the other hand, watching browser technology evolve from whatever IE6 was into the sole interface presentation layer for top tier operating systems has been fascinating. Browsers may have failed to bring us a "one true platform" by themselves, but the technologies involved are quickly reaching ubiquity.
That said, things are also getting worse.
Fragmentation on the browser and device side is worse than ever. Implementation starts with caniuse.com, followed by ripping out the feature later when browsers support (or partially support) features differently across tablet, desktop and mobile form factors. Mobile browsers tend to be an absolute disaster, in which something as simple as a div with overflow-y requires Herculean effort to implement across devices spanning Android 2 to Windows Phone 8. That's something that should be brain-dead simple, and things don't get much better once you move beyond that.
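For context on why something that simple is painful, the era-appropriate workaround looked roughly like this (a sketch only; the selector is made up, and the old Android 2.x stock browser still needs a JS scroller library such as iScroll on top of it):

```css
/* Scrollable panel: trivial on desktop, but mobile needs help. */
.scroll-panel {
  height: 300px;
  overflow-y: auto;                  /* the "brain-dead simple" part */
  -webkit-overflow-scrolling: touch; /* iOS 5+: momentum scrolling   */
  /* Android 2.x stock browser: overflow scrolling is unsupported, so
     the usual fallback is a touch-event + CSS-transform scroller
     library (e.g. iScroll) instead of native overflow. */
}
```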
One popular train of thought I've had enough of is "consistent implementation of X isn't a problem; just include library or polyfill Y and things will be great." That mindset tends to defer problems rather than solve them, and eventually the "solution" tips over and requires in-depth bug fixing or a from-scratch reimplementation.
So in 2013, WebGL offers the 3D GPU performance of 2006.
I hope you don't say that dismissively, because to me it's already amazing (given all the other conveniences it brings).
Performance comes from the GPU, not the interface.
An interface like GL just defines features, not how fast they should run.
As a developer never ever take any technology for granted, ever.
> I can't look at an in-browser demo of Epic Citadel
I have a pretty decent laptop, and this demo won't run well in any browser. The performance just isn't there; it's nowhere near native, even on desktop.
The value judgement for using a web rendering engine on your _closed_ system barely touches on "cross platform" benefits. The primary concern is whether or not the rendering engine saves you time and effort.
People seem to forget that browser engines are fantastic at a wide variety of tasks:
* Best-in-class UI layout.
* A great scripting engine.
* Very performant rendering pipelines (for what they're doing).
* Easy extension points when integrating with your native layer (e.g. exposing your own 'native' JS objects).
* Fantastic dev tools.
Browser engines are a _platform_. Why reinvent the wheel?
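The "easy extension points" item above can be sketched as a tiny message bridge. All names here are hypothetical, not a real framework's API; real-world counterparts would be Cordova's `exec()` bridge or Android's `addJavascriptInterface`:

```javascript
// Minimal sketch of a native <-> JS bridge (hypothetical API): the
// native layer registers handlers, and page script invokes them as if
// they were ordinary JS objects. A real bridge would also marshal
// arguments and results across the language boundary.
class NativeBridge {
  constructor() { this.handlers = new Map(); }
  // Called from the native side to expose a capability to page JS.
  expose(name, fn) { this.handlers.set(name, fn); }
  // Called from page JS to reach into the native layer.
  invoke(name, ...args) {
    const fn = this.handlers.get(name);
    if (!fn) throw new Error(`no native handler: ${name}`);
    return fn(...args);
  }
}

// Usage: the "native" side exposes a battery query; page JS calls it.
const bridge = new NativeBridge();
bridge.expose('getBatteryLevel', () => 87);
console.log(bridge.invoke('getBatteryLevel')); // 87
```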
XAML is best in class; HTML is legacy and illogical.
> A great scripting engine
yet a crappy scripting language
The beauty is we have one responsive code base that works PERFECTLY on over 20 devices. It's pretty awesome.
Not to mention, my crappy phone runs Chrome. If it can do that, then certainly a device that draws up to 100 watts and has 16x the RAM and 100x the graphics/CPU power can too, without it being considered a drain.
But this is still a proprietary, "junky" interface. If it's WebGL, it's still entirely custom, with entirely custom event handling, entirely custom accessibility (or lack thereof), and entirely custom everything. If you are just using WebGL, which is what they said, then the browser is a massive chunk of dead weight that's doing jack shit. All the browser did was set up the EGL context, create a GLES context from it, and hand it to you. That's basic hello-world stuff, and it's the entirety of the browser's involvement in this scenario.
Plus the browser runtime still gives you things even if you're going the pure-GL UI route: networking, a JSON parser, a database (Local Storage), cookies and session management, an (arguably awful) security model... how is the browser "dead weight"?
And I called it dead weight because we're talking about a client-side, offline UI here, so things like cookies and session management are irrelevant. The dead weight comes from the bulk of the browser code: HTML/CSS parsing, WebKit/Blink layout, the graphics model that's not being used, the font support that's not being used, and so on. The things you list are all really small pieces.
I don't disagree that there's a lot wasted but I'd probably have made the same trade-off given the circumstances.
The PS4's UI isn't particularly resource-constrained as far as I know, and it's often easier to start with a familiar environment that includes everything and the kitchen sink than to start with nothing and hand-pick blocks along the way. Plus, based on Sony's past in-house UI performance, the "glue libraries together from the ground up" approach wasn't working out for them.
So far the PS4's UI is getting good reviews and is generally described as "responsive", so it sounds like they didn't really lose. In a modern device where you'll end up porting a browser anyway, I don't see why using it for the UI is such a sin.
As a cynic, I would say web apps are all about proprietary junky interfaces instead of using my platform's standard GUI. Reminds me a bit of the VB6 days, where every program used to define its own true button color... (except modern web apps look a lot nicer, of course).
The PS3 has two graphics APIs: OpenGL ES 1.x with Cg for shaders, and LibGCM.
Most game studios never used the OpenGL ES API.
The game industry does not care that much about open standards like the FOSS community does.
What counts is getting a game idea funded and into the hands of paying customers, regardless of any tooling religion.
Porting to multiple platforms is an outsourcing business that is part of the industry since the early days.
WebGL might be less performant but honestly, for a menu system it's plenty fast, so what's the big deal here? Honest question.
You assume the renderer they're using uses a large chunk of RAM. You assume they use exactly the same browser for the home interface as the normal web browsing. You assume that they're keeping the home interface in memory (and wasting resources) when you're playing a game.
The console has 8GB of RAM, and games are only guaranteed 4.5GB of it. You do the math: that leaves up to 3.5GB for the OS and its UI.
You're drawing maybe 1,000 particles or 50,000 polygons. Much lighter weight. It doesn't /need/ the performance that a game does.
Interface performance is really fucking important, and I sure hope people like Sony and Microsoft care when it comes to their consoles.
People like EA, with the notoriously terrible Battlefield* interfaces, obviously do not care. (The original Battlefield 1942 would switch away from your native resolution, refresh rate, and color depth and put the screen in 800x600, 16-bit, 60Hz every single time you hit the escape key.)
The performance of the desktop on the PS3 and PS4 is less important to the gamer than the performance of the games, also known as the primary purpose of the device.
However, that doesn't mean that it is unimportant.
Performance in general still matters a lot on the PS desktop. Case in point, my PS3, which takes a godawful amount of time to load all the screen icons.
And (memory) performance is of critical importance if the PS4 supports multitasking the way the Xbox One does (I don't know whether it does).
Complain about it being useless shit.
Where have all of these cynical people come from today? Every discussion I've wandered into today has had a good showing of judgmental, know-better, seemingly angry people.
Fine in a real browser though.
To me that's why it's clunky in the application, but fine in a browser. It's par for the course in the HTML/native hybrid application world.
EA's open-source initiatives almost all use an embedded WebKit lib/browser to render UI content (some also use Scaleform (Flash); Skate 3 uses it a bunch).
Back in the day EA did this more often; they also built EASTL, a set of game-optimized STL containers: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n227...
There are a few examples of such optimizations in the Game Programming Gems series
When WebView on iOS and Android finally gets WebGL support (what's taking so long?!), native app development with native SDKs will plummet.
As an iOS developer, I write native because I want to deliver the best possible user experience. I've done HTML5 with PhoneGap and it does not deliver the same experience. WebGL on iOS will not change that at all.
Would you build a website in Flash? I doubt it - you'd probably tell me it was slow and didn't deliver the same experience.
Source: used to write ActionScript apps that ran on the 350 MHz i.MX21 ARM chumby device with 64 megabytes of memory, and they had better performance than a lot of "native" mobile apps have today, let alone web-based mobile apps.
WebGL is a natural progression on this.
For me, and maybe it's just me, I can build a much better experience using native code. I also can tweak some very small, but important details that HTML5 doesn't let me tweak.
If someone can create the same experience better, cheaper, and faster using HTML5, more power to them, but I can't, so I go native.
I haven't seen anything in the last year to say that a whole lot has changed. If anything, a lot of major projects have tried HTML5 and have "gone native". I realize there are plenty of projects that might prove the opposite, but when you are just one or two guys cranking out an app on iOS or Android, well you make the best decisions you can and move on.
> That native delivers a superior UI doesn't mean it delivers a superior UX.
Sure, it doesn't necessitate a superior UX, but it certainly has an effect on UX. Here 'programminggeek' has tried building apps using native and using web technologies, and has found that, for him at least, native provides a better UX. Are you saying that's not possible? That the choice between native and web is absolutely not a UX one but a UI one?
WebGL just hands you a GLES context. Nobody writing 2D native apps deals directly with GL at all, and nobody wants to. WebGL will do squat to replace native app development as a result, it's completely unrelated.
The only ones dealing with GL directly do so for performance, and they sure as hell aren't going to be tempted by HTML/CSS/JS because that's all anti-performance.
Who says you need to work directly with WebGL? There are, and will be, wonderful 2D frameworks that wrap WebGL. That's exactly what happened with Flash's Stage3D: Starling and the Feathers UI toolkit are great for creating smooth 60fps animated effects in AS3.
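The core idea behind such frameworks can be sketched as a retained-mode 2D scene graph whose leaves a WebGL backend would draw. This is an illustrative toy, not the API of Starling or any real framework; the `Node2D` name and draw-callback shape are made up:

```javascript
// Toy retained-mode scene graph: the app manipulates a tree of nodes,
// and the renderer walks it. A real WebGL backend would batch the
// visited nodes into vertex buffers instead of drawing one at a time.
class Node2D {
  constructor(x = 0, y = 0) {
    this.x = x;
    this.y = y;
    this.children = [];
  }
  add(child) { this.children.push(child); return child; }
  // Depth-first traversal accumulating world-space positions.
  render(draw, offsetX = 0, offsetY = 0) {
    const wx = offsetX + this.x, wy = offsetY + this.y;
    draw(this, wx, wy);
    for (const c of this.children) c.render(draw, wx, wy);
  }
}

// Usage: build a tiny scene and collect the world-space draw calls.
const stage = new Node2D();
const panel = stage.add(new Node2D(10, 20));
panel.add(new Node2D(5, 5)); // an icon positioned relative to panel
const calls = [];
stage.render((node, x, y) => calls.push([x, y]));
console.log(calls); // [ [ 0, 0 ], [ 10, 20 ], [ 15, 25 ] ]
```

The point is that app code never touches GL state; it just moves nodes, which is "exactly what happened" with Starling on top of Stage3D.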
Edit: The product is not the page, Aether is a desktop application. The page crashes devices low on memory because of the retina screenshots, and yes, I'm having a new site up soon.
WebGL is the natural next step.
(At least to someone stuck doing LoB/ERP work and CMS development.)
So, why didn't they just use plain native GL code? Because of sandboxing limitations?
UX is a side effect of UI execution.