You can even try it out now on Safari Technology Preview: https://webkit.org/blog/7627/safari-technology-preview-32/
This is absolutely huge. Many, many services that could once only be offered for a premium fee, or by large companies like Skype or Google, can now be offered for free by small start-ups without having to worry about "this page was designed to be viewed in [preferred browser] version [foo] or higher" hassles.
Well, I have been figuratively waiting for it for 6 years.
Alternatively they’d have to add VP8 support in their chips and one suspects they would be unwilling to spend silicon on that which could otherwise be used for whatever witchcraft their silicon designers are whipping up.
I’d grant that as a valid technical reason for limited video codec support. Silicon and battery are at a premium.
HLS has no such problems, which makes it the better choice.
And Columbia University is in that patent trolls list. Disgusting.
Going through that site, I found their attempt to leech on VC-1: http://www.mpegla.com/main/programs/VC1/Documents/vc-1-att1....
And they list Microsoft there, which is strange, since MS is part of the Alliance for Open Media, which is the antithesis of this trolling cartel. Either MS is playing both sides, or MPEGLA is trying to fool everyone.
Apple was doing video long before Firefox and the web were a thing; perhaps it's Mozilla that needs to get with the times and industry standards.
Here's an article on JavaScript-based HLS from a couple of years ago:
Web browser considerations aren't relevant on iOS because Apple forbids alternative browser engines. Firefox on iOS is not Firefox because Apple doesn't allow it to use Firefox's JS runtime or Firefox's render engine. As a result there isn't any true browser competition on the iOS platform, which is a shame.
Personally, I want to run full, real Firefox on my iPhone. It's a low-quality move from Apple to stop me from doing that.
I'm sure a lot of them do, but it's also true that there are a lot of Mac laptops out there which will be upgraded to High Sierra that don't have hardware HEVC acceleration.
Both HEVC and H.264 require that the patent holders be paid before the codec is allowed on a device or in distributed content.
One obvious technical issue is that, as far as we know, there's no VP8 decoding hardware in any of Apple's products; implementing VP8 decoding in software might be more of a power drain than Apple wanted.
But what I found interesting in the WWDC session you linked to was that a lot of Apple products don't have hardware HEVC decode and/or hardware HEVC encode support. Apple has implemented software HEVC decoding and encoding in a lot of places. From that perspective, adding support for VP8 and VP9 wouldn't be much different.
If I tick "Remove Legacy WebRTC API", and retry, they all fail until it gets stuck with no message on "Udp enabled".
Also, definitely worth trying out this AR.js demo if you're running the beta:
Always thought it was a little peculiar that the Media Capture APIs are tied to what is otherwise a very data/protocol-oriented spec.
Wish we could have had a camera feed in a canvas element on Mobile Safari a few years ago without having to wait for the entirety of WebRTC to be vetted. :P
And now, for my own unrelated Web API peeve:
Beyond service workers and all that jazz, I'm a little bummed out that the Pointer Events API isn't even listed on the WebKit Feature Status page:
It just seems like such a pleasant unification of all of the input-type APIs (mouse, touch, pen, hypothetical future peripherals...)
At least touch events have a "force" property with Apple Pencil input in the meantime. No tilt though.
Maybe next year :/
My phone, though an iPhone 6, seems to shut down Safari if I've been using other apps long enough. I can't imagine it staying alive long enough to torrent anything large.
 http://new.gafferongames.com/post/why_cant_i_send_udp_packet... / https://github.com/networkprotocol/netcode.io
Really? Why? It seems obvious to me that communicating directly with other players would be faster than relaying messages through a server. Even if the server and the clients are all in the same building, the server is still going to add the latency of its entire stack. Unless that server is acting as a very simple stream-oriented traffic controller, its latency is surely in the 5-20 millisecond range, at least?
I know WebRTC is complex, but minimal latency is such a critical feature for multiplayer gaming, that surely, if it works at all, then it is worth the trouble?
Cheating. Having an authoritative server reduces the ability to cheat and limits the type of cheating that is possible.
The more you trust the client, the more cheating can affect the game.
Also, client-server scales to more players.
His lib is based on real-world pain that many game devs have hit trying to integrate WebRTC. It's the same reason you see Lua thrive while V8/etc are rarely a part of a game engine.
That being said, I'm very excited about WebRTC and its inclusion in Safari. It's not a silver bullet that exposes a simple UDP interface, but it's a welcome alternative to WebSockets for use in real-time games.
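As a sketch of that alternative (browser-only; the `pc` setup and the "game" channel label are my own illustration, not from the thread): an RTCDataChannel configured as unordered with zero retransmits is about as close to UDP semantics as WebRTC gets.

```javascript
// Sketch: an "unreliable" RTCDataChannel for real-time game traffic.
// ordered: false + maxRetransmits: 0 gives UDP-like delivery semantics:
// stale packets are dropped instead of blocking newer ones.
function createGameChannel(pc) {
  return pc.createDataChannel("game", { ordered: false, maxRetransmits: 0 });
}

// Guarded so this is a no-op outside a WebRTC-capable browser.
if (typeof RTCPeerConnection !== "undefined") {
  const pc = new RTCPeerConnection();
  const channel = createGameChannel(pc);
  channel.onopen = () => channel.send("ping");
  channel.onmessage = (e) => console.log("received:", e.data);
}
```

Signaling (exchanging offers/answers and ICE candidates through some server) is still on you, which is most of the real-world pain the parent is describing.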
Contribution: there are platforms that already use WebRTC instead of RTMP to stream the live camera feed. You can use TokBox's APIs, for example.
Delivery: this is where it gets trickier. The nice thing about HTTP-based streaming like HLS and DASH is that it's cacheable just like any other file served over HTTP, making it extremely scalable; that's how most CDNs operate today.
Switching that part to WebRTC has its benefits, like low latency, but it adds huge scaling complexity because of WebRTC's connection-oriented nature. There are companies trying to build that as well, e.g. Red5.
The main reasons for transcoding are downstream bandwidth usage and codec support. WebRTC mandates VP8, so codecs shouldn't be an issue. Downstream bandwidth usage probably wouldn't be a major concern when it's viable to stream at 1080p to thousands of viewers from a home connection (though I doubt that's going to happen soon). So you could probably ignore transcoding entirely.
However, marketing would probably be an issue, since this architecture would prevent streamers from having more than about a hundred watchers, meaning that the big fish would stick to Twitch. That said, I suppose you could have a failover system, where larger streamers would switch to a relay-based system.
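To put rough numbers on that viewer cap (my figures, not the commenter's; assuming ~5 Mbps per 1080p viewer and a generous 500 Mbps fiber uplink):

```javascript
// Back-of-envelope: how many direct P2P viewers can one streamer serve
// before saturating their own upstream connection?
const perViewerMbps = 5;   // rough 1080p WebRTC bitrate (assumed)
const uplinkMbps = 500;    // optimistic residential fiber upstream (assumed)
const maxViewers = Math.floor(uplinkMbps / perViewerMbps);
console.log(maxViewers);   // 100 — consistent with "about a hundred watchers"
```

On a more typical 10-50 Mbps uplink the cap drops to single or low double digits, which is why a relay-based failover for bigger streamers makes sense.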
In terms of replacing, the obvious ones (as mentioned) are applications like Hangouts, Skype, GoToMeeting, etc.
In terms of enabling, time will tell, but I try to think about what kinds of applications were possible before but not practical because of the need to get everyone to install something. Hopefully this will increase the web's competitiveness with native apps.
One of the projects using WebRTC data channels right now is Threema Web to connect smartphone and browser across networks without trusting a server: https://github.com/threema-ch/threema-web (Disclaimer: I'm involved in development)
WebRTC can communicate on the local network without a round trip to the ISP uplink, so it can achieve high bandwidth in managed environments (industrial etc.).
"Today we are thrilled to announce WebKit support for WebRTC, available on Safari on macOS High Sierra, iOS 11, and Safari Technology Preview 32."
It is completely hostile to users. If I close a web page I expect everything related to that web page to stop. Immediately. I don't expect there to be lingering background threads consuming battery life, network data and disk storage. And in Chrome you can only discover service workers by enabling a Debug mode and memorising a specific URL. Exactly how does a non-developer figure out what is going on?
If Apple had any sense they would ban service workers and instead propose use case specific, tightly focused and managed APIs that focus on security and battery life first.
Also just a hint at a future where Botnets are running in your browser:
Think about it this way... why do you have to download an entire app just to use a site? Don't like the notifications? Turn them off.
Again, the concept of background tasks isn't necessarily a bad thing. But it needs to be locked down to specific use cases, not a free-for-all allowing arbitrary JS to be executed.
AFAIK Service workers are still under consideration, and nothing was announced.
Quick question: I noticed that the camera isn't available on iOS 11 if you put a web-app-capable page onto your home screen. When you open the page in full screen, mediaDevices is undefined. Is that just an iOS beta bug?
And when are they going to support MSE in iOS Safari?
still some bugs maybe...
Using webrtc-adapter, all of my webrtc code instantly worked. Though it doesn't seem to want to connect to firefox peers.
It sometimes baffles me how Apple is holding up progress on the web, and that they aren't criticized more for it.
Apple tends to take a little longer and suffer a little short-term pain to get it right in the long run.
If all Apple cared about was the App Store then why do they continue to hire WebKit engineers and build new features? It would be easier just to fork WebKit, never add anything new, and deliberately make it so unusable that everyone rushes to apps.
I worked with a group of engineers full time for years around mobile/iOS, and we were pretty amazed at how severe but unaddressed these bugs were. The move to WKWebView was helpful, but still lacking. The app store parallel that you take issue with is an easy one to make when you contrast those failings against the talent and financial bandwidth of a company that size.
No there's not. I'm a web developer so I'm pretty invested in the web platform, but even I can see how incredibly naive this assertion is. Native will always win due to just having more power and capabilities. Web will always be a step behind.
Never attribute to malice that which is adequately explained by apathy or lack of resources.
Because Apple has always shaped so much of our development environment, it's easy to take these things as natural law. I'm assuming you're a younger developer based on your assertions – I would probably even agree with those suppositions based on the tone of posts like mine that seem anti-Apple. But the history of these technologies is glaring, and native/web/hybrid apps have never been given equal footing.
Nothing you said refutes a financial incentive in the app store, and most apps don't need "more power and capabilities", they need access to a basic API that isn't randomly crippled.
If you care about this subject, read about the history of UIWebView, WKWebView and hybrid apps, you will see how uniquely broken the platform is. Android doesn't suffer the same problems for a host of reasons, but that makes sense given Google's incentive for web. Hybrid and web apps are then not terribly appealing since they're only effective on one platform, so the ecosystem stagnates.
Here's a post I saw a few days ago, by coincidence: https://hackernoon.com/if-it-werent-for-apple-hybrid-app-dev...
Sure, but it stops your site from working cross-browser.
Apple famously refused to allow Flash on iOS and in doing so single-handedly did more to push HTML5 standards than anyone else. I'll also point out that WebRTC has gone through numerous iterations to such a degree that a lot of WebRTC sites don't work with Safari 11 because they're using old non-standards-compliant variants of the API.
The typical HN web dev seems to load their site in Chrome, click around a bit, and call it a day. That's not healthy for standards or the web, but it sure is lazy and expedient for a very narrow group of people.
Chrome is not the web.
I don't believe it's a short-sighted view. I think it's quite the opposite; my opinion comes from spending years doing (web) development for it.
Their focus has always been on things that matter little (they were by far the first to implement `backdrop-filter`) while keeping important things severely broken (like `position: fixed`) and messing with things that should work in an obvious manner anywhere else (like overflow scrolling).
WebAssembly and WebRTC are the opposite of that. "Last year or so", maybe, but it'll take more than one year of good work for me to look at the team in a good light.
I remember having to integrate WebRTC through cordova (thank goodness somebody did it first) a couple years ago, and wondering why on earth Apple didn't find time or money for this. libwebrtc has been stable for years, it represents effectively no security risk, it performs well on all of their platforms. It truly boggles the mind.
ETA: If you look at the page counts of all the required specs (ECMAScript APIs, WebRTC protocol specs, underlying codec and protocol specs like Opus and RTP) it adds up to multiple thousands of pages. By comparison, the whole HTML5 spec (current WHATWG version) is 855 pages.
I would think the greatest barrier to integrating Opus would be getting Apple legal to agree not to sue other Opus adopters. Fair enough, but that's not five years worth of legal research. I would be gobsmacked if Apple bothered to write their own encoder or decoder just for WebKit; you don't need to read the bitstream format documentation.
And again, Apple has so much cash on hand that it takes the accountants five minutes to walk across a line on their balance sheets. Maybe the rest of you are busy, fine, but there are people who are qualified to put WebRTC in WebKit in the general public. Apple could send some money to the qualified people at Collabora, Igalia, or Ericsson Research (who I think have had WebRTC in Webkit since 2015).
All in all, it's absolutely bogus that Safari hasn't had WebRTC for at least two years already. You can make all the excuses you want, but this API drives more customer value than any ES6 feature. Nobody actually deploys ES6 on the web today, because it has forced browser vendors to completely reengineer their compilers, taking half a decade each, and as a result it has not been deployed long enough with decent performance to offer any value. WebRTC has been deployed for going on five years in valuable applications which are otherwise completely impossible on the web, half the other crap has no userbase, and offers questionable tangible value.
Just like with all of the other web standards, I'm pretty sure the WebRTC code is in the publicly available repo.
Obviously Apple isn't afraid to collaborate with other entities--Igalia did the CSS Grid implementation that ships in Safari and WebKit.
I guess it's also not an easy decision to "just include libwebrtc" in one's own application, since that's a giant dependency which pulls in lots of Chromium code.
No thanks Google, you shouldn't get to decide when my PC goes into sleep mode.
I really hope Safari is better than that...
Edit: just as I'm typing this, Chrome has:
pid 19325(Google Chrome): [0x001ccbbb00018b19] 01:55:40 NoIdleSleepAssertion named: "WebRTC has active PeerConnections"
2 hours for... what? I have no idea which page is guilty.
If you go to Window > Task Manager it'll tell you which PID corresponds to which tab.
And I wouldn't call it a leak; you need to do extra work to prevent the system from sleeping, so someone did this intentionally.
Oh well, since I had to close Chrome last evening so my computer would sleep, today I opened Safari as my main browser instead. Google annoys me too much lately.
Safari's fine these days though, so you do you.
I don't think that implementation of it is anywhere near as easy as you think.
That said, if you don't have the client in your browser, nobody gets to even try.
You may think that WebRTC is a pain, but try implementing video chat with getUserMedia and WebSockets, or a custom plugin, and get back to me on how much of a pain WebRTC is.
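For reference, the getUserMedia half alone is the easy part. A minimal capture sketch (browser-only; the element and function names are my own illustration):

```javascript
// Capture the local camera + mic and preview them in a <video> element.
// This is only the capture half; actually shipping media to a peer is
// where WebRTC's PeerConnection machinery (ICE, DTLS, SRTP) earns its keep.
async function startLocalPreview(videoEl) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  videoEl.srcObject = stream;  // modern replacement for createObjectURL()
  await videoEl.play();
  return stream;
}

// Usage (in a browser):
// startLocalPreview(document.getElementById("preview"));
```

Everything after this point (signaling, NAT traversal, congestion control, echo cancellation) is exactly the pain WebRTC exists to absorb.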