Apple Announces Full WebRTC Support in Safari 11 (peer5.com)
259 points by shacharz on June 7, 2017 | 158 comments



WebRTC was the #1 most requested web platform feature for Safari. Now coming to macOS and iOS: https://webkit.org/blog/7726/announcing-webrtc-and-media-cap...

You can even try it out now on Safari Technology Preview: https://webkit.org/blog/7627/safari-technology-preview-32/


About fucking time. I've literally been waiting for this for two and a half years!

This is absolutely huge. Many, many services that could once only be offered at a premium, or by massive companies like Skype or Google, can now be provided for free by small start-ups without having to worry about "this page was designed to be viewed in [preferred browser] version [foo] or higher" hassles.


> I've literally been waiting for this for two and a half years!

Well, I have been figuratively waiting for it for 6 years.


If people had just built on Doug Engelbart's NLS then we could have had this decades ago :P


Well the version part still doesn’t go away ;)


Does it support VP8 as required by RFC 7742 "WebRTC Video Processing and Codec Requirements"?


Not in the version that will ship in iOS 11 / High Sierra. Not yet determined for future versions.


If there's anything we (Mozilla) can do to help, please let us know.


So, "full support" might be misleading to put in the title.


Is that a political or technical decision?


They would likely argue that HEVC and H.264 are hardware accelerated on nearly all of their devices whereas VP8 likely isn’t. This would mean compromising on battery life, as they’d have to provide a software fallback where needed. I don’t see Apple being very willing to provide a bad user experience, and given how hard they were pushing HEVC during their 2017 WWDC keynote, the best bet is that they think that’s the better option.

Alternatively they’d have to add VP8 support in their chips and one suspects they would be unwilling to spend silicon on that which could otherwise be used for whatever witchcraft their silicon designers are whipping up.

I’d grant that as a valid technical reason for limited video codec support. Silicon and battery are at a premium.



I always find it disappointing when video from Apple doesn't work in Firefox. There are quite a few JavaScript libraries available these days that add HLS support to browsers without built-in HLS, but Apple doesn't make use of them.


Apple should stop fooling around, and start using DASH+MSE instead. But being Apple, they have a very hard time letting go of their NIH and lock-in.


The problem with DASH these days is that you might have to buy a patent license to use it. The MPEG LA wants to sell you one anyhow:

http://www.mpegla.com/main/programs/MPEG-DASH/Pages/Intro.as...

HLS has no such problems which makes it the better choice.


Ah, so these freaks already managed to make claims. I hope someone will work on busting them. I highly doubt HLS is in any better shape in this regard.

And Columbia University is in that patent trolls list. Disgusting.

UPDATE:

Going through that site, I found their attempt to leech on VC-1: http://www.mpegla.com/main/programs/VC1/Documents/vc-1-att1....

And they list Microsoft there, which is strange, since MS is part of the Alliance for Open Media, which is an antithesis of this trolling cartel. Either MS is playing both sides, or MPEG LA is trying to fool everyone.


> Apple should stop fooling around, and start using DASH+MSE instead. But being Apple, they have a very hard time letting go of their NIH and lock-in.

Apple was doing video [1] long before Firefox and the web were a thing; perhaps it's Mozilla that needs to get with the times and industry standards.

[1] https://en.wikipedia.org/wiki/QuickTime


> perhaps it's Mozilla that needs to get with the times

Mozilla is with the times. HLS works well in Firefox. You just do it with JavaScript and it's disappointing that Apple doesn't bother to do that on their website.

Here's an article on JavaScript-based HLS from a couple of years ago:

https://blog.peer5.com/http-live-streaming-in-javascript/
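
For illustration, here's a minimal sketch of the JavaScript approach described above, using the open-source hls.js library (the manifest URL and video element are hypothetical):

    // Minimal sketch: HLS playback via Media Source Extensions in browsers
    // without native HLS support, falling back to native playback in Safari.
    const video = document.querySelector('video');
    const src = 'https://example.com/stream/index.m3u8'; // hypothetical manifest

    if (video.canPlayType('application/vnd.apple.mpegurl')) {
      // Safari: native HLS, just point the element at the manifest.
      video.src = src;
    } else if (Hls.isSupported()) {
      // Other browsers: hls.js fetches segments and feeds them to MSE.
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(video);
    }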


Again, performance and battery life is going to be better with Apple’s approach for mobile devices, especially since iOS devices have hardware accelerated playback.


> with Apple’s approach for mobile devices

Web browser considerations aren't relevant on iOS because Apple forbids alternative browser engines. Firefox on iOS is not Firefox because Apple doesn't allow it to use Firefox's JS runtime or Firefox's render engine. As a result there isn't any true browser competition on the iOS platform, which is a shame.

Personally, I want to run full, real Firefox on my iPhone. It's a low quality move from Apple that they stop me from doing that.


Well, ditch Apple. Why do you put up with this?


That's a bogus argument, since nothing stops Apple from supporting common standards in their hardware, instead of NIH.


DASH is being with the times and standards. HLS is being Apple.


> They would likely argue that HEVC and H.264 are hardware accelerated on nearly all of their devices whereas VP8 likely isn’t.

I'm sure a lot of them do, but it's also true that there are a lot of Mac laptops out there which will be upgraded to High Sierra that don't have hardware HEVC acceleration.


WebRTC has codec negotiation, which means you can give preference to a particular codec while still supporting both.
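
As a sketch of what that preference looks like in code: the transceiver-level API shown here landed in browsers after this thread, and at the time the same effect was achieved by reordering codecs in the SDP; the H.264-first ordering is just an example.

    // Sketch: prefer H.264 but keep every other supported codec (e.g. VP8)
    // as a fallback, so a VP8-only peer can still connect.
    const pc = new RTCPeerConnection();
    const transceiver = pc.addTransceiver('video');

    const codecs = RTCRtpReceiver.getCapabilities('video').codecs;
    const preferred = [
      ...codecs.filter(c => c.mimeType === 'video/H264'),
      ...codecs.filter(c => c.mimeType !== 'video/H264'),
    ];
    transceiver.setCodecPreferences(preferred);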


Except that open source and free software can't (legally) do that.

Both HEVC and H.264 require the patent holders to be paid in order to be allowed on either a device or content.


Right.... so an open source program/device might only offer VP8. While Apple could offer both H264/HEVC and VP8, preferring the former.


They could, but as discussed in this thread, they won't. This means Chromium, for example, will not be able to do WebRTC video with Apple devices.


Cisco provides a fully licensed encoder, OpenH264, that you can download for free (Firefox uses it). That loophole was removed for H.265, though. I would have rather had only VP8 mandatory to implement in the standard, but at least this situation is better than the reverse.


Is that a political or technical decision?

One obvious technical issue is that, as far as we know, there's no VP8 decoding hardware in any of Apple's products; implementing VP8 decoding in software might be more of a power drain than Apple wanted.


> there's no VP8 decoding hardware in any of Apple's products

But what I found interesting in the WWDC session you linked to was that a lot of Apple products don't have hardware HEVC decode and/or hardware HEVC encode support. Apple has implemented software HEVC decoding and encoding in a lot of places. From that perspective, adding support for VP8 and VP9 wouldn't be much different.


Apple avoids supporting free codecs because they are part of the closed codec cartel. So they do all they can to delay the adoption of free codecs. Note that they didn't join the Alliance for Open Media, while even Microsoft did: http://aomedia.org


Do you think you would get an answer to this question?


I can't get many WebRTC projects/test pages to actually work on the new Safari preview. Is there anywhere they list the specific features implemented from the spec?


We're working individually with WebRTC sites to get them running. There are complications because many sites use legacy APIs that are not in the spec but still exist in Chrome. We're adding some of those for more compatibility.


Is there any chance for screen (or window or tab) sharing to make it in?


Not in High Sierra / iOS 11 but we're aware of this and considering it for a future version.


Shame it didn't make the current version, but great to hear it will be available in a later one.


To be clear, no promises. But we're definitely considering it and it seems like useful functionality.


It looks like RTCPeerConnection(config) is throwing an error if you don't pass in a null config.


Actually, the problem was using the deprecated 'url' property, rather than the newer 'urls' (which works).
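
For anyone hitting the same thing, a minimal sketch of the two shapes (the STUN hostname is hypothetical):

    // Deprecated shape ('url'), which Safari's implementation rejects:
    // new RTCPeerConnection({ iceServers: [{ url: 'stun:stun.example.org' }] });

    // Spec shape: 'urls', which may be a single string or an array.
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.example.org' }],
    });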


Most on this page work: https://webrtc.github.io/samples/



Unfortunately this test page uses legacy APIs that have been removed from the spec years ago, so it can't give an accurate assessment.



The audio capture fails, then the camera gets stuck on "Check resolution 320x240" without error.

If I tick "Remove Legacy WebRTC API", and retry, they all fail until it gets stuck with no message on "Udp enabled".


SO excited about this. Data-channel and all. Non-jailbroken torrenting on iOS at long last :P https://webtorrent.io/

Also, definitely worth trying out this AR.js demo if you're running the beta: https://freinbichler.me/apps/web-ar/

Always thought it was a little peculiar that the Media Capture APIs are tied to what is otherwise a very data/protocol-oriented spec.

Wish we could have had a camera feed in a canvas element on Mobile Safari a few years ago without having to wait for the entirety of WebRTC to be vetted. :P

---

And now, for my own unrelated Web API peeve:

Beyond service workers and all that jazz, I'm a little bummed out that the Pointer Events API isn't even listed on the Webkit Feature Status page: https://webkit.org/status/

It just seems like such a pleasant unification of all of the input-type APIs (mouse, touch, pen, hypothetical future peripherals...)

At least touch events have a "force" property with Apple Pencil input in the meantime. No tilt though.

Maybe next year :/


How would this allow you to torrent on iOS without jailbreaking the device?

My phone, though an iPhone 6, seems to shut down Safari if I've been using other apps long enough. I can't imagine it staying alive long enough to torrent anything large.


Just have an iframe on the web torrent client page so you can reddit to pass the time!


Didn't test it, but I'd guess that playing some media (audio, preferably) should keep Safari alive.


Keep Safari active. You can use other apps in split screen while the file downloads.


No split screen on the phones (unless perhaps the plus size allows it, but I don't think so).


Even with the recent good news from Safari as far as standards compliance goes, I suspect Apple will have to be dragged kicking and screaming into shipping Pointer Events. It's basically admitting defeat.


Aren't there torrent clients in the App Store?


What WebRTC apps can we expect will be mainstream in, say, 6 years? I'm just trying to picture what this technology will allow (or replace).


That's the $1B question. The obvious application is video conferencing in the style of Hangouts and Skype (which are using WebRTC already). The nice thing about having video conferencing capabilities in the browser is that you can embed it in any web page and start doing interesting things with it - like interactive lectures. Improved gaming networking is another vertical. We've built Peer5 (W17) on top of WebRTC to create a P2P CDN for video streaming.


I doubt you'll see gaming pick up WebRTC, for many of the reasons that Glenn built netcode.io [1]. It's overly complex and doesn't guarantee UDP, which is a non-starter for most games.

[1] http://new.gafferongames.com/post/why_cant_i_send_udp_packet... / https://github.com/networkprotocol/netcode.io


> ... there is a trend away from peer-to-peer towards client/server for multiplayer games ...

Really? Why? It seems obvious to me that communicating directly with other players would be faster than relaying messages through a server. Even if the server and the clients are all in the same building, the server is still going to add the latency of its entire stack. Unless that server is acting as a very simple stream-oriented traffic controller, then surely its latency is in the 5-20 millisecond range, at least?

I know WebRTC is complex, but minimal latency is such a critical feature for multiplayer gaming, that surely, if it works at all, then it is worth the trouble?


> Really? Why?

Cheating. Having an authoritative server reduces the ability to cheat and limits the type of cheating that is possible.

The more you trust the client, the more cheating can affect the game.

Also, client-server scales to more players.


The server also handles arbitration, which would add a ton more latency if you instead had to determine it via consensus among all the other clients.

His lib is based on real-world pain that many game devs have hit trying to integrate WebRTC. It's the same reason you see Lua thrive while V8/etc are rarely a part of a game engine.


It guarantees UDP for all of the cases when UDP would work. You can choose to not use TCP if you like, though I think most games would actually desire a TCP fallback.


WebRTC Data Channels don't actually let browsers send arbitrary UDP packets over the wire; rather, they only let you send SCTP packets tunneled over UDP. This distinction sounds pedantic because you can configure SCTP to deliver unreliable and unordered messages just like UDP, but there are some important caveats. For example, one of the benefits of UDP in real-time gaming is that packets can be dropped with minimal impact because it doesn't have head of line blocking. However, SCTP mandates congestion control which will start buffering your outgoing packets if it detects a minute or so of sustained packet loss (at least in libwebrtc). While congestion control is generally a good thing, in this case, it causes the game to grind to a halt. In addition, there is some overhead to sending the SCTP metadata which is suboptimal in bandwidth-heavy use cases like synchronizing a large physics simulation or supporting slower connections.

That being said, I'm very excited about WebRTC and its inclusion in Safari. It's not a silver bullet that exposes a simple UDP interface, but it's a welcome alternative to WebSockets for use in real-time games.
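
To make the trade-off concrete, here is a minimal sketch of the data-channel configuration games typically use to get the closest thing to UDP-style delivery (the channel name and handler are hypothetical); the SCTP framing and congestion control described above still apply underneath:

    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel('game-state', {
      ordered: false,     // don't block newer messages behind lost ones
      maxRetransmits: 0,  // drop lost messages instead of resending them
    });

    channel.onmessage = (event) => {
      // Apply the latest snapshot; older or lost ones are simply skipped.
      applySnapshot(JSON.parse(event.data)); // applySnapshot is hypothetical
    };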


Pretty sure you don't want TCP (as the article below about X-Wing lays out pretty well)

http://www.gamasutra.com/view/feature/131781/the_internet_su...


What about a WebRTC version of Twitch.tv? Is that feasible in 5 or 10 years? Twitch does transcoding and I'm not sure how transcoding would work with WebRTC.


What do you mean exactly by that? You want to replace the contribution side (camera to server) by WebRTC or delivery side (server to screen) or both?

Contribution: there are platforms that already use WebRTC instead of RTMP to stream the live camera feed. You can use TokBox's APIs, for example.

Delivery: This is where it gets more tricky. The nice thing about HTTP-based streaming like HLS and DASH is that it's cacheable just like any other file served over HTTP, making it extremely scalable - that's how most CDNs operate today. Changing that part to WebRTC has its benefits, like low latency, but scaling gets much harder because of the connection-oriented, stateful nature of WebRTC. There are companies trying to build that as well, e.g. Red5.


It would probably be possible for smaller communities today, given how cheap bandwidth already is. For comparison, I'm currently paying ~$13/mo for a (symmetric) gigabit connection.

The main reasons for transcoding are downstream bandwidth usage and codec support. WebRTC mandates VP8, so codecs shouldn't be an issue. Downstream bandwidth usage probably wouldn't be a major concern when it's viable to stream at 1080p to thousands of viewers from a home connection (though I doubt that's going to happen soon). So you could probably ignore transcoding entirely.

However, marketing would probably be an issue, since this architecture would prevent streamers from having more than about a hundred watchers, meaning that the big fish would stick to Twitch. That said, I suppose you could have a failover system, where larger streamers would switch to a relay-based system.


Where do you live? Internet is way more expensive in the US.


Stockholm.


If you're lucky enough to have Google Fiber in the US it's $70/mo for symmetric gig, IIRC.


You can also fall back to a classic client/server model with WebRTC for one-to-many broadcasting, while still keeping the advantages of very low latency (and reusing much of the same code).


That actually already exists and it's owned by Microsoft: https://mixer.com/


I look at it as WebRTC bringing the browser closer to an OS, in terms of the capabilities offered by the browser to application developers, out-of-the-box and without needing to convince or have users install something.

In terms of replacing, the obvious ones (as mentioned) are applications like Hangouts, Skype, GoToMeeting, etc.

In terms of enabling, time will tell, but I try to think about what kinds of applications were possible before but not practical because of the need to get everyone to install something. Hopefully this will increase the competitiveness of the web relative to apps.


Well, as of right now, my browser-based real-time action game won't suffer from head-of-line blocking in TCP-based WebSockets on Safari.


Applications that rely on P2P data channels combined with end-to-end encrypted signalling like https://saltyrtc.org/ might also be an interesting area.

One of the projects using WebRTC data channels right now is Threema Web to connect smartphone and browser across networks without trusting a server: https://github.com/threema-ch/threema-web (Disclaimer: I'm involved in development)


People can make web apps that communicate with each other without paying for servers - hobbyists, students, developers in third world countries etc. Collaboration tools focused on local or niche things, text / video chat, multiplayer games, etc.

WebRTC can communicate on the local network without a round trip to the ISP uplink, so it can get high bandwidth in managed environments (industrial, etc.).


We at Lookback use WebRTC to do live streaming research sessions for desktop, iOS, and Android. It works pretty neatly, even though you need an SFU (a rebroadcaster) to support a heavier load than a few peers watching. So the P2P style of doing video sort of gets lost anyway...


Application neutral WebRTC rebroadcasting becomes a commodity this way though, so it should become quite cheap.


Wow, this has been a long time coming. So many shoddy video communication hacks can finally go away. This is huge.


Should we assume mobile safari will also get WebRTC support, or is that a lot less likely to happen in the near future?


It's coming to both macOS and iOS in Safari 11 and iOS 11.


Can you share a link please?


Here's the announcement on the WebKit blog: https://webkit.org/blog/7726/announcing-webrtc-and-media-cap...


iOS is specifically mentioned:

https://webkit.org/blog/7726/announcing-webrtc-and-media-cap...

"Today we are thrilled to announce WebKit support for WebRTC, available on Safari on macOS High Sierra, iOS 11, and Safari Technology Preview 32."


AFAIK it was part of the iOS 11 announcement, so I'm inferring it'll be part of it, but I couldn't find written proof.


Does this mean Apple may actually implement Service Workers and Web Push in the next Safari?? :)


I hope not as Service Workers is an awful technology.

It is completely hostile to users. If I close a web page I expect everything related to that web page to stop. Immediately. I don't expect there to be lingering background threads consuming battery life, network data and disk storage. And in Chrome you can only discover service workers by enabling a Debug mode and memorising a specific URL. Exactly how does a non-developer figure out what is going on ?

If Apple had any sense they would ban service workers and instead propose use case specific, tightly focused and managed APIs that focus on security and battery life first.

Also just a hint at a future where Botnets are running in your browser:

https://sakurity.com/blog/2016/12/10/serviceworker_botnet.ht...


Service workers were designed to only work during events which you opted into. Either because you loaded the site which is now making requests, or because you OPTED IN to receive notifications.

Think about it this way... why do you have to download an entire app just to use a site? Don't like the notifications? Turn them off.
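
For context, a minimal sketch of that event-driven model (the file name and notification text are hypothetical): the worker only runs while handling an event it registered for, such as a push the user subscribed to.

    // sw.js - the worker is woken to handle specific events (install, fetch,
    // push) and is otherwise idle or terminated by the browser.
    self.addEventListener('push', (event) => {
      // Only fires if the user previously granted a push subscription.
      const payload = event.data ? event.data.text() : 'New message';
      event.waitUntil(
        self.registration.showNotification('Example site', { body: payload })
      );
    });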


Yes because ordinary users never, ever opt in to things they shouldn't. And because of that one opt-in mistake suddenly they have invisible service workers mining Bitcoins which they could never learn about since the discovery mechanism for service workers is currently non-existent.

Again, the concept of background tasks isn't necessarily a bad thing. But it needs to be locked down to specific use cases not a free for all allowing arbitrary JS to be executed.


So limit the amount of CPU/bw/etc. they can use, just like mobile platforms do for background apps?


Installing an app is a higher bar than tapping the "get out of my way" button on a modal dialog that some website just shoved in your face.


Oh good. It can only burn 10% of my battery and bandwidth to make someone else rich. That's fair.


Service Workers are "under consideration" (https://webkit.org/status/#specification-service-workers), but there's no mention of Web Push. We can only hope.


Would assume no since they have APNS built into Safari/macOS [1] with a registration workflow that is nowhere close to the one that the Push API spec uses. I think Apple would keep it that way since they are sensitive about notifications not being used to spam/advertise.

[1]: https://developer.apple.com/notifications/safari-push-notifi...


I wish Web Push would go away. The last thing I/most people/my in-laws who can't use a computer need is more notifications.


By Web Push you mean HTTP/2's push feature? If so then yes (http://caniuse.com/#search=http%2F2).

AFAIK Service workers are still under consideration, and nothing was announced.


By Web Push EGreg probably means https://www.w3.org/TR/push-api/ which requires Service Workers as a dependency.


ive dreamt of this for years!!! so excited.

quick question: i noticed that the camera isn't available on iOS 11 if you put a web-app capable page onto your homescreen. when you open the page in full screen, mediaDevices is undefined. is that just an iOS beta bug?


It’s not a bug. Spoke to an Apple engineer about this today at WWDC. There are security concerns around enabling media devices for webviews outside Safari, but it should be coming ‘soon’.
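
In the meantime, a defensive feature check avoids hard failures in the standalone/home-screen context; a minimal sketch (the function name is hypothetical):

    async function startCamera(videoElement) {
      // navigator.mediaDevices is undefined in some contexts (e.g. the
      // home-screen web-app case described above), so check before using it.
      if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
        console.warn('Camera capture not available in this context');
        return;
      }
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      videoElement.srcObject = stream;
    }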


Noooooooo


Yeah... I’d file a bug anyway though, might help them prioritise it.


Sounds like a bug to me. You should file it at http://bugreport.apple.com.


will do!


So now they'll start supporting Opus at last, or will they find another excuse not to support free codecs? Once they support it, AAC can be thrown in the garbage.

And when are they going to support MSE[1] in iOS Safari?

1. https://en.wikipedia.org/wiki/Media_Source_Extensions


Anyone else feel like Apple is the new Microsoft and they hold the world hostage by only updating their products every couple years with features we actually want? It seems like just as the unrest is about to hit critical mass they spring into action and implement just enough to placate everyone for another couple years.



Plan B is pretty disappointing, Chrome compatibility winning over standards compliance :(


I'll take a quick moment to share my favorite web app which uses WebRTC: Instant.io [0]. I use it all the time to transfer files among local devices, or to send medium to large files to friends.

[0] https://instant.io


It would be interesting to see surveys about mobile IPv6 - will it get along with P2P? Will mobile users get P2P video this way, or have the operators crippled end-to-end connectivity?


TypeError: nativeMediaDevices.addEventListener is not a function

still some bugs maybe...


This is a capturing API, I guess?

Using webrtc-adapter, all of my WebRTC code instantly worked. Though it doesn't seem to want to connect to Firefox peers.
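
For reference, a minimal sketch of how the shim is typically pulled in (assuming a bundler; a plain script tag works too):

    // Load webrtc-adapter before any WebRTC code so prefixed/legacy
    // differences between browsers are papered over.
    import adapter from 'webrtc-adapter';

    console.log(adapter.browserDetails.browser, adapter.browserDetails.version);
    // From here on, the standard APIs (RTCPeerConnection, getUserMedia)
    // can be used the same way across Chrome, Firefox, Edge and Safari.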


So, anyone know how one might block WebRTC in Safari?


Can it be disabled? (VPN privacy, and all...)


YASSS!!! FINALLY!


Yayaya!


Still only supports proprietary codecs from what I understand. So not "full".


It does support Opus for audio! No VP8 yet, but halfway there.


I saw this yesterday on twitter and my thought was "they don't support webrtc yet?!? seriously?"

It sometimes baffles me how Apple is holding up progress on the web - and that they aren't criticized more for it.


They're not holding up progress. They have different objectives, and chief among them is power efficiency. Chrome eats up a lot more battery life than Safari.[1] So did Flash (among other conflicts), and I think we all accept now that "holding that up" was good for the web in the long run.

Apple tends to take a little longer and suffer a little short-term pain to get it right in the long run.

[1] https://daringfireball.net/2017/05/safari_vs_chrome_on_the_m...


For all the leaps and bounds we saw being made by ARKit, desktop Safari still does not support typical 360 video due to a CORS non-compliance issue. Said issue has been submitted and known for two years.


Apple has absolutely kneecapped web technologies. Mobile Safari has simple, egregious bugs that have been open for years. We're talking about things like being able to crash the browser with CSS or file selection, etc. It's impossible to make the case that Apple is just rejecting features for the sake of the user. The next question is why, and you'll need some creativity to come up with any other answer than pointing to the monolithic, money printing app store.


Any input that can crash the browser is a potential security vulnerability. Can you point me to an example of an unpatched crash in Safari that has been "open for years"?


We'd super appreciate bug reports of reproducible crashes and we definitely try to look at them. Bugs welcome at http://bugs.webkit.org/


This is just mince. There are no long-standing crashes in Safari that "crash the browser with CSS or file selection". And from a purely anecdotal point of view, I've found it the least annoying browser to use by far.


I think you missed the Mobile part. Do a search for "Mobile Safari file selection crash" – that one minor but severe example was still happening as of earlier this year and for at least four years prior. Mmm, mince!


Posts like this genuinely confuse me.

If all Apple cared about was the App Store then why do they continue to hire WebKit engineers and build new features ? It would be easier just to fork WebKit, never add anything new and deliberately make it so unusable that everyone rushes to apps.


Sincerely, what's confusing about it? There is a massive monetary incentive toward native. I don't think it's any kind of top-down nefarious decision, but is probably an emergent quality based on priority.

I worked with a group of engineers full time for years around mobile/iOS, and we were pretty amazed at how severe but unaddressed these bugs were. The move to WKWebView was helpful, but still lacking. The app store parallel that you take issue with is an easy one to make when you contrast those failings against the talent and financial bandwidth of a company that size.


> Sincerely, what's confusing about it? There is a massive monetary incentive toward native.

No there's not. I'm a web developer so I'm pretty invested in the web platform, but even I can see how incredibly naive this assertion is. Native will always win due to just having more power and capabilities. Web will always be a step behind.

Never attribute to malice that which is adequately explained by apathy or lack of resources.


Did you read the next sentence after what you quoted?

Because Apple has always shaped so much of our development environment, it's easy to take these things as natural law. I'm assuming you're a younger developer based on your assertions – I would probably even agree with those suppositions based on the tone of posts like mine that seem anti-Apple. But the history of these technologies is glaring, and native/web/hybrid apps have never been given equal footing.

Nothing you said refutes a financial incentive in the app store, and most apps don't need "more power and capabilities", they need access to a basic API that isn't randomly crippled.

If you care about this subject, read about the history of UIWebView, WKWebView and hybrid apps, you will see how uniquely broken the platform is. Android doesn't suffer the same problems for a host of reasons, but that makes sense given Google's incentive for web. Hybrid and web apps are then not terribly appealing since they're only effective on one platform, so the ecosystem stagnates.

Here's a post I saw a few days ago, by coincidence: https://hackernoon.com/if-it-werent-for-apple-hybrid-app-dev...


As long as we get there eventually, I will always side with taking the time to do it right. There is room in the world for both a highly advanced browser, with the sacrifices that come with that, and a workhorse. Nothing is stopping you from having them both installed (except, well, on iOS).


> Nothing is stopping you from having them both installed

Sure, but it stops your site from working cross-browser.


Even with both installed on iOS, WebRTC doesn't work.


Maybe you were still in middle school but there was a time when the web was dead and IE killed it. Mozilla and Apple resurrected the web.

Apple famously refused to allow Flash on iOS and in doing so single-handedly did more to push HTML5 standards than anyone else. I'll also point out that WebRTC has gone through numerous iterations to such a degree that a lot of WebRTC sites don't work with Safari 11 because they're using old non-standards-compliant variants of the API.

The typical HN web dev seems to load their site in Chrome, click around a bit, and call it a day. That's not healthy for standards or the web, but it sure is lazy and expedient for a very narrow group of people.

Chrome is not the web.


I agree in essence, and I personally hate how they tend to handicap mobile Safari, but the recent announcement of WebRTC and WebAssembly support (including mobile!) means they're catching up fast, and maybe changing their approach to web technology.


I believe that’s a short-sighted view. The past year or so, the Safari/WebKit teams have been killin’ it. I use Mobile Safari a lot each day and I can tell you it’s not handicapped.


I'm curious if you do Mobile Safari development, however.

I don't believe it's a short-sighted view. I think it's quite the opposite; my opinion comes from spending years doing (web) development for it.

Their focus has always been on things that matter little (they were by far the first to implement `backdrop-filter`) while still keeping important things severely broken (such as `position: fixed`) and messing with things that should work in a pretty obvious manner anywhere else (like overflow scrolling).

WebAssembly and WebRTC are the opposite of that. "Last year or so", maybe, but it'll take more than one year of good work for me to look at the team in a good light.


Apple and Microsoft waited five years to deploy this thing. Given how quickly it was implemented and adopted by other browsers, I have to wonder what their motives were for deliberately not integrating it; because frankly they could've done it pretty quickly at any point in the last five years.

I remember having to integrate WebRTC through cordova (thank goodness somebody did it first) a couple years ago, and wondering why on earth Apple didn't find time or money for this. libwebrtc has been stable for years, it represents effectively no security risk, it performs well on all of their platforms. It truly boggles the mind.


WebRTC is actually quite complicated to implement and it took a large effort by a big chunk of the WebKit team, even though we had a lot of code that we could reuse. People have a lot of theories about Safari deliberately omitting one feature or another, but the truth is there's only so much we can do at once.

ETA: If you look at the page counts of all the required specs (ECMAScript APIs, WebRTC protocol specs, underlying codec and protocol specs like Opus and RTC) it adds up to multiple thousands of pages. By comparison, the whole HTML5 spec (current WHATWG version) is 855 pages.


I understand, though when it comes to WebRTC, there is an existing, efficient, stable implementation in a license which is compatible with WebKit and Apple's proprietary embeddings of it. You aren't going to be translating the WebRTC protocol diagrams and interfaces yourself. The main effort would be elsewhere (adapting it to Apple's platform TLS libraries perhaps, verifying the bindings, fitting it into the compositor, sandbox, etc.).

I would think the greatest barrier to integrating Opus would be getting Apple legal to agree not to sue other Opus adopters. Fair enough, but that's not five years' worth of legal research. I would be gobsmacked if Apple bothered to write their own encoder or decoder just for WebKit; you don't need to read the bitstream format documentation.

And again, Apple has so much cash on hand that it takes the accountants five minutes to walk across a line on their balance sheets. Maybe the rest of you are busy, fine, but there are people in the general public who are qualified to put WebRTC in WebKit. Apple could send some money to the qualified people at Collabora, Igalia, or Ericsson Research (who I think have had WebRTC in WebKit since 2015).

All in all, it's absolutely bogus that Safari hasn't had WebRTC for at least two years already. You can make all the excuses you want, but this API drives more customer value than any ES6 feature. Nobody actually deploys ES6 on the web today, because it has forced browser vendors to completely reengineer their compilers, taking half a decade each, and as a result it has not been deployed long enough with decent performance to offer any value. WebRTC has been deployed for going on five years in valuable applications which are otherwise completely impossible on the web, half the other crap has no userbase, and offers questionable tangible value.


> I understand, though when it comes to WebRTC, there is an existing, efficient, stable implementation in a license which is compatible with WebKit and Apple's proprietary embeddings of it.

Just like with all of the other web standards, I'm pretty sure the WebRTC code is in the publicly available repo.

Obviously Apple isn't afraid to collaborate with other entities--Igalia did the CSS Grid implementation that ships in Safari and WebKit.


"Nobody deploys ES6 today since no browsers support ES6 yet so there's no point in browsers spending engineering effort adding ES6 support"

logic.


I'm not related to the Safari or any other browser team in any way. But I totally believe that it's a giant amount of work. I personally researched the effort that would be required just to implement WebRTC data channels in a server application, and was rapidly put off after seeing that I would need to support STUN, TURN, ICE, DTLS and SCTP, each of them with a giant spec. The media formats and all the JS APIs would add on top of that.

I guess it's also not an easy decision to "just include libwebrtc" in one's own application, since that's a giant dependency which pulls in lots of Chromium code.


We did end up using libwebrtc, but we had to strip a lot of the dependencies and also update it to use system services instead where appropriate.


Thanks for the info. One more question on that: do you think it makes sense to re-release that independently or upstream the changes so that others could also include a more lightweight WebRTC library? Or are your changes too tightly tied to Safari? I guess an optimal solution would need to have some kind of platform abstraction layer which then gets implemented by integrators like Chrome or Safari. Probably a big effort.


There's OpenWebRTC, which is optimized for standalone applications like media servers and proxies. The version in Chromium is also pretty easy to build and run in C/C++ applications on Linux, but requires extra work for an application like Safari.


That means you're aiming for WebRTC 1.0, not for ORTC, right?


For now, yes. Our main goal in the short term is interop, since we are a little late to this party.


Don't forget energy efficiency. That's a huge priority for Apple and explains a lot of delays. Safari is way more energy efficient than Chrome.[1]

[1] https://daringfireball.net/2017/05/safari_vs_chrome_on_the_m...


This is completely unrelated. This is a feature which their platform completely lacks; if they added it, it would not have an effect on the energy efficiency of code that does not use it.


In a sense you're right. But we also want to make sure WebRTC websites don't blow out your battery, and we went to some effort to make sure it uses efficient video encoding/decoding paths. That said, this was not a majority of the effort.


Besides battery life, Chrome on the Mac prevents sleep indefinitely sometimes. If you check with pmset -g assertions, you'll see Chrome saying "WebRTC has active PeerConnections".

No thanks Google, you shouldn't get to decide when my PC goes into sleep mode.

I really hope Safari is better than that...

Edit: just as I'm typing this, Chrome has:

pid 19325(Google Chrome): [0x001ccbbb00018b19] 01:55:40 NoIdleSleepAssertion named: "WebRTC has active PeerConnections"

2 hours for... what? I have no idea which page is guilty.


I believe Safari will block sleep only if there's actively playing video or audio in a foreground tab.


That seems more like a leak than normal behaviour.

If you go to Window > Task Manager it'll tell you which PID corresponds to which tab.


It's the main Chrome process, I killed it and all Chrome windows were gone :)

And I wouldn't call it a leak; you need to do extra work to prevent the system from sleeping, so someone did this intentionally.

Oh well, since I had to close Chrome last evening so my computer would sleep, today I opened Safari as my main browser instead. Google has been annoying me too much lately.


I meant that whatever was supposed to clear the wakelock didn't run, thus leaking the wakelock. I really do find your problem odd, I recommend taking a look at your extensions if you want to track it down at some point.

Safari's fine these days though, so you do you.


I've seen 6 hour wake locks as well. From Chrome. Even if it's a leak, why 6 hours?


But it would affect the efficiency of code that does use it, which is the whole point.


WebRTC is a total pain in the arse in practice - maybe I've just had a bad experience, but it seems to be a ludicrously over complicated solution to the problem, and still ends up unreliable.

I don't think that implementation of it is anywhere near as easy as you think.


Most of the complexity in the WebRTC ecosystem (aside from things that are already done for you) is in the server space. You need STUN/TURN, you need to scale it, you have no idea how exactly to achieve that without spending infinite money. If your clients don't support your codecs, or you're proxying to SIP land (where you often won't have common codecs aside from A-law PCM), then you have to figure out a way not to blow your budget (both money and latency) on transcoding.

That said, if you don't have the client in your browser, nobody gets to even try.

You may think that WebRTC is a pain, but try implementing video chat with getUserMedia and WebSockets, or a custom plugin, and get back to me on how much of a pain WebRTC is.
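
For what it's worth, the client side of the STUN/TURN story is just configuration; the operational burden is in running and scaling the servers. A minimal sketch, with hypothetical hostnames and credentials:

    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.org:3478' },
        {
          urls: 'turn:turn.example.org:3478',
          username: 'demo',        // TURN requires credentials, unlike STUN
          credential: 'secret',
        },
      ],
    });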



