
Hi, I'm a member of the WebRTC team at Google, and I wrote a lot of the data channel and network code that's part of that behemoth.

I'm glad you got this to work :). I've always hoped someone would do exactly what you've done (use the data channel code on a server to do low-latency client<->server communication).

We're aware of the difficulty and are actively working on making it easier to work with just the components you need for data channels (ICE, DTLS, and SCTP). It might take a while, but it's one of our goals.




Hey, I think you misread my comment, I haven't actually implemented it in any of my games. I've played with it for a bit but haven't gotten around to actually adding it to any of them. Sorry!

But yeah, I really hope it becomes easier to integrate, as right now that's the biggest barrier to putting it into my custom-written C++ server that I use for all my games. They already support UDP-only communication for desktop and mobile builds, and bringing it to the web would make the experience a lot better. Thank you!


Indeed, I misread your comment. Well, hopefully in the future my misreading will turn out to be correct. I don't think it would be that hard to write a data-channel-only server using BoringSSL and usrsctplib. See a comment of mine further down on the page for how you could do that.

Or you can just wait until we have our code refactored :).


I also played with the idea of using WebRTC data channels for client-server applications. From a system design perspective it looks good, since it can provide lots of concurrent flow-controlled streams (like HTTP/2) with potentially lower latency.

However from the pure engineering side things don't look so great for the reasons Matheus28 already mentioned: WebRTC (even data channels only) is a big stack of massive complexity. It's not something you want to implement or understand fully on your own within a reasonable amount of time (like you could do with WebSockets).

The most reasonable way to get WebRTC support seems to be integrating Google's native WebRTC library. However, one downside is that it's a big dependency that a lot of people might be uncomfortable bringing in (although you say you are working on making it smaller). The other downside is that it's not only big but a native dependency, which I and other people want to avoid wherever possible outside of C/C++ land.

The alternative solution would be to develop a pure Go/.NET Core/Java/etc. WebRTC data channel implementation. However, for this most of the required subcomponents are missing. Imho none of those even support the required DTLS encryption in their (extended) standard libraries, and there are also no libraries around for SCTP on top of UDP. Getting this to work is therefore a serious effort, and anybody who approaches it must ask themselves whether the effort is justified, or whether WebSockets and HTTP streaming are not good enough. For the latter, performance might even approach WebRTC data channel performance if QUIC gets standardized and widely deployed.

I think the situation might be different if WebRTC data channels had only standardized plain UDP, or possibly encrypted UDP. Anybody who needed them could still implement streams and multiplexing on top of it on the server and client (JS) side. The current solution provides a nicer out-of-the-box API, but supporting it outside of the browser is hard.
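For what it's worth, the basic multiplexing part isn't much code. Here is a minimal sketch of what that could look like on the JS side, assuming a hypothetical DatagramTransport with send/onmessage standing in for the imagined encrypted-UDP API (nothing like it actually exists in browsers today):

  // Hypothetical datagram transport, standing in for the "encrypted UDP only"
  // API this comment wishes had been standardized instead.
  interface DatagramTransport {
    send(data: Uint8Array): void;
    onmessage: (data: Uint8Array) => void;
  }

  // Trivial multiplexing: prefix each datagram with a one-byte stream id and
  // dispatch incoming datagrams to per-stream handlers.
  class Multiplexer {
    private handlers = new Map<number, (payload: Uint8Array) => void>();

    constructor(private transport: DatagramTransport) {
      transport.onmessage = (data) => {
        const handler = this.handlers.get(data[0]);
        if (handler) handler(data.subarray(1));
      };
    }

    // Registers a handler for `id` and returns a send function for that stream.
    openStream(id: number, onData: (payload: Uint8Array) => void) {
      this.handlers.set(id, onData);
      return (payload: Uint8Array) => {
        const frame = new Uint8Array(payload.length + 1);
        frame[0] = id;          // stream id
        frame.set(payload, 1);  // payload
        this.transport.send(frame);
      };
    }
  }

Reliability (where you want it), flow control and congestion control would still be on you, though, and that's exactly the part SCTP already solves; that's the trade-off described above.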


I agree completely. I feel what is needed is a UDP version of WebSockets. That's all I wish we had.


Using WebRTC for communicating game state seems like a hack. WebRTC is too complex, with too many parts. I try to get into it every year or so, but I can't even get the demos to work. Compare that to WebSockets, which are well supported and very easy to use, with fallback libraries such as sockjs.

If possible, I think it would be a good idea to break up the different parts of WebRTC so that they can work independently of each other. The abstractions are also a bit leaky, as you need to know about the underlying layer to use it. Another approach would be having a low-level API, which might be easier to implement in the browser, and then counting on libraries to provide good abstractions.


Everything in web development feels like a hack.


Oh, how I wish I could upvote this 100 times. Nay, 1000!

It was already bad enough 10ish years ago when it was a comparatively small pile of hacks, and there was hope that something could be done about it. But now? It's an enormous pile of hack upon hack. Full stack engineer? More like full hack engineer!

The main reason I worry about losing my job or moving to a new location is that web development jobs are a dime a dozen nowadays, while more traditional development seems less and less relevant. As much as I hate C++, I'll stick with it over the monstrosity that is Web 2.x.

[Insert the usual complaints about shitty languages, tooling, and gazillions of frameworks / reinvented wheels here.]


I can relate, as I've been a web developer for 18 years and before that I did some qBASIC. Some aspects of web development have been stable for over ten years and feel like a foundation. Everything with computers is a "hack", though: the transistor is a hack, and everything on top of it is a hack. It's only when things get stable and have a tight abstraction that they start feeling non-hacky. But when something is stable (aka perfect) it no longer evolves and starts to get old. So you either use old stuff that worked 10 years ago and will keep working many years forward, or just accept that the bleeding-edge technology you use today might be a bleeding mess tomorrow.

One thing I like about computers and programming is that it's all created by humans. I tried to go into physics and biology, but once you go deep nothing really makes any sense; it's all random. With programming there's always (well, most of the time) a reason behind design decisions.


I don't have any problem using old stuff that worked 10 years ago. Our field is young, and there haven't been any major advances in the way programming is done for the last 25 years or so. Even if there had been major advances, I don't see anything inherently bad or wrong about oldness; conversely, I don't see anything inherently good about newness. Something can be 30 years old yet still be better than any of its alternatives, even in our field where obsolescence is a fact of life. I also don't think stable software is prohibited from continued evolution—obvious examples include the common open-source Unix-like OSs, many of which have been stable for years, yet have continued to evolve new features like loadable kernel modules, direct rendering interfaces, network firewalls, containers/jails/zones, nifty file system improvements, etc. Even Berkeley sockets were once introduced by an OS that was already old enough to be licensed to drive.

Also, I disagree that all of computing is built upon nothing but hacks. Computing is underpinned by lines of theory whose fundamentals can legitimately be described as elegant or even beautiful. I'm thinking of things like universal Turing machines, the lambda calculi, type theory, the structured programming theorem, theories of concurrent, parallel, and/or distributed computation, automata theory, computability theory, complexity/tractability, universal/abstract algebra, relational algebra, unification, etc., but the elegance doesn't end where the theory ends. Many people, including myself, would consider Lisp to be profoundly beautiful, for example, perhaps even on multiple levels. Whether you like the language or not, it was a crowning achievement of early computation science, and it is far from unique in that regard.

Although I personally loathe the state of Web development, I don't hate Web developers. On the contrary, I'm very glad that there's no shortage of people who seem to enjoy it—especially as a long-time Linux user, I'm glad that since the dawn of "Web 2.0", I've had to worry less and less about being left out because third-party developers decided not to support my OS: more and more, I can just pop open my Web browser and use the exact same software anyone would use on Windows or MacOS. It's a double-edged sword, for sure, since along with the convenience and compatibility, browsers have become insane, bloated resource hogs, and if I'm not connected to the Net, there's a chance I won't be able to use the software I want or access the data I want. On top of that, philosophically I can't help but feel that XaaS for various values of X and "the Cloud" are regressions back to a time when personal compute power was prohibitively expensive, for reasons that are billed as convenience but in reality only serve to remove the freedoms of end users. Notwithstanding, I'm just going to focus on the technological issues I perceive for now, since the philosophical issue(s) demand a different class of solution altogether and don't belong uniquely to the Web anyway.

I suppose most of the issues I have with the Web as a software platform stem primarily from one force: organic growth over the course of two decades, as opposed to thoughtful and deliberate design by an engineer or group of engineers. The way the Web is now, especially when viewed as a software delivery and execution platform rather than as a document delivery platform, it's a Frankenstein's monster that has been pieced together from numerous disparate protocols and data formats all designed by different people, revised by still more different people, and oftentimes extended by yet more different people in order to cover use cases that had not been considered by the original designer(s), and then connected in the most straightforward ways possible, where each connection might consist of an entirely different mechanism than any other (rather than, for example, extending the protocols such that they provide a uniform connection mechanism).

However, I don't think organic growth on its own necessarily leads to monstrosities. I think that the force of organic growth has been guided by a couple of factors, similar to how evolution is guided by various forms of selection. For one, throughout the history of the Web, the goalposts of its continued development have moved time and again. Once a system for the distributed service of hypertext documents, it quickly became a service for hypermedia in general. Then it became a service for interactive trinkets. And it quickly became a service for commerce and enterprise. With the advent of Java applets and ActionScript-programmable Flash "movies", it became a service (but not yet a platform) for the delivery of applications. Then, of course, AJAX sparked a fundamental change in how the Web was viewed by developers and users alike: it finally became not only an application delivery service, but also a software platform! Since then, the goalposts have shifted only slightly, and the majority of these goals can be summarized as a desire to further enrich the software platform, first by doing the things Flash was once used for, then the things Java was once used for, coming to the point where there is a desire for a Web page to be able to do the things native desktop applications are typically used for, including even AAA games. For each set of goalposts, the context of the design of new Web technology has been different; as such, the notion of what has constituted a "good" design decision has also changed: sometimes, what was a good design decision at the time became not-so-good in a new context. The result has been—rather than a clear progression towards a single goal—a bunch of tumorous outgrowths in various directions with a line of best fit trending from "hypertext document service" to "hardware and operating system abstraction layer and virtual machine for shitty, poorly-performing, inefficient, possibly-distributed applications". The curious "architecture" of the Web is reflected in the architecture of Web applications: the number and complexity of the technologies that are needed to create even the most basic Web application is, frankly, ridiculous. And on top of it all, where platforms with like goals such as the JVM and CLR manage to provide first-class support for multiple programming languages, the Web manages to offer only one, and it happens to be particularly grimy (my fingers are crossed for WebAssembly).

The lesson of all this (and it's not a lesson unique to the Web by any means), is this: backwards compatibility is a bitch.

tl;dr all these young punks need to get off my lawn


I've built some multiplayer real-time web game prototypes using WebSockets with a sockjs fallback and the canvas 2D context. It's very fun and productive. You don't have to know OpenGL or TCP/IP, and it works everywhere, even on a five-year-old Nokia.


I've been trying to find some numbers on the performance characteristics of WebRTC unreliable data channels (compared to the spectrum between pure UDP and WebSockets). Do you know of any such comparisons? Or have any intuition on the matter?


> I'm glad you got this to work :).

Matheus28: Did you actually get it to work or did you give up? (pthathcerg: The comment only claims they "analyzed the possibility".)


I've played with a few simple examples but didn't integrate it into any games, see https://news.ycombinator.com/item?id=13267248


Where does the low latency in WebRTC come from? Is it just avoiding TCP's packet ordering, or is there something more to it?


You can receive data messages out of order. That's all there is to it. If you could receive TCP out of order, you could just use TCP.
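For game state, the usual pattern is to tag each update with a sequence number and simply drop anything older than what you've already applied; a stale update never blocks a fresh one the way it would on TCP. A rough sketch, where the 32-bit sequence-number framing is just an illustrative convention, not anything WebRTC defines:

  // `channel` is assumed to be an RTCDataChannel negotiated elsewhere with
  // { ordered: false, maxRetransmits: 0 }.
  declare const channel: RTCDataChannel;

  // Stand-in for whatever the game does with a decoded state snapshot.
  function applyGameState(state: ArrayBuffer): void { /* ... */ }

  channel.binaryType = "arraybuffer";

  // Hypothetical wire format: the sender prefixes every state update with a
  // 32-bit big-endian sequence number. Updates can arrive in any order, so we
  // keep only the newest and drop stale or duplicate ones instead of waiting
  // for retransmissions, which is where the latency win over TCP comes from.
  let lastSeq = -1;

  channel.onmessage = (event: MessageEvent) => {
    const data = event.data as ArrayBuffer;
    const seq = new DataView(data).getUint32(0);
    if (seq <= lastSeq) return; // older than what we've already applied
    lastSeq = seq;
    applyGameState(data);
  };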


Is there a way to specify ordering for specific packet types?


From the Article:

  Because SCTP is configurable, we can open multiple data channels with different settings, which saves us tons of work!
  Typically we have:
  an unreliable and unordered data channel for game state data and user inputs
  a reliable and ordered data channel for chat messages, scores and deaths
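In browser API terms, that maps onto RTCDataChannel options roughly like this (the channel labels are just examples):

  const pc = new RTCPeerConnection();

  // Unreliable, unordered: lost messages are never retransmitted and
  // delivery order is not guaranteed. Suited to per-tick state and inputs.
  const stateChannel = pc.createDataChannel("state", {
    ordered: false,
    maxRetransmits: 0,
  });

  // Reliable and ordered (the defaults). Suited to chat, scores and deaths.
  const chatChannel = pc.createDataChannel("chat");

  chatChannel.onopen = () => chatChannel.send("hello");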


Also, WebRTC is inherently p2p, so you can connect directly to the other peer without a server getting involved except for connection setup (STUN and TURN). You can basically send messages over the LAN, even. Latency improves in many scenarios.


Latency has very little to do with the p2p aspect. In a client-server setting, the use of RTP/SCTP to communicate payloads between two clients would have significantly less latency than, say, a P2P TCP connection.


You can't do packet reordering on top of TCP, as it is already ordered; RTCDataChannel uses SCTP instead.



