I'm glad you got this to work :). I've always hoped someone would do exactly what you've done (use the data channel code on a server to do low-latency client<->server communication).
We're aware of the difficulty and are actively working on making it easier to work with just the components you need for data channels (ICE, DTLS, and SCTP). It might take a while, but it's one of our goals.
But yeah, I really hope it becomes easier to integrate, as right now that's the biggest barrier to putting it into the custom-written C++ server I use for all my games. They already support UDP-only communication for desktop and mobile builds, and bringing that to the web would make the experience a lot better. Thank you!
Or you can just wait until we have our code refactored :).
However, from the pure engineering side, things don't look so great, for the reasons Matheus28 already mentioned: WebRTC (even data channels only) is a big stack of massive complexity. It's not something you can implement or fully understand on your own within a reasonable amount of time (as you could with WebSockets).
The most reasonable way to get WebRTC support seems to be integrating Google's native WebRTC library. One downside is that it's a big dependency that a lot of people might be uncomfortable bringing in (although you say you are working on making it smaller). The other downside is that it's not only big but also a native dependency, which I and other people want to avoid wherever possible outside of C/C++ land.
The alternative would be to develop a pure Go/.NET Core/Java/etc. WebRTC data channel implementation. However, most of the required subcomponents are missing: as far as I know, none of those platforms supports the required DTLS encryption in its (extended) standard library, and there are no libraries around for SCTP on top of UDP either. Getting this to work is therefore a serious effort, and anybody who approaches it must ask themselves whether the effort is justified, or whether WebSockets and HTTP streaming aren't good enough. The latter might even reach WebRTC data channel performance if QUIC gets standardized and widely deployed.
I think the situation might be different if WebRTC data channels had only standardized plain UDP, or possibly encrypted UDP. Anybody who needed them could still implement streams and multiplexing on top of that on the server and client (JS) side. The current solution provides a nicer out-of-the-box API, but supporting it outside of the browser is hard.
If possible, I think it would be a good idea to break up the different parts of WebRTC so that they can work independently of each other. The abstractions are also a bit leaky, as you need to know about the underlying layer to use it. Another approach would be a low-level API, which might be easier to implement in the browser, counting on libraries to provide good abstractions on top.
It was already bad enough 10ish years ago when it was a comparatively small pile of hacks, and there was hope that something could be done about it. But now? It's an enormous pile of hack upon hack. Full stack engineer? More like full hack engineer!
The main reason I worry about losing my job or moving to a new location is that web development jobs are a dime a dozen nowadays, while more traditional development is seeming less and less relevant. As much as I hate C++, I'll stick with it over the monstrosity that is Web 2.x.
[Insert the usual complaints about shitty languages, tooling, and gazillions of frameworks / reinvented wheels here.]
One thing I like about computers and programming is that it's all created by humans. I tried to go into physics and biology, but once you go deep, nothing really makes sense; it's all random. With programming there's always (well, most of the time) a reason behind design decisions.
Also, I disagree that all of computing is built upon nothing but hacks. Computing is underpinned by lines of theory whose fundamentals can legitimately be described as elegant or even beautiful. I'm thinking of things like universal Turing machines, the lambda calculi, type theory, the structured programming theorem, theories of concurrent, parallel, and/or distributed computation, automata theory, computability theory, complexity/tractability, universal/abstract algebra, relational algebra, unification, etc., but the elegance doesn't end where the theory ends. Many people, including myself, would consider Lisp to be profoundly beautiful, for example, perhaps even on multiple levels. Whether you like the language or not, it was a crowning achievement of early computer science, and it is far from unique in that regard.
Although I personally loathe the state of Web development, I don't hate Web developers. On the contrary, I'm very glad that there's no shortage of people who seem to enjoy it—especially as a long-time Linux user, I'm glad that since the dawn of "Web 2.0", I've had to worry less and less about being left out because third-party developers decided not to support my OS: more and more, I can just pop open my Web browser and use the exact same software anyone would use on Windows or macOS. It's a double-edged sword, for sure, since along with the convenience and compatibility, browsers have become insane, bloated resource hogs, and if I'm not connected to the Net, there's a chance I won't be able to use the software I want or access the data I want. On top of that, philosophically I can't help but feel that XaaS for various values of X and "the Cloud" are regressions back to a time when personal compute power was prohibitively expensive, for reasons that are billed as convenience but in reality only serve to remove the freedoms of end users. Still, I'm going to focus on the technological issues I perceive for now, since the philosophical issues demand a different class of solution altogether, and they don't belong uniquely to the Web anyway.
I suppose most of the issues I have with the Web as a software platform stem primarily from one force: organic growth over the course of two decades, as opposed to thoughtful and deliberate design by an engineer or group of engineers. Viewed as a software delivery and execution platform rather than as a document delivery platform, the Web is now a Frankenstein's monster, pieced together from numerous disparate protocols and data formats designed by different people, revised by still more people, and oftentimes extended by yet more people to cover use cases the original designers had not considered. These pieces were then connected in the most straightforward ways possible, where each connection might consist of an entirely different mechanism than any other (rather than, for example, extending the protocols so that they provide a uniform connection mechanism).
However, I don't think organic growth on its own necessarily leads to monstrosities. Rather, the force of organic growth has been guided by a couple of factors, similar to how evolution is guided by various forms of selection. For one, throughout the history of the Web, the goalposts of its continued development have moved time and again. Once a system for the distributed service of hypertext documents, it quickly became a service for hypermedia in general. Then it became a service for interactive trinkets, and then for commerce and enterprise. With the advent of Java applets and ActionScript-programmable Flash "movies", it became a service (but not yet a platform) for the delivery of applications. Then, of course, AJAX sparked a fundamental change in how the Web was viewed by developers and users alike: it finally became not only an application delivery service, but also a software platform! Since then, the goalposts have shifted only slightly, and the majority of these goals can be summarized as a desire to further enrich the software platform: first by doing the things Flash was once used for, then the things Java was once used for, up to the point where there is a desire for a Web page to be able to do the things native desktop applications are typically used for, including even AAA games. For each set of goalposts, the context of the design of new Web technology has been different; as such, the notion of what constituted a "good" design decision has also changed: sometimes, what was a good design decision at the time became not-so-good in a new context. The result has been, rather than a clear progression towards a single goal, a bunch of tumorous outgrowths in various directions, with a line of best fit trending from "hypertext document service" to "hardware and operating system abstraction layer and virtual machine for shitty, poorly-performing, inefficient, possibly-distributed applications".
The curious "architecture" of the Web is reflected in the architecture of Web applications: the number and complexity of the technologies that are needed to create even the most basic Web application is, frankly, ridiculous. And on top of it all, where platforms with like goals such as the JVM and CLR manage to provide first-class support for multiple programming languages, the Web manages to offer only one, and it happens to be particularly grimy (my fingers are crossed for WebAssembly).
The lesson of all this (and it's not a lesson unique to the Web by any means), is this: backwards compatibility is a bitch.
tl;dr all these young punks need to get off my lawn
Matheus28: Did you actually get it to work, or did you give up? (pthatcherg: The comment only claims they "analyzed the possibility".)
Because SCTP is configurable, we can open multiple data channels with different settings, which saves us tons of work!
Typically we have:
an unreliable and unordered data channel for game state data and user inputs
a reliable and ordered data channel for chat messages, scores and deaths
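For anyone curious what those two channels look like in code, here is a minimal sketch using the browser's `RTCPeerConnection.createDataChannel()` options. The channel names and the exact split of traffic are made up for illustration; only the option names (`ordered`, `maxRetransmits`) are from the standard API.

```javascript
// Unreliable + unordered: a stale game-state packet or input is useless,
// so never retransmit lost messages and don't wait for ordering.
const stateChannelInit = {
  ordered: false,     // deliver messages as they arrive
  maxRetransmits: 0,  // a lost message is simply dropped
};

// Reliable + ordered: chat messages, scores, and deaths must all arrive,
// in order. These are the defaults, spelled out here for clarity.
const chatChannelInit = {
  ordered: true,
};

// In a browser (or a server-side WebRTC stack), given a negotiated
// RTCPeerConnection `pc`, the channels would be opened roughly like:
//   const state = pc.createDataChannel("state", stateChannelInit);
//   const chat  = pc.createDataChannel("chat", chatChannelInit);
```

The unreliable channel behaves much like raw UDP (losses are ignored), while the reliable one behaves like a TCP stream, all over the same SCTP association.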
Anyway, I've been idly thinking of ways to try to kindle an interest in programming in the guy, since he's very smart and seems to like computers. I played Cookie Clicker with him, and then showed him how to "hack" it from the console by playing with the JS a little bit (it's all local).
But since he loves diep so much I think that would really give him motivation to learn more if he could fiddle with things in that game somehow. But I realize it's harder given that it's a client/server model. I don't suppose you have any ideas of things we could try? Is there a client-only mode where we can fiddle in the console? Or is the server code open source so I could run it locally or something?
Oh, ha, and something that's been killing me... why's it called "diep"?
For Diep.io, it's completely server side and very little happens on the client side. That is intentional: Agar.io had a problem with "private servers" popping up, which were actually people ripping (read: stealing) the client-side code, putting their own ads in it, hosting it on their own website, and pointing it at their server emulator.
It comes from an old game I made when I was a kid called Diepix. There's no reason for the name "Diepix" other than sounding cool.
These devs could help: that could solve the diep.io monthly-updates issue. I really hope you allow more devs.
Also (if you don't know what Discord is, it's a website where people can chat live), join the Diep.io Discord run by the moderators of the Diep.io subreddit: https://discordapp.com/invite/YDSF2wD#discordbutton
Also, if it's not a secret, what backend did you use to handle so many concurrent connections, and how many were you able to keep per box?
It's a custom written WebSocket implementation, see https://news.ycombinator.com/item?id=13267261
I think Agar.io has around 190 players per server. Diep.io has around 72.
By "per server", do you mean per game room, or is one game room equal to one Linux box? If so, I guess handling the game logic was the bottleneck, not the number of concurrent connections?
Also, congrats on the success and making some really cool games.
Per game room (each room is a process). I end up just using boxes that have 1 CPU core and run just that game room in there. Except for some dedicated servers that have 40+ cores, in which we run 40+ processes.
On Agar.io, doing all the collision checking and encoding the packets are the two biggest bottlenecks, and similarly for Diep.io. The number of players of course increases both of those costs almost linearly. For example, Diep.io doesn't process shapes that aren't being transmitted to anyone.
I was inspired by your games to try something similar for the latest Ludum Dare: http://www.bemmu.com/compo/ludum/37/index.html
At first I tried checking every creature for collisions against every other, but unsurprisingly that was too slow (O(N^2)). To reduce the number of checks, I put each creature in a grid cell based on its position, then check for collisions only against creatures in the same or adjacent cells.
I think overlapping grids would be even more efficient, or perhaps doing these checks on the GPU.
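The grid broad phase described above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual game's code: creatures are assumed to be circles `{x, y, r}`, and `CELL` must be at least the largest creature's diameter so that checking a creature's own cell plus its 8 neighbors is sufficient.

```javascript
const CELL = 64; // cell size in world units; must cover the largest diameter

// Return all index pairs [i, j] (i < j) of overlapping circles.
function collidingPairs(creatures) {
  // Broad phase: bucket each creature by the grid cell containing its center.
  const grid = new Map();
  creatures.forEach((c, i) => {
    const key = `${Math.floor(c.x / CELL)},${Math.floor(c.y / CELL)}`;
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(i);
  });

  // Narrow phase: exact circle-circle test, but only against creatures
  // in the same cell or one of the 8 adjacent cells.
  const pairs = [];
  creatures.forEach((c, i) => {
    const cx = Math.floor(c.x / CELL);
    const cy = Math.floor(c.y / CELL);
    for (let dx = -1; dx <= 1; dx++) {
      for (let dy = -1; dy <= 1; dy++) {
        for (const j of grid.get(`${cx + dx},${cy + dy}`) ?? []) {
          if (j <= i) continue; // report each pair only once
          const o = creatures[j];
          const dist2 = (c.x - o.x) ** 2 + (c.y - o.y) ** 2;
          if (dist2 <= (c.r + o.r) ** 2) pairs.push([i, j]);
        }
      }
    }
  });
  return pairs;
}
```

With roughly uniform density this brings the cost from O(N^2) down to about O(N) times the average occupancy of a 3x3 neighborhood.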
WebRTC leaks true IP addresses unless it is outright disabled in [supported] browsers. It is a huge annoyance, and I would be hard-pressed to view it as more than a gimmick that complicates the already messy landscape of web development.
To me, the best way to do this involves knowing how often the TCP socket is retransmitting, which is information typically not available at the WebSocket level.
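Indeed, the retransmission counters live in the kernel (e.g. `TCP_INFO` on Linux) and aren't exposed to WebSocket code at all. The closest signal a browser does expose is `WebSocket.bufferedAmount`: the number of bytes queued by `send()` but not yet handed to the network. If it keeps growing, the connection is backed up (possibly because TCP is retransmitting), and it can be better to drop optional updates than to queue them. A sketch, with an arbitrary threshold:

```javascript
const MAX_BUFFERED = 16 * 1024; // bytes; tune for your tick rate and message size

// Send a non-essential update only if the socket isn't backed up.
// Works with any object exposing the WebSocket bufferedAmount/send API.
function sendIfNotBackedUp(socket, payload) {
  if (socket.bufferedAmount > MAX_BUFFERED) {
    return false; // skip this update rather than letting the queue grow
  }
  socket.send(payload);
  return true;
}
```

This only detects send-side backpressure, not retransmissions per se, but for a game server deciding whether to skip a state snapshot, it is usually the signal that matters.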
OT question.. how much $ do you make on those?