In contrast, Discord has a volume slider that I end up having to adjust for most people. That requires manual tweaking, often while I'm playing a game, and isn't future-proof; just the other day, one of my friends changed his microphone setup and was extremely loud (I had him at 200% when in the new setup he should've been at about 75%).
This is really my only complaint with Discord's VOIP functionality. It is a major step backward from Mumble. (Also, for some reason the Discord community mostly uses voice-activated mics instead of push-to-talk, but I don't think that's Discord's fault.)
Which is a very sane default for most people
The only reasons you'd want to use push-to-talk are: you have a lot of people in the channel, you have a loud background, or you don't want people hearing stuff not designated for the Discord voice channel (if you're streaming, have IRL people in the same room, that kind of stuff).
I couldn't disagree more. Having an open mic setup for gaming is both insane and completely inconsiderate to everyone else in the channel/group.
It's hard not to personally judge people who join a group, whether that's a friend of a friend, someone looking to try out, a public LFG lobby, etc., when you can immediately tell they have an open-mic configuration. It's maddening.
Breathing, keyboard smashing, mouse clicking, background noise, eating, or exasperated whining every time you die/lose are all annoyances you push on the rest of the channel/group with an open mic configuration. At best it's an annoyance. At worst it's actively disrupting people's game play and team communication.
For the love of all that is decent with humanity, use push to talk.
I’d love Zoom, WebEx, GoToMeeting, et al., to implement push-to-talk, and give an option for mandatory use to conference call organizers.
We use Teams on our team, and my current PTT technique is to Ctrl+Shift+M to unmute and mute every time. Très annoying.
Also voice activation is a boon for competitive games where you can't spare a hotkey and finger during play.
I don't think they hit anyone, but they did unplug the foot pedals from their PC the moment they got home.
I've been playing competitive games for 15 years. I think you're overstating how difficult it is to press one other button. It's also contradictory to the common practice of clans/scrim groups having a PTT policy.
I think people saying "Well, X does it better" are missing the point. It's always been a problem: Ventrilo, Mumble, Teamspeak, etc. If you've never considered it a problem, there's a good possibility a lot of people around you were just tolerating it.
Why? That's just a bald assertion with a butt load of evidence to the contrary.
>and what's "competitive" level for you
I'm pretty serious about competitive play. Started with CS 1.6, moved to SC2, then RB6:Siege, then GO, and now OW. I've consistently ranked in the top tier in each of those games. I moved my gym time to my lunch break and run when I wake up so I can play games in the evening and scrim when I'm on teams. It's an addiction.
>PTT is shooting yourself into the foot just because you can't find not-infantile teamplayers
I like how instead of addressing the technological merits of voice activation you just call people "infantile" to avoid actually discussing the subject.
Not sure if Discord has anything as good; that might be the difference.
This is super game/group dependent. For example, if I'm playing with a 5-stack I'll have open mic with a volume cutoff. We're casually chatting pre-game, where PTT is not useful; then Dota starts, and I cannot manage the additional load of regularly hitting a PTT key while playing, so I end up talking way less, and that's bad.
On the other hand, in a 250-man fleet in Eve Online, you'd best believe everyone with an open mic is getting banned from the server.
This falls apart for people who speak very quietly, though. If there is no volume difference between your chewing sounds and your speaking sounds, you __need__ push-to-talk.
In theory but not in practice.
Sounds to me like it's not a sane default.
While commanding someone to fix it should be the last resort, most people with voice activation would rather have the setup work for everyone on the server so they can keep using it.
On my setup, for example, I had a desk mic sitting on my wooden desk. People had to point out to me that sudden sharp noises were coming from my mic. Eventually it was isolated to my mouse hitting the desk, with the vibration traveling up the base into the mic. I placed the mic on a bit of foam and everyone was happy.
These kinds of spurious activations are not easily caught during self-testing.
But that's just me :)
> "Oh, yes. Except they do it with meat."
> "I thought you just told me they used radio."
> "They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."
Tangent: It's ironic that the dialogue between the two aliens is spoken by humans. After all, the actors are rendering the dialogue with meat sounds. I think it would have been better to use a speech synthesizer that's not based on actual recordings of meat sounds -- like eSpeak, DECtalk, or ETI-Eloquence. DECtalk, at least, is flexible enough to speak with whatever intonation you want, so the dialogue wouldn't necessarily come out sounding flat.
I can't tell you how many times I've run into users who just cannot get their sensitivity right. So in turn, you end up with a ton of people coughing, sneezing, talking to others, fans, and insert your other favorite background noise. Most users just don't care to set it up right, and it's impossible to get them to adjust it (unless you have some type of Discord Mod/Admin privs on the server at the time to force them to fix it). Otherwise, you have a bunch of users who have to put up with Joe Schmoe chewing away on his mic.
It's why the Discord servers I'm in 100% completely enforce PTT, zero voice activation allowed.
It might be better with people using Xbox controllers.
And we complain a lot when someone on the server has an improper voice activation threshold, so everyone quickly fixes theirs.
Not necessarily. I have had to disable my mic when playing certain games; apparently I'm violent enough with the analog sticks that hitting the end of travel will trigger Discord's voice activation.
The best is when they use speakers, so you get some echo. That's usually when they get muted, because fuck their lazy ass.
> you have a lot of people in the channel
> you have a loud background
> you don't want people hearing stuff not designated for the
What were you saying about open mics?
Definitely the community, as Discord provides an option to require PTT on a channel-by-channel basis.
I know people who consider voice activation to be an accessibility feature; without it, using VoIP would be too difficult or distracting for them.
A community should absolutely be able to disallow voice-activation, but disallowing it by default across the platform would be an issue.
PTT is mandatory when running an MMORPG raid with 40 people. Voice activation is perfect for playing a game while talking with 3-8 people at a time.
I think it's more about who you're playing with: if you are good friends with the people you're on voice with, chances are you're quite alike and considerate of each other.
That's a hint that the voice activation level is not configured correctly.
Couple that with people not being considerate of others (such as not muting when they do need to make noise that will be picked up), and voice activation gets a bad rap.
But in terms of pure quality, voice activation is always, in my mind, better than PTT. Granted, the last few years I've only really ever played with friends and in small groups, so when people forget, we aren't righteous assholes about it.
That's right. Voice activity detection (VAD) is not the same as sound detection. WebRTC even has a really good VAD built into it that is extremely easy to use and dynamically adapts to the current audio environment. See e.g. https://github.com/wiseman/py-webrtcvad and https://github.com/dpirch/libfvad for examples where the relatively small VAD code has been pulled out of the giant webrtc corpus.
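For the curious, using that VAD from Python is only a few lines. Here's a minimal sketch with the py-webrtcvad binding linked above; the aggressiveness level, sample rate, and frame size are just example values:

```python
# Voice activity detection with py-webrtcvad (WebRTC's VAD).
# The VAD only accepts 16-bit mono PCM in 10/20/30 ms frames
# at 8/16/32/48 kHz.
import webrtcvad

vad = webrtcvad.Vad(2)  # aggressiveness: 0 (permissive) to 3 (strict)

SAMPLE_RATE = 16000
FRAME_MS = 30
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 2 bytes per sample

def speech_frames(pcm: bytes):
    """Yield (byte_offset, is_speech) for each 30 ms frame of raw PCM."""
    for off in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
        yield off, vad.is_speech(pcm[off:off + FRAME_BYTES], SAMPLE_RATE)
```

Unlike a plain volume threshold, this keeps keyboard and mouse noise out even when it's as loud as speech, which is exactly the distinction being made above.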
People also need to know to enable AEC (acoustic echo cancellation) in their audio driver, which completely solves the problem of whatever sounds they're playing leaking into their mic.
Edited for tone.
That's why I also use PTT.
I've said before that I can't really justify nonfree software for something as simple as text chat.
That said, rooms with very large numbers of people trying to communicate by voice/video are one use case where maybe it makes more sense for a commercial product to solve it.
Everyone hosting their own Ventrilo/Mumble/TS3 server had the huge advantage that no single company got all the data everyone produced. Not to mention outages.
They say in the blog post
> Routing all your network traffic through Discord servers also ensures that your IP address is never leaked
Yeah, except to your centralized company servers.
> Supporting large group channels (we have seen 1000 people taking turns speaking) requires client-server networking architecture because peer-to-peer networking becomes prohibitively expensive as the number of participants increases.
It's unfortunate, but it's just the truth. The reason we aren't seeing decentralized solutions to the problem that Discord solves is technical. It's not because people don't want it or aren't trying. It just doesn't perform well enough.
Not many self-hosted (i.e., decentralized) solutions use peer-to-peer, because it requires NAT punching and other tricks. TS3, Mumble, and Ventrilo are all "proxies" in that sense, not peer-to-peer.
> Yeah, except to your centralized company servers.
Which most gamers don't care about.
What they do care about, is what follows that sentence: "preventing anyone from finding out your IP address and launching a DDoS attack against you".
Especially for high profile e-sports figures and streamers, that's a very real problem: they want to interact with their fans using chat/voice calls, without giving bad actors the means to DDoS their residential connections (causing disconnects from game matches etc).
That's also why Discord proxies all embedded media from third parties through their own servers (except for some major sites, that are white-listed).
> For clarity, we will use the term “guild” to represent a collection of users and channels — they are called “servers” in the client. The term “server” will instead be used here to describe our backend infrastructure.
I still don't understand why you ever chose the name "server" to refer to something that is many things, but not a server. Guild is a way better name, and doesn't need disclaimers about your self-chosen terminology being confusing.
So yes, "guild" is a really bad name for the thing pretty much anyone in the target demographic is used to call "server", whether that thing actually was a physical or logical server or not. "Guilds" also used to have a voice "server" to host their voice chat activity, but they did never equal themselves to their voice server, so even if all games out there called groups of players "guilds", it would still be a really bad choice. Someone at Discord apparently learnt on the job that "naming stuff" is one of the two hardest problems in IT ;-)
We don't play MMOs, aren't very organized, have a very flat hierarchy (admins, regulars, new users), and don't stick to one game very long.
The word guild comes with too much baggage, and that single word choice would likely mean we'd still be on Teamspeak to this day. I don't think I'm exaggerating that point either.
I never considered "server" a poor name, even knowing that it was probably not one dedicated physical server in reality. It's simply the right nomenclature for the target audience.
A TeamSpeak server is definitely a server as I assume you define them.
Should I quit the Discord servers that I don't use? I feel like they stay in sync in the background, and if I am in 100 servers that all post tons of text, I will get massive lag.
Does that make sense? Is there a way to prioritize only the server I have active/am talking in and freeze the rest while I'm full-screen/in-game/focused?
WoW alone forces me to have 1 server per class, then 1 server for every guild I'm associated with. Forget about any other game.
I bet it's on their todo list to handle that use case and no doubt there are other priorities.
Additionally, we unsubscribe you from these events if you unfocus the server for a given amount of time.
You custom-implemented many components of WebRTC that I barely got parts of down in my project, so this was really interesting to me.
So not even a VPS like Linode? Discord rents physical servers across the board? Or is there a mix of AWS or some other vendor in there?
Additionally, you can buy bandwidth for much cheaper from dedicated hosting providers as opposed to cloud providers. For our use case, AWS would be approximately 15,000x to 30,000x more expensive due to bandwidth pricing.
Do you mean virtualization?
If so, I recommend looking into testing this with SR-IOV based NICs and passing through a VF to the guest. Even in regular operation, the latency difference between bare metal and an ixgbevf virtualized NIC all but disappears, to levels well below anything that would be meaningful for voice communication.
Moving to a DPDK based poll mode driver would reduce the latency differences even further.
Edit: https://01.org/packet-processing/blogs/nsundar/2018/nfv-i-ho... some actual numbers w/ DPDK on bare metal vs vm
Disclaimer: I work for a cloud company, but SR-IOV knowledge in general is something I had from my days running a vmware environment, and not anything new :)
However, there are remaining overhead/downsides, and virtualization may be a solution looking for a problem in their environment.
Also, it's presumably dependent on specific NIC hardware, but I expect they're already using something compatible. It's merely another constraint.
A vm using SR-IOV with ixgbevf on good ol' Intel 82599 from 7 years ago will not have a latency difference noticeable to the overwhelming majority of use cases vs. bare metal.
I didn't mean to imply that was your argument.
> I just wanted to point out how low latency can go in general.
Rather, I meant that "can" isn't the same as "does", absent exceptional circumstances.
> A vm using SR-IOV
Whether this qualifies as exceptional is, of course, arguable, but I'm arguing that it is. I could understand the point that it doesn't have to be, but, to be actually convinced, I'd want to see evidence that it's well understood and well implemented enough that neither rare expertise, substantial engineering effort, nor constrained configuration (hardware or software) would be required to take advantage of it. I'd expect most technically-minded decision makers to think similarly.
Oh. My apologies for misunderstanding your point.
SR-IOV is available on basically any and all server grade NICs, and is quite simple to use. With Azure and AWS it's basically just making sure you have the proper driver installed (gotten for free on basically all modern kernels) and flipping a command switch.
If you're rolling your own virtualization stack, it's generally about as simple as any other task for that stack.
With vSphere it takes a matter of seconds: https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsp...
Similarly easy for XenServer: https://support.citrix.com/article/CTX126624
A little bit more work with the common KVM management options, but still a very simple task as far as Linux sysadmin tasks go: https://access.redhat.com/documentation/en-us/red_hat_enterp...
OpenStack is a bit more complicated, but frankly, less complicated than plenty other tasks in OpenStack: https://docs.openstack.org/mitaka/networking-guide/config-sr...
All of the real setup work has to be done at the hypervisor level, but you're primarily just doing two things: Creating VFs, and assigning them to VMs. The driver does all of the rest of the hard work. I would argue that any Linux or vSphere admin with any real amount of experience should be able to read any of the documentation I linked and be able to confidently work through it in an hour or two.
For the guest, just making sure the driver is installed should be all that's required. For ixgbevf, the ubiquitous commercial option, it's been in-tree for the Linux kernel for at least half a decade.
Once VFs are created and assigned, it largely "Just Works". The only real caveat I know of is that seamless live migration of the guest is no longer an option, because now all of the network virtualization is handled in the hardware instead of the hypervisor.
Having glanced through those documents, I agree that it doesn't appear to be overly complex. However, considering how much CLI there was in those instructions, I'd argue that it's evidence that this feature is not what could safely be called "well implemented" (or perhaps "well integrated" would have been better for me to use) and probably not "well understood".
If it actually only ever requires that hour or two and nothing ever again and isn't brittle, that's great. If it ever needs debugging, especially if a critical performance problem crops up, a rare expert might be needed after all.
I realize my overall point is, essentially, FUD, but, absent a large enough installed base, that's not a totally outlandish stance for a decision-maker with an already-working solution.
> Once VFs are created and assigned, it largely "Just Works". The only real caveat I know of is that seamless live migration of the guest is no longer an option
If they have to be assigned individually/manually (or automated, just not already integrated into the usual VM management mechanisms), wouldn't this also prevent other forms of virtualization flexibility?
Ultimately, though, especially in this case, it seems like virtualization is a solution looking for a problem. That there may be (even nearly complete) mitigations for some performance issues doesn't mean that there won't still be some overhead and, more importantly, at scale, the virtualized options are always going to be noticeably more expensive than bare metal.
One has to build (or buy) for peak bandwidth. Selling it pay-as-you-go, with no regard to local maxima, means one has to price that rate high enough to account for the typical (and then some) spikiness in traffic. 
It's not hard to imagine that something like a UGC video site might significantly increase that spikiness ratio, if only because of the sheer quantity of data involved. Moreover, it's a large quantity of data transfer per user, so even modest user growth would result in huge network use growth. As a sibling comment pointed out a cloud provider "may not really want that type of client".
Perhaps cloud providers could start charging larger customers on a more traditional-ISP, 95th-percentile basis and engineer their networks accordingly, but then they might have to keep those customers corralled in specific datacenters, which would remove part of the value of cloud infrastructure.
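For readers who haven't met it, 95th-percentile ("burstable") billing works roughly like the sketch below; the 5-minute sampling interval is the traditional convention, not anything a specific provider is confirmed to use:

```python
def billable_rate_mbps(five_min_samples_mbps):
    """Classic 95th-percentile billing: sort the month's 5-minute
    traffic samples, throw away the top 5%, and bill at the highest
    remaining sample. Occasional spikes are effectively free."""
    ordered = sorted(five_min_samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1
    return ordered[max(cutoff, 0)]
```

The point is that the customer pays for sustained rate, not peaks, which is why the provider has to engineer for (and eat) the spikes.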
 Forgetting that "the cloud is just somebody else's servers" also led to the delusion that one doesn't have to "worry" about hardware failures in the cloud. Fortunately, it's now common knowledge that EC2 instances are subject to disappearing due to hardware reasons and that this needs to be "worried" about (engineered around).
 There is a similar issue with residential electricity pricing, where consumers pay a flat rate but the utility actually pays time-of-use (potentially a much higher rate on the spot market). Somewhat related to but not identical to rooftop solar using the grid as a "free battery", since that's also time-of-use. These come up routinely on HN discussions of electric power.
I don't think that's actually true. The first assumption, that they are buying something from someone else is potentially flawed, and the conclusion is based on another potentially flawed assumption, that bandwidth has an inherent volume discount.
I say "potentially" flawed because these assumptions easily hold true for small enough providers and little enough bandwidth.
At large provider scale, it's probably safer to assume that they're building instead of buying, and those costs follow fairly large, discrete steps.
Increasing bandwidth means buying faster DWDM modules and, possibly, higher-end equipment that supports them. It might mean doing that for their network peer, too.
In many cases, I expect it would mean bypassing shared infrastructure like internet exchanges, which might be limited to as little as 10Gb/s or even 1Gb/s and getting direct peering arrangements (including physical connections) with other networks, including possibly reimbursing them for their costs. This can be complicated by the new peer only having just enough bandwidth to that exchange point to match the exchange's maximum bandwidth, in which case peering will require co-locating somewhere else, with all the hardware and (hopefully dark but not always possible) fiber leasing costs.
None of those costs are necessarily high if considering maximum available bandwidth, such as if they spent 5x to get 20x or even 100x the capacity. However, if they only did it for a single customer that, on average only uses 2x the bandwidth (and only peaks at 20x-100x at rare times or only on certain, unpredictable in advance, connections to peers), they experienced a volume premium, rather than a discount.
1) because they can
2) because they may not really want that type of client
3) because due to the nature of peering agreements, they want to avoid paying as much as they can
Not sure which of the above applies, but the list is very likely at least part of the reason.
Aside from the money we already make from https://discordapp.com/nitro :)
> This is all in beta, so things may change by the time we’re ready to launch. What you see today will likely not be what you see in the future.
What they added was voice chat to static groups, whereas before you had to invite everyone to a custom group chat every time. Definitely a QoL improvement, but not exactly "new".
I suppose that a perfect health check will prevent this, since the failover will assign failover traffic at exactly the level that the new server can successfully handle. But if it's wrong on the other side (rejects connections when capacity actually exists), then compute resources are wasted "just to be safe".
I imagine that estimating capacity is even more difficult since people can join and leave at any time, and the client doesn't send any packets when there is silence. So the load changes based on how talkative people are being (which means your server always crashes during the best parts of whatever you're discussing).
Anyway, I'm wondering how this all compares to the naive strategy of "pick a random server for this channel, if it crashes, bad luck".
a) A voice server can sit in multiple different load categories. So it's not "best server by score", but rather "best server out of a pool of servers with a given load factor". The load factor is one of ":verylow | :low | :medium | :high | :veryhigh | :extremelyhigh | :full" When looking for a server, we have an index of "best servers by region" that's stored in memory on each node and kept synchronized by service discovery. Additionally, if we don't have enough candidate nodes in a given load category, we will grab a few from the next-best load category. The thought being, that for a given region, we'll have a large set of servers to allocate to. This prevents a server failing from thundering-herding another server.
b) A voice server can fast-fail (reject) allocation requests, and does so under some circumstances, e.g.: the rate of allocation requests for the server exceeds a threshold, or the server is at or approaching capacity. We do a lot of this fast-failing logic using semaphores around a shared resource (the server allocator): https://github.com/discordapp/semaphore
c) We also run things a bit over-provisioned. We try to have enough excess capacity during peak such that we can handle the failure of an entire datacenter within a region, or an entire region to nearby geographical regions.
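To make (a) concrete, here's a rough sketch of that pool-based selection. The names and the random pick are my own illustration, not Discord's actual code:

```python
import random

# Load categories from best to worst, as described in (a).
CATEGORIES = ["verylow", "low", "medium", "high",
              "veryhigh", "extremelyhigh", "full"]

def pick_voice_server(servers_by_category, min_candidates=8):
    """servers_by_category: dict of category -> server ids for one
    region (kept in memory, synced by service discovery). Draw from
    the best pool, topping up from the next-best pools as needed."""
    candidates = []
    for category in CATEGORIES[:-1]:  # never allocate onto "full"
        candidates.extend(servers_by_category.get(category, []))
        if len(candidates) >= min_candidates:
            break
    if not candidates:
        raise RuntimeError("no voice servers available in region")
    # Spreading allocations across a pool, instead of always taking the
    # single "best" server, avoids thundering-herding one node when
    # another fails.
    return random.choice(candidates)
```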
>I imagine that estimating capacity is even more difficult since people can join and leave at any time, and the client doesn't send any packets when there is silence. So the load changes based on how talkative people are being (which means your server always crashes during the best parts of whatever you're discussing).
We use a lot of factors to measure load on a server to group it into a load category - in addition to just traffic: we look at concurrent clients connected, concurrent voice servers allocated, packets/sec, bytes/sec.
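As a toy illustration of folding those metrics into a category (the capacities and thresholds below are invented; only the metric names come from the comment above):

```python
def load_category(clients, channels, packets_per_sec, bytes_per_sec):
    """Map raw voice-server metrics onto a coarse load category."""
    # Utilization is the worst ratio against an assumed capacity.
    utilization = max(
        clients / 10_000,
        channels / 2_000,
        packets_per_sec / 1_000_000,
        bytes_per_sec / 125_000_000,  # ~1 Gbps of traffic
    )
    for limit, name in [(0.2, "verylow"), (0.35, "low"),
                        (0.5, "medium"), (0.65, "high"),
                        (0.8, "veryhigh"), (0.95, "extremelyhigh")]:
        if utilization < limit:
            return name
    return "full"
```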
Yes and no: no packets are passed over the WebRTC connection during silence, but a WebSocket connection is maintained for state changes. There are two, in fact; one goes to the guild service, which handles assignment of the voice service. Want to count the number of clients connected to a voice server? Count the number of connections.
All Discord clients are connected to Guild services and discord publishes events for voice channel updates to all clients in the guild. (Ex. when a person joins a voice channel.)
Also, for larger Discord guilds, there are exclusive voice server pools for them to consume. These servers are configured with more resources and are usually pretty close to exclusive to the single guild. Most of the voice servers are for the millions of other guilds, though.
I wonder whether this would help an attacker infer voice data, with a method similar to the one from the paper "Uncovering Spoken Phrases in Encrypted Voice over IP Conversations" 
Actually the voice activation (on/off periods of sending vs silence) was the first thing we looked at in that project. There's definitely some information leakage there, but it was really hard to learn anything meaningful from it. The problem seems to be that long strings of words all get lumped into a single activation. It's really hard to discern anything about what those words are, or even what language they come from.
We got some very weak results on language detection from VAD. You can learn something about the language, but it's not very precise. For example, maybe you could tell that a given conversation is definitely not language A, B, or C, but it might still be language X, Y, or Z.
Deciphering content from latencies in packetized speech is likely much more difficult, but I wouldn't put much stock in it being too difficult.
Which is to say, if you're transferring high-value information assets over VoIP, you should probably assume it's decipherable. That doesn't mean you should change what you're doing. You could simply say, "Meh, I'll worry about it when it becomes a thing." But I wouldn't assume it's confidential to someone willing to invest the time to target and capture the conversations. And I might leave a few choice comments in the source code and documentation so nobody could excuse imprudent reliance on confidentiality with, "But nobody warned me".
I haven't made many calls but the sound was always crisp.
Both the REST API (which the official clients use internally too) and the real-time WebSocket protocol are described here:
If you're thinking they're going to check the actual software: like hell I'm going to let someone into my house to check if I'm breaking their terms of service.
I also have a fair amount of different IRL/internet friend groups who all have their own channels.
>"Using the WebRTC native library allows us to use a lower level API from WebRTC (webrtc::Call) to create both send stream and receive stream."
So I'm gathering that Discord's voice servers receive multiple persistent connections, then compress the audio streams for delivery to each end user. THIS is the part where I can't imagine the on-the-fly CPU usage. Each client's outgoing stream also needs to exclude that client's own audio to prevent an echo effect (no point hearing your own voice), but that also means separate compression streams per user.
>" All the voice channels within a guild are assigned to the same Discord Voice server."
I imagine this helps significantly with I/O in converting live streams into one stream per end user. I've dealt with video compression (only in ffmpeg) and live-syncing timestamps, and I can say from experience that this is no easy feat. I understand these are audio streams (so lower overhead), but the persistent voice server still needs to handle the incoming connections, WebSocket heartbeats (negligible), compression (high I/O), and delivery of the streams (high memory usage too).
I'm impressed, but would love to hear the specs on the media servers and their DL/UL speeds. My old setup to deliver live video (in sync and compressed) was 6 mini-ITX boards, 4GB of RAM per board, and i3s... my bottleneck was my ISP, which I solved with multiple DOCSIS modems and an internal switch (each board had 2 Ethernet ports).
The bulk of the user-space time on the SFU is spent doing encryption (xsalsa20/DTLS). We also avoid memory allocations in the hot paths, using fixed-size ring buffers as much as possible.
Additionally, we coalesce sends using sendmmsg, to reduce syscalls in the write path: (http://man7.org/linux/man-pages/man2/sendmmsg.2.html)
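sendmmsg batches many datagrams into one syscall. Python's socket module doesn't expose it, but as a rough illustration of what the call does, here's a ctypes sketch for a connected UDP socket on Linux/glibc (struct layouts per the man page; treat it as illustrative, not production code):

```python
import ctypes
import socket

libc = ctypes.CDLL("libc.so.6", use_errno=True)

class iovec(ctypes.Structure):
    _fields_ = [("iov_base", ctypes.c_void_p),
                ("iov_len", ctypes.c_size_t)]

class msghdr(ctypes.Structure):
    _fields_ = [("msg_name", ctypes.c_void_p),  # NULL: socket is connected
                ("msg_namelen", ctypes.c_uint),
                ("msg_iov", ctypes.POINTER(iovec)),
                ("msg_iovlen", ctypes.c_size_t),
                ("msg_control", ctypes.c_void_p),
                ("msg_controllen", ctypes.c_size_t),
                ("msg_flags", ctypes.c_int)]

class mmsghdr(ctypes.Structure):
    _fields_ = [("msg_hdr", msghdr),
                ("msg_len", ctypes.c_uint)]

def send_batch(sock: socket.socket, payloads: list) -> int:
    """Send all payloads on a connected UDP socket with ONE syscall."""
    n = len(payloads)
    bufs = [ctypes.create_string_buffer(p, len(p)) for p in payloads]
    iovs = (iovec * n)()
    msgs = (mmsghdr * n)()
    for i in range(n):
        iovs[i].iov_base = ctypes.cast(bufs[i], ctypes.c_void_p)
        iovs[i].iov_len = len(payloads[i])
        msgs[i].msg_hdr.msg_iov = ctypes.pointer(iovs[i])
        msgs[i].msg_hdr.msg_iovlen = 1
    sent = libc.sendmmsg(sock.fileno(), msgs, n, 0)
    if sent < 0:
        raise OSError(ctypes.get_errno(), "sendmmsg failed")
    return sent  # number of datagrams actually handed to the kernel
```

For a server pushing tens of thousands of RTP packets per second, collapsing dozens of sends into one syscall is a meaningful saving in the write path.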
I posted some about the specs here: https://news.ycombinator.com/item?id=17954163
So video-with-audio broadcasting has to be compressed client-side, then proxied through Discord's media servers to the end users. That's pretty smart... I just wish I could send my raw stream to a LAN host so I could offload the compression and let my LAN host handle delivery (I'm a Nitro user).
Also, mixing server-side means we couldn't do things like per-peer volume and muting without individually mixing and re-encoding for each user in the channel, depending on who they have muted and the volumes they have set per peer (which would explode CPU complexity even further).
So, in this case, bandwidth is cheap; let's use (and waste) some, in an effort to simplify the SFU and also make it more CPU-efficient. The default audio stream is 64 kbps (or 8 KB/sec) per speaking user.
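Taking that 64 kbps default, the cost of "forward instead of mix" is easy to estimate. A back-of-the-envelope sketch (only the 64 kbps figure comes from the comment above):

```python
def sfu_egress_kbps(speakers, participants, bitrate_kbps=64):
    """Each speaker's stream is forwarded to every other participant;
    nobody gets their own audio back."""
    return speakers * (participants - 1) * bitrate_kbps

# 5 people talking in a 10-person channel:
print(sfu_egress_kbps(5, 10))  # 2880 kbps, i.e. about 360 KB/s of egress
```

Mixing server-side would cut egress to one stream per listener, but at the price of a re-encode per listener, which is exactly the CPU explosion described above.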
There might be some public or semi-public servers for this kind of thing available somewhere. The alternative that I played with a few years ago was to compress and base64-encode the connection information into a string, and allow users to share the link with friends via whatever method they want; it can then be expanded client-side and used to establish the connection. (I've also seen things like QR codes used.)
Sadly I never finished that project, so I don't really have any code to show you, but in theory it should work okay.
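A sketch of that trick, using the aiortc library as a stand-in for the browser API (zlib+base64 is just one reasonable encoding, and the helper names are mine):

```python
# Turn a WebRTC offer into a copy-pasteable string: no signaling server.
# aiortc's setLocalDescription waits for ICE gathering, so the SDP
# already includes the candidates when we read it back.
import asyncio
import base64
import zlib

from aiortc import RTCPeerConnection

async def offer_as_string() -> str:
    pc = RTCPeerConnection()
    pc.createDataChannel("chat")  # gives the offer something to carry
    await pc.setLocalDescription(await pc.createOffer())
    sdp = pc.localDescription.sdp
    return base64.urlsafe_b64encode(zlib.compress(sdp.encode())).decode()

def string_to_sdp(blob: str) -> str:
    return zlib.decompress(base64.urlsafe_b64decode(blob)).decode()

print(asyncio.run(offer_as_string()))
```

The other side pastes the string, sets it as its remote description, and sends back an answer encoded the same way.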
Doesn't work anymore.
Firefox times out the answer offer after a very few seconds, which makes sharing the answer offer asynchronously impractical. That effectively killed serverless WebRTC, and in turn killed any interest I had in it (for my side projects I mostly do serverless web apps, as in actually serverless, not lambda functions).
Is there a ticket number or blog post or something I can research to learn a bit more about the reasoning behind this change with them?
> So every 5 seconds Firefox (version >= 49) sends another binding request no matter if the ICE transport is in use or not, and it expects the other side to reply with a binding response. If it hasn’t received a binding response for 6 consecutive binding requests, in other words no reply within the last 30 seconds, it will give up and mark the transport as failed. This results in switching the ICE connection state to ‘failed‘ and stop sending any packets over that transport.
The central servers don't have to do much, but they need to exist so new nodes on random IPs can find each other. The Internet has no native service discovery bus.
You will need to know where the other peer lives, which is why they expect you to have a discovery service. But typing in local IPs should work for internal networks. The hardest part is getting through NAT and firewalls, and that's not a problem on your own LAN.
There’s nothing to stop you doing this through broadcast messages on a local network, though I’m not aware of any way that could be done from within a browser. You could extract the SDP and ICE messages from the browser environment and handle that in Node though, if you were using Electron or something.
Looks up DHT use.
Disclaimer: Google employee, work on the WebRTC team.
After getting frustrated with all the other WebRTC libraries over the last 4 years, I finally wrote our own ( https://github.com/amark/gun/blob/master/lib/webrtc.js ) which is capable of using a set of decentralized DHT relay-peers in GUN for signaling - and once peers are already on WebRTC, they can signal (daisy-chain "DAM" as we call it) to other WebRTC peers via WebRTC!
Meaning, you don't/won't have to run any servers!!! Ping us on our chatroom if you want the list of DHT peers.
Chuck! Sounds like you'd be a very useful person to know. Me, Feross, and plenty of others have been requesting additional API/protocol access over the last 4 years (some of which Firefox is adding in the libdweb extension!). Any chance we could connect and chat? Ping me at firstname.lastname@example.org ?
I'll start one with just mine, and then others can PR to opt-in. You want to join?
I'm not actually using WebRTC (or GUN) yet, but I hope to for a project relatively soon and the possibility of not requiring a STUN/ICE server is very enticing to me.
Source: Google employee, in the WebRTC WG.
Also in general, I highly recommend webrtc-adapter.
I used this dockerized coturn server successfully:
However if you connected @mywebRTClobby with https://GroupTweet.com you could configure things so that any @mentions (from authorized users or anyone) would be converted into actual tweets from the @mywebRTClobby account so that all followers would actually see those tweets/contact details.
Edit: STUN is free, TURN is not. It's been a while since I worked with those.
You can register for an account on my server. WebRTC downstream seems to be working at the moment. Be sure to read the instructions, however.
I am thinking of turning my game server into a serverless PaaS offering. I'm open to collaborating on this.
A single connection-initiation server could be the easiest solution. It won't need to withstand a heavy load.
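Such a server really can be tiny. A minimal sketch using the third-party Python websockets package; the join-a-room-then-relay protocol here is made up for illustration:

```python
# Minimal WebRTC signaling relay: the first message names a room, and
# everything after that (SDP offers/answers, ICE candidates) is
# forwarded verbatim to the other peers in the same room.
import asyncio
import websockets  # third-party package; handler style needs >= 11

rooms = {}  # room id -> set of connected sockets

async def handler(ws):
    room = await ws.recv()
    peers = rooms.setdefault(room, set())
    peers.add(ws)
    try:
        async for message in ws:
            for peer in peers:
                if peer is not ws:
                    await peer.send(message)
    finally:
        peers.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```

Once the offer/answer exchange completes, all media flows peer-to-peer and this relay sits idle, which is why it doesn't need to withstand heavy load.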
One common desired use case is using WebRTC for p2p torrent communications. Right now the best way to do this is in browser, or to use an Electron app that can bridge the WebRTC clients with the standard desktop torrent clients.
It still leaves you writing in C++, but it lets you build a self-contained server without any browser stuff. There are also Java bindings, which is what I used when I was experimenting with it. You just have to build your .so and .jar files.
EDIT: Here's some scala code that I wrote that uses the native bindings. (Please forgive the messy test project that I have abandoned).
Or do you want something that has a completely different API? It would be nice to be able to ignore the JS idioms, but it is tough to start from scratch.
There are many aspirational repos that have been started over the years with the intention to implement WebRTC but very few of them actually made good progress.
I too would love more third-party language-native implementations. Right now everyone is binding to the C++ codebase.
The real problem without having language-native implementation is that it creates a protocol rift. Things like BitTorrent vs WebTorrent, or IPFS native vs IPFS JS: It's effectively UDP/TCP vs WebRTC, clients on one end (native apps that aren't NodeJS) can't speak to clients on the other end (browsers) without a relay bridge (which is always NodeJS).
"CONNECTION_STATUS_ICE_CHECKING" ICE Checking.
Wow. Has anyone here been in a channel like that? I'd imagine it to be complete chaos.
But I suppose it's possible with good moderation and/or bots to ensure people take turns? Is that what they do?
All I know is that the official Fortnite server (Discord's largest gaming server last year, AFAIK) had serious amounts of disconnect issues, though only on that server.
Fortnite didn't have more than 100 battlerooms (4 players max) at any given point in time.
There are a few battle royale scrimming servers with 70-100 players in one room for tournaments. But the rule is that everyone needs to be muted & deafened.
One thing I've found in the app is that when a person/voice is far away from the mic, it often glitches and doesn't send all of the audio data over like, for example, a normal FaceTime Audio call would.
Not sure why this happens, but just thought I'd put that out in the ether.
Why is the Gateway not directly accessible from the public Internet?
This is just confused and sounds quite worrying :I
Salsa20 is a stream cipher. DTLS and SRTP are higher level security protocols that use ciphers (among other things) as building blocks, to ensure things like replay protection, integrity protection, mutual authentication, secure session key agreement and forward secrecy in addition to confidentiality.
If you replace an engineered security protocol with a raw cipher and key, you create many vulnerabilities and make a much less secure system. VOIP applications are especially vulnerable to replay attacks, for instance.
A better approach would be to continue using DTLS and SRTP, and use Salsa20 as the SRTP cipher.
Non-crypto-experts saying things like "we replaced <established security protocol> with <my own idea> and it's much faster" is a well-known bad sign; it often indicates the person has fumbled things without realizing it.
We are already multiple times larger than Slack :)
What's hindering adoption outside of the gaming community is the branding and niche focus.