> Linux chokes at around 1M packets per second per CPU socket.
Which is why kernel-bypass mechanisms (e.g. DPDK, netmap, etc.) are getting lots of traction. FWIW, using DPDK (on x86-64 hardware), I have had no problems pushing minimal 64-byte packets for a large (>250k) number of flows at line rate...
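As a back-of-the-envelope check (a sketch using standard Ethernet framing numbers, not figures from this thread), here's why minimum-size packets turn a bandwidth problem into a packet-rate problem:

```python
# Packets-per-second at line rate for minimum-size Ethernet frames.
# On the wire each frame also carries 7B preamble + 1B start-of-frame
# delimiter + 12B inter-frame gap (the 4B FCS is inside the 64B frame).
LINK_BPS = 10e9                  # 10 GbE link
MIN_FRAME = 64                   # minimum Ethernet frame size, bytes
WIRE_OVERHEAD = 7 + 1 + 12       # preamble + SFD + inter-frame gap, bytes

pps = LINK_BPS / ((MIN_FRAME + WIRE_OVERHEAD) * 8)
print(f"{pps / 1e6:.2f} Mpps")   # ~14.88 Mpps, far beyond ~1M pps/core
```

So even a single 10G link at minimum frame size demands an order of magnitude more packets per second than one kernel networking core can handle.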
This isn't specific to Linux. PPS is a limiting factor for network hardware vendors' gear as well. Lots of ~100-byte packets will bring pain to Juniper and Cisco products too.
Thanks, I've always wondered under which conditions BSD networking performs better than Linux. So it's the number of simultaneous streams, as opposed to raw bandwidth for fewer streams.
Well, the BSD kernel will also choke at more or less the same limits as the Linux kernel.
The key point here is that a netmap driver bypasses the kernel and therefore opens the door to many millions of packets per second, or millions of requests per second. Not many more Gbps.
Next time management asks why the driver for your product should be open source, point out that it makes your product much more attractive to people who will improve and promote your product for free.
Sales improve because the potential userbase has increased, not because of free contributions.
Imagine you are Google and you have created your own custom build system. You now want to hire more people who are familiar with it. Unfortunately, the potential applicant pool is exactly zero, because nobody outside of Google can even start to learn the Google-specific tools; they simply don't exist outside of Google.
The big barriers to adoption of a new product, service, or whatever are:
1. People knowing about it.
2. People needing it.
3. People trusting it.
Like a fire needs heat, oxygen and fuel, new products survive only when they get all three corners of the triangle. If your product is cheap enough, some people will try it when they think that they have a need for something like that; trust comes eventually. Open source builds trust that in the worst case, people using your product will have a glide path out instead of a sudden brick wall.
In the instant case, Mellanox gets a highly visible, reputable customer telling everyone in their industry that Mellanox NICs are high-performance, are trusted, and can be adapted to their needs. Anyone who reads this article and thinks about high-performance NICs will have a bit more trust in the Mellanox brand as a flexible system.
FreeBSD & Netflix. We use Mellanox on our 100GbE CDN nodes.
It is a very different application from what the BBC is describing. Rather than one 80Gb/s stream, we have tens of thousands of several-Mb/s streams. And rather than kernel bypass, we use a heavily optimized kernel path, via sendfile and in-kernel TLS.
Limelight is primarily Chelsio with a long tail of Intel. I chose Chelsio over Mellanox because I like their drivers more, but both are top shelf NICs.
One thing to note on topic, we developed a driver framework called 'iflib' in FreeBSD that is somewhat similar in scope to NDIS on Windows. If you write an iflib driver, you automatically get netmap support on FreeBSD. So we have support for all of Intel, and newer Broadcom as well, without potentially unfamiliar corporate devs having to understand the intricacies of soft ring management for userspace networking.
> it makes your product much more attractive to people who will improve and promote your product for free
This is a huge myth.
1) Even if you open-source, the company retains control over the project and pushes its priorities first.
2) The codebase can be huge and intricate; very few external contributors will actually put in the time to learn it, leading to poor contributions (i.e. they might fix a given problem, but break other subsystems).
3) Even if there are external contributors, is the company willing to put in the manpower to process the changes and handle community interaction? I.e. do you really want to handle bikeshedding-style patches, potentially costing time for actual paid devs?
Most of the time, niche open-source contributions are more of a marketing tool (or an HR tool, à la "your profile looks great, however we're too greedy to hire you, please contribute to our software for free").
#1 is not true for bazaar-style (i.e. rather low-inertia) open-source projects, say the Linux kernel. At best, subsystem maintainers can push their own agenda, but even this level of control is rather limited. However, I do concede that it's rather true for non-profit-focused, cathedral-style (i.e. high-inertia) projects, say, the BSDs.
#2 does not mean that the code is crappy at all, unless you consider not writing down every single assumption made by the developer who wrote the whole subsystem to be "crappy". In which case, I can assure you that all software is crappy. This might sound like a huge generalization, but absent an infinite amount of time and money, everything is a compromise, especially when deliveries are constrained.
Devs who work daily on the codebase tend to know these compromises far better than newcomers. That's my point.
Yes, less inertia. If the Linux folks decide to change a central data structure, they do it, and don't care if anything breaks in some proprietary module. With the BSDs... well, they can't even agree on a modern DVCS; changing any central kernel data structure is pretty much a lost fight (I tried...). The same logic applies to risk-averse companies.
Lots of people are missing the point in this discussion: the goal is to send uncompressed video over closed IP networks, in a studio for example. I talked about some of the problems at Demuxed last year (referencing BBC R&D's work): https://www.youtube.com/watch?v=A4L5xEXXlas
I don't think many people are missing this point. I got it from the article itself, and even before that, when I saw BBC I figured it was going to be related to video.
The idea is explicitly that you would run the equivalent of a TV control room on a web app that knows how to switch between video feeds (potentially from fixed cameras without human operators) and then can output video to... whoever is your audience.
The "moving window" of near-live editorial decisions is pretty interesting as well, and seems geared towards giving non-professional editors a chance to fix mistakes.
How does a web app know how to switch between video feeds, or cameras know where to point?
Maybe we're talking about different kinds of events, but there's actually a lot of creativity that goes into camera framing, movement, shot selection, timing of cuts, etc. to end up with something that's actually interesting to watch...
The app is not choosing. The operator/director is choosing. The thing is that you give yourself a buffer before broadcasting.
"The user can pause the action at any point in time and seek back through the session to fine tune edit decisions using a visual representation of the programme timeline. On resuming, the play-head seeks forward to the end of the timeline. This functionality is made possible by ensuring the time shift window (DVR window) of the camera feed is infinite so that all live footage is recorded and can be randomly accessed by seeking.
Once the operator has established a sufficient buffer of edit decisions, they can begin broadcasting them. [...]
A research goal is to investigate how big the window of time should be between the broadcast play head and edit play head to ensure the operator has enough time to perform edits without feeling rushed and whilst keeping the programme as close to a live broadcast as possible. We call this the ‘near-live window’"
With IP, it's usually a software defined network model, along with some IGMP components for controlling video flow.
So the clients (software or control panels) would send a request to the routing orchestration system to request a route to send multicast flows (SMPTE 2110, separate multicast for video, audio streams, ancillary data) from a source to a destination, same as with a baseband router. With IP, the orchestration layer also drives the receiving device to join the multicast explicitly if it isn't IGMP based.
Your employer, amongst other dinosaur hardware manufacturers, is also making this a nightmare to implement in software because of the tiny packet-burst requirements (40µs). My team will spend thousands of man-hours, and our servers will waste huge amounts of energy, because power-saving has to be turned off in order to hit these crazy requirements.
In 40 years' time we'll still be complying with this nonsense because some manufacturers didn't want to update their FPGA designs. It's like fractional frame rates all over again.
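To make the timing pressure concrete, here's an illustrative pacing calculation. The bitrate and packet size are assumed round numbers for a sketch, not figures from the thread or the SMPTE 2110-21 spec:

```python
# Illustrative packet-pacing arithmetic for an uncompressed HD stream.
STREAM_BPS = 3e9        # ~3 Gb/s uncompressed 1080p stream (assumed)
PKT_BYTES = 1500        # one IP packet per RTP datagram (assumed)

pps = STREAM_BPS / (PKT_BYTES * 8)
gap_us = 1e6 / pps
print(f"{pps / 1e3:.0f} kpps, one packet every {gap_us:.1f} us")
```

A sender has to place a packet on the wire every few microseconds with little jitter, which is exactly the granularity at which CPU sleep states get in the way.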
This is really where the broadcast industry lost its collective mind.
There are valid use cases for both hardware solutions where appropriate, and software solutions for other use cases. There are many low latency use cases where crazy requirements are still actual requirements.
Yeah, on your second point: I used to do a bit of TV broadcast stuff and it was a marvelous technical world/system of its own. And it seems obvious that the BBC's proposal here is in some sense much more lo-fi than that.
I think their proposal was that you could give up on being able to do camera framing and movement, but you would still maintain the ability to cut between shots at aesthetically pleasing moments. I could just about imagine that producing good results, with a relatively predictable performance and a decent technical director.
But I think a system like this doesn't compete with professional video production. It competes with someone making a shaky cellphone video in a small venue -- a form of video which a lot of people seem to accept, these days. I could see the BBC's proposal being a step up from that (and a lot cheaper than a mobile TV studio setup).
>Maybe we're talking about different kinds of events, but there's actually a lot of creativity that goes into camera framing, movement, shot selection, timing of cuts, etc. to end up with something that's actually interesting to watch...
"Professional, live, multi-camera coverage isn’t practical for all events or venues at a large festival. For our research at Edinburgh Fringe Festival 2015, we experimented with placing three unmanned, static, ultra high definition cameras and two unmanned, static, high definition GoPro cameras around the circumference of the BBC venue. A lightweight video capture rig of this kind, delivering images to a cloud system, could allow a director to crop and cut between these shots in software, over the web and produce good quality coverage ‘nearly live’ at reduced cost."
> unique challenge here involves handling IP packets (around 1500 bytes each) at data rates of between 1 and 8 Gigabits per second
This reminded me of something. At this point it seems like Jumbo frames are never going to be widely adopted, are they? Otherwise this seems like the perfect application - massive datarates, controlled hardware/software, high-quality wiring...
I've been working with Mellanox cards for a few weeks; the MTU matters a lot. It's the difference between an iperf test measuring 14 Gbit/s at MTU 1500 vs. 40 Gbit/s at MTU 9000.
We got a couple of Connect-X5 cards, which allow switchless connections, akin to a ring topology. A lot of neat things, at just stupid line speeds, and latency levels I haven't seen in software, ever.
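A rough sketch of why the MTU matters so much here, assuming the per-packet cost on the host is roughly fixed (the 18-byte header+FCS overhead is standard Ethernet, not a figure from the comment):

```python
# Per-packet rate the host must sustain to fill a 40 Gb/s link.
# Frame size assumes a 14B Ethernet header + 4B FCS on top of the MTU.
LINK_BPS = 40e9
for mtu in (1500, 9000):
    frame = mtu + 18                      # header + FCS, bytes
    pps = LINK_BPS / (frame * 8)
    print(f"MTU {mtu}: {pps / 1e6:.2f} Mpps to fill the link")
```

Jumbo frames cut the required packet rate by roughly 6x, which is why the same per-packet software overhead stops being the bottleneck.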
Interesting. I bought a "real" netgear switch a few years back (geared to the small business market) and did some tests with jumbo frames enabled, and saw only a slightly under 10% difference in performance pushing 10GB files from my laptop to my NAS.
Everything else was consumer level though. I did ensure that everything was flipped to jumbo frames. I/O shouldn't have been the factor since it was SSD on both sides.
Mellanox has some interesting stuff, I used to work with it back in my HFT days.
For my home LAN, the problem with jumbo packets is that there doesn't seem to be a standard size that all NICs support. I have several machines that support 9000 bytes, some 9014 bytes, and some only up to 4000 bytes, so I haven't been able to get the whole network on a single MTU. At least this was the case a few years ago.
Jumbo frames, at around 9000 bytes, offer only a limited improvement over 1500-octet frames. That's still too small. I think NIC offload mechanisms already handle bigger chunks of data and let the OS TX/RX 64k or more at a time.
The last kernel hacking I did involved some work with 10Gbps NICs, so 80Gbps is just so attractive to my eyes.
There are different pass-through/fastpath patches for different chips to avoid memcpy or do zero-copy, but they are all kernel patches, kernel-only.
An alternative would be BPF/XDP or DPDK, for which you need to modify the kernel drivers somehow for good performance. I'm wondering whether netmap does that already, or has nothing to do with them.
All of them address dataplane packet movement; I hope I get an environment to experience these close-to-100Gbps networks in my next projects.
In the meantime, I am wondering: why do you pass 4K uncompressed video using IP packets...
Also, netmap is extremely easy to get going. It's a man page and some great examples, and the devs are superbly helpful.
I found DPDK daunting in comparison (it really is a different beast though, a real toolkit).
One nice thing about Netmap is it can fall-back to emulation mode and work with any NIC. This can be really helpful if you want a single codepath but don’t always need the performance. DPDK might support this too now, I haven’t looked at it for quite a while.
DPDK has great documentation and examples. I agree that it's more of a beast compared to netmap and pf_ring, but it's not lacking good documentation. Their documentation on each example are really well-done.
DPDK can fall back to emulation mode as well. See the AF_PACKET driver.
Hmmm, I found /lots/ of documentation, but lots of gaps. For example, I still have no idea how many rings, how many queues and how many cores I should use for my packet-forwarding app. Presumably there's a mental model I need to learn here. It's among the first things any DPDK user needs to know, yet it's not the first thing in the docs. In the examples, it's magic numbers all the way down with no explanation, e.g. http://dpdk.org/doc/guides/sample_app_ug/skeleton.html
Also, I can't find a hello-world-like "send a packet" application, or its complement, "receive a packet"; only forwarding apps. That's not great when I'm trying to test if my config works.
How many is really a question of how fast your processor is, how many cores, and what rates you're doing. They can't really give a solid answer since it varies so much, but their examples have defaults.
Hello world apps are testpmd or dpdk-pktgen.
Hugepages depend on how much memory your program is going to allocate. They also come in different sizes, so that example you gave is only 64 2MB pages, which is pretty small. I usually go for the largest hugepages the processor supports (1GB on x86-64) and allocate a small number of those. The reason is that most of my DPDK applications are the only things running on that machine, so I don't need to worry about sharing memory.
I agree it's tougher to learn, but it's much more powerful. They likely need some kind of beginners' guide, since once you get past the tough parts, it's great.
> How many is really a question of how fast your processor is, how many cores, and what rates you're doing.
FWIW, as far as I can tell, if you have no significant CPU work to do per packet, then you never need more than one core. A single core on a 5 year old laptop CPU can memcpy at a few hundred Gbit/s. In my tests I could even do the UDP checksum in software at 40 Gbit/s on one core. Mostly for me the bottleneck is the DDR RAM bandwidth, and using more cores doesn't increase that.
The docs should explain why/how the default values were chosen in the example apps.
I still don't know if there is any advantage to using more than 1 queue (aka ring) when using a single core. When I tried it, I found I could send nearly twice as many packets, but got lots of packet loss. Weird. I'm working on a public cloud, so the rules of what "line rate" really is are not available. There's some kind of fairness rules that either the hypervisor or the TOR enforces (I think). It's not at all clear if/how back-pressure is asserted to the NICs.
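The software UDP checksum mentioned above is the standard one's-complement Internet checksum from RFC 1071; it's cheap enough to compute on the fly. A minimal sketch (not the DPDK implementation):

```python
def inet_checksum(data: bytes) -> int:
    """One's-complement Internet checksum (RFC 1071), as used by UDP."""
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Example bytes from RFC 1071 section 3
print(hex(inet_checksum(bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7]))))  # -> 0x220d
```

The fold loop is why it vectorizes so well: it's just 16-bit additions with carry wraparound, no per-byte branching.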
> Hello world apps are testpmd or dpdk-pktgen.
testpmd is 34,000 lines of code; dpdk-pktgen is 59,000. These are not hello-world apps. I cannot read them to learn how to write a DPDK app. To check whether I had the basics right, I wrote a simple UDP sender and UDP echoer; they were 350 lines each. It seems like something similar should be the first apps every beginner sees.
> Hugepages depends on how much memory your program is going to allocate
Yeah, fair enough. I didn't explain my actual problem. My hello world programs worked fine with 64 huge pages on Intel cards. But I found no matter how small I made the ring buffers, 64 huge pages wasn't enough with Mellanox cards. And the error messages I got were gibberish from the bowels of the Mellanox driver (which isn't DPDK's fault). The Mellanox docs should have said how many hugepages their driver needs as an overhead (it appears to be more than 64).
Netmap cannot (and does not want to) achieve the same performance as DPDK, but it has other advantages beyond simpler installation, a gentler learning curve and flexibility (think of netmap pipes, netmap monitor ports and the VALE switch).
Netmap applications can save CPU (sleep) when there is no traffic, while DPDK applications must resort to busy-waiting and need dedicated cores.
This may be very handy for some applications, as you can find a sweet spot between high packet rate and CPU consumption.
Moreover, Netmap uses the user<-->kernel interface to implement a layer of protection between the user-space application and the NIC hardware (performing validation on the application operations, which obviously comes with some cost). AFAIK, DPDK does not pay this cost, but allows unprotected access to the NIC hardware (which could do wild memory access through DMA).
In terms of performance, benchmarks clearly show that DPDK has lower per-packet overhead (because it does fewer things), although netmap is not really far from those numbers. The point here is that benchmarks are often like free-running engines: they show performance with no useful work. When you write a real application, you want to perform some useful work, which usually includes touching the packets, copying, computing packet headers, etc.; this results in expensive cache misses and CPU cycles that add up on top of the basic I/O cost. If the real-work cost is something like 100ns per packet (or more), then a difference in I/O cost of 5-10 nanoseconds per packet may not really matter that much.
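The amortization argument is easy to see with rough numbers in the same spirit as above (the specific per-packet I/O costs here are assumed for illustration, not measured):

```python
# A per-packet I/O cost gap shrinks to noise once real work is added.
io_fast, io_slow = 20e-9, 30e-9     # assumed per-packet I/O costs, seconds
for work in (0.0, 100e-9):          # no work vs. 100 ns of real work per packet
    ratio = (io_slow + work) / (io_fast + work)
    print(f"work={work * 1e9:.0f} ns: slower stack costs {ratio:.2f}x")
```

With zero work the slower stack looks 1.5x worse; with 100 ns of real per-packet work the gap collapses to under 10%.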
As another commenter said, it's been out for quite a while; it's more or less the standard kernel-stack bypass, and certainly less recent than this BBC thing. I wish they'd spent their time contributing better MLX support to DPDK.
Can someone ELI5 the major challenges of delivering "broadcast quality" video over IP compared to cable? It seems crazy that pushing HD content over a standard coax cable is faster than downloading HD over IP. We have been pushing HD content to our TVs for 10+ years, but many people still have trouble streaming an HD movie from Netflix.
Is this simply due to the overhead of IP transport supporting bidirectional communication? That is, a TV broadcast only needs to support a fixed set of N unidirectional flows (channels), but IP needs to support a dynamic set of N bidirectional flows?
It's essentially the difference between circuit switching (dedicating a whole physical link to a particular signal) and packet switching (breaking it down into smaller chunks that can each be dealt with independently throughout a network, and then get reassembled on the other end). The former is easier to do (no overhead of disassembling/reassembling) but less flexible.
Challenges are mostly around keeping lock-step timing across a lot of separate devices in the system at scale (using PTP) for frame-accurate switching, and working through getting the SMPTE 2110 standards completed for interop. But there are huge benefits in the ability to scale well beyond what is practical with coax, as well as cost savings on cabling with more density (and weight savings for mobile production trucks, which is important). There are also big architectural advantages to having more dynamic infrastructure and device discovery (AMWA NMOS, etc.) rather than hard-wired coax signal paths.
There are a lot of scenarios with massive scale on the back-end, with lots of streams managed as part of the production process ahead of delivery to consumers... We have an IP video router (https://evertz.com/products/EXE-VSR) that has 2,304 10Gbps ports (moving to 25Gbps per port), each of which can do 6x fully uncompressed SMPTE 2110 video (1080i 29.97fps) flows in each direction, with a 46Tbps non-blocking back-plane so the scale can get a bit crazy.
The full scale back-end stuff is pretty invisible from the consumer side. Eventually that internal infrastructure feeds into distribution encoders that produce lower bitrate streams for cable/sat/web distribution (for real-time events, with separate file delivery for VOD platforms like iTunes & Netflix)
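A quick sanity check on that router's scale, using only the figures quoted in the comment above:

```python
# Evertz EXE-VSR figures as quoted: 2,304 x 10Gbps ports, 6 uncompressed
# SMPTE 2110 1080i flows per port per direction, 46 Tbps backplane.
PORTS = 2304
PORT_BPS = 10e9
FLOWS_PER_PORT = 6

backplane_tbps = PORTS * PORT_BPS * 2 / 1e12   # both directions
print(f"{backplane_tbps:.1f} Tb/s backplane")  # ~46 Tb/s, matching the spec
print(f"{PORTS * FLOWS_PER_PORT} uncompressed flows per direction")
```

The numbers are internally consistent: 6 flows per 10G port implies roughly 1.5 Gb/s per 1080i flow, which is the classic HD-SDI rate.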
As you said, broadcast is usually done with IP multicast, so it's not really bandwidth-intensive. The Netflix example is different: there you have a large group of people who all want different streams. At Netflix's scale, serving that many simultaneous connections from any reasonable number of datacenters can saturate DC uplinks at peak hours. That's why they've resorted to caching appliances in IXs.
One really funny story about this. When AT&T was creating U-verse, way back when, they picked IP multicast as the delivery method for the video. In one of the first test rollouts, to the house of an engineer on the project, they found an issue: the engineer's kid picked up the remote and started hitting the channel-up button rapidly. It takes a few seconds for the multicast join to be processed, so this really did not work that well. The fix was that when you join, say, channel 100, the box joins the streams for channels 96-104, so you get the picture ASAP if you push the up/down channel button. This is why you see a delay if you jump to 200 from 100: the multicast join has to be processed.
Anyway that is the story I heard from one of their router vendors. Technically it is all correct.
I think other (more modern?) systems solve that by requesting the first few seconds after a channel switch via unicast, in parallel to the join request, to cover the time until the join completes. I discovered this when I swapped my ISP router for a Linux box and didn't forward IGMP. When turning on the TV I saw a frozen image. After switching the channel I got about 4 seconds of TV and then a freeze again. Some wiresharking pointed me at the IGMP and multicast traffic I had forgotten to handle properly, and after setting it all up I saw a couple of seconds of unicast UDP data after every channel switch.
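For reference, the IGMP join the router has to forward is, from the receiver's point of view, just a socket option. A minimal sketch in Python (the group address and port here are hypothetical, not from any real IPTV deployment):

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # hypothetical channel group and RTP port

# Receiver socket for the multicast flow
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group makes the kernel emit an IGMP membership report;
# upstream routers then graft this host onto the multicast tree.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # no multicast-capable interface (e.g. a sandboxed environment)
```

This is also why a home router that silently drops IGMP breaks IPTV: the report never reaches the upstream querier, so no traffic is ever forwarded down.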
Starting the unicast stream wouldn't be much faster, if at all, than the mcast join. Most channels are broadcast to the local head node (the main router in a service area); the join just builds the tree down past the head to the end user. The last time I looked at / worked on / designed these systems was maybe 8 years ago, but I don't expect them to have changed too much. But you never know. I think the most interesting thing was that this use case was what mcast was really designed for, and the only real-world use case I have seen beyond stuff in the finance market (market data feeds from the exchanges are multicast). Well, there is OSPF...
Yes, these setups are really just a big mystery regarding the inner workings if you don't happen to work in that field. The unicast stream really started almost instantaneously when switching channels (sub second), so they somehow got this optimized well, but multicast always took 2 to 3 seconds...
Actually, I didn't realize that broadcast was done with IP multicast. I was thinking it used a separate, older technology. So does that mean that, generally speaking, modern ISPs that bundle Internet + cable push the cable content over the IP transport using multicast in the layer 2 DOCSIS network?
I used to have a fiber + TV package. It was definitely multicast. I found that when I swapped their router for my own I had to make sure IGMP was handled in order for TV and on-demand to work.
I was amazed that a bit of software that is basically in all home routers is so old, unloved and misunderstood.
Multicast is a funny thing. For years people thought it was the future, until they realised it 1) requires the WAN end to support it and 2) requires people to watch stuff at the same time.
It's not the only way, but yes, there are absolutely cable companies that transmit via IP multicast. There'll still be other filtering going on that bundles a couple of channels together into the radio/DOCSIS version of a channel, so that the end devices don't have to digitally filter the whole firehose of data, but then it's IP multicast on top of that.
I know that AT&T pushes TV over IP, but I no longer use a TV provider. Also, when I had slower AT&T service over twisted pair, they would only send the signal of the TV channel you were watching from the DSLAM to your house. It would take a noticeable second to switch channels.
It'd be great if it could be done using IP multicast! Sadly over the Internet to consumer devices multicast isn't an option, sometimes it is within a particular ISP for their services, but from an arbitrary broadcaster to a receiver it's all unicast (and often pull-based using HLS or DASH)
Yeah, I worked on a startup to push IP multicast TV to consumers way back when. Went nowhere for the reasons you state, but we got some cool tech out of it regardless.
I meant at real scale where it makes sense. You're correct in that your upstream ISP needs to support PIM to the customer edge, which is hard to get, but available for certain customers and especially so in datacenter fabrics.
Ah yes, I thought you were talking about distribution rather than inside a production facility. In that case the bandwidth of uncompressed 4K (which you want so you're not introducing compression artifacts) is on the order of several Gbps, so bandwidth is pretty high, although multicast is used.
There is an awful lot of data that is routed around internally before you hit transmission. That's more what's being talked about here, not delivering the video to viewers.
So if this lossless video over ethernet thing ever trickles down to consumers, what happens?
Can my computer monitor become just another network device plugged into Ethernet like my printer already is?
In my living room, can I ditch the video routing part of my AV receiver and just plug everything (streaming devices, video games, cable box) into a very fast ethernet switch?
am i reading this right? are they coming up with a way to use IP to broadcast "television" in the sense that, if this was finished and open source that "anyone" could feasibly run their own "television channel/station" (edit: over IP) ?
Not exactly. This is about how to pass video between interoperable machines.
Traditionally, within a broadcast house, video+audio was sent between machines over HD-SDI on BNC coax [1]. For the first generation of HD, this ran at 1.5Gb/s. For 1080p/59.94Hz, a new standard was developed to run at 3Gb/s. For 4K at 59.94Hz, there is a 12Gb/s standard.
HD-SDI routers are extremely expensive. Each input and output from a device has to be individually cabled to a router and some devices can have many dozens of I/Os. There has been a push to switch from using ridiculous numbers of cables to using IP solutions and off-the-shelf IP routers. SMPTE 2110 provides a framework for doing this using RTP [2].
4K needs roughly 12Gb/s, so to carry a single video, you need at least 25Gb/s networking. If you want to carry many signals, the network bandwidth goes up fast -- that's why BBC is interested in 100Gb/s links.
Also keep in mind the traffic is not "bursty". These links may be nearly saturated 24/7. Most off-the-shelf routers are not designed with enough buffering to handle fully saturated links on all connectors.
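A back-of-the-envelope check on that 12Gb/s figure. Assuming 10-bit 4:2:2 sampling (20 bits per pixel on average, the common studio format; this assumption isn't stated in the comment), the active picture alone accounts for most of the interface rate, with blanking making up the rest:

```python
# Active-picture bitrate for UHD 4K at 60 Hz, 10-bit 4:2:2 sampling.
# 4:2:2 means 10 bits luma per pixel + 10 bits chroma shared across
# two pixels -> 20 bits/pixel on average.
W, H, FPS, BITS_PER_PIXEL = 3840, 2160, 60, 20

bps = W * H * FPS * BITS_PER_PIXEL
print(f"{bps / 1e9:.2f} Gb/s")   # ~9.95 Gb/s of active picture
```

The SDI interface also carries horizontal/vertical blanking and ancillary data, which is why the interface standard is 12 Gb/s rather than ~10.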
Not necessarily to broadcast television, but to produce it. Rather than having to buy expensive bespoke hardware to fit out a TV studio, what happens if it was just software running on a bunch of servers (maybe in a cloud)? This is the background of the project: http://www.bbc.co.uk/rd/projects/ip-studio
Also. live TV. Live feeds over IP have always had their issues. Slight delays, even the concept of buffering, is not acceptable. Packets have to come in and out of the system at roughly the speed of light. Any drop of anything will be seen on screens. Parity can help, but that requires processing that is very difficult in the acceptable timeframe. A typical TV studio could have 20 different live feeds coming in and four or five going out. They need to switch between and/or layer any number of these instantly and seamlessly. The bandwidth demands quickly become insane.
Remember too that the feed will transit over many different IP-based systems. A seemingly acceptable delay on one system, a few microseconds here and there, adds up or multiplies as data streams between equipment.
I think it's from the cameras and recording gear to the backend, where it's pieced together into what is broadcast... given they mention uncompressed HD and UHD, that's the impression I get...
If this is for a more-or-less closed system, I wonder why it uses IP, and not an RDMA-type network (Infiniband etc.) which already is kernel-bypass and works routinely at around 100G, 1μs latency. There's nothing special about the Mellanox drivers amongst the Openfabrics ones in having free drivers in Linux, but they generally require blobs or separate firmware.
My view is that of all my subscriptions for entertainment (Sky, Disney, Netflix, and the time wasted listening to ads), the one that gives easily the best value for money is 12 quid a month to the BBC.
(I mean sky is nearly 60 quid a month and i cannot persuade my other half to go broadcast-less)
The BBC have always been awesome, especially with their R&D. I used to work at a manufacturer of teletext inserters years ago so we met with lots of BBC people - they had 4K before anybody had heard of it, as well as the work on the Dirac codec.
There's something about broadcast stuff I'd love to work with again. Even just a broadcast mixing-desk jog wheel was a league above anything I've ever DJ'd with; the engineering to make it feel correct was amazing. I'd imagine these days, moving to IP, it's more of a challenge I could step up to. Are the BBC hiring devops people...?
I never understood why they stick with this model instead of just levying a tax on citizens (like my beloved CBC). The BBC has been more than just a TV station for a long time; and I hear people in the UK are dead tired of having BBC license "collectors" coming round every 2 months to harass them into paying fees for the flatscreen TV they have hooked up to an XBox.
If you tell the BBC licence collectors (phone/letter/in person) that you're not watching TV, they stop coming.
The licence collectors remain a favourite grumble of anti-government or anti-tax people, but most people aren't affected. (Most people buy the licence; most people who don't need one instead tell the BBC so, roughly every 3 years.)
The only people who have problems are the people who refuse to tick a box on a webform that says "I do not use iPlayer, I do not watch any tv live as it's broadcast".
I didn't have a TV license for years because I didn't watch live TV etc. and they just sent me regular letters (I think every two years). True that those letters are kind of menacing but...
That's not quite true: the wording on their paperwork implies that they "may" still come round to persuade you to buy a licence. It's usually the elderly who are fooled into "buying" a licence on the spot, but are instead deemed guilty of not having a licence and are sent to court.
The licence system is too heavy-handed, especially for those who do not know their rights.
The BBC is fantastic. I use a proxy to watch it in the US. I keep telling them I would happily pay the licence fee if they allowed us to watch iPlayer outside of the UK. They really should do this. I would wager money they would get more paying licences outside the UK than inside.
You doubt a microkernel can approach Linux kernel efficiency, yet here a typical kernel task is delegated to user space for efficiency under Linux; an approach similar to how networking might be handled by a microkernel?
I agree that it'll be interesting to see how fuchsia turns out.
The problem is that the Linux kernel can't process many simultaneous small connections.
Just to be clear:
- Linux can easily transfer at 40 Gbps
- Linux chokes at around 1M packets per second per CPU socket
That's right.
So Linux can easily transfer one or a few simultaneous flows at 40 Gbps.
But it can't sustain 40 Gbps across a huge number of small flows.
The bottleneck is the number of packets per second it can inspect.
So if you are sending 100 big transfers, you will reach 40 Gbps.
If you send 5M small transfers, Linux will die.
This is why netmap is handy.
It offloads the packets from Linux directly to the app.
The BBC is sending xM small requests per second, hence millions of packets per second.
In conclusion, netmap is good if you need a lot of small simultaneous connections.
I reached 40M connections on a Chelsio 40 Gbps NIC using netmap on FreeBSD 11.