At this point I think Intel's entire advantage over the past two decades was being on the bleeding edge of silicon processes, paired with a middling silicon design team and good firmware devs that could patch most flaws in microcode.
The process lead has disappeared, and when working with RF frontends (LTE, WiMax, cable modems, WiFi) their ability to cover up implementation errors in the driver is limited.
Intel's prospects for the next few years look dim in this context, given the tens of billions wasted on the aforementioned forays into making radio chipsets, and the declining revenue of their legacy CPU market paired with serious manufacturing constraints that have hamstrung their supply chain.
I have a book about UPnP written by some Intel engineers, and it describes a bastardization of HTTP so insane that there is no way I would trust the company to embed an HTTP server inside its management engine. With an attitude like that, it just wouldn't be possible to get security right.
Maybe INTC went up tonight because investors know now that Intel will stop wasting money on ads proclaiming themselves to be the leader in the 5G "race". (How come nobody cares about the finish line?)
Edit: Most Comcast/Xfinity modems and converged gateways are Intel-based and have this and other issues; pure garbage devices.
You'd have to scroll down a bit on the page (I'm copying the factual data in case the remote link goes down at a later date).
The bad news is that a lookup table is literally required to know what chipset is in use inside of the device - just like with all of the WiFi adapters :(
(PS, this table looks like garbage, but many users on mobile will cry if I prefix with spaces to make it look OK... I'd really rather every newline were a <br> element.)
Motorola SB 6121 4x4 (Intel Puma 5)
Motorola 6180 8x4 (Intel Puma 5)
Arris SB 6183 16x4 and Motorola MB7420 16x4 (Both Broadcom)
NetGear CM1100 32x8 (Broadcom)
NetGear CM1000 32x8 (Broadcom)
NetGear Orbi CBK40 32x8 (Intel Puma 7)
Note: I tested this model and was told that the modem built into the Orbi does not have the same issues as other Puma 6/7 modems. I haven't seen any issues with it since I started using it. The Orbi modem is based on NetGear's CM700.
TP-Link - TC-7610 8x4 (Broadcom)
Routers that work with zero issues with the above cable modems, from my current collection:
Asus - RT-AC66U and GT-AC5300 (OEM and Merlin FW)
D-Link - Many model routers tested including COVR models.
Linksys - WRT1900AC v1 and WRTx32v1
NetDuma - R1 Current firmware version (1.03.6i)
Netgear - Orbi CBK40, R7800, XR450 and XR500
Forum User Modem and Router Experiences
Arris - SB 6141 8x4 (Intel Puma 5) and D-Link DIR-890L and ASUS RT-AC5300
Arris - SB 6141 8x4 (Intel Puma 5) and Asus RT-AC66U
Arris - SB 6183 16x4 (Broadcom) and Linksys WRT1900ACM and WRT32x and NetGear XR500
Arris - SB 6183 16x4 (Broadcom) and NetGear XR500
Cisco - DPQ3212 (Broadcom) and Asus RT-AC66r, D-Link DGL-4500, NetDumaR1 and NetGear R7000
Motorola - MB 7220 (Broadcom) and Asus RT-AC66r, D-Link DGL-4500, NetDumaR1 and NetGear R7000
TP-Link - TC-7610 8x4 (Broadcom) and NetDuma R1
That's what the Internet Archive is for! The URL above has now been archived.
EDIT: And the problem isn't just the resulting end-user latency. TCP's congestion control mechanisms (i.e. the ones that let the endpoints push as much traffic as the network can bear and no more) rely on quick feedback from the network when they push too much traffic. The traditional, quickest, and most widely implemented feedback is the packet drop - when drops are replaced with wildly varying latency, it's hard to set a clear time limit for "this packet was dropped", and Long-Fat-Network detection is a lot harder.
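To make that concrete, here's a minimal sketch of the standard RTO estimator (RFC 6298) fed two invented RTT traces - a quiet link versus a bloating buffer - showing how the "declare it lost" timer balloons; the sample values are made up for illustration:

```python
def rto_series(rtt_samples, alpha=1/8, beta=1/4):
    """Yield the retransmission timeout after each RTT sample, per RFC 6298."""
    srtt = rttvar = None
    for rtt in rtt_samples:
        if srtt is None:
            srtt, rttvar = rtt, rtt / 2              # first measurement
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1 - alpha) * srtt + alpha * rtt
        yield max(1.0, srtt + 4 * rttvar)            # RFC 6298's 1 s lower bound

quiet   = [0.02, 0.021, 0.019, 0.020]                # ~20 ms RTT, idle link
bloated = [0.02, 0.5, 1.8, 3.2, 4.0, 3.7, 4.2]       # queue filling up (seconds)

print([round(r, 1) for r in rto_series(quiet)])      # pinned at the 1 s floor
print([round(r, 1) for r in rto_series(bloated)])    # climbs to many seconds
```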
Whereas most UDP applications are constant rate, with some kind of control channel.
Bufferbloat should not matter for your home connection. (Unless it is constantly in use by more than one client.)
However, when congestion occurs, the data you sent that is sitting in these buffers may already be stale and irrelevant - but there's no way to invalidate that cache on the middleboxes. That leads to worse performance, because stale data clogs the pipes exactly when they are full, which prevents faster unclogging. The result is a jerk in TCP: it scales back more than it should have, after an unnecessary wait for the network to transmit the stale data.
That is wrong. A single client can easily saturate the connection (e.g. while downloading a software update or uploading a photo you just took to the cloud). Once the buffers are full, all other simultaneous connections suffer multi-second delays.
The result is that the internet becomes unusably slow as soon as you start uploading a file.
Using just my smartphone, I can induce and measure >700 ms of latency on my cable modem connection. That's worse than old-fashioned high-orbit satellite internet!
Traditional TCP congestion control in an environment where buffers are oversized will keep expanding the congestion window until it covers the whole buffer or the advertised receive window, even if the buffer holds several seconds of packets. There may be some delay-based retransmission, but traditional stacks will also adapt, assume the network changed, and treat the peer as if it really is 8 seconds away.
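Back-of-the-envelope arithmetic (buffer size and link speeds invented for illustration) shows how you get to multi-second queues:

```python
# How long a full FIFO buffer delays every packet behind it:
#   queueing delay = buffered bytes / drain rate.
# Numbers here are illustrative, not measurements of any specific modem.

buffer_bytes = 2 * 1024 * 1024          # a hypothetical 2 MB device buffer
for uplink_mbps in (1, 5, 20):
    delay_s = buffer_bytes * 8 / (uplink_mbps * 1e6)
    print(f"{uplink_mbps:>2} Mbit/s uplink -> {delay_s:.1f} s of added latency")
# 1 Mbit/s -> ~16.8 s, 5 Mbit/s -> ~3.4 s, 20 Mbit/s -> ~0.8 s
```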
Is this bufferbloat? I guess what happens is that a bunch of packets get queued up and I have to wait until all of them are delivered?
Ericsson, at least, published a paper showing they recognized the problem: https://www.ericsson.com/en/ericsson-technology-review/archi...
I do hope that shows up in something; however, the chipsets on the handsets themselves also need rational buffer management.
To exclude other causes you'd need to watch the network traffic with something like Wireshark and look at retransmissions. If retransmissions suddenly shoot up and packets then trickle in later, but very slowly, that could be bufferbloat.
But the 1 minute seems too long.
Reading more about it, you are correct about 1 min being too long, so it's probably not (just) bufferbloat.
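If you don't want to break out Wireshark, a cruder first check is just pinging while you saturate the link. A rough sketch (assumes a Unix-like `ping` binary; 8.8.8.8 is an arbitrary target):

```python
# Quick-and-dirty bufferbloat check: compare ping RTT when the link is
# idle vs. while saturated (start a large upload/download when prompted).
import re
import subprocess

def avg_ping_ms(host="8.8.8.8", count=10):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    times = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
    return sum(times) / max(len(times), 1)   # 0 if nothing came back

idle = avg_ping_ms()
input("Now start a big upload/download in another window, then press Enter...")
loaded = avg_ping_ms()
print(f"idle: {idle:.0f} ms, under load: {loaded:.0f} ms")
if loaded > idle + 100:
    print("Latency balloons under load -> looks like bufferbloat.")
```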
I hope to have a document comparing it to DOCSIS 3.1 PIE at some point in the next few months; in the meantime, I hope more people (especially ISPs in their default gear) give cake a try! It's open source, like everything else we do at bufferbloat.net and teklibre.
My Netgear R7000 can't handle my 400 Mbps connection using QoS throttling. I will probably need at least a mid-range Ubiquiti router to handle it.
Avoid modems with Intel, specifically the various "Puma" chipsets. Best to double-check the spec sheet on whatever you buy.
The main alternative seems to be Broadcom-based modems: TP-Link TC7650 DOCSIS 3.0 modem and Technicolor TC4400 DOCSIS 3.1 modem (of which there are a few revisions now).
$69 MikroTik hAP ac2 will easily push 1 Gbps+ with QoS rules - https://mikrotik.com/product/hap_ac2#fndtn-testresults (it's a bit more tricky to set up, and you need to make sure you don't expose the interface to the internet)
_Some communications equipment manufacturers designed unnecessarily large buffers into some of their network products. In such equipment, bufferbloat occurs when a network link becomes congested, causing packets to become queued for long periods in these oversized buffers. In a first-in first-out queuing system, overly large buffers result in longer queues and higher latency, and do not improve network throughput._
I hope I get this right, please correct if needed:
So basically Intel's chipsets were creating what looked like a fat network pipe that accepted packets from the host OS really fast, but was in fact just a big buffer with a garden hose connecting it to the network. The result: your applications write these fast bursts and misjudge transmission timing, causing timing problems in media streams like an IP call, leading to choppy audio and delay. The packets flow in fast and quickly back up, and the IP stack, along with your application, now has to wait. (Edit: I believe the proper thing to say is that the packets should be dropped, but the big buffer just holds them, keeping them "alive in the mind" of the IP stack. The proper thing to do is reject them, not hoard them?) The buffer empties erratically as the network bandwidth varies, and might not ask for more packets until n packets have been transmitted. Then the process repeats as the IP stack rams another load into the buffer and, again, log jam.
A small buffer fills fast and will allow the software to "track" the sporadic bandwidth availability of crowded wireless networks. At that point the transmission rate becomes more even and predictable leading to accurate timing. That's important for judging the bitrate needed for that particular connection so packets arrive at the destination fast enough.
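A toy simulation of that "log jam" (all numbers invented): a burst hits a FIFO that drains at a fixed rate, and the only thing the buffer size changes is whether the sender finds out via drops or via delay:

```python
# Toy FIFO model: a burst of packets arrives faster than the link drains
# them. A huge buffer holds (and delays) everything; a small one drops
# the excess so the sender gets fast feedback. Numbers are invented.

def simulate(buffer_pkts, burst=300, drain_per_ms=1):
    queued, dropped, worst_delay_ms = 0, 0, 0
    for _ in range(burst):                 # the burst arrives all at once
        if queued < buffer_pkts:
            queued += 1
            worst_delay_ms = max(worst_delay_ms, queued / drain_per_ms)
        else:
            dropped += 1                   # the sender notices this quickly
    return dropped, worst_delay_ms

for size in (32, 1000):
    dropped, delay = simulate(size)
    print(f"buffer={size:>4} pkts: dropped={dropped:>3}, worst delay={delay:.0f} ms")
# small buffer: many drops, ~32 ms worst delay (fast feedback)
# huge buffer:  zero drops, ~300 ms worst delay (sender kept in the dark)
```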
Bottom line: don't fool upstream connections into thinking that you're able to transmit data faster than you actually can.
Buffering is layer 4's job. Do it on layer 2[a] and the whole stack gets wonky.
[a] Except on very small scales in certain pathological(ly unreliable) cases like Wi-Fi and cellular.
And in the case of telco equipment, that's a tradition-minded and misguided conscious policy decision.
Smaller buffers are in general better. However advanced AQM algorithms and fair queueing make for an even better network experience. Being one of the authors of fq_codel (RFC8290), it has generally been my hope to see that widely deployed. It is, actually - it's essentially the default qdisc in Linux now, and it is widely used in quite a few QoS (SQM) systems today. The hard part nowadays (since it's now in almost every home router) is convincing people (and ISPs) to do the right measurement and turn it on.
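For the curious: on a Linux box you can check what's in effect via sysctl (a quick sketch; /proc/sys/net/core/default_qdisc is the standard path, though routers often configure queueing per interface instead):

```python
# Check which queueing discipline new network interfaces get by default
# on Linux. fq_codel has shipped as the default in most distros for years.
with open("/proc/sys/net/core/default_qdisc") as f:
    qdisc = f.read().strip()
print(f"default qdisc: {qdisc}")
if qdisc not in ("fq_codel", "fq", "cake"):
    print("consider: sysctl -w net.core.default_qdisc=fq_codel")
```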
More generally it refers to any hardware/system with large buffers - needed to handle high throughput, but liable to cause poor latency due to head-of-line blocking.
Simply use a different queueing strategy or just smaller buffers.
Revenue grew last quarter almost 20% YoY to ~$20B
Net income has been up 40%-80% each quarter YoY, to ~$5B
While I do believe what was said above, it seems that whatever choices they are making that supposedly make them a failing company from an engineering perspective are making them a successful company from a financial perspective.
I think the only point that I disagree with in the OP is the idea that there is declining revenue in the "legacy CPU" market. It seems like there is still a long trajectory of slow growth there at worst.
These large customers that have caused a temporary surge in profit are in the process of migrating away from Intel.
Finances for big brands can be lagging indicators.
Both times they have tried to put an x86 core around the modem part with varying success.
Come to think of it, the last decade has been really bad for Intel. They no longer have a node advantage. They no longer have a performance advantage in any market space that I can think of outside of frequency-hungry, low-thread applications (games).
Their WiFi chips are good, but second tier.
Their Modem chips are third tier.
Their node is mostly on par with the competition, for now.
Their CPUs are trading blows with AMD.
Their ethernet chips are a generation behind.
Optane is a bright spot, but we'll see how they squander that.
The next big diversification play by Intel is GPUs, I have no idea how that will pan out.
Optane is a neat idea, but the severe change in software architecture combined with only select CPUs even supporting it will limit uptake outside FAANGs or organizations with really specialized needs.
I'm not sure it's really a niche.
They were also scrambling to move all their services away from use of that database in favour of a horizontally scaled system that could grow further.
The query rate that can be handled by a single conventional server is pretty monstrous these days. You'd have to be simultaneously a) at maybe top-50-website levels of load (I'm well aware that there's a lot more than websites out there, but at the same time there really aren't that many organizations working at that scale, much as there are many that think they are) and b) confident that you weren't going to grow much.
Financial systems, especially trading platforms, need to ensure market fairness; they also need, at the end, a single database, since you can't have any conflicts in your orders, and the orders need to be executed in the order they came in across the entire system, not just a single instance.
This means that even when they do end up with some microservice-esque architecture for the front end, it still talks to a single monolithic database cluster in the end, which is used to record and orchestrate everything.
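A minimal sketch of that single sequencing point (names and structure invented for illustration): every order, no matter which front-end service accepted it, gets stamped by one process into a single total order:

```python
# Toy illustration: one sequencer assigns a gapless, monotonically
# increasing sequence number to every incoming order, so all downstream
# systems replay exactly the same global order. Names are hypothetical.
import itertools

class Sequencer:
    def __init__(self):
        self._seq = itertools.count(1)
        self.log = []                      # the single source of truth

    def submit(self, order):
        stamped = {**order, "seq": next(self._seq)}
        self.log.append(stamped)           # a durable append in real life
        return stamped

seq = Sequencer()
# Orders arriving from many front-end "microservices" still funnel
# through this one point - which is why the database stays monolithic.
for o in ({"side": "BUY", "qty": 100, "sym": "INTC"},
          {"side": "SELL", "qty": 50, "sym": "INTC"}):
    print(seq.submit(o))
```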
Take a look at how cagey Intel is acting about Optane: https://www.semiaccurate.com/2018/05/31/intel-dodges-every-q...
This is the same behaviour as with Intel's LTE chips, where promised features keep slipping (much to Apple's dismay).
All of which looks to solve the wear problem even if it means higher price and latency.
However, my impression from the sidelines was that Intel got sidetracked with 40G while everyone else, especially DC network fabric land, went toward 25/100.
Also, IIRC, Intel GPUs play the nicest for low-end devices and battery life.
Ethernet NIC/CNA rankings(IMHO):
1. Mellanox is the pack leader right now.
2. Chelsio is right up there with them, but not leading.
3. SolarFlare and Intel bring up the middle ground.
4. Everyone else (QLogic, Broadcom, etc)
Aquantia is an unknown for me. As long as they don't suck, they'll probably go in tier 3.
I am speaking off the cuff as someone heavily involved and interested in RF in general and WiFi/LTE in particular.
From my anecdota (shame this isn't a word), Atheros chips have better SNR and higher symbol discrimination thanks to cleaner amps and better signal-discrimination logic, and they tend to be at the forefront of newer RF techniques in the WiFi space. All this culminates in better throughput, latency, and spectrum utilization than anyone else.
It also helps that their support under Linux is far superior to most everything else, which helps in Router/AP/Client integration and testing.
I don't even like Qualcomm, but from my experience, you will almost always regret choosing someone else for anything but the most basic requirements.
A big frustration with Qualcomm wifi on Windows has been that they do not provide driver downloads to end users. If you are using a laptop that has been abandoned by the OEM and you have a wifi driver problem you have to hunt for the driver on sketchy 3rd party sites or just live with it. I have personally had to help several people find drivers because ancient Qualcomm drivers were causing bug checks on power state transitions.
What real-world experiences have you had with Intel wifi on Linux and Windows that make you believe it is second rate?
Mellanox has 40GbE (and above) adapters with good reputation.
Mellanox also have 10GbE stuff, but that's mostly older generation / legacy (low end). Not sure how the 10GbE ones are regarded.
I talked to Syba USA two months ago and they said end of Q1, so I didn't pursue the idea of bringing 500 into the US and selling them. I still might. Do you know any good platforms for this sort of thing?
The other reason I didn't pursue this is that the Realtek-based 2.5 Gbps adapters are out https://www.centralpoint.nl/kabeladapters-verloopstukjes/clu... (USB A version: https://www.centralpoint.nl/kabeladapters-verloopstukjes/clu...) and I wasn't sure whether people would care enough to jump to 5 Gbps.
Unfortunately, I don't know 500 people who'd want one and I think shipping from the US to anywhere not-US (say, Australia) would be prohibitively expensive.
I wouldn't call them solid - at least the X710. The net is full of bad experiences regarding them. They're VMware certified but are apparently really unstable on VMware; I have no personal experience with that platform. On Windows Hyper-V hosts I had the NICs repeatedly go into "disconnected" status and individual ports would straight up suddenly stop working. On Linux KVM hosts that didn't occur, at least for me.
Supposedly upgrading the firmware to a recent-ish release fixes it - I haven't had it occur since. That's understandable. What's not understandable is that the NIC was released in 2014 and, according to the net, the issue was only resolved in around 2018.
Broadcom comparatively has crap drivers and decent silicon, meaning your cable modem works fine (with no bufferbloat or jitter issues), but good luck with that random WiFi chipset on Linux :P
For instance my biz dev guy thought that Intel's graph processing toolkit based on Hadoop would be the bee's knees and I didn't have to look at it to know that it was going to be something a junior dev knocked out in two weeks that moves about 20x more data to get the same result as what I knocked out in two days.
NVIDIA, on the other hand, impresses me with drivers and release engineering. Once I learned how to bypass their GUI installer I came to appreciate what a good job they do.
(They gotta have that GUI installer otherwise some dummy with a Radeon card or Intel Integrated 'Graphics' will post on their forum about how the drivers don't work.)
Maybe they need a GUI, but it doesn’t need to be such a bloated monstrosity.
well imho, the _real_ reason why wimax failed had more to do with lack of compatibility with existing (at that time) 3g standards than anything else, and this was despite the fact that lte was delayed! operators had already spent lots of money deploying 3g, and were wary (genuinely so) of sinking money into something brand new when lte was just around the corner...
also, imho (once again), technological challenges (as you pointed out above) are _rarely_ an issue (if ever) in determining the marketplace success / failure...
There's a WiMAX brand selling "WiMAX 2+" but that is just LTE.
I actually still have a legacy WiMAX device and plan active, since it was the last plan with true unlimited and zero "fair use". It also has a usage scale, so in months I don't use it, it's only 300 yen. So it's a good backup device. I've loaned it out to people who had to spend a week in the hospital (no WiFi) and they've burned through 30 GB streaming video with no issues. They send me letters every couple of months urging me to upgrade to a new plan, all of which cost 4000/mo and have fair-use data caps.
On the other hand, I've been bitten by Broadcom WiFi chipsets too much recently -- two different laptops with different Broadcom chips having a variety of different connectivity problems. One of them would spontaneously drop the WiFi connection when doing a TCP streaming workload (downloading a large file via HTTP/HTTPS). Admittedly this was probably some kind of driver issue, but I wasn't excited about the prospect of using an out-of-tree driver on Linux to solve the problem (and that didn't help my Windows install either). I swapped the mini-PCIe boards out with some Intel wireless cards and they've been running perfectly since.
But of course, if the rest of the business suffers, they may have to make cuts all over the place. I just hope the WiFi stuff doesn't end up on the chopping block or the competition improves.
But then I deployed lots of boxes with WiFi, that had to work in remote locations where others depended on it. I'm ashamed to say I just shipped the first couple with the Intel cards provided by manufacturer.
But after getting a lot of complaints I actually tested it. Which is to say, I purchased the 10 makes and models of WiFi mPCIe cards I could find on eBay, collected as many random access points as I could and lots of laptops (over 50), put them all into a room, and tested the network to collapse.
Intel's cards were the most expensive, and also the worst performing. They collapsed at around 13 laptops. The best 802.11ac cards were Atheros (now owned by Qualcomm), and they were also the cheapest. Broadcom were about the middle, but I had the most driver problems with them.
My pre-conceived but untested notions were shattered.
Why would that be the case? Why would those functions be harder to fix in code/microcode than a CPU's functions?
Sure they had a couple bad years but now they have a talent pool deeper than any of the competitors so I'm still betting on them to turn this around and bounce back even stronger.
What does that even mean?
I can't reconcile that claim with PC Magazine's tests of the iPhone XS's mobile performance: https://www.pcmag.com/news/364116/iphone-xs-crushes-x-in-lte...
The PC Mag author, Sascha Segan, says repeatedly that the iPhone XS at worst only slightly trails the Galaxy Note 9 and other high-end phones with 2018 Qualcomm SoCs. To quote directly:
> Between the three 4x4 MIMO phones, you can see that in good signal conditions, the Qualcomm-powered Galaxy Note 9 and Google Pixel 2 still do a bit better than the iPhone XS Max. But as signal gets weaker, the XS Max really competes, showing that it's well tuned.
> Let's zoom in on very weak signal results. At signal levels below -120dBm (where all phones flicker between zero and one bar of reception) the XS Max is competitive with the Qualcomm phones and far superior to the iPhone X, although the Qualcomm phones can eke out a little bit more from very weak signals.
We could say this is all Apple tuning the phone's antenna array and materials, but I find it extremely unlikely that would have compensated for "5 years" of Intel lag - back then LTE was several times slower in theory and practice.
We could also say this is just Apple giving Intel all the secrets of modems that Intel couldn't figure out themselves. That could be more plausible, but again I'm doubtful, since Apple would have little incentive to hoard those secrets and then loan them to Intel instead of using them to build their own chips. Unless of course Apple stole those secrets from Qualcomm or someone else...?
Why didn't you just reply to their comment?
I think Intel failed to realize that they had made the right call with the iPhone: their very culture isn't about being innovative, but about providing microcode flexibility at high instructions/watt. They had a chance to define servers, and still do. They should have been all over the whole Spectre/Meltdown/timing security issue and owned creating a secure server chip. Instead, they've frittered away so many options that they never had a chance to win.
My two cents.
CPU security is actually one of the few topics Intel is extremely good at (don't trust the RISC/CISC flamewar), and it could define the way cloud providers are built.
Given that the settlement came in the middle of court arguments unbeknownst to the lawyers, I'm thinking the latter.
Intel has a reputation for completely ripping up departments once they're 'refocusing' but they really only do that after the writing has been on the wall a while. I will be amazed if it slowly becomes clear their 5G backhaul business isn't going the same way though.
Apple has been poaching Qualcomm's top radio engineers for years... for work in their Chinese R&D centre (no non-compete enforcement there).
My speculation is maybe their talks with Huawei were more fruitful than known to the public.
Who will be your first option as a supplier of 5G PHY chips, other than the standard's original authors?
True story: I was born in an oppressive communist dictatorship and escaped with my family to freedom and democracy. It's very hard to watch the vanguard of democratic capitalism become 99% communist, with a couple guys in California rounding the corners and painting the boxes...
That’s a big advantage that Apple has had — their tighter product lines allow them to commit to a large number of parts for long periods of time.
The Intel reps mostly talk about storage these days. Quite a departure from the old days of high margin chips.
Companies have reached a global patent license agreement and *a chipset supply agreement*
Coincidence? I think not!
Edit: apparently the very wise HN users would rather downvote a call for evidence and upvote a feel-good conspiracy theory. Stay classy, HN.
You could have taken a few minutes to scan comments yourself and find this information. Or is not classy of me to point that out?
I have been here for years and I have decent karma, I think.
That leaves basically just Qualcomm and Mediatek (on the lower end) left. Huawei and Samsung have something too. Apple and Samsung will have to make their own modems now, if they don't want to be dependent on Qualcomm.
This is certainly not good for competition.
Article five months ago: https://semiaccurate.com/2018/11/12/intel-tries-to-pretend-t...
I remember Broadcom thought, "no problem, we churned out a WiFi solution with 50 people, so how hard can this cellular thing be for us?"... Well, they crashed and burned.
You need massive teams of people who know what they are doing to build cellular chipsets with the necessary software stack.
Seems like Intel’s future is bleak, and the company may have to be broken into pieces.
As for ARM CPUs seriously moving into the PC space, who knows. It hasn't happened yet, but there's an ever-increasing amount of smoke around the likelihood of ARM-based Macs. It's silly to think that in five years Intel will be a destitute, hollow shell of its former self -- but it's not silly to think that ARM-based PCs are going to not only exist, but be both common and seriously competitive.
2. The idea isn't to emulate OSX apps. It's to repurpose them to run on ARM. And they've been doing this with all of the App Store apps for years:
...but, Intel mentioned the IP they still retain in the 5G space. What's the over/under on Apple licensing Intel's IP and bringing modem dev in house (keeping in mind that would take a couple years at best to pay off) vs. making a best-case scenario of a working relationship with Qualcomm and using their tech?
Apple will have purchased IP and maybe entire teams from Intel.
Surely it can't be a coincidence. Though the weather thing might not be either, but if so this was a very well coordinated response.
> "...and Other Data-Centric Opportunities"
What does Data-Centric mean? Specifically, Data-Centric platforms that they keep referencing?
Not sure if that is still true after Krzanich out.
Also enjoyed how their OpenGL drivers used to lie about hardware support, and how Intel used to talk about being open-source friendly, yet the GPA analyser was DirectX-only for several years.
NVidia has nothing to gain by becoming part of Intel.
Who said they did? I'm saying Intel could gain by acquiring Nvidia and focusing their engineers on CPUs and chipsets.
I think they are focused on that. They've been trying to bring their 10nm process to market for years. It's now 3 years past their initial projections. Moore's exponential curve is certainly dead, the big question now is whether performance will improve linearly or something worse.
Agreed, but none of the technologies have reached the marketplace, and it's entirely unclear how long it will take for them to develop. If it takes decades, any exponential curve is toast.
I would not call that an exponential, it's a step function. It's entirely possible that human innovation over time will be look more like a series of step functions than a continuous exponential. The problem with step functions, of course, is that you can't easily predict future growth based on past growth, as you can with an exponential.
E.g., when quantum computing comes, it will surely be significant, but if it takes 10, 100, or 1000 years to make a breakthrough, it's not at all clear that the timing will happen to coincide with the exponential that we have been on for the last 50 years.
> "The company will continue to meet current customer commitments for its existing 4G smartphone modem product line, but does not expect to launch 5G modem products in the smartphone space, including those originally planned for launches in 2020."