
Intel couldn't make WiMax work right (too power hungry and too slow), their attempts to build an LTE chip have fared worse than Samsung's and Qualcomm's, leaving them a full five years behind their competitors, and their cable modem chipsets (Puma 6/7, used in the Xfinity converged gateway) are fatally flawed: DoSable with a few Kbps of traffic, with horrible bufferbloat on top.

At this point I think Intel's entire advantage over the past two decades was being on the bleeding edge of silicon processes, paired with a middling silicon design team and good firmware devs that could patch most flaws in microcode.

The process lead has disappeared, and when working with RF frontends (LTE, WiMax, cable modems, WiFi) their ability to cover up implementation errors in the driver is limited.

Intel's prospects for the next few years look dim in this context, given the tens of billions wasted on the aforementioned forays into radio chipsets, and the declining revenue of their legacy CPU market paired with serious manufacturing constraints that have hamstrung their supply chain.




Intel could not resist bufferbloat.

I have a book about UPnP written by some Intel engineers, and it describes a bastardization of HTTP so insane that there is no way I would trust the company to embed an HTTP server inside its Management Engine. With an attitude like that, getting security right just wouldn't be possible.

Maybe INTC went up tonight because investors know now that Intel will stop wasting money on ads proclaiming themselves to be the leader in the 5G "race". (How come nobody cares about the finish line?)


What is bufferbloat?


When your buffers are too big for the connection, they induce latency: your computer sends 100 Mbps of traffic, yet the modem is capped at, say, 5 Mbps. It's better to drop that traffic, since TCP and UDP applications will throttle themselves, rather than let 500 ms to 1 second of latency build up by holding the data in a buffer, resulting in a jerky user experience.
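To make the induced latency concrete, here's a back-of-the-envelope sketch (the buffer size and link rate are invented illustrative numbers, not measurements of any particular modem):

    # Queuing delay induced by an oversized FIFO buffer.
    # All numbers are illustrative.
    buffer_bytes = 640 * 1024      # 640 KB of buffering in the modem
    link_rate_bps = 5_000_000      # 5 Mbps uplink

    # A full buffer drains at the link rate, so a packet arriving at
    # the tail waits behind everything already queued:
    delay_ms = buffer_bytes * 8 / link_rate_bps * 1000
    print(f"induced latency with a full buffer: {delay_ms:.0f} ms")
    # -> ~1049 ms, right in the 500 ms to 1 s range mentioned above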

Edit: Most Comcast/Xfinity modems and converged gateways are Intel based and have this and other issues, pure garbage devices.


This is the main reference I used when shopping for a cable modem a couple months ago:

https://badmodems.com/Forum/viewtopic.php?t=65000

You'd have to scroll down a bit on the page (I'm copying the factual data in case the remote link goes down at a later date).

The bad news is that a lookup table is literally required to know what chipset is in use inside of the device - just like with all of the WiFi adapters :(

(PS, this table looks like garbage, but many users on mobile will cry if I prefix with spaces to make it look OK... I'd really rather every newline were a <br> element.)

Motorola/Arris Modems:
- Motorola SB 6121 4x4 (Intel Puma 5)
- Motorola 6180 8x4 (Intel Puma 5)
- Arris SB 6183 16x4 and Motorola MB7420 16x4 (both Broadcom)

NetGear Modems:
- NetGear CM1100 32x8 (Broadcom)
- NetGear CM1000 32x8 (Broadcom)
- NetGear Orbi CBK40 32x8 (Intel Puma 7). Note: I tested this model and was told that the modem built into the Orbi does not have the same issues as other Puma 6/7 modems. I haven't seen any issues with it since using it. The Orbi modem is based on NetGear's CM700.

TP-Link Modems:
- TP-Link TC-7610 8x4 (Broadcom)

Routers that work with zero issues with the above cable modems in my current collection:
- Asus - RT-AC66U and GT-AC5300 (OEM and Merlin FW)
- D-Link - many router models tested, including COVR models
- Linksys - WRT1900AC v1 and WRT32x v1
- NetDuma - R1, current firmware version (1.03.6i)
- Netgear - Orbi CBK40, R7800, XR450 and XR500

Forum User Modem and Router Experiences:
- Arris SB 6141 8x4 (Intel Puma 5) with D-Link DIR-890L and ASUS RT-AC5300
- Arris SB 6141 8x4 (Intel Puma 5) with Asus RT-AC66U
- Arris SB 6183 16x4 (Broadcom) with Linksys WRT1900ACM, WRT32x and NetGear XR500
- Arris SB 6183 16x4 (Broadcom) with NetGear XR500
- Cisco DPQ3212 (Broadcom) with Asus RT-AC66r, D-Link DGL-4500, NetDuma R1 and NetGear R7000
- Motorola MB 7220 (Broadcom) with Asus RT-AC66r, D-Link DGL-4500, NetDuma R1 and NetGear R7000
- TP-Link TC-7610 8x4 (Broadcom) with NetDuma R1


> (I'm copying the factual data in case the remote link goes down at a later date).

That's what the Internet Archive is for! The URL above has now been archived [0].

[0]: https://web.archive.org/web/20190417095456/https://badmodems...


Now we have a second reference in case, god forbid, something happens to the Internet Archive. Something happening is always possible!


Put your (cable/DSL/fiber) 'modem' [it is a router] in bridge mode and be done with it. Anything would work then: Ubiquiti gear (which I use), but also different (more open source) stuff like the Turris Omnia, Turris Mox, or a lovely PC Engines APU2.


The fault isn't just with cheapo Intel edge hardware - a lot of ISP infrastructure is built with the old telco mentality of "we never drop data". Which, as you correctly point out, is precisely the wrong thing to do for an overcongested IP network.

EDIT: And the problem isn't just the resulting end-user latency. TCP's congestion control mechanisms (i.e. the ones that let the endpoints push as much traffic as the network can bear and no more) rely on quick feedback from the network when they push too much traffic. The traditional, quickest, and most widely implemented feedback method is the packet drop - when drops are replaced with wildly varying latency, it's hard to set a clear time limit for "this packet was dropped", and Long-Fat-Network detection is a lot harder.


So, with TCP, the speed should depend on the bandwidth-delay product (which depends on the full peer-to-peer round-trip latency, because the sender needs ACKs coming in faster than it empties the window; otherwise the sending peer just waits).
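A quick worked example of that relationship (the link numbers are invented for illustration):

    # Bandwidth-delay product: how many bytes must be in flight
    # (unacknowledged) to keep a link busy. Invented numbers.
    bandwidth_bps = 100_000_000    # 100 Mbps path
    rtt_s = 0.040                  # 40 ms round trip

    bdp_bytes = bandwidth_bps / 8 * rtt_s
    print(f"BDP: {bdp_bytes / 1024:.0f} KiB")          # -> 488 KiB

    # With a window smaller than the BDP, the sender stalls waiting
    # for ACKs and can never fill the pipe:
    window_bytes = 64 * 1024
    ceiling_bps = window_bytes * 8 / rtt_s
    print(f"64 KiB window caps at {ceiling_bps / 1e6:.1f} Mbps")  # ~13.1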

Whereas most UDP applications are constant rate, with some kind of control channel.

Bufferbloat should not matter for your home connection. (Unless it is constantly in use by more than one client.)

However, when congestion occurs, the data you sent that sits in these buffers may already be stale and irrelevant, and there's no way to invalidate that cache in the middleboxes. This leads to worse performance, because the pipes stay clogged with stale data exactly when they're full, which prevents them from unclogging quickly. The result is a jerk in TCP, which scales back more than it would have if it didn't have to wait for the network to transmit the stale data.


> Bufferbloat should not matter for your home connection. (Unless it is constantly in use by more than one client.)

That is wrong. A single client can easily saturate the connection (e.g. while downloading a software update or uploading a photo you just took to the cloud). Once the buffers are full, all other simultaneous connections suffer from a multi-second delay.

The result is that the internet becomes unusably slow as soon as you start uploading a file.


You can see this effect by going to fast.com.

Using my smartphone, it induces and measures > 700ms latency on my cable modem connection. That’s worse than old-fashioned high-orbit satellite internet!


I'd encourage you to get a non-Intel modem


The problem with bufferbloat is not necessarily excess retransmissions or stale data (although that does happen); it is primarily that delay increases significantly in general, and that delay in competing or intermittently active streams is highly variable.

Traditional TCP congestion control in an environment where buffers are oversized will keep expanding the congestion window until it covers the whole buffer or the advertised receive window, even if the buffer holds several seconds of packets. There may be some delay-based retransmission, but traditional stacks will also adapt and assume the network changed and the peer really is 8 seconds away.


I have a 4G modem. Whenever I watch a video and skip forward a bunch of times, the connection hangs and I have to wait for about a minute before it resumes normal operation.

Is this bufferbloat? I guess what happens is that a bunch of packets get queued up and I have to wait until all of them are delivered?


Yes, that sort of jerky behavior is symptomatic of bufferbloat. Multiple 4G and 5G devices have now been measured as having up to 1.6 seconds of buffering in them. They are terribly bloated. It was my hope that the algorithms we used to fix wifi ( https://www.usenix.org/system/files/conference/atc17/atc17-h... ) - where we cut latency under load by 25x and sped up performance with a slow station present by 2.5x - would begin to be applied against the bufferbloat problem there. Recently Google published how much the fq_codel and ATF algorithms improved their wifi stack, here:

http://flent-newark.bufferbloat.net/~d/Airtime%20based%20que...

Ericsson, at least, published a paper showing they recognized the problem: https://www.ericsson.com/en/ericsson-technology-review/archi...

and I do hope that shows up in something; however, the chipsets on the handsets themselves also need rational buffer management.


That's probably something else. The server rate limits your client, or the ISP rate limits due to too many bursts, or the client needs to buffer more of the video.

To rule out those cases you'd need to watch the network traffic with something like Wireshark and look at retransmissions. If retransmissions suddenly shoot up and packets then start to trickle in, but very slowly, that could be bufferbloat.

But the 1 minute seems too long.


The whole connection hangs - it's not the server or buffering and I doubt it's the ISP.

Reading more about it, you are correct about 1min being too long, therefore it's probably not (just) bufferbloat.


Probably not. It's just crappy software.


Any cable modem brands known to not use bufferbloat-ing NICs?


The DOCSIS 3.1 standard introduced a good but not great Active Queue Management scheme called PIE. But upgrading your modem only helps with traffic you're sending; your ISP needs to upgrade their equipment to manage the buffers at their end of the bottleneck in order to prevent your downloads from causing excessive induced latency.


The bufferbloat project introduced a great (IMHO) fq + AQM scheme called "cake", which smokes the DOCSIS 3.1 pie in every way, especially with its new DOCSIS shaper mode in place. It's readily available in a ton of home routers now, notably OpenWrt, which took it up 3 years ago. It's also in the Linux mainline as of 4.19. The (first of several) papers on it is here: https://arxiv.org/abs/1804.07617

I hope to have a document comparing it to DOCSIS 3.1 PIE at some point in the next few months. In the meantime, I hope more people (especially ISPs, in their default gear) give cake a try! It's open source, like everything else we do at bufferbloat.net and teklibre.
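For the curious, a minimal sketch of turning it on, with loudly labeled assumptions: a Linux router with sch_cake available, a WAN interface named "eth0", and a roughly 100 Mbps DOCSIS downlink (run as root):

    import subprocess

    # Minimal sketch, assuming a Linux router with sch_cake (mainline
    # as of 4.19) and a WAN interface named "eth0". Shaping a bit
    # below the real line rate keeps the queue here, where cake
    # manages it, instead of in the modem's dumb buffer.
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", "eth0", "root",
         "cake", "bandwidth", "90mbit",  # ~90% of a 100 Mbps link
         "docsis"],                      # DOCSIS framing overhead preset
        check=True,
    )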


Use a decent router on your side and configure it to rate limit slightly below the modem's limits. This avoids ever building a queue in their boxes. You can run a ping while tweaking your router's rate limit settings to find the point where it is just about to queue but not quite, to optimize both throughput and latency.
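A rough sketch of that tuning loop (assumes an iputils-style `ping` on PATH; the host and sample count are arbitrary choices, not recommendations):

    import re
    import subprocess

    # Measure latency while the link is saturated (e.g. by a big
    # upload), then lower the router's shaper until this number
    # stops climbing.
    def median_rtt_ms(host="1.1.1.1", samples=10):
        times = []
        for _ in range(samples):
            out = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                                 capture_output=True, text=True).stdout
            m = re.search(r"time=([\d.]+)", out)
            if m:
                times.append(float(m.group(1)))
        times.sort()
        return times[len(times) // 2] if times else None

    print("median RTT under load:", median_rtt_ms(), "ms")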


Depending on your speed, you may need a bit more than just a decent router. Many routers can't hardware-accelerate QoS traffic, which is needed to limit the speed.

My Netgear R7000 can't handle my 400 Mbps connection using QoS throttling. I'll probably need at least a mid-range Ubiquiti router to handle it.


Ubiquiti routers won't help you; they're even more reliant on hardware acceleration than typical consumer brands, and nobody has put the best modern AQM algorithms into silicon yet. What you really need is a CPU fast enough to perform traffic shaping and AQM in software, which ironically means x86 and Intel are the safest choices.


Well, bufferbloat is at its worst on slow connections (<100 Mbit), and $50 worth of router can fix it there in software.


Only if the firmware implements the algorithms. OpenWRT is your best bet for this: I have it running on a TL-WDR3600 quite well.


> Any cable modem brands known to not use bufferbloat-ing NICs?

Avoid modems with Intel, specifically the various "Puma" chipsets. Best to double-check the spec sheet on whatever you buy.

The main alternative seems to be Broadcom-based modems: TP-Link TC7650 DOCSIS 3.0 modem and Technicolor TC4400 DOCSIS 3.1 modem (of which there are a few revisions now).


A $45 router is enough to de-bufferbloat connections up to several hundred megabits, and past that bufferbloat is less of a concern (in part because it's difficult to saturate).


My $120+ Netgear R7000 can't quite handle my 400 Mbps connection when QoS filtering is turned on, if anyone wants a reference.


There are cheap and good routers.

The $69 MikroTik hAP ac2 will easily push 1 Gbps+ with QoS rules - https://mikrotik.com/product/hap_ac2#fndtn-testresults (it's a bit more tricky to set up, and you need to make sure you don't expose the interface to the internet).


It's not the hardware that's at fault, it's the software and/or configuration. See here: https://news.ycombinator.com/item?id=17448022


From https://en.wikipedia.org/wiki/Bufferbloat

_Some communications equipment manufacturers designed unnecessarily large buffers into some of their network products. In such equipment, bufferbloat occurs when a network link becomes congested, causing packets to become queued for long periods in these oversized buffers. In a first-in first-out queuing system, overly large buffers result in longer queues and higher latency, and do not improve network throughput._

I hope I get this right, please correct if needed: So basically Intel's chipsets were creating what looked like a fat network pipe that accepted packets from the host OS really fast, but was in fact just a big buffer with a garden hose connecting it to the network. The result is that your applications write these fast bursts and misjudge transmission timing, causing timing problems in media streams like an IP call, leading to choppy audio and delay. The packets flow in fast and quickly back up, and the IP stack along with your application now have to wait (edit: I believe the proper thing to say is that the packets should be dropped, but the big buffer just holds them, keeping them "alive in the mind" of the IP stack. The proper thing to do is reject them and not hoard them?). The buffer empties erratically as the network bandwidth varies, and might not accept more packets until n packets have been transmitted. Then the process repeats as the IP stack rams another load into the buffer and, again, log jam.

A small buffer fills fast and will allow the software to "track" the sporadic bandwidth availability of crowded wireless networks. At that point the transmission rate becomes more even and predictable leading to accurate timing. That's important for judging the bitrate needed for that particular connection so packets arrive at the destination fast enough.

Bottom line is don't fool upstream connections into thinking that you're able to transmit data faster than you actually can.
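A toy model of that log jam, if it helps (all numbers invented): a burst into a bloated FIFO versus a small buffer that drops early:

    # Toy model of the "fat pipe that's really a big buffer plus a
    # garden hose" effect. All numbers are invented for illustration.
    LINK_BPS = 5_000_000     # the garden hose: 5 Mbps
    PACKET = 1500            # bytes per packet
    BURST = 1000             # packets the host blasts in at line rate

    def last_packet_wait(buffer_pkts):
        accepted = min(BURST, buffer_pkts)   # overflow gets dropped
        wait_s = accepted * PACKET * 8 / LINK_BPS
        return BURST - accepted, wait_s

    for buf in (1000, 50):                   # bloated vs. small buffer
        dropped, wait = last_packet_wait(buf)
        print(f"buffer={buf:4} pkts: {dropped:3} dropped, "
              f"last packet waits {wait * 1000:4.0f} ms")
    # bloated: 0 dropped but ~2400 ms of queue; small: early drops,
    # ~120 ms, and the sender's stack throttles back promptly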


It's also a problem because a few protocols you may use from time to time (like, say, TCP) rely on packet drops to detect available network throughput. TCP's basic logic is to push more and more traffic until packets start to drop, and then back off until they stop dropping. It keeps doing this in a continuous cycle so that it can detect changes in available throughput. If the feedback is delayed, this detection strategy results in wild swings in the amount of traffic the TCP stack tries to push through, usually with little relation to the actual network throughput.

Buffering is layer 4's job. Do it on layer 2[a] and the whole stack gets wonky.

[a] Except on very small scales in certain pathological(ly unreliable) cases like Wi-Fi and cellular.
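A toy version of that probing cycle (constants invented for illustration) shows why prompt drop feedback matters:

    # Toy AIMD loop: additive increase until a drop, multiplicative
    # decrease after. Constants are invented.
    CAPACITY = 100.0     # packets per RTT the path can really carry

    def aimd(rtts=40):
        cwnd, history = 1.0, []
        for _ in range(rtts):
            if cwnd > CAPACITY:   # loss detected promptly
                cwnd /= 2         # back off
            else:
                cwnd += 1         # probe for more bandwidth
            history.append(round(cwnd))
        return history

    print(aimd())
    # cwnd saw-tooths around the real capacity. If a bloated buffer
    # delays the loss signal by many RTTs, cwnd keeps climbing far
    # past capacity first, producing the wild swings described above.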


Is there no way to limit how much of the buffer is used via some config?


Usually not. There may be an undocumented switch somewhere in the firmware that a good driver could tweak, depending on the exact hardware. But end-user termination boxes, whether delivered by the ISP or purchased by the end user, are built as cheaply as possible and ship with whatever under-the-hood software configuration the manufacturer thought was a good idea. Margins are just too narrow to pay good engineers to do the testing and digging to fix performance issues. (I used to work at a company that sold high-margin enterprise edge equipment, and even there we were hard-pressed to get the buggy drivers and firmware working in even-slightly-non-standard configurations. Though 802.11 was most of the problem there.)

And in the case of telco equipment, that's a tradition-minded and misguided conscious policy decision.


Your analysis is correct.

Smaller buffers are in general better. However, advanced AQM algorithms and fair queueing make for an even better network experience. As one of the authors of fq_codel (RFC 8290), it has generally been my hope to see it widely deployed. It is, actually - it's essentially the default qdisc in Linux now, and it is widely used in quite a few QoS (SQM) systems today. The hard part nowadays (since it's in almost every home router) is convincing people (and ISPs) to do the right measurement and turn it on.

https://www.bufferbloat.net/projects/bloat/wiki/What_can_I_d...



It’s when you’re gaming and your ping jumps to 500 because someone is watching Netflix. It’s one of the main flaws in the currently deployed internet for end users. There are a lot of novel solutions (CoDel and so on), but they're still not widely deployed.

More generally it refers to any hardware/system with large buffers - needed to handle large throughput, but which can lead to poor latency due to head-of-line blocking.


Resisting bufferbloat isn't a tricky problem. A few simple config changes are usually all that's needed to resolve it.

Simply use a different queueing strategy or just smaller buffers.
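For example, on a Linux box one such change is a one-liner (a sketch; writing requires root):

    # Sketch: check, then change, the default queueing discipline on
    # a Linux machine. Equivalent to
    # `sysctl -w net.core.default_qdisc=fq_codel`.
    PATH = "/proc/sys/net/core/default_qdisc"

    with open(PATH) as f:
        print("current default qdisc:", f.read().strip())

    with open(PATH, "w") as f:   # needs root
        f.write("fq_codel")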


This comment made me want to investigate Intel's financials and see whether this qualitative story matches their quantitative data, but it seems to be the opposite:

Revenue grew last quarter almost 20% YoY to ~$20B

Net income has been up 40%-80% each quarter YoY, to ~$5B

While I do believe what was said above, it seems that whatever choices they are making that supposedly are resulting in a failing company from an engineering perspective, are resulting in a successful company from a financial perspective.


Yeah, revenue of $80B/year can cover a lot of failed growth projects, even multibillion dollar ones.

I think the only point that I disagree with in the OP is the idea that there is declining revenue in the "legacy CPU" market. It seems like there is still a long trajectory of slow growth there at worst.


Intel is strip-mining its current customers by jacking up newer server chip prices and discontinuing their older chips (due to 10nm fabs not being ready). Intel is producing few low-end chips (i3/i5), which has in part caused the current RAM glut.

These large customers that have caused a temporary surge in profit are in the process of migrating away from Intel.


> While I do believe what was said above, it seems that whatever choices they are making that supposedly are resulting in a failing company from an engineering perspective, are resulting in a successful company from a financial perspective.

Finances for big brands can be lagging indicators.


Also note that LTE and cable modems both came to Intel as a result of acquisition (LTE from Infineon and cable+DSL from Infineon via their Lantiq spinoff).

Both times they have tried to put an x86 core around the modem part with varying success.


There is a good chance the top talent at both jumped ship either before the acquisition, or immediately after, leaving the dregs and predictable results.


Reducing costs through 'synergy' is easier to quantify and link to bonuses and annual objectives than maintaining technical quality.


They've also picked up the ex-Motorola RFIC team in Phoenix. I worked with both of those teams, and they're both sharp, but integrating two teams nine time zones apart can't have been easy. I'm sure the integration was tough, and it obviously went poorly.


To me this is kind of strange, I thought Intel had a good reputation as far as Ethernet and Wireless drivers were concerned. Certainly much better than (old) Broadcom, Realtek, Ralink, ST and the like.


On Ethernet, Intel is known for their no-frills, stable line of NICs. That said, they have not been a leader since the jump to 10 Gbps. Their 10 Gbps line eventually came along and is solid, but it was a few years behind everyone else. That same story seems to repeat itself at every new Ethernet standard over the past decade.

Come to think of it, the last decade has been really bad for Intel. They no longer have a node advantage. They no longer have a performance advantage in any market space I can think of, outside of frequency-hungry, low-thread-count applications (games).

Their WiFi chips are good, but second tier. Their modem chips are third tier. Their node is mostly on par with the competition, for now. Their CPUs are trading blows with AMD's. Their Ethernet chips are a generation behind. Optane is a bright spot, but we'll see how they squander that.

The next big diversification play by Intel is GPUs, I have no idea how that will pan out.


Optane will be squandered by limited CPU support and slim software support :c

Optane is a neat idea, but the severe change in software architecture required, combined with only select CPUs even supporting it, will limit uptake outside FAANGs or organizations with really specialized needs.


Databases will love Optane. There have been companies showing how much they are willing to spend on database hardware and software for ages. I'm not sure that will change any time soon.


Not so sure about this. The world is increasingly moving to distributed scale-out databases. Once you go that way, consensus algorithms and RPC costs dwarf disk I/O speed.


Not everyone is building apps which need to be "web scale". Optane has the potential to significantly raise the performance ceiling of a single-master database. I bet there are a lot of companies who will happily drop six figures on Optane systems if it saves them the complexity of managing a distributed database.


The niche where your present requirements are big enough to benefit from Optane and your future requirements are small enough to not need to go distributed is pretty narrow.


Finance, healthcare, and enterprise systems.

I'm not sure it's really a niche.


I've worked for a company that was willing to spend that kind of money on monolithic database servers. They were a top-100 website though, and this was the best part of a decade ago (and thus e.g. in the pre-SSD era).

They were also scrambling to move all their services away from use of that database in favour of a horizontally scaled system that could grow further.

The query rate that can be handled by a single conventional server is pretty monstrous these days. You'd have to be simultaneously a) at maybe top-50-website levels of load (I'm well aware that there's a lot more than websites out there, but at the same time there really aren't that many organizations working at that scale, much as there are many that think they are) and b) confident that you weren't going to grow much.


The real world is much bigger than websites/apps.


It is, and I acknowledged that. But it gives a sense of the scale involved. Just as there are very few websites/apps that need to handle, what, 2000 requests/second (and simultaneously don't intend to grow by more than a factor of two or so), systems that need that kind of performance in any other field are similarly rare.


Not really. It's also not necessarily about not needing to grow; it's that not everything can scale in the same manner as Netflix or Facebook.

Financial systems, especially trading platforms, need to ensure market fairness. They also need a single database at the end, because you can't have any conflicts in your orders, and the orders need to be executed in the order they came in across the entire system, not just a single instance.

This means that even when they do end up with some microservice-esque architecture for the front end, it still talks in the end to a single monolithic database cluster which is used to record and orchestrate everything.


That is indeed one case where a large single-node database makes a certain amount of sense (though it's not the only solution; you need a globally consistent answer for which orders match with which, but that doesn't have to mean a single database node. Looking at the transaction numbers I'd assume that e.g. the busiest books on NYSE must be multi-node systems just because of the transaction rates). But fundamentally there are what, 11 equity exchanges in the US total (and less than half of those are high-volume). And the market fairness requirements are very specific to one particular kind of finance; they're not something that would be needed in healthcare, general enterprise, or most financial applications. Like I said, niche.


That really depends on the workload.


You are correct.


Yep, exactly what I was getting at. If you have a giant database and a big budget, Optane is great. It could be really useful for smaller users too, but it's unlikely to become widely popular as long as the average developer won't have it available in their laptop or desktop.


Optane will be seen by the OS as a really big main memory, and that's it. So I don't get why there would be severe changes in software.


Optane is not durable enough to survive as main memory, hence the use cases treating it primarily as read memory, to avoid wearing out the cells with writes.

Take a look at how cagey Intel is acting about Optane: https://www.semiaccurate.com/2018/05/31/intel-dodges-every-q...

This is the same behaviour as with Intel's LTE chips, where promised features keep slipping (much to Apple's dismay).


The new DIMMs Intel is putting out have a RAM cache in front of the Optane memory that will absorb all the churn, plus a wear-leveling algorithm on the writeback side. It has big enough capacitors to put all its data away in the event of power loss.

https://www.storagereview.com/intel_optane_dc_persistent_mem...

All of which looks to solve the wear problem even if it means higher price and latency.


SemiAccurate's singing a different tune this year, now that Intel has stated that the Optane DC Persistent Memory modules are warrantied for 5 years regardless of workload. Intel's gotten write endurance up to sufficient levels for use as main memory, though if you use those DIMMs as memory rather than storage, then your DRAM will be used as a cache.


Re Ethernet, you're spot on with the 82599 and ixgbe. Cheap, plentiful, and really reasonable platform support. It's like the 2.6.18 kernel in that I expect it to be relevant for decades.

However, my impression from the sidelines was that Intel got sidetracked with 40G while everyone else, especially datacenter network fabric land, went toward 25/100G.


As long as Intel keeps fully supporting the open source Linux driver, I suspect there will always be demand for their GPUs (for people that don't need to game or do video production/etc).

Also, IIRC, for low-end devices and battery life, Intel GPUs play the nicest.


xf86-video-intel?? Distros have been defaulting to the modesetting kernel driver for years due to the instability caused by Intel's linux drivers.


They mean the Intel DRM kernel driver, which the modesetting X11 DDX (not a kernel driver) uses.


Could you fill out those tiers? Who has better wifi/ethernet chips than Intel atm?


WiFi: Tier 1 is unquestionably Qualcomm Atheros (surprise!).

Ethernet NIC/CNA rankings(IMHO):

1. Mellanox is the pack leader right now.

2. Chelsio is right up there with them, but not leading.

3. SolarFlare and Intel bring up the middle ground.

4. Everyone else (QLogic, Broadcom, etc)

Aquantia is an unknown for me. As long as they don't suck, they'll probably go in tier 3.


Not sure if I agree. The top-end Intel Wi-Fi cards 826X/926X have excellent performance compared to almost any other card I've used. Throw in Intel's excellent Linux support and it's hard to find a better option in the laptop space.


Interested in learning more. Could you provide some sources on why Qualcomm Atheros is "Tier 1"?


I don't have any, and I'm not going to dig some up to pretend I do.

I am speaking off the cuff as someone heavily involved and interested in RF in general and WiFi/LTE in particular.

From my anecdota (shame this isn't a word), Atheros chips have better SNR and higher symbol discrimination thanks to cleaner amps and better signal discrimination logic, and they tend to be at the forefront of newer RF techniques in the WiFi space. All this culminates in better throughput, latency, and spectrum utilization than anyone else.

It also helps that their support under Linux is far superior to most everything else, which helps in Router/AP/Client integration and testing.

I don't even like Qualcomm, but from my experience, you will almost always regret choosing someone else for anything but the most basic requirements.


In my experience Intel wifi, specifically in laptops, has been far and away the best wireless experience I've had on both Windows and Linux. I do not see how Intel's Linux support is second rate, Intel's Linux wifi team is very active and always has solid support for hardware before it is shipped.

A big frustration with Qualcomm wifi on Windows has been that they do not provide driver downloads to end users. If you are using a laptop that has been abandoned by the OEM and you have a wifi driver problem you have to hunt for the driver on sketchy 3rd party sites or just live with it. I have personally had to help several people find drivers because ancient Qualcomm drivers were causing bug checks on power state transitions.

What real-world experiences have you had with Intel wifi on Linux and Windows that make you believe it is second rate?


Just curious, why couldn’t you use your real account to ask this question?


This is my only account on this site.


Interesting pushback :)


Chelsio has 10GbE (and above) adapters with good reputation.

Mellanox has 40GbE (and above) adapters with good reputation.

Mellanox also have 10GbE stuff, but that's mostly older generation / legacy (low end). Not sure how the 10GbE ones are regarded.


What about Aquantia? Although I'm sad the USB 3.1 5gbit NICs never appeared.


That's not 100% correct. You can get them if you buy 500 or more. Product page: http://www.speeddragon.com/index.php?controller=Default&acti...

Available https://sybatech.en.alibaba.com/product/60793590161-80442320...

I talked to Syba USA two months ago and they said end of Q1, so I didn't pursue the idea of bringing 500 into the US and selling them. I still might. Do you know any good platforms for this sort of thing?

The other reason I didn't pursue this is because the Realtek-based 2.5 Gbps adapters are out https://www.centralpoint.nl/kabeladapters-verloopstukjes/clu... (USB A version: https://www.centralpoint.nl/kabeladapters-verloopstukjes/clu...) and I wasn't sure whether people would care enough to jump to 5 Gbps.



Nice!

Unfortunately, I don't know 500 people who'd want one and I think shipping from the US to anywhere not-US (say, Australia) would be prohibitively expensive.


Sorry, no idea as I've never heard of them. :)


Chelsio and Mellanox both have 100G with good reputations. 10G is not really something we should be comparing on anymore, since it's been out for over a decade.


> On Ethernet, Intel is known for their no-frills, stable line of NICs. That said, they have not been a leader since the jump to 10 Gbps. Their 10 Gbps line eventually came along and is solid ...

I wouldn't call them solid - at least not the X710. The net is full of bad experiences with them. They're VMware certified but are apparently really unstable on VMware; I have no personal experience with that platform. On Windows Hyper-V hosts I had the NICs repeatedly go into "disconnected" status, and individual ports would suddenly stop working. On Linux KVM hosts that didn't occur, at least for me.

Supposedly upgrading the firmware to a recent-ish release fixes it - I haven't had it occur since. That part is understandable. What's not understandable is that the NIC was released in 2014 and the issue was only resolved in something like 2018, according to the net.


I specifically called out Intel's software team as competent/good. They are obviously able to take buggy silicon and make it do impressive things, but when it comes to shaping and interpreting analog RF waves it seems this is outside their capability to tune much beyond what they've done.

Broadcom comparatively has crap drivers and decent silicon, meaning your cable modem works fine (with no bufferbloat or jitter issues), but good luck with that random WiFi chipset on Linux :P


My experience is that Intel software is crap.

For instance my biz dev guy thought that Intel's graph processing toolkit based on Hadoop would be the bee's knees and I didn't have to look at it to know that it was going to be something a junior dev knocked out in two weeks that moves about 20x more data to get the same result as what I knocked out in two days.

NVIDIA, on the other hand, impresses me with drivers and release engineering. Once I learned how to bypass their GUI installer I came to appreciate what a good job they do.

(They gotta have that GUI installer otherwise some dummy with a Radeon card or Intel Integrated 'Graphics' will post on their forum about how the drivers don't work.)


NVidia still equals NoVideo in my book; for a time it seemed like every other driver release would brick your card, and Linux support is abysmal. Wayland still isn't supported, despite AMD, Intel, and even no-name HDMI PHY manufacturers fully supporting it.


> They gotta have that GUI installer otherwise some dummy with a Radeon card or Intel Integrated 'Graphics' will post on their forum about how the drivers don't work.

Maybe they need a GUI, but it doesn’t need to be such a bloated monstrosity.


It's because they include game patches in the drivers.


NVIDIA is the worst video card for Linux, if you don't want to install their blobs.


Intel is faaaaaaar behind on NICs. They're just now releasing an ASIC capable of 100G, while their competitors have had one for many years and are now moving to 200G. I think they were hoping Omni-Path would take off.


RF is difficult.


Yep, mostly done by graybeards, whom Intel probably fired.


Using stack ranking to fire 10-15% of their employees every year selects for people with political talents. Aren't they now comprehensively failing in every area except microarchitectures, with the next one, Ice Lake, being held up by the 10 nm node debacle?


If they are using stack ranking they deserve to fail.


I wonder if all Intel's acquisitions have failed while homegrown efforts succeeded.


> Intel couldn't make WiMax work right...

well imho, the _real_ reason why WiMax failed had more to do with its lack of compatibility with the existing (at that time) 3G standards than anything else, and this was despite the fact that LTE was delayed! operators had already spent lots of money deploying 3G, and were wary (genuinely so) of sinking money into something brand new when LTE was just around the corner...

also, imho (once again), technological challenges (as you pointed out above) are _rarely_, if ever, the deciding factor in marketplace success or failure...


WiMAX is still used in Japan, mainly for "pocket wi-fi" gadgets which you can buy or rent (the latter mainly for visitors or tourists). They compete with LTE, but there's still a market because the two don't have completely overlapping feature sets w.r.t. coverage and bandwidth.


WiMAX is dead in Japan. You can't get new devices on it, half of the spectrum has been refarmed to LTE (leaving speeds in the single-digit Mbits) and the network is shutting down by next year.

There's a WiMAX brand selling "WiMAX 2+" but that is just LTE.


Thanks for the update. I checked the situation a few times over the last two or three years (the last time more than a year ago), and things are apparently changing (also re. the next post). At one point there certainly was WiMAX though; the coverage area was quite different.


They don't make it easy to understand what's going on, since they still have a toggle between "LTE" and "WiMAX" that you can switch. The former is fast metered MVNO data on the main carrier of the telco that owns them, and the latter is "unlimited" data on their own separate radio spectrum with different coverage. But the latter is still actually LTE.

I actually still have a legacy WiMAX device and plan active, since it was the last plan with true unlimited data and zero "fair use". It also has a usage-based scale, so in months I don't use it it's only 300 yen. So, a good backup device. I've loaned it out to people who had to spend a week in hospital (no wifi) and they've burned through 30 GB streaming video with no issues. They send me letters every couple of months urging me to upgrade to new plans that all cost 4000/mo and have fair-use data caps.


What the Japanese carriers sell as WiMAX 2 is apparently rebranded LTE.


I don't think they'll back out of RF tech entirely. The best WiFi chipsets I've used in recent years have been Intel-vended, and they're broadly used.

On the other hand, I've been bitten by Broadcom WiFi chipsets too much recently -- two different laptops with different Broadcom chips having a variety of different connectivity problems. One of them would spontaneously drop the WiFi connection when doing a TCP streaming workload (downloading a large file via HTTP/HTTPS). Admittedly this was probably some kind of driver issue, but I wasn't excited about the prospect of using an out-of-tree driver on Linux to solve the problem (and that didn't help my Windows install either). I swapped the mini-PCIe boards out with some Intel wireless cards and they've been running perfectly since.

But of course, if the rest of the business suffers, they may have to make cuts all over the place. I just hope the WiFi stuff doesn't end up on the chopping block or the competition improves.


I also chose Intel over other vendors. I don't know why. It is true Intel's stuff is better integrated into the WinTel ecosystem than the others, so it's probably true Intel gear gave less trouble on my laptop.

But then I deployed lots of boxes with WiFi, that had to work in remote locations where others depended on it. I'm ashamed to say I just shipped the first couple with the Intel cards provided by manufacturer.

But after getting a lot of complaints I actually tested it. Which is to say, I purchased all 10 makes and models of WiFi mPCI cards I could find on eBay, collected as many random access points as I could and lots of laptops (over 50), put them all into a room, and tested the network to collapse.

Intel's cards were the most expensive, and also the worst performing. They collapsed at around 13 laptops. The best 802.11ac cards were Atheros (now owned by Qualcomm), and they were also the cheapest. Broadcom was about the middle, but I had the most driver problems with them.

My pre-conceived but untested notions were shattered.


And in the fab area, they are being beaten by TSMC. It looks like they are falling behind on all fronts.


If they can ship 10nm in reasonable volume soon, they can get a bit of their process lead back, assuming their claims that it's better than the 7nm foundry processes are true.


> The process lead has disappeared, and when working with RF frontends (LTE, WiMax, cable modems, WiFi) their ability to cover up implementation errors in the driver is limited.

Why would that be the case? Why would those functions be harder to fix in code/microcode than a CPU's functions?


I don't think you should count intel down and out just yet, over the past few years they have been one of the few tech companies actually committed to diversity.

Sure they had a couple bad years but now they have a talent pool deeper than any of the competitors so I'm still betting on them to turn this around and bounce back even stronger.


"one of the few tech companies actually committed to diversity."

What does that even mean?


Increasing hiring of non-white non-male individuals to the employee pool. Not sure what it has to do with the technical discussion.


Perhaps the continuous executive push for cheaper international labor is having an impact on their core competence. Who could have known.


Intel's x86 business is not far behind



