There's a bug in Apple's EFI driver for BCM4331 cards present on a lot of older Macs which keeps the card enabled even after handing over control to the OS. A patch went into Linux 4.7 to reset the card in an early quirk, but I suspect other OSes can be taken over via WiFi on the affected machines:
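If you want to poke at a machine yourself from Linux, a rough sketch like the one below walks sysfs for a BCM4331 (PCI ID 14e4:4331) and reports whether bus mastering is still switched on before any driver has bound. Treating "left enabled" as "bus-master bit still set" is my reading of the problem, the sysfs paths are Linux-specific, and on a kernel that already carries the 4.7 quirk the card will have been reset before you get to look.

    #!/usr/bin/env python3
    # Sketch: find a BCM4331 (14e4:4331) in Linux sysfs and report whether the
    # firmware left bus mastering enabled. Assumes the standard sysfs layout.
    import glob

    for dev in glob.glob("/sys/bus/pci/devices/*"):
        try:
            vendor = open(dev + "/vendor").read().strip()
            device = open(dev + "/device").read().strip()
        except OSError:
            continue
        if (vendor, device) != ("0x14e4", "0x4331"):
            continue
        with open(dev + "/config", "rb") as f:
            cfg = f.read(6)
        command = int.from_bytes(cfg[4:6], "little")  # PCI command register
        bus_master = bool(command & 0x4)              # bit 2 = bus master enable
        print(f"{dev}: command=0x{command:04x}, bus mastering {'on' if bus_master else 'off'}")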
Porsche for instance isn't earning any money with their LMP1 race cars. They need them to sustain the illusion that they're a sports car company, despite the majority of their sales being SUVs (Cayenne, Macan) and sedans (Panamera).
That's what it feels like on the bleeding edge! Again, I wouldn't suggest being the first guy on the internet to try it.
If the Apple adapter doesn't work, there are a bunch of others on Amazon, although they are a bit more expensive today. Prices will come down. But if you have a TB2 plug on your laptop, you probably have a Mac anyway.
LEDE folks have started to dump their patches on kernel mailing lists, but they don't seem to be mainline-able as-is and the submitter is loath to rework them:
Perhaps there is more context, but from a narrow read: the linked email states a need, contains a patch that solves the problem, and offers to solve it another way if needed... that seems pretty mature?
The text literally says:
> I am not sure if this is the best way to remove the quirks from the build. Let me know if you prefer a different way of solving this.
I can speak from experience that some of the LEDE/OpenWRT core people are fairly toxic at times. John Crispin in particular is known to go autistic on you if you top-post on one of his mailing lists instead of bottom-post. He's a real laces-out kinda guy.
"obviously i am interested to get this upstream with the least amount of effort. I am quite aware though that some patches will need an overhaul to be applicable for upstream. its not really my call if it is enough to make this an enable patch and review the quirks enabled by it or if the code needs to be moved."
Upstreaming patches into the kernel requires that you're willing to spend time to rework them so that the result is maintainable. Saying from the start that you only want to do the least amount of work possible isn't helpful.
The majority of OpenWRT/LEDE patches are gross hacks that need a major rework or a completely different approach.
This is not the first patch of that kind.
BMW has a subsidiary in Ulm (BMW Car-IT) working on the next head unit. This will be designed and engineered in-house, at least to a large extent. Any opinion on that? The current head unit generation is apparently sourced from a 3rd party. I interviewed there once; it was almost funny how much they stressed that they're a software company, not a car company, kind of like self-hypnosis. (Disclosure: I didn't get the job, nor did I want it after seeing the situation on-site; I didn't fit into their culture.) (Fun fact: the company is full of ex-Nokians whom they apparently scooped up when the local Nokia subsidiary had layoffs.)
I know that Car IT has a large group there which originates from the Nokia office in Ulm. They also have lots of job postings on Stack Overflow Careers. But I don't really know how much they are doing in-house within that group and which parts will still be externally sourced. Even if they are now doing more parts of the head unit themselves, there will still be lots of other ECUs which are 100% external developments. For infotainment, BMW probably has the most in-house efforts going on. Audi went another route and founded e-solutions as a joint venture together with Elektrobit, which develops exclusively for Audi. Daimler is still mostly relying on external partners.
A bit off-topic. I recently had to endure an Azure evangelist telling me on literally every slide that they are now open (they had a big blue "open" rectangle in the upper-right corner). A moment later I got very, very frustrated because a feature I had to use didn't work with the Java library (https://github.com/Azure/azure-sdk-for-java/issues/465 open since February!), and the Node.js library and the REST interface lacked critical features, so I was unable to authenticate programmatically (I think). Since I am working on a Mac and don't have Windows, I couldn't use the C# library because it required DLLs. In the end I had to write a simple proxy server in C# in a text editor on my Mac, push it to an Azure server running Windows, compile it there, and run it, just to get the data to my program. It was really a horrible experience. Since then I just assume that somebody who really has to stress something has, in reality, big problems/deficits in dealing with it.
Huh? But you can use C# on a Mac... I should know, I just came home from doing that all day at work. .NET Core works great on Mac, and imports many DLLs just fine.
That being said, if it was quite an old DLL that was never compiled for PCL or netstandard, maybe not. There's still Mono, but, well, I can definitely understand the reluctance in this case, and would indeed choose a different approach in a different language myself if that were the only option left.
If I remember correctly, the DLL was what made the library non-portable. I might be misremembering, but some crucial part was not available, so I could not use it.
Yes, because the PCIe root complex in the CPU can only connect one other device besides the southbridge, and that's used for the Thunderbolt controller on the left-hand side. The second Thunderbolt controller is connected to the southbridge (as are all the other PCIe peripherals), so it doesn't have the same number of PCIe lanes available as the one directly connected to the root complex.
Apple could have solved this by connecting a PCIe switch to the root complex and attaching both Thunderbolt controllers below it, but that would have consumed additional energy. Alternatively they could have used a beefier CPU with more PCIe root ports, but I guess the ones available would have been too energy-hungry. Which kind of means this is Intel's fault for not providing a low-power chip with enough PCIe root ports on the CPU.
I'm wondering what the situation is like on the 15" version with discrete graphics. That would require 3 root ports directly on the CPU to drive both Thunderbolt controllers and the GPU at full speed; I assume that's indeed the case, since it's not mentioned in the document.
Another thing not mentioned in the document is that energy consumption will be suboptimal if devices are attached on both sides of the machine, because that prevents one of the Thunderbolt controllers from powering down. One should connect both devices on one side to improve battery life.
Edit: On Skylake the PCH is apparently optional; the functionality is mostly integrated into the CPU, so the limitation is really the number of lanes provided by the CPU, and that wasn't sufficient to connect both Thunderbolt controllers at 4x. The CPUs used in the 13" model all have 12 lanes, the ones in the 15" model have 16 lanes. So for the top-of-the-line model this could be 4x for each of the Thunderbolt controllers, 4x for the GPU, 2x for the SSD, 1x for WiFi, 1x for HD audio?
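If you want to double-check how things are actually wired on a given machine (under Linux, e.g. booted from a live image, since this relies on sysfs), something like the sketch below prints the negotiated link width and speed for every PCIe device; the attribute names are standard, but the exact output format of current_link_speed varies between kernel versions.

    #!/usr/bin/env python3
    # Sketch: print the negotiated PCIe link width/speed for every device that
    # exposes them in sysfs, e.g. to see whether a Thunderbolt controller came
    # up as x4 or something narrower.
    import glob, os

    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        width_path = os.path.join(dev, "current_link_width")
        speed_path = os.path.join(dev, "current_link_speed")
        if not (os.path.exists(width_path) and os.path.exists(speed_path)):
            continue
        width = open(width_path).read().strip()
        speed = open(speed_path).read().strip()
        vendor = open(os.path.join(dev, "vendor")).read().strip()
        device = open(os.path.join(dev, "device")).read().strip()
        print(f"{os.path.basename(dev)}  {vendor}:{device}  x{width} @ {speed}")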
Re: your edit, the PCIe lane configurations from the CPU aren't that flexible. There is no 4x+4x+4x+2x+1x+1x configuration.
4x Thunderbolt + 4x Thunderbolt + 8x GPU, with everything else on the PCH would make sense for the 15". Or maybe they connected the SSD (also 4x PCIe) to the CPU, and one of the Thunderbolt controllers to the PCH.
Hopefully they used the full set of CPU lanes. Most laptop manufacturers have a tendency to underutilize the CPU lanes and put things on the bandwidth-constrained PCH for some reason.
Yes, he did. As nsxwolf commented, the 2010 MacBook Pro had USB 2 ports that ran at different speeds. And I remember vaguely that that wasn’t the first time.
It’s just not your problem because, really, how many people are going to be inconvenienced by the right Thunderbolt ports being slightly slower than the left Thunderbolt ports?
I mean, I'm not sure how you can get 40 Gbit/s with 4x PCIe 3.0 in the first place; every quote I find says 32 Gbit/s. Maybe there is that much overhead in Thunderbolt.
But surely 4x40 Gbit/s would require 16 lanes at least. I don't think Intel makes any consumer CPUs with more than 16.
My understanding is that that's the total bandwidth across all protocols. So the mix of DisplayPort and PCIe 3.0 can't exceed 40 Gbit/s. The 4x PCIe 3.0 on its own is 32 Gbit/s, as you mentioned.
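Back-of-the-envelope, counting only the 128b/130b line coding and ignoring packet overhead:

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
    lanes = 4
    raw_gts = 8e9                      # transfers per second per lane
    encoding = 128 / 130               # line-coding efficiency
    pcie_gbit = lanes * raw_gts * encoding / 1e9
    print(f"4x PCIe 3.0 ~= {pcie_gbit:.1f} Gbit/s")   # ~31.5 Gbit/s
    # Thunderbolt 3 advertises 40 Gbit/s on the link; the remainder of that
    # budget goes to DisplayPort, not to extra PCIe throughput.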
Each controller provides two Thunderbolt ports, which share bandwidth. For the 15", 4x PCIe to the left side Thunderbolt controller, 4x PCIe to the right side Thunderbolt controller, and 8x to the GPU would be a sensible configuration. Though who knows if Apple took this approach.