Intel acquires Linutronix (intel.com)
199 points by HieronymusBosch on Feb 23, 2022 | 135 comments




That sounds like a fun company to work for :)

Sure, one can whine about an RPi and "industrial reliability" (and all I can think of is that damned SD card...), but hey, it's just a cool movie, and it communicates in a nice way what they do. Also, it sounds like they should have been involved in the Mars helicopter (Ingenuity), which runs Linux and needs these types of techniques. Cool podcast on the subject: [0]

[0]: https://www.jupiterbroadcasting.com/145067/mars-goes-to-shel...


While I agree with much of what you said, I'm less convinced that their expertise makes them uniquely qualified to engineer the flight software for the prototype helicopter on Mars. It is quite a different environment, though there is certainly plenty of crossover. I think JPL has enough expertise in this domain anyway.

But yes, it's a wonderful thing that a fairly accessible Linux distribution is powering space missions in this era. It's long overdue, I think, and many of the JPL folks would probably agree.


> and all I can think of is that damned SD card...

I'm having better experiences with SD cards marketed as "high endurance". Another trick I've been using is to mount /var/log as tmpfs. If a device crashes when it runs out of space, I just let it restart itself.
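
For anyone wanting to copy the trick, it's a one-line fstab entry; the size and mode here are just placeholders to tune for your own logging volume:

    tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,mode=0755,size=64m  0  0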


I've heard that you should also care about the quality of the power supply, since power fluctuations or brownouts are apparently what actually kills SD cards. Personally, I just use USB drives, since the Pi can boot off of those now, and I treat the device as semi-disposable and assume that I'm going to reimage its thumb drive every few months.


To be honest I don't have that many issues (I used to have many more with older SD cards, pre-micro), but still, imagine a Raspberry Pi 4 with 8 GB of RAM and an NVMe (M.2) drive. It would be soooo nice. I know there are compute-module-based products that do something like this, but imagine a 65 dollar Pi with this slot.


Looks like K'Nex, not Lego. Still very cool!

https://en.wikipedia.org/wiki/K%27Nex



That's Technic Lego.


Intel seems to be making all the right moves. I'm excited to see what Pat Gelsinger can do over the next 5 years (I'm long INTC).


Their current CPU-features-as-a-service marketing push doesn't look like a right move to me.


>> Their current CPU-features-as-a-service marketing push doesn't look like a right move to me.

That was bothering me a bit too, but then I realized it may have a use that most of us don't care about. Sure, they could charge more for AVX512 or whatever, and they might try to charge rent for such options, which I'm not a fan of. But what if they are being asked by 3-letter agencies for chips with custom circuitry that would be relatively low volume and somewhat annoying to produce? If it's not too much area, they could just add those features to every CPU and only enable them for those agencies via the SDSi interface. Just speculation in a direction that is IMHO less awful than excess monetization.


How does it bother you less, assuming that 3-letter agencies can access parts of CPUs that you can't?

If anything, it bothers me more! Why would they get special treatment, and why would I have to subsidize their features by paying for a CPU that has them, but disabled?


FWIW, IIUC the binning process that designates chips as Core, Xeon, Pentium, etc. ultimately sources parts from a common set of conveyor belts, with QC processes determining the final designation by subclassifying against what parts of a chip do not work correctly. So if, say, the AVX512 unit in a single core doesn't pass muster, the entire die might go in the "doesn't have AVX512" bin and then get marketed appropriately; or perhaps (uncited speculation) a chip that fails the test suite for a Xeon E3 might get rebranded as a Core i3 instead.

I do wonder what percentage of this classification process is driven by process yield and how much is driven by volume quota requirements. I wouldn't be surprised to learn that this is an area of careful optimization; for all I know the entire silicon portfolio just ships the process yield org chart.

But all this means the user-facing FLAGS in the chip under the keyboard I'm typing this comment on are the result of a fuse configuration (aka policy), rather than a 1:1 representation of the potential of the photomask, and there are very likely a few micrometers' worth of functionality I'll never get to use.
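
(The policy side is at least easy to peek at; here's a minimal sketch that just dumps whatever the kernel reports, which of course can't tell fused-off from genuinely broken:)

    # Dump the CPU feature flags Linux reports for the first core. This only
    # shows what fuses/microcode expose, not what the die physically contains.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                print("\n".join(sorted(line.split(":", 1)[1].split())))
                break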

Of course I'm very curious if this is because the disabled areas were faulty (optimal use of manufacturing potential) or because *shrug* The Manufacturing Computer needed to meet its quota of Core i5s that day (arguably optimal fulfillment of volume potential). (Then there's the argument of disabling feature X across all cores for consistency, hmph.) But this is all firmly out in the weeds of implementation minutiae, and way beyond reasonable optimization; I have no idea what's theoretically broken in my CPU - and whether re-enabling that functionality explicitly to torture-test stuff I might like to make resilient would present me a relevant surface area of functionality I even knew what to do with (haven't yet played with C intrinsics for example).

At the end of the day, disabling functionality on-chip seems to be one of the few viable ways to claw fabrication yield back to something commercially viable and not utterly eye-watering, and IIUC it's been a staple for a long time.

Rereading your comment I realize it's quite possible you were writing with some or all of the above context implied and I may have misread. Not sure, disregard if so.


I don't think anybody objects to binning CPUs based on what the chip can actually do. It's when Intel decides to sell chips with features fused off even though the silicon is fully functional that people start to get annoyed.


Also selling chips at a set MSRP and then offering to unlock the "full" potential of the chip for a low monthly fee.

It would be like buying a car and having to pay $20/month to have the 65mph speed limiter turned off.


I am impressed that this reuse is possible cheaply. I have had no exposure to the business end of this process. Indeed, my comment may be less relevant than I thought.

If it might not be clear even to the manufacturer which chip pays for which other chip, then I might give them a pass.

I suppose the only way to tell for sure would be de-lidding and comparing. But that's an expensive hobby.


I think they have a whole set of chipset-based products that are beginning to move on-package. Turning these off and on via software makes sense (think QuickAssist for compression acceleration). These require integration effort, and aren’t something like a compiler flag.

I would also argue you aren’t subsidizing the feature, they are subsidizing your budget chip.


This is already a standard practice and wouldn’t require changing their business model.


It also wouldn't require a public announcement.


> *That was bothering me a bit too, but then I realized it may have a use that most of us don't care about.*

That's how it starts. First it happens to people whose needs you can't relate to, and then one day they'll want extra money so the accelerated video support starts working again.

It should bother you.


Like it or not, that's probably where the industry will be in 5 years. You might not like it, but X-as-a-service makes way more money for the company and its shareholders.

It'll probably look like this: buy a 'cheap' Intel CPU for $99 (say, 4 P-cores and 12 E-cores clocked at 2 GHz), with "Turbo" to 4.5 GHz being a $79/year subscription and "Extreme Turbo Max" to 5.1 GHz being an additional $35/year. First year free, of course, to get you hooked on the 'turbo' speeds.
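
Back of the envelope with those made-up numbers, assuming a five-year life:

    year 1:    $99 (hardware; turbo free for the first year)
    years 2-5: 4 x ($79 + $35) = $456
    total:     $555, next to the $400 sticker on the competitor's box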

Intel would be able to capture ~97% of those proceeds, instead of having to pay XX% to the distributor and retailer.

Would consumers revolt? Some might, but the masses will click 'Buy Now' with a $99 sticker price versus competitor offerings at $400; give it one financial year and AMD's board will force them to follow suit.


I would not like to order my dystopian cyberpunk future from IBM's or Oracle's or Microsoft's mail order catalogues, please.

IIUC, you needed to license each individual core of IBM's POWER servers. And then license the exact set of software features you needed enabled.

I don't think I need to talk about Oracle licensing.

I only recently learned (aww, can't re-find the comment) that Microsoft volume licensing in enterprise charges a seat license for Macs not running Windows.

This feels like an even more depressing reinvention of that. Me not want.


Enterprise licensing is a tarpit of unimaginable fuckedness.

The most ridiculous kind of licensing I've ever seen is an application (for data compression) where "how many bytes did it save" has to be accounted for, because that's one of the main ways the license fees are calculated.

And of course the usual insanity with twenty different portals (one for every other president, and about as old) per vendor, disjoint logins, human-in-the-loop verification and so on.


OOOHHH! kinda curious, if I create a byte stream that's larger when compressed, do they pay me?


*Facepalms in "when a measure becomes a target it ceases to be a good measure"*


I believe in the relevant circles this is called "utility based pricing".


Damn, the future of personal computing gets more depressing every day.


The market economy has discovered this path and is allowing it.

- iPhone you can't freely distribute software to (and Android makes it sufficiently difficult that only 0.01% of users can do it, so we may as well count them in too)

- infrastructure giants that turn open source services into paid platforms and then accrete all developer mindshare

- thin clients replacing thick clients. Workloads will move from desktop computers to the cloud. No need for a beefy computer to run software locally. More moat for the platforms, since tools won't be developed for a small market of hobbyists.


>Android makes it sufficiently difficult that only 0.01% of users can do it,

That's BS. My mom installed f-droid by herself and can install apps from APKs without any help.

It's not like you gotta root your phone, use ADB, or break out the CLI for that. All you have to do is tap Install when prompted, that's it. OMG, so complex, only 0.01% of users can tap Install in F-droid. /s


Purely anecdotal. It sounds like your mom is pretty tech savvy, though.

F-droid is less popular than Unix on the desktop. They don't publish any stats, but as a proxy, their Twitter account only has 10k followers and their forums receive fewer than ten posts a day.

Google Trends shows "Ubuntu" dwarfing "f-droid" to the point where there's no signal at all for the latter. Perhaps that's an unfair comparison, but I was expecting it to be closer.


Yes, but you were talking about difficulty, not popularity. Those are not even remotely the same things. Just because something is not popular does not automatically make it difficult.


You are forgetting two things:

- Apple. They charge a premium for their hardware but don't make you pay to use extra features of the CPU/GPU. And a lot of common folk already love the 8GB M1 Air and buy it in droves. It's going to last them 7 years easily, if not 10.

- Most people hold on to their computers with a death grip until it can't boot anymore. I have a friend who just two months ago finally replaced a laptop with 1.5GB RAM and 180GB HDD. He used it for 13-15 years.

Both of these mean that Intel and AMD can find themselves with 30% of their previous sales or less, if they try to force the CPU-features-as-a-service thing.

They can only stretch the "you're not the target audience of these new machines" trope to a certain extent.

Common consumers getting indifferent and holding on to their existing tech is a real market force.

Having to "hold on until the market adapts" might become too big a pill to swallow for Intel / AMD shareholders.


That's a very dark future for personal computing. It already sucks with Windows having to pay for different levels of OS features.

If Intel goes this route then I'm glad that Apple paved the way for ARM in personal computers.

I bought a Mac mini with the M1 and it was amazing. I can code .NET with JetBrains Rider without a problem, run multiple apps, and play Hearthstone, all on 8 GB of RAM.

I hope to see RISC-V in personal computers in the near future.


That trick might work for the home desktop, due to the short-sightedness bug in human consumers (mostly gamers?), but does it make sense for servers? It seems like a bunch of potential liabilities for a cloud host or data center operator. This is basically DRM for CPUs, and as such, its primary function is to stop working when all the business rules don't line up right.


They tried it for home desktop ~10 years ago and had to back down: https://en.wikipedia.org/wiki/Intel_Upgrade_Service


[ Sad mumbling about the ten year gap between United States vs. Microsoft Corp (2001) and the Chromebook (2011) ]

(In all seriousness, I just googled both of them for the first time as I typed the above to check the dates expecting it to be like 15 years or something. I honestly wasn't expecting... a small whiplash moment. Ow.)


It sure makes sense on the server-side. That's been IBM's business model for mainframes for half a century.


That's true, I had to pay IBM in the 90s for a CPU upgrade where they just dialed in, changed a config, and poof, more CPU. It was completely infuriating and one reason we migrated to HP-UX. The hardware support from IBM was amazing though - the techs are very well trained and show up with basically another mainframe in the van and start swapping parts until things work.


It's strange how paying for an upgrade where they swap a part feels much better than a faster upgrade they can do remotely.

Yet with software we don't much blink (though I suppose people DO like to see at least some download after they click Pay).


Dude, Sun in their heyday offered a similar service that allowed you to online more CPUs as needed. As I recall HP tried to as well. I mean sling all the hash you want at IBM, but at some point midrange people tried to do the same thing and for the same reasons - making it easy to capture every last bit of revenue and less enticing for you to switch platforms.


IBM support was great. OTA CPU upgrades are convenient.

A while back I suffered with HP-UX hardware and software that wasn't dialed in. It was painfully slow working through issues with HP.


Not just mainframes, on POWER servers they offer the same.


It will be interesting to see how it transfers from the consumer to the B2B arena but it gives them more latitude with price, which can only help them if they use it properly.

It wouldn’t surprise me if it flips the other way in B2B and they sell contracts for X years of Y teraflops instead of individual chips with fragile DRM on extra pieces.


> but does it make sense for server?

Typically in server/enterprise the licensing is either self-report-and-audit or an activation model. I think the activation model would probably apply here quite easily.


> Like it or not, that's probably where the industry will be in 5 years. You might not like it, but X-as-a-service make way more money for the company, and shareholders.

Do you like it?

I kind of hate this modern trend of taking the most dystopian possible kind of future as something completely inevitable and normal.


This will suck until enthusiasts figure out how to jailbreak the processors to unlock maximum speeds.


Haah, joke's on you. The next time you unlock your CPU, Intel would DMCA you and charge you with a felony. Or, more seriously, "warranty void if unlocked", meaning you'd be penalized with not getting any warranty cover if you remove the sticker, kinda like Android OEMs who don't let you unlock the bootloader or install custom ROMs because "security". Microsoft Pluton seems to be a step in this direction, IMO.


I don’t think it’s hyperbolic to say that this type of future will lead to literal class warfare. I believe whatever the next punk movement is will be a reaction against this type of lock-in, cultural and economic.

Maybe we should talk about making certain business plans illegal.


Rent-seeking, basically. By never actually selling something, they can demand an ongoing stream of revenue.

In a world without the internet, keeping track of the paperwork was an insurmountable problem, so it only happened with a limited set of things worth doing it for.

But with digital storage, scaling, etc. getting cheaper by the day, we can now track everything anyone owns and rent it to them. So that's what everyone wants to do.

Unless your revolution happens, or this is prevented by making such rent-seeking illegal, in the future the only thing you will own will be what you can make with what few resources you are allowed to get.


You will own nothing and you will be happy.


Much less likely than just selling enterprise features, which is particularly common for networking companies selling stuff like ports "on demand". The trend is towards a lot more accelerators and specialty features (AVX-512, AMX, HBM-related), which aren't going to have universal demand; charging for them separately might allow for fewer SKUs (lower unit costs) without charging everyone for features they don't want.

Considering the consumer market at a CPU level makes very little sense; almost everyone just buys a device manufactured by an intermediate company.


That's just leasing, which, unlike SaaS, is not a new business model. It's been around for thousands of years, and there are a lot of problems with it. How do you repo from a non-paying account, for instance? With SaaS, the account is automatically turned off. That isn't the case when leasing out an actual physical product.


Depends upon the default rate. It might still make sense even accounting for the losses. Especially if the upfront fees cover the marginal manufacturing costs.


Intel Management Engine.


Sure you can remotely disable it, but unless you get the chip back, you've lost it, and you'd have been better off selling it anyway.


That makes no sense - you would just buy the ultra-cheap unlocked model each year. Also, how do you justify paying for a boost when the next gen is faster and only an incremental price increase away?


It's probably the right move financially. People love to pay more for less. Prebuilt desktop, server and cloud companies have built businesses around that concept.

The average person is undereducated and easily parted from their money. It's a lot easier to make a bad product that appeals to them and get half the market for free than it is to make a good product and try to appeal to the best.


Calling the cloud "less" in terms of return is certainly a take. Writing software in a DC was pretty miserable; managing the systems in a DC while having zero control over provisioning, having to file tickets, sending endless emails, and dealing with single-lane DCs with little to no redundancy was particularly miserable.

The cloud has a lot of flaws, and I do think we'll end up back in DCs again, but it'll be different this time. Companies will have to have redundant internet providers, they'll have to provide APIs as opposed to helpdesks, they'll have to implement a redundant internal structure that scales well, storage will demand options. All of these things existed before the cloud, but became expectations when the cloud hit the market. That's why a lot of companies moved and I doubt they see it as "less".


Companies are generally stupid, because any group large enough regresses to the median of its members, or worse, the average of its C-Levels. I'm sure they don't consciously see the cloud as less, but that's still the selling point; it's why they value it.


Is it rented, or is it just pay-to-permanently-unlock?

Paying to permanently unlock a feature might not be so bad. And forcing extensions to justify their price might not be the worst thing. And this might let them get the "nobody is paying for AVX-512" signal a little faster than spinning up a billion slightly different SKUs.


It would be interesting to rephrase many of these HN comments in the context of a general purpose home robot. Or a Tesla. I take all of this for granted already for Tesla.

For a general purpose robot I might want a babysitter one year and a health care provider another year.


Wouldn't that be exactly the right thing to do if you want to minimize physical SKUs and maximize sales?

As far as the user is concerned, if they put deactivated silicon on a chip and can still hit the power envelope targets, then it's like it wasn't there at all...


They can produce more chips from the same wafer if they just produce smaller chips. I think doing this is stupid in the long run.


OK, I get the worry around software-enabled CPU features. I really do. But what evidence do we have that Intel is pushing it specifically for renting those features out vs. price differentiation at purchase time (which has been their model forever)?


For Nvidia, in the datacenter, they made most of the features of their cards into a service rather than something you can buy outright, and it paid off big for them.


Intel is currently worth less than AMD, even though Intel's profit is quite a bit higher than AMD's entire revenue. I recognize that Intel has huge problems to solve, but they absolutely seem undervalued currently.


Intel has been making balls-out plays (i.e. they could've just sold the fab business and MBA-ed themselves to death) and making good money, but their stock is down 30% or so from last year, so I am inclined to agree.

They'll never have a run like they did from Nehalem through to Zen's launch, but I think they're about to prove that they still have it and that they know how to sell chips.


MBA-ed to death, hehe.

In German: "Tod durch BWLer" (death by business majors).

If any company suffered from this, it is IBM. They sold their hardware business with its long-term earnings, and the entry path to many customers was lost. Now there is Red Hat with the IBM letters attached to it. At least we see improving support for ThinkPads through Red Hat. Apple, Microsoft, and Amazon instead invested in hardware along with software. Siemens is another example of death by MBA; thanks for ruining Siemens Nixdorf. How can you even think about focusing a company on one single market as a "Profit Center"? You lose the broad base and flexibility, and no other part can sustain you until you adapt to a change.


But they have MBA'd themselves to death.


Not really. Intel’s two core issues are engineering failures: process technology and processor architecture.


Are they having architecture issues? I guess I could criticize them for too aggressively pursuing small microarchitectural advantages, leading to failures like Meltdown, and for spending more engineering-years than AMD on products that aren't that much better. But, barring Meltdown, the end architectural results have been pretty good. My gripes would all be with the process, management decisions about how to respond to the process problem, or fusing off features for product segmentation reasons.


By architecture, I’m referring to the trade-offs they made for single-thread performance vs. core count and memory channels.


I have a Basis Peak in my desk that says otherwise.


Has nothing to do with Intel’s problems


It was an Intel product when it was shut down because of the wrist burn issues.


I know what it is, but what does that have to do with Intel’s problems? Sohail and Dadi probably did not think about Basis Peak for more than 10 minutes over the entire period Intel operated the brand.


I just mention that as a way to illustrate that they were not reinvesting their capital into verification/quality and R&D, or focusing on their core competency.

This was during the period when Intel was riding the IoT hype train and made some dumb purchases.

See here for more:

https://www.businessinsider.com/intel-is-probably-the-worst-...


I am very well aware of the history here. Again, Intel's misadventures in IoT have absolutely nothing to do with their challenges today, which are rooted in a handful of poor engineering decisions.


AMD spun off their fab business, and they didn't MBA themselves to death, so I'm not sure why that's the only possible end result you see.


The bad CEO (Hector Ruiz) who probably would have done it left, and AMD eventually got a much much better CEO (Lisa Su).


Given how GlobalFoundries has essentially given up on moving to better nodes, it seems like spinning the fabs off and going to contract manufacturing was absolutely the right call. AMD was already a gen or so behind and falling further back due to lack of investment in its fabs.


AMD is way more nimble than Intel though, since they're fabless.


And I'm more nimble than both because I'm fabless and designless.


This seems like a strategic drawback.


That puts them more at the mercy of their suppliers. Replace "nimble" with "not in control".


This seems like a strategic drawback.


Why? Plenty of chip companies are fabless. Apple is fabless, Nvidia is fabless.


Neither Apple nor Nvidia competes with TSMC, while Intel does.


The argument was that being fabless is a strategically bad move. I don't see how Intel competing with TSMC refutes that? What am I missing here?


Intel competing with TSMC will not be a problem for Intel or TSMC until the chip shortage is well and truly behind us.


I agree they are undervalued. That said, AMD is growing quickly in terms of earnings, and Intel is not. I think it’s a matter of time and impeccable execution before Intel is winning again.


A quick search shows INTC market cap is 184B and AMD is 135B. AMD is worth 73% of Intel.


My quick Google search shows AMD's cap at 183B. So maybe not more valuable, but about equal. Apparently AMD was worth more yesterday, though: https://finance.yahoo.com/news/amd-is-now-worth-more-than-ri...


They are both a bit over 180B, that AMD number is prior to the XLNX acquisition which was done by issuing stock.


As of a few minutes ago... AMD: 181.86B INTC: 183.97B


Really? Name a single successful Intel acquisition, especially an open source one.


I read your statement and I thought "should be easy to find at least one"!

Turns out it isn't: https://en.wikipedia.org/wiki/Intel#Acquisitions_and_investm...

Not only do none of these ring a bell, all the software ones seem to be legitimate garbage. Why did Intel buy a cloud gaming startup?


The non-chip business at intel is a sideshow of uncommitted grasping.


They bought a drone company that had built a single drone that was better than all competing drones in the market, and then they stopped it. No more support, no more new development, no more production.

So senseless.


Maybe acquihires


Altera?


Wind River seems like it did okay. Bought for $884 million and sold for $4.3 billion.

https://en.wikipedia.org/wiki/Wind_River_Systems


Mobileye?


Bought for 15B. Net profit in 2021 was $1B. It'll take a while to get their investment back. Not sure I'd call it a successful acquisition.


They're trying to IPO mobileye for $50B. If they can pull that off the acquisition will have been a major success.


But not an intentional one. They bought it as a "hey we're relevant too!" move at peak autonomous car hype, and as it's hardly additive in real life, are trying to IPO it for the dumb money before they have to write it off (e.g. Habana).


Peak autonomous hype would be when it's actually here.

And there is too much money in it for it to never get there, although I think it will take longer than most predictions.

And that's net profit while the number of employees has grown 5-fold. Growth of 24% YoY is pretty good.


Same, and also long INTC - Pat has been known for a while to be a very competent leader. I think the vertical integration that Intel has will be extremely valuable in the next decade.


After working directly with Intel and AMD engineers, my money is on AMD. The amount of planning they're doing is nothing short of incredible.


Intel has a lot of irons in the fire and some very talented engineers, but they've always competed with a process advantage behind them and been able to use it to recover from the occasional architecture misstep like Itanium or the Pentium IV. Pat Gelsinger is probably the best CEO they could have picked but they're in a tough position and I'm not optimistic.


A lack of process advantage is exactly the problem. I remember being at nVIDIA in 2008 and being afraid about Intel's Larrabee x86-based GPU architecture. I was genuinely afraid because they could do full custom design on the latest process and so the physical design would be much superior to nVIDIA even if the higher level architecture was not as great. I didn't care much about the x86 aspect because I knew software needed to be re-developed for GPU anyway so having a compiler generating a custom/private ISA wasn't a big deal vs. x86.

When Larrabee failed I breathed a sigh of relief. To me that failure was huge, and the fact they are trying to replay that strategy (building a competitive GPU) but without an unfair manufacturing advantage speaks to the hole that Intel has dug themselves into. It's not a grave, but the stock price has not baked this reality in yet. We've got a long ways down to go. This won't look like a simple turnaround story. The company will appear dead before it can come roaring back. If.


And the rumour on the street was that Larrabee was one of the reasons why Pat Gelsinger had to leave Intel.

And the Intel Arc GPUs are not being delivered on time either; last I heard it was early Q1 2022, which has now slipped to Q2 2022. And at the recent investor day conference, they again announced delays in their 2023 server chip.

These delays have occurred with Pat at the helm, combined with Intel burning through a lot of cash; I don't understand why most are so gung-ho about Pat. Thankfully the early comparisons with Steve Jobs when Pat returned to Intel have stopped; they were both laughable and an insult to SJ.

[1] http://vrworld.com/2009/09/18/pat-gelsinger-left-intel-becau...


Yes, forgot about the connection with Pat Gelsinger. It would be really disappointing if they blew Arc. In fact, given how much I think the current position of Intel is a product of their GPU project failure over a decade ago (look at where NVDA is today), I would put the most focus on absolutely nailing Arc and being competitive with NVDA and AMD in high performance GPU space. This would be a big morale lift and would probably be the first big step on the staircase to salvation.


So the CEO decided to use TSMC for some future products.


People were extremely skeptical of Pat on HN not long ago. Glad to see some positive comments.


As Thomas Gleixner has been the x86 maintainer since 2008, what do they gain from this? It seems a pretty close relationship already. It's not as if they're acquiring IP. Conversely if they'll allow Linutronix to continue "to operate as an independent business" and e.g. work on other architectures like RISC-V, then that dilutes their access to the talent they're acquiring.


Yes, it's been a pretty close relationship for a long, long time.

What does Intel get out of this? I'm really hoping that Intel gets help improving the kernel from a talented bunch of kernel developers who have experience working closely with paying customers. Intel has tons of kernel developers, but few of us are very directly customer-facing.

I also hope the Linutronix folks can spend less time on "castle maintenance" and more time on kernel maintenance. https://www.phoronix.com/scan.php?page=news_item&px=PREEMPT_...

(BTW, I work on Linux at Intel.)


Yeah, Intel clearly has no interest in helping RISC-V succeed.

https://www.zdnet.com/article/intel-invests-in-open-source-r...


Intel is well aware that their x86 IP isn't the secret sauce it used to be. If RISC-V takes over the world, they won't be caught off-guard.


Maybe, but I think the more immediate motivations are

- Get customers for their fab services division, now that they're making a serious push to fab 3rd party chips.

- Help RISC-V threaten ARM at the low end, thus taking away attention and resources ARM could otherwise use to compete with x86 servers.


Maybe Intel is developing a new chip architecture and wants to support Linux from day 0? Or does Intel want a team of kernel developers for their AI silicon play, to support the lower-level architecture, or for their GPU play?


Maybe he was going to go do something else and they purchased some golden handcuffs and a smooth transition?


Hopefully this results in even better drivers for Linux. I wonder if the graphics cards they have planned will have good Linux drivers too. I've always had a good experience with their Linux integrated graphics drivers so far.


I hope so too! But, I was trying to think of if I've ever seen the Linutronix folks working on the Intel graphics code. I don't think I have:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

For very selfish reasons, I'm hoping that this acquisition will give the Linutronix folks even more of an opportunity to contribute to the core kernel and especially arch/x86.

Disclaimer: I work on Linux at Intel.


There's a series currently under discussion which failed CI:

https://lore.kernel.org/intel-gfx/20211214140301.520464-1-bi...

Plus 13 patches over the past years (not counting merges and SPDX commits):

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

Unfortunately, a lot of the PREEMPT_RT patches follow the "disable stuff for now, fix up for real later" anti-pattern. :-(

Case in point:

https://lore.kernel.org/intel-gfx/YgqmfKhwU5spS069@linutroni...


They don't necessarily have to do it directly. As far as I remember, Linutronix is rather invested in training (I enjoyed a Linux training from them around 2010), so they might just help other Intel engineers land those patches.


Thanks for your hard work!


Echoing this. Intel has by far the best out-of-the-box graphics drivers. I usually go with Intel APUs for this reason, and am excited by the move into discrete.


Intel is doing the best job of supporting the Linux ecosystem. I wish other growing chip manufacturers would follow them.


Sure, they work somewhat. Always first to get support for the new rendering backend on Linux, even before the hardware is released. However, they're plagued with issues. As a Linux user of integrated Intel graphics for the last 10+ years, the cycle has been: the driver works with new hardware, but with major bugs that impact usability. The major bugs get ironed out in the first 6-10 months, leaving half a dozen papercuts for a good 1-2 years. By the time the driver is stable, a new shiny rendering model/backend/engine is enabled somewhere in the stack, rolling back progress. I generally switched laptops faster than Intel could fix bugs on existing hardware. I have a couple of good stories on certain individual series, but that's it. Not to mention, most of the driver issues are worked around in the software you're using most of the time, so the fact that you don't see issues doesn't mean the driver is working fine.

I was also disappointed recently by the AX500 driver on Linux. For a good part of the last year, I couldn't get stable connections. BT was next to useless. Every driver release would fix one issue in wifi, just to break BT, and vice versa.

For a company the size of Intel, with such massive marketshare in premium laptops, I do not consider this acceptable.

The amdgpu driver actually has fewer bugs on Vega currently, and has OpenCL working right out of the box to boot. I had fewer issues with Realtek wireless drivers too.


What does this mean for the future of PREEMPT_RT on ARM?

I know PREEMPT_RT is mostly independent of ISA; it's the parts that are not independent that worry me.
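
(If nothing else, it's easy to check what a given board's kernel ended up with; a minimal sketch, assuming a Linux system, using two common indicators of an RT build:)

    # /sys/kernel/realtime is only present on PREEMPT_RT kernels, and the uname
    # version string usually carries "PREEMPT_RT" (older trees: "PREEMPT RT").
    import os, platform
    print("realtime node:", os.path.exists("/sys/kernel/realtime"))
    print("version string:", platform.version())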


Sadly, everything Intel has touched in embedded Linux over the years has failed so far, e.g. Wind River, Yocto, its embedded chip efforts, even ARM, etc.


Is Linutronix in any way related to Pengutronix? https://www.pengutronix.de


Different company, both are in the Linux kernel development space.



