Hacker News
Qualcomm cancels Snapdragon Dev Kit, refunds all orders (jeffgeerling.com)
120 points by LorenDB 4 hours ago | 121 comments





Serves Qualcomm right for front-loading so much Windows support and not engaging with the Linux community sooner. Every single person I've seen express interest in this device wants it for Linux (dev or to try as a daily driver).

Those are my thoughts exactly!

I was going to buy this day 1, then I found out Linux support wasn't ready and I went with a new AMD 365 laptop instead. This was its own adventure, with Ubuntu not working out of the box, so I had to go with a rolling release distro.

It's been months and I think Ubuntu might boot on the Thinkpad 14s. It's definitely not supported as a daily driver, or on any other hardware.


Ubuntu released a special version for the SXE chip that specifically listed the Thinkpad 14s as the best-supported PC with the chip.

If you are playing your cards right, it's best to be on mainline kernels and on the stock non-LTS release by default in most distros.

I don't disagree, but a kernel that works right now is better than not having any.

Why non-LTS out of curiosity?

> Why non-LTS out of curiosity?

Based on my experience, more recent kernel versions have more fixes/better support for recent hardware (or sometimes not so recent). LTS doesn't backport everything from later releases, for obvious reasons.


Qualcomm showed Linux working in October 2023 yet here in October 2024 Linux does not work. I don't get it.

> Qualcomm showed Linux working in October 2023 yet here in October 2024 Linux does not work. I don't get it.

The curse of Android: Qualcomm lived for so long with a permanently forked kernel that there is no pressure on code quality. The only thing they know is how to sling minimally viable platform code at their unfortunate customers. No one cares about its maintainability, because even before the code has a chance to stabilise, there is already a new model to be released, new hardware to be supported. At that point no one cares about the old hardware.


Add me to the list. Fortunately there's great progress being made getting Linux on the Volterra Windows Dev Kit 2023.

> Every single person I've seen express interest in this device, wants it for Linux (dev or to try as a daily driver).

Does Linux still require specific images with patches to be built for every SBC/device?


Technically no, if you get a machine with ARM SystemReady certification, but effectively yes in practice, because zero affordable SBCs meet that criterion. I think the Ampere Altras are the only ones you can easily buy, and those start at $1400 for a bare motherboard and CPU combo with no RAM included.

https://www.newegg.com/asrock-rack-altrad8ud-1l2t-q64-22-amp...


Yes but that isn't a Linux-specific thing. You will always need a BSP (Board Support Package) for any embedded custom hardware, whether it's an SBC or any SoC for that matter.

It depends on what you mean by "every SBC/device" and "specific images".

If your SBC requires things not in mainline Linux (which is common for new ARM SoCs), then you will need a custom kernel.

Otherwise it can use a stock kernel, but it might need a custom version of U-Boot, and it will definitely need a board-specific device tree.
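To make the "board-specific device tree" part concrete, here's a minimal, purely illustrative device-tree source sketch (the board name, compatible strings, and addresses are all made up; real boards have far more nodes):

```dts
/dts-v1/;

/ {
    model = "Example Vendor Example Board";       /* hypothetical board */
    compatible = "examplevendor,example-board";   /* matched by the kernel */

    memory@40000000 {
        device_type = "memory";
        reg = <0x40000000 0x40000000>;  /* 1 GiB of RAM at 0x40000000 */
    };

    serial@9000000 {
        compatible = "examplevendor,example-uart"; /* binds a UART driver */
        reg = <0x9000000 0x1000>;                  /* MMIO registers */
    };
};
```

The bootloader (often U-Boot) hands the compiled blob (.dtb) to the kernel, which uses it to discover hardware it can't probe on its own.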


Fuck device trees so much. Is there some fundamental impediment to having the same "plug 'n play" experience that we've had on x86 IBM-clone descendants for thirty years?

That standard exists, it's called SystemReady. Vendors choose not to implement it.

However, device trees are far better than ACPI. The only reason anyone likes ACPI is that there are a lot of generous people dedicating their time to patching over broken ACPI tables so that users don't have to see how utterly terrible it is. Device trees turn that pain into very clear and obvious errors instead of giving you a subtly broken system.


IBM PCs use ACPI tables, which are basically the same thing as device trees. It's just that ACPI tables are burnt into the UEFI bootloader on the motherboard, while most SBCs put the bootloader on the same media as the OS.

(generally I think this is an area which just hasn't become mainstream enough for this standardisation to be useful: when most of your users are hacking around with the SBC and are happy to just download the vendor's SD card image, there's very little impetus to make something plug-and-play)


> (generally I think this is an area which just hasn't become mainstream enough for this standardisation to be useful: when most of your users are hacking around with the SBC and are happy to just download the vendor's SD card image, there's very little impetus to make something plug-and-play)

I think it's a more subtle problem of embedded software engineering culture being a decade or more behind the rest of software development, especially at any IC company that isn't NVIDIA/Intel/AMD. IME most SBC users are commercial, not hobbyists hacking with it (I don't know about this specific dev kit though), and every professional EE I know hates dealing with poorly supported non-mainline Linux SBCs. If your company/clients don't commit to a single platform like i.MX for years at a time, managing embedded Linux is a pain in the ass. That's why RPi was so successful with its compute modules: the community handled a lot of that pain instead of depending on the vendor.


There is that general (much bigger) issue with fragmentation as well. But that's mostly separate from 'I also need to stick a devicetree on my boot media'.

(I do know the pain of non-mainline Linux SBCs, and I make some effort to avoid it now: even if it means using old chips, I'll very strongly prefer one with mainline support. Note that Raspberry Pi is also pretty behind on getting their support upstream: even the Pi 4 is still lagging, and the Pi 5 isn't even usable. They just have some vaguely decent support for their fork.)


> embedded software engineering culture being a decade or more behind the rest of software development

I'm not sure, because putting the bootloader in EPROM or NOR has been done on embedded boards for years; it's just rare on mass-manufactured general-use SBCs. I don't know if that's because of BOM, convenience for commercial SBC customers, or just to make it harder to brick the board.


Yes, the impediment is that there isn't a single IBM machine that everyone's cloning.

Have a look at the kernel >5.x.y organization. Overlays were a necessary result of every manufacturer spinning their own board variant with standard cores and generic drivers, i.e. to bring up the hardware, a generic monolithic kernel would need to balloon in complexity, and could still mishandle ambiguous module setup etc.

It is a kludge for sure, but a necessary one in the current ecosystem. =3


Yes. Lack of BIOS or EFI. And ACPI. And. and. and.

You're right, in the sense that the standards that enable this magic are not present on Arm SBCs, but I wouldn't consider that "fundamental". Arm is at least theoretically on board with UEFI, for example, and there are implementors making boards that support it.

What I'm really asking is if a sufficiently determined person couldn't have avoided all these headaches by building and/or standardizing equivalent technologies. (And if it's possible why hasn't it happened yet?)


Basically because of the way the tech evolved. x86 was made to be “generic” from very early on, in large part because of the decoupling between CPU vendors and PC vendors.

ARM doesn’t make chips you can buy and plug into devices (they don’t make chips at all). You get the IP for the core and you then typically integrate it into your SoC. There is/was so much custom development that there was no benefit in adding an abstraction for its own sake when the only ARM SoC a device would ever see is the one that shipped from the factory with it, and the tight coupling between firmware and hardware developers for phones and tablets meant you would just hardcode the values and behaviors and peripherals you expected to find.

There is a level of abstraction at the code level with SVD files that lets you more easily code new firmware against new chips, reusing existing logic, but it’s like the equivalent of some mediocre API compatibility without ABI compatibility - you still need new blobs for the end user.

The era of PCs and laptops using ARM running generic operating systems came ages into the existence and growth of the ARM ecosystem. Compromises such as device trees were found, but it’s nothing like BIOS and ACPI.

(Now we’ll get the typical replies stating “yes but no one does BIOS and ACPI correctly so I’d rather have to wait for actually correct device trees blobs from the manufacturer than a buggy ACPI implementation” to offer the alternative viewpoint.)


I still get ACPI-related error messages on my boot screen on a machine that previously ran Windows. I tried to figure it out and used the command line tools on Linux while trying not to break anything - I stopped at some point. ACPI is a journey in itself.

> What I'm really asking is if a sufficiently determined person couldn't have avoided all these headaches by building and/or standardizing equivalent technologies.

https://xkcd.com/927/

The reason that this worked out in x86-land is that the entire industry spawned from cloning and making extensions for a single product, the IBM PC.


Most of these ARM boards use the u-boot method for staged bring up of the system. =3

What does a crappy hardware product that couldn't get FCC certification have to do with Linux?

I missed the boat on the dev kit, but I picked up a Galaxy Book4 Edge 14” for $800 at Best Buy ($550 off!)

It’s not the top-line SoC, but for the same price as the DevKit it comes with a free battery and AMOLED screen:

https://www.bestbuy.com/site/samsung-galaxy-book4-edge-copil...

It has OpenBSD support in 7.6, and there is a devicetree patch floating around for Linux.


It's still $800 ... OK, I need someone to talk me out of it (esp. as the last thing I need is Yet Another Tinkering Sinkhole :) )

But have you encountered any serious issues so far? (I expect to put Linux on it, and I build my own kernels.)


On Windows, some subregions of the screen sometimes flicker with garbage data, but very briefly.

On Linux, I successfully built a kernel but I have not gotten the firmware working. My WIP is here and I am very much a novice in this area: https://github.com/conradev/x1e-nixos-config


Could get a pi5 if you want to play with ARM.

The documentation and well maintained kernel are kinder to new devs. =)


I play with ARM in the Day Jobs but the full laptop form-factor is really appealing

Depends on the use-case.

In general, most reasonable ARM64 machines are not yet value-competitive with similarly priced Intel + Nvidia GPU laptops.

Application support (macOS or Win11 on ARM) is usually what hits hard, in edge cases people take for granted as working most of the time.

Best of luck, =)


I think that one has the best screen of the whole batch of the new Snapdragon laptops, too. That's a sick deal

I didn’t fully know what I was getting into when I got it, but it honestly puts the MacBook Pro screen to shame

Genuinely curious how they could not get a Linux version running. If they can release a working Android version for their other SoCs, what stops them from releasing a working Linux distro?

I know they have tons of proprietary blobs but I suspect they could still just deliver a Linux ISO with all the binaries included.

It just smells like someone at Qualcomm did not see the value and cut the budget to the bone. I had my fears around long term software support of those chips. This news makes me even less confident that I trust them to support these chips long term.


>It just smells like someone at Qualcomm did not see the value and cut the budget to the bone

That's been the theme of the last two years. Qualcomm laid off 1200 last year and a few hundred last month (and I'm sure there's been more if I look deeper), so it's not doing anything special compared to the rest of the industry.


It seems like the development of this thing was a shitshow: the version they ended up shipping (very late) has an unpopulated HDMI port and comes with a USB-C to HDMI dongle instead, and has no FCC certifications. The main HDMI encoder was so broken that they couldn't salvage it, and their last resort was to ship a half-baked prototype.

so much for the windows on arm 'revolution'.

As others point out, pretty much all interest seems to have come from Linux developers, or people who'd like an alternative to x86 as their (Linux) desktop.

It should be pretty clear by now that most Windows developers do not care about ARM. It's not that I think they are dismissive of the platform, but they are waiting for Microsoft, Dell and Lenovo to deliver a finished product. They aren't going to spend time on yet another failed Microsoft hardware experiment.

Windows developers aren't going to switch to ARM in large numbers until Microsoft and partners can deliver a platform with a decade or two of longevity. They are completely accustomed to Microsoft preserving backwards compatibility, and they'll expect the same to be true for an architecture switch.


Really it's not about how long MS will support the platform but the fact that there is really no reason for Windows developers to support ARM. Current x86 laptop CPUs are way faster than Snapdragon and are not that far behind in battery life.

2012-2013 - Microsoft partners with Nvidia to launch Windows ARM

2019-2020 - Microsoft partners with Qualcomm to launch Windows ARM... again

2024-???? - Microsoft partners with Qualcomm... again, to launch Windows ARM... again

Fourth time's the charm?


I was on the Qualcomm side of the Windows 8 experience. It was terrible, not a good partnership. I have long, boring stories about the previous (to this) WoA attempt.

And the one before that.


Win32 slays yet another would-be challenger.

Again.

Like every single challenger that has come before.

Perhaps it is prudent to not challenge Win32.


Year of Windows on ARM is the new Year of the Linux Desktop.

(This comment was submitted on an ARM-based Linux Desktop)


Which ARM-based Linux Desktop are you using? :)

Android, of course

What desktop runs Android?

ChromeOS

I was excited during the summer, but now Intel's Lunar Lake is ruining the show. It seems Intel is able to match (or offer better) battery life and performance. If that's the case, I don't see much point in picking Arm for Windows.

And unlike with Apple, on Windows x86 won't go away. So there are two architectures that need to be supported. And I'm not so sure everybody is interested in putting in the effort for Arm.


The revolution will not be in the Microsoft Store.

I don't even think it will be on windows ;)

I use windows on ARM on a daily basis… running from a VMware image on macOS on Apple hardware. I would love to have a native booted windows ARM device, but…

Windows on ARM is the same year Tesla has FSD... 50 years from now, judging by Elon's own tweets stating that exact detail.

Commercial fusion power will beat both to market.

I should know, I owned both a Windows ARM PC and a Tesla with “FSD”.


Wintel vendors must be feeling like the airlines that invested too much in Boeing

Now Windows 11 feels as exciting as sponge cake and nobody cares too much about PCs anymore


I've actually enjoyed a couple of good sponge cakes. On the other hand Windows 11 is determined to upset my workday and get in the way of doing my job. It's so bad that my employer, a very large US SaaS provider and diehard Microsoft shop, is starting to put Ubuntu on our laptops because of the constant headaches.

Hope it's not related to the ARM vs Qualcomm legal dispute

no, it's just incompetence and overpromising on a product, and them deciding to pull it since it's making them look bad and it's way too late to matter at this point anyway.

Sidenote: While Qualcomm is focusing on Windows, recent Exynos chips use RDNA2 architecture graphics, which ought to work in Linux AND Windows with a few minor patches. Samsung has also been extremely faithful about keeping bootloaders unlocked on Exynos devices. My bets are on Exynos laptops.

Microsoft doesn't even care about Windows, why would they care about what architecture it ran on?

They got bamboozled by the AI PC thing from Microsoft

You can't just go about copying everything Apple does to create the Apple magic. I think the last non-Apple innovation in the space was the netbook from Asus - and that was a completely independent market. Maybe the microPCs/NUCs as well. Oh, Framework's modular laptops for sure.

But each of the copy-Apple things aren't that good. When Apple copies, the product is approached from the top. When these guys copy, the product is approached from the bottom. I suppose that's because strong brand value allows for premium pricing in the former case.


It’s usually not even higher pricing. The difference is that Apple is one company, whereas if I buy a Dell there are at least 4 companies involved, with conflicting incentives and extra overhead. If MacBooks crash when waking from sleep, nobody at Apple can try to shirk responsibility to Microsoft, Intel, or the BIOS vendor.

> I think the last non-Apple innovation in the space was the netbook from Asus [...] Maybe the microPCs/NUCs as well. Oh, Framework's modular laptops for sure.

There's also the Steam Deck, which introduced a completely new PC form factor.


Oh that's a big one. Looks like there have been quite a few in the space actually.

Wild how hyperscalers have such great success with ARM. But it's Ampere & literally no one else (Apple within their fief) who's made even a somewhat viable pass at medium/large size computing.

Arm, the architecture that forever remains inaccessible. It's been well over a decade since AMD announced project Seattle, an ARM chip, which took many years to eventually become the A1100 Opteron, which was still basically unpurchaseable & slow. ARM is just endless quagmires & failures. I don't know what it is with this architecture being so ubiquitous yet so consistently failing to come to market again and again and again.


Sad. It was reported that MS was contractually committed to Qualcomm for 5-7 years as of '16, so I would imagine this is the end of the road.

Windows on ARM works on the RPi4. I would have to think other ARM license holders could match whatever x86 emu optimizations Qualcomm had come up with without too much trouble, and MS may even get a license to continue to use them?

I'm looking forward to learn what MS's future plans are.


This isn't a cancellation of Qualcomm's Snapdragon chips for laptops, just a cancellation of their FUBARed dev kit that was already delayed to the point that the only reason for them to keep trying to ship it was to save face.

this is just about the dev kit they tried to make themselves. turns out there's expertise needed to make, ship and support an actual entire system.

looks like they should get someone like asus to build / distribute it for them.

I wonder if anyone at Microsoft is starting to have an inkling of realization that ARM is not what makes the Mac good.

Apple's custom ARM chips are definitely one of the things that make Apple better. Low power, low/no noise, and high performance.

Better nowadays. Previously Macs were widely laughed at for constantly having a generation or two (or more) older CPUs and GPUs, particularly for the price.

That's not what I remember. Towards the end of the Intel era it really slowed down, but before that I felt that Apple was routinely releasing new Macs with the latest Intel processors.

Would be happy to be proven wrong. Got some good examples?


Your sentence makes as much, arguably more sense if you erase the word "ARM".

the ARM chips are, to a large extent, what enable that.

Why does it matter what instruction set the chips support?

The main difference between x86 and ARM is that x86 is slightly harder to decode because instructions can be variable-width. But I have never heard of instruction decoding complexity being a particularly important bottleneck.


Memory ordering is also different.

You've never had to rub a bag of frozen peas all over the bottom of your x86 Macbook Pro because it was overheating and you had an imminent zoom meeting you could not miss.

Those chips were designed and fabbed by Intel, not by Apple/TSMC respectively. That’s the relevant difference, not the instruction set.

The instruction set has only moderate impact on the chip’s frontend, and no impact on the backend. Most design decisions are unconstrained by the choice of instruction set.


AMD doesn’t have that problem, though, so is it a problem with x86 or Intel? I would bet that Apple’s CPU team could get great results with a free hand on x86, too – probably not quite as good but close.

The last few generations of x86 MacBooks were exceptionally bad implementations in this regard, and some of the better thermal behavior of the Apple Silicon MacBooks is something they could just as easily have achieved with an Intel CPU, if they had felt like it. For example, the Intel MacBooks were extremely eager to ramp power consumption to the max, while the ARM MacBook increases clock rate slowly, one step at a time, such that it only hits max power after a long period of sustained demand.

I think that might have been Intel's doing. They were on 14nm for 5 years or so, with each new 14nm release pushing the power budget and squeezing slightly more performance out of any corner they could find. I assume the CPU ramping was just another part of this approach. If the CPU ramps up faster it will seem fast to users and it'll look faster in short benchmarks like Geekbench.

ARMv8 was developed in order to make those chips. Something else could work but wouldn't be as good.

Are you suggesting that ARM isn't a large contributor to those things being possible? Genuinely curious.

The ISA is, according to people far more knowledgeable than myself [0], not a significant factor in regards to performance and/or power efficiency on modern CPUs. That's also why both AMD and Intel haven't done a bad job keeping up with Apple on the efficiency front, AMD since the 4x00U series and Intel now with Lunar Lake. Nowadays, the OS not having downright broken sleep [1] is what keeps MacBooks slightly ahead, though then again, macOS makes up for that with other bugs.

Because of that, if one is willing and able to go for a well-supported Linux distro, they can in my experience get good battery life regardless of ISA. Essentially, whether on a MacBook with Asahi or a modern x86 notebook with either Linux or an aggressively managed/fixed Windows install, you'll do well. Personally I lack the skill for the latter though; my Surface remains scalding hot whilst sleeping even after a fresh install, but c'est la vie...

[0] https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-...

[1] https://www.spacebar.news/windows-pc-sleep-broken/


What makes you think it is a large contributor?

High performance doing what? Everyone in high-performance computing is using Linux, my dude. Unless you're a studio who needs specific Mac workflows, or maybe you're doing LLM work?

Great perf-to-watt, but absolutely locked down so you can't do anything meaningful in the embedded space. I'm not over here mounting a max specced Mac Mini just to get that perf-to-watt.

We're using low TDP x86-64 when we need that.


building Ardour (.org) from source:

16 core Ryzen Threadripper 2950X: 9min

M3 macbook pro: 4min

Pretty meaningful when your life revolves around building Ardour.


Sure, but isn't that just comparing an obsolete CPU against a state-of-the-art CPU? I can't see attributing that to the ISA.

A scratch, optimized build of Ardour on a Ryzen 9 9950X takes 1m51s.


the machine is less than 4 years old!

anyway, the point wasn't about the obsolescence or otherwise of the ISA. It was about the claim that various versions of Apple silicon are no good for performance, only for perf-per-watt or some related metric.


You are right, it's the terrible docker experience that really seals the deal. Or the fact that to run Linux you have to rely on a small group that is trying to will it into existence without any support from Apple.

Isn't the fact that people use mac despite all that testament to the fact that there's a good reason they're doing so? Like it or not, Apple hardware is top-notch, it integrates well with the software, and it's hard to get both anywhere else.

As always, it's a testament to the power of Apple marketing rather than the product itself. Apple has never been anything special compared to its competitors.

Comments like this are a way to instantly kill your credibility. Let's go back to 2012 and compare laptop trackpads and then see if we can pretend that your comment isn't hyperbole.

Well you have made it instantly clear you have no idea what you're talking about

They have a lot of nice window dressing, like excellent touchpad gestures. Until the Snapdragon chips launched they slaughtered everyone else for battery life and performance-per-watt on laptops.

But it's certainly true that a lot of their prestige just comes down to brilliant marketing. What else could get businesses to buy laptops with non-replaceable batteries and storage and non-upgradeable RAM that need to be trashed every 3-6 years?


> What else could get businesses to buy laptops with non-replaceable batteries and storage and non-upgradeable RAM that need to be trashed every 3-6 years?

Most PC laptop vendors? The part you’re missing is that businesses usually plan to refresh devices every few years anyway, and since the service life on a Mac is 5-10 years this just isn’t a limitation most people hit whereas weight, lower performance, and worse battery life are something you notice every day.


I'd really like to know how many businesses are replacing storage and RAM. Most of the ones I've worked for just buy the business Dell Latitude and never touch the storage or RAM. Everything for the business is in O365 or SharePoint or Dropbox.

They have much more support than that. Apple may not be willing to provide direct assistance (and I wish they would) but they have designed the system such that it's not locked down nearly as much as an iOS device. To the point that on ARM64, one may add a second OS without losing system integrity on one's primary OS.

Docker on macOS doesn't have the same underlying system support as on Linux, which is where Docker originated.

The Docker experience on macOS is also marred by the fact that Docker Inc. appears to have limited interest in their Docker Desktop offering, whereas a third-party alternative like Orbstack provides a much better experience.


> Or the fact that to run Linux

Most people don't care about that (if you look at it the other way around, running macOS on PCs these days isn't that easy either, and it will only get worse). Just don't get a Mac if you want to use Linux?

Docker is certainly pretty painful, though. Windows seems like a much better option if you need to develop for Linux.


Asahi Linux is pretty nice though.

And next to no support from upstream kernel devs

What exactly is MSFT doing for Linux support on Snapdragon Elite X machines?

Qualcomm is merging patches into the upstream kernel directly

To quote OP:

> I wonder if anyone at Microsoft [...]

Qualcomm's, as far as I can tell, imperfect [0] efforts are independent of this discussion.

[0] https://www.phoronix.com/news/Linux-Disabling-X-Elite-GPU


Right, I know, I guess my point was that MS doesn't need to do anything directly since Qualcomm and the various device manufacturers are contributing instead.

They were asking what MSFT was doing in response to the previous commenter talking about how Apple is doing nothing to help with Linux support. Apple is both the OS maker and the device manufacturer in that case. In the case of the Snapdragon chips, the manufacturers are upstreaming support into the mainline kernel.


Not /exclusively/ ARM, but having their own silicon is helping more than hurting. Personally, and along the lines I think you're getting at, I think Microsoft is in a weird spot of trying to hold on to legacy compatibility for longer than they should and taking some UI gambles that aren't paying off ("again", if you count Windows Phone). Maybe their own chips would help, but that alone isn't the dealbreaker.

I mean, as the other member of the Wintel ecosystem, they have to believe Intel is wholly to blame for it falling apart, right? Evidence be damned.

Dang Intel CPUs, terrible battery performance when paired with an OS that just randomly cranks the CPU up every couple seconds for no user-serving reason.


Let's be fair here, Intel CPUs have terrible battery life under every OS.

No really, I have a Chromebook with a 1215U and the battery life is pretty decent; on the other hand, my 1255U laptop with Windows, not so much...

Doesn't help that there's a ton of different hardware vendors that likely have their own ACPI bugs, effectively making good power management impossible.

Eh, play around with powertop a bit; things aren't so bad in Linux.

Possibly, but how much time do you need to spend getting closer to the out of the box macOS experience? If it’s more than an hour or so you’re paying more to have an involuntary hobby.

That seems sort of beside the point. The existence of not-power-hungry Linux installs indicates that there’s something Microsoft could do about the problem.

For me, I like a convertible laptop which I cannot buy from Apple for any amount of money as far as I know. If I could have gotten a fully unlocked iPad I probably would have gone for that, but they don’t seem to exist. Yet, at least! Here’s hoping.

It actually would be nice if Microsoft would figure out their part of the mess, because that would give people the ability to trade money for battery life.


That can get it to be okay, if you put some effort in and get lucky with the manufacturer, but it's still worse than basically anything else that's modern and supports Linux.

On my system looking at it right now, I only see wireless consuming much power. Everything else is below 100mW. I can’t really blame x86 for the wireless power consumption (maybe Intel more generally though).


