Serves Qualcomm right for front-loading so much Windows support and not engaging with the Linux community sooner. Every single person I've seen express interest in this device wants it for Linux (dev or to try as a daily driver).
I was going to buy this day 1, then I found out Linux support wasn't ready and I went with a new AMD 365 laptop instead. This was its own adventure, with Ubuntu not working out of the box, so I had to go with a rolling release distro.
It's been months and I think Ubuntu might boot on the ThinkPad T14s. It's definitely not supported as a daily driver, or on any other hardware.
Based on my experience, more recent kernel versions have more fixes/better support for recent hardware (or sometimes not so recent). LTS doesn't backport everything from later releases, for obvious reasons.
Expecting hardware companies to get things ready so far ahead of time that they make the cut for an LTS release is probably unrealistic, since LTS releases have a fair bit of gap between them.
> Qualcomm showed Linux working in October 2023 yet here in October 2024 Linux does not work. I don't get it.
Curse of Android: Qualcomm lived for so long with a permanently forked kernel that there is no pressure on code quality; the only thing they know is how to sling minimally viable platform code at their unfortunate customers. No one cares about the maintainability of it, because even before the code has a chance to stabilise, there is already a new model to be released, new hardware to be supported. At this point no one cares about old hardware.
I'm sure Linux still works, more or less, on Qualcomm's reference design laptop. But that one isn't available to customers, and they failed to enable usable Linux support on their platform as a whole.
Technically no, if you get a machine with ARM SystemReady certification, but effectively yes in practice, because zero affordable SBCs meet that criterion. I think the Ampere Altras are the only ones you can easily buy, and those start at $1400 for a bare motherboard-and-CPU combo with no RAM included.
Fuck device trees so much. Is there some fundamental impediment to having the same "plug 'n play" experience that we've had on x86 IBM-clone descendants for thirty years?
That standard exists, it's called SystemReady. Vendors choose not to implement it.
However, device trees are far better than ACPI. The only reason anyone likes ACPI is that there are a lot of generous people dedicating their time to patching over broken ACPI tables so that users don't have to see how utterly terrible it is. Device trees turn that pain into very clear and obvious errors instead of giving you a subtly broken system.
IBM PC descendants use ACPI tables, which are basically the same thing as device trees. It's just that ACPI tables are shipped as part of the UEFI firmware on the motherboard, while most SBCs put the bootloader (and device tree) on the same media as the OS.
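For anyone who hasn't seen one, a device tree is just a declarative description of the hardware the OS can't probe for itself. A minimal, entirely hypothetical UART node looks something like this (names and addresses made up for illustration):

```dts
/ {
    soc {
        /* a memory-mapped PL011-style UART; all values are illustrative */
        serial@9000000 {
            compatible = "arm,pl011";
            reg = <0x09000000 0x1000>;  /* base address, region size */
            interrupts = <0 1 4>;
            status = "okay";
        };
    };
};
```

The kernel matches the `compatible` string against its drivers, which is roughly the role ACPI tables play on x86.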
(generally I think this is an area which just hasn't become mainstream enough for this standardisation to be useful: when most of your users are hacking around with the SBC and are happy to just download the vendor's SD card image, there's very little impetus to make something plug-and-play)
> (generally I think this is an area which just hasn't become mainstream enough for this standardisation to be useful: when most of your users are hacking around with the SBC and are happy to just download the vendor's SD card image, there's very little impetus to make something plug-and-play)
I think it's a more subtle problem of embedded software engineering culture being a decade or more behind the rest of software development, especially at any IC company that isn't NVIDIA/Intel/AMD. IME most SBC users are commercial, not hobbyists hacking with it (I don't know about this specific dev kit though), and every professional EE I know hates dealing with poorly supported non-mainline Linux SBCs. If your company/clients don't commit to a single platform like i.MX for years at a time, managing embedded Linux is a pain in the ass. That's why RPi was so successful with its compute modules: the community handled a lot of that pain instead of depending on the vendor.
There is that general (much bigger) issue with fragmentation as well. But that's mostly separate from 'I also need to stick a devicetree on my boot media'.
(I do know the pain of non-mainline Linux SBCs, and I make some effort to avoid it now: even if it means using old chips, I'll very strongly prefer one with mainline support. Note that Raspberry Pi is also pretty behind on getting their support upstream; even the Pi 4 is still lagging, and the Pi 5 isn't even usable. They just have some vaguely decent support for their fork.)
> embedded software engineering culture being a decade or more behind the rest of software development
I'm not sure, because putting the bootloader in EPROM or NOR has been done on embedded boards for years; it's just rare on mass-manufactured general-use SBCs. I don't know if that's because of BOM, convenience for commercial SBC customers, or just to make it harder to brick the board.
This still feels like a poor idea. It is as if the manufacturers of the boards don't really know where the serial port is mapped or where the USB is, but 3 months later someone will fix a DTB that allows you to find it.
No one would accept an x86 PC that doesn't know itself well enough to hand you a list of its parts, but for some reason having to find the right mix of kernel, u-boot and DTB is considered "fine" for ARM boards, which I don't really understand.
There’s a lot of value in being able to boot to a working state without your operating system needing to be explicitly seeded with intricate knowledge of the hardware. Not to mention the huge differences between proprietary vendor kernel device trees and mainline, so what the vendor supplies may be worthless. No matter how broken ACPI etc. is, at least it’s compatible.
Have a look at the kernel >5.x.y organization. Overlays were a necessary result of every manufacturer spinning their own board variant with standard cores and generic drivers, i.e. to bring up the hardware, a generic monolithic kernel would need to balloon in complexity, and could still mishandle ambiguous module setup etc.
It is a kludge for sure, but a necessary one in the current ecosystem. =3
You're right, in the sense that the standards that enable this magic are not present on Arm SBCs, but I wouldn't consider that "fundamental". Arm is at least theoretically on board with UEFI, for example, and there are implementors making boards that support it.
What I'm really asking is if a sufficiently determined person couldn't have avoided all these headaches by building and/or standardizing equivalent technologies. (And if it's possible why hasn't it happened yet?)
Basically because of the way the tech evolved. x86 was made to be “generic” from very early on, in large part because of the decoupling between CPU vendors and PC vendors.
ARM doesn’t make chips you can buy and plug into devices (they don’t make chips at all). You get the IP for the core and you then typically integrate it into your SoC. There is/was so much custom development that there was no benefit in adding an abstraction for the sake of adding an abstraction when the only ARM SoC a device would ever see is the one that shipped from the factory with it, and the tight coupling between firmware and hardware developers for phones and tablets meant you would just hardcode the values and behaviors and peripherals you expected to find.
There is a level of abstraction at the code level with SVD files that lets you more easily code new firmware against new chips, reusing existing logic, but it’s like the equivalent of some mediocre API compatibility without ABI compatibility - you still need new blobs for the end user.
The era of PCs and laptops using ARM running generic operating systems came ages into the existence and growth of the ARM ecosystem. Compromises such as device trees were found, but it’s nothing like BIOS and ACPI.
(Now we’ll get the typical replies stating “yes, but no one does BIOS and ACPI correctly, so I’d rather wait for actually correct device tree blobs from the manufacturer than have a buggy ACPI implementation” to offer the alternative viewpoint.)
I still get ACPI-related error messages on my boot screen on the previously-Windows machine. I tried to figure it out and used the command line tools on Linux while trying not to break anything - I stopped at some point. ACPI is a journey in itself.
In my long experience of debugging and fixing ACPI errors exposed by Linux, the reason MS Windows avoids exposing these firmware bugs to the operator is that the fixes are incorporated into the Windows platform/chipset device drivers they ship.
> What I'm really asking is if a sufficiently determined person couldn't have avoided all these headaches by building and/or standardizing equivalent technologies.
Yes but that isn't a Linux-specific thing. You will always need a BSP (Board Support Package) for any embedded custom hardware, whether it's an SBC or any SoC for that matter.
On Windows, some subregions of the screen sometimes flicker with garbage data, but very briefly.
On Linux, I successfully built a kernel but I have not gotten the firmware working. My WIP is here and I am very much a novice in this area: https://github.com/conradev/x1e-nixos-config
On Debian-based Linux distros you will find that most, but not all, common amd64 packages can be found/built for aarch64.
The porting issues are not as spotty as a few years back, but one should check that your use-case won't be hit with 3D/codec issues or missing services. Recall qemu does ARM64 just fine... =3
Ordered one, it's coming today. Since I have TotalTech if I completely hate it I've got something like two months to give up on it. I don't game so if I can pull it off I can leave my XPS at home; we'll see ....
Genuinely curious how they could not get a linux version running. If they can release a working android version for their other SoCs, what stops them from releasing a working Linux distro?
I know they have tons of proprietary blobs but I suspect they could still just deliver a linux ISO with all the binaries included.
It just smells like someone at Qualcomm did not see the value and cut the budget to the bone. I had my fears around long term software support of those chips. This news makes me even less confident that I trust them to support these chips long term.
>It just smells like someone at Qualcomm did not see the value and cut the budget to the bone
That's been the theme of the last two years. Qualcomm laid off 1200 last year and a few hundred last month (and I'm sure there's been more if I look deeper), so it's not doing anything special compared to the rest of the industry.
As others point out, pretty much all interest seems to have come from Linux developers, or people who'd like an alternative to x86 as their (Linux) desktop.
It should be pretty clear by now that most Windows developers do not care about ARM. It's not that I think they are dismissive of the platform, but they are waiting for Microsoft, Dell and Lenovo to deliver a finished product. They aren't going to spend time on yet another failed Microsoft hardware experiment.
Windows developers aren't going to switch to ARM in large numbers until Microsoft and partners can deliver a platform with a decade or two of longevity. They are completely accustomed to Microsoft preserving backwards compatibility, and they'll expect the same to be true for an architecture switch.
Really, it's not about how long MS will support the platform but the fact that there is really no reason for Windows developers to support ARM. Current x86 laptop CPUs are way faster than Snapdragon and not that far behind in battery life.
My theory -- Linux ARM is a real alternative to Linux x86, and everyone involved knows what they are doing. Windows on ARM, at this moment, only causes confusion to product lines and branding, for both device manufacturers and consumers. By comparison, Apple made a hard switch for MacBooks (not releasing new x86 and ARM computers at the same time), and I don't think Apple even mentions ARM that much -- they just keep saying "Apple silicon", which sounds stupid to developers but is good marketing.
They effectively said new Macs will be ARM. If you want to develop for macOS, it had better be ARM-compatible.
Microsoft has a chicken-and-egg problem. Why would I port even my games to Windows ARM when the vast majority of users will be on x64 anyway?
Windows ARM isn't much more efficient when running an x64 binary anyway, and you get strange compatibility issues.
This is less of an issue with Linux since we can ultimately recompile our own software.
But as mentioned above, Qualcomm doesn't feel like supporting Linux, at least not well enough for it to be usable.
Honestly, Microsoft (or another OEM) needs to ship a dual-boot ARM laptop. On the 12th of never, when that happens, these laptops will sell well. As is, they're effectively on clearance: a 3-month-old laptop already 30% to 40% off.
I was on the Qualcomm side of the Windows 8 experience. It was terrible, not a good partnership. I have long, boring stories about the previous (to this) WoA attempt.
This comment was submitted on a PineTab 2 that's kind of configured as a desktop layout, (with external keyboard and mouse), so that counts, right?
If you want a review: it's fine if your goal is to just experiment with an ARM machine or a Linux tablet, but GUI apps are pretty slow. If you switch to a console VT it's fine. I haven't tried any benchmarks but I don't imagine they'll get a very good score at all.
I was excited during the summer, but now Intel's Lunar Lake is ruining the show. It seems Intel is able to match (or offer better) battery life and performance. If this is the case, I don't see much point in picking ARM for Windows.
And unlike with Apple, on Windows x86 won't go away. So there are two architectures that need to be supported, and I'm not so sure everybody is interested in putting in the effort for ARM.
I use windows on ARM on a daily basis… running from a VMware image on macOS on Apple hardware. I would love to have a native booted windows ARM device, but…
I've actually enjoyed a couple of good sponge cakes. On the other hand Windows 11 is determined to upset my workday and get in the way of doing my job. It's so bad that my employer, a very large US SaaS provider and diehard Microsoft shop, is starting to put Ubuntu on our laptops because of the constant headaches.
You can't just go about copying everything Apple does to create the Apple magic. I think the last non-Apple innovation in the space was the netbook from Asus - and that was a completely independent market. Maybe the microPCs/NUCs as well. Oh, Framework's modular laptops for sure.
But each of the copy-Apple things isn't that good. When Apple copies, the product is approached from the top. When these guys copy, the product is approached from the bottom. I suppose that's because strong brand value allows for premium pricing in the former case.
It’s usually not even higher pricing. The difference is that Apple is one company, whereas if I buy a Dell there are at least 4 companies involved with conflicting incentives and extra overhead. If MacBooks crash when waking from sleep, nobody at Apple can try to shirk responsibility to Microsoft, Intel, or the BIOS vendor.
> I think the last non-Apple innovation in the space was the netbook from Asus [...] Maybe the microPCs/NUCs as well. Oh, Framework's modular laptops for sure.
There's also the Steam Deck, which introduced a completely new PC form factor.
It seems like the development of this thing was a shitshow, the version they ended up shipping (very late) has an unpopulated HDMI port and comes with a USB-C to HDMI dongle instead, and has no FCC certifications. The main HDMI encoder was so broken that they couldn't salvage it, and their last resort was to ship a half baked prototype.
I received one yesterday with what appears to be older firmware (i.e., no option for use of an external display with WinPE) and a newer image.
As is the case with nearly any Windows device I receive, I booted into Windows Audit Mode and exported all of the installed drivers to a thumbdrive.
That is when the fun began...the device failed to OOBE correctly, the restore image provided by Qualcomm doesn't work, and there's no display. Various workarounds with custom Windows arm64 images are fruitless, and there's no UEFI BIOS updater that would allow one to restore it to a known good state.
ThunderSoft (Qualcomm's partner in Thundercomm) has utterly failed to deliver a reliable working solution with the most basic Windows functionality, which makes me wonder if there is another factor at hand here. There are smaller, less-well-funded companies that can get these types of products built and shipped.
AND - the fact that they're issuing refunds for these devices (at least Arrow Electronics here in the States) REALLY makes me wonder.
Sad. It was reported that MS was contractually committed to Qualcomm for 5-7 years as of '16, so I would imagine this is the end of the road.
Windows on ARM works on the RPi4. I would have to think other ARM license holders could match whatever x86 emu optimizations Qualcomm had come up with without too much trouble, and MS may even get a license to continue to use them?
I'm looking forward to learning what MS's future plans are.
This isn't a cancellation of Qualcomm's Snapdragon chips for laptops, just a cancellation of their FUBARed dev kit that was already delayed to the point that the only reason for them to keep trying to ship it was to save face.
This is probably a money-losing business for the unit. Qualcomm needs to pay manufacturers a lot of money to get it going. I am not sure Qualcomm cares enough about that.
Sidenote: While Qualcomm is focusing on Windows, recent Exynos chips use RDNA2 architecture graphics, which ought to work in Linux AND Windows with a few minor patches. Samsung has also been extremely faithful about keeping bootloaders unlocked on Exynos devices. My bets are on Exynos laptops.
Wild how hyperscalers have such great success with ARM. But it's Ampere & literally no one else (Apple within their fief) who's made even a somewhat viable pass at medium/large-size computing.
ARM, the architecture that forever remains inaccessible. It's been well over a decade since AMD announced Project Seattle, an ARM chip, which took many years to eventually become the A1100 Opteron, which was still basically unpurchasable & slow. ARM is just endless quagmires & failures. I don't know what it is with this architecture being so ubiquitous yet so failing to come to market again and again and again.
I bet this has less to do with ARM and more to do with it not being x86.
Given as much time, what will we have to say about RISCV?
And what of the legacy of MIPS?
Not-x86 chips keep grabbing headlines because it's exciting to think of what competition could look like.
But then none of them do the hard work to make an entire platform/ecosystem that would be required to make a compelling product that has an upgrade path, as x86 was destined to have from the beginning. Well, RPi hasn't done badly, though.
Unless these other technologies standardize, x86 can't expect much competition in existing markets. I guess Android was an exception in that phones were already a market of throwaway garbage devices so there was no expectation that good support would occur.
No, it's just incompetence and overpromising on a product, and them deciding to pull it since it's making them look bad and it's way too late to matter at this point anyways.
Why does it matter what instruction set the chips support?
The main difference between x86 and ARM is that x86 is slightly harder to decode because instructions can be variable-width. But I have never heard of instruction decoding complexity being a particularly important bottleneck.
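A toy sketch of why that matters (the opcode lengths here are a tiny made-up subset, not a real x86 decoder): with fixed-width instructions you know every instruction boundary up front, so decoders can work in parallel; with variable-width ones each boundary depends on having decoded the previous instruction.

```python
# Fixed-width (ARM-like): every instruction is 4 bytes, so the start of
# instruction N is simply N * 4 -- boundaries are known without decoding.
def fixed_width_starts(code: bytes, width: int = 4) -> list[int]:
    return list(range(0, len(code), width))

# Variable-width (x86-like): each instruction's length depends on its
# leading byte, so boundaries must be found sequentially.
TOY_LENGTHS = {0x90: 1, 0xB8: 5, 0xE9: 5}  # made-up subset of opcode lengths

def variable_width_starts(code: bytes) -> list[int]:
    starts, i = [], 0
    while i < len(code):
        starts.append(i)
        i += TOY_LENGTHS.get(code[i], 1)
    return starts

prog = bytes([0x90, 0xB8, 1, 0, 0, 0, 0x90])
print(fixed_width_starts(prog))     # [0, 4] -- independent of content
print(variable_width_starts(prog))  # [0, 1, 6] -- depends on each opcode
```

In practice x86 decoders speculate on boundaries and discard wrong guesses, which costs some transistors and power, but as you say, it isn't usually the bottleneck.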
You've never had to rub a bag of frozen peas all over the bottom of your x86 Macbook Pro because it was overheating and you had an imminent zoom meeting you could not miss.
Those chips were designed and fabbed by Intel, not by Apple/TSMC respectively. That’s the relevant difference, not the instruction set.
The instruction set has only moderate impact on the chip’s frontend, and no impact on the backend. Most design decisions are unconstrained by the choice of instruction set.
The last few generations of x86 MacBooks were exceptionally bad implementations in this regard, and some of the better thermal behavior of the Apple Silicon MacBooks is down to things they could just as easily have done with an Intel CPU, if they had felt like it. For example, the Intel MacBooks were extremely eager to ramp power consumption to the max, while the ARM MacBook slowly increases the clock rate, one step at a time, such that it only hits max power after a long period of sustained demand.
I think that might have been Intel's doing. They were on 14nm for 5 years or so, with each new 14nm release pushing the power budget and squeezing slightly more performance out of any corner they could find. I assume the CPU ramping was just another part of this approach: if the CPU ramps up faster it will seem fast to users, and it'll look faster in short benchmarks like Geekbench.
AMD doesn’t have that problem, though, so is it a problem with x86 or Intel? I would bet that Apple’s CPU team could get great results with a free hand on x86, too – probably not quite as good but close.
The ISA is, according to people far more knowledgeable than myself [0], not a significant factor in performance and/or power efficiency on modern CPUs. That's also why both AMD and Intel haven't done a bad job keeping up with Apple on the efficiency front, AMD since the 4x00U series and Intel now with Lunar Lake. Nowadays, the OS not having downright broken sleep [1] is what keeps MacBooks slightly ahead, though then again, macOS makes up for that with other bugs.
Because of that, if one is willing and able to go for a well-supported Linux distro, they can in my experience get good battery life regardless of ISA. Essentially, whether on a MacBook with Asahi or a modern x86 notebook with either Linux or an aggressively managed/fixed Windows install, you'll do well. Personally I lack the skill for the latter, though; my Surface remains scalding hot whilst sleeping even after a fresh install, but c'est la vie...
Better nowadays. Previously Macs were widely laughed at as constantly having a generation or two (or more) older CPUs and GPUs, particularly for the price.
That's not what I remember. Towards the end of the Intel era it really slowed down, but before that I felt that Apple was routinely releasing new Macs with the latest Intel processors.
Would be happy to be proven wrong. Got some good examples?
High performance doing what? Everyone in high-performance computing is using Linux, my dude. Unless you're a studio who needs specific Mac workflows, or maybe you're doing LLM work?
Great perf-to-watt, but absolutely locked down so you can't do anything meaningful in the embedded space. I'm not over here mounting a max specced Mac Mini just to get that perf-to-watt.
Anyway, the point was about the obsolescence or otherwise of the ISA. It was about the claim that various versions of Apple silicon are no good for performance, only for perf-per-watt or some related variance.
I agree with your second point, but misstating the age of a 6-year-old CPU as 4 years old will tend to throw off the equations, considering how fast this field moves. I would also note that the Zen/Zen+ CPUs were truly awful. Ryzen and EPYC did not have a good implementation until Zen 2.
You are right, it's the terrible docker experience that really seals the deal. Or the fact that to run Linux you have to rely on a small group that is trying to will it into existence without any support from Apple.
Isn't the fact that people use mac despite all that testament to the fact that there's a good reason they're doing so? Like it or not, Apple hardware is top-notch, it integrates well with the software, and it's hard to get both anywhere else.
Not necessarily. People waste money on things for all sorts of _bad_ reasons.
And Apple in particular has curated a group of people who buy without asking questions, and many among them knowing nothing of the specs or performance of the system. Not to discredit those who have good reasons, but I don't think that has much to do with the success of Apple.
As always, it's a testament to the power of Apple marketing rather than the product itself. Apple has never been anything special compared to its competitors.
Comments like this are a way to instantly kill your credibility. Let's go back to 2012 and compare laptop trackpads and then see if we can pretend that your comment isn't hyperbole.
As someone who really cares about a good trackpad and owns a Magic Trackpad 2, I never understood people who put Apple ones on a pedestal. It's macOS that had good trackpad handling sooner than other OSes/DEs; their hardware was always comparable to other laptops in the same class. The only thing they actually differentiated on was haptic feedback, and it's mostly just a gimmick unless you happen to care about making tactile clicks as quiet as possible.
Sure, if you bought some shitty cheap laptop you got a shitty cheap trackpad on it, but that's hardly surprising.
Not that I agree with them, but you gave a really bad example. Typing this from an M1 MBP, and I would take any laptop trackpad I've ever had over it, even pre-2012, because this thing is inferior at basic tasks like dragging or shifting from right to left click. Partially because it lacks physical mouse buttons, and partially because its implementation of such a scheme is just worse than on every other laptop I've had. The laptop begs for a standalone mouse, but Apple in its infinite practicality did not include a USB-A port for the purpose, further complicating life.
They have a lot of nice window dressing, like excellent touchpad gestures. Until the Snapdragon chips launched, they slaughtered everyone else for battery life and performance-per-watt on laptops.
But it's certainly true that a lot of their prestige just comes down to brilliant marketing. What else could get businesses to buy laptops with non-replaceable batteries and storage and non-upgradeable RAM that need to be trashed every 3-6 years?
> What else could get businesses to buy laptops with non-replaceable batteries and storage and non-upgradeable RAM that need to be trashed every 3-6 years?
Most PC laptop vendors? The part you’re missing is that businesses usually plan to refresh devices every few years anyway, and since the service life on a Mac is 5-10 years this just isn’t a limitation most people hit whereas weight, lower performance, and worse battery life are something you notice every day.
I'd really like to know how many businesses are replacing storage and RAM. Most of the ones I've worked for just buy the business Dell Latitude and never touch the storage or RAM. Everything for the business is in O365 or SharePoint or Dropbox.
Docker on macOS doesn't have the same underlying system support as on Linux, which is where Docker originated.
The Docker experience on macOS is also marred by the fact that Docker Inc. appears to have limited interest in their Docker Desktop offering, whereas a third-party alternative like Orbstack provides a much better experience.
They have much more support than that. Apple may not be willing to provide direct assistance (and I wish they would) but they have designed the system such that it's not locked down nearly as much as an iOS device. To the point that on ARM64, one may add a second OS without losing system integrity on one's primary OS.
Most people don't care about that (if you look at it the other way around running macOS on PCs these days isn't that easy either and it will only get worse). Just don't get a Mac if you want to use Linux?
Docker is pretty painful, certainly. Windows seems like a much better option if you need to develop for Linux.
Right, I know, I guess my point was that MS doesn't need to do anything directly since Qualcomm and the various device manufacturers are contributing instead.
They were asking what MSFT was doing in response to the previous commenter talking about how Apple is doing nothing to help with Linux support. Apple is both the OS maker and the device manufacturer in that case. In the case of the Snapdragon chips, the manufacturers are upstreaming support into the mainline kernel.
There is some truth in that -- if x86 can achieve the same battery life and heat management as Apple silicon, barely anyone would care about ARM for Windows/Linux. AMD has made leaps in CPU efficiency in recent years, but that was still not enough. Your uncle probably doesn't understand or care about x86 vs ARM. They want a performant, fanless laptop with long battery life, and it seems that such a (mainstream) laptop still does not exist in the Windows world.
Not /exclusively/ ARM, but having their own silicon is helping more than hurting. Personally, and along the lines I think you're getting at, I think Microsoft is in a weird spot of trying to hold on to legacy compatibility for longer than they should and taking some UI gambles that aren't paying off ("again", if you count Windows Phone). Maybe their own chips would help, but that alone isn't the dealbreaker.
Although ARM brings better performance and battery life to Mac, without the support of the software ecosystem and Apple's deep integration of hardware and software, ARM architecture alone is not enough to make Mac stand out.
Recently, Docker has been shown to have many bugs on Arm64.
I mean, as the other member of the Wintel ecosystem, they have to believe Intel is wholly to blame for it falling apart, right? Evidence be damned.
Dang Intel CPUs, terrible battery performance when paired with an OS that just randomly cranks the CPU up every couple seconds for no user-serving reason.
Doesn't help that there's a ton of different hardware vendors that likely have their own ACPI bugs, effectively making good power management impossible.
I think Chromebooks throttle the CPU frequency to a low level to make it run at a sweet spot where there is enough performance but not much heat.
Saying that as someone who installed Linux on a Chromebook with a 1215U. (Don't do that.)
I do that with other AMD machines as well. I always throttle CPU to base frequency unless I really need max performance, which has not happened yet.
I think that's just what Intel or AMD should have done in the first place -- lower both the base frequency and max turbo frequency to the point that the fan does not even turn on in most workflows.
I suspect that they want to look good in benchmarks, so they set max turbo frequency very high. But that doesn't matter in 90% of the workflows.
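If anyone wants to replicate the "base frequency only" setup declaratively, here's a sketch using an auto-cpufreq config (exact keys may vary between versions; disabling turbo is what effectively pins you at base clocks):

```ini
# /etc/auto-cpufreq.conf -- keep the CPU at or near base clocks
[charger]
governor = powersave
turbo = never

[battery]
governor = powersave
turbo = never
```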
Now that you are interested, I'm happy to share more.
For context, I got this exact machine https://www.ebay.com/itm/134571642124 at $174, and did it because I wanted a cheap, 15.6" laptop with an accurate & high resolution screen that is more useful than an iPad Pro. A refurbished machine of this model almost perfectly fits those requirements. I installed Linux (Mint) because lots of things like window management and VPN are cumbersome on ChromeOS, and I want very little to do with Google. I didn't install Windows because memory is limited (8GB) and the experience wouldn't be good, but if it had 16GB I would have done that instead.
Issues I have run into:
* Keyboard layout
* Apparently the Caps Lock and Esc keys are going to be a problem. And if you have a remote session where the remote machine has Caps Lock and Esc swapped, it's a mess. As a (VSCode) vim user, I ended up using Ctrl+[ for Esc. (If I had spent more time on it I probably would have figured it out.)
* Also apparently, the F11 and F12 keys don't exist, which can be painful for debugging.
* You need to properly map F1-F10 as well.
* Audio issues -- there is a chromebook-linux-audio project that enables audio on chromebooks, and I know other people have successfully used it for this model but on a different distro. The project clearly says it does not support Ubuntu, and it indeed didn't work for me. Not that I need or care about audio on this machine dedicated to programming, but missing audio can be annoying at times when you need it.
* Power management. I don't know if it's bad for this model, for Mint, or for Linux in general, but it's annoying. I think I have properly throttled the CPU via auto-cpufreq, but sometimes the fan runs crazy when nothing is happening. (Granted, that happens on well-supported Windows laptops as well, and I don't know why.) Then something is weird with suspend ("sleep") -- even after setting things up so that closing the lid = suspend, the laptop drains its battery within a few hours of doing so. I have to manually suspend every time I am done to preserve battery.
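On the suspend battery drain: one common culprit (an assumption here, not something verified on this model) is the kernel defaulting to s2idle instead of deep/S3 sleep. A sketch for checking which mode is active:

```shell
#!/bin/sh
# Sketch: report which suspend mode the kernel will use.
# /sys/power/mem_sleep contains something like "[s2idle] deep";
# the bracketed entry is the active mode.

# Pure helper: extract the bracketed mode from a mem_sleep string.
active_sleep_mode() {
  printf '%s\n' "$1" | sed -n 's/.*\[\([a-z0-9]*\)\].*/\1/p'
}

if [ -r /sys/power/mem_sleep ]; then
  echo "active suspend mode: $(active_sleep_mode "$(cat /sys/power/mem_sleep)")"
  # If "deep" is listed but not active:  echo deep | sudo tee /sys/power/mem_sleep
fi
```

If the firmware only offers s2idle (reportedly common on Chromebooks), this won't help, and manually suspending or powering off may be the best you can do.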
Basically, I am seeing a number of issues with using the machine, some specific to Chromebooks, and some that may or may not be. I know it is always a bumpy road to use Linux on laptops, but this is rougher. It's probably not worth the effort unless you really want to do it, like in my case. Or to put it this way -- if I didn't intend to do lots of development work on the machine, and I wanted to be in the Google ecosystem, it would have been a perfect machine.
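On the Caps Lock/Esc point: if you want a remap that applies everywhere, including consoles and remote sessions, one route is a keyd config (this assumes you install the keyd daemon; the syntax below is keyd's, nothing specific to this model):

```ini
# /etc/keyd/default.conf -- remap at the evdev level, before X/Wayland sees keys
[ids]
*

[main]
# Make Caps Lock act as Escape; add "esc = capslock" for a full swap.
capslock = esc
```

On an X11 desktop, `setxkbmap -option caps:swapescape` is a lighter per-session alternative.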
And it didn't occur to you to use just the Linux parts of ChromeOS? As in the Linux dev VM with Wayland passthrough. Firefox runs fine (including touchpad gestures), as does pretty much everything else.
It is okay, until it's not. Funnily enough, WSL's integration with Windows seems better than ChromeOS's, mostly because of the latter's security model, I think.
Window management is very weird: try running VSCode in Crostini. The title bar is empty. I think there is a workaround that ... shows two title bars. Great. I didn't want to waste time on that. And I don't remember exactly what happened, but I could never arrange the windows the way I wanted. I am not talking about something as good as Microsoft Windows with all its regions and snapping, just something at the level of native macOS or a desktop Linux distro. You can't have that, especially if you have an Android app open as well.
If you want to connect to a VPN, you may need to go to the Play Store and use an Android app for it, which probably doesn't support the keyboard very well. I am talking about something as basic as using tabs.
I vaguely remember Firefox on Crostini was not a great experience, but I don't remember why. Firefox for Android is a big no-no on ChromeOS: horrible tab management and no keyboard shortcut support.
Your regular Linux commands may or may not work.
That's just some of the issues I had.
My experience with WSL isn't great, having spent many hours on it, mostly due to poor IO performance and weird bugs. But I would love to hear more about others' experience.
Possibly, but how much time do you need to spend getting closer to the out of the box macOS experience? If it’s more than an hour or so you’re paying more to have an involuntary hobby.
That seems sort of beside the point. The existence of not-power-hungry Linux installs indicates that there’s something Microsoft could do about the problem.
For me, I like a convertible laptop which I cannot buy from Apple for any amount of money as far as I know. If I could have gotten a fully unlocked iPad I probably would have gone for that, but they don’t seem to exist. Yet, at least! Here’s hoping.
It actually would be nice if Microsoft would figure out their part of the mess, because that would give people the ability to trade money for battery life.
That can get it to be okay, if you put some effort in and get lucky with the manufacturer, but it's still worse than basically anything else that's modern and supports Linux.
On my system looking at it right now, I only see wireless consuming much power. Everything else is below 100mW. I can’t really blame x86 for the wireless power consumption (maybe Intel more generally though).