I run Ubuntu on a Dell XPS 13 without any issues as far as I can tell. I've done almost no tweaking; I just do periodic software and firmware updates. I close the lid, throw it in my bag, open it hours later or the next day, and I'm right back where I was. The experience is as close to Mac-like as I've ever had outside of Apple.
But I still do wish someone would make a Linux laptop that's as tightly integrated with the hardware as macOS is on a MacBook.
I have a ThinkPad that is supposedly built to run Linux as well, and is even certified for RHEL and Ubuntu. It doesn't work so well, though. It works, but there are rough edges around sleep, external displays, and power management.
I feel it has nothing to do with the manufacturer, though; Linux support for laptops just isn't good enough.
> It works, but there are rough edges around sleeping, external displays, power management.
Windows has these rough edges, too, though. It's actually pretty shocking that here in 2024, PC manufacturers and OS vendors are still struggling with basics like sleep/wakeup. Last job I had with Windows laptops, everyone would walk around the office from meeting to meeting with their laptops propped open because nobody could be sure that their OS would actually wake up when they opened the lid. And when you closed it and went home for the day, would standby actually work or would it be on fire and out of battery the next morning? Somehow, only Apple has seemed to be able to solve this Herculean problem.
> Somehow, only Apple has seemed to be able to solve this Herculean problem.
Bit of a stab in the dark here, but I would assume ARM has at least something to do with this. Tablets, phones, etc. handle standby a lot better than x86 systems seem to. My pre-M1 MacBook Pro does not handle standby well, but my partner's M2 MacBook Air lasts forever and handles sleep etc. well. The lower power consumption in "standby mode" on ARM seems like at least part of the picture for why Apple gets this so much better. I bet it's part of why Microsoft keeps trying to release the ARM variant of Windows and has been working on it for 10+ years.
This used to work, but Windows/Intel have this new thing called Modern Standby that just doesn't do what anyone wants. It's on purpose. It's very frustrating.
I think so. My company's new refresh policy is "buy your own recycled corp device from us and we'll install all of our tracking software on it so you can use it as a corp device" (the _worst_ kind of BYOD imaginable). So, I'm probably using the initially "free" Intel Macbook until it dies, I do, or my job does.
I can't help but wonder if Dell tweaked the firmware. I know that I, and everyone I've seen discuss it, haven't been able to get a vanilla XPS (non-Developer edition, sold with Windows) with a typical off-the-shelf distro, including Ubuntu, to work 100%.
I've had a Dell XPS 13 9343 (2017 model, non-Developer edition) running Fedora for years without problems. I suppose you might consider it cheating because I replaced the original Broadcom WiFi card with an Intel WiFi card, as that driver was a bit flaky in the early days (whereas the Intel driver has kernel support).
Other than the pitiful 4 hour battery life, the laptop still runs fine, and mostly does what I need it to do for a permanently-docked daily driver.
Hey there! I no longer use my 9343, but I remember I was not able to run Fedora without breaking its sound support (Ubuntu set some kernel option on startup that put the sound card into a legacy mode, instead of the I2S mode that Windows used). And I never managed to set up palm rejection; it was a constant pain whenever I had to use the (otherwise excellent) trackpad.
(The external "carbon-like" skin texture just disintegrated on it after a few years, and the hinges got loose, but otherwise it is tip-top, still functional!)
That seems likely. I know that firmware is one of the big differences between System76 laptops and the version that Clevo subsequently offers with Windows. I think the chips can vary sometimes too.
Just from an ACPI perspective, I'd expect the Linux variant to (at a minimum) be built with the Intel compiler and the Windows one with Microsoft's. It is likely that there are far more differences, though.
The biggest problem with System76 laptops: their screens.
$1400 for a laptop with 1920x1080 at 60hz in 2024 is a joke. $200 more gets you a 3024x1964 @ 120hz, with an M3 processor and the ability to get warranty service walk-in anywhere around the world.
I agree that a better screen would be great, and walk-in service anywhere in the world would be fantastic.
But I want a Linux laptop, not Windows or OSX. I also want a computer that obeys me, not some megacorp (not unrelated to the previous point.) I also want to not fight it all the time.
I bought a Dell 3410 once which shipped with Ubuntu. I closely inspected that Ubuntu and compared it with a vanilla Ubuntu install. All I found were branding packages (desktop pictures, etc.) and one package that blacklisted some module. No secret drivers, no secret kernels.
Can't comment about XPS, but I feel that it'll be the same.
I ordered one with Ubuntu pre-installed and it worked well; however, there was an annoying issue where the mouse would freeze for a few seconds every couple of minutes. I eventually swapped it for Garuda Linux and got a much faster UX, but suspend/sleep doesn't fully work. Doesn't bother me.
I'm a huge XPS15 advocate at work and really love these machines as a Windows developer. But the standby just doesn't work. If I close the lid and throw it in my bag, then the battery will be empty and the bag will be hot as hell.
This is a huge failure and makes me shut down my XPS15 every evening. Which is just nonsense.
I'm a Mac user at home and just never shut these laptops down ever.
Yes, standby is working fine. I don't have the machine in front of me now, but I don't remember fiddling with any of the power settings either; it was all working after the install. I do run software updates regularly, so that might explain why it works so smoothly too.
Meanwhile, my other machine from work is a Precision workstation running Windows 10 and it gives me all kinds of power issues, more invasive updates, random restarts, random high fan RPMs, etc. Dell has already serviced the machine, twice. What a mess.
FWIW I had similar problems with my X1, sleep on lid close was working about 50% of the time (which is probably worse than not working at all, because you genuinely don't know what is going to happen...).
As a quick fix I assigned Ctrl-Meta-L to Sleep (Meta-L is screen lock; I'm using KDE, btw). It didn't take long for me to automatically press this combo before closing the lid. I got so used to it that I had to stop and think when I got a new laptop later and installed Linux fresh on it. And of course I just set it up like before, even though this one works :)
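For anyone wanting the same workaround, a minimal sketch (assuming a systemd-based distro): bind a custom shortcut to a command that locks the session and then suspends:

```shell
# Command to bind to a custom shortcut in KDE
# (System Settings -> Shortcuts -> Custom Shortcuts):
# lock the session first, then ask systemd to suspend.
loginctl lock-session && systemctl suspend
```

Both `loginctl` and `systemctl suspend` ship with systemd, so this should work across desktop environments, not just KDE.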
In the last few years, Microsoft started pushing this "Modern Standby"[1] thing, which lets the CPU run while suspended or something. IIRC it is so a PC can run background services, Wi-Fi and whatnot, like tablets and cell phones.
It is causing so many issues, because the common use case for a laptop is to close the lid and then stuff it into a padded bag. If anything wakes the laptop for whatever reason, all that heat is trapped in there, cooking the device. Some system BIOSes are even removing the option to disable Modern Standby (vs. traditional standby, where only the memory stayed energized).
The rumor is that this is a bug that happens when you close your laptop screen to put it to sleep BEFORE you pull out the power plug, so the laptop basically never realizes it stopped being plugged into the wall, and does work it shouldn't, like a windows update. I always remove the power before putting a laptop to sleep and do not have this problem anymore.
It happens on macbooks too weirdly.
A sleeping laptop, even "modern sleep" should not be doing enough work to create a meaningful amount of heat.
This should work much better than it does. Microsoft is right - Windows machines should be able to run background services as well as a tablet or phone.
Their Modern Standby requirements should have included a clause saying that the machine's efficiency cores (which I assume are what would be running in standby) should not be able to raise the temperature enough to require a fan.
No, Microsoft did not ask the users if they wanted this or not (or made this behaviour configurable). Just as they did not ask users if they wanted to see ads in their Start menu...
It works well on mobile devices because, from the get-go, it was established that the operating system can aggressively suspend or halt processes. Laptops and PCs, on the other hand, have 40+ years of legacy that assume the OS won't kill a process unless the user insists or a resource disaster is imminent. They can deal with a pause, provided the process's external view of the state of the CPU and memory is not drastically changed.
Windows finally had suspend working reliably, where memory was frozen and nothing else on the PC could change the state of memory or the CPU. Modern Standby is Intel/Microsoft's effort to hoist that mobile style of operating-system management onto PCs, in an environment that was not expecting it.
They should have slowly rolled it out, with thermal protections from the get go to prevent disaster, and after a generation or two when the hardware + software are working correctly, made it on by default. It seems like they rushed it for Win 10, and then made it the default on Win 11 before it was really stable.
> Some system BIOS are removing the option to even disable modern standby mode
The CPU manufacturers have stopped providing support for developing firmware with an S3 (“traditional standby”) function for recent CPU generations, except for a couple of laptop manufacturers receiving special treatment.
I really hope this doesn't become a contributing factor in a future plane crash from an onboard fire in the baggage compartment. I could see someone throwing their laptop in a suitcase with a bunch of clothes and having that heat build up into thermal runaway. It's asinine to me that there isn't a hardware thermal sensor that just cuts power if the heat gets too high. Beyond the tragedy of an accident itself, what will probably happen is that everyone gets blocked from bringing laptops at all.
Standby on windows just appears to be a cue for the OS that the user isn’t actively using the machine so it should use the time to install updates and restart itself 5 times.
The machine isn't waking from sleep, it's that the standby processing is intensive enough and the hardware is so poorly designed that the computer heats up which requires the fan to run.
> When Modern Standby-capable systems enter sleep, the system is still in S0 (a fully running state, ready and able to do work). Desktop apps are stopped by the Desktop Activity Moderator (DAM); however, background tasks from Microsoft Store apps are permitted to do work. In connected standby, the network is still active, and users can receive events such as VoIP calls in a Windows store app. While VoIP calls coming in over Wi-Fi wouldn’t be available in disconnected standby, real-time events such as reminders or a Bluetooth device syncing can still happen.
Macbooks also wake from sleep while closed and yet it doesn't destroy the computer. How is the computer supposed to do background checks / send its location etc if it can't wake up for a short while?
Connected Standby has worked on my devices for a decade. When I plug my laptop into my dock in the office and it wakes up, it comes on pretty much instantly. It's already on the Wi-Fi, which it joined when I walked in the building. My email has already synced. My chat has already synced before I even log in.
It has been doing this just fine since Windows 8 came out across multiple Thinkpads, Surface tablets, and other devices.
Even pre-Windows 8, sleep has generally worked perfectly fine for me. I'd have my computer on sleep between classes, open it up and pretty much instantly be right back in OneNote ready to take notes. Cheap Compaq laptops, expensive HP laptops, IBM Thinkpads, Lenovo Thinkpads, Surface tablets, no-name cheap Walmart laptops, all kinds of devices. In the last almost 20 years I've had less than a dozen instances of a hot bag running XP, Vista, 7, 8, 8.1, 10, now 11.
I had issues with sleep on some desktops in the past, where it wouldn't want to stay in sleep. Every time it was some dumb app waking up the machine. Never due to some specific Windows issue, always something I installed.
I don't want my computer to do _anything_ if I set it to sleep, other than keep the memory contents alive for some time. Although these days even Ubuntu with KDE starts up so fast that the only reason for sleep (instead of shutdown) is to keep some programs running, with some mid-work state.
“How is the computer supposed to do background checks / send its location etc if it can't wake up for a short while?”
Why would I want it to do that? OTOH, coming back from power-off on modern hardware is fast enough that I just re-enabled hibernation and use that instead of sleep, now that MS has made sleep less sleep-ish.
> But I still do wish someone would make a Linux laptop that's as tightly integrated with the hardware as macOS is on a MacBook.
I feel like the forces around device driver development conspire to make sure this rarely happens, that is, we can’t have “commodity” hardware that has “cutting edge” device drivers because the time and expense of developing the driver isn’t justified with commodity pricing.
Here's my massive pet peeve around PCs that I don't even believe that the Dell XPS 13 has resolved:
All those computers charge over USB-C with the full force of the port. This is fine. But the second the battery is completely drained, the port cannot revive that computer. You must use the laptop's crappy barrel plug.
Only Apple allows you to use only USB-C as a charger.
huh? The current XPS 13 and many other laptops do not have a barrel plug. My Dell laptop without a barrel plug didn't become bricked when it ran out of battery.
Are they built better now? I've bought a lot of stuff from them in the past and while their support is great and their pre-built desktops are fantastic, their laptops were just rebranded Clevo trash.
I really wish that System76 would offer a Framework based option... would definitely pay a bit of a premium for Pop OS support on Framework hardware. Those two companies are just screaming for a teamup IMO.
No, still Clevo. Although the CEO said they are currently designing a custom laptop chassis in house. Probably still a minimum 2 years away, but at least they are working on it.
This is such an elitist attitude, and I'd like to see less of it.
The vast majority of users aren't going to be bothered by those screen specs. For many coming from low-end hardware, it's actually an upgrade. Most work won't be significantly impacted by increasing the refresh rate, and while better resolution can be helpful if you keep multiple windows on the screen, most programs still feel tailored to 1920x1080 screens. Office workers writing emails, reports, purchase orders, and basic spreadsheets aren't likely to notice a better refresh rate and they're more likely to get a positive impact from turning a monitor vertical to fit more of a page on their screens.
Don't get me wrong, I use two 2560x1440p monitors at 144Hz at home, but I honestly get just as much work done on my dual 1080p 60Hz monitors at my desk at work. Saying that a laptop with 1080p@60hz is a waste is elitist and unnecessary in my opinion.
My first laptop in 7th grade was 1366x768(@60?) and it's what got me into the whole industry. I still use 1920x1080@60 as my daily driver work laptop and it's fine. If I need bigger screens / higher refresh rate I have my desktop.
I have a cheap Ideapad Pro with an AMD proc that gives me the same experience using Pop_OS.
MacOS doesn't run on anything(1) but a Mac and people seem to be okay with that, but good grief, you tell them to pick a machine that is compatible with Linux and they lose their shit.
Disables Swap and Zram, gets OOM killed, surprised pikachu face
Joking aside, is there an actual legitimate reason to do this on a workstation? I understand why you would want to disable swap on something like a Kubernetes cluster node, but in my head, having at least zram enabled is a good thing on a workstation so you *don't* get OOM killed... I call on thee, Linux wizards of HN, to help me understand the reasoning behind this.
Personally, for a long time I disabled swap and made sure that I had an OOM killer running.
This was always in a setup where I'd have ample RAM for my everyday tasks, and was doing numerics. Running OOM would invariably mean two things:
1. I had a bug in my scripts, which typically meant I'd accidentally materialized a huge sparse matrix or some such, and thus
2. The system wouldn't go "just a little" OOM but rather consume memory an order of magnitude over the actual system's capacity. And it would not recover.
In that scenario, the system would typically start swap-thrashing so hard that I'd just cold reboot. An OOM daemon fixed that and let me iron out my bugs.
On my SBCs and VPSs I use a cache-heavy zram setup with LZ4 and `vm.page-cluster=0` being the most important changes to the default, and cache pressure and swappiness both to 200 off the top of my head, and things like only doing foreground IO when the background write buffer is full. This type of swapping is fast, and is easy on the CPU, and gives a lot of extra disk cache on this type of low performing storage. I disable disk schedulers because they haven't been necessary and would just add overhead.
This means there's a lot of available RAM capacity, that there's a hefty read cache to avoid the SD card, that when there are disk writes on writable storage it can still read from it, and with the lack of clustering and the speed of decompression there's no swapping lag whenever a page needs to be swapped back. This swap early, swap often is the complete opposite of the OOM-prevention swapping you used to use on disks, which was slow and interrupted IO whereas LZ4 in RAM is fast and doesn't interrupt IO.
I have been using this setup since 2022 and have not had any issues but I don't compile anything on those setups, though I see no reason why it would not be safer than compiling without zram at all.
- `dirty_background_ratio` — starts background writing when dirty data is at least 1% of available mem
- `dirty_ratio` — starts force writing when all available (not total) RAM is full
- `page-cluster` — swap in only what's needed
- `swappiness` — lower means swapping is expensive, higher signals swapping is cheap
- `vfs_cache_pressure` — lower keeps more dentries and inodes in memory
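Collected into a sketch of a sysctl drop-in file (the values are the ones recalled from memory in the comments above, so treat them as a starting point rather than gospel; `dirty_ratio` is omitted since no concrete number was given):

```shell
# /etc/sysctl.d/99-zram.conf -- sketch matching the settings described above
vm.swappiness = 200             # zram swap is cheap: swap early, swap often
vm.vfs_cache_pressure = 200     # value as recalled above; lower would retain more dentries/inodes
vm.page-cluster = 0             # no read-ahead clustering; swap in only the faulted page
vm.dirty_background_ratio = 1   # begin background writeback at 1% of available memory
```

Apply with `sudo sysctl --system` or reboot; `sysctl -a | grep -E 'swappiness|page-cluster'` confirms the active values.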
Unfortunately, there is a huge amount of cargo-culted cruft lying around in various Linux-on-workstation-wiki guide sites that hasn’t been modernized since the 2000’s. I don’t normally like to rant without providing a solution, but this is a problem I see my friends bump up against all the time when I tell them it’s finally the year of the Linux desktop. When something goes wrong they land on the same search results that I did when I was a child and the advice just never got updated.
There used to be a time where swapping out meant moving cogs and wheels full of heavy rocks and RAM frequencies could be approximated by waving a stick until it made whistling noises. At that time suddenly dealing with memory swap made the system unusably unresponsive (I mean unusable, not just frustrating or irritating). Advice about disabling swap and zram came from that time for “resource constrained” systems. Unfortunately the meme will never die because the wikis and now regurgitated LLM drivel will just never let it go because nobody has gotten around to fixing it.
I’ve learned to disable swap on my scientific computing machines where we’re working on giant datasets. It’s better for the machine to crash when it exhausts its RAM than go to swap.
In my experience a machine is never going to recover when a workload pushes it into swapping because something has gone awry and that situation is not going to fix itself.
There are many reasons this situation could happen outside of your context and swapping on SSDs is comparatively harmless compared to the old days of HDDs. Random example: swapping due to VM. You just stop VMs.
Yeah on my current nvme linux systems, swap is just "the phase where the ongoing leak makes the system kind of sluggish, shortly before the oom killer goes to work". On 32GB, I ~never hit swap "legitimately".
The most useful thing honestly has been a memory usage applet in the task bar. Memory leaks usually have a very clean and visible signature that provides a few seconds of warning to hit alt-tab-tab-ctrl-c.
That's because when it comes to memory management on a Linux workstation, it is an unsolved problem. I've tried every piece of advice, distro and tool, and spent hundreds of hours trying to tune it over the years, and haven't been able to find a configuration that works as reliably as Windows or MacOS do out of the box.
Linux memory management works well for servers where you can predict workloads, set resource limits, spec the right amount of memory, and, in most cases, don't care that much if an individual server crashes.
For workstations, it either kicks in too early (and kills your IDE to punish you for opening too many tabs in Chrome) or it doesn't kick in at all, even when the system has become entirely unresponsive and you have to either mash magic sysrq or reboot.
>At that time suddenly dealing with memory swap made the system unusably unresponsive
Interestingly that was my experience on steam deck with its default 1gb swap. But after enabling both zram and larger ordinary swap (now also default setting for upcoming release) it became much more stable and responsive.
Swapping in any form always sucks, period. The machine starts behaving strangely and does not tell you why, because it's trying its hardest to hide the fact that it ran out of resources.
Experience has shown me over and over that you just want to feel the limits of the machine hard and fast so you can change what you're asking of it rather than thinking that there is some perf issue or weird bug.
It's the idea that swap is somehow useful that's old. It's not, it never worked right for interactive systems. It's a mainframe thing that needs to die.
But where else are you going to put your anonymous pages when you don't want them for a while?
Lots of the stuff you're using is backed by disk anyway -- and will be removed from RAM when there's any memory pressure, whether or not you have any swap. If you've got swap then the system can put anonymous pages in it, otherwise it'll need to evict named files more frequently.
Unless you have enough RAM that you're literally never evicting anything from your page cache, in which case swap still doesn't hurt you.
I'll absolutely agree that swapping out part of the working set is unwanted, but most swapping is benign and genuinely helps performance by allowing the system to retain more useful data in RAM. You don't want to get into a state where you're paging code in and out of RAM because there's nowhere to put data that's not being used.
The whole concept of "virtual memory" has tainted systems design for decades. Treating RAM as a cache relies on the OS making guesses about what will be needed and what can be passivated, without it actually knowing the application's requirements. Except that, compared to CPU-level caching, the cost of page faults is big enough that performance degradation is not linear and breaks the user experience. The idea that a 4GB machine can do the same work as an 8GB one, albeit slower, is just not true. If you hit swap, you feel it badly. I'll concede that zram can work because the degradation is softer. But anything hitting the IO should be explicitly controlled by the app.
Other random semi-related thoughts:
- Rust having to define a new stdlib to be used in Linux kernel because of explicit allocation failure requirements. Why wasn't this possibility factored in from the beginning?
- Most software nowadays just abstracts memory costs, partly explaining why a word processor that used to work fine with 64mb of RAM now takes a gig to get anything done.
- Embedded development experience should be a requirement for any serious software engineer.
> Rust having to define a new stdlib to be used in Linux kernel because of explicit allocation failure requirements.
This is phrased in a way that’s a bit more extreme than in reality. Some new features are in the process of being added.
> Why wasn't this possibility factored in from the beginning?
So, there’s a few ways to talk about this. The first is… it was! Rust has three major layers to its standard library: core, alloc, and std. core, the lowest level, is a freestanding library. Alloc introduces memory allocations, and std introduces stuff that builds on top of OS functionality, like filesystems. What’s going on here is the kernel wanting to use the alloc layer in the kernel itself. So it’s naturally a bit higher level, and so needs some more work to fit in. Just normal software development stuff.
Why didn’t alloc have fallible APIs? Because of Linux, ironically. The usual setup there means you won’t ever observe an allocation failure. So there hasn’t been a lot of pressure to add those APIs, as they’re less useful than you might imagine at first. And it also goes the other way; a lot of embedded systems do not allocate dynamically at all, so for stuff smaller or lower level than Linux, there hasn’t been any pressure there either.
Also, I use the word “pressure” on purpose: like any open source project, work gets done when someone that needs a feature drives that feature forward. These things have been considered, for essentially forever, it’s just that finishing the work was never prioritized by anyone, because there’s an infinite amount of work to do and a finite number of people doing it. The Rust for Linux folks are now those people coming along and driving that upstream work. Which benefits all who come later.
Oh hello, thanks for the clarification! Having enjoyed writing some embedded Rust, I'm familiar with the core/alloc/std split. IIUC you're saying that the user-space Linux malloc API itself does not provide a reliable way for the application to think about hard memory limits? Which would fuel my pet theory about "infinite virtual memory" being a significant factor in the ever growing software bloat.
Ah, okay. So yeah, it's not a new standard library, it's "things like Vec are adding .push_within_capacity() that's like push except it returns a Result and errors instead of reallocating" more than "bespoke standard library."
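For illustration, a small sketch of the stable fallible-allocation API (`Vec::try_reserve`, stable since Rust 1.57; `push_within_capacity` itself is still nightly-only as far as I know):

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();

    // A modest reservation succeeds and reports Ok:
    assert!(v.try_reserve(1024).is_ok());

    // An impossible reservation returns a TryReserveError
    // instead of aborting the process, so the caller can react:
    assert!(v.try_reserve(usize::MAX).is_err());

    println!("fallible reservations handled without aborting");
}
```

The point is the shape of the API: a `Result` the caller can inspect, versus the infallible `reserve`/`push`, which abort on allocation failure.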
> IIUC you're saying that the user-space Linux malloc API itself does not provide a reliable way for the application to think about hard memory limits?
So, unless you've specifically configured overcommit strictly (`vm.overcommit_memory=2`), there are many circumstances where you simply will not get an error from the kernel, even if you've requested more memory than is available.
What happens in this case is that your program will continue to run. At some point, it will access the bad allocation. The kernel will notice that there's not actually enough memory, and the "oom killer" will decide to kill a process to make space. It might be your process! It also might not be. Just depends. But this happens later, and asynchronously from your program. You cannot handle this error from inside your program.
So even if these APIs existed, they wouldn't change the behavior: they would faithfully report what the kernel reported to them: that the allocation succeeded.
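The knob in question is the `vm.overcommit_memory` sysctl; a quick sketch of checking and changing it:

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# 2 = strict accounting: allocations beyond the commit limit fail
#     up front, so malloc() can actually return NULL to the program.
cat /proc/sys/vm/overcommit_memory
sudo sysctl vm.overcommit_memory=2   # strict mode; use with care
```

Strict mode makes failures synchronous and handleable, at the cost of refusing allocations that heuristic overcommit would have allowed to succeed.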
Most of the time, you want to use RAM as a cache for the disk. I was trying to make the argument that sometimes that disk cache is more valuable than an under-used anonymous mapping.
Steve has responded to your comment about Rust; to your other comments:
Modern applications do a lot more than old ones. Even if you only use 20% of the features, you probably use a different 20% from any arbitrary other person. You also probably benefit from the OS being able to map everything into virtual memory but only actually load the bits you use :).
And I strongly disagree with your stance on being "serious". I'm sure you don't mean to gate-keep, but we need to teach people where they are rather than giving them hoops to jump through.
In my experience, some of the best software engineers have very little development background. And I say that as someone who implemented 64-bit integer support for the compiler and RTL for a DSP part back in the day. It's useful to have people around with a variety of backgrounds, it's not necessary for everyone to share any particular experience.
Yeah, I agree. The memory-to-memory + modern CPU power makes it transparent or at least gives it a soft roll-off that IO based swap never achieves. But it's still a hack which too often is used by manufacturers to cheapen on RAM in machines.
As the gas-powered engine people say: "there's no replacement for displacement" (I won't push the analogy comparing zram to turbocharging but, you know, they both deal with "compression"...)
I have similar experiences. I've been digging into this more over the years and my two conclusions are: (a) Linux memory management is overall rather complex and contains many rather subtle decisions that speed up systems. (b) Most recommendations you find about it are old, rubbish, or not nuanced enough.
Like one thing I learned some time ago: swap-out in itself is not a bad thing. Swap-out on its own means the kernel is pushing memory pages it currently doesn't need to disk. It does this to prepare for a low-memory situation, so that if push comes to shove, some pages are already written out. And if the page is dirtied later, before it ever needed to be swapped back in, alright, we wasted some IOPS. Oh no. This happens quite a bit, for example, with long-running processes that have rarely used code paths, or with processes that do something once a day or so.
swap-in on the other hand is nasty for the latency of processes. Which, again, may or may not be something to care about. If a once-a-day monitoring script starts a few milliseconds slower because data has to be swapped in... so what?
It only becomes an issue if the system starts thrashing and rapidly cycling pages in and out of swap. But in such a situation, without swap the system would instead start randomly killing services, which is also not entirely conducive to a properly working system. Especially because it'll start killing the things using a lot of memory... which, on a server, tend to be the things you want running.
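If you want to watch the swap-out vs. swap-in distinction yourself, here's a small Linux-only sketch that reads the kernel's cumulative swap page counters from `/proc/vmstat` (the counter names assume a swap-enabled kernel; the code falls back to 0 if they're absent):

```python
def swap_counters(path="/proc/vmstat"):
    """Return (pages swapped in, pages swapped out) since boot."""
    counters = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    # Counters may be absent on kernels built without swap support.
    return counters.get("pswpin", 0), counters.get("pswpout", 0)

if __name__ == "__main__":
    swapped_in, swapped_out = swap_counters()
    print(f"pages swapped in: {swapped_in}, out: {swapped_out}")
```

Sampling this every few seconds shows the difference in practice: a steadily rising `pswpout` with flat `pswpin` is the benign "prepare for pressure" behavior described above, while a rapidly rising `pswpin` is the latency-hurting direction.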
Default configs of most distros are set up for server-style work, even on workstation distros. So they’ll have CPU and IO schedulers optimized for throughput instead of latency, meaning a laggy desktop under load. The whole virtual memory system still runs things like it is on spinning rust (multiple page files in cache, low swappiness, etc).
The only distro without this problem is Asahi. It’s bespoke for MacBooks, so it’s been optimized all the way down to the internal speakers(!).
> Default configs of most distros are set up for server-style work, even on workstation distros. So they’ll have CPU and IO schedulers optimized for throughput instead of latency, meaning a laggy desktop under load. The whole virtual memory system still runs things like it is on spinning rust (multiple page files in cache, low swappiness, etc).
LOL. A Con Kolivas problem, circa 2008, still there :-)))
> At that time suddenly dealing with memory swap made the system unusably unresponsive (I mean unusable, not just frustrating or irritating).
I had a machine freeze this month because it was trying to zram swap, and have hit shades of the problem over the last few years on multiple machines running multiple distros. Sometimes running earlyoom helps, but at that point what's the point of swap? So no, this isn't out of date.
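For anyone who hasn't used it, earlyoom is a small userspace daemon that kills the largest process before the machine locks up; setup is a one-liner on systemd distros (package name assumed to be `earlyoom`, as on Debian/Fedora/Arch):

```shell
# Install: apt/dnf/pacman all package it as "earlyoom"
sudo systemctl enable --now earlyoom

# By default it starts killing when BOTH free RAM and free swap drop
# below ~10%; thresholds are tunable with the -m and -s flags
# (typically via /etc/default/earlyoom).
systemctl status earlyoom
```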
This is OS-agnostic. I love the old rule of thumb that you should have twice as much swap as RAM. I could rant but, no. Just don't.
Today, don't buy a computer (regardless of size) with less than 32 GB of ram. Yes, this applies to fruity products as well. Apart from making it a more enjoyable experience, it will also extend the usable life of the computer immensely.
(The weird crap about apple computers not needing as much RAM comes from iOS vs. android and is for different reasons, does not apply to real computers)
I don’t understand the sentiment. People should analyze what they actually use and what the need is. Sure, I bought a 64gb ram macbook because I like toys and don’t want to think about it, but for 80% of my workload 8gb is fine, and for my partner it’s fine for 100%.
8 GB can, even in this electron world, barely work. But it won't tomorrow. Buying something with 8 GB today is wasting an otherwise perfectly good computer.
And when your partner gets a new computer, for whatever reason, the old one can easily live on for many many years. But its utility will be limited if it only has 8 GB of ram.
The product in the article is only 8 years old but already stretching its usefulness for no good reason.
I have swap, zram, and systemd-oomd enabled on my self managed kubernetes nodes. It helps dealing with JVM powered or memory leaking software at low cost.
I am not sure why you would disable those in many scenarios.
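A minimal sketch of that kind of setup using zram-generator (shipped by default on Fedora; package name and paths may differ on other distros, and the size expression is illustrative):

```shell
# Compressed swap in RAM via zram-generator
cat <<'EOF' | sudo tee /etc/systemd/zram-generator.conf
[zram0]
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
EOF
sudo systemctl daemon-reload
sudo systemctl start systemd-zram-setup@zram0.service

# systemd-oomd kills by cgroup based on memory pressure, which tends
# to catch leaking services before the kernel OOM killer has to act
sudo systemctl enable --now systemd-oomd

swapon --show   # verify the zram device is active
```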
compiling clang on ubuntu 20.04, the link step used up all my ram and started swapping on the nvme.
htop froze, so i hit ctrl-c, but nothing happened. no mouse movement, no ssh'ing in, just totally hard-locked. i ended up having to physically powercycle the machine.
after that i turned off swap so that it killed the process rather than the machine (and remembered to pass -DLLVM_PARALLEL_LINK_JOBS=1)
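For anyone hitting the same wall, a sketch of the mitigations mentioned (the cmake invocation assumes an llvm-project checkout; paths and generator are illustrative):

```shell
# Cap parallel link jobs so the linker doesn't eat all RAM at once;
# lld also needs far less memory than BFD ld
cmake -S llvm -B build -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_PARALLEL_LINK_JOBS=1 \
  -DLLVM_USE_LINKER=lld
ninja -C build

# If you'd rather have the build OOM-killed than the machine hang:
sudo swapoff -a   # lasts until reboot; remove the swap entry
                  # from /etc/fstab to make it permanent
```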
Don't know if it's "legitimate", but I've got 64GB of RAM.
Allocating 16/32/64/128GB of NVME storage to swap is mostly just a waste of disk space for me. When I had swap enabled, it was constantly showing 0 used. (Not "pretty much none", literally "0.0".)
Further, if I'm trying to use more than 64GB of RAM... I'm fine with things getting OOM killed. I don't know that I've ever had anything OOM-killed when something wasn't clearly misbehaving. (I count Chrome eating 50GB of RAM because I haven't closed any tabs all week as me clearly misbehaving for the purposes of this discussion.)
And as far as zram... I guess same sorta arguments. I'm not running out of RAM, so why use up CPU cycles (and presumably battery power)? why use up brain cycles setting that up?
Until I've maxed out my system's RAM, I'd rather just throw more RAM at it.
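The "literally 0.0 used" observation is easy to verify on any Linux box; a sketch:

```shell
swapon --show   # no output at all means no swap devices configured
free -h         # the "Swap:" row shows 0B when swap is disabled

# Cumulative swap traffic since boot, in pages
# (pswpin = swapped in, pswpout = swapped out)
grep -E 'pswp(in|out)' /proc/vmstat
```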
Actually, zram is great! When an "excessive swap event" happens with zram, the system stays somewhat responsive, enough to let you kill the offender even from a graphical session. Without zram, I hope you were going for lunch break anyway...
zram does basically nothing while your working set fits into memory, no performance penalty.
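If you're curious whether zram is actually earning its keep, the stats are easy to read; a sketch, assuming a zram swap device already exists:

```shell
# DATA = uncompressed bytes stored, COMPR = compressed size on the
# device; ratios of roughly 2-4x are typical for desktop workloads
zramctl

# The same numbers in raw form (orig_data_size, compr_data_size, ...)
cat /sys/block/zram0/mm_stat
```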
Similar opinion here on my desktop. I was running 128gb and only exceeded 64gb a handful of times. That said, my RAM started causing lots of issues (I thought my ssd was going bad). I only bought 64gb to replace it with, as I felt the extra cost wasn't worth it; I'm also likely to upgrade early-mid next year.
Yeah I'm not trying to say "64GB is enough for anyone!" so much as "I have way more RAM than I realistically need for my workloads." I've got all the things I need open right now and `free` shows I've got 40GB of RAM available.
If your workloads involve using more RAM than you have you can... add more RAM, use swap/zram/etc, or just not do that thing.
Absolutely makes sense to me to throw some swap into the mix. I'd probably do the same if it were an infrequent use case (otherwise preferring to just add more RAM).
But also absolutely makes sense to me to not have any swap enabled on this machine right now.
I know this doesn't fit the author's goals but I still think the trick with the surface line is using WSL instead of trying to run native Linux. Things have improved over time but when I was using my Surface Pro 4, Linux support was still pretty lacking. Maybe things will get better now that they're practically EOL with Win10 ending next year and no support for Win11.
Unfortunately my SSD started to fail and battery life was poor enough that I ended up buying something else. The iFixit repair score reflects how much of a pain it would be to replace both of those. I do miss it sometimes, I really liked the 3:2 aspect ratio.
I'm actually rather fine with what WSL can do. Hell, many of the tools I use run fine on Windows itself.
But for me, the biggest shortcoming of this arrangement is having to put up with Windows' UX. I hate every single second I have to interact with this steaming pile of crap.
This so much. I've run Linux in all my desktop machines for 10+ years. When I was younger it was mainly due to ideology, but now I really don't care.
Although most linux distros still have quirks (bluetooth issues, sleep/resume issues, no hibernation out of the box, high battery consumption, among a plethora of other papercuts) I am sticking with it mainly because windows ux just sux so much.
Every new computer I buy I give the installed windows a try and oh my god, it becomes crappier with every version.
For me Windows 2000 was the best... 20 years ago. It's been downhill from there.
> Although most linux distros still have quirks (bluetooth issues, sleep/resume issues, no hibernation out of the box, high battery consumption, among a plethora a of other papercuts) I am sticking with it mainly because windows ux just sux so much.
Heh, as usual, YMMV. My bluetooth headphones actually work reliably on Linux (with LDAC support!), while on Windows I usually have to fiddle with them for a few minutes until they start working. For some reason, whenever I reconnect them, Windows thinks it's a different "sound card". I sometimes can't control the volume in video calls, and they start at the max which is painful.
Battery is much better on Linux (there not being anything doing god knows what with the cpu for no reason must help), and it actually stays asleep when I close it. Hibernation also worked well whenever I tried it, but I don't really have any use for it, so I can't tell for sure it's actually fully reliable.
I didn't jump through any hoops for this other than an almost standard Arch install ("almost" because I use a fully encrypted drive with TPM+PIN unlock and secure boot with my own keys).
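For the curious, that setup maps to a couple of standard commands; a sketch, assuming LUKS2 and systemd-boot or similar (the partition path is an example, and sbctl is one of several ways to manage your own secure boot keys):

```shell
# Enroll a TPM2-bound unlock slot that also requires a PIN
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-with-pin=yes /dev/nvme0n1p2

# Secure boot with your own keys via sbctl
sudo sbctl create-keys
sudo sbctl enroll-keys      # add -m to also keep Microsoft's keys
sudo sbctl sign -s /boot/EFI/BOOT/BOOTX64.EFI
```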
On linux, I have to switch my headphone mode when going in/out of web calls. It doesn't auto-switch to mono-mode when the mic is in use by an application.
IME this isn't very reliable on Windows, either. It's likely to switch to conference mode when starting a call, but chances are it won't move out of it at the end. Linux tends to do the same. I chalk this up to crappy conferencing apps which don't seem to release the mic when the call is done. I've seen Teams show up multiple times in volume mixers and webex lose the mic in the middle of a call for no reason (also happens with a traditional wired headset) so I tend to not blame the OSs for this particular problem.
In my specific case this isn't much of an issue most of the time because I've chosen my headphones for their music playback quality and didn't care about mic performance, which, it turns out, is pretty crappy. So, I just put on my wired Jabra headset for calls, which doesn't lag and works mostly OK (until it doesn't: sometimes windows stops getting anything on the mic for some reason – never had the problem in Linux).
Largely the same here... I've been split windows+wsl and mac the past few years for work, and while I feel WSL makes windows usable, I'd rather run Linux directly than either. Muscle memory on a Mac is often painful to deal with (us-ansi 104 keyboard).
WSL still has a ton of issues, slow IO and CPU usage, just to name two of them. Search "WSL vmmem" and you'll see what I mean. It is nowhere near ready for serious use if you are spending 90% of time doing development in a Linux environment.
You say no support for Win 11, but my Surface 2 Pro runs Windows 11 just fine. I don't think it even asked for the license key when I installed it. I probably used Rufus to make the image and turned off some of the more problematic aspects of Win 11, but it for sure installs with little or no problems. This is also a 4GiB model with 128GiB storage. It is very usable, despite having a processor equivalent to a pre-retina MacBook Air IIRC.
I guess that works because Linux power management is almost as bad as Windows so not a lot is lost. I'll never understand how people pick mobile devices with such short battery life. I further don't understand how literally no company other than Apple is able to deliver decent battery life. Even Microsoft's first party offerings which aren't infected by OEM bullshit are garbage in this regard.
> how literally no company other than Apple is able to deliver decent battery life
Apple’s full vertical integration from chip on up gives them an advantage here. For example, the doubling of video playback battery life from iPhone 12 Pro Max to iPhone 13 Pro Max [1] probably came from a new low-power display plus a new video decoder in the A15 Bionic chip.
> I'll never understand how people pick mobile devices with such short battery life.
Some people don't need all that much battery life.
For me, on trains and buses, in meeting rooms, and at home there are outlets. It's a convenience thing when I want to sit at home on the couch without a cable attached to my laptop.
These new Snapdragon Elite X laptops compete on battery life. But I need to build for Linux/amd64 and I don't want to emulate so it's either Intel laptop or Apple Silicon laptop with Rosetta 2 for Linux.
Do they still compete on battery life while running a corporate email client, corporate chat client (Slack/Teams) and an editor (text/code/spreadsheet) in the background while completely idle? You’d think such simple idle workloads wouldn’t matter and yet I only find macOS to be capable of reining in even these “light” background tasks without manual process suspension and killing. I don’t understand how we got to this point but it seems to be how my “real” world works.
I never liked the Surface series that much. It looks very nice, until you actually start working with them. Then they feel like a weird tablet with slow Windows on it. You can optimize it a little, but not much. Quite expensive as well and sometimes support is horribly slow.
I gave my wife an old Lenovo Yoga 2 in 1. That thing works nice using it as a flipped tablet to watch Netflix, but here also the performance isn't great.
Maybe just don't expect that much from these weird computers pretending to be tablets.
My wife and I have been very happy with our Surface Pro 8 16Gb we bought last year running Windows 11 Pro. Mostly we use it with the keyboard attached.
My wife needed a personal device because her company issued laptop was so locked down that she couldn't do a lot of basic personal admin stuff on it (for example online ordering of groceries).
We considered an iPad, but in the end chose the Surface Pro because it allowed multiple user profiles. Windows Hello works super well: as either of us picks it up and looks at it, it's pretty much instantly on the correct profile, and thanks to cloud sync with OneDrive and Microsoft Edge, I'm at home on either my own machine or the Surface.
Only thing to mention is that the out of the box experience wasn't as good as I would have liked, especially compared to my experience with iPhones (despite liking iOS over Android, I have no love for macOS).
Firstly, certain apps (like Instagram) off the Microsoft Store failed to install with a largely undescriptive error. Eventually I realized it wasn't running the very latest Windows 11 feature update; installing that resolved the issue.
The other problem was that my user profile was laggy, but not my wife's. For example the Start Menu was very slow to come up. After a few days of this and no luck Googling the issue, I just formatted and re-installed Windows using Microsoft's official ISO download image. I normally do this with any new Windows PC I get, but assumed it wouldn't be essential for full on Microsoft hardware, but even though there was no obviously extra bundled rubbish software, something was clearly not 100%.
It depends on your reference point, but IMHO there's no device right now that hits all the right points, so yes, the Surface Pro is one of these flawed machines.
On the other side you'll have devices that feel really well built and graceful, but can actually do very little, or other ones fitting a very average vision of what a computer needs to do, and you'll be paying for additional devices to deal with the edge cases.
The real big roadblock is Apple, but if the DMA forces them to let third party software, we could get a fully exposed subsystem opening the door to what users really ask for.
Right now the joke is Windows XP emulation making it what it always needed to be; getting containerised/emulated Mac apps with decent perf from low level access would be a huge win. We could be close to your ideal, with the iPad still running, and a Mac instance pinned to the external screen.
This is my ideal setup. And I'd have it switch to macOS mode just with keyboard/mouse, so inside the magic keyboard it is just the most slick 11" macbook air ever built. Pop it out and you are dropped back into iOS.
I'd easily pay $3k for a top end version of such a device. I think this is Apple's main holdup - if the iPad can run macOS in this dual mode setup, the MacBook Air becomes pretty boring and a pretty bad deal. And they can no longer sell people two devices that accomplish the same task, only differentiated by one having a touchscreen.
In my eyes, Apple's transition to ARM on Macbooks looks like a stepping stone on that path. I wouldn't be surprised if they announced something like that for the iPad Pro eventually.
So iDEX? There have been multiple attempts at that from motorola, the nokia n900, sailfish, ubuntu touch, linux on DEX, DEX, maruOS, windows whatever, citrix,...
Sounds nice in theory but people rarely actually use it.
It actually works very well. Phones/tablets are now more capable than many PCs/Macs. When you've literally got more compute power, RAM, storage, and network bandwidth than supercomputer centers had 15-20 years ago in a phone or tablet-sized package, all you really need is a nice dock to plug it into for display (I'll take a 42" multitouch/pen setup like the Surface Studio, please), keyboard, mouse, and network.
BTW, I've done exactly this daily with the only slightly larger Surface Pros and docks for over a decade, so the concept definitely works, and there are probably millions of people using it, contrary to your assertion.
It's a very small step from doing that with a PC or tablet to doing that with a folding phone design, and there are a few such solutions like that today. (Though they should run the same OS/interface, just morphed slightly for the hardware that's active.)
After having this setup, I will never, ever, go back to an old caveman laptop or desktop computer.
IMO the advantage of the Surface is that it's one of the only tablets out there which is (a) reasonably priced for what you get, (b) has a x64 processor, and (c) can have Linux installed on it without too much difficulty. So if you want a Linux tablet, the Surface may end up being one of your only viable options.
The Steam Deck is also a great option nowadays. Its a lot bulkier than a tablet, but I personally prefer it having a controller attached. Its biggest advantage is that it comes with Linux out of the box, so you don't have to go through the headache of installing an OS yourself and messing around with drivers.
Not trying to be snarky, but I'd like to understand who you think the steam deck would appeal to? The original article, and the comment you're replying to seem to want pen input to do work/draw art, and like the tablet form factor (presumably for the large display), neither of which the steam deck provides.
With "only" 16gb of ram, a relatively meagre 8 core 6800 series APU, and small screen it wouldn't make sense for most software developer workloads, and because of the attached controller(s) it's not super portable so not great for content consumption.
Other than gamers, who likely don't even care that the steam deck runs linux (and in fact are hindered by it in some ways) is there a group you can imagine that would appreciate preinstalled linux so much that the steam deck makes sense over the surface pro or even a framework?
While i have a LCD Steam Deck and i agree you can do almost anything you want on it, i do not think you should use it for a production environment. The design and supported OS is clearly intended for gaming the way Steam wants you to game on it. This works almost perfect. Couple of minor glitches here and there. But all Steam Deck verified games work perfect. As was intended.
Of course i tinkered with it. Steam doesn't care and gives you lots of options. From installing Windows on the go to a sd card, to emulation software, to a full linux desktop environment. This is almost pure freedom, but it works far from perfect and is also not the intention. It's a superb tinker device. You can almost mod it to anything, overspec it, put it to other uses, etc. It's your call. It is like if Steam says "Hey you do you, go and have fun. We will not officially support it, but if you want to go ahead".
Not sure which Lenovo Tab you mean specifically, but I just had a glance at a few now and none of them were x64. If we're talking about ARM tablets, there are an abundance of those. It's Linux-capable x64 tablets which are rarer.
I think Surface Pros are very use-case dependent. It's perfect for mine, to the point I'm astounded there is no real competitor.
Use case: While traveling or at coffee shops, be able to switch between full laptop mode (as long as you have a table; doesn't work on your lap), and use with the pen for taking notes, drawing things etc. While not as critical as pen use, being able to take the keyboard off quickly when reading or watching videos saves space, and lets me get the screen closer.
I liked the first two iterations of the Surface Pro line, but it dropped off the radar for me when they went to NTrig digitizers.
The Samsung Galaxy Book 12 was about the perfect computer for my needs:
- decent-size high-resolution screen
- small enough to fit in a bag for when traveling
- Wacom EMR stylus --- I find this essential for drawing, sketching, annotating, and when I'm not inclined to connect a keyboard, writing
Performance was quite good, but then Fall Creators Update crippled the stylus down to an 11th touch input (treated like just another finger), which scrolled in web browsers and made selecting text quite awkward, as well as making older applications quite difficult to use. I rolled back to 1703 twice and stayed there until circumstances forced a replacement --- the best option I could find was a Samsung Galaxy Book 3 Pro 360 --- I have to keep the Settings app open so I can toggle the stylus between acting/not acting like a mouse.
It kills me that we had such great innovation in the tablet space once-upon-a-time (the ThinkPad was so-named because it was originally planned as a stylus computer) and my NCR-3125 (since donated to the Smithsonian) running PenPoint was one of my most-favourite computers and things seemed so promising w/ Windows 8... at least it's easy to write into text fields again.
Hopefully the Lenovo Yogabook 9i will be popular enough that someone will make a dual-screen device using Wacom EMR.
I disagree. I'm typing this on a nice Lenovo Yoga 2-in-1 and though it's quite nice and well-built, it's the worst computer I've bought in decades, because it's stupidly designed: It's got all the compromises of a tablet, but is too heavy and thick to really be used as one. The pen is marginal (and there's no way to carry it with the laptop except in a pocket!), and it gets way hotter than any of my Surfaces have.
It was clearly designed to be used as a laptop, and never really as a tablet. This shows in myriad ways, from being uncomfortable to hold as a tablet (though its rounded edges are infinitely better than the Surface Studio Laptop's razor-sharp edges (which really can cut you when holding it as a tablet!), to there being NO GOOD WAY to adjust volume without opening it back up to get to the keyboard!
To be fair, half of what I hate about the Yoga is Win11. I'm definitely moving to a Linux desktop next time, if that's viable. The Starlabs StarLite would be perfect if I could get it with 32-64 GB of RAM and a fast ARM processor like the one used in the new Surface Pro.
I'm a big fan of used Surface Go models. They tend to be bought for corporate use, which seems to have a knock-on effect of them being sold off very cheaply, with seemingly minimal use, when people want rid of them. For use when traveling they're pretty exceptional; I even managed to get away with doing a few days' dev work on one while railing around Japan.
Have gotten multiple people a Surface Go 1 with 8GB ram and the keyboard and have never paid more than £80.
Bizarre that they even made a 4GB model, let alone that they kept it until the second most recent version
I use a surface pro 9 for development, diagramming, note taking, media, light fusion 360 (on the iGPU), and gaming (with an egpu). it's a great machine with a few minor flaws, primarily battery life and cooling performance. as a go-anywhere device, it's hard to beat. the price is obscene though, especially considering it's not OLED.
I'm keen to try the arm version though, and the Minisforum V3 is interesting tho not much of an upgrade
I've been wanting to switch to Linux on my Pro X SQ2 for a while due to the WSL2 support on it being terrible (might be fixed now [1]) but always thought that most stuff such as LTE, webcam and surface connector wouldn't work [2].
The peripheral support is slowly getting there.
The major issues are the ones you've mentioned (including the inability to use external displays), but I'm seeing more and more upstream commits for the sc8180x by Maximilian - so I'm confident that these issues will be solved relatively soon.
Wi-Fi and BT work btw, which IMHO makes it already usable as a daily driver. Audio works via BT.
I just bought a Surface Pro 11 and love it. I've jumped from mac into the surface line every few years and I totally agree with you - the fans on the old models were spinning just by having a few chrome tabs opened.
But...if you can live with Windows on Arm (Which has improved greatly in the past year) the SP11 has been great. Battery life is incredible.
For me I was never looking to fully replace my actual laptop, but more to replace my iPad with something that is actually capable of doing any sort of development work if needed. The iPad is a much better tablet, hands down, but even just updating a static website on an iPad is an absolute chore and requires multiple apps to function.
I really can't remember, but my guess is you are right. Having just 4gb of ram makes Windows 11 quite slow. Just saw a couple of desktops running insanely slow and yup, only 4gb ram.
Note this is the lowest spec Surface Pro 4, it had a low power Intel Core m3-6Y30 so that it could run without any active cooling, making it a 'true' tablet. Most of the 'proper' Surface Pro 4s had an i5 or i7 processor with active cooling (see https://en.wikipedia.org/wiki/Surface_Pro_4 ) and were roughly comparable in performance to other PC ultrabooks at the time. I've been using the Surface Pro line for about 10 years to do everything I need to do, they are pretty solid.
I also use the surface for everything I need: I like it a lot and I’ve never had a problem with it. I don’t get the hate, nor why the inaccurate idea that you cannot run things on it persists.
I get the perspective of comparing based on price per unit of specs, but you're not just paying for that. That consideration may not be the main consideration in purchase for everybody. People make subjective assessments that are hard to quantify and compare across individuals.
I guess if you find yourself being disappointed but you otherwise would have liked it, I suggest you may be looking at it the wrong way and missing out on what could work for you.
For me, I think the weight and mobility are important too. I love the stylus and OS. I like the look and there's a bit of a f-you status, not in terms of the money involved which is not that much (especially considering what people drop on gaming rigs, Mac stuff, etc), but because it is a bit different.
I think you're wrong that there's no hate towards Surface: you may not be picking up on it, there definitely is. Maybe people dislike that it's flashy and costly when they expect it should be utilitarian, so it kind of clashes with their expectations in a way that upsets them, and they dislike seeing other people enjoy what displeases themselves. I find it humorous that the same people may see another item, a Mac or whatever, in a different light, despite obvious similarities, and enjoy its flashy costliness. Heh! :)
I encourage you to consider how the people who like and enjoy it see it.
These topics have a way of turning people a bit mad, or at least creating conflict. So please let me turn the heat down a little bit with this olive branch compliment: hey, cool username, are you a mathematician?? :)
Agreed, people can sometimes get too much into minor things like laptop brands.
Yes, I am a mathematician and have a few colleagues who are happily using their surfaces for notes and online teaching. I have seen some "rivalry" with people using iPads instead, but luckily no hate thus far.
Right? Exactly! It's a personal thing, I mean using it is not minor for me, it's super useful, but I don't see the point in challenging others about it. Just like different strokes for different folks, like diffeomorphisms haha :) Did that math joke work? I don't know as i'm not a mathematician. Lucky you haven't seen the hate, it's definitely out there. The refined world of academia must be too pleasant for it haha :)
I also really like how you can just plug whatever keyboard in to it and use a desktop OS on tablet form factor, and it just works.
The original surface pro 1 and 2 were 16:9 aspect ratio, an interesting experiment but from surface pro 3 onwards they went with a much more useful 3:2
About the Fedora Gnome vs EndeuvourOS KDE... the issue here isn't Gnome. It's actually Fedora.
In my testing on similar hardware (also a Core M3 and 4gb RAM), Arch-based distros were the best with low RAM. And I've tried probably 50 distros since last year...
Gnome on my HW with Arch is as fast as KDE, and uses less memory than KDE (in theory; I know RAM is a complicated subject).
Why is Fedora problematic on low end hardware? Because Fedora uses PackageKit, which is a RAM hog, and this is pretty well known. It's not the only reason though; I believe there are some other defaults that make it slower than Arch on my HW, like zswap vs zram.
In my experience with a weak CPU and low RAM, zswap was actually the best choice. With RAM as low as 4gb, you'll really need a swap, you can't run from this. And zram won't be enough, in my experience.
Which I guess is one of the reasons why Arch goes very well here, as it is one of the few distros right now that ships a nice default for zswap.
With Fedora, and most other distros, I get constant freezes when the RAM is full (which is pretty easy to do with 4gb), and this never happens on arch based distros.
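zswap state is easy to check and flip at runtime; a sketch, assuming a kernel with zswap built in and a real swap device behind it (the parameter values are illustrative):

```shell
# Is zswap active, and with what settings?
grep -r . /sys/module/zswap/parameters/

# Enable at runtime (persist via kernel cmdline: zswap.enabled=1 ...)
echo 1    | sudo tee /sys/module/zswap/parameters/enabled
echo zstd | sudo tee /sys/module/zswap/parameters/compressor
echo 25   | sudo tee /sys/module/zswap/parameters/max_pool_percent

# Unlike zram, zswap is only a cache: it still needs a backing
# swap device to evict to
swapon --show
```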
yeah, I took the Ubuntu / Fedora perf for granted as well. Recently switched back to Arch on a whim across one low-end machine, one high-end machine, and both run like lightning compared to Ubuntu 24.04 / Fedora 40.
Expected the difference with Ubuntu as it packs more out of the box for the enterprise behaviours, not so much with Fedora. I've had no freezes, faster startup and shutdown, generally more responsive desktop etc. with Arch.
Generally, though a rolling release, it also has fewer moving parts - only having to deal with the main repo + flatpak (and a select few AUR pkgbuilds) is nice compared to Ubuntu where I had to layer deb repos + PPAs + flatpak + brew to get my tooling in place without having to script my own git-driven installers.
One thing that tripped me up on any distro - the defaults for TLP (vs power profile daemon) seem hyper conservative wrt performance, probably by design. I never bothered digging in, just switched back to PPD, but it definitely prioritises power savings above all else.
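For anyone making the same switch: TLP and power-profiles-daemon conflict, so you pick one. A sketch, assuming both are packaged under those service names:

```shell
# Drop TLP, switch to power-profiles-daemon
sudo systemctl disable --now tlp
sudo systemctl enable --now power-profiles-daemon

# Inspect and set the active profile
powerprofilesctl list
powerprofilesctl set performance   # or balanced / power-saver
```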
I've been on Manjaro (arch based) for a few years now. I only ever installed it once and regularly update it. I've had some minor issues over the years but was able to resolve them. Mostly updates are without issues and when they aren't usually the fix is a google search away and pretty straightforward.
And of course just about everything has been updated many times at this point. Latest kernel, gnome, etc. Nice when a bunch of Intel driver performance improvements landed a few years ago. I got them right away after that kernel got released and noticed a slight difference. A few months ago, I noticed a few more improvements with performance when a bunch of btrfs fixes landed.
It's a good reason to stick with rolling releases. And since the Steam Deck uses Arch, getting Steam running on this was ridiculously easy. I'd use it professionally except I have a MacBook Pro M1, which is really nice, and the Samsung laptop I run Manjaro on is not great, to put it mildly.
I check once in a while but there are a lot of compromises out there in terms of different laptops but none of them really come close to Apple. They all do some things well only to drop the ball on other things. You can have a fast laptop but not a quiet one. You can have a nice screen but then the keyboard or touchpad is meh. Or the thing just weighs a ton.
I think that was the point with the Surface Pro 4 in the article. It's a bit crap in terms of performance but the formfactor is nice-ish. Of course the touch support isn't great, which is no different with Manjaro. Except of course you do have access to all the latest attempts to address that.
I'm using a Surface Pro 7 to run Fedora, and my experience is mostly the same, although it runs a bit faster and without the ghost touches. The main annoyance I face is probably the fact that touch in Firefox occasionally breaks.
Can you share a bit more about your experience here, in particular setting the system up?
I have a bashed up Surface Pro 7 I took traveling with me. I upgraded my main PC to a Surface Pro 9 when I housed up and have been wondering what to do with the Pro 7, because it's so battered from being thrown around and used outdoors for a year that it's not really sellable. I was thinking of turning it into a dedicated outdoor/travel computer, installing Fedora and Steam for point and click adventures, and maybe some MIDI/DJ controller software to play tunes. But I no longer have a keyboard for it, so I would need to be able to do the full Linux install by touchscreen. My other Surface is 100% bluetooth input devices to avoid cables, docks and dongles, so I could potentially pair one of those if it would help during the install phase, but I wouldn't want it permanently paired. It seems like the advice online is generally "if you don't have a USB keyboard, don't bother", though. Do you think it's worth a shot?
> "if you don't have a USB keyboard, don't bother"
I think you should be able to hardware reset without a keyboard, but in my experience you really want console access when messing with bootloaders and alternative OSes. Even if it is just to get to a point where an on-screen/Bluetooth keyboard works... Often a USB Ethernet dongle can be useful as well (avoiding the catch-22 of needing network access to download the wifi driver).
I don't think anything could go wrong just booting into the live distro, but I did my setup with a keyboard and I don't know how it would work without.
I love that the Linux solution to a problem is just have this additional hardware to overcome it. I've run Linux as a desktop OS for years, so I'm not at all unfamiliar with all the hoops you have to jump through. Hoops that die-hard greybeards will deny exist because their personality is tied up in an operating system of all things. Surely 2024 is the year of the Linux desktop!
well you may only need the keyboard to install it, right? There are thousands of USB keyboards everywhere; in the poorest, most remote villages in Africa they probably have so many USB keyboards they make sandals out of them.
Except now you have sandals and perhaps still can't install Linux on a Surface.
Seriously, though, it's kind of ridiculous to make a case that just because there is so much electronic waste already in the world, might as well create some more of it. I don't own a USB keyboard and haven't owned one for a decade or more. Because I exclusively use Surface. Imo Windows tablets are the true cyberdeck of the 21st century.
Touchscreen devices should not require plugging in a keyboard to enter text or plugging in a mouse to click on things. The whole point of these devices is that they can work on their own, without peripherals. If you need to plug in to use them, then you might as well have just bought a laptop in the first place.
I think that if you are expecting Linux to work perfectly when there is no keyboard on a notoriously Linux-hostile proprietary device, maybe you should step up and write the driver for it yourself.
nobody is getting paid to specifically maintain the weird workarounds required to support the Surface, and your problem can be avoided by spending a nickel at the Salvation Army.
it might even work without one! I know the latest Ubuntu detects a touchscreen on my ThinkPad and provides an onscreen keyboard by default.
edit:
I sincerely believe that the best way forward is for people who use Linux to vote with their wallet and buy products from the companies who are not actively hostile to it.
I apply this logic to nearly every device I buy and it results in less waste because I buy stuff I can actually fix! see this:
I like the hybrid/detachable form factor, as a means of merging tablets and laptops into a single device, but the whole software/hardware stack was not yet ready back then, especially for those attempting to use Linux.
List of problems:
1. x86(-64) power-saving (sleep) capabilities are poor; tablets are expected to consume very little battery (i.e. last for weeks in standby mode), while x86 eats batteries for lunch (in S-whatever); this doesn't even take into account Windows arbitrarily deciding to wake the machine up while it's in a bag/backpack
2. Surface Pros and Surface Books (the latter were state of the art in terms of tablet hardware at the time of the SB1 and SB2) had OK hardware support from Linux, but it took a long while, and it wasn't very stable (e.g. wifi)
3. Hardware touch support itself is not enough; software needs to be good, and there was (likely, is) no document reader with good UX and annotation capabilities on Linux
The solution for my use case was to dual boot, but points 1 and 2 were still a serious issue overall.
Nowadays:
1. there are ARM tablets, with performant power-saving (sleep) modes
2. WSL sidesteps Linux hardware-compatibility issues (assuming one tolerates running Windows as the underlying OS), and avoids dual boot
3. WSL also allows using better document readers/annotators
I fear WSL, but as a matter of fact, it's changing the landscape for Linux users.
In theory, iPad Pros would be the best of both worlds, but they have a toy OS by design. /shrug
Sounds like every experience of mine with desktop Linux. Excitement, initial success installing, days of esoteric troubleshooting, then disillusion and abandonment.
This is an elegant, accurate description of my own experience. It's taken 20 years of regular attempts, but I've finally given up. ("This new release of Ubuntu / this new distro will be the one!") I use WSL if I want to compile a program for Linux users.
The Intel m3-6Y30 used in this Surface is just a fantastically puny core: 4.5 W design spec, TDP configurable from 3.5 W down up to 7 W, with a tiny GPU. The 7200U in my Samsung Book 12 is a 15 W part configurable from 7-25 W, so much more headroom. 0.8 GHz vs 2.5 GHz base clocks! Admittedly the 7200U is also a year newer (Kaby Lake, essentially a Skylake refresh), but both are Skylake-class.
https://ark.intel.com/content/www/us/en/ark/products/88198/i...
One interesting thing happening in Linux now is bpf control over hid devices. Perhaps it might be possible to filter palm reads out at the kernel level with this, or eliminate ghost inputs. Hypothetically it should allow filtering the data arbitrarily. Classically I've used interception-tools in userland to do some light remapping, reading a device filtering and emitting as a virtual uhid, but this should be faster & slicker. https://www.phoronix.com/news/Linux-6.11-More-HID-BPF
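The filtering idea itself is simple, whether it runs as HID-BPF in the kernel or in userland via interception-tools: inspect each contact and drop the ones that look like palms (large contact area) or ghosts (implausibly brief blips). A minimal sketch of that logic in Python, operating on made-up contact records rather than a real evdev/HID stream, with thresholds (`PALM_AREA`, `GHOST_MS`) that are pure assumptions you'd tune per digitizer:

```python
# Sketch of palm/ghost filtering, the same idea HID-BPF or an
# interception-tools pipeline would apply to real touch events.

PALM_AREA = 900   # assumed device units: contacts this large are palms
GHOST_MS = 8      # contacts shorter than this are likely ghost inputs

def filter_contacts(contacts):
    """Keep only plausible finger contacts.

    Each contact is a dict with 'start'/'end' timestamps in ms,
    an 'area' (touch-major size), and a 'pos' tuple.
    """
    kept = []
    for c in contacts:
        if c["area"] >= PALM_AREA:
            continue  # too large: treat as a resting palm
        if c["end"] - c["start"] < GHOST_MS:
            continue  # too brief: treat as a ghost tap
        kept.append(c)
    return kept

contacts = [
    {"start": 0, "end": 120, "area": 150, "pos": (10, 20)},    # finger
    {"start": 5, "end": 130, "area": 1400, "pos": (300, 400)}, # palm
    {"start": 50, "end": 53, "area": 90, "pos": (999, 999)},   # ghost
]
print(len(filter_contacts(contacts)))  # prints 1: only the finger survives
```

The BPF version would do the same comparisons per HID report before the event ever reaches userspace, which is why it should be faster and slicker than a uhid proxy.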
I really need to switch from my Samsung Book 12 to another copy (which I already own); the OLED on mine is pretty cracked: remarkably invisible when looking at it straight on, but the touch went from sometimes not working to never working. I also want to try a pen with it.
The 4 GB of RAM can be obnoxious. I feel like with a faster NVMe SSD instead of SATA it wouldn't be such an issue, but paging stuff out or in really makes the whole system lag badly sometimes, which is terrible.
I also hella recommend hibernate. I didn't trust it for years, but one day I ran low on power while suspended & watched systemd wake my system up, then hibernate it, and was shocked shocked shocked that it resumed later & worked. It takes ~10s to boot up, but being able to put a project aside and come back weeks later & pick up where I left off is amazing. Use hibernate! I think you can configure it to hibernate after X amount of time sleeping.
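That "hibernate after X amount of time sleeping" behavior is what systemd calls suspend-then-hibernate. A sketch of the configuration, assuming a reasonably recent systemd (the 60-minute delay is just an example value):

```ini
# /etc/systemd/sleep.conf
[Sleep]
# After this long in suspend, wake briefly and hibernate instead
HibernateDelaySec=60min
```

You can then trigger it with `systemctl suspend-then-hibernate`, or wire the lid switch to it via `HandleLidSwitch=suspend-then-hibernate` in `/etc/systemd/logind.conf`. This needs a swap partition or swap file big enough to hold RAM, and the `resume=` kernel parameter set appropriately.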
On my last laptop, sleep also crashed the wireless card. But, if I restarted the system it would come back.
Guess what hibernate does? It restarts the system. After many years of carrying around a USB wifi card, when systemd hibernated my system on me, it also made the wireless card start working again! Hibernating fixed my broken wifi.
Oh thanks, that is super helpful to keep my state if I accidentally hit fn-f1 which the manufacturer hardcoded to sleep. One likely blocker is that I think restart also crashes my wireless card. Maybe hibernate will work.
Had one issued for work. Absolutely hated working on it. Though that was probably more a mismatch with work requirements (heavy Excel use + Teams = deathly slow). A lighter OS plus lighter use could be fine.
It's unfortunate, because I found that the "expensive" Surface Pro models are great, but the lower end really can't handle much of any workflow (dreaded latency spikes), and it leads to loads of people having a middling impression of a product that could theoretically capture a lot of the high-end Windows market, IMO.
Yeah, I liked the polish on them. I don't recall which one it was. The mid-range i5, I think. This was the beginning of covid, so IT just issued whatever they could get regardless of suitability. But yeah, gigantic formula-heavy Excel files kill even desktops, let alone tablets.
I had it swapped for a surface laptop. Forget what exactly but similar generation.
That had active cooling which I suspect made the difference. Still slow but somewhat tolerable
I've had an SP4 i5 8 GB RAM version since 2017. It's unreliable when running Windows, let alone Linux. It had constant touchscreen issues which never fully went away even after replacing the screen. When I tried installing Linux, I decided to switch back to Windows after a couple of months as both wifi and Bluetooth had constant issues. The battery life is 2 or 3 hours at best, even if you replace the battery with a new one. I'll be replacing it with an M2 MacBook, as that'll be way more productive than keeping this Surface going.
I tried the SP7 refurbished 3 years ago, and it was already kinda slow and not great, though it gave a clear idea of what Microsoft did with the line.
Switching to a 16 GB SP8, it's infinitely better. Still unreliable at times, but not much more so than similar usage on an iPad Pro. Battery life is workable (I get around 5~6 hours coding and compiling, and I usually have an external battery when out anyway).
I assume if you're looking at an M2 giving up x86 compatibility isn't an issue.
The most glaring issues on the Surface for me are the heavy reliance on Chrome/Edge for touch support, as Firefox is really not ready (the mobile version is fine, don't know why desktop is so bad), and the network port management in WSL2, where proxying VPNs can mess with WSL's port proxying. Otherwise I'll be waiting for Apple to ever port macOS to the iPad before reconsidering.
I've been exclusively using Linux on my tablets since 2007, starting with the ThinkPad X61t, and I've never had any of these problems. Although I use a completely different setup compared to the dude in the article. I would even say that on tablets GNU/Linux actually provides a better experience.
But I'd say that's rather on the manufacturers, and not on Linux. They usually provide crappy drivers only for whatever version of windows they ship and call it a day. See all the junk that would stop working between major windows updates.
Also, how does that laptop work? Don't the screens just show up as two displays, or do they do something special?
> I want to get away from windows completely but their support for laptops is much better.
YMMV, as they say... Speaking of displays specifically, we just got some brand-spanking-new 5K screens at work. My all-Intel HP enterprise laptop can't use them at 5K under Windows [0], but Linux supports them perfectly, even two at a time in addition to the integrated panel. Even 4K@60 had been borked on Windows on this PC for something like 2 years after I bought it. Worked OK since day one on Linux.
---
[0] I actually did get it to work by installing the latest driver from the Intel website. But Windows helpfully "updated" it back to the borked version after a reboot.
Having been using a Framework 13 running Fedora for ~2 weeks now... it's going great! I've plugged in a variety of external devices (monitors, a webcam) and they've all just worked.
Linux support for laptops is fine. Getting an OS to work well on hardware requires a whole team of people called system integrators. Just slapping Linux on a Windows laptop and expecting it to work is naive.
If you want a better Linux experience, you have to buy a Linux laptop, i.e. one that was designed (especially in firmware and chip selection) to run Linux, with support. You know, like you do for Windows.
I'm very seriously thinking about one of these (or really, its successor) when I need to replace my computer again in the next year or two - it's already optimized for several Linux distros: https://us.starlabs.systems/pages/starlite#
All I need now is a good replacement for OneNote that stores notes in an open format and supports pen input for sketching and handwritten note-taking...
One reason I don't use a tablet is that they all have glossy screens.
And the new iPad with matte screen has a glossy frame around it. I tried it in a store and the glare around the otherwise nicely matte screen was uncomfortable.
Does anyone here have experience with how well matte screen protectors for tablets work? I see them mostly discussed for their haptic feel when drawing on the tablet. I wonder how well they work for a good experience when coding on the tablet.
I’ve used one of the drawing/pencil screen protectors on my iPad for years for the same reason and it works great. It does make the screen feel a little less sharp/crisp but solves the glare problem for me. I’m sure they’ve gotten better over the years as well.
Not OP but I use this and like it. It gives a slight scratchy feel when I write on my iPad with the apple pencil and it removes all of the glare for when I'm reading. It's magnetic so you can remove it whenever you want to, but I never take it off.
I ran NixOS on a Surface Pro 5 for 3 years without issues. Even the stylus worked. It was one of my favourite "laptops". The super-bad thermals forced me off the Surface Pro line, though.
Searching for a Linux tablet, I got a used Lenovo X1 Tablet Gen3. Linux works mostly fine, but as a tablet, it's mostly useless for reasons similar to the ones mentioned in TFA:
* Battery life. 5-6 hours for moderate use simply does not cut it, especially since sleep drains battery like crazy because s0ix is not working properly, and debugging why is almost impossible. It's absolutely crazy how something that used to work just fine was deliberately botched because MS/Intel decided everything has to be a phone.
* So because of this, you need to shut down the tablet if not used, which wouldn't be too bad, but as TFA says, you need a keyboard to enter the LUKS decryption password.
* As a pure reading device, it's too heavy.
Apart from that, Firefox is basically unusable: backspace does not work properly because of this bug:
It's not, it follows the upstream releases and has a couple of patches for the Surface drivers (e.g. SAM) that will hopefully be upstreamed one day. They have something like ~50 commits on top of the release tag [1].
The main developer is doing an amazing job, and the fact that Linux runs on so many Surface devices, including the ARM ones (like my SPX), is just amazing.
Linaro (Bjorn Andersson) helped quite a lot in the Linux on ARM environment, and qzed (Maximilian Luz) is doing all of the Surface reverse engineering and kernel driver in their own free time.
Sorry, I had to downvote you, because this is just disrespectful of the amount of work awesome people are doing in their free time, and you clearly have no clue what the linux-surface project is about.
> But I still do wish someone would make a Linux laptop that's as tightly integrated with the hardware as macOS is on a MacBook.