Hacker News
Ask HN: What do you want to see in Ubuntu 17.10?
1374 points by dustinkirkland on March 31, 2017 | 1145 comments
Howdy HackerNews!

Dustin Kirkland here, Product Manager for Ubuntu as an OS platform (long time listener, first time caller).

I'm interested in HackerNews feedback and feature requests for the Ubuntu 17.10 development cycle, which opens up at the end of April, and culminates in the 17.10 release in October 2017. This is the first time we've ever posed this question to the garrulous HN crowd, so I'm excited about it, and I'm sure this will be interesting!

Please include in your replies the following bullets:

- FLAVOR: [Ubuntu Desktop, Ubuntu Server, Ubuntu Core]

- HEADLINE: 1-line description of the request

- DESCRIPTION: A lengthier description of the feature. Bonus points for constructive criticism ;-)

- ROLE/AFFILIATION: (Optional, your job role and affiliation)

We're super interested in your feedback! Everything is fair game -- Kernel, Security, Desktop apps, Unity/Mir/Wayland/Gnome, Snap packages, Kubernetes, Docker, OpenStack, Juju, MAAS, Landscape, default installed packages (add or remove), cloud images, and many more I'm sure I've forgotten...

17.10 will be our 3rd and final "developer" release, before we open the 18.04 LTS (long term support, enterprise release) after October 2017 (and release in April 2018), so this is our last chance to pull in any big, substantive changes.

Thanks, HN!



- FLAVOR: Ubuntu Desktop:

1. HEADLINE: A way to have different scaling for external monitors hooked up to my HiDPI laptop.

Currently I can only set a single scaling factor, so I need to adjust my laptop screen resolution to match the scaling of the external monitor. If that's not possible, a way to automatically set resolution and scale for both screens once you hook one up would already save me a lot of manual switching and restarting LightDM!

2. HEADLINE: "Native" multitouch gestures like 3-finger swipe to change workspace.

There are some programs that can do this already like xSwipe and Fusuma, but I expect this integrated with a nice and easy menu.

3. HEADLINE: Better battery management.

Battery performance is often much worse under Ubuntu than under Windows. TLP helps, but it's not enough.

User: I want hi-res apps!

Dev: Sure, here you go.

User: But why is it so small on my new shiny tablet high density screen?

Dev: (Shit, it worked okay for me) Okay, now it detects the density and scales..

User: But when I move the window to my old good lcd screen it becomes way too big!

Dev: Okay let's see if I can dynamically adapt to a new monitor density, it's just one scale factor.

User: But when I put it on my big tv flat screen it is too small!!

Dev: (Oh shit you gotta be kidding me, the pixels are actually a viewing distance relative unit??!)

macOS, in my experience, actually seems to handle all of this perfectly. On normal-DPI screens you choose the resolution, and dragging windows between monitors works as you'd naturally expect (the window pops between DPIs).

Concerning DPI and trackpad integration, Ubuntu should strive to be as Mac-like as possible, at least imo. Macs absolutely win in the trackpad arena; multitouch works like a dream, configurable gestures aplenty to achieve whatever you want (of course, it could always be more customizable).

DPI scaling between monitors works exactly as you'd expect. Windows stay the same size when moving between high-DPI and regular monitors.

These two problems are two of the biggest reasons I don't use Ubuntu (or any Linux) desktop (I use a macbook with a headless Ubuntu Server box and, when necessary, X11 forwarding over ssh).

Does it understand "I want 2x magnification on my 15" 4k laptop display, but not on my 43" 4k monitor"? Windows 10 decidedly does not, so I have to switch manually every time I switch display (just forget using both together), and it then tells me to close all my work and log out and in again to make scaling consistent between UI elements on screen.

Yes it does. It even remembers different window sizes and locations for different monitor configs :)

Huh? On Win10Ent I've got both of my displays set to different scaling factors and changes take effect immediately (like, as soon as I release the slider).

It works for me, with one annoying caveat. I have a set of regular 1920x1080 monitors on my desk at 100% scaling. My laptop has a 4k screen, and when I plug it in, I have it set to turn off that screen.

I have to log out when I plug/unplug or the windows will end up blurry or the wrong size.

Have you ever tried shouting at Microsoft about this?

I know they have a bad track record of not listening, but I think things might be different now, they seem to be a bit more receptive to feedback, particularly with the beta updates.

Or maybe I'm remembering the prerelease "hai where r the bugz halp" back when Win10 was not yet RTM...

This is exactly what I mean.

Yes it does, sometimes. But you can configure it as well. I think they used predefined lists of hardware though, e.g. if the name contains "TV" then the scale is 1.

Pretty much. Works just fine when I hook my 15" laptop up to my 130" projector.

It's a bit weird to drag things onto my 4k TV through HDMI and try to track my microscopic mouse pointer to the tiny window to maximize my video, but otherwise works well. I suspect I could fix that in settings somehow though.

On newer macos shaking the mouse back and forth makes the pointer get larger (so you can find it).


KDE has a similar feature: When you hold Ctrl+Win, it will draw revolving circle segments in black and white around the cursor to allow you to find it. It looks like this: https://imgur.com/a/67wfI (You'll have to imagine the cursor inside these circles. My screenshot utility won't include it in the image for some reason.)

Heh...I think Windows 3.11 already had that feature (Ctrl -> Circle zooming in to mouse) :)

Have you considered swapping to a BIIIG mouse pointer? That's what I do on my home 65" HTPC/TVPC :)


I've always thought that arcdegrees should be what we measure UIs in: how much of a user's field of view does this thing consume? After all, what makes text "small" is how much of my FoV it consumes (or doesn't). Not inches, or points, or pixels.

(Admittedly, "points" are still likely a good measurement for print. Perhaps one can work backwards and fudge point as a measure of angle if you consider 12 point font at a typical viewing distance.)

I assume the really hard piece is figuring out the distance the display is going to be viewed at. Some sensible defaults exist (phones are typically about the same distance away, same with desktop monitors) but unique situations certainly can exist. (I'm also assuming that the monitor can report its physical size and resolution; combined with viewing distance, it should be possible to calculate FoV.) If you did this, you should be able to mostly seamlessly split a window between two displays, and have it be equal "size" in the FoV. (Of course, some displays have borders, so that fudges it a bit.)
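To make the arcdegree idea concrete, here's a rough sketch of the trigonometry (all inputs are hypothetical examples): the visual angle of an element is 2·atan((height/2) / distance).

```shell
# Visual angle of an on-screen element, given its physical height and the
# viewing distance (example inputs, not values any display actually reports).
angular_size() {
  # $1 = element height in mm, $2 = viewing distance in mm
  LC_ALL=C awk -v h="$1" -v d="$2" 'BEGIN {
    pi = 3.14159265358979
    printf "%.2f\n", 2 * atan2(h / 2, d) * 180 / pi
  }'
}

# The same 4 mm glyph, viewed at desktop vs. phone distance:
angular_size 4 600   # ~0.38 degrees at arm's length
angular_size 4 300   # ~0.76 degrees held close
```

Which matches the intuition above: halving the viewing distance roughly doubles the apparent size, so a phone can get away with physically smaller text.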

> I've always thought that arcdegrees should be what we measure UI's in: how much of a user's field of view does this thing consume? After all, what makes text "small" is that how much of my FoV it consumes (or doesn't). Not inches, or points, or pixels.

That is a good starting point for calculating the default "optimal UI scaling", but there are going to be adjustments needed for the FoV of the whole screen area (not per pixel) too.

With large screens, for example 24-30" on your desk, just the per-pixel FoV measure will probably be good enough. You have plenty of "space" for windows and content, and want to get the optimal scaling.

But once you get to very small screens like phones, there is a tradeoff between keeping font and UI sizes comfortable, and being able to actually fit enough content on the screen without endless scrolling. I am willing to strain my eyes with smaller font sizes on my phone than on my laptop, just so that I can see more than 5 sentences of text at the same time.

"CSS Pixels" are actually supposed to be based on viewing angles.


If the OS (not an app) could allow you to tweak the native pixel resolution, scale, size of each display, even under "advanced settings" that would go a long way towards helping.

This is at the Operating System level, not like some random one-off application.

For me, the one feature I miss the most is a checkbox option for "Native Scrolling"; Did this really need to be removed?

X11 did — run xdpyinfo and you'll see its idea of screen dimensions and resolution. (It's unlikely they'll have been configured correctly, of course.) If you look hard enough, you can find some ‘outdated’ plain-X software from the workstation era that respects it. It was the ‘Linux desktop’ crowd that threw that away, since they couldn't think beyond building Windows clones for PC clones.

And the vector-based competition to X (e.g. NeWS, Display Postscript) would have done better.

Simple answer is don't auto-detect. Allow the user to set the scaling factor per screen and then just auto-apply that when using that screen. This just requires a way to uniquely identify screens and requires the user to set the scaling factor for that screen once when first used.

Initial autodetection and scale-factor setting is OK. Otherwise most regular users would just say "all my icons and text are too small on my new notebook". Windows detects the high DPI in that case and sets the scale factor to 200%, which gives a good starting point. Of course the user should be able to override this permanently if it isn't his preference.
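A minimal sketch of the "set once per screen, auto-apply later" idea, assuming the desktop can hand us a unique monitor identifier (the store path and the ids below are made up):

```shell
# Remember a per-screen scale factor keyed by a unique monitor id.
# The store is a plain "id scale" text file (hypothetical location).
STORE="${STORE:-$HOME/.config/screen-scales}"

get_scale() {
  # $1 = monitor id, $2 = autodetected default used on first sight
  if [ -f "$STORE" ] && grep -q "^$1 " "$STORE"; then
    awk -v id="$1" '$1 == id { print $2 }' "$STORE"
  else
    echo "$2"
  fi
}

set_scale() {
  # $1 = monitor id, $2 = user-chosen scale factor
  mkdir -p "$(dirname "$STORE")"
  { [ -f "$STORE" ] && grep -v "^$1 " "$STORE" || true; echo "$1 $2"; } > "$STORE.tmp"
  mv "$STORE.tmp" "$STORE"
}
```

On hotplug, the desktop would call get_scale with the detected default (e.g. 2 for a high-DPI panel), and any explicit user choice recorded via set_scale wins from then on.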


As someone who loves singular they, I have a request: please don't do this. It is OK if grand parent uses he/him. Thanks!

The distance between the third and fourth formulations of the problem is very small. Once apps can be dynamically redrawn with a scale factor, simply make the scale factor customizable.

Simply introduce zoom in/zoom out for the whole desktop separated to each screen like in browser (you can zoom certain tabs/sites and have that memorised). Problem solved.

> 1. HEADLINE: A way to have different scaling for external monitors hooked up to my HiDPI laptop.

This would be awesome. Even when both the laptop and the external screen are 1080p, different scaling could be helpful if you want to use a dual monitor setup effectively.

Unfortunately, it's a tough nut to crack given current desktop behavior. For example, you can have a window that straddles both monitors. What should the scaling be? You need to switch at some point as you're moving a window back and forth - when? So it's a challenge, but solving it would be so worth it!

Windows 10 handles different scaling (zoom) between monitors far better than any Linux distro I have used. A window keeps the zoom of where it came from until it is entirely on the new monitor. Works pretty well.

While I get that it's uncool to like Windows on HN, I really like Windows 10. With WSL, all of the CLI tools I need for development are here along with better hardware support (including suspend / resume, high DPI monitor support, latest GPU drivers / etc).

Plugging two 4K monitors into my laptop (which has a native 1080p display) is an awful experience when booted into Ubuntu. You either have to set the DPI to make the laptop display unusable or set it to make the 4K monitors look like $hit.

Plus... you know... games.

Windows isn't the best at multi-DPI in general though either. Only recently did Firefox on Windows get multi-DPI support - not sure if Chrome does yet because I gave up on it and went to dual 4K because the scaling was easier. If you want to see really good multi-DPI support, OSX is really good at it with most apps supporting it out of the box.

Multi-DPI is kind of a hack in general though and is likely to cause issues unless applications have been tested for it very thoroughly, it causes serious issues on major frameworks like Electron and Qt - though both of their support for it is improving slowly. If you want things to work smoothly for now, try to stick to 1 DPI setting.

I think you're confusing Windows DPI scaling availability vs lack of support from the apps you use.

It's not Windows' fault the apps don't take advantage of DPI scaling. You can also disable DPI scaling for individual apps.

You're right, but even many builtin Microsoft apps - while they supported DPI scaling - did not support multi-DPI switching and rather than scaling properly just scaled pixels and looked blurry.

It's "not Windows' fault," sure, but it certainly makes for a worse experience than other platforms like OSX where multi-DPI is much more commonly supported.

I would much prefer the Windows behavior to what I see on Linux. Right now, if I open an app that doesn't support high DPI, it is just unusable because it is so tiny.

Couldn't you just manually reduce your screen resolution? Or is that too drastic to be worth it?

It's worth considering whether there is some flaw with windows multi dpi scaling such that apps don't use it. Firefox and Chrome have scaled properly on Mac for years now, while even Windows 10 ships with first party apps that don't scale right. (E.g. device manager.)

For sure, this has been an issue in the past with Windows. UWP helps make multi-DPI work by default in new applications.

Sure, but 99% of my Windows software isn't UWP. It's all good and well to say it's there, but that doesn't make the experience good for the user. Contrast to KDE and OS X where it just works for 99% of software.

I mean, I get it, same issue as Vista for Microsoft - people expect 100% backwards compatibility, but it turns out that terrible design decisions made many years ago tend to mean you need to break compatibility. Just like UAC, resolution scaling will be an issue that becomes less painful in Windows over time. Right now it's not great, however.

I mean, you say that, but on KDE, for example, every application except one on my system works with DPI scaling (the odd one out is Unity3D) - that's because at the Qt level DPI scaling is built-in, so the toolkit supports it and the applications get it for free. Clearly this wasn't the case for the older Windows UI stuff, where they are literally just scaling the image of the window up (which means horrible-looking text).

Actually, the really old windows stuff did support scaling - the 'Large fonts (120%)' option was there almost forever. I remember that original Delphi, circa 1995, supported it.

Just most apps chose to ignore it; the developers took the "everyone uses 96 dpi anyway" attitude, and by the end of the '90s most applications sucked at 120 dpi.

Yep, Windows API already had support for logical pixels in the 16 bit days and all good books always preached to convert between logical pixels and physical ones.

I guess people got lazy, as you say.

I think that monitor pixel density stayed more or less the same for a very long time. It had only been increasing very slowly for 20 years, until a few years ago.

No point in spending time on logical pixels if it makes almost no relevant difference...


Anything running its own renderer doesn't get to benefit from component scaling since they don't use components.

That was my point - running KDE, this is extremely uncommon, running Windows, it's practically every application.

The problem isn't just scaling between two different resolutions, it's the inconsistencies (yes, apps don't take advantage but that's not the only issue). For example, if I want 200% 4k (my monitor) and 100% 1080p (my 2 side monitors), I have to choose between ultra-tiny text on my 4k with regular text or blurry text on my 1080ps.


Is that Windows 10? On my Windows 10 Ent desktop I'm able to set the scale factor of each display independently.


This is Windows 10. How do I enable that option?

Erm...click on the display you want to change (1,2,3) and simply drag the slider?

This month's Windows update fixes DPI scaling for old toolkits.

Yes, it's true that there are issues. It seems like most Microsoft apps handle multi-DPI well. By comparison, on Fedora 25 (the latest release), the only program I have found that handles multi-DPI is Terminal. Firefox doesn't do it.

Yeah, Windows support is better than Linux's, but it's still pretty iffy. While IE and a few other things handle it, even stuff like Windows Explorer and OneNote doesn't handle multi-DPI well, or even just runtime DPI changes in general: I'll RDP into my box from a 100 DPI system and have my session screwed up when I come back to my system.

If you're deciding whether to buy high-DPI monitors, don't unless you're prepared to switch them all at once. Stick to ~100 DPI until you can commit to going all in.

Chrome has had DPI scaling since 2015 on Windows. I remember having to report lots of initial bugs. Now it works fine.

DPI scaling yes, but not multi-DPI, when dragging from a 100 DPI monitor to a 300 DPI one text should remain sharp and not blurred by scaling pixels. Or even vice versa.

>While I get that it's uncool to like Windows on HN

That has not been my experience here at all. There is a rather active, and sometimes vocal, Windows fan base around here. Misconceptions about the current state of desktop Linux are commonly seen as it seems most people around here only use either Mac or Windows.

Agreed, while I see some MS/Windows hate... some of it technical, some political, and a mix of founded/fud... There's been a fair amount of counter to that.

I mostly use mac at work, mostly windows at home, and a bit of linux for servers, and my htpc (most of my casual browsing at home)... Each experience is fairly different. And they all have pluses and minuses. That said, more often than not, I prefer the Windows UI desktop/menu, but osx & unity app integration and linux/bash shell environment. I wish that Ubuntu/unity would integrate more of the menu/taskbar features found in windows. (And bring back natural scrolling checkbox)

Microsoft integrated Ubuntu instead.

My experience (currently running two 27" panels at 3840x2160 and one 27" panel at 2560x1440 in KDE for most stuff, Windows for gaming, and previously had one of the first edition retina MBPs with external non-retina displays):

OS X, years ago when the first retina MBP was released, did everything right. It was seamless from monitor to monitor, scaling done well.

Windows 10, now: OK, ish. Most applications scale badly with blurry text because it's just literally scaling the image afterwards. Newer applications are fine. The actual scaling isn't great - having a window half on one monitor and half on the other leads it to 'picking one' and looking weird on the other.

KDE, now: Pretty good. Correct scaling once you set it up. The autodetection can be dodgy, and the DPI scaling for text isn't linked to the rendering scaling for windows, for some reason. The GUI still only gives you a single scaling option for all monitors, but the autodetection can do different for each monitor, and environment variables can be set to solve it manually. The actual scaling is perfect for the vast majority of things. Things scale correctly and no blurriness. The only application that doesn't handle scaling is Unity3D, so everything is tiny (no fallback to raw image scaling).

In general, it's what you'd expect for interface stuff across the platforms - Linux does it right, but the interfaces around it are bad; Windows does it fine for new stuff, while old stuff (which is most stuff) sucks, but the interfaces for doing it are OK; and OS X gets it all right.

Edit: Just to be clear, it's only the Unity3D editor that doesn't do scaling, the actual games work fine, as you'd expect they just get the full space and the game chooses how to render to it. To be fair to Unity about the editor, they support scaling on OS X, and the Linux build is still a beta. It is annoying though.

I use this on Windows 10: http://windows10_dpi_blurry_fix.xpexplorer.com/ and it works fine. If you have blurry text, disable DPI scaling in that app (right-click -> Properties -> Compatibility -> Disable DPI scaling) and this will take over and make it usable. There are a couple of applications that act wrong no matter what (Battle.net for example), but most of the time this fixes it well enough.

I only use Windows 10 for gaming, so fortunately I don't really need to worry. Useful for those who use Windows all the time, though.

Windows also gets my vote when it comes to the per-app volume mixer controls which have been awesome since Windows Vista.


PulseAudio provides this feature, and actually more features and functionality than Windows. Ubuntu's default mixer isn't the greatest so I recommend this instead:

    sudo apt install pavucontrol
You can then find it in the application menu labeled "PulseAudio Volume Control". It lets you set the volume for individual applications (and with Chrome, individual tabs!) and also pick which output/input device will be used.

It lets you configure some neat tricks. For example, you can set up an audio device that forwards to another computer running PulseAudio, an RTP receiver, or a few other similar protocols, then set, say, Spotify to output to that device. So if you have some network-enabled audio receiver somewhere in your house/office/whatever you can send audio from your Linux workstation to it.

You can of course also pass that audio through various filters/plugins to mess with the sound before it goes out to the remote receiver - for example, equalization, noise removal, etc. PulseAudio supports LADSPA plugins, so if you wanted to you could set up a little Raspberry Pi audio receiver at your front door and yell at solicitors in a robotic voice from your desktop. All with a bit of PulseAudio configuration fiddling =)
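For the network-forwarding trick, the setup looks roughly like this (the hostname, ACL range, sink name, and sink-input index are all placeholders; a sketch, not a full recipe):

```shell
# On the receiver (e.g. a Raspberry Pi), accept PulseAudio streams from the LAN:
pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

# On the sender, create a sink that tunnels audio to the receiver:
pactl load-module module-tunnel-sink server=pi.local sink_name=frontdoor

# Find the stream's index with `pactl list sink-inputs`, then move it:
pactl move-sink-input <index> frontdoor
```

pavucontrol's Playback tab does that last step with a per-stream dropdown instead of an index.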

I still remember the first time I was in a computer lab and I leaned too far away from my computer and my headphones that were blaring music popped out... and the whole room WASN'T subjected to the same loud music. And I opened up the Kubuntu audio controls and plugged in my headphones and the volume slider suddenly jumped up, then I unplugged again and it muted again. "Woah."

I remember trying it on whatever Windows computers were in the lab just to make sure I wasn't crazy and that this wasn't there all along, and sure enough, they kept the same volume no matter whether the headphones were plugged in or not.

One of the first PulseAudio victories I remember, at a time when I vaguely recall that it was a newcomer and people were really pissed at PulseAudio's bugs and recommending just straight ALSA instead.

+1, PA + pavucontrol are very flexible. You don't even need weird protocols to send your audio to another computer, I just used its tunnel module (enable it in the receiver, then configure its IP on the sender) to send my browser's audio output to my home server, which has a decent stereo attached. The latency is quite good too, the delay even over wifi is barely noticeable.

There's also pulseaudio-dlna[0]. It works as advertised.

[0] https://github.com/masmu/pulseaudio-dlna

Thanks for the heads-up! This is one thing I miss mightily on my Mac.

This feature comes by default with PulseAudio, maybe Ubuntu doesn't expose it well enough in their audio settings. I think Gnome Settings has it, KDE definitely does.

PulseAudio solves the same thing for Linux.

I get correct auto scaling-switching like this on Gnome 3 with Wayland, but only for a subset of programs (basically those that are fairly vanilla GTK+3), and at the cost of weird bugs with Wayland and program support thereof that still crop up fairly regularly.

I've always resorted to xrandr and can get the screen looking pretty good. Though I really think something like this should just work.

Weston does multi-DPI really well. When you drag a window between monitors, the half on the HiDPI monitor is scaled and the half on the LoDPI monitor is unscaled. So it looks perfect without windows growing or shrinking when you move them to another monitor like GNOME on wayland.

I'd love to read more about how this was done, if you have a link perhaps.

macOS handles that edge case. It just displays the window in only one screen. The one with the biggest area of the window shown. There is no need to be held back by cases like this.

If you zoom in, you can sort of force parts of one monitor to be shown on the other monitor. You can see how everything's upscaled/downscaled from there.

> This would be awesome. Even when both the laptop and the external screen are 1080p, different scaling could be helpful if you want to use a dual monitor setup effectively.

Actually, this is especially true when both are 1080p, because laptop screens are never as big as desktop monitors, and we also tend to use them closer. I have this exact problem right now but I think I've just adjusted my eyes over time to squinting at 1080p at 14", or perhaps I turned on some display scaling and forgot about it.

> For example, you can have a window that straddles both monitors. What should the scaling be?

Intuitively, I feel like you should use the physical DPI of both screens to make sure that the window has the same physical dimensions on both. But that'd probably lead to weird scaling factors like 1.17 instead of nice round ones, and thus fuzzy scaling, so it probably couldn't quite work. I guess perhaps you'd just snap each display's DPI to the closest predefined value (eg. .25 increments which I think most systems use these days). Then you'd get a similar-sized part on both sides of the boundary.

But yeah, I think overall if you actually use physical DPI for scaling everything should work out close to nicely.
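A sketch of that snapping, assuming the monitor reports its resolution and physical width (the example sizes are typical, not from any specific EDID):

```shell
# Derive a scale factor from physical DPI, rounded to the nearest 0.25 step.
snap_scale() {
  # $1 = horizontal pixels, $2 = physical width in mm
  LC_ALL=C awk -v px="$1" -v mm="$2" 'BEGIN {
    dpi = px / (mm / 25.4)                  # pixels per inch
    printf "%.2f\n", int(dpi / 96 * 4 + 0.5) / 4
  }'
}

snap_scale 3840 531   # 24" 4K panel, ~184 dpi -> 2.00
snap_scale 1920 531   # 24" 1080p panel, ~92 dpi -> 1.00
```

With both sides of a monitor boundary snapped this way, a straddling window would render at similar physical size on each display.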

FWIW, macOS changes a window's DPI mode when the cursor that is moving the window passes from one screen to another. Just tried that out. :D

That's what happens when "Displays have separate spaces" is turned on. (With that setting on, windows are only present on one monitor at a time, and, when dragging a window, the transition happens when the mouse cursor moves between displays.)

With "Displays have separate spaces" turned off (so windows can be present on more than one monitor at a time), it looks like windows take their DPI setting from whichever monitor the majority of the window is on-- with my current two-monitor setup, the DPI transition happens at the halfway point of a window, regardless of where the mouse is as I'm dragging.

Leaving aside the implementation difficulty, the answer to "what should the scaling be" seems obvious? Use the monitor scaling for the part shown on that monitor. The switch should happen on a monitor level, not on a window level.

The painting happens at the window level; that's why it's handled there. The application paints the window whenever it receives a "paint me" event for it, and it cannot paint different portions of a window at different DPIs - from the application's POV, it is a single canvas. Another thing is that window resize and DPI change are separate events, so you cannot really fire them twice in a row with different DPIs and expect the app not to get confused.

Another approach would be to let the application render at higher DPI and the compositor would downsample the portion on the lower DPI display.

OSX handles this by upsampling/downsampling the parts of windows that are drawn on the other screen.

This isn't just external monitors! MBP with "retina" screens are also unusable for Ubuntu :(

Fedora with GNOME Shell on Wayland already handles both 1 and 2, although power management is about the same as Ubuntu and Wayland comes with its own set of issues.

I switched from Ubuntu to Fedora about a year ago and am quite happy with it.

Well, 1 depends on Wayland actually detecting your external monitor; I normally end up having to drop back to X to get it to detect my secondary 28" 4K monitor :-(

I couldn't figure out where to change the scaling for the external monitor on my Fedora 25. My 1080p external monitor just looked huge compared to my Dell XPS 13 HiDPI display.

It requires Wayland features that are used by GNOME 3.24 (so F26).

Really, there's native multitouch support for touchpads? Do you have more info about that?

Yes there is native 4 finger swipes to change desktop on Wayland. And I wrote an extension to add 3 finger gesture support for an action of your choice. Check it out here: https://github.com/mpiannucci/GnomeExtendedGestures

I can't find much information, but things like scrolling, switching work spaces etc. worked out of the box for me when I was testing Fedora 25 a month or so ago.

Two-finger scrolling works really well on Fedora with Wayland, in fact at some point it appears to have become default behavior (at least on my machine running the latest version).

Fedora uses libinput. Of course, it is not without issues for those who would like to tweak every little setting. libinput is designed to be as automatic and configuration-less as possible.

+1. I recently got a Dell XPS 13 and hooked it up to my external monitor (4K). Icons were way too small so I adjusted those and the standard text size. But getting applications (e.g. PyCharm) to run at a reasonable size was frustrating (I had to google it and then modify some configuration file somewhere). With OS X, which I just came from, the external monitor "just worked" when I plugged it into my MacBook Pro.

It really is a mess. I connected a 4K XPS 15 to a FHD monitor; the only way to make it work was via the open-source Nvidia drivers, using xrandr to scale the external monitor and then other settings to scale everything to FHD. That and some other things made me return the XPS and order the new MacBook.

Windows scaling should auto-configure and work in any modern application. It always works like that for me, and I only have issues with software written in 2003 in Java or really old versions of Qt.

> (I had to google it and then modify some configuration file somewhere)

I even had to do this with Chrome [1]. It's crazy how obscure this was when I was setting things up. Other apps, like Gimp, still look like shit because I can't find a way to do the same thing; their GUI just renders at a tiny scale and is difficult to use.

[1] https://superuser.com/a/1120078/103402

One way to solve the scaling issue is to set the external monitor to a virtual higher resolution while still driving it at its native resolution (with scaling down done in GPU).

Actually, Linux/Xorg generally supports this out of the box; it is just the higher-level software that would need to make use of it. You can try it yourself:

    xrandr --output <output-name> --scale 2x2

The result should be that the given monitor appears to have twice the resolution, so if applications believe they are running on a high-DPI display, they will look fine on the external monitor as well.

However due to lack of support and awareness in desktops doing just this might leave you with an unsatisfactory configuration, e.g. part of the desktop erroneously shown on both monitors - you might need to use further xrandr commands to setup the regions that each monitor displays.

I use the same approach to solve this issue on a Windows 7 system I am using, it is just slightly more involved (I need to setup a custom resolution in the Nvidia control panel).
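Put together, a session with one HiDPI laptop panel and one 1080p external monitor might look like this (output names and geometry are examples; check yours with `xrandr --query`):

```shell
# Laptop panel (eDP-1) runs native 3840x2160 at 1x; the external 1080p
# monitor (HDMI-1) gets a virtual 3840x2160 that the GPU scales down.
# Positions are set explicitly so the virtual regions don't overlap.
xrandr --output eDP-1  --mode 3840x2160 --pos 0x0 \
       --output HDMI-1 --mode 1920x1080 --pos 3840x0 --scale 2x2
```

With both outputs presenting the same virtual density, a single 2x toolkit scale factor then looks right on both screens.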

Unfortunately, this scales after drawing. The entire point of hiDPI is to have a crisper image. To achieve that, the scaling must be done at the drawing level.

Unity and GTK would scale everything up, so things are properly drawn before being re-scaled.

So the quality will be there.

For normal/low-DPI screens, instead, you'd scale everything down; you'd lose some memory and CPU power, but you'd still get the quality result.

Battling xrandr is not for the faint of heart. It is tedious to get the right behavior, and it differs from one display to the next (the precise dimensions, etc.).

HiDPI is still a huge problem on the Linux desktop; I can't count the number of hours I've spent researching and fiddling with it. Wayland is the answer, but it's slow moving, and Sway currently looks terrible when scaling double.

The biggest issue with Wayland is video drivers. Try getting Wayland to work with any proprietary blob, and see your efforts fail miserably.

#1 is absolutely the biggest one for me and #3 is a solid second.

I have a Macbook Pro with retina and stopped using linux simply because I couldn't get a good resolution on my laptop and monitors. And then when traveling (flights etc), ubuntu chewed through battery probably 3 to 4x as fast as OSX so I wasn't good for that either. As a result, I have been on OSX for a couple years now but would love to be back on ubuntu some day.

I have the dual problem of #1. HEADLINE: I've got a HiDPI notebook and suffer when I have to connect it to a common 1080p screen.

Better out of the box HiDPI support would be great.

Autodetection would be nice, but just being able to set the scaling option in one place and having it apply not only to my desktop but to the login manager as well would be very useful.

Also, afaik there is no documentation on changing the scaling factor in the login manager, or at least not in the official docs.

I would not buy a standard resolution monitor at this point, so having simple support for it in Linux is very important to me.

Ubuntu Unity Developer Here...

I'm mostly replying to point 1, as it's closest to what I do...

I know we should offer a UI for that, but while waiting for that you can just work around this.

Well, as said unity supports scaling, although it's not possible to scale toolkits per monitor.

However... There's actually a good workaround for this, that works fine for multiple monitors.

The idea is that you scale everything up to 2x (or your maximum scaling, including window contents), then you scale the non-HiDPI monitors down using xrandr --scale.

For example, if you want to use normal resolution there, you just have to do something like:

    xrandr --output <OUTPUT> --scale 2x2

In this way it will be scaled down, and everything will be readable and almost 1x1.

You can test this on a normal-resolution monitor as well, and you'll see things should be pretty good.

I should find some time to implement this directly inside UCC / USD, so that users will get this for free...

Notice that there's also a bug in X causing some mouse trapping, so you'd probably also need X to be patched as explained in this bug: https://pad.lv/1580123 (we'd like to include this upstream, but we're waiting for X upstream approval for that)

On 1: Seriously, I was gonna write the exact same thing. Just today I researched this once again, since it's quite a hassle, and nowadays it seems pretty common to have a HiDPI laptop screen in combination with a standard-DPI external screen.

I had the same problem yesterday. I use Fedora, but we share the same pain of missing this feature. It would be awesome to have this setting. Being able to set different scaling for external monitors is a must-have feature.

Thanks for mentioning TLP - I hadn't heard of it before

3. Agree on the default WM; no issues with battery life (same or better than Windows) with i3wm. In my experience, of course.

More work on gestures!

Including the ability to configure what gestures you want in a GUI interface!

If multi-monitor support was as solid as it is on macOS, I'd likely switch.

Would love support for #2

I would absolutely be in favor of #1 and #3.

- FLAVOR: Ubuntu Desktop

- HEADLINE: Please, please, please fix space issues with /boot.


I'm constantly running out of space in /boot, due to kernel updates. It drives me so incredibly batty. If I had to guess, this is due to poor defaults in the installer for folks that opt to encrypt their whole disk. Even still, this system was set up back on 14.04 (I don't think it started on 12.04), and I have no intention of reinstalling from scratch just to fix it.

Publish something official on how to fix this problem! Make it easy and stress free! Yell at the people who didn't catch this bug before it went out! Sorry, but this is just a really bad problem: it leads to folks like me wasting time, and probably a whole bunch of other folks just not being able to install updates, and no idea why.

- ROLE/AFFILIATION: software developer in the federal government

+1 -- This is the one and only problem I have to regularly help my non-technical Ubuntu friends (and their friends) with. Every few months they cannot install updates anymore because their /boot fills up and apt fails to install a new kernel package.

The simplest fix would probably be to make /boot large enough by default (in the order of 10GB or 20GB or so -- the current size is 512MB IIRC).

A better fix would be to purge old unused kernels automatically but as far as I understand there were some difficult edge cases around that.

> The simplest fix would probably be to make /boot large enough by default (in the order of 10GB or 20GB or so -- the current size is 512MB IIRC).

Sure, I'll just use 1/6th of my SSD to store 60 megabytes.

  $ du -hs /boot/
  56M	/boot/
If 512M is not enough space for /boot you're doing something wrong.

>If 512M is not enough space for /boot you're doing something wrong.

I don't know what planet you're living on, but it's certainly not this one. Between an Ubuntu desktop, a laptop, and a personal server with multiple Ubuntu VMs on it, all of which are kept rigorously up to date, I fix this problem at least three times a year, every year.

The command-line process to fix it [1] is a multi-stage mess of dense bash-fu that comes with a 140-word, two-paragraph explanation so that Ubuntu veterans can figure out what is going on without resorting to scouring the man page for flags. The friendly GUI process to fix it relies on a third-party tool that is no longer maintained [2].

It is not possible to explain to non-technical users what is happening here, which means the only thing they can do when they see this is call their technical friend and cry for help. This is exactly the kind of user experience that makes people think Linux is not ready for widespread desktop use.

This is definitely something the OS should take care of itself. I'm ignorant of the challenges that caused it to be this way in the first place, but in my ignorance I would advocate that:

a) the partition be made larger by default

b) the OS auto-purge any kernel package more than three revisions old

[1] https://askubuntu.com/questions/89710/how-do-i-free-up-more-...

[2] https://launchpad.net/ubuntu-tweak/

Here is my old-timey one-liner personal solution[0] for it that has worked flawlessly so far, obscure theoretical edge-cases be damned, because the non-edge case situation is just awfully worse and practically impactful.

(warning, rant inside)

[0] https://gist.github.com/lloeki/520acee8ba3b44c532c7

Um, isn't the fix `sudo apt auto-remove --purge`, which autodetects unused kernels? What am I missing?

If you do not run that command before /boot fills up, and you have a full /boot with a partially installed kernel, then that command fails. So this works fine if you remember to call it regularly, but it does not solve the problem once it occurs.

Interesting. I haven't encountered that edge case. I've many times filled /boot and resolved by doing an auto remove.

It seems silly to me that I need to manage this myself. Why do I need to be worrying about different kernel versions? I just want to make websites.

Following the chain of links and answers and explanations, we come to the conf file whose comments say it commonly results in two (2) kernels being saved, but can sometimes result in three (3) being saved.

IOW, it does automatically remove old kernels, it just keeps the last 2-3.

So, yes, run "apt-get autoremove", that's it.

I think it has solved the problem for me, but it's still not a good solution for anyone who would answer "What's a terminal?"

I love having a terminal with bash and use it constantly, but I don't think it should be needed for the system to just go on working.

I've been using Ubuntu either part or full time since 2007. I've literally never encountered this.

Which is not to say you're lying, I'm just sort of flabbergasted that this is an issue for so many people. Do you run autoremove much? Maybe that would solve it for you?

I run ubuntu 16.04 on a laptop, desktop and a TV streamer and I get this all the time. My boot partition on the desktop is 15gig and it gets plugged every now and then.

I've hit this before, but honestly do not think it's a big deal. Sure the installer could default to a larger boot, but it's manually configurable during install. And cleaning it up once in a while is just good sys admin practice.

sudo bash -c "apt auto-remove --purge; apt update; apt upgrade" is what I usually run.

Prefer they focus engineering cycles on actual engineering problems.

Sorry, but Ubuntu is doing something wrong here, not me. This should be handled automatically. Ubuntu wants to be the system for everybody, but you can't expect people to open the terminal and fix this manually. Making boot 20GB is ridiculous, but 1GB should be no problem, and for me 2GB would be OK if that means that this problem will disappear forever.

And I believe my boot partition is only 256MB, and I didn't set it to that. That was a system default.

Ubuntu is absolutely doing something wrong here and we'll get that fixed. Thanks!

Yup! That "something wrong" is installing every single kernel update for two, three, four years and not deleting any of the old kernels.

Super common in enterprise deployments. I ran into this a bunch on my $EMPLOYER-issued workstation.

The installer should handle this. When you apt-get upgrade anything besides the kernel, does it leave the old version lying around?

I understand that it may be wise to keep the old kernel around so the system can be booted in case there is a hardware incompatibility or breakage in the new release, but that justifies only one additional kernel. Ubuntu keeps those kernels sitting there until you `apt-get autoremove`, and that means that unless you're running that command routinely, the boot partition is going to fill up at some point, no matter how big you make it.
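
One stopgap until the OS handles this itself is a root crontab entry that runs the cleanup on a schedule. A purely illustrative fragment, subject to the caveats about unattended autoremove raised elsewhere in this thread:

```
# m h dom mon dow   command   -- run weekly, early Sunday morning
0 3 * * 0   apt-get -y autoremove --purge
```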

This is especially a problem for people who use the unattended-upgrades package. I've autoremoved and had it clean up almost a gig of old kernel images before.

If you're running updates weekly it will fill up on Ubuntu. This is a recent problem, and I've only experienced it on my laptop with full-disk encryption.

The update process generates on the order of 100mb/month.

It's not new. It's been happening to me since I started using Ubuntu in the 8.x range.

Doesn't `apt-get autoremove` remove those old kernels? Not that it's a solution; it should of course be done automatically! Here's what I get when using it:

    > apt-get autoremove
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be REMOVED:
      linux-headers-3.19.0-79 linux-headers-3.19.0-79-generic
      linux-image-3.19.0-78-generic linux-image-3.19.0-79-generic
      linux-image-extra-3.19.0-78-generic linux-image-extra-3.19.0-79-generic
    0 upgraded, 0 newly installed, 24 to remove and 39 not upgraded.
    After this operation, 1,732 MB disk space will be freed.
    Do you want to continue? [Y/n]

Whenever a kernel is updated autoremove should be called immediately afterwards. It should be called before the restart now / restart later dialog box of update-notifier appears.

Currently, Ubuntu installs a new kernel and update-notifier tells the user a reboot is needed. The autoremove notification only appears when using the terminal which explains why users are running into this issue. Also, update-notifier informs the user another reboot is needed after autoremove is run.

To avoid this mess I’ve commented out the lines of /etc/apt/apt.conf.d/99update-notifier and wrote my own updater using bash and zenity and incorporated needsrestart. It’s not pretty but it works.

Absolutely not; automatically running autoremove may lead to bad things. On occasion autoremove flags other, more useful packages for removal.

For example, I'm using LVM with my installation on my Ubuntu laptop, and after updating the kernel and running "apt autoremove" it removed the LVM package, leaving me scratching my head briefly on reboot as to why it wouldn't find my root filesystem (frankly, I have no idea how it became "unneeded").

A more sensible approach is how Red Hat does it with YUM/DNF, that is, to allow a certain number of the same package to be installed ("installonly_limit" in yum.conf). Doing this means that when a new kernel gets installed, the oldest is removed to keep the system at the limit specified.

On my RHEL/CentOS machines I tend to narrowly provision /boot to around 250-500MB, set "installonly_limit" to 2, and the system will keep the most recent kernel and one back. It works for me.
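
For reference, the corresponding config fragment (the same key also works in /etc/dnf/dnf.conf on newer Fedora):

```
# /etc/yum.conf -- keep at most 2 versions of "install-only" packages
# such as the kernel; the oldest is trimmed when a new one is installed
installonly_limit=2
```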

I see your point, though I too use LVM and haven't seen that happen... weird. I could have been more exact with my response, as autoremove does more than just remove old kernels. Anyway, it would be nice to see Canonical resolve this.

Care to share it? Maybe it could help others...

I thought about sharing it but like I said it’s not pretty. It involves editing sudoers and holding back config updates for sudoers and update-notifier-common which might cause problems in the future if you’re not aware. I’d much rather see Canonical address it properly.

>Doesn't `apt-get autoremove` remove those old kernels?

Of course it doesn't! Why would you assume such a silly thing? /s https://askubuntu.com/questions/563483/why-doesnt-apt-get-au...

I confirm that it doesn't autoremove. I had to empty /boot on some servers lately.

Anyway, sometimes one wants to keep old kernels. I have an old laptop that runs OK with a 3.something kernel and has weird video sync problems with any newer ones. Ubuntu 16.04 keeps running with that old kernel, so I keep booting from that, maybe once or twice per year.

However the proper solution would be pinning a package and autoremoving the others.

Yes, it does remove old kernels. Read the very link you posted.

> It's better to err on the side of saving too many kernels than saving too few

But Muh Freedoms! I hate to be subject to one man's opinion of things /s

It's also kind of a garbage argument.

People who know they have broken kernels don't keep upgrading them, they stop and fix them.

People who don't know they have broken kernels also don't know they can boot with an older kernel, so they get nothing from the "backup".

We want to leave some time for people to realize their kernel is broken, so keeping three is probably just fine. Honestly, it would probably be adequate to just bump the oldest one off the queue whenever a newer one is requested. If you've got a tiny boot partition, maybe that means only two revisions. If you've got a huge boot partition it could be 20.

But just keeping them all and making people manually uninstall them gains you nothing, it's user-hostile for no reason.

Except, it would be nice to keep a few of the (recent) older kernels, in case things go awry with the new update.

This already happens: apt autoremove won't remove the package for the running kernel. It'll clean up "old" (N-1 and lower) kernels, but installing kernel N+1 won't allow kernel N to be deleted as long as kernel N is still executing.

Once you reboot/kexec into the N+1 kernel, it'll let you remove the N (now N-1) kernel, bringing you down to one. But at that point you've proven the new kernel works—at least well enough to get to a shell you can run apt autoremove from.

This is why autoremove isn't so auto: if it happened automatically after reboot, it might be running on a now-wedged system (e.g. one that can't bring up the display manager), removing the last-known-good kernel and leaving you with only the broken one.

I think the right middle-ground solution would just be for installing kernel updates to touch a file, and for Desktop Environments to notice that file and trigger a dialog prompt of "you've just rebooted into a new kernel. Everything good?"—where answering "yes" runs apt autoremove. On a wedged system, you can't answer the prompt, so the system won't drop the old kernel. (In other words, just copy the "your display settings were changed. Can you read this?" prompt. It's a great design!)
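
A minimal sketch of that idea, assuming a per-user stamp file and a zenity prompt (all names and paths here are made up):

```shell
#!/bin/sh
# needs_confirmation KERNEL STAMP_FILE
# True (exit 0) when the running kernel differs from the last one the
# user confirmed as good, i.e. when the prompt should be shown.
needs_confirmation() {
    [ "$(cat "$2" 2>/dev/null)" != "$1" ]
}

# From a desktop autostart entry it might be wired up like this
# (left commented out; requires a GUI session):
# STAMP="$HOME/.config/confirmed-kernel"
# if needs_confirmation "$(uname -r)" "$STAMP"; then
#     zenity --question --text "Rebooted into $(uname -r). Everything OK?" \
#         && { uname -r > "$STAMP"; pkexec apt-get -y autoremove; }
# fi
```

On a wedged system the dialog never gets answered, so the stamp file never updates and the old kernel survives, which is the whole point.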

Fedora/RHEL yum has a much better solution: installonly_limit, defaulting to 3. Kernels which have been updated will only be kept up to this depth. The excess are automatically trimmed during update.

Wouldn't a good solution then be to run autoremove before installing a new kernel?

That way, you have kernel N running, first autoremove wipes kernels N-1 and older, then it installs kernel N+1, so that when you reboot into N+1, you'll always have known-good kernel N if it doesn't work.

It's a very similar solution to how a good programmer solves an off-by-one error, doing a shift/rotate shuffle on a for/while loop.

What happens when you have a high-uptime system where you repeatedly "apt dist-upgrade" and end up installing packages for kernels N+1, N+2, N+3, etc., all without rebooting into any of them?

I agree that if the user manually runs an apt [dist-]upgrade—or really any manual apt command—that that's a good time to do apt maintenance work. (Homebrew does maintenance work whenever you invoke it and there haven't been any complaints so far.) But kernels usually get installed automatically, so it can't just run then.

Now, if there was a specific concept of a "last-known good kernel" (imagine, say, the grub package generating+installing a virtual package when you run grub-install, that depends on whatever kernel you specified as your recovery kernel, ensuring it remains around), then your approach could work—you'd always have two kernels, the LKG for a recovery boot, and the newest for a regular boot.

Exactly what happens on Fedora.

I agree.

I'm running Ubuntu 16.10 currently. A kernel upgrade hosed my setup yesterday, and having an older kernel available saved my butt. I was able to do another `apt-get update` and things eventually worked with the latest kernel.

For Ubuntu Desktop, it may make sense for the package manager to keep only the latest 2 or 3 kernels, and automatically purge the rest.

I had the /boot filling up problem but had thought it was fixed, I'm on 16.04+. I'm pretty sure the last two kernel updates I did removed older kernels leaving me with the current one and previous one ... ?

You can configure apt unattended upgrades to autoremove by default, perhaps you did that?

Nope, still doesn't do it without manually invoking autoremove.

This is the main problem that keeps me from wanting to set up less technical family members on Ubuntu. It's possible to get in a spot where even a simple command won't solve this.

Solus uses https://github.com/ikeydoherty/clr-boot-manager now, which purges old kernels and modules, but keeps the modules for the currently running system so HW still works

> The simplest fix would probably be to make /boot large enough by default (in the order of 10GB or 20GB or so -- the current size is 512MB IIRC).

What? This is ridiculous and unacceptable. I don't use Ubuntu anymore, can someone tell me what is filling up the boot partition?

I'm currently on ArchLinux and mine is 200MB and it's 14% full! I can't fathom what could occupy so much space.

It's the way kernel updates come in apt. A kernel update is a new package, not an upgrade of a previous kernel package. Thus the old kernels are left in place and the new ones installed alongside. After about 3 kernels have accumulated in /boot, the previously recommended size for /boot is full and the attempt to update to a new kernel fails.

It can be manually fixed by removing older kernels ("sudo apt purge ...").

Perhaps I'm mistaken, but I thought a fix was in place for this. Maybe it was something third-party, but apt definitely offered to remove unused kernel packages for me recently.

In other words: integrate purge-old-kernels


Maybe add some stats to know which kernels have booted successfully, so one knows which old ones can be safely deleted, and keep the last 1 or 2 good ones (not always the latest!).

$ apt-get install purge-old-kernels

    The program 'purge-old-kernels' is currently not installed.
    You can install it by typing: apt install byobu
"byobu"? packages.debian.org to the rescue...

https://packages.debian.org/jessie/byobu "Using Byobu, you can quickly create and move between different windows over a single SSH connection or TTY terminal, split each of those windows into multiple panes, monitor dozens of important statistics about your system, detach and reattach to sessions later while your programs continue to run in the background."

Uhm... what?

So I'm the author of both Byobu and purge-old-kernels. It's in Debian, because I help push it there after I push it to Ubuntu.

It's completely wrong that purge-old-kernels is in Byobu, rather than in the kernel or directly handled by dpkg/apt. Thanks for all the feedback here -- we'll get that cleaned up in 17.10!

Are you sure you typed exactly that?

For `apt-get install purge-old-kernels` I get

    E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
    E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
And for `sudo apt-get install purge-old-kernels`, I get

    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    E: Unable to locate package purge-old-kernels
But I can reproduce your error by `purge-old-kernels` alone. That is indeed strange.

EDIT: I straced bash by `strace -o bashlog -f -s 10000 bash` and found the culprit to be /usr/lib/command-not-found. Indeed, if you run `/usr/lib/command-not-found -- purge-old-kernels` directly, you get that same message about byobu.

EDIT2: I assumed this was some kind of bug with the database, but now I actually tried installing byobu and it does make purge-old-kernels available.

The first of the two errors has nothing to do with the package itself - packages need root permissions to be managed, which is exactly what the error is telling you:

    E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
In your case, you didn't prepend `sudo`.

Regarding the second, which is instead the core of the problem, see the man page (http://manpages.ubuntu.com/manpages/xenial/man1/purge-old-ke...):

    Provided by: byobu_5.106-0ubuntu1_all
Therefore the correct (and full) command is:

    sudo apt-get install byobu
As pointed out in another comment, such a package is arguably a poor place for this type of utility.

I'm aware that the first errors don't have anything to do with the package itself, but this was the command line given. I was wondering what kind of setup might lead to an apt-get command (without sudo even) to display that error message.

One case (which may be irrelevant in this context, but still a valid scenario) is auto-updates being executed in the background.

> "byobu"? packages.debian.org to the rescue...

You don't need to use packages.debian.org to look up what packages do.

    > apt show byobu
    ...
    Description: text window manager, shell multiplexer, integrated DevOps environment
     Byobu is Ubuntu's powerful text-based window manager, shell
     multiplexer, and integrated DevOps environment.

Quite right. One uses packages.ubuntu.com:

* http://packages.ubuntu.com/yakkety/all/byobu/filelist


byobu is basically an abstraction over screen and tmux, letting you use either with some common keybindings for spawning new windows and a common toolbar at the bottom telling you disk usage and load average and the like. I am not sure why purge-old-kernels is in the byobu package, that seems like a really poor placement for it.

It is a bad place for it. But it's there because I wrote both of them, and Byobu is always everywhere I want it to be, and I generally always want purge-old-kernels there too.

But yes, you're very right. It needs to be moved out.

Forgive the intrusion, just want to say thanks for Byobu. I use it regularly, and it's much nicer than bare tmux or screen.

:-) Thanks!

It shouldn't be needed. AFAIK part of the post-installation script of a new kernel already marks the oldest kernels for removal with `apt-get autoremove`. They just need to run it automatically after an upgrade.

Unfortunately, that's not enough. There are many corner cases where kernel packages hang around much longer than they should, for odd reasons.

Another related issue: I have helped get a few co-workers set up with Ubuntu on their laptops. Inevitably, once every few months, one comes to me and says "I just ran an update and the 'Restart' popup came up, so I restarted, now my laptop says 'No bootable devices found.'" This happens when Ubuntu is installed in UEFI mode. A kernel update sometimes wipes out the boot image. To fix it, I have to get into the BIOS and reselect a bootable UEFI image. This should never, ever happen.

I have a similar experience, but with VirtualBox in UEFI mode. After any restart, UEFI will complain that it cannot find anything bootable; I run the bootloader from the UEFI shell (VirtualBox does not have BIOS menus), run efibootmgr in the booted system to register it, just to have it lost at the next reboot and do the dance again.

Only Ubuntu does that, other linux distributions don't have this problem.

+1 - I didn't even know this was an issue. Usually I just use apt to update and run autoremove after I get a new kernel and verify it is working.

I recently installed Ubuntu on some old computers for my relatives and they really like it. If they keep updating and after a couple of months their system fails to work that will be a disaster for any good will they will have developed for Ubuntu.

This needs to be fixed NOW! How can Ubuntu even pretend to be a viable desktop operating systems if normal updating renders the system unusable?!

+1 then I can kill my custom "remove oldest kernels, except the running one, and leave at least two other kernels" script.

Yes, I had this same problem when I was on Xubuntu. Fedora and CentOS(and I'm assuming many other distros) seem to handle kernel updates just fine without forcing me to manually clean out old images from /boot periodically (was always too lazy to write a script).

What Fedora and CentOS do is a generic yum/dnf option: installonly_limit. It lets you cap the number of installed packages with the same name and different versions, kernel included.

would you mind sharing this in the meantime?

I've been using this for years. I believe it only keeps one old kernel version though, not two like you requested. Tested and working for weekly use since 12.04 though 16.04:

echo $(dpkg --list | grep linux-image | awk '{ print $2 }' | sort | sed -n '/'`uname -r`'/q;p') $(dpkg --list | grep linux-headers | awk '{ print $2 }' | sort -n | sed -n '/'"$(uname -r | sed "s/\([0-9.-]*\)-\([^0-9]\+\)/\1/")"'/q;p') | xargs sudo apt-get -y purge

I guarantee there's a way to get that pipe to purge all packages in your system.

This is the new way to do it:

    sudo purge-old-kernels --keep 3 -qy

I agree, I've had to deal with this issue many times. I think it has happened at least 4 times in the last year. A few times I had to manually go in and start deleting old kernels because it had completely run out of space, so apt couldn't do anything without crashing.

I eventually set up a cron script to regularly delete everything except the last 3 kernels. I think this should really be the default behavior. "Save all the old kernels until you run out of space and everything crashes" doesn't sound like a very sane default.
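
A minimal sketch of that kind of pruning, with the selection logic split out so it is inspectable on its own (package names below are illustrative; the actual purge line is deliberately left commented out):

```shell
#!/bin/sh
# prune_candidates N: read kernel package names on stdin, order them by
# version, and print all but the newest N -- the removal candidates.
prune_candidates() {
    sort -V | head -n "-$1"
}

# Real usage from cron would be roughly (run at your own risk):
# dpkg --list 'linux-image-[0-9]*' | awk '/^ii/ { print $2 }' \
#     | grep -v "$(uname -r)" | prune_candidates 2 \
#     | xargs -r sudo apt-get -y purge
```

Filtering out `uname -r` before pruning ensures the running kernel is never a removal candidate, even if it happens to be an old one.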

+1 as well. This made me brick my entire installation (entirely my own fault: trying to fix it with insufficient knowledge). It made me switch to another distribution.

Despite having cleaned out old kernels before, I spaced one day and accidentally removed the kernel I was running. This is fixable: boot a live distro, mount the relevant volumes (like your root disk to /mnt/foo, and your boot partition to /mnt/foo/boot), then, after a chroot to /mnt/foo, you can re-install the kernel. Here is an article describing it (but it misses mounting the boot partition): http://askubuntu.com/questions/28099/how-to-restore-a-system...

+1. This is one of my major annoyances w/ Ubuntu. Been using it 3 years now and this seems like a fundamental issue that should be fixed.

It's user error on your part. The proper way to upgrade a Debian/Ubuntu system is:

  $ apt-get dist-upgrade
  $ apt-get autoremove
dist-upgrade installs new kernels, and autoremove will automatically remove old kernels (and keep the last 2 most recent ones.)

I like to use apt-get dist-upgrade --auto-remove, all in one step. Though for kernels they will be removed on the next invocation after reboot as the default logic keeps the last installed kernel version as well as the current one.

As for the original issue of not cleaning up the kernels, this is fixed in xenial/16.04 but not in trusty/14.04, see bug report here: https://bugs.launchpad.net/ubuntu/+source/update-manager/+bu...

Xenial/16.04 has other issues regarding kernel cleanup involving DKMS leaving files behind:



autoremove followed by dist-upgrade to 16.04 for me somehow decided to build multiple kernel versions and ran out of space on /boot mid build. That was really annoying to fix.

There is a lot of confusion in this thread.

dist-upgrade is NOT (in most cases) meant to upgrade to a newer distribution (eg. Ubuntu 14.04 to 16.04). dist-upgrade is just like upgrade except it also installs additional packages if necessary (eg. linux-kernel-4.1 AND linux-kernel-4.2).

Most people should always run "apt-get dist-upgrade" and never "apt-get upgrade" in order to simply keep their packages up-to-date.

An actual distribution upgrade is triggered by a different command. On Ubuntu: do-release-upgrade

dist-upgrade shouldn't try and move you to another release version, do-release-upgrade alone does that, as far as I know. Things get a little more confusing using apt update/upgrade vs. apt-get update/dist-upgrade, they don't seem to be quite the same in all cases for me. But I agree with the general frustration that it shouldn't be necessary to run apt(-get) autoremove frequently to keep /boot from filling up with old kernels.

Run autoremove after dist-upgrade.

I quit using Ubuntu as a Desktop OS because this was so obnoxious. I would gladly return if they fix this.

This will get fixed. Mark my words ;-)

Awesome! thanks, looking forward to it!


This is a feature (kernel update) that is used by everyone, including newbs. They don't have to understand anything to enjoy the update, and they shouldn't have to understand anything to avoid the space filling issue.

Lots of ways to fix it, I don't claim to know which is best:

* Always leave N% /boot available, and delete or move old kernels to satisfy that.

* Move old kernels to /old_kernels, outside of /boot. Driven either by satisfying N% space, or no more than K number of kernels kept in /boot.

* Opt in, opt out, ask this on installation and for non-server installs, default to "never have to think about this again."

* Easily configurable in the "whatever that box was called when I last used Ubuntu years ago" box.

Sounds like you're keeping too many old kernels; I've had this problem too in the past. Try this nice little one-liner:

    dpkg --list | grep linux-image | awk '{ print $2 }' | sort -V | sed -n '/'`uname -r`'/q;p' | xargs sudo apt-get -y purge

really, this is what your package manager should do automatically before/after installing new kernels

for your reference: http://askubuntu.com/questions/2793/how-do-i-remove-old-kern...

That's not a good one liner, because it doesn't do anything about the linux-headers packages (which are hundreds of thousands of small files) or the linux-image-extra packages.

Just run "apt-get autoremove".
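
If autoremove doesn't cut it, the filtering idea from that one-liner can be extended to also catch the headers and -extra packages. A hedged sketch, not an official tool -- the package-name patterns are assumptions about a typical Ubuntu install, and if the running kernel's package is missing from the list it will print everything, so always inspect the output against `uname -r` before purging anything:

```shell
# Print kernel-related packages older than the running kernel version.
# Reads package names on stdin; prints candidates for `sudo apt-get purge`.
list_old_kernel_pkgs() {
    current="$1"          # e.g. "$(uname -r)" -> 4.4.0-62-generic
    pkgs=$(cat)           # buffer stdin so each prefix can be filtered separately
    for prefix in linux-image linux-headers linux-image-extra; do
        printf '%s\n' "$pkgs" |
            grep -E "^${prefix}-[0-9]" |   # only versioned packages of this type
            sort -V |                      # version sort, oldest first
            sed -n "/$current/q;p"         # print everything before the running one
    done
}

# Typical (illustrative) use -- inspect the output before purging:
#   dpkg --list | awk '/^ii/ { print $2 }' | list_old_kernel_pkgs "$(uname -r)"
```

Filtering each package type separately matters: a plain version sort over the mixed list would put all linux-headers-* entries before all linux-image-* entries, so the "quit at current version" trick would cut the list in the wrong place.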

This would be fantastic. This problem has no official or easy solution for non-technical people and delays updates which could cause security issues. Please, please provide a fix for this!

-FLAVOR: and Ubuntu Server +1

Bit late to the party, but I believe this was fixed in xenial/16.04 but not trusty/14.04

The relevant bug report is here: https://bugs.launchpad.net/ubuntu/+source/update-manager/+bu...

Side note, give up on a separate /boot in future, not needed 99.9% of the time.

Indeed this is due to missing options in the installer: with some manual fiddling, it's long been possible to make grub boot from encrypted disk (thus removing the need for a separate /boot and its space issues): https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1062623

You could probably fix this on your system without installing from scratch, but it would take some careful planning (mostly wrt backups!). You'd need to boot from a recovery cd or such, copy /boot into /, edit /etc/fstab accordingly, and then follow the last few steps in the bug report above (from the chrooting onwards). I'd probably test this in a VM first.

This might be aggravated by a DKMS bug found in v2.2.0.3, which happens to be the version shipped in Ubuntu 16.04 LTS (and maybe other releases). It doesn't remove old initrd files in /boot, which leads to a full /boot and subsequent update problems.

I manually removed a bunch of old initrd images the other day.



They've got a bug reported upstream. We need this fixed in Ubuntu

Ubuntu Launchpad bug here regarding DKMS leaving old files to fill /boot:


You can add yourself to the list of people affected to increase the Bug Heat and get this issue fixed.

Why do you have a /boot partition?

  $ mountpoint /boot
  /boot is not a mountpoint
I don't know if this is the default, but my KUbuntu machines have been fine for many years without a separate /boot.

If you want to install with full disk encryption, you normally want a /boot partition.

In theory, GRUB can load kernels off of a LUKS-encrypted partition, but in practice I've never managed to set that up without having two passphrase prompts, one from GRUB and one for mounting the root filesystem under Linux.
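
For reference, the two pieces involved are a GRUB flag and a keyfile slipped into the initramfs so Linux can reuse it for the root mount. A rough sketch of the config side only -- device names, the keyfile path, and the crypttab line are all illustrative, and the exact steps vary by release:

```shell
# 1) /etc/default/grub -- allow GRUB to unlock LUKS and read /boot from the
#    encrypted disk (this is the first passphrase prompt, from GRUB itself):
#       GRUB_ENABLE_CRYPTODISK=y
#    then: sudo update-grub

# 2) To avoid the second prompt (the initramfs asking again), the usual trick
#    is adding a keyfile as an extra LUKS key and referencing it from
#    /etc/crypttab so the initramfs can unlock root without asking:
#       sudo dd if=/dev/urandom of=/etc/luks.key bs=512 count=8
#       sudo chmod 0400 /etc/luks.key
#       sudo cryptsetup luksAddKey /dev/sda2 /etc/luks.key
#       # /etc/crypttab:  sda2_crypt UUID=...  /etc/luks.key  luks
#       sudo update-initramfs -u
```

Note the trade-off: the keyfile lives on the encrypted root, so it's only exposed after GRUB's passphrase unlock, but the initramfs containing a reference to it needs to be handled with care.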

I try to do the same, but sometimes there's no way around it. LUKS comes to mind: can't boot an encrypted kernel because EFI/BIOS has no decryption facilities. I'm sure there are other cases, but this is the only one that comes to mind.

Ubuntu mounts my EFI partition as /boot.


I don't use Ubuntu, but in Fedora it keeps only a handful of old kernels (i.e. one or two back) and deletes the rest. I guess Ubuntu just holds onto every kernel you've ever had since the beginning of time?

Why is /boot still a separate partition? In most cases it doesn't need to be.


omg, I am on 14.04 LTS and until now I didn't believe it wasn't fixed in newer releases! :)

I've also encountered this issue, and I assumed it was a bug to do with the fact that I sometimes use apt-get and sometimes accept the Ubuntu GUI software update prompts.

If people are encountering it on this wide a scale, it must be a truly severe problem (that's also probably causing many people who haven't learned about apt-get to give up on software updates entirely!).

This is the primary reason I've recently replaced Ubuntu as the OS on my home server/NAS. I'd be happy to come back!

This is an annoying problem indeed, but there is an easy workaround. Here is a script I use whenever I run out of space on the /boot partition:


Ubuntu should just integrate Debian's /etc/kernel/postinst.d/apt-auto-removal script.
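
For context, that hook writes an APT config file listing which kernel versions must never be auto-removed (typically the running one and the newest installed), so autoremove can safely reap the rest. Roughly, the generated file looks like this (the version strings here are illustrative):

```
// /etc/apt/apt.conf.d/01autoremove-kernels (generated -- do not edit by hand)
APT::NeverAutoRemove
{
   "^linux-image-4\.4\.0-62-generic$";
   "^linux-headers-4\.4\.0-62-generic$";
   "^linux-image-4\.4\.0-66-generic$";
   "^linux-headers-4\.4\.0-66-generic$";
};
```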


This more than anything. I believe it is this issue (at least for me): https://bugs.launchpad.net/ubuntu/+source/unattended-upgrade...

+1 I recently had this issue. I have Ubuntu on my machine (not dual boot) and I installed with all the default options. I hardly installed any heavy applications and it started saying it cannot install updates as /boot is full. Thanks for bringing this up.

The old kernels should be purged automatically on desktops. That really is an annoying issue.

I used to use Arch and seriously one of my favourite things to see was an update that used negative disk space. Apt has come a long way, but this DEFINITELY needs to be fixed.

I had this problem on a previous work laptop. Very annoying. I thought it was the local IT people who had installed it wrong.

apt autoremove will clean it up.

Only if you catch it in time, before you've completely run out of space. `apt autoremove` will crash if there is not enough space on the disk.

Which would be another worthwhile fix: some way to run `apt remove` or `apt autoremove` in a zero-free-space environment. It could detect the condition, temporarily move some data (like /usr/src) into memory while it works, then put it back.
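
One low-tech escape hatch when apt itself can't run: free just enough space by hand-deleting the oldest unused initrd, then let apt finish the job. The selection step can be scripted; a cautious sketch (the recovery sequence in the comments is illustrative -- always double-check the result against `uname -r` before deleting anything):

```shell
# Print the version-oldest initrd image from a listing of /boot,
# as a candidate to delete by hand when /boot is 100% full.
oldest_initrd() {
    grep '^initrd\.img-' |  # keep only initrd entries (skip vmlinuz, config, ...)
        sort -V |           # version sort, oldest first
        head -n 1
}

# Illustrative recovery sequence (inspect before removing!):
#   ls /boot | oldest_initrd
#   sudo rm "/boot/$(ls /boot | oldest_initrd)"
#   sudo dpkg --configure -a && sudo apt-get autoremove --purge
```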

I've never noticed autoremove removing old kernel images in /boot.

+1 ... plus a nice command to clean out unused kernels


Yes! Please!!


- FLAVOR: Ubuntu Desktop

- HEADLINE: More stable dock/undock and sleep/wake handling.


I've noticed that my system often hangs unrecoverably with a blank screen during dock/undock and sleep/wake events. I've learned, though, that I can reduce the likelihood of having problems by trying to minimize the number of state changes that the system has to handle at once. For example, if I'm leaving the house with the laptop, I'll first open the lid, wait 10 seconds to see if the display wants to turn on or not, undock it, wait 10 seconds for it to adjust, and only then put it to sleep. Same thing waking it up: one step at a time, with 10 second pauses in between. Seems to reduce my problems by about 90%. As a developer, this screams "race conditions" to me, but what do I know? If there's a bug filed for this already, I wouldn't know -- no idea what I'd search for.

I take the uptime game pretty seriously: having to reboot means that I lose a ton of context. Right now, I've got nine separate workspaces/desktops going, all with several browser, terminal, etc. windows. A reboot means I'll spend anywhere from 10 to 20 minutes installing updates and recovering all of that state. It's painful. Right now, my system has only been up for 9 days, which is weak sauce.

- ROLE/AFFILIATION: software developer in the federal government

It's kind of crazy how long this has been a problem and across many different hardware configs. Sleep doesn't work on my desktop or on a windows laptop with standard intel everything.

As a side note, the same issue persists on Windows with a current-gen HP EliteBook and Office. Every time I undock the notebook, MS Outlook gets disconnected and simply refuses to reconnect. The only option is to close Outlook and open it again -- but waiting 30s between closing and starting, otherwise some locks on the PST file are still in place and Outlook hangs on start forever. The only way to fix it then is a Windows reboot. Killing the QA department was the dumbest idea ever; now things are so buggy.

Just want to note I have similar problems on macOS, especially when using multiple monitors, i.e. one monitor will work and the other won't until I restart.

Yeah, I've had intermittent suspend/resume issues with nearly every laptop I've tried linux on.

My current xps 13 is the only one I've ever used where it works 100% reliably.

Interesting. I did a test run of the new xps 13 the other day and it had all sorts of issues with a stock install.

What sorts of issues? I've tried a multitude of distros on it and provided that it has a recent kernel (4.8 or newer, so if you tried 16.04 you'd want to make sure you're running the HWE stack) everything seems to work really well.

The only hardware related issue I've had has been static background noise from the headphone jack (which I also see on a clean install of windows), but I was able to get past that with the steps here:


I had this same problem. I switched back to nouveau drivers for my Nvidia card, instead of the proprietary drivers, and everything works perfectly. I also seem to be getting better overall performance in day-to-day desktop activities and battery life. I don't really do anything that needs 3d other than the desktop compositor.

When I was using the non-open-source drivers, Ctrl-Alt-F# to change virtual terminals would sometimes help. I would switch to a random virtual terminal and back, and sometimes it would be working again.

Sadly, the ctrl+alt+f# trick doesn't work for me (OP). I wish!

And btw (should have mentioned this): ThinkPad W530 running nVidia drivers.

nVidia soft hang issues are super annoying. since mine is always plugged in, i just set my laptop to never suspend.

+1 Happens all the time. Can't tell specifically what the cause is. Most common seems to be the display hangs after unplugging HDMI while the laptop was in sleep mode.

Sometimes after waking, I've found I can access my computer for a good while before the lock screen turns on.


My Ubuntu laptop has now become a stationary computer permanently stuck to the docking station, and I never put it to sleep because I'm scared every time I unplug or plug in an external device. Anything from the most simple USB keyboard to ethernet, a VGA monitor, HDMI/DP/DVI monitors, a USB3 docking station, closing the laptop lid, waking from sleep... EVERYTHING has a big chance that the hot plug will fail, and you either have to reboot for the device to be detected or, in the worst case, the OS just freezes.

This is a must fix for ubuntu to ever be viable as a desktop OS.

I use tmux-resurrect for this exact reason. Between that and Chrome's session restore I end up in an OK state after a reboot.

+1 for fixing this issue, just trying to work around it in the meantime.

Yes please!! So painful. I also have dozens of applications open, all the time, which I lose when I dock/undock when in sleep...

So much this! _If_ it succeeds in waking up from standby, only 2 of my 3 monitors work, this means I end up rebooting anyways...

This is actually one of the main reasons I am thinking of switching to a Mac. It's really hard to live with all these 'black-screen-after-lock' moments. If that could work as it does in MacOS/Windows, I'd probably never switch.

I am experiencing this when I resume with a VPN connection on.

Even though it doesn't take me much time to recover state, it's really annoying to know that I have to figure out all these issues any time I start using a Linux laptop.

My laptop, every time I wake it from sleep, starts but doesn't turn on the screen. I have to close and reopen the lid, then it turns on the LCD.

This is one of the main reasons I don't use Linux on a laptop.

A huge +.
