Ubuntu 16.04 (Xenial Xerus) (ubuntu.com)
437 points by ctpide on April 21, 2016 | 287 comments



PSA: If you're running an HTTP/2 server like NGINX on 14.04 LTS, you'll want to upgrade to this release.

Google Chrome will no longer support HTTP/2 on vanilla 14.04 after May 15th [0], even if you're using the latest official upstream NGINX packages. This is because 14.04 ships with a version of OpenSSL that does not support the ALPN extension (prior to OpenSSL 1.0.2 you're limited to NPN, now deprecated). There was a bit of back-and-forth about the exact date, as the change was originally scheduled for earlier. However, Chrome decided to specifically push back the date so that there would be an Ubuntu LTS release available with the required support [1]. If you're still stuck on SPDY, that's going to be dropped too, so there's really no good reason not to simply use HTTP/2 at this point.

[0] http://blog.chromium.org/2016/02/transitioning-from-spdy-to-...

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=557197
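
A quick way to check what this means for your own server is to ask for "h2" via ALPN from a client machine. This is a rough sketch assuming an OpenSSL 1.0.2+ client and a placeholder hostname; not the only way to test, just a convenient one:

  # client-side OpenSSL must be 1.0.2+ for the -alpn flag to exist
  openssl version
  # request "h2" via ALPN; a working HTTP/2 setup should report the negotiated
  # protocol (e.g. an "ALPN protocol: h2" line)
  echo | openssl s_client -alpn h2 -connect example.com:443 2>/dev/null | grep -i alpn
If nothing ALPN-related comes back, Chrome will silently fall back to HTTP/1.1 after the cutoff.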


It looks like Ubuntu 16.04 comes with Nginx 1.9.15, which is both not the latest stable release (it's a development, aka mainline, release, although Nginx's development branch is pretty stable) and one minor version ahead of Nginx's own development PPA, which is at 1.9.14.

The ppa[1] notes there's a newer version[2] also

[1] https://launchpad.net/~nginx/+archive/ubuntu/development [2] https://launchpad.net/ubuntu/+source/nginx/1.9.15-0ubuntu1


No, many people get this wrong. Mainline is stable, and http://hg.nginx.org/nginx/ is dev. The labeled stable version is for distros that have strict upgrade policies.

nginx recommends using mainline over stable: "We recommend that in general you deploy the NGINX mainline branch at all times." [1]

[1]: https://www.nginx.com/blog/nginx-1-6-1-7-released/


In the release notes they mention that they will upgrade to Nginx 1.10 when it is released: https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#Nginx


> It looks like Ubuntu 16.04 comes with Nginx 1.9.15, which is both not the latest stable release (it's a development, aka MAINLINE, release, although Nginx development branch is pretty stable)

The stable branch will fork from the mainline branch shortly. The version shipped in 16.04 is very close to what the stable will be, because the fork hadn't taken place before 16.04's release. I expect there to be very few changes, which is why (as someone else pointed out) we expect to update 16.04 to the stable branch as soon as it is available.

> The ppa[1] notes there's a newer version[2] also

That's just noting that the version released in 16.04 is newer than the version provided in the PPA.

> Nginx's own development PPA

Actually it's a PPA maintained by a team that cares about Nginx's availability in Ubuntu. In this case, the uploads to that PPA were made by the very same person who looks after the official Ubuntu Nginx packages that are available to Ubuntu users by default.


If you use 14.04, you usually upgrade at the first point release, 16.04.1, not now at 16.04.0. Only 15.10 will immediately suggest an upgrade.

Is this outdated or not applicable to servers?


That is correct: upgrades are not enabled between LTS releases at this time, for servers as well AFAIK.

Either way, personally I would never upgrade a server in place these days. Treat your servers like cattle not pets: Rebuild from new base image, validate, put into LB/proxy, terminate old stack.


This. Our upgrade path was changing a variable in our packer config.

Literally the easiest upgrade ever.


You should hold off doing in-place upgrades using "do-release-upgrade" until 16.04.1 (due August/September).

However, you can also "upgrade" your stack by building a new image using 16.04 from scratch, and that doesn't need to wait until 16.04.1.


You can also force the upgrade with do-release-upgrade -d


For some software like Nginx you should use the official repo from the Nginx team and not rely on your distro repos.

http://nginx.org/en/linux_packages.html


If you re-read the parent comment: even using the official repo from the nginx team, you will be impacted. The issue is with OpenSSL, not nginx.


Care to explain why? There is a huge amount of provisioning and deployment software that relies on distro repos.


To be clear though, at the moment (before May 15th), there is still better browser support for SPDY than for HTTP/2:

http://caniuse.com/#feat=spdy (77.39% global)

http://caniuse.com/#feat=http2 (70.15% global)


Why isn't just upgrading OpenSSL to version 1.0.2 enough? Seems easier than a wholesale OS upgrade.


With OpenSSL's complete lack of anything resembling a stable ABI, and its popularity, there is no meaningful difference between an OpenSSL upgrade and a wholesale OS upgrade.


Is http2 supported on xenial now? It wasn't as of beta1 -- http2 was considered 'experimental' and wasn't included in the builds. We're using the PPA instead.


My understanding is that HTTP2 support in Apache was held out of Xenial but may be included later.[0] nginx in Xenial does have HTTP/2 support but no SPDY.

[0] https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#HTTP.2F2_su...


Xenial seems to include nginx 1.9.15 and OpenSSL 1.0.2, so it should fully support HTTP/2. Personally I would still use the official upstream nginx packages.
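
For reference, once nginx is linked against OpenSSL 1.0.2, enabling HTTP/2 is a one-word change per listen directive (nginx 1.9.5+ replaced the old spdy flag with http2). A minimal sketch with placeholder hostname and certificate paths:

  server {
      listen 443 ssl http2;
      server_name example.com;
      ssl_certificate     /etc/ssl/certs/example.com.pem;
      ssl_certificate_key /etc/ssl/private/example.com.key;
  }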


IIRC, xenial beta1 included nginx 1.9.12 with http2 explicitly disabled.


Fortunately the support was enabled in the final release: https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1565043


> Google Chrome will no longer support HTTP/2 on vanilla 14.04 after May 15th

Does this mean 14.04 with Apache 2.2 is affected? Their blog doesn't explain and leaves plenty of people confused...


I don't think so. If it was, half the internet would go down for Chrome users.


Apache 2.2 is not impacted as it does not support HTTP/2, only SPDY via an optional module (which will stop working).

Even in cases where a HTTP/2 or SPDY connection will no longer be established for Chrome users, the browser will fall back to HTTP 1.1. Unless you're using specific HTTP/2 features, the main impact will be decreased performance.


Notes from my 15.10 -> 16.04 upgrade:

1) If you use the nvidia drivers from the graphics-drivers PPA, starting the default non-root X server will hang with no graphics output. Installing xserver-xorg-legacy fixes this.

2) LXC+Linux 4.4 seems to be very broken: https://github.com/lxc/lxd/issues/1666#issuecomment-21290311...

3) Pulseaudio now uses shared memory and playing audio inside a firejail will break the pulseaudio server: https://github.com/netblue30/firejail/issues/69#issuecomment...


> LXC+Linux 4.4 seems to be very broken

Seeing as my main use for my current 14.04 install is LXC containers, I think I will hold off a little then.

Thanks for the warning :)


Yeah, we've seen similar issues with runC on point 2. I don't think it's an upstream kernel issue as it didn't happen on openSUSE Tumbleweed when we had 4.4 (we're on 4.5 now). So presumably it's some patch Ubuntu applies.


No new Ubuntu release would be complete without graphics issues from Nvidia and AMD. I say AMD too because FGLRX is out of 16.04.

After all the issues I had with Intel/AMD/Nvidia switching GPUs in the Ivy Bridge days, I just gave up buying Nvidia and AMD, and boy has it made Linux easier.


For anyone packaging software on Linux, this now means every major distro - Debian, RHEL/CentOS, Arch and now Ubuntu - supports .service files.

No need for bash scripts, custom watchdog and daemonise tools, etc.


> No need for bash scripts

Friends don't let friends write shell scripts targeting bash.

For context: Bash is not available|installed everywhere, and has some inter-version weirdness.

Write clean, posix-compliant shell scripts (i.e. target /bin/sh commonly referred to as bourne shell) and you're in a much better position.

On Debian your script will be run by Dash, on OS X it will be run by Bash, on Ubuntu or RedHat it could be different again, but the point is, they are specifically running in POSIX mode, so that you get reliable, reproducible results across systems and even across Bash versions.


And test them on a shell that isn't bash. Even with the --posix option bash accepts non-standard bashisms, particularly the execrable 'extension' of '>&'.
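
A concrete example of the sort of thing that slips through (command names and paths are placeholders; dash rejects the bash-only redirection with a syntax error):

  # bash-only: ">&" redirects stdout and stderr together
  some_command >& /tmp/out.log
  # POSIX equivalent, works in dash, ksh, bash --posix, ...
  some_command > /tmp/out.log 2>&1
  # cheap sanity check before shipping: parse and run the script under dash
  dash -n ./myscript.sh && dash ./myscript.sh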


That's why I always put "#!/usr/bin/env bash" at the top of my scripts. Most were written to be portable, but unless it's been well tested, it's best to consider it not portable. Fail early and predictably rather than in strange ways.


That... isn't the solution. The point is that bash is not a good target for shell scripts, not that you should use env to detect the location of bash.


If you only use *nix platforms that default to bash, why not write bash scripts?


You're getting downvoted to oblivion, but I'm old enough to remember not being able to take bash for granted. The default shell on some modern systems (OpenBSD for example) comes to mind as well. I feel this battle has mostly been lost, however.


Everything old is new again - distros targeted at running inside docker containers (like Alpine) are shipping without bash. Not taking bash for granted is still a good plan.


HN coolkids downvoting advice that makes software more portable and reliable. What a fucking shock.


> HN coolkids downvoting advice that makes software more portable and reliable. What a fucking shock.

A little more verbosity on the advice might have helped. I wouldn't have properly understood what you were saying without @peatmoss's reply. So without the context, your original comment just seemed like knee-jerk bash-bashing.


Unless you've actually tested it with a shell other than bash, you should be marking it as bash. Otherwise it's likely it will not actually be portable between shells, and you will have made your software less reliable.

I've encountered many scripts written by developers on Fedora that totally fail over when run on Ubuntu specifically due to using #!/bin/sh when they really required #!/bin/bash.


Unless you've tested it as a shell script you should be marking it as a recipe for a banana cream pie.

I specifically said "Write clean, posix-compliant shell scripts"

Do the same developers write Java code in a .cpp file and wonder why it doesn't work? Or put <?php tags in a .erb template?

Yes, you need to test to make sure that you wrote the correct things.


They use bash as a service. It's integrated in node.js 2041.4.1.


Default on debian is dash.


And, this page being news about Ubuntu, one should note that this has been true for Ubuntu for fast approaching a decade, now; Debian having followed in the footsteps of Ubuntu, in this regard.

* https://wiki.ubuntu.com/DashAsBinSh

* https://wiki.ubuntu.com/DashAsBinSh/Spec

* https://lists.debian.org/debian-release/2007/07/msg00027.htm...


But bash is installed by default if I'm not terribly mistaken, and it's classified as an essential package. If you put in a bash shebang, it should work.


I have seen several people use #!/bin/sh and write bash code. This is going to break on debian machines when you do ./foo

If more people used #!/usr/bin/env bash, it wouldn't be that big of a deal.
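
If you do want to keep a #!/bin/sh shebang honest, there is tooling for exactly this case. A small sketch, assuming Debian/Ubuntu where checkbashisms ships in the devscripts package ("./foo" is a placeholder script):

  sudo apt-get install devscripts
  checkbashisms ./foo   # flags bash-only constructs in a script that claims to be sh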


And they would have non portable code.

If they wrote posix compatible shell scripts it wouldn't be any big deal because they'd just work.


There's like five comments from you in this thread and you seem very adamant about people not writing bash scripts. Have you considered that not everyone has the same requirements? Maybe they don't care about targeting systems that don't have bash installed. In that case, they can use the myriad of enhancements that bash offers to improve their productivity and happiness.


> For context: Bash is not available|installed everywhere, and has some inter-version weirdness.

Can you cite some sources here? No one knows what you're talking about.


Non-OSX BSDs don't include bash, some things like the ... operator don't work on older versions of bash.

Many people, especially Solaris and BSD folk, don't like bash scripts, and want you to write shell scripts in Bourne shell instead.

Many other people, who code on OS X and deploy on Linux, enjoy the extras bash adds, and only deploy on platforms that include bash.


I think your last point is important. I don't think shell script writers should be concerned about Solaris/BSD if they are confident that they will use their scripts only in environments where bash is available.


The bash COMPAT page? http://tiswww.case.edu/php/chet/bash/COMPAT

Also, just because you may not know what I'm talking about, don't assume no-one else does.


Sorry, I meant "no one who replied to you knows what you're talking about". Still, that list doesn't tell much about potential breakages. Are you actually complaining about bash having too many features, or are there actual breakages? Please illustrate with examples, not links to tedious changelogs and ad hominems.


Depends on the use case. There are cases where I don't know how I'd live without arrays/associative arrays. Performant regex matching is also quite nice.
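
For anyone wondering what those buy you, a quick illustration of the bash-only conveniences (associative arrays and [[ =~ ]] regex matching); none of this works under plain /bin/sh, which is rather the point of the subthread above:

  #!/usr/bin/env bash
  # associative arrays (bash 4+), not available in POSIX sh
  declare -A ports=([http]=80 [https]=443)
  for name in "${!ports[@]}"; do
    echo "$name -> ${ports[$name]}"
  done
  # regex matching with capture groups (bash 3+), also not POSIX
  if [[ "nginx/1.9.15" =~ ^nginx/([0-9]+)\.([0-9]+) ]]; then
    echo "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]}"
  fi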


Could you, please, explain what .service files are and how they eliminate the need for bash scripts etc? Couldn't find anything in search results.


systemd service units, which replace traditional init.d scripts and provide watchdog options, daemonising, etc.

https://www.freedesktop.org/software/systemd/man/systemd.ser...
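
For anyone who hasn't written one yet, a minimal sketch of such a unit for a hypothetical "foo" daemon (the binary path and user are made up):

  # /etc/systemd/system/foo.service
  [Unit]
  Description=Foo daemon
  After=network.target

  [Service]
  ExecStart=/usr/local/bin/foo --foreground
  Restart=on-failure
  # WatchdogSec=30 would add a watchdog, but the daemon must ping it via sd_notify(3)
  User=foo

  [Install]
  WantedBy=multi-user.target
After `systemctl daemon-reload`, `systemctl enable foo` and `systemctl start foo` replace the old init script plus whatever restart wrapper used to babysit the process.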



This site's down, 80/443 both closed.




This is pretty much the only reason I've been waiting for 16.04 – to get rid of upstart scripts and standardize on systemd. Still going to wait for a couple months while people iron out initial bugs but definitely excited about this release.


> to wait for a couple months while people iron out initial bugs

There's going to be a 16.04.1 release just 3 months away from the initial LTS release; I believe it is just for that.

https://wiki.ubuntu.com/TrustyTahr/ReleaseSchedule - this schedule is for 14.04, but I believe it's been the same for quite a few years already.


Great, thanks. I will probably install this on internal servers (non-production, non-critical, CI, etc) first because I'm also impatient. :)


I wonder how big the installation of Amazon Linux is? They are still not on systemd. Also, CentOS 6 still has support for 4 more years so probably can't throw away all the bash yet.


Very good points. Amazon linux is pretty popular within AWS, and LTS CentOS and RHEL have huge install bases for larger corporate environments.

For that matter, Ubuntu 14.04 is still supported for another two years, and that's still upstart.


Three more years: https://wiki.ubuntu.com/LTS


Upstart provided comparable service for years. There was no reason to use bash scripts and watchdogs / restarters before.


Yes, there was a reason: Having to support other distros.

The big deal now is that all the major distributions support the same mechanism.


Which of course still leaves OS X, *bsd, Solaris AFAIK. But it should cut down on the internal Linux incompatibilities a bit. I'm not a fan of systemd, but having one broken standard is much better than having four broken standards :-)


> Which of course still leaves OS X, *bsd, Solaris AFAIK.

AFAIUI OS X has launchd, and Solaris has OMF[1]?

... so basically the BSDs are the ones left behind?

[1] I think that is the acronym for Solaris' XML-based standardized system service files? (I don't care that they're XML. I care that they're standardized.)


Thinking that the BSDs have been left behind is to erroneously presume that the BSDs ran System 5 rc in the first place. They did not. Most of them use Mewburn rc (or their own reinvented version thereof), which was invented almost a decade after van Smoorenburg rc, which itself was invented about half a decade after System 5 rc.

Here's a "portable" script for van Smoorenburg rc that ports to 4 Linux families, complete with the sort of case statements that I described in https://news.ycombinator.com/item?id=10357589 :

* http://conman.googlecode.com/svn/trunk/etc/conman.init.in

Here's a Mewburn rc script:

* http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/etc/rc.d/...


Solaris 10 and 11 have SMF


Ah, SMF, that was it! Thanks


I'm talking about the time when other distros were already on systemd, but ubuntu wasn't. Of course we needed shell scripts before that.


Except upstart actually allowed shell code in its configuration files[1]. Now, it did have the "exec" stanza which was far cleaner, but if you look through what Ubuntu actually did in practice, I think you'll find quite a lot of "script" stanzas in there.

[1] http://upstart.ubuntu.com/cookbook/#script
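
For illustration, a sketch of what such a job might look like for a hypothetical "foo" daemon, following the stanzas described in the cookbook linked above:

  # /etc/init/foo.conf
  description "Foo daemon"
  start on runlevel [2345]
  stop on runlevel [!2345]
  respawn
  # the cleaner form is a bare `exec /usr/local/bin/foo` stanza,
  # but embedding shell like this was very common in practice:
  script
    [ -r /etc/default/foo ] && . /etc/default/foo
    exec /usr/local/bin/foo $FOO_OPTS
  end script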


That was always an option. It's the same for systemd - you can always configure: `ExecStart=/etc/init.d/something start`. But the context of the comment was - you as a developer writing the config. And you can do it the modern/proper way instead with both upstart and systemd.


> It's the same for systemd - you can always configure: `ExecStart=/etc/init.d/something start`.

Yes, but you cannot embed shell script directly in the job/service definition... thus making the "good path" simpler and more natural, which is a Big Deal(TM) in software usability. (As evidenced by the fact that many many upstart jobs on Ubuntu at least used "script" sections.)


Say what you will about Canonical, their whimsical naming scheme has helped expand my vocabulary of both obscure African mammals and little-used adjectives.


17.10 might switch to flowers and "Apologetic Anemone" is a proposed name :-)


Anemone is also a sea animal.


I was hoping for Aachen Aardvark


Abused Albatross? ;[

Afflicted Ass?


those are the best suggestions yet, they should put that to a vote


What is going to happen when they land at Z? The end of Ubuntu as we know it? :P



Interesting question, and closed as "not constructive". Typical.


If someone some day decides to commit a pedantocide, the first thing he'll do will be to compile a list of stack exchange mods.

Silly joke aside, this sort of release scheme is really confusing to me; LTS and stable branches with normal version numbers are way simpler. Besides, these uncommon words are easily forgotten.


They use both, though. The word gets used in various places in packaging schemes, the number gets used everywhere it matters.



Mozilla will release Firefox directly via snaps

https://blog.mozilla.org/futurereleases/2016/04/21/firefox-d...


Using binaries provided by Mozilla is not a good idea (unless they do things differently with the snaps). They are not hardened in any way; ie. no PIE (rendering ASLR pretty much useless), no stack canaries, no relro, ..., making it a lot easier to exploit any given sec-related bug.

  $ hardening-check ./firefox
  ./firefox:
   Position Independent Executable: no, normal executable!
   Stack protected: no, not found!
   Fortify Source functions: no, only unprotected functions found!
   Read-only relocations: no, not found!
   Immediate binding: no, not found!
Absolutely ridiculous given the amount of vulns likely to linger in its codebase.

It should also be noted that Firefox is one of the few packages that Canonical keeps aligned with Mozilla releases (even 12.04 LTS has the latest firefox), and:

  $ hardening-check /usr/lib/firefox/firefox
  /usr/lib/firefox/firefox:
   Position Independent Executable: yes
   Stack protected: yes
   Fortify Source functions: yes (some protected functions found)
   Read-only relocations: yes
   Immediate binding: yes


More "innovations" which "justify" their own existence with novelty, but eliminate useful properties, backward compatibility, interoperability and standards with blissful ignorance. Standardization is a Good Thing(TM)... many formats creates a confusing dependency hell across multiple systems. Deb/apt works well. This will be deprecated in 6 months after a major security incident. Canonical is mismanaged and capricious, and this is just another in a long line of examples.


What is the significance of this? Doesn't Canonical already update the package (lagging a day or two behind the official release) for the lifetime of the Ubuntu version?


I'd guess that Firefox is a fairly good candidate for this kind of packaging just because it has relatively few external dependencies, and lots of Mozilla-specific dependencies which are rarely used by other software


Firefox is going to be used as a testbed for snap packages.


It will be Firefox, in a sandbox, out of the box.


Notably, Xenial is also the first release to bring support for the s390x architecture (i.e. mainframes).

[0]: https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-pla...

[1]: https://help.ubuntu.com/16.04/installation-guide/s390x/


> Online searches in the dash are now disabled by default [1]

A welcome and saner default. I'm thinking of moving back to Ubuntu from LinuxMint (I was thinking of Arch as well but not too confident of being on the bleeding edge).

[1]: https://wiki.ubuntu.com/XenialXerus/ReleaseNotes


As someone who has used Arch for 4 years, the bleeding edge got boring. I haven't had a major breakage in at least a year, probably longer, unless you count KDE 4 -> 5 being a breakage - one any 14.04 to 16.04 upgrader will also have to contend with.

The last major show stopper I can recall was when Arch dropped security hooks from the kernel and I had to get rid of my MAC.


Besides the default online dash searches pre-Xenial, do you mind sharing what other changes are allowing you to consider moving back?


All my experiences running Mint have amounted to a broken and out-of-date distribution with poor hardware support. I hope it has become better since those days (I believe it was the initial release of MATE), but I see no reason to use Mint when you know enough about a Debian-based distro to pick and choose what you really want.


I never moved to mint, but I did recommend trying it to some people who really didn't like unity. I pretty much stopped doing that when they had the security issues last year. Is that what is prompting you to look at switching back, or are you just more reconciled to unity in general now?


That is one and I read online (can't find the source now) that Mint development doesn't respect compatibility/play well with other open source developers. I don't know how much of it is true but for me, compatibility is important. For almost 5-6 years, I have never formatted my home directory. So if my distribution is, for example, creating config in a non-compatible fashion, I won't be able to move to another distro. I know that typically distros don't modify individual program's dotfile/config etc but I guess I'm a bit paranoid about it.


If one wants Ubuntu and GNOME, there's

https://ubuntugnome.org/


Release notes: https://wiki.ubuntu.com/XenialXerus/ReleaseNotes

Always worth a read before you fire up the installer...


I gotta say, Canonical/Ubuntu has a lot of respect in my book for how dependable the distro has been over the years. I've been through my fair share of distros, and one of my employees goes through a new Linux install at least once a week. Ubuntu is one of the few distros that almost always works. Other distros will have one problem or another with this machine or that, but put Ubuntu on there and everything is fine (well, as fine as Linux gets).

A few weeks ago I had to dig up an old 12.04 machine and bring it back to the modern age. Much to my surprise, I was able to upgrade it all the way to 15.10 with minimal hassle. While the normal apt repos were dead for 12.04, Canonical keeps around an archived mirror. So you just edit the sources file to point at the archive, and then you can upgrade from there. Impressive.
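
For anyone in the same boat, the edit amounts to pointing apt at Canonical's old-releases mirror before each hop. A sketch only - back up sources.list first, and the hostnames in yours may differ:

  sudo sed -i.bak \
    -e 's|archive.ubuntu.com|old-releases.ubuntu.com|g' \
    -e 's|security.ubuntu.com|old-releases.ubuntu.com|g' /etc/apt/sources.list
  sudo apt-get update && sudo apt-get dist-upgrade
  sudo do-release-upgrade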

Not that Canonical/Ubuntu don't have their warts. The Amazon fiasco, Unity, their cloud services, etc. And at the end of the day it's still Linux, with all the problems that brings. But, all things considered, I rate Ubuntu as the best of the bunch and feel grateful for the gift they give to the community.


Yeah, take Ubuntu and their repos, throw out Unity, find a way to get a Java release from the last few years, do the graphics drivers dance, and you have a very solid, reasonably compatible development environment.


Been using 16.04 on my XPS 13 for a week or so now, it finally supports nearly everything (bluetooth is a couple of extra commands) out of the box, and I've not had any issues so far.


Are you happy with battery life? For laptops, I've always stuck with Apple machines but in my experience even Windows uses less power than Linux.


I've been using an XPS 13 for 8 months as my main development machine and battery life hasn't been an issue. I regularly go out for a (e.g. coffee shop) working day and I don't have to charge.

There were some issues with earlier Ubuntu/Linux/BIOS versions, but most have disappeared with new releases. The last one is a palm detection issue with the touchpad, but it should be fixed in 16.04. And if you can't wait, at least you can fix things yourself if you want.


> The last one is a palm detection issue with the touchpad, but it should be fixed in 16.04. And if you can't wait, at least you can fix things yourself if you want.

I'm not sure if I could fix palm detection. I am certain that I could spend a lot of time trying to fix it and for me, that time would probably be better spent elsewhere.


Are you using the sputnik "developer edition" or the standard retail version of the XPS 13?


The Sputnik version. That indeed reminds me I swapped the wifi card for an Intel one. Because they are better supported and it was cheap.

The original one was definitely working though.


Ooh, can you do that? Broadcom driver drives me up the wall every time I tweak something...

(Apart from that and the HiDPI, the latter of which is the DE devs' fault, the laptop is awesome.)


This fantastic iFixit teardown (Step 10) has all the details you need:

https://www.ifixit.com/Teardown/Dell+XPS+13+Teardown/36157#s...


I was just about to ask what wifi card you had working there. It took me quite a while to get my xps13's broadcom card working with 15.10 and I'm not sure I even remember how I did it. It's putting me off upgrading.


My laptops, at least for the past 5 years, have been fine with Ubuntu - much the same battery life as Windows, and with a little tweaking quite a lot longer. I'm currently getting 7 hours out of a commodity Asus with a couple of dev environments and a VM running. It drops to 5 if I'm running the fairly intensive service I'm developing, 4 if I do anything foolish like run the service from within Eclipse, and 3 if I leave the chat sidebar visible in Facebook.


> I'm currently getting 7 hours out of a commodity Asus with a couple of dev environments and a VM running

Wow, that's very good. I've always found running a VM absolutely destroys battery life.

For laptops, I've only ever managed to get 9+ hours on a MacBook, and that's pretty much the threshold I'm comfortable with for heading out with just a laptop as my work machine for the day. If you're getting 7+ on your Asus running a VM, you have to be over 9 without the VM running, right?

I'm going to load 16.04 on my ThinkPad and see how it does.


I'm actually not sure the VM is making that much difference, which is surprising, although it's pretty much only running nginx, and having all the bare-metal virtualization stuff enabled has helped a lot with performance.

This is with a Core M, so in many ways the things to watch out for are screen brightness and anything which makes the processors ramp over 800MHz - that actually seems more damaging than processes that just wake a lot.


Are you using the laptop mode tools or tlp? Any packages in particular that you would recommend to reduce power consumption?

BTW, my VM was running Windows and that's probably why it drained my battery so quickly.


Which model do you have? My new 9350 seemed to hang a few times and become unresponsive when I tried beta 2, I'm hoping that release isn't going to do that.


How's the audio jack working? I got my XPS 15 last week and am planning to install Ubuntu LTS once it's released. I read that the audio jack is tricky to get working.


I have an XPS 15 and can confirm the audio jack only works once with 15.10—once you unplug, audio is borked. I haven't tried 16.04 yet. Webcam and audio-in also don't work, but everything else does (!). I initially tried Linux Mint, but the kernel was ancient, so latest Ubuntu with `apt-get install cinnamon` is roughly the Mint desktop experience with more recent packages.


Is it a XPS 2016?



This doesn't work for me, because of this issue: https://github.com/mitchellh/vagrant/issues/6871


Sadly even the most recent one still has the private_network issue (and the missing line in hosts), it just stops during vagrant up with this error:

> sudo: unable to resolve host ubuntu-xenial

> mesg: ttyname failed: Inappropriate ioctl for device

I hope this PR (https://github.com/mitchellh/vagrant/pull/7241) gets merged because the private_network issue has been around since 15.04


Does it work for you? For me VirtualBox gets stuck in the 'gurumeditation' state when booting.


This is a virtualbox regression because the box is packaged with the LsiLogic controller: https://www.virtualbox.org/ticket/15317

You can downgrade VirtualBox, dropping to 5.0.16 worked for me: http://download.virtualbox.org/virtualbox/5.0.16/


Same problem. Newest VirtualBox, newest Vagrant 1.8.1.


Also same, but have heard better reports from those on VirtualBox 4 instead of the latest VirtualBox 5 :(

I had better luck with https://atlas.hashicorp.com/gbarbieru/boxes/xenial which just got updated a few hours ago.


I'm not sure as there's no description, so I'll ask: is that the desktop version, not the server one?


Ummm, I assumed it was the server one; perhaps I'm mistaken. The ubuntu/*64 boxes have typically been headless.


I love the part about simplifying packaging via 'snap'.

Now, I would love to know: if I'm a maintainer of Foo (and you can get it today via `apt-get install foo`), how will I be able to start packaging using snap rather than relying on the deb packages that come from Debian? I'd love any feedback, cheers!


> I love the part about simplifying packaging via 'snap'.

Bleargh. More container bullshit, now with even less control over it by end users. Now each tiny library update (think OpenSSL security fixes) will pull hundreds of "snaps" instead of a single package… assuming the developers even realize they have to rebuild their snaps.


While I see your point, the idea that I as an end user have any real control now is absurd. All I can do is hope the people who package stuff for their distribution know what they are doing.

If I want to install software that's outside the stuff that the packagers have prepared, like Firefox with correct KDE integration on Kubuntu, I am relying on a number of hard-to-track things working correctly together. Which they have regularly failed to do for me in the past.

Contrast to the OSX install experience: An application is a folder which contains everything related to the application that the base system does not provide. It's brilliant. Installing is copying. Uninstalling is deleting. As a user I feel more in control of the process than with apt.

Depending on the software stack you're looking at, dependency isolation might or might not make sense. I think OS X and Windows are both good case studies that show that some level of isolation is sensible.


> If I want to install software that's outside the stuff that the packagers have prepared, like Firefox with correct KDE integration on Kubuntu, I am relying on a number of hard-to-track things working correctly together. Which they have regularly failed to do for me in the past.

Firefox isn't that hard, all considered. KDE is necessarily hard because of how invasive that is – but no container fuckery will save you from that, because it has to be invasive to work in the first place! Either every container has to ship a full KDE, or you do it like OSX does it, and only have One Desktop To Rule Them All. (That's why Canonical and Gnome are both interested in app containers, presumably, to get rid of that filthy freedom of choice.)

> Contrast to the OSX install experience: An application is a folder which contains everything related to the application that the base system does not provide.

The difference is that OSX (and Windows) provides a lot, and you only have applications building on top of that huge, stable code base that's diligently updated by Apple (/Microsoft). OSX/Windows apps only have to keep their few third party dependencies updated.

Linux app containers have to literally ship everything outside the kernel to work, because the kernel ABI is the only stable interface in the Linux ecosystem. Everything else, up to and including the libc, varies between distributions, and versions of the same distribution, and will be incompatible. An openssl update would trigger a repackaging of every single app container. And where it doesn't, users are at risk. In contrast, without containers, I upgrade one package and the whole system is secured.

(On servers, it's a bit more nuanced, because here operations is in charge of repackaging docker/lxd/nspawn containers, and can start that process whenever they want.)

> As a user I feel more in control of the process than with apt.

That's more because apt is bullshit, even by Linux standards. pacman and other package managers not made in the 1990s are much nicer to deal with.



Something as core as OpenSSL should be provided by the system and updated by the system maintainers. That's what happens in Windows and OS X land, where the system provides a broad base of functionality that every application can count on (e.g. https://msdn.microsoft.com/en-us/library/windows/desktop/ff8...). Thus third party programs only need to bundle the elements not already provided by the system. When goto fail was discovered in the OS X crypto libraries, Apple simply issued a system update and that was the end of the story.



> Now each tiny library update (think OpenSSL security fixes) will pull hundreds of "snaps"

Do you think that's worse than the alternative where each tiny (shared) library update potentially breaks hundreds of programs?


ABI breaks from security patches for stable releases, especially of Linux distributions, happen how often?

In comparison, outdated npm/gem/pip/hipster-package-manager-of-the-week or docker installations happen how often?

With the former, the burden of updating and testing lies with a small handful of distribution maintainers. With the latter, every single developer has to worry about deployment and maintenance.


Distros shouldn't waste time and effort on supporting and packaging language-specific, high-churn code packages except in very limited circumstances. Let the language specific-tools do those. Upstream is best, local fragmentation is wasteful.


While that potential was always there (and people were paranoid about it), in real life it was extremely rare. ABIs don't tend to change much between minor versions on sane libraries. I can't remember it ever happening to me.


False dichotomy fallacy. Linux distributions need to engineer for side-by-side installation of multiple versions and not get tied up to those common locations. nix is just one example as an interesting idea, while Homebrew and stow are better ideas.


A hundred times this. When some library upgrade happened to break stuff on my system, I could roll it back selectively, while still getting the updates for everything else (as long as it does not strictly depend on this new library version). That's gone now.

Good thing I abandoned Ubuntu a long time ago.


What did you switch to? Fedora is switching to xdg-app, which basically does the same thing. Now that those two are using such an approach, I imagine that app developers are increasingly going to drop support for distros that don't use snap, xdg-app or docker. Snap/xdg-app/docker are a lot more convenient for those app developers.

Edit: forgot appc. Thank goodness they're all planning on supporting the Open Container Initiative.


I'm on Arch, which has none of that bullshit.


Given their history, expect Arch to adopt xdg-app.


Just a side note, as well:

`apt-get` is deprecated, they've moved to just plain `apt`

`apt install foo`


About time -- I always thought it was absurd for them to keep apt-get and apt-cache separate "for historic reasons." I used to have the hardest time explaining that to the occasional new Ubuntu user (to say nothing of all the people who were convinced that they needed to type "apt get [packagename]" and then "apt install [packagename]").


Last time I tried to use just apt, its help pages warned that it was experimental, and there was still basic functionality which was either not implemented, or implemented differently from the regular apt-* tools.

I'd very much prefer to standardize on plain apt, but it doesn't seem ready yet.


Nice. There were a couple of things that were 'apt-cache', I could never remember which, I gather 'apt' does those too?


It does 'search' and 'show' from apt-cache at least, not sure about any others.


Oddly apt's `search` output is terrible compared to apt-cache's. If they want me to switch to the new command they should at least make it the same or better, not a couple steps backward.


Also, how does Canonical's snap compare/compete with Freedesktop/GNOME's xdg-app?

https://wiki.gnome.org/Projects/SandboxedApps


One thing I've noticed is that xdg-app seems to be only for desktop GUI apps, while snappy works for server software and command-line apps too. To me, that makes it more interesting.


What is snap? I can't find mention of it anywhere on the release notes page.


https://developer.ubuntu.com/en/desktop/

https://developer.ubuntu.com/en/snappy/build-apps/

The ubuntu developer page has a good description of snaps and how to create them.

In short, it's a package that contains all its dependencies.


If you have the source of the package you might be able to use Ubuntu's snapcraft[1] tool to create snap packages. There is also a project on GitHub called deb2snap[2] that might help with creating snaps from deb packages. I haven't used it so I can't provide any feedback on how well it works. The README is pretty detailed though.

[1] https://developer.ubuntu.com/en/snappy/build-apps/

[2] https://github.com/mikix/deb2snap
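
Roughly speaking, snapcraft's input is a snapcraft.yaml describing "parts" (how to build) and "apps" (what to expose). A sketch for a hypothetical autotools-based "foo"; the exact field names have shifted between snapcraft versions, so treat this as an outline rather than a recipe:

  name: foo
  version: "1.0"
  summary: Foo, packaged as a snap
  description: Bundles foo together with the libraries it needs.
  confinement: strict
  parts:
    foo:
      plugin: autotools
      source: .
  apps:
    foo:
      command: bin/foo
Running `snapcraft` in that directory should spit out a .snap; deb2snap tries to generate something equivalent from an existing deb.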


I knew about deb2snap, thanks. What I'm wondering about is the bureaucratic steps to switch Foo to use my snap (instead of Debian's deb) by default when it's installed in Ubuntu v.NEXT. Thanks again


For anyone like myself who isn't a big fan of the Unity interface, check out Ubuntu MATE[0]. The MATE desktop environment is very similar to pre-Unity Ubuntu. It seems like the final 16.04 release hasn't landed yet but I'm sure it will later today. Ubuntu MATE is also one of the derivative distros that has been granted LTS status by Canonical.

[0] https://ubuntu-mate.org/


Or, though not everyone's cup of tea, I quite like Ubuntu Gnome[1], which is aggressively not retro or Unity.

1: http://ubuntugnome.org/


I'm one of the small breed of people that actually really like Unity. I used to use Awesome WM and other tiling WMs for pretty much my whole Linux 'career', but now I quite like how my desktop looks with Unity. It's my favorite DE at the moment.

Screenshot: https://imgur.com/nvsf24E


I really don't know why people hate Unity so much. If you don't want to be stuck in the last decade, regarding desktop workflow (Mate, XFCE, etc.), and if you want to actually use a mouse, your options are basically Unity, KDE, Gnome 3, and Cinnamon.

At the moment, I think Unity is the only one that is not a horrendous misdesign (Gnome 3) or consists of a shoddy and insecure plugin framework that is a moving target stability wise (Gnome 3 and Cinnamon). I never liked the fussiness and the looks of KDE. So far, Unity has been the most stable, configurable (Compiz) and, most importantly, productive experience for me.


I'll second that. Mate is amazing.

I can also recommend Cinnamon.


Both are great, Xubuntu is excellent and still my goto since it has pretty much exactly the features I want from a DE.

It's nice to have good choices that use the old paradigms.

XFCE 4.12 pretty much feels exactly like Gnome 2 towards the end.


I would love to see an "official" Cinnamon build in the way we have Xubuntu, etc.


Ubuntu MATE is also great on the Raspberry Pi. All the GPIO stuff just works as in Raspbian, but you get a more modern desktop (using the classic GNOME 2 layout) and Ubuntu as the base (which to me means easier setup of things like a mail server).

I'm hoping Owncloud starts to use Snaps.


I like some of the features of their window managers but I don't think Xubuntu and MATE have much support for HiDPI yet and, in general, seem to lag behind stock Ubuntu.


I've been on 16.04 for a few days now, and while I did have to work through some bugs (such as sddm and lightdm fighting each other) I've been impressed so far with the improvements.

I used to hate on Ubuntu, but on my 2014 MacBook Pro it was the one distro that "just worked", and since I mostly run Debian servers, I figure sticking to a similar ecosystem reduces the mental load of switching.

I still have my issues with Shuttleworth and Canonical, but hey, it's Linux, so I can remove the crap I don't like (unlike some things, staring at you, Windows 10).


Curious what you want to remove in Windows 10. Not an evangelist, just set it up on a dual-boot refurb for a friend and thought the initial experience wasn't bad.


My experience of Windows 10 is limited to having a look at it in local computer shops (I've been trying to find a decent cheap laptop to stick Linux on for a friend).

I clicked the Start Menu. Fully half of it was made up of flashing animated crap - things moving about, very colourful adverts, the actual things I wanted to do were obscured by it.

I tried a few machines. They did the same thing. Maybe it's a manufacturer default.

Hey, maybe it's customizable. I don't know. It just struck me as being so far from what I think of a computer as being - not a tool to be used, but a flashy, childish entertainment box, like a children's rainbow cake.

My ego betrays me at this point, I suppose. I don't understand how the engineers at Microsoft got to this point.

It reminds me a lot of the Xbox 360 dashboard. That was the point I left 'mainstream' video gaming - it felt like my hackery, fun world had turned into a world of consuming advertising, of subscribing, of being someone else's plaything. Perhaps it was always like that, and I was too young to see?


Right click, turn live tile off. I don't know why you even bother with Linux if you didn't even try the first logical graphical gesture that one can reasonably expect to lead to fixing this problem. You can even remove most/all default tiles, like any sane poweruser would do. The same as in Windows 8. On live tiles - Apple has dashboard that has seen more success than widgets on Vista+, Microsoft decided to push some of this functionality on live tiles, so that you can have an app launcher that doubles as a widget when you want to display information on it, but that functionality is optional. If it was off on all default tiles then nobody would know about it, so it'd be a bad default.

What I'd want to know about Windows 10, though - and Google seems not to deliver anything beyond bullshit articles on how to disable it or useless scaremongering blogspam - is a clearly enumerated and sourced list of the information the telemetry subsystem captures and sends back to the mothership, and which parts can be reliably disabled without resorting to shady third-party apps.


I think you missed the part of my comment that mentioned me being in a computer store - I just played with it for ten seconds to check out the keyboard and touchpad of the machines, I have no plans to use Windows 10 thus no need to try to disable it. Useful to hear that it is possible though! :)

WRT disabling telemetry - probably the best way to go about it would be to set up a firewall or similar and try to kill all outgoing to microsoft.com. Turn it back on for updates and hope that they don't just send everything then, I guess. Disabling individual parts sounds like a losing game - you're running an MS kernel, if they want to do it they'll just do it.


> I think you missed the part of my comment that mentioned me being in a computer store

Not really. You mentioned Linux, you should be able to right click.

> probably the best way to go about it would be to set up a firewall or similar and try to kill all outgoing to microsoft.com

It wouldn't be.

1. I don't want to kill updates. 2. I'm more interested in whether I should trust Microsoft. If the telemetry sends how many times I opened the built-in weather app and such, then I suppose I'm fine with that. I'm also fine with online searches in the start menu if they can be turned off, as well as services related to Cortana if they can be turned off. I wouldn't be okay with Windows scanning my computer and sending telemetry on what third-party software I use or documents I view. There were articles on the internet that were later debunked, but I can't seem to find something well sourced on what exactly is going on beyond tutorials on how to turn various knobs.


You can right-click on the Windows 10 flashing animated crap and remove it from the start menu.

> I don't understand how the engineers at Microsoft got to this point.

It was likely the marketeers driving the engineers.


The Start menu in Windows 10 can be configured in about 2-10 clicks to stay put.


Poor choice of default settings.


By default, Windows sends a lot of your information to their servers sometimes without asking you to opt-in.

https://fix10.isleaked.com/


That's A LOT of bs to fix. And after each update I noticed it defaults some of them back to 'Enabled'.


If I was an average user, cortana would definitely be one thing I might want to disable but probably couldn't do so easily. I'll admit I had to use my powershell script for getting rid of GWX on win7 as a base to completely get rid of Cortana without feeling like some update was going to magically re-enable her.

Also, see: https://github.com/dfkt/win10-unfuck


Having two control panels


Not a fan of TIMTOWTDI?


In Windows 10 there are some things that can only be configured from one of the Control Panels and I never remember which.


I like more than one way to do things; the issue is the two control panels do different things and it's impossible to remember which does what.

For example, I can configure a VPN adapter in one Control Panel (Legacy) but I have to go to the other Control Panel (New) to connect or disconnect it or anything like that.


Small warning: Ubuntu's "do you want to upgrade" popup window defaults to "yes". Had to find that out the hard way when a user hit Enter at the wrong moment and was suddenly sitting in front of a bricked system.


Usually you press Enter to accept things and ESC to cancel. Also, it first downloads all packages before upgrading, so you should've had time to cancel it.


Is upgrade failing in general or in this specific case only?


I've no idea. `apt install -f` fixed everything and I didn't bother probing into it further.

(We only have Ubuntu on two machines, everything else is Arch or Debian.)


I had a similar experience upgrading my laptop (though I initiated the upgrade on purpose). It just crapped out, but an apt install -f followed by a dist-upgrade fixed it all and it seems to have worked.


Crashed for me, too.


PowerPC/ppc64el, System z and ARM/Raspberry Pi server images available at: http://cdimage.ubuntu.com/releases/16.04/release/


Where are the rest of the releases?


Thanks, I was looking for this!


After 3 or so hours Ubuntu 16.04 finally installed, and what a lovely surprise: no launcher, a big black cross instead of a mouse cursor, and it's slower than a bowl of soup warming up in the freezer. Can someone please advise me how to get this running properly? Many thanks.


Is an update from 14.04 painless or would you recommend reinstalling? I'd like some newer packages but don't really feel like setting up the whole system again with a reinstall.


If you don't feel a particular urge to upgrade right now, it's probably not a bad idea to wait for the first service pack (16.04.1), which usually comes a few months later. Most kinks should be ironed out by then.


Per the release notes [0], upgrades from 14.04 aren't enabled. They will be enabled with the 16.04.1 LTS release, in 3 months time.

[0] https://wiki.ubuntu.com/XenialXerus/ReleaseNotes


Pretty painless - I've just run this on two servers (not in production):

    sudo do-release-upgrade
or if you're on a server

    sudo do-release-upgrade -d
although they don't recommend doing that on a production server because the .1 release usually has a lot of bugfixes.

It took about half an hour followed by a reboot. Occasionally had to intervene to tell it to overwrite config files I hadn't touched or to let me do a merge if I had.

Obviously make sure you have backups first.


I always wait a month before upgrading/reinstalling. I have done upgrades in the past through multiple versions and they generally go smoothly, but there might be a minor glitch or two that requires manual intervention to fix (usually solved by searching Google for advice).

Another thing I've done is keep /home on a separate partition so I can nuke everything else for a clean install if needed. In such cases, I backup the /etc directory beforehand to recover configuration details as needed.
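
The /etc snapshot can be a one-liner run before the reinstall (a sketch; the destination path is arbitrary):

  sudo tar czf /home/backup/etc-$(hostname)-$(date +%F).tar.gz /etc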


Just failed for me on two desktop systems (one 14.04 and one 15.10).


They should just stop publishing MD5SUMS for new releases. By now, everybody should have gotten the word that MD5 has been broken.

The security of the MD5 has been severely compromised, with its weaknesses having been exploited in the field, most infamously by the Flame malware in 2012. The CMU Software Engineering Institute considers MD5 essentially "cryptographically broken and unsuitable for further use". [1]

[1] https://en.wikipedia.org/wiki/MD5


You're conflating collision attacks like the Flame malware with a preimage attack that would be necessary to produce a malicious Ubuntu release with the same MD5SUM.

I.e. what's "broken" about MD5 is if you have a lot of CPU time and I allow you to give me two unrelated blobs, you can craft those blobs to have the same MD5 sum.

What's not "broken" (beyond a theoretical 2^123.4 attack) with MD5 and not broken at all for SHA1 is preimage attacks. E.g. if I work for Canonical, make an Ubuntu ISO and you have to produce a malicious ISO with the same MD5 sum.

We should be considerate of collision / preimage attacks, but please don't spread FUD by conflating the two. Publishing MD5 sums for ISOs is just fine.


What? Pretty sure you're wrong.

Yes, MD5 preimage resistance is not broken (to a reasonable degree).

If you have the Ubuntu 16.04 ISO (you do), and if you have its hash, the attack to craft a different ISO with the same hash is a collision attack.

A preimage attack is if you had some hash y where y=H(x) where x is some file/whatever, and trying to find out possible values of u that give rise to y when you do H(u), without the knowledge of x.


No. What you're describing is just a subset of preimage attacks.

Not having knowledge of x is just one type of preimage attack, against "preimage resistance". You can also know x; then you're attacking "second-preimage resistance".[1]

Which is not the same as a collision attack. Where you're trying to find x and y such that h(x) = h(y) without anyone specifying x or y in advance.

By definition a collision attack is an attack where you specifically craft both x and y such that they exploit weaknesses in the algorithm.

It's not enough to know an arbitrary x or y that someone else has made, because that value isn't going to be exploiting the weakness.

1. https://en.wikipedia.org/wiki/Preimage_attack


yeah, you're right. my bad.


(how many sites on the internet have a discussion where people are both patient and civil with each other, and the discussion results in mutual understanding?)


Hmm, I personally haven't seen much action in discussions where this kind of stuff doesn't happen; maybe I just pick the comments I reply to well.


You should think about this in terms of collision resistance. Canonical doesn't write most of the packages that go into a release.


How deterministic is the build process for an entire distro like this? And wouldn't updating just a single letter in a README just before release thwart the efforts of an external attacker?


Collision attacks matter if you don't (want to) trust the Ubuntu release team.


If you don't trust the Ubuntu release team, why are you downloading Ubuntu to begin with? Your threat model makes no sense.


It is not trust in Canonical itself that is the problem, but the constant threat of coercion it puts on both the organization and the people that compromise it. This is why I work on generalized multi-signature schemes and deterministic builds.


s/compromise/comprise/


I've always wondered this but felt too embarrassed to ask, screw it. Let's say the ubuntu 16 iso is infected with some kind of malware by a 3rd party. If they have control of the file, would they not have control of the checksum displayed on the site? I can understand if the checksum is spread to other sites for cross-reference but I'm having trouble seeing why a checksum from the same location as the file you're downloading is worth anything.

Any insight?


If you have an existing Ubuntu system you trust, you can verify the authenticity of this release via:

  $ gpg --no-default-keyring --keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg --verify SHA256SUMS{.gpg,}
  gpg: Signature made Thu 21 Apr 2016 10:40:38 UTC using DSA key ID FBB75451
  gpg: Good signature from "Ubuntu CD Image Automatic Signing Key <cdimage@ubuntu.com>"
  gpg: WARNING: This key is not certified with a trusted signature!
  gpg:          There is no indication that the signature belongs to the owner.
  Primary key fingerprint: C598 6B4F 1257 FFA8 6632  CBA7 4618 1433 FBB7 5451
  gpg: Signature made Thu 21 Apr 2016 10:40:38 UTC using RSA key ID EFE21092
  gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>"
  gpg: WARNING: This key is not certified with a trusted signature!
  gpg:          There is no indication that the signature belongs to the owner.
  Primary key fingerprint: 8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092

  $ sha256sum --ignore-missing -c SHA256SUMS
  ubuntu-16.04-desktop-amd64.iso: OK
(You can ignore the `WARNING`s above, since you're explicitly telling `gpg` to use a keyring you trust)


I see. I'll do that from now on, that's pretty reassuring.


They GPG-sign the cryptographic checksums, see the .gpg files. If you don't verify the GPG sig, with Canonical's signing key obtained out of band, the checksum by itself is pretty useless as you describe.

PS. anyone know of a CLI download tool that supports this format of GPG signatures?

edit: why the downvotes for the parent? it's a fine question


`gpg` is a command line tool for verifying these signatures and it is what `apt-key verify` uses in its backend. You can download the signatures with `curl` or `wget`.
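
Putting that together, something along these lines should work from any existing Ubuntu machine (the URLs assume the 16.04 release directory; the keyring path is the same one used in the verification example above):

  wget http://releases.ubuntu.com/16.04/SHA256SUMS
  wget http://releases.ubuntu.com/16.04/SHA256SUMS.gpg
  gpg --no-default-keyring \
      --keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg \
      --verify SHA256SUMS.gpg SHA256SUMS
  sha256sum -c SHA256SUMS 2>/dev/null | grep OK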


It doesn't protect against malware being included by default; it protects against malware being inserted on the wire, i.e. between Canonical's HDD and your HDD. If malware is inserted at that point then the checksum check should fail.

Consequently it requires the checksum to be propagated through another, more secure means, so distributing the checksum on the very same site increases the chance for an attacker to act; however, there is no other way as widespread as this to distribute the checksum anyway.


If they infect at the source, yes. But if just one mirror is infected, or one of the CDN servers, the attacker may not be able to change the checksum on the official site.


Ah - that makes sense. I knew there had to be an easy answer but I couldn't see it.

Thanks


It's worth it for error-checking.


the checksums are gpg signed.


They do provide SHA256SUMS. I guess MD5SUMS are there for legacy purposes. It is on users to stop using MD5SUMS and move to SHA256SUMS


I thought the use of MD5 here was just to check that your download wasn't damaged, by comparing your hash to theirs.


Publishing a public MD5 sum of a software release lets you detect tampering with the image. If I download an ISO of Ubuntu, check the MD5 value, and it doesn't match what Canonical says it should, then it's been tampered with (or corrupted).


I'd argue that there aren't any good reasons to use MD5 over SHA-256 today, but either way -- the plain checksums are mostly useful to make sure that ISOs etc. downloaded without error. The chance of a bad network connection or other random problem leaving you with a corrupted ISO whose hash still matches is extremely slim.

For verifying downloads, you should be using the GPG signatures. And again, I don't think there's much of a reason to provide both signatures and plain hashes today, but: you might be in a jurisdiction where GPG is illegal (though then you probably wouldn't be allowed to use Ubuntu anyway), or you might be bootstrapping from a system without GPG installed (e.g. vanilla Windows) but with a SHA-256 tool and a set of trusted CA certs, so that you feel you can trust the downloaded hash. I'd argue that's probably a false sense of security -- in general the GPG signatures (or more precisely the secret keys behind them) should be easier to secure, and easier to tie to the trustworthiness of the builds, than some random web server is to keep uncompromised. Put another way: in a scenario where the GPG signing key is compromised, it seems likely the attacker could also do other things, like embed a back door, while there are many, many ways a mirror might be compromised or TLS subverted.

That's not to say that GPG is perfect; I just think verifying the GPG signatures gets you closer to verifying what you (probably) care about: that you indeed have an install ISO that was made in good faith by the Ubuntu release team and is, to the best of their knowledge, OK.
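
To make the bootstrap case concrete, the hash-only path is something like the sketch below (it assumes an HTTPS mirror of the sums file is reachable; the URL is illustrative, and the check only buys you as much trust as the web PKI):

  $ curl -fsSLO https://releases.ubuntu.com/16.04/SHA256SUMS
  $ sha256sum --ignore-missing -c SHA256SUMS

The GPG route instead ties the check to the release key rather than to whichever server happened to answer.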


Another interesting tidbit: this version comes with

Ceph v10.2.0 (Jewel): "This major release of Ceph will be the foundation for the next long-term stable release. (...) This is the first release in which CephFS is declared stable and production ready!"

http://docs.ceph.com/docs/master/release-notes/


I wonder why Canonical don't have an HTTPS version for this page.


From what I've heard, those with AMD graphics will suffer a downgrade until the point release in June; by then, the open source AMD drivers should be up to speed with, or have the same features as, the previous fglrx ones. From what I gather it's more a downgrade in supported features than in performance.

I think this is just an issue if you are doing 3D graphics work or gaming.


Most people with AMD graphics have been using the open source drivers and will just see all around improvements.

edit: Also, you can get Vulkan with the new drivers.


This very much depends on whether you want the latest OpenGL 4.x features or not. Compute shaders, etc. Most people won't notice, but if you want to develop or run applications that use these features, the open drivers aren't yet up to par, so for those cases it is indeed a major downgrade.
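
If you're unsure what the open driver on your machine currently exposes, a quick check (assuming mesa-utils for glxinfo):

  $ sudo apt install mesa-utils
  $ glxinfo | grep -i "opengl core profile version"
  $ glxinfo | grep -i "opengl version"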


Only temporarily. The closed source ones are legacy now, afaik. I believe AMD has said they want to only use open source drivers with closed source plugins; i.e. hybrid drivers.

Anyone know if it's going to be the same on Windows? Open source with proprietary plugins?


So to be more accurate, it's more about developing with these graphics features than gaming? Game development vs. game playing.


If somebody needs a Vagrantfile for testing: https://gist.github.com/therealmarv/555f7efc1c55ffa288bca091...

but it seems even Ubuntu's servers are not speedy today (the Atlas servers are also slow)

(Update) it seems I'm getting an error with it :/

  The guest machine entered an invalid state while waiting for it
  to boot. Valid states are 'starting, running'. The machine is in the
  'gurumeditation' state. Please verify everything is configured
  properly and try again.
  
  If the provider you're using has a GUI that comes with it,
  it is often helpful to open that and watch the machine, since the
  GUI often has more helpful error messages than Vagrant can retrieve.
  For example, if you're using VirtualBox, run `vagrant up` while the
  VirtualBox GUI is open.
  
  The primary issue for this error is that the provider you're using
  is not properly configured. This is very rarely a Vagrant issue.


Someone else in this thread posted something rather nonsensical about switching to Commodore 64. I was just about to make a guru meditation joke when I saw that someone apparently beat me to it.

I'm running 16.04 with the Vagrant/VirtualBox image from Atlas. I've tried getting it to work on Google Cloud but had no luck there so far (it's possibly just the SSH key injection that failed, haven't had time to investigate).
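
For anyone else wanting to try it, pulling the stock box is just the following (assuming the official box is published under the name ubuntu/xenial64):

  $ vagrant init ubuntu/xenial64
  $ vagrant up --provider virtualbox
  $ vagrant ssh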


If you've recently upgraded to VirtualBox 5.0.18 you might try downgrading to 5.0.16 as I had a similar issue earlier this week.


I've updated to the VirtualBox 5.0.19 test build which also fixes that. Bad timing for Ubuntu: https://www.virtualbox.org/wiki/Testbuilds


Hope DigitalOcean and Linode will support 16.04 quickly. Can't wait.


16.04 is available on DO now!


In case anyone's looking here first, ISOs are available here:

http://releases.ubuntu.com/16.04/

For some reason these are not linked yet from the 'Downloads' page at ubuntu.com.



I want this so bad.


So we finally got Native ZFS with 16.04? :-)


ZFS is only enabled for their new containers (LXD, if I remember the name right) for now.


I haven't tried it, but I think if you do a manual install and format the disks as ZFS you could have a ZFS root filesystem with 16.04.
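
Short of a ZFS root, the userland tools in the 16.04 archive are enough to experiment with data pools; a minimal sketch (the pool name and /dev/sdb are placeholders):

  $ sudo apt install zfsutils-linux
  $ sudo zpool create tank /dev/sdb
  $ sudo zfs create -o mountpoint=/srv/data tank/data
  $ zpool status tank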


There was an issue in the beta where the installer would not recognize ZFS volumes for the boot install. I'll verify whether it's still an issue as soon as I can.



What are they going to do when they run out of letters in the alphabet?


Go back round again with new animal names.


This appears to be the correct answer. Backed up here:

https://wiki.ubuntu.com/DevelopmentCodeNames


Would Ubuntu 20.04 Fiery Fox be considered copyright infringement?


Unicode is growing fast enough.


I love that answer. Switch to a different language and alphabet!


Shut down the company and stop releasing new versions, obviously.


Yeah, it's really a shame. I thought the open source movement had a lot going for it, but they should have thought this part out better.

It's okay though, I honestly feel we can take most of what we learned and apply it to the Commodore 64, which, while closed source, most "black hat" hackers can patch the kernel of to execute the Binaries Formerly Known As Ubuntu.

The process will be different, though: after powering on the system and reaching the READY prompt signaling the BASIC interpreter, you'll POKE the Unicode name-string of the package you want to update into user space at $C000, then execute a SYS command to set the CPU's program counter to the vector of the package installation.


Greek, Cyrillic, Elvish,...


obviously, invent new letters :)


Invent two thirds of a new alphabet and then get distracted by the idea of creating new numbers and promise to complete the alphabet "real soon now"? ;)

