It is painful to even try to use. It forces so many unnatural behaviors, in favor of... I don't even know why. It's not elegant, it's not pretty, it's not fun to use.
It is bad. Horrendously bad. And suited to the tastes of little more than a very small handful of people, who probably open a browser and read email, and do little else, except maybe script shell commands on the rare occasion.
Their decisions are arbitrarily imposed on the many who likely still use the distro simply for its ease of installation (and probably no other reason), and who then shoulder the burden of overriding the default configuration immediately after every install.
I still prefer Unity nowadays. It's simple, it's fast and stable (at least for me), and it reminds me of OS X (which I also like).
Everyone who runs Linux at my job uses it. No one had to be taught how to use it. No one has had any issues with it or complains about it. It just gets the job done.
I totally get that not everyone likes it, but Canonical is targeting a general audience and I think they're doing a good job. If you don't like it, just apt-get something else. It doesn't need to be super customizable and cater to everyone. Let them focus on their core features for their primary audience.
While lowest-common-denominator UIs are great for simple tasks performed by novice users, so much more can be accomplished by putting in even a little effort. I mean, you did take the time to learn a programming language, correct? Or was your programming language so natural you never had any questions? Did it just work without any effort?
Or are you implying that a GUI is a distraction? That it provides no benefit and something like a tiling window manager or other system can't give you any productivity benefit? I'd find this hard to believe.
Unity is simple and it doesn't try to be the be-all and end-all of my computer use. Unity seems to be designed by people who understand that the purpose of a desktop environment is to facilitate launching other programs and perhaps moving their windows across the screen a little bit, and that's it.
I am a former KDE user, and I used to spend hours and hours on end tweaking my KDE configuration. Finding that perfect equilibrium was a quest without end, but the abundant opportunities for shaping the DE to my liking didn't make me any more productive.
Switching to Unity was a liberating experience. Suddenly, I could not do any of the things that KDE allowed me to do...and I didn't suffer any ill consequence as a result of that. Rather, I discovered that all I ever needed from the desktop environment was to let me run Emacs and Firefox. Everything else was, and always has been, superfluous.
On the other hand I'm worried about the resources invested in this project, instead of stability or performance improvements in Windows. In the long term I'm worried about Microsoft's monopoly.
Since lots of mobile devices don't perform as well running Linux as they do running Windows, and you want to get the maximum performance out of your device (think battery life), Windows is really your best choice.
In the past, I replaced the Windows shell with a couple of different BlackBox ports compiled for Windows, but I was still missing all the command-line tools, so it didn't fit my workflows.
But now that Microsoft lets you run Bash on Windows, I think this setup would work for me if I gave it another shot. At the moment, though, I don't have any requirements that depend on Windows that would cause me to switch OSs.
Will it be possible to implement alternative syscall handlers with a future version of the DDK?
Compatibility Mode has never been very good, though I don't think there's anything preventing Microsoft from making a more Wine-like compatibility layer if they really wanted to.
I'm not sure if the same thing is still available for Windows 10.
Honestly, MS is not that good at backward compat. They are good on the story, so-so on the facts. But I prefer that to being stuck in the dark ages of poor design and the poor security that comes with it.
To run bash on Windows on Ubuntu on Windows.
Note this isn't a rant in any way; I appreciate the work done by the Wine project a lot and use it every now and then. I'm just trying to understand whether it's a matter of manpower, a simpler task, or better design.
As such, WINE has to reimplement all those libraries, whereas Windows only has to implement something comparatively tiny.
I expect someone can write a better summary of pros/cons of both approaches (and there are definitely both for both!), so I'll leave that to them!
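To make the asymmetry concrete, here's a toy pair of programs (the file name is made up, and this is only a sketch of the layering, not of either project's actual code). Wine has to supply CreateFileW and the whole Win32 userspace above it; WSL only has to answer the open(2)/close(2) syscalls underneath an unmodified glibc:

    /* Toy illustration of the two surfaces being reimplemented. */
    #ifdef _WIN32
    #include <windows.h>
    int main(void) {
        /* On Linux, Wine must provide CreateFileW and everything
         * in kernel32/user32/... that it drags in. */
        HANDLE h = CreateFileW(L"out.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
        return 0;
    }
    #else
    #include <fcntl.h>
    #include <unistd.h>
    int main(void) {
        /* On Windows, WSL only has to honor the open(2)/close(2)
         * syscalls; glibc and everything above it ship unmodified
         * from Ubuntu. */
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) close(fd);
        return 0;
    }
    #endif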
With this in mind: it is much simpler to implement a new subsystem for a kernel that was designed from the beginning to support multiple subsystems than to retrofit a Win32 subsystem (Wine) onto a kernel (Linux) that was never designed for this purpose.
In practice, lots of modern NT syscalls are directly, or at least quite closely, modeled after their Win32 equivalents. Even more so for those introduced after the end of the Windows Consumer vs NT split -- but even before that, NT was already very close to Win32.
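To give a feel for how close, here's a hedged sketch (hypothetical path; a couple of constants defined by hand because winternl.h doesn't expose them) that calls the NT-native NtCreateFile directly -- the parameters line up almost one-for-one with CreateFileW's:

    /* Sketch: CreateFileW is a thin-ish wrapper over this NT syscall.
     * Build with MSVC and link against ntdll.lib. */
    #include <windows.h>
    #include <winternl.h>
    #pragma comment(lib, "ntdll")

    #ifndef NT_SUCCESS
    #define NT_SUCCESS(s) (((NTSTATUS)(s)) >= 0)
    #endif
    /* Kernel-mode constants not in winternl.h; values from wdm.h: */
    #define FILE_OVERWRITE_IF            0x00000005
    #define FILE_SYNCHRONOUS_IO_NONALERT 0x00000020

    int main(void) {
        UNICODE_STRING path;
        OBJECT_ATTRIBUTES attrs;
        IO_STATUS_BLOCK iosb;
        HANDLE h;

        /* The NT object-manager spelling of C:\out.txt */
        RtlInitUnicodeString(&path, L"\\??\\C:\\out.txt");
        InitializeObjectAttributes(&attrs, &path, OBJ_CASE_INSENSITIVE,
                                   NULL, NULL);

        /* Access mask, share mode, disposition, attributes: these
         * mirror CreateFileW's parameters almost directly. */
        NTSTATUS st = NtCreateFile(&h, GENERIC_WRITE | SYNCHRONIZE,
                                   &attrs, &iosb, NULL,
                                   FILE_ATTRIBUTE_NORMAL, 0,
                                   FILE_OVERWRITE_IF,
                                   FILE_SYNCHRONOUS_IO_NONALERT,
                                   NULL, 0);
        if (NT_SUCCESS(st)) NtClose(h);
        return 0;
    }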
And Win32 has always been special for NT. And Posix & co. has always been "special" the other way: even once the idea of having subsystems was already decided (at first it was mainly for OS/2 vs the Windows API), the question was raised whether to implement the Posix API as a subsystem or layered on top of another subsystem's API (like Cygwin is today). The paper advocating for the subsystem approach is available in the WRK, IIRC (quite a poor paper with bad arguments IMO, but it shows that the question had been raised).
Actually the traditional NT subsystems were not even able to support WSL; the most important groundwork was not the legacy of the Posix & co. subsystems, but a project to virtualise different versions of the Windows userspace on a single NT kernel (indirectly, it first led to the Android subsystem attempt, which got cancelled and then morphed into the WSL we will get).
When you look at the WSL architecture (and compare with the Windows NT + Win32 architecture) and at the hints given by previous or existing WSL bugs, there is probably not much the WSL driver defers to the NT kernel: only very core stuff like virtual memory, scheduling, and some FS code (WSL adds its own layer for VolFS). Even socket/TCP networking needs completely custom code in at least the upper layers (not surprising when you know the completely insane architecture of Winsock, with a legacy from Win16), and probably quite a bit in the layers underneath too -- the result is that it still works only so-so once you stray from ultra-trivial use of TCP sockets. And they have said that pipes are completely custom, because the semantics are too different from NT pipes. The FS aspects also show that there is probably a lack of virtualization in NT, or that for now some WSL pieces are plugged in at the wrong place. Signals are not really an NT concept, so WSL does most of the work there, and as usual in this story, it shows quickly in the remaining bugs: the first trick, or even mainstream but non-trivial usage, you try immediately fails. Even more so when ptracing.
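For illustration, this is the kind of utterly ordinary POSIX signal code -- nothing exotic, just fork + sigaction + sigsuspend -- that NT has no native concept of, so the WSL layer has to emulate the whole delivery path itself:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_usr1 = 0;

    static void on_usr1(int sig) { (void)sig; got_usr1 = 1; }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_usr1;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        /* Block SIGUSR1 around the fork so there is no race between
         * the child signaling and the parent starting to wait. */
        sigset_t block, old;
        sigemptyset(&block);
        sigaddset(&block, SIGUSR1);
        sigprocmask(SIG_BLOCK, &block, &old);

        pid_t child = fork();
        if (child == 0) {               /* child: signal parent, exit */
            kill(getppid(), SIGUSR1);
            _exit(0);
        }

        while (!got_usr1)               /* parent: wait for delivery */
            sigsuspend(&old);
        waitpid(child, NULL, 0);
        printf("SIGUSR1 delivered across a fork\n");
        return 0;
    }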
In the end, what allowed WSL to reach a seemingly decent state so quickly (it's still very far from perfect, but already quite decent for development purposes) is that: 1) it actually started way before any publicity about it; 2) it seems to be made by a small, but not too small, very competent, dedicated team working full time on it; 3) the Posix API and Linux syscalls have documentation at least two orders of magnitude better than MSDN, and a reference implementation available to everybody -- they probably don't allow the devs who write WSL code to look at the Linux code, but the man pages are more useful than the code in most cases anyway, given how detailed many of them are; compared to that, the MSDN doc of Win32 is a complete piece of useless crap; 4) the area to cover (Linux syscalls) is smaller than Win32 -- Wine does not try to reimplement an NT kernel, but the whole userspace stack; an analogy would be MS trying to reimplement Linux + Gnome + Kde + PulseAudio + X + etc. The amount of work is not comparable at all!
Now what I don't get, and I'm not trolling here, is why you would want this. Isn't a VM a better option?
I assume people using Windows want to because of the UI, not the kernel. Whereas people generally choose Linux/Gnu/Unix for the kernel and OS environment, not the UI. With Ubuntu on Windows you get part of what people have wanted, and arguably the most important part.
Will people really want a Windows kernel, with Ubuntu/GNU OS tools, and Unity? I suspect most would still want the Windows UI.
If this is indeed close to native performance, you wouldn't get anywhere close to that with a VM. I could be oblivious to some optimization techniques, but Linux VMs on Windows have always performed drastically worse than native.
Now this does not render WSL useless; launch time for any random small tool is better with WSL than booting a complete VM, the memory is not partitioned, integration with the Windows Drive FS is slightly better, etc.
But tons of use cases are better covered by a VM -- and with a huge margin.
Mate, the thing isn't even released yet and you're bashing it (excuse the pun) for taking too long to open up a man page? Give me a break.
After it's actually released and the performance gets better, it is of course going to trump running a whole VM, and to suggest otherwise is just FUD, IMO.
I think Win10 is quite good, and that WSL is quite good given its age, and will be extremely useful for a lot of things.
Still, if you try some workloads, WSL is way slower than a Linux VM. I don't expect that to change for the release. I don't even expect it to change in the two years to come. Still, of course, I could be wrong.
But even so, I insist: WSL is actually quite impressive.
Suppose you're developing a cross-platform GUI application (hypothetically, its name could be "Visual Studio Code" or something like that...), and your main development environment is Windows. Now you can build and test it on Linux as part of your regular workflow, without having to sync code etc.
Heck, you could even make it so that launching the project (from a Windows IDE) would do a Linux build, and launch the resulting binary.
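As a hedged sketch of that glue (the project path and targets here are invented for illustration), the Windows-side launcher only needs to shell out to WSL's bash, since bash.exe is on PATH and C:\ shows up under /mnt/c:

    /* Hypothetical post-build hook: build and run the Linux binary
     * from the Windows side via WSL. Paths are made up. */
    #include <stdlib.h>

    int main(void) {
        return system("bash -c \"cd /mnt/c/src/myapp && make && ./myapp\"");
    }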
Please explain how it is better.
They've implemented the ABI/syscalls in the kernel allowing for direct execution of unmodified ELF64 binaries.
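Concretely, "implementing the ABI" means answering traps like the one below: a raw x86-64 write(2) with the syscall number in rax and arguments in rdi/rsi/rdx, which is exactly what an unmodified ELF64 binary emits. On real Linux the kernel answers it; under WSL the lxcore driver does. A sketch, using the usual GCC inline-asm idiom:

    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello from a raw syscall\n";
        long ret;
        /* write(1, msg, len): syscall number 1 on x86-64.
         * The kernel clobbers rcx and r11 on the way back. */
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "0"(1L), "D"(1L), "S"(msg),
                            "d"(sizeof msg - 1)
                          : "rcx", "r11", "memory");
        return ret < 0;
    }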
Which is why I asked where people thought Unity on Windows would make sense. Because a VM will generally give a better experience.
If you restrict yourself to bash and standard Unix tools, you'll end up with an experience similar to macOS and Terminal (or, my preference, iTerm2).
Why does the implementation have to be identical? How would that even be possible, unless they violate the GPL? I don't get your point.
> Because a VM will generally give a better experience.
Again, please explain WHY it would give a better experience. What do you see lacking?
VMware has basically abandoned Workstation, and the gfx performance is terrible. A native X server on Windows talking to the Linux subsystem seems like it might be decent.
And if this means somehow I can end up using more XMonad...
Also, considering the state of graphics drivers on GNU/Linux and on Windows, it wouldn't surprise me if, for some specific hardware configurations, the “perceived interactive performance” (responsiveness) of the UI were actually higher on Ubuntu for Windows than on Ubuntu running on top of the Linux kernel.
I'm hoping we can kill our current Windows build and require people to install Bash for Windows instead.
When I asked how you tested, I was looking for a more detailed answer: what the environments were, what hardware, what the CLI app you used does (CPU, IO, allocation, etc. -- this part you answered), and so on.
I normally run a KDE session, which takes up some memory, and the KDE ecosystem isn't the lightest user of syscalls (I'm looking at you, Konsole), but when I need sheer perf (even for benchmarks), I kill all KDE sessions and switch to an openbox session running st.
I compared a standard Ubuntu install to a standard Windows 10 install on a ThinkPad T420.
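For what it's worth, the measurement that most separates a translation layer like WSL from native is syscall dispatch cost. A minimal sketch of that kind of timing loop (the numbers only mean anything when compared on identical hardware, per the point above):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        enum { N = 1000000 };
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)
            getppid();   /* a cheap syscall glibc doesn't cache */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per getppid()\n", ns / N);
        return 0;
    }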
It seems that there are two main options:
- Rootless mode (the default install for VcXsrv), in which your windows are managed by the Win10 WM and integrate quite nicely
- Rooted mode, in which you can use your own window manager, but the windows are restricted to living inside a native Windows window. Using this in fullscreen on a virtual desktop provides a near-linux experience, but some remapping is required so you don't conflict with Windows' alt-tab, etc. Other bits of integration like copy&paste still co-exist nicely.
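In either mode, the only glue needed on the Linux side is pointing clients at the Windows-hosted X server. A minimal sketch, assuming VcXsrv (or any Windows X server) was started on display :0 with TCP listening enabled:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* From inside WSL, "localhost" reaches the Windows host,
         * where the X server is listening. */
        setenv("DISPLAY", "localhost:0.0", 1);
        execlp("xterm", "xterm", (char *)NULL);
        perror("execlp xterm");   /* only reached if the exec failed */
        return 1;
    }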
I think MSYS2 (a Cygwin derivative) is already using Pacman, so perhaps you can get it working without root on Ubuntu, and hence on Windows 10.
The next level of integration would be running X11 fullscreen and then launching Windows programs inside that environment. e.g. Word 2016 launched and managed from the Unity launcher.