I use WSL from time to time on my personal laptop, and the reason is this: I think Windows is still superior to any Linux flavour for doing "leisure" things like watching VoD or playing games (not that I do any more).
This isn't a dig at Linux; it's simply that we still have lazy content creators using things such as Silverlight to provide content. These applications "just work" on Windows, and when they are at all possible on Linux, they tend to require a lot of manual effort and simply aren't as good as the Windows equivalents (Google Drive, Spotify, VPN applications, etc.).
I also use my personal laptop for dev, and as I'm an OSS-stack developer, I use Linux. Having the ability to run Linux natively on Windows is superb, and is genuinely usable for work purposes for the most part.
The article doesn't capture a few other bugbears with WSL. The file system is slooooooow (really slow), quite a few applications just flat-out don't work, and others are buggy as hell (e.g. psql). Would I use it in a serious work capacity daily? No - but I'm certainly appreciative that I can use it and it works pretty well 95% of the time.
I've been working for 10+ years this way: Windows on the desktop, ssh into Linux boxes for real work via ttys. There's nothing about the X11 ecosystem I miss at all. The one hassle is the editor; you're either stuck using tty editors like emacs or running a Windows app as an editor and relying on network filesystems to let you remotely edit files.
I had a couple of years with MacOS instead of Windows. Definitely better in so many ways. But MacOS Unix is worse than WSL Ubuntu in every way, mostly because it's a bizarro BSD variant with 10-year-old tools, and Homebrew is a kludge.
I don't do much with WSL but find it occasionally useful in a pinch. The lack of an init system is very limiting; I find I still need a full Linux environment for most things.
I'm curious: how is Homebrew a kludge? Also, the *BSD userland command-line tools on macOS aren't old, they are fairly close to the latest updates for bash, git, vim, etc.
Homebrew seems far behind dpkg/apt in terms of managing complex dependencies and versioning. It's gotten better, but largely by replicating the complexity of those systems.
Last I looked, unzip on MacOS still couldn't handle files > 2GB in size; the patch for that was submitted about 10 years ago. The version of less is also ancient and compiled without standard features like LESSKEY. At least as of a year or two ago - it's been a while since I've tried using MacOS.
I agree completely. It's actually quite nice to use the entire range of common software available! There's nothing unique or compelling about macOS that I personally miss.
It will be a difficult choice if my next employer offers an OS choice. The more I ask myself what I really enjoy about Linux and Windows, the more I understand that I might just prefer Microsoft's window manager to customizing one.
I'd rather tinker with Linux at home than fret about keeping it fixed at work. WSL and Docker let me do that in spades.
> I'd rather tinker with Linux at home than fret about keeping it fixed at work.
What do you mean by this? I've run pretty esoteric set-ups at work, but none of them have required more maintenance than the Windows systems some of my coworkers run.
I agree that the configurations back in the 2010s were much stranger than they are now. Now, things are relatively straightforward. Below are my subjective personal experiences, which have helped my own career growth and that of others.
If a bad patch comes down for Windows or MacOS, then the company IT department will likely shoulder some blame. If a package manager update comes through for Linux and fails, then the onus will be on me; IT hasn't supported personal Linux installs at any company I've worked for.
From the job-insurance standpoint, it's a liability to use Linux at work if the business makes you take the full helpdesk responsibility. Plus, we're all inevitably asked to open Visio, Photoshop, Outlook, or one of the many other popular Windows-only tools. Now we get into virtualization and VMs.
Windows works for me. WSL is enough and Docker fills in the gaps. Frankly, I can run Visual Studio and Neovim side by side, and both Docker and WSL let me run what I want on Linux. In the end, it's just easier to virtualize Linux in Windows than the inverse. I've never been afforded the option to "just" run Linux at the office.
Now at home, where I can let projects sit, they run Linux. My personal servers are Linux. My embedded electronics are Linux. If they break? It's fine. I like to tinker and play with those configuration files. Fixing that obscure display bug on your hardware is very fulfilling to me. When reviews come up, I'd like to be able to demonstrate the ROI I've created rather than hoping that a custom WM takes my salary up a notch.
Right, I get the Windows with virtualized Linux/Linux with virtualized Windows debate.
I run the latter because the former doesn't make sense to me; Linux makes for a better foundation security-wise and I can rely on it to do what I tell it to; something that's been a problem for Windows users the past few years.
I also get company policies. I'd try to make this part of the contract (I simply cannot work as efficiently if I'm fighting tools instead of business needs), but if it wasn't a possibility I'd definitely run a full X11 environment on top of Windows. It does make sense in that case.
But I still do not get your point about maintenance; as you said, this isn't the 2010s anymore (I'd argue it was fine at that point as well, provided hardware wasn't simply chosen at random), and issues do not come up any more often than they do on Windows or macOS. The opposite has been the case in my office: those of us who run Linux just have far fewer issues, both with development tools and with system maintenance (and we have more Windows experts than Linux experts), so the implication that you'd produce a lower ROI by running Linux is a bit laughable from my own point of view.
At the end of the day you have your own situation and it's up to you to decide what's best for you, but I believe that if maintenance cost is your deal-breaker you probably should reassess your choice.
Security is a complex, multi-faceted, ever-changing set of practices. All operating systems receive regular security updates. Can you point out the security issues with macOS or Windows that disqualify them for a developer workstation?
> I also get company policies, I'd try to make this part of the contract
I've never had anything close to that leverage during negotiations. If I had leverage like that then I'd use it on vacation or salary.
Regarding maintenance, Windows and macOS both have package managers now. My dotfiles work across Linux and Windows; macOS is the odd duck. If the operating systems run the same tools then it makes the OS a means to an end instead of a religious debate.
In practice, I've used all three at various jobs for my developer machine. They all have security updates and maintenance options that work fine. All can be successfully maintained, used, and developed on.
> I believe that if maintenance cost is your deal-breaker you probably should reassess your choice.
Your comments demonstrate that you prefer Linux strongly enough to negotiate for it. I'm just a developer who enjoys solving problems and thinks the WSL is really useful.
WSL is a godsend for developers. Having easy access to openssl with minimal fuss is awesome for all those obscure cert operations that you need a couple of times a year. Cygwin and all that have always been rather crappy IMO, so I'd usually keep a Linux VM handy for those kinds of things. Installation can be slow, yeah; Qt took forever to build. But it did work!
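To give a concrete flavour, these are the sorts of one-liners I mean (standard openssl usage; double-check the flags against your installed version):

    # inspect a local certificate
    openssl x509 -in server.crt -noout -text

    # check a remote endpoint's validity dates
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -dates

    # throwaway self-signed cert for local testing
    openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
        -days 365 -subj "/CN=localhost"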
Both. WSL works by translating Linux calls into appropriate Windows calls under the hood. The Windows file system is already comparatively slow, and the interfaces aren't especially similar, which means that you end up using a not-fast translation of a medium-fast filesystem interface, implemented on top of a relatively slow file system.
I've always gotten by (both before and after WSL) with a Slackware install running under VirtualBox on a Windows host. It goes something like this:
Install VirtualBox, create a Linux VM instance and configure for at least 2GB RAM, 3D video acceleration, 128MB of video RAM, and at least 20GB of storage. DO NOT choose EFI or any other fancy/experimental features; the reason we're using Slackware is because it is dead simple and bulletproof in its default configuration, and doesn't need any of that mess.
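If you'd rather script the VM creation than click through the GUI, the equivalent is roughly this (VBoxManage syntax from a recent VirtualBox; treat it as a sketch and check it against your version):

    VBoxManage createvm --name slackware --ostype Linux_64 --register
    VBoxManage modifyvm slackware --memory 2048 --vram 128 --accelerate3d on
    VBoxManage createmedium disk --filename slackware.vdi --size 20480
    VBoxManage storagectl slackware --name SATA --add sata
    VBoxManage storageattach slackware --storagectl SATA --port 0 --device 0 \
        --type hdd --medium slackware.vdi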
Install Slackware on the VM instance (I prefer the Xfce desktop as it integrates well with Windows in Seamless mode, but use what suits you if you find another works better).
If using Xfce: Remove the bottom panel but keep the top panel, and configure it to your liking. If using KDE you may wish to move the bottom panel to the top edge.
Install the Guest Additions for Linux.
Activate Seamless mode and you'll find that your Linux desktop sits as a layer on your Windows desktop. If you use Xfce, your top panel will be at the top of the screen and will by default float behind any Windows-native windows. This allows you to keep both OSes running all the time and switch back and forth as necessary. Optional: Install the Numix GTK themes and Numix icon themes available from slackbuilds.org for a more Windows 10-esque look and feel in your Linux-native apps.
Again, the reason I use Slackware for this (besides my nearly two decades of familiarity with it) is because it is simple, stable, and stays out of the way. You don't need any experimental VM features; it's pretty much pure Linux. That said, something like Alpine or Arch may be more suitable depending on your workflow, and they both are also simple and VM-friendly distros. Alpine in particular is designed to integrate with VM and container setups with minimal fuss.
Interesting, and thanks for the write-up about Slackware on VirtualBox on Windows. Can you do copy-and-paste between the Slackware in the VM and Windows, and can you do file / folder sharing? I might want to try to use Slackware based on your description.
Update: Also, can you explain what you mean by this:
>If you use Xfce, your top panel will be at the top of the screen and will by default float behind any Windows-native windows. This allows you to keep both OSes running all the time and switch back and forth as necessary.
I use Ubuntu on VirtualBox on Windows and can already do the switching back and forth, but am not sure what you mean by "will by default float behind any Windows-native windows".
> "Can you do copy-and-paste between the Slackware in the VM and Windows, and can you do file / folder sharing?"
Yes, to the extent VirtualBox allows it. Make sure you have it turned on in the VM's settings and the Guest Additions are installed.
> "am not sure what you mean by "will by default float behind any Windows-native windows"."
Sorry, I just meant that the VM apps are treated like any other Windows object and aren't "always on top" or "always on bottom". Let's say you have the VM running as I described, and you open a Windows application. That application will be the "top" layer because it has focus; the VM won't override it. If you then click on the title bar of a Linux application, it gets pulled to the front and the virtualized app becomes the app with focus, along with the Xfce panel, but there's no opaque "desktop" layer blocking any Windows apps underneath it, as it would be if it weren't in Seamless mode.
In short, it does what it says on the tin: Host and Guest apps work together seamlessly as if it were all one OS with one "desktop".
>Yes, to the extent VirtualBox allows it. Make sure you have it turned on in the VM's settings and the Guest Additions are installed.
Great.
>If you then click on the title bar of a Linux application, it gets pulled to the front and the virtualized app becomes the app with focus, along with the Xfce panel, but there's no opaque "desktop" layer blocking any Windows apps underneath it, as it would be if it weren't in Seamless mode.
Just a reminder that embrace, extend, and exterminate (aka extinguish) was both an anti-competitive strategy with collateral damage to society and, at the same time, a company simply acting with normal self-interest to help its customers.
Even if one believes a post-Gates/Ballmer/Myhrvold Microsoft no longer behaves illegally/unethically, one can still look forward to an embrace being followed by extension and extinguishing. Because that's how the incentives play out.
Embrace: Of course we want to give our users the best, even when some of that originated elsewhere.
Extend: We're not going to hold back our users by waiting on slow-moving standards bodies - often slowed by our competitors. Helping competitors isn't a priority for us. And the wellbeing of those few people who aren't our users, is understandably also not a high priority for us.
Exterminate: Why should we expend any effort at all towards keeping competitors viable? Why shouldn't we actively help potential customers join us? Why should we prioritize resources to helping a few disgruntled ones leave? Why shouldn't we fully monetize our intellectual property, both directly, and through affiliated third-parties like Intellectual Ventures?
Microsoft failed on phones, but VR/AR is coming, and it will transform the market for phones, laptops, and desktops. Once upon a time Microsoft created Windows-only web extensions. Now Mozilla writes them for it. Windows Everywhere may still happen. And "you can use Linux inside of Windows" is a big step towards that. It's not clear that's something to be happy about.
Kinda tiring to see this get rehashed on every thread mentioning MS in any positive light. Many things Microsoft has done recently have been done in the open-source fashion, since that's the way to do things now.
The times have changed, charging monthly for services is a much better economic strategy than making people pay upfront for a product.
It's clear to me, at least, that Linux will most likely never replace Windows as the main OS for everyday users, so there is no real financial incentive to extinguish it anymore. I don't think MS cares that much anymore whether you run Linux or Windows on the server side, as long as you do it in Azure.
> Kinda tiring to see this gets rehashed on every thread mentioning MS in any positive light.
This thread was on a major change in the relationship between Microsoft and Linux. But comments were focused on the short-term impact on individual developers. Pointing out longer-term ecosystem impact seemed worthwhile.
> Kinda tiring to see this gets rehashed [... Microsoft has changed ...]
I encounter two versions of this sentiment. One is "yes, it's unfortunate that our only choice is to pay organized crime to collect our trash, but I don't want to discuss that every time we put out the trash" (to use an example from NYC of a few decades ago). And as long as that doesn't drift into "organized crime isn't a problem", fine.
The other version is "Microsoft's conduct is no longer something to be concerned about". And if one knows that pharma, IBM, and Microsoft are the current leaders in expanding the scope of software patents and blocking patent reforms, and one agrees with those positions, then, well, ok. But often there isn't that awareness. Or other impacts are not fully appreciated, or not clearly reasoned about.
We're way off the front page, and my break is short, so for the rest I'll just note that Linux is unlikely to ever work as well as Windows on Azure, and MS has a strong incentive to diminish Linux and Mac as competing centers of gravity for software developers. For instance, developers being able to easily avoid Microsoft products while developing and serving Android and iOS is something Microsoft would obviously like to change. And specifically, Microsoft will do whatever it takes to not miss VR/AR as it did phones. And it would really prefer to dominate VR/AR as it does desktop/laptop.
I didn't have a deeper meaning behind what I wrote. I just think it's unnecessary to point out the stuff you did every time Microsoft does something positive. You won't see the same for other corporations like Apple, Google, or Samsung, for example, which also hold and defend patents.
Can you point out something about Linux on Azure that doesn't work as well as with Windows? Genuinely curious. I haven't run any Linux images on Azure, so I have no real experience in the matter.
I don't think there's anything wrong with them entering the VR/AR space; it's exactly what the space needs. The more heavy players behind it, the faster the technology will evolve. Right now, there is still a big lack of both games and software for it, so there is not a big incentive to buy such a system. I am very excited for the future of tech, and I am not worried for one second that Microsoft will stop supporting open source; if they do, I will be the first to object [as a heavy user of their tech].
This is a great article, and I want to add that it's perfectly possible for a Windows PC to be a great dev machine without Linuxizing it.
Terminal emulators like Hyper or ConEmu (mentioned in the article) work great without WSL as well. Git's Windows build comes with bash and most coreutils built in, so if your colleagues filled your npm scripts with sh scripts, there's a fair chance they'll "just work". Scoop [0] is a fantastic no-nonsense "apt install"-like tool, which is leaner and less in the way than the better-known, NuGet-based Chocolatey. Obviously, VS Code, Atom, Sublime Text and the entire JetBrains suite all work fantastically on Windows.
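To give a flavour of Scoop, day-to-day usage looks like this (real subcommands; see scoop.sh for the installer one-liner):

    scoop install git nodejs python
    scoop update *        # upgrade everything Scoop manages
    scoop uninstall nodejs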
Finally I'd like to recommend Git Extensions [1] if you like using Git with a UI. It's the best Git UI I've come across and it's native to Windows. It has a funny name because it started out as a set of extensions for Visual Studio but it has little to do with that anymore.
I'd like to particularly commend the Node ecosystem for making cross platform dev a breeze. The last few little details that don't work the same in Windows as they do on Linux/macOS (even if you have Git's okay-ish sh/coreutils tools in your PATH) are very easily bridged with tools like cross-env [2] and shelljs/shx [3].
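As an illustration, a hypothetical package.json might wire them up like this (the script names and paths here are made up; the tools are real):

    npm install --save-dev cross-env shx

    # then, in package.json "scripts":
    #   "build": "cross-env NODE_ENV=production webpack",
    #   "clean": "shx rm -rf dist"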
Note: not disagreeing with anything in this article - if you want a Linux with a nice shell, Windows is a pretty decent option these days. I just want to point out that it's also a pretty decent option these days without WSL, and share some pointers on how to get started.
We build upon so much open source software - Linux, GNU, compilers, editors, servers, and hundreds of libraries.
I use KDE on Ubuntu, and on the rare occasion that it is lacking, a bug report or a patch helps the whole community. All the users, in Ubuntu's statistics, help the KDE developers feel they're doing something worthwhile.
Running Windows and perhaps improving tooling on Windows helps Microsoft, something I refuse to do.
(Equally, I will correct Open Street Map, but not Google Maps.)
I recommend separating professional/technical advice from ideological sentiment verging on religious. Linux users need to realize that lots of the world runs on Windows and developers need to target Windows as a result. The fact that Linux isn't more accessible is a separate problem.
Linux dominates smartphones, servers, and supercomputers.
It's the software development model and the ideological underpinnings that made Linux technically superior, so I'd be careful not to wave them off as some religious nonsense.
Windows is still the majority in desktop consumer/corporate usage. I develop for all three major platforms and swap between them at home and at work; I truly serve no master here.
A nice overview! I tried moving to W10 w/ WSL last August, and for the most part it was great, but the then lack of what I'd call "decent" terminal emulators was the deal breaker for me back then. Things might be better now.
I want the following "features" from a terminal emulator:
* Tab support
* Support for more demanding applications (like tmux and other curses based applications)
* Sensible defaults
* Reasonably clean UI
* Open Source
I refuse to run an electron app as a terminal emulator. It might suit others, but it's really not for me. I have concerns over battery life, memory usage, performance and security.
Going through Łukasz's suggestions:
* Hyper -- Not tried this, but: electron based, non-starter.
* Babun -- Not tried this, but: no tab support.
* Cmder -- Tried this; no tab support, and I found it a little glitchy under tmux.
* ConEmu -- Tried this; supports tabs... but I found that a lot of configuration was required, the UI (at least out of the box) was cluttered, and it was also glitchy under tmux.
* MobaXterm -- Closed source, cluttered UI (for my needs).
Maybe I'm too picky.
I hadn't tried running a Linux native terminal under WSL, but will give it a go soon -- I hope it doesn't make everything ugly!
SecureCRT and PuTTY are both still out there and have 10+ years of history as being good Windows terminal emulators. I think both of them might require you fire up an sshd in WSL to use them, not sure either has direct support to launch WSL shells. But I haven't tried it.
The best part of WSL is that X clients work flawlessly (to the extent I tried, which is very little) with a native Windows X server like MobaXterm. This includes e.g. Terminator.
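In case it saves someone a search, the usual incantation from inside WSL is just this (assuming the X server on the Windows side is listening on display 0 and Terminator is installed):

    export DISPLAY=localhost:0
    terminator &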
Don't get me wrong -- it worked without much config change. But things like the UI, keyboard shortcuts etc weren't very compatible with me out of the box!
I still don't understand the fear of the Windows DOS prompt. In the terminal section [0] he suggests Hyper:
> it is an Electron based app and it’s a bit sluggish but works well, scales well and looks like it’s 2018.
Electron? Would anyone in the Linux world accept an Electron prompt?
All you need is Git + clink [1] and you get a ton of GNU tools + readline/history. Then it's as fast as possible and you get sharper and clearer fonts than all the other pseudo-Windows terminals. This is the stuff that's really important to me.
I couldn't get the console to handle vim colours (although 32-bit colour is supposed to be there now), which is a real shame. Plus there's certainly no bold / italic.
I'm forced to use Windows 7 at work. I run Linux in virtualbox to get round this limitation. It keeps corporate IT departments happy because they can enforce patching policies and audit software.
Same here. Our network is heavy on MS Active Directory authorization, so really the only way to run Linux at all is to host it on a Windows/Virtualbox VM.
Last time I ran into problems with web filters at work, it was because they had blocked a lot of security-related things on the grounds that they could potentially be useful to hackers. For example, the websites for most popular fuzzers were blocked.
When I'm purely in a dev mode, I do. But often, I'd like to be able to jump into online games with my friends on the same PC, and the games I like to play simply work better on Windows. It's nice to have the same or equivalent tools available on Windows for when I'm already there, so I don't need to switch back to my other boot or fiddle around with a virtual machine.
Very much a hobbyist dev machine, and I primarily write video games and related tools on this machine. Some of my emulator projects are cross-platform, so I need to be able to test on Windows for those anyway; it's kind of a general purpose workhorse.
The computer I use for my day job is exclusively Linux, as I have no need for Windows software or online games there. (I'm guilty of playing Minecraft during my lunch break, but that fortunately runs quite pleasantly under Linux.)
Office, more specifically Excel. I know Linux has alternatives, but they're not as good, and everyone else I'm working with is using/sending Excel.
I excitedly installed the new CrossOver Linux on my Manjaro machine at home because of their Office 2017 support, and yeah, I guess it "works", but the experience was slow, painful, and I wouldn't call it stable.
Beyond that, I find I'm constantly having to put work into my Linux machine to get things to run. YES, it's a million times better than it used to be, AND I WANT to run Linux, but it's always more work.
Windows has better support for touch screens, pens, and dictation, all of which I rely on. I use WSL to be productive, and have a separate Linux PC (with touch screen) too.
Because you want to run real software, too! There's nothing like the Adobe Creative Suite, Microsoft Office, plus great backup utilities, media playing and management, etc. If you're targeting Linux, run it in a VM so if you screw something up, your host OS stays intact.
I've tried to use Windows in the past, but always had that unsettling feeling that I was a mistake or two away from being compromised in some way. It doesn't help that I had locked down my wife's Windows (8 at the time) machine as tightly as I could figure out how to (antivirus, etc.), and she still managed to get an ad/malware infection. For those of you comfortable on this platform, what steps do you take to protect yourself?
Read through the entirety of https://decentsecurity.com and follow the advice. That's literally it. I've been using Windows (alongside Linux and recently MacOS) since the 3.x days, and the last time I had any kind of security issue with Windows was getting remotely infected by the Sasser worm in 2004.
The major points of vulnerability nowadays are the browser and the user. Secure the shit out of your browser - aggressive adblock, disable/click-to-play all plugins, no unnecessary addons. As a user, educate yourself about the realistic threats. Download software from reputable places - if you're used to a Linux package manager, this will take some adjusting but it's not difficult. When you're installing something, actually read what it's doing and think, don't just click next excitedly. I shouldn't even have to say this, but don't open email attachments unless they're from someone you know and you're expecting exactly what they sent. Let Windows install updates automatically and reboot when it asks. Keep your other software up to date - something like Chocolatey (https://chocolatey.org) might help, but that has its own security upsides and downsides. Turn on Windows Defender and let it keep itself updated.
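If you do go the Chocolatey route, the day-to-day is only a couple of commands, run from an elevated shell (these are the standard subcommands):

    choco install 7zip firefox -y
    choco upgrade all -y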
For me: an adblocker and an absolute minimum of browser addons/plugins. Caution when installing/running just about anything; this matters more than on other OSes because you end up running executables from random websites. An antivirus is a crutch to save you if you messed up here.
Windows firewall is a pain in the ass, but if you have some time you can lock stuff down quite a bit (Windows Firewall Notifier).
Personally, I don't run an antivirus at all; I got sick of Windows Defender making disk accesses so slow so I disabled it. At most I'd set up scheduled scans.
Turning on controlled folder access if you're paranoid about ransomware. Not much else really. I haven't noticed any viruses / slowdowns / malfunctions this way.
Don't install antiviruses; they're basically malware created to fight malware. Windows Defender in Windows 10 is enough.
- Use uBlock Origin and never disable it. If a site wants me to disable it, tough luck, I won't be reading that content.
- Never install software from, let's say, untrusted sources without first researching whether the installer contains any additional "stuff" (in a VM, for example).
I use a similar set up and wrote about it a while ago too.
One thing you may want to consider doing is mounting your drives in WSL so that Docker volumes work. With your current set-up, none of your volumes would work.
With the above WSL set-up, the development performance is pretty nice, I must say. Around 250KB of SCSS runs through a fully Dockerized webpack chain (sass-loader, precss, autoprefixer, css-loader) in 1.8 seconds, and live reload works. I haven't even begun to try to optimize it with fast-sass-loader and other tweaks. I'm also using the stock node-sass package.
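For reference, the usual trick is to make C: visible at /c so the paths match what Docker for Windows expects. A sketch (the wsl.conf option needs a recent Windows 10 build):

    # one-off bind mount inside WSL
    sudo mkdir /c
    sudo mount --bind /mnt/c /c

    # or persistently, via /etc/wsl.conf:
    #   [automount]
    #   root = /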
15k+ line Rails apps with 75+ gems also reload in under 100ms for code changes (non-assets).
Was hoping this would be about porting some of the features from desktop Linux over to Windows. I have to use Windows at work and it feels like I'm missing an arm every time.
Mainly because Windows doesn't allow keyboard shortcuts to be rebound and the defaults often require two hands, which with the mouse normally in my second hand is really annoying and not particularly fast.
Other things I'd like:
- Ability to bind clicking the left and right mouse button together to middle mouse click (allows you to just click in the middle on laptop touchpads).
- Tabs in the file manager.
- Workspaces, in a usable form. Windows 7 doesn't have them and it's not legal for us to use Windows 10 on anything that's connected to the internet. And when I do use Windows 10 / Server 2016 in isolated playground VMs, then its implementation makes me feel like some old person who forgets about applications that they have open, because there's no indication of other workspaces existing, nor the applications in them. The shortcut for switching between workspaces being one of those two-hand-shortcuts also means that checking workspaces to see what's in them is not viable.
- A functioning search. I do not understand how you manage to make basic system search as broken as it is on Windows.
- The dream would be a tiling window manager and an actually functional terminal.
I know, this is not achievable without third party software, which I can't really install for security reasons, so yeah, this is unfortunately a pipe dream.
I've been developing for deployment on Linux while using Windows for quite a while, and with Docker, WSL and PowerShell it's been quite great!
Why not use pure Linux? First, I like it when my OS just works, without having to fix something frequently (that was my experience when using Linux, both Ubuntu and Fedora); the hardware compatibility is better too. Second, I use a Surface device, on which I like to write or sketch with the pen frequently.
In theory, it lets you have a tighter integration between the Windows side of your system and the Linux bits. For example, you can easily open Windows apps from bash, or access files from both Windows and Linux apps without having to set up some sort of shared folder to get everything connected. I use gitbash on Windows for much the same reason, though what it can do is much more limited.
It's also lighter weight - just a Linux userland living on top of a special translation layer that maps from Linux kernel calls into Windows ones. That saves you all the overhead of running a full virtual machine with its own dedicated chunk of your RAM and its own dedicated chunk of your hard disk.
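Concretely, the sort of thing the interop enables looks like this (the user path is just an example):

    cd /mnt/c/Users/me && notepad.exe notes.txt   # relative paths resolve fine
    explorer.exe .                                # open the current dir in Explorer
    ls -la | clip.exe                             # pipe output to the Windows clipboard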
None of those make much sense, as the dev, test (technical copy), and eventually live systems you deploy to won't have or need this "tight integration".
You do, of course, have at least separate dev, test, and live systems.
And in these days of Threadripper, the performance argument makes no sense, as you have cores to burn.
Regarding my comments on overhead, I wasn't talking about CPU. I was talking about RAM and disk consumption. Both of those can be very limited resources, especially if you want to accept the 16GB memory limit that's imposed on most lightweight notebooks.
No offense taken, but I do think you're extrapolating a bit too hard here. My personal computer that I also use for some hobby projects does not need to conform to whatever standard you consider to be necessary for your own professional use.
I wonder if Microsoft is working on a replacement for cmd? They added a few welcome and useful features, but it still remains a pretty antiquated shell.
Even for PowerShell users the thing is painful. Bash for the Linux Subsystem is just another reason/justification for a refresh.
I'm aware of the third party options (as listed in this article), but third party software isn't always easy to deploy for political/policy reasons in enterprise or government.
I definitely think the cmd is a boat anchor around the Linux Subsystem's neck either way.
Sounds like you mean a terminal replacement? Yes - Mike Griese is on Hacker News and said last year that they're working on a Firefox-style multiprocess terminal.
cmd.exe is more like bash and less like xterm or the like. conhost.exe is likely what you're thinking of. I don't mind the minimalism but there are a few nice adjustments that can be made from the defaults like turning on vt100 support by default (no longer limited to when programs like bash are run).
Conhost has a long way to go, but they've made far more progress in the last year and a half than at any other point in time I can recall, so I'm optimistic that it will be modernized even further to fit the growing capabilities of WSL.
The thing is, Windows 10 could be a usable OS if Microsoft approached it from a different angle. There is no way in hell I am using the standard version as it is right now: I installed it on my father's PC a few weeks ago, and it was installing updates and forcing the default browser by itself. What the hell?! I tried Server in a VM the next week and it seemed better. For .NET and some graphics stuff, Windows is the logical solution. For everything else, leave me on Unix (always BSDs over Linux, if I may :)).
> As it should for most users.
No. If it asks me and I say not to install any updates, and I then restart the computer for whatever reason, all I see is the "installing updates" screen, despite my instructions not to do so.
> Forcing? I'm not sure what you mean. You can easily change the default browser just like in any other OS.
I installed FF, went and changed the default browser to FF, and then Windows asked me every 15 minutes to change to Edge, eventually changing the default back all by itself without my knowledge.
I spend all day working with Yocto-generated arm-gcc crosstools, and it crashes and burns on WSL. There are supposed workarounds, but for now I'm sticking to Virtualbox.
While I do agree that the Docker setup method in the post is more reliable, I recommend following this post on MSDN [0]. Exposing Docker to localhost like that really isn't the most secure of setups!
And it really is remarkable how much better development on Windows has become over the last 2 years.
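For context, the localhost setup being warned about is typically just this (assuming the Docker client is installed inside WSL):

    # Docker for Windows: tick "Expose daemon on tcp://localhost:2375 without TLS"
    # then, inside WSL:
    export DOCKER_HOST=tcp://localhost:2375
    docker ps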
WSL essentially is to Linux what WINE is to Windows. On all my machines WINE (and virtualbox when I needed true emulation) helped me to ditch native Windows completely years ago.
So why shouldn't I think WSL's real purpose is to discourage people from doing native Linux installs? For most people the effect is the same, just reversed.
The problem might arise should they diverge enough that software has to be maintained in multiple versions, so that commercial software companies have to choose which one to support; they invariably go for the one with the biggest company/corporation attached to it, which of course would be Microsoft (just look at Debian vs Ubuntu).
Should this scenario seem impossible, think what would happen if Microsoft decided to write a library that exposes all important device drivers to the underlying Linux subsystem: gaming companies could write their games without any fear of incompatibilities because they'd be using the Windows graphics card driver, the same way I use Linux drivers for a sound card Windows ceased to support years ago when I load Windows audio software under WINE (example: Reaper + Tascam US-122). From that day on, Linux native installs would be less and less appealing for commercial (and sadly many amateur) software developers, which makes me very pessimistic.
Microsoft is already a platinum member of the Linux foundation for reasons that go beyond my level of comprehension; what would prevent them say 5 years from now, from telling the world that their Linux is the real one?
Personally speaking, while I think that WSL is neat as heck, and very convenient, I don't trust Microsoft not to abandon it the way they abandoned Windows Services for UNIX ( https://en.wikipedia.org/wiki/Windows_Services_for_UNIX ) once it gets popular enough.
So I'll enjoy it while it's here, but I'm nowhere near uninstalling VirtualBox. Once burned and all that.
The main difference is that SUA never was popular enough. Nor was it very useful to most people, because POSIX compatibility doesn't mean much to most projects, while Linux, OSX, or BSD compatibility does. Having a system no one uses that's not compatible enough with other real-world systems is kind of a good reason for discontinuing it.
I'm glad MobaXterm got mentioned. It's incredibly underrated. It's (essentially!) PuTTY, PuTTY tabs, Cygwin, X, and a slew of other stuff, and comes as an installable executable or stand-alone to put on a USB stick. Free and Enterprise versions. I'm a huge fan.
The major problems I have are:
The first is npm installs, which are really problematic for me. One out of every 5 npm installs fails with some weird permission error, like not being able to rename a file inside the node_modules folder.
The second is the terminal styling and speed. Not easy to configure.
Has anybody switched between WSL and Cygwin? Care to contrast? Cygwin's warts are quite familiar to me at this point, but WSL is still a big unknown. In general I'd rather be running Linux anyhow, but sometimes one needs to use Windows.
In many ways, WSL is the Cygwin concept implemented inside the system layer. Where Cygwin reimplements things in Win32-compatible-ish code and then calls that code, WSL translates the calls directly into compatible Windows code. In theory, sometimes this is faster, and sometimes slower. In practice, whereas Cygwin has to jump through hoops to work around incompatible interfaces at the library layer, WSL can get Windows to implement the things they need to optimize the especially painful parts.
This is also why file system access is so slow in WSL — the two systems are far enough apart that building such a fast path is either hard or really hard, depending on how much compatibility is important to you.
Yeah but then I'm severely limited since I can only run software that runs on the Macintosh OS or Linux and not Windows. Also, this does run Linux software natively (not in a VM).
I personally don't use this but I installed it a while ago to try it out and since forgot about it. My point - it doesn't alter your host OS half as much as a VM does. So it is superior in some ways to that solution as well.
Anyway, do you think there's any reason at all that Apple goes out of their way to support Windows on their Mac machines? I mean here is a company that has sung the praises of their Macintosh OS, talking nasty shit about Windows the whole time....and yet they actually do non-trivial work to support this supposedly shitty OS that they apparently hate. Hmmmm. It's a mystery for the ages I guess.
Using Linux natively is probably a better solution, but WSL can run unmodified Linux software (including binaries), so it may very well be more convenient than using OS X in a lot of cases.
I've tried almost all the terminal emulators listed in the article, Hyper being the most recent, and keep going back to mintty. The first time I tried Hyper, Ctrl-L didn't work at all. It does now, but the performance story and the tmux experience ('reset' would be required to restore the status bar) are still not ideal. mintty just works.
With regards to having access to a Linux environment, I've tried developing using a Linux Vagrant box while sharing the Windows filesystem using VirtualBox Shared Folders. This worked for smallish directories but I still had to contend with workarounds for symlink issues. Then I went down the WSL route before and after the Fall Creators Update. It's impressive how far WSL has come but the file system performance hits are noticeable.
Now I'm settling on having just one beefy Debian Vagrant box. Its filesystem is exported to Windows using Samba so native text editors can be used but all fs operations run within the VM with no NTFS compatibility issues. Performance is great and Docker works without the Hyper-V lock-in.
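For anyone wanting to replicate it, the share definition on the guest is only a few lines of smb.conf (share name and path are my own examples; restrict it to the host-only interface in a real setup):

    # /etc/samba/smb.conf on the Debian guest
    [code]
        path = /home/vagrant/code
        read only = no
        valid users = vagrant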
I switched from Hyper to Terminus because Hyper behaved somewhat randomly with regard to various shortcuts/autocomplete/special characters/special keys.
TBH if I wanted that type of experience I would just install a Linux distribution on a separate partition myself and dual boot. But I need access to some Adobe and video conferencing apps that aren't available for Linux. Having a Linux VM running on Windows seems ideal in that regard.
TL;DR: I don't get it. Why bother with Windows at all if you are not going to use it anyway?
Ninja-addition: but don't get me wrong, the writeup is quite nice anyway! And if you are in a mixed environment where you need to run MSBuild locally for win32 apps and do a small amount of local-linux it might make sense.
<slight-rant>
Windows isn't a nice self-hosting toolbox OS with *nix semantics and probably never will be, no matter how much emulation or how many subsystem layers are added. In a way, macOS is moving a bit in that direction as well, but without killing the foundation it is built upon. Perhaps, if at some point the NT kernel gets replaced (no, it's not a bad kernel; it's just a 'different' kernel with no compatibility with BSD, Linux, Unix, Mach, L4, or any other open kernel), it will allow for a more toolbox/developer-centric environment.
Now, while this argument might seem like the one about "I need things to be open source because I want to touch it" (which gets stale pretty quickly since 50% of the users never do this or probably don't even have the skills), that's not what I'm aiming for. It's more about the fact that you can interrogate, manipulate and visualise all levels of the system you are working on, in an identical way to what you can do on your target system. While most types of actions have comparable methods on Windows, there is nothing like /sys, /dev or /proc on Windows; you can't easily 'see' what you are doing, and that is a really problematic thing once you get past entry-level web development.
Most of the inspection and introspection on Windows comes in the form of closed-source, bulky IDEs that may or may not have that single feature you needed to figure out why your program has trouble reading some device node, or why its performance varies depending on connections, handles or other OS-dependent actions. At best, you can do some stuff with the Sysinternals suite, but that doesn't even come close to basic tools like top, ps, lsof, sysctl, ldd, strace, gdb. The whole Windows ecosystem is so tightly bound to opaque systems that cannot be reached unless you dig around in some GUI that it makes basic debugging hard. While it has gotten better over the years, we are still stuck with a ton of half-invisible things in ancient GUIs or MMC plugins that have an impact on your work but have nothing to do with your target system. Basically, you now have to maintain two systems, of which one isn't really the one you wanted to work with in the first place.
It's not that you can't do it, it's just that much harder to do on Windows. This is not unique to Windows, and applies to smaller scopes as well. If you build an application in an IDE and you want to automatically test and integrate it, but you didn't check how to build, link and package your product because the IDE did it for you, you suddenly can't reliably do what you needed to do, and you have to reverse-engineer the somewhat opaque process that the IDE did for you in the background and reproduce it using toolbox-tooling in order to automate it or deploy it on your target system. While not using that IDE isn't the super-solution either, purely depending on one blind/black-box integrated system is hardly the smartest choice out there.
</slight-rant>
Not all companies allow OS choices. It's more common in B2B, where your clients have security requirements that are not as easily fulfilled with Linux.
Well, just because nobody did it and it was hard to get started with doesn't mean it can't be done. On top of that, sec requirements often miss the point entirely (i.e. "complex" passwords that rotate every 30 days: people will just write them down and stick them on the monitor; besides, complexity requirements are 99% BS because they don't actually increase the entropy or security, and both Troy Hunt and XKCD agree [it's on the internet, so it must be true!]).
Another thing is that this points to another issue: just because the "no" department (old-style security / CISO) thinks something should be done one way doesn't mean you just take it. If they have a wrong idea, or if a policy doesn't do what it says it does, having a conversation might actually make things better. The same goes for taking on a job: if there are policies in place that prevent a good working environment, either the policies need to be updated or you might as well not take the job. (And yes, that is more often an option than you might think, at least where I live; I know that is purely anecdotal/N=1.)
Reasons not to change or not to make things better are easy to find, it's the solutions that are hard.
I don't disagree with anything you said... yet I'm stuck on a non-Linux machine writing Linux code. I'm stuck changing my password on a yearly basis (it could be worse). I'm stuck with a "take control of your machine at any time" daemon constantly running in the background.
Not because my own companies IT department is populated by sticks-in-the-mud, but because we work with the financial information of some of the largest companies out there. As such, what they say goes.
So it's 2018, Linux binaries run on Windows for the most part, but the only way to get Linux programs with a GUI on Windows is through the X11 protocol, because it was designed to run on a network. Seems like some software is really hard to kill.
I don’t understand. The only way to get Windows programs with a GUI on Linux is through a WINE-style reimplementation of Windows libraries, because it was designed to run all linked together on a single machine. Seems like... software is stiff but protocols are good rendezvous points?
Well, X11 is supposed to die. People have tried to kill it off for about 15 years now. But no one has managed.
Wayland is the most serious effort to create a replacement at the moment and it is being pushed hard. But its design prevents it from running on WSL in any reasonable way.
After getting a new PC with Windows 10, I tried WSL and found a scarily easy-to-smash foreign file system, no obvious way to update the offered Ubuntu version, and incompatibility with GUI applications except via experimental X server tricks.
In other words, both Cygwin and a Linux VM with properly shared folders appear easier to use and more useful in practice than WSL; I have no idea of what Linux software is worth running under WSL despite these limitations.
Emacs, Node.js, Bash, Python, Git and many other important tools have good Windows versions, while almost everything popular is available in a package or easy to compile, and in both cases much more dependable, under Cygwin.
The way things have been going is promising, but the WSL is still too far from being a place where full-time development can happen. For example, a true tiling window manager like i3 gives me a tremendous productivity boost, which WSL can in no way deliver.
I appreciate that when I have to use Windows (e.g. for some apps that just don't work on Linux) I can still take advantage of a true Linux system's power. But that's about as far as it goes. Claiming it to be a "dev machine" is still quite an exaggeration. I'll see how it goes in a few years. Hope Microsoft and Canonical can keep up the work, for sure.
I’m not sure if this is still the case, but I found that I would only get proper font smoothing on X apps with WSL if I ran ubuntu-settings-daemon (just needs to run for a few moments, not stay running iirc). You may also have to install more packages (I recall installing ‘ubuntu-desktop’ because I didn’t feel like tracking down the exact ones needed).
If you don't mind my asking, what X server for windows are you using? And, what is your display resolution? I've had good default font rendering with VcXsrv on a 1366x768 display, for example, but the same configuration is almost illegible on 1600x1200, presumably because of the higher dpi.
VcXsrv on a non-hidpi 1080p. I’m trying to remember exactly what I did, maybe this:
1) install the ubuntu-desktop package
2) run “ubuntu-settings-daemon &”
3) while #2 is running, start a GUI app. Good font smoothing now.
4) I think at this point I could close #2 and still get good font rendering. At least until the next restart, or perhaps as long as I didn’t close all Linux processes.
I might also have added all of the fonts from Windows to the system-wide font folder on the Linux side.
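If anyone wants to try the font-copying part, something like this should do it (the destination path is my own choice; fc-cache rebuilds the font cache):

    sudo mkdir -p /usr/local/share/fonts/windows
    sudo cp /mnt/c/Windows/Fonts/*.ttf /usr/local/share/fonts/windows/
    sudo fc-cache -f -v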
I've generally noted that WSL really doesn't cut it as a Linux substitute. Same goes for regular Windows: last time I tried installing Tensorflow, it failed due to scikit not being available (because of no pre-built binaries or something). Sure, you can install it with Anaconda, but it's still clunky.
I highly recommend using Docker, mounting a volume to it and running all the Linux code inside the container. You can still modify the files both ways but avoid most of the messy incompatibility issues. There's some learning curve to it but I think it will pay off in the long run.
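As a sketch of what I mean (the image and paths are just examples):

    # disposable Linux toolchain running against the code in the current directory
    docker run --rm -it -v "$PWD":/app -w /app node:8 bash
    # then npm install / npm test etc. inside the container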
I develop on Windows 10, and I don't try to "Linuxize" at all, even though we often deploy on Linux.
I do use ConEmu, but I use PowerShell as my shell and don't run WSL or Cygwin. All our devtools (Erlang, CUDA C/C++, Python) run fine in Windows, and using Windows as it was meant to be used, without any "shims", makes it much easier to deal with differences in file naming conventions, path separators, etc.
PowerShell is an amazing shell scripting language, but it's a terrible daily driver of a shell. Everything is an enormously long line to type. Yes, you can make aliases for common stuff, but that's non-standard and not portable.
The primary purpose of a shell is not scripting; it's meant to be interactive. PowerShell is too heavyweight to be a good daily driver for a shell.
What do you mean by "not portable"? The alias script sits in a file and can be kept in source control, to be downloaded to any computer you land on. And many PS commands come with ready-made aliases that are standard in the base install.
If you're after interactive use, then by all means define and use your own aliases (or functions) as you see fit. It'd be a pretty crappy idea to not do that. Custom aliases in scripts you share with others are not a good idea, of course, but that's firmly in the scripting aspect you note, not the interactive shell use.
That being said, even with commands being long, tab completion does an absolutely amazing job of shortening what you type. Also it's usually just the first command of a pipeline that doesn't have an alias because after that you tend to use the same few cmdlets over and over again (%, ?, select, ft, ...), which do have short aliases.
I too develop Node.js apps on Windows 10 and I'm with you - there is no reason to go out of my way to make it more Linux-like because everything just works.
I use Cmd over PowerShell though since PowerShell can't process commands separated by "&&", which are used extensively in package.json scripts. I'm happy with this setup though since I don't think the CLI is a good interface at all (I prefer GUIs with lots of keyboard acceleration), so I tend to minimize CLI use as much as possible by automating everything down to single, simple commands.
With VSCode, I can run my package scripts in the built-in terminal as well. Also, if I do have to edit some code on my Mac or Linux boxes (which I usually reserve for just compiling and testing things), VSCode gives me the same experience everywhere. I like that a lot!