From macOS to Windows and WSL (questionable.services)
176 points by elithrar 3 months ago | 270 comments

I genuinely don't get this. WSL's backing filesystem (NTFS) runs like ass. It's dire: at least 10x the latency on everything compared to Linux running natively on the same hardware, because all small writes end up in MFT contention, and the entirety of Unixy tooling is built on lots of small writes.

Plus, on top of that, everything is rammed with telemetry on Windows now, the QA has gone down the toilet (leading to a hosed machine at least three times in the last two years for me), and there is no clear direction they are going in other than "throw poo at everything and see where it sticks". Any bets you make are risky.

And honestly, when I read about PowerShell and Chocolatey being used there, I laugh, because a huge amount of friction for me came from that direction. PowerShell is based on the principle of most surprise. Like when I tell it to download something, I don't expect to have to futz with IE browser settings (WTF?!). And Chocolatey sucks: the repository it pulls from is full of abandoned, broken, and missing packages (.NET SDKs are my favourite pain point here) where people have thought "hey, it'll be good to port this" and then left it to rot.

That's the status quo for me and I'm not happy.

I'm going the other way to OSX. It mostly just works and that's all I want and there's a local support network for the hardware which is decent and doesn't involve me shipping it off to some third party and losing it for 2 weeks.

I feel the same way. I used to use my Windows machine for everything and didn't even own a Mac but working on a Mac has been a real joy comparatively, especially for dev work. Now, my Windows machine is relegated to just being a gaming rig and I'm 100% happy with that. When I heard that WSL was a thing, I went back to try and get that setup and, between trying to turn off Cortana, navigating the god-awful new Control Panel/Settings, and the terrible overall performance, I don't know how someone could subject themselves to that in a professional capacity. Windows to me is infuriating and I'm pretty technically proficient. I feel bad for the droves of users that end up on it and don't have a choice, especially those that aren't technically skilled. It's just bad now.

> I don't know how someone could subject themselves to that in a professional capacity.

Active Directory. Well, AD and a bunch of business-critical applications that only run on, and are only supported on, Windows. But the other OSs don't have anything as good as AD for management of the environment.

AD only works if all your clients are Windows. In 2018, not all of them are; we're out of the monoculture now. Doing AD SSO on other platforms is a proper pain in the butt. For example, when Microsoft deprecated some of the old protocols a few years back, they broke RHEL entirely for two years.

Not sure there is another solution, though. OAuth and the various cloud logins etc. aren't one, yet.

> AD only works if all your clients are windows.

Uh, yeah, that's kinda my point. Outside of SV, pretty much every client is, and for this reason.

Are you saying that outside of Silicon Valley pretty much every client is Windows? Where do you live?

I work for a Fortune 100 company (not in SV, or even California) with 70,000 employees. At least 50% (it's probably higher) of computers here are Macs. When I go to a meeting, the table is a sea of Mac laptops with the occasional Windows laptop here and there. And these aren't meetings with developers, I'm talking about sales, business, training, etc. people.

That’s still extremely rare. Personal laptops are always up to personal preference, but most organizations run on Windows. At my employer, a university enrolling 50,000, everything is Windows. If you want to request a new computer through the school to have access to shared directories, you can pick a Dell desktop or one of two HP laptops. Macs are rare, and they are bought and forgotten by IT. I’ve seen iMacs still running Leopard, and they are all off the intranet.

For my friends who do get Macs from their workplace, it’s because the company is just 20 employees and doesn’t have anyone in IT, let alone a department or any policies.

One difference might be that almost nobody has a desktop computer; the vast majority of people use a laptop. But they are free to select from a menu of about 20 different laptops, including MacBook Airs and MacBook Pros. Almost everyone also has a company-supplied phone. Again, they have a selection of about 20 phones to choose from, including Androids and iPhones. So the environment here is very heterogeneous.

This is increasingly not the case though.

Not everyone lives in a country where people can afford Apple computers, even on IT salaries.

Why do you immediately jump to the conclusion that the parent was referring to just Macs?

Because I have seldom worked in environments that allowed anything else on the desktop.

UNIX derived OSes tend to be relegated to the server room, accessed remotely and live on their own network.

Fair. But I was commenting more on how the environments that were historically served well by AD (offices) are seeing growing use of Apple computers.

[Citation Needed]

I can believe that this is a true statement inside the reality distortion bubble around SV, but here in the real world I'll need to see some evidence.

The problem is that you're assuming I'm in SV, I'm in a small Swedish city and I'm seeing our _very_ windows heavy company starting to adopt Apple computers.

And when I think of the companies I worked for in London they were also increasing the Apple quotient of desktop/laptop machines.

Yes, this is anecdatum but I really can't tell you how strongly integrated Windows is at my company and we're still seeing adoption of Apple computers.

Regardless of that, Sysadmins and Developers tend to get to choose their OS at most sensible companies. (Every company I've worked for, barring the current, has allowed Linux; most even encouraged it)

Microsoft itself went heterogeneous-client for their AD. I'm sure others can.

>AD only works if all your clients are windows. In 2018, not all of them are. We're out of a monoculture now. Doing AD SSO on other platforms is a proper pain in the butt.

That's simply untrue. Every large org I work with has unified auth with AD and a secondary package like Centrify. It's trivial to get working.

And that's ignoring all the Linux appliances and embedded Linux flavors (think lights-out management) that support AD out of the box.

While I think that's a good point, there are alternatives to AD at this point in time. Anyone using that as a crutch is just giving excuses for not wanting to update their infrastructure.

And those alternatives would be...?

MS is very slowly moving away from AD themselves to a "MDM" cloud-provisioning model more or less copied wholesale from what Apple did with iOS. They claim that binding machines and users to AD is optional these days, but the alternative workflow leaves a whooooole lotta gaps. Maybe in 5 years it will be a real alternative.

Plug: I work at JumpCloud, a directory-as-a-service company.

Oh boy, if they somehow made the control panel even worse than it was in Win7, I'll have to take a look at it myself.

I've counted the different kinds of windows in Win7 control panel: there are ten of them. Ten distinct methods of organizing stuff in a window, all in the settings section of the OS, and that's not including third-party additions. There are navigational widgets that don't occur anywhere else in the OS (sidebars with links, iirc).

Also, all the hours lost in wandering through the control panel due to its awful localization…

These days, I'm pretty sure control panels can be used as a litmus test as to whether the GUI authors have a grip on UI design.

Windows 10 added a separate settings menu which doesn't support CPL applets, so they kept the old one too. So now all of your settings are in two different places, phrased slightly differently, in two menu trees, with fewer or more options for each setting. Oh, and now all the windows are WPF in an SPA, so they look different too.

It feels like the settings are in a dozen different places on Windows to begin with, and you can go down five levels or deeper to get to the option you're looking for.

The "good old" control panel doesn't even look categorized well.

ha! you’ll love windows 10 let me tell you...

Apple wanting me to know all the special keys by heart, and not offering ergonomic keyboards, is one of the things that puts me off.

Plus why do they have to have a microscopic return key on laptops?

I'm using an MS Natural 4000 with a MacBook. The Alt and Win keys are remapped through the standard settings; the context-menu key is remapped with Karabiner. Some extra keys can be mapped, though alas the numbered ones aren't seen even by USB Overdrive.

(Of course, the Natural/Sculpt series barely scrapes the ‘ergonomics,’ what with the slanted key layout inherited from typewriters, the flat board made for extendable fingers instead of jointed ones, the too-low wrist pad.)

As for the special keys, I'm not sure what that's about, since I'm using pretty much the same number of hotkeys as I did on Linux, and a bit more than on Windows. I'm also using Alfred for many commands, which are written in English instead of obscure semi-random key combinations.

One of my favorite things on macOS is that Cmd+? opens the Help search menu, which you can use to type the name of a command you want to run but don't know where to find or what combo it uses. It's especially useful for menu items that don't have a key mapping assigned.

What? They don't want you to know them by heart. That's why they're all listed in the menu next to every command that has them, and they update live as you hold down the keys, so you can see what modifiers are available.

Also, you can use whatever keyboard you want. I don't see how this is an issue at all... And what microscopic return key are you referring to? The return key on my MacBook is nearly the same size (maybe a half-centimeter shorter) than my full-size desktop keyboard.

I use a Microsoft ergonomic keyboard with my Mac Pro. It works just fine. I also have all of my keys mapped the same as what I was used to on Windows (ironically, the last time I used Windows was about 15 years ago; old habits die hard).

I use windows on my dev machine for one reason - visual studio.

I recently tried setting up an alternative dev environment on Linux and it takes some work but I think Visual Studio can be matched on Linux now for C++ development.

The combination of VSCode, cquery, and clangd can provide code completion and navigation that I think is finally on par with IntelliSense, and it scales to large projects. VSCode's GDB integration is tolerable for debugging.
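For reference (my addition, not from the comment above): both cquery and clangd discover compile flags through a `compile_commands.json` compilation database at the project root. CMake emits one with `-DCMAKE_EXPORT_COMPILE_COMMANDS=ON`; a minimal hand-written entry looks like this (paths are illustrative):

```json
[
  {
    "directory": "/home/me/project/build",
    "command": "clang++ -std=c++17 -Iinclude -c ../src/main.cpp -o main.o",
    "file": "../src/main.cpp"
  }
]
```

Symlinking the generated file into the source root is usually enough for either language server to pick it up.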

Linux has some definite advantages too. For one thing the filesystem is just blazing fast, which really helps with git. Various other things are faster too, like I'm writing an OpenGL app and creating a GL context is much faster on Linux. Also I've been trying out https://zapcc.com which made my builds five times faster, no joke. I'd almost move to Linux for that alone.

Just want to second this. Cquery can be finicky to get working, especially if your project involves precompiled headers (I've found that building cquery from source myself and ensuring that it's linked against the same version of libclang as the clang I'm using for compilation helps).

But once it's working, cquery is an absolute game changer for C++ development. It feels like a completely different activity than it did before.

VS is so much more than just code completion and navigation, even for C++.

Code completion, navigation, and debugging are the only VS features I miss on other platforms. What C++ features are you thinking of?

Architecture modelling, unit testing, C++/CLI, C++/CX (now C++/WinRT), MFC/UWP, multi-threading/GPGPU debugging, secure STL, SDL annotations, code maps.

I use the refactoring features quite a bit. The dialog editor is pretty easy to use as well.

I'd love to see a blog post that gets a fully-featured C++ dev env working in VSCode.

I started doing web dev before 2000. Only Windows worked as a dev machine then; OS X came around, but the lack of third-party tools made it a no-go. Now I can't think of going back to Windows, because the third-party tools I take for granted aren't available with the same level of quality on Windows.

Three or so years ago, I tried to set up a dev environment on Windows just to see how good it could be, but I gave up within an hour, knowing it would just slow me down badly.

I have no idea why most of the people I know still use Windows (and, funnily, one guy uses Linux).

I was nodding along until you said that "on OSX things mostly just work". Oh boy. I have fought Homebrew, I have fought Apple destroying permissions on each update, and the OS has its share of embarrassing mistakes.

Linux: not much better.

Everything sucks, they just suck in a variety of different flavors.

At least the updates work. I’ve lost count of the number of times Windows Update has failed: reboot loops, half-installed updates, updates conflicting with each other, and updates that killed my internet connection. At least I haven’t had an update brick my machine. Yet.

And don’t get me started on the upgrade process. Quicker and easier to drive to the store and buy a new machine.

> At least the updates work.

Ehhh, I had a hell of a time upgrading a machine (a standard Core i7 desktop) from Ubuntu 16.04 LTS to 18.04. Read: I lost the entire machine and had to reinstall from scratch.

Also, Linux support for new hardware isn't amazing. So when I upgraded to a Ryzen APU, it basically wasn't possible to get Linux running on it without resorting to a beta kernel and lots of manual futzing around. WSL has been a godsend for me.

Yes you’re right there I suppose. The point is I pick the sandwich with the least amount of shit in it if I can.

It depends on the domain. I hate working with macOS APIs far more than either Windows or Linux ones. They don't give a damn about backwards compatibility, force you to use programming languages nobody uses on any other platform (no, I don't care how "nice" they are), and change without a moment's notice. They actively shun and deprecate open source solutions, and seem to go out of their way to be that special snowflake. You could say certain things perform better on a Mac, but other things (like graphics) are utter garbage due to the lack of first-party driver support. You can install Boot Camp on a Mac and immediately get better gaming and graphics-editing performance, for example. That plus the overpriced hardware and the lack of a real keyboard, and I'm just done.

The plan was always for Linux to become the best operating system for everybody. The plan was not to become the best by everybody else deciding to suck. Still, a victory is a victory...

I've had a pretty good experience running MacPorts instead of Homebrew. It's easily been the best package management experience I've had on a non-Linux OS.

Interesting, had the opposite experience. Homebrew does almost everything very nicely for me. Macports not as much, although to be fair this was years ago.

My last experience with homebrew was a while ago, but it was a jankfest. The repo was littered with arbitrary major/minor versions of the same packages and the bottling metaphors (bottles, casks, tap, pour -- a cellar even?) are IMO contrived and do nothing to make things understandable. At one point I had to go create a Github API token and set it in my environment just to make the thing work again. And lastly, I just couldn't take seriously the idea that piping the output of curl into sudo is the install method for something that wants to spread itself across your filesystem.

Maybe I'm old, but it struck me like they're reinventing the wheel without even trying to understand what works with existing package management.

MacPorts forces you to install a 15GB application that lots of users don't need (Xcode). Homebrew just requires the CLI tools.

Not true. You can use the Xcode command line tools now, without actually installing the full Xcode. I do this on my home machine and it's fine.

> PowerShell is based on the principle of most surprise.

I don't think that's true at all. Compare powershell command names to UNIX command names for example. You can rest assured that Get-X will not change things and that there is almost certainly a Set-X that will, for example. You can tab complete variable names, cmdlet names, cmdlet switches, etc. Additionally, by piping objects instead of text, you aren't caught off guard by things like spaces in a path name.
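To make the spaces point concrete, here's the classic UNIX footgun that PowerShell's object pipeline sidesteps (a minimal sketch; the filename is made up):

```shell
# A filename containing a space gets word-split when handled as text.
dir=$(mktemp -d)
cd "$dir"
touch "my file.txt"

# Unquoted command substitution splits the one name into two arguments:
for f in $(ls); do echo "arg: $f"; done   # arg: my / arg: file.txt

# A glob expansion (with "$f" quoted) keeps the name intact:
for f in *; do echo "arg: $f"; done       # arg: my file.txt
```

PowerShell avoids this class of bug by passing FileInfo objects down the pipeline instead of whitespace-separated text.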

> Like when I tell it to download something, I don't expect to have to futz with IE browser settings (WTF?!?!).

It definitely does have some flaws though, I'll give you that.

As an aside, you can use Invoke-WebRequest with the `-UseBasicParsing` switch to leave IE out of it. They definitely should have done that the other way around.

I was teaching a class on algorithms to some engineering students. They needed to sort a file of random integers using a few different algorithms.

This is the bash I wrote to check their results were sorted properly on Linux:

    diff <(cat output.txt) <(cat input.txt | sort -n)
This is the PowerShell I wrote to check their results were sorted properly on Windows:

    cat input.txt | foreach-object { [Int] $_ } | sort-object | out-file -encoding ascii sorted.txt
    diff (cat output.txt) (cat sorted.txt)
The casting felt a little painful, but it seemed straightforward. The sorted result was generated correctly.

Unfortunately, I was surprised to discover that PowerShell's diff found no differences between a sorted file of numbers and the unsorted file of numbers.

That's because `diff` is an alias of `Compare-Object`, which works radically differently than UNIX `diff` because of the defaults. In some circumstances you could see how this behavior would be more useful, though obviously not this one.

For reference, `Compare-Object -SyncWindow 0` would give the behavior you wanted. The default is [Int32]::MaxValue.

Aliasing UNIX command names is another one of PowerShell's mistakes in my opinion, because it creates the expectation that they work the same way when they often very much don't. Ultimately rather than help UNIX users, as was probably intended, this just confuses and annoys them.

> rather than help UNIX users, as was probably intended, this just confuses and annoys them.

Compatibility layers are great, when they're actually compatible. Almost-compatible layers are a nightmare.

> Compatibility layers are great, when they're actually compatible. Almost-compatible layers are a nightmare.

That's why you should not use them. You decided to use them.

I wasn't just blindly assuming the alias worked the same. I actually read the Compare-Object documentation [1], which contains that exact command as an example of how to compare files. I just didn't understand the sort of comparison it did.

If you search "PowerShell compare two files", it becomes pretty clear this is a common confusion even among people using the name Compare-Object. For example, the top serverfault question has an incorrect top-voted, accepted answer [2].

[1]: https://docs.microsoft.com/en-us/powershell/module/microsoft...

[2]: https://serverfault.com/a/5604

Well, perhaps some hypothetical "you" uses them:) To be clear, I run 100% genuine Bourne-family shells on unix-like systems, and merely visit MS-land for sightseeing;)

> You can tab complete variable names, cmdlet names, cmdlet switches, etc.

UNIX shells can do all that too.

> Additionally, by piping objects instead of text, you aren't caught off guard by things like spaces in a path name.

Not just spaces, to be fair. Files that begin with a hyphen can cause major issues too.
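For the hyphen case, the standard (if under-advertised) escape hatch is `--`, which tells POSIX-style option parsers to stop looking for flags (a sketch with an invented filename):

```shell
dir=$(mktemp -d)
cd "$dir"

# '--' marks the end of options, so '-rf.txt' is treated as a filename:
touch -- "-rf.txt"

# Without it, rm tries to parse '-rf.txt' as a bundle of flags and errors out:
rm "-rf.txt" 2>/dev/null || echo "parsed as options, not a filename"

# With '--' the file is removed as intended (prefixing './-rf.txt' also works):
rm -- "-rf.txt" && echo "removed"
```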

There are definitely a lot of weird edge cases with UNIX shells, but on the whole I still find them easier to work with than PowerShell. But that’s just me.

What about UNIX shells that allow you to call public functions in dynamic libraries as if they were shell commands?

Or interact with running applications through OS APIs, like PowerShell allows with WMI, COM, OLE Automation and .NET?

Yes, there is DBUS, but it isn't really integrated across the whole stack.

> What about UNIX shells that allow to call public functions in dynamic libraries as if they were shell commands?


> Or interact with running applications using the OS APIs like Powershell allows with WMI, COM, OLE Automation and .NET?

Depends what you need. You can use `nc -U /var/run/socket`, or pipes with expect. If you have root permissions, you can use gdb scripting/eBPF to inspect the user-space/kernel-space data of your application.
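As a tiny sketch of the pipe-based flavor of this, two unrelated processes can talk through a named FIFO with no extra tooling at all (the paths are generated at runtime):

```shell
# "Server": a background process that uppercases whatever arrives on the pipe.
fifo=$(mktemp -u)          # -u only generates an unused path; mkfifo creates it
mkfifo "$fifo"
tr 'a-z' 'A-Z' < "$fifo" > "$fifo.out" &

# "Client": writes a request, then waits for the server to drain the pipe.
echo "hello from the client" > "$fifo"
wait
cat "$fifo.out"            # HELLO FROM THE CLIENT
rm -f "$fifo" "$fifo.out"
```

Opening the FIFO blocks until both ends are attached, which is what synchronizes the two processes here.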

> Yes, there is DBUS, but it isn't really integrated across the whole stack.

DBUS is a bus for desktop/mobile; I've never seen anyone use it for server applications. The main idea (after KISS) of the Linux shell is that if you can't write something in awk, it's better to use Python. Actually, in the Linux world, gRPC/REST is more common than OS-level APIs.

So a couple of Linuxisms not portable across UNIX, which was my point.

Unix sockets and pipes are portable across all Unixes (POSIX, strictly, since Linux is not Unix but OS X is). Furthermore, Unix sockets and pipes are backwards compatible to 4.4BSD Unix and older.

Actually, this comparison is not correct. Windows and Unix have different philosophies of kernel<->user-level interaction: while Windows tries to add all functionality to the kernel, Unix does it at the user level. Remember the KISS principle. So if you're comfortable with the Windows OS-level API, that's good for you, but in the Unix world it isn't necessary, because we already have a different inter-process communication approach.

Which is why UNIX shells are a poor man's REPL quite far from what a Xerox inspired user/developer interaction aspired to achieve.

Pipe based interactions work properly only across CLI based applications, and even then the applications need to be explicitly written to accept and parse the data.

PowerShell or other Xerox inspired shells don't need kernel support to achieve their workflows.

> Which is why UNIX shells are a poor man's REPL quite far from what a Xerox inspired user/developer interaction aspired to achieve.

Unix shells are quite good for their purpose; a shell is not a full-featured programming language. Can you explain what a "poor man's REPL" is?

> Pipe based interactions work properly only across CLI based applications

Yes. If you want to find something in logs, or are looking at any other text data, pipes are a good solution. In any other case (like binary data) it's better to choose gRPC (or any other RPC). I have never seen GUI apps on servers, sorry.

> PowerShell or other Xerox inspired shells don't need kernel support to achieve their workflows.

Good for them :)

A poor man's REPL is a shell that falls short of everything that was possible to do across the OS infrastructure with what Lisp/Smalltalk/Oberon/XDE/Cedar REPLs allowed.

An approximation of it would be a shell that is a Jupyter notebook with access to the whole set of OS APIs, shared libraries and applications, with the ability to jump into the debugger at any given step of the pipeline and redo the current step.

I bet you saw GUI apps on UNIX workstations, though.

Then again, maybe Steve Jobs was right after all.



> A poor man's REPL is a shell that falls short of everything that was possible to do across the OS infrastructure with what Lisp/Smalltalk/Oberon/XDE/Cedar REPLs allowed.

A shell is a command-line interpreter with additional functionality like scripting. Why should it have a fully functional REPL? Lisp/ST/etc. are programming languages; they are uncomfortable for copying files, moving directories, calling curl or using tar.

> An approximation of it would be a shell that is a Jupyter notebook with access to the whole set of OS APIs, shared libraries and applications, with the ability to jump into the debugger at any given step of the pipeline and redo the current step.

You can use Python for this purpose. I don't understand why the Unix shell should have all this functionality.

> I bet you saw GUI apps on UNIX workstations, though.

Yes, of course, I have GUI apps on my MacBook, but they don't use piping for inter-process communication, because bi-directional pipes are not as useful as ordinary RPC.

> A shell is a command-line interpreter with additional functionality like scripting. Why should it have a fully functional REPL? Lisp/ST/etc. are programming languages; they are uncomfortable for copying files, moving directories, calling curl or using tar.

Lisp is an interactive programming language. One can run functions like COPY-FILE, RENAME-FILE, ...

The Listener of the Symbolics Lisp Machine has commands which allow prompting of arguments, input menus, completion, object reuse, etc. It's also integrated in Lisp. For example you can list a directory using a command and then use the pathname objects in calls to Lisp functions. One can also use the output of Lisp functions as input to commands. It's an interesting mix of a command interpreter and a Lisp REPL.

I guess we'll have to agree to disagree.

Nope. Your point was to argue personal preference as fact rather than simply accepting that some people like apples and others prefer oranges.

I get so utterly sick of stupid arguments where people compare their tools and then proceed to make claims that one is objectively better than another, when it’s clearly just a matter of personal taste.

If I want that level of control I wouldn’t want to be writing a shell script in the first place.

That’s the problem with Powershell for me, it’s taken the REPL concept but made it too feature rich and too verbose so you lose the speed of writing the code and simplicity of piping predictable streams of data.

The thing with Linux / UNIX is it’s very command line driven from the outset. So if you wanted to do the function calls from shared libraries et al you could write a wrapper in C, Python, Perl, whatever and still use Bash etc to interface with it. You’re not tied to using Bash to solve all of your problems. Bash just provides a convenient interface for chaining all those tools in other languages together. Now I know the same can be said for command prompts on Windows but for whatever reason that seems to be less common than it is on POSIX.
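A sketch of that wrapper approach, using Python's ctypes to expose libm's `sqrt(3)` to the shell (the function name `libm_sqrt` is invented for the example; it assumes python3 and a findable libm):

```shell
# Wrap a C function from a shared library as an ordinary shell command.
libm_sqrt() {
    python3 - "$1" <<'EOF'
import ctypes, ctypes.util, sys

# Load the math library and describe sqrt's C signature for ctypes.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(float(sys.argv[1])))
EOF
}

libm_sqrt 9   # prints 3.0
```

From there, Bash just sees another command, so it composes with pipes and the rest of the toolbox like anything else.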

That is where you get it wrong, because the REPL concept as introduced by Smalltalk and Interlisp-D/Lisp Machines is exactly what PowerShell tries to achieve.

UNIX shells are a poor imitation of it.

I never said UNIX invented the REPL concept (though I’d have thought time-sharing systems predated Lisp machines), and this is a conversation about personal preference, so there are no right or wrong answers (aside from arrogant comments from techies who feel they should belittle others for having differing preferences).

I don’t happen to like Powershell. That doesn’t make me wrong. That literally just means I don’t happen to like Powershell. Period.

(and voting me down for personal preference is really just pathetic)

I am not the one downvoting you; I'd rather have discussions, even when I might not be right.

> REPL concept

The interactive REPL was introduced by Lisp around 1962, way before Lisp Machines and Smalltalk.

Yes, but not with the capabilities that their graphical successors brought into the computing world.

You surely could not display inline structured data and interact with it graphically on an IBM 704 teletype.

Well, you had Lisp structure editors already in the 60s; doing it graphically doesn't really add much. If you read the manual for BBN Lisp, you can see that by the end of the 60s / early 70s it did more than many current systems do, like managing source code, working with images, structure editors for code and data, sophisticated user-level error handling, etc.

Which one did you learn to use first? Wondering if that has much of an impact. I've only used Powershell for a bit while I work with *nix shells daily so I'm definitely biased towards them but wonder if I'd cut my teeth on Powershell if I'd think the same way.

Neither. I learned the CLI from DOS (or it might have been CP/M?).

Unless you count BBC BASIC?

The last point is the point really.

The entire objectivity of it is ruined by weird edge cases and knowledge. The car only goes straight if both rear windows are open. You have to say "beetlejuice" three times before putting it in reverse etc etc.

You sure you aren't describing an average UNIX command?

No because you don't have to google for an hour to find the "don't punch me in the dick" flag.

Instead you just have to google how to use `tar` properly. Again.

You can tab-complete PowerShell cmdlet switches, and MSDN has extensive, good documentation. Here's the page for Invoke-WebRequest; notice the very first switch listed. Further note that Microsoft realized they screwed this up and deprecated the switch in PowerShell 6.0, since its behavior is now the default.


No googling because it hasn't changed in the last 20 years.

How do you unzip a 20 gig file in powershell?

1. You can use the Windows shell via a complicated series of calls. This runs like ass, as the Windows built-in decompression is terrible. Also, it only works properly if there's a desktop session running, so over WinRM it dies with a COM error.

2. Find out which version of PowerShell you have, hope it's 5.0 or later (or install the right WMF version), and then run Expand-Archive. It still runs like ass. But oh no, we can't, because now we have to push WMF 5.0 to 150 servers, which is going to take DAYS with SCCM. Some kit is on Windows 2012, some on 2012 R2 and some on 2016, because migrating stuff is so damn hard (it has to be done by hand), so you have a mix of everything, leading to a nightmare from hell.

3. Import System.IO.Compression and call .NET to do it, then wrap it in a cmdlet called Expand-Archive, then wonder why it breaks when someone pushes out WMF 5.0+.

4. Push 7za.exe out with ansible and run that.

This is the problem. No good outcome for really simple tasks. Friction for all of them.

Edit: I'm being 100% honest here when I say I hate it with such a passion because I've lived in the same building as it for several years, not because I'm unaware of how it works.
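For completeness, one more workaround I'd reach for, assuming a Python 3 interpreter is already on the boxes (which on Windows servers it often isn't): the stdlib `zipfile` module has a small CLI that behaves identically everywhere.

```shell
# Round-trip a zip archive with Python's built-in zipfile CLI.
dir=$(mktemp -d)
cd "$dir"
echo "payload" > file.txt

python3 -m zipfile -c archive.zip file.txt   # create
mkdir out
python3 -m zipfile -e archive.zip out        # extract
cat out/file.txt                             # payload
```

It won't win any speed contests on a 20 gig archive, but it is the same incantation on every platform.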

> Import System.IO.Compression and call .Net to do it.

...and? Why is that a problem? It is an advantage that PowerShell can call on the CLR directly (including its C FFI).

Because half the time it slings

Exception calling "ExtractToDirectory" with "2" argument(s): "End of Central Directory record could not be found."

7zip works fine!

(Copied straight from my MASSIVE OneNote book of weird PowerShell and .NET errors and workarounds.) Also, they just deprecated desktop OneNote in favor of the turdy UWP app. Another thing to be angry about.

But you're completely willing to forgive UNIX's toolset's problems because you're used to them after working with them for decades. I get it, but that does not make them better.

Working properly == better in my mind.

Consistency, trust over everything.

"properly" in this context meaning "the way I am used to, warts and all", right? Because otherwise I can't see why you'd be so upset about Invoke-WebRequest doing exactly what it was designed and documented to do.

Consistency? In UNIX tools? No. What you have is familiarity. Don't get me wrong, there's a lot to be said for working with tools you're familiar with, but again that doesn't mean they are actually objectively better tools.

Properly as in it does the intended job.

Consistently as in it stays doing the intended job in as idempotent a manner as possible.

Whether or not it’s ugly or elegant doesn’t matter if it doesn’t work or stay working.

You can argue semantics but this is a turd that can’t be glittered.

Another way to look at powershell is that the idea is good but the implementation isn’t. With Unix-style shells, the implementation is excellent but the idea is mediocre. All productivity, and therefore business value, is really tied to implementation quality.

> Another way to look at powershell is that the idea is good but the implementation isn’t.

I wouldn't entirely disagree with that statement.

> Unix style shells the implementation is excellent but the idea is mediocre.

This I would. The idea was a really good one... in the 70s. It has utterly failed to evolve since then. The implementation is actually pretty horrible. If you don't have a lot of familiarity with it, it is undiscoverable, full of footguns, and consistency is an afterthought.

You learn to really appreciate POSIX implementations when you do things like grep and pipe results over 100GB+ text files. And no, these text files were not created due to the unix philosophy; I’m talking about XML generated in Java, as an example. grep & co are a godsend for tackling these without ripping out every hair on your body.

Especially when one is adventurous enough to write them in a portable way across multiple POSIX implementations.

Forget about it, *nix has won, no matter how bad some of its design might have been. The way some *nix fanatics act makes me worry about open source in general.

The only two tar commands most people will ever need:

Equivalent of "zip": `tar cjf OutputFile.tar.bz2 InputFileOne InputFileTwo InputFileThree`

Equivalent of "unzip": `tar xf InputFile.whatever.extension`

(This works whether `InputFile.whatever.extension` is bzipped, gzipped, or not compressed at all)

I would strongly suggest that you use the `a` flag rather than `j`, to automatically select file type.
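A quick sketch of the `a` flag in action (GNU tar; the `demo/` directory and filenames are made up for illustration):

```shell
# `c a f` creates an archive and picks gzip from the .tar.gz extension;
# `x f` auto-detects the compression on extract
mkdir -p demo && echo "hello" > demo/file.txt
tar -caf out.tar.gz demo/
rm -rf demo            # throw away the original...
tar -xf out.tar.gz     # ...and restore it from the archive
cat demo/file.txt      # → hello
```

Swap the extension to `.tar.bz2` or `.tar.xz` and `a` picks the matching compressor with no flag changes.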

What can be simpler than tar? ‘tar eXtractZipFileVerbose $file’, done!

```tar --help | grep <thing I am trying to do>```

Fun thing about Invoke-WebRequest is that it uses TLS1.0 by default and doesn't understand anything newer unless you enter an incantation beforehand:

[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12

It’s even worse than that, as the behaviour is defined by a weird combination of the OS, the .NET framework, the PowerShell version and local policy settings.

Yes, I imagine a lot of .NET devs don't realize their apps are TLSv1.0 only. I've worked with one vendor who was surprised when I told them.

Exactly that. Some vendors have only just got rid of SSLv3 in financial services as well. It’s slightly scary.

> As an aside, you can use Invoke-WebRequest with the `-UseBasicParsing` switch to leave IE out of it. They definitely should have done that the other way around.

They have made that change:

> This parameter has been deprecated. Beginning with PowerShell 6.0.0, all Web requests use basic parsing only. This parameter is included for backwards compatibility only and any use of it will have no effect on the operation of the cmdlet.

6.0.0 was released on 10 January 2018.

> IE browser settings

Those are the OS web settings. As used by all OS services, e.g. BITS (https://docs.microsoft.com/en-us/windows/desktop/bits/backgr...) and the Live Tile updater; and any app that doesn't embed its own network stack (e.g. Steam, most Mail apps, etc.)

People just think of this control panel as "the IE browser settings" because people use Firefox and Chrome, which do embed most of a network stack (for portability), and therefore ignore most of the OS network settings in favor of their own embedded configuration-points.

I use a mix of mac and windows machines, and all of my dev work is on windows. I mostly agree with the points you mention but they also don't matter that much to me day-to-day.

Windows upgrades have lately had a poor quality track record. I've had a bunch of cheap windows laptops break, all consumer-class machines running windows 10 home. My work thinkpad has been absolutely reliable though, partly because it's a thinkpad (better driver situation), and partly because it's windows pro. I only get feature updates after they've rolled out to everyone else, so by that time all the bugs are sorted out. Everyone knows not to upgrade a mac to a new major release when it's .0, you wait until .1 or .2. Windows home doesn't give you that option, windows pro does.

WSL performance is indeed not great, but getting better with every release. I side-step it by doing the heavy stuff natively. I only use WSL for typical shell scripting, and as soon as I need to run something more I'll figure out a way to solve it natively. There's almost always a way (couldn't figure out hadoop development, had to run a linux VM to do that). Did java, scala, node, php, python all without seeing a bash shell. I see WSL as an escape hatch, not as a daily driver.

I don't use chocolatey. I don't expect windows to do package management well, so I don't even try. I only set up a windows machine at most once a year, and it's not that much work. I do use ninite to get the basics on it faster.

As for the telemetry ... I don't see the harm. There hasn't been a single person reporting real harm (that I'm aware of, someone will now prove me wrong) and they've been doing this telemetry for years. I just wish they used their telemetry instead of shipping bugs that their insider program reported and they subsequently ignored.

They're making their money other places.

Nine times out of ten, that telemetry should not be expected to improve the product.

I switched to Powershell/Windows after 20 years of bash/Linux and MacOS (see https://github.com/mikemaccana/powershell-profile/)

> WSL's backing filesystem (NTFS) completely runs like ass.

Yep you're right, and I've asked Microsoft about this and they're working on it: https://twitter.com/mikemaccana/status/1047941737719762946

> powershell's principle is based on the principle of most surprise.

Really? I find it quite consistent. Here's a bash to powershell conversion:


The general syntax is very much designed rather than grown, you can predict most commands, and using 'select' and 'where' to pick the keys you care about in your output is way better than scraping with regexes.

> Like when I tell it to download something, I don't expect to have to futz with IE browser settings (WTF?!?!).

That sucks. I haven't ever had to do that. I'm not sure why.

WRT NTFS I don't think they can fix it without releasing another major version of it and that comes with some insane level of risk based on the early churn in NTFS. I don't have an appetite for that. They tried to fork it with ReFS but that went pretty much nowhere.

Powershell is amazingly consistent for sure, but each little bit of consistency has numerous inconsistencies in it which aren't exactly stable, hence the whole web request nightmare. Also, different methods of getting the date return timezone-aware and non-timezone-aware values. They nearly had it but rushed it out of the door. Also things like WinRM are totally unreliable and don't scale well due to the runtime performance being dire (try arguing with ansible/WinRM failures on a daily basis).

I think a lot of people using object pipelines miss the major point of designing text pipelines, which is that the first step is to make your pipeline easily parseable. So, for example: dates as unix dates, colon-separated fields, etc. Then you don't need to use regex and the like. For that little bit of compromise up front you end up with something which is several orders of magnitude faster than the UTF-16 and serialisation-backed powershell pipelines. Try grepping a few gig of data with it :)
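A minimal sketch of that idea, with made-up field names (`epoch:host:status`): emit unix timestamps and colon-separated fields, and downstream consumers only need `cut`:

```shell
# Hypothetical log format epoch:host:status - trivially machine-parseable
emit() { printf '%s:%s:%s\n' "$(date +%s)" "$1" "$2"; }

emit web01 ok
emit web02 down

# Downstream: grab the host field with no regex at all
emit web01 ok | cut -d: -f2   # → web01
```

Sorting by time is just `sort -t: -k1,1n`, for the same reason: the format was designed to be parsed.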

On your last point, you're lucky. This comes up a lot in corporate networks and proxies and occasionally randomly out of the blue.

> I think a lot of people using object pipelines miss the major point of designing text pipelines.

And if developers of UNIX tools actually did that then maybe object pipelines wouldn't be nearly as appealing as they are, but we don't live in that timeline.

I use jq a lot in bash. That's a good part of an object pipeline.

curl | jq | grep/awk | jq | curl

Yeah, but ip, and ls, and cat /etc/somefile.conf, and pretty much anything aside from curl don't output structured data, so you can't use jq on them.

That was just an example of a pipeline. jq can parse plain text easily with split/map as well.


ls -l

ip -o
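For what it's worth, jq's raw-input mode can be bent to plain text like this, though the field positions here are an assumption for illustration, not a stable contract of any tool's output:

```shell
# -R reads each line as a raw string; split + object construction turns it into JSON
printf 'eth0 192.168.1.10\nlo 127.0.0.1\n' \
  | jq -R -c 'split(" ") | {iface: .[0], addr: .[1]}'
# → {"iface":"eth0","addr":"192.168.1.10"}
#   {"iface":"lo","addr":"127.0.0.1"}
```

It is still scraping, as the reply below says, just scraping that hands you objects afterwards.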

I know about `ls -l` (and have for about 25 years now, thanks). You're still scraping.

Have you seen this comment from a Microsoft employee explaining why NTFS performance is so bad? It's pretty interesting and informative, but doesn't leave me with much hope that things will ever improve.


I use Windows for development at work (semi-voluntarily - our Windows support was getting neglected because most developers choose to run macOS or Linux), and I've been getting pretty fed up with how slow it is. And it's not just filesystem performance; process creation and terminal output are horrendously slow as well.

The only redeeming thing about development on Windows is that Linux doesn't have any debuggers that are as good/usable as Visual Studio.

I’ve seen that and it misses the mark by a mile because the main performance constraint is MFT contention.

You can see this because the tools suck just as bad on native windows as on WSL without that in between.

Check out 10,000 small files on (1) NT native SVN, (2) WSL on Windows, (3) Linux on ext4.

1 and 2 are within 10% of each other. 3 is 10x faster on average.

At this point I reckon they won’t fix NT but close the gap on NT and forget to mention the MFT problem.

I’ve dealt with this issue going back 20 years since NT4 for ref.

Very interesting post - thanks for sharing it.

Filter drivers and IRPs really are the worst architectural blunder MS made in Windows. They took what really should be a corner-case (driver interposition) and put it on the development path of every hardware driver programmer to trip over. We got KMDF eventually, but that took years...

> 10x the latency on everything at least compared to Linux running natively on the same hardware.

The vast majority of my real world experience has not been that at all. Then I looked up the benchmarks


Also, I use older desktops and running my scripts and Ranger feels very quick. Having Ranger alone is HUGE for my workflow. Then I have awk, sed and grep at my fingertips?

Personally I would take Windows 10 WSL over any of my Apple OS experience. I really think the machines know I don't like Apple, because for the past 25 years they have always locked up, crashed or overwritten what I do with them.

The file IO / compile benchmarks are realistic workloads. Look at: https://www.phoronix.com/scan.php?page=article&item=wsl-febr...

Also you can't interact with the WSL directory outside in a graphical text editor or File Explorer. There's going to be someone who responds to this saying it works for him, and I'm very happy for you. Every time I create a folder in the WSL directory in File Explorer it's permanently un-cd-able in WSL, and vice versa. Editing text files outside corrupts them. Yes, I've tried reinstalling.

WSL is just a shitty implemention on a shitty OS. Linux and OS X work perfectly and they have package managers and I can control updates.

The trick here is to not use any windows program to modify anything on the WSL file system (because of the unix permission system). But it works vice-versa ... so you can use any unix program to modify a file on the windows file system.

So my hack there is to simply make a symlink in my unix dir (`~/dev`, usually) that points to a folder on the windows file system ... that way I just do all my work in ~/dev and it's all good.
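Concretely, something like this (the username and paths are placeholders, adjust to taste):

```shell
# Keep the working tree on the Windows side so Windows editors can touch it safely;
# WSL only ever follows the symlink, never creates files Windows has to modify
mkdir -p /mnt/c/Users/me/dev
ln -sfn /mnt/c/Users/me/dev ~/dev
```

`-sfn` makes the link idempotent, so re-running a dotfiles setup script doesn't fail if `~/dev` already exists.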

What’s the point then? Cygwin works better than this.

WSL works with unmodified Linux binaries, while you need to port/recompile everything for Cygwin.

I was simply communicating how to avoid this issue ... judgments of worth and value are a separate discussion.

Yeah - I generally touch files or create dirs in WSL first to get around it. Otherwise it is a cluster. If i could just use Linux @ work I'd be much happier.

Yes, you should have heeded the warnings against that. The article also mentions this in passing. You can get at Windows files from WSL, but don't do it the other way round.

you can get package managers on Windows as well, such as chocolatey or https://scoop.sh/

You hit the nail on the head with your description of Chocolatey.

I've found scoop to be a lot better (https://scoop.sh) for package management on Windows 10 than Chocolatey.

> WSL's backing filesystem (NTFS) completely runs like ass. It's dire. 10x the latency on everything at least compared to Linux running natively on the same hardware. This is because all small writes end up in MFT contention and the entire of Unixy type things are based on lots of small writes.

Does this matter? It seems good enough to me to use as a development environment, to the extent that GNU/Linux and GNU/Windows are roughly equal for development purposes.

10 years ago, on the exact same hardware, running a Rails application test suite in a Linux VM through VirtualBox was nearly 2x faster than running natively on the Windows side of things. This must be a combination of things, namely Ruby on Windows was super slow back then, but the filesystem impacted things as well.

10 years ago WSL didn't even exist.

Also, Ruby on Windows has always been an afterthought with a lot of dumb bugs, incomprehensible setup requirements, and terrible performance. Ruby is the biggest reason I use WSL today.

Nix runs way slower on Windows, since it caches by hashing all source code on each compile.

We have GitHub repositories with tens of thousands of files in, so yes.

For me the main problem with NTFS is the file locking.

With Unix you can pretty much delete any file at any time (sudo ftw!) but not with WSL. Opening a file in an editor on the Windows side is often enough to lock things up.

(The case sensitivity is another sore point, but that can be averted by using only lowercase.)

That’s a very good point. The file locking semantics are different and totally incompatible.

Maybe it depends on other things besides WSL. I wonder if your system is mis-configured or has a weird combination of hardware.

For example I use WSL and it doesn't run like ass. All I did was disable the Windows Firewall and HD monitoring tools. Everything else is stock Windows 10 Pro (stable channel).

Here's some numbers.

NOTE: All of this is running in WSL, and to make matters "worse", it also includes running these things through Docker for Windows with WSL configured to communicate with Docker for Windows.

- A large Rails app (15k+ lines, 50+ gems, etc.) takes less than 100ms to pick up a code change with a Docker volume. I can see code changes before I'm able to even move my eyes to a browser to reload the page. It's awesome.

- Flask, Phoenix and Node apps also pick up code changes nearly instantly and the overall development experience is awesome. This is all through Docker as well.

- 100kb+ of SCSS running through a bunch of Webpack loaders compiles in less than 3 seconds in "watch" mode, without any type of caching. Slower than native Linux? Yes, probably, but it's not slow enough where it's an issue. There's a lot of low hanging fruit to optimize too. I didn't even try to make it faster because it doesn't have a negative effect on my daily development.

Long story short, combine WSL with a good terminal and Docker, and you have yourself a great development box. This is with ~$700 worth of computer parts on a 5 year old workstation, with an early generation SSD and I don't even keep my source code on the SSD (it's volume mounted into Docker with a spinning disk 1TB drive).

I can understand all of that. I'd also rather use OSX, but the new MacBook Pros have such a garbage keyboard that I'd rather use Windows and the slow WSL (though I use native Linux myself).

I just bought a 2018 MacBook Air. The keyboard is different and could do with a little more travel but I don't have a problem with it really. It's better than the wobbly piece of crap on my 2013 MBP.

It's fine if you can live with it. I can only compare it to typing on a completely hard surface, and my hands get worn out fast from that. I hate it; I wish I didn't, so I could use the otherwise OK hardware :)

Want to buy my MacBook Pro? ;)

You could consider one of the BSDs, if you want to go to the path of MacOS. I practically found them to be objectively pleasing once the machine is set up.

Hasn't getting a compatible machine set up been one of the most difficult parts? I've never tried BSD outside of a FreeBSD VM for tinkering (though I'd like to) – has general HW compatibility improved?

It seems to run best if you have all Intel for the CPU, networking and graphics, and your machine is at least a year or two old. I would assume Thinkpads are the best supported hardware.

It works great on Thinkpad X1 Carbon gen6.

I’ve never had a problem with it myself even on mid range kit (DL580 sized)

I've used FreeBSD since 2.0.5. It's what led me to MacOS X :)

> WSL's backing filesystem (NTFS) completely runs like ass. It's dire. 10x the latency on everything at least compared to Linux running natively on the same hardware. This is because all small writes end up in MFT contention and the entire of Unixy type things are based on lots of small writes.

That is hard to emphasize enough: doing C++ dev on WSL is complete garbage, lots of small writes on compile, SLLLOOOOOOWWWWWWWW...

Oh hell that must be horrible. It’s bad enough compiling small things.

yeah. I tried switching to using my gaming desktop with WSL. I used an X server to run a terminal emulator that isn't terrible so I could get the best of Linux and Windows without having to reboot.

Add in zsh auto completion and then it all went to shit with the latency. Some commands could take MINUTES to complete.

> Like when I tell it to download something, I don't expect to have to futz with IE browser settings (WTF?!?!)

You can use PowerShell Core, which is way better, cross-platform, and does not need IE settings fiddling. I use it mostly to script cross-platform tools.

Powershell core has telemetry. I won't entertain that risk.

This is covered in my original point.

Full disclaimer: I'm the PM for PowerShell Core.

I'm a huge privacy advocate, but telemetry doesn't have to be a dirty word. With PowerShell Core, we went through an RFC process to define our telemetry goals and implementation[1], we publish our data to a public dashboard[2], and all of the telemetry source code is out there in the open[3]. Our telemetry helps drive prioritization and decisions around platform/OS usage, and disabling it is as simple as setting an environment variable[4].

If there are any other ways we could make our telemetry implementation more palatable without seriously reducing its usefulness, we're absolutely open to suggestions.

[1]: https://github.com/PowerShell/PowerShell-RFC/issues/50

[2]: https://aka.ms/psgithubbi

[3]: https://github.com/PowerShell/PowerShell/blob/master/src/Mic...

[4]: https://docs.microsoft.com/en-us/powershell/scripting/whats-...
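For reference, the opt-out in recent PowerShell Core releases is a single environment variable (check the docs in [4] for the exact behaviour on your version):

```shell
# Put this in your shell profile so non-interactive pwsh sessions inherit it too
export POWERSHELL_TELEMETRY_OPTOUT=1
```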

Thanks for replying. If you remove it, it is palatable. Stop carrying the flag for something detrimental to the end user.

Firstly, anything that ships data out of a secure environment is a risk regardless of your intentions. You don’t always get the code right (this has already happened in .NET Core), the surface area of your software is multiplied (everything that sends telemetry talks to different hosts), and finally it adds a huge amount of noise to logs and IDS systems, which makes them less effective.

Stopping this requires a large effort and costs a lot of money. It adds a lot of noise to audits, requires administrative effort to silence, and means we may not be in compliance with various data protection directives.

For this we get no observable product improvements. Look at windows 10 for all the telemetry. It’s a pile of crap.

Telemetry is literally a guessing mechanism so you don’t have to listen to your users’ concerns. A cost-cutting exercise. You closed Connect and didn’t listen to your users then. You cut off partner support pretty heavily. Now you collect data and make a finger-in-the-air guess.

Just no. Fix MSFT not us.

You're right. Telemetry doesn't have to be a dirty word.

Your colleagues have ensured it will be for the next decade, though, I'm afraid.

"without seriously reducing its usefulness" -- can you describe more clearly what the usefulness is that you are attempting to retain, and why that is important to your customers?

> there's a local support network for the hardware which is decent and doesn't involve me shipping it off to some third party and losing it for 2 weeks.

Small heads up, every serious hardware issue I've had with MacBook Pros involved them being shipped to a third party for a couple weeks. Sure, they might try a thing or two in the Apple Stores, but then it's shipped off. YMMV

They replaced the screen on mine in store. I think it depends on what tier of shop you’re in as some of them don’t have full service ability.

In my case they were massive flagship stores that Chat support suggested I go to with an appointment. Pretty sure it had to do with the complexity of the repair both times.

I had to exclude my Ubuntu WSL setup from Windows Defender. Processes like ‘apt’ were really slow because of it.

> I respect those who’ve invested the time into maintaining & automating a full Linux environment they can use daily

I can't imagine what he means by that. All my Macs and Linux boxes (from a dozen RPi-like ARM boxes, to laptops, to a big Xeon server) and even the OpenIndiana and FreeBSD machines all work without much effort. A `yum update` here, a `pkg update` there, and nothing ever breaks. It's actually boring compared to 2004 or so, but we end up getting used to things actually working.

Seriously, I spend WAY more time fussing with Windows than Linux on my dual boot machines. Windows has also destroyed the usability of their settings screens to the point that I find configuring Linux far far easier than Windows.

Seriously, what a fucking mess they made with the new settings screens. I find myself clicking through 5 screens just to finally arrive at the old control panel for network interfaces to toggle my ethernet off.

not defending one way or the other, but remember North Carolina, Pennsylvania

ncpa.cpl is the old network view, still very useful if I'm not in the CLI :)

Thanks. Good thinking. That will almost certainly be my route in the future.

It’s so bad it now requires search to find, say, the mouse settings.

Meanwhile I can't even get sound to work properly in any Linux distro on my motherboard. Kubuntu 18.04 has a problem on both my laptop and desktop where a fresh install works fine until you run apt-get upgrade then kdm breaks and both machines boot to a black screen.

That's the caveat, you need to buy hardware that was designed with Linux in mind. Throwing it on hardware where the designers only thought about Windows will always be a bad experience.

I am glad to say it seems very hard to find hardware that doesn't work on Linux.

Have you tried Fedora? It has much better hardware support in my experience.

When I was experiencing audio issues on Debian, in 2003 or so, Fedora was, indeed, better.

Would you like to share some details? A lot of people that can help lurk around here and it could be a simple issue.

I've been using Linux since 2003, even though I'm a .NET engineer by trade. Most of the problems (except the audio problem on one particular motherboard) were relatively simple for me to fix. I imagine it would have been a nightmare if I didn't have 15 years of experience under my belt. My point was that this shouldn't have been a problem in the first place, and I've always found Windows to be much easier to get up and running.

I've always had to live in both worlds, and they each have their fair share of pros and cons.

Make sure that Kubuntu is giving you a stable version of KDE. My issue with Ubuntu in the past has been that it too often takes unstable snapshots of upstream software in order to accommodate its own self-imposed release schedule.

I think you have to remember that you are in the 1% of people that want MORE control over a computer. Most users out there want more simplicity and flashier UI. Microsoft cannot please everyone so they opt to please the average users and developers just need to adapt.

And yet, transferring files between my windows 10 computers feels like I need a network engineering degree. Half the time I need to fire up a Win7 VM and transfer from one to the other that way.

Mac isn't better in that department, though (when talking to Windows, at least).

I wish windows would adopt something similar to chrome://flags for configuration. Easy to search and all in one place.

You might consider trying the "GodMode" trick - still works in Windows 10 and has quick access to settings all over the place.

I just remembered it after posting about digging for the network dialog and sure enough, network settings are right there, easily accessible.

GodMode Trick:

Create a new folder and call it:

Under Windows 10, it will hide the name, but the Icon will change. Click on the icon for wonders.

The GUID is the important bit, you can replace 'GodMode' with anything, but it can't be blank.

Yeah, seems the real name for this is supposed to be "All Tasks" as you can see by putting


in the run dialog or address bar of an explorer window.

GodMode is just a nickname it got from some power users it seems.

that is excellent. Thanks for sharing

Isn't that what the registry was supposed to be before it turned into a monster?

Personally, though, I've always preferred Firefox's (and old Opera's) about:config instead. It's far more flexible than Chrome's flags.

Quite a few years back I got a book from a friend, summarizing new and noteworthy changes in Windows 95 (you could make a good drinking game from it if you took a sip each time you encountered the expression "32-bit", it was all the rage). The registry was indeed intended to serve as a central storage for configuration, replacing .ini files scattered all over the system.

It could be easily replaced by a c:\Windows\etc folder full of .ini files and it'd be a vast improvement.

I think Windows frowned upon lots of small files for performance and locking reasons - if some app decides to hold the file hostage, you'll end up needing to restart the system to be able to save a new version.

C:\Windows\win.ini was a previous problem point, it had driver related settings, OS preferences and some non-MS applications added their own keys to the lot as well.

Of course the win.ini (and system.ini) became a mess. They should be in a folder, with each application having its own file with its own settings. And reading the file and then CLOSING IT so others could do stuff to it.

"Introducing Microsoft Windows 95" by Brent Ethington


Suspect it highly depends on what you're doing. I used to run Linux for video editing with an Nvidia GPU. It was an extreme hassle any time a new driver came out and blew away some config. Moving to Mac cleared all those issues up. I know this pain well: when you have a deadline and are messing around with kernel configs and compiling. Many, many hours wasted.

Docker with ffmpeg has solved most of those issues now since you can basically have a custom toolchain totally isolated from your core operating system. So, it would likely work knowing what I know now (or with modern tools).

Yes. The Nvidia GPU stuff is usually more painful. I tended to not update my kernel+drivers as much on those boxes. In the end, I moved to Intel GPUs, which are punier, but, at least, work.

You don't go with an Intel GPU because you want a powerful GPU, you go with an Intel because you want a reliable GPU that will never give you any problems, ever.

With an AMD or something else, maybe it's more powerful, and works fine to most appearances. But the colors start to be a little funny when the uptime gets into the months. Or a kernel update changes whether it thinks the unused S-video port is live. Or other weird <1% issues.

> You don't go with an Intel GPU because you want a powerful GPU

That's true. For desktop work, just about any GPU is overkill. Even the kind of scientific visualization we used expensive SGI boxes for not so long ago can easily be done with the cheapest Chromebook. I'd love to be able to play with OpenCL (where a beefier GPU would be useful) and see how far my boxes can actually go, but I never get around to find a good excuse.

> that will never give you any problems, ever

Here's a bug I've found in Intel GPU drivers on Windows 10: https://stackoverflow.com/q/43399487/126995

AFAIK not fixed to this day.

AMD GPUs support is really good now too.

So I hear. It may be a good option in the future.

Unless you have an older card.

How old? I have a Southern Islands card (GCN 1.0, FirePro W2100) and it works very well with the amdgpu driver. GCN 1.0 is from 2011.

> It was an extreme hassle anytime a new driver came out and blew away some config.

Your distro's fault. Never had such a problem with Arch but I remember I had it with Ubuntu, albeit so long ago I can't say it's still a problem.

How long ago was that? Valve spurred along a lot of progress in that area.

Indeed. And on my NixOS machines I can even boot into any version of the OS that I haven't garbage collected.

Also, I have found it easier to make reproducible environments everywhere in Unices. Nowadays, I just check out a git repo, run home-manager switch and all my software and configuration is there.

Author here: let me clarify further.

I’m not just talking about the CLI experience, since WSL doesn’t bring you anything new there. It’s Ubuntu (by default), or whichever other distro you choose.

When it comes to desktop applications (both in breadth & quality), video/GPU drivers, and (if it wasn’t clear already!) my preferences for a desktop experience, Ubuntu 18.04, Debian with XFCE, Fedora + KDE, etc - just don’t do it for me.

And that’s OK, because what I prefer probably doesn’t do it for you.

I'll give you that the desktop environments aren't very good. Gnome 2 was about the peak, and it went a bit weird after that as they appeared to have an internal battle between making it look like MacOS and inventing something new while locked in a dungeon with a bong. XFCE isn't quite polished enough, even with a lot of configuration done.

That's why people don't even use a desktop environment.

My setup for the past year and a half has only been:

* xorg server
* i3 tiling wm
* rofi application launcher

that's all one really needs.

I'm kind of happy with Gnome 3. It's almost Mac-like in being opinionated, but steps out of the way most of the time. I'm using it with Fedora on a couple machines and it works great.

I gave up on having my desktop looking like a futuristic sci-fi movie UI and that brought me a lot of peace (and free time). ;-)

None of the desktops are any good at high DPI yet, and Gnome seems to have gone off the deep end in terms of killing useful features and settings (removing desktop and tray icons for example).

It’s absurd to remove these things, and have the community re-implement core desktop features as JavaScript plugins without even a stable API between major versions.

GNOME on Wayland is nice on my hidpi screens. Wayland even got mixed DPI working better than most of my co-workers' Windows or Mac machines.

It still doesn’t have fractional scaling like the Mac does (at least without Wayland). I have 28” 4K monitors, so I need to run them at about 1.25-1.5x.

To do that, I need to scale to 2x in the Gnome settings, then scale down using xrandr, and I have to manually set the X offset of the second monitor.
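For anyone curious, the trick described above looks roughly like this (output names are placeholders; check `xrandr -q` for yours, and the scale factor depends on the effective DPI you want):

```shell
# GNOME set to 2x, then each 4K output enlarged 4/3 with xrandr for an
# effective ~1.5x. DP-1/DP-2 are placeholder output names.
xrandr --output DP-1 --scale 1.3333x1.3333 --pos 0x0
# The second monitor's X offset must be placed by hand at the *scaled*
# width of the first (3840 * 4/3 = 5120), not its nominal 3840.
xrandr --output DP-2 --scale 1.3333x1.3333 --pos 5120x0
```

It works, but it has to be re-run on hotplug, which is exactly the kind of manual fiddling fractional scaling is supposed to remove.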

OTOH, Wayland is able to run software written in the 80's without it realizing. That's quite an accomplishment.

Indeed. I don't wish to be rude to the original poster but I've managed large networks of Windows and Linux/OpenBSD/FreeBSD/Solaris machines for over 20 years. Windows is a joke and it hasn't got any better.

I have actually also, unfortunately, spent a lot of time working with Windows Installer and Wix and that's a fine example if you ever get the experience to see eye gouging pain of the highest order.

Ya, about the last headache I had setting anything up in Linux was getting JACK2 and PulseAudio set up for recording. That's about the last time I remember struggling to get something working on Linux, and that was... I'm not even sure how long ago. I had to do it again a couple years ago on a new machine and there were none of the issues I had the first time.

I can't remember the last time I had a video card driver or other real hardware issue.

Come to think of it, I can't really remember the last time I've had to do any kind of system management other than package updates.

Oh...wait...I had to fuck around in grub a couple months ago when I lost power during a kernel update. That was about a half hour fix though....

I've lost power while Windows was in the middle of updating before and it nearly killed my Windows system. It was a few hours of booting into safe mode, figuring out what wasn't installed properly, fucking with the registry and Windows Update, and manually comparing and replacing .dll and system files before finally getting it back up and running properly.

That was my experience also. I took it a step farther and invested some time in using Ansible to configure my laptop which has been great too.

> I’ll keep this short: I still depend on Lightroom, writing tools (Notion, Evernote prior), a solid default desktop environment, first-party hardware support (be it a MacBook or Surface) & battery life, and most of all, my time. I respect those who’ve invested the time into maintaining & automating a full Linux environment they can use daily, but I just don’t have the time for that investment nor am I ready to make the trade-offs required for it. To each their own.

Given the circuitous series of hoops the author goes through to get their WSL/Windows environment into something of a working configuration, the above passage is kind of laughable.

I too depend on Lightroom, but I've had to break that hold and just this week have gone the other way to a full Linux desktop. Windows breaks so often it's not workable for me. And LR Classic has been nothing but a buggy, slow, bloated mess for me. Even little things grate: opening the app paints its window over all other UI elements, and you can't click to bring another app to the front before clicking/focusing into LR. Let alone the slow loading, painfully slow image viewing, and stupid stuff like hiding the mouse pointer after creating a new folder.

I intend to split RAW development and Digital Asset Management, which is what I should always have done, now that Lightroom is out of the way. RawTherapee and Darktable do a great job with RAW development (and I say that as a former pro photographer), easily good enough for most pro work I did/do. The options for DAM are varied, but even digiKam is pretty excellent for most things.

Having just spent a year with Windows (having run Linux previously) Linux is, amazingly, progressing towards 'just working' for me. Especially with Flatpaks/Snaps (though developers need to get a grip on understanding 'filesystem=home' vs 'filesystem=host' and how this affects usability vs sandboxing) as well as AppImage. When they become available on Debian natively it'll be kind of awesome to run a stable desktop with the latest selection of certain apps you might need.

It's true Nvidia support is bad. I did switch from Nvidia to AMD to make the move. Is that unreasonable to expect the user to do? Possibly. But doing that I get a hassle free 4k desktop, excellent stability (so far) and all the apps I need (so far).

Being back in a terminal 'proper' again on the desktop is just fantastic. Not having things update on their own without my say is even more fantastic. And the fact it's not eating 12GB of RAM running LR and a browser is the icing on the cake.

I still use the final version of the perpetually licensed Lightroom, and so far haven't found anything to replace it. Sometimes I use it in a Windows VM on FreeBSD sometimes I use it on a Mac. It largely just works, even with a catalog that I desperately need to pare down.

Meanwhile Darktable is a pathological mess. I really want to like it. They even offer a Mac binary on their download page. Unfortunately, it turns out that the only first class platform is Linux. None of the core devs actually use a Mac so bug reports languish and users are told to simply compile DT and fix the bugs themselves. I'm working up to compiling DT, but the instructions are all built around MacPorts which I ran away from years ago. Yes, I'm aware that DT is an open source project maintained by volunteers but sometimes I want higher level packages to just work without a fight.

If I want to import metadata from Lightroom I'm expected to export individual sidecar files and hack up a script to do the import (hint: this doesn't scale). The real deal breaker for me was that the latest version available on the DT site simply crashes for me while trying to import photos. But not every photo. I'm not particularly inclined to file a bug report because most Mac issues get put in the circular file.

Once I got past that I found the interface to be… interesting. Despite the superficial similarities to Lightroom, DT is wildly different in practice and not all that intuitive to me (e.g. applying presets from the lighttable vs darkroom modules — also seriously those names: darktable, lighttable, darkroom are all far too similar regardless of whether or not they're based on photography terms).

It's been a few years since I've tried RawTherapee but obviously that didn't stick either.

Sometime in early 2019 check out Skylum Luminar. They’re coming out with a Lightroom analogue that’s not subscription based.

> Sometime in early 2019 check out Skylum Luminar. They’re coming out with a Lightroom analogue that’s not subscription based.

Thanks, I hadn't seen that before. Two things put me off though:

1.) Lightroom really has motivated me to find an open source replacement. I don't think I want to get locked into another proprietary solution.

2.) The landing page for Luminar 3 that I found was so full of marketing hyperbole that I choked. There was a ton of fluff IDGAF about (e.g. export to 500px, "foliage enhancer" filter, Apple photos extensions, soft glow, artificial intelligence filter) and some that just felt like easily misinterpreted stuff (e.g. details enhancer filter, workspaces, polarizing filter).

If there's a trial I may check it out, but I'd want to know more about the tech details. Adobe got a few things right that I doubt other proprietary solutions will. Making LR so extensible with Lua is huge (is the core written in Lua?), and storing its catalog in a relatively easy to parse SQLite db makes it easy enough to interoperate. As much as I kvetch about migrating from LR to DT, I think writing a module to import the LR metadata seems like an ideal first attempt at hacking on the DT project.
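The SQLite point is easy to demonstrate: an .lrcat file opens with any SQLite client. A sketch of pulling out rated images (the table and column names here — `Adobe_images`, `AgLibraryFile`, `rootFile` — are assumptions from poking at a catalog's schema with `sqlite3 catalog.lrcat .schema`; verify against your LR version before relying on them):

```python
import sqlite3

def list_rated_images(catalog_path, min_rating=1):
    """Return (filename, rating) pairs from a Lightroom catalog.

    Assumes the Adobe_images / AgLibraryFile schema observed in
    LR catalogs; this is not a documented Adobe API.
    """
    con = sqlite3.connect(catalog_path)
    try:
        rows = con.execute(
            """
            SELECT f.baseName, f.extension, i.rating
            FROM Adobe_images AS i
            JOIN AgLibraryFile AS f ON f.id_local = i.rootFile
            WHERE i.rating >= ?
            ORDER BY i.rating DESC
            """,
            (min_rating,),
        ).fetchall()
    finally:
        con.close()
    return [(f"{name}.{ext}", rating) for name, ext, rating in rows]
```

A metadata importer for DT would start exactly like this, just reading more tables (keywords, develop history) instead of exporting per-image sidecars.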

I really don't understand most of his arguments against switching to Linux.

> I’ll keep this short: I still depend on Lightroom, writing tools (Notion, Evernote prior), a solid default desktop environment, first-party hardware support (be it a MacBook or Surface) & battery life, and most of all, my time. I respect those who’ve invested the time into maintaining & automating a full Linux environment they can use daily, but I just don’t have the time for that investment nor am I ready to make the trade-offs required for it. To each their own.

- Software support, no problem, I can understand. I still have a Windows partition for audio production.

- The "solid default desktop environment" is pretty much crap - both Gnome and KDE Plasma are far more solid DEs than Windows'.

- First-party support: I can't figure out what he means by that...

- As for battery life, I have 10+ hours on my ZenBook running Arch. The battery life is actually better on Linux than on Windows on this laptop.

- Windows has been a much higher time-sink than Linux or macOS for me. Sure, an Arch or Gentoo desktop would be higher maintenance than your average distro, but come on, Fedora and Ubuntu are _super_ easy to install these days, and the overwhelming majority of the time just work out of the box. It's 0 maintenance. My maintenance/automation scripts were like 80% ported straight from my old macOS scripts.

I'm forced to use Windows on my work computer and it's IMHO terrible. Its virtual desktop feature is useless (really, switching desktops changes _all_ screens at once?), the shell is crap, WSL basically feels the same as running a VM and SSHing in, and the DE is horrible.

The (One of the) real reason(s) anybody even considers Windows is Visual Studio proper. Nothing comes close to it in proper IDE experience. It has (arguably) the best debugging facilities in common existence.

Visual Studio Code with the right extensions comes pretty close :)

> - The "solid default desktop environment" is pretty much crap - both Gnome and KDE Plasma are far more solid DEs than Windows'.

You are not telling us anything better than the author, one way or another.

Never pretended I did either. I also don't really have to, the argument stays bad anyway.

I'm somewhat on the other side of things where I've been using Windows exclusively on my work laptop for many years (I have a Linux box at work) and have been seriously considering switching to a Macbook Pro for over a year now, but I'm continuously frustrated by things on the Apple/Macbook side of things (my wife has a 2017 MBP).

Apart from the annoyance of the keyboard on the MBP and my personal preference for physical Fn keys over a touchbar, it is simply amazing to me how bad the Macbook Pro is with playing with simple external peripherals like docks and external monitors. I've tried using a USB3-based dock, as well as a Thunderbolt dock, to connect 2 external displays to a Macbook Pro and it's just been horrible. The former didn't even work, the latter only supported one monitor for whatever reason. I then tried using 2 simple USB-C to HDMI adapters to connect to the two monitors (hello dongle land!) and while both monitors are seen, OSX refuses to support 1920x1200 on my new Dell 2415 monitor and ends up stretching a lower resolution across the monitor, making everything look like garbage.

Occasionally, on reboots, it will work fine again, but it needs to be your lucky day. Why Apple won't make their own fucking dock so that I can pay them some extra Apple tax just for some sanity and peace of mind is beyond me. Meanwhile, my 2 year old Dell XPS (which I don't really love at all) has never had an issue with external monitors via any of these docks ever.

So here I am using WSL on Windows 10, and you know what, it keeps getting better and better, and I might just end up sticking around with Windows.

Hardware really is where Windows shines. I connect a new laptop to two external monitors with different resolution and DPI and everything just works, including dragging windows between them and rescaling for most apps. For some reason it feels like it even knows which monitor is placed where, which would be more than creepy (I can't imagine it does, so it must be confirmation bias).

It honestly feels like choosing a laptop today is trying to make the least bad choice.

MacBook? Piss-poor keyboards and external device support.

XPS? Poor support.

ThinkPad? Windows 10.

<Laptop> with Linux? X software is not available.

I use Windows + WSL on a daily basis for web development. This is what I did about the Terminal Problem: I installed XMing (an X server for Windows). Then I configured WSL to use the X server installed on the host. This way I can use any Linux terminal emulator (actually any Linux GUI application) on my Windows machine. This works surprisingly well for how much of a hack this is. I've been using this setup with gnome-terminal on a daily basis for over a year now.
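The glue for this setup is a single line in WSL's shell config, pointing GUI apps at the X server on the Windows side (display `:0` is the XMing default; adjust if you launched it with a different display number):

```shell
# In WSL's ~/.bashrc: render Linux GUI apps through the X server
# running on the Windows host (XMing/VcXsrv listen on localhost:0
# by default; WSL shares the host's loopback interface).
export DISPLAY=localhost:0.0

# After that, any X client just works, e.g.:
#   gnome-terminal &
```

The X server also needs to accept connections from localhost (XMing's default launch settings do).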

It'd still be great if there was a solid windows-native alternative.

Cygwin has been working for me much better than WSL. Besides being a complete Linux-like environment, it is rather seamlessly integrated with Windows - which allows to take the full advantage of the software available on Linux when working with Windows files.

I'm at the point of considering a switch in the other direction.

I love how seamlessly WSL works and gives you a real Linux environment. Writing on my screen (Surface Pro) is great too.

However, I'm mad that I'm getting ads (preinstalled Candy Crush) on a Windows Pro machine. Also, OS X is much more privacy-oriented overall, and it's the de facto developer machine.

The only reason I'm hesitating is that although OS X is Unix, I'm scared I may miss a full Linux installation, which you have with WSL.

I've been using macOS for the most part of the last decade. I also use Windows, but only for gaming.

For me the BIG problem with macOS is actually the hardware. When Apple gets it right it's awesome, but this hasn't been the case for the last 5 or so years (with some exceptions).

The fact that there is no competition for macOS hardware ends up in very surreal situations such as overpriced broken products or design decisions that make absolutely no sense.

If I had to buy a new laptop now I'm not sure I'd buy an Apple product.

Same issue here. I'm a Linux user but recently got a new laptop and thought I should at least log in once to Windows before installing Linux.

I was welcomed by Candy Crush tiles and other questionable stuff. It is totally incomprehensible to me how a machine that costs almost $2k has Candy Crush as the first tile the user ever sees. I don't think Microsoft cares anymore.

I think it's just lack of communication/vision shared between departments. Everybody is trying to maximize their respective outcome without caring about the overall product. At least that's what it seems like to me.

I tend to SSH into an AWS box most of the time or use a Linux installation inside a vagrant VM when I have to use something Linux. Both of these are much smoother than on WSL/Windows.

However most of the built in packages and stuff that comes from brew is exactly the same as on Linux. Perhaps 99.999% of stuff is totally portable.

I've been using WSL as my primary dev environment for about a year now and it's really good. I use it for full-time web development and creating video courses (which is why I'm not running native Linux).

The WSL environment is very fast and reliable to the point where for the last year I've been 100% at peace with my dev environment. I would say in 20+ years of computing, this has been my favorite overall setup, because you get a great dev environment, you can assemble your own computer part by part, and you have excellent gaming support, all without having to dual boot or run a Linux VM (which I did previously for ~5 years). I finally feel like I have 1 machine that does everything very well.

A typical dev day for me involves running Dockerized web apps (mostly Flask, Rails and Phoenix apps with Webpack, etc.), tons of terminal usage, Ansible, VMs, etc.. Everything you would expect to run as a developer.

If anyone is looking for a more step by step guide on how to get everything running, here's 2 posts I whipped up on getting everything you need installed on Windows[0] as well as getting Docker working flawlessly with WSL[1].

[0]: https://nickjanetakis.com/blog/using-wsl-and-mobaxterm-to-cr...

[1]: https://nickjanetakis.com/blog/setting-up-docker-for-windows...

Used Windows for years, from 3.11 to 7. Switched to Mac couple of years ago. I don't think I would consider using Windows ever again, unless they go back to 7 and create a new reasonable iteration out of it. Everything after 7 is just not worth the frustration.

I keep being tempted to go back to Macs - especially the new Mac Mini - now that most of my work is cross platform (.Net Core/Python/NodeJS) but I don't see any advantage just for that reason. I spend little time mucking with the OS. Most of my time is in VS Code/Visual Studio, Chrome and Notepad++. I don't see that changing. The rare occasion that I need Linux I use WSL locally.

I always kinda chuckle at articles like this - for a creative individual such as myself that's very locked into Logic Pro X and Final Cut X, and a full-time iOS developer, ditching a Mac would mean losing ~15 years of backwards compatibility for my media, and being stuck with far less-than-ideal virtualization solutions for Xcode.

Even with their pitiful current hardware offerings that almost seem like a direct insult to long-term dedicated fans, I'm still locked into their excellent software.

Author here: of course it’s not a fit for you!

I qualified my “yes” with the primary uses for my computer, which doesn’t include any macOS specific software (Sketch was the most recent).

If I was using FCPX, Logic or doing iOS development I wouldn’t have opened my editor to write the article ;)

You chuckle at articles because you are locked in? Sure that chuckle isn't at your own expense?

I can see myself switching from FCPX/Motion to Premiere/After Effects quite easily (Premiere is really great nowadays), but I just cannot find a good replacement for Logic Pro.

It has its quirks and bugs, and its performance is terrible on my 2013 MBP, but I just cannot find any other DAW that makes me feel as productive as Logic. Not just productive, it brought fun back to music production for me. Using FL Studio felt like a chore, even when I was quite experienced with it.

I've been using a Mac for about 4 years, because it's the standard at my company. While the hardware is decent (2015 Pro), I don't understand the fanboyism around the OS. It's not keyboard-friendly (e.g. you can't quickly open menus without the mouse), it forces maximized apps onto their own desktop, but multiple-desktop support sucks. Multi-monitor support is mediocre at best.

I am quite happy with WSL - I might not be the target audience, as my needs are almost entirely statistical, but WSL beats a VM for RStudio Server and awk. You'll hear nothing but praise from me.

This sounds like a huge pain. Makes me wonder if Google's approach of just sticking the whole thing in a VM is the better way to go. They still have a LOT of issues to work through (graphics acceleration, USB support, etc), but if they can really nail that, it could be interesting in a year or two.

Until then... I think I'll just stick with Mac.

Can't help but feel a bit of vindication/validation after reading this.

I've commented on the declining quality of Apple hardware and software, and switching to Windows 10 + WSL, several times on other posts.

Like OP says, this isn't a perfect experience. The post felt like a very balanced treatment of the issues.

There are still issues, for sure... but it's definitely passable, and improving. (At least for me.)

As far as the slow I/O performance, I believe the heart of this is Windows Defender live scanning the FS where each WSL environment resides. If you exclude those in Defender, you see a significant increase in I/O performance.
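Setting that exclusion is a one-liner from an elevated PowerShell (the package folder name below is an example for the Store-installed Ubuntu distro; check what actually sits under `%LOCALAPPDATA%\Packages` on your machine, since it varies by distro and WSL version):

```powershell
# Exclude the WSL root filesystem from Defender's real-time scanning.
# Folder name is an example -- adjust to your installed distro.
Add-MpPreference -ExclusionPath `
  "$env:LOCALAPPDATA\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState"
```

Worth noting that excluding the rootfs trades some malware-scanning coverage for the I/O speedup, so it's a judgment call on a work machine.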

Side note: I wonder if Apple is observing this trend at all, and/or if they even care? They give lip service to the idea that they care about the Mac still... but the experience is still lackluster.

Have they fixed the git corruption issue yet? I switched from WSL to MacOS specifically because, if the computer blue screens while I'm working in WSL, something in the .git directory corrupts itself and I have to check out the repo again.

@terminals - consider ConEmu (wasn't listed) - it's fast (native) with unicode support and lots of configurability.


I switched from ConEmu to wsltty. ConEmu is a real resource hog - I've seen it use 3-5% CPU all the time, even when idling. It also handles rendering differently (can't remember the technical details), which is why it has some great features but is also heavier. Maybe this will change with the new console API coming in Windows 10.

By comparison, wsltty is much simpler but works like a charm, and tmux with mouse support is so liberating to use.

Edit - Found the resources on ConEmu's site that mention the technical limitations



Cmder is a wrapper around ConEmu (mostly some settings) that launches ConEmu itself. It wasn't handling the glyphs my shell uses for Git repo status, return status, etc. as well as the other options I tried.

If it worked well and line output didn’t break under tmux I would absolutely use it [until Alacritty improves].

He listed cmder, that is a ConEmu fork with some added goodies.

WSL is great on Windows, but the situation with Docker is still pretty bad.

Docker on Windows needs to not require Hyper-V and instead run as a kernel process the way WSL does. Also, volume mounting straight up doesn't work on Windows; if they can resolve those issues, suddenly a lot of things open up.

Not my experience with Docker on Windows - volume mounting works (and has worked) without issues. I remember at some point being prompted to share my drive so that Docker can access it - perhaps you need to flip this switch in the Docker settings?

I think volume binding only works with Windows containers?

I seem to recall trying it with Linux containers and immediately hitting permissions issues due to incompatibilities between Windows and Linux.

I am at least able to bind individual files using Docker Compose's 'secrets' and 'configs'.
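For reference, the per-file workaround mentioned above looks roughly like this in a compose file (service, secret, and file names are made up for illustration; `secrets` with a `file:` source needs compose file format 3.1+):

```yaml
# docker-compose.yml -- bind a single file via `secrets` rather than a
# directory volume, which sidesteps the Windows<->Linux bind-mount
# permission mismatches described above.
version: "3.7"

services:
  web:
    image: nginx:alpine
    secrets:
      - source: app_key
        target: app_key   # appears as /run/secrets/app_key

secrets:
  app_key:
    file: ./app_key.txt
```

It only covers read-only single files, so it's a partial fix, but for injecting configs and credentials it avoids the directory-mount problems entirely.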

Was on Windows from 1986-2010, Ubuntu 2010 - today, but will be switching to MacOS on a MacMini soon, and the reason: Apple's old filesystem HFS was slooow when grepping/compiling etc. but the new APFS is comparable to EXT4 in speed, makes all the difference.

My experience on WSL: very slow shells. Serious web dev work is not possible. So much catching up to do with the Mac. Maybe MS should build that shell natively into Windows with all those command programs.

The last time I tried Chocolatey there were lots of outdated packages, and those applications kept pestering me to download the latest version through their updaters or manually. So what's the point? Besides, I hate PowerShell's Long-Ass-Weird-Awkward-Cmdlets -Force.

> I’m using Terminus for now

Completely agree. If you're on Windows and still using ConEmu give Terminus a try.

Yeah, thanks to OP for that. I was using Hyper before and the performance was abysmal. Terminus has the same great rendering and is much more performant. 100 MB for one terminal with some work is still not amazing, but it sure as hell beats 300 MB while just idling with Hyper and 5-second load times for a window. I think Electron apps are an order of magnitude slower in general, but the Hyper team really seem to have made some poor choices on top of that.

My least favorite thing about Hyper was whenever you split a window horizontally you would lose your cursor because it would be hidden "behind" the split pane above it, and then you would have to hit CTRL+C like 20 times or force some output to the screen.

I opened an issue for this 2 months ago[0] and the team hasn't responded yet.

I'm about ready to call it quits with Hyper (ConEmu has its own set of even worse bugs).

The problem with Terminus is it currently has no support for splitting windows, so if you want that behavior you have to use tmux. Although now with Windows 10 1803+ being out, tmux sessions persist after closing your terminal, so maybe that's the best way to handle splitting windows.

[0]: https://github.com/zeit/hyper/issues/3258
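Filling the splitting gap with tmux is just a couple of bindings; a minimal `~/.tmux.conf` sketch (the `|`/`-` keys are a common convention, not defaults):

```
# ~/.tmux.conf -- pane splits and mouse support, standing in for the
# terminal emulator's missing split-window feature.
set -g mouse on
bind | split-window -h   # side-by-side split (prefix then |)
bind - split-window -v   # stacked split (prefix then -)
```

With sessions now surviving the terminal window, `tmux attach` on reopen gets all the panes back too.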
