It seems almost inevitable that Windows will eventually become a Linux distro. Microsoft could replace incredibly resilient cruft like NTFS with EXT4 or ZFS and the registry with human readable configuration files. On the other hand, they could replace Bash with PowerShell and hopefully pressure various orgs to adopt some sensible subset of configuration formats.
I think it more likely that Microsoft continues the ABI-level hosting of Linux, as it currently does with WSL. It doesn't make sense for them to use a third-party kernel when the one they have works well enough, supports a long legacy of hardware and software, and is something they can maintain control over.
If Microsoft feels competition from Linux, it makes more sense for them to allow hosting of Linux as a user-space application on their platform, rather than have a user migrate away.
I'm not so sure. The Windows driver model is vastly different from Linux's. Linux drivers have to be compiled for every kernel release, while Windows drivers that are more than a decade old can still be installed and work correctly.
And this story that Windows drivers work forever is a rose-tinted view of the world. Many drivers (my ltc scanner, for one) stopped working going from Win9x to XP. I saw a USB projector driver break when a system was updated from Win 7 to 8; the same device worked as a framebuffer device out of the box on my RockPi 4. And Windows drivers are platform-dependent: the same driver will not work across 32/64-bit systems or ARM/x86. Linux is entirely another world.
I recently installed XP drivers on a Windows 10 system. Worked flawlessly, even though officially this wasn't supported. 9x to XP/Vista is the last breaking change we've seen and that was over 15 years ago. It's quite impressive actually.
I've got an Asus CP120 USB mini projector. When you plug it into a Windows machine it presents itself as a mass-storage device with the Windows 7 (or XP, I don't remember) driver on it. Last time I tested, it didn't work out of the box on a Windows 10 machine; it would probably work if the drivers were downloaded and installed, but I didn't bother.
On my RockPi 4, I simply plugged it in and instantly had a terminal on my wall. It is an ARM machine. That's the HUGE advantage of having a driver in the kernel: it will work on every architecture the code compiles for. It was the same experience with my USB WiFi dongle, my Wacom tablet, an Epson multifunction printer (I had to apt-get install escpr, but that's a single command), Samsung printers... Some of these drivers are in user space rather than the kernel, but they worked flawlessly on more than a single arch. That is impressive.
But I do admit I envy the number of drivers for desktop gadgets that are compatible with Windows. Of course, binary drivers are only interesting if you use a single arch.
Mainline drivers are maintained and updated; I think the statement you are replying to is shorthand for "Windows binary drivers work for years across kernel versions and minor distribution versions".
The windows driver model is not semantically much different. There is greater cohesion at a higher level, leading to it being easier to create a shim layer for, if anything.
Any example of a decade-old driver that will install? Not even the .cat signature algorithm will be the same, and Windows enforces signed drivers these days.
I understand a lot of the benefits of ZFS and PowerShell. But what's wrong with NTFS? Can you talk a little about the relative strengths of NTFS vs EXT4?
NTFS is a more sophisticated filesystem, but it has worse performance under certain workloads, specifically the workload underneath `yarn install`.
That'll be due to the file system filter driver(s), i.e. antivirus. This should impact most/all file systems that the filter driver supports.
Disable/remove the file system filter drivers and the performance issues largely disappear.
On current Pro versions you can't permanently disable the live protection, but you can add a permanent exclusion for your home folder or the whole drive, which should greatly speed up operations like that. It's still a lot faster to do them inside a Docker container/volume on the same Windows host, though.
Also Docker and installing applications that have lots of small files.
In general NTFS doesn’t do well with lots of small files because opening files is expensive compared to ext4 (dramatic oversimplification). This shows up in random places where very little actual file reading/writing is happening for each file, like yarn, docker, installing video games, etc.
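The per-file overhead is easy to see with a toy benchmark. This is only a sketch: the file count and contents are arbitrary, and the absolute numbers are meaningless except when compared across filesystems on the same machine (with and without filter drivers, NTFS vs ext4, etc.).

```python
import os
import tempfile
import time

def bench_small_files(n=2000):
    """Create and then re-open n tiny files, timing each phase.
    Illustrative only: results depend on the filesystem, the page
    cache, and (on Windows) any filter drivers in the I/O stack."""
    with tempfile.TemporaryDirectory() as root:
        t0 = time.perf_counter()
        for i in range(n):
            with open(os.path.join(root, f"f{i}.txt"), "w") as f:
                f.write("x")  # almost no data: the cost is per-file metadata
        create_s = time.perf_counter() - t0

        t0 = time.perf_counter()
        for i in range(n):
            with open(os.path.join(root, f"f{i}.txt")) as f:
                f.read()
        reopen_s = time.perf_counter() - t0
    return create_s, reopen_s

if __name__ == "__main__":
    c, r = bench_small_files()
    print(f"create: {c:.3f}s  reopen: {r:.3f}s")
```

Workloads like `yarn install` are essentially this loop run hundreds of thousands of times, which is why the per-open cost dominates.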
NTFS also has a 260-character path limit [0], which, as the sibling comment says, can really jam up node_modules. The current HEAD of the intellij-community repo also has a giant filename in a subdirectory that trips over it.
I'm aware of their claim that Group Policy can remove the limit, but I've never used GP enough to know its sharp edges, and I'd rather not find out on my gaming computer.
NTFS itself has never had a path limit other than 32K characters. MAX_PATH capped paths at 260 characters for the Win32 API (among others). It has always been possible to bypass the 260-character limit, though obviously most applications won't cope with a file whose path exceeds the system-defined MAX_PATH value.
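For what it's worth, the "limit is in the API, not the filesystem" point is easy to see from the other side: on ext4 nothing special is needed to go past 260 characters, as long as each individual component stays short. A quick sketch (the directory names are made up; on Windows the same total length would need the `\\?\` prefix or the long-path opt-in):

```python
import os
import tempfile

def make_long_path(min_len=270):
    """Build a nested path whose total length exceeds min_len characters
    while every component stays short, then write a file at the bottom.
    Returns (file_path, total_length)."""
    root = tempfile.mkdtemp()
    path = root
    while len(path) < min_len:
        # short component, long total -- the node_modules pattern
        path = os.path.join(path, "node_modules")
        os.mkdir(path)
    target = os.path.join(path, "deep.txt")
    with open(target, "w") as f:
        f.write("hello")
    return target, len(target)

if __name__ == "__main__":
    p, n = make_long_path()
    print(n, "chars; readable:", open(p).read() == "hello")
```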
Office does its own thing and doesn't leverage MAX_PATH. No idea why.
I love the idea of the Windows registry though; I hope it stays. With Linux, I have to poke around the file system or google "package X config location", and many programs have their own idiosyncratic rules about which config paths take precedence over others. The Windows registry as implemented is not the best, but the idea of something like a central SQLite DB for config is great.
Certainly not for me. Registry was one of the worst parts of the Windows experience for me; but I understand it can be subjective.
With separate files, the ability to download and replace them individually is a big win. Also, I can use terminal tools on them which is great for automation.
I've never understood the criticism of the registry. The only times I've ever had an issue with it were when there were RAM or disk problems with the computer. On Windows there's nothing at all stopping developers of standard Windows applications from implementing their own config files and many do.
1. Most of the programs put things all over the place. If you install software and use it for a while, you may find lots of keys in lots of places, all subtly affecting something.
It's technically the fault of the program, not the registry. But culture and ecosystems matter, and by and large, config in files is usually much more concentrated.
2. Related to (1), it is very hard to just move settings from one machine to another. How do you export your settings related to program X in order to use it on another machine? I keep my Linux config files in a git repository - I can easily track history and clone it to new machines. What’s the registry equivalent?
3. It is incredibly slow. When I still used Windows, if I needed to do some registry editing, I would dump the keys to a text file, edit that, and import the edited keys back. That took about 1/10 of the time the same job took in RegEdit.
For your first point, that's often the result of using third party libraries via COM. That COM library is its own thing and it wouldn't make a lot of sense to put all of its settings under your app. Plus, some parts of the registry (thinking of CLSID) are basically directories where the system looks when a program says "give me an instance of ThirdParty.Grid".
I agree with the above, painfully. I don't think of myself as a power user, but I have at times had to clean leftovers out of the registry that uninstallers missed, because otherwise the updated application refused to install. It's a complete mess: digging through a rubbish tip to find a broken matchbox so you can destroy it without setting the whole thing on fire.
Meanwhile OS X or Linux don't suffer from this and working with applications is a lot more "streamlined". As I said, this is just a slightly above average skilled end-user perspective.
Registry never made much sense on the local system: just use the filesystem. I assume there was once a plan to use it in a networked fashion. But Active Directory replaced it for that purpose.
The filesystem isn't inherently bad, but the lack of a standard config format, location, and APIs is quite terrible. There's no reason a thousand programs should have a thousand different ways to be configured, it's just a legacy of poor design with no standardization.
It's not poor design because it wasn't really designed in the first place...because it didn't really need to be designed.
There is a standard and it's minimal:
- config files are text files
- # indicates a comment
Why standardize further when the types of programs that require config files run an extremely wide gamut? Types of programs can be as diverse as web servers, graphic editors, kernel modules, networking programs, etc. Each are vastly different. I don't need to wait for, worry about, or try to install an updated registry processor that knows about new object types. Better to build it into the program or rely on a library. Why change the entire system of configuration storage just for one new type of program?
And text files are awesome for another reason: With the ability to comment config files you can understand any format - as well as include notes and conveniently have documentation where you need it.
Also: You can't use git or other versioning to backup your registry keys and rollback to previous versions.
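That two-rule "standard" really is minimal enough to parse in a few lines. This is just a sketch of the convention (`key = value` lines, `#` comments), not any particular program's format, and the example keys are invented:

```python
def parse_config(text):
    """Parse the minimal Unix config convention: one `key = value` per
    line, blank lines ignored, `#` starts a comment. Real formats
    (ini, TOML, ...) layer sections, types and quoting on top of this."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

example = """
# server tuning -- see docs before changing
port = 8080
workers = 4   # one per core
"""
print(parse_config(example))  # {'port': '8080', 'workers': '4'}
```

The comment lines surviving into the parsed-out file is exactly the documentation-where-you-need-it property described above.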
Editing the registry never leaves corrupt, unreadable intermediate states behind, unlike writing to config files. The workaround there is an "atomic save", where the app writes to a different filename and renames it over the original. That ensures you'll never get torn reads (I think the Windows registry doesn't support transactional/atomic updates of multiple values at once, for what it's worth), but the rename trick loses things like file permissions and symlinks.
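The atomic-save pattern looks roughly like this in Python. It's a sketch with minimal error handling; the permissions caveat is visible in the code, since the rename replaces the inode rather than updating the original file in place:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to path so readers see either the old contents or the
    new contents, never a torn intermediate state. Caveat: the rename
    replaces the inode, so POSIX permissions/ownership set on the
    original file are not carried over by this sketch."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # same filesystem as target
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure data is on disk before the rename
        os.replace(tmp, path)     # atomic rename-over on POSIX and NTFS
    except BaseException:
        os.unlink(tmp)
        raise
```

A reader that opens the file concurrently gets one complete version or the other, which is the whole point of the dance.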
Have you never experienced an unbootable system because of a corrupted registry? It's unfixable. And I'm not saying it happens when manually editing the registry. It might happen when Windows crashes at the wrong time and the system is in the middle of modifying it. It's quite failure-prone and I'm surprised it hasn't been improved much over the past decades.
Somehow I've never gotten a corrupted registry resulting in a broken account or system. Perhaps I'm lucky or too innocent in this regard. Though it does sound concerning if the Windows registry doesn't perform write-ahead logging like SQLite or a client-server database to enable crash recovery.
I should research what key-value database libraries I can use as a cross-platform registry-like storage format that's more resilient to app or system crashes.
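As a sketch of what that could look like, here's a registry-style key/value store on top of Python's stdlib SQLite bindings, with WAL journaling so a crash mid-write rolls back to the last committed state instead of corrupting the file. The schema, class name, and key naming are my own invention, not any existing API:

```python
import sqlite3

class KVStore:
    """A toy registry-like key/value store backed by SQLite.
    WAL mode + transactions give crash recovery for free."""

    def __init__(self, path):
        self.db = sqlite3.connect(path)
        self.db.execute("PRAGMA journal_mode=WAL")  # crash-safe journaling
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")

    def set(self, key, value):
        with self.db:  # transaction: the update is all-or-nothing
            self.db.execute(
                "INSERT INTO kv VALUES (?, ?) "
                "ON CONFLICT(key) DO UPDATE SET value=excluded.value",
                (key, value))

    def get(self, key, default=None):
        row = self.db.execute(
            "SELECT value FROM kv WHERE key=?", (key,)).fetchone()
        return row[0] if row else default

store = KVStore(":memory:")  # use a file path for a persistent store
store.set("Software/Example/Theme", "dark")
print(store.get("Software/Example/Theme"))  # dark
```

Multi-value atomic updates would just be several `execute` calls inside one `with self.db:` block, which is the transactional behavior the registry reportedly lacks.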
You can use terminal tools on the registry fwiw. Powershell provides a native PSProvider that exposes the registry like a file system so you can cd into it and make changes.
Personally I like the Mac plist system the best. It just works, and it works so well that you never hear about it because no one ever complains about it!
There are at least three different formats for plist files, and the tools are differently broken depending on the format. There are some really odd cases like terminal colour settings which are stored in the plist as a pickled object, so you have to use the gui to adjust a colour rather than (say) using css syntax.
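For reference, Python's stdlib plistlib round-trips two of those formats (XML and binary; the legacy OpenStep text format predates both), which makes the format differences easy to poke at. The settings dict here is invented for illustration:

```python
import plistlib

# The same settings serialized in two plist formats.
settings = {"FontSize": 13, "UseBold": False}

xml_bytes = plistlib.dumps(settings, fmt=plistlib.FMT_XML)
bin_bytes = plistlib.dumps(settings, fmt=plistlib.FMT_BINARY)

# Both deserialize back to the same dict; only the on-disk shape differs.
assert plistlib.loads(xml_bytes) == settings
assert plistlib.loads(bin_bytes) == settings

print(xml_bytes.decode().splitlines()[0])  # the familiar XML declaration
```

Pickled/archived blobs inside a plist (like the colour settings mentioned above) come back as opaque bytes here too, which is exactly why the GUI ends up being the only practical editor for them.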
No settings format can force a programmer to make good decisions. A Windows developer could put a .Net binary serialized object into a blob registry key.
I know everyone here really likes Linux, but bash is kind of a joke compared to PowerShell when it comes to usability. Everything from the way piping works to the function names makes PowerShell easier for beginners to use and remember. The best way to figure out how to do something in bash is to google it and hope someone answered the question on SO. With PowerShell I just go to the MS docs and scroll through the functions until I see the one whose name is self-descriptive, and after a two-minute read of the docs I'm usually set.
That’s not to say powershell is perfect but on the whole it’s a lot easier to find the thing you’re looking for, and you don’t usually have to wade through SO snark to do it.
Agree. Make Windows more Linux like where possible, not the other way around. Understand that most of us will never use Windows Server products but we will use Windows 10 if it works well enough as a development platform. Do away with the Teen Titans design aesthetic - carefully study what macOS is doing. Keep the fast, keep the low power consumption and keep the solidness however.
I've looked at the busybox source, and it appears to me that bash compatibility that is added to the Almquist shell is a very thin veneer, not much more than defining [[ as an alias for [ (test).
If you're looking for arrays, you will be sorely disappointed AFAIK.
Hopefully not using the Linux kernel though. It changes too much, stuff gets dropped too often, it's buggy, it's insecure as hell, and there's no test framework. It releases features quickly, sure. But I kind of hate using it, both on my desktop and in production. I'd rather have BSD, but then you don't get the features.