Administering a server via a GUI always felt backwards somehow. Why waste CPU cycles on rendering a UI when the CLI is so great (on Linux)?
Now of course they suck ass because MS has no idea what it's doing anymore.
When you start to administer hundreds or thousands of servers, the scriptability of CLI tools and the pliability of text files trump everything else in automation.
xCAT and Salt are two tools (that I use regularly) which are built on these properties and can manage thousands of servers with a "flick of a finger".
A simple HTML form to "arm" the installation server, a simple reboot and PXE booting can be much more effective than all the painkillers combined for remedying the headaches of IT people and managers alike.
If you don't roll changes that often, you can distribute emergency USB sticks to install servers and workstations. Even a single stick can detect the hardware and install the appropriate system accordingly.
Of course we should point out sucky tools regardless of their platform.
I feel the need to sing, "You may have both.... you _may_ have both.... it's not so hard to imagine... MS even did it in certain products..."
Being able to tune a system from top to bottom solely with text files is just amazing. Also, some people hate the "everything is a file" philosophy, but it's the enabler of this configurability.
When I started programming on Linux and saw that everything was accessible via the normal file system, it was a big relief compared to W32's iceberg environment, where you cannot see everything and sometimes the water is too cold to dive that deep.
Backup is done the other way around: you back up your configuration scripts.
A main criticism of all this is that the usual tools didn't work remotely - you need a different, undiscoverable, possibly differently licensed, set of tools from what you use to change settings on your local computer. And of course you can't easily set up Computer A and then clone its settings to Computer B; you can only do this globally through things like Group Policy.
I can't think of any that I use regularly that don't work remotely. Which snap-ins are you referring to? Most of them you just right-click on the top level object and select "connect to another computer".
A quick check shows the availability of "connect to another computer" is a bit random; I can do it in services.msc and taskschd.msc but not in diskmgmt.msc. Maybe that's why I've never found it before.
Most Windows admins still prefer a lot of GUI management (I've gotten some groans in response to my statement new servers would tend not to have it), but remote desktop to the server is no longer the preferred way to do that: Remote Server Administration Tools effectively installs all of the server GUI on your desktop PC.
Due to the number of legacy applications Windows Server folks tend to support, it's unlikely server GUIs are going away entirely anytime soon, but for a lot of basic server functions supported directly by Microsoft, it's doable. And in addition to not wasting processor and memory pushing pixels, Windows Servers without the GUI are susceptible to fewer attacks, require fewer changes during Windows updates, and reboot faster, all on account of just having "less" onboard.
Back then they didn't really understand the point of SSH, though. They had a vision of remote management via .NET RPC and PowerShell.
SSH is an awesome tool and capability as a relatively high-level network channel. The de facto "shell" approach leads to a lot of problems when used as a management device. It encourages ad hoc, unstructured, and opaque changes. Managing your hosts via Secure Shell simply leads to bespoke, unrepeatable outcomes and crushing debt.
Moving to a well-structured, repeatable management paradigm is the only way to survive large or long-term deployments. I see "systems configuration" and "orchestration" as the most common ways to achieve that. Personally, I've been trying to move Linux/BSD host management off of SSH for 10 years now. I will be very, very happy when SSH shell instantiations approach zero per day.
SSH is not for management. SSH is for remote access. What people do with their remote access is not SSH's concern. Ad-hoc management is just as possible and easy via RSAT and PowerShell. Hell, I've seen management via ansible devolve into a mess of unmaintainable, unrepeatable script heap.
> Moving to a well-structured, repeatable management paradigm is the only way to survive large or long-term deployments.
And how would such a paradigm remotely access the machines? The tool I use uses SSH for:
1. Running commands on remote machines, interactively & non-interactively;
2. Synchronising files on remote machines, via SFTP and Rsync;
3. Forwarding network ports to provide secure access to remote services, e.g. Redis.
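To make that concrete, a rough sketch of those three uses (the host name, account, paths, and ports here are all made up; SFTP works much like the rsync line):

    # 1. Run a command on a remote machine, non-interactively
    ssh deploy@app01 'systemctl status myapp'

    # 2. Synchronise files over SSH (rsync shown; sftp is similar)
    rsync -az -e ssh ./build/ deploy@app01:/srv/myapp/

    # 3. Forward a local port to a remote service, e.g. Redis on the remote host
    ssh -L 6379:localhost:6379 deploy@app01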
Personally, I've been trying to move app deployment and debugging onto SSH in every team I've worked with, and I will be very happy when developers are able to treat a remote machine (development or production) as little more than just a well-managed local machine.
To answer your question: you'd probably have a dedicated on-host agent. See MCollective or AWS SSM for examples I'm familiar with. It doesn't really matter what the transport or message model is. It's more important that your actions are discrete and well modeled, with defined and structured inputs and outputs. It's also important that there's an intermediary, so the scope of control doesn't rely on the client and end hosts.
For #1, interactivity is almost orthogonal. For #1 and #2, the point is that you have structure, repeatability, validation, auditing, testability, etc. #3 is again a transport problem, or similar.
For your last point, my issue is with treating dev and prod as similar change-and-access problems. The environments can certainly look or feel the same, but the output of dev should be discrete, managed artifacts. If you need to do ad hoc exploratory stuff on prod, that's a good sign you lack control or a valid synoptic model of your systems.
That said, we may be trying to solve different problems. I have lots of teams, more ICs, and many hosts. Getting from artifice to industry is important at some point.
Edit: go check out the MCollective demos to get an idea of how you can change the access/use model.
Sometimes you just have to give people the candy they want - even if it's not good for them.
Isn't this an anti-pattern considering the cloud tenet of treating machines as cattle, not pets?
SSH and Remote Desktop solve a different problem. It may not be the one you have - but some of us just need to log into the machine because we have needs that don’t fit into some predefined workflow. Or we have so few boxes it’s not worth it.
Consider SSHing into a large build machine for compiling as an example.
For Ansible users, Windows getting OpenSSH is a huge win, since we (presumably) will no longer have to jump through all the hoops WinRM throws at you in order to configure Windows machines remotely.
SSH was originally intended to replace the aging unencrypted protocols that were used for remote access (telnet, rsh, etc.). It comes from a time when configuration management tools like Ansible, Puppet et al. didn't exist, and most system management was ad hoc or done by home-grown scripting. So it makes sense that it behaves the same way as rsh for the most part.
WinRM (which this Microsoft implementation of OpenSSH will replace for most Ansible/UNIX users that are managing Windows systems) also allowed arbitrary command execution as well, so it was really no better.
As an outside observer, it really looked to me like Microsoft had their head in the clouds up until about 2011. It's like the whole decade between 2005-2015 for MS was about recognizing and coming to terms with the multiple "whole worlds out there" and getting past NIH syndrome.
The one place this didn’t apply was your choice of code editor. You could use whatever you wanted and that had long been tradition. I spent a lot of effort getting VIM to work only to realize it didn’t scale well to multi-million LOC code bases.
Really had nothing to do with the size of the file.
A large enough organisation will have middle management and people who are promoted from within. And they were successful in one culture, and it even worked for customers.
Passing serialised scriptblocks to the remote host, executing them, and getting serialised objects back. Meaning you can pass serialisable objects around, and handle errors, the multiple output streams, etc., over the remoting connection.
Because it's integrated into the language, you can pull PowerShell modules over from the remote host to the local host, automatically generating proxy stubs for them ("implicit remoting"). E.g. to manage an Exchange server you can remote to it, bring the Exchange cmdlets back, and then import them locally as if they were a local module.
It can also edit the proxies as it builds them for you, to allow or restrict which commands you get - based on your permissions, and perhaps which tier of service you're paying for. And not just allow or restrict commands, but individual parameters on those commands, if desired.
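A rough sketch of both ideas (the computer name "exch01" is hypothetical, and the second half assumes an Exchange server exposing the usual remoting endpoint):

    # Explicit remoting: run a scriptblock on the remote host, get deserialized objects back
    Invoke-Command -ComputerName exch01 -ScriptBlock {
        Get-Service | Where-Object Status -eq 'Running'
    }

    # Implicit remoting: generate local proxy functions for the remote Exchange cmdlets
    $s = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://exch01/PowerShell/
    Import-PSSession $s    # Get-Mailbox etc. are now callable as if they were local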
That's fine in theory, except that some very common Windows Objects can't be serialized in any meaningful way (most painful one for me being the Windows Update Client), which means that certain "remote" operations are actually affecting the client machine, or are failing because the client lacks support for certain operations.
I've been using Powershell since v2, but to me, it's essentially a walled garden automation environment. Just one more way for Microsoft not to give you the tools to efficiently manage your fleet.
a) I think the normal use is something like SCCM or Intune to automate Windows Updates, not manually connecting to a machine and trying to script the Windows Update Client.
b) Microsoft built Azure Stack with PowerShell automation. Are you saying Microsoft refused to give themselves the tools to efficiently manage their own fleet? What tools even are these?
That's why I can guarantee Remote Desktop has more usage than these remote RPC calls for ad-hoc work. PowerShell is quite powerful, but its features are more for programmers than users.
Because of its complexity, RPC is not a replacement for SSH, where you type commands directly on the remote machine.
This is partly why nobody developed proper open source Exchange clients: the Exchange protocol is MAPI over DCE/RPC, and you need to build a lot of infrastructure before it will even talk to you.
Samba still does amazing stuff on the directory RPC front.
A customer of mine fires up servers to run tasks on them. A few of those tasks are programs that unfortunately run only on Windows (the pain of updating those boxes!). I install SSH to be able to perform some basic operations on them and move files over SFTP. Hopefully an integrated SSH server works better (issues with the terminal, etc.).
I understand that this is not a scenario Microsoft would advocate in the past, but eventually Microsoft lost the war for servers so it's giving us interoperability tools now.
If you told me I had to do anything advanced with Windows using only a CLI, I wouldn't know where to start. On the other hand, I could maneuver my way around a bash shell. I hadn't thought about that before now...
cmd.exe and DOS were both text-oriented but extremely buggy and inconsistent. Perhaps MS could just focus on providing a very bash-like language (perhaps even including the GNU utilities) while supporting the major interactive use cases for cmd.
I'm not great with it yet, but it feels intuitively like a step forward to be able to pass objects around instead of spending so much of my time using awk-ward 40-year-old tools to tease text out of various fields.
But terminals and the tooling around them have always revolved around text. My point is that since Microsoft is (a) including OpenSSH, (b) upgrading their terminal infrastructure to support VT100 and ANSI codes, and (c) investing in WSL, it would make a lot of sense to add a Windows-native shell that is more text-oriented. Not because I hate working with objects and APIs, but because plain text ought to be the primary, first-class supported data format when working on a terminal.
PowerShell is great, but it's an object-oriented (not text-oriented) shell.
That's because they're from the 70s, when literal terminals with no real computing power were connected to a centralized system that did all the work. They're archaic and silly.
They may seem that way to some, but they still provide a surprisingly versatile and useful user interface. Terminal programs and scripts are easy and fast to write, easy to interact with, easy to change, and trivially portable across POSIX platforms; hence, when writing a tool for oneself or another developer, most developers will usually opt for a CLI-based UI. Terminal program UIs are also very stable (in addition to being text-based), making them arbitrarily composable into scripts and workflows without fear of them breaking in the future.
Modern graphical UIs are usually easy to use, but they are far-removed from the code and tools that are used to write them or interact with their APIs programmatically. Terminal program UIs, however, are very close to the environment they're built in -- so the line between using a terminal program and interacting with it programmatically is delightfully blurred.
I use a multi-paned iTerm2/zsh window every day as a front-end developer (I'm on a Mac running macOS Mojave). I can't imagine working without it. There are a lot of good reasons npm, git, and other popular mainstays in modern development workflows are terminal-based.
They are doing exactly that: https://blogs.msdn.microsoft.com/commandline/2018/08/02/wind...
If I were a server admin, I’d probably turn all of that off because it’s more attack surface area. And cripple myself in the process. For that reason alone, I think a new shell is probably more likely to be helpful.
Personally, I see the Windows Admin Center as a nice-to-have tool for quick tasks by inexperienced or occasional admins - but you really should just learn PowerShell.
But my understanding is that Windows fleets are just administered drastically differently than Linux fleets.
On Linux we have a lot of config management tools and command-line remote administration kits.
On Windows they have SCCM and remote-access GUIs which do not get rendered on the target server. It's very common in Windows-heavy shops to see a GUI program that does something pretty basic, but across 500+ Windows machines. In fact, when I explained that I do all my scripting with a local target (because I use Salt Stack, which is essentially a remote execution framework before it is a config management system), I was told in no uncertain terms that I was doing things the "old way".
* Desired State Configuration (see the sketch after this list): https://docs.microsoft.com/en-us/powershell/dsc/overview
* Powershell Remoting (please, please let SSH replace this)
* Microsoft's new command line push - cmd and the underlying console have had big updates recently
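For the DSC bullet, a minimal sketch of what a configuration looks like (the node and output path names are just examples): declare the desired state, compile it to MOF, then push it.

    Configuration WebBaseline {
        Node 'web01' {
            WindowsFeature IIS {
                Name   = 'Web-Server'
                Ensure = 'Present'
            }
        }
    }

    WebBaseline -OutputPath .\WebBaseline                        # compiles a MOF per node
    Start-DscConfiguration -Path .\WebBaseline -Wait -Verbose    # applies it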
The saving grace here will be SSH because then at least we can drive all our kit across both platforms from Ansible and be done with the entire MSFT management stack.
At the moment we use Ansible but it runs over WinRM which is completely unreliable due to some architectural problems with how WinRM works.
Really, automation on Windows is extremely costly compared to Linux. I've got to the point I never want to run Windows infrastructure again.
Before anyone says "it's getting better": people have been saying that for 10 years, and it has only gotten different, not better.
I'd disagree. I've been managing fleets of Windows machines since NT 3.51 (and fleets of Linux servers since 2003). It's all about knowing the tools well.
Whilst I've been using Linux since around 1995 (Slackware!), and prior to that various Unix and Xenix(!) boxes had popped in and out of my IT life, I'd never managed more than one or two machines. That was until 2003-ish, when I went to work for a shared hoster. It took me a while to become as proficient at managing all these new Linux boxes that entered my life as I was with Windows. I had to learn the tools.
I'm not suggesting you weren't properly trained, but most of the complaints I hear about Windows being a devil to manage is usually because admins haven't learned how to use the tools and generally haven't a clue what they're doing.
1. Remote management is totally unreliable. If I point DSC at 20 nodes via WinRM, 25% of the time it will fail on one of the nodes.
2. Everything takes glacial amounts of time to happen, from deployment and provisioning to simple cases like restarting processes. This is because Windows is simply so damn large.
3. There are so many edge cases and bits of tribal knowledge dotted all over Stack Overflow (which is sometimes the only way of finding out what the hell is going on) that it's impossible to cover all of them through automation, and nigh on impossible to document the rationale behind them.
4. Literally everything is stateful and state is difficult to synchronise and reload.
5. The disparity between major platform releases makes it extremely difficult to develop automation that isn't brittle and can span major Windows versions. And because everything has all of the other concerns, from coupling to bugs, it's impossible to move big chunks of your infrastructure over to new Windows Server versions. So you end up in permanent migration mode.
6. The sheer number of bugs in some of the larger products like SCCM and SCVMM is just unbearable. The SCVMM client for Linux is a pile of shit, and I'm being polite there.
All of these cost an immense amount of wall-clock time and staff attention, which far outweighs the cost benefit of using the platform in the first place, at least for an application-server business.
On the desktop, things are slightly better, as it's the least bad desktop platform for corporates, but that's changing. Even Microsoft knows that, which is why they are forcing people down the cloud route and selling subscriptions.
Sure, managing Linux servers is a doddle, but desktop environments are miles harder to get right.
That is what every single Windows environment I've seen turns into, including the SCCM powered and Microsoft direct consultancy managed ones.
I know what you mean. It's part of the reason I'm on the cusp of discarding my 20-ish years of MS SQL Server experience and going full Postie for the rest of my life. Windows IT infrastructure in most enterprises is a pile of lies. It's particularly galling in academia and non-profits, because they can't afford to keep up with all the Microsoftia, and yet still the management dreams of using Microsoft StudlyWare, which would require 10x the IT cost they could ever dream of, let alone get.
But people are totally 100% happy with that in some circumstances. They are happy with lower mediocrity simply because the vendor in question has got away with it for so many years that it is now the status quo.
If I bought a Tesla and it behaved like the average windows machine I'd drive it back through the fucking dealer's window.
Simply put, I demand better, and I demand what I paid for.
In contrast, .NET Core supports various old Windows-only features, such as COM and soon WinForms.
Now that Microsoft has finally realized the importance of remote shell access for servers and has included it in the OS by default, it will become a convenient alternative to RDP.
Heck, the default installation of windows does not even have a GUI. People ahem really need to update their thinking of what it means to be Windows.
Virtually all Windows administration can be done entirely in Powershell.
And like I said, the literal default installation of Windows does not even include a GUI.
I haven't seen this done on Linux. Has this trick been implemented on other systems?
For running tests for a GUI application - which requires us to spin up thousands of machines in parallel, manage job queues, interact with Windows UI Automation, do some HTML result dumping, shell buffer streaming, and a whole bunch of crazy things - PowerShell interfaces with everything really nicely.
I absolutely love that PowerShell pipes streams of objects rather than just streams of bytes. So much more expressive.
The number of people who try to explain bash basics to me when I say I use PowerShell is staggering. I'm quite good at regexes, thanks; I just like my scripts picking keys with 'select' and 'where' rather than scraping with grep/awk, etc. The number of (poor) pwsh clones on Linux is a good testament to the soundness of this approach.
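A small example of what that looks like in practice - no field splitting, just property names (the 200MB threshold is arbitrary):

    Get-Process |
        Where-Object WorkingSet64 -gt 200MB |
        Sort-Object WorkingSet64 -Descending |
        Select-Object Name, Id, WorkingSet64 -First 5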
With all the "do 1 thing well" utilities glued by bash, it just feels real effective.
The Language Server Protocol has revolutionized IDEs in general.
And now that they own GitHub (and their projects like Atom), it seems like the entire FOSS developer workflow is likely to be from MS-derived projects. Which I'm sure they are hoping will translate into more cloud service revenue and online software subscriptions.
If you are starting a new company, BizSpark is a really attractive offering. Could run a whole business on it, and yet none of your devs need to be running windows on their machines.
If you are wondering what Auto DevOps is - GitLab Auto DevOps eliminates the complexities of getting going with automated software delivery by automatically setting up the pipeline and the necessary integrations. You can find more info on the landing page and in the documentation.
If I had all of the source code from any of the FAANG companies what would I do with it? The only one that may be valuable is Google and I could make a killing from knowing how to do SEO. But even then it would take an army of very smart developers to figure it out.
Maybe that's just my interpretation, but without explaining what being a "leader in open source" means and by what merits Microsoft (a business dealing mainly in proprietary software) is becoming it, that's all I have to go by.
Microsoft only releases the source of products they create when they clearly can't compete and have no other option. In terms of open-source leadership and innovation, MS is worth zero.
Although they have good R&D, there isn't anything new they've contributed to the OSS community.
Take a look at the picture: https://www.theverge.com/2016/9/15/12926288/microsoft-really...
If talking about heart and greed is all that's left to burn Microsoft with at this point, then they must really be knocking it out of the park.
SSH keys would solve so much trouble with scripting and credentials...
Windows key, type "optional features", and install the OpenSSH client.
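The same thing from an elevated PowerShell prompt, for what it's worth (the capability name is what current builds ship; check with Get-WindowsCapability if yours differs):

    Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'
    Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0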
We're a Windows shop, 100%, so it's non-trivial controlling other machines from a command line.
I miss it so much.
Edit: Cmder - http://cmder.net/
However, I have changed the following:
* Set the default font to Droid Sans Mono (Slashed) 
* Set the scroll back to 9000 lines, and the default width/height to 132x50.
* Used ANSI to colour code the prompt.
* Created a small tool called su.exe that re-launches cmd.exe as Administrator (it can't be done inline, unfortunately)
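(For comparison, the closest built-in equivalent I know of is the one-liner below; it likewise opens a new elevated window rather than elevating in place.)

    Start-Process cmd.exe -Verb RunAs    # triggers the UAC prompt, opens an elevated cmd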
And then I use OpenSSH (the Windows 10 bundled version) to remote to Linux boxes.
It's all fine. I really don't understand the hate.
("Slashed" is my own edited copy of Droid Sans Mono where the zero has a 'slash' added.)
CMD is another horribly designed, feeble tool, compatible with COMMAND.COM and acceptable back in the eighties. Yes, it's fine for running a program or two here and there if you've got low expectations. You can get work done on 640x480 too, but we don't suffer that today. ;-)
I recommend this series of posts that will explain how poor the situation has been on Windows, and how it is improving:
None that were mainstream/widely used.
This is the Vagrantfile: https://github.com/andrewmackrodt/boot2lxd/blob/develop/Vagr... - it creates and attaches an extra virtual disk in the project directory, which was owned by SYSTEM IIRC.
TL;DR the VM which should have worked did not work, my FS had weird permissions and a reboot was required to clean up VirtualBox.
EDIT: the 1803 build apparently can get OpenSSH Server running, but it's "a bit of work": https://www.bleepingcomputer.com/news/microsoft/how-to-insta...
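For reference, on recent builds the "bit of work" boils down to roughly this from an elevated PowerShell prompt (the capability name may differ by build, and a firewall rule for port 22 is sometimes needed as well):

    Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
    Start-Service sshd
    Set-Service -Name sshd -StartupType 'Automatic'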
There were no real alternatives. I remember when they got all excited about showing off remote Powershell to us circa 2004, and we collectively rolled our eyes.
I always wondered what Messenger/Hotmail was run on, and such stories would be really, really interesting IMHO.
Messenger was born from the NetMeeting team in an informal weekend hackathon (before that was a thing). The backend stored the buddy lists and was 3 Hotmail ustores (Sun E4500s with EMC CLARiiON storage). The front ends started out as Sun 420Rs, then switched to Windows bit by bit (DP, SB, and then CS and PS). The Messenger HTTP gateway always ran in IIS.
Thank you for sharing that!
This will work exactly as it does on Linux (and the shim will continue to be open source): local admin (or user) accounts will be created and keys deployed, and the user will be able to choose PowerShell as their shell of choice.
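Picking PowerShell as the shell presented over SSH is a server-side setting; on current builds it's the documented DefaultShell registry value, roughly like this (path shown for Windows PowerShell 5.1 - substitute pwsh.exe or another shell as desired):

    New-ItemProperty -Path 'HKLM:\SOFTWARE\OpenSSH' -Name DefaultShell `
        -Value 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' `
        -PropertyType String -Force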
(FWIW, Userify is both a cloud/SaaS and on-premise/self-hosted SSH key management solution designed for modern clouds like AWS and Azure that creates local accounts, deploys keys, and then keeps everything in sync using just outbound HTTPS connections.)
I have some vague memory of limitations in earlier versions, with distinct differences between password auth and key auth.
The difference between password auth and key auth is that with password auth you can access network shares that need authentication from within the remote session. With key auth, you won't be able to.
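One workaround I've seen (exact behaviour depends on the environment, and the prompt needs an interactive session) is to re-authenticate explicitly inside the key-authenticated session instead of relying on delegated credentials; the account and share below are hypothetical:

    # inside the remote session: supply credentials for the share explicitly
    $cred = Get-Credential 'CONTOSO\svc_backup'
    New-PSDrive -Name S -PSProvider FileSystem -Root '\\fileserver\share' -Credential $cred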
Perhaps hardware vendors would just start dedicating more effort to Linux driver development then. Nevertheless, the way drivers work in Windows (separate programs instead of kernel modules) is still better in itself.
What advantages does it offer?
WNT evolved (spiritually) from VMS and has a lot of unique things in its structure.
Note also that Windows already kind of is a perfectly serviceable Linux kernel through WSL. That's one of the unique things about NT - it was designed for binary compatibility for programs written for several distinct systems.
Have they? Or have PC prices fallen fast and far? It’s hard to compare due to the product continually changing (insert some sort of apples with apples joke), but it looks to me like Mac prices have fallen.
>I, as a consumer, would gladly pay good money to be able to use Outlook and the MS Office suite.
You can do that on a Mac.
This makes absolutely no sense. Linux is a kernel. If Linux 'switched its kernel to NT', it would not be Linux. Perhaps you meant the userspace, which is not dependent upon the kernel?
Typical Linux user space does depend on the kernel ABI – most significantly via glibc, systemd and friends, but also to various degrees up through the stack.
Not really. There are some exceptions (systemd), but there are libc implementations for other kernels, and the GNU userland definitely runs on other kernels without a hard dependency on any Linux kernel ABI.
> Linux is a kernel.
You also know that 'Linux' is more commonly used as a description of the userspace and other common software. You are starting an unnecessary argument.
Case in point, this whole article is about software originally developed for OpenBSD. OpenSSH has nothing to do with Linux.
For those of you who don't remember: Xenix was Microsoft's UNIX, which it marketed prior to releasing DOS. Originally the idea was that Xenix was the multi-user OS and DOS the single-user one.
Also, people claiming that it does not spy because it is open source should read [1]. Source availability does not mean the binary was built from it.
1. You Think the Visual Studio Code Binary You Use Is Open Source? Think Again | Hacker News – https://news.ycombinator.com/item?id=18012301
"In the mid-to-late 1980s, Xenix was the most common Unix variant, measured according to the number of machines on which it was installed. Microsoft chairman Bill Gates said in 1996 that for a long time that company had the highest-volume AT&T Unix license"
Microsoft definitely understood Unix and figured out there was no market for it for close to 2 decades.
1. 1980 to 1998 for Google, 18 years -> that's awfully close to 2 decades.
2. 1980 to 2001 for Apple, 21 years -> that's more than 2 decades.
So your counter examples confirm what I said.
3. Windows Phone and family failed because of marketing blunders, not because of tech. MS-DOS and Windows succeeded because of marketing prowess, not because of tech.
4. The mainstream market just wasn't ready for Unix for a long time. The biggest Unix success was Sun, for a long time, and Microsoft absolutely dwarfed Sun for most of their existence. Further proof that my statement was right.
Not alone; it was also the network effect, and the terrible reputation Microsoft gained at the end of the '90s and into the '00s.
> 1980 to 2000
Yes, I noticed, but that isn't a representative range. If anything, what's relevant is their recent history:
1) Less proprietary software, more FOSS.
2) Data gathering is more important.
3) As the old cash cows dwindle (Windows and Office) new ones are attempted.
> The OpenSSH client and server are now available as a supported Feature-on-Demand in Windows Server 2019 and Windows 10 1809!
And in any case: it's easy enough to learn Windows, but it gets a lot better when you put decent command-line tools on it.
(Yes, I already know about PowerShell and don't need twelve replies advocating it as a revolutionary concept.)
The reason you get twelve replies is because PowerShell is the answer to wanting a good shell and decent command line tools. Not perfect but it runs circles around typical ‘nix shells.
It would be a bit like having a decent Ruby or Python-based shell on Linux and bemoaning the lack of a Windows cmd port.
That said, there's a great case to be made for familiarity and experience, and any Bourne-like shell is going to do just fine without PowerShell's bells and whistles. PowerShell is also far from perfect and has warts and limitations like anything else.
...to a question not asked. No, really, I want Linux tools, and don't want PowerShell. When I want more than a shell, I want a real programming language, not a more powerful shell.
I appreciate that some people like PowerShell. But the answer to "no, not PowerShell" is not "let me tell you about why you want PowerShell anyway, you just don't understand".
> That said there’s a great case to be made for familiarity and experience
Here's a closely related example. Back when there was more than one competing distributed version control system, there was "tla" (a predecessor to baz, which was a predecessor to bzr). tla had commands like "init-tree" (not "init"), "file-diffs" (not "diff"), "apply-changeset" (not "apply"), "make-branch" (not "branch"), a dozen commands for logs (not "log"), and so on. Perforce has similar problems.
And PowerShell has "Expand-Archive" (not "unzip" or "tar"), "Invoke-WebRequest" (not "wget" or "curl"), "Get-Unique" (aliased as "gu" but not "uniq" or "unique"), and so on. Brevity is a virtue in a tool used so frequently, and brevity is not lack of clarity. Unnecessary verbosity is often an obfuscation.
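For the curious, the same three operations side by side (the URL and file names are just placeholders):

    Invoke-WebRequest https://example.org/src.zip -OutFile src.zip    # wget / curl -O
    Expand-Archive src.zip -DestinationPath .\src                     # unzip
    Get-Content names.txt | Sort-Object | Get-Unique                  # sort | uniq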
I always laughed while configuring/managing MS products.
Great news anyway. Congrats on the launch (2018).
SSH alone doesn't seem any good without all the UNIX tools.
The office suite is also much better than alternatives.
So just switching the kernel to Linux would be enough and super winrar.
The asynchronous io API has nothing to do with this.
random google result: https://news.ycombinator.com/item?id=11866076
>Or using separate threads for I/O vs UI?
That's not the target use for completion ports, which is handling thousands of outstanding I/O requests with a few threads.
The kernel is actually likable, but it would benefit from being open source, and the GUI is a horrible blend of touch-screen and legacy-input hybridness that I wish had never existed.
You know that GNOME also has a registry, right?
See the rants at:
I'd love to be able to submit this bug on the issue tracker, if you could tell me where to find it.
And we are talking here about an OS and GUI made by the same vendor. Hugely different from the Linux+GUI situation.
which is another example of an awful legacy interface where, by default, you can only peek at the first 2 unrelated lines of every log message (and, even worse, without a standardized format)
Let's see - other Windows 8/10 UI blessings I'd point out are the revamped Wi-Fi connection management in the system tray, the quick notification-tray access for switching secondary monitors between extend and duplicate modes, easy access to airplane mode, and a proper official UI for Bluetooth devices and cellular cards. Don't forget having a proper notification tray to begin with, so that fifty dozen system tray icons aren't each presenting their own popup bubbles.
While the start screen and charms bar were definitely missteps in Windows 8, I would cautiously be willing to almost entertain the claim that Windows 8 had a better UI than 7, just due to all the handy shortcuts they added. But 10 definitely brought back some of the strengths of the 7 UI, while keeping all of the improvements that 8 launched.
I don't use those so often that I need a dozen shortcuts.
- Notifications are non-stop until you turn them off. And even when you turn everything off, there are system notifications that you can never turn off.
- The start menu is god-awful. Why they decided to replace the easy-to-use Windows start menu with this animated abomination is beyond my understanding. Our office puts a classic menu on all new PCs because of past complaints.
- The settings are all needlessly verbose. This isn't exactly new to this version of Windows, but they could really take a page out of Apple's book when it comes to changing system settings.
- And they've gone the extra step to make sure you can't easily switch the default browser away from Edge. Not exactly user-friendly.
Windows UX might look more polished, but I find GNOME 3 to be the most usable one. Because I feel completely in control of the entire system with just a keyboard.
I strongly disagree. See how opinions work?
> and super winrar.
Winrar the compression utility, or is this some meme?
Sorry, no. For as bad as Windows 10 might be interface-wise, Linux DE's are far, far worse for anybody but Linux fans.
get their office suite to LibreOffice