Hey there, everyone! I'm the PM for this project, and it looks like I'm a little late to the party, but I wanted to drop in some notes to add to the discussion:
* we also shipped OpenSSH's sshd server, but it's a little tricky to configure right now. Expect a blog post this week.
* This is not production-ready in the current version of Windows 10 (hence the "(Beta)" in the label), but we hope it will be soon.
* All of this is being done in open-source out of a fork at https://github.com/powershell/Win32-OpenSSH (code work is technically checked into https://github.com/powershell/openssh-portable/ first, but those will be consolidated at some point). Check the Wiki there for a ton of info.
* We've been working closely with the official OpenSSH Portable maintainers to get upstream at some point in the future. They've been awesome to work with so far, and we're really looking forward to moving Windows onto the definitive SSH implementation of the world.
This has been a super fun project to work on over the last couple of years, and I'm glad that there's such a groundswell of excitement around it. Like I said, I hope to publish a lot more documentation/material on our plans as we get to a production-ready state.
I've also been super swamped with the release of PowerShell Core 6.0 [1] for Windows/macOS/Linux coming early next year, hence the lack of a good release announcement on these beta bits... Thanks to Patrick Kennedy for finding it and letting everyone know! :)
It drops into cmd, but it is configurable (and there's actually a Chocolatey community package for it that I believe prompts you for which default shell you want at install-time).
With PowerShell Core 6, we also support PowerShell Remoting Protocol (PSRP) over SSH as a transport, which means that you can do stuff like New-PSSession and Enter-PSSession without WinRM. (PowerShell just gets registered as a "subsystem" of sshd, same thing sftp-server does.) You can check that out here: https://github.com/powershell/powershell/tree/master/demos/S...
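(For the curious: the "subsystem" registration is literally one line of sshd_config. The path, version, and flags below are a sketch from memory, not official docs; check the demo link for the real thing.)

```
# sshd_config fragment -- pwsh.exe location here is an assumption; sshd_config
# doesn't handle spaces in paths well, so the 8.3 short name is the usual trick:
Subsystem    powershell    c:/progra~1/powershell/6.0.0/pwsh.exe -sshs -NoLogo -NoProfile
```

After restarting sshd, something like `Enter-PSSession -HostName myserver -UserName me` should then connect over SSH instead of WinRM.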
Nice work. Is it fair to ask for even a ballpark answer on what the licensing story will be?
Windows Client Access Licensing is annoying, but they have some web exception for web servers. SSH remote access is a whole new ballgame! I assume it is encouraged for one-at-a-time administrator use in much the same way Remote Desktop currently works, but even that isn't really communicated clearly yet (even from a high-level perspective).
It seems like getting the plans out there as soon as possible to begin collecting feedback and managing expectations is going to be win-win for everyone.
I am having difficulty creating a new keypair using ssh-keygen.exe and a passphrase - should this be working? I receive the error: Saving key "[...]/.ssh/id_ed25519" failed: invalid argument.
Haven't seen that error message before but I highly encourage you to use our GitHub builds where we've fixed a ton of issues related to path parsing and VT100/ANSI compatibility.
Those builds will also be showing up in Insider builds over the next few weeks (if they haven't already).
As part of the beta, we only support crypto algorithms that ship as part of OpenSSH itself, as opposed to algorithms that ship in OpenSSL/LibreSSL. Hence, we only support ed25519 in Fall Creators Update.
If you grab our GitHub bits, we compile against LibreSSL there, and you should see LibreSSL showing up in future Insider builds of Windows. That brings support for all the other crypto algorithms you'd expect (RSA, DSA, etc.)
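If you have a working OpenSSH handy for comparison, the non-interactive equivalent of what ssh-keygen.exe is being asked to do looks like this (the path and passphrase are throwaway examples):

```shell
# Generate an ed25519 key (the only algorithm in the FCU build) without prompts:
# -t picks the key type, -N sets the passphrase, -f the output path, -q is quiet.
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -N 'correct horse battery' -f /tmp/demo_ed25519 -q
ls -1 /tmp/demo_ed25519*
```

If the same invocation fails on the Windows build, that points at the FCU bits rather than your usage.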
Yup! We support down to Windows 7 and Windows Server 2008R2. You just have to install the bits from GitHub (but they're actually more up-to-date because we can ship there a lot faster).
What kind of network drive support are you looking for specifically?
If you mean, "interact with them from an interactive shell like cmd or PowerShell", yes, you can absolutely do that. Our implementation literally stands up a console host on the server side, so whatever you can do in that console host with the privileges you have, you can do over SSH.
If you mean, "do you support SCP and SFTP for transferring files in a network-drivey way" the answer is also yes. SCP is supported directly against sshd, and we ship both an SFTP server and client.
If you're talking about something NFS or SMB related, I'd be curious to hear more details.
I don't sit within the console host team, but I do work very closely with them and their PM Rich Turner is very active on Twitter (@richturn_ms). They're tracking the (very popular) request for tabs at https://wpdev.uservoice.com/forums/266908-command-prompt-con...
It has said "ON THE BACKLOG" since October 01, 2014. That's a pretty long backlog if the second-most-voted request hasn't made it out of it in three years ;-)
Again, not my backlog personally, but I'd wager that the VT100 work that they've been doing in the console host is going to do a lot more for the ecosystem (WSL/"Bash on Windows" included) than tabs.
That being said, yeah, I also want tabs. In the meantime, I switch back and forth between conhost and Cmder. :shrug:
These days you go to the Microsoft App store and download Ubuntu. It loads up actual bash and comes with everything you'd expect. (ssh being just one of the things). The terminal emulation is improving (there are still a couple of glitches) but pretty much all of my daily tools work exactly correctly.
When I bought my first Mac, back in the days of MacOS 10.2, I did so because of the BSD system underneath. Having a Unix system, whether it's GNU or BSD, gives me access to tools that I'm familiar with and sometimes prefer to their GUI counterparts (e.g., scp/sftp vs FileZilla).
Now that Windows has this, switching back is something that I'm giving serious consideration to.
> Now that Windows has this, switching back is something that I'm giving serious consideration to.
It's not clear to me that Windows has this ...
I, like you, adopted the MacOS ecosystem because it was UNIX underneath ... but there's a big difference between underneath and alongside.
Although it is not commonly done, you can control and interact with your OSX system with UNIX commands ... there's one single filesystem namespace and you can interact with it from the command prompt as well as kill GUI apps or set preferences or ifconfig, etc.
It's my understanding that the Ubuntu subsystem in recent windows is sort of a parallel environment ... but is not meant to control the system directly or as an alternate path of interaction with Windows, correct ?
Alongside is an apt description. It feels a bit like running Linux in a Docker container.
The Windows Subsystem for Linux (WSL) has its own directory in the Windows filesystem that corresponds to /. It uses some NTFS magic to store the Linux file attributes that don't directly correspond to NTFS file attributes.
It also, within the Linux environment, mounts your host drives under /mnt.
For what I generally use the OSX terminal for, WSL probably hits about 80-90% of my use cases. A lot of lower level utilities have weird issues - e.g., 'ip addr' seems to present the Windows network interfaces as though they were typical Linux ones, 'ss -a' gives a bunch of netlink errors, dmesg says "dmesg: read kernel buffer failed: Function not implemented", 'tcpdump' doesn't work, etc. On the other hand, curl, scp, ssh, etc do exactly what you'd expect.
My limited experience with WSL gave me the impression that anything that needed /low level/ packet constructs either wasn't mapped correctly or might have required administrative privilege blessing on the Windows side of the environment ACLs.
I hope that these get filled out, at least with more constructive error returns, at some later date.
For now, I'm happy that they have enough of the low hanging fruit ripened sufficiently to make it possible to do 'normal' things from within WSL. I'd honestly rather they make the local (and network) filesystem more performant and robust; maybe they have.
My use case for WSL is coming around in the calendar year again so I'll be revisiting it soon.
> It's my understanding that the Ubuntu subsystem in recent windows is sort of a parallel environment ... but is not meant to control the system directly or as an alternate path of interaction with Windows, correct ?
I am not so sure about this. I think Dustin Kirkland sheds some more light here:
One of the interesting things is that if Linux developed support for NTFS ACLs you could probably make it pretty seamless. One of the more convoluted parts of the NetApp filer code was the code that allowed for 'mixed' UNIX and CIFS volumes.
We're working on improving integration between Windows and Linux distros on WSL. Stay tuned for progress. The release notes on the WSL doc site [1] are basically our changelog. Highly recommended for the latest updates.
To be completely fair to both sides, though, the ability to tweak MacOS from the terminal has always felt like something Apple opposed. They don't seem to support it officially in any capacity, and in many respects actively fight it (e.g., changing the preference keys in every single release of MacOS, for literally no reason).
You can do some things to and with MacOS from CLI. Honestly though, it's all stuff I wouldn't miss awfully.
I've been using WSL + wsltty [1] + Xming [2] for months and didn't encounter major issues. wsltty also added support to the Microsoft Store version recently.
For Xming, simply set DISPLAY in shell and local GUI programs just work, as well as SSH X forwarding.
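In case it helps anyone, "set DISPLAY in shell" is just this (assuming Xming is listening on local display :0, which is its usual default; check its settings if not):

```shell
# Tell X clients inside WSL to talk to the X server on the Windows side.
# Xming normally listens on display :0 (an assumption; adjust if needed).
export DISPLAY=:0

# After that, local GUI programs and SSH X forwarding work as usual, e.g.:
#   xeyes &
#   ssh -X user@remotehost
echo "$DISPLAY"
```

Putting the export in ~/.bashrc makes it stick across sessions.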
I tried this, but I found that my Ubuntu subsystem could not access my wifi connection at all, no matter what I tried. I remember reading Microsoft forums where developers from Microsoft acknowledged they were working through the issue. It still hadn't been resolved the last time I checked.
Well, not if you aren't using other parts of Windows :-) But if you are constrained to using Windows for other reasons, the advantage is that you get all the tools with a much lighter-weight system than, say, running a virtual machine.
There are also some systems where kernel support for the devices is lagging, either because they are proprietary and poorly documented or because they have insufficient market penetration to get someone interested in writing good driver support. For example support for the pen on the Surface Pro 4 line is really horrible (IMHO) on Linux.
So these devices need real kernel drivers? Windows doesn't provide some kind of emulation through its own drivers? I guess that makes sense although it didn't occur to me.
If that's true, will those new drivers also work on a regular non-Windows Linux install? That would be really great news, and pretty ironic, if device manufacturers or even Microsoft itself were suddenly writing more/better drivers for the Linux kernel. :-)
The grandparent is saying that if you run plain Ubuntu directly on a device, Ubuntu (or really, the Linux kernel) might not have drivers that work reasonably well for some hardware on that device, while Windows most certainly does have decent drivers for that hardware.
When you run Ubuntu on top of Windows, Windows replaces/emulates the Linux kernel (at least the parts it needs to run the subset of Ubuntu that it currently can). This emulation happens at the interface between the kernel and userspace; it is not done at the device/driver level.
Drivers are OS-specific: the drivers in question here are either Windows drivers, which work only on Windows, or Linux drivers, which work only on Linux. (No one is writing drivers for Windows that could also work on Linux.)
It's not a layer on top of the NT system call infrastructure. It's brand new NT system calls that do exactly what the equivalent Linux ones do. There is no combination of Win32/NT system calls that will get you the behavior of fork().
Aren't the NT system calls also just a personality module above the actual NT kernel? The kernel was originally designed to provide multiple kinds of user spaces so that it could run OS/2 programs alongside Win32 ones.
Of course not! It's quite the opposite. With more and more people ditching 'real' Linux for WSL (which is a 'Linux kernel API layer' on top of the Windows kernel; there is actually NO Linux kernel involved), support for the relevant hardware is never going to happen in Linux proper.
The big advantage is that it runs within Windows, so you can use your Windows and Linux tools at the same time. It's also not done as a virtual machine, so the Ubuntu running on Windows can access and manipulate files on your Windows system very easily.
Albeit very slowly. Unless they fixed it recently, writing to a vast number of files is very slow compared to native Linux. Anyone who has cloned a large repository is surely aware of this.
I felt like file speed sped up quite a bit in the Fall Creators Update, though of course that notion is entirely anecdotal.
Also, the native Windows git is still your best bet for git operations. That's one of the cases where I will have a PowerShell and an Ubuntu bash window side-by-side, working in the same /mnt/d/... | D:\... directory. git operations in PowerShell and Jekyll (or whatever) operations in Ubuntu bash.
Does it make a difference if you run your git operations through cmd from the bash command line? If so, is there a simple way to automatically get the current windows path from pwd? If it's the same speed as native git running in a separate window (I think it should be), it might be worth coming up with some kind of script to do this automatically. Not sure how to get that working with using Linux tools within git, though (editing commit messages, diffing, etc).
Also, how does file speed within the Linux environment (/home/username) compare to that outside (/mnt/C/whatever)?
Also, they _really_ chose a poor name for that executable; it should have been wsl.exe or some such, not bash.exe. At least lxss.exe isn't already in use by other software in my path...
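A crude sketch of that pwd-to-Windows-path translation, for the simple /mnt/<drive>/ case only (pure string munging; a real tool also handles symlinks, UNC paths, and so on):

```shell
# Rough /mnt/<drive>/... -> <DRIVE>:\... translation, the same basic job a
# proper translation tool does. Handles only simple /mnt/c-style paths.
to_winpath() {
  p=$1
  # pull out the drive letter and uppercase it
  drive=$(printf '%s' "$p" | sed -n 's|^/mnt/\(.\)/.*|\1|p' | tr 'a-z' 'A-Z')
  # drop the /mnt/<drive>/ prefix and flip the slashes
  rest=$(printf '%s' "$p" | sed 's|^/mnt/./||; s|/|\\|g')
  printf '%s:\\%s\n' "$drive" "$rest"
}

to_winpath /mnt/d/projects/site
```

From there a wrapper like `cmd.exe /c git -C "$(to_winpath "$PWD")" status` is conceivable, though the interop story for editors/diff tools is still awkward, as you say.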
We just released a tool to do the right path translation -- wslpath. Unfortunately we haven't added built in usage info yet. Here's a link[1] to usage info in recent Windows Insider flights.
Also, totally fair feedback about the naming. In FCU, we added wsl.exe in addition to bash.exe. It launches into your default shell rather than /bin/bash.
I hadn't thought about that (I use PowerShell for most tasks so that is still my "home" shell and it's bash that's the rare one), but I half remembered something to that effect, so I went to the docs:
Calling Windows EXEs from bash will automatically retain the current working directory under /mnt/c/*, so it should just work out of the box. Looks like you might be able to get away with just adding native Windows git to your bash path.
I haven't done this yet, thanks for the reminder! I've been curious to try it as I've gone through probably every other bash emulator on Windows and not been overly satisfied.
I would love to hear how it works or doesn't work for you! I have shown it to lots of folks, and probably a quarter hit some aspect that keeps it from being perfect (generally a terminal issue or the lack of a Microsoft-provided X server). For the latter I use xwin32 [1], which works fine but may not be accelerated if your display controller isn't supported.
I'm just getting set up, but so far so good! I haven't run anything too complex just yet, but it seems really performant and doesn't put a significant load on my system.
Next up is to try running something more taxing, and then look into GPU access from within it.
I'm looking forward to them developing the subsystem further.
Have you looked at Mobaxterm at all? I have the same issues, and accidentally discovered this a few years ago, and haven't looked back. Lightweight Cygwin install, own package manager, EXTREMELY usable. The free version is plenty good, but even the paid for version is 'only' $80.
I second this recommendation. It’s a really great tool; I really like the combined SSH and FTP integration so I can browse and edit files with my favorite text editor on the machine I’m ssh’ed into really efficiently while also running commands at the same time.
I wish Cygwin performed better on Windows; git operations that are fast on Linux take minutes on Windows because it has to spin up the whole Cygwin environment. This kind of thing still makes me want to use Linux, but MobaXterm makes my Windows desktop very usable for administration.
Neat. I wasn't aware the Git client had this many features. On Windows 10 I have "Bash on Windows", but on older systems (such as Windows 7), the Git client would do the trick.
MobaXterm is nice, but I don't like how heavy it is...
WSL has exited beta in the mainline release. You can now install Ubuntu/SUSE from the Windows Store. Look up the process to extract data from your current install, uninstall the legacy edition, and get the new edition. It's not clear yet how the legacy version will be impacted; it may be easier to migrate sooner rather than later.
Yeah it's not such a mind-blower if you are experienced in using Linux, where this is just common, but it's still really nice to not miss this when working on Windows.
However, in the current version of MSYS2 (after an update a few months ago) they suddenly seem to mix the different Linux-like distributions, and for me it's really hard to choose which shell to use and which package to install for that shell.
Yeah I should clarify, the fact that msys had built in package management was just the most pleasant surprise ever. I use windows when I have to, not by choice. :)
No, it's a completely open-source, built-from-source (using gcc) software distribution for Windows. It uses pacman to install packages and comes with many hundreds of prebuilt ones.
Yeah, but doesn't PuTTY force you to use its goofball ssh keys? With git bash you can use regular ssh keys. And you can generate them the way you're used to (assuming you already use Linux). Idk, I think git bash works just fine in most cases.
I really miss ‘git bash here’ from File Explorer when I’m on my MacBook. Why it doesn’t have ‘open terminal here’/‘open iTerm here’ I don’t know. I used to have an AppleScript thing to do it for iTerm 2, but it broke, and now all I can find is a button in the Finder window... but it really should be built in.
I'm not entirely sure what you are trying to do, but if you leave the terminal icon in your Dock, dragging any folder from Finder onto it will launch terminal in that directory. Any good?
Nice, works with iTerm2 too - thanks :) I have an iTerm button in Finder which opens it in the current folder, so it's okay - but Windows has right click > open powershell here, and if you have git bash then it has right click > open git bash here - which feels more natural to me. I think the Linux desktops I've used all had right click > shell too.
I used to use Cygwin, but it became a pain to manage the packages, and it couldn't update its own setup.exe last I checked.
I recently tried out WSL and I still run into problems with the file system. My home directory is in a new place which is kind of annoying. And cloning a repo with Git changed all the line endings. autocrlf didn't do anything for me either.
Git Bash is based on msys/mingw, which have been around a while. I think both mingw and cygwin will be deprecated in favor of the Windows Subsystem for Linux.
Highly unlikely. You'd only deprecate them if they serve the exact same purpose, but they don't. Many users will switch, but not those who want a posix runtime environment for a win32 executable that can run on machines that don't have WSL installed (cygwin, and the fork msys2 which git-bash is based on); or to use the gcc compilers to target win32 executables that use the windows native (non-posix) runtime - mingw. Do you know that you can use gcc to build windows executables and dlls from Linux, Mac, or WSL? That's mingw and it's not going away.
You'd better believe that if Windows had an acceptable terminal 10 years ago I would have never purchased my first MacBook.
The pleasant interaction with other *nix systems was and is a primary driver for my choice of OS X.
PuTTy sucks. Every other terminal I have tried on Windows sucks. Getting terminal-based software to work on Windows sucks. I stopped trying years ago because a terminal on OS X doesn't suck. It might not be a perfect 10, but I'm happy with it.
I'm also getting more and more interested in Windows as time passes.
Losing as the most popular OS has made Microsoft start doing some of the right things.
Can't agree more. This is also what made me buy my first Mac: having a natively supported terminal (that's blazingly fast), an ssh client (and server too, whenever I need it), a shell that's not some kind of ugly hack (geez, Cygwin nightmares), and BSD userland tools at my fingertips is something no other computer/OS maker delivers to this day. So despite being displeased with some recent design choices (I love my 'old school' Mac keyboard, I love my MagSafe adapter, I love my 2015 MBPR ports and form factor, thank you), I keep buying Macs.
I'm a Linux guy myself, but whenever I see colleagues use a terminal on Mac OS or Windows, the amount of little annoyances on Windows really sticks out. It's nothing major (hard to configure colour schemes, awkward copy/paste operations, PuTTY), but it compounds to an unpleasant experience.
Also the lack of fullscreen support (pre-Windows 10), no history... it seems the command line was just a joke to MS for the longest time.
I think it's the explosion of open source and the corresponding command-line tools that made them start investing time and effort in the command line.
Honestly, I think the biggest driver is the type of developers you find now. With open source being such a large part of almost every CS student's and budding developer's world, the normal work environment is becoming very Linux-heavy.
A natural result is that the devs working at Microsoft ALSO wish they could use the command line for everything in Windows. Thus PowerShell, and things like the AZ CLI, becoming much better.
This. 10 times this. I moved to MacOS 10.3 in 2004 because the things I did professionally were almost all in Linux (which I had been running in the data center literally since 1994), and MacOS felt and acted like Linux with a nice GUI. I lived most of my life in Terminal and rarely noticed the difference between when I was ssh'd in to a server or working locally. Awesome stuff.
I doubt Windows 10's inclusion of ssh alone addresses this use case; for that you'd want a full cygwin or something. I don't know anymore - I don't use Windows for any purpose other than games on dedicated gaming computers these days. Despite Apple going backwards with each successive macOS release (probably since 10.8) it's still a much more usable OS for the majority of the things I do than Windows 10. I do like Windows 10 better than Vista and 7, though, on my current gaming computer.
Never been a problem for me. What are your complaints?
I barely notice the difference, but if I had to choose I'd take putty over Terminal.app, which has default keyboard-shortcuts that clash with Bash. (I believe it was Alt-b, Alt-d, or maybe Alt-f. I forget exactly.)
It's one of those things that's hard to explain. If you're used to a proper terminal emulator, you'll know immediately. PuTTy has terrible configuration and awkward default settings, and gives the impression of being pretty flimsy. Remove the network cable while in an SSH session, and PuTTy will immediately disconnect you. Try doing a port forward, and you'll need to dig around instead of just typing it into a terminal. Also aesthetically, PuTTy has nothing on Terminal.app or iTerm2 etc.
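For contrast, here's roughly what "just typing it into a terminal" looks like with OpenSSH (host names are made up for illustration):

```
# One-off: forward local port 8080 to port 80 on intra.example.com,
# tunneled through gateway.example.com (-N skips starting a remote shell):
#   ssh -N -L 8080:intra.example.com:80 user@gateway.example.com

# Or as a reusable ~/.ssh/config entry:
Host work-tunnel
    HostName gateway.example.com
    User alice
    LocalForward 8080 intra.example.com:80
```

After which `ssh -N work-tunnel` brings the tunnel up; no dialog boxes involved.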
This is so true. Another attempt at explaining it: cmd.exe and PuTTY have always felt like foreign bodies in the Windows world. They don't integrate; they're a rough attempt to fill the gap between the "old" command-line world and the "new" Windows world (in Win95 terms). In contrast, the OS X terminal feels like a first-class citizen in its environment.
It is worth emphasizing what has been said before here: since Windows' popularity is declining, they are starting to catch up with that attitude. Nowadays, for instance, you can open a terminal from every folder in explorer.exe -- exactly the kind of integration Windows missed for 20 years. Once they kick out cmd.exe in favour of something like Console2 (https://www.hanselman.com/blog/Console2ABetterWindowsCommand...) and update the toolchain even more than they did with PowerShell (actually integrating GNU/bash was a major step), developers will come back.
Have you used Windows recently? The Console team has made significant improvements in the last year, with more on the way. This article has a good rundown of some.
> Remove the network cable while in an SSH session, and PuTTy will immediately disconnect you.
But it's not a problem with PuTTY, it's a problem with the Windows networking subsystem. Once you remove the cable, the whole interface immediately goes down, the IP address gets deconfigured, and all open sockets using the address get closed. (I was bitten in the ass by this when I had several Windows Server machines under my care.)
You'll get exactly the same with any other long running TCP connection.
But it still stands that using PuTTY sucks. I'll gladly give you that many of the reasons it gives a bad experience are because of various Windows characteristics.
*nix user going on 25 years here. I use WinSCP and PuTTY all the time and they work just fine.
> Remove the network cable while in an SSH session...
This argument seems a bit contrived. Is this honestly something you're concerned with on a daily basis? I doubt it.
> Try doing a port forward, and you'll need to dig around instead of just typing it into a terminal.
No, you don't just type "it" into a terminal. First, you have to look up the command syntax if you haven't used it in a while. Then you have to type it in correctly, and if your syntax is off by even one character, things don't work.
> Also aesthetically, PuTTy has nothing on Terminal.app or iTerm2 etc.
Aesthetics are nice but do you even have a tool like WinSCP? It graphically displays the remote filesystem over your SSH connection and then lets you open a terminal to the path you were looking at. I don't even have to type my password when I open a new terminal from WinSCP.
Well, of course, and tmux is another thing that makes PuTTY janky. I'd rather not dumb down my tmux layout to something that PuTTY's terminal emulator can understand, when Git Bash running its bundled ssh handles it just fine.
If it didn't come with so much unrelated crap that could be installed independently, I'd largely agree. But it takes this "...and the kitchen sink..." approach that detracts from it quite a lot. Want to install PostgreSQL running in Cygwin? Look no further than your terminal software to do the work for you... blech.
I say all of this having been a paying MobaXterm customer in the past.
These days I use ConEmu, WSL, and Bitvise (for those rare occasions I want to easily tunnel to the Windows side).
Text rendering is bad (and generally unpleasant to look at), it forces UI interactions, it's not a terminal so I can't do anything locally, there are no tabs, and IIRC text doesn't reflow on window resize.
It's generally difficult and awkward to work with and unpleasant to look at.
To add to the other problems that people have mentioned, text selection is a little strange and putty doesn't pass along keys like "End" for some reason, which is useful in tools like less.
If you use git on Windows with PuTTY/plink it's also significantly slower than OpenSSH. I saw clones of a large repository go from 60 KB/s to 600 KB/s after switching from PuTTY to OpenSSH.
With PuTTY you have to open a separate application and configure the connection in a completely different manner. Compare that to Linux/OS X, which enable a "work in a terminal" model where "ssh", "scp", etc. are commands usable from within an existing terminal. PuTTY feels like a completely separate tool rather than something integrated with the terminal. I think this is similar to the argument users of text-based editors (vim, emacs) make against GUI-based editors.
As a counterpoint, I LOVE the whole ~/.ssh directory. It keeps everything in one place, is text-editable, is cross-platform between Mac and Linux, and easily transferred to new machines, or restored to existing ones after a fresh reinstall.
With PuTTY, if I want to change, say, the size of the terminal opened on connections, I'd have to update every PuTTY session individually. Been there; done that. Ridiculous. That should be a function of the terminal you run SSH under, not of the SSH session itself.
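For comparison, a sketch of how that kind of shared setting works in OpenSSH: one text file, with per-host blocks layered over wildcard defaults (host names are illustrative):

```
# ~/.ssh/config -- "Host *" settings apply everywhere; Host blocks add per-host detail
Host *
    ServerAliveInterval 60

Host web
    HostName web.example.com
    User alice
    IdentityFile ~/.ssh/id_ed25519
```

Change the default once, and every session picks it up.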
The config all lives in HKEY_CURRENT_USER\SOFTWARE\SimonTatham\PuTTY. You can export it all via regedit or:

  reg export HKEY_CURRENT_USER\SOFTWARE\SimonTatham\PuTTY PuttyConnections.reg
It is definitely a lot messier than working with ssh_config.
There are versions of PuTTY that store the config in files, usually with names like Porta-PuTTY. I use the hacked up Xming Portable Putty because I do a lot of X forwarding and the base PuTTY used to have some stability issues with it. They're probably fixed now, but I still use the hacked up PuTTY because my configs are in the text files instead of the Registry and regular PuTTY doesn't know about them.
It's all in the registry. I forget where, but search for the hostname or ip of one of your configs in regedit (CAVEAT EMPTOR!!!!) and export the whole folder with all the configs in. You'll get a .reg file (or whatever it is) that you can double click on any other windows machine and your settings will get ported over. Paths to key files are preserved verbatim.
Again CAVEAT EMPTOR (as with anything to do with regedit).
All of the sessions are stored in the registry, which you can dump to a text file easily enough. A couple of minutes with awk or something and you'd have your ssh_config file.
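That awk pass really is only a couple of minutes of work. A toy version, run against a trimmed-down sample export (real .reg exports have a header, UTF-16 encoding, and dozens more keys per session, so treat this as a sketch):

```shell
# Sample of what an exported PuTTY sessions .reg roughly looks like, trimmed:
cat > /tmp/putty.reg <<'EOF'
[HKEY_CURRENT_USER\SOFTWARE\SimonTatham\PuTTY\Sessions\webserver]
"HostName"="web.example.com"
"UserName"="alice"
EOF

# Pull HostName/UserName out of each session block and emit ssh_config stanzas.
awk -F'"' '
  /\\Sessions\\/ { split($0, a, /Sessions./); sub(/\]/, "", a[2]); print "Host " a[2] }
  /^"HostName"/  { print "    HostName " $4 }
  /^"UserName"/  { print "    User " $4 }
' /tmp/putty.reg > /tmp/sshconfig.out

cat /tmp/sshconfig.out
```

Output here is a ready-to-paste `Host webserver` block; keys like Protocol, PortNumber, etc. would need extra rules.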
I quite like PuTTY as well. I have only had problems when using gdb or jdb: it doesn't translate arrow keys properly, so history and editing the current command don't work, and some apostrophes or ticks or whatever get translated to accented characters. I always blamed the shitshow of Unix tty for that. Is there an ssh client/terminal that doesn't have these problems?
PuTTY on Windows does all of those things perfectly reasonably for me (I just checked) - it might be as easy as making sure your PuTTY environment defaults to UTF-8 instead of ISO-8859-1 or various codepages.
For one, it's plainly impossible to use an existing OpenSSH private key with PuTTY. You can convert PuTTY keys to the OpenSSH format with PuTTYgen, but not the other way around. This antifeature alone cost me two hours recently.
Been a while since I've used a Mac, but I seem to remember that at one time, Terminal.app lacked tabs, but they were added in a later version of the OS.
The big problem is the filesystem is still vastly different.
The web, Unix, Linux, OS X: pretty much everything other than Microsoft uses proper, namespaced, forward-slash-separated paths, with no drive letters and with case sensitivity.
That you can't fix, only abstract away. I am tired of mapping between the two.
PowerShell doesn't, but not everything is PowerShell. Win32 underneath pokes its head out occasionally. And I don't want to use PowerShell; it's horrid in so many ways.
I'm a Unix user since about '93 for ref as well. I am poisoned with forward slashes everywhere :)
MS-DOS (and Windows) have supported forward slashes as path separators since PC-DOS 2.0, despite defaulting to backslash. Anywhere where the forward slash isn't supported is a bug either in Windows or (more likely) in the app that won't accept them.
Yeah, that's a shortcoming of cmd.exe. At least PowerShell will switch slashes to whichever is the default for the platform, but I doubt that supporting forward-slash tab completion in cmd.exe would be able to crawl out of the -100pt hole.
That makes sense: I don't do anything on the Windows command line that isn't PowerShell. Give it a chance: install PSCX, openssh, and conemu, configure your emacs keybindings, steal my profile (https://github.com/mikemaccana/powershell-profile) and learn, at minimum, 'where', 'get-member' and 'select' before you make up your mind about anything.
> Losing as the most popular OS has made Microsoft start doing some of the right things.
While Microsoft has started doing good things, which I can agree on, when did they lose OS market share to the point of not being number one? If we're talking about Apple hardware outselling any one vendor, that's true, but Windows still wins on total volume. Also, one of the first things I did after getting my MacBook Pro was dual-boot Windows (same as others I know who don't want to use VMs/Parallels - I like keeping Windows and OSX separate, context-wise).
Android is first by a large margin. iOS almost outnumbers Windows as well.
Microsoft post iPhone needs to prove that it's worth using a desktop at all, not that they deserve to have the highest share of desktop users. Stopping the bleeding inflicted by Macs, which sync so well with iOS, is part of that.
Desktops and laptops have lost much ground to phones and tablets. There was a time where comparing a desktop to a phone was silly, that time has passed.
The number of people using Windows to do their "computer things" has gone down dramatically. Even if they have a Windows computer, much of the time spent doing computer things has moved to Android and iOS.
Because the Linux desktop sucks. (that's the word of the day)
If I had a Linux laptop, an extraordinary amount of my time would be spent trying to make it work. Wireless breaks, sound breaks, upgrades break everything. I would have to spend a serious amount of time and research finding a laptop that had good Linux support... regardless I would still probably have to spend hours trying to get the sound or the wireless or sleep or some feature or another to work properly.
OS X just gets out of the way. I have never had to put any work into making the graphics card work or making sound work or making the network work or fixing boot... you get the picture.
If I'm using Linux on a laptop to do any sort of work, a sizable portion of the work becomes keeping Linux working on the laptop and I don't want that.
...meh, just limit yourself to the top-of-the-line models like ThinkPads, XPSs, etc., make sure any networking components are Intel-branded, pick Intel CPUs' integrated graphics or not-low-end Nvidia, and everything will work fine out of the box with an LTS Ubuntu or the latest Fedora (basically, pick whatever's closest to what you use on your servers).
But yeah, if you buy anything that's not part of the "top-of-the-line developer notebook" category (specifically the Intel ones, not AMD), then almost nothing will work out of the box :)
I'm pretty sure that I've spent less time maintaining my laptop than my colleagues who are on OS X. That obviously isn't the case for every laptop, but I don't think that your statements generalize well either.
"Sucks" is subjective; you're allowed to think that.
In years past you may have spent hours getting wireless or sound working. These days, if you buy decent, mainstream hardware it just works. Ubuntu does a great job of getting out of the way.
I'm a Fedora guy myself, but I find myself recommending Xubuntu to more and more people for casual computing because 1) it really does just work, and 2) Xfce is lightweight and lets you use your CPU for doing real work vs. holding up a bloated window manager.
Linux on unsupported laptops (which is most of them) sucks big time, but not the Linux desktop itself. It works great on standard desktop computers (towers).
You should make that distinction, because otherwise you'll just start a flame war.
I tend to disagree. Yes, it works, but I find it unpleasant to use. None of the available desktops is pleasant to use, and fonts and symbols in particular often look much worse than on Windows or macOS. As much as I like Linux on servers, I don't see any reason to use it with a graphical interface.
I've recently gotten a new laptop, and everything is supported out of the box in Debian. It has a 2160x1440 screen, and the hidpi support with gnome on Wayland is outstanding. I would say that the fonts and symbols look on par with the new MacBook pro (I was using my new computer side by side with a new model MacBook pro).
The Linux desktop has come a long way, even in just the last year or two. I know everyone always says that, so take it with a grain of salt, but I have never been happier with the state of the Linux desktop.
Interesting. Personally, I find Windows quite unpleasant to use; even basic window management is a hassle (just one example: you have to target very small hit boxes for simple things like resizing and moving windows around, and the default keyboard shortcuts are not configurable). As for fonts, many if not most of the custom fonts you see more and more on websites look quite odd, and sometimes simply bad, with Windows' font hinting. (I do agree the built-in fonts work nicely, but so do fonts on certain Linux systems, and have done for at least a decade, since Canonical merged certain font-rendering patches into Ubuntu's version of FreeType.)
Might it be that we tend to like what we're used to? When I jumped from WinXP to a GNOME 2 desktop, I tried to make it a bit more like WinXP, even though these days I would probably want nothing of how the WinXP desktop is laid out.
Almost every time I hear this complaint, it was years ago and with one distro. There is so much more than Ubuntu out there, people (Ubuntu sucks, IMHO).
One thing I've learned is distrohopping is a must if your hardware is an edge case... that said, since moving to Manjaro, I haven't had to use anything else.
~10 months ago. Looking at screenshots on their website I still don't like it. But maybe that's just because I grew up with Windows (although I do like how macOS looks). Think it's very much a taste thing. Most of what I do is browser based (incl. IDE) anyway so the OS doesn't really matter anymore.
Just to be clear, if a Macbook works better for you or if you just prefer macOS, by all means, go ahead. Macbooks, from what I hear, must be superb machines, and while I prefer Linux with a Mate or XFCE desktop, I have used a Mac for a while and there are things I do miss. And a lot of software is available for Windows and Mac only; given the choice between Windows and macOS, I wouldn't have to think long.
I hear that argument frequently. On macOS (or even Windows) things just work(tm), while on Linux (or some BSD), there is always something that does not work correctly.
I guess I must be quite lucky. A long time ago, I positively enjoyed spending an entire weekend getting a sound card to work or something. To be honest, these days I consider myself too old for that stuff. I, too, like it when things just work. But really, Linux as a desktop system has come a long way, and for the past couple of years, things have pretty much just worked(tm) for me.
Picking hardware that is supported by Linux takes a little care, so does picking a distro[1],
especially with laptops. But I prefer to do a little research before buying a laptop anyway, because I am usually on a budget.
[1] In my experience, the more recent the hardware, the more unlikely Debian is to work. OpenSuse has worked well for me, though.
I put Ubuntu (MATE) on an old MacBook of mine because it was absolutely crawling with macOS on it. The only driver it needed was a WiFi driver. Everything else worked fine out of the box.
Also, I run an Intel Nuc (laptop hardware, essentially) with Ubuntu as a main development machine. 0 problems. 0% in the way.
OSX is great, don't get me wrong. But it's been years since I've run into driver problems with desktop Linux.
> If I had a Linux laptop, an extraordinary amount of my time would be spent trying to make it work. Wireless breaks, sound breaks, upgrades break everything.
At our office, the macOS laptops tend to have the most problems with sound & graphics; the Linux desktops, OTOH, Just Work™. I find that pretty funny, actually, because I would have expected it to be the other way around.
The most WiFi problems I ever had were with a Mac machine (rMBP13). In the office, usually during RDP sessions, it often lost the connection and couldn't reconnect until reboot.
I've never had anything similar happen with a PC (we use Thinkpads), whether running Linux or Windows.
Linux Mint worked perfectly on my T420, and I expect it to work perfectly on the T440 I'm picking up today to replace it.
Sure, not the very newest hardware, but it's not like laptops have gotten significantly more powerful for anything that actually matters, for the last couple of years.
Oh, they did get more powerful, just in departments you don't pay attention to.
The biggie is the GPU. The T420 has no oomph to run an external 4K display (at more than 24 Hz, i.e. in a usable mode). Broadwell and newer can drive two of them.
The CPUs got less power-hungry. You can do the same work with less juice, so your battery lasts longer.
The SSDs with the new interfaces got much faster. There is simply no comparison between M.2 nvme drive and SATA3 SSD.
Sadly, WiFi took a step back, with almost universal unavailability of anything better than 2x2 MIMO 802.11ac. In the past, 3x3 MIMO used to be available (from Broadcom, but the option was there).
So yes, a modern laptop is a significantly different experience than a few-years-old one, despite the CPU having the same GHz.
My point is that they're not noticeably faster for the things most people use their computers for. They still run browsers, Spotify, email, word processors and all of that just fine and last "long enough" on a charge.
The improvements have been incredibly marginal for most people.
The difference is noticeable; modern laptops are subjectively perceived by users as faster, even if they have slower CPUs.
Also, they do not have a problem with Electron apps. For most people, these are just apps like anything else, and they do not perceive any performance problems with them.
The difference between an NVMe SSD and a SATA SSD is like the difference between a classic HDD and a SATA SSD. It has to be; NVMe can push ~4 GB/s, SATA3 only ~600 MB/s.
Even the Apple 2015 M.2 drives, which are "only" AHCI but with a much deeper queue (i.e. nonstandard), can do 1.something GB/s, and the difference compared to a standard SATA SSD is noticeable by normal users.
Electron apps are something normal people use, and some of them complain about the speed/lag/memory usage - usually those with older computers, who think nothing has changed in the last few years ;).
But for most people, that speed difference is purely academic, and doesn't actually benefit them. Sure, if you're compiling stuff or otherwise moving around a lot of data, it matters. But for most people, it's just some numbers on a spec sheet.
Of course it is, but even with all advances in terms of UI in Linux distros, many people still prefer the MacOS/Windows GUI and ecosystems, or just like Apple hardware more.
Macs historically have always been a decent compromise for having decent GUI/Linux-like terminal/nice hardware.
It's kind of a different environment now, with PC manufacturers coming up with well designed laptops and Windows offering more Linux integration.
If you want to get some work done, and not just fiddle around with getting wifi, sleep, graphics working, then installing Linux is not an option.
Linux could be the new Mac OS X if all the Linux distributions chose to focus on one platform and one way of doing things, but nobody in the Linux world is going to do that.
Also, Mac has unified hardware, so they pretty much can optimize so that stuff Just Works.
I haven't had any trouble with the Linux desktop on mainstream laptops for the past 15 or so years. The days of hand-tweaking your XFree86 configuration files and refresh rates are gone.
Of course there are things that do not work so nicely, and are not meant to. For instance, running RHEL (which is a server OS) on a laptop (which typically has new consumer hardware, not server hardware) and then expecting graphics and WiFi to work on a kernel that is much older than your hardware and thus has no drivers for it.
Perhaps you might expect some trouble when running Windows Server 2012 on a new laptop, too? I don't know, I haven't tried, but I wouldn't be surprised.
Thanks for the tip. I might consider that in the future. Probably not, though, as macOS is such a better OS. But the hacker in me still kind of likes the idea of Linux; I used to run it as my main OS for a long time before Mac OS X.
That resolves that issue, but has its own downsides.
There’s a lot of things I like about macOS [eg: better power management], but the main reason I switched was because it was a Unix I could put Photoshop and InDesign onto.
Well, if you need certain applications that are not available on Linux, all the hardware support in the world does not help you.
That argument trumps pretty much everything else. If my boss came into my office today and told me to switch all of our desktop computers to Linux, as much as I would enjoy that, I would have to tell him that it cannot be done, because we use a lot of software that just is not available for Linux. (And a fair amount of it is not even available for macOS.)
Personally, the most annoying thing is the little UI inconsistencies, like clipboard handling. On a Mac it is always Command-C/Command-V; in X it is sometimes Ctrl+C/Ctrl+V, sometimes Ctrl+Ins/Shift+Ins, sometimes mouse-select/mouse-middle-click, and so on. Many applications have incompatible clipboards.
The reason for the confusion is that there are two separate clipboards: one for Ctrl-C/Ctrl-V and one for mouse-select/mouse-middle-click. To use the first clipboard on terminal emulators you need to use Ctrl-Shift-C/Ctrl-Shift-V instead because the Ctrl-* shortcuts have legacy meanings in the terminal. I don't remember ever needing to use Insert for copy pasting (although you can use it if you want).
I don't know what examples you had in mind when you mentioned applications having incompatible clipboards. The only one I know that does that is Vim (which by default uses its internal clipboard when you use its cut, copy and paste commands), but Vim is definitely not your average Linux application...
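To make the two-buffer thing concrete: under X11 these are the PRIMARY and CLIPBOARD selections, and you can poke at both with a tool like xclip (illustrative only; this assumes a running X session with xclip installed):

```shell
# PRIMARY is what mouse-selection fills and middle-click pastes:
printf 'hello' | xclip -selection primary
xclip -o -selection primary

# CLIPBOARD is what Ctrl-C/Ctrl-V (Ctrl-Shift-C/V in terminals) use:
printf 'world' | xclip -selection clipboard
xclip -o -selection clipboard
```

Applications that seem to have "incompatible clipboards" are usually just writing to one selection while you're reading from the other.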
Check out MobaXterm (https://mobaxterm.mobatek.net/); it's the best terminal for Windows I've ever tried. It has a really handy list of remote connections, an X server, an SFTP browser for connected servers, Windows Subsystem for Linux support, and a lot of other useful features.
(I am not connected in any way to the developers)
I try it every few years, but always seem to end up going back to plain Terminal.app. The main issue is speed and memory usage: iTerm2 seems to be much slower and laggier, especially on a busy laptop (running VMs).
Because *nix kernels provide terminal devices, making a new terminal program is relatively easy on OS X compared to Windows where a lot of the free things have to be written in userspace.
I've recently discovered cmder and it actually makes Windoze bearable to use. Of course I would never give up Linux if it were my own choice, but work is work. So they've got an SSH client at last, but still no decent terminal, I suppose.
I think the question is not whether someone would buy a MacBook because it has these tools built in but rather, why would someone buy Windoze given that it doesn't have these tools built in?
I really like the Windows 10 openSUSE terminal. Just install it from the Windows Store. Or you could install the Ubuntu terminal. Either way, they have replaced cmder for me.
One thing you need to be careful with regarding WSL is that it still doesn't correctly handle umask[0].
There are some workarounds if you use particular terminals (for instance, you can't use MobaXterm, as it forces 777 regardless), but it's not exactly "it just works" yet.
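For reference, the commonly suggested workaround (assuming a WSL build recent enough to support /etc/wsl.conf and DrvFs metadata) is to enable metadata on the Windows-drive mounts and set a sane umask there:

```
# /etc/wsl.conf -- restart the WSL instance for this to take effect
[automount]
options = "metadata,umask=22,fmask=11"
```

Adding a plain `umask 022` to your ~/.profile covers files created inside the Linux filesystem as well.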
If you think PuTTY is even half the environment bash+ssh is, you probably won't understand what the people buying Macbooks, or using Linux directly on Whatever{books,tops,pads}™ are getting from their choice.
Not trying to be condescending, but it seems to need highlighting that just having SSH access isn't enough.
Not to mention that these other environments have offered other benefits [especially] to developers looking for a programming environment that works for them. The VS.Net IDE-play-button approach to automating testing and deployment doesn't scale for everybody.
Technically the "you" in your statement is not the same person/people as "the people buying Macbooks or using Linux directly". But I see what you mean. Hopefully you also see what I mean - that nobody in their right minds would think that PuTTY is trying to be bash+ssh. PuTTY is the ssh part of that combination, and there are plenty of options to cover for the bash part too.
PS: From the fact that you refer to Visual Studio as "VS.Net", I take it that you haven't been in Microsoftland for a while. No serious developer uses these "play-buttons" to do their automated testing and/or deployment.
> I take it that you haven't been in Microsoftland for a while
It's been about a decade since I moved to Linux. Just skimming the latest VS docs, though, the structure of unit testing within VS is pretty much identical to how it was. The feedback is a little more inline, and I guess the extensions are a fairly big deal... but that sort of lends to my point that the IDE is still pushing itself down your throat for an IDE-centric build environment.
The thing I've liked about collaboration since moving away from VS is our build and testing toolchains sit completely separate from development settings and IDE projects. It's very simple to switch things out and script in new workflows. It also means we don't need to install things like VS to test on a new (eg rebuilt production test) machine.
I'm sure the same is possible in Microsoftland, I've just never —even recently, I still interact with .Net developers and their work product— seen people making use of it, falling back on what Mother thinks best. Maybe they're not serious enough developers.
On reflection, my last reply was a little short-sighted. Of course the untethering and later freeing of MSBuild has certainly helped Microsoft-tethered developers.
The historical bias towards the IDE and component kits still exists (and probably will forever), but some of those things are reasons I found it so easy to pick up and run with VB.NET and C#. I won't pretend it didn't take me a while to work out what I was supposed to do when I left it all behind.
That all said, they are getting closer to the point (they may be there already) where you can go from PowerShell to remote PowerShell over SSH. But that still assumes you're lunatic enough to want to host anything on Windows. I think I've had too much freedom for too long now to ever consider that a good idea.
> I really can't believe somebody ever bought a MacBook because they found installing putty too much of hassle.
No, but something like installing putty was the last straw.
Putty is a terrible experience; very difficult to use. It's like every misstep or disconnection involves a dozen or so clicks back into the configuration area to "try again". This feels like normal on Windows though.
I recently tried to port my workflow to Windows for a year, but I eventually gave up, simply because of things like this. I might've made it 6-7 months; I can't recall exactly where it fell apart - death from a thousand cuts, maybe.
Hats off to you; I didn't even make it that many weeks.
Between filenames/paths being too long, poor terminals with janky colour schemes (save for Mobaxterm, it's great), WSL issues (like umask handling), gvim/vim issues with plugins, and loads of other annoying bits I just went back to Linux.
Linux is FAR from perfect, especially as I have an Nvidia graphics card, but there's a lot to like there too. Having run Fedora 27/Ubuntu 17.10 I've found I really really like Gnome. I seem to be in a club of one there (and it's buggy as hell on 17.10) but for me it's been great to use.
Not that I wouldn't use Mac if I found a couple grand down the back of the sofa, but as a daily driver Linux has been less painful. Even on the laptop, though I do have one of those Dell certified ones.
Normally you have a fully-fledged programming language that happens to run commands, one of which is the SSH client - and a universal one at that: you can pipe data through it, pipe data out of it, run remote commands non-interactively, jump through SSH on other hosts, set up TCP tunnels in an ad-hoc manner without going through any configuration, and obviously run an interactive shell. Plus you get scp for file transfer out of the box, and lftp (which understands the sftp protocol) and rsync after installing some barely related software, all of them still able to jump through other hosts.
Now compare that to the glorious ability to open a window with a remote shell (and only a direct one, no jumping through a bastion host), and maybe a TCP tunnel, with some clicking through every single time (you can't recall a history of commands, because there was no command in the first place; the best you can do is save the tunnel configuration permanently). And maybe file transfer, if you remembered to download PuTTY's scp client.
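For anyone keeping score, everything in the first paragraph is a one-liner with OpenSSH; the host names below are made up for illustration:

```shell
# Pipe data through it: stream a remote directory into a local tarball
ssh backup@host.example.com 'tar cz /var/www' > www.tgz

# Run a remote command non-interactively
ssh admin@host.example.com uptime

# Jump through a bastion host (ProxyJump, OpenSSH 7.3+)
ssh -J bastion.example.com inner-host

# Ad-hoc TCP tunnel: local port 8080 -> port 80 on the remote side
ssh -L 8080:localhost:80 user@host.example.com
```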
You can do all that stuff using Cygwin at least as far back as 2005, because I was running Unix dump and rsync through ssh from a Windows backup server back then. Cygwin eventually adapted PuTTY as its terminal emulator too, in the form of MinTTY.
You can do SSH proxying with PuTTY, too. It's just called something different.
Of course it's easy to straw-man this particular sentence. However many people - including me, once - bought a MacBook because it was the easiest way to get proper Unix tools like SSH at your fingertips.
Using PuTTY or MinGW tools on Windows sucks compared to using them on a Unix-based system. I gotta say the Windows Subsystem for Linux really helps in that regard - as does native SSH in cmd and PowerShell (PowerShell by itself is also pretty awesome once you get the hang of it). Linux didn't use to be such a great competitor for the rest of desktop usage (some will argue it still isn't).
It's not only about this; you needed to install more and more things every year on Windows for it to work. You needed to install Cygwin, a proper terminal (cmd.exe isn't exactly usable), Python libraries, PuTTY, now Docker... Not to mention that since developers are only using Linux and Mac, you will have libraries which do not work on Windows (because nobody even tried).
Depends; for the occasional dip into the CLI world to run a script, it's quite OK.
Granted, the defaults regarding mouse copy-paste and window size were only improved on Windows 10.
> Not to mention that since developers are only using Linux and Mac, you will have libraries which does not work on windows (because nobody even tried).
Not all developers are pure UNIX devs doing POSIXy stuff.
I happen to like PowerShell's idea, as it is the closest thing to the REPL experience of the Xerox PARC-inspired systems that we get to have on mainstream environments, by using objects and having access to the full set of OS APIs at the scripting prompt.
However I do think it could have been made less verbose and dislike the ps1 extension, as the 1 (one) does not make sense.
Verbosity is something I tend to reserve for scripts (which should not rely on certain aliases existing). In the shell, the code I write approaches golfing idioms (perhaps inspired by the fact that I learned PowerShell by golfing in it). I think they've done a good job of keeping the verbose things readable and understandable, while also letting you shorten parameters as long as they remain unambiguous (and offering short aliases for the cmdlets that are used all the time).
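As an illustration of the spectrum (a hypothetical pipeline; both lines do the same thing):

```powershell
# Script-friendly, everything spelled out:
Get-ChildItem | Where-Object { $_.Length -gt 1MB } | Select-Object Name, Length

# The same pipeline golfed with built-in aliases (gci, ?, select)
# and the simplified Where-Object syntax from PowerShell 3.0+:
gci | ? Length -gt 1MB | select Name, Length
```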
I heard they wanted to use the ps extension but that has long been used for PostScript, and though very few Windows users use PostScript files directly they still didn't want to step on those toes. Particularly, they were very cautious about using anything that looked like an existing file extension because they didn't want it to be an easy vector for malware to confuse users. ps1 doesn't step on any toes, at least, and looks weird enough to an average computer user that they might hesitate to double click one.
Also, it's not like you don't need to install iTerm, zsh, Docker, Python libraries, etc. on a Mac. It doesn't come with them by default. Your argument is kind of moot.
Given how critical a good terminal experience is to modern IT, the fact that PuTTY is the go-to tool for this on Windows is a bad joke. Not to put down PuTTY.
From the point of view of resourcing, I mean. Here we have Microsoft sitting on a gazillion dollars of capital, and one of the most critical tools for several people is developed by a single talented hobbyist who gives it away for free.
Not to disagree with you on anything, but the same gazillion-dollar company is now doubling down on a JavaScript code editor, because writing high-quality native code is so difficult and expensive.
I have a Win10 XPS 13 after 15 years of OSX. And PuTTY is bad. Not the SSH itself, but the tooling around it. And it's not only SSH, but the missing shell. I currently use Ubuntu on Win10 just to get a proper shell with ssh.
And of course my next computer will be a MacBook again.
I can. PuTTY sucks. For decent key management you need cmder + keypass, and I doubt people want to waste the time I did experimenting until they arrive at such a conclusion.
> I really can't believe somebody ever bought a MacBook because they found installing putty too much of hassle.
I'm one of these somebodies.
It's not that installing Putty is a hassle, but my choice to buy a MacBook was very heavily driven by the fact that macOS is Unix based. My time spent between Windows 10 and macOS is pretty even, but it is a much more enjoyable experience to develop in macOS partly because of the Unix-based shell.
PuTTY isn't a great client; it's just that it worked. I personally haven't used PuTTY on a Windows machine for over 5 years. I use cmder and just use it like I was on Linux. http://cmder.net/
Well, not just ssh, but I did leave Windows for Fedora because Windows lacked a solid terminal emulator, and all the third-party ones did was act as a wrapper around cmd.exe, which solved some UX issues but not the core limitations of cmd.exe itself.
I bought a MacBook because the terminal on Windows, including PuTTY, is just.. bad. It's my main environment, I want it to be top notch. I'm shocked I've yet to see something that feels like iTerm2 on Windows.
Maybe not, but having used both PuTTY and ssh (Linux) for many years, I'm so happy I don't have to run PuTTY again. I stopped using it at the end of 2008 when I reformatted my work laptop to Ubuntu. Not only is PuTTY a worse terminal than gnome-terminal, but the key management was insane.
I swapped to mac for this very reason, actually. Now that windows has WSL, I've been straddling the fence on whether to swap back (tried it back in Dec., might give it another try soon).
I can't imagine (as the author asserts) that people left Windows for Apple because of the lack of a command-line ssh client. Here's a thought: if you need to spend time on Unix/Linux machines, why not just run that on the desktop? I've been Linux-only for 10 years and have no issues. OK, I do need Windows for the rare moments when I have to collaborate on Word or PowerPoint documents; Windows in a VM (KVM/QEMU) gets the job done.
Does this new client support ssh-copy-id and passwordless logins? I have a couple of public-facing machines that need ssh; I refuse to enable password authentication as they'll get banged on all day!
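For context, the setup I want to keep working is the standard key-only one (stock OpenSSH server options; I can't vouch for how the Windows port's current beta handles every one of them):

```
# sshd_config on the public-facing machines: keys only, no passwords
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```

Even without ssh-copy-id, appending a key by hand is just `cat ~/.ssh/id_rsa.pub | ssh user@host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'`, so a missing helper wouldn't be a deal-breaker.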
I work at a very corporate place with forced windows laptops. There are zero systems in the company we can sit down at and log into, except our laptops. The OS version is updated, and upgraded, automatically, even including major versions. I was just force upgraded to windows 10 and lost support for my programming environment. I had to recreate it in a new toolset because the one I have a thousand hours in is no longer supported. (It would work, it's just not supported - I cannot install it, but it would work fine if I could).
One arm of my company allows Macs. This one does not, period. We have a 0% non-Windows-10 user base. We can have temporary admin access for 12 hours if we fill out a report, but everything we do is recorded. It doesn't work if we're on WiFi or battery. We are not allowed to install browser extensions, even if we are developing against the web.
My last job let everyone have admin/root. I had everything I ever wanted. My workflow was glorious. I was so comfortable. I was able to work 3 to 4 times faster on average, i.e. my yearly output was probably 3 to 4 times more productive. I invented new things, scratched my itches, and felt like the king of the world.
Our laptops are all Windows based and I've moved all of my development to VMs that I have admin rights over. I don't think I'd ever go back at this point regardless of what my desktop OS was.
It really helps with new developer onboarding as well. You can just provide them with a handful of VMs instead of spending time configuring a new machine with all of the dev tools your team uses.
I work in the public sector and our sysadmins have actually made a game out of tricking people into updating to Windows 10 (and allowing them to take back admin rights in the process). Like offering Office 2016 upgrades, but only if you upgrade to Windows 10 too.
I understand it’s much easier for them to manage things this way, but they’re not going to have the results they want by going about it this way. When my Windows 10 “upgrade” comes, I’ll just be dedicating one of my monitors to my own Arch (maybe Qubes) box where I can actually get shit done. I’m a C# dev too, which makes even less sense, but requesting permission to install simple dev tools is not going to happen. Life is too short for this nonsense.
What's your opposition to Windows 10 if you're already running Windows?
We're running Windows 7 and I'm begging to get into the pilot for Windows 10. As time passes, more and more things break in Windows 7 and it becomes less useful. Most of Intel's drivers are garbage and their Bluetooth stack is next to useless.
I'm running VMs on top of my Windows 7 install for all development work. Anything that's Windows-based is either 10 or 2016.
I guess I should have noted that I do all my C# dev work in Windows 7 and run Arch VMs for everything else. My Windows 10 setup won’t be much different, but I just don’t trust Windows 10 and won’t be running my VMs on it.
I haven’t followed up on whether this “feature” made it into an actual Windows 10 update, but I remember reading about keylogging to the cloud as a way to pre-load your start menu with things that might be relevant to what you’re doing. Maybe it’s just being a developer and knowing what this kind of casual abstraction can cause, but I’m not okay with the philosophy that gets it into a test release of Windows 10. Microsoft is doing cool stuff these days but they still haven’t won me over.
I'm not aware of any keylogging to the cloud "feature". That sounds like some crazy conspiracy theory dreamed up by the people who hate Windows 10 and or Microsoft.
Windows 10 has the same frequently-used-apps feature as Windows 7, which you can disable. You can optionally allow Microsoft to gather data about on-screen keyboard usage to improve suggestions, like Google Gboard on Android. Cortana's searches are obviously cloud-based, but can be disabled. And Windows 10 offers suggested apps and features in like 3 different places in the OS, which can also be disabled. Maybe someone dreamed up a fantastic spyware feature based on all of those things.
You've told me three things that I can disable in Windows 10. Why is this stuff enabled by default in the first place? How do you know this is everything I need to disable to address these concerns? Or better, why isn't user consent requested before any serious "diagnostic tracking" like this? The answer, I think, is that it's too complicated for the average user. Once this "diagnostic tool" is effectively hidden from the user, and enabled on all devices, the tool either has to be monitored regularly (to make sure more features aren't auto-enabled like these were) or eliminated completely. I've spent too much of my life "monitoring" closed-source software to give much consideration to that option, at this point.
During installation/setup you have the option to disable a lot, unfortunately in an enterprise environment that isn't always something the user gets to see. Fortunately most of the crap is disabled or not present in the enterprise version of Windows 10.
It's on by default so that users will interact with it and try it out. This is pretty standard practice on every major OS or application you use today. New features are enabled by default and the user gets to figure out how to disable them if they don't like it.
Case in point, the latest update to Gmail on Android enables a feature of opening URLs in a Chrome Frame inside Gmail instead of using your browser. This is great for Google, not so much for the user. I got screwed over because of this feature because a nonce token I received was consumed by the Chrome Frame which promptly crashed.
Windows 10's suggestions and prompts are about on par with MacOS High Sierra's. If you're questioning that statement, try not setting up iCloud some time then come back to me.
It's a trust issue. I think it's a major leap to auto-enable new features without letting the users know what's going on, but people don't seem to have a problem with it these days as long as it doesn't raise any red flags in their mind or on social media. If we keep auto-enabling stuff like this, won't users stop asking questions? And is that really consent?
That's not even going into who is making these decisions, the corporations who only stand to profit from you enabling these features. They will roll it back if there is enough public outcry, but burying the option in the system settings is one way to avoid mass public outcry. Convenient, isn't it.
Sure another major corporation is doing this with their products, but that doesn't make it right. None of this is an acceptable reason to continue sneaking it into their products. Plus the data collected has a potential for even more profit, which is where I just peace out and use an OS I trust. Why in the world would I give Windows 10 the benefit of the doubt?
There's definitely value in a lot of the data collected and there's also mass confusion about what's being collected and what collection can be disabled or can't as the case may be.
I'm not trying to justify data collection and I think that certain kinds of telemetry data are perfectly acceptable to be collected. The reason I bring up comparisons to other OSes is that often opponents to Windows 10 mention switching to other OSes which aren't necessarily any better.
With regards to trust in privacy and security, I can't say that I trust Microsoft any less than others. As an enterprise software and services provider, they are in a position where their products must meet certain standards in order to be adopted. The fact that they still are implies there's at least a certain level of trust held in them, unless you're the type of person who feels all companies are in on it.
Speaking in terms of trust in long-term commitment and support, I would say I have greater trust in Microsoft than in just about anyone else. They have the best track record when it comes to not outright abandoning products. You can argue that open-source software will always be supportable, but that doesn't mean that it will be supported.
That's a good point about open source software not always being supported. I guess it's no coincidence that the large organizations where I've worked were all .NET shops, but all the Linux boxes ran Red Hat Enterprise. And the biggest complaint I've heard about Microsoft not supporting old products was a discussion about Windows Server 2003 (in 2016...).
I don't think that "all companies are in on it" necessarily, but there's a reason for that kind of loyalty to customers and it's not because it feels good. I'm not going to knock Microsoft too much because while I do feel like they're off the mark in some areas, they're improving their developer support a lot recently so I'm excited to follow what happens. But I don't think it's a coincidence that they're rapidly increasing developer support either (we sat through the app-less wonder of Windows Mobile for quite a while). They are also of course a publicly-traded company, at the mercy of profit-demanding shareholders.
I would like to see a world where the people who use the products have an equal opportunity to contribute and improve it. Not someone behind a wall squeezing money out of pockets, or throwing candy at developers so they'll make their platform more appealing for them. Or at least as appealing as the other guys', else they go under. I'm hanging onto an ideology, I know.
Like digitalsushi said, this .NET job will let me retire but it still sucks. It could suck a lot worse though.
> I would like to see a world where the people who use the products have an equal opportunity to contribute and improve it. Not someone behind a wall squeezing money out of pockets, or throwing candy at developers so they'll make their platform more appealing for them. Or at least as appealing as the other guys', else they go under. I'm hanging onto an ideology, I know.
I would too, and I also recognize that the community doesn't always push something in the right direction. Sometimes it takes a dictator to make things happen. Sometimes there are so many disparate projects that need to work in concert but can't or won't because of political reasons.
One of the advantages of a major corporation at the helm is that they can force a vision on everyone under them and on the industry as a whole. That strength, however, becomes a great weakness or detriment to the industry if the person driving the boat has ideas that aren't in the community's best interest.
About Qubes: I was just playing with it a couple of weeks ago. It's interesting, and I'd consider using it except for one thing. When you're running a browser, the cursor doesn't change when hovering over a link. I know it sounds nitpicky, but I kind of want/need that. I did some digging around and didn't get any solution. Are you OK with that, or did you find a workaround?
I used Qubes for a few years and don’t remember running into that, but I’ve used Vim keybinding extensions in the browser for a while and may not have noticed. It’s that way in all browsers for you?
I'm pretty sure it happened in both Firefox and Chrome. It seems to be a common complaint, but I couldn't find a fix (and didn't really look too hard).
Honest question: Why did you take the job? I've heard such horror stories from previous companies of current coworkers as well, but they didn't suffer as much as non-developers. I myself make it a point to ask in job interviews whether I will be able to install and administrate my OS of choice.
It was foisted on me, but I created a group in our approved software policy with just me in it, added Virtualbox to the group and managed to get the change request signed off! I use Ubuntu now for everything but MS Office and the dreaded ERP client.
Good lord. I would've gone the Hyper-V route; that comes with Windows 10 Pro, doesn't it? (Another Linux-only user here, so idk.) It should perform better than VirtualBox too.
There's another gotcha here too: if your deployment uses Windows 10 Credential Guard and/or Device Guard, the Hyper-V hypervisor is installed even though Windows reports the Hyper-V role as not installed, and you will not be able to run other virtualization solutions.
My guess is virtualization software is mutually exclusive. On Linux, I can't run VirtualBox and KVM at the same time: if I want to run one, I have to unload the kernel modules belonging to the other.
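To make that concrete, here's a small sketch of the check you'd do on Linux before switching hypervisors; it just reads the loaded-module list (module names assume an Intel CPU; AMD machines use kvm_amd):

```shell
#!/bin/sh
# KVM and VirtualBox can't both own the VT-x/AMD-V hardware at once.
# This helper reads a modules list (default: /proc/modules) and says
# which way to switch before starting the other hypervisor.
vt_owner_hint() {
  modules="${1:-/proc/modules}"
  if grep -q '^kvm' "$modules" 2>/dev/null; then
    echo "KVM owns the hardware; unload first: sudo modprobe -r kvm_intel kvm"
  else
    echo "KVM not loaded; VirtualBox (vboxdrv) is free to claim it"
  fi
}
vt_owner_hint
```

The `modprobe -r` line is the usual way to release the hardware; VirtualBox's own module gets unloaded with `sudo modprobe -r vboxdrv` in the other direction.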
Actually, it’s virtualization hardware that is exclusive.
Microsoft pulled an old trick out of their sleeve and implemented a feature, Device Guard, that requires virtualization. And of course it blocks other uses of the virtualization hardware, except, surprise surprise, their own virtualization solution.
D-, does not play well with others.
If your company requires Device Guard you can only use software based virtualization or Hyper-V.
Yes. Other virtualization solutions on Windows (VirtualBox) run in userland; Hyper-V actually boots up below your Windows system and makes your desktop OS a virtual machine, so other VMs run side by side with your desktop OS. The limitation is that going turtles all the way down isn't supported by CPUs. Muddying the waters further, some software has moved from VirtualBox to Hyper-V (Docker for Windows).
On init, the Hyper-V driver hijacks the running Windows instance, schedules it as the 'root partition', and runs it like just another VM.
Unlike 'child partitions', the root partition still has non-virtualized drivers, which run unimpeded by privileged instruction traps.
When a child partition (guest VM) is started, it's scheduled as a peer to the root. Its Virtual Processors (VPs) run alongside the root partition's VPs.
When the child partition traps into the hypervisor, or uses one of the enlightened drivers, the hypervisor dispatches a bus request to the root partition, which handles the IRQ using a Virtual Service Provider implemented by the host Windows kernel.
So the hypervisor is kind of a bridge between the host Windows and the child VMs, but the host Windows actually runs as a sort of privileged VM itself, just one with full hardware access and scheduling priority.
Hyper-V uses a type 1 hypervisor, which sits below the NT kernel. You might be thinking about the Windows Subsystem for Linux, which does sit on top of the NT kernel.
Disclosure: work at Microsoft on the Hyper-V team.
Hyper-V sucks on the Desktop. Too many features missing to allow good integration between Windows host and Linux. (Like Clipboard sharing, shared folders etc.)
In my experience it's tolerable if you know how to wrangle Cygwin/X and Samba and treat the Linux guest like a remote machine that just happens to be on a phenomenally fast network. But it definitely does suck by comparison to something like VMWare or Virtualbox with proper integration support.
You're absolutely right. Hyper-V has historically focused on server use cases and not on the desktop experience. This has led to all sorts of rough edges on Windows desktop. That said, this is starting to change [1], and I'm optimistic we'll make more progress in this space moving forward.
Disclosure: work for Microsoft on the Hyper-V team.
Still on Windows 7 at work, so that doesn't help. Ubuntu runs beautifully under VirtualBox, but this is not true of all Linux distros. I have had problems with Mint, for instance.
In fact Ubuntu is so good in Virtualbox that I even get away with running HN's hated Electron apps in the VM!
Obviously somebody in those corporations chose it.
Running a large fleet of non-Windows computers means writing your own software to manage them, just like Google had to do [0]. Not every business wants to do that.
Part of this though is because software was historically built for windows in a way that it isn't anymore. Even at tech companies, where the majority of employees are using macbooks, you'll often see people in the finance department using windows machines.
I imagine as the growth of SaaS continues we will see less of this, especially because it's much easier for the IT department to manage one type of computer, and Macs tend to have a higher build quality than most Windows-based machines.
I'm working now to move my Win 8 install to a VM, then I can format this damn machine and install a real OS. All I really need Windows for is Skype and MS Teams.
I've been using it since the latest version was made available, and early on it crashed from time to time, but I haven't noticed problems for more than a year.
When you import a UTF-8 encoded CSV file, you cannot scroll the preview, as of today's update. In previous versions, it would crash Excel.
You cannot connect to PostgreSQL unless you use a "blessed" ODBC driver. The one from EnterpriseDB, which works with all other Mac ODBC apps, will make Excel crash. So will your own build from odbc.postgresql.org, no matter which version you try.
There are exactly none of the Power* or Inquire features from the Windows Excel.
Outlook is far more than email. It's email, scheduling, calendars, meeting room bookings, etc., and it's all integrated. Not using Outlook in a corp environment is a quick route to madness.
I know lots of developers who use a Mac specifically because it is a Unix system with normal Unix tools that can also run lots of commercial desktop software.
> I can't imagine (as the author asserts) that people left Windows for Apple because of the lack of a command ssh client. Here's a thought: If you need to spend time on Unix/Linux machines, why not just run that on the desktop? I've been Linux only for 10 years and have no issues.
I find it really convenient to run the same OS on my laptop as on the servers I administrate.
Others probably feel the same, and I suspect that was the GP's point as well. :)
Also, almost nobody cares about Unix certification. Linux isn't certified, and except for really niche things, Linux is all that matters (if you aren't running Windows).
I think their point is that there's no point in trying to force Windows to behave like a Unix when you can just use a Mac or install/dualboot Linux. There are a lot of reasons (fascism-based IT department, can't afford a Mac, etc.), but it's a good point. Once I stopped trying to trick Windows into working like Linux and just freaking installed Linux, my workflow became SUPER smooth.
Usually people who work for corps or government can't choose the OS their organisation supports. You can count yourself lucky if your Windows is modern and unlocked enough that you can install something like Git Bash (and then of course you work on a Linux VM that is hosted somewhere in the office, or on AWS or similar).
I’m going to try this out, but if you’re using Windows 10, you can install WSL, which has all of the normal SSH commands, and definitely supports using keys for SSH.
But it is a problem. And also a difference in philosophy. NTFS gives the user and the process the guarantee that the filesystem will remain consistent. Also as a user, I have a guarantee that the file I have open, represents a file that actually exists on the file system, and not a deleted file.
This is not relevant to the on-disk filesystem, just how the OS handles files.
The philosophy is also flawed: open a file, then create a hard link to it. You now can't delete that hard link (because the file is open), even though you just created it. This is not a problem on POSIX because it correctly distinguishes a file name (represented by a directory entry) from a file (represented by the inode).
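The name-vs-inode distinction is easy to demonstrate on any Linux box: both directory entries can be removed while the file is open, and the data stays reachable through the open descriptor until it closes.

```shell
#!/bin/sh
# POSIX separates a file's names (directory entries) from the file
# itself (the inode). Deleting every name while the file is open is
# fine; the inode lives on until the last descriptor closes.
tmp=$(mktemp -d)
echo hello > "$tmp/a"
ln "$tmp/a" "$tmp/b"   # second name for the same inode
exec 3< "$tmp/b"       # keep the file open via fd 3
rm "$tmp/a" "$tmp/b"   # remove both names; no error
cat <&3                # prints: hello
exec 3<&-              # now the inode is actually freed
rm -rf "$tmp"
```

On NTFS with the default sharing mode, the `rm` step is where you'd get "file in use" instead.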
It's not clear to me what "flaw" you're referring to. NTFS is reference counted and can easily accommodate other behaviors. By default, it prevents deletion, unless a process specifically permits deletion, in which case the file can be deleted.
This makes a whole lot of sense to me as a user, because I don't want to worry about open files being possibly deleted from disk.
>open a file, then create a hard link to it. You now can't delete that hard link (because the file is open), even though you just created it. This is not a problem on POSIX because it correctly distinguishes a file name (represented by a directory entry) from a file (represented by the inode).
Nothing like not being able to delete a directory because Explorer is keeping the folder's thumbnail database open and in use even though you are no longer in the directory.
I don't know about ssh-copy-id but it definitely supports passwordless logins. You just need to set up ~/.ssh directories as you would on other systems, and ensure the correct file permissions are set. There's a Powershell script to help with that.
> If you need to spend time on Unix/Linux machines, why not just run that on the desktop? I've been Linux only for 10 years and have no issues.
Because at least for Linux, its desktop support is still severely lacking. Maybe it's the distribution I was using (Ubuntu), but all I wanted to do was hook my laptop up to my monitors and use the external monitors alongside my laptop monitor, like I do on Mac or Windows laptops.
I failed.
I remember hand-coding an XFree86Config file almost two decades ago to get three monitors set up.
So yeah, that's why I don't work with Linux on the desktop. Because it doesn't work for me on the desktop.
The only hardware I've seen having problems recently were Macbooks. I'd research the model before I would try to put Linux on one. The ones having two graphics processors are really finicky, as I can attest to about my work laptop.
That said, I'd probably check out a laptop in general if I were going to buy one to see if anyone has major problems. I was bitten about 5 years ago with one where the touchpad was too sensitive, no matter what I did.
linux hardware support is pretty good these days and will work on most machines that don't have particularly esoteric hardware configurations.
But if you want to save yourself potential trouble, it's a good idea to research beforehand to make sure there's no obvious compatibility issues.
Also, these days various vendors sell machines with linux pre-installed so that's an option.
Best general advice I can give is to stay far, far away from anything with switchable graphics, and if possible anything that requires proprietary drivers.
Just as another anecdote, I use my Linux laptop with an external monitor every single day. Even have StumpWM configured with keystrokes to switch between portrait and landscape modes (using arandr).
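For anyone curious, arandr is just a GUI that generates xrandr calls; the single-external-monitor case is typically one command (the output names here are made up; use whatever `xrandr` alone reports on your machine):

```sh
# place an external monitor to the right of the laptop panel
xrandr --output HDMI-1 --auto --right-of eDP-1
```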
I find the free version of Office 365 is often sufficient for collaborating on Office documents without needing Windows, although they've intentionally left many features out so it isn't a complete replacement.
Ultimately this probably helps increase the overall internet security. Although in recent years it was available from a TLS secured source [0], the putty.org site (which might or might not be operated by the maintainers) is still not https secured. Given that probably tens or hundreds of thousands downloaded it from there (imagine, getting an SSH client from an unknown source!) I'm surprised not more happened.
Other than that, thanks for the great work maintaining this project which helped me and others a great deal throughout my career. Countless times have I been stranded on a Windows server and quickly needed an SSH client.
Putty binaries are all signed, which is what you should be looking at when authenticating a release. Whether you fetch them over SSL is of little importance.
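The general shape of that check, whatever the download channel: compare the artifact against something obtained out of band. Real signature checking goes through GPG or Authenticode; a plain SHA-256 comparison is sketched here for brevity (the helper name is made up):

```shell
#!/bin/sh
# Compare a downloaded file's SHA-256 against a digest you obtained
# through a separate trusted channel (docs page, signed email, etc.).
verify_download() {
  actual=$(sha256sum "$1" | awk '{print $1}')
  if [ "$actual" = "$2" ]; then
    echo "digest OK"
  else
    echo "digest MISMATCH"
  fi
}
```

Usage would be `verify_download putty.exe <known-good-digest>`; the point of the thread stands either way, since the digest is only as trustworthy as wherever you got it from.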
You are, of course, absolutely correct. And I hope this is what internal package maintainers in large companies and individuals using putty as their standard SSH client do. However, for many of us putty is a backup when they are not on their linux/osx machine and just quickly need an SSH client to do something. The workflow there is google->putty->first result->download->execute. You absolutely shouldn't, but we also shouldn't drink and sleep 8h a night :)
The level of assurance you get with a signing key downloaded over HTTP and one downloaded over HTTPS is roughly equivalent. Sure, HTTPS gives you a degree of protection from MitM attacks, but it won't stop attackers (whether criminals or militaries) from hacking the web server and changing both the software and the signing keys---after all, if one is possible, so is the other.
Clearly we have different concepts of "roughly equivalent" :) One means everyone on your network can trivially serve you trojanized binaries, the other doesn't.
I think hacking a web server is a lot easier than hacking a network connection. Hacking web servers is well within the capabilities of your average vandal, while hacking network links in order to perform a MitM attack requires significant resources (e.g., those of a large criminal syndicate or an intelligence service, but I repeat myself).
Edited to add: ARP-spoofing the right LAN requires spearphishing and APT, which I think also require significant resources.
Sure, against a complete stranger the web server might be more vulnerable, but sometimes the attackers are already in our LANs :)
I was thinking more about employees, or students at universities, or such. I believe I've seen tools that ARP-spoof and then automatically detect downloads of ELF or PE files and trojanize them, all without requiring almost any knowledge from the attacker. I don't know if any of these tools detect Putty and fix its signature too, but it wouldn't be hard to do.
The signature is automatically verified by the system — when you open the installer, it either shows "unknown developer" or "PuTTY team" in the UAC dialog. Which is easy to verify.
Do you know the version of PuTTY you have installed without checking?
If open source isn't a requirement and you want a polished GUI client, https://www.bitvise.com/ssh-client has been free as in beer for about 1.5 years. It will be tough for them to beat being part of the default install!
Bitvise sells a solid Windows-friendly SSH server for $100. I am not affiliated apart from being a happy user since before OpenSSH supported AuthenticationMethods (multiple required) in 2013; it has been my straightforward licensing alternative to Remote Desktop Gateway.
Thanks for pointing this out; I'd forgotten about that (having given up on Putty after discovering the difficulty of sharing configuration https://stackoverflow.com/q/13023920).
Putty.org links to the Putty download page with a disclaimer separating promotion of Bitvise software below. The contrast in marketing language between "source code [...] developed and supported by a group of volunteers" vs. "developed and supported professionally" definitely appeals directly to the Windows mindset! Unfortunately Bitvise's "growth hacking" makes commercial sense, even if it does cost them potential users.
My main issue is the terminal emulator. MinTTY is the only one I can get to work like I want with colors and fonts, but I still miss ligatures and the performance is bad.
I'd really like something faster with full color and font support I can SSH from.
I use it to change my colors depending on the machine I am on. If I log onto a production server, the color scheme switch to high contrast with red background.
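A sketch of that idea, the kind of thing you'd call from a shell rc file; the hostname patterns and color choices here are made up:

```shell
#!/bin/sh
# Emit ANSI color escapes based on the machine's hostname, so that a
# production shell is visually unmistakable.
prompt_colors() {
  case "$1" in
    prod-*)  printf '\033[41;97m' ;;  # white on red: production, be careful
    stage-*) printf '\033[43;30m' ;;  # black on yellow: staging
    *)       printf '\033[0m'     ;;  # default colors everywhere else
  esac
}
prompt_colors "$(hostname)"
```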
Have you tried this out in recent Windows releases? We added 24-bit color support [1] to the console in Sep 2016, and built a colortool (that supports solarized) this summer [2]. Most of this should work, and we're all ears for things that don't.
Ah, no, I don't believe so - it looks like colortool calls into kernel32.dll to set the colors, so they're system-wide. The GitHub project linked in the second blog post would be a good spot to put a bug/feature request in, though.
I'm interested in how well the client works under cmd.exe. Last time I tried to use ssh via Cygwin, I remember having a lot of issues with escape codes not working, to the point that I had to go ahead and install PuTTY anyway, just because it was a much better terminal emulator. Microsoft has neglected their terminal emulator for 20 years now[0]. I hope they improve it soon. I'm aware that some cosmetic features were added in Windows 10 but frankly it still sucks compared to even the most basic Unix terminals. Making the terminal a first class citizen in Windows would do a lot to win developer market share.
Windows Console has improved a lot since Creators Update, adding built-in support for many VT escapes (including all the common ones) and even 24-bit color in the console. They're doing their best to make it first class for developers. You should follow the command line team's blog: https://blogs.msdn.microsoft.com/commandline/
Oops, that was supposed to be a footnote (just my acknowledgement that there had been some changes to cmd for Windows 10), but I decided to put it inline. Can't edit with the HN app I'm using currently.
I know all about the alternative terminals for Windows and none of them are much better. ConEmu feels so hacky and way out of scope for what it's supposed to be doing. Does it still require Administrator, and hook directly into the framebuffer?
You can blame MS for the first part. Don't really get what you mean by "out of scope".
But anyway, from my personal point of view, ConEmu is by far the best terminal out there (including the Linux ones). It's by far the most customizable, and with some work the result can be pretty amazing.
If you do not want to watch the video, go to "Manage Optional Features", then "+ Add a feature". You can then scroll down the list and find the OpenSSH Client (Beta) and OpenSSH Server (Beta) features in Windows.
Do you really need a video when the installation instructions are 2 sentences?
Even if the installation instructions are more than 2 sentences, I absolutely hate if video is the only documentation method. It has its value as a supplement to a text description, but not without the text.
(The worst offender that I've come across for this stuff is Minecraft mods. I want to know how this machine block works, not watch you chat about random stuff for 30 minutes hoping that the explanation is somewhere around minute 19!)
Yeh, at least provide a command-line method of installing it.
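There is one, for what it's worth: the beta ships as a "Windows capability", so something along these lines should work from an elevated PowerShell prompt (the exact capability name string may vary by build, so list first):

```powershell
# list the OpenSSH capabilities available on this build
Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

# install the client (name taken from the listing above)
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
```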
If you're the type to install ssh, I'm guessing you're not a typical Windows user who is horrified by the command-prompt.
As someone who makes online training courses, the whole putty dance for Windows was so cumbersome. This is excellent news.
Personally I would recommend anyone on Windows to use WSL (which supports ssh and scp out of the box) but it's nice to see this is available for people who don't want to use WSL.
Although it's appreciated as a UNIX admin I can honestly say I don't use Windows by choice but because my enterprise says I must. That's partially because all the productivity tools are found there and partially because the desktop guys are massive Microsoft bigots and refuse to host anything else. (It's OK I can say that as a massive UNIX bigot and besides they'll tell you the same.)
And frankly with the tiniest bit of effort I can crank up a local xterm and then ssh which gives me lovely things like color, font choice, easy window size changes and thousands of lines of scrollback. Why in 2017 would I want to use a bare naked ssh client unless there is simply no other choice?
Default packages are extremely important. That you can walk up to any computer and get work done without installing anything, even downloading putty.exe is far too much effort in some cases. Also it is kind of rude to just download and execute things on friends/colleagues computers ad hoc.
If this will be installed by default it will be awesome, and imho even the process described here by enabling a feature is still better than putty.exe (it will get managed by windows and updated if need be etc.).
For my own windows computers though, the first thing I install is WSL and a proper terminal app (cmder).
Yeah it's handy to have by default, no doubt. But unless it also includes all the other client modes as well (scp, sftp, etc) and all the features found in Putty to boot I think the author's conclusion that Putty's days are numbered is a bit unrealistic.
I will say, if this includes Active Directory integration, so that ssh'ing from a Windows box to a Kerberos-protected Unix one just works, that could be slightly helpful. And if not, that's just another reason to keep using Putty. LOL.
Ever since Windows 10 was released, you could download and install the OpenSSH client. It's great that Microsoft has actually built this in now (even if it is beta).
I don't work for Microsoft, but my guess would be in order to securely access a shell. (Remotely, but srsh doesn't roll off the tongue)
Honestly though, I'm more interested in an SSH server for Windows. I haven't tried for many years, but last time I did, getting something more secure than telnet was a massive pain.
Why? If you're on Windows and managing Windows, you just use WinRM for a PowerShell session. Even then, though, most of the tools support RPC so your tools already communicate with remote systems.
"No SSH" doesn't mean "no remote management capability." SSH hasn't been available because you don't need it unless you need to manage a Linux system or a network device. The only reason it's being introduced now is because people like to use git, and git fucking sucks if you're not in a POSIX environment.
At the time I wanted to manage my only Windows box remotely, and from mobile devices that had nothing but ssh and vnc. (SSH being the clear winner there.)
Yeah. My memory is fuzzy... I think I got to that step, but I couldn't figure out how to have it start with the machine without a bunch of hoops and caveats. But that was many years ago and I could be mixing it up with something else.
It's very simple these days - there's a script, `ssh-host-config`, which asks a number of questions (like whether you want user isolation for the service or not) and installs the service as native Windows service (using cygrunsrv).
Every router for the past 10 years has shipped with an SSH server. If you wanted to log in to it securely from a Windows box, you had to download Putty first, or enable some janky self-signed web interface.
The SHA1 variants are deprecated on modern OpenSSH as well, although they're still supported if you explicitly enable them.
I agree that it would be nice to have them "opt-in" in Windows as well; in my experience a significant amount of legacy equipment still uses these deprecated algorithms.
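OpenSSH already supports exactly that opt-in, scoped per host, so the legacy algorithms stay off everywhere else. A sketch for `~/.ssh/config` (the host name is made up; the `+` prefix appends to the default list rather than replacing it):

```
Host old-switch
    # re-enable SHA-1-era algorithms for this one legacy device only
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-dss
```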
What I'd really like to see is an OpenSSH server for Windows. My biggest gripe is not being able to SSH into a Windows machine and use one of the 3 major shells available now (cmd/powershell/bash).
If it doesn't have to be OSS, Bitvise have one that is free for personal use, otherwise available under a commercial license.
I've been using it for probably > 5 years, and it's been rock solid. It runs as a Windows service, supports both Windows and custom credentials, and comes with a nice GUI configuration tool too; can highly recommend it.
I prefer installing OpenSSH from Chocolatey instead, since that way I know I'm getting every month's updates. I've been using the sshd for a year now on my media server, and have the client on all my machines. It's pretty solid, although I do have one ongoing issue related to how ssh-agent works (instead of emulating UNIX domain sockets it uses Windows named pipes, and I can't make a working proxy for Windows gpg-agent to replace it so I can use my smartcard for SSH).
This is well and good but I wish Microsoft would take a strong stance on corporations launching MITM attacks on their employees' HTTPS sessions by injecting their SSL certificates on every session. I can understand corporations blocking sites like facebook but I don't see why they need to read everything I read on the web.
Sounds like an issue for lawmakers or HR at your workplace, not Microsoft...
Microsoft is a technology company; they aren't in charge of the HR or management at your workplace and consequently shouldn't be trying to fix HR issues using technology. There are plenty of legitimate uses for MitM-ing TLS, and in some circumstances even that is one of them (e.g. on hardened networks with VPN endpoint interconnects).
Back in the late 90's I used to carry a memory stick with VNC and PuTTY on it, which I used to tunnel a VNC connection over SSH to my home network. Even over bonded ISDN (128k, which I had at the time in the UK) it was more than usable for attaching to any of my systems.
Then Cygwin came of age, and proved very, very useful for adding the tools that Windows corporate desktops missed out on for work-related activities.
But in all that time, PuTTY has been a very good terminal client for SSH needs. Whilst it is good that Microsoft is adding this, it has never been a hurdle for many, and those who run into a corporate wall that tough have always (from the ones I've worked with and colleagues) been able to circumvent it:
Usually using the corporate security policies to bash the corporate desktop witch upon the head. Fight fire with fire if you can't bend the rules.
For those who like -- and prefer -- a GUI, PuTTY (or any other GUI-based SSH client) will remain king. For the rest, this is very good news. It is the same client many have been using for many years under their Unix-like OS, just on Microsoft Windows.
Point being, for those who wanted SSH or Unix tools under Windows, there have always been options, even with the most zealous of corporate desktop policies I have encountered; in the worst cases, you can use the corporate security policy itself to make that happen.
If one's needs are not limited to an ssh client, Cygwin has been available for years, is free software, and has very good integration with Windows (you have read/write access to the whole filesystem, for example, something that presently cannot be done with the Win10 Subsystem for Linux).
I think this is because WSL uses NTFS Alternate data streams to store linux file attributes, and they don't survive when a file is copied to a non-NTFS file system.
According to the MSYS2 devs, MSYS2 is slightly slower than Cygwin, and it has a significantly different project objective, leading to a much reduced command-line environment:
MSYS2 provides a minimal shell required to run autotools and other build systems which get the source for software from the Internet from different repositories, configure it, and build it. The shell and core tools exist mainly to allow porting Unix programs to run natively on Windows (i.e. without requiring a POSIX emulation layer). MSYS2 doesn't try to duplicate Cygwin's efforts more than necessary, so the number of provided POSIX-emulated programs is very small.
(I recommend Cygwin; I've been using it for 15 years or so, and have access to almost everything I've ever needed from the Linux command line. The slowest thing is forking; emulating it in Win32 executables is not easy on the NT kernel. MSYS2 doesn't fix that.
Also, apt-cyg is pretty decent for installing packages on Cygwin.)
Why "according to the MSYS2 devs" from 2014, though? Have you not tried MSYS2 yourself? I tried Cygwin a handful of years ago and it was worlds slower than MSYS, which I decided to use instead. I switched to MSYS2 a couple of years ago, and it hasn't felt any slower than MSYS. Cygwin took forever just to traverse a directory hierarchy. Has it gotten an enormous speed boost since?
I briefly had a look at MSYS2 several years ago but it was all oriented around Win32 compilation. For example, I wanted an ffmpeg build, but I needed it to understand Cygwin paths, while the MSYS2 instructions are for a Win32 executable rather than Cygwin executable.
Directory trees are indeed another thing that are not enormously fast in Cygwin, but how often do you do recursive listings of directory trees? I don't even do that often on Linux because it's not fast there either.
Specific example: `find` has several tricks up its sleeve that it thinks are fast (e.g. looking at link counts on directories to test for leaf directories), but emulating the APIs that it uses for these tricks are part of what makes it slow on Cygwin (Cygwin will need to count the directories to emulate the link count, which makes it O(n^2) rather than O(n)). You can use -noleaf to turn this off. But `find` does other stats that make it slow.
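The link-count trick and its escape hatch can be seen directly; a minimal sketch (the `/tmp/findtest` tree is made up for illustration):

```shell
# Build a small directory tree to search.
mkdir -p /tmp/findtest/a/b /tmp/findtest/c

# Default behaviour: GNU find may use a directory's hard-link count
# to decide when it has visited every subdirectory, skipping stats.
find /tmp/findtest -type d

# -noleaf disables that optimization, which helps on filesystems
# (or emulation layers like Cygwin) where link counts don't follow
# the Unix convention and emulating them is expensive.
find /tmp/findtest -noleaf -type d
```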
I use a script that wraps `cmd /c dir /b /s /a` (IIRC) and cygpath for the few scripts I have that do much recursive processing, but it's very rarely a big factor. My biggest directories are on my NAS which is Linux.
My single biggest performance problem - as in, almost debilitating on Cygwin - are poorly written Emacs plugins that do filesystem access in the status line update callback (e.g., Projectile). But they're easily monkeypatched when they crop up.
> but how often do you do recursive listings of directory trees
I mean, it affected enough scripts I was running that I noticed the slowdown plainly as daylight when I started using Cygwin. I'm surprised you don't notice it!
Yeah, they probably found out that sending the data from their own adware/spyware over unencrypted channels back to their own servers was too insecure. Just kidding ;-)
No, seriously: congratulations, MS, and please evaluate again whether you really need that extra money you make with your advertising deals.
So this is why Microsoft is such a leader in technology. After only 22 years, ssh is now in its operating system! Wouldn't it be nice to have a monopoly that is actually innovative, rather than simply a lackluster follower?
The first thing I always installed in Windows was the gnu tools port. (unixutils?)
Now when I sit down at a Windows box, it's the Ubuntu terminal. Over the years, I've become pretty platform agnostic. However, having to change my keyboard shortcuts for Mac, and no single way to copy/paste in a terminal between Windows (double right-click)/Mac (alternate between middle-click, or right click and paste depending on the source)/Linux (good old middle click!) is pretty annoying. It's also annoying that the Mac uses a whole different button for copy/paste, and the Home/End keys don't work the same.
Haven't seen any discussion yet of the licensing aspects of accessing a Windows server via SSH.
Linking an independent Microsoft licensing consultant discussing fun terms like "External Connectors" vs. "Client Access Licenses" (CALs), the "Primary User of a Windows device" and the current "Web Workload" exception... maybe a new exception is on the way.
As a sysadmin who put up with MS's crap for many years, got fed up, and went fully GNU+Linux only...
Sorry Microsoft, too little too late, you are an anti-user company, and I hope your desktop share withers and dies. I'd rather see the world on osx cause at least it has a semi-decent gnu userland.
I smell blood in the water with moves like this. It's time to ramp up the attack, people! Step 1: rip out AD and replace it with Samba4. Step 2: Windows servers become Linux servers. Step 3: non-Office user terminals become Linux terminals. ...
So they finally got mouse selection working? Font size scales according to screen resolution? They must have woken up to the 21st century.
Thank you for the link, I had no idea. I haven't kept tabs on the recent developments. Not sure if I'll ever upgrade to Windows 10, but I keep it in a cage somewhere, so I'll take a look.
Next step: add recognition for foreign partition formats like HFS+ and ext4. Just recognition would be a great step, something along the lines of: hey we see there are partitions on this disk; are you sure you want to format this? This instead of: Do you want to format this disk?
I don't even expect that you can read or write to those partitions, which of course would be trivial to add for them. Just recognition would be great!
Microsoft is definitely a different company nowadays. When I started in IT, more than a decade ago, they had a completely different market positioning and strategy.
Well, I gave this a try on my Windows machine. I tried to generate a key pair to pass around, but I have yet to figure out how to give ssh-keygen a path spec it can write to. Even with no path (just a file name for the key in the current directory) it fails with "invalid argument".
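For what it's worth, passing an explicit path with `-f` has been the reliable route for me on other platforms; a sketch (not verified against the beta build, and the file names here are just examples):

```shell
# Write the key pair to an explicit location with -f rather than
# relying on default path resolution, which the beta port seems to
# mishandle. On Windows, a forward-slash path such as
# C:/Users/you/.ssh/id_ed25519 may be worth trying.
ssh-keygen -t ed25519 -N "" -f ./id_ed25519_test -q

# Both halves of the pair are written side by side.
ls id_ed25519_test id_ed25519_test.pub
```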
So on that note, anyone know how to report bugs on this? Cursory Googling isn't turning up much.
Last I checked bash on windows is still segregated from the windows system proper. So you cannot call any windows utility/compilers from windows bash. You cannot write to existing files/directories either, based on WSL docs.
Hopefully these limitations will be removed.
I needed to reboot to make the install of ssh work. Then I did `ssh linuxbox` and it tried to log me in as winuser@winbox@linuxbox instead of winuser@linuxbox. The -l option fixed that. The gtop command looked nice! - it didn't on PuTTY.
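The `-l` workaround can be made permanent with a per-host entry in `~/.ssh/config` (the host and user names below are placeholders for your own):

```
Host linuxbox
    HostName linuxbox
    User winuser
```

After that, plain `ssh linuxbox` should pick the right username automatically.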
For the love of God, I cannot get ssh-add working with passphrase protected private keys. If there is no passphrase protection on the private key, then it works fine. I guess this is what beta means ;-)
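A minimal sanity check of the agent path looks like this (everything local and hypothetical; it's the passphrase-protected case that reportedly fails in the beta):

```shell
# Start an agent for this shell session; on the Windows port the
# ssh-agent Windows service plays this role instead.
eval "$(ssh-agent -s)" > /dev/null

# A key without a passphrase loads fine, matching the behaviour
# described above; a passphrase-protected one is where the beta
# falls over at the prompt.
ssh-keygen -t ed25519 -N "" -f /tmp/agent_test_key -q
ssh-add /tmp/agent_test_key

# List loaded identities to confirm the add worked.
ssh-add -l
```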
I'm not sure how useful this is by itself. It's cool that it's there for convenience, but I think I'd rather stick to either Git on Windows or Windows Subsystem for Linux (WSL)
Holy smokes! Why did it take them that long? This is why Microsoft loses so often: they crawl when it comes to innovation. Thank goodness for their Surface line of products.
Because they relied on software sales to businesses, therefore incompatible tools and vendor lock-in. They're now changing their business model towards IaaS/PaaS.
>This is why Microsoft loses so often.
Microsoft loses how? You might not like them but they definitely haven't lost anything.
> There is a new beta feature in Windows 10 that may just see the retirement of Putty from many users: an OpenSSH client and OpenSSH server application for Windows.
Great to see you including OpenSSH as part of your own OS; it's a carefully crafted piece of software that has a lot of appeal among users.
A nice follow-up is to generously contribute back to The OpenBSD Foundation. You were already a donor and, doing so, are helping further developments while being much cheaper than hiring a team all by yourself.
Welcome to the 90's Microsoft. Glad you could make it.
No one has used telnet for over 20 years. The fact that it took Redmond over 20 years to incorporate an SSH client proves to me that they really aren't as security conscious as they claim to be.
You miss the point that the right ssh keys will be used by default. Currently malware would have to search for and use the keys by itself.
Now the servers will be on a gold platter, ready to be served.