Looking Forward: Support for Secure Shell (msdn.com)
828 points by BryantD on June 2, 2015 | 390 comments



For all Ballmer's dancing on a stage and chanting "developers", there was no point under Gates or Ballmer at which Microsoft felt like a pro-developer company.

That has completely changed in the last eighteen months. Each time I think "wouldn't it be cool if", I find a few weeks later that someone at Microsoft is well ahead of me. How much easier it will be to ship my sucky roguelikes to Windows users in this new world!

Hmm. They now have a path to obsoleting cmd. As long as they ship a decent ssh client with the system, users will become accustomed to ssh-ing into their own box instead of using cmd.

Wishlist: tmux, emacs, vi, netcat, shell option for vi-mode, rc-file with preferences, ncurses library, something simpler than curses, zip and unzip. 256-colour support is fine, although 24-bit would be impressive. /proc would be cool, but also a big ask, I assume. They already have a strong compiler. Make it really easy to find the hex fingerprint required to log on to the sshd server. Something like inetd could be useful, too.


>Wishlist: tmux, emacs, vi, netcat, shell option for vi-mode, rc-file with preferences, ncurses library, something simpler than curses, zip and unzip

No thank you. I can certainly understand including SSH support, but I don't want what is pretty much the only remaining viable non-Unix platform to start bundling horribly dated and clunky Unix-like commands and bloated GNU tools.


Couldn't agree more.

I'm a die-hard Unix guy and ex Slashdot-esque zealot (colloquially, a bit of a twat), yet I'm knocking out PowerShell all the time now and dread having to log into the pile of CentOS kit I have lying around. It is arcane. Even after 15 years I spend most of my time in the manpages or working out another damn config format.

Literally did a two liner to scrape a web page, parse it and call a REST endpoint with the parsed data as JSON in PowerShell. I can't even use Python now. I've been broken.

It makes me cringe saying all this as well, kind of like a racist making amends with his past.

The only thing I still hate is Windows Update.


I'm not even a Powershell fan, I'm just not especially fond of Unix-likes, which feel like a local maximum that has turned into an unchangeable standard. I think Unix-like environments feel dated and clunky, and there's a tendency for them to result in permanent bloat, stagnation and a web of interdependencies.

I understand that lots of people are familiar with Unix-like tools and their many idiosyncrasies and like to be able to transfer that knowledge across platforms. But that doesn't mean there isn't merit to a platform not conforming to that and trying to improve on it.


Lots of people keep using the word "bloat" to refer to Linux as opposed to Windows, which I find bizarre given that it's long been a criticism in the other direction.


Since the word is used mainly as an insult by people with emotional attachments to technology, the real question in my mind is how you did not find it bizarre on every use.

Or I guess another way to say it is that "bloated" is only a measurement of the speaker's disdain for the software in question. It generally has little relationship with any actual quality measurement. Bizarre on its face, to me.


Man, after years of hearing how bloated Windows is compared to other OSes, I was expecting my first Windows Phone to be the same. But nope: to my surprise it was fast and fluid, even on low-end devices! I guess the same can also be said about Win8/Win10 compared to Vista/Win7.

Bloat seems like dirt that accumulates over time. A little bit of cleanup and optimization can sometimes do magic.


Bloat and legacy support are two sides of the same coin.

Microsoft chose one answer to that, Apple another.

I think Windows Phone is a clear example that Microsoft has technical chops when starting clean, it just lacks business willpower to make decisions that favor performance over support.


Yeah, I would genuinely like to know what all is considered to be bloat within the Unix space. If anyone could point to some information about this I'd be curious.


X11, KDE, GNOME, GCC, Emacs, Mozilla? Any huge project I guess. The GNU command line tools are not as lean as the BSD ones but so what?


They also generally have more (useful) features than the BSD ones.


Not only that, but they include more documentation inside the binary. Someone also pointed out to me one time that some features in some GNU utils are only possible if they are implemented in the binary, rather than as a separate program.


Well, generally speaking nothing is impossible, but it's often more efficient to put the extra features in the same binary. However this "bloat" is nothing compared to your typical desktop or programming environment.


I disagree.

1. Unix-like environments are (or rather were, before systemd) fairly similar, and once you drop into a shell you're good to go. Even with PowerShell, Microsoft has gone down the route of version/feature pinning: if you plan to support at least Windows 7/Server 2003 in your scripts, PowerShell 2 is the highest you can aim for. Bye bye new features, welcome bloat. Red Hat, while shipping old versions, still backports features.

2. The only things in Unix-likes that are standard are the core utils and OpenSSH. Everything else can be anything (SysV/upstart/systemd, GNOME/KDE/tiny_tiling_wm, bash/zsh), and we are actually deprecating older technologies for new ones (quite often changing the name). Due to ELTS (extremely long term support, often called legacy cruft), core subsystems in Windows are mostly the same (on the outside). And if it is a server you can bet it is running IIS and MSSQL.

3. Try using the `wmic` shell or the `net` util. The unbearable clunkiness of such tools, and the bearable GUI tools, mean that we have very few general scripts/tools to transfer.


This isn't the case. WMF is available separately. I just rolled out WMF4 over all our kit and we have PS4 everywhere now, including Win7 and 2008.


I would really like to see what the glorious future of the "dated" and "clunky" Unix-like environment would look like.


I did find an open source implementation here:

http://pash.sourceforge.net/

Don't know if it's any good --- never tried it; it says it's about half complete, but I don't know if it's a useful half.

After looking at the docs, you could do a lot of what Powershell does with Unix shells; you'd need a different set of conventions, where instead of using unformatted text as an intermediate format you used a streamable table format with support for metadata. Then you could have commands like 'where', which would be awesome.

$ xps | xwhere user -eq dg | xsort -desc rss | xtop 10 | xecho "rss=@.rss cmdline=@.command"

...or something. sh syntax is a bit lacking; PowerShell's got lots of useful builtins, including native support for the format, so it knows how to present it to the user. An sh version would need conversion routines back and forth from text.

The tricky part would be bootstrapping; getting enough functionality quickly enough that enough people would start using it to make it sustainable.

I'd still rather use this than faff around with awk, though. I've done way too much of that. And if I never have to parse the output of ls -l using cut again, I will be a happy person.
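The imaginary xps/xwhere pipeline above can be roughly approximated today with a header-row convention over tab-separated text; here's a minimal sketch (the column-aware filtering and sorting are stood in for by awk/sort/head, and the process table is a made-up sample):

```shell
# Hypothetical "process table" with a header row naming the columns.
data=$(printf 'user\trss\tcommand\ndg\t5120\temacs\nroot\t300\tsshd\ndg\t9000\tfirefox')

# "where user -eq dg": resolve the column index from the header by name,
# so the filter survives column reordering.
filtered=$(printf '%s\n' "$data" \
  | awk -F'\t' 'NR==1{for(i=1;i<=NF;i++)if($i=="user")u=i;next} $u=="dg"')

# "sort -desc rss | top 1": numeric descending sort on the rss column.
top=$(printf '%s\n' "$filtered" | sort -t "$(printf '\t')" -k2,2nr | head -n 1)
echo "$top"
```

The header row is doing the job PowerShell's metadata does; without a shared convention like this, every consumer has to guess at the columns.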


> Don't know if it's any good --- never tried it; it says it's about half complete, but I don't know if it's a useful half.

There are at least two parties working on it who have an interest in certain features working: it's the NuGet shell within MonoDevelop, so custom PowerShell hosts need to work, as well as a bunch of things needed by NuGet. Then one of the more prolific developers actually gets paid to work on Pash to support features related to PowerShell add-ins and a few others. (My own efforts so far were mostly bits and pieces: missing cmdlets (there are still a lot of those), test cases, and weird parser behaviour, mostly due to my history of golfing in PowerShell – golfed code makes for some fun tests of edge cases.)


Isn't this just complaining that you want a different tool? The entire Powershell format depends on, amongst other things, the object interface being sane, existing, and usable.

The whole point of the GNU system was working around the usual "I can see it here, need this bit, and want to put it there". If you need to do something really specific a lot, you write a tool which does that.


Exactly. The Unix/Linux world has not gone the way of a Powershell type shell for the simple reason that for the uses where we want an object-oriented API, we have a plethora of scripting languages designed for that purpose, and we all have our own favourites.

When we want a text oriented API we use shell scripts. When we want an object oriented API we pick our favourite scripting language - which may very well for many of us be different for different problem domains.


Yes, precisely. The main thing to do would be to define the stream format, and then persuade people to actually use it.

The point of the exercise is that the traditional Unix pipelinable commands have standardised on unstructured textual data. Powershell has standardised on structured tabular data, which is what lets you do the cool things.


There's also http://www.lbreyer.com/xml-coreutils.html which is interesting.

But I've never tried it; just came across it recently as a homebrew update.


> And if I never have to parse the output of ls -l using cut again, I will be a happy person.

Try stat instead, e.g.

    $ stat -c 'NAME: %n; OWNER: %U, SIZE: %s' * 
also supports --printf (see man stat).


It's not standard enough --- the coreutils and OS X versions work differently.


Yeah, same with the find -printf (at least macports can install coreutils and give you "gstat"). Portability is hard :-/ http://mywiki.wooledge.org/ParsingLs


Try exploiting find(1) with -printf, with formats like:

    find "$@" -printf "%TY %Tm %Td %TT %1y %4m %3n %4U:%-3G %8s  %p\t%l\n"
and flavor the output elements and format to suit. (this sorts by date well, but would do less well on file or symlink names with embedded newlines - swap \0 for \n to wrestle with those.)


> Literally did a two liner to scrape a web page, parse it and call a REST endpoint with the parsed data as JSON in PowerShell.

This sounds cool. Can you recommend terse resources for getting to know PowerShell? Is MSDN the best place to look for docs or are there better places to go?


TBH I couldn't find a decent book or resource and sort of hacked my way around for a year or so. It has a good built-in manual (Get-Help). This is a good poke at the fundamentals of my example:

Scrape a page:

   $flight = " LH3396"
   $url = "http://bing.com?q=flight status for $flight"
   $result = Invoke-WebRequest $url
   $elements = $result.AllElements | Where Class -eq "ans" | Select -First 1 -ExpandProperty innerText
Hit a REST endpoint:

   $body = @{
       Name = "So long and thanks for all the fish"
   }

   Invoke-RestMethod -Method Post -Uri "$resource\new" -Body (ConvertTo-Json $body) -Header @{"X-ApiKey"=$apiKey}
Sources:

[1] http://stackoverflow.com/questions/9053573/powershell-html-p...

[2] http://www.lavinski.me/calling-a-rest-json-api-with-powershe...


This is really cool (and unfortunately I know nothing about powershell, I'm young and dumb) but I'm pretty sure this is replicable line for line with bash (and curl, maybe awk too idk). Am I wrong?


The main difference between bash/gnuutils stuff and Powershell is that every command returns an object (or an enumeration of objects), and commands can take an object (or an enumeration of objects) as input.

This lets everything implicitly understand how to access named properties without everyone having to do string parsing.

Sure, you can solve the same problems with bash/grep/awk/sed/etc - but sometimes it's a bunch simpler to solve in Powershell.


Don't use awk for parsing XML/JSON, there are excellent tools like html-xml-utils or jq that already do that well:

    $ sudo apt-get install html-xml-utils jq

    $ curl 'https://duckduckgo.com/html/?q=cake' | hxclean \
       | hxselect  .web-result:first-child .snippet

    $ curl apy.projectjj.com/listPairs | jq .responseData[0]

Though I'd typically use Python's BeautifulSoup for anything major with html, there's just too much bad html out there ;)


wow, didn't even know of jq but it looks super cool. I was trying to avoid packages you would have to 'install', since it seemed that all the functionality in the post I responded to used built-ins for PowerShell commands.


Probably not. In my experience it's not so much about things being impossible elsewhere, just that PowerShell can often be better at fiddling in a REPL until you get the results you want. The point where you need (or want) to upgrade to a more powerful language is IMHO earlier in bash than in PowerShell².

PowerShell handles objects like Unix utilities handle text. You get a lot more orthogonality in commands, e.g. there's just a few commands dealing with JSON, CSV, XML, etc.¹ – they mostly do just the conversion of a specific text-based format to an object list or tree representation and back. Everything else after that can be done with just the same set of core commands which deal with objects. Filtering, projecting, and iterating over object sequences is probably the most common part of PowerShell scripts and one-liners and it's a part that's useful everywhere you need the language. Note that we have a bit of that in the samples here, too. ConvertTo-JSON is used to convert a hashmap to a JSON string to use in a request, and Invoke-WebRequest already handles parsing HTML for us so the following line can just filter elements by certain attributes.

Where the ideal format for the core Unix tools you use most frequently is free-form text, you often have special commands that work on other formats by replicating a few core tools' features on that format, e.g. a grep for JSON, a grep for XML, etc. There are others, of course, that do the conversion in a way that's friendly to text-based tools, but the representation still lacks fidelity. Finding XML elements with certain attributes quickly becomes an exercise in writing robust regexes. Cutting up CSV by column numbers is a frequent occurrence, and since the tools do not understand the format it does not make for a pretty read in the script's code. Personal opinion here, based on lots of PowerShell written, some shell scripts written, and lots of horrible things read in either (sure, awful PowerShell scripts do exist, but I'd argue that discovering a nice way is much easier where you can cut most of the ad-hoc parsers written in regex or string manipulation in pipelines).
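To make the CSV-by-column-number point concrete: you can at least resolve columns by header name rather than position in plain shell, though note this sketch (with a fabricated two-row CSV) still breaks on quoted commas, which is exactly the fidelity problem, since awk does not understand the format:

```shell
# Select the "rss" CSV column by header name rather than by position,
# so the one-liner survives column reordering. Sample data is made up.
csv=$(printf 'name,rss,cmd\nemacs,5120,/usr/bin/emacs\nsshd,300,/usr/sbin/sshd')

rss=$(printf '%s\n' "$csv" \
  | awk -F, 'NR==1{for(i=1;i<=NF;i++)if($i=="rss")c=i;next}{print $c}')
echo "$rss"
```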

(One last point about orthogonality of commands: ls has a few options on how to sort or output the results, for example. Sorting a list of things? That's Sort-Object's domain. Formatting a list of things? Format-Table, Format-Wide, Format-List. That's quite a bit less each individual command has to do and it's all just for nicety to the user. For working with the output programmatically you don't (well, and can't) need them at all.)

I have a few posts on SO where I tried to steer people into actually learning how the language works and that you should use the pipeline as much as possible, e.g. http://stackoverflow.com/a/7394766/73070 or http://stackoverflow.com/a/3104721/73070. That's not to say you can't write un-understandable things: http://stackoverflow.com/q/1018873/73070.

And finally, let's not forget that PowerShell exists on Windows where text-based tools are mostly useless. Want to query the event log? The registry? WMI? Good luck. Windows has a long history of having non-text formats everywhere in the system and for administration it's a bit hard to pretend they don't exist. Jeffrey Snover elaborates a bit on that here: http://stackoverflow.com/a/573861/73070.

There may be a lot of developers getting by with Unix tools on Windows, but they're not the target audience for PowerShell³. And the previous approach to scripting things on Windows servers and domains was either batch files or VBScript/JScript.

This ... uhm, got longer than anticipated and probably a lot less coherent than planned. Apologies for anything that makes no sense. I didn't have caffeine yet.

______

¹ ConvertFrom-JSON and ConvertTo-JSON for JSON for example. For CSV there are also convenience commands that directly work on files as well, so there's four of them. XML processing is built-in since .NET can do that easily already.

² I probably don't get to live the rest of the day now, I guess.

³ I am a developer, though, who uses PowerShell daily for scripting, as a shell, and a .NET playground. The PowerShell team at MS was a bit surprised once when I told them that my background was not server administration (the Scripting Games were quite focused on that part and often involved doing things with WMI or AD – stuff I rarely, if ever, do.)


Your comment was useful, so not totally in vain :-)

Just incited me to have a little look at powershell (read through [0], useful intro). It looks nice; I can definitely see the utility in having a simple object model for transferring information between processes. In nix land you get pretty good at extracting data from simple text forms, though sometimes it's harder than it should be.

One thing that jumped out at me there is the overhead of the commands.

    430ms: ls | where {$_.Name -like "*.exe"}
    140ms: ls *.exe
    27ms : ls -Filter "*.exe"
Not so much the absolute numbers, but the fact that there are 3 different ways of doing it and the most flexible choice is over an order of magnitude slower.

What happens when you add another command to the pipeline? Do they buffer the streams like in linux?

I guess the situation will improve over time, but how complete is the ecosystem at the moment? One area where nixes will always shine is their total ubiquity. Everything can be done via commands and everything works with text.

[0] https://developer.rackspace.com/blog/powershell-101-from-a-l...


You found three ways of doing it that each filter at a different level. The -Filter parameter employs filtering on the provider side¹ of PowerShell, i.e. in the part that queries the file system directly. Essentially your filter is probably passed directly to FindFirstFile/FindNextFile, which means that fewer results have to travel through fewer layers. The -Path parameter (implied in `ls *.exe`, as it's the first parameter) works within the cmdlet itself, which is also evident in that it supports a slightly different wildcard syntax (you can use character classes here, but not in -Filter) because it is already over in PowerShell land. The slowest option here pushes filtering yet another layer up, by using the pipeline, so you get two cmdlets passing values to each other, and that's some overhead as well, of course. Note that the most flexible option here is combining Get-ChildItem with Where-Object, whereas the direct equivalent in Unix would probably be find, which replicates some of ls' functionality² to do what it does, which probably places it in a very similar spot to ls, performance-wise.

It's not uncommon for the most flexible option to be the slowest, though. In my own tests my results were 18 ms, 115 ms and 140 ms for doing those commands in $Env:Windir\system32, so the difference wasn't as big as in your case. For a quick command on the command line I feel performance is adequate in either case, unless you're doing things with very large directories. If you handle a large volume of data, regardless of whether it's files, lines, or other objects, you probably want to filter as much as you can as close to the source as you can – generally speaking.

As for buffering ... I'm not aware of any, unless the cmdlet needs the complete output of the previous one to do its work. Every result from a pipeline is passed individually from one cmdlet to the next by default. Some cmdlets *do* buffer, though, e.g. Get-Content has a -ReadCount parameter that controls buffering in the cmdlet (man gc -param readcount). Sort-Object and Group-Object are the most common (for me at least) that always need the complete output of the stage before to return anything, for quite obvious reasons.
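For comparison, Unix pipes have the same item-at-a-time behaviour, which you can see by putting an early-terminating stage downstream of a huge producer:

```shell
# seq would emit a billion lines, but head closes the pipe after three,
# so the producer is killed by SIGPIPE and the pipeline returns immediately
# instead of generating all the output first.
first=$(seq 1 1000000000 | head -n 3 | tr '\n' ' ')
echo "$first"
```

If pipes buffered whole stages, this would grind through a billion lines before printing anything.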

However, even though I did some work on Pash, the open-source reimplementation of PowerShell, I'm not terribly well-versed in its internal workings, so take the buffering part with a grain of salt.

As for completeness, well, the Unix ecosystem has an enormous edge here, simply by having been there for decades and amassing tools and utilities. Since PowerShell was intended for system administrators you can expect nearly everything needed there to have PowerShell-native support. This includes files, processes, services, event logs, active directory, and various other things I know little to nothing about. Get-Command -Verb Get gives you a list of things that are supported directly that way. It seems like even configuration things like network, disks and other such things are supported by now. At Microsoft there's a rule, I think, that every new configuration GUI in Windows Server has to be built on PowerShell. Which means, everything you can do in the GUI, you can do in PowerShell, and I think you can in some cases even access the script to do the changes you just made in the GUI – e.g. for doing the same change on a few hundred machines at once, or whatever.

Of course, you can just work with any collection of .NET objects by virtue of the common cmdlets working with objects (gcm -noun object). For me, whenever there is no native support, .NET is often a good escape hatch, that in many cases isn't terribly inconvenient to use. You also have more control over what exactly happens at that level, because you're one abstraction level lower. As a last resort, it's still a shell. It can run any program and get its output. Output from native programs is returned as string[], line by line, and in many cases that's not worse than with cmd or any Unix shell.

_____

¹ Keep in mind, the file system is just one provider and there are others, e.g. registry, cert store, functions, variables, aliases, environment variables that work with exactly the same commands. That's why ls is an alias for Get-ChildItem and there is no Get-File, because those commands are agnostic of the underlying provider.

² So much for do one thing – but understandable, because ls' output is not rich enough to filter for certain things further down in the pipeline.


Oh yeah, I can see how the different filtering spots would make a difference.

Was just a bit surprised at the overhead of adding a command to the pipeline. A similar setup on linux would be like the following I guess (on a folder with 4600 files, 170 matching files).

    20ms time ls *.pdf
    35ms time find -maxdepth 1 -iname '*.pdf'
    60ms time ls | egrep '.pdf$'
I was more wondering if each additional command added that much overhead. Your numbers look much more reasonable. Maybe the article I read had a big step due to filesystem caching or something.


Ha, now when you Google "flight status for LH3396" the fourth result is your StackOverflow link.


Not an expert but there's a "PowerShell in 10 Minutes" here:

http://social.technet.microsoft.com/wiki/contents/articles/1...


Powershell in Depth is the best book about PS there is:

http://www.manning.com/jones6/


> The only thing I still hate is windows update.

all those "2 liners" don't mean much if you have to reboot your box every 2 days to apply more emergency 0-day security patches.


What platform does not have 0-day security vulnerabilities? Hurd?


His comment didn't imply that other platforms don't have 0-day vulnerabilities.


One thing that became clear with Heartbleed is that people apply security updates and don't restart the necessary services to pick up the updates. Rebooting the entire OS is the nuclear option, but it's incredibly effective.
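On Linux you can spot the in-between state (library patched on disk, stale copy still in memory) by looking for processes that map deleted files. The listing below is a fabricated sample of what /proc/PID/maps or lsof output can look like, since real output varies per machine:

```shell
# Count shared-object mappings whose file was replaced on disk but is
# still loaded, i.e. services that need a restart. Sample lines are
# made up for illustration; on a real box you'd read /proc/*/maps.
maps=$(printf '7f2a r-xp /usr/lib/libssl.so.1.0.1e (deleted)\n7f2b r--p /usr/lib/libc.so.6')

stale=$(printf '%s\n' "$maps" | grep -c '(deleted)')
echo "stale mappings: $stale"
```

Anything nonzero means a patched library is still being used by a running process, which a reboot fixes wholesale.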


Yeah every 2 days I have to reboot all of our kit to apply updates...

http://i.imgur.com/5ZxiLGf.png


Neglecting security is not a virtue.


Pragmatism is.

This is a Hyper-V hypervisor host on a private VLAN three layers behind the internet and two layers behind the users locked in a room with no console access other than via an ILO card on yet another VLAN.

There has only been one KB which implies a security problem, and that has had alternative mitigation put in place.

Not rebooting your server doesn't imply incompetence or disregard for security.


So, no security patches?


A reasonable question.

Only ones that affect the network surface footprint, so none here as it's on a private VLAN.


that's a pretty terrible practice no matter what OS you're using.


Why?

I assume that "private VLAN" means it isn't exposed to potential external attack.


It must be great to know no-one within your environment will ever do anything wrong. Or download anything that will do anything wrong. Or visit any websites than can hijack your browser into doing anything wrong.


a) Human error is a possibility but that's not something that can be eliminated.

b) They won't download anything wrong. There's no route to the internet for this machine.

c) They won't visit any web sites. There's no browser on the machine. This is a core profile windows server installation.

Don't assume that we don't know what we're doing. We have 500ish Windows Server machines floating around.


> a) Human error is a possibility but that's not something that can be eliminated.

It's good that you've managed to perfect the hiring process to the point you have zero risk of internal fraud or malice.


Where did I imply or state that?

Nowhere!

What does that have to do with security updates and reboots?

Nothing!


Being 'private' only means you don't get hit by drive-by scans from the Internet. There are (depending on configuration) plenty of opportunities for internal attacks, for example the workstation being used to access the boxes. Not to mention removable media (usb, cd-rom) or files copied onto those otherwise isolated hosts could be infected.

Patching servers is just good practice. As is designing a system that can handle rebooting individual servers without user-facing downtime.



I've seen this in older units, but has any recent switch been vulnerable to this?


PowerShell is OK, but the MS-DOS terminal needs an update: it has several limitations around fonts, encoding and missing libs that make it the worst terminal experience imho.

Ah, also POSIX compliance could simplify the life of those who develop on Windows. I tried to develop on Windows for many years, I tried hard, but you are always missing some feature that on Unix is basic.


For the love of... just let me drag it wider than an old vt100 terminal!


That's all done in Windows 10.


Name a feature. I've ported tonnes of POSIX stuff to Windows. 99% of it is there and the rest is easily plugged into a bit of Win32.

I tend to use PowerShell ISE, which is built into Windows, rather than cmd, which, you're right, sucks.


>two liner to scrape a web page, parse it and call a REST endpoint with the parsed data as JSON in PowerShell

Can you post this 2 liner? Sounds interesting.


I can't post the one-liner as it's earning me a shit ton of cash every month but in concept it's the same as what I've posted here:

https://news.ycombinator.com/item?id=9649090


How can two lines of code be earning you shit tons of cash? Can you show it if you swapped out specific details with generic placeholders (which page you're parsing, which endpoint you're hitting)?


It merely exploits some fundamental differences on how humans operate between two markets and turns that into real-time information for me on which I can make trade decisions.

This isn't stocks, shares, futures etc for ref either.

I don't want to say any more nor post any code as it's unique and a good earner at the moment.


> It merely exploits some fundamental differences on how humans operate between two markets and turns that into real-time information for me on which I can make trade decisions.

Now that's teasing ;)


> It merely exploits some fundamental differences on how humans operate between two markets and turns that into real-time information for me on which I can make trade decisions

Let me guess: arbitrage applied to sports gambling. Wink my way if I'm on the right track.


No wink. Doesn't involve betting. Although that's a good idea :)


Understandable. Sounds really interesting.


I feel as if this comment is an exemplar of Microsoft itself.


I wouldn't post the old python version of it (using urllib2, BeautifulSoup) that ran on a Linode Debian VM either for reference.


Sounds awesome, how would you recommend someone get started with PS? I'm a CLI-only dev, use Node, code tons of front-end, but all I use is CMDer, VIM, and SSH all running together.


Fire up PowerShell ISE (comes with windows) as administrator and type:

    Update-Help
This will download all the help.

Then you can type "Get-Help About_" and hit tab to see the master topic lists.


> bloated GNU tools

If you were coming from the Unix/BSD world, I could understand this, but how is using a GUI for everything considered elegant, while GNU tools are considered "bloated?" Doesn't the GUI necessarily "bloat" the program more than a few extra command-line options would?


You don't need a GUI; PowerShell is a powerful command line scripting environment.


Which needs a full desktop environment to be used. Or am I wrong?


Server Core doesn't include a full DE.


It doesn't involve the desktop environment unless you ask it to, and you can invoke it remotely from outside. The system does boot to GUI, but the overhead of that with nobody logged in is not very large.

Embedded Windows, on the other hand, is forever the poor relation.


Still no alt+F1 to alt+F7, right? That would be cool to have. I'm really curious, as I made the leap to Ubuntu years ago and the most I've touched windows is doing reinstalls on friends computers.


It's true, but you can log in multiple users, or log in users multiple times, and these run in distinct areas, kind of like virtual terminals. These are called "sessions." http://blogs.technet.com/b/askperf/archive/2007/07/24/sessio...


If you've SSH'd into your Windows machine, as it seems Microsoft is working towards, you would not need to interact with the desktop environment.


Of that list, I can only see downsides to the middle ones: "emacs, vi, netcat, shell option for vi-mode, rc-file with preferences". Who uses vi-mode in the shell? And Windows already has a boatload of editors.

Windows could do a lot better than curses by providing easy means to pop MFC dialogs from the shell to prompt the user.

Zip and unzip are already available, in https://pscx.codeplex.com/


    > Who uses vi-mode in the shell?
There's at least a dozen of us. It's my wishlist :)

    > Windows could do a lot better than curses by
    > providing easy means to pop MFC dialogs from
    > the shell to prompt the user.
When you work in an environment with a lot of hosts, Windows is a laborious partner. You need to open remote desktops and click windows to push the OS around. With unix you often have processes that start out being manual, and which gradually get sewn up into automation.

With deliberate effort, you can build automation to Windows platforms. But in unix, one guy with a clue can create vast value working with the defaults. Even on an ancient Solaris box with no compiler or extra tools installed. If you're desperate you'll be able to find a shell that can open a socket server or staple some awk together to get a result.
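To illustrate the "staple some awk together" point, here's the kind of throwaway aggregation you can do with nothing but a stock shell and awk, even on a bare box; the ps-style input is a canned sample so the numbers are reproducible:

```shell
# Sum resident memory per user from ps-style output using only awk/sort,
# the sort of one-off a clued-up operator knocks out on any Unix.
ps_sample=$(printf 'USER RSS\ndg 5120\nroot 300\ndg 9000')

totals=$(printf '%s\n' "$ps_sample" \
  | awk 'NR>1{sum[$1]+=$2} END{for(u in sum) print u, sum[u]}' | sort)
printf '%s\n' "$totals"
```

Nothing here needed installing, which is the whole point: the defaults are already heavy-lifting equipment.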

Windows powershell is a vision of what a shell can be that has grown up separate from the unix world. In a fair world it would be considered heavy-lifting equipment, like bash or emacs lisp. It would be thriving. But it has lived in a straitjacket because until now Microsoft has forced operators to interact with it via the Windows GUI.

MFC popups would undermine the benefits of SSH. If you're worried about your shell hanging at a background window event, you'll avoid using it in anger.

Same issue with editors. Existing editors require the GUI layer. A strength of ssh is how well it suits you when you want to dive on to a host and make a fast change.


Additionally, popping up GUI dialog windows over slow network links will get very old very fast.


Console interaction shines particularly for high-latency links. Mainframe/mini interfaces were particularly strong - your buffer could get many characters ahead of the display, but you could have complete confidence that everything would work out.


At least a few people want vi-mode in the shell, see https://github.com/lzybkr/PSReadLine/pull/101


> Windows could do a lot better than curses by providing easy means to pop MFC dialogs from the shell to prompt the user.

This was common on the Amiga. E.g. tab completion in the shell can bring up a pre-filtered file requester.


"the only remaining viable non-Unix platform"

Why do you think that is? As in, why is everyone else on unix (like toolchains)? Why do you think pretty much the entire tech stack of mobile and web companies has moved to Linux or BSD as the default choice?

The reason is clear. The unix "small tools connected by text based pipes" philosophy has won. Whether disaffected powershell users find it clunky or not is not the point. Unix tools are the lingua-franca of developers everywhere.
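That philosophy in one hedged example - finding the busiest client in a made-up log with four single-purpose tools joined by pipes:

```shell
# extract column -> sort -> count duplicates -> rank -> take the top line
printf 'GET /a 10.0.0.1\nGET /b 10.0.0.2\nGET /c 10.0.0.1\n' |
  awk '{ print $3 }' | sort | uniq -c | sort -rn | head -n 1
```

Each stage knows nothing about the others; text on a pipe is the whole contract.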

There are many reasons for this but the most important is that the commandline toolset for managing a machine from a remote terminal has always been the most important UI for unix over the last forty years. You can call it "dated". The industry calls it "the standard".

The irony here is that Microsoft itself has come to the realization that they have to follow where the developer ecosystem is inexorably marching en masse. There will of course be a certain percentage of holdouts clinging to their PowerShells. But now they'll be fighting not only the Linux'ers and BSD'ers, but Microsoft management itself.


regedit would like to have coffee with you.


Or you could use the PowerShell command line. http://blogs.technet.com/b/heyscriptingguy/archive/2012/03/1...


You can't polish a turd, but you can roll it in PowerShell?


Powershell will need some kind of text editor though, even if it's just something as simple as nano.


Edlin. The Standard Microsoft Text Editor.


> Edlin. The Standard Microsoft Text Editor.

One line is enough for everyone.


> Hmm. They can now have a path to obsolete cmd. As long as they ship a decent ssh client with the system, users will become accustomed to ssh-ing to their own box instead of using cmd.

Baby steps! SSH is a transport protocol, not a shell. If you SSH to your own box, you're just going to get PowerShell.


SSH clients are also typically terminal emulators, though. The idea here would be to have an ssh client which acts as a better terminal than cmd.exe does. The effect of this isn't about whether you get access to a better shell, but instead about whether you have a better terminal with which to talk to that shell.


The definitive SSH client isn't though. PuTTY, SecureCRT and the like are terminal emulators with SSH built in, but those are not the standard.

One of the big missing features in Windows is lack of SSH transport. Sure, its terminal emulator is adequate at best, but it does have one. The major news here is that it will finally be possible to remote into a Windows host the same way we remote to everything else.


FYI: http://www.hanselman.com/blog/Console2ABetterWindowsCommandP...

Will happily run a cmd window, or other command you supply (e.g. I run a cygwin zsh)


I recommend checking out ConsoleZ, which is a fork that looks very promising and has a GitHub repo with more traffic than the SourceForge console2 project.


For anyone looking for old and mature options, xterm on a rootless Cygwin X server works quite well.


... which is AWESOME.


> there was no point under Gates or he at which Microsoft felt like a pro-developer company.

That hasn't been my experience. Microsoft has always been supportive of my compiler company, even providing Microsoft tools to be bundled with it. In fact, a huge reason MSDOS was so enormously successful was the ease with which anyone could (and did) write and ship software for it. Microsoft was well aware of this and supported it.


There's a slight difference between "pro-developer" and "pro-company-that-develops-things". Microsoft has always provided excellent corporate support. If you have an issue and you're paying lots of money, Microsoft will move ones and zeros to fix your problem. Early and recent Microsofts, though, have a friendliness toward the people actually writing the code that middle Microsoft didn't: better documentation, better quality-of-life features (ssh, headless, Visual Studio), more listening to the community. I mean, ssh? A build specifically intended for the Raspberry Pi form factor?


"Hmm. They can now have a path to obsolete cmd. As long as they ship a decent ssh client with the system, users will become accustomed to ssh-ing to their own box instead of using cmd."

I'm not sure I understand what you mean here ...

My OS X desktop/laptop (for instance) have ssh on them, but I almost never ssh to them ... I just open up a local terminal window.

Genuinely curious as to why one would ssh to their windows system rather than just run cmd.exe ...


It is SSH versus RDP, not SSH versus cmd.exe.

Not all of us who manage (partial) Windows environments run Windows or care to click around a whole bunch of times if we can just use the terminal we've already got open & focused.


Genuinely curious as to why one would use cmd.exe instead of PowerShell?


Because cmd.exe, or command.com, has been around for over 30 years, and lots of people have learned it. PowerShell is nice but a couple of decades overdue.


This feature is more useful on Windows Server.


Cygwin has all of those things and ssh server support.


I don't know many sysadmins who would use Cygwin to handle remote access to production servers.


A while back when I was responsible for Windows boxes they ran ssh and RDP; there were plenty of times when machines were un-RDP-able but I could still SSH in and fix the sadness.


They're over at /r/sysadmin.


/r/sysadmin is more populated by the winadmins who would give up their gui only after death.


there's a dude there asking how to pick an email address. i'm pretty sure it's serious.

http://www.reddit.com/r/sysadmin/comments/38ba26/professiona...


Fair. But I couldn't target my crappy roguelikes at cygwin - requires complicated third-party installs.

I ran cygwin-with-ssh heavily for a while in 2003. It was a hell of a thing to get it going. Then I had trouble getting it to work on a new computer and gave up. Shouldn't be surprised that it got good again.


Try Babun [1] for a pre-configured Cygwin setup with an easy install. I've been using it for the past month or so and it works great.

[1]: http://babun.github.io/


That looks amazing. Thanks for the link.


Alternatively MSYS + MinGW + GCC / MSVC.


> complicated third-party installs

Wait, isn't that nearly everything you ever install on Windows? Not complicated per se, but everything is third party and not signed by people maintaining Windows (as with most GNU/Linux distributions).


Chocolatey helps with at least the "complicated" part. Has helped me fall in love with Windows (7) again.


Don't MSI packages help?


Wishlist:

mobaxterm - http://mobaxterm.mobatek.net/


> Wishlist: tmux, emacs, vi, netcat, shell option for vi-mode, rc-file with preferences, ncurses library, something simpler than curses, zip and unzip.

First, let's have a copy tool with delta-sync support. I can't believe I have to use a cygwin-based rsync to quickly update my backups throughout the day (currently: shut down Virtualbox dev VM, rsync changes, start VM).

I understand the MS way to do it is DFSR, but for a single developer workstation that's overkill.
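For what it's worth, the backup workflow described above, sketched with any rsync (cygwin's included); the paths are hypothetical, and --no-whole-file forces the delta algorithm even for a local destination (rsync normally only uses deltas over a network):

```shell
# Degrade gracefully on boxes without rsync installed.
command -v rsync >/dev/null || { echo "synced"; exit 0; }
mkdir -p /tmp/vm /tmp/backup
dd if=/dev/zero of=/tmp/vm/disk.img bs=1024 count=64 2>/dev/null
rsync --inplace --no-whole-file /tmp/vm/disk.img /tmp/backup/   # first run: full copy
printf 'small change' | dd of=/tmp/vm/disk.img conv=notrunc 2>/dev/null
rsync --inplace --no-whole-file /tmp/vm/disk.img /tmp/backup/   # second run: delta only
cmp -s /tmp/vm/disk.img /tmp/backup/disk.img && echo "synced"
```

--inplace rewrites only changed regions of the target file, which is exactly what you want for a huge VM image.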


Why not use robocopy?


AFAIK robocopy doesn't do delta-sync of file contents, only of a group of files. So if I want to copy a 50GB file where 10MB has changed, it'll still copy the whole 50GB again.

I would love to be wrong.


Just yesterday I was talking with my friend about whether Microsoft would ever support SSH, and voilà, it's done.

I am a Linux guy who did his internship at Microsoft. That was the first time I appreciated PowerShell, and I've been in love with it ever since. Its object-oriented nature makes things easy, fluid, understandable. I can never run the Linux command line without Google, but with PowerShell it's possible, thanks to the way it manages help pages.


Never thought I'd say this... but I can foresee a future release of Windows being touted as POSIX compliant.


It is, but only in some editions:

Interix release 6.1 is available for Windows Server 2008 R2 and Windows 7 for the Enterprise and Ultimate editions.

http://en.wikipedia.org/wiki/Interix


It already was. In the NT days.


Yes, they even had a slogan "Windows NT is a better UNIX than UNIX".

Which never rang really true, but NT had promise.


Could you be mixing this up? There was a heavily used phrase, "OS/2 is a better Windows than Windows", during the Windows 3.0 era, because the OS/2 Windows subsystem was far more stable than Microsoft's Windows (and also - as it happens - a good deal more stable than the rest of OS/2).


I'm not making it up; if you google the phrase "better Unix than Unix" you'll find a number of references to these words. However, it probably wasn't a direct quote by Bill Gates, as was claimed.

http://www.uniforum.org/publications/ufm/aug96/unix-nt.html

https://books.google.fi/books?id=g0CPF6MEFcUC&pg=PA1&lpg=PA1...


Thanks. Bizarre.


Supporting developers means making their lives easy by giving them streamlined systems and frameworks, not throwing a hodge-podge of command line utilities at them or exposing them to the messy bazaar of the FOSS world.

I'd say Microsoft has been more pro-developer than most and I'm super glad that I chose to live primarily in their ecosystem, because they make my life easy compared to the mess I have to put up with in Unix-land.


I would say that's a minority view at best.

There are two worlds in desktop software development – the "open-source-unix-y" world, and the "Microsoft" world. The former has typically meant having a broad range of excellent tooling; it's certainly been a bit messy, but that's mostly because of the ease with which things have been able to change and develop. Even development in the Apple sphere has always involved the same basic tools.

On the other hand, development using the Microsoft platform has traditionally been an entirely different process and toolset. It's cleaner, in the sense that there's less diversity and a greater focus on compatibility (to some extent). But the tradeoff there is that it's often been slow to keep up with developments in the rest of the industry, and did not allow users access to open, standard tooling that other developers had.

That is changing now, and we'll start to see the best of the traditionally locked-away Microsoft platform make its way into open-source platforms, and vice versa. That's not a bad thing, but I think it's pretty foolish to consider their past stance as 'developer-friendly', especially considering the number of frameworks and toolkits they've pushed and abandoned over the past decade. Hopefully, that is changing.


The dichotomy has long been "large number of single-purpose tools that can be combined" versus "single monolith that could theoretically be scripted but rarely is". The monolithic system doesn't have to worry about mismatches, seams and fragmentation, but is also brittle.


> ... that's a minority view at best.

Not at all - go ask any devops/IT group running in small, medium and large businesses to replace their Microsoft tools with a competitor's (if you can find one) or FOSS tools and they'll laugh in your face because of how easy Microsoft makes things for them.

> But the tradeoff there is that it's often been slow to keep up with developments in the rest of the industry...

Considering that Microsoft kicked off Web 2.0 with the invention of XMLHTTPRequest and the iframe, I'd have to disagree.

Your whole argument is closed == bad and that's simply not true.


> Not at all - go ask any devops/IT group running in small, medium and large businesses to replace their Microsoft tools with a competitor's (if you can find one) or FOSS tools and they'll laugh in your face because of how easy Microsoft makes things for them.

IT is a bit of an outlier. But I must admit, I've never encountered any self-described 'DevOps' who uses anything but Unix. Different spheres, perhaps.

> Considering that Microsoft kicked off Web 2.0 with the invention of XMLHTTPRequest and the iframe, I'd have to disagree.

That's a bit shallow – every technology company has developed some tools and techniques that were ahead of the curve. It doesn't mean that their entire platform is.

> Your whole argument is closed == bad and that's simply not true.

No, my argument is that closed has a very large downside, and that if you don't bear that in mind it's going to harm you later.


I don't think of IT as an outlier here because every IT department I've dealt with for a long while either contains programmers (usually in small/medium sized companies) or works closely with groups of programmers to keep their apps running. That's what I call devops, maybe I have the wrong definition though.

The argument about Ajax coming from Microsoft is a bit shallow. (Sorry that's all the effort I felt like putting into it at the moment.) Here are a couple other thoughts along the same vein:

1. Microsoft has done extensive research into many areas of tech that are just now blooming, such as mobile and tablet computing. They had a general purpose mobile OS with multiple third party app-stores before any of the modern industry players. They had tablets. I think the industry caught up to Microsoft while they were busy making money elsewhere.

2. Look how quickly Microsoft can pivot into doing the kinds of things that Amazon AWS, Google and Apple are doing. I think it's a testament to how "there" their platform is already.

> ...closed has a very large downside...

It can be a huge upside too. Developers have been making lots of money off of tightly controlled, closed software platforms like iOS and Windows for a long time.

The web is the only completely free open source "platform" that I can think of that is a huge hit with programmers and that people generally use. However, in my opinion - programming it sucks compared to the closed, native systems.


I appreciate the courage of posting what seems like an unpopular opinion, but I can't agree with you. The messy bazaar of the FOSS world has been overlapping with the Windows ecosystem for some time, even high profile projects from Microsoft like ASP.NET MVC have been open source for years now.


Wonderful.

I am a big supporter of Powershell, and while Powershell has supported remoting since almost day one, it will never enjoy quite as much support as SSH already receives (e.g. third party tools, firewall support, etc). It is also nice that they're looking into using something fairly "proven" secure, OpenSSH is exposed to the internet a lot (even if, yes, that is not best practice) so we can reasonably expect it to withstand day to day attacks.

In general people really are starting to run out of reasons to "hate" Microsoft. It will be interesting to see what they come up with in the future...

PS - I really hope later they expand this to SFTP support. SFTP is significantly better than either FTP or FTPS, and something Windows has lacked since forever.


I've been historically what can be described as a Microsoft hater and I have to tell you, missing official SSH support has never been a hate generator, and insinuating otherwise is insulting.

The first reason for which I have historically hated Microsoft is because of how they fought open standards. I can't believe that Microsoft changed in any meaningful way when I can't get a Lumia phone or Outlook to work with CalDAV / CardDAV. And that's just one current annoyance, as we can always talk about ODF, OpenGL and others.

The second reason for why I hated Microsoft is for their funding of SCO's lawsuit for the ownership of Unix. That was a long time ago, they must have changed right? Except that currently they are behaving like a patent troll, extracting profit out of Android through what can be described as racketeering, more profit than they do from Windows Phone.

The third reason for why I now hate Microsoft is for how they are (again) pushing for Trusted Computing. It's basically what happens when the OS provider becomes the gatekeeper of what you can install and do with your own computer. I never took this as a threat to personal computing, except that now Apple has made it acceptable. Things like insisting on logging in with Microsoft accounts, or only being able to install "modern apps" through their store (while taking a revenue cut of course) would have been unthinkable only 10 years ago. So thanks Apple for shifting the Overton window, thanks Microsoft for delivering it to PCs.

Of course, "hate" is a strong word. I don't really hate them, I just speak against them. But given how people fill the forums lately with messages of the second coming, I'm wondering what the heck are these people smoking, because I want some.


CalDAV and CardDAV have been supported on Windows Phone 8 since GDR2 as the method to sync Gmail accounts. OneDrive supports ODF via Office Online. OpenGL runs fine on Windows and is less relevant now than ever thanks to Metal and DirectX 12, etc. Oh and the new Outlook app for iOS and Android also has broad support for competitors and open standards. Sure, it's through the cloud, but it works. If you want to punish companies for sandboxing, sure, but hey, how are those iOS viruses and spyware treating you? Even Google's making a kid-safe part of their app store. There will always be ways to develop or hack these devices, but safer, saner defaults are appreciated by the majority of non-technical users.


> OpenGL runs fine on Windows and is less relevant now than ever thanks to Metal and DirectX 12, etc.

I have to express strong disagreement with this entire sentence. OpenGL on Windows desktop continues to suffer greatly from Microsoft's lack of support. OpenGL is more relevant now than ever before due to the dominance of OpenGL ES on mobile and web. Microsoft is starting to support it themselves with WebGL in IE11 and the announced iOS/Android app support for Windows 10. They even joined Khronos Group, but they're still clinging to proprietary DirectX for the Windows desktop, to nobody's benefit but their own. And finally, Vulkan is more interesting than either Metal or DX12 due to being cross-platform, and if Microsoft continues to ignore it in favor of DX12 and we have a repeat of the OpenGL vs DirectX situation it will be a huge shame.


> OpenGL on Windows desktop continues to suffer greatly from Microsoft's lack of support

In what way? I write graphics code for a living and from what I've seen opengl is basically fine on windows. They could do more, sure, but I never felt like they were getting in the way.

I have mixed feelings on the directx thing. In principle I'm not a fan of directx, but in terms of API quality it's vastly better than opengl. Admittedly that's not saying much: opengl's design kind of sucks. State management when almost every state you set is global is a nightmare.


> They could do more, sure, but I never felt like they were getting in the way.

If you try to ship an OpenGL game, you'll find that Windows 8+ shipped with a crippled AMD legacy driver (HD2xxx-4xxx) that has no OpenGL support in it, and it can't really be replaced using the AMD Catalyst installer until the user manually installs a new driver via Device Manager in an extremely tricky way.

There were also crippled Intel drivers without GL support, but as far as I'm aware the new drivers now ship with GL already.


You'd have to take that up with the hardware manufacturers, I guess. It's not like Microsoft writes said drivers and if AMD chooses to submit a DirectX-only driver for inclusion in the OS then where's MS's fault?


I don't believe it was AMD's and Intel's decision to provide crippled drivers. After all, they never really updated these drivers after the hardware became "legacy", and you won't find any package without OpenGL except what Windows 8+ installs via Windows Update. Anyway, I had no intention of blaming Microsoft here: likely they removed GL support from those drivers not to "harm GL", but simply because they couldn't test something that doesn't exist for them.

The other issue is that Windows 8+ would, by default, replace manually installed drivers that had GL with a newer "Microsoft version" without GL, like this: http://answers.microsoft.com/en-us/windows/forum/windows8_1-...

In the end it's Intel's and AMD's fault alone that they don't care enough to put pressure on Microsoft or at least release proper installers. BTW, Intel's "The driver being installed is not validated for this computer" bullshit only proves how little they care about GL on Windows.

The comment above stated "opengl is basically fine on windows". No, it's not. Anyone who has tried to ship or support OpenGL-powered software knows that.


Actually, drivers when you're talking about companies like these are developed with Microsoft involved nearly every step of the way. If they cared about OpenGL on Windows, it would've been avoided quite likely.


> CalDAV and CardDAV are supported on Windows Phone 8 since GDR2 as the method to sync Gmail accounts.

Yes, but it only works with Gmail (and iCloud) accounts, not arbitrary CalDAV/CardDav servers. Some people have had success with adding a fake iCloud account then modifying the server address, but it's not officially supported and really shouldn't be relied on to work properly.

I really hope that Windows 10 Mobile includes official support for CalDAV and CardDAV.


CalDAV and CardDAV are not supported on Windows Phone 8, since I'm not talking about Gmail accounts. Office Online supports ODF because they were forced to, but they continue to lobby against it. The new Outlook app for Android is a privacy-invading and buggy piece of shit.

And the "grandma" argument falls flat on its face when you've got an app store filled with scams, malware and trademark violations. It's also a funny argument given that platforms other than Windows haven't suffered from viruses as much. One could say that if Windows didn't exist, we wouldn't have this argument in the first place.


If lawsuits and not supporting something you like makes you dislike a company, I'd guess that almost all corporations would be on your 'hate' list.

My dislike of MS is quite simple. I simply don't like their products. That doesn't mean I automatically love OSX or Linux. IMO OSX is a mediocre product with a polished user experience and Linux as a desktop is just broken. Unfortunately, I can't say there is a single OS that I like at the moment.


> I'd guess that almost all corporations would be on your 'hate' list.

This is a popular position, yes. But Microsoft in particular have tried to make Linux impossible on a number of occasions over a period of decades, so it'll take a while for the guerillas to come out of the jungle and stop fighting them.

But now they're in competition with the platform that wants to annex all your personal data and the platform that wants a veto over all applications, so they're not necessarily the most hated party in the room any more.


Linux survived because commercial vendors poured in over a billion dollars into making it a viable UNIX alternative. Microsoft's feeble attempts to sabotage it are pretty much irrelevant in that regard.


It's easy to call it "feeble" because it lost, but the worst-case outcome would have ruled the POSIX API was copyright SCO (funded by Microsoft), making it infringement to distribute Linux.

Linux could have survived in peaceful co-existence without the corporate billion. It would have been smaller and more hobbyist. It could not have survived if all the judgements had gone the wrong way.


I think it is intent that counts here, not the overall success of the efforts. Feeble though they may be, they are still attempts at sabotage.


That's the popularity argument. Yes, companies like Apple or Oracle are on my hate list.

I like how whenever we bring up this popularity argument, we are in essence talking only about 2 or 3 companies, the big ones. Here's another picture: http://www.openinventionnetwork.com/about-us/members/


> In general people really are starting to run out of reasons to "hate" Microsoft. It will be interesting to see what they come up with in the future...

I agree with you overall, MS has made GREAT strides in the last year towards moving my needle from "Wouldn't touch if my life depended on it" to "Huh, that's actually pretty interesting". The old MS still shows through with things like how they are handling Win 10 (both in versions and cost), but overall they are looking up. That said, MS still has a long way to go and I still wouldn't use Windows for anything (other than gaming), but some other stuff MS has done does interest me.


> how they are handling Win 10 (Both in versions and cost)

I'm a bit confused by this. It's free for any genuine Windows 7 or 8 user, and $99 to $199 for anyone else. There are two SKUs (Home and Pro), rather than the multitude with XP or Vista. How is this not a reasonable approach? Especially since Windows 10 is supposed to be the last Windows release of this kind.


> There are two SKUs (Home and Pro), rather than the multitude with XP or Vista

Two SKUs, and then a bunch more; whole list currently is: Home, Mobile, Pro, Enterprise, Education, Mobile Enterprise, IoT core. There are also upcoming still unnamed "industrial" SKUs. Oh, and of course MS is going to offer 32 bit versions too.

Also of course there will also be full suite of Windows Server SKUs, because obviously those are completely different from the client OS...


> Mobile

Is for an entirely different class of devices than the others.

> IoT core

Ditto

All the others are price discrimination for organizations. The typical end-user does not have to contend with the list you put forth.


I believe they also have Windows 10 Enterprise, Windows 10 Education and, if we include the phone OS, Windows 10 Mobile and Windows 10 Mobile Enterprise.


This is correct - there are 6 Windows 10 SKUs[1]. 4 if you exclude the mobile SKUs. The "only 2 SKUs" was just a rumour/wish.

I'm echoing speculation I read elsewhere: the reason for the numerous versions is Federal Money. The US government demands that it pays the lowest price a vendor offers an item for. Since Microsoft likes to offer discounts to universities, the government would be entitled to the lowest discounted price on the SKU. Their solution is to make Education & Enterprise versions and maintain the price disparity. Money trumps simplicity.

https://blogs.windows.com/bloggingwindows/2015/05/13/introdu...

(edited for clarity)


> The "only 2 SKUs" was just a rumour/wish.

It's only two consumer-facing desktop SKUs, but that's the same as Win 8.1. It's still down from Win 7, though.


The versions are all pretty reasonable and not significantly different outside of the one thing that defines their SKU.


Enterprise is understandable - businesses may prefer not to use Pro. I don't believe that you can count Windows 10 Mobile/Mobile Enterprise against the count as they are merely part of Microsoft's effort to combine all their platforms under one operating system.


> It's free for any genuine Windows 7 or 8 user, and $99 to $199 for anyone else.

It's free ONLY if you get it in their window (1 year, I think); it just seems unnecessarily complicated when they should just make it free. Also they have 3-4 more SKUs than just Home and Pro (which really should only be 1).

> Windows 10 is supposed to be the last Windows release of this kind

I'll believe that when I see it; also, there is nothing I've seen so far to imply all further updates would be free.


Why should Microsoft just make their primary commercial product free for everyone? I am curious to know what percentage of MS profit is from OS sales, but I assume it's very high?

Apple can make their OS free because they make money on the hardware, which 99% of people buy from them to run the OS on.

And of course linux and other open source is another story.


I don't think Microsoft makes that much money from customers buying their OS directly, but rather from OEMs installing Windows by default on the laptops/desktops they are selling.

Never once have I met a person buying a standalone Windows upgrade or Windows OS. Either they end up buying a new computer, they stick with whatever was installed on their computer (hello, XP/Vista users), or they illegally download the upgrades.


They definitely sell retail copies of Windows, and someone must be buying them, or else I can't imagine the retailers would bother stocking them.

For example, a good amount of the people that have Macs also need to run Windows virtual machines (or Bootcamp), and the primary way to do that is to buy a retail copy of Windows. I can't imagine that's a very high percentage of revenue, but I wouldn't say that nobody buys Windows directly.


If you illegally download upgrades, then you're almost certainly infecting your computer with significant malware.

Do people not care about this?

I'm not a big Windows fan (I've run Linux as my primary OS for years), but I've bought several Windows OS packages over the years for work (testing software).


SYSLOG! For the love of god, please support syslog! I work as a consultant supporting a SIEM, and the amount of hoops we need to jump through to get logs from Windows servers is crazy compared to changing one line in a syslog.conf file. I actually dread when a client says "we're an all Windows environment" because wow initial setup just got that much harder.

And if we want to install a syslog forwarder on their domain controllers... no one ever trusts software installed on their domain controllers. Everyone trusts a single line in syslog.conf.
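For comparison, the "single line" in question, in classic syslog.conf syntax (the collector hostname is of course made up):

```
# /etc/syslog.conf - send every facility at every priority to the SIEM
*.*    @siem.example.com
```

The @ prefix means "forward over the network" in classic syslogd; rsyslog accepts the same line and adds @@ for TCP delivery.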


MS has supported event forwarding since 2003. You can set machines to forward events or have them pulled. The events are XML that conform to a published schema. There is a WMI call that can pull the aggregated events off the collection servers.

I heard this kind of thing from a vendor the other day. It's like people don't even try to learn how it works.

Why are they different? The event log has some transactional guarantees that were required for a specific kind of security evaluation...C2? I can't remember the rest.

The thing is... you don't have to install a client on the domain controllers. You can set up forwarding to some log hosts and collect them there.


> It's like people don't even try to learn how it works

It's much harder to know how it works, for some reason. Information like this doesn't make its way into the community and circulate. On a UNIX system you can poke around /etc and get an idea of the scope of what is configurable. The same is very much not true of the registry and only slightly true of WMI.


You are right. The guy that wrote PowerShell says that UNIX is document-oriented configuration while Windows is API-oriented configuration.

To get into it in any depth you have to approach Windows programmatically. The most power is through C/C++... to be a good Windows admin you need to read the docs about how you interact with different subsystems, even if you aren't going to code against them.


That's a really good point, thank you. I've never really thought about it that way; e.g. the registry is something you should sort of treat as an opaque blob of state (like a Smalltalk image?) that you just mutate over time programmatically, rather than as a declarative set of configurations to bring the machine to at any time.


> You are right. The guy that wrote PowerShell says that UNIX is document-oriented configuration while Windows is API-oriented configuration.

That's an interesting way of putting it. I strongly prefer the Unix way, then (just avoid Turing-complete config languages).


After coding for several years, the two aren't all that different.

I'm not saying that I think that the UNIX WAY is ever going to go away...but I do think that at extreme scale the programmatic approach to configuration makes more sense. Instead of treating every system as a "system" you treat it as a simple programmable node among thousands of others. You are already starting to see Linux go this way with systemd. The stuff that the CoreOS people are doing with etcd, fleet, and flannel is really the future of *NIX.

Please cut me some slack...I'm NOT TRYING TO ARGUE ABOUT SYSTEMD. I'm just saying that it's oriented towards developers using APIs. That's one of the reasons why admins who are used to the "one true way" dislike it so much. And they should have options if they don't want to use it. I'm just saying that cloud-scale deployments are driving changes to infrastructure to make it more "programmable". I'm not even saying it's "right"...it's just an observation.


I think you have it backwards. Sysadmins are not the ones arguing in favor of "the one true way". That is squarely the newer breed of developer/admin devops hybrid that the systemd camp is pandering to.

For sysadmins the *nix way of text in and text out allows systems to be as simple or complex as they need to be, because parts can be swapped, added or removed as needed.

The kernel doesn't care what your initial process is (you can for instance point the Linux kernel straight at the sh binary and be presented with a root shell the moment the kernel is done getting the hardware up and running), and the programs you want to run don't care either.

Thus you can run nix on anything from a dinky single core SoC to a warehouse sized compute cluster.

But systemd panders to the latter while giving the former the middle finger, by ignoring the text-in/text-out loose bindings that have been the core of *nix.


I'm not trying to be facetious...What part of X11 matches the text-in/text-out, small-programs-that-do-one-thing model? Even Linus says that "model" doesn't really apply any more.

Maybe systemd and its authors are whatever...I think that the CoreOS people are demonstrating that programmatic administration is the best model for massive scale. That's my only point.


X11 may be an odd duck out in the *nix world, but then it started as a way to put graphical terminals on mainframes.

As such the server end started out being a beast all its own...

Still more flexible than systemd though, as I can run an X server on, say, Windows to get the UI of a program running on any kind of *nix out there.


That's quite the most carefully hedged "please don't kill me" comment I've read in a long time :)

I suspect there's a scaling issue in here as well. Tools optimised for large numbers of systems are always going to look clunky and overdesigned when used on a single system.

With regard to APIs, I think this is one area where the availability of source is quite critical. "Control Panel" is clearly a tool that manipulates some API, as are all the snap-ins, but because I can't see the source, I can't work out which APIs or registry keys it's actually touching.

(I recently had to fix a WinCE issue by trawling the source to find the right registry key. While Windows has a power user community, WinCE really doesn't, and is subtly different in enough places to cause problems. Google sometimes makes me feel like I'm the only person using CE 7)


Wow. Before I start...that's cool that you are using CE. What are you using it for? (If you don't mind and have a minute.)

I get what you are saying. I don't think that the programmatic model of administration works for running something like PeopleSoft, right? You use large hosts that are very special and benefit from the traditional admin model. It's very much a "pet"...where the CoreOS model is more "cattle".

I'm more ambivalent about access to source as long as there is good documentation and debugging tools. I'm not going to fix a kernel bug at this point in my development. Maybe one day...


Also, to be fair...someone has to write code to turn the text config file into bits in memory.


From what I know, you can use Microsoft RPC to pull event logs or you can install a syslog forwarder, or you can do a combination of these two things (have a Windows syslog forwarder that is not a DC that can pull logs from a DC through RPC). The problem with the last option is, adding another Windows server costs an additional license. Installing a client on the DC doesn't. And there are limitations, so in a huge Windows environment, you're installing several extra Windows servers, each individually licensed, just to forward logs.

I've worked in security with several different SIEMs for half a decade and those are the only options I've ever seen. So if there's something else besides this that is easier, it's not just me who is missing it. HP, Dell, Intel and a dozen other companies are missing it as well.


Secureworks was the vendor I was referring to...

Any place large enough to need a SIEM wouldn't balk at licensing a server or servers if it was explained to them that you wouldn't need yet another highly privileged agent running on a domain controller.

It's not just RPC. That's the thing I've noticed about a lot of security people... they don't even really pretend to take Windows seriously despite its insanely large footprint and exposure at an organization. For what it's worth... most IT people are just as bad. I'm trying to turn that around at my organization. I don't blame people...I get it. It just takes time to disseminate info.

It's called Windows Event Collector. Spread the word.

https://msdn.microsoft.com/en-us/library/windows/desktop/bb4...
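For anyone curious what that looks like in practice: a subscription is described by an XML document that you load on the collector with `wecutil cs`. A rough sketch is below; the subscription name and query are illustrative, and the exact schema details should be checked against the MSDN docs linked above.

```xml
<Subscription xmlns="http://schemas.microsoft.com/2006/03/windows/events/subscription">
  <!-- illustrative name -->
  <SubscriptionId>SecurityEventsToCollector</SubscriptionId>
  <!-- source-initiated: clients push events to the collector -->
  <SubscriptionType>SourceInitiated</SubscriptionType>
  <Query>
    <![CDATA[
      <QueryList>
        <Query Path="Security">
          <Select Path="Security">*</Select>
        </Query>
      </QueryList>
    ]]>
  </Query>
</Subscription>
```

The clients are then pointed at the collector via Group Policy, so nothing extra gets installed on the domain controllers themselves.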


>Any place large enough to need a SIEM wouldn't balk at licensing a server or servers if it was explained to them that you wouldn't need yet another highly privileged agent running on a domain controller.

You'd be surprised at how many organizations will have the management spend $1.5m on a SIEM but the engineers refuse to use another Windows license.

There are ways to collect Windows logs. It's just not as easy as collecting syslog. And syslog doesn't need a user account to collect logs.


Hilarious thought: how much stuff like syslog support would microsoft have to add to start attracting systemd refugees? :D


You CAN export the event log data of course, even remotely. But it's rather ugly and I agree that some standard aggregation (syslog or anything) would be great.


Even the choice of outputs (of which remote syslog should be one) would be a great addition to the Windows event log.


Microsoft has supported event forwarding since 2003. You can set the machines to push events or have them pulled.


>OpenSSH is exposed to the internet a lot (even if, yes, that is not best practice)

Please elaborate.


A VPN with sshd exposed only inside the private network is a much smarter way of handling remote access. (But it's initially harder, so lots of people don't do it.)


You're just trading sshd bugs for VPN bugs, in that case. Which are more likely? From what I know, I think I'll put my lot in with sshd. Perhaps I'm not well informed, though.

Also, sshd + fwknop (port knocking) is a very secure combo, IMO.


With openvpn (which is just ssl) you can simply not respond when someone presents a bad key.

openssh always responds (unless I've missed a recent feature), thus exposing which port it's listening on and that you sent a bad key.

The silent failure is preferable for this application.

Port knocking gives you a roughly equivalent layer.
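For reference, the OpenVPN behavior described here comes from the `tls-auth` directive: control-channel packets that aren't signed with a pre-shared HMAC key are dropped without any reply, so a scanner gets nothing back. A minimal sketch (file names are illustrative):

```
# server.conf -- silently drop packets lacking the shared HMAC signature
tls-auth ta.key 0      # key direction 0 on the server

# client.conf
tls-auth ta.key 1      # key direction 1 on the client
```

The key itself is generated once with `openvpn --genkey --secret ta.key` and distributed to both ends.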


And both these things you never do in production, because it takes little effort to establish there must be a port open, whereas the cumulative time you'll spend tracking down whether it's a network issue or bad/wrong keys is just not worth it.

Not to mention: nobody's going to be brute-forcing properly generated keys remotely. And if they're not properly generated, you have much bigger problems.


openvpn, which is just ssl, was vulnerable to heartbleed.

Like the GP said - you're trading VPN bugs for SSH bugs - and experience shows that betting on SSH is generally wiser.

If you only need TCP/DNS and not a full-blown VPN, a program called sshuttle uses ssh+python to provide excellent seamless poor man's VPN. It's not perfect - e.g., you lose the ip src address on the forwarded connections - but it works amazingly well, much better than e.g. openvpn and most other vpn products I've used.


PubkeyAuthentication and disabling password logins helps a lot too. I've also been using deny_hosts a lot over the last couple years as an extra layer.
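Concretely, that's a couple of lines in `sshd_config`:

```
# /etc/ssh/sshd_config
PasswordAuthentication no           # no brute-forceable passwords
ChallengeResponseAuthentication no  # close the keyboard-interactive path too
PubkeyAuthentication yes            # keys only
PermitRootLogin no                  # or "prohibit-password"
```

After a `sshd` reload, password guesses fail before they start; only holders of an authorized private key can get a session.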


That solution is secure, but I used the word "smarter" to describe the use of a VPN intentionally. You are introducing nonstandard practices (everybody knows what a VPN is and everybody's already able to deal with them) with nonstandard behavior (ever tried to debug fwknop? because I have) and creating an obscure way for your stuff to fail (and it will, at 3AM, and you will not be able to Google your way out of it).

And you're also manually creating SSH tunnels to do anything else inside that network, so you've got that going for you, too.

It's 2015. A VPN is the settled method for accessing sensitive services inside a private network. Doing otherwise may create great nerd cred but doesn't make you do your job better.


Why not just do away with port knocking and use keys?


sshd would still be authenticated. It's defense in depth.


the code for the VPN is better because...?


The reason that no one has given yet is that you can monitor more carefully when you have a VPN. With a VPN, you can follow the bastion model; the VPN server runs only the VPN code and nothing else (with as much crap removed as possible), with a really restrictive SELinux policy, every tiny error logged and forwarded.

So now, since any attack has to be through the bastion, and all bastion errors are looked at by a human (because there should be only a few of them), then you're more likely to notice a breach more quickly because it won't become caught up in your general logs.

It should be noted that it's as easily possible to have the bastion server run an SSH server, and allow SSH access to other servers on your network.


It's not necessarily better. But now you have two layers of security (VPN + SSH), rather than just one.

And as others have mentioned, either way it's still good practice to disable password auth, so that you can only connect to SSH using a public/private keypair.


Even better, don't use default ports. Or have a second sshd on port 22 that doesn't accept any IP address, and then have your non-standard port with keys, limited usernames and, if possible, a whitelist of your IP addresses.


If you do this, keep it in the privileged range (< 1024) or you run the risk of your ssh server crashing and some malicious normal user binds to your unprivileged port with a fake sshd and grabs your root password.
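Sketching the parent's suggestions in `sshd_config` terms, with the alternate port kept in the privileged range as advised here (the port, username and address range are placeholders):

```
# /etc/ssh/sshd_config
Port 922                        # non-default, but still < 1024 so only
                                # root can rebind it if sshd crashes
PasswordAuthentication no       # keys only
AllowUsers alice@203.0.113.*    # limited usernames, whitelisted source IPs
```

`AllowUsers` accepts `user@host` patterns, which covers both the "limited user names" and "white list of your IP addresses" parts in one directive.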


Not using default ports will mildly confuse automated scans and do absolutely nothing to a determined attacker. Or somebody with nmap, which is not the same thing.

If you're whitelisting IPs, you may as well run it on port 22.


No, it makes it harder and more of a pain. Trust me, I have a friend who loves breaking into my personal server. That one trick, two sshds running on different ports, screwed with him for a long, long time. He is a genius of a hacker and has been doing it for a living for years. When he finally got in, he was so pissed that that had thrown him.


You are describing an anecdotal instance of a person whose capabilities are not established being thrown by something that nmap will catch on a normal scan.

Color me skeptical. I shall decline to "trust you."


Scanning the internet isn't that slow

https://www.youtube.com/watch?v=UOWexFaRylM


nmap-ing the whole internet isn't very fast


> Trust me I have a friend who loves breaking into my personal server.

Sterling work establishing your own competence there.


Not my competence, it's his competence I trust, and I got him good with that one, since it never occurred to him that one stupid trick would mess with him for so long. Like 5 minutes a month.


One small benefit of using a non default port is that it keeps down the noise from automated scans. So any "real" suspicious activity will now stand out as it is not drowned out by the noise anymore.


If there is a defect in the VPN and you gain arbitrary execution, then you'd have access on that system as whatever user the VPN was running as. An attacker doesn't necessarily have to break both the VPN and SSH.


The machines running your VPN are presumably not the same machines you're trying to access via SSH. A compromise in your VPN shouldn't give anyone execution access to anything interesting.


If VPN is broken, you still need to break SSH


Why is SFTP > FTPS? Any place I could look for some info on this?

Are you talking about firewall easy-to-setup?


SFTP is file transfer over ssh. FTPS is FTP over SSL.

FTPS being FTP-based, and FTP being a terrible protocol, SFTP > FTPS definitely. You get all the advantages and security of SSH with SFTP.


From what I have seen, FTPS seems as secure as SFTP if well configured (i.e. no fallback). Why do you say FTP is such a bad protocol?

I have tried to look it up, but the answers seem to revolve around the firewall being hard to set up and FTP being not secure (which is resolved by encapsulating it in SSL).

Also, my search suggests that ssh adds overhead compared to using SSL for file transfer (i.e. http://serverfault.com/questions/131240/ftp-v-s-sftp-v-s-ftp...)


> Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive.

Is that code for Ballmer's regime vs Nadella's regime?


It's pretty much an open secret that Ballmer was one of the last reasons why Microsoft did not participate much in the OSS community, even though a lot of employees wanted it.

To give a concrete example of the changes (heard from a former Microsoft employee): Under Ballmer they were not allowed to touch anything open-source. Pretty soon after he left that was changed to a policy that a BSD/MIT open-source solution must be used if it is available unless there's a damned good reason.


Ballmer had a BAD case of "not invented here". I remember reading a story about him calling out and mocking an employee in a meeting because they had an iPhone (this was around 2008, I think). First off, it's a VERY GOOD IDEA to use your competitors' products, if only to see how they stack up (spoiler: they blew away MS's mobile offerings in 2008 and still do to this day). Second, what a way to make all of your employees yes-men who live in their own world, shut off from what was happening in the real world. He was a douche all around from what I can tell, and him leaving is one of the best things that could have happened to MS. Ballmer took over MS soon after I really got into computers and let it stagnate (XP/IE6), which led me to switch to Linux/OSX, and I've never looked back. I'm not saying it's all his fault, but I think he was extremely influential in decisions that ultimately pushed me away from MS.


I moved away in 1999 to the Mac, and because of how incredibly expensive Macs are nowadays, I'm probably going to make another move, to Ubuntu.

Still, Steve Jobs was not an easy guy either. I don't know if he would mock an employee for having an Android phone? Not unlikely given how anti-Android he was.


VERY good point and honestly one that I hadn't thought of. I'm not aware of any publicised cases of Jobs calling out an Android phone but I would believe it, he was an asshole. IIRC he was so anti-android b/c he saw Android as a blatant rip-off of iOS (not weighing in here on either side).

As far as cost of macs I think it's still worth it even if you want to run Ubuntu as your OS b/c they hold their value better than any other laptop I've seen on the market. They also resell FAST which is very nice.


I buy my computers to use them, not to resell them. At this point my Macbook Pro is worth maybe $300, and I'd be more likely to gift it to a relative than sell it in the first place.

I also found it to be a huge pain to install Linux on it, which is why all my Linux computers are Thinkpads (with the exception of the old HP workstation I'm typing this from).


I read it more as a barb against Steven Sinofsky, former president of the Windows division, former internal competitor for CEO, and alleged naysayer of open source.


This has been my long-time dream. There's so little that prevents this from happening, theoretically (obviously there's a lot of coding to be done, but hey, that's the fun part). I am very glad Microsoft is taking all the right steps to bring the two worlds closer: Linux and Windows.

With all the announcements around stuff like Docker for Windows Server Containers and cross-platform .NET, this was nearly inevitable. Now server management is also steering in the right direction.

Disclaimer: ms employee doing tons of open source.


http://www.microsoft.com/en-us/download/details.aspx?id=2391

If I am forced to use Windows in an enterprise setting, then I just go to Control Panel and enable the POSIX layer ("SUA"), then download the SDK and install. With some minor changes to %PATH%, it just works.

SUA has older versions of tcsh, ksh, vi and many other utilities, including an older Perl and an old GCC toolchain that does work. It is 4.2BSD based. If you are at home on BSD, it is like going back in time.

netcat, tmux, emacs, etc. you would have to compile yourself. Maybe OpenSSH would compile and run. I have not tried.

Perhaps an alternative to Cygwin, etc. Not "better" but different. It generally "seems" faster and I find it's more difficult to "break" than Cygwin which in my experience can be very "delicate". The SUA White Paper says SUA comes to within 10% of the speed of native Windows.

The main advantage though, for me, is that this is not "unauthorized third party software" to the extent it comes with Windows and the SDK download comes from Microsoft's Akamai account.


Afaik SUA is pretty much dead, so don't hold out much hope of seeing improvements to it.


I just download GnuWin which gives you most Unix tools compiled natively for Win32. Includes OpenSSH too.

http://gnuwin32.sourceforge.net/


From the look of the website, GnuWin is terribly out of date, and packages like OpenSSL (I don't see OpenSSH listed) will have lots of known security vulnerabilities.


SUA was deprecated in Windows Server 2012 and Windows 8, and MS actually recommends using cygwin or mingw as alternatives to it.


I will keep using it as long as it is there.

I hate using Windows and I have never been one to follow MS "recommendations". Are you kidding? I do not work in an IT department.

I used MSYS and Cygwin for many years. Now I use SUA.

The less I have to use Windows the better. It dulls the mind.


When I am forced to use Windows, I immediately download Cygwin.

I am almost tempted to congratulate Microsoft and welcome them to 1995, although I am quite sure Cygwin didn't have ssh back then.


I happily await the day when I no longer need to install PuTTY on all of my fresh Windows installs.


They really need to announce a conhost replacement.

The "experiments" in Windows 10 are undeniably improvements for when we still use things like cmd, but Powershell needs a modern terminal window and there needs to be a native answer to Putty for both SSH and Powershell remoting.

I'd say after SSH this should be a priority for the Powershell team. Conhost is holding them back. A lot.


conhost is getting some upgrades in Windows 10: http://www.hanselman.com/blog/Windows10GetsAFreshCommandProm...

Given that a lot of new development in the Windows world is focused on the console, I expect that we will see more enhancements in the future.


One thing that I'm not 100% clear on: have they fixed multi-line selection in the Win10 command prompt?


Yes


Try ConEmu, it's a very good terminal window.


Isn't ISE pretty much what you are asking for?


It tries, but I think ISE is super clunky and awkward to use. It lacks the simplicity of a bog-standard terminal while not adding features to balance out its clunkiness (akin to the difference between a text editor and an IDE--the IDE had best add something I can use if I'm going to take the hit).


Could you expand why you find ISE clunky and awkward compared to standard terminal? I mean you can just type in stuff and get answers back with fairly minimal interference as far as I can tell, just like a regular terminal.


I would suggest you go look at iTerm2, because an example is worth all the words in the world. tmux integration, multiple panes in a window as well as tabs, easy buffer search, instant startup, everything.

A shell host should open and do its absolute utmost to get out of my way. ISE does not do this. It's just...it's what I expect 2008 Microsoft to think a terminal should be, lacking empathy for me as a user. And maybe that's intentional--maybe I'm not the target audience. Maybe it's for mouse drivers who are forced to the CLI in extremity. But it's tasteless and it's hindering where it really must not be.


Those sound nice-to-have features, but (personally) I don't think the lack of those makes something "super clunky and awkward to use", especially in comparison to standard terminals. Do you think xterm is too super clunky because it lacks all the bells and whistles?

I still do not see where the lack of empathy and tastelessness is apparent in ISE.


I don't think xterm is clunky, no. It doesn't try to help me, but it also doesn't get in my way. iTerm2's feature set does not increase the friction of dealing with the shell inside of it. Literally every time I have to open up ISE, I go, "jesus, why the hell is it doing things?". The experience is just straight-up repellent. It's hard to really describe past "I'd rather click a mouse than use this." And that never happens elsewhere.

But anyway, your "nice to haves" are my "it's 2015, I'm not wasting my time with less." Microsoft's got more money than God, they can do something worth the time of day.


Honestly, I'm getting really interested in what you find so offensive in ISE. I mean, I can see it being not so great, but being straight-out offensive I can't really understand. It is perfectly understandable if you don't like PowerShell (it is kinda weird). But ISE? The most different thing in it is a "fancy" tab-completion, and that can hardly be the reason for such deep hatred.

And I'm not saying ISE is the best thing ever or anything, but it is a significant step up from conhost, and honestly not really a lot worse than xterm in many ways.


What is ISE? Some cursory googling makes it out to be some kind of Cisco router thing.


The Windows PowerShell Integrated Scripting Environment

https://technet.microsoft.com/en-us/library/dd315244.aspx?f=...


Powershell doesn't support interactive console apps (those that want user input), so it's really not much of a cmd.exe replacement.

http://powershell.com/cs/blogs/tips/archive/2012/12/12/block...


To be clear - the ISE (powershell_ise.exe) doesn't support interactive console apps. powershell.exe supports them just fine.


Yes, I see you are right, thanks for the clarification. The base powershell.exe with a better window wrapper like cmder actually should make a nice combination.

It's really too bad about the limitation in the nicer ISE, though. It does make a bad first impression coming from Linux, when trying to compile and run a simple interactive console app while using ISE as a cmd.exe replacement.


In a recent HN thread, someone pointed me to MobaXterm (http://mobaxterm.mobatek.net/) and I have gladly deleted PuTTY.


I use console2, with a cygwin zsh, and then run command line ssh inside that. Sounds a bit convoluted, but works really well and gives me the experience I desire, where remote shells and local ones look and act the same.


Been using it at work for the past few weeks. Really do enjoy it. Does include support for X11Forwarding, VNC, SFTP, etc.


Awesome.

I've been stressing over finding a Windows SSH client that supports ed25519, and even though SecureCRT recently announced support for ECDSA, they do not have a timeframe for ed25519.

Off to try out how well this works.


Do you know if it has support for using an external text editor instead of the one they provide?


It does.


Another vote for MobaXTerm.


What's wrong with putty? I've been happily using it for a few years. What have I been missing out on?!



I don't understand. Why doesn't Simon Tatham the original author of Putty just buy a domain, and put Putty there? Looks like he's piggy-backing off-of someone else's domain, and then relying on supposedly odd RSA/DSA signing of the provided binaries? Odd, to say the least.


Or worse, Cygwin, because of some obscure process that needs to connect via ssh to your Windows server...


Cygwin comes with openssh-server [1]. I'll admit cygwin is quirky and not the best option always but I've used it to host ssh server on Windows in the past.

[1] https://cygwin.com/faq.html#faq.using.sshd-in-domain


Agreed here. Installing and configuring Cygwin has traditionally been a royal pain; Microsoft offering some equivalent as an officially-supported Windows add-on (or just part of the core Windows) would be a godsend.


Just install Cygwin with MinTTY and be happy.


[deleted]


While I agree that a native terminal/ssh client is the best solution, putty is very much maintained still: http://www.chiark.greenend.org.uk/~sgtatham/putty/changes.ht... last change was in February of 2015


Oops, my mistake. Should have fact-checked!
