The Man Behind Windows PowerShell (heavybit.com)
167 points by sbrown12 on Sept 14, 2017 | 128 comments



I've recently written a couple thousand lines of PowerShell.

It's a miserable language, full of unexpected behaviors and badly designed features. Oh, it's got some interesting stuff going on, but things like:

- including the text of 'echo' (and stdout generated by invoked tools) in a function's result was a big surprise. You wind up piping stuff to "Out-Null" a lot in defense of this, since a lot of Windows tools are stupidly chatty ("The Blffgh command succeeded with status 0!") and that's awkward.

- so was the "unwrapping" that happens when a function returns a list: return [a, b] and you get the list you want; try to return [a] and it rips off the list and just returns the inner 'a'; and I think an empty list results in null . . . just wow.
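A minimal repro of that unwrapping, as observed on Windows PowerShell 5.x (the function names here are made up for illustration):

```powershell
function Get-Two  { return @('a', 'b') }  # two elements survive as an array
function Get-One  { return @('a') }       # one element is unwrapped to a bare string
function Get-None { return @() }          # an empty array collapses to nothing

(Get-Two).GetType().Name   # Object[]
(Get-One).GetType().Name   # String
$null -eq (Get-None)       # True
```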

The sole reason I use PowerShell is because it's good for mucking with OS-level objects in Windows, stuff that I would otherwise have to write native code to frob. And sometimes that is actually kind of attractive.


I write hundreds of lines of PowerShell every day, and quite like it as a language. Coming from a C# developer background, I struggled when I first started with PowerShell, but I believe this was mainly 1: the many quirks and issues with the early versions of PowerShell (this was back in PowerShell v1 days), and 2: that I was trying to use PowerShell as if it were C# (or a similarly 'structured' language).

Yes, the output from calling native commands within functions can be a little tricky, and it annoyed me initially. But once you understand the concept of passing objects around, not strings, you quickly get used to either capturing the output into a variable ($BlahResult = .\blah.exe /foo bar /baz 2>&1), where you can then access the string output (stdout) and any ErrorRecord (stderr) lines, or, as you suggested, piping to Out-Null if you don't actually need the output. If you're writing your own 'echo' type statements for debugging or information purposes, you learn to use Write-Verbose or Write-Debug instead of Write-Output (which is what happens by default).

The unwrapping example was something that got me initially, because I was used to defining a collection (array, List, whatever), populating it, returning it, and then having the calling code iterate over it. "The PowerShell way" is to not focus on the collection/array at all, but to let the pipeline take care of that and deal with the items themselves. If you pipe the output of a function to Foreach-Object, it will work correctly: if a single object is returned, that's what will be $_ in the loop; if there are multiple, they will all be passed in one-by-one as $_. If you actually want to access the array itself (for example, to count the number of items), then you can enforce the array type by wrapping it in an additional array (i.e. @($Users).Count).
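Both idioms in one small sketch (Get-Users is a hypothetical function standing in for anything that returns zero, one, or many items):

```powershell
function Get-Users { 'alice' }   # happens to return a single item

# Pipeline consumers don't care whether it's one item or many:
Get-Users | ForEach-Object { "processing $_" }

# But if you need the collection itself, force an array with @():
@(Get-Users).Count   # 1, even though only a single string came back
```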


If you stay in PowerShell I guess that's okay. But I interop with Python and other things a fair amount, via JSON, and have had to write wrappers and guards against the unwrapping nonsense.

But "... quickly get used to" is more readily translated as "Spent an afternoon debugging, found the reason, did a forehead plant on the keyboard, then tried to remember on every invocation, so as not to get burned again."

Python, LISP, Smalltalk and a host of other languages made much better decisions in similar areas.


Powershell can convert JSON objects to .NET objects with ConvertFrom-Json. And it outputs decent JSON by piping objects to ConvertTo-Json. I think few people know about these cmdlets, but combined with Invoke-WebRequest, they offer access to any JSON API.

Since .NET objects can't be copied from one session and pasted through RDP into another, I'll often take an array or other object, and pipe it through ConvertTo-Json into clip.exe, which throws the JSON onto my clipboard. In the RDP session, I'll pipe Get-Clipboard into ConvertFrom-Json, all written into a variable.
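The round-trip looks roughly like this (clip.exe and Get-Clipboard as described above; -Raw keeps multi-line JSON together as one string):

```powershell
# Local session: serialize a few objects to JSON and push onto the clipboard
Get-Process | Select-Object -First 5 Name, Id | ConvertTo-Json | clip.exe

# RDP session: rebuild the objects from the clipboard text
$procs = (Get-Clipboard -Raw) | ConvertFrom-Json
$procs | ForEach-Object { "$($_.Name): $($_.Id)" }
```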

Piping ( | ) sends objects from one cmdlet to another. Many cmdlets and functions have a preset parameter for pipeline input, and it's easy to specify this when making your own functions. Foreach-Object ( % ) can have cmdlets act against each item in the pipeline, with $_ being the "this" variable. Just remember to wrap them in curly brackets, as this marks them as the script block being invoked.

Additionally, Powershell can interoperate with CSV just as easily, with ConvertFrom-CSV and ConvertTo-CSV.

Want a simple API? Here's a one-liner - just host the output with IIS:

While ($true) {Get-Process | ConvertTo-Json > C:\www\ApiFile.html; sleep 60}

For a more interactive API, the .NET HttpListener class can easily be called and built upon.

My favorite part of the language is the type system, where you just put the .NET type in square brackets before the variable. [string]$myString = "Hello World"
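The bracketed type isn't just documentation: it constrains the variable and coerces compatible values on assignment. A small sketch:

```powershell
[string]$myString = "Hello World"
[int]$n = "42"               # the string is coerced to an Int32
$n.GetType().Name            # Int32
[datetime]$d = "2017-09-14"  # parsed via .NET's DateTime conversion rules
```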


Here's another absolutely classic interaction with PowerShell. Let's make a list of strings; I'm partly interested in what CSV conversion does to values that already have commas.

    PS> $x = @('a', 'b', 'c', 'd,xx', 'e')

    PS> $x
    a
    b
    c
    d,xx
    e
Okay, that's great. Now:

    PS> $x | ConvertTo-Json
    [
        "a",
        "b",
        "c",
        "d,xx",
        "e"
    ]
Hey, conversion to JSON works! But let's try the CSV conversion:

    PS> $x | ConvertTo-Csv
    #TYPE System.String
    "Length"
    "1"
    "1"
    "1"
    "4"
    "1"
... and I think I just died a little inside. I didn't even get to answer the original question I had about escaping separators. I have no earthly idea why PowerShell decided to spew the lengths of the elements rather than the values, or what the hell the other gorp in the output is.

The official documentation is not much help. I could go do some research on StackOverflow and so forth, but . . . I've lost the will to live. For this little exercise I've reached the point of uncaring.

This is basically my everyday interaction with PowerShell; try out something new, have it fail in a way that this LISP / Python / C++ / etc. veteran would never expect, and spend 20 minutes or maybe a whole bloody day researching why, or working around the behavior with other tools. I could become an expert in PowerShell, but I'm not interested, I just want things to work in reasonable ways. I've got shit to do. [I'm writing this HN post because in ten minutes I get to dive into some more PowerShell]

PowerShell has repeatedly failed the "reasonable expectation" test. I can't depend on it to be a lightweight and useful tool. Instead, every time I use it I can expect to spend 60 percent of my time working around capricious nonsense, dredging through postings by other people who have thankfully preceded me.


It's about knowing what the function expects to receive.

What you got seems unexpected at first, but once you read the doc on this function (which took me a good 20 secs) you may decide it is well conceived, and realize that it was weird to expect an array of bare strings to just work. I don't even know what it should've done.

Should it have given you a single line of comma separated values? But then what should you give to the function to have multiple lines of csv? An array of array of strings?


"$arrayOfObjects | ConvertTo-Csv" will create one row per object in the array, with a column for each public field of the object. That's exactly what happened, and it seems like a reasonable behavior to me.

Do you have an alternative? Maybe for the edge case where the contents of the CSV is a list of primitives, it should give you a "Value" column? Or did you expect that, given an array of stuff, the best approach is to encode the whole array in a single CSV row?
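For what it's worth, the "Value column" behavior for a list of primitives is easy to spell with a calculated property (the column name Value here is my own choice, not anything built in):

```powershell
$x = @('a', 'b', 'c', 'd,xx', 'e')

# Wrap each string in an object with one property; ConvertTo-Csv then does
# its usual one-row-per-object, one-column-per-property thing.
$x | Select-Object @{ Name = 'Value'; Expression = { $_ } } |
    ConvertTo-Csv -NoTypeInformation
# "Value"
# "a"
# ...
# "d,xx"    <- the embedded comma survives because fields are quoted
```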


> it's good for mucking with OS-level objects in Windows, stuff that I would otherwise have to write native code to frob.

In about one-and-a-half months, earlier this year, I gave my Lisp dialect an FFI, using which I was able to completely translate the MSDN "Your first Windows Program" C example, almost line by line:

http://nongnu.org/txr/rosetta-solutions-main.html#Window%20c...

After a bunch of FFI definitions of constants, structures and functions, the core of code is just:

  (defun WindowProc (hwnd uMsg wParam lParam)
    (caseql* uMsg
      (WM_DESTROY
        (PostQuitMessage 0)
        0)
      (WM_PAINT
        (let* ((ps (new PAINTSTRUCT))
               (hdc (BeginPaint hwnd ps)))
          (FillRect hdc ps.rcPaint (cptr-int (succ COLOR_WINDOW) 'HBRUSH))
          (EndPaint hwnd ps)
          0))
      (t (DefWindowProc hwnd uMsg wParam lParam))))
 
  (let* ((hInstance (GetModuleHandle nil))
         (wc (new WNDCLASS
                  lpfnWndProc [wndproc-fn WindowProc]
                  hInstance hInstance
                  lpszClassName "Sample Window Class")))
    (RegisterClass wc)
    (let ((hwnd (CreateWindowEx 0 wc.lpszClassName "Learn to Program Windows"
                                WS_OVERLAPPEDWINDOW
                                CW_USEDEFAULT CW_USEDEFAULT
                                CW_USEDEFAULT CW_USEDEFAULT
                                NULL NULL hInstance NULL)))
      (unless (equal hwnd NULL)
        (ShowWindow hwnd SW_SHOWDEFAULT)

        (let ((msg (new MSG)))
          (while (GetMessage msg NULL 0 0)
            (TranslateMessage msg)
            (DispatchMessage msg))))))


True, but then you have to develop in Lisp :p

I'm kidding, I'm kidding. But on the other hand, related to your example: the Windows APIs are in my opinion awful. Look at that CreateWindowEx call...


> True, but then you have to develop in Lisp

Sorry to poop your meme frolicking here, but your statement is flatly untrue at face value; other non-native languages exist that provide access to Windows APIs.

Obviously, since I made this for myself at considerable effort in my free time over more than eight years, developing in Lisp must be something I really want, not something I have to do.


Non-native languages with Win32 access: VB 6, VBScript/JScript (CScript), everything on top of .Net, Python, Perl, heck, even AutoHotkey.

Regarding the second part, don't take it so personally, I was just joking. You're also on the internet: if a lighthearted comment such as mine annoys you, a real internet troll will ruin your day...


I agree on those 2 points.

regarding your second point, I don't like this unrolling either.

The trick seems to be, instead of:

    return $mylist
Add a comma in front:

    return ,$mylist
https://stackoverflow.com/a/16122464

This will make sure [a] never gets changed to a, and [] never gets changed to $null


There is a slightly better trick. If you specify the variable as an array, it will always be treated as such.

For example, instead of

    return $mylist
...you can...

    return @($mylist)
The return keyword is unnecessary in PowerShell so this can be shortened to:

    @($mylist)


Yow. That's a great answer . . . for some value of "I wish I didn't have to know that, but I do have to, so therefore it is great." :-)


> "It's a miserable language, full of unexpected behaviors and badly designed features."

Yes, powershell is an insufferable language. It is in good company with every other language ever invented by humans.

"Unexpected behaviors and badly designed features." Are you talking about ... C? PHP? Javascript? Java? Ruby?

The design/implementation faults in C, for example, come at a cost of billions of dollars a year in damages (and that doesn't include the cost of mitigation).

Sure, Powershell has plenty of warts, point me to a perfect language that does what Powershell does as well as it does but without the warts and then people can stop using Powershell.


What are you comparing it to? Compared to Bash, I love the fact that everything gets returned as an object that you can query, rather than as a blob of text that you have to hack on.


You're mistaken. Bash functions and unix commands return an exit status (an integer), never text.


That's for status, not data. A time in bash is a set of characters, not a time.


All the shell does is allow the user to redirect input and output streams to/from system commands. Whether these commands output/read text or binary data or both is irrelevant from the shell's standpoint.


The whole unwrapping the list sound like bugs I hit trying to implement a lisp interpreter.


Or, the language designer thinking he's saving programmers work by just returning the value in cases where there is a list of length one.

However, he's really making it worse because now all return values from that function have to be type-checked to see whether they are scalars or lists.


R comes to mind, where once in a blue moon a matrix result will happen to only have a single column, which R helpfully decides should be turned into a simple vector, which then doesn't have the matrix operations available anymore (like dim()), and kaboom!


The classic story of every interpreted language.


Yeah. I learned LISP pretty early in my career, wrote a few LISP interpreters, and those are indeed the kinds of bugs that I was stomping. PowerShell seems to have turned those bugs into language features. Augh.


You would've loved VBscript lol


I haven't met a single person who likes PowerShell. It's perhaps the textbook example of ugly design that looks technically consistent but is utterly unfriendly and mind-bogglingly verbose. I am not saying the Windows command line is great or that bash is best, but at least those things are designed for humans to a certain extent. I have written a few ps scripts, and virtually every line, every step, every task almost always required googling. I don't know anything about Snover, but my image of the PowerShell designer was someone who couldn't write a more sophisticated parser, and who compensated for that lack of skill and creativity to solve the fundamental challenges of designing an OO shell by offloading complexity and unfriendliness onto users. Basically just the reverse of Steve Jobs.

It was awful when the PowerShell team tried to shove their ugly creation down people's throats by replacing the Shift+right-click menu entry for "Command prompt here" with "PowerShell here". It's not a sign of a good product when you have to force it upon people. The designers of this thing should have been demoted, let alone made "Distinguished Engineer".


Now that I'm a PowerShell expert, the idea of dropping down to a *nix shell that only knows how to pass text around is appalling. Like, "the 1970's called, they want their shell back" kind-of appalling. And so what if it's a few extra characters to script stuff in it? I can make aliases for whatever I want, there's tab-completion for everything, everything is self-discoverable, there's Get-Help and Get-Command (alias gcm) to find everything that's available to me, and, by the way, it's all just a language projection on top of .NET... it's all great.

It's like any other language... until you bite the bullet and really learn it, you don't know. And the idea that Windows isn't built for scripting is from the 1990's. For the last 15 years, there's (almost) nothing that Microsoft ships that can't be automated/scripted/controlled from a command-line.


C is from the 1970's. Lisp from the 1950's. Just because something is old it doesn't mean it's bad. And likewise, new doesn't imply better.


Well, C is pretty bad, compared to modern languages. It's full of buffer overflows, undefined behaviour, etc. Just because something is popular doesn't mean it is well designed.


Man, will this sentiment ever die... You know what Stroustrup said: there are only two kinds of languages, the ones everybody bitches about and the ones nobody uses.

Please guys, demonstrate a better way. Make that better language.

There is a reason most kernels are developed in C, most databases are developed in C, most (AAA) games are developed in C. And it's not "C has the necessary mind share". Man, these people make tons of scripting and DSL languages to get their game done. If they knew a better (more practical) approach than using C in the critical places, they would do it.

Yes, you get buffer overflows and memory management bugs. Much more so if you are a beginner, but also if you're very experienced. But people just haven't found a practical alternative.

How would you even be able to define what a valid buffer region is, if so many functions are basically custom allocators, declaring a certain sub-slice of the buffer they were given (taking pointers / indices), and handing it to the next function? It's just not practically possible to make a formalism that guards against buffer overflows here. You could make a "slice"/"buffer" datatype, but that would just increase the line noise and keep the bugs at the countless locations of wrapping/unwrapping.


> Make that better language.

Rust, Go, Swift, etc. Even C++.

> There is a reason most kernels are developed in C, most databases are developed in C, most (AAA) games are developed in C. And it's not "C has the necessary mind share".

The only thing that is still pretty much all C is kernels. Game engines are usually C++ (a much safer language). Databases are written in all sorts of languages. Sure the big old ones are C or C++, but that's because they are old!

> But people just haven't found a practical alternative.

They have. Garbage collection works in a lot of cases. Rust has its lifetime & borrowing system. C++ has proper smart pointers (finally).

> It's just not practically possible to make a formalism that guards against buffer overflows here. You could make a "slice"/"buffer" datatype, but that would just just increase the line noise and keep the bugs at the countless locations of wrapping/unwrapping.

I take it you haven't used Go. It has slices. They work fine. No buffer overflows.


> Rust, Go, Swift, etc. Even C++.

If you carefully read my post, you noticed C here means actually C(++). And a great majority of developers from these domains actually write C++ like this: C(++).

Rust, Go, Swift. Show me the serious kernels, databases, AAA games. Rust hasn't even descended the hype train.

Garbage collection is a bit like exceptions. It works for some cases, but not for serious engineering of long-lived applications. If you have ever written a Java application with millions of little objects, you know what I mean. For starters, we can not even afford to pay for the extra memory overhead that comes with each object. For the advanced, at some point we want the application to do exactly that: advance, instead of doing GC only.

> I take it you haven't used Go. It has slices. They work fine. No buffer overflows.

These slices all come with the (huge) overhead of adding a reference to the original runtime object, and on top of that each array access is checked. Correct?


> but not for serious engineering of long-lived applications

Oh come on. Does Lucene (or Solr or Elasticsearch built on top of it) not qualify as serious engineering? Elasticsearch is quite successful, and is indeed intended to be used as a long-lived application!

Does this mean that the likes of Lucene don't run into GC issues? Of course not. I've certainly diagnosed problems in Elasticsearch related to GC (which, more often than not, is a symptom of something else going wrong), but saying it's not qualified for "serious engineering" is just patently ridiculous.

And that's only one example. There are loads more!

> These slices all come with the (huge) overhead of adding a reference to the original runtime object

Huh? This is the representation of a slice: https://golang.org/pkg/reflect/#SliceHeader --- It's pretty standard for a dynamically growable region of memory.


Yes I knew I should not pull the "serious engineering" card going in... But there I go, giving a mostly clueless answer to a high-profile HN user :-)

I don't know elasticsearch, but if this is something like a database where millions of objects are tracked (like in an RDBMS, or in a 3D game if coded by an inexperienced coder who likes to isolate everything down to the Vertex or scalar level into "objects"), then I would assume at least one of the following applies

   - The objects in the datastore are not represented as individual runtime object after all
   - The GC for objects in the datastore is highly tuned (GC only done manually, at certain points),
     and the memory space overhead of having individual DB objects represented by runtime objects
     is just accepted.
I mean, I did finish said Java application, but I got good performance from it only after transforming it into an unreadable mess based on SoAs (structs of arrays) of int[] (which means unboxed integers, not objects) and lots of boilerplate code. Would have been easier to do in C, hands down (the language was not my own choice).

> Huh? This is the representation of a slice: https://golang.org/pkg/reflect/#SliceHeader --- It's pretty standard for a dynamically growable region of memory.

and object/GC overhead? It's GC tracked objects after all, right? (again, I admit to knowing next to nothing about Go's runtime)


> and object/GC overhead? It's GC tracked objects after all, right? (again, I admit to knowing next to nothing about Go's runtime)

Go has value semantics. So when you have a `[]T` ("slice of T"), then what you have is 24 bytes on the stack consisting of the aforementioned `SliceHeader` type. So there's no extra indirection there, but there might be a write barrier lurking somewhere. :-)

> I don't know elasticsearch, but if this is something like a database where millions of objects are tracked

Elasticsearch is built on top of Lucene, which is a search engine library, which is itself a form of database. I don't think there's any inherent requirement that a database needs to have millions of objects in memory at any given point in time. There are various things you can ask Elasticsearch to do that will invariably cause many objects to be loaded into memory; and usually this ends up being a problem that you need to fix. It exposes itself as "OMG Elasticsearch is spending all its time in GC," but the GC isn't the problem. The allocation is.

In normal operation, Elasticsearch isn't going to load millions of objects into memory. Instead, it's going to read the objects it needs from disk, and of course, the on-disk data structures are cleverly designed (like any database). This in turn relies heavily on the operating system's page cache!


Maybe it's just me, but the capitalization of the commands is really annoying.


why? it's all case insensitive anyways. plus tab completion puts in the correct case.


Perhaps because one needs to hold shift too often.


again, it is case insensitive, no need to use the shift key.


>I have written few ps scripts and virtually every line, every step, every task almost always required googling.

In what way? I use PowerShell daily and hardly ever have to Google [things that I've used more than once]. I imagine if I picked up another language, I'd be Googling furiously, but that shouldn't disqualify the language.


I find this hard to believe unless your use is limited to "dir this" and "ping that".

Powershell is like a gas station men's room in the middle of nowhere. It's better than nothing but nothing to celebrate.


>I find this hard to believe unless your use is limited to "dir this" and "ping that".

as opposed to bash/posix with its confusingly named utilities?

    want to rename something? it's mv
    want to tell the time? it's date
    why is the command to download something curl/wget?
    what's the difference between if [[ stuff ]] and if [ stuff ] anyways?
the list goes on. and no, man pages don't help if you don't even know which command to look for. at least powershell has a somewhat consistent naming convention for its commands (verb-something) so you can at least sort of guess what the command is.
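Concretely, the verb-noun convention means you can usually guess your way to a command without leaving the shell. A few examples:

```powershell
Get-Command -Verb Rename        # Rename-Item, Rename-Computer, ...
Get-Command -Noun Date          # Get-Date, Set-Date
Get-Command *clipboard*         # Get-Clipboard, Set-Clipboard
Get-Help Rename-Item -Examples  # worked examples, available offline
```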


> want to rename something? it's mv

There is a rename command as well. But apart from the first time I tried to rename a file, I've never had trouble remembering it. There is a good reason for it too: internally it is a move operation.

> want to tell the time? it's date

Is this different in powershell? Some quick googling indicates that Get-Date is the cmdlet you want, and being .net based, everything deals with DateTime objects.

> why is the command to download something curl/wget?

I find this easier to remember than Invoke-WebRequest, or is it Invoke-RestMethod?

If they were so hard to remember that could be aliased to something similar, but I've never seen anyone bother.

Edit - I mean I've never heard anyone going the other way and suggest doing "alias Invoke-WebRequest=\"curl\"" in bash.


They're pre-aliased to curl and wget. ls works too. Most of the options won't work, of course.


> why is the command to download something curl/wget?

> I find this easier to remember than Invoke-WebRequest, or is it Invoke-RestMethod?

easier to remember: maybe, but much less discoverable than Invoke-WebRequest/Invoke-RestMethod. this seems to be the overall trend for linux shell commands. a few characters long for easy typing, but otherwise arbitrary strings that you have to remember and have 0 discoverability. GGP's comment was about how you had to search for how to do everything in powershell, so in that respect powershell is at least not worse than bash.


> 0 discoverability

Have you ever heard of `apropos`?


Interesting ....

I really liked DEC's VMS DCL ... and "help" was top notch.


want to rename something? it's Rename-Item, unless you also want to put it somewhere else, in which case you need Move-Item (it won't figure it out for you.)

want to tell the time? it's Get-Date ("time" won't work, neither will "get-time")

why is the command to download something Invoke-WebRequest?

what does '&"C:\Program Files\TortoiseSVN\bin\TortoiseProc.exe" /command:about' do?

and if you want to complain about ridiculous syntax, why the hell is ` the escape character instead of \ like every other language?


>want to rename something? it's Rename-Item, unless you also want to put it somewhere else, in which case you need Move-Item (it won't figure it out for you.)

so... working as intended? (unless you subscribe to the belief that rename is the same operation as moving)

>want to tell the time? it's Get-Date ("time" won't work, neither will "get-time")

fair point, although at least it doesn't alias any existing commands (typing time will get you a confusing output of 0 user, 0 kernel, 0 real)

>what does '&"C:\Program Files\TortoiseSVN\bin\TortoiseProc.exe" /command:about' do?

& = run program in specified path. i agree this isn't used in any other shell language, but then again all the linux shells are fairly similar to start with. it's probably there because otherwise it won't be able to differentiate between a command that you want to execute, and a string literal.
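A short illustration of why the call operator exists (the TortoiseProc path is taken from the example above):

```powershell
$exe = "C:\Program Files\TortoiseSVN\bin\TortoiseProc.exe"

"$exe"                   # just a string expression; nothing runs
& $exe /command:about    # & says "invoke this", so quoted/variable paths work
```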

also, for windows command line programs, it's fairly standard to use /arg:value for command line arguments instead of --arg value

>and if you want to complain about ridiculous syntax, why the hell is ` the escape character instead of \ like every other language?

probably because \ is used in windows paths, so to avoid unnecessarily escaping every path, they chose an odd escape character.

also, about what you were responding to for the last two points: that was more a poke at how '[' isn't actually part of the syntax, but in fact an alias for test.


> probably because \ is used in windows paths,

Or.. you could actually encourage people to use sane path separators. The rest of the world uses a normal slash which works most of the time in Windows too.

CP/M lost. URLs won.

It's like UTF-16. They somehow had the means to replace Windows-1252 with UTF-16, but they can't bring themselves to move to UTF-8 which everyone else and the web uses instead.


PowerShell has a few rough edges, but once you get over those it's super easy. Even when you can't remember the exact syntax for a command, the built-in help makes it quick to look up.

What tool(s) have you tried using to write PowerShell scripts?


>I find this hard to believe

You find it hard to believe that someone who uses a tool daily gets to know it?


Powershell has some good ideas; they just didn't come together as a whole. The idea of piping objects rather than lines of text might be good (I'm not convinced in either direction there) and probably much more efficient, but its integration with .net hurt it. .Net code is just way too close to the "enterprise" spectrum of languages and nowhere near terse enough for a shell. The powershell language itself inherits this lack of terseness.

When it launched there also wasn't a whole lot you could do with it; windows wasn't built to be automated by scripts like this. This hurt the uptake right off the bat, and it never really reached the critical mass needed to become adopted.

A decade after powershell's creation I still write batch files for windows specific things and stick to cygwin/bash for everything else. It's not even a familiarity thing; I learned (or at least tried to learn) powershell before I (properly) learned to use bash.

MS would have been better off creating language/environment independent binaries to manipulate windows.

> Not a sign of good product when you have to force it upon people.

This sums it up nicely, the only reason PS has any use whatsoever is because it has the MS name attached.


But it's hard to understand what you're saying, as .Net's not a language; which one are you talking about?

I feel like I should stick up for C# as it's probably one of the best languages available today. I'm biased as I prefer statically typed languages so none of my brain is taken up by utterly useless information like having to remember the names of properties, methods, method signatures, etc. which the computer should bloody well tell me.


It might not be a language, but the .net object model/environment brings about as much joy to powershell scripting as the browser document object model brings to javascript...


I have tried PowerShell a few times and every time I hated it. I know people who like it though.

I am doing a lot of scripting now with CSScript. It's much nicer than PowerShell. The main advantage of PowerShell is interop with .NET, so why not use C#?


> so why not use C#

Because you need to compile your C# source first. With PowerShell you just write and run. Also, in a PowerShell shell you can fart about and test stuff without having to write a fully working script in a file, i.e. it's got its own REPL.


With CSScript you don't have to compile first. It runs as a script.


Yes, I know, I was addressing your question and explaining why:

> The main advantage of PowerShell is interop with .NET so why not use C#?

Anyhow, I've written tens of thousands of lines of CScript and PowerShell over the past 15-20 years; give me PowerShell any day of the week for automation and admin tasks. Why anyone would persist with CScript/WSH for sysadmin tasks and automation (unless you're still managing an ancient fleet of Windows 2003/2000/NT) boggles my mind.

Also, there's a reasonably useful IDE which ships with PowerShell, so you can actually set breakpoints, step through your code and inspect variables (though I prefer to use PowerGUI). That alone is a massive productivity booster.


I've written hundreds of thousands of lines of PowerShell. I love it.


I just wrote a 200 line DSC config that eliminated 90% of my job.


How recursive.


Really? I'm guessing you haven't spoken to many people who are proficient in it. I'm certainly not, but I can still appreciate the fact that you are working with Objects instead of text blobs, you have access to a decent IDE and a great help doc system.


I have met plenty of developers that love PS and significantly prefer it to bash. Generally .net people that struggle outside of visual studio


Now you know one, I love it and prefer using it to any UNIX standard shell.


Things will become interesting in the near future as Powershell 6, which is cross platform, is released and becomes available on Linux distros.


I love PowerShell! Used it daily as a Windows sysadmin for many years. Wrote thousands of lines in it and wrote several blog posts on it.

It's an extremely flexible language that's fairly readable.


While I'm sure that's partly sampling bias, the truth is Powershell has a lot of flaws, both in the program itself and in the language decisions.

And I _REALLY_ like the idea of an object-oriented piping system. I would love to have that built into Linux/bash/etc instead of every program having to have some sort of "human readable" mode and (if you're lucky) a "machine readable" mode. Except every machine readable mode is different and still requires a parsing pass to get it filtered into your program correctly. I find it fun and even relaxing to design parsing passes to bring say, smem, into a Python script. But that's because (Thank God) I don't have to do it for a living. [And the newest era of "some" programs supporting JSON or XML output but there's still no easy tools for filtering through it into the next program--requiring yet another translation layer.]
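For contrast, this is what the object-piping version of that looks like in PowerShell — filtering on real properties with no parsing pass (a sketch using built-in cmdlets):

```powershell
# No awk/cut step: WorkingSet64 is already a number on the process object
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object Name, Id, @{ n = 'MB'; e = { [math]::Round($_.WorkingSet64 / 1MB) } }
```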

Now back to Powershell. It's fun to learn the first day. And then after that, it's just an 80 degree walk uphill as you learn one stupid idea after another. You want to do something insanely simple in concept and you're poring over hours of documentation and examples for the exact use-case.

Personally, I've started using Python2/3 for scripts lately. Most systems I use already have Python, and Python is easily 10x-100x faster in many situations and that's not even using a compiled variant. And while I enjoy C variant programming, Python is still a huge step up from all those stupid Bash idiosyncrasies like string -eq non-string but wait, you added a space before the equals, so it explodes anyway. (I can't remember a great example off-the-top-of-my-head, but anyone who has used Bash has had those days of an obscure error being related to a single incorrect whitespace.)


Object-oriented piping is actually somewhat problematic, because objects carry behavior, not just data. This means that everyone in the pipeline now has to agree on the semantics of that behavior - in case of PowerShell, they need to understand and talk the .NET object model, for example.

A much more lightweight approach is to exchange structured data. This can even be easily done on top of existing byte streams, just standardizing the format.

FreeBSD started adopting this approach via libxo for its base utilities: https://github.com/Juniper/libxo. I hope it spreads into Linux.


> And the newest era of "some" programs supporting JSON or XML output but there's still no easy tools for filtering through it into the next program--requiring yet another translation layer.

As someone who's been working with aws command line tools quite a bit recently that accept and emit lots of json, I've found jq to be a life saver! https://stedolan.github.io/jq/
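For what it's worth, PowerShell covers the same ground natively: ConvertFrom-Json turns the JSON into objects so the normal pipeline applies. A sketch (the JSON shape here is invented for illustration):

```powershell
$json = '[{"name":"web-1","state":"running"},{"name":"web-2","state":"stopped"}]'
$json | ConvertFrom-Json |
    Where-Object state -eq 'running' |
    Select-Object -ExpandProperty name
# comparable jq filter: jq -r '.[] | select(.state=="running") | .name'
```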


>>> And I _REALLY_ like the idea of an object-oriented piping system. I would love to have that built into Linux/bash/etc instead of every program having to have some sort of "human readable" mode and (if you're lucky) a "machine readable" mode.

That's sort of what ansible/salt do. Provide the most useful system commands with structured arguments as input and output, instead of crazy text everywhere.


> I haven't met a single person who likes PowerShell.

The only thing I like about it is its object-oriented nature. That everything is an object (PSObject) that can be piped is very nice.

However, the shell itself is slow, ugly and cumbersome. I just don't understand why Windows can't create a shell like bash. The old cmd shell was horrible. PowerShell is a step up but it's still light-years behind all the terminals we have in linux/bsd/etc.

> mind bogglingly verbose.

It's verbose if you don't use the aliases. But I don't mind the verbosity as you can understand the command just from the name.
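To make the alias point concrete — these lines are equivalent, and Get-Alias maps between the short and long forms:

```powershell
Get-ChildItem -Recurse -Filter *.log   # fully spelled out, readable in scripts
gci -Recurse -Filter *.log             # same command via the alias, for the console
dir                                    # 'dir' and 'ls' are aliases for Get-ChildItem too

Get-Alias gci                          # look up what an alias resolves to
Get-Alias -Definition Get-ChildItem    # list every alias for a cmdlet
```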

I like the idea of powershell ( OO, Verb-Noun commands, etc ). But the execution has been disappointing.


PowerShell is one of the few bits of software which has actually made me throw a computer in anger. The idea has potential but the implementation is just bad.

It's slow (I have to wait 5-10 seconds before it responds on an i7?!?), inconsistent (some things return local time and some UTC), unreliable (it will just fail randomly after working for two months), has terrible memory usage problems (try copying a large file with it via WinRM), has a terrible scheduling and security model (scheduling something that actually doesn't barf with errors is difficult), spooges out UTF16 randomly, and is full of nasty surprises which are constantly lurking waiting to bite your face off.

It is entirely the opposite of what I want from a language of any kind.

That man owes me at least 3 months of my life back.

The lure is a few one liners work pretty well to start with but the moment they turn into two liners then problems start but you're invested in it then and it's too late.

I use python for automation now. No such problems.


Definitely agree it's very slow - even for "simple" things like echoing lines.

The one thing I do like is I can make assumptions about how the scripting language works - like writing a if-statement for instance. This lets me write simple scripts pretty fast. Especially compared to bash which tends to be quite quirky and requires a lot of checking docs.
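As a sketch of that predictability: PowerShell's comparison operators look odd (-eq, -lt) but the syntax is whitespace-insensitive, unlike bash's test brackets:

```powershell
$name = "build-42"
if ($name -eq "build-42") {            # spacing inside the parens doesn't matter
    Write-Output "exact match"
} elseif ($name -like "build-*") {
    Write-Output "pattern match"
} else {
    Write-Output "no match"
}
```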


Do you use the latest version of Powershell? Old versions were quite slow for me but new ones are fine.


At first I was a little annoyed that (on mobile) it didn't tell me the length of the interview.

Had I known how long it was before I started listening I probably would not have listened to it.

I am glad that I didn't know, so that I began listening.

Once I started listening it didn't take long before I was convinced that I ought to listen to the whole thing.

Very interesting interview.

Very glad they provided a transcript also. Having both the text to read and their voices to listen to was much more enjoyable than either would have been alone.

There were some typos here and there in the transcript but since I am on mobile I didn't dare leave the page to point them out. Someone else will have to take care of that to have them fixed. I do remember one error that occurred in a couple of places was that the transcript said SANS instead of just SAN (singular) or SANs (with a lower-case "s").


This is a great line:

> On the other hand, if I'm at a lower level and in fact I'm at a really higher level, it takes time, but it'll eventually rectify itself.

A lot of people get wrapped up in a desired title and recognition, but the confidence Jeffrey had to know that his work would speak for itself is instructive. Additionally, it's not like it was instantaneous; it took 5 years to get that rectification.


No, it's bad advice.

If you are at a low level, you move to Facebook/Google and you come back to Microsoft the next year. The world is filled with people who are waiting for a raise and will not get it in the next decade.


Yea, I used to believe that fairy tale. "Just wait! The quality of your work will speak for itself" is nonsense. Reality is, for every person who uses the Hope And Prayer career development strategy, there's another person who hustles, self-promotes, or brown-noses their way into promotion after promotion. In most places, a raise that I get is a raise that you don't get, so you really have to think about it as a competition.


This is exactly why I hate working for corporations and big business.


I agree that more recent generations advance monetarily through a series of jumps between organizations. However, I think this quote speaks to the felt need for your work to be recognized as relevant and important. It's a mercenary mentality that I admit I have and fully endorse. However, the way I parsed the conversation is that Jeffrey was already being paid at levels above where he was functioning at the time. It took time for his work to be recognized as having the requisite value of his pay grade, and once that recognition was achieved, it seemed to accelerate his upward mobility within Microsoft.

I'm not trying to discount your perspective, because it's one that I have lived out and fully endorse. However, I took this quote as a nice check, or even corrective, to my default mode of operation. I think that these checks to our own personal status quos are important to our growth as human beings.


This thread is full of people who are bad at PowerShell complaining about not using PowerShell properly.

Learning it is one of the most important things a Windows sysadmin should do, today. Hell, if you consider yourself a Windows power user, you should learn it.

A single sub-1000-line script I wrote in a couple days of downtime has saved over $10,000,000 in taxpayer dollars and has never stopped working since the day it added itself as a scheduled task. I am not a programmer.

It's very powerful and generally limited only by your knowledge of how to do it with PowerShell. And all you need to start and learn everything are two commands. Three if you count update-help.

Get-help Get-command
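In practice that discovery loop looks like this, all with built-in commands:

```powershell
Get-Command -Noun Service        # find the cmdlets that deal with services
Get-Help Get-Service -Examples   # worked examples for one of them
Get-Service | Get-Member         # inspect what the output objects actually expose
# Update-Help                    # refresh the local help content (needs elevation)
```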


If you were writing a bank app in COBOL on a mainframe 24x7 for decades, you would probably have found it intuitive, productive and high value as well. The problem is that for most people scripting is not their day-to-day job. It's something you do for maybe a day after you write a month's worth of code. If your scripting language is ugly and unfriendly you will tend to forget it and basically have to re-learn it every time you try to use it. That's not a good or productive experience for people who are writing 10 lines of script after long intervals.


There's a lot of comments here about Powershell, but this podcast (with a transcription, yes!) has a lot of other great discussion on MS's transition from waterfall and how it's affected the people.


I found it kinda cool that the foundational difference between Unix and Windows is that Unix returns file-descriptors whereas Windows decided to give out handles.

You can do stuff like putting callbacks on a handle to do async-io when you have a pointer to work with.


What's the difference between file descriptors and handles? Both are just opaque numeric identifiers. There's no particular reason why there can't be an API to associate a callback with a file descriptor for async I/O.


There is no huge difference.

One small detail is that HANDLE covers more things with the same table. So for example, a process, a thread, a cross-process semaphore or mutex - those are handles, whereas Unix-like systems have different types for each of those (pid_t, pthread_t, sem_t, etc.).

But then Linux started introducing fds as synchronization primitives too - like signalfd(2) or eventfd(2).

One of the more annoying things about Windows handles is there is no equivalent of dup2(2) - you can't swap the kernel object of an existing HANDLE with another existing HANDLE. This makes imitating certain unix idioms involving redirecting I/O after the process has already started kind of clumsy.

[Worth noting in a discussion of the HANDLE typedef that the type is literally a void pointer, and some user-mode code abuses that to make them pointers to heap objects instead of a proper kernel handle - for example FindFirstFile()'s handle is not a true kernel handle, which is for example why you need to close it with FindClose and not the normal way to close handles.]


> What's the difference between file descriptors and handles? Both are just opaque numeric identifiers.

The main difference I know of is that file descriptors are sequential, while handles are random. That is, a function like open() will always return the lowest-numbered available file descriptor, while CreateFile() can return any available handle.

Other than that, both are basically identical as far as I know: both are just keys to an in-kernel table which points to the real object.


> The main difference I know of is that file descriptors are sequential, while handles are random.

Is it an API guarantee, or just an irrelevant implementation detail?


For file descriptors, API guarantee, specified by POSIX: http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2...

"[...] atomically allocate the lowest numbered available (that is, not already open in the calling process) file descriptor [...]"


Did PowerShell really catch on?

Was this difference so good that it was incorporated on other platforms?

I really liked the idea of the PowerShell, but somehow it felt to me that it never really became "a thing" and the Windows Subsystem for Linux move from MS felt like they gave up on the whole shell thing in the end. :\


It caught on very strongly in the Windows platform operations community - those of us who run Exchange, Active Directory, Windows Server in general for large enterprises. These businesses are just starting to poke at the cloud a bit, and will still have a lot of "traditional" Microsoft infrastructure for several years.

We're generally not as publicly chatty as developers, and don't tend to do Windows platform admin as a hobby. We have varying levels of programming experience.

For us, PowerShell has been nothing short of miraculous, though some people are put off by anything that's not a GUI. It's a shallow initial learning curve and easy enough for semi-automating some very repetitive user management tasks. The more skilled scripters can switch pretty easily into C# once they need the performance, because they already understand the .Net framework from mucking around with it in PowerShell.

And if you're ever at a small conference where Jeffrey Snover is speaking, say hi - he's generally eager to talk with PowerShell users.


Never became a thing? You're talking like it's already done and abandoned. You need to recalibrate your perspective to longer timelines. Every release of Windows server removes more and more of the GUI, and nano server can only be set up using powershell commands. MS is completely all-in on this and the WSL is not going to replace it by a long shot.


The "removing more and more GUI" part already felt like MS would jump on the Unix/Linux train and abandon their "all can be done via GUI" philosophy.

The PS and WSL story seemed like they tried to do better but gave up.

But what can I say, I'm no Windows guy, just talking from an outside view :)


> The PS and WSL story seemed like they tried to do better but gave up.

The PowerShell story is pretty simple: a cross-platform, object oriented shell. Having used it, I attempted to locate a similar Linux solution, to no avail. I'm tired of cutting text and object translation makes my life a great deal more straightforward.

As for WSL, it seems "Windows can run everything. Docker support is stellar" is a much more likely story than "we gave up and are slowly becoming Linux with a GUI".


They're working on the Linux solution bit: https://github.com/PowerShell/PowerShell


In my small experience, no. Linux is super powerful. It is darn simple to pipe commands to each other and always know the output is text.

Powershell passes .NET objects, so even if you think you should be able to easily pass the output from one command to another, in reality it doesn't work out nearly so well and puts a lot of cognitive load on me for even simple things.

For complex things, the language is much nicer than BASH, but that is irrelevant as most people aren't using BASH for the complex stuff. I use the full range of Linux commands for simple stuff and use Python or Perl for the more complex stuff.

I really want to like PS and have put a lot of time into it, but in the end I only see it filling the role of someone scripting the deployment of user permissions and server configurations. So it is great for a very small subset of IT work, but doesn't give the user the power over general OS use like Linux does.


For something in-between the two the Xonsh shell might be nice. It's basically a Python-based shell with full access to Python for scripting purposes.

I like Bash but writing anything remotely complex in it is... not recommended


I'll have to check it out, but usually for the easy stuff, the standard commands like grep + Awk is more than enough.


CMD is for simple things, PowerShell is made for complex things.


I disagree. CMD can't even do simple things. Sure it can list the files in a directory and write that to a text file, but not much more. I'm exaggerating some, but CMD is so underpowered compared to what you get in any Linux terminal that this isn't a fair comparison.


Of course it is underpowered compared to what Linux offers. The main point is that Powershell isn't designed for Bash-like scripting.


I'll cede that. So Windows really has no way for the user to do things easily though. There is no middle ground between CMD & Powershell. I think they could fix that by putting in all the Linux commands people care about and make it easier to return the object as text without writing a novel. The aliases are only helpful if they can still pipe like Linux.


> "Did PowerShell really catch on?"

Yes, it's very popular with Windows sysadmins, which is more or less who it was designed for.


I see, guess I simply haven't met many Windows admins :)


Windows admins and web/mobile developers move in different tech circles. I didn't know very many web/mobile developers before switching to a team of Java, Node and front-end developers. I'm amazed at what they don't know about working in an old-school, Microsoft-heavy corporation.

This included the object-oriented nature of PowerShell. The ones who have fully accepted that it is a command line wrapper for .Net like it way better than they did when they were trying to use it as a crappy version of bash.

I, on the other hand, routinely astound them with my ignorance about the brave new world of build tools that assume that you have constant access to the internet (among many other points of my ignorance).

I'm now trying to get re-accustomed to Unix-style command line tools, and it's irritating as anything not to be able to ().somepropertyname to get exactly what I want :)


Yeah it really did! Most of their server offerings are PowerShell first. Server Nano and Core are managed with PowerShell over WinRM. PowerShell Desired State Configuration has been growing in popularity over the last few years.

It hasn't really taken off outside of Windows. Unless you're in a mostly Windows environment, PowerShell on the Mac or on Linux doesn't make a lot of sense.


This is tangential but I really like the design for that page. The audio time-stamps that appear next to the transcript are just awesome.


I met Jeffrey at a conference here in Dallas last year. Incredibly humble guy and full of great stories. I'm happy for his success!


Spent about 6 months writing a 5k-ish line system in PowerShell (using PowerCLI for vSphere/ESX). It turned out to be the most frustrating half a year of my career and one of the worst architecture decisions I have made. Unfortunately there weren't other viable options for integrating with VMware at that time.


Big fan of Snover.


I've actually had to use powershell because my current employer's systems are so locked down I can't install any other programming tools (or software in general) and am constantly forced to find ways to work around our freaking useless IT department.

I have a UNIX background and I really, really, dislike doing any sort of programming on Windows. Powershell is a giant bucket of WTF.


Windows Powershell is some kind of disgusting hybrid of Perl and Bash. It's terrible to read and annoying to write.


why is powershell so pedantic, my fingers are gonna fall off with all the typing


> why is <not-my-preferred-shell> so pedantic, my fingers are gonna fall off with all the typing

I apologise, but your comment doesn't list any specifics to back up your broad assertion. Do you mind expanding on, and backing up, that assertion? Your view could apply to any technology or shell.



Hi, I see you located an outdated PowerShell question on StackOverflow. It appears you may not be familiar with the official documentation [1].

Please feel free to take a few days and learn some PowerShell. After you feel a little more familiar with the documentation, and have used the shell enough to discuss, please feel free to reach out.

[1] https://docs.microsoft.com/en-us/powershell/wmf/5.0/feedback...


Tab-complete is your friend, especially when trying to figure out exactly which parameter you want.


"I had one executive say, "Exactly what part of effing Windows is confusing you.""

Reminds me of Dave Korn's story.

https://news.slashdot.org/story/01/02/06/2030205/david-korn-...

See Question 5

"I think this is symbolic of the way the company works."


That's an article from 2001 about a USENIX conference from the 90's though. I think that today at least a sizeable part of Microsoft comes from a Unix background.


How was that transcript created? Just lots of elbow-grease?


We get a first-draft transcript made through rev.com, then manually editorialize and format to make it easier to follow for folks who prefer to read.

The player itself is built with video.js which, when combined with the ttml format for the final transcript, allows us to provide that click-to-time interactivity.


There's no publication date on the episode page. Right now it's only available through a `meta property="article:published_time"` in the page source, or by finding the episode in the series list. This makes it difficult to cite.

EDIT: I just incorporated this quote into the PowerShell article. The transcript differs a bit, so you might want to update it. Notable differences are the transcript's "would turn structured data" and the omission of mention of awk, grep, and sed. (Presumably, the latter sounded like unintelligible false starts to the stenographer, who just cut it entirely.)

Here's the quote:

> I'd been driving a bunch of managing changes, and then I originally took the UNIX tools and made them available on Windows, and then it just didn't work. Because there's a core architectural difference between Windows and Linux. On Linux, everything's an ASCII text file, so anything that can manipulate that is a managing tool. AWK, grep, sed? Happy days. I brought those tools available on Windows, and then they didn't help manage Windows because in Windows, everything's an API that returns structured data. So, that didn't help. [...] I came up with this idea of PowerShell, and I said, "Hey, we can do this better."

EDIT 2: The references to "PEARL" are all a little weird, too.


Thanks for this, I've updated the transcript.


I have a habit of highlighting as I read. When I highlight text on this page, it seeks to a section of the podcast, even if I haven't hit play.

Perhaps it would be possible to only seek on click, not selection.


I have that habit as well. I'm not sure if this level of control is possible with video.js, but I'll take a look. Disabling click-to-time until the play button has been clicked might also be an option. Thanks for listening/reading


Thanks for this, I much prefer reading to videos/audio, and it seems a lot more content is moving to that medium only.


Ah, the man behind why I chose macOS for my development environment. Thanks for costing me a bunch of money mate.



