Hacker News
The increased use of PowerShell in attacks [pdf] (symantec.com)
86 points by selmat on May 19, 2017 | 116 comments



PowerShell is obviously a much better scripting language than the ancient DOS BAT "language" (if you can call it that). In theory it's also mostly ubiquitous on Windows, which means you can rely on it being there when creating a script. Yet I've found that people often keep using BAT files (e.g. in build scripts). I think it is because you cannot just execute a PowerShell script unless it is signed or the user has manually enabled unsigned script execution (by running a command in PowerShell). This means you cannot rely on it just working, at which point it's often best to fall back to DOS or use another scripting language such as Python.

I understand that this is done for security reasons, but Windows already lets you execute any executable or BAT file that you might have downloaded from the Internet. So I'm not sure that disabling PowerShell script execution really gains you much (and there are probably other better solutions anyway).

So IMHO, as long as DOS is available and PowerShell is so limited by default, BAT files will not go away, which is unfortunate.


My heart sinks every time I see foo.bat next to foo.ps1, where foo.bat is just `powershell -ExecutionPolicy Unrestricted -Command foo.ps1` so that it's double-click-friendly.


I'm definitely guilty of doing that the very few times I had to write a powershell script.

But what's the alternative? What's the proper way of signing a script, and how much work is it to do that?


Just put your powershell file in an executable; it's been around for ages.

https://gallery.technet.microsoft.com/PS2EXE-Convert-PowerSh...

Read more on how to use it at https://redmondmag.com/articles/2017/01/27/convert-a-powersh...


Then you'll have 3 files - the script, the exe, and a little bat file that reruns ps2exe for updates. :)


Over 18KB for a "Hello World" script (even a native EXE would be smaller)...? A "stub" batch file to execute it is a few dozen bytes at most.

Then again, those using PS probably don't care about such things being small and efficient anyway.


I didn't downvote you, but consider that I answered the GP's question and solved a problem they are having, and you added some snark.

There is plenty of talk in this space about trading off machine resources for programmer effectiveness, so while your gripe is technically accurate that ship has sailed long long ago.


"Furthermore “script execution” has to be allowed (see cmdlet: Set-ExecutionPolicy)."

...and we're back to square one. :-)



Well, in my experience the double-click-convenience of .bat files (due to their default cmd.exe association) is the reason people do this, not just the execution policy.

To sign a script you run:

    Set-AuthenticodeSignature foo.ps1 $someCert
and make sure that $someCert's certificate is trusted on every user's computer.

The alternative is of course to get every user to run `Set-ExecutionPolicy Unrestricted` to solve the "problem" permanently.


Yea, there are a few things PS does that make it almost useless.


(Thanks to ezquerra for bringing me back to the conversation here via Twitter.)

Disclaimer: I'm a PM on PowerShell at Microsoft, and the following is based on my personal observations. There's also no current plan to do any of what I suggested (but it's certainly giving me a lot to think about at the start of my work week).

The default ExecutionPolicy was ingrained and controversial long before I joined the PowerShell Team (or even Microsoft), and I'll be the first to admit that, as a long-time Linux user, I didn't GET why I couldn't just run arbitrary scripts.

The public Microsoft opinion on the matter has always been that ExecutionPolicy is there to try to keep you from shooting yourself in the foot. In Linux, this is very similar to adding a +x to a script (although that's clearly much simpler than signing a script).

I'd say it's also akin to the red browser pages warning you about self-signed certificates, or Microsoft/Apple "preventing" you from running certain app packages that are "untrusted" in one form or another. In one sense, you could actually argue that PowerShell was ahead of the curve.

Now, as a power user with much less at stake than, say, an IT administrator at a Fortune 500 company, the first thing I do with all these restrictions, warnings, and confirmation prompts is turn them off.

But those warnings (often with no clear way to disable them) are there for a reason. PowerShell was built at a time when Microsoft admins were GUI-heavy, and PowerShell did its best to usher them into a scripting world while fostering best practices. And if you're using a bunch of scripts sourced from SMB shares within your domain as a domain administrator, you don't want to accidentally run a script that hasn't gone through your internal validation process (hopefully culminating in the script getting signed by your domain CA).

So let me assume that you agree with everything so far. Why does this experience still stink? Any Microsoft administrator worth their salt uses PowerShell, and many of even the power-est of users find ExecutionPolicy annoying.

In my opinion, it's too hard to sign scripts. We should be following in the footsteps of the drive to get everything on HTTPS (props to Let's Encrypt and HTTPS Everywhere, along with many others). We should have guidance on signing if you have a CA, we should have guidance on getting scripts signed by third party CAs, and we should probably offer a cheap/free service for signing stuff that goes up on the PowerShell Gallery.

Oh, and we should make it easier to read the red error that gets thrown explaining how to lookup a help topic that tells you how to bypass/change ExecutionPolicy.

Unfortunately, that's all easier said than done. But the default being what it is puts the onus on us to do something to make security easier.


Thank you Joeyaiello for your reply.

I understand your reasoning and I like some of your proposals; I'm all for making it easier to sign scripts, for example. Yet I feel that the solutions you mention ignore the fact that this "security" mechanism can be bypassed with a simple BAT file.

Why do PowerShell scripts require more security than a BAT file or an executable? Are users really safer thanks to the ExecutionPolicy check? Or are they simply worse off because people will either use less powerful BAT files or completely opaque executables? At least with a PowerShell script you are able to inspect the code if you are so inclined. By pushing people to use executables instead, they are less likely to know what changes will be made to the system.

If the problem is admins accidentally double-clicking on unsigned scripts, by all means show a confirmation dialog when an unsigned script is _first_ executed. Actually, do that for BAT files and perhaps even for executables as well. But don't do it (by default, at least) when someone calls a script explicitly from a command line or from another script. IMHO that would really make us all safer and would make PowerShell a real replacement for BAT files.


This is not very surprising at all.

The use of PowerShell is completely analogous to using shell scripts on linux, along with making use of things like the GNU core utils and other nearly universally-installed utilities (curl/wget, tar, etc).

Traditionally there wasn't a lot you could do purely from a shell or scripting environment on Windows -- all the "good stuff" was in Win32 APIs, hence needing to ship binaries.

Microsoft has been adding everything to PowerShell, to the point that you can now do nearly anything you can do with the GUI -- and in many cases more, since most of the administrative commands accept a -ComputerName parameter and are integrated with domain authentication. It's only natural to take advantage of that.


Disagree strongly with the need to ship binaries to get up to mischief. Win32 is available from WSH (cscript.exe/wscript.exe), which has been included by default since Windows 98.


I think this is just an observation that correlates with an increase in the usability of PowerShell. I have been very impressed with recent releases; they have a lot of functionality built in, and the ecosystem is _finally_ starting to mature with NuGet and Chocolatey.


Not a huge surprise. A fairly ubiquitous, already installed interpreter that has routines to do system level stuff. Same reason PHP scripts are popularly uploaded by similar groups on Unix machines.


This explains why we're not allowed to run any PowerShell scripts that are not signed by some IT root certificate on my work laptop. However, given that I can run any arbitrary executable (with user rights), this does seem a bit ridiculous.


This is PowerShell's default behavior, to only run scripts that are signed. The configuration is changeable by an administrator.

I think this is a reasonable precaution, given PowerShell's potential scope as an attack surface, compared to the tiny fraction of Windows machines that are ever actually used to do anything with PowerShell.


I know that. But they won't let users change the default policy to RemoteSigned...that's what I find annoying. (I can't use stuff like posh-git as a result.) The proposed workaround is to copy scripts into ISE and then use the "Run Snippet" feature. How does PowerShell present more of a surface than the standard Windows APIs available to any executable though?


PowerShell scripts have the same access capability as any executable... but it may be easier to get a malicious PowerShell script onto and running on a target machine. Antivirus programs will scan executables and users may know to be suspicious of them, but it's not hard to imagine a user clicking a harmless-looking .ps1 file inside an archive that suddenly starts executing.


I might be a bit biased working in offensive security, but 90% of the PowerShell I see in use is malicious.

Unfortunately Microsoft has let the genie out of the bottle, and most of the advancements in PowerShell security are centered around trying to add ACLs and logging to a scripting language, or figuring out how much of it you can disable and still have things work in production.

One of the best things we can hope to do going forward is sign on more Antivirus vendors to support Microsoft AMSI [1] which is basically a hook to pass PowerShell (and other scripting languages) off to your AV engine before they get executed.

1. https://msdn.microsoft.com/en-us/library/windows/desktop/dn8...


> Unfortunately Microsoft has let the genie out of the bottle

PowerShell has no more rights or power than the user it runs as. Anything you can do in PowerShell you could also do via WMI, Win32 API calls, etc.


>I might be a bit biased working in offensive security, but 90% of the PowerShell I see in use is malicious.

Yes you are, any competent Windows admin that needs to do things will probably be using PowerShell to do said things.


> I might be a bit biased working in offensive security, but 90% of the PowerShell I see in use is malicious.

> Unfortunately Microsoft has let the genie out of the bottle

And if you worked in that niche before PowerShell existed, you probably saw similar prevalence in PE/COFF use. Is it fair to say Microsoft "let the genie out" by allowing users of its operating system to run software on it?


So far as I know people have been using bash to implement attacks on Unix systems since 1992 if not earlier.


Not that I have much to add here, but as a complete non-techie PowerShell seems amazingly powerful for this use.

I went from having no knowledge at all to having written a web scraping function and checked 7000 sites for a specific phrase in less than two hours. It was the most intuitive piece of technology I've ever used.


PowerShell is a little bit more difficult than something like Python or a well-equipped Bash shell, but the fact that you could do it shows how much progress it has made.


I would say nothing is much less intuitive than a Bash shell. Maybe the Git CLI. But Bash is spectacularly unintuitive. How do I rename files? `mv`? List files is `ls`? Why not `list`? What the hell does `2>&1` even do?! Have you ever looked at the escaping options of `ls`? How do I end a for loop? `fi`? WTF kind of a weird joke is that?

If you're thinking "but... it's all so obvious. What is this guy talking about", ask yourself how long ago you first typed a shell command. Can you even remember how hard it was to learn? For me it was something like 15 years ago, and I actually can remember, because it made such an impression on me how terrible a system it was. (That and XF86Config, if you remember that insanity.)

Even though it was awful to learn, it feels totally easy and 'intuitive' to me now. But that's just because I use it so much. It's not intuitive. Don't fall into the old curmudgeon trap of thinking the thing that you find trivial through experience is actually good.

/rant


> Can you even remember how hard it was to learn?

I can remember when I first learned bash/sh and Unix systems in general (by reading a book) and it was not really hard. However, I saw your sentiment a lot among the students I used to teach, and my response has always been "you're approaching it the wrong way." Bash, PowerShell, etc. are effectively programming languages, and my advice for learning programming languages applies: it's a language.

To elucidate, programming languages have their own vocabulary and rules just like a human language (and often, the rules are far more consistent and the vocabulary smaller) --- to ask questions like "Why not `list`?" is like asking "Why is the plural of sheep, sheep? Why is it called a mouton in French? Why not sheepe?"

Of course, you can ask such etymological questions when learning about programming languages and use them to help you, because they will have far more logical answers than the same questions about human languages; but in general, learning a language as a language is the best way to progress.

In fact I'd say super-verbose and somewhat convoluted languages like PowerShell (and C#, which I don't fancy much either --- nor its Java-ish ancestry, for that matter) may only appear to be more intuitive at first, but are actually hiding some quite nonintuitive complexity. The verboseness may make it easier to get started, but becomes a hindrance thereafter.

/rant


> To elucidate, programming languages have their own vocabulary and rules just like a human language (and often, the rules are far more consistent and the vocabulary smaller) --- to ask questions like "Why not `list`?" is like asking "Why is the plural of sheep, sheep? Why is it called a mouton in French? Why not sheepe?"

But unlike normal languages, software languages are designed. So asking why questions makes a lot more sense.


The weird names were only one of his complaints, and it's relatively minor. Everything about bash seems really weird and overcomplicated. I am trying to learn it now. It's great for trivial stuff, but for anything nontrivial I end up going back to a real programming language.

I wanted to do floating point math. The recommended answer is to pipe your data into a better programming language. You can't even do math in bash. On another occasion I wanted to make a simple program that opens a random file. The recommended answer breaks in the simple case where files have spaces in them. The correct answer I found is this horrible mess of weird characters and it's completely incomprehensible. The best answer someone submitted was just to call one line of python from bash...
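For what it's worth, a NUL-delimited pipeline handles the spaces-in-filenames case; this is a sketch assuming GNU find and shuf are available (and, admittedly, it is exactly the kind of "mess of weird characters" being complained about):

```shell
# Pick one random regular file from the current directory, handling
# names that contain spaces or other odd characters by delimiting
# with NUL bytes instead of whitespace.
# Assumes GNU find (-print0) and GNU shuf (-z); hand $file to the
# opener of your choice (xdg-open, open, start, ...).
file=$(find . -maxdepth 1 -type f -print0 | shuf -z -n 1 | tr -d '\0')
echo "picked: $file"
```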


The fact that your shell is not a programming language is not a real gripe. Treat your shell as a shell and you'll be fine.


Ok sure, but being able to perform operations on a bunch of files shouldn't be hard in a shell language. And mathematical operations are an incredibly basic thing that are necessary for almost anything.

And why is it so wrong for the shell to be a decent programming language? It's not impossible. It doesn't need to make anything harder or more complicated. And it adds a ton of functionality. There are tons of people programming in Bash and powershell anyway.


> The correct answer I found is this horrible mess of weird characters and it's completely incomprehensible.

Someone who tries to learn Chinese may have the same thought after having been exposed to nothing but English. The keyword is "language".

The quoting rules in bash/sh are not difficult to memorise, and definitely a must-learn.

> I wanted to do floating point math.

Maths is not a strong point of shell languages because they were designed more for string manipulation and gluing other programs together. Try doing pipelines and redirection in a "real programming language", for a contrasting example.

"The right tool for the job".


> You can't even do math in bash.

You shouldn't, certainly, but, as to 'can't', I think that `expr` (or, as I just learned from ABS, `$(( ))`) covers a lot of basic uses: http://www.tldp.org/LDP/abs/html/arithexp.html .
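A sketch of both approaches: integer arithmetic via the shell's own `$(( ))` expansion, and floating point delegated to an external tool (awk or bc), as suggested:

```shell
# Integer arithmetic is built into the shell via arithmetic expansion:
echo $(( (3 + 4) * 2 ))                  # prints 14

# Floating point is delegated to an external tool such as awk:
awk 'BEGIN { printf "%.2f\n", 10 / 3 }'  # prints 3.33

# ...or bc, if it is installed:
echo "scale=4; 10 / 3" | bc
```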


> In fact I'd say super-verbose and somewhat convoluted languages like PowerShell (and C#, which I don't fancy much either --- nor its Java-ish ancestry, for that matter) may only appear to be more intuitive at first, but are actually hiding some quite nonintuitive complexity. The verboseness may make it easier to get started, but becomes a hindrance thereafter.

Some historical syntax ancestry aside, state-of-the-art C# is too classy and elegant to be in the same sentence with Java.


Modern C# seems confused about what it is. I think a lot of stuff should exist exclusively in F#. And C# is just as bureaucratic in nature as Java.


Not really. The direction is clear; C# evolves with the times (still, the only version to have broken backwards compatibility was 2.0, because of generics - which was/is a good thing). Currently it's, at the very base, an object-oriented programming language with powerful functional-style features. The bureaucracy you're referring to is at worst (not for Java, though) its static type system, which again is a good thing under the right application scenario.


What were you using before? Compared to DOS the Unix shell actually seemed a very well designed system. There are a couple of things that are admittedly hard to grasp, redirection being one of them. But I have no problem with mv and ls. By the way, fi closes an if block. A for loop uses do and done.


In DOS you use `dir` to list files in a directory. You can also use `dir` to list files that match a string in a directory. `dir 1.png` would list all PNGs with a "1" in their name. You need a separate command with options and a flag to do that in bash. In DOS, it's less than a dozen characters, and it's far more intuitive than having to use `find`. Sure, `find` and `ls` might be more flexible, but definitely not more intuitive.


> You need a separate command with options and a flag to do that in bash.

No you don't; whether ls is a builtin is mostly irrelevant, but

    ls *1*.png
does exactly what you (I assume) are saying,

    dir *1*.png
which AFAIK is a new extension introduced with WinNT's command processor, since the old algorithm...

https://blogs.msdn.microsoft.com/oldnewthing/20071217-00/?p=...

...would effectively produce a pattern matching all files ending in '.png'.


This use of the word "new" is a bit of a stretch. Consider that the "new" wildcard algorithm has been around for as long as Linux has. Possibly longer, although I'd have to check when 4DOS and 4OS2 started providing extended wildcards.


Using ls with globbing is redundant: the shell will expand *1*.png to a list of files, which will then be passed to the argument vector of ls, looking up the files twice. You could just do echo *1*.png.

ls doesn't do glob matching. ls '*' would try to fstatat (FreeBSD) that character.

EDIT: formatting
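A quick demonstration of the point that the shell, not ls, performs the expansion; the command receives the already-expanded names as plain arguments:

```shell
# In a scratch directory, create three files and let the shell
# expand the glob. echo never sees '*1*.png'; it sees the matching
# file names, exactly as ls would.
cd "$(mktemp -d)"
touch a1.png b2.png c1.png
echo *1*.png     # prints: a1.png c1.png
```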


Does DOS have anything like "locate x"? Or do you have to use the horribly inefficient graphical search which takes AGES. The Unix terminal takes a little while to learn, but it is powerful. Windows is mind-numbing.



That's for PowerShell, not DOS. IIRC, for DOS you could do something like "dir /s *.txt" to list all files with .txt extension in all subdirectories of the current directory, so "dir /s x" would be somewhat equivalent to Unix's "find -name x" (not as fast as "locate x", but should return the same if run on the root directory, and if there were no changes since the locate database was last rebuilt).


I've used that before and the functionality isn't even close. The UNIX command is fast, while the recursive PS command takes forever and prints tons of garbage to your console when it tries to read from certain directories. It also isn't nearly as intuitive as locate and takes longer to write out. You're right that it is better than the graphical way though!


locate is fast because there's a cron job that does the heavy lifting of scanning and indexing the filesystem. Without that, find is about as fast as PS.


Yes and it puts it in a database updated regularly for fast access. I'm not sure why this isn't default on Windows. How often does one need to search for a random file in the OS?


Teaching an old dog new tricks.

It's very common to think of the first thing you learnt as more intuitive. Learning the second thing you have baggage and expectations of how something is supposed to work.


> But Bash is spectacularly unintuitive. [...] Can you even remember how hard it was to learn?

From what I recall, it was easy. Other than a few small differences ("mkdir" instead of "md", needs a space between the "cd" and the "..", and so on), it was similar to what I used every day. The rest I learned gradually, by reading shell scripts written by other people, and the man and info pages.

As for XF86Config, I always used the XF86Config generators which came with the distributions, so it was never much of a problem for me.


> But Bash is spectacularly unintuitive. How do I rename files? `mv`? List files is `ls`

mv and ls are not shell commands, they are programs. They existed long before bash and were developed at a time of 110 baud teletype connections when short, highly abbreviated command names were worth the inconvenience.

If you don't like them you're free to alias "rename" to "mv" and "list" to "ls"


Bash was spectacularly intuitive while I was learning it. And err, ending a conditional `if` statement with `fi` is probably one of the most intuitive linguistic constructs ever in CS; no need to analyze whether we should end it with `end`, `finish`, `done` or (worse) `terminate`.


`end` is pretty much universally used to end blocks in 'wordy' programming languages. Are you seriously suggesting that `esac` is more intuitive than `end`? Cute, perhaps, but not intuitive.

"How do you end this block? Maybe... I type 'end'? Nope! You write the opening statement backwards! Ha ha get it? Backwards lol!"


>> What the hell does `2>&1` even do?!

It redirects standard error to standard out? That's what it does in powershell anyway. Frex, I got this line in my $PROFILE:

  $p2 = & $env:Python_2/python.exe --version 2>&1
To get the current Python 2 version on my machine. I'm not really sure what it's an alias for, but I like having the shortcut of the more concise syntax.

The real problem is that, for some reason, python.exe --version outputs to standard _error_. That, I don't really get. I'm sure there's a good reason for it but it's not obvious and that's what's making the "2>&1" at the end hard to figure out.
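Mechanically, `2>&1` means "point file descriptor 2 (stderr) at whatever file descriptor 1 (stdout) currently refers to", which is why it lets you capture or pipe both streams together. A minimal sh illustration:

```shell
# Command substitution normally captures only stdout; with 2>&1 the
# stderr line is redirected into the same stream and captured too.
both=$( { echo "to stdout"; echo "to stderr" >&2; } 2>&1 )
echo "$both"
```

Without the `2>&1`, only "to stdout" would end up in the variable.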


Maybe it's intuitive compared to whatever else there was in the 70s when the bourne shell was invented? I dunno. I agree, bash is not very intuitive to the contemporary scripter _especially_ when you get into scripting.


I'll be the first to admit that bash sucks, but the only thing you mentioned that is part of bash is `2>&1`.


Using bash at a high complexity level is like going thru a unix archeology expedition every time you sit down.


And few things are as rewarding as a Unix expedition. If these management people at work used awk instead of Excel for some of the things... man, they'd save hundreds of hours in just a month.


we use POSH extensively at work and I haven't seen a single person not using the built-in Unix aliases. Even the TIER 1 peeps got it.

Also, there's a story behind all these commands, their naming, the symbols, reasons for the brevity. But I've never not found Unix intuitive, even before I was aware of the reasoning behind everything. That said, Unix is definitely an environment for the more savvy. But what's intuitive for me is not necessarily intuitive for someone less savvy.


> How do I end a for loop? `fi`?

done?
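Loops do indeed close with `done`; only `if` closes with `fi`, and `case` with `esac`. A tiny sketch:

```shell
# for/done (while also uses done), if/fi, case/esac
for f in one two; do
    echo "item: $f"
done

if [ -d / ]; then
    echo "root exists"
fi
```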


Do you have an example of that? I'm curious how PowerShell is better than other languages for this type of problem. Does it just have good built-in parallelism?


Disclaimer: I'm a PM who works on PowerShell at Microsoft

There's a bunch of 3rd-party modules out there that do a great job of offering parallelism[1][2][3], and we also support the notion of "PowerShell Jobs"[4] (which one or two of those third-party implementations actually build on to achieve what they're doing).

However, we decided that strong parallelism support is important enough to build into the language, so we're mulling on a first-party design.

If you're interested, we've got an RFC out where we're looking for feedback on the parallelism/concurrency design.[5]

[1]: https://www.powershellgallery.com/packages/PSParallel/

[2]: https://www.powershellgallery.com/packages/SplitPipeline

[3]: https://github.com/RamblingCookieMonster/Invoke-Parallel

[4]: https://msdn.microsoft.com/en-us/powershell/reference/5.1/mi...

[5]: https://github.com/PowerShell/PowerShell-RFC/issues/85


In addition to jobs there are workflows, which also offer parallelism, but they're a little fickle.


You can see my entire thinking outlined in my reddit thread. https://www.reddit.com/r/PowerShell/comments/5f4r10/web_scra...

I misremembered the number of sites scraped, but it's all the same. There is no parallelism (I really, really am a complete non-techie), but actually in this case I didn't want to hammer away, as the sites were all under the same domain and I imagine there might be trouble (which is why I also played it very safe and added the pause).

I know there are better and smarter ways of doing things and I did further modifications and experiments after my thread, but in any case, I had a task and was able to quickly complete it in PowerShell starting from absolute 0.


As a security analyst/engineer, PowerShell is both one of my favorite (for collecting information, forensics, administrating and protecting, etc.) and least favorite (because practically every day we're fending off malware using different PowerShell techniques we have to detect and mitigate, plus it's a favorite of APTs) tools.


Are you nuts? It's about 10x harder to use than C#, and twice as hard to debug.

I've come to the conclusion that it's just better to write a small C# app than try to use PowerShell.


PM on PowerShell here: if you have any of those small C# apps handy, I'd love to take a look at them to see what you're doing that's difficult in PowerShell.

Similarly, we're trying to make some improvements to debugging right now with PowerShell 6.0 [1] (the one that's cross-plat and built on .NET Core), and I'm very interested in hearing your feedback on how we can do a better job there.

Even if it's "fix your docs" or "too hard to learn", that's great info, especially coming from a C# guru.

[1]: https://github.com/powershell/powershell/issues?utf8=%E2%9C%...


Writing a robust, idiomatic PowerShell function of any complexity is deceptively challenging, and the background required to do it correctly is almost all unique to PowerShell. PS is a weird amalgamation of C#, Bash, Perl, VB and cmd-isms that ends up being incredibly confusing to basically everyone.

Try getting an experienced C# developer to write a PS function that:

- Initializes an IDisposable resource and deterministically disposes of it when the function terminates (successfully, with an error, or due to ctrl+C).
- Honors the caller's $ErrorActionPreference/-ErrorAction.
- Understands that catches won't catch "non-terminating" errors.
- Throws errors without trashing the ErrorRecord so that the caller gets a meaningful stacktrace.
- Copes with the difference between process working directory and PS working directory if a .NET method and a relative path are involved.
- Doesn't blow up on a square bracket because someone didn't realize they needed -LiteralPath.
- Doesn't accidentally write an unintended object to the pipeline.
- Does something reasonable with pipeline input.
- Doesn't get tripped up by type coercion magic - $a = @(); $a -eq $null
- Uses multiple ParameterSets without creating ambiguity problems.
- Doesn't get tripped up by PS silently swallowing exceptions thrown by property getters.
- Uses some appropriate subset of language features and modules available across different versions of the OS and WMF. The OS-based limitations are understandable. The hard dependencies that Microsoft's own products have on specific PS versions are not.
- Is not super-slow.

It's not impossible, but it sure is hard (and takes a ton of boilerplate), and there aren't many resources out there to teach you how. It's all quick and dirty "how to do X" recipes for sysadmins that don't care about more than what they need to paste into a console.

To be clear, I actually mostly like PS; I'm just trying to express a set of pain points here.


That list you just provided is actually an AWESOME list of things we should make more accessible/understandable in our documentation. There are a few that would also make great rules for PSScriptAnalyzer[1] (basically our linter; the LiteralPath one in particular would probably be easy to write as a warning).

At this point I should also plug our tooling improvements. First, our VS Code extension[2] is AWESOME: debugging there has massively improved, there's snippet support for boilerplate with best practices, it integrates directly with PSScriptAnalyzer, and it even enables some "quick fix" of linter warnings. We've also got something called Plaster[3] that does templating to encourage best practices (think of it like a "Yeoman for PowerShell").

But yeah, as we craft new docs for writing the best "cross-CLR, cross-platform" (aka universal, aka portable) PowerShell modules, I want to make sure we tick all these boxes.

Oh, and we believe we addressed the whole "different singleton version on different versions of Windows/WMF" thing by making sure that PowerShell 6[4] is x-copyable, fully side-by-side, and supported downlevel to Windows 7/2008 R2 (though Win7 support is currently busted, we're working through fixing it). If you've got a workload or an application that depends on a new feature of PowerShell, you can include PowerShell 6 app-local, or you can distribute it to machines in an environment through either MSI or a ZIP-based file copy.

As for Microsoft products that support specific versions: again, I totally hear you. The best thing you can do is push on those products through their respective feedback mechanisms (almost everything is on the Feedback Hub or some product-specific UserVoice now) to support all versions of PowerShell. In the meantime, the side-by-side nature of PowerShell 6 means you can install the latest version without fear that you'll void your support or break existing scripts/workloads.

[1] https://github.com/PowerShell/PSScriptAnalyzer

[2] https://github.com/PowerShell/vscode-powershell

[3] https://github.com/PowerShell/Plaster

[4] https://github.com/PowerShell/PowerShell


As a touch-and-go end user I'll take you up on that at least with some anecdotes:

In 2007 I joined the PowerShell enthusiasts, excited by the notion of a bash-like capability treating .NET constructs as first-class citizens. I invested time into learning common use cases. Certain things you would THINK would be easy, like loading a DLL along with its corresponding config file, were actually quite difficult in practice, which was a shame because the primary vision for PowerShell was to be flexible glue between compiled assemblies. Have you ever tried to invoke ms-test with PowerShell on a DLL which has a config xml?

I had a touch-and-go relationship with PowerShell. I returned to it a few years later to find that many of my PowerShell v1 scripts were obsoleted by the PowerShell v2 runtime. When writing C#/.NET code to execute on many versions of Windows, this is rarely an issue. When writing PowerShell code to execute on many versions of Windows, you have a serious problem on your hands, as the command syntax and general practices vary significantly between versions.

To this day, googling for examples of how to accomplish something basic, such as running a DOS command-line call with parameters constructed within PowerShell, is not only highly vexing for the occasional casual user such as myself, but many of the answers I see in blogs or Stack Overflow are completely temporal: you need to be careful to look for results dating around 2013 for PowerShell v2, because anything before that won't work, but not too new, because v4 has a new approach and that would require a service pack not all my users can currently procure. Not to mention everyone writes PowerShell in a different style. You often end up with a handful of code examples that look wildly different, each with its own issues.
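For what it's worth, the most version-stable pattern I know for this (an illustration, not official guidance) is building the argument list as an array and splatting it:

```powershell
# Each array element is passed as a separate argument, which sidesteps
# most quoting/space headaches when calling external commands.
$exe = 'cmd.exe'
$cmdArgs = @('/c', 'echo', "hello from $env:COMPUTERNAME")
& $exe @cmdArgs
```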

Sure, there are expert PowerShell users out there with muscle memory who can write elegant solutions for version-incompatibility issues and common gotchas, but for the casual, infrequent user the overall experience is too much frustration, a less-than-intuitive approach, and 10 ways to solve a common problem. It feels sort of like running Perl in a DOS prompt that is constantly changing. Compare this with the Zen of Python, where "There should be one-- and preferably only one --obvious way to do it." I find myself installing the Python runtime and avoiding PowerShell when I'm required to make something of significant complexity and high compatibility.


I've tried to use PS for some system-automation glue using .NET libraries I was also developing; the biggest irritation was that code written with some common C#/.NET idioms is awkward to interop with via PowerShell. It's been a while (and this was with PS v4, I think, so some of this may have gotten better in later versions) but off the top of my head the problematic stuff included:

* Referencing dependencies (if you're used to "add reference" and "using" it's awkward having to write a bunch of Assembly.Load that then needs to be maintained redundantly with the C#; it would be nice if PS could somehow work with VS/MSBuild projects)

* Calling extension methods (ties in with above since in C# they depend on "using")

* Calling generic methods

* Calling overloaded methods

* Calling async methods

There are workarounds for all the above but put together they definitely make PS less pleasant/useful for me.
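For reference, the flavor of those workarounds (a sketch; the DLL path is a placeholder):

```powershell
# Loading a reference explicitly, instead of a project-level "add reference":
# Add-Type -Path 'C:\libs\MyLib.dll'   # placeholder path

# Extension methods have to be called as plain static methods:
$list = [System.Collections.Generic.List[int]]@(1, 2, 3)
[System.Linq.Enumerable]::Sum($list)   # instead of $list.Sum()

# Generic methods need reflection when type inference can't help:
$m = [System.Linq.Enumerable].GetMethod('Empty').MakeGenericMethod([string])
$m.Invoke($null, @())                  # Enumerable.Empty[string]()
```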

Also, much of the above also applies to interop with WinRT which I think is becoming important (as much of the value of PS comes from good access to the underlying system, and more and more Windows functionality is being exposed through WinRT APIs).

The other problem I've had is performance - for example, PS could be useful for some of the same text processing tasks for which Bash is often used (and I'd prefer it for its stronger programming language/object system), but in my experience is dramatically and prohibitively slower when dealing with large amounts of text.


One thing I'm having conniptions over is the inability to automate the installation of TLS certificates into stores other than the local machine or user. I'm referring specifically to the NTDS store, where I need to replace certs used for LDAPS, and I can find no way to do this outside of using MMC certificate store snapins.

If PowerShell could support this, it would allow fully automated cert replacement for those of us who use Let's Encrypt on Windows (ducks)


If I could make a suggestion about debugging scripts.

It doesn't seem that I can currently debug scripts that are dot-sourced from a module folder, as opposed to the script being called directly.

If I have a module file that looks like this:

    Get-ChildItem -Path $PSScriptRoot\*.ps1 | ForEach-Object { . $_.FullName }  # dot-source the script files
    Export-ModuleMember -Function *  # export those sourced functions

I am not sure how it works exactly, but if I am writing a new cmdlet, Get-HackerNews.ps1, the breakpoints aren't hit in ISE. It would be super cool if they were.
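As a possible stopgap (not a fix for the ISE behavior), command breakpoints fire regardless of which file a function was dot-sourced from:

```powershell
# Break whenever the function is invoked, wherever it was sourced from
Set-PSBreakpoint -Command Get-HackerNews

# Or break on a specific line of the dot-sourced file
Set-PSBreakpoint -Script .\Get-HackerNews.ps1 -Line 10
```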

------

Also, is there guidance on module structure? Right now it looks like the docs indicate that all the cmdlets should be placed in the .psm1 file, but to me that sounds like it would get rather unmaintainable very quickly. Just looking at the one my dev team has created, we have nearly 50 exported functions. Maybe our module could be broken up, but that would still leave us ~10 cmdlets per module.


Here is an example of a real project that uses PowerShell to help with DBA automation. You don't include them in your psm1, just dot-source them; see https://github.com/sqlcollaborative/dbatools/blob/master/dba... for an example.


Awesome, that is similar to what I do right now.


I have a bunch of scripts that I use for various things.

- One which stops IIS app pools before a deployment (30 of them). I wanted to stop them all in parallel but found this far more difficult than executing a list of Tasks in C#. So now I just run a C# console app to do it.

- one which runs msdeploy commands against a web server and had many issues dealing with passing parameters down to msdeploy. I can't quite recall but I think it was either quotes or spaces related.
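For the parallel stop, one rough sketch with background jobs (assuming the WebAdministration module; the pool names are placeholders):

```powershell
Import-Module WebAdministration

$pools = @('SiteA', 'SiteB', 'SiteC')   # placeholder pool names
$jobs = foreach ($pool in $pools) {
    Start-Job -ScriptBlock {
        param($name)
        Import-Module WebAdministration   # jobs run in separate processes
        Stop-WebAppPool -Name $name
    } -ArgumentList $pool
}
$jobs | Wait-Job | Receive-Job   # block until all pools are stopped
```

Each job runs in its own process, which is why the module is imported again inside the script block.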

The ISE as it's called is very very basic. Not what I'd expect from MS. I'd like a full debugger.

There also seems to be a potential security risk in having editable text-file scripts on a server vs. a compiled exe.


Off-topic, but if I say "1..10 | Get-Widget" and Get-Widget doesn't have an InputObject parameter, the input object should be bound to the first positional parameter. Likewise, "dir | $_.length" should be semantically equivalent to "dir | %{$_.length}". Right now the former is an error because the second pipeline element is an expression, but you could implicitly make it a foreach. You could do either of these without a breaking change.
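For what it's worth, cmdlet authors can opt into the first behavior today by marking a parameter as pipeline-bound; a minimal sketch (Get-Widget is hypothetical):

```powershell
function Get-Widget {
    param(
        [Parameter(ValueFromPipeline = $true, Position = 0)]
        [int]$Id   # bound from the pipeline OR as the first positional argument
    )
    process { "widget-$Id" }   # runs once per pipeline input
}

1..3 | Get-Widget   # widget-1, widget-2, widget-3
```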


One major complaint I have with powershell is just how completely horribly slow the UI is.

Compared to what cygwin’s bash does, it’s like an entirely different world.


Different guy but same conclusion here (write it in C# instead of powershell). The main pain points I run into:

1) Powershell mingles "process stdout" and "function return" to be one and the same. While this kind of makes sense, unless I'm extremely fastidious about redirecting everything I call to $null, chances are I'm not returning what I expect to return. The end result is I shy away from doing anything that needs to return values (rather than output) in Powershell, which is kind of important for anything nontrivial. I tend to set globals like I've gone back to the Apple II and started programming in BASIC again as a result, which gets messy real quick. Processing stdout/stderr in C# is a little annoying by comparison, but easily fixed.
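The mingling in (1) is easy to demonstrate: anything written to the output stream becomes part of the "return value" (a sketch; the directory path is arbitrary):

```powershell
function Get-Answer {
    # Oops: New-Item emits a DirectoryInfo object into the output stream
    New-Item -ItemType Directory -Force -Path (Join-Path ([IO.Path]::GetTempPath()) 'scratch')
    return 42
}

$result = Get-Answer
$result.Count        # 2 - the DirectoryInfo and 42 both came back

# The usual fix: explicitly discard unwanted output
function Get-AnswerFixed {
    $null = New-Item -ItemType Directory -Force -Path (Join-Path ([IO.Path]::GetTempPath()) 'scratch')
    return 42
}
Get-AnswerFixed      # just 42
```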

2) Syntax changes in updates. When I installed powershell tools for VS, my scripts broke - to fix them I had to replace e.g.:

  info "$vmGuid: has BuildMatrix tags: $tags"
With:

  info "${vmGuid}: has BuildMatrix tags: $tags"
And:

  logCmd {& $VBM -- clonevm $OriginalVmGuid --register --name $NewCloneId}
  logCmd {& $VBM -- modifyvm $NewCloneId --natpf1 "ssh,tcp,$SshHost,$SshPort,,22"}
  logCmd {& $VBM -- startvm $NewCloneId --type headless}
  waitUpTo 60 3 -Sensitive $true {& $VBM -- guestcontrol $NewCloneId run --exe "/bin/sh" --username "$Username" --password "$Password" -- '/bin/sh' -c "mkdir ~/.ssh; echo '$PubKey' >> ~/.ssh/authorized_keys"}
  waitUpTo 60 3 {& $Ssh -- -o NoHostAuthenticationForLocalhost=yes -o ConnectTimeout=60 -o BatchMode=yes -p $SshPort $Username@$SshHost -- '/bin/sh' -c "echo Server ready" 2>&1}
With:

  logCmd {& $VBM clonevm $OriginalVmGuid --register --name $NewCloneId}
  logCmd {& $VBM modifyvm $NewCloneId --natpf1 "ssh,tcp,$SshHost,$SshPort,,22"}
  logCmd {& $VBM startvm $NewCloneId --type headless}
  waitUpTo 60 3 -Sensitive $true {& $VBM guestcontrol $NewCloneId run --exe "/bin/sh" --username "$Username" --password "$Password" -- '/bin/sh' -c "mkdir ~/.ssh; echo '$PubKey' >> ~/.ssh/authorized_keys"}
  waitUpTo 60 3 {& $Ssh -o NoHostAuthenticationForLocalhost=yes -o ConnectTimeout=60 -o BatchMode=yes -p $SshPort $Username@$SshHost -- '/bin/sh' -c "echo Server ready" 2>&1}
Those "--"s I removed were mandatory in PowerShell 2 for at least some of these commands, yet caused problems in later versions - I don't even know how to write this in a way that will work on multiple powershell versions. I guess I have to explicitly pin -Version when invoking PowerShell all the time? The now simple problem of "How do I get this running consistently on multiple computers" has now turned into an IT and debugging headache. Meanwhile, C# projects get consistently built with a single VS and C# compiler version, and machines I install to are just running the even more stable bytecode.
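On pinning: the engine version can be requested at launch, and scripts can guard themselves, though neither fully solves the drift problem. An illustration (script names are placeholders):

```powershell
# Ask for a specific engine version when launching (if it's installed):
#   powershell.exe -Version 2 -File .\legacy.ps1

# Or declare a minimum version at the top of the script:
#Requires -Version 3.0

# Or check at runtime for a friendlier error message:
if ($PSVersionTable.PSVersion.Major -lt 3) {
    throw 'This script needs PowerShell 3.0 or later.'
}
```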

3) Default installs lag way behind - e.g. Windows 7 only comes with PowerShell 2 by default, so you don't even have things like Invoke-WebRequest, forcing you to write your own or get your IT department to install an updated version for everyone. C# just feels like it has more "batteries included" even for older .NET targets.

4) Figuring out exactly what types are expected vs what I happen to be using is a general muddle in Powershell for any sort of collection in my experience. C#'s static typing, clear errors, and intellisense in general is just extremely hard to beat.


Yes...lots of frustration when I write a script and co-workers can't use it. I'm glad they're adding stuff, but the default installs are ancient in a lot of companies.


You're not comparing your 1000th day of C# programming to your first day of PowerShell programming by any chance, are you?


About 100 to 1.


The beautiful part about Powershell is that it's object oriented, so it doesn't depend on "plain" textual output formats. Something like a bash command line, piping text outputs and struggling with regexps in awk or whatnot while crossing your fingers that no update will break your scripts, felt pretty archaic and like the wrong way to go about it once I had started to get more fluent in PS (which, btw, is OSS and available on Linux now too).

But the worst thing I came across with Powershell is this. The fricking return semantics. http://stackoverflow.com/questions/10286164/function-return-...
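For anyone who hasn't hit it: functions return every uncaptured value, and collections are unrolled on the way out. A quick illustration:

```powershell
function Get-Items {
    'log line'           # uncaptured output - becomes part of the return value!
    return @(1, 2)       # the array is unrolled into the output stream
}

$r = Get-Items
$r.Count                 # 3: 'log line', 1, 2

# Wrapping in a unary comma keeps a single array intact:
function Get-OneArray { return ,@(1) }
(Get-OneArray).GetType().Name   # Object[] rather than Int32
```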


I wish I could give out a prize for comments like this. "I tried to use X language casually for a few days and didn't like it, it must objectively suck!"

PowerShell isn't 10x harder to use than C#, how do I know that? Because many people use PowerShell regularly in their day to day jobs, and do so quite successfully. Most people who actually use PowerShell don't complain about how hard it is.

It is harder than C# to debug, probably more than twice as hard, actually. Does that make it impossible to program in? Hardly. There are tons of other languages that are roughly equally difficult to debug, especially many of the dynamic languages: PHP, Python, etc. People have been building things of tremendous value with hard to debug languages for ages. The fact that C# has such a good debugging story is great, but it's not the end-all be-all of a language. Also, PowerShell has some significantly advanced debugging tools compared to a lot of other dynamic languages.


Different strokes. Different people grasp concepts differently.

I can say that every programming language that I grew to love, I liked almost immediately. The ones that make no sense from the get-go, I don't spend much time on.


> I can say that every programming language that I grew to love, I liked almost immediately. The ones that make no sense from the get-go, I don't spend much time on.

Surely the second sentence is the reason for the first? If you don't give languages that make a bad first impression a second chance, then you have no idea whether you would have grown to love one of them. (Certainly you aren't obliged to find out, but to regard this as evidence that you wouldn't have liked them I think is reversing causality.)


They just have different places. I use both and really Powershell is not 10x more difficult. It's just different language with different goals.


I agree that powershell can be hard to debug, but I would not say that C# is exactly easier to use. In some cases, sure, but when it comes to machine management especially, powershell destroys C#. Another example of powershell being superior to C# would be its ability to interact with RESTful APIs and parse JSON (see Invoke-RestMethod, ConvertFrom-Json, and ConvertTo-Json). Not saying that it can't be done in C#, but it's definitely far easier in powershell.
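A taste of what that looks like (the GitHub endpoint and field name here are my own assumptions, not from the parent comment):

```powershell
# The JSON response is deserialized into objects automatically
$repo = Invoke-RestMethod -Uri 'https://api.github.com/repos/PowerShell/PowerShell'
$repo.full_name          # dotted access straight into the parsed JSON

# And the other direction:
@{ name = 'demo'; tags = @('a', 'b') } | ConvertTo-Json
```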


Powershell works well as a shell, but fails spectacularly as a programming language.

The whole "I can run this script on 100 different machines at once super easy" is fine, but let's see you try to write some rigorous code in PS and then come back and tell me how it's easier than C#.

They both have their place.


I once took over a certain repository that builds a dozen or so disparate external libraries with dependencies between themselves. The original developer of the repository did it by manually remembering which library had which dependencies and running .bat scripts to build them in order.

When I took over it, I wrote a PS script to automate the whole build. The script contains a description of each library's dependencies, so it knows what order to build all the libraries in. Each library is built in its own "job" using Start-Job, so the builds are also maximally parallelized; the script is essentially a loop of two functions: `for each library -> if library's dependencies have all been built -> start parallel job to build library` and `for each job -> if the job has finished -> mark the library as built`.

I would not write it in C#. Most of the build steps for each library involve running other processes, so using a shell DSL rather than multiple Process.Start()+Process.WaitForExit() is convenient. Even things like copying files is more convenient using Copy-Item than C#, especially since the PS versions support globs.
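A hypothetical sketch of that loop (library names and build commands are made up):

```powershell
# Dependency table: library -> the libraries it needs built first
$deps  = @{ zlib = @(); libpng = @('zlib'); app = @('zlib', 'libpng') }
$built = @{}
$jobs  = @{}

while ($built.Count -lt $deps.Count) {
    # Start any library whose dependencies are all built
    foreach ($lib in $deps.Keys) {
        if (-not $built[$lib] -and -not $jobs[$lib] -and
            -not ($deps[$lib] | Where-Object { -not $built[$_] })) {
            $jobs[$lib] = Start-Job -ScriptBlock { param($l) & ".\build-$l.cmd" } -ArgumentList $lib
        }
    }
    # Reap finished jobs and mark their libraries as built
    foreach ($lib in @($jobs.Keys)) {
        if ($jobs[$lib].State -in 'Completed', 'Failed') {
            Receive-Job -Job $jobs[$lib]
            Remove-Job -Job $jobs[$lib]
            $jobs.Remove($lib)
            $built[$lib] = $true
        }
    }
    Start-Sleep -Milliseconds 250
}
```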

I would hope a build tool counts as both rigorous code as well as justifies PS as a programming language.

Aside: The repository was forked by some other people who did not like / could not read PS, so they rewrote it in Python. The Python version is several times larger than the PS version, though, to be fair, it's also more generic and has more features. A lot of that verbosity, though, does come from the more verbose code for spawning processes from Python compared to PS.


Sure, but I bet your PS code isn't rigorous or bullet proof.

And I should mention, I feel like that's a solved problem. I don't know the specifics of the project, but it's hard for me to buy that you needed to write a dependency checker in PS.


Soooo... you reimplemented GNU Make?


> Soooo... you reimplemented GNU Make?

Well, it puts Arnavion in good company: https://en.wikipedia.org/wiki/List_of_build_automation_softw... .


Pretty much, but the build steps are more complicated than what make supports on its own, so even a makefile would have had to be augmented by shell scripts anyway.


I started using F# scripts instead of PowerShell or compiled C#. Just got to copy Fsi.exe and a few other dlls onto the target machine for it to work.


I second this. Powershell syntax is almost as bad as Bash. I have no idea why they didn't just make a command-line version of C#; it's a far better-designed language.


Bash and Powershell are shells, they are command languages, and not programming languages like C#. That distinction, while in many ways subtle with tons of overlap, is the primary factor in shell syntax being crappy as a "language" -- it's because command evaluation is (necessarily) prioritized over language design. There is a good reason that no shells exist that have a "better designed language." As well as good reasons that no well designed programming language makes a good shell. They solve different problems, and I don't think C# would make a good command shell without significant syntax compromises.


> There is a good reason that no shells exist that have a "better designed language." As well as good reasons that no well designed programming language makes a good shell.

Where does tcl fit into things for you?


Hmm good question! It's been a very long time since I used tcl/tk, but I just gave myself a 2 minute refresher and browsed the docs. I would say Tcl is a programming language, not a system shell. The tcl tutorial concurs, calls tcl specifically a 'dynamic programming language', and compares tcl to python.

I'm definitely using 'shell' and 'command' in a narrow sense. I meant to add contrast between PowerShell and C#, by calling one a command shell and the other a programming language. But both words have been used with much broader meanings. So I'll try to be a little more careful and say that what I'm really talking about is a system shell and system commands, where a system command means direct access to executable files on the system, and system shell means an interpreter that provides system commands.

Even though Tcl stands for 'command language', it's not talking about system commands using the same distinction I made above. Tcl's use of "command" is referring to programmer-defined functions & API, and it is not referring to system commands.

Tclsh is a shell, but I wouldn't call that a system shell like PowerShell or bash. It is an interactive tcl interpreter, but not really a system shell. Tclsh isn't something sysadmins would typically use, right?

Perhaps the defining characteristic of a system shell (as opposed to a programming language) is system commands; running executable files on the system can be done by typing the bare name of the file, without any other syntax in the way. That is true in powershell and bash, and not true in C#, tcl, python, etc. To run a system command in tcl, you have to use 'exec' or 'open', you have to use 'glob' to get file completion, and there's required programming language syntax frequently involved in passing arguments. In tcl you have to use $env(var) or $::env(var) to get an environment variable rather than $var. Those are the kinds of things that shells provide with little or no syntax, and exactly the kind of stuff that begins to compromise language syntax and language design in favor of promoting system commands to first class status.

BTW, you got me curious about tcl again. I never used it enough to get a sense of what it's best used for. What kinds of things would you use tcl for yourself? Do you like to use it for system shell tasks instead of bash? (I know there are certainly legit reasons to do that.) What kinds of programs would you start in tcl?


You would have thought that they had learned from bash and made a really nice scripting language. Instead it just has another set of weirdnesses compared to bash.

They definitely should have gone with some variation of C#.


It's made for sysadmins like me who think C# is too hard.


Or the kind of programming you do at work is easier solved by short scripts such as Perl, Python, PS, Bash, or Batch instead of something like Java where one has to write a lot of scaffolding.


Sometimes it's better to write something in a more advanced language, but for small tasks it's hard to beat the overhead of a shell!

There are crossover points (varying by person) where a scripting language is preferable to a shell, and where a strongly-typed language is preferable to a dynamically-typed language.


PowerShell is a scripting language though, but it can also be used as a shell. The syntax is more advanced now that it has classes...etc.


Agree - I find the thing incomprehensible without some sort of intellisense running (which I abhor, I tell you, abhor!). It seems arcane and weird to me and I much prefer C#. But "WHY ARE ATTACKERS USING POWERSHELL?" - simply because it's already there, probably? And cmdlets are a bit like nmap scripts - if you can read Lua you can probably spitball them together. I'm not sure that PS is amazingly more powerful - it's just that all we had before was .bat from hell, and debug, sadly gone now...


He said "as a complete non-techie".


There are people that need not debug.


A judicious use of xargs, curl and grep would have done it in 1 line in Linux. I looked at your PowerShell code; that does look a bit complicated compared to what Linux offers.


I'm sure you are right. I was just trying to convey the positive impression I got from what little time I spent with PowerShell and the great documentation it has.

Especially the fact that I was able to accomplish what I set out to do in a very reasonable amount of time.


You can do it in one line in PS too.

    $sites = @('https://www.google.com/', 'https://www.example.com/')
    $searchString = 'oogle'
    $sites | %{ if ((iwr $_).Content -like "*$searchString*") { "$_;OK" } else { "$_;NOT OK" } } > C:\temp\output.csv


I love PowerShell. We use it to manage farms of machines, and I use Amazon's excellent PowerShell aws library to do everything from managing my Route53 DNS to setting up new Amazon VPCs.

Security issues aside, it's by far the best OS shell scripting language around, and it has concepts that make it different from any other attempt to write a shell language that includes pipes and redirections.


Is there a list of these symantec white papers to read?


A lot of them can be seen by Googling for "site:symantec.com filetype:pdf white paper" and appending any other term you are interested in.

Admittedly not the best way to get at them, but Symantec's own website doesn't seem to list them in a more user-friendly fashion than straight up Googling...


Thanks, forgot about searching for file types!



