Yes, the syntax looks familiar; I got a few more responses that match the commands from that pastebin. It seems like a general C&C setup where they just add new exploits as they get published.
Anyhow, it doesn't seem to send anything to Cloudflare. I think it just checks whether the IP is alive (perhaps this is how it tests connectivity to the internet). It also reads my routing table and extracts the MAC address.
P.S. As of now, the C&C server at 89.238.150.154:5 is not accessible.
Could someone explain to me whether what I think this gist shows is correct?
A GET request is sent to the server with additional commands added to the content, which creates the file ./tmp/besh, whose content comes from the nginx file at http://162.253.66.76/nginx. The executable flag is set and then the file gets executed.
The next three commands show information about the downloaded nginx file (checksums, `file` command output). For what reason? Is the file really an nginx server, or is it just named like this to show that nginx is exploitable? I know that this is basically about the bash exploit, right?
The file is only named nginx. It has been downloaded from an untrusted server, 162.253.66.76, so it could be anything really.
The title/description of the gist claims it is a kernel exploit with CnC (Command and Control) capabilities. So yeah, the file is only named nginx; it doesn't have anything to do with the popular web server software of the same name. Probably named that way to avoid suspicion.
Discovering a case where wget shells out to bash while setting some env vars based on received headers, and then anonymously posting a supposed Shellshock payload just begging to be downloaded with wget.
If you run a web server that generates its own CAPTCHA using something like ImageMagick, or call system() to gzip something, you could possibly be vulnerable.
Never underestimate vulnerabilities and the way people can use them, or even combine them, to exploit systems.
Consider how many people touch an enterprise system, or even a system at a smaller shop. Consider how many people touch shared hosting servers or even dedicated boxes.
Do /you/ trust all of them, along with all the authors of all the software exposed to the web (or touched by something exposed to the web) on that system?
On shared hosting systems, you have to design the system with the assumption that someone is always compromised. So, additional accounts getting compromised should just be business as usual.
Seriously, if you're on shared hosting, it's almost certain that at least one person on the server is compromised/malicious
Why would it be messed up if it's true? The Unix philosophy is to compose complex functionality using lots of small tools. Shelling out to existing tools instead of reinventing your own makes total sense.
I have seen implementations that shell out to bash scripts throughout my career in web and back-end development. It's a serious antipattern in the wild.
I imagine that there would be little need for a 0day. Also, if your setup allows you to be exploited via HTTP, chances are high that you're running old stuff anyway.
Someone is going to make a fortune mining bitcoin in the coming weeks.
You could bot up every server at AmaGooBookSoft and still not make an appreciable amount of money mining Bitcoin, though I suppose that is not true if you go to one of the scrypt altcoins. CPU mining is very uncompetitive. A CPU miner is upper bounded at about 100 megahash per second (100 * 10^6 hashes per second) and is more typically closer to 5 * 10^6. 100 megahash per second is worth approximately two cents per month. My off-the-cuff estimate for AmaGooBookSoft is 300k servers, which gets you about $6k a month or $200 a day if you rooted all of them and they were the most effective CPUs ever reported for mining Bitcoin.
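As a quick sanity check of those numbers (all inputs are the rough estimates from the comment itself, not measurements):

```python
# Back-of-the-envelope check of the mining estimate above. The
# inputs are the comment's own guessed figures, not measured values.
servers = 300_000              # guessed AmaGooBookSoft fleet size
usd_per_server_month = 0.02    # ~100 MH/s of SHA-256 per month
per_month = servers * usd_per_server_month
per_day = per_month / 30
print(per_month, per_day)      # → 6000.0 200.0
```

So even under the most generous assumptions, the whole fleet earns about $200/day.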
The far more likely way for this bug to result in someone exfiltrating lots of bitcoins is them hitting every Bitcoin exchange looking for e.g. cPanel on a development box, WordPress hosted on nginx with FastCGI, etc. If they find one, they've got the hot wallet inside of five minutes if it is on the same box, a bit more if it is somewhere else on the local network.
(I've got to admit, the first thing I did after patching my boxes, to take the edge off the stress, was grep Bitcoin Core for system calls. No obvious ones that I could see, for what it is worth.)
If I can emphasize again, though: just because you don't engage in risky behaviors like e.g. transacting in bitcoins, doesn't mean your boxes are safe. Every. Server. On. The. Internet. Will be probed for a variety of exploits enabled by this vulnerability. There will be dozens of independently administered for loops probing for it within 24 hours.
FWIW your off the cuff estimate for AmaGooBookSoft servers is off by an order of magnitude (possibly 2 orders if you're talking about all 4 companies combined).
If ShellShock is currently unpatched, it is in fact a 0-day, right? Just a very public one. Should we, like, not be connected to the internet until this is fixed? Are personal routers vulnerable?
A 0-day means we don't know of its existence; if it's been previously mentioned and people have failed to patch it, then it isn't a 0-day anymore. The term comes from "zero days to fix". Since we've had time to fix this problem, and failed to, it is no longer a 0-day. Hypothetically, third parties might have been safe from this if they fixed it independently.
The bash exploit may not be a zero-day, but it only executes a payload allegedly containing a kernel exploit. The question seems to be, is that brand new?
Command and control. One frequent thing that black hats do is have all the boxes they root subscribe to e.g. an IRC channel, so that they can receive further commands. The ratware ("software written with nefarious purposes in mind") often comes pre-configured with options for blasting spam, hosting content on HTTP (for SEO or exploitative purposes), doing DOSes, and executing arbitrary commands against the local system.
One can, of course, envision numerous ways to get data in and out without it being an IRC channel, but that is easy to implement, works across a wide variety of target environments, and plays well with the existing ratware ecosystem.
I had this stupid idea long ago of using "security by obscurity" and renaming some commands that are typically used manually and are favorites of exploits like this, for example curl, wget, gcc -> renamed to le_curl, le_wget, le_gcc etc., just for my own use. Maybe it wasn't such a stupid idea.
Yes, but those commands (wget, curl, gcc and a few others) are usually unnecessary and only needed once in a while; ideally you could uninstall them and then install them on demand.
Your idea won't work and would also prevent most scripts on your system from working correctly.
I wrote unix exploits for a living for years (all legit, at a pentest company), and for payloads I would normally use echo or printf to upload a binary and execute it. And those are built-in commands in the shell.
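For illustration, here is the harmless core of that technique, driven from Python: the shell's printf built-in alone can materialize arbitrary bytes on disk, no wget/curl/gcc required. (The file path is generated for the demo; we write only the four ELF magic bytes, nothing executable.)

```python
# Harmless sketch: use only bash's printf built-in to write
# arbitrary bytes (here, just the ELF magic) to a temp file.
import os
import subprocess
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo")
# \177 is octal for 0x7f; the shell's printf expands it to a raw byte.
subprocess.run(["/bin/bash", "-c", f"printf '\\177ELF' > {path}"],
               check=True)
with open(path, "rb") as fh:
    print(fh.read())  # → b'\x7fELF'
```

This is why removing the usual download tools is at best a speed bump: the shell itself can write files.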
I know; still, most of the script-kiddie attacks I see (on an SSH honeypot, for example) are of the "wget download bad script" type. I typically uninstall gcc, and my scripts don't use wget/curl/scp; now rsync, on the other hand... You can't blacklist all possible tools; you may as well use netfilter to block non-root-initiated outbound traffic.
Any CGI script written in bash, that's the worry. Since mod_cgi uses environment variables to pass along HTTP headers, those programs are sure to touch the environment.
Even if the CGI script is in some other language, think of how many .sh wrapper scripts there are out there.
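The mechanism is easy to demonstrate with the widely circulated test string. CGI exports request headers as environment variables (User-Agent becomes HTTP_USER_AGENT), and a pre-patch bash would import any variable whose value looks like a function definition, executing whatever trailed it:

```python
# The well-known Shellshock probe, driven from Python: plant a
# function-shaped value in the environment and start bash. An
# unpatched bash prints "vulnerable" before running the command;
# a patched bash prints only "this is a test".
import subprocess

res = subprocess.run(
    ["/bin/bash", "-c", "echo this is a test"],
    env={"HTTP_USER_AGENT": "() { :;}; echo vulnerable"},
    capture_output=True, text=True,
)
print(res.stdout)
```

Note the bash script itself never references HTTP_USER_AGENT; merely starting bash with that environment was enough.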
> This bug only affects people who dont care about security.
And those running code written by people who don't know any better...
If you are an admin for servers running third-party code that you have not verified every line of, you need to be concerned about anything like this, just in case said code does something that isn't considered best practice.
Also, if the CGI code path uses affected functions, you are not going to be protected by avoiding them in the code that is eventually called.
No, you need to use the exec* family of functions to start a process without invoking the shell. Take care: it does not really start a new program; it replaces the currently executing program. So you will need to fork beforehand.
execve, but even that is stretching it. If I write something in Python and someone tells me "hey, use system() or subprocess", I tell them: no thanks, I would rather not do it. Then I go look for the Python way of doing it, whether it's an image library or whatever it is that needs to be done.
Now you say: but you really, really have to run this "utility" with subprocessing or whatever. Well, then it's outside of the Python program, and instead of relying on subprocessing I'd consider exposing that utility as an interface and writing a small protocol through which the Python program and the utility can exchange data. You most probably have to parse the output of the utility anyway; better to do it right straight away. And if you don't have to parse the output of the utility, then you send a signal/request/message-bus event to a listener which will do what you want, but now with a cleaned environment.
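If you do end up invoking a utility from Python, the "cleaned environment" part can at least be made explicit. A sketch (the single whitelisted PATH entry and the helper name are just examples):

```python
# Run a utility with no shell and an explicitly minimal environment,
# so nothing attacker-influenced rides along from os.environ.
import subprocess

def run_clean(argv):
    """Execute argv directly, with a scrubbed environment."""
    return subprocess.run(
        argv,
        env={"PATH": "/usr/bin:/bin"},  # drop everything else
        shell=False,                    # argv is never shell-parsed
        capture_output=True, text=True, check=True,
    ).stdout

print(run_clean(["/usr/bin/env"]))  # → PATH=/usr/bin:/bin
```

Printing the child's environment via `env` confirms only the whitelisted variable survives.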
All you're doing is hiding the call to system/execve behind deeper layers of abstraction.
Plus, if people actually went ahead and reproduced the entire GNU/busybox toolchain inside of Python, everyone would be queuing up to criticise them, particularly if they introduced more security issues (e.g. reproducing rsync fully within a Python library).
Realistically, using execve instead of system is a step forward. It is more efficient for non-scripts, and you aren't potentially picking up poisoned environment variables. But if you NEED to run utilities, then all you can really do is carefully validate all the parameters beforehand and hope for the best.
Suggesting never to use either execve or system is just highly unrealistic. There is too much useful code available that way, and there aren't nearly enough libraries to reproduce all of that code in whatever language you're working in.
> All you're doing is hiding the call to system/execve behind deeper layers of abstraction.
That's the point: layers where environment variables do not pass, since in that abstraction they do not make sense.
> Plus if people actually went ahead and reproduced all of the GNU/busybox toolchain inside of Python
Basically all of GNU coreutils/busybox is already inside Python; it's called `import os`.
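To illustrate, a few coreutils staples and their stdlib counterparts, no shell involved (the file and directory names below are made up for the demo):

```python
# Common shell one-liners done with the Python stdlib instead.
import os
import shutil

os.makedirs("demo/dir", exist_ok=True)            # mkdir -p demo/dir
with open("demo/dir/f.txt", "w") as fh:           # echo hi > f.txt
    fh.write("hi\n")
shutil.copy("demo/dir/f.txt", "demo/dir/g.txt")   # cp f.txt g.txt
print(sorted(os.listdir("demo/dir")))             # ls → ['f.txt', 'g.txt']
shutil.rmtree("demo")                             # rm -r demo
```

None of these touch a shell, so there is nothing for a function-shaped environment variable to latch onto.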
For your rsync example, python-librsync exists: a C library with a Python binding/interface. No need to run bash to use the algorithm. If you still want to exec it, then use Python's binding to the execve system call or similar, not system.
I didn't suggest never to use execve and/or system. I said: do not ever use system; use execve sparingly, and question it every time.
I ran the "nginx" binary through strace in a Vagrant VM and got some connection attempts to a Cloudflare IP:
connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("108.162.197.26")}, 16) = 0
but didn't see anything interesting being sent there... My tcpdump output showed it connects to a http server at 89.238.150.154:5 and exchanges some data there
sent >>> BUILD X86
recv >>> !* HTTP
recv >>> 190.93.240.15,190.93.241.15,141.101.112.16,190.93.243.15,190.93.242.15 pastebin.com /4HQ2w4AZ 80 2
recv >>> PING
sent >>> PONG
then it just goes on doing ping/pong with the same server. At one point the process forks a separate copy of itself and dies...
The pastebin link leads to an uploadcash.org file named hermoine_granger_jpg.jpg, which I can assume is a payload of some kind...