But why does the malware need curl in the first place? It seems perfectly capable of downloading things without it. And it's running on Windows, where it's guaranteed to have an HTTP library available anyway.
Most antivirus software probably accounts for direct Windows API usage, and maybe using curl doesn't set off those flags? Not sure; I don't make antivirus software, nor do I use it.
Malware users often purchase exploits and scripts; it's not like they will rewrite those scripts to take advantage of HTTP libraries, or even care about efficiency.
I suspect it's just part of a package that someone bought and is using.
Where in the post did it say it was running on Windows? It was stated that the User-Agent string wasn't identifiable, nor did it send any identifying headers.
edit: thanks for the clarification, I totally missed the file name
Making an HTTP request for a hardcoded path (no HTTPS) is quite easy: you just open a socket and send a couple of header lines. curl can do far more advanced and intelligent things than that.
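Something like this, roughly. A minimal sketch in Python with a made-up host and path, no TLS, no redirects, no chunked decoding, none of the things curl handles for you:

```python
import socket

# Minimal HTTP/1.0 GET against a hardcoded host and path (both hypothetical).
# Just a socket and two header lines, nothing more.
HOST = "example.com"
PATH = "/downloads/tool.exe"

with socket.create_connection((HOST, 80), timeout=10) as sock:
    request = f"GET {PATH} HTTP/1.0\r\nHost: {HOST}\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Split the status line and headers from the body at the first blank line.
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("ascii", errors="replace"))
print(f"body: {len(body)} bytes")
```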
I guess replacing the file served at that URL with a curl build that, when invoked by this particular malware (which you could presumably detect fairly easily), graphically warns the user about the infection would be considered going too far.
I'm not even sure you could reliably detect that; the article says the requests didn't have any special or identifying headers. But yeah, you're right: even if they could detect it, supplying an exe that warned the user would still feel sort of weird IMO, and how many users are likely to trust a random popup or notification that appears out of the blue?
There are "indicators of compromise" listed at the end of the slides. So you could use that. But that's not curl's job... disabling some old versions by changing the url is more than enough in my opinion.
Detect the malware from within the binary served by the webserver, running on the infected machine, not on the webserver itself, i.e. always serve the `compromised` binary for that URL.
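Roughly this kind of thing, as a sketch; the paths below are placeholders, the real ones would come from the IoC list in the slides:

```python
import os
import sys

# Hypothetical indicators of compromise, placeholders only, not the real
# entries from the slides. The served binary checks these locally instead
# of the webserver trying to fingerprint the download request.
SUSPECT_PATHS = [
    r"C:\Users\Public\loader.vbs",
    r"C:\ProgramData\updater\payload.dll",
]

def looks_compromised() -> bool:
    return any(os.path.exists(p) for p in SUSPECT_PATHS)

if __name__ == "__main__":
    if looks_compromised():
        print("Warning: files associated with known malware were found on this machine.")
        print("Get a clean curl from https://curl.se/ and consider running an AV scan.")
        sys.exit(1)
    print("No known indicators found, but note this is not an official curl build.")
```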
Detection would wind up being an arms race, I expect; if you were going to do this, better to replace it with an application that just shows the warning, with a hint for legitimate users who downloaded it blindly.
It would be more like, "let the script kiddies using this tool shrug their shoulders when it stops working and switch to another (of which there are legion)."
As somebody else mentioned - this is a great honeypot opportunity! By serving malicious builds based on referer and user-agent, they might be able to gather really interesting data.
> As somebody else mentioned - this is a great honeypot opportunity! By serving malicious builds based on referer and user-agent, they might be able to gather really interesting data.
They said the malware sent no Referer header and a changing User-Agent. If the User-Agent were useful for segmenting these downloads from the rest, they most likely would have just refused downloads based on it, because renaming the file like they did breaks build scripts for lots of downstream projects.
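For what it's worth, header-based segmentation on the download server would only take a few lines if the headers were actually usable. A sketch (the User-Agent value and file names are invented; the whole point above is that the real requests weren't this distinctive):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch of serving a different build based on request headers.
# The suspect User-Agent below is a placeholder.
SUSPECT_AGENTS = {"WinHttp.WinHttpRequest.5.1"}

class DownloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        referer = self.headers.get("Referer", "")
        if agent in SUSPECT_AGENTS and not referer:
            path = "honeypot-build.exe"   # instrumented or decoy build
        else:
            path = "curl-official.exe"    # the normal download
        try:
            with open(path, "rb") as f:
                data = f.read()
        except FileNotFoundError:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), DownloadHandler).serve_forever()
```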
There's also the ethical issue of breaking into others' machines, even if it's "for a good cause".
Isn't that rather hard to do ethically? How would you make this "malicious build", given that it's probably going to be running on my dad's laptop at some point?
You could also make it harmless but not a valid cURL: something that just prints to the screen that there's a good chance it was downloaded by the original malware, and points you to a page on haxx.se where you can get the real version.
The fact that the article states they've already changed the URL for downloading the file suggested to me that it wasn't intended to be automatically downloaded (I realise that automation does not imply malice).
Are you talking about how cURL works or how one downloads (Windows) cURL in the first place, as is the topic of this post?
It may not be ideal, but you could certainly put some form of authentication in front of the Windows downloads, or force token generation via the website to download the executable (a rough sketch below).
The question is whether that's a good idea. As mentioned, it's not cURL's responsibility to prevent malicious usage, but perhaps being a little more cautious about how Windows users acquire cURL in the first place wouldn't be seen as an intrusion.
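One way to read "force token generation via the website" is signed, expiring download links. A rough sketch with a placeholder secret, filename and lifetime; this is not how curl's site actually works:

```python
import hashlib
import hmac
import time

# Signed, expiring download tokens. SECRET, the filename and LIFETIME
# are all placeholders for illustration.
SECRET = b"server-side-secret"
LIFETIME = 300  # seconds a generated link stays valid

def make_token(filename: str) -> str:
    expires = int(time.time()) + LIFETIME
    msg = f"{filename}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{expires}.{sig}"

def check_token(filename: str, token: str) -> bool:
    try:
        expires_str, sig = token.split(".", 1)
        expires = int(expires_str)
    except ValueError:
        return False
    if time.time() > expires:
        return False
    msg = f"{filename}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# The download page would embed make_token() in the link it renders, and the
# file handler would call check_token() before serving the .exe.
print(check_token("curl-win64.exe", make_token("curl-win64.exe")))  # True
```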
It's funny, but a little tame of Crockford to so easily give out a special evil-allowing license to IBM. He doesn't seem very convinced of his own Manichean license.
P2P-sharing would be a really great solution to this from the malware developer's point of view. I don't harbor any ill-will towards malware devs, especially not when they're going after windoze machines :^)
But yeah, the curl website could collect a lot of data on them, so blindly reaching out to some predefined URL seems like a very immature solution for what is really a critical component of the malware. It'd be way smarter to run a C&C machine somewhere and start pairing peers. curl is fairly small, right?
I think they'll probably just keep renaming the file and updating the URL for now. It'll be cat and mouse, and newer Windows versions ship with curl anyway, so there's really no point in the curl maintainers wasting too much time on it; it's a limited-time issue that will more than likely sort itself out later.
Yeah! Them people that don't share our hobbies and choices can get fucked, right? I mean, transitioning to Linux is so easy and relearning a new ecosystem so painless. Them scam victims deserve what's coming for them!
I also agree that malware devs are dumb. Instead of using that curl download in a way they know works, they could just add a P2P module to their malware, open up a few ports on infected machines and hide all that from AV. Easy, right? I mean, them archive URLs change all the time! And updating that via the C&C is just too inelegant!