Transmission BitTorrent app contained malware (transmissionbt.com)
895 points by mroling on Mar 6, 2016 | 338 comments



The fact that the binary was infected, I can somewhat understand. However, the way communication happened/is happening on this issue is very disconcerting and basically makes it impossible to know whether it's currently safe to download 2.92 from their site.

Questions like

- how did the compromised binary get there? Was the source code hijacked or was the binary altered after it had been built?

- Were the SHA256 hashes on the site also compromised? (btw: Having hashes on the site is good enough for making sure you're not installing a corrupted binary. It doesn't do anything against intentional alteration of the binary, though; for that, the hashes need to be stored on an external site.)

- How did the compromise happen?

- what steps were taken to ensure that the same compromise doesn't happen to new binaries posted?

- Did the attacker leave any foothold on the compromised system(s)?

- How were such footholds removed?

All questions that need to be answered before it's safe to upgrade transmission either from the website or with the AutoUpdate feature. A red warning telling me that one binary was infected and that I have to download another binary isn't good enough.

I know the Transmission people are volunteer developers and not PR people, and I can totally accept that, but there are some things that just need to be made clear before we can safely update to later versions (and thankfully, 2.8 keeps running just fine).


It will probably take time to get all of the answers, but in this case, automatic updates are safe.

Although I'm not a Transmission developer, I develop software that uses the same automatic update mechanism (Sparkle). It appears that the hacker did not update the MD5 present in the automatic update mechanism. Thus, when the automatic update mechanism downloaded the hacked version of Transmission, it reported it as a corrupted download.

You can see the comment here: https://forum.transmissionbt.com/viewtopic.php?f=4&t=17834#p...
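To make that failure mode concrete, here's a rough sketch of the integrity check (this is not Sparkle's actual code; the file names and contents are made up). The attacker replaced the dmg on the server but left the old checksum in the appcast, so the updater saw a mismatch:

```shell
# Simulate the release: record the checksum of the legitimate dmg,
# as the appcast entry would.
printf 'legitimate release bytes' > update.dmg
appcast_md5="$(openssl dgst -md5 -r update.dmg | cut -d' ' -f1)"

# The attacker swaps the binary on the server but forgets the appcast entry...
printf 'malicious release bytes' > update.dmg

# ...so the updater's integrity check fails and the install is refused.
actual_md5="$(openssl dgst -md5 -r update.dmg | cut -d' ' -f1)"
if [ "$actual_md5" = "$appcast_md5" ]; then
  echo "checksum ok: installing update"
else
  echo "corrupted download: refusing to install"
fi
```

This is also why a stale checksum only helps by accident: had the attacker updated the appcast too (or signed with the right key), the check would have passed.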


> It appears that the hacker did not update the MD5 present in the automatic update mechanism (Sparkle). Thus, when the automatic update mechanism downloaded the hacked version of Transmission, it reported it as a corrupted download

Yeah. But without knowing how the attacker got access, we have no idea whether they have changed the current 2.92 binary again (this time remembering to update the hash in the appcast) or whether this time around the binary is actually pristine.

The fact that the site was never down between this happening and the red warning text appearing makes me suspect that only a hasty cleanup was performed and that the actual security flaw might still exist.


An attacker would need the private key to update the signature in the appcast. It's possible the devs store their private key on the server, although that would be silly.

That doesn't discount the recent MITM vulnerability Sparkle had, though, if Transmission is still using an old version of the framework.


VirusTotal has some more info, including the files it writes:

https://www.virustotal.com/en/file/d1ac55a4e610380f0ab239fcc...

(Look under the "Behavioural information" tab)

Written Files and Created Processes are interesting:

[Transmission] /Users/user1/Library/kernel_service (successful)

[unknown] /Users/user1/Library/.kernel_pid (successful)

[unknown] /Users/user1/Library/Saved Application State/org.m0k.transmission.savedState/window_1.data (successful)

[Transmission] /Users/user1/Library/Saved Application State/org.m0k.transmission.savedState/data.data (successful)

[Transmission] /Users/user1/Library/Saved Application State/org.m0k.transmission.savedState/windows.plist (successful)

[kernel_service] /Users/user1/Library/.kernel_time (successful)

Created processes

/Volumes/Transmission/Transmission.app/Contents/MacOS/Transmission (successful)

/Users/user1/Library/kernel_service (successful)

kernel_service (successful)

Edited to add: If anyone has a copy of the DMG, sha1 5f8ae46ae82e346000f366c3eabdafbec76e99e9, please link me a copy via email (brendandg@nyu.edu) or twitter DM (@moyix).


One of the researchers posted links to both malicious dmgs.

[1]: https://twitter.com/claud_xiao/status/706563279355645953


Maybe take a look around https://build.transmissionbt.com/ - but then again maybe the svn repo wasn't compromised? I tried a "svn diff svn://svn.transmissionbt.com/Transmission/tags/2.90 svn://svn.transmissionbt.com/Transmission/tags/2.91" and didn't see anything suspicious on a fast scroll-through


Side topic: probably not a good idea to expose Jenkins externally, especially if you don't keep Jenkins up-to-date all the time (for Transmission it is up-to-date right now). This Jenkins probably contains the key to the svn server, so if someone finds a hole...


>Side topic: probably not a good idea to expose Jenkins externally

Jenkins (and CI in general) can be a very weak point. This was posted on Hacker News a while back https://github.com/samratashok/ContinuousIntrusion


Very useful demonstration. Thanks for sharing.


> Jenkins probably contain the key to the svn server

Why should it? For open-source software, the build server can even be run by a third party.


I am not following your comment here. Most Jenkins setups use global credentials or a server-side config file like .ssh/config or .gitconfig.


But ideally there are no credentials, because Jenkins doesn't need to push code to the repo (and can clone the publicly available code).


But it might need to push new (binary) updates if the master/deploy branch gets updated or a commit contains a specific tag.

As far as I know, only the binary was updated.

I'd be interested to hear, though, how it got compromised after all.


Yeah, that makes sense. Build servers are one of the weakest links in distributing software. That's why this exists and I'm glad it's making progress:

https://reproducible-builds.org

And even if you sign updates, the key management for doing that is usually centralized, which can be bad:

http://arstechnica.com/security/2016/02/most-software-alread...


I do not think it was the build server, though. According to this analysis[1], the attacker used a different key to sign the build (all Mac apps need to be signed or the default behavior is to reject the app. You can permanently disable this behavior in settings, or just for one app by holding control while opening it, which a lot of users who use Transmission probably do because not all legitimate apps are signed). Anyway, since the app was signed with a third party's certificate (which was approved by Apple), chances are only the website was compromised. If the build server had been compromised, the attacker would have had access to the developer's certificate and would most likely have used that.

[1]: http://researchcenter.paloaltonetworks.com/2016/03/new-os-x-...


Transmission, as an open source project, should probably evaluate another CI service such as Travis rather than run their own Jenkins services.



Copy/pasting the helpful parts of that article:

How to Protect Yourself

Users who have directly downloaded the Transmission installer from the official website after 11:00am PST, March 4, 2016 and before 7:00pm PST, March 5, 2016, may have been infected by KeRanger. If the Transmission installer was downloaded earlier or downloaded from any third party websites, we also suggest users perform the following security checks. Users of older versions of Transmission do not appear to be affected as of now.

We suggest users take the following steps to identify and remove KeRanger before it holds their files for ransom:

1. Using either Terminal or Finder, check whether /Applications/Transmission.app/Contents/Resources/General.rtf or /Volumes/Transmission/Transmission.app/Contents/Resources/General.rtf exists. If either exists, the Transmission application is infected and we suggest deleting this version of Transmission.

2. Using “Activity Monitor” preinstalled in OS X, check whether any process named “kernel_service” is running. If so, double check the process, choose the “Open Files and Ports” and check whether there is a file name like “/Users/<username>/Library/kernel_service” (Figure 12). If so, the process is KeRanger’s main process. We suggest terminating it with “Quit -> Force Quit”.

3. After these steps, we also recommend users check whether the files “.kernel_pid”, “.kernel_time”, “.kernel_complete” or “kernel_service” existing in ~/Library directory. If so, you should delete them.
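Scripted, those three checks might look something like this (a sketch based on the paths above; it only looks for the published indicators, so a clean result is not proof of a clean machine):

```shell
#!/bin/sh
# Check for the known KeRanger indicators described in the steps above.
infected=0

# Step 1: the rogue General.rtf inside the app bundle
for rtf in "/Applications/Transmission.app/Contents/Resources/General.rtf" \
           "/Volumes/Transmission/Transmission.app/Contents/Resources/General.rtf"; do
  if [ -e "$rtf" ]; then
    echo "infected installer marker found: $rtf"
    infected=1
  fi
done

# Step 2: the malware's main process
if pgrep -x kernel_service >/dev/null 2>&1; then
  echo "kernel_service process is running"
  infected=1
fi

# Step 3: leftover files in ~/Library
for f in .kernel_pid .kernel_time .kernel_complete kernel_service; do
  if [ -e "$HOME/Library/$f" ]; then
    echo "malware file found: ~/Library/$f"
    infected=1
  fi
done

if [ "$infected" -eq 0 ]; then
  echo "no KeRanger indicators found"
fi
```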


You have a typo: "Applicaions". Usually not a problem but in this case it will say "No file/folder found" when it's just a typo.


Thanks. Fixed it. I only copy/pasted originally though :)


"It will then sleep for three days. Note that, in a different sample of KeRanger we discovered, the malware also sleeps for three days, but also makes requests to the C2 server every five minutes."

It's fascinating!


Isn't it possible to fire a takedown notice to that server? I mean, the KeRanger authors committed a felony, and Amazon (assuming you mean Amazon's EC2 server) might react quickly if they realize what has happened. It might save a lot of computers from getting destroyed. As long as the server is somewhere in the Western world, it should not be a problem.


It's a "Command-and-Control" server (C&C or C2).

https://en.wikipedia.org/wiki/Command_and_control_%28malware...

I just learned that too. For me, C&C brings to mind "Command & Conquer" (the game).

https://en.wikipedia.org/wiki/Command_%26_Conquer


Thanks, I just realized it after reading Claud Xiao and Jin Chen's analysis, too. Apparently, this ransomware uses Tor to hide its origin.

Analysis: http://researchcenter.paloaltonetworks.com/2016/03/new-os-x-...


I liked the "We have ticket system." (in the screenshot of "README_TO_DECRYPT.txt").

They ask (only) 1 BtC as a ransom.


And they decrypt one file for free, to prove they can do it. Nice touch.

Screenshots of the web UI:

https://twitter.com/moyix/status/706577507965870080/photo/1


The server isn't on EC2, it's hosted on Tor. The malware uses an HTTP-to-Tor gateway service (onion.nu and onion.link) to pull down the encryption key and README file from one of three different hidden services. In theory you could try to get the gateways to block the connections, but I'm not sure they're likely to be cooperative.


Amazon's abuse team might help, but the DMCA would not be relevant unless you can show copyright infringement in this.


Nice. That matches what I'm seeing.


Do the developers have an explanation anywhere as to how this happened? The homepage ( https://transmissionbt.com/ ) has a big red warning to upgrade to 2.91, but I can't find any info about how someone went about putting malware in the download.


Yep, this deserves a more detailed explanation (or maybe they still don't know what happened). I updated from the previous version to 2.90 through the app built-in update, and I don't seem to have any "kernel_service" process running. Can someone that has that process in their system tell us where they downloaded the program?


Agreed. I don't have the process running either. Screenshot of Transmission 2.90 red warning to update to 2.91. http://imgur.com/aQdHJ3b


> I updated from the previous version to 2.90 through the app built-in update...

Same, and I also don't see any `kernel_service` process running.

Fingers crossed for the in-app update not being affected by the hack.


I'd definitely run a virus scan to be sure... If you don't have one just install a Trial version and remove it again after a week.


Noted: I've gone with BitDefender from the Mac App Store. Will report back results.

EDIT: welp, BitDefender found nothing, all clear.


Maybe give Malwarebytes Anti-Malware for Mac [1] a try? I've used their Windows products for a while now.

[1] https://www.malwarebytes.org/antimalware/mac/


Avira is what we use (at a very security-conscious org), and it's been unobtrusive.


(reply to noondip): if anyones got a better suggestion I'd love to hear it :)


Back when Apple still made Mac OS X Server as a separate operating system, they included ClamAV¹ to scan for malware in mail. They don’t include it anymore, but ClamXav² (been around since 2004³) is a nice GUI for ClamAV that I’ve been using for a while now.

――――――

¹ — https://en.wikipedia.org/wiki/Clam_AntiVirus#Mac_OS_X

² — http://clamxav.com/index.html

³ — http://clamxav.com/birthday.html


I run a private mail server and swear by ClamAV to help reduce noise and pollution that accumulates and spreads through my server, but I don't think I've ever had any luck with it being a good front line defense against up-and-coming malware, whether it targets Windows or Mac. I don't think I would recommend it as a primary malware scanner for a Mac, or Windows.


The same BitDefender which was hacked a few months ago? http://securityaffairs.co/wordpress/39028/cyber-crime/bitdef...



If you release commercial or popular open-source software, it's probably a super-bad idea to keep your signing key on a notebook computer you use outside of the office.

Have a trusted machine kept in a secure location to sign it for you if that's practical.

I bet someone's key leaked out here.


All that stuff - bittorrent, soulseek, calibre etc - lives in a vm, with access to the host only via samba shares. I'll decide what you see and where you can write. Yes, it's great you download stuff. No, you can't write to the stuff I'm sharing. Yes, having a web-server serving up books to the outside world is great. No, you can't serve up anything from my filesystem to anyone who feels like it.

When you can't (be bothered to) vet the source code, stick it in a vm. On a sensible machine with an ssd it's only 10 seconds away. Why risk it. Especially if the software you want/need to run only works under windows.


This is exactly why sandboxed apps (e.g., iOS/UWP/etc.) are a good thing.


Or the Mac App Store itself. Its enforced sandboxing would have provided a decent first line of defense against this, but torrent clients can't be submitted to the App Store due to Apple not liking the legal aspects, not to mention the other issues people have with it. (Outside the store, apps can still opt into sandboxing, but that wouldn't help with a malicious installer.)


I doubt Apple keeps it out for "legal" aspects. iTunes sales seem more likely.


For Windows there is SandboxIE: http://www.sandboxie.com/index.php?DownloadSandboxie

It should be able to sandbox Windows Apps, except for Metro/Modern UI Apps and Microsoft Edge.

Too many programs have a backdoor or Trojan in them now. It is a good idea to run any app that accesses the Internet in a sandbox first to see what it does.


Just a warning: by default it doesn't protect your documents from reading.

It isolates the process: all writes (filesystem, registry) go to the sandbox instead of the host filesystem, so malicious software can't easily install itself. But reading data is mostly unprotected by default, so malware run in a sandbox may steal sensitive data. To protect such data you have to configure the sandbox manually.


Also cameyo to make your own:

http://www.cameyo.com

And they have prepackaged sandboxed apps:

https://online.cameyo.com/public


I have no problem paying for apps but for people who have paid for SandboxIE, do you think you've got your money's worth out of it?


Yes. Back when I still used Windows five years ago, the app was an essential tool for me. All web browsers, downloaders and basically anything else that I consider high risk must be run in a sandbox, which is routinely emptied. Less risky apps are sometimes also installed in a sandbox, which is emptied much less frequently. And it's great for trying out trial versions of software before I fully trust their publishers. I didn't run any antivirus on that machine at all and I'm that confident. Nowadays I've switched to OS X but I still miss the easy sandboxing of basically any app.


Beware that VMs are not necessarily secure. They can be escaped!


This argument is similar to "a condom can always break". Technically you are correct, but I'd still use one.


Which is the entire premise behind Qubes: https://www.qubes-os.org


This argument is only similar if the condom is known to have huge design flaws

VMs have tons of well documented issues. If you want a smaller attack surface, try OS virtualization technologies (zones/jails)


This makes no sense. VMs are by far the most secure form of isolation. No one is going to get infected with malware that escapes VMs - it is far too valuable.


How can you honestly think VMs are the most secure form of isolation?


Sure, in theory. Are there any current exploits for VirtualBox?

The way I see it, they're more secure than running the same apps on bare metal. Ubuntu host running a Fedora VM; the latter (with Transmission etc) only running when I need the apps running - seems an almost entirely painless way of providing a lot of security.



"Requires a 0-day" is still a huge barrier. It's not 100% secure, sure, but it's an improvement.


Yeah, not arguing that. Just don't treat it like a panacea


Yes! Some reliable ways to extract an RSA key, and some less reliable ways to swap two cache lines. Virtualization on x86 is a helpful tool for configuration management, but should not be mistaken for a security feature.


More secure than without though.

Let's try to avoid treating security as something binary.


Yes, but if you run a compromised app in VM A and have your sensitive data in VM B, then they have to break out of VM A and then break into VM B. It's no longer worth the effort. There are often easier ways, like phishing.


You can also lease a VPS (anonymously even) and use Deluge with a webGUI. Said webGUI can be a Tor onion service, for better isolation.

But still, this is about malware in Transmission itself, not anything downloaded using it. So the fact that it's a BitTorrent client is rather beside the point, I think.


> soulseek

Neat. Never heard of this one before - what makes it special?


It's a very old P2P file sharing network modeled after Napster. Its main draws (in my opinion) are its active users, who share harder-to-find electronic music, and its discussion groups, which are available inside the app.


I didn't know Soulseek is still alive - brings me back to the days of Direct Connect / DC++ and hunting down rare live sets and (as you mentioned) electronic music over ISDN/dial up.


Wouldn't a container for this be good enough?


A container may be "good enough", but running within a container generally doesn't give the same security posture as running within a virtual machine.


I have no idea; I've never used a container. Is it easier than downloading an ISO, selecting how much RAM/disk/CPU to give it, and installing it?


Admittedly, this might reasonably be considered a basic question, but how do you recommend running a VM on a Mac?



http://veertu.com

Note that I've only used it to run Linux so far.


Does it run much better than in VirtualBox?


Now you got an infected VM. How about we stop packing malware in OSS?


CNBC isn't a website I'd expect to read anything tech-related on, but there are actually a few details in this article:

http://www.cnbc.com/2016/03/06/reuters-america-apple-users-t...

- It's Ransomware.

- Seems to be a 3 day grace-period (chance to remove it, possibly).

- The Transmission developer certificate [Gatekeeper] has been revoked.


Along with the recent Linux Mint hijack, this really illustrates the need for people to verify programs they download. Though I think most people can't be bothered to verify the checksum on a file every time they download it.

On the other hand, the Windows and OS X App Stores are awful. Linux package managers are looking like one of the only straightforward ways to distribute applications securely.


> Along with the recent Linux Mint hijack, this really illustrates the need for people to verify programs they download. Though I think most people can't be bothered to verify the checksum on a file every time they download it.

Barring a situation where a CDN hosting the download is compromised but the main site is not hosted on the CDN, it's extremely unlikely that someone would have the ability to inject malware into the download and not have the ability to make the checksum match. Posting checksums is actually pretty useless; it used to be a way to deal with the possibility of malicious mirrors, but it doesn't provide any security against MITM attacks (unless the main site is secure but the downloads aren't, which is idiotic by 2016 standards anyway), the site getting hacked, etc.

Digital signatures are a little bit better if the key is kept safe, since hacking the site and replacing the binary won't allow a random person to produce a valid signature (although the ability to modify the source code would still allow someone to introduce backdoors into the next version). But there's still a huge problem: you need some way to determine which key was supposed to sign the binary in the first place, so just posting a signature on a website is also basically useless.

Digital signatures can work if there's some sort of centralized distribution method, or for safely updating software that's already installed.


In Debian and Ubuntu at least, all published files containing binary executable files (ISOs, .deb packages, etc.) are hashed and the hash signed by a well-known system pre-installed PGP key.

Given trust in the protection of the private key used to sign the hash list file, the integrity of the executable content can be proved (assuming useful SHA1 collision creation is prohibitively expensive).

Coincidentally I was writing a Bash script this weekend to auto-install (Ubuntu) releases into LVM volumes and it includes the following code to verify the download:

  set -e
  # ...
  ISO="${NEW_DIST}-desktop-${ARCH}.iso"
  # Fetch the hash list, its detached signature, and the image itself
  for F in SHA1SUMS SHA1SUMS.gpg ${ISO}; do
    if [ ! -r $F ]; then
      wget http://cdimage.ubuntu.com/${FLAVOUR}/daily-live/current/$F
    fi
  done

  # Verify the hash list against the pre-installed Ubuntu archive keyring
  if ! gpg --verify --keyring /etc/apt/trusted.gpg SHA1SUMS.gpg SHA1SUMS; then
    echo "Error: failed to verify the hash file list signature; files may have been tampered with"
    exit 2
  fi
  # Verify the ISO against the now-trusted hash list
  if ! grep ${ISO} SHA1SUMS | sha1sum -c; then
    echo "${ISO} is corrupted; please try again"
    exit 1
  fi


Shouldn't you be using HTTPS for all downloads and grabbing the sha1s and image from different mirrors?


That is a good idea but note the use of GPG - if you aren't able to forge that signature, the verification step will fail.


Right, I missed that.


Great idea.


You'd think there would be some sort of global torrent network that simultaneously distributes binaries and signatures.

Doesn't seem like a horrible idea to me: you could just add the developer's key to your client, have your client broadcast interest, receive a _signed_ list of available software with appropriate magnet info... Download servers could serve as initial trackers until enough information has propagated through the network for downloads to be trackerless. Checksums? Guaranteed. Signatures? Acquired. Checking? Performed automagically.

Granted, this just moves the point of failure to the developer's key. (Key acquisition needn't necessarily take place on the developer's site, a friend in the network could pass you a link containing the dev's key and the application's magnet info.)


> (unless the main site is secure but the downloads aren't which is idiotic by 2016 standards anyway), the site getting hacked, etc.

It's not idiotic at all. You let anyone who wants to help spread the load provide downloads, but you use checksums - behind https - to ensure they can be trusted.


I thought only apps signed by "identified developers" are run by default on Macs with Gatekeeper now. Shouldn't code-signing have prevented this? Unless they inserted the malware before the signing process.


>I thought only apps signed by "identified developers" are run by default on Macs with Gatekeeper now. Shouldn't code-signing have prevented this?

"By default". Most developers don't bother to register, and lots of people change the default (and after that, they can right click to open the app and bypass the warning).


Anyone can sign up for the Apple Developer Program to become an "identified developer", so there's nothing that stops an attacker from signing their malware.


And according to the analysis [0], this is exactly what they did. They used a different cert to sign their malware.

I have to admit that Windows' UAC is better in that regard, as it shows the signer's name. But of course this is only useful if you know the "right" name.

[0] http://researchcenter.paloaltonetworks.com/2016/03/new-os-x-...


Yeah, I think this is a major issue on OS X. For the average user it is impossible to tell who signed an app, if it is sandboxed, and what permissions it has. Hell, using the codesign command to extract entitlements from all binaries in a package is hard even for advanced users...

(There is a third party tool named RB App Checker which does make these tasks a bit easier, though)
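For reference, these are the commands involved (a hypothetical session; the app path is just an example of an installed app):

```
# Who signed this app? (look at the Authority lines in the output)
codesign -dvv /Applications/Transmission.app

# Does Gatekeeper accept it, and on what basis (Developer ID, Mac App Store, ...)?
spctl --assess -vv /Applications/Transmission.app

# Dump the entitlements (sandboxing etc.) to stdout
codesign -d --entitlements :- /Applications/Transmission.app
```

None of which the average user will ever run, which is rather the point being made above.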


Well, I guess that’s at least one advantage for apps that use Installer.app¹ to install; Installer.app makes it really easy to see the certificate².

――――――

¹ — https://en.wikipedia.org/wiki/Installer_(OS_X)

² — http://f.cl.ly/items/1s1E3n19273M1l3i3S2X/developer_id_insta...


Hold down control, right click, and choose Open. Then it will run the unsigned binary (after a warning).


A small nit: just plain right click, or hold down control and left click, which is the same as right click.


The malware version was signed with the Transmission developer key.


No, it wasn’t:

The two KeRanger infected Transmission installers were signed with a legitimate certificate issued by Apple. The developer ID in this certificate is “POLISAN BOYA SANAYI VE TICARET ANONIM SIRKETI (Z7276PX673)”, which was different from the developer ID used to sign previous versions of the Transmission installer. In the code signing information, we found that these installers were generated and signed on the morning of March 4.

From: http://researchcenter.paloaltonetworks.com/2016/03/new-os-x-...


What. That's interesting -- Polisan is a relatively well-known paint company in Turkey. I don't think they have a part in this -- maybe they did not store their private keys well enough?


> which was different from the developer ID used to sign previous versions of the Transmission installer

and that didn't ring any alarm bells?


For the end user? No, it wouldn’t. As thesimon and jakobegger, respectively, said:

> And according to the analysis, this is exactly what they did. They used a different cert to sign their malware. I have to admit that Windows' UAC is better in that regard, as it shows the signer's name. But of course this is only useful if you know the "right" name.

> Yeah, I think this is a major issue on OS X. For the average user it is impossible to tell who signed an app, if it is sandboxed, and what permissions it has. Hell, using the codesign command to extract entitlements from all binaries in a package is hard even for advanced users... (There is a third party tool named RB App Checker which does make these tasks a bit easier, though)

…in this comment thread: https://news.ycombinator.com/item?id=11234966


It did actually, but only for in-app updates [0].

[0]: https://forum.transmissionbt.com/viewtopic.php?f=4&t=17835


It could cause a failure for updates but not fresh installs.

Many people would uninstall and download it over again when running into that kind of error message.



There is also the web of trust for PGP, which sort of solves the problem of needing a central store of keys. It does require being inside the web, though (bootstrapping). But once you are, you can gauge how much to trust a key from someone you haven't met.


I really sort of expect a signing-keys-on-download-server announcement any time now.


>unless the main site is secure but the downloads aren't which is idiotic by 2016 standards anyway

https adds a performance hit. The security of "checksum over https and actual file over http", if the checksum is checked, is the same as "actual file over https", barring preimage attacks.
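A sketch of that arrangement, simulated with local files instead of a real mirror (uses GNU coreutils sha256sum; file names are made up):

```shell
# The small SHA256SUMS file is served by the main site over https;
# the large artifact can come from any untrusted http mirror.
printf 'big release tarball' > release.tar     # pretend: fetched over plain http
sha256sum release.tar > SHA256SUMS             # pretend: fetched over https

# Verify the mirror's copy against the https-served checksum list.
verified=no
if sha256sum -c SHA256SUMS >/dev/null 2>&1; then
  verified=yes
fi
echo "verified=$verified"
```

As long as the checksum file itself can't be tampered with in transit, a modified download from the mirror fails the check.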


This attitude is completely irresponsible.

HTTPS will ensure integrity of your download automatically with no action required from the user.

Checksums require the user to perform the integrity check manually, and 99.9% of users won't bother.

Please don't put your users at risk just to save a negligibly small number of CPU cycles.


Granted, the project I saw this reasoning on (https://www.whonix.org/wiki/Download_Security) is one where users are especially likely to do security checks, and they generally aren't satisfied with the security of SSL anyway.

> just to save a negligibly small number of CPU cycles

The link above says they can't afford the additional cost. If it's so negligible, would you sponsor the cost of those extra cycles? I'm sure they would host on SSL if someone covered the cost.


Quote from a Google engineer in 2010 (it's only gotten cheaper in the last six years with advances in CPU tech) regarding SSL overhead:

> On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that. [0]

[0]: https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...


None of affects the point they're making, which is that they can't find SSL mirrors that aren't more expensive. If you find one, let them know and I'm sure they'll be happy to switch over.


It is 2016. SSL is not slow anymore. The only case where it could be deemed slow would be a webpage where the browser has to download a ton of small files like images. Each image would require a new connection, and each connection would require a full SSL handshake. Even then the fix is not to drop SSL but to bundle all the images/files into one.


Maybe not computationally, but if you're 100ms, 200ms, 300ms or more away from the rest of the internet, all the SSL handshakes really add up.


Keep in mind the topic at hand is downloading a single large file, the TLS handshake is a rounding error of the total time, regardless of where you are in the world.


This is a persistent myth https://istlsfastyet.com


I've seen whonix say this

https://www.whonix.org/wiki/Download_Security

>Practically it is difficult to provide SSL protected downloads at all. Many important software projects can only be downloaded in the clear, such as Ubuntu, Debian, Tails, Qubes OS, etc. This is because someone has to pay the bill and SSL (encryption) makes it more expensive. At the moment we don't have any mirror supporting SSL. We're looking for SSL supported mirrors to share the load.

Is it not true that mirrors supporting SSL are more expensive?


No, it's not true anymore. From the link you replied to:

"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers will help to dispel that." - Adam Langley, Google

Getting an SSL certificate used to be a cost, but that's taken care of now by https://letsencrypt.org/.


So can you recommend a mirror for them that supports SSL?

There are multiple named projects there that aren't using SSL, and I don't think it's just laziness. If you know of a way for them to use SSL mirrors for no additional cost, I'll work on getting them to switch over.


Debian, Ubuntu, Qubes, and others are on https://mirrors.kernel.org.

I suspect that wiki page you linked might be out of date. It seems like all of the Whonix download links on their website are over https, like the VirtualBox images https://www.whonix.org/download/12.0.0.3.2/Whonix-Workstatio....

Whonix also runs a tor mirror, which has significantly more overhead than TLS.


I know the last time I played with Whonix it was http, so I think you're right that it's a recent change.

For tails: https://tails.thecthulhu.com/. It appears to be the same server behind http://dl.amnesia.boum.org/ based on the TLS cert.

The situation is messy to actually use https for all of these projects, but I think the issue now is organization rather than overhead.


Huh. The language there seems to be from 2013.

I seem to remember downloading whonix from their site over HTTP around a year ago.

Do you see a tails HTTPS mirror?


This is only true for Intel and AMD x86_64 servers that have hardware accelerated AES with the AES-NI instruction set. Software implementations of AES and the other ciphers are much, much slower than AES with hardware acceleration. RC4 was the fastest decent software cipher for a while, but that has been found to be insecure and its use is discouraged. The fastest possible replacement would probably be ChaCha20, but that cipher is not widely supported yet. The other software ciphers are very slow, and certainly wouldn't be considered as "fast yet".


Most people download software from websites using GUI browsers, while performing a checksum generally requires opening a terminal, changing directories to where the file was downloaded, and running the checksum program there. Maybe the web browser should provide a UI for doing checksums directly in the download manager. For example, each download entry could have a blank "checksum" text box where you can paste in the checksum given on the page.
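The "checksum text box" idea is simple to implement. A minimal sketch of what the download manager would do behind that box (the function and parameter names here are illustrative, not from any browser API):

```python
import hashlib
import hmac

def verify_download(path: str, pasted_checksum: str) -> bool:
    """Compare a downloaded file's SHA-256 digest against a user-pasted hex checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large downloads don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    # compare_digest avoids timing side channels; strip/lower tolerates
    # copy-paste whitespace and uppercase hex.
    return hmac.compare_digest(h.hexdigest(), pasted_checksum.strip().lower())
```

The browser would simply color the download entry green or red depending on the return value.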


This doesn't solve the problem. At all.

A checksum is NOT a substitute for a digital signature.

https://paragonie.com/blog/2015/08/you-wouldnt-base64-a-pass...


It is, given the following attack scenario: the attacker is man-in-the-middling, and the SHA (but not the actual binary) is delivered via HTTPS.

In the case where the attacker has direct control over the website, you're right, it doesn't help at all.


> In the case where the attacker has direct control over the website, you're right, it doesn't help at all.

I was pretty sure that's the threat model we were discussing: Software authenticity.

The only way to automatically know if a piece of software is legitimate is to have a trusted public key that can verify a signature.

Also, HTTPS is implied these days. If you're not using HTTPS, you are either malicious, negligent, incompetent, or working for someone who is some or all of the above.
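The distinction matters because a checksum can be recomputed by anyone, while a signature can only be produced by the holder of the private key. A toy "textbook RSA" sketch of the idea, with deliberately tiny, insecure parameters purely for illustration (real signing uses a vetted library and proper padding):

```python
# Toy textbook-RSA signature over a message hash. An attacker who controls
# the download server still cannot forge a signature that verifies against
# the project's public key (n, e).
import hashlib

p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % 3120 == 1, with 3120 = (p-1)*(q-1)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)   # requires the private exponent d

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h   # anyone can check with only (n, e)
```

The key distribution problem (how users get a trusted copy of the public key in the first place) is exactly the hard part the parent comment describes.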


> If you're not using HTTPS, you are either malicious, negligent, incompetent…

Or poor. Hosting large amounts of binaries over https isn't cheap. I just priced Amazon S3 and cloudfront and for the amount of data that I serve it would cost $300 per month. That's a lot to commit for a GPL-ed binary that brings in practically zero revenue. Maybe there's a cut rate VPS out there that can handle 150GB of data and 3TB of bandwidth per month on the cheap, but I haven't found it yet.


How much is it for the same volume of non-HTTPS traffic?


Right. As a malicious software distributor, all I have to do is distribute the correct hash for my binary, because there's no authenticity verification at all, only that the bits in my binary blob match a certain pattern.


Is that supposed to be sarcasm? Hard to tell.


I suspect not.


That would be a useful extension/plugin for browsers actually.

Maybe, like pointed out in another reply, not for checksums but for signatures. So you just copy/paste the signature after selecting a file, and then it can verify its validity.

Is there no such extension yet? It seems like there should be one already.



Maybe something like:

- have a database of common downloads and all their crypto info, which developers can update once they are validated

- have browser extensions that will check packages on download and alert if suspicious

You could pay for it with some sort of sponsorship from apps themselves, who have an interest in not getting compromised like this (it's terrible publicity).


yeah, maybe they should call it something like.... https??


>Though I think most people can't be bothered to verify the checksum on a file every time they download it.

This wouldn't help anyway. If the malicious party had access to alter the downloads (as they did here) they could just as well change the checksum shown on the page too.

>On the other hand, the Windows and OS X App Stores are awful.

Haven't used the Windows one, but what's "awful" about the OS X one? Quick one-click installations, isolated, signed, easy updates.

Might be bad for the application developers somehow, but I don't see much bad about it from a user perspective -- except maybe the lack of trials. Then again, I've been able to get a refund every time I bought an app that was subpar and wrote to Apple (that happened twice).


In the original thread, the initial reporters specifically pointed out that the files they had downloaded did not match the checksums on the Transmission page. My guess would be that the attackers compromised a mirror, but not the web server serving up the user-visible page with the checksum.


If the website is compromised, the checksum could be changed as well.

With digital signatures the problem is that I often don't know who the official author of the app is.


The Linux Mint hijacker changed the checksum too.


> Linux package managers are looking like one of the only straightforward ways to distribute applications securely.

Linux distributions package what is released upstream. If upstream is compromised, so is the Linux package.


Generally, Linux package maintainers grab the upstream source, while most of these compromises seem to be of the binaries. And, of course, the maintainers generally review the changes before publishing them.


No, Linux distributions offer packages and operating systems that are the result of painstaking work in which all upstream code is reviewed, patched for any inconsistency, and often blocked from going into public archives until known bugs are fixed.


That's... optimistic.


That's how OpenSUSE works. Debian too AFAIK.


It's the aspirational ideal behind how these projects work.


Actually, most have scripts that pull the upstream source and build new binaries without any manual intervention. It is the responsibility of the package maintainer to review every change in code.


The app made it onto the OSX App Store and the author's certs were revoked. This isn't a case of verify source, verify application. This is a case of anything can be infected and it's damn near impossible to check everything.


Transmission wasn't on the Mac App Store, though the app was signed. Apple offers developers the ability to sign their apps distributed outside the Mac App Store to certify them as an Apple-identified developer https://developer.apple.com/library/ios/documentation/IDEs/C...

As such, checking the source is still very much relevant here since this wasn't a compromised app in the Mac App Store, it's an app distributed outside it.


It was signed, but wasn't in the Mac App Store.


Linux package managers are looking like one of the only straightforward ways to distribute applications securely.

Unless you are a small independent app developer. Virtually no distribution wants to take proprietary software. And you have to package for a wide variety of different distributions.

On the other hand, the Windows and OS X App Stores are awful.

The Mac App Store works pretty much effortlessly for me. It's sometimes a bit slow, but other than that it's pretty trivial to use.


You can run your own repo. It's basically a folder with some metadata. DEB and RPM variants should get you 80% of the way.


How is that any more secure than just providing a download?


It's not at the time of installation, but prior to updates the package management system will check signatures of the packages. (And it will only accept packages signed with your key, so the attack used against Transmission wouldn't work)


The question is whether we should trust proprietary software even if it is downloaded securely. I consider "hard to get proprietary software into the official repos" as a feature. Unfortunately it's not as hard as you make it sound in most distributions.


In what respect is the OS X AppStore awful?


The guys at Bohemian Coding discussed it (even if not in much detail) here: http://blog.sketchapp.com/post/134322691555/leaving-the-mac-...


- The APIs exposed to Mac App Store apps are more limited (because the OS X sandbox is not completely comprehensive in what it provides). This limits the types of apps that can be sold on the store.

- There's no means of providing paid upgrades. E.g. for a major version bump, which a lot of developers rely on to keep their business afloat.

- The store interface and navigation are also much slower than the iOS counterpart.

- Recently some certificate issues rendered users unable to open their apps.

- Not 100% sure on this one: You can't download older app versions if your OS is no longer supported.


- There's no means of providing paid upgrades. E.g. for a major version bump, which a lot of developers rely on to keep their business afloat.

Apple and others do this by simply numbering the names of apps. They don't allow you to specify special "upgrade" pricing, but the effect of this is that developers no longer really have full retail pricing and everything is just set to the upgrade price.

Logic Pro 8 for instance used to retail at $499. The upgrade price was $199. Now Logic Pro X on the Mac App Store is just $199 regardless of whether you are first time user or someone who had the previous version.

- The store interface and navigation are also much slower than the iOS counterpart.

I haven't really found that the Mac Store is any slower. I've found that they are both slow.

- Not 100% sure on this one: You can't download older app versions if your OS is no longer supported.

I don't believe it will even show you newer versions of the apps as long as the developer properly specifies the minimum OS version.


Along with the recent Linux Mint hijack, this really illustrates the need for people to verify programs they download.

I'm game, but I need a little help. What are your favorite techniques for verifying downloaded programs?


There's no completely secure way, except for getting the public key directly from the developer over a trusted channel (or in person). And even that won't protect you in case the developer's keys gets compromised.

But there are a number of things that can be done:

- always verify the checksum (if available), in case the download mirror (but not the web site itself) got compromised.

- check for strange strings in the binary (use "strings" and "grep"). E.g. URLs

- scan the downloaded file on Jotti or VirusTotal.

- unpack the binary manually with 7-zip or similar if it's a self-extracting file.

- check installation scripts, build files, etc. (if applicable).

- if downloading source code, check a couple of files at random. Will most likely not protect you, but if everyone does it, it helps detecting embedded malware (or bugs) early.

- run "strace" (Linux/Unix) or "FileMon" (Windows) or similar software and log what the software does when you install and run it for the first time.

- and check Hacker News regularly ;)
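For the "strings and grep" step, here's a rough Python equivalent of `strings binary | grep -E 'https?://'` (the minimum run length and the URL heuristic are arbitrary choices):

```python
# Pull printable-ASCII runs out of a binary and keep the URL-looking ones,
# roughly what `strings file | grep -E 'https?://'` does.
import re

def suspicious_strings(path: str, min_len: int = 6) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    # Runs of at least min_len printable ASCII bytes (space through tilde).
    runs = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    return [r.decode("ascii") for r in runs if re.search(rb"https?://", r)]
```

An unexpected URL in a freshly downloaded binary (say, pointing at an unfamiliar host) is worth investigating before you run it.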


How is the OSX App Store 'awful' compared to a Linux package manager?


app store: anyone gets an ID, signs whatever, and just has to get past the automated detection.

Debian: you have to also fool several people involved in the packaging of said package upstream, plus everyone using it and building from source.


Fair point. How is a compromised package revoked in the Debian case?


What amazes me is that this process can be totally automated and made invisible to the user.

Alas, nobody wants to invest the day or so in actually writing the methods to do it.


Why isn't there a default protocol for sending the SHA hash and verifying the downloaded file automatically?
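One shape such a protocol could take: the bulk download comes from any mirror, while a small detached checksum file is fetched from the project's own HTTPS origin, and the client refuses to save the file unless the digests agree. A sketch (the `.sha256` sidecar convention and URLs here are assumptions, not an existing standard):

```python
# Verify-on-download sketch: binary from any mirror, checksum from the
# project's own HTTPS origin, compared before the file is accepted.
import hashlib
from urllib.request import urlopen

def fetch_and_verify(file_url: str, checksum_url: str) -> bytes:
    data = urlopen(file_url).read()
    # Sidecar assumed to use the common "HEXDIGEST  filename" line format.
    expected = urlopen(checksum_url).read().split()[0].decode()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected:
        raise ValueError("checksum mismatch: refusing to save download")
    return data
```

This still only defends against a compromised mirror, not a compromised origin; for that you need signatures, as discussed elsewhere in the thread.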


I've become increasingly paranoid lately, given that things like these happen and major bugs are uncovered in software that I use almost every day.

It's good that the Transmission developer reacted quickly and made waves so that people can at least be aware that they might have been exposed..

But I wonder how many more applications from the hundreds that I have installed on my machines contain weird stuff - either intentional (for money) or unintentionally (result of a hack).

Open source software is especially vulnerable to this kind of stuff.

If a hacker gets access to a server holding the binaries for an open source app (which most people download), the hacker can just compile the program from sources and add his own code in there and place the installer online.

Given that many big governments are now involved in the information wars, this scenario is quite likely.


"Open source software is especially vulnerable to this kind of stuff."

I'm not sure I follow on this front. Proprietary software could be compromised (whether intentionally by the vendor or unintentionally by some outsider working on the software) effectively forever with no one noticing. At least with OSS, the number of eyes on the source makes it less likely that an exploit will exist for long (though the definition of "long" could vary wildly dependent on popularity and the skill level of the software's normal users).

"Given that many big governments are now involved in the information wars, this scenario is quite likely."

Again, this one seems to point more to proprietary software than OSS. A government only needs to compromise a single company to make an exploit happen in commercial software. OSS exploits can be caught by the Linux distribution vendors that package the software, the users, the developers themselves (who are often working at different companies and in different nations), etc.

So, it may seem easier to compromise an OSS project, by attacking the distribution server and uploading a compromised binary built from source with patches...but, there are many good ways to guard against that (though any single mitigation, like signing with developer keys, can be compromised, the more eyes the less likely it is to succeed for long). But, if a government compromises a company, or someone within that company, all bets are off, and the problem literally may never be found.


I was thinking more about the users on Macs and Windows who use open source software..

The risk is not in the sources, but in the server which hosts the installers. A hacker could just build the software from sources (adding his backdoor) and replace the original installers with his own.


..and the same is true for closed source software. Replacing installers and patching binaries isn't difficult.


> "Open source software is especially vulnerable to this kind of stuff."

I am sorry, what? Why would open source contain more bugs/hacks than closed source specifically? It is more often in the news for a few reasons, including that many projects are widely used. However, it's against companies' PR interests to have their security issues disclosed the way they are in open source, so they try to minimize the exposure. See [1]

[1] http://www.techrepublic.com/article/open-source-vs-proprieta...


The risk is not in the software itself, but in the server which hosts the installers. A hacker could just build the software from sources (adding his backdoor) and replace the original installers with his own, if the server is not properly secured.


The risk is exactly the same with proprietary software. A hacker can unpack the installer and create a new one with his changes. Or, as they often do, create a wrapper which installs their malware and then calls into the original unmodified installer.


Not because of the fact that it's open source, but because of the distribution models used.

SourceForge has been linked to bundled malware and hijacked projects like GIMP and FileZilla.


I don't follow, what does it matter for the "distribution model" if the software is open- or closed-source? The problem with SourceForge were its malware-riddled installers, how would it be any better if the downloads were proprietary software?


If a hacker gets access to a server holding the binaries for an open source app (which most people download), the hacker can just compile the program from sources and add his own code in there and place the installer online.

Code signing is used to prevent this. So, either the attacker has an Apple developer account (and is hopefully traceable through their credit card information), the Transmission project was sloppy with their signing key, or the machine of the developer with the signing key was compromised.

People have been ranting negatively about the Mac App store. But this is exactly why we need sandboxed applications by default (which is what the Mac App Store enforces). A sandboxed application cannot take your data hostage.

(Yes, I understand that App Store distribution is probably not possible for a Bittorrent Client.)


So, either the attacker has an Apple developer account (and is hopefully traceable through their credit card information), the Transmission project was sloppy with their signing key, or the machine of the developer with the signing key was compromised.

Sorry, I forgot another possibility: some other developer's key was compromised.


Or, as is the case here, the malicious party was simply issued a key by Apple (for apps that are downloaded from places other than the Mac App Store, developers can get a unique Developer ID from Apple (for free) and use it to digitally sign their apps, the purpose being that Apple can revoke it after the fact if it turns out to be malware):

The two KeRanger infected Transmission installers were signed with a legitimate certificate issued by Apple. The developer ID in this certificate is “POLISAN BOYA SANAYI VE TICARET ANONIM SIRKETI (Z7276PX673)”, which was different from the developer ID used to sign previous versions of the Transmission installer. In the code signing information, we found that these installers were generated and signed on the morning of March 4.

From: http://researchcenter.paloaltonetworks.com/2016/03/new-os-x-...


Interesting. It seems that this is actually a legitimate company:

http://www.bloomberg.com/research/stocks/private/snapshot.as...

So, it could be that the malicious party was not issued a key, but stole the key from this company.


OK so let's imagine sandboxing is on by default, then "Transmission" pushes an update that asks for read/write access to the whole home directory. You don't know if Transmission has some legitimate need for that or not, so you just shrug and click Allow. Boom—infected.

So I think sandboxing is basically useless against these kinds of attacks. Either they allow apps to elevate their entitlements in an update, or they don't and developers will always opt out from the start (or pick the widest set of entitlements available).

If sandboxing is forced with no opt-out, then users will have to jailbreak their computers so they can install Parallels...


then "Transmission" pushes an update that asks for read/write access to the whole home directory.

There is no such entitlement:

https://developer.apple.com/library/mac/documentation/Miscel...

If you are talking about the 'open directory' dialog. Well, if a user is careless enough to just give a sandboxed app access to their complete home directory - tough luck.

So I think sandboxing is basically useless against these kinds of attacks.

No, it's not, because no entitlement allows blanket access to the user's home directory. Hence, an application cannot just encrypt all of the user's data.


> Open source software is especially vulnerable ... the hacker can just compile the program from sources and add his own code

You must be too young to remember how computer viruses originally spread by way of modified executable files: http://computervirus.uw.hu/ch04lev1sec2.html


Actually, dabbling in virus writing was the most exciting thing I could do on my 4.77 MHz 8086 running MS-DOS 3.3 back in the day :).

Patching is possible, but why bother when you can compile the app from sources?

Nice book btw.


> but why bother when you can compile the app from sources?

Well, imagine that you are the attacker here. Would you rather keep your malware source in sync with the upstream code and build every target on every update, or just use an off-the-shelf binary wrapper? Would you answer the same if you were targeting more than one app (in the case of a CDN attack)?

The only scenario in which source level malware makes sense to me is this: you are targeting a specific application and you are able to get your code into the project's SCM. In this scenario OSS is no more vulnerable than closed source.


It's trivial to do this with closed source applications as well.


Reproducible builds might be a possible solution to this (as long as we can verify the integrity of the checksum): https://wiki.debian.org/ReproducibleBuilds


> Open source software is especially vulnerable to this kind of stuff.

Give it 2 or 3 years and stories will come trickling out about how most OS apps have had commits from hackers, governments etc. So far most source checking - to the extent that it happens at all - is all about buffer overruns and the like; micro stuff that's easily catchable. Well, you say that, but, you know, heartbleed etc. But what about whole modules designed with two purposes in mind?


> > Open source software is especially vulnerable to this kind of stuff.

> Give it 2 or 3 years and stories will come trickling out about how most OS apps have had commits from hackers, governments etc. So far most source checking - to the extent that it happens at all - is all about buffer overruns and the like; micro stuff that's easily catchable. Well, you say that, but, you know, heartbleed etc. But what about whole modules designed with two purposes in mind?

Free Software has existed for over 20 years. Not to mention the fact that the same problem you describe is far more trivial for proprietary software. There's no straightforward way to find out if it's backdoored (although, luckily quite a few backdoors are done badly so we can find out). The point is that if you assume that all free software is compromised, you have to assume all proprietary software is compromised. I'd prefer to have some free software be compromised because then I'm not at the mercy of the vendor to fix it.


Open Source software has existed for more than 2 or 3 years, you know.

All software is vulnerable to bad actors writing malicious code. What makes it any safer if its proprietary software? In any case, in a pessimistic scenario you'd have to change your sentence to "give it 2 or 3 years and stories will come trickling about how ALL apps, open or closed source, were tampered with by hackers, the government, etc".


It's a risk inherent in using any software you didn't write.


Hm. https://trac.transmissionbt.com/wiki/Changes#version-2.91 lists the following under Mac changes for 2.90

>Allow downloading files from http servers (not https) on OS X 10.11+

Mac version affected in OP was 10.10, though.

Maybe it had something to do with

>Change Sparkle Update URL to use HTTPS instead of HTTP (addresses Sparkle vulnerability) ?

Edit: it appears the infection was downloaded from the website, in which case this doesn't help. But one user did say the in-app update failed on an incorrect signature first.


>Allow downloading files from http servers (not https) on OS X 10.11+

This reads like they disabled Apple's "App Transport Security", which only allows HTTPS connections unless a program explicitly makes an exception. Introduced in iOS 9 and OS 10.11 (El Capitan). I bet the failing HTTP connections caused a bug in Transmission, and it was an easier fix to disable ATS than to transition whatever connection to HTTPS.

https://developer.apple.com/library/prerelease/ios/documenta...


It's probably for "web seeds" or similar, where a torrent file's author specifies alternate URIs where the content can also be found. Transmission has no control over whether the torrent file's author specifies http or https, it has to allow both (and http is actually safe, since the downloaded file goes through the same piecewise checksum as if it were downloaded from a peer).
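The piecewise checksum mentioned here is why plain-http web seeds are safe: every piece, wherever it came from, must match the SHA-1 digest baked into the .torrent metainfo, which stores one 20-byte digest per piece concatenated into a single "pieces" blob. Roughly:

```python
# BitTorrent-style piecewise verification: a piece fetched from an http
# web seed is checked against the metainfo exactly like one from a peer.
import hashlib

def verify_piece(piece_data: bytes, pieces_blob: bytes, index: int) -> bool:
    expected = pieces_blob[index * 20:(index + 1) * 20]
    return hashlib.sha1(piece_data).digest() == expected
```

A piece that fails this check is simply discarded and re-requested, so a malicious web seed can waste bandwidth but can't inject data.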


> and it was an easier fix to disable ATS than to transition whatever connection to HTTPS.

Pretty sure this is for arbitrary downloads. Unless you want to prevent Transmission from downloading from http-based sources on principle, it makes no sense to do anything other than opting out of this behavior.


Right, being a web connected app based on a distributed community of other clients, it's very possible that the encryption isn't possible to implement on their end. IIRC it's only blocking HTTP connections, so the torrent transfers themselves aren't affected (unless it's masking that as HTTP traffic to avoid easy inspection?), but there may be other things that require HTTP. Connections to trackers maybe?

On the other hand, El Capitan came out last September. If this just changed in 2.9.0, the restricted HTTP connections can't have been that big of a problem.


If the file /System/Library/CoreServices/XProtect.bundle/Contents/Resources/XProtect.plist contains:

        <dict>
                <key>Description</key>
                <string>OSX.KeRanger.A</string>
                <key>LaunchServices</key>
                <dict>
                        <key>LSItemContentType</key>
                        <string>com.apple.application-bundle</string>
                </dict>
                <key>Matches</key>
                <array>
                        <dict>
                                <key>MatchFile</key>
                                <dict>
                                        <key>NSURLTypeIdentifierKey</key>
                                        <string>public.unix-executable</string>
                                </dict>
                                <key>MatchType</key>
                                <string>Match</string>
                                <key>Pattern</key>
                                <string>488DBDD0EFFFFFBE00000000BA0004000031C04989D8*31F64C89E7*83F8FF7457C785C4EBFFFF00000000</string>
                        </dict>
                </array>
        </dict>
Does that mean I am infected?


No. That means you have up to date protection against being infected.


What does the <string> match pattern mean exactly, how is it used to identify the executable?


The match type is saying that, for a file to match, its contents must match the byte Pattern provided below it.
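Apple doesn't document XProtect's matching semantics, but a plausible reading of the Pattern above is hex byte sequences with `*` acting as a wildcard between them, matched anywhere in the executable. An illustrative sketch of that interpretation (an assumption, not Apple's actual implementation):

```python
# Interpret an XProtect-style hex signature: fixed byte sequences joined
# by '*' wildcards, compiled into a regex over the raw file bytes.
import binascii
import re

def pattern_to_regex(pattern: str) -> re.Pattern:
    parts = [re.escape(binascii.unhexlify(chunk)) for chunk in pattern.split("*")]
    # Non-greedy gap between the fixed chunks; DOTALL so any byte can fill it.
    return re.compile(b".*?".join(parts), re.DOTALL)

def file_matches(data: bytes, pattern: str) -> bool:
    return pattern_to_regex(pattern).search(data) is not None
```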


Looking more at this issue, it seems like the problem may have been (hard to tell, not a lot of information) a compromise of a third-party mirror to which https://www.transmissionbt.com/ redirected users; the checksum on the HTTPS site was unaltered, and was used to identify the altered download.

Perhaps a defense against this kind of attack would be an altered version of HSTS - one that protected the content of download links, and not just of sub-resources included on the page.


2.90 was released a couple of days ago[1], so if you haven't used Transmission in a couple of weeks this doesn't affect you.

[1]: https://en.wikipedia.org/wiki/Transmission_%28BitTorrent_cli...


It seems that this is a ransomware campaign: http://www.reuters.com/article/us-apple-ransomware-idUSKCN0W... Next Monday (tomorrow) it could spread terror in offices.


It might be worth updating the title to specify the vulnerable version (2.90) and the platform (OS X - from what I can tell, this is not a vulnerability on Linux or Windows).


Isn't it quite popular on Debian and derivatives too? It's pre-installed with GNOME there as far as I know. Fair enough, it's extremely interesting. I never saw such an infection in the "free world", outside the laboratory. I hope they can find the source.


It's quite popular everywhere. Interesting that 2.90 just showed up in Fedora updates.


At least Linux distributions usually compile from source. I wonder if the source was also modified, or only the binaries.

EDIT: I downloaded the Transmission 2.90 and 2.91 source code and took a look. The diff between them is quite small, with nothing suspicious being removed, and the 2.90 .tar.xz MD5 matches what Fedora used (according to http://pkgs.fedoraproject.org/cgit/rpms/transmission.git/com...). So, unless there was also a malicious source code change the developer didn't catch, Fedora's package should be clean.


> I wonder if the source was also modified, or only the binaries.

Personally, pending further information, I've removed Transmission from my machine.


It seems the most recent version is 2.84 on Ubuntu 15.10. The last update to the package was in July 2015.


There's a fairly popular PPA which is currently on 2.90[1]. However, it seems like only OS X binaries included malware (... hopefully).

[1]: https://launchpad.net/~transmissionbt/+archive/ubuntu/ppa


Transmission put up a new version, 2.92, that supposedly checks for and removes the malware.


Threw away Transmission as soon as I read this (even though I was running an old version); my trust is pretty much gone now, never installing it again. Shame, because it really was a nice app.


I don't understand this attitude, the Transmission team responded immediately to the problem. There's no indication that this was the result of some problem specific to the application or its developers.

Transmission was and remains my favorite torrent client on OS X, although I still like the torrent information presentation uTorrent had, and I still will run my old pre-adware 1.6.4 if I want more detail on a swarm.

Given that older versions still work, it seems exceptionally silly to delete an old version because of a site compromise.


Isn't it open source?

So clone the last release version you trusted, build that from source and be happy?


I don't even trust websites and emails, so I'm not sure why you would trust a BitTorrent client. I still use these tools, but with some caution. Your level of caution is up to you. Other posters suggested things such as verifying checksums and using virtual machines.


flying the day after 9/11 was the safest time, I really doubt this sort of thing will happen again to the same software


This would be true if they knew how it was compromised, they've been silent on that issue so far.

The current version could be being compromised this minute for all we know.


Not an official comment, but from other parts of the Hacker News thread it sounds like one of the mirrors the main site redirects to was hacked, not the main site itself. The SHA sums on the main site were apparently unaltered. So it sounds like the developers' only fault was trusting that mirror.


Same here, into the garbage it goes. What did you switch to by the way, Deluge? I switched TO Transmission because it was open source and supposed to be pure.


Do you trust the alternatives?


This is a good illustration of why you should not install apps as administrator. Specifically, you should not install Mac OS packages, which allow for arbitrary pre- and post- install scripts to be executed as root.

Same is true for Windows and Linux.

There are privilege escalation bugs in any OS, but it is usually not a given. Throw the application into ~/Applications as a Mac bundle, worst that will happen is your account will be compromised. Much easier to detect and clean. Most trojans won't even succeed.

We are going to have these problems until the developer community realizes that executing a randomly downloaded package installer as a privileged user is giving away the keys to the kingdom.

Application stores are one solution, but not really an open one. I'd rather see apps distributed in a form similar to Apple app bundles, where a non-privileged user can just install the app into their home directory.


I think it's a poor illustration. You could install and run this app as a regular user (and never escalate to administrator) and the app's bundled malware would still absolutely destroy anything of value on your computer.

It's the stuff inside $HOME (and $HOME/Documents) that's valuable. Not system binaries in {/bin,/sbin,/Applications} that can be re-downloaded in a second.

The problem is that any non-sandboxed app runs with the same uid and full read/write permissions to all of $HOME as well as all the other running processes, even if it only needs read/write access to $HOME/Documents/Appname/ and none of the other pids.


First, obviously you can make an account for running the untrusted software, like Bittorrent clients (which are known to carry malware frequently).

Second, most malware requires and counts on having admin privileges on the target machine. The task of auditing, cleaning, and finding out that malware is present is significantly easier if the malware is limited to a non-privileged account. With malware running as a non-privileged user you still have to clean up and recover, but you can easily switch accounts, compare, audit and trace. Anti-malware tools also still have a chance when the OS is not compromised; otherwise it's all lost the moment you run a malicious post-install script.
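The separate-account idea can be sketched as a small wrapper. Note that "torrentuser" is a hypothetical non-admin account you would create yourself, and the guard that refuses to run under the current user is just a safety check of my own, not part of any OS tooling:

```shell
# Hedged sketch: launch an untrusted app under a separate, non-admin account.
run_as_untrusted() {
  user="$1"; shift
  # Safety guard: refuse to run under the current (presumably admin) account.
  if [ "$user" = "$(id -un)" ]; then
    echo "refusing: '$user' is the current user" >&2
    return 1
  fi
  sudo -u "$user" "$@"
}

# Usage on OS X (assuming you created the account first):
#   run_as_untrusted torrentuser open -a Transmission
```

Files the untrusted account writes (e.g. downloads) can then be shared back through a directory with appropriate group permissions.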

The more common problem, however, is the regular app install. The goal of the application packager is to make their application work first and preserve your environment second, so in many cases even non-malware does bad things to your OS. The install scripts are usually written by devs who are fairly clueless, which leads to some pretty awful stuff in them. Almost 100% of the time the install/uninstall action is not idempotent, although it should be.

What really needs to happen is a shift away from the mentality that accepts the idea that apps need to be installed as an administrator (unless the apps are part of the main OS distro).


> most malware requires and counts on having admin privileges on target machine.

If you really believe this, run rm -rf ~ on your computer right now. Also rm -rf /Volumes/* (on OS X) or wherever your network/external drives are mounted on your OS. Since you don't have admin privileges, nothing bad happened right? Because that is the primary goal of ransomware.

Anyway, this specific malware doesn't even attempt to acquire root; it operates entirely as your local user. And there's no installer package, so why are you complaining about them?


His comment went right past you. What you care about the most on your computer is your personal data, and all of it sits under $HOME. Any script running as $USER can steal sensitive data, wipe out personal and work files, maybe even cloud storage services. None of that requires admin rights.

The only solution is sandboxing everything.


For things that are likely to carry malware, use a separate account. Probably a good idea for a BitTorrent client in any case.

In practice, however, it is much easier to deal with malware if it has no admin rights. It matters even for a clueless user, since the OS's detection mechanisms can't be altered, and even more so for a power user.

This specific malware installs a kernel module, as far as I can tell. I am guessing it would be harder to encrypt data without being noticed and removed quickly.

Of course, there are even more obvious reasons, like sharing a computer with... kids that tend to bring malware at every turn.

We really need to educate the devs and change the culture. There's no reason for something like a word processor and file sharing app to require full access to the system. That's why we have access controls in the first place.


> Throw the application into ~/Applications as a Mac bundle, worst that will happen is your account will be compromised.

On a typical single-user setup, there's not much difference between an account compromise and a machine compromise anyway.


That is only true if you have no interest in recovery post-compromise. A user-level account shouldn't be able to put the system in such a state that online recovery is impossible, whereas a system-level account easily can - think loadable kernel modules. Only offline recovery works once you lose trust in the kernel. That is the difference between "Alright grandma, lemme remote in" and "Sorry old lady, better start looking for the factory install CDs". Let's not even get into how screwed we are with UEFI...


If you have to wipe the user account anyway, then wiping the system at the same time hardly adds any more effort -- in fact it's probably easier. Your system files are the easiest part of your system to recover, because the originals are readily accessible from the vendor.


I guess it depends. In the grandma scenario it adds a lot more effort. A corporate laptop in a standard AD environment, no problem. In a situation where you've customized the system (custom packages, sshd.conf tuning, flags in rc/csh/sysctl/resolv/loader/randomsbinutilityinstalled2yearsago.conf) it would be a lot more work than just reinstalling the OS. Use backups you say? What if I told you that you could use the very same backups to rollback changes to the user's home directory, in 5 minutes, and not have to reimage the entire machine? I'm just saying: even on a single user setup - there is a world of difference in what options you have open to you, depending upon whether you let the malware hit ring 0 or not.


Not sure why anyone with a compromised machine would rather take the risk of a lingering backdoor just to save the 1-2 hours of formatting and reinstalling.


Because unless an unknown method of privilege elevation was used, it doesn't make sense. Do you throw a pinch of table salt over your shoulder as well? It also has a very strong Microsoft smell to it, where instead of doing root cause analysis on why Windows is misbehaving - you just reboot and cross your fingers.


> Because unless an unknown method of privilege elevation was used

You seem to think that is unlikely. Why? New privesc bugs are found on a monthly basis in Linux and Windows. Does Grandma stay on top of kernel patches?

Nobody who does sandbox security (hint: I do sandbox security) thinks UID separation is sufficient to cordon malware anymore.


> You seem to think that is unlikely. Why?

Because of the single user pc context. I've never seen a dropper that didn't have ring 0 later pull down a payload that escalated privilege. I'm not saying that it isn't possible, but at best it is very uncommon. I understand the better safe than sorry position, but with the context in mind, what safety are you getting by just assuming UID separation failed and going through the rigmarole of reinstallation? The user data has already been exposed.

I just don't agree with the simplified decision tree of "Infected --> reinstall", which disregards your work in sandboxing. Why should I even bother with the additional complexity of capability mode in my software, if we're all just assuming our defense has no depth.


> I've never seen a dropper that didn't have ring 0 later pull down a payload that escalated privilege.

Probably because it isn't very useful to the attacker. Pwning the single user's account is sufficient. But I wouldn't bet on that being the case if it happened to me.

> what safety are you getting by just assuming UID separation failed and going through the rigmarole of reinstallation? The user data has already been exposed.

Most user data is not executable, so can probably safely be copied over. But if you don't wipe all executable software on the system it's hard to tell if some of it is still infected.

> Why should I even bother with the additional complexity of capability mode in my software, if we're all just assuming our defense has no depth.

Well, I'm talking about today's legacy desktop (and, to some extent, server) OSs, which have not prioritized user isolation because it hasn't really mattered ever since people stopped using timeshare systems.

Modern OSs that sandbox applications (e.g. ChromeOS, Android, iOS) are another story. I would expect that one Android app being malicious does not mean you have to wipe your phone, just uninstall the app. I would expect that ChromeOS can even recover from a full sandbox breakout, given secure boot.

But I don't trust Linux, Windows, or Mac OS desktops to be suitably hardened nor able to recover. And as wiping the whole system does not add very much cost over the cost of wiping the user account, it seems to me worth it to go all the way.


> Probably because it isn't very useful to the attacker.

Ring 0 is really important for building a botnet, which provides a very real incentive for the folks that actually write the droppers. Ideally (for the botnet owner) they establish persistence, then sell access by directing the bots to download additional malware under the control of botnet customers. Long story short: you don't get paid as much if you don't have ring 0.

> Most user data is not executable...

I was speaking from the perspective of the real purpose behind all this, protecting user data - and that the horse is already out of the barn. As far as cleanup, you are presupposing a loss of ring 0. If ring 0 is secure then killing all the user processes and performing a snapshot rollback of user space will definitely clear the malware.

> Well, I'm talking about today's legacy...

Ah, well then I agree. If your platform does not have user isolation, then you shouldn't rely on user isolation for security.

> And as wiping the whole system does not add very much cost...

Well we've got a catch-22, because implementing security practices that do harden the system adds a lot more cost to a hamfisted wipe. For example: on my laptop I've got five jails, a maze of netgraph nodes that results in a complex ruleset, host IPS, Kerberos authentication and authorization, encrypted filesystems, close integration with TPM, and various certificate-based credentials. Just assuming that none of that works and doing a system wipe is a lot more work than simply popping in the latest Ubuntu DVD ISO... consider the labor of rekeying alone.

So the advice to do a system wipe isn't bad, but it should be prefixed with: "If you've made no effort to secure your system and are completely relying upon the distro provider for security".


Sure, if you've set all that up and know what you're doing, then you're in a position to make your own judgment call and maybe you don't need to wipe the system. Earlier we were talking about "grandma" which I assumed was a metaphor for "person who doesn't know computers".


> Earlier we were talking about "grandma"...

Don't forget the grandson part of the metaphor; he is the one who will be making that judgement call. Do you remember how everybody would blow into the Nintendo cartridges, even after Nintendo explained why it was a bad idea? I have a feeling that helpdesk folks will continue to advise a system wipe, even if you're running the latest Windows 25 with its formally proven microkernel... just to be safe.


Me neither, but there's a whole industry of software for Windows users that promises to remove malware.


So how do I get around this if most apps I want require this?


1. Petition vendors to stop distributing .pkg's

2. Most packages can be extracted with pkgutil and then just copied into ~/Applications.

It is infrequent that somebody needs to modify your OS, and if they do, then they'd better explain why.
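A sketch of step 2, assuming stock OS X's `pkgutil` and a hypothetical `Example.pkg`; the point is to read the install scripts instead of letting them run as root:

```shell
# Hedged sketch: expand a .pkg so its pre/post-install scripts can be
# inspected rather than blindly executed as root. "Example.pkg" is a
# placeholder name, not a real package.
expand_pkg() {
  pkg="$1"; dest="$2"
  # Guard: only operate on files that at least look like packages.
  case "$pkg" in
    *.pkg) ;;
    *) echo "not a package: $pkg" >&2; return 1 ;;
  esac
  pkgutil --expand "$pkg" "$dest"   # pkgutil ships with OS X
}

# Usage:
#   expand_pkg Example.pkg /tmp/example
#   less /tmp/example/*/Scripts/postinstall    # read before you trust
#   # The app itself usually sits in a Payload archive (gzipped cpio),
#   # which OS X's bsdtar can extract into your home:
#   tar -xf /tmp/example/*/Payload -C ~/Applications
```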


While we're here, can anyone recommend a good antivirus for OSX?

I've just been looking at BitDefender, which looks promising, but would rather get this right than faff around with potentially crappy AV tools.


It seems an up-to-date OS is the way to go (in this case the malware was detected by OS X).


> can anyone recommend a good antivirus for OSX?

Common Sense 2016, see https://github.com/drduh/OS-X-Security-and-Privacy-Guide


Ironically this recommends Transmission as a BT client.


Thanks, that's some good readin'


Common Sense 2016 would not have prevented a malicious Transmission update though


Agreed - it would not defend against this presumed watering-hole attack. However, neither would have AV: https://www.virustotal.com/en/file/d1ac55a4e610380f0ab239fcc...

Nevertheless, I still believe Common Sense to be a better alternative to bloated, vulnerable anti-virus programs.


I went to a Mac developer's group a few years ago in Toronto. One of the devs was working on Mac antivirus software but basically had the attitude that it was unnecessary, and spent most of the meetup trashing Windows. Just really bizarre and inept behaviour. Not sure I trust his anti-virus software. Too bad I can't remember which one he was working on.


The strength of a chain is the strength of its weakest link, and the more "apps" are provided as part of the system, the longer and more vulnerable the chain becomes.

When it comes to checksums we have the chicken-and-egg problem, plus MD5's collision attacks.

MD5 has been the standard for too long (it has been deprecated as a cryptographic checksum for ten years). And how can the next generation of software installs that don't use a modern checksum trust the download of the very package required to check whatever the new format is? Plus the new format is less likely to be checked without errors: an off-by-one character could easily be missed, given the number of packages that now have to be installed and the limits of human attention.
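As a concrete sketch of checking a download against a hash published on a separate, trusted channel (the filename in the usage comment is a placeholder, not a real published Transmission value):

```shell
# Hedged sketch: compare a file's SHA-256 against a published value.
# `openssl dgst` is used here because it ships with both OS X and Linux;
# `shasum -a 256 <file>` also works on a stock Mac.
verify_sha256() {
  file="$1"; expected="$2"
  # openssl prints "SHA256(<file>)= <hash>"; the hash is the last field.
  actual=$(openssl dgst -sha256 "$file" | awk '{print $NF}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file matches published hash"
  else
    echo "MISMATCH: $file -- do not install" >&2
    return 1
  fi
}

# Usage (placeholder values):
#   verify_sha256 Transmission-2.92.dmg <hash copied from an independent source>
```

The comparison only helps if the expected hash really comes from somewhere the attacker could not also edit, which is exactly the weakness discussed above.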

Humans are the limiting factor, and security models the user as a kind of grotesque caricature of a robot that can check thousands of pieces of information perfectly and remember 20-character passwords for dozens of appliances.

There is a tyranny of computer engineers regarding what is safe for people having a life not concerned with geeky technology, and it is a tad annoying.

People have the right to be human, and to fail is human. The burden put on humans to make the system safe, in order to spare the bosses costly human interaction, is way too high.

And since computer security always blames failure on human behaviour, I am beginning to positively dislike it.


> There is a tyranny of computer engineers regarding what is safe for people having a life not concerned with geeky technology, and it is a tad annoying.

You know you can make that complaint about any tool or technology, right?

"Gosh why do I have to follow all these rules and observe traffic lights to drive a car?" (something that actually intimidates me, in fact, because I've never driven a car.)

"Why do I have to worry about cutting or burning myself or someone else while trying to cook a meal?"

"Why are all these procedures and protocols, like schools and banks and taxes, required to function at all in contemporary society?"

Until computers advance to the point of being artificially intelligent familiars that can figure out exactly what we want from a simple vocal command and do something even better, we're gonna have to put in a little effort from our end to make them work the way we want them to.


You know all engineers do not always blame users?

There are fields of engineering where an accident even due to human causes is systematically seen as an engineering problem.

And that may be the reason why traveling by plane and train are safer than by car.

But US engineers did a great job of convincing legal departments that poorly engineered goods were not the cause of accidents.


> You know all engineers do not always blame users?

> There are fields of engineering where an accident even due to human causes is systematically seen as an engineering problem.

In those other fields, such as automobiles, an accident caused by a person may cause the death of another.

On a computer your carelessness may not cause someone else to outright die (which tends to produce a lax attitude on the part of users), but it can still cause someone else harm, like inadvertently leaking someone's financial information or letting malware on your device participate in a DDoS attack on someone. Time and again users have proven to be the weakest link in this field, no matter how tight the security is. It's only understandable for the engineers to be annoyed.
