
A Case of Stolen Source Code - uptown
https://panic.com/blog/stolen-source-code/
======
Sidnicious
> …breeze right through an in-retrospect-sketchy authentication dialog…

I can't blame them for this. A surprising number of apps ask for root (inc.
Adobe installers and Chrome). As far as I know, it's to make updates more
reliable when an admin installs a program for a day-to-day user who can't
write to /Applications and /Library.

We're long overdue for better sandboxing on desktop (outside of app stores).

~~~
kefka
It doesn't matter that much, honestly.

I only use root for administration tasks: filesystem stuff, hardware, server
config. All the goodies are in my homedir. Exfiltrating them is as easy as
running a bad binary under my username.

In the end, there are no protections on what my username can do to files owned
by my user. And that's why a nasty tool that:

    
    
         1. generates a priv/pub keypair using gpg
         2. emails the priv key elsewhere and deletes it locally
         3. encrypts everything it can grab in ~
         4. pops up a nasty message demanding money
    

works so easily, and so well.

The only thing I know that can thwart attacks like this is Qubes, or a well
set-up SELinux. But SELinux then impedes usage (down the rabbit hole we go).

Edit: Honestly, I'm waiting for a command-and-control setup that lives
exclusively on Tor, emails keys only through a Tor gateway, and also turns the
victim into a slave node to control and use. I could certainly see an "If you
agree to keep this application on here, we will give you your files back over
the course of X duration" offer.

There are plenty more nefarious ways this all can be used to cause more damage
and "reward" the user with their files back, by being a slave node for more
infection. IIRC, there was one of these malware tools that granted access to
files if you screwed over your friends and they paid.

~~~
mikeash
The thing is that, at least on the Mac, there easily can be protections on
what your username can do to files owned by your user. There's an extensive
sandboxing facility which limits apps to touching files within their own
container, or files explicitly chosen by the user. All apps distributed
through the App Store have to use it, and apps distributed outside the App
Store can use it as well, but don't have to.

As I see it, the problem on the Mac boils down to:

1. Sandboxing your app is often a less-than-fun experience for the developer,
so few bother with it unless they're forced to (because they want to sell in
the App Store).

2. Apple doesn't put much effort into non-App-Store distribution, so there's
no automatic checking or verification that sandboxing is enabled for a
freshly-downloaded app. You have to put in some non-trivial effort to see if
an app is sandboxed, and essentially nobody does.

I think these two feed on each other, too. Developers don't sandbox, so
there's little point in checking. Users don't check, so there's little point
in sandboxing. If Apple made the tooling better and we could convince users to
check and developers to sandbox whenever practical, it would go a long way
toward improving this.

~~~
johncolanduoni
What improvements to the developer experience for the Mac sandbox do you think
are needed? If you get access to files through an open dialog, you're almost
automatically set (and with a few lines of code you can even maintain access
to those files). If you do something more complicated, you can write specific
sandbox exceptions (as long as you don't want to distribute on the App Store).
Privilege separation is also very easy to implement via XPC (complete with
automatic proxy objects).
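
For the "maintain access" part, here's a rough (untested) Swift sketch using a
security-scoped bookmark. The defaults key and storage choice are mine, the URL
has to come from an open/save panel grant, and the app also needs the
user-selected-file and app-scoped-bookmark entitlements:

    
    
        // Rough sketch: persist access to a user-chosen file in a sandboxed
        // app via a security-scoped bookmark.
        import Foundation
        
        func saveBookmark(for url: URL) throws {
            // `url` should come from an NSOpenPanel / NSSavePanel grant.
            let data = try url.bookmarkData(options: .withSecurityScope,
                                            includingResourceValuesForKeys: nil,
                                            relativeTo: nil)
            UserDefaults.standard.set(data, forKey: "savedFileBookmark")
        }
        
        func readBookmarkedFile() throws -> Data? {
            guard let data = UserDefaults.standard.data(forKey: "savedFileBookmark")
                else { return nil }
            var isStale = false
            let url = try URL(resolvingBookmarkData: data,
                              options: .withSecurityScope,
                              relativeTo: nil,
                              bookmarkDataIsStale: &isStale)
            // Access outside the container must be bracketed with start/stop.
            guard url.startAccessingSecurityScopedResource() else { return nil }
            defer { url.stopAccessingSecurityScopedResource() }
            return try Data(contentsOf: url)
        }
    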

I think most apps don't sandbox not because it's especially hard, but just
because it never occurs to the developers.

~~~
kitsunesoba
As noted in another comment, the macOS app sandbox is buggy and unnecessarily
rigid in its permissions/capabilities. For many classes of apps, sandbox use
is highly impractical or even impossible.

If these issues were fixed I believe that sandboxing would quickly become the
norm. Many of us want to use the sandbox but don't want to waste too much
effort fighting it.

~~~
johncolanduoni
> For many classes of apps, sandbox use is highly impractical or even
> impossible.

Worst case, you can see exactly what is being blocked in Console and then add
word-for-word exceptions via the com.apple.security.temporary-exception.sbpl
entitlement. You can also switch to an allow by default model by using
sandbox_init manually.

Even if the sandbox doesn't work for your entire app, you can use XPC to
isolate more privileged components in either direction (i.e. your service can
be more or less privileged than your main app). What specific abilities are
not provided that you think would help?
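
To make the XPC route concrete, a rough (untested) Swift sketch of the
app-side half; the service name and protocol are invented, and the XPC service
target itself (NSXPCListener plus its delegate) is omitted:

    
    
        import Foundation
        
        // Hypothetical protocol shared between the app and a differently
        // privileged XPC service target.
        @objc protocol ThumbnailerProtocol {
            func makeThumbnail(forFileAt path: String,
                               withReply reply: @escaping (Data?) -> Void)
        }
        
        // App side: talk to the bundled XPC service. Foundation hands back an
        // automatic proxy; the service runs in its own process with its own
        // (tighter or looser) sandbox.
        let connection = NSXPCConnection(serviceName: "com.example.MyApp.Thumbnailer")
        connection.remoteObjectInterface = NSXPCInterface(with: ThumbnailerProtocol.self)
        connection.resume()
        
        if let proxy = connection.remoteObjectProxyWithErrorHandler({ error in
            print("XPC error: \(error)")
        }) as? ThumbnailerProtocol {
            proxy.makeThumbnail(forFileAt: "/tmp/example.mov") { data in
                print("got \(data?.count ?? 0) bytes back")
            }
        }
    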

~~~
jakobegger
I don't think that this is correct. There are a lot of things that sandboxed
apps can't do, even with exceptions. One such example is opening unix sockets
-- a sandboxed app can only open sockets inside its sandbox. This alone rules
out a large class of apps. Shared memory is another problem. (These two
currently prevent me from shipping Postgres.app on the Mac App Store.)

Using sandbox_init manually sounds like it should be possible in theory, but
it is way too complicated in practice. There is barely any documentation on
it, and you'd need to be familiar with macOS at a very low level to
effectively use it -- which is highly unlikely for application software
developers.

~~~
johncolanduoni
You can allow access to a unix socket via things like:

    
    
        (allow network-outbound (remote unix-socket (path-literal "/private/var/run/syslog")))
    

Similarly you can allow use of shared memory:

    
    
        (allow ipc-posix-shm)
    

Most of the rule types are documented here[1]. Even for the ones that aren't,
the error message in the logs uses the same syntax (e.g. if a unix socket is
blocked you'll get a complaint about "network-outbound"). You mostly just need
to be able to copy and paste.

[1]: https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sandbox-Guide-v1.0.pdf

------
masto
I'm a bit surprised at the "personalized attention" from the attacker: that a
human on the other end takes time to poke around individual machines,
recognize the developer, and tailor a source code theft + ransom campaign to
them. I had assumed that these are bulk compromises of at least thousands of
machines and they just blast out scripts to turn them into spam proxies or
whatever.

Maybe given the limited scale of this one and the obvious interest the
attacker has in producing trojaned versions of popular software, this is
actually what they were hoping for in the first place.

~~~
wlesieutre
It might be as simple as an automated "look for ssh keys" in the malware. If
you find an SSH key, pretty good odds it's a developer. Scan for git repos, or
check their email address to see where they work and go from there.

~~~
kccqzy
This makes me wonder: how hard would it be to write a kernel extension such
that whenever any process tries to open(2) my ssh private key, or any hardlink
or symlink pointing to it, the extension checks against a known whitelist, and
if the process is not in the whitelist, a dialog pops up and asks for my
permission? Is this easy to implement?

Frankly I can only think of a small number of processes that need to
automatically access the file: backupd, sshd, and Carbon Copy Cloner.
Everything else should require my attention.

~~~
lloeki
Alternatively, sidestep open(2) by implementing an SSH agent so that you can do
creative things like [https://krypt.co](https://krypt.co) does, so the key is
not lying right on your main filesystem in the first place (and possibly never
even on the workstation).

------
joshaidan
I find this story pretty fascinating. First, it's interesting how a broad
attack, such as putting malware into software used by a large number of
people, suddenly becomes a targeted attack: the attackers grab SSH keys and
start cloning git repositories. I'm assuming that there was a significant
number of victims in this attack. Were they targeting developers? Or did they
just happen to comb through all this data and find what looked to be source
code / git repositories?

The other thing I find interesting is this comment:

> We’re working on the assumption that there’s no point in paying — the
> attacker has no reason to keep their end of the bargain.

If you really want to be successful in exploiting people through cyber
attacks, I guess you will need some kind of system to provide guaranteed
contracts, i.e. proof that if a victim pays the ransom, then the other end of
the bargain will be held.

It might seem that there's some incentive for ransom holders to hold up their
end of the bargain for the majority of cases if they want their attacks to be
profitable.

~~~
kbenson
> If you really want to be successful in exploiting people through cyber
> attacks, I guess you will need some kind of system to provide guaranteed
> contracts, i.e. proof that if a victim pays the ransom, then the other end
> of the bargain will be held.

You're describing a legal system and the rule of law. I'm not sure there's a
way to guarantee anything like you describe when there is some illegality in
the nature of the process.

Trade only works when you can trust either the parties involved or the system
as a whole to uphold their promises (for the system, that means parties that
don't uphold their end will be punished).

~~~
iEchoic
> You're describing a legal system and the rule of law. I'm not sure there's a
> way to guarantee anything like you describe when there is some illegality in
> the nature of the process.

Legal systems aren't the only way to give confidence that both ends of a
bargain will be held. As one example, some darknet markets have escrow systems
for this purpose. It's not too hard to imagine a way to do this with ransomed
code. Reputation-based systems also provide incentives for sellers to deliver
on their promises.

~~~
kbenson
> As one example, some darknet markets have escrow systems for this purpose.

Those only function because the darknet functions as the system, and the
punishment for not following through is that the party loses access to or
prestige in that market. What entity exists that is trusted and has leverage
with both the people that are ransoming (criminals) and average citizens
(ostensibly law abiding)? Should _I_ trust a darknet broker to not screw me?
No. They have no incentive not to, as long as their _actual_ client, the
ransomer, doesn't care. For the same reason, the ransomer should not trust any
legal entity, because it could simply withhold the money and give it back to
the victim (since _they_ are the client).

There may exist a way for this to work, but I certainly can't think of one,
and what you described doesn't work either. Trust is the integral factor as I
see it, and while you can have trust _within_ a criminal community, and within
a law-abiding community, I'm not sure how you get that trust to cross that
boundary.

~~~
djmobley
A simple solution is the one you describe: a reputation system for ransomers,
with reputation earned over time for upholding promises.

~~~
kbenson
And how do you ensure you are dealing with the same person from one
transaction to the next? Any authority that can confirm an anonymous criminal
is who they say they are would need to operate illegally itself to keep law
enforcement from learning the identities, and even if it didn't, it would
still be participating in a crime.

Again, how do you trust a criminal person or organization? By their nature,
they don't follow the same rules.

~~~
Bartweiss
Wouldn’t a cryptographic sig suffice for this?

You don’t need an authority vouching for you to become a ‘trusted’ criminal.
You just need proof of identity, and a reputation established over time. Drug
dealers do this all the time, even though they’re criminals. Hell, it’s even
how legitimate businesses work - the FBI isn’t going to shut down Bic for
selling shoddy pens, so they build a reputation on “we’re Bic and we did right
by you last time”.

An example: a malware group sends every target an RSA-signed demand (with
public key disclosed on Pastebin or something). The few people who pay up find
that they follow through, so they grow a reputation as sincere. They could
even kick things off with a round of freebies - “Here’s your data, here’s our
sig, we deleted/unlocked/whatever it for free this time to prove ourselves.” I
suppose they’d have to publish demands and outcomes since most targets won’t
disclose on their own.

There’s likely a flaw in my specifics (probably around disclosing attacks and
proving followthrough), but I only put five minutes into it. As long as you
can prove identity, you ought to be able to build ‘trust’.
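
To make the signing part concrete, a toy Swift sketch of the "same signer as
last time" check. It uses CryptoKit's Curve25519 keys rather than RSA, purely
for brevity, and all the data is invented:

    
    
        import CryptoKit
        import Foundation
        
        do {
            // Generate a keypair once; publish the public key somewhere
            // persistent (the "Pastebin" of the hypothetical).
            let signingKey = Curve25519.Signing.PrivateKey()
            let publishedKey = signingKey.publicKey.rawRepresentation
        
            // Sign every message with the same private key.
            let message = Data("demand #42, terms attached".utf8)
            let signature = try signingKey.signature(for: message)
        
            // Anyone holding the published key can check that a new message
            // comes from the same entity that honored (or broke) earlier ones.
            let verifier = try Curve25519.Signing.PublicKey(rawRepresentation: publishedKey)
            print(verifier.isValidSignature(signature, for: message)) // true
        } catch {
            print("crypto error: \(error)")
        }
    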

~~~
kbenson
> Drug dealers do this all the time, even though they’re criminals.

Drug dealers and those buying from them are _both_ committing illegal acts.
That changes the dynamic. Neither party can rely on the legal system to punish
misconduct. That allows an entirely criminal system to work. For example, if
you don't pay the drug dealer, they'll just hurt you. If the drug dealer
doesn't give you the drugs, or gives you crappy/cut drugs, you just won't use
them next time. It's important to note that this transactional relationship
does not begin with one party accosting the other, as in the ransomware case.

The ransomware scenario is the equivalent of being mugged in an alleyway, but
only of your smartphone, and the mugger offering to give your phone back if
you go to an ATM and come back with $100. The whole interaction began with a
crime perpetrated by one party on the other.

> As long as you can prove identity, you ought to be able to build ‘trust’.

One problem is that the identity, because it is anonymous, is worth
fundamentally less for this purpose than any real identity. The ransomer could
decide law enforcement is getting too close and stop responding to all
payments, or abandon the system and let someone else take it over. For an
identity used just for this scam, the loss of reputation is irrelevant, and if
they are using the same identity for multiple scams they are inviting more law
enforcement response. There are no future consequences worth mentioning for
screwing people over, since the identity can be changed at any time.

The only thing that really protects you in any of these situations is the
incentives of the criminals, but those incentives, be they economic or liberty
based, are subject to very different constraints than a legally operating
entity's. The bottom line is that the person or people involved started the
whole relationship by showing they are willing to screw you over. Establishing
trust is not _impossible_ (some people will trust), but it's very hard to do,
a large percentage of people will never actually trust you, and they likely
shouldn't, because you don't have the same incentives or punishments they do.

------
randomf1fan
How does one realistically protect against these new attack vectors? It's all
become so quick - the malware infects your machine, and seconds later your
repos are cloned.

Most computers are always connected to the internet when they're on, even if
they don't necessarily need to be. Airgapping isn't really used outside of
very sensitive networks, but I'm starting to think we need to head towards a
model of connecting machines only when really needed.

Of course the cloud based world doesn't allow for that, and perhaps I'm a
luddite, but I increasingly find myself disabling the network connection when
I'm working on my PC. Kind of like the dial-up days.

~~~
shubb
Have a fun laptop, a work laptop, and maybe a banking tablet?

As a good corporate drone, this arrangement is kind of forced on me, but a lot
of small company / startup folks totally mix the two. Might be a good thing
not to do.

Sure, it doesn't protect you from e.g. a tool you need for work being
compromised, but it reduces the attack surface - this guy probably wouldn't
have installed HandBrake on his work machine.

Another thing we do, specifically because of medical data: a lot of the time
I'm forced to work inside a non-internet-connected network that I VPN and then
remote desktop into. Firewall rules mean the only thing getting in from my
laptop is VNC. Some systems also require plugging into a specific physical
network. Overkill for most uses, but it makes losing laptops far less scary if
you can keep a lot of your stuff on a more secure remote system.

~~~
waz0wski
> Have a fun laptop, a work laptop, and a banking tablet?

Try out Qubes: [http://qubes-os.org](http://qubes-os.org)

~~~
shubb
This is a really good thing, and thank you for showing it to me.

Something like this could be good if you wanted to rapidly switch between
different compartments on a single device. It would be great for e.g. keeping
a 'sensitive data' compartment separate from an 'emails and paperwork'
compartment on a work laptop.

Doing something like this is certainly better than using a single device with
no separation, or just user accounts.

Psychologically, I still think that training people to use different devices
for different things is more likely to stick than account separation on
steroids. This extends to physical security - not leaving a work laptop in
your backpack in a nightclub cloakroom like you might a personal device. But
in the end, by that reasoning, at a small company where you can avoid hiring
idiots, it's up to each person to decide what psychological tricks they need
to get themselves to do things.

I wouldn't trust something like this to keep high-security information
separate. When some exploit escapes Xen, or (for a corp) reaches otherwise
securely configured Windows systems, there is nothing like isolated networks
to keep your blood pressure low. For most software-as-a-service dev type
people you already have this - your data lives in a data center on carefully
configured production servers. But for data science type users, you see a lot
of people (especially in academia) doing work with potentially scary datasets
on local laptops they probably also watch pirate TV on at home, which is a bit
concerning. I guess at least if they were using Qubes it would be a bit better
though.

~~~
greedo
Training users has been tried for over two decades and has largely failed to
hinder black hats in any significant way.

~~~
coldtea
Failed on the users who took well to the training, or to those who ignored
it/failed it?

Because we can always not care about those others in the context of what _we_
should do.

~~~
greedo
Failed to improve computer security overall. Users (generally speaking, not HN
readers) don't have the skills/inclination/time to be proficient at managing
their systems. Efforts to educate them in malware avoidance, system upkeep,
etc. are failures by and large.

Technology can only do so much to "protect" users from themselves, and from
miscreants. Couple this with an indifference to privacy among most of the
connected population, and you've got a recipe for a world where nothing is
safe.

[http://panelsyndicate.com/comics/tpeye](http://panelsyndicate.com/comics/tpeye)

------
escapologybb
Slightly OT: I'm a reasonably competent Mac user, I use them all day and
depend on them to control my house as I'm disabled. In the event I were to be
compromised, can anyone suggest a logging tool/tools that I might be able to
use on my network such that I could work out what the problem was and correct
anything that needs correcting please?

We are looking at four or five Macs of differing types but all running the
latest OS, a number of iPhones, iPads, more Raspberry Pi's than I'm going to
admit to and a number of other IoT devices.

TIA!

Also, I really wish more companies would be this forthcoming when they get
pwned. I think it's really good when a large company comes out with this type
of mea culpa, mea maxima culpa. If professionals can get totally pwned, I
really do think it tends to make ordinary users think about their security a
little more. Or maybe I'm just hopelessly optimistic!

~~~
alecco
Network syslog and a Raspberry Pi with an external drive should be more than
enough.

~~~
escapologybb
I've been doing some googling and it looks like syslog is something that I run
on every machine, and then it passes the results of its logging to the
Raspberry Pi for collation and possible inspection later on. Have I got the
basic gist of it?

Thanks for the answer, greatly appreciated. :-)

------
ianlevesque
> I also likely bypassed the Gatekeeper warning without even thinking about
> it, because I run a handful of apps that are still not signed by their
> developers.

Apple really needs to fix this. In particular, open source applications often
aren't signed, for whatever reason, and it's clear that barring some change
they aren't going to start now.

~~~
st3fan
Fix what? Remove the option to bypass? Remove the warning? Lock it all down to
just app store apps?

~~~
ianlevesque
No. If enough of them were signed, people wouldn't be in the habit of
bypassing the warnings; there's no need to force everything to be locked down.

~~~
theparanoid
Users click through any and every kind of dialog box without reading. It's one
of the principles of UI design: users don't read. Requiring the user to type
in "Install dangerous program" would work.

------
briandoll
One way to protect against this is to not have SSH keys on your laptop. I've
been using Kryptonite [https://krypt.co/](https://krypt.co/) lately, which is
sort of like two-factor for SSH keys.

~~~
mintplant
This relies on the security of my phone OS, which I trust much less than my
desktop's.

~~~
hdhzy
I wonder why that is? Apps on the phone are sandboxed by default and there is
no way to get root (short of running an exploit).

~~~
mintplant
Lack of updates from OEMs, and a general lack of attention to security on the
part of the same, at least on Android. Many devices still haven't received the
patch for last month's Broadcom vulnerability, for example.

------
ythn
> without stopping to wonder why HandBrake would need admin privileges, or why
> it would suddenly need them when it hadn’t before

Seems like it's completely random whether an app needs admin or not. Blender3d?
No admin. Unity3d? Admin. Etc.

~~~
Whitestrake
Arbitrary, rather than random. Most of the time, it's entirely up to the
developer. I'm sure the percentage of applications that _actually require_
administrative privileges to perform tasks is in the single digits or lower.

~~~
ythn
> I'm sure the percentage of applications that actually require administrative
> privileges to perform tasks is in the single digits or lower.

This is probably true. I'm surprised we don't get after companies for
unnecessarily requiring admin with their apps.

------
ChuckMcM
Great writeup! I think a lot of developers would do well to understand both
the 'right' way to respond to this sort of event, and the tools you need in
order to do so. Most important are detailed logging and processes for re-keying
everything.

I've participated in, and run, exercises where such damage is inflicted on
purpose to surface gaps in the response processes and to fix them. I was
inspired by the Google DiRT (disaster recovery) and Netflix Chaos Monkey
exercises. Both of these create not simply review processes but simulation by
action, actually doing the damage to see the process work. Setting up your
systems so that you can do that is a really powerful tool.

~~~
jacquesm
That actually goes a step further than Chaos Monkey. I wonder how many
organizations would survive that approach if it were intense enough from day
#1. Better to ramp that up carefully and give people room to breathe and fix
things.

------
stillhere
No one has time to examine every line of source code in the 3rd party
applications that we use. That being said it irks me when people don't at
least isolate their sensitive material. There are many solutions available
including virtualization and jails to run 3rd party applications with less
risk involved.

------
msravi
And this is why ssh keys need to be encrypted - it's a good 2nd factor that
will prevent access to all your important stuff if your laptop is
stolen/compromised.

    
    
        ssh-keygen -p -f keyfile
    

------
MarkMc
- Do not install unsigned software

- Do not install personal software on your work computer

------
otempomores
A version control system that allowed separate, safe versioning of the
IP-central code, with a merge into the build system, would be nice.

------
tlrobinson
It's not particularly hard to add malware to an already compiled binary,
without access to the source code, is it?

~~~
sullivanmatt
You are correct, but if you actually have the source and can compile a binary
from that, it is much easier to evade detection. As you might imagine, the
gnarly things you have to do to add malware to existing software often trigger
detection mechanisms.

------
HurrdurrHodor
"There’s no indication any customer information was obtained by the attacker.
Furthermore, there’s no indication Panic Sync data was accessed."

Read: The attacker could have accessed all that data but didn't send me an
e-mail telling me that he did.

~~~
ScottWhigham
It wasn't their production environment that was compromised; it was their
source code repository.

------
nthcolumn
Meh, should all be on github anyway. Like.. Handbrake!

~~~
nthcolumn
Handbrake is on github, people put lots of hours into it and it can be
downloaded, checked and compiled. handbrake is used to transcode video - often
between proprietary formats, people often put a lot of hours into the videos
transcoded by handbrake, but this was binary handbrake on a mac, macs are
based on unix, people put a lot of hours into unix... people put a lot of
hours into the company's source code... but it too was stolen, the world is a
cynical place somehow. Maybe if it all was on github the world would not seem
such a cynical place and people would realize that the value is in what they
themselves bring and not in the thing on github.

------
Sidnicious
> And more importantly, the right people at Apple are now standing by to
> quickly shut down any stolen/malware-infested versions of our apps that we
> may discover.

The "stolen" part bugs me — even though it would be incredibly shitty to
distribute cracked-from-source versions of Panic apps, I hope that Apple
wouldn't prevent users from running them. I appreciate the malware protection
built into macOS, but this might be an abuse of it.

~~~
Mtinie
Can you expand on your comment? I don't follow your logic. Isn't Apple legally
culpable if they knowingly act as a marketplace for stolen goods?

~~~
lanna
[deleted]

~~~
khedoros1
If someone broke into the KFC vault and wrote down the spice recipe used for
the chicken, we'd still call that a "stolen recipe". If part of the value of
the source code is its secrecy, then its value decreases when it's made
public.

Look at an example of one way the word "steal" is used in speech. If I say
"Good artists copy; great artists steal", and I saying that great artists
break into a building and illegally remove a physical artifact, or am I saying
that they copy something for their own benefit? If one can "steal" an idea,
then isn't that a "stolen idea"? And if that stolen idea is directly used to
create some salable product, then isn't that a "stolen product", in that
sense?

edit: The comment I responded to made the claim that source code couldn't be
stolen, only copied (similar to the standard argument "it's copyright
infringement, not theft", often applied to copied media). There was more, but
I don't remember the wording, and I don't want to misrepresent the position.

~~~
sillysaurus3
_If part of the value of the source code is its secrecy, then its value
decreases when it's made public._

It's not necessarily true that part of the value of source code is its
secrecy, though. We'd like to believe that, but it's difficult to come up with
evidence to support it. Most instances where source code is leaked result in
no damage to the owner, for example.

~~~
nemothekid
> _It's not necessarily true that part of the value of source code is its
> secrecy, though._

Pretty sure the same could be said of KFC's secret spices recipe.

~~~
pcwalton
I hate to be the person to post this comment, but anyway: We have pretty good
evidence that KFC's recipe has been reverse engineered and/or leaked anyway.
Doesn't seem to have affected its sales much.

[https://en.wikipedia.org/wiki/KFC_Original_Recipe#Recipe](https://en.wikipedia.org/wiki/KFC_Original_Recipe#Recipe)

~~~
eric_h
Indeed, never underestimate the power of the brand itself.

------
floatboth
> Within 24 hours of the hack, we were on the phone with two important teams:
> Apple and the FBI.

FBI, seriously? Calling the cops, over malware, as a cool independent software
company?! I mean, sure, fuck malware, but what happened to "fuck the police"?
:D

------
jlg23
Lesson learned: None.

You use the same machine for development of commercial, closed-source software
and for video transcoding, most probably for private use.

Your postmortem can be summarized as "[advertisement]".

I get that real security is too hard for most people. But even a few
precautions can make a big difference. In order of effectiveness (least
effective first):

* Don't have sensitive data mounted automatically (yes, ubuntu, your encrypted home directory is a joke).

* Don't have sensitive data on the OS-drive. Even if you are limited by archaic USB2, RAM is cheap and so is a virtual memory backed disk. Pushing your closed source into it won't take more than 30s.

* Work hard and party hard. But keep that separated. One computer for fun, one for work. The one for work should not even think about talking to external devices until it's sure the environment is friendly.

PS: I do drink my own kool-aid - I always carry 2 laptops that run 4 operating
systems. My development and sysop environment is not even capable of playing a
movie.

Edit: "too hard for most people" may sound harsh, but it is not meant like
this. I teach OPSEC to activists in developing countries, work for a non-
profit with real privacy concerns in a first world country and make real money
doing audits for rather large companies. When I say "for most people" it
should probably have been "in most circumstances".

~~~
z3t4
I think it's not too hard, it's just not convenient. We need to make security
the path of least resistance.

------
zeveb
Why was he installing Handbrake on a work computer? Maybe he had a business
need to transcode videos, in which case no problem, but was he installing
Handbrake on a work computer in order to rip DVDs personally? Worse, was he
perhaps doing work on a personal computer?

Folks, don't mix your business & personal lives. The cost is not worth the
benefit!

~~~
packetslave
Why do you think that Handbrake is only for ripping DVDs?

~~~
zeveb
I mentioned _right there_ its use for transcoding videos!

~~~
mynameisvlad
Okay, then was your comment at all necessary? Why assume he was using it for
personal DVD ripping when you literally provided a legitimate business use in
your own comment?

~~~
zeveb
> Why assume he was using it for personal DVD ripping when you literally
> provided a legitimate business use in your own comment?

Because IMHO that's the most likely reason for a developer to have Handbrake
installed. It's not the only reason, as I noted, but I believe it's the most
likely one.

~~~
mynameisvlad
No, that's the most likely reason for _you_ to have Handbrake installed. Don't
project yourself onto others.

As I already said, you provided a valid business case, at which point the rest
of your holier-than-thou comment goes out the window, because it could very
well have been that and the OP has no incentive to claim otherwise.

