
Rest-client gem is hijacked - PleaseHelpMe
https://github.com/rest-client/rest-client/issues/713
======
mwmanning
Hey since this is blown up I just want to address it directly.

I take responsibility for what happened here. My RubyGems.org account was
using an insecure, reused password that had leaked to the internet in other
breaches.

I made that account probably over 10 years ago, so it predated my use of
password managers, and I haven't used it much lately, so I didn't catch it in
a 1Password audit or anything.

Sometimes we miss things despite our best efforts.

Rotate your passwords, kids.

~~~
donkeyd
Wow, that's a pretty well executed and possibly targeted attack then. It blows
my mind how easy it can be to perform a high impact attack by abusing popular
libraries. Hopefully this was caught before it got into production in high
profile implementations.

~~~
mwmanning
Yeah I'm assuming the methodology is:

1) Find high-value target libraries

2) Grab the usernames of accounts with push access

3) Check those against password dumps

I feel really stupid about this, but like I said it was an oversight. I
apologize and will try to do better.

~~~
0x0
Sounds like rubygems and other registries like npm should try to get ahold of
those password dumps and check them against their own account databases
somewhat frequently!
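One low-cost way a registry could do this is the Pwned Passwords k-anonymity API, where only the first five hex characters of a password's SHA-1 hash ever leave the machine. A minimal Ruby sketch (the api.pwnedpasswords.com endpoint is real; everything around it is an assumed integration, not anything rubygems.org actually runs):

```ruby
require "digest"
require "net/http"
require "uri"

# Parse a k-anonymity range response: lines of "HASH_SUFFIX:COUNT".
def match_count(range_body, suffix)
  line = range_body.each_line.find { |l| l.start_with?(suffix) }
  line ? line.split(":").last.to_i : 0
end

# How many times does this password appear in known breach dumps?
def pwned_count(password)
  sha1 = Digest::SHA1.hexdigest(password).upcase
  prefix, suffix = sha1[0, 5], sha1[5..-1]
  body = Net::HTTP.get(URI("https://api.pwnedpasswords.com/range/#{prefix}"))
  match_count(body, suffix)
end
```

A registry could run a check like this at signup and login (or against a locally mirrored dump) and force a reset on any hit.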

~~~
dspillett
If you find a reused password, how do you let the user know, though? If I got
a "your account is vulnerable" message I'd ignore it as junk, like all the
other ones I get pretty much daily. You could force a change the next time the
user logs in to your interactive interface, but many users won't do that for
some time.

The best approach is probably to disable the account completely until an
interactive login is made and a password reset can be forced, but some would
be up in arms about the inconvenience caused: you can't just allow a simple
reset, as the login could be coming from an attacker rather than the original
user, so an extra channel will need to be used to verify the identity. You
might just have to leave the account locked forever and expect the user to
create a new one - but now you have the old account and its content, which may
be used as a dependency of many projects which now break, unnecessarily if
there _hasn't_ been a login by a nefarious type.

~~~
yebyen
You could send that notification, invalidate any client tokens, and also
disable the compromised password, forcing the user to re-authenticate through
their email address, a la password reset, and I guess also verify they aren't
using the same password again.

You wouldn't lock the account forever. The point is to establish that the
person whose password was compromised knows about it, that the password is not
the only factor used to regain access to the account, and that your service
(rubygems) and its downstream users are not compromised as well as a result of
the breach.

Any groaning about the inconvenience caused by disabling account access until
the password is changed can simply be shrugged away in favor of security
concerns, with a link to this story about rest-client.

By the time you have learned the user's plaintext password, their account may
already have been compromised. There's a case to be made for disabling all
downloads of any gems from the account that might be compromised until you've
verified they aren't. That might be over the top, especially for popular
projects, as now we are talking serious inconvenience affecting potentially
thousands of downstreams or more.

It's a sticky situation, since you don't really know how long that password
has been in the open for hackers to use and abuse once you've discovered it in
a password dump.
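As a sketch of that lockdown flow, with invented model names (`User`, `tokens`, `old_digests`) standing in for whatever schema a registry actually uses, and bare SHA-256 standing in for a proper salted password hash like bcrypt:

```ruby
require "digest"

# Minimal stand-in for a registry's account model; field names are invented.
User = Struct.new(:email, :password_digest, :locked, :tokens, :old_digests)

def digest(password)
  Digest::SHA256.hexdigest(password)
end

# Lock the account: revoke tokens, void the leaked password, flag for reset.
def lock_compromised_account(user)
  user.tokens.clear            # invalidate all client/API tokens
  user.old_digests << user.password_digest
  user.password_digest = nil   # the leaked password no longer works
  user.locked = true           # interactive login now forces an email reset
end

# On reset, refuse any previously compromised password.
def acceptable_new_password?(user, candidate)
  !user.old_digests.include?(digest(candidate))
end
```

The point of keeping the old digests around is exactly the "verify they aren't using the same password again" step above.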

------
ageitgey
It looks pretty bad if you had deployed this :(

Here is a summary of the exploit, re-pasted from a great comment [1] written
by @JanDintel on the GitHub thread:

- It sent the URL of the infected host to the attacker.

- It sent the environment variables of the infected host to the attacker.
Depending on your set-up this can include credentials of services that you
use, e.g. database, payment service provider.

- It allowed the attacker to eval Ruby code on the infected host. The attacker
needed to send a signed (using the attacker’s own key) cookie containing the
Ruby code to run.

- It overloaded the #authenticate method on the Identity class. Every time the
method gets called, it will send the email/password to the attacker. However,
I'm unsure which libraries use the Identity class; maybe someone else knows?

So... it potentially compromised your users' passwords AND (if you were on
Heroku or similar, like many Rails apps are) system-level access to all your
attached data stores. About as bad as it gets.

[1] [https://github.com/rest-client/rest-client/issues/713#issuecomment-522967049](https://github.com/rest-client/rest-client/issues/713#issuecomment-522967049)
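The #authenticate overload in the last bullet is ordinary Ruby monkey-patching: alias the real method under a new name, then redefine it as a wrapper so every call is intercepted first. A harmless sketch of the technique (this `Identity` class and the `CAPTURED` array are stand-ins, not the actual payload):

```ruby
# A stand-in for whatever library class defines #authenticate.
class Identity
  def authenticate(email, password)
    email == "admin@example.com" && password == "s3cret"
  end
end

CAPTURED = []  # where a real backdoor would exfiltrate to the attacker

# Reopen the class, keep the original under a new name, and wrap it.
class Identity
  alias_method :original_authenticate, :authenticate

  def authenticate(email, password)
    CAPTURED << [email, password]           # siphon off the credentials...
    original_authenticate(email, password)  # ...then behave normally
  end
end
```

Because the wrapper still returns the original result, nothing looks wrong to the application, which is what makes this style of backdoor hard to notice.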

~~~
flixic
Not only could the data be accessed, it's entirely possible it was modified.
Unless you have good logging of all data changes (at the DB level) it can be
very difficult to detect these changes.

The first hijacked version was released on August 13th.

------
sersi
I think that rubygems should consider automatically enforcing multifactor
authentication for popular gems.

So any gem with more than 50,000 downloads should force the gem maintainer to
have MFA set up before they can publish a new version or do anything with that
gem.

Because, having MFA is not about protecting gem maintainers, it's about
protecting users. So, gem maintainers should not be allowed to be careless
with security by not using MFA. It's not their choice to make.

~~~
raesene9
It's not just rubygems that has this issue; it's all the other repos too, most
of which (AFAIK) don't enforce 2FA.

Also worth noting that whilst MFA helps, it's not a panacea as MFA isn't
generally compatible with automated CI/CD processes, so API keys will still be
required, and can be leaked/lost/stolen.

~~~
cooljacob204
Why would a CI/CD pipeline need permissions to modify and commit code to a repo?

~~~
raesene9
Say your CI pipeline runs automated tests, builds the gem and pushes to
Rubygems, it needs permissions to push to Rubygems.

So if an attacker compromises the API key used by that pipeline, they get the
rights to push to Rubygems.

~~~
enneff
Actually, publishing new versions seems like something that happens
infrequently enough that it would be fine to require a manual auth to complete
it. Given the potential risks, it seems prudent.

~~~
raesene9
I'd expect that very much depends on the software in question. As one example
that I'm aware of, gVisor from Google delivers nightly builds, so they're
building and pushing every single day.

It'd depend on the individual software library, and of course, as a consumer
of many libraries, you'll generally have limited or no visibility into the
practices of all your dependencies.

~~~
rubber_duck
I don't think nightly builds are the same thing as releases - you can have CI
publish a build but to create a versioned public release it should require
manual auth.

~~~
raesene9
That's a view of course (although in the case of gVisor they don't actually do
versioned builds, just nightlies), but here's a question.

As a consumer of software libraries, have you ever looked into the security
practices of the library author before choosing whether to use it or not?

~~~
ryanbrunner
This is becoming increasingly impractical for certain ecosystems as packages
depend on packages, which depend on packages, etc. It's less of a problem in
the bundler ecosystem, where larger packages with relatively few dependencies
are the norm, but in the JS world installing a package means that you're
likely installing tens or hundreds of sub-dependencies.

------
elaus
It's mind-boggling to think how fragile and potentially dangerous those
dependency ecosystems are – no matter if it's Ruby, JS, PHP or other languages
widely used for web apps.

We all just hope that nothing bad will happen or that it will be noticed fast
enough. Accounts get compromised, maintainers quit and transfer their project,
bad actors might even pay the dev of some lesser-known dependency…

I have no easy solution for this problem and of course I too use external
dependencies in my projects – but it feels like it's only a matter of time
till disaster will happen and most of us just ignore this problem till then.

~~~
raesene9
Yeah this problem has been a known issue for a long time, but there really is
no easy fix, and incentives are stacked against it getting resolved in any
meaningful way.

I did a talk for AppsecEU back in 2015 on this topic and found good material
talking about it as a risk going back years before that...

~~~
19ylram49
I think I might start using fully isolated environments (i.e., via
Vagrant/Docker/etc.) for all of my projects from now on (I already do for
some).

~~~
jcoby
That approach will mitigate your machine getting compromised (which is good),
but it won't prevent your production machines getting compromised if the gem
or package gets deployed. That is usually a much worse outcome.

And even in isolated environments I find myself running code outside of the
container for testing: usually a quick script to test some package's
functionality, opening a REPL to run something, or running a code generator
(manage.py, artisan, etc.). That's all it takes for the malware to break out
of the isolation and attack your machine.

------
romaaeterna
Ruby gem hijacking also happened to strong_password a few weeks ago.

[https://news.ycombinator.com/item?id=20377136](https://news.ycombinator.com/item?id=20377136)

This is a major new line of attack, and web app infrastructure is critically
weak to it. We rejected distro-controlled package management in favor of pip
and gem and npm years ago (for good reasons), but as this sort of attack
becomes much more common (which it will), we might find ourselves missing the
days of strong central control.

Rubygems should have acted on the strong_password news, but missed the
opportunity. I hope that they can get their act together now that they are
lucky enough to have a second chance before this style of attack really
explodes.

~~~
majewsky
> We rejected distro-controlled package management in favor of pip and gem and
> npm years ago (for good reasons)

While I'm generally a fan of distribution-provided packages, they would not
have helped in this case. Distributions simply lack the manpower to audit all
upstream releases for these kinds of issues.

~~~
hyperpape
This gem was published six days before it was found, which means the
effectiveness of the attack seems to have relied on it being picked up by
people doing automatic upgrades. Wouldn't a distro help, because it is
fundamentally less predictable about when it takes a new version?

------
robotfelix
It's worth noting that the hijacker pushed a malicious version of 1.6.x.

Version 1.7.0 was released to rubygems on 8th July 2014, and 2.0.0 on 2nd July
2016, so anyone who has started using rest-client or run a `bundle update`
recently is unlikely to be affected.

The impact _could_ have been significantly greater had the hijacker pushed new
versions of 1.8.x or 2.x as well, so it's very fortunate the breach was
spotted now.

~~~
rcfox
That's a good point. Could indicate a targeted attack?

~~~
Implicated
That was my first thought. It would seem as though releasing on a version that
old would be deliberate; why else would you do that if you weren't targeting
something specific?

------
shioyama
I'm an author of gems with a total of more than 4 million downloads. I just
set up 2FA after seeing this.

~~~
shioyama
I actually ended up going further and removing myself as an author from gems I
don't actually maintain anymore, which brings down my exposure considerably.
It's not as easy as it could be to remove yourself as an author from a gem
(you have to do it via the command line).

~~~
rcfox
What happens to orphaned gems? It seems like someone could make a case for
getting ownership of them much more easily than if you had kept ownership and
added a loud deprecation warning.

~~~
shioyama
I wasn't the only author on those gems, the others are (mostly) still active.
Not sure how orphaned gems are handled, though.

------
forgingahead
This is bad. Also from the GitHub comments, here's a useful snippet to quickly
check whether any of your projects are impacted[0]:

    
    
      cd ~/code  # Where all my projects live
      grep --include='Gemfile.lock' -r . -e 'rest-client \(1\.6\.1[0123]\)'

[https://github.com/rest-client/rest-client/issues/713#issuecomment-522987796](https://github.com/rest-client/rest-client/issues/713#issuecomment-522987796)
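If you'd rather ask RubyGems directly than grep lockfiles, something like this checks the gems installed on the current machine; the 1.6.10-1.6.13 range mirrors the versions the grep pattern above targets:

```ruby
require "rubygems"

# The hijacked releases, per the pattern in the snippet above.
BAD = Gem::Requirement.new(">= 1.6.10", "<= 1.6.13")

# All installed versions of the named gem that fall in the bad range.
def affected_versions(name = "rest-client")
  Gem::Specification.find_all_by_name(name)
                    .map(&:version)
                    .select { |v| BAD.satisfied_by?(v) }
end

puts affected_versions.empty? ? "clean" : "AFFECTED: #{affected_versions.join(', ')}"
```

Note this only covers gems installed locally, whereas the grep covers every project's Gemfile.lock, so the two checks complement each other.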

------
abarringer
Whitelist outbound access from your network. It's not perfect, and it can be
painful to deploy, particularly for gems, but it stops several categories of
attacks.

------
raesene9
Following closely on from Webmin's compromised CI/CD pipeline, this is another
instance of the growing problem of supply chain attacks.

With the software supply chain being as complex as it is, and the large number
of moving parts, we're only going to see more of these ...

------
shakna
Rubygems acted when they were notified [0]. There were a few gems affected by
this at the same time, not just rest-client.

[0] [https://github.com/rubygems/rubygems.org/wiki/Gems-yanked-and-accounts-locked#19-aug-2019](https://github.com/rubygems/rubygems.org/wiki/Gems-yanked-and-accounts-locked#19-aug-2019)

------
raesene9
For all the people who are rightly concerned about these attacks, here are a
couple of questions:

- Would you pay money for access to a package repository that had good
security practices? How much would you pay? Would you accept delays in library
updates to allow for security checks? If so, how long a delay would be
acceptable?

- Have you ever looked into the security practices of open source applications
or libraries that you wanted to use, and did the information you found (or
didn't find) affect your decision to use that software?

- How often do you use your inventory of all the libraries you depend on to
periodically check the provenance of those libraries and that they are
maintaining good security practices?

Ultimately these problems (like most) are one of incentives. It's very easy to
build software very quickly using the huge number of open source components
that are freely available.

Whilst speed of development and price are the primary considerations, it's not
surprising that security takes a lower rung on the ladder.

~~~
sundayedition
I used to work for the government, and we had a private gem server that
introduced much of what you're asking about: delays in releases as a trade-off
for security practices. Ultimately, I don't think there was any tooling
available to automate the vulnerability scanning enough that a private server
gives you much of a jump start on any CVE that bundle audit wouldn't already
catch. There's also the workaround of hitting GitHub directly to get a current
version, bypassing the private server.

Additionally, GitHub provides a bundler-audit-like service for free. And
identifying issues that aren't CVEs yet seems like something scriptable, but
also obfuscatable (on the attacker's end).

I don't think any team I've been on (federal or private) would pay extra for
this kind of service, given the frequency with which we do updates and the
amount of careful developer effort these updates require. The most recent
breaches have been noted well in advance of any updates we would have done.

I'd be glad to be wrong about it because I think a few more tools in this
space would only help the community.

It would be nice if a service could summarize the differences between actual
gem releases, though, to make things like the changelog and the diff easily
digestible for all the available updates (versus a scan/lint). That would let
developers identify these kinds of breaches more easily than cloning and
diffing.

------
notyourday
This actually is a perfect illustration of why, in production, all of your
systems have to go through a whitelisting proxy rather than a NAT for outside
access.

There should be a very limited number of known external URLs that your
production system needs to hit. Whitelist them on a proxy. Block the rest.
Dump the blocked requests into a log. Put alerts on that log. It will catch
most data exfiltration attempts and attacks such as this. Remember, the goal
is not perfect security; the goal is better security than someone else, so
that someone else gets to be the chump and not you.
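As a sketch of the proxy side, in Squid's configuration syntax (the domain list is illustrative; the point is default-deny plus logging):

```
# Allow only known destinations; everything else is denied and logged.
acl allowed_dst dstdomain .rubygems.org .github.com
http_access allow allowed_dst
http_access deny all
# Denied requests show up in access.log as TCP_DENIED;
# alert on those to catch exfiltration attempts early.
```

The same default-deny idea applies whatever proxy you use; the config keyword details here are Squid-specific.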

------
kpeekhn
Is there a way to check if a gem was released by an account using MFA?

If there were a "published with MFA" flag on every gem release, it would allow
a Bundler setting to block installing gems published without 2FA.

Of course, this would also help attackers find targets. But maybe it's worth
the trade-off?

~~~
hombre_fatal
Seems like a pointless, false sense of security.

What about all the attacks where the malicious actor is someone with publish
rights, like friendly package takeover? Your proposal makes that even more
effective since now the attacker gets a nice "published with mfa" badge.

~~~
kpeekhn
It would have prevented this attack, so I'm not sure how it's pointless.
Obviously it doesn't fix everything; MFA is MFA. I don't know why anyone would
take it as a guarantee that some third party has audited all the code.

~~~
hombre_fatal
> It would have prevented this attack

Your post was about a "published with mfa" vanity badge which I was responding
to, not the merits of mfa in general.

~~~
kpeekhn
Sorry, that is not what I meant.

I don't really care about a badge; I care about the information being
available so that it can be used in Bundler. It's about developers being given
the choice in their Gemfile to disallow installation of any gems uploaded
without 2FA. But in order to do that, we need rubygems to publish that
information.

~~~
hombre_fatal
No, I understand you. My point is that whether a package uses 2FA or not
should have zero impact on your security practices, yet your proposal suggests
otherwise.

It doesn't seem like it does you much good to know if a package uses 2FA
except to potentially weaken your defenses. For example, any scrutiny you
level at a non-2FA package should also be leveled at 2FA-enabled packages.
Though I suppose there is a non-zero benefit, so I won't belabor this argument
any further.

Perhaps package repositories should be nagging publishers to enable 2FA.
Though poorly implemented 2FA also introduces new attack vectors like the "lol
lost my phone" social engineering attack.

------
htns
Seems rather similar to the strong_password case from a month back:
[https://news.ycombinator.com/item?id=20377136](https://news.ycombinator.com/item?id=20377136).
I wonder if anyone has checked basic things like scanning all of rubygems for
"pastebin" or "eval( * http * )".
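A crude version of that scan can be done locally with the stock gem tooling; the version and grep patterns below are just examples:

```shell
# Fetch and unpack a gem, then grep its source for red flags.
gem fetch rest-client --version 1.6.9
gem unpack rest-client-1.6.9.gem
grep -rnE 'pastebin|eval\(|Base64\.decode' rest-client-1.6.9/
```

Looping that over every gem on rubygems.org would produce plenty of false positives, but as the sibling comments note, the harder problem is that real malware would simply obfuscate past fixed patterns.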

~~~
gotts
It surprises me a bit.

I'm wondering why RubyGems wouldn't implement some basic form of malware
detection? This type of code shouldn't be too hard to classify.

~~~
derimagia
Malicious users would just change their code slightly to get past it: using a
different service than pastebin, or just obfuscating the code more.

~~~
gotts
After thinking about it, I think you must be right. Malware detection is not
an easy task, especially because of Ruby's dynamic nature.

Even simple open(), sleep(), eval() calls could be easily obfuscated.

------
u801e
Does ruby gems allow for signing releases? For example, the maintainer could
upload their public key and use their private key to sign a package release.
Then the consumer could verify the signature via the public key. If the public
key changes, then the consumer could be alerted to that fact.

~~~
shioyama
Yes it does, and I do this with my gems, but it's not widely used, and
virtually none of the users of the gems I author take advantage of it.

[https://guides.rubygems.org/security/](https://guides.rubygems.org/security/)

> However, this method of securing gems is not widely used. It requires a
> number of manual steps on the part of the developer, and there is no well-
> established chain of trust for gem signing keys. Discussion of new signing
> models such as X509 and OpenPGP is going on in the rubygems-trust wiki, the
> RubyGems-Developers list and in IRC. The goal is to improve (or replace) the
> signing system so that it is easy for authors and transparent for users.

------
bdcravens
This is a fairly old version of the gem, but probably what the attacker was
going for (users who infrequently move to new versions, while avoiding those
who watch the most up-to-date version).

------
gmontard
That is sadly a good example of why relying on trusted and accountable API
clients should be considered critical for business.

When consuming APIs and not thinking about this, we are only building
technical debt and security issues for the future.

Today's example is really bad since it targets a well-used meta API open-
source library, but how many of those issues are already present on hundreds
of other obscure open-source API clients?

------
westoque
Out of curiosity, is there a legal way to go after people who do these things?
E.g., file a police report?

~~~
raesene9
Unless the attacker had very poor OpSec, it would be hard to track them down,
even assuming the relevant police force had the skills/manpower to do so.

Then you get the delight of likely jurisdictional issues, if it turns out the
attacker is not a resident of the same country as the victim that reported it.

~~~
ashleyn
It's also likely it was done by an intelligence agency.

~~~
mschuster91
Not really. The attacker seemed to target bitcoin once again... surprising
this shit is still profitable, though.

~~~
dspillett
_> surprising this shit is still profitable_

If the cost of the attack is as near to zero as makes no odds, _any_ income is
profit, whether it comes from being able to compromise bitcoin-related
accounts elsewhere, getting a miner to run on hundreds of servers and/or
thousands of clients, or getting other details to use in a "send me bitcoin
and I will/won't X" blackmail. And if there is no income from the attack, the
cost of trying is near zero.

------
t0mbstone
What's kind of interesting about this is the fact that it is able to just
blindly dump environment variables.

For a long time, environment variables have been evangelized as the secure
place to store credentials and things, but that just gives third party scripts
a known place to look.

You could argue that it might actually be more secure to store your secrets in
a separate, custom config file that gets read into the Rails app via an
initializer or something.
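A minimal sketch of that alternative: a YAML file outside the app root, loaded once at boot. The path and key names are invented; in a Rails app the load would live in an initializer:

```ruby
require "yaml"

# Read secrets from a file instead of ENV, so a naive `ENV.to_h`
# dump by a hijacked dependency doesn't capture them.
def load_secrets(path)
  YAML.safe_load(File.read(path))
end

# e.g. config/initializers/secrets.rb:
#   SECRETS = load_secrets("/etc/myapp/secrets.yml")
#   # app code reads SECRETS["database_url"] rather than ENV["DATABASE_URL"]
```

Of course, malicious code running in-process can still open the file if it knows where to look; the gain is only that there's no single well-known location to dump.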

------
pqdbr
Many of these attacks seem to use pastebin. I will add a hosts entry pointing
pastebin to localhost on my production servers.

------
gotts
Many popular gems have multiple authors (with push ability) on RubyGems -
like 4 or 5 authors, sometimes even more. It may look impressive on their
profiles, but from a security standpoint that's 4x or 5x the attack surface it
could potentially be.

------
dkozyatinskiy
Hello, here is an explanation and analysis of this rest-client backdoor:
[https://application.security/ruby-backdoor](https://application.security/ruby-backdoor)

------
jbverschoor
Good thing your Gemfile.lock is checked in and gems hosted on rubygems.org are
frozen/immutable.

------
willfiveash
Another case for universal adoption of 2FA.

------
hellothereyo
Why doesn't this repo mandate 2FA?

