Hunting for Nginx alias traversals in the wild (hakaioffsec.com)
534 points by celesian 11 months ago | 160 comments



FWIW gixy (nginx configuration checker) catches this: https://github.com/yandex/gixy/blob/master/docs/en/plugins/a...

(and nixos automatically runs gixy on a configuration generated through it, so the system refuses to build <3)
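
For anyone not on NixOS, it's also easy to run by hand against an existing config (assuming the stock config path; point it at yours if different):

    pip install gixy
    gixy /etc/nginx/nginx.conf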


If a webserver requires additional tools for the user to avoid all these pitfalls, maybe just maybe it should re-evaluate its defaults.


Yeah, the config checker should be built-in, and if it does not pass, then one must use --force or similar to start the server.


But then how will nginx continue to pretend that it is still 1995?

nginx was once amazing, but it’s decidedly bad now when compared to modern webservers.


What is a modern webserver? I only use Apache or nginx... anything cobbled together with nodejs or go doesn't count.


Caddy has been my default choice recently: https://caddyserver.com

Among other things, it features automatic TLS via ACME and dead-simple configuration for my most common use cases: namely, serving a directory of static files and reverse-proxying to an app server.

It is written in Go, but I certainly wouldn't describe it as "cobbled together."

I'm also a fan of Traefik, but it's strictly a reverse proxy; there's not even built-in support for serving static files. It's great, though, if you have e.g. a bunch of containers on a single host and you want to front them all with a single load balancer.


Thank you. I didn't know about gixy and ran it on my home server, which found a vulnerability ($uri in a 301 redirect).


I just gave nix a go and so far it seems great.

But do you know if there's a nicer options finder? The one I found, where you just search across all several thousand options, kinda sucks. I want to just see my package (say, ssh) and just the ssh options, but the results get littered with irrelevancy.


When I roughly know what I'm doing I use search.nixos.org; if you give it the full services.foo prefix it's usually relevant enough, e.g. for ssh you'd want "services.openssh", which you can find skimming through the results of just searching 'ssh' first:

https://search.nixos.org/options?channel=unstable&from=0&siz...

For anything I'm not 100% sure will be obvious I search through a local clone of the nixpkgs repo directly, but I'll be honest and say I just never took the time to search for a better tool.


> if there's a nicer options finder?

https://mynixos.com/

> I want to just see my package (say, ssh) and just the ssh options

https://mynixos.com/nixpkgs/options/programs.ssh


I would suggest using man and searching like any piece of documentation. Specifically you are looking for `man configuration.nix`


That's actually a worse search experience, and slower, because less struggles with the number of lines.



My main usage of Nix is on non-NixOS machines, where I use Home Manager. It has a similar problem, but since it only covers the packages it provides configuration for, searching through its options is a smaller issue.

Not sure if this helps you at all or not, it really depends on your usage of Nix, but for managing user configuration I do recommend Home Manager.


I found this a few weeks ago: https://github.com/mlvzk/manix


NixOS doesn't run Gixy anymore, see https://github.com/NixOS/nixpkgs/pull/209075.


NixOS core maintainer here. That's about nginx' own test. Gixy is still run when writing any nginx config file with the writer helper function https://github.com/NixOS/nixpkgs/blob/b6cc06826812247fe54655...


At risk of asking a dumb question, is there any good reason that you’d want nginx to allow traversing into “..” from a URL path? It just seems like problems waiting to happen.

Edit: Actually, I’m a bit lost as to what’s happening in the original vuln. http://localhost/foo../secretfile.txt gets interpreted as /var/www/foo/../secretfile.txt or whatever… but why wouldn’t a server without the vulnerability interpret http://localhost/foo/../secretfile.txt the same way? Why does “..” in paths only work sometimes?


That has been a known issue in nginx for a very long time and it's a common attack vector at CTFs:

https://book.hacktricks.xyz/network-services-pentesting/pent...


There is a LFI vulnerability because:

    /imgs../flag.txt
Transforms to:

    /path/images/../flag.txt
I've only implemented a handful of HTTP servers for fun, but I've always resolved relative paths and constrained them. So I'd turn "/path/images/../flag.txt" into "/path/flag.txt", which would not start with the root "/path/images/" and hence be denied without further checks.

Am I wrong, or, why doesn't nginx do this?


It does when you use the root directive. For that reason, alias should be avoided where possible.
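
To make the difference concrete (directory names made up): alias replaces the matched location prefix with the alias path, so a prefix without a trailing slash lets a request like /imgs../secret.txt escape, while matching slashes, or using root instead, avoids it. The two fixed blocks below are alternatives, not one config:

    # vulnerable: "/imgs../secret.txt" maps to "/var/www/images/../secret.txt"
    location /imgs {
        alias /var/www/images/;
    }

    # safer: location and alias both end in "/", so "/imgs../..." no longer matches here
    location /imgs/ {
        alias /var/www/images/;
    }

    # root appends the full normalized URI instead, so files live under /var/www/imgs/
    location /imgs/ {
        root /var/www;
    }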


The problem is that a URL isn't actually a path. It's an abstract address to a resource which can be a directory or file (or an executable or stream or ...).

In this case part of the URL is being interpreted by nginx as a directory (http://localhost/foo) due to how that URL is mapped in the configuration to the local filesystem. Apparently it references a directory, so when nginx constructs the full path to the requested resource, it ends up with "${mapped_path}/../secretfile.txt" which would be valid on the local filesystem even if it doesn't make sense in the URL. Notice how the location of the slashes doesn't matter because URLs don't actually have path elements (even if we pretend they do), they are just strings.

This is a very common problem that I have noticed with web servers in general since the web took off. Mapping URLs directly to file paths was popular because it started with simple file servers with indexes. That rapidly turned into a mixed environment where URLs became application identifiers instead of paths since apps can be targeted by part of the path and the rest is considered parameters or arguments.

And no, it generally doesn't make sense to honor '.' or '..' in URLs for filesystem objects, and my apps sanitize the requested path to ensure a correct mapping. It's also good to be aware that browsers do treat URLs as path-like when building relative links, so you have to be careful with how and when you use trailing '/'s, because it can target different resources which have different semantics on the server side.


Not in any "normal" use-case, no. It'd make sense to make this behavior opt-in, like having an `allow_parent_traversal on;` flag in the location.


Just guessing, but nginx probably either checks for "/foo/bar/.." and disallows it, or normalizes it to "/foo/"; "/foo/bar.." is a perfectly valid file name, though, so it doesn't get caught by the net checking for this.


> Why does “..” in paths only work sometimes?

That fully depends upon the file permissions. In this case, let's assume that a user has permissions to read files all the way from the web index directory (../index.html) back to the root directory (/). At that point, since they have permission to traverse down to the root directory, they now have permission to view any world viewable file that can be traversed to from the root directory, for instance /etc/passwd.

In other words, imagine a fork with three prongs, and your web server resides on the far right prong. Imagine that the part of the fork where the prongs meet (the "palm" of the fork) is the file system. If your web server residing on the far right prong of that fork allows file permission to files and directories that lead all the way to the palm of the fork, at that point you could continue accessing files on other prongs once you have reached the palm.


Isn't setting correct permissions for www-data like, the first note in a bunch of "secure your web server" tutorials? I thought if read is only set for the directory with actual public files, and not for the parent directory, there should be no traversal possible like this?


> "Isn't setting correct permissions for www-data like, the first note in a bunch of "secure your web server" tutorials?"

It is indeed. And yet here we are.


How is this not seen as a vulnerability in nginx? This behavior is utterly absurd, seems to have no beneficial purpose, and straightforwardly exploitable.


It's done for speed. Straightforward text replacement is so much faster than checking to see if a path is properly terminated by a slash. And remember that Nginx became popular due to benchmarks that showed that it was more "web scale" than Apache2.


I find it hard to believe that searching for “..” would even show up in a benchmark.

In any case, it seems that nginx does try to search for .. but has a bug in the corner case where the “location” doesn’t end with a slash. I assume there’s some kind of URL normalization pass that happens before the routing pass, and if the route matches part of a path component, nothing catches the ..

If I’m right, this is just an IMO rather embarrassing bug and should be fixed.


Yeah, this whole thing reads to me like a bug in nginx. There is no obvious reason users would need that functionality.


What are the alternatives though?


To Nginx? HAProxy. Or Caddy if you're just doing web stuff.


HAProxy does not serve static files, so it's a poor alternative for this specific case.


Your comment makes nginx sound like some fly-by-night server that only achieved its performance by making lots of tiny-yet-dangerous "optimisations" like this one.

More likely it is an omission, which could be rectified with a warning or failure running nginx -t (verify configuration).

The actual performance comes from an architectural choice between event vs process based servers, as detailed in the C10k problem article. [1]

[1] http://www.kegel.com/c10k.html


False, the actual performance comes from architectural differences and optimizations.


> And remember that Nginx became popular due to benchmarks that showed that it was more "web scale" than Apache2.

More like because it was much faster out of the box, and came with many batteries included while Apache2 required mods to be separately installed.


And the config was nicer to read and write.


They could simply normalize the paths when parsing the configuration file. The overhead wouldn't show up in benchmarks because it only happens once at startup (and maybe when the conf file changes).


[flagged]


And a good carpenter understands that tools can be better or worse, and that a good carpenter who uses bad tools will be worse off than a good carpenter who uses good tools. It's not wrong of anyone (regardless of skill level) to demand better tools; power is not a blanket excuse for poor design. You can have powerful tools with non-dangerous defaults.


That idiom is so dumb.

There are 1000s of jobs that are incredibly easy with the tool for exactly that job but painful without, the auto industry alone has tonnes.

Software with footguns can be improved by removing the footguns.


A poor craftsman blames his tools... instead of fixing them.

How so many programmers read that as license to use crap tools boggles me.


Yeah, after that phrase confused me for years, I concluded that it really means a good craftsman won't even have bad (i.e. blameworthy) tools to begin with.


False dichotomy. Tools can be "powerful" without being easy to misconfigure. To use a carpenter example, circular saws have guard shields that snap into place if the saw is free in the air. Seems like a good thing, no? Are saws meaningfully less "powerful" because of this safety feature?


Shall we make straight razors illegal too while we're at it?

Let's not reduce everything to the least common denominator lest we end up like (formerly) Oregon where you couldn't pump your own gas because it was "dangerous" for the lay person.


Illegal? No, we don't need to do that. But we can certainly point out that safety razors already exist, are much easier to use safely and correctly, and do at least as good of a job for 99.9...% of people.

This also means that if someone chooses to use a straight razor it is immediate cause for suspicion of their reasons (e.g. it might be an aesthetic preference rather than dispassionately choosing the best tool for the job) and if they advise other people to use that tool, we can call that out as bad advice.

So we don't need to make Nginx illegal, or make that default illegal, but we can certainly point out that it's a foolish default when it offers no extra power to anyone, but offers significant risk to many.


I don't think you'd find a single woodworker anywhere who, if they could afford it, wouldn't prefer a table saw with Sawstop. How much are your fingers worth to you?


A poor carpenter blames his tools because it is the carpenter who maintains them. Unless you’re contributing back to upstream nginx’s code, the analogy isn’t apt. A carpenter could and should simply choose the best tool for the job, and when they determine what that tool is, maintain it as well as can be.


There are quite a few such foot guns in nginx, it's part of why I've moved to using caddy.


In the immortal words of Michael Bluth, a poor carpenter blames his shoddy tools.


OK, hear me out: a Linux-capability-like option that removes ".." from the kernel's file name parser.

Web apps have seen various bypasses involving somehow smuggling two dots somewhere since we were on dial-up modems. It's time to look for a way to close this once and for all, as the Linux kernel has done with several other classes of userland bugs.


https://man7.org/linux/man-pages/man2/openat2.2.html RESOLVE_BENEATH

(FreeBSD has this in ordinary openat(2) as O_RESOLVE_BENEATH.)
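
For the curious, a minimal sketch of using it on Linux (raw syscall, since glibc doesn't provide a wrapper; paths are made up):

    #include <fcntl.h>
    #include <linux/openat2.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* handle to the directory that lookups should be confined to */
        int dirfd = open("/var/www/images", O_PATH | O_DIRECTORY);
        if (dirfd < 0) { perror("open"); return 1; }

        struct open_how how = {
            .flags = O_RDONLY,
            .resolve = RESOLVE_BENEATH, /* reject "..", absolute paths and symlinks escaping dirfd */
        };

        /* fails (EXDEV) instead of walking out to /var/www/secret.txt */
        long fd = syscall(SYS_openat2, dirfd, "../secret.txt", &how, sizeof(how));
        if (fd < 0) perror("openat2");
        return 0;
    }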


That would break so many things that it would be insane to do.

You could just run nginx as a separate user with very limited rights, or just run it on Docker. This, plus updating regularly usually fixes 90% of security issues.


Most (I hope all) distributions already run nginx as a separate user. It's best practice.

But that won't help if you alias to "/foo/bar/www" and the application has a SQLite database at "/foo/bar/db.db", which the nginx user has to have access to. Same if you run it in a container (or lock down permissions using systemd).


There is no reason the web server needs to have access to the database file, the application that needs it should be running under a different user.


If that's an option then that's the right way to go. There is a reason some MTAs have been doing something like this for decades now (I'm thinking of qmail).

To be honest, I'm not sure if it's even possible to run the application/interpreter/cgi (e.g. php) as a child of the nginx process - though with Apache I'm still seeing that occasionally.


But the issue is -- would it break the things a web server is doing? It doesn't have to be a universal solution.


    /some/../path 
should pretty much 100% of the time be disallowed; there is no sensible use case that is not "someone wrote ugly code"

../some/path makes sense sometimes at least

... but I'd imagine it wouldn't be as useful as you think it is, because many apps resolve .. before passing it to the OS


I don't agree. Those kinds of paths are often the result of concatenating several configuration options, like APP_DIR=/some/app/bin; LOG_DIR="$APP_DIR/../logs". And APP_DIR comes to you from distro scripts, so you're not going to fork those scripts and support your own fork across updates; you just build upon those scripts.


The whole point of having an APP_DIR option is so that you can change it and things will just keep working. By doing $APP_DIR/.. you invalidate that by making assumptions about the parent structure. In particular something that could easily happen in the future is that you may not have write access to "$APP_DIR/.." You gotta do what you gotta do, but it is smelly.


Then you have fucked up your app config.

If the user gives your app a directory to play with, exiting that dir via ../something is the last thing you should do; it's horrible malpractice that just causes annoyance.

"Distro scripts" nearly always just point directly to /var/lib/something for data and /usr/something for the rest.


That makes no difference. Code often normalizes paths before they ever touch the filesystem API


It's something else in the kernel; there we have the permission system, which we rely on.

If you are serving files to the web from a folder, the web framework should handle not traversing outside the public root folder it was tasked to serve. If you are rolling your own, well, now you have to consider all kinds of stuff, including this.


I don't think this would have prevented it. Removing ".." segments from paths is part of URL parsing and required by the HTTP specification. Nginx very likely does this too.


> The Google VRP Team recognized our work by awarding us a $500 reward for uncovering this vulnerability. They believed the impact on the application wasn't severe enough to warrant a larger reward.

Exposing email and private keys of GCP accounts only gives you $500 reward? WTF. Google being Google I guess.


Glad that the leaks are still encrypted. Even companies that specialize in this sort of stuff are not immune to leaks, so this is honestly the best case scenario.


The title is significantly editorialized. The post title is:

Hunting for Nginx Alias Traversals in the wild

and the HN submission highlights the Bitwarden vulnerability, while there is a Google one discussed as well.


Ok, we've reverted the title. Submitted title was "Leaking Bitwarden's Vault with a Nginx vulnerability".


If all you need is a simple way to serve static files that minimizes resource consumption and is reliably secure, what is the state of the art these days? In the past I would probably reach for Nginx, but I wonder if a more focused/less configurable tool would be preferable from a security standpoint.


I use https://static-web-server.net/

Cross-platform, written in Rust, straightforward configuration, secure defaults, also has a hardened container image and a hardened NixOS module.

I wouldn't recommend Caddy. Their official docker image runs as root by default [1], and they don't provide a properly sandboxed systemd unit file [2].

[1]: https://github.com/caddyserver/caddy-docker/issues/104

[2]: https://github.com/caddyserver/dist/blob/master/init/caddy.s...

EDITED: phrasing


I use this...

    [Unit]
    Description=Caddy webserver
    Documentation=https://caddyserver.com/docs/
    After=network-online.target
    Wants=network-online.target systemd-networkd-wait-online.service
    StartLimitIntervalSec=14400
    StartLimitBurst=10

    [Service]
    User=caddy
    Group=caddy

    # environment: store secrets here such as API tokens
    EnvironmentFile=-/var/lib/caddy/envfile
    # data directory: uses $XDG_DATA_HOME/caddy
    # TLS certificates and other assets are stored here
    Environment=XDG_DATA_HOME=/var/lib
    # config directory: uses $XDG_CONFIG_HOME/caddy
    Environment=XDG_CONFIG_HOME=/etc

    ExecStart=/usr/bin/caddy run --config /etc/caddy/Caddyfile
    ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile

    # Do not allow the process to be restarted in a tight loop.
    Restart=on-abnormal

    # Use graceful shutdown with a reasonable timeout
    KillMode=mixed
    KillSignal=SIGQUIT
    TimeoutStopSec=5s

    # Sufficient resource limits
    LimitNOFILE=1048576
    LimitNPROC=512

    # Grants binding to port 443...
    AmbientCapabilities=CAP_NET_BIND_SERVICE
    # ...and limits potentially inherited capabilities to this
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE

    # Hardening options
    LockPersonality=true
    NoNewPrivileges=true
    PrivateTmp=true
    PrivateDevices=true

    ProtectControlGroups=true
    ProtectHome=true
    ProtectKernelTunables=true
    ProtectKernelModules=true
    ProtectSystem=strict

    ReadWritePaths=/var/lib/caddy
    ReadWritePaths=/etc/caddy/autosave.json
    ReadOnlyPaths=/etc/caddy
    ReadOnlyPaths=/var/lib/caddy/envfile
    [Install]
    WantedBy=multi-user.target


What's wrong with the unit file?


All linux processes have a default set of permissions that's absurdly large, even when not running as root. Web servers don't need but a fraction of those permissions. This may help: https://www.redhat.com/sysadmin/mastering-systemd


If you want a sandboxed unit file, why not just sandbox it yourself?


Sandboxing it yourself is fraught because any new feature could cause things like a syscall filter to crash the app. It has to be part of the application build/test/release process to prevent that, like it is in SWS.

Besides, we should be creating and using software that is secure by default: https://www.cisa.gov/sites/default/files/2023-06/principles_...


Ah yes, I agree Linux should not let processes have a set of permissions that large by default.


Shameless plug: Caddy does a great job here. Automatic HTTPS, written in Go so memory safety bugs are not a concern, has a solid file_server module.
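
For reference, the whole static-file case in a Caddyfile is roughly this (hypothetical domain and path); naming a real domain as the site address is what triggers the automatic HTTPS:

    example.com {
        root * /var/www/site
        file_server
    }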


+1 to Caddy. Just tried it recently and I was very happy to forget all the nginx jargon the next moment.


Isn't everything forced to https now?


Browsers try to navigate you to HTTPS but no, http the protocol is still working as it always has. Both nginx and Apache will require configuration to serve HTTPS.

You might still use HTTP on an internal network in a DMZ or other trusted network.


Good to know


No.

Some things were never meant to be, like downloading CRLs over HTTPS.


I'm using Caddy, it's great!


I have used Caddy for years, automatic SSL certificates, does file serving, does reverse proxy, very easy and clear to configure. Single-binary (Go) so easy to "install", single configfile.


Caddy is pretty simple to configure and serve static files from.



Last release 2016??


OP wanted a simple web server for serving static content. Are you aware of open CVEs? No? It's possible for software to be done you know. Just because something isn't a rolling release of change for the sake of change (like most Google crapware) doesn't mean it isn't fit for purpose.


Considering the vast majority of commits were made after 2016, I don't think it is "done".

And a C program, written by a single developer, with only 27 issues ever being filed? With all due respect, that's guaranteed to have some nasty bugs in there.


werc, shttpd, etc.

Treat any web request like you would a real user on a Linux system to whom you'd need to give access to download files via scp. Chroot, strict permissions, etc. Can't escape what you can't escape. A ../ should return the same as it would in the shell: permission denied.


How is a static site served from S3 considered in these parts of the interweb? I've never done this, but see it as an option, yet I never really hear of others using it either.


In my view, it's perfect (okay, maybe slightly less than perfect; dedicated platforms like Netlify, Cloudflare Pages, Firebase Hosting, etc. take it to the next level with their added related services and tools, as well as their generous free tiers). It's pay as you go, scales from zero to infinite, and has zero attack surface or maintenance.

I've run a couple of websites (WordPress or Hugo based, including my personal blog) like that and it's great.


You probably want some kind of CDN to avoid a HN frontpage link from making you go bankrupt, but it's a pretty decent solution.

I personally prefer something like Github Pages, though - it doesn't get much more hands-off than that!


Good Q. Using S3 as origin behind Cloudfront seems like a pretty standard AWS CDN setup for static assets... but S3 isn't a traditional web server.


Could you give a commentary on Traefik also, in terms of security and reliability? Thanks.


This is probably a dumb question but why would Bitwarden allow unauthenticated requests to /attachments at all? Even with the Nginx bug, wouldn’t the request have failed if that URL required authentication?


This is an exploit against the web server's configuration, so it never executes Bitwarden's authentication code or any Bitwarden code at all. It isn't unusual or incorrect for projects to use their own authentication rather than Nginx or a module.

It is still Bitwarden's responsibility since they shipped a dangerous configuration via Docker. Which they seemingly acknowledge and have since fixed.


> It is still Bitwarden's responsibility since they shipped a dangerous configuration via Docker. Which they seemingly acknowledge and have since fixed.

The screenshot makes it look like the docker setup option was still in beta and the page had warnings all over it saying there could be possible issues. I can't really judge Bitwarden too harshly here for releasing something in beta that was later found to have a vulnerability in it.


Ahh okay. That explanation makes sense. Thanks!


Please excuse the silly question: Would proper directory and file ownerships not prevent this traversal?

If nginx does not run as root, how can it read other files than the ones explicitly assigned to the nginx user?


It would absolutely prevent it. Run the app as one user and nginx as another, go-rwx on all app files, set the group of the "static" files to www-data with g+r on them, and now the web server can't access the app files.

It's LITERALLY app hosting 101 and people did it that way 20+ years ago.
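
Spelled out, it's roughly the following (user/group names and layout made up; the parent dir only needs to be traversable, not readable, by the web server's group):

    chown -R app:app /srv/app
    chmod -R o-rwx /srv/app                # nothing world-readable
    chgrp -R www-data /srv/app/public
    chmod -R g+rX /srv/app/public          # web server group can read the public assets
    chgrp www-data /srv/app
    chmod g+x,g-rw /srv/app                # traversable, but not listable, for www-data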


Ah the wonders of 022 umask. Personally I would always recommend making files unreadable to other users. If not for all files then at least significant directories like everything under /home, etc.

It may require more fiddling with group memberships, but it's well worth it.


I don't know about everyone else, but at this point I'm no longer doing a proper installation of nginx for personal stuff. I always just spin up a docker image... and I'm not checking if it runs as root or not, really.

Probably really screwing things up. Ouch.


Typical umask is 022 so most things are readable by nginx workers but not writable, they don’t need to be explicitly assigned (e.g. to www-data). If your application generates sensitive data of course you should probably use a 077 umask.


You could make an argument that bitwarden vaults constitute sensitive information.


You are correct.

Unfortunately, nginx (and other web servers) generally need to run as root in normal web applications because they are listening on port 80 or 443. Ports below 1024 can be opened only by root.

A more detailed explanation can be found here: https://unix.stackexchange.com/questions/134301/why-does-ngi...


> Ports below 1024 can be opened only by root.

Or processes running with the CAP_NET_BIND_SERVICE capability! [1]

Capabilities are a Linux kernel feature. Granting CAP_NET_BIND_SERVICE to nginx means you do not need to start it with full root privileges; this capability gives it the ability to open ports below 1024.

Using systemd, you can use this feature like this:

    [Service]
    ExecStart=/usr/bin/nginx -c /etc/my_nginx.conf
    AmbientCapabilities=CAP_NET_BIND_SERVICE
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE
    User=nginx
    Group=nginx
(You probably also want to enable a ton of other sandboxing options, see `systemd-analyze security` for tips)

[1]: https://man7.org/linux/man-pages/man7/capabilities.7.html


Nginx is started as root but it does not run as root; it changes its user after opening log files and sockets (unless you use a lazy Docker container and just run everything as root inside it).


Even in the official Docker image, an nginx user is created (latest, layer 6):

/bin/sh -c set -x && groupadd --system --gid 101 nginx && useradd --system --gid nginx --no-create-home --home /nonexistent --comment "nginx user" --shell /bin/false --uid 101 nginx .....

[1] https://hub.docker.com/layers/library/nginx/latest/images/sh...


Nginx workers shouldn’t run as root and certainly don’t on any distro I know. Typically you have a www-data user/group or equivalent. Dropping privilege is very basic.


OT, but this isn't the first time I've seen someone confuse the verb "delve" with "dwelve":

> ...we started dwelving into the code base...

The author may not be a native speaker, but this is far from a judgement on their English. I'm just curious about the provenance of this mistake, given the scarcity of words that begin with "dw". At first I thought it was a typo -- especially on a QWERTY keyboard -- but I've seen it often enough to question this.


>I'm just curious about the provenance of this mistake

Because of English pronunciation (pronounciation? :-P). English is extremely irregular, there are a thousand footguns in the language - both spoken and written - so as non-native speakers we tend to make small mistakes that stick to our brains like glue, and it's very hard to get rid of (rid off? :-P).

For me it kinda makes sense to say "dwelve" because it reminds me of "dwarfs" (dwarves? :-P) that live underground!


Dwindling dwarves dwell dweep :)


btw, as a non-native, I also cannot understand why some native speakers confuse the use of "you're, your" or "there, their" or even "through, tough". To me they sound completely different!


A Reddit thread on r/grammar (https://www.reddit.com/r/grammar/comments/fxahta/does_the_wo...) involves a poster asking a genuine question about the alleged word 'dwelve'. The answering commenter speculates that the author is conflating 'delve' and 'dwell'.

Another comment, added years later, admits the same confusion.


IMO, people learn language by seeing/hearing it used. And the internet is rife with misuse of language.

My particular pet peeve is using "weary" instead of "wary" or "leery". I've started to hear it spoken in YouTube videos now, too, so it's not just a typo.


And I learned something. Just threw it into DeepL to understand the translation. Thanks for pointing it out. It would probably have tripped me up.


I dropped nginx because it was really fiddly to configure and misconfiguration has potentially bad consequences.


what webserver did you not just describe?


Caddy?


This has nothing to do with bitwarden. This is a generic directory traversal attack (enabled by Nginx's configuration language being full of serious gotchas).


It does have to do with BitWarden: they wrote and shipped the buggy config.


It looks like they did say it was still beta and warned there could be issues though. I'll give them credit for that much.


Public relations future-proofing:

"Everything is in beta!"


The article didn't mention permissions, would this still work if the nginx user is denied permissions on things like `/var/log`? I suspect it wouldn't but isn't the most common cause of security flaws going to be unchecked assumptions?

As an aside, I didn't know Github code search accepted regex.


no it wouldn't work if the user nginx is running as didn't have read access to the directory or files


Ah then I just realized, it probably does have access to all nginx log directories, because nginx needs write permissions to them anyway, right? Now I really want to go double check all my permission setups...


It depends on how nginx is designed. In theory you could separate log writing into a different process, and drop those permissions from the worker process.

Or just write to stdout and have systemd handle the logging for you, that'd work too.


If I understand correctly, this is a vulnerability in self-hosted Bitwarden only. Is that correct?


This is for the single image self-hosted setup method, which is still in beta. The current supported self-hosted setup is a script that creates a bunch of individual containers for the different services.


Yes, per the article: "Bitwarden also offers a self-hosted option for those who want to maintain their own server, which is the one we are going to examine."


> Nginx, a versatile web server pivotal to numerous internet infrastructures, has held a dominant market share since its inception in 2004

Horse pucky. In those days, Apache httpd held dominant market share. Nice historical hijacking.


Note that this leaks the vault with secrets encrypted - a leak of the cyphertext.

> This vulnerability has been disclosed to Bitwarden and has since then been fixed. Bitwarden issued a US$6000 bounty, which is the highest bounty they issued on their HackerOne program.

That's a ridiculously low payout.


Small companies can't just give out $50k bounties, even if it would be deserved.



Huh. First time hearing about this. No longer a small company then.


I don’t know enough about bounty programs to comment on the amount, but my understanding is that leaking encrypted secrets isn’t really dangerous?


It's generally a question of time.

If you want to play the long game and collect a lot of encrypted data now, you can simply wait until it is possible to trivially decrypt, and/or start cracking now and let the years work on it.

Most encryption decisions are framed as a tradeoff of the time and resources it would currently take to brute-force your way through it, and how many years before a simple attack becomes feasible, vs. your $5 wrench attacks in the present day.


BW uses 100K rounds of PBKDF2 for the master password so I don't think that will be any time soon


BW now uses Argon2 over PBKDF. I can’t remember if that is by default, opt-in, or new accounts. But barring an argon vuln, this is even less of a concern.

Also, I think BW has been using more than 100k for some time now. Last I saw 600K was the recommendation.


The default for new Bitwarden accounts from Feb 2023 on is PBKDF2-HMAC-SHA256 at 600,001 iterations on the client and 100,000 on the server, with the option to use Argon2id. These settings are above current OWASP recommendations. https://cheatsheetseries.owasp.org/cheatsheets/Password_Stor... https://bitwarden.com/help/kdf-algorithms/


All the replies have given random statistics, but these don't shed much light on the length of time it may take an attacker to brute-force a password, or find a chink in the armor of the vault's encryption algorithm.

Now as I said, a significant threat actor with lots of time in their future plans can collect encrypted stuff such as vaults and bide their time. Someday, the decryption may be cost-effectively cheap. Someday, a flaw may be uncovered in the cryptography. Someday, a vault owner's secret key(s) may leak and can be correlated.

As I said, it's just a question of time, and the ability to hold on to your cards for long enough that they can be played in the proper manner. It may take 5 years, 10 or 20, but if the payoff is valuable enough, it's worth the wait for the threat actor.


There are practically zero scenarios where hacking ANY Bitwarden account 20 years from now nets you anything useful.

If the concern is general encryption and you're worried about a 20-years-from-now scenario, don't send it.


> There is practically zero scenarios where hacking ANY bitwarden account 20 years from now nets you anything useful.

Bitwarden is a password manager, yes? What about cloud accounts of someone's employer, like an AWS account that runs $1,000,000 of monthly assets? That wouldn't be valuable in 20 years?

What about VPN credentials for some big tech intranet? Yeah, hopefully they use MFA and they expire passwords before 20 years, but just in case, right?

I can certainly see nation-state actors hanging on to juicy encrypted password manager vaults, just on the off-chance they could hit the jackpot. I can think of plenty of accounts that would still be valuable and enabled 20 years from now.


Twenty years ago we had Windows XP.

You think AWS accounts are going to have a simple password requirement in the same time?

You don’t think twenty years from now that everything is a multifactor / immutable likely-bio hardware key?


A password vault contains a lot of long-lived secrets protected by a human-provided key, so it's really not something you want out there, even encrypted.


I would assume most people that are doing self-hosted are securing it behind a VPN like Wireguard instead of opening it to the whole web. (at least I hope so)


I am not. Working well so far. My instance is behind Caddy, behind a secret URL path. To talk to the instance, this "pre-shared secret" needs to be known first. So far I haven't seen any abnormal hits. I'm closing in on 3 years of using it in this setup, via Vaultwarden.

I’m aware that this is security through obscurity. The instance’s accounts use strong passwords and MFA.


Can this work for mobile devices?


Yeah, the full URL can be specified in Bitwarden clients (browser extension, mobile app) and then never touched again. The secret path only leaks if users use Bitwarden's sharing feature. It's not a "pre-shared secret" in that sense, as it can publicly leak by design.


Any pointers on how you set this up? Thanks.

By sharing features, did you mean organizations or Bitwarden Send?


I thought so too. But then did a quick search on Shodan and found these:

https://www.shodan.io/search?query=bitwarden

https://www.shodan.io/search?query=vaultwarden


I'm afraid not. I've seen some really dumb setups of BW when helping self-hosters.

I do think that while selfhosting is admirable, in the case of your password vault, it's not. It's one thing where I'd always advise against selfhosting or DIY, because the downside risk is just too big.

The chance of f*ing up may be tiny, but if you f*ck up, it's bad. Potentially bankruptcy-or-jail bad.


not compared to the $500 Google gave them


Not sure why your comment is last on the page. Google has significantly more resources, and the authors seemed to disagree with the amount awarded for the Google vulnerability.


What would I need to grep my nginx logs for to see if my possibly misconfigured servers were exploited? [^/]+\.\. (not adding a question mark after that regex even though I'm asking if that one would be ok)


Don't let the web server access the app's code; so many security problems solved...


Okay so I self-host Vaultwarden, what do I need to do to fix the vulnerability? The article mentions another flavor of the self hosted docker image though.


I have an nginx-proxy Docker container in front of Vaultwarden - there aren't any alias directives there. Vaultwarden itself appears to use Rust with an HTTP framework called "Rocket" [1]. Sorry, I'm not familiar with the Rust world.

But anyways, said vuln doesn't apply to vaultwarden.

[1] https://github.com/dani-garcia/vaultwarden/blob/19e671ff25bf...


Vaultwarden does not include or use nginx, and neither does its official Docker image. Unless you are using nginx yourself (you'd know) this does not affect you.



