Tell HN: Upgrade your Metabase installation (github.com/metabase)
208 points by zhoutong on July 21, 2023 | 72 comments



One of the better decisions we made at my firm was to not give analytics visualization tools like Metabase and Redash direct access to any production DB.

Always write your analytics data to a separate DB in a periodically run job. Only store aggregated anonymized data in the analytics DB you expose to internal stakeholders via tools like Metabase.


Also your production database is optimized for different workloads than your analytics database.

Usually production is used for fetching and updating a small number of records at a time (think updating a shopping cart) and has strict latency requirements, whereas analytics involves reading a large amount of data in columns (think a count grouped by one or two columns) and can be done in batches, where the results get more and more stale until the next batch runs.


How do you batch-write the results (say, updating shopping carts) when the frontend has to reflect what's in the database?


They're talking about moving data between two different back-end databases. Your production database is optimized for your application/latency.

Then you have your warehouse database, which you update once a day with information from prod.


That's a great idea, and it articulates something I have thought about the whole "use boring tech" thing (which I support): it doesn't preclude letting people use the shiny new thing. You can always let them plug it in and use it. But the core of the system should be as simple as possible and based on thoroughly understood tech (from the point of view of the team in question/accessible labor market).


I tend to discuss things in terms of the trunk, branch, and leaves.

Mostly in that the leaves of your system (parts that nothing else connects to or builds on) are generally a low-risk place to try new things. If you do run into any intractable issues, it's also an easy spot to pluck the leaf off and replace it.


Worth pointing out that we recently discovered an RCE in RestrictedPython that affects Redash: https://github.com/zopefoundation/RestrictedPython/security/...

This should further emphasize the need to isolate these tools and ensure they are only accessible to people who need them.


Exactly right -- we do all of that, and even then tightly control and audit who has access to the anonymized, aggregated, read-only data cube.


What kind of tooling do you/people use for that? Or just custom scripts?


Look up OLTP vs OLAP data stores to get an idea. There are a lot of common patterns for the specifics of implementing this. Usually you run a regularly scheduled job that dumps data representing some time period (e.g. daily jobs). There are some considerations for late-arriving data, which is a classic DE interview question, but for the most part, big nightly dumps of the last day's data/transactions/snapshots to date-partitioned columnar stores, using an orchestration engine like Airflow, are sufficient for 99% of use cases.
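
For a feel of the shape of such a job, here's a minimal sketch of the nightly-dump step in plain bash (the table, DSNs, and bucket name are all invented; in practice an engine like Airflow would schedule and retry something equivalent):

    #!/bin/bash
    # Hypothetical nightly job: dump yesterday's rows from the OLTP DB
    # into a date-partitioned spot in the warehouse.
    YESTERDAY=$(date -d yesterday +%F)   # GNU date, e.g. 2023-07-20

    # Export yesterday's rows from the production (OLTP) database...
    psql "$PROD_DSN" -c "COPY (
        SELECT * FROM orders
        WHERE created_at >= date '$YESTERDAY'
          AND created_at <  date '$YESTERDAY' + 1
    ) TO STDOUT WITH CSV HEADER" > "orders-$YESTERDAY.csv"

    # ...and land them under a dt= partition so analytics queries can prune by day.
    aws s3 cp "orders-$YESTERDAY.csv" "s3://analytics-warehouse/orders/dt=$YESTERDAY/"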


Tangent: I hate OLTP and OLAP as acronyms. They're only one letter off and completely bury the relevant meaning in semantic noise. Just say transactional vs. analytical processing. (They're still good search terms, though, because lots of existing literature/resources use them.)


(not the person you're replying to)

I can't recommend any specific tools without knowing a lot about the environment, but if you're looking for terms to google: ELT (Extract, Load, Transform) and CDC (Change Data Capture) will give you a sense of the landscape.

edit: the sibling comment that mentions Airflow is a good answer for an example of an ELT workflow.


Don't MariaDB, Postgres, etc. make replication pretty easy?
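
Something like this is what I have in mind, e.g. Postgres logical replication (a sketch; DSNs and names are placeholders, and it assumes wal_level=logical is already set):

    # On the production (publisher) side:
    psql "$PROD_DSN" -c "CREATE PUBLICATION analytics_pub FOR ALL TABLES;"

    # On the analytics (subscriber) side; this copies existing data,
    # then streams changes continuously:
    psql "$ANALYTICS_DSN" -c "CREATE SUBSCRIPTION analytics_sub
        CONNECTION 'host=prod-db dbname=app user=replicator'
        PUBLICATION analytics_pub;"

Though per the grandparent's point, a raw replica still exposes unaggregated production data; you'd typically aggregate/anonymize on top of it rather than point Metabase at it directly.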


How many of you received this notice via an official security advisory channel you're monitoring and acting on? If so, which advisory service do you use, and how do you configure it? Learning about it on HN is useful, but far from a reliable solution.


I am subscribed to their GitHub releases, and when I saw a release for every old version I knew what was up :-)


Yeah, I do the same for projects I use. I also received an email, but I don't remember if I signed up for their newsletter or something like that.


Saw it on HN.


It was definitely not announced on Full Disclosure or the oss-security mailing list.



> Will you release any information about the vulnerability?

> Yes, we’ll be releasing the patch publicly, as well as a CVE and an explanation in two weeks. We’re delaying release to give our install base a bit of extra time before this is widely exploited.

From their blog.


Oh absolutely, but it's trivial to get a CVE from the relevant CNAs: a web form or a phone call.

It's a bit silly.


Don't you have to share more details about the exploit then? That seems to be the thing they're trying to avoid for now.


Negative, you can request a CVE without specific details; CNAs do this all the time until the embargo is lifted.


I got an email directly from Metabase.


I think it's important to review the term "Zero Trust" because so many companies are getting it wrong.

Zero Trust does not mean: "No more VPNs and private IP network ranges, everything is public. ::elitist hipster noises::"

Zero Trust simply means: "Just _because_ you're on a private network [or coming from a known ip], doesn't mean you're authenticated."

You should have every single one of your internal network services (like Metabase) behind a VPN like Wireguard or numerous other options. The sole purpose of this is to reduce your firewall log noise to a manageable level that can be reviewed by hand if necessary.
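
For a sense of scale, a minimal WireGuard server config for this is on the order of (keys and addresses are placeholders):

    # /etc/wireguard/wg0.conf on the box fronting internal tools.
    # Keys generated with: wg genkey | tee server.key | wg pubkey > server.pub
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    # One [Peer] block per device allowed onto the network
    [Peer]
    PublicKey = <laptop-public-key>
    AllowedIPs = 10.8.0.2/32

Then bind Metabase (or the reverse proxy in front of it) to the WireGuard address only, so it never listens on a public interface.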

Obviously this isn't perfect security, but that's the _entire_ point: every security researcher says security should be an onion, not a glass sphere; many layers of independent security.


This is why I try to put everything behind NGINX with basic auth. Unfortunately, not everything works well that way, but in this case I suspect it makes the bug unexploitable by anyone without the password.
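
A rough sketch of that pattern (paths and names are placeholders; 3000 is Metabase's default port):

    # Create the password file (htpasswd ships with apache2-utils/httpd-tools):
    htpasswd -c /etc/nginx/.htpasswd analyst

    # Then, in the nginx server block proxying to Metabase:
    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:3000;
    }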


Ha, I was just about to go in here and say the same thing.

"Fortunately" some "white hat" hacker contacted us last year about another Metabase exploit. I gave him a 30 USD tip and ended up doing exactly what you are suggesting.

Now I'm glad that means I don't need to interrupt my vacation to fix this thing right now.


Here in Italy you're lucky if the company doesn't sue you :(


EDIT: I misunderstood.


That's simply not true, sadly; you're very much reliant on the company not attempting to sue you. Counterexamples (not implying these suits have been successful, but it's also not unheard of to have the police show up at your door and collect all your computers/phones etc. to investigate):

- https://www.golem.de/news/connect-app-cdu-verklagt-offenbar-...
- https://www.heise.de/news/Modern-Solution-Anklage-gegen-Aufd-...



I thought GP was talking about their employer suing them for bugs they created.


Hmm, I was thinking that's a standard thing, at least among the HN crowd. Basic setup: Cloudflare -> Nginx -> Docker -> 3rd-party app, all on a dedicated VM.


You can also set up some reverse proxies to auth with SSO like Google. I use Traefik + https://github.com/thomseddon/traefik-forward-auth for personal projects, even on my local network.


I like NGINX, but I prefer how simple it is to set up Caddy with basic auth. Caddy is already simpler to configure (and has automatic SSL via Let's Encrypt), but its basic auth directive is so much simpler to get working than NGINX's that I do it by default now.
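
For comparison, the whole Caddy version is roughly this (site name and hash are placeholders; the directive is spelled basic_auth on Caddy 2.8+, basicauth on earlier 2.x):

    # Generate a bcrypt hash for the password:
    caddy hash-password --plaintext 'hunter2'

    # Caddyfile -- TLS is automatic:
    metabase.example.com {
        basic_auth {
            analyst <bcrypt-hash-from-above>
        }
        reverse_proxy 127.0.0.1:3000
    }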


Better yet, oauth2-proxy in case of an organization: only admins need to know the secrets, every user simply uses SSO to get access.


or vpn



They say they'll be releasing the patch publicly, but isn't this OSS? Can't anyone just do a diff and, with a little "elbow grease", find the patch?


They haven't released the source, and the compiled versions are non-trivial to diff (e.g. there are nondeterministic numbers from the Clojure compiler that seem to have changed from one to the other, and .clj files have been removed from the jar).

The old version has `hash=1bb88f5`, which is a public commit: https://github.com/metabase/metabase/commit/1bb88f5

Whereas the new version has `hash=c8912af`, which is not: https://github.com/metabase/metabase/commit/c8912af


I could be wrong (and often am), but I am seeing updates related to Druid client authentication.


I didn't even know you could have a "private" commit on GitHub/an open source repo like that.


Oh, I didn't mean to imply you can, just that it's 404... presumably it exists in a repo checked out on someone's machine, and maybe in a separate private GitHub repo.


This is silly on my end (I woke up early and have time to kill)...

Also like, note: I would never publicly disclose whatever I find, I'm just curious

I observed exactly what you said about the Clojure filenames not matching up, etc. etc.

    #!/bin/bash

    # Directories containing the two decompiled jars (via jd-cli / jd-gui)
    DIR1=~/metabase-v0.46.6.jar.src
    DIR2=~/metabase-v0.46.6.1.jar.src

    # Create a fuzzy hash for each file in a directory; write the list
    # outside the scanned tree so a rerun doesn't hash it as well.
    create_fuzzy_hashes() {
      local dir=$1 out=$2
      find "$dir" -type f -print0 | while IFS= read -r -d '' file; do
        ssdeep -b "$file" >> "$out"   # -b: bare mode, basename only
      done
    }

    create_fuzzy_hashes "$DIR1" hashes1.txt
    create_fuzzy_hashes "$DIR2" hashes2.txt

    # Match signatures from one list against the other
    ssdeep -k hashes1.txt hashes2.txt
How far do you think this gets us (fuzzy hashing)?

I was thinking this, or binary diffing the .class (instead of the "decompiled" .java)?


I found something which is clearly a security fix, using the same idea but more naive: just diffing the lengths of the decompiled files. It's not at all clear how the issue I found would be triggered by an unauthenticated user, though.
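
A minimal sketch of that, reusing the same hypothetical decompiled-source dirs as the script above:

    DIR1=~/metabase-v0.46.6.jar.src
    DIR2=~/metabase-v0.46.6.1.jar.src

    # Flag files whose decompiled line counts differ between the two trees
    (cd "$DIR1" && find . -type f -name '*.java' -print0) |
      while IFS= read -r -d '' f; do
        if [ ! -f "$DIR2/$f" ]; then
            echo "only in old tree: $f"
        elif [ "$(wc -l < "$DIR1/$f")" -ne "$(wc -l < "$DIR2/$f")" ]; then
            echo "length changed: $f"
        fi
      done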


> Yes, we’ll be releasing the patch publicly, as well as a CVE and an explanation in two weeks. We’re delaying release to give our install base a bit of extra time before this is widely exploited.


Unfortunately that means it's not possible to deploy this without violating the AGPL...


No one cares. It's a two week violation and no one is going to hunt anyone down who released this early internally.


Even though this is technically a violation, licenses aren't black & white. The objective and intent of the AGPL is not being violated by delaying release by a couple weeks to give time for security patches to be applied.


https://github.com/metabase/metabase/compare/v0.46.6...v0.46...

I can't tell if that's it?

edit: I've looked at it a few times, I don't think that's it?


The only thing that seems remotely interesting is the "private key" part - I don't know Clojure but it doesn't seem like that's it.


They backported it to v0.45x and those changes don't seem to be included: https://github.com/metabase/metabase/compare/v0.45.4...v0.45...

In other words, it isn't checked into source control publicly yet. Interesting.

I tried to "decompile" the jars and loop over the files but it didn't yield much/wasn't clean enough to be of help.


It would be nice to know if this vulnerability affects people who never made their Metabase installations publicly accessible.

I.e., if I am running Metabase locally.


It’ll be an RCE. If you are network isolated or have a proxy in front of it, you can take the weekend off.


How would an attacker exploit that?


A vulnerability (not necessarily this one, just hypothesising) could be exploited via a payload returned by an outbound request to the internet.


I thought when the OP of this comment thread said "locally" they meant, like, it isn't exposed to the Internet.


"exposed" as a word does a lot of heavy lifting here. When someone is asking me casually "hey, is this server exposed to the public internet"?

I take it to mean "can someone connect to it in an inbound manner from the public internet?"

If the answer is no, it doesn't necessarily mean that packets don't have other ways of making their way to the server, for example, a service running locally could have a webhook mechanism that fires events to an internet-accessible server whenever certain events happen.

You might trust the services you're sending requests to as part of that, but they could become compromised and send exploits as a response. Other vulnerabilities could come from services running locally that reach out to the internet to check for updates... more surface area to exploit.

If the OP was asking "I'm running this locally and I've set up my machine and firewalls to disallow any packets outside of the loopback interface", then the risk of the unpatched server is certainly reduced, but they could still be affected by another piece of software running on the same machine with internet access that is compromised first.

Anything beyond an isolated machine with 100% air-gapping is theoretically at risk.

Doesn't mean that the OP's question was a bad question or anything, they can use the answer to know how quickly they should worry about patching based on their own situation and risk tolerance.


Great answer btw.

And yes, that is what I meant. curl hackmeplease.com 57 stack traces down.


Emergency deployment late Friday afternoon (by EU time, at least), the best way to end a week :)


Thanks for the heads up! Without your message I'd probably have found out in a couple of months :)


If I have my Metabase installation protected behind OAuth with G Suite, am I protected from these kinds of vectors?


Perhaps a naive question, but if running Metabase within a Docker container, what permissions would this RCE have? AFAIK the container has network access and access to the mounted volumes, and that's it, right?


Presumably the Metabase instance also has credentials to access some databases, some of which may have enough privileges to also get RCE on the database machines (as well as to mess with the data they hold).


We issue separate read-only credentials for database access, fortunately. Still doesn't remove the risk of all the data being exfiltrated, though.
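
For anyone wanting to do the same, a sketch of what that looks like in Postgres (role, database, and schema names are invented):

    # Run once as an admin against the BI database (DSN is a placeholder):
    psql "$ANALYTICS_DSN" <<'SQL'
    CREATE ROLE metabase_ro LOGIN PASSWORD 'change-me';
    GRANT CONNECT ON DATABASE analytics TO metabase_ro;
    GRANT USAGE ON SCHEMA public TO metabase_ro;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO metabase_ro;
    -- pick up tables created later, too:
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO metabase_ro;
    SQL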


The container has access to whatever database you connect metabase to for BI. If the db connection credentials are available to the container, it's possible a malicious actor could access your prod db.


It depends on how the container is being run and whether it has root access.


> Extremely severe. An unauthenticated attacker can run arbitrary commands with the same privileges as the Metabase server on the server you are running Metabase on.

Java deserialization strikes another one down, I assume?


Will it still be (as) dangerous if Metabase is running inside a container?


To all the data inside of it? Sure.

To all of the auth tokens and user creds? Why not.


What would happen if a software's database was completely accessible via an open API endpoint?


thank you!



