One of the better decisions we took at my firm was to never give analytics/visualization tools like Metabase and Redash direct access to any production DB.
Always write your analytics data to a separate DB in a periodically run job. Only store aggregated anonymized data in the analytics DB you expose to internal stakeholders via tools like Metabase.
Also your production database is optimized for different workloads than your analytics database.
Usually production is used for fetching and updating a small number of records at a time (think updating a shopping cart) and has strict latency requirements, whereas analytics involves reading a large number of rows across a few columns (think a count grouped by one or two columns) and can be done in batches, where the results get more and more stale until the next batch runs.
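To make that concrete, here is a purely illustrative contrast (table, column and connection-string names are made up):

# Transactional (OLTP): touch a handful of rows, must return in milliseconds.
psql "$PROD_DB_URL" -c \
    "UPDATE cart_items SET quantity = 3 WHERE cart_id = 42 AND product_id = 1001"

# Analytical (OLAP): scan a large slice of the data grouped by a column or two;
# fine to run in a batch against the analytics copy, even if it's a day stale.
psql "$ANALYTICS_DB_URL" -c \
    "SELECT order_date, country, count(*) FROM orders GROUP BY 1, 2"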
That's a great idea, and it articulates something I have thought about the whole "use boring tech" thing (which I support). It doesn't preclude letting people use the shiny new thing: you can always let them plug it in and use it. But the core of the system should be as simple as possible and based on thoroughly understood tech (from the point of view of the team in question / the accessible labor market).
I tend to discuss things in terms of the trunk, branch, and leaves.
Mostly in that the leaves of your system (parts that nothing else connects to or builds on) are generally a low risk place to try new things sometimes. If you do run into any intractable issues, it’s also an easy spot to pluck it off and replace it.
Look up OLTP vs OLAP data stores to get an idea. There are a lot of common patterns for the specifics of implementing this. Usually you run a regularly scheduled job that dumps data representing some time period (e.g. daily jobs). There are some considerations for late-arriving data, which is a classic DE interview question, but for the most part, big nightly dumps of the last day's data/transactions/snapshots to date-partitioned columnar stores, using an orchestration engine like Airflow, are sufficient for 99% of use cases.
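A minimal sketch of that nightly-dump pattern (illustrative only: the connection strings, the orders table and the bucket name are made up, and in practice the scheduling and retries would live in Airflow or cron):

#!/bin/bash
# Hypothetical nightly ELT job: export yesterday's rows from a prod replica,
# land them in date-partitioned storage, then rebuild the aggregated tables.
set -euo pipefail

DS=$(date -d "yesterday" +%F)   # partition key, e.g. 2023-07-21

# 1. Extract from a read replica, never the primary.
psql "$PROD_REPLICA_URL" -c "
    COPY (
        SELECT * FROM orders
        WHERE created_at >= DATE '$DS' AND created_at < DATE '$DS' + 1
    ) TO STDOUT WITH CSV HEADER" > "orders_$DS.csv"

# 2. Load into a date-partitioned path the warehouse can read.
aws s3 cp "orders_$DS.csv" "s3://analytics-lake/orders/ds=$DS/orders.csv"

# 3. Transform into the aggregated, anonymized tables stakeholders see in Metabase.
psql "$ANALYTICS_DB_URL" -f transform_orders_daily.sql

Late-arriving rows can then be handled by re-running the last few partitions on each run rather than only yesterday's.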
Tangent: I hate OLTP and OLAP as acronyms. They're only one letter/word apart and completely bury the relevant meaning in semantic noise. Just say transactional vs analytical processing. (They are still good search terms because lots of existing literature/resources use them.)
I can't recommend any specific tools without knowing a lot about the environment, but if you're looking for terms to google: ELT (Extract, Load, Transform) and CDC (Change Data Capture) will give you a sense of the landscape.
edit: the sibling comment that mentions Airflow is a good answer for an example of an ELT workflow.
How many of you have received this notice via an official security advisory channel you're monitoring/acting on? If so, which advisory service do you use, and how do you configure it? Learning about it from HN is useful, but far from a reliable solution.
> Will you release any information about the vulnerability?
> Yes, we’ll be releasing the patch publicly, as well as a CVE and an explanation in two weeks. We’re delaying release to give our install base a bit of extra time before this is widely exploited.
I think it's important to review the term "Zero Trust" because so many companies are getting it wrong.
Zero Trust does not mean: "No more VPNs and private IP network ranges, everything is public. ::elitist hipster noises::"
Zero Trust simply means: "Just _because_ you're on a private network [or coming from a known ip], doesn't mean you're authenticated."
You should have every single one of your internal network services (like Metabase) behind a VPN like Wireguard or numerous other options. The sole purpose of this is to reduce your firewall log noise to a manageable level that can be reviewed by hand if necessary.
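A minimal sketch of that with WireGuard (keys, addresses and the interface name are placeholders; this is not a hardened config):

# /etc/wireguard/wg0.conf on the box running Metabase
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]                           # one block per teammate/device
PublicKey  = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
EOF

wg-quick up wg0

# Only accept Metabase traffic (default port 3000) over the tunnel interface.
ufw allow in on wg0 to any port 3000 proto tcp
ufw deny 3000/tcp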
Obviously this isn't perfect security, but that's the _entire_ point: every security researcher says security should be an onion, not a glass sphere; many layers of independent security.
This is why I try to put everything behind NGINX with basic auth. Unfortunately not everything works well that way, but in this case I suspect it makes this unexploitable by anyone without the password.
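Roughly what I mean (a sketch; the hostname and cert paths are placeholders, and Metabase is assumed to be on its default port 3000):

# Password file (htpasswd comes from apache2-utils on Debian/Ubuntu).
htpasswd -c /etc/nginx/.htpasswd analyst

# Minimal site config: basic auth in front of the Metabase port.
cat > /etc/nginx/sites-available/metabase <<'EOF'
server {
    listen 443 ssl;
    server_name metabase.internal.example.com;

    ssl_certificate     /etc/nginx/certs/metabase.crt;
    ssl_certificate_key /etc/nginx/certs/metabase.key;

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
EOF
ln -s /etc/nginx/sites-available/metabase /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx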
Ha, I was just about to go in here and say the same thing.
"Fortunately" some "white hat" hacker contacted us last year about another Metabase exploit. I gave him a 30 USD tip and ended up doing exactly what you are suggesting.
Now I'm glad that means I don't need to interrupt my vacation to fix this thing right now.
That’s simply not true, sadly; you’re very much reliant on the company not attempting to sue you. Counter examples exist (not implying these were successful, but it is also not unheard of to have the police show up at your door and collect all your computers/phones etc. to investigate).
I like NGINX, but I prefer how simple it is to set up Caddy with basic auth. Caddy is already simpler to configure (and has automatic SSL via Let's Encrypt), but it's so simple to get its basic directive working compared to NGINX that I do it by default now.
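For comparison, roughly what that looks like with Caddy (hostname is a placeholder; Metabase assumed on its default port 3000, and Caddy fetches the certificate itself):

# Generate a bcrypt hash for the password Caddy will check.
caddy hash-password --plaintext 'choose-a-long-password'

# Caddyfile: automatic HTTPS plus basic auth in front of Metabase.
cat > /etc/caddy/Caddyfile <<'EOF'
metabase.example.com {
    basicauth {                    # spelled basic_auth in newer Caddy releases
        analyst <paste-bcrypt-hash-here>
    }
    reverse_proxy 127.0.0.1:3000
}
EOF
systemctl reload caddy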
They haven't released the source, and the compiled versions are non-trivial to diff (e.g. there are nondeterministic numbers from the clojure compiler that seem to have changed from one to the other, and .clj files have been removed from the jar).
Oh, I didn't mean to imply you can, just that it's 404... presumably it exists in a repo checked out on someone's machine, and maybe in a separate private Github repo.
This is silly on my end (I woke up early and have time to kill)...
Also like, note: I would never publicly disclose whatever I find, I'm just curious
I observed exactly what you said about the Clojure filenames not matching up, etc. etc.
#!/bin/bash
# Variables
DIR1=~/metabase-v0.46.6.jar.src    # decompiled with jd-cli / jd-gui (java decompiler)
DIR2=~/metabase-v0.46.6.1.jar.src  # decompiled with jd-cli / jd-gui (java decompiler)

# Function to create a fuzzy hash for each file in a directory
create_fuzzy_hashes() {
    local dir="$1"
    # -print0 / read -d '' keeps filenames with spaces intact;
    # skip hashes.txt so reruns don't hash the output file itself
    find "$dir" -type f ! -name hashes.txt -print0 |
        while IFS= read -r -d '' file; do
            ssdeep -b "$file" >> "$dir/hashes.txt"
        done
}

# Create fuzzy hashes for each file in the directories
create_fuzzy_hashes "$DIR1"
create_fuzzy_hashes "$DIR2"

# Compare the hashes: match DIR2's signatures against DIR1's
ssdeep -k "$DIR1/hashes.txt" "$DIR2/hashes.txt"
How far do you think this gets us (fuzzy hashing)?
I was thinking this, or binary diffing the .class (instead of the "decompiled" .java)?
I found something which is clearly a security fix, using the same idea but more naive: just diffing at the lengths of the decompiled files. It's not at all clear how the issue I found would be triggered by an unauthenticated user though.
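For the .class-level idea, a rough sketch (assuming the two jars are unzipped into the hypothetical directories ~/metabase-v0.46.6.jar.x and ~/metabase-v0.46.6.1.jar.x; javap ships with the JDK):

# Disassemble each .class with javap and diff the disassembly, which sidesteps
# some of the decompiler nondeterminism.
( cd ~/metabase-v0.46.6.jar.x && find . -name '*.class' ) | while IFS= read -r f; do
    old=~/metabase-v0.46.6.jar.x/$f
    new=~/metabase-v0.46.6.1.jar.x/$f
    if [ ! -f "$new" ]; then
        echo "removed: $f"
        continue
    fi
    diff <(javap -p -c "$old") <(javap -p -c "$new") > /dev/null || echo "changed: $f"
done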
> Yes, we’ll be releasing the patch publicly, as well as a CVE and an explanation in two weeks. We’re delaying release to give our install base a bit of extra time before this is widely exploited.
Even though this is technically a violation, licenses aren't black & white. The objective and intent of the AGPL is not being violated by delaying release by a couple weeks to give time for security patches to be applied.
"exposed" as a word does a lot of heavy lifting here. When someone is asking me casually "hey, is this server exposed to the public internet"?
I take it to mean "can someone connect to it in an inbound manner from the public internet?"
If the answer is no, it doesn't necessarily mean that packets don't have other ways of making their way to the server, for example, a service running locally could have a webhook mechanism that fires events to an internet-accessible server whenever certain events happen.
You might trust the services you're sending requests to as part of that, but they could become compromised and send exploits back in a response. Another source of vulnerabilities is services running locally that reach out to the internet to check for updates: more surface area to exploit.
If the OP was asking "I'm running this locally and I've set up my machine and firewalls to disallow any packets outside of the loopback interface", then the risk of the unpatched server is certainly reduced, but they could still be affected by another piece of software running on the same machine with internet access that is compromised first.
Anything beyond an isolated machine with 100% air-gapping is theoretically at risk.
Doesn't mean that the OP's question was a bad question or anything, they can use the answer to know how quickly they should worry about patching based on their own situation and risk tolerance.
Perhaps a naive question, but if running Metabase within a Docker container, what permissions would this RCE have? AFAIK the container has network access and access to the mounted volumes, and that's it, right?
Presumably the Metabase instance also has credentials to access some databases, some of which may have enough privileges to also get RCE on the database machines (as well as messing with the data they hold).
The container has access to whatever database you connect metabase to for BI. If the db connection credentials are available to the container, it's possible a malicious actor could access your prod db.
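One mitigation on that front, sketched for Postgres (role, schema and database names are made up): give Metabase its own read-only role rather than the application's credentials, so a compromised container can only read what you've explicitly granted.

# Hypothetical read-only role for Metabase's BI connection.
psql "$PROD_DB_URL" <<'EOF'
CREATE ROLE metabase_ro LOGIN PASSWORD 'use-a-generated-secret';
GRANT CONNECT ON DATABASE app TO metabase_ro;
GRANT USAGE ON SCHEMA reporting TO metabase_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA reporting TO metabase_ro;
ALTER DEFAULT PRIVILEGES IN SCHEMA reporting
    GRANT SELECT ON TABLES TO metabase_ro;
EOF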
> Extremely severe. An unauthenticated attacker can run arbitrary commands with the same privileges as the Metabase server on the server you are running Metabase on.
Java deserialization strikes another one down, I assume?