I read the horror stories, the monthly bills of tens of thousands of dollars for one server, and just assumed there was something more substantial to the product, like they did something groundbreaking or novel. I never cared enough to actually look and see what they did.
Just about every advertised Datadog alternative does maybe 10% of what Datadog can do, and likely has hundreds fewer pluggable integrations. While it may be overkill for a simple application, one of Datadog's biggest benefits is that there's an integration for just about anything, and the product can go deep if you need it to.
The "omg my bill is out of control" issue usually stems from a few sources. One of the biggest is relying heavily on custom metrics added over time: you think you're paying X, but you really end up paying 2-3X or more by the end of the year. The tricky thing is, most of the things that cost a lot of money either have a lot of value now, or held a lot of value at the time.
For us it was the mismatch between AWS and Datadog billing: AWS bills by the second, while Datadog bills by the hour, so you should only ever use Datadog for persistent instances, not high-churn instances like dynamic background jobs, unless you want to completely rearchitect your application for the benefit of a vendor.
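To make the mismatch concrete, here's a deliberately simplified back-of-the-envelope model. It assumes Datadog counts any host seen during an hour as a billable host-hour; real Datadog infrastructure pricing uses high-water marks and committed host counts, so the numbers are illustrative only.

```python
# Hypothetical churn pattern: one short-lived background-job instance
# every 5 minutes, so 12 distinct instances appear within a single hour.
SHORT_JOB_MINUTES = 5
JOBS_PER_HOUR = 12

# AWS (per-second billing): you pay for the compute actually consumed,
# which adds up to a single instance-hour.
aws_instance_hours = JOBS_PER_HOUR * SHORT_JOB_MINUTES / 60   # 1.0

# Datadog (hourly granularity, simplified): every distinct instance that
# reported during the hour counts, no matter how briefly it lived.
datadog_host_hours = JOBS_PER_HOUR * 1                        # 12

print(aws_instance_hours, datadog_host_hours)
```

Under this model the same workload is one instance-hour of compute but twelve monitored host-hours, which is why churny architectures get punished.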
This is incredibly common - I've heard of a company that ended up rearchitecting its instance type choices because Datadog bills on a per-node basis (with some peak-usage billing shenanigans). Their business model unfortunately encourages some very specific architectures, which doesn't work for everyone.
Before I had a chance to work with Datadog, I generally operated Prometheus / Grafana, as it's basically the industry standard in k8s. The ability for an application to publish its own, often very detailed, metrics and have those auto-scraped is powerful.
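For readers who haven't run Prometheus: what the scraper pulls from an app's `/metrics` endpoint is just plain text in the exposition format. A minimal sketch without the client library (metric and label names here are made up for illustration):

```python
# Render a counter with labels in the Prometheus text exposition format.
# In a real app you'd use prometheus_client and expose this over HTTP;
# this just shows what a scrape actually returns.
def render_metrics(counters):
    lines = ["# TYPE app_requests_total counter"]
    for (endpoint, status), value in sorted(counters.items()):
        lines.append(
            f'app_requests_total{{endpoint="{endpoint}",status="{status}"}} {value}'
        )
    return "\n".join(lines)

counters = {("/checkout", "200"): 42, ("/checkout", "500"): 3}
print(render_metrics(counters))
```

Prometheus discovers these endpoints (e.g. via pod annotations in k8s) and scrapes them on an interval, so publishing a new metric is a one-line change in the app with no billing conversation attached.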
Learning that Datadog charges for these as custom metrics was shocking. This opens a wormhole of allow-list or opt-in considerations, then tag cardinality, or even introducing a middleware like Vector. It feels very backwards to spend effort reducing observability.
If Datadog were crap it would make things easier, but it really is a fantastic product, and a business after all. Prometheus integration is just so very cumbersome, which I imagine is strategic.
I'd love an open source alternative. But there just isn't one for APM (which is our main use case). Nothing comes close. Every time I see "OpenTelemetry integration" I just close the page. Hours and hours of manual setup, code pushes, etc while New Relic installs once and works.
I assume it's the same for people who use DataDog begrudgingly.
> I'd love an open source alternative. But there just isn't one for APM (which is our main use case). Nothing comes close. Every time I see "OpenTelemetry integration" I just close the page. Hours and hours of manual setup, code pushes, etc while New Relic installs once and works.
Depending on the language/environment/framework, OpenTelemetry autoinstrumentation just works. It's the new standard, lots of work is ongoing to make it work for everything, everywhere, and even the big observability vendors are adopting it.
I'm wondering when the last time you tried OpenTelemetry was, and in which language. I'm not going to say it's super mature (it's not), but I think it's come a long way from being a ton of manual setup, and it's more akin to the SDKs available commercially. Admittedly we (HyperDX) do offer some wrapped OpenTelemetry SDKs ourselves to make it even easier, but I think the base OTel instrumentation is easy enough as it is.
FWIW most of OTel is pretty easy to use and set up too. The OTel Operator on K8s installs autoinstrumentation agents for 5 languages --> pretty easy onboarding.
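As a point of comparison for the "hours of manual setup" claim: for Python, zero-code OTel instrumentation is a couple of commands (the service name and collector endpoint below are placeholders):

```shell
# Install the OTel distro plus the OTLP exporter, then let bootstrap
# detect installed libraries and add matching instrumentation packages.
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install

# Run the app unmodified under the autoinstrumentation wrapper;
# endpoint/service name here are example values.
OTEL_SERVICE_NAME=my-app \
OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4317 \
opentelemetry-instrument python app.py
```

No code changes required for the common frameworks; the K8s Operator route mentioned above injects the same kind of agent automatically via an annotation.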
Datadog is a pretty amazing product and the folks who build it should be proud of what they have done. BUT it's extremely expensive and most people don't use all the features. It's like Splunk: 99% of people haven't invested the time or energy to get the full value of the product they're paying for.
Oddly enough, this is why we started OneUptime in the first place. We were burned by the DataDog bill and wanted an open source observability platform ourselves.
I imagine datadog's AWS bill is also out of control, considering all the absurd levels of queries/groupings you can do.
I used to work on a growing AWS product with tons of features that no one used.
Often when we were creating a feature, our managers would have us include tags and support for making parts of the feature optional, but make sure no part of the feature (or the feature itself) was optional to start with. We would only enable the ability to toggle the feature if "a significant enough amount of customers weighted by revenue requested it".
Also got the "Build filtering, but don't expose it unless we have to".
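That "revenue-weighted requests" gate is simple to picture in code. This is a purely hypothetical sketch (function, names, and numbers are all invented), not the actual system:

```python
# Expose a pre-built toggle only once the customers asking for it
# represent enough of total revenue. All values are illustrative.
def should_expose_toggle(requesting_customers, revenue_by_customer, threshold):
    weighted = sum(revenue_by_customer.get(c, 0) for c in requesting_customers)
    total = sum(revenue_by_customer.values())
    return total > 0 and weighted / total >= threshold

revenue = {"acme": 900_000, "beta": 50_000, "gamma": 50_000}
print(should_expose_toggle({"acme"}, revenue, 0.5))  # True: acme is 90% of revenue
print(should_expose_toggle({"beta"}, revenue, 0.5))  # False: beta is only 5%
```

The point of the anecdote stands either way: the capability ships dark, and a small revenue calculation decides whether customers ever see the off switch.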
I use uptime-kuma - https://github.com/louislam/uptime-kuma - it obviously does a fraction of what these other things do but it does everything I need.