RHEL and its derivatives have long been the standard in the ops world for their stability. Rocky Linux and AlmaLinux are the two main community editions; both are backed by pretty good communities, so it's hard to pick one.
It seems Ubuntu became largely popular with devs and the masses back in the day due to its ease of use. I never understood why more people didn't just go with Fedora back then.
My last dip into alternative distros (~2 years ago) for my dev desktop was quickly ended by various problems and missing settings for display, audio, and network. I'm not saying it wouldn't have worked (one was even Ubuntu-based), but I had to configure desktop stuff on the CLI, including having to figure out how. I'd rather not.
I'm absolutely fine running something else on a server though, and my Docker images are usually Alpine-based. But for the desktop, Ubuntu is the closest to "looks decent and just works".
I used to be a RHEL admin and was just more comfortable over there, but after the whole CentOS mess I ended up running Ubuntu LTS at home instead - I just wanted a "set it and forget it" machine so I didn't go with Fedora.
I'm currently regretting that decision, as I'm really not looking forward to devoting another weekend to rebuilding again.
I don't know about Rocky, but I updated all my CentOS systems (about a dozen desktops and a file server) to Stream and they work fine. The changeover has been pretty much a non-event.
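For reference, the conversion on CentOS 8 was just a couple of commands; this is from memory, so double-check against the official migration notes:

    sudo dnf swap centos-linux-repos centos-stream-repos
    sudo dnf distro-sync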
I just rebuilt one of my Ceph nodes with Rocky 9. They seem to be tracking RHEL pretty closely. I think Alma is a little quicker with patches, but both are fast.
> I never understood why more people didn't just go with Fedora back then.
For me at least: back then Fedora didn't have a supported non-terminal way to upgrade to a newer major version. (Whereas now upgrading Fedora is more polished and simpler than Ubuntu.)
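For the record, the supported CLI path these days is the dnf system-upgrade plugin. A sketch, where the release number is just an example:

    sudo dnf upgrade --refresh
    sudo dnf install dnf-plugin-system-upgrade
    sudo dnf system-upgrade download --releasever=40
    sudo dnf system-upgrade reboot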
Fedora, while very nice, moves too fast: basically as fast as the non-LTS Ubuntu releases. Ubuntu has the LTS option; for Fedora, that LTS option was CentOS, and today it's Rocky/Alma.
With the rest, I agree. I run Fedora on my desktop, but I would not use it for my parents, for example. Even with LTS, they were complaining that it changes all the time.
If it works for you, why not. Just be aware that the point releases did have breaking ABI changes, and now those can happen at any time, without waiting for a point release.
Since I used the old, non-Stream CentOS, those changes left a bitter taste. Enough to prefer Alma.
Wouldn't Debian be a safer choice? Ubuntu is just layers on top of Debian (one of those layers being the "pro" thing, it seems...), so Debian should be the obvious solution. RHEL would be the most likely to move in the same direction as Ubuntu, wouldn't it?
I’m now expecting to see Xbox Store on PlayStation (and vice versa), Samsung Store on my LG TV (because they have xCloud), no bad practices with a third party store, and everything positive.
Seeing that this solves all our problems, I’m looking forward to xCloud on my LG soon.
Norway is not big enough on an Internet scale to make a difference. Equipment and software companies will weigh the cost against the benefit and may decide to ignore that market.
Promscale is the open-source observability backend for metrics and traces, powered by SQL, whereas Mimir/Cortex is designed only for metrics.
Key differences:
1. Promscale is light in architecture: all you need is the Promscale connector + TimescaleDB to store and analyse metrics and traces, whereas Cortex comes with a highly scalable microservices architecture that requires deploying tens of services (ingester, distributor, querier, etc.).
2. Promscale offers storage for metrics, traces, and (in the future) logs: one system for all observability data. Mimir/Cortex, by contrast, is purpose-built for metrics.
3. Promscale supports querying metrics using PromQL and SQL, and traces using Jaeger queries and SQL, whereas in Cortex/Mimir all you can use is PromQL for metrics querying.
4. Observability data in Cortex/Mimir is stored in an object store like S3 or GCS, whereas in Promscale the data is stored in a relational database, i.e. TimescaleDB. This means Promscale can support more complex analytics via SQL (see the sketch after this list), while Cortex is better for horizontal scalability at really large scale.
5. Promscale offers per-metric retention, whereas Cortex/Mimir offers a single global retention policy across all metrics.
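To make the SQL point concrete, a minimal sketch; the prom_metric schema and the view-per-metric layout reflect my understanding of Promscale's data model, so treat the names as assumptions and check the docs for your version:

    -- 5-minute averages for a hypothetical http_requests_total metric,
    -- exposed by Promscale as a view in the prom_metric schema
    SELECT time_bucket('5 minutes', time) AS bucket,
           avg(value) AS avg_value
    FROM prom_metric.http_requests_total
    WHERE time > now() - INTERVAL '1 hour'
    GROUP BY bucket
    ORDER BY bucket;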
Hi. I'm a Mimir maintainer. I don't have hands-on/production experience with Promscale, so I can't speak about it. I'm chiming in just to add a note about the Mimir deployment modes.
> Cortex comes with a highly scalable microservices architecture that requires deploying tens of services (ingester, distributor, querier, etc.)
Mimir also supports a monolithic deployment mode: you deploy the whole of Mimir as a single unit (e.g. a Kubernetes StatefulSet), which you then scale out by adding more replicas.
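A minimal sketch of what that looks like; the flags are from memory, and the HTTP port and config path are assumptions, so check the docs for your Mimir version:

    # Run Mimir as a single monolithic process; -target=all is the default
    docker run --rm -p 8080:8080 \
      -v $(pwd)/mimir.yaml:/etc/mimir/mimir.yaml \
      grafana/mimir:latest \
      -config.file=/etc/mimir/mimir.yaml -target=all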
Promscale supports ingestion of data using Prometheus remote-write for metrics and OTLP (the OpenTelemetry protocol) for traces.
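For example, the Prometheus side is a standard remote-write stanza in prometheus.yml (9201 is Promscale's default port, as far as I recall; adjust the hostname to your setup):

    remote_write:
      - url: "http://promscale:9201/write"
    remote_read:
      - url: "http://promscale:9201/read"
        read_recent: true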
For dashboards, you can use Promscale as a Prometheus datasource for PromQL-based querying and visualisation, as a Jaeger datasource for querying and visualising traces, and as a PostgreSQL datasource to query both metrics and traces using SQL. If you are interested in visualising data using SQL, we recently published a blog on visualising traces with SQL (https://www.timescale.com/blog/learn-opentelemetry-tracing-w...)
Alerts need to be configured on the Prometheus side; Promscale doesn't support alerting at the moment, but expect native alerting from Promscale in upcoming releases.
If you are interested in evaluating or setting up Promscale, reach out to us in the Timescale community Slack (http://slack.timescale.com/) in the #promscale channel.
I spent about an hour with Rancher Desktop when I was really pissed off about the Docker Desktop licensing change. There were a few things that were a problem for me out of the box (note: really a problem with nerdctl/containerd):
1. nerdctl did not support registry mirrors for image pulls. This is an obvious blocker for some use cases. (Docker's daemon.json equivalent is shown after this list.)
2. With Docker Desktop you can bind container ports to any interface on the host system, including e.g. IP aliases on localhost. That didn't seem to be possible with nerdctl using whatever VM backend Rancher uses (see the docker run example after this list).
2a. I'm not sure how this works today, but with Docker Desktop a `docker pull` can reach a registry available on the host's localhost address (i.e. through an SSH tunnel established on the host). This worked with Rancher, but I believe I had to edit /etc/hosts inside the Rancher-controlled VM to point at a different IP address, whereas with Docker Desktop it just worked.
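To make (1) and (2) concrete, this is the Docker-side behaviour I was comparing against; the mirror URL and the 127.0.0.2 alias are just placeholders:

    # /etc/docker/daemon.json -- registry mirror used for image pulls (point 1)
    {
      "registry-mirrors": ["https://mirror.example.com"]
    }

    # Bind a container port to one specific loopback alias only (point 2)
    docker run --rm -p 127.0.0.2:8080:80 nginx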
I also seem to recall needing to start things manually with Rancher, i.e. just having the app open was not enough for docker/kubectl/nerdctl to be ready to go. But I don't remember at this point.
These are all (I hope) uncommon and weird use cases, but they are the sort of thing that will keep some people using Docker Desktop instead of an alternative. They are, for better or worse, the value add of Docker Desktop.
After seeing that they did not capture the logs: what is the "proper" way of storing said logs? I guess you need a remote log server like Logstash to store them. But what service actually sends the logs from the server to a central storage?
I'm looking into Loki, Graphite, etc., but I'm a bit at a loss as to where to begin.
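From what I've read so far, Loki's answer to the "what ships the logs" question seems to be Promtail, an agent that tails files and pushes them to the central store. A minimal config (illustrative only, from the docs as I understand them; hostnames and paths are placeholders) would look something like:

    server:
      http_listen_port: 9080
    positions:
      filename: /tmp/positions.yaml
    clients:
      - url: http://loki:3100/loki/api/v1/push
    scrape_configs:
      - job_name: system
        static_configs:
          - targets: [localhost]
            labels:
              job: varlogs
              __path__: /var/log/*.log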