
Red Hat Satellite to standardize on PostgreSQL backend - pplonski86
https://www.redhat.com/en/blog/red-hat-satellite-standardize-postgresql-backend
======
bane
Rule #1: Always start with PostgreSQL unless you have a very compelling reason
not to.

~~~
int_19h
Always start with in-memory data structure with straightforward persistence
(i.e. load and save it all at once, using some popular format).

If you need ACID, then start with SQLite.

If you need scalability as well, then PostgreSQL. Most other features aren't
worth the hassle of configuration compared to a single file on disk and a
library to link to.

(But I suspect that most NoSQL apps these days would do just fine with SQLite,
perf-wise. As an industry, we don't have a good sense of scale.)
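
The "load and save it all at once" pattern the parent describes can be sketched in a few lines of Python, assuming JSON as the popular format (the file name and the atomic-rename detail are illustrative, not prescriptive):

```python
import json
import os
import tempfile

STATE_FILE = "app_state.json"  # illustrative name

def load_state(path=STATE_FILE):
    """Load the entire application state from disk, or start fresh."""
    if not os.path.exists(path):
        return {}
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def save_state(state, path=STATE_FILE):
    """Write the whole state atomically: dump to a temp file, then rename."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename, so a crash can't leave a half-written file

state = load_state()
state["visits"] = state.get("visits", 0) + 1
save_state(state)
```

The atomic rename is what keeps the single-file approach from corrupting itself on a crash mid-write; once you need concurrent writers or partial updates, that's the cue to move to SQLite.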

~~~
kevinmgranger
For most languages and frameworks, using Postgres instead of SQLite costs
barely more if you've got the ops skills. I've yet to see an app straddling
this local/web-scale line where SQLite was the better choice or Postgres
wasn't the obvious one.

There isn't some massive cost to using Postgres instead. Is there some desktop
divide I'm missing here?

~~~
mr_overalls
From:
[https://www.sqlite.org/whentouse.html](https://www.sqlite.org/whentouse.html)

Appropriate Uses For SQLite:

SQLite is not directly comparable to client/server SQL database engines such
as MySQL, Oracle, PostgreSQL, or SQL Server since SQLite is trying to solve a
different problem.

Client/server SQL database engines strive to implement a shared repository of
enterprise data. They emphasize scalability, concurrency, centralization, and
control. SQLite strives to provide local data storage for individual
applications and devices. SQLite emphasizes economy, efficiency, reliability,
independence, and simplicity.

SQLite does not compete with client/server databases. SQLite competes with
fopen().

~~~
joshklein
I like the way I recall someone (Richard Hipp?) putting it: SQLite is a file
format.

SQLite is an efficient file format for relational data with excellent tooling.

------
BossingAround
A silly, genuine question: when I'm writing a NodeJS app and I need to store a
JSON object like this in a DB:

{ "a" : "b", "c" : { "ca":"cb", "cd":"ce" }, "d": { ... } }

How do I store it? To me, NoSQL seemed like the choice in the past.

This is not difficult in relational DBs, but it requires rather long SQL
commands and some table design. By comparison, "MongoDB.insert(obj)" seems
like the way to go, as it is much simpler.

How do experienced Node devs solve this issue? Is it simply relational all the
way? Is it Postgres's JSON field..?

~~~
pilif
_> How do I store it? To me, NoSQL seemed like the choice in the past._

If you're using postgres, you could use a column of type jsonb. Postgres comes
with many operators and functions
([https://www.postgresql.org/docs/11/functions-json.html](https://www.postgresql.org/docs/11/functions-json.html))
to query into jsonb typed columns and many of them even allow index usage for
very quick access.
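
As a sketch of what that looks like (table name and data are hypothetical, matching the JSON from the question):

```sql
-- hypothetical table holding the document from the question
create table docs (id bigserial primary key, body jsonb not null);

insert into docs (body)
values ('{"a": "b", "c": {"ca": "cb", "cd": "ce"}}');

-- ->> extracts a top-level field as text; #>> follows a path
select body ->> 'a'      from docs;   -- 'b'
select body #>> '{c,ca}' from docs;   -- 'cb'

-- @> tests containment, and a GIN index makes such queries fast
create index docs_body_idx on docs using gin (body);
select * from docs where body @> '{"a": "b"}';
```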

However, the other option would be to use proper SQL tables and normalization
to store your data. Then you can query very easily for your data and by making
use of a real typed schema, you get free validation of your input and
protection against corrupted data later down the line.

And aggregating your information to produce reports over various documents
becomes very easy too.

Here's the above schema in SQL DDL, Postgres dialect. I'm not sure about
nullability and types of your values, but by making use of database types and
constraints, you can describe the shape of your data much better, which
further helps you prevent invalid data from being stored due to bugs:

    
    
        create table things (
            id bigserial primary key,
            a text not null
        );
    
        create table c_things (
            thing_id bigint not null 
                primary key 
                references things (id) on delete cascade,
            ca text not null,
            cd text not null
        );
    
        create table d_things(
            thing_id bigint not null 
                primary key 
                references things (id) on delete cascade
            -- ...
        );
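
For illustration, storing the example object and reading it back with this schema might look like the following (the literal id value is assumed for brevity; in practice you'd capture it from `returning id`):

```sql
-- hypothetical usage of the tables above
insert into things (a) values ('b');                        -- suppose id = 1
insert into c_things (thing_id, ca, cd) values (1, 'cb', 'ce');

-- reassemble the original object with a join
select t.a, c.ca, c.cd
from things t
join c_things c on c.thing_id = t.id
where t.id = 1;
```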

~~~
ako
Or you can store the jsonb in Postgres and use views to also present it in a
normalized way. See example here:
[https://www.endpoint.com/blog/2016/02/29/converting-json-to-postgresql-values](https://www.endpoint.com/blog/2016/02/29/converting-json-to-postgresql-values)
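
A minimal sketch of that approach (table and column names are my assumptions, not taken from the linked post):

```sql
create table raw_docs (id bigserial primary key, body jsonb not null);

-- expose the nested jsonb fields as ordinary relational columns
create view docs_flat as
select id,
       body ->> 'a'      as a,
       body #>> '{c,ca}' as ca,
       body #>> '{c,cd}' as cd
from raw_docs;
```

Queries against `docs_flat` then look like plain SQL over a normal table, while the underlying storage stays schemaless.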

------
dmoreno
This sentence, when talking about support for old enterprise versions, is a
bit suspicious:

> Satellite will not use newer versions of MongoDB that are licensed under
> SSPL.

So my read is that there is not only an intention to consolidate databases,
but also a desire to avoid the SSPL.

~~~
amyjess
Nothing "suspicious" about it.

Red Hat also explicitly evicted Mongo from RHEL 8 because of the SSPL [0], and
they have a longstanding policy of only shipping OSS.

It's no secret, and they've been very upfront about not shipping anything
SSPL.

[0] Previous HN discussion:
[https://news.ycombinator.com/item?id=18919543](https://news.ycombinator.com/item?id=18919543)

------
lerax
Indeed a sane choice

------
SlowRobotAhead
Does someone have the time to explain what NoSQL Mongo didn't do that
PostgreSQL does?

I guess I don’t really understand the structured SQL vs. NoSQL distinction
when you aren’t accessing the data directly but via a program. I also don’t
understand why they would have supported a NoSQL and an object-relational
database at the same time at any point.

~~~
MBCook
From reading the post it sounds like they want transactions for some of the
functionality they’re trying to build. I guess Mongo still doesn’t have those.

~~~
d4l3k
They recently added multi document transactions though.

[https://docs.mongodb.com/manual/core/transactions/](https://docs.mongodb.com/manual/core/transactions/)

~~~
cutety
This may just be my bias against NoSQL (I’ll qualify this by saying I don’t
believe it should never be used, but for 90% of use cases psql/MySQL/etc. is
likely the better choice these days), but when it comes to choosing a DB
engine when you want ACID transactions, I’d pick the one that was built to
handle ACID transactions from the start, rather than one that just added them
with this large caveat in the middle of the docs:

> In most cases, multi-document transaction incurs a greater performance cost
> over single document writes, and the availability of multi-document
> transaction should not be a replacement for effective schema design. For
> many scenarios, the denormalized data model (embedded documents and arrays)
> will continue to be optimal for your data and use cases. That is, for many
> scenarios, modeling your data appropriately will minimize the need for
> multi-document transactions.

So, to be able to have one of the core features of SQL, you lose the biggest
feature of NoSQL, which is super-fast writes. And they try to remind you that
if you model your data correctly, by creating denormalized documents with
embedded data, you probably won’t need them anyway. Which just makes me cringe
at having to maintain these huge unstructured documents just so you get the
faster writes that come from not worrying about ACID.

Or, you can use Postgres, which was built with ACID from the start, using
technology and design patterns proven over decades, and has support for JSONB
if you need unstructured/document storage that you can also (fairly)
efficiently query.

Maybe I’m just woefully uninformed, but I just can’t imagine a use case for
Mongo unless you’re dealing with Google-level data/traffic, and fortunately in
that case, you have enough money to hire enough people who actually know how
and when to use Mongo effectively, to the point that it’s not just a fancy way
to write to /dev/null.

I’d actually be really interested in some non-Google-scale use cases for
Mongo: where Postgres’ performance was actually an issue, how much Mongo
actually outperforms it, and the trade-offs/issues in switching. Most posts
I’ve seen are of the opposite migration, but I want to see what all the hype
is about for a DB engine (what I’ve always seen as one of the less _sexy_
areas of CS, at least in marketability, compared to ML/AI or programming
languages/compilers).

~~~
cobythedog
...

------
linkmotif
I’ve never enjoyed Mongo for a second, not conceptually, not practically.

But I don’t understand the people who act like Postgres is the end of
everything. SQL never felt right for me either. It’s a fine language for
business analysis, I guess, but is it actually pleasant for development? Do
all of you who so enthusiastically post that it should be the first tool you
reach for really never experience object-relational impedance? Do you really
enjoy modeling like this?

~~~
acdha
Yes: what you’re really doing is documenting your data structures and access
patterns, which is way less work to do consciously than to bolt on ad hoc
later.

A really big thing is normalization, which some people tend to downplay until
they’ve had to write code that recurses complex structures to enforce
consistency or make global changes. Similarly, atomicity and isolation are
really useful characteristics to be able to take for granted, without having
to code around the problem everywhere.

The usual arc I've seen for document stores, ISAMs, etc. is that people say
"this is great" based on the first 20 minutes, and then about a year later
realize that they've spent thousands of lines of code implementing a really
clunky subset of what they'd get out of the box with a SQL database, and that
the believed ease-of-use or performance benefits were far less dramatic than
promised, or even negative.

~~~
linkmotif
Right, Postgres is better than document stores, but is it actually pleasant to
store your data in square tables and then process it to runtime objects from
that? Even with battle tested ORMs, it’s never felt right for me.

~~~
chousuke
I find it very pleasant, but I also tend to avoid ORMs. I can just write SQL
queries to get me the data I need in any form I want, and it's not always
useful to stuff that into objects when I can just process it as is.

~~~
linkmotif
Processing data from format to format is a total drag to me. I just can't do
it.

------
fooblat
I'm so out of date with Satellite server that my first reaction was: Wow it
took this long to get oracle out!

------
peteridah
I am quite surprised that companies still use Red Hat Satellite Server. In
2010 while I was still at Red Hat, deploying and managing it was the mainstay
of the consulting business, and it was based on Oracle RAC. It was to my mind
already legacy software at that time.

~~~
ricklepick
Satellite of today is not Satellite of 2010 by any stretch.

Satellite 6 is based on foreman + katello + pulp

~~~
dralley
And, for anyone who has experience with Satellite 6.0/6.1... it has gotten
much, much, much better since then.

------
AzzieElbab
I am no fan of MongoDB, but I must admit it has gotten a lot better in the
past few years. So, why drop it now?

~~~
nailer
New license that's not OSS (MongoDB Inc saying it is doesn't make it so, the
OSD determines OSS, not some company).

~~~
ricklepick
The upstream project has been pretty vocal about their reasons to switch, and
it's not because of the license. MongoDB just isn't the right tool for their
use case, despite valiant attempts to utilize it.

~~~
VWWHFSfQ
This is very clearly because of licensing.

------
wcchandler
Any guesses on a timeline? Satellite 6.6? Or will it be in the next major
version? 7.0?

~~~
dralley
Pulp 2 will continue using MongoDB until EOL. Pulp 3, which is still in
development, is using Postgres -- but while the platform functionality is
nearly done, the plugins for e.g. RPM and Docker are not yet near feature
parity.

------
alrs
Is there a word for a case of "I told you so" that went on for so long that it
curdled from frustration, to despair, to cynicism, to a realignment of your
understanding of the human project as something barely capable of tying its
own shoes and making it out to the mailbox and back?

~~~
nodesocket
This horse has been beaten to death over and over again here on HN. MongoDB
is garbage. MongoDB doesn't scale. MongoDB is $h!t, say HN users.

It just feels like an echo chamber. Are you running the latest version of
MongoDB in a replica set with journaling enabled and write concern set to one?
MongoDB has worked great for my uses, up to moderate write/read scale. Sure,
if you are running "big data" or enterprise things, it might not be the best
choice, but it's not the steaming pile of horse excrement that some HN users
try to make it.

Second, for those who still insist that MongoDB is crap: what is the best pure
document-store database then? I used to champion RethinkDB, but they failed
and development has basically stopped. You wouldn't build a business on
RethinkDB nowadays, unfortunately.

~~~
metildaa
Modern Mongo still seems to corrupt itself on the Unifi Cloud Controller I
have to deal with if it runs out of disk space or suffers a hard power-off.
Postgres will be usable after disk space is freed, or when booted back up.
This is a really basic reliability issue.

~~~
voltagex_
OT: Ubiquiti seem to be ... not good at software and only relatively good at
hardware. I'm not sure why they're as popular as they are and I'm slightly
ashamed at being suckered in by Troy Hunt's marketing.

Is it just a case of the rest of the home/SOHO WAP/router segment being so
cataclysmically bad that they only have to be a little bit better? (The WAP
is good, but not as fast as the Turris Omnia it replaced.)

~~~
laurentdc
> I'm not sure why they're as popular as they are

For me it's always been price.

Back when we adopted them Ubiquiti UAPs were pretty much the only thing close
to enterprise access points (centralized management, radius support and all
that jazz) but at 1/10th of the price of Cisco/Ruckus/Aruba.

I agree that they're a bit overhyped: we have 200+ deployed and they are not
exactly the most reliable; some devices just disappear from the controller
forever, some early-generation ones would just overheat and die, and CLI
management is non-existent. But hey, for $70 a pop you can just buy a
truckload of them and replace them as needed.

------
hartator
Not sure I understand the MongoDB hate here. Migrations and denormalization
are not something you have to deal with in MongoDB.

~~~
Benjamin_Dobell
You almost certainly still need to worry about migrations.

MongoDB can store data in a schemaless fashion, however, chances are your
application has a data model (schema).

Sure, you _can_ avoid migrations and just tack on more and more backwards
compatibility for old data layouts. But performance suffers, and chances are
that code quality and correctness suffer too.

~~~
__david__
From a certain perspective, adding backwards compatibility for old data
layouts _is_ a migration: a migration that gets run every time the data is
loaded, instead of once during a classic DB migration. This is inefficient in
the long run.
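
A sketch of that "on-read migration" pattern in Python (field names and versioning scheme are invented for illustration, not from the thread):

```python
def migrate_record(doc):
    """Upgrade a document to the current schema version at load time."""
    version = doc.get("schema_version", 1)
    if version < 2:
        # v1 stored a single "name" field; v2 splits it into two fields
        first, _, last = doc.pop("name", "").partition(" ")
        doc["first_name"], doc["last_name"] = first, last
        version = 2
    doc["schema_version"] = version
    return doc

# Every load pays this cost, forever, until the stored data is rewritten
# in place -- which is exactly what a classic one-off migration does.
old = {"name": "Ada Lovelace"}
print(migrate_record(old))
# → {'first_name': 'Ada', 'last_name': 'Lovelace', 'schema_version': 2}
```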

