I think the real advantage of Badger is that it's not as sophisticated as Redis, i.e. you can have Redis-like functionality compiled into your application, so there's less faffing about setting up another daemon / cloud micro-service inside your NAT / VPC / whatever.
If you want persistence, then I'd recommend persisting to disk. While I've not had this fun with Redis, I've written code that took out an entire Cassandra ring. Had everything been only in memory, it would not have been pretty. Just because something is distributed doesn't mean it's guaranteed never to go completely down.
(That said, if you're using Redis as an in-memory cache, this is a potentially acceptable tradeoff.)
An application developer typically wouldn't choose Badger directly, but would instead pick a DBMS such as Redis.
A database engineer would use Badger to build a DBMS that the application engineer could then use. If the database engineer so chooses, they could expose a Redis-compatible API on top of it.
Cassandra, for example, has its own storage engine that's responsible for writing bits to disk.
I know RocksDB is LSM-based and was built by Facebook to address write amplification on SSDs.
Badger, RocksDB, LMDB, etc. are storage engines. A process uses a storage engine to read and write data (note: the storage engine may support persistent memory, volatile memory, or both).
A database management system (DBMS) is a higher-level concept that often involves multiple processes, either on a single server or distributed across multiple servers. Simply stated, each process within a DBMS uses the storage engine to read from and write to memory (persistent or volatile).
It's important to note that storage engines are a specialized area and require a different skill set from writing a DBMS. It's a big deal when other people write high-quality storage engines, because it makes writing a DBMS much easier.