I think it's a nice tool for deployment and making reproducible builds, but a lot of other things become harder through Docker - like managing a database's data, and communication between local processes.
Maybe the tooling has improved in the last few years, but I've gone back to the raw unix coalface.
It doesn't have to be this way. If you use shared folders to persist data on the host, you are in no worse position than you would be with a natively installed app, persistence-wise.
I think Docker's focus on orchestration (which makes business sense for them) is the reason why running DBs in containers got a bad reputation. But really, if you use dirs shared with the host and view containers as processes, you can use them for DBs too.
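Concretely, something like this (the host path `~/pgdata` and the `postgres:16` tag are just example choices; `/var/lib/postgresql/data` is where the official Postgres image keeps its data):

```shell
# Run Postgres with its data directory bind-mounted to the host,
# so the container itself is disposable.
docker run -d --name pg \
  -v ~/pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  postgres:16

# Kill and recreate the container; the data in ~/pgdata survives.
docker rm -f pg
docker run -d --name pg \
  -v ~/pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  postgres:16
```

Treated this way, the container is just a process with a pinned runtime, and the DB's state lives on the host like it would with a native install.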
IPC with containers OTOH forces you to architect the system as a bunch of microservices, which is usually not a bad idea either.