I've been using dbt to achieve the same goals and it's honestly been bliss: macros, packages, cross-referencing views, unit tests, documentation generation, snapshotting.
And I don't even swear as much at jinja as I thought I would.
dbt allows macros to resolve arguments at "evaluation" time by nesting queries (for example, get a list of DISTINCT values so that they can then be used as column names), which is really useful. I have a particularly nasty database schema (thanks, wordpress) to deal with, and I now have plenty of macros that do the dirty stuff for me.
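For the curious, the trick looks roughly like this (a sketch with invented model and column names; it assumes the dbt-utils package, whose get_column_values queries the warehouse during compilation):

    -- models/order_status_pivot.sql
    -- Fetch the distinct statuses at compile time, then emit
    -- one aggregate column per status. Assumes status values
    -- are valid identifiers.
    {% set statuses = dbt_utils.get_column_values(
        table=ref('stg_orders'), column='status') %}

    select
        order_date,
        {% for s in statuses %}
        sum(case when status = '{{ s }}' then 1 else 0 end)
            as {{ s }}_orders{{ "," if not loop.last else "" }}
        {% endfor %}
    from {{ ref('stg_orders') }}
    group by order_date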
> Collaborate on data models, version them, and test and document your queries before safely deploying them to production, with monitoring and visibility.
I've been using a bit of dbt to smooth some of the edges of an existing SQL Server analytics solution.
dbt's biggest strength is that you can expand it incrementally to fill the gaps in your existing solution: some source data-quality tests here, a few automatic history-of-changes tables powered by dbt snapshot there. With enough context you can then generate push-button documentation and data-lineage graphs, which is usually far more documentation than most companies' BI teams maintain natively.
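The snapshot piece in particular costs almost nothing to adopt; a minimal sketch, with invented table and column names:

    -- snapshots/customers_snapshot.sql
    {% snapshot customers_snapshot %}
    {{ config(
        target_schema='snapshots',
        unique_key='customer_id',
        strategy='timestamp',
        updated_at='modified_date'
    ) }}
    select * from {{ source('crm', 'customers') }}
    {% endsnapshot %}

Run `dbt snapshot` on a schedule and you get a slowly-changing-dimension-style history table without writing any merge logic yourself.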
The downsides I have found so far (subjectively): it's wholly geared toward batch operations; it seems to expect to export to only one data warehouse, so it lacks a "many-to-many", data-mesh-like mode; and it wasn't easy to hack the documentation page to describe things like "the ETL process that populates this table". It also doesn't seem to have a way to define or create source tables if they don't exist (so it won't black-start a project for you; it's really only for existing data), and if your code is already checked in to SSDT you may have to move some code around.
...But I have complex problems due to $CURRENT_JOB's existing code base, organizational structure and business needs. (And yes, I am really liking my job.)
My needs aside, dbt is refreshingly easy to get started with, it provides value very fast relative to time invested, and you could do far worse than to spend a day or a week seeing what small annoyances dbt could help you with.
I especially enjoy being able to write "unit tests". Since I tend to mess up my SQL, and a code change can taint the data in ways I can often anticipate (null values, for example), I can add some pretty high-level checks, like "the sum of all order line items should match the sum of the prices of all line-item products", built out of two separate data sources, and make sure my analytics stay in range. All checked into git.
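In dbt terms that's a "singular test": a SQL file under tests/ that selects the rows violating the rule, and the build fails if any come back. Roughly (table and column names invented):

    -- tests/assert_order_totals_reconcile.sql
    -- Fails the build if any order's line-item total drifts
    -- from its product-price total by more than a cent.
    with line_items as (
        select order_id, sum(line_total) as total
        from {{ ref('order_line_items') }}
        group by order_id
    ),
    products as (
        select order_id, sum(unit_price * quantity) as total
        from {{ ref('order_line_item_products') }}
        group by order_id
    )
    select li.order_id
    from line_items li
    join products p using (order_id)
    where abs(li.total - p.total) > 0.01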
I do agree that the iterative workflow could be a bit smoother, and the doc side of things is a bit rough. I expected the project to gain a higher velocity than it seems to have right now.
I just started incrementally throwing views at our existing schema, mostly lifting SQL out of the code base and committing it to the dbt repository. From there, I started heavily annotating the columns, and starting to write unit tests for the views.
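The column annotations and the basic tests both live in the same YAML file, something like this (names invented):

    # models/schema.yml
    version: 2
    models:
      - name: orders
        description: Lifted straight out of the app code; one row per order.
        columns:
          - name: order_id
            description: Primary key.
            tests:
              - unique
              - not_null
          - name: status
            tests:
              - accepted_values:
                  values: ['pending', 'shipped', 'cancelled']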
That exercise uncovered a ton of hidden assumptions and dirty data, which I either cleaned up or manually flagged in the views (as in: skip the orders listed in this CSV file, because they stem from the period of time where shipstation blablabla).
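That kind of CSV exclusion list fits dbt's seed mechanism: the file is checked in under seeds/, loaded with `dbt seed`, and the view anti-joins against it. A sketch, with invented names:

    -- models/orders_clean.sql
    -- Drop any order that appears in the exclusion seed.
    select o.*
    from {{ ref('orders') }} o
    left join {{ ref('excluded_orders') }} x
        on o.order_id = x.order_id
    where x.order_id is null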
Since then I'm continuously adding to it, refactoring the macros as if they were code, and it really feels in line with the normal workflow of a developer.