Off-page types (VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), and the deprecated TEXT/NTEXT/IMAGE) can have a significant impact in SQL Server, and presumably in other DBMSs too.
Though one key difference is that SQL Server doesn't compress the off-page parts, as this article states Postgres does. In fact, even if the table is set to compress using either the row or page compression option, off-page data is not compressed.
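You can see this for yourself; a minimal sketch (table/column names are made up for illustration):

```sql
-- Hypothetical table with an off-page (LOB) column
CREATE TABLE dbo.Docs (
    Id   INT IDENTITY PRIMARY KEY,
    Body NVARCHAR(MAX)   -- pushed off-page once a row exceeds the ~8KB page limit
);

-- Enable page compression; this only affects in-row data
ALTER TABLE dbo.Docs REBUILD WITH (DATA_COMPRESSION = PAGE);

-- The LOB_DATA allocation unit still shows its uncompressed page count
SELECT alloc_unit_type_desc, page_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID('dbo.Docs'), NULL, NULL, 'DETAILED');
```

Only the IN_ROW_DATA allocation unit shrinks; LOB_DATA is left alone.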
The way PG handles this is better than SQL Server was back when I used it primarily (2008 R2). The compression is not as good as it could be, but it is quite helpful regardless. It'll be nice when the work on "pluggable compression" [1] (or similar) is committed, allowing the use of better compression algorithms like zstd.
IIRC row/page compression was Enterprise/Developer-only back then (I think it arrived in 2008 and only came to the other editions in 2016 SP1?), so on Standard you had no compression at all, at least not without enabling CLR and writing your own modules to handle it.
The latest versions still don't compress off-page data at all (the one exception being 2019+'s UTF-8 collation support, which acts as a form of compression for mostly-ASCII text compared to the fixed two-bytes-per-character string types), though there are workarounds using triggers, backing tables, and the COMPRESS/DECOMPRESS functions if you really need LOB compression and don't need things like full-text search.
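A minimal sketch of the COMPRESS/DECOMPRESS route (those are the built-in GZIP functions from 2016 onward; the table and names here are illustrative):

```sql
-- Store the LOB pre-compressed as VARBINARY(MAX)
CREATE TABLE dbo.DocsCompressed (
    Id   INT IDENTITY PRIMARY KEY,
    Body VARBINARY(MAX)   -- holds COMPRESS()ed gzip bytes
);

INSERT INTO dbo.DocsCompressed (Body)
VALUES (COMPRESS(N'some large document text...'));

-- Read it back; CAST restores the original NVARCHAR(MAX)
SELECT Id, CAST(DECOMPRESS(Body) AS NVARCHAR(MAX)) AS Body
FROM dbo.DocsCompressed;
```

The obvious trade-off is that the column is now opaque binary: no LIKE, no full-text indexing, and every reader has to know to DECOMPRESS (hence the trigger/backing-table patterns to hide it behind a view).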