It needs to happen in-browser because of the modern prevalence of SSL. I think only Firefox has APIs powerful enough to intercept traffic at this level, and even those APIs are obsolete, so I think it really needs to be integrated straight into a browser fork. Chrome and Firefox WebExtensions have request interception, but I'm not sure that's powerful enough for this kind of thing.
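(For reference: Firefox's WebExtension webRequest API can in fact get at decrypted response bodies via filterResponseData, which is roughly the level of access this would need; Chrome has no equivalent. A minimal sketch, assuming the standard `browser` extension global and the webRequest/webRequestBlocking permissions:

    // Firefox-only: filterResponseData exposes the decrypted response body
    // as a stream the extension can read and rewrite. Chrome's webRequest
    // API has no equivalent hook.
    browser.webRequest.onBeforeRequest.addListener(
      (details) => {
        const filter = browser.webRequest.filterResponseData(details.requestId);
        const chunks: ArrayBuffer[] = [];

        // Buffer the body as it streams in, post-TLS.
        filter.ondata = (event) => {
          chunks.push(event.data);
        };

        // Pass it through unchanged once complete; a caching scheme would
        // hash or substitute content here instead.
        filter.onstop = () => {
          for (const chunk of chunks) {
            filter.write(chunk);
          }
          filter.close();
        };
      },
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

Note this only sees bodies after Firefox has already done the TLS handshake and certificate checks, so it works within the security model rather than around it.)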
(Also, the mere fact that such a scheme would have to tamper with the mechanics of a fundamental browser security mechanism should be enough to indicate how difficult it would be to implement safely!)
Or an alternate model / protocol for content retrieval.
Consider that SSL is largely used for connection encryption, with site authentication coming along as a side effect.
If you can still validate the site, and rely on a hash of the content to detect changes, then you're starting to get toward a cacheable, secure, authenticated system.
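Subresource Integrity is an existing small-scale version of this idea: the page pins a content hash, and the bytes can then safely come from any cache or mirror. A sketch of the verification step using Web Crypto (the URL and expected digest are placeholders supplied by whatever pins the hash):

    // Fetch a resource and verify it against a known SHA-256 digest, so the
    // bytes could come from any cache or mirror without trusting the source.
    async function fetchVerified(
      url: string,
      expectedHexDigest: string
    ): Promise<ArrayBuffer> {
      const response = await fetch(url);
      const body = await response.arrayBuffer();

      // Hash the received bytes with Web Crypto and hex-encode the digest.
      const digest = await crypto.subtle.digest("SHA-256", body);
      const hex = Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");

      if (hex !== expectedHexDigest) {
        throw new Error(`content hash mismatch for ${url}`);
      }
      return body;
    }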
It's not sufficient, due to certificate pinning, and it also completely short-circuits any browser-based security policies keyed on the certificate or on options advertised in the protocol, like SPDY. That would be the worst possible outcome, which is why any correct solution (i.e. one that doesn't break security or functionality) would be hard to reach.
It can work in multiple ways, depending on your goals. Keep in mind, Squid is infrastructure software, not end-user software. There is no one configuration; it depends on what you want to achieve and your environment.
That said, the web site[1] offers great docs with lots of config samples to start from. If you're starting from scratch, take a look at CARP configuration. Squid also speaks ICP and HTCP.
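For a concrete starting point, a CARP setup in squid.conf looks roughly like this: a front-end Squid hashes each request URL across a pool of parent caches, so a given URL always lands on the same parent. The hostnames and ports here are invented; check the docs for the full cache_peer option list.

    # Front-end squid.conf sketch; hostnames and ports are placeholders.
    http_port 3128

    # CARP parents: requests are partitioned across these caches by URL
    # hash, so each URL is consistently fetched from the same parent.
    cache_peer parent1.example.com parent 3128 0 carp no-query
    cache_peer parent2.example.com parent 3128 0 carp no-query
    cache_peer parent3.example.com parent 3128 0 carp no-query

    # Never go direct to origin servers; always forward through a parent.
    never_direct allow all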
Squid is one of the rarely-sung heroes of web content delivery.