Sadly, even though this is an obvious concept and trivial to implement, it took over 20 years after the web launched for SRI to reach most browsers. The cost to society of thousands of copies of the same commonly used files (like jQuery) being hosted locally on countless servers, rather than one centrally hosted version already cached from previously visited sites, is staggering to contemplate. I'd really like to know who was behind the holdup on deploying SRI.
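For reference, the integrity value SRI uses is just a base64-encoded digest (SHA-384 is the common choice, with SHA-256 and SHA-512 also allowed) prefixed with the algorithm name. A quick sketch of computing one in Python; the script contents here are made up for illustration:

```python
import base64
import hashlib

def sri_hash(data: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value like 'sha384-...'."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Hypothetical local copy of the script you want to pin.
script = b"console.log('hello');\n"
print(sri_hash(script))
```

The resulting value goes in the script tag's integrity attribute, e.g. `<script src="https://cdn.example/lib.js" integrity="sha384-..." crossorigin="anonymous"></script>`, and the browser refuses to run the file if the digest doesn't match.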
If we'd had something like SRI from the start, we could have linked to resources by their hash instead of their URL (more like how IPFS works). There's a name for this concept that eludes me, and also a great video explaining both its potential and the difficulties it raises around security and HTTPS. The short of it is that if routers accepted hashes as well as URLs, we could ask for all data matching a given hash and download that file (or its pieces) from the closest cache(s). So instead of linking to jQuery at https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.mi... we could just ask the router for the file with SHA-256 hash 160a426ff2894252cd7cebbdd6d6b7da8fcd319c65b70468f10b6690c45d02ef, and it would return the contents regardless of where they came from, including the browser's own cache if it already had that file (I just used https://hash.online-convert.com/sha256-generator, but there would be a better standard for this).
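The core idea can be sketched in a few lines: store blobs keyed by their hash, so identical content fetched from any origin is deduplicated, and the key both names and authenticates the data. This is a toy illustration, not any real protocol; the `ContentStore` name and payload are invented here:

```python
import hashlib
from typing import Optional

class ContentStore:
    """Toy content-addressable cache: blobs are keyed by their
    SHA-256 hex digest, so the same bytes are stored only once
    no matter which URL they were originally fetched from."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> Optional[bytes]:
        data = self._blobs.get(key)
        # Verify on the way out: the hash is also the integrity check,
        # so it doesn't matter who served the bytes.
        if data is not None and hashlib.sha256(data).hexdigest() != key:
            return None
        return data

store = ContentStore()
key = store.put(b"/* jquery-like payload */")
assert store.get(key) == b"/* jquery-like payload */"
```

A router or browser cache working this way could answer a request for a hash from anywhere it finds matching bytes, since a mismatched blob is rejected by the same check that names it.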
Anyway, hope this helps and sorry for any confusion.
Google Analytics and reCAPTCHA, for example, aren't versioned. Deploying SRI on them will just break your site when they update the script.