Chevron had so much seismic data they required (from memory) six robot tape libraries. They were ganged together so the picker robots could hand tapes between cabinets in case all drives in a given cabinet were in use. It was cool as hell to watch the camera mounted right above the picker flying around this dark wall of tapes to go grab one.
One of Shell's Quality training videos was about The Guy Who Lost The Tape; seems a seismic data tape was mislabeled and lost, causing Shell issues with bidding on a lease. Their Deming-style quality training was all about preventing that sort of thing happening again. I dare say their data was more valuable in total than the hardware.
Offshore deepwater (~2500ft+) non-seismic surveys might cost 6-7 figures per day to operate, and might fill up a hard drive every 1-3 days.
Depending on how many drives' worth of data fit on a tape, the raw data could get very expensive, very quickly, even before it's been processed, analyzed, etc.
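To make the scale concrete, here's a back-of-the-envelope sketch in Python. Every number is an illustrative assumption picked from inside the ranges above (day rate, days per drive, drives per tape), not a figure from any real survey.

```python
# Rough cost of the raw data sitting on one tape, using illustrative numbers
# within the ranges mentioned above. All three inputs are assumptions.
day_rate_usd = 1_000_000      # "6-7 figures per day"
days_per_drive = 2            # "a hard drive every 1-3 days"
drives_per_tape = 5           # assumed: how many drives' worth fits on one tape

cost_per_drive = day_rate_usd * days_per_drive
cost_per_tape = cost_per_drive * drives_per_tape

print(f"survey cost behind one drive of data: ${cost_per_drive:,}")
print(f"survey cost behind one tape of data:  ${cost_per_tape:,}")
# With these assumptions, one mislabeled tape represents eight figures of
# survey time, before any processing or interpretation.
```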
I used to work with a guy who told me that the reason BP bought Amoco, and not the other way around, is that years before, the Amoco team misread the seismic map (not the chart), and bid on the wrong piece of land. BP got the other piece, and the difference was big enough that within a decade, one bought the other.
THAT is how much the data is worth.
But my experience is mostly not-seismic, so I'm not sure how the survey parameters would be adjusted for a revisit.
Data and systems analysis at scale is definitely not new to them.
Those machines were pretty powerful for those days, almost at supercomputer level. One cost more than a year's salary for me back then.
But Oil has always loved computers.
My (in my opinion failed) HonsBSc project was on signal analysis of GC-MS (gas chromatography coupled to mass spectrometry) data.
If I had to do it again in 2019, machine learning would be a much more pertinent focus. Back then, though, without today's computing power and the ubiquitous programming libraries for that purpose, there were other ways of approaching such data (like wavelets).
Wavelet transforms were invented quite a while ago, but I think seismic data analysts were some of the first to really investigate the applications of that field. The other application is compression (and loading over an internet connection).
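Not from the original project, but a minimal sketch of the idea in Python, assuming the PyWavelets (pywt) library; the synthetic signal, wavelet choice ('db4'), decomposition level, and threshold rule are all illustrative assumptions, not values from any real GC-MS or seismic workflow.

```python
# Minimal sketch of wavelet-based denoising on a 1-D signal using PyWavelets.
# The synthetic "chromatogram" stands in for a GC-MS trace or seismic record.
import numpy as np
import pywt

# Synthetic signal: a couple of peaks plus white noise.
t = np.linspace(0, 1, 1024)
signal = (np.exp(-((t - 0.3) / 0.01) ** 2)
          + 0.6 * np.exp(-((t - 0.7) / 0.02) ** 2))
sigma = 0.05
noisy = signal + sigma * np.random.randn(t.size)

# Multi-level discrete wavelet decomposition.
coeffs = pywt.wavedec(noisy, 'db4', level=5)

# Soft-threshold the detail coefficients: small coefficients are mostly noise,
# so shrinking them toward zero denoises the signal. Threshold here is the
# "universal" rule, assuming the noise sigma is known.
threshold = sigma * np.sqrt(2 * np.log(noisy.size))
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, threshold, mode='soft') for c in coeffs[1:]
]

# Reconstruct from the thresholded coefficients.
denoised = pywt.waverec(denoised_coeffs, 'db4')[:t.size]

kept = sum(int(np.count_nonzero(c)) for c in denoised_coeffs)
total = sum(c.size for c in coeffs)
print(f"nonzero coefficients kept: {kept}/{total}")
print(f"RMS error vs clean signal: {np.sqrt(np.mean((denoised - signal) ** 2)):.4f}")
```

The same decompose/threshold/reconstruct pattern is what makes wavelets attractive for compression and for loading over a slow connection: most coefficients end up near zero and can be discarded or sent last.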
Former energy VC with the Norwegian state energy co here. Two of the companies we invested in were HPC related: one developed node controller solutions for parallel processing, the other a platform to analyze massive amounts of subsurface data more quickly.
When a single exploratory well can cost as much as some IPOs, in some cases with a <10% expected chance of success, there's plenty of love for supercomputing in this industry.
The Prize: The Epic Quest for Oil, Money & Power
So it's more a book to persuade (feelings) than inform (facts) - no wonder they aren't mentioned.
Know what you're reading.
And I know the passage you cite from his bio is not related to the book I cited.
Basically, I can see that they needed supercomputers in the '80s. But I might have assumed that their computing needs wouldn't keep scaling with supply, so that by now just renting some GPUs from Amazon might be enough.