Any more supervision? Did the way development and deployments work change? I can imagine when you're publicly traded the higher ups might suddenly care much more about being on the safe side of things and try to enforce stricter rules.
I was at Yelp during the lead-up to the IPO and left about 15 months after it. My title was Director of Systems, and I managed two teams: the engineering (production website) portion of operations and a team that did internal/office IT. There was another group, which I believe operated under the CFO, that handled the ERP and accounting systems ("Business Solutions"); I often worked directly with the Director of that team on a number of projects, not just IPO/SOX-related stuff. They were intimately involved with a lot of the work leading up to the IPO, and I was brought in to answer engineering-org-specific questions (other engineering managers were brought in as well, if their teams did anything with money or business/metrics reporting). This is a rough description of my perspective (which is going on three years old now), so YMMV. I'm not going to describe anything that shouldn't be part of a standard audit, though.
SOX is about auditing, reporting, and verifiability around the financial aspects of the company. The auditors don't care, for example, how the search engine on your website works, or whether your iPhone app uses Objective-C or Swift. But anything that is even tangentially related to money, or that feeds into reports about finances, is examined in grave detail. That is, they don't care about your web server logs unless those logs are used for billing purposes (or for verifying billing). They do care who can access the data, who can change it, what audit logs are available/collected when it is accessed, what the backups look like, and how much of the process is automated. Standard due-diligence stuff.
That being said, questions are asked about everything, and anything that isn't documented needs to be. Eventually the focus narrows as they whittle down which areas are finance related and which they need to concentrate on. Then someone reviews it and asks more questions. For months. The firm that did the auditing had a team holed up in a conference room for weeks at a time. People who are, for lack of a better term, "document experts" come in, hand-hold you, and ask you to fill in the details on a somewhat off-the-shelf document that describes an extremely generic process that, it is assumed, everyone and their dog's software company follows. You have to document your actual processes, not necessarily implement things exactly as (implicitly) prescribed.
This part was especially arduous, because the finance-auditing industry doesn't seem to be fully aware of modern tools that we take for granted. Like git (which provides above and beyond the level of auditing and historical change management that SOX seems to require, all searchable and reportable immediately). Or automated deployment (necessary with any group of engineers beyond a handful, as we all know). Or testing. Or SSH key-based authentication (I swear, at one point I thought I was going to have to explain the math behind public-key cryptography). Or aggregated system logs. Or isolated reporting/auditing/monitoring systems. Or doing multiple code deployments per day. This stuff should be bog standard at any Valley software company of any decent size. Or maybe they only seem unaware of it because of the level of detail and repetition of the questions, and they're just making sure you know it and have it documented to the nines.
On a number of occasions I was asked to provide a report of all the changes made to a certain part of the code base (they pretty much looked at a directory listing and handpicked anything that indicated finance-related code, based on file names or what someone said a part of the code did). I said I'd have that within an hour. I spent most of the time reading the git-log(1) man page to figure out how to format what they wanted in a way they could load into Excel. They didn't believe it could be turned around so fast; I think they were expecting that boxes of paperwork would have to be gone through to find who wrote what code and when, who approved it for deployment, and who else looked at it to verify it. All of this was available via git-log, some deployment-system logs, git tags, and the output of the automated testing system. Then they'd interview the members of engineering who appeared in these reports to talk about their aspects of the code.
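A rough sketch of that kind of git-log report (the throwaway repo, commit, and field choices here are made-up stand-ins so the example runs anywhere, not the actual report):

```shell
#!/bin/sh
# Sketch: export commit history in a form an auditor can load into Excel.
# The temp repo and "billing" commit below are illustrative stand-ins.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "billing: fix invoice rounding"

# Tab-separated output (%x09) imports into Excel more cleanly than CSV,
# since commit subjects often contain commas.
# %h=abbreviated hash  %ad=author date  %an=author name  %s=subject
git log --date=short --pretty=format:'%h%x09%ad%x09%an%x09%s' > changes.tsv
cat changes.tsv
```

In a real repo you'd append `-- path/to/finance/code` to limit the log to the handpicked directories, and add fields like `%ae` (author email) as needed.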
Once the initial phases were over and the fresh-out-of-school discovery and paper-collector people had done their job, it was mainly a lot of what amounted to casual conversations. Periodically I'd end up talking to someone who had dealt with tech companies before, so automated testing and source control wouldn't need to be explained (again).
There weren't that many changes we had to make, at least engineering-wise; I think that's because we already had a pretty solid system in place. If Best Practices are your SOP, then I don't think it ends up being that intrusive. One thing that does change is that you gain the ability to say "We do this for SOX compliance".
As for whether there was any more supervision: I think it was valuable because it exposed our (engineering) processes to a wider (internal) audience; just by meeting with the auditors about your area, you get to see the level of detail required for something traditionally considered mundane. You may consider that report you're generating to be a one-off, but it turns out some higher-ups look at it, and it really needs to be fully productionized and documented. There was an intent to keep the IPO and auditing processes from disrupting the engineering org as a whole.

We had to be more explicit about who was allowed to change certain parts of the code and who needed to review and approve certain changes going out. I say "be more explicit" because we were already doing something like 95% of that ("yes, the project manager and tech manager approve and schedule that set of changes", "yes, only this set of people has access to that sensitive system", "yes, the accounts of all exited employees have been disabled"). I think there were maybe a handful of cases where research needed to be done to find out how something had happened, and what we were doing was sufficient already; it was just a matter of documenting it and shoring it up.

I was sure to include the method (scripts, command lines, ldapsearch strings) I used to generate any reports they asked for (lists of employees, engineers, engineering managers, Active Directory accounts, git logs), so during later audits they could just say "run this series of commands you ran last time". This came in handy during the entire process.
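The "run the same commands you ran last time" idea can be sketched as a tiny wrapper that records exactly how each report was produced (the script, manifest name, and sample report below are hypothetical, not what we actually ran):

```shell
#!/bin/sh
# Sketch: log the exact command behind every generated report, so a later
# audit can replay it verbatim. All names and paths here are hypothetical.
set -e
cd "$(mktemp -d)"

run_report() {
    out=$1; shift
    # Record the date, the output file, and the full command line.
    printf '%s\t%s\t%s\n' "$(date -u +%Y-%m-%d)" "$out" "$*" >> report_manifest.tsv
    "$@" > "$out"
}

# Stand-in for a real report command (in practice, something like an
# ldapsearch against Active Directory, or a formatted git log).
run_report engineers.txt printf 'alice\nbob\n'
cat report_manifest.tsv
```

The point is only that the report and the command that produced it get filed together, so "regenerate last quarter's report" becomes a copy-paste rather than archaeology.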
There are periodic audits after the fact that verify you're actually performing the processes as they are documented. This may be the hardest thing to get used to, since it may mean you have to be less agile and can't refactor your processes as easily as you could before. But usually you want to change finance-related things slowly, if at all, so this shouldn't be that big of a deal. I think a lot of the changes had already occurred on the way to becoming "a big company", in the years leading up to the IPO.