Exactly: I have to provision for peak throughput on Spanner. Average throughput is much lower than peak, so I'm doubtful I'd see savings on Spanner.
(But I bet that Spanner is much easier than DynamoDB to develop with...)
You can scale Spanner up/down based on demand, although there's a lag when doing so.
I built a system that relies on a high-performance database and tested it with both AWS DynamoDB and Google Cloud Spanner (see disclaimer below), and I was able to scale Google Cloud Spanner much higher than AWS DynamoDB.
DynamoDB is limited to 1,000 WRUs per node, and there isn't an obvious way to get more than 100 nodes per table, so you're capped at 100,000 WRUs per table (= 102,400,000 bytes/sec ≈ 98 MiB/sec ≈ 781 Mib/sec) -- even if you reserve more than 100,000 WRUs of capacity for the table. The obvious workaround would be to shard the data across multiple tables, but that would have made the software harder to use.
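The back-of-the-envelope math above (assuming 1 WRU covers a write of up to 1 KiB, which is DynamoDB's documented unit) works out like this:

```python
# Per-table write ceiling implied by the limits described above.
wrus_per_node = 1000          # observed per-node WRU limit
max_nodes_per_table = 100     # no obvious way to get more nodes per table
max_wrus = wrus_per_node * max_nodes_per_table   # 100,000 WRUs/sec

bytes_per_sec = max_wrus * 1024        # 1 WRU = up to 1 KiB written
mib_per_sec = bytes_per_sec / 2**20    # ~97.7 MiB/sec
mib_bits_per_sec = mib_per_sec * 8     # ~781 Mib/sec
print(f"{bytes_per_sec} bytes/s, {mib_per_sec:.1f} MiB/s")
```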
Google Cloud Spanner was able to sustain far more traffic than that ceiling (though the exact figure isn't yet public), and it also supported much larger transactions (100 MiB, versus DynamoDB's 25 items -- now 100 -- at 400 KiB each, ~10 MiB), which was a bonus.
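For reference, the transaction-size comparison is just multiplication (the 25-vs-100 item caps and 400 KiB max item size are DynamoDB's documented TransactWriteItems limits; the 100 MiB Spanner figure is the one quoted above):

```python
# Rough maximum transaction sizes from the limits mentioned above.
max_item_kib = 400                         # DynamoDB max item size
old_item_cap, new_item_cap = 25, 100       # TransactWriteItems item caps, then and now

old_txn_mib = old_item_cap * max_item_kib / 1024   # ~9.8 MiB
new_txn_mib = new_item_cap * max_item_kib / 1024   # ~39 MiB
spanner_txn_mib = 100                              # per the comparison above
```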
Disclaimer: The work was funded by a former Google CEO, and I worked with the Google Spanner team on setting it up. While I am a former AWS employee, I didn't work with AWS on the DynamoDB side of it, beyond normal quota adjustments.