They just made a very fast, low-latency, distributed mesh CDN where the customer pays Google for the connectivity AND foots the electrical bill.
Gigabit to your house won't remove latency, other than perhaps to apps hosted by Google (including Google itself, obviously).
As for electricity, the cost of packaging up and sending drive units to the customer, plus upkeep, would dwarf the cost of electricity over the total lifetime of the drives.
I don't think this really makes sense.
Having some of the mirrors you use for this be in people's homes is an interesting twist that I had not thought of.
The saving compared with putting it at a local switching office is that you don't need to buy a set of special machines and hard drives to sit at that office. Instead, you're leveraging underutilized machines that people have already paid you for, sitting in their houses.
As far as the rest of the network is concerned, the traffic doesn't need to go to them so they are happy.
Now, this could also be done in the switch closet, you're right. However, since that traffic would either have to go through the uplink, or every switch would need a port dedicated to a cache network/box, it would start getting expensive at switching points. Each would start looking like a mini (micro? nano?) data center. At that point, you could just eat that cost, or ask "what alternatives cost the same or less in capex and opex?" Perhaps with Google's network-fu, they have solved similar problems in data centers already, and said "we can use our caching/routing stuff here, put a small capex increase into each customer box, which we need no matter what, decrease switching-point capex, and since it is a simpler network, reduce opex too".
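To make that capex trade-off concrete, here's a rough back-of-envelope sketch. Every number in it is invented purely for illustration; the point is only the shape of the comparison (a caching box and dedicated port per switching point vs. a small increment on each customer box Google ships anyway), not the actual figures.

```python
# Hypothetical back-of-envelope comparison of where to put the cache capex.
# All numbers are made up for illustration; plug in real ones to redo the math.

def switching_point_capex(num_switch_points, cache_box_cost, dedicated_port_cost):
    """Cost of a caching box plus a dedicated switch port at every switching point."""
    return num_switch_points * (cache_box_cost + dedicated_port_cost)

def customer_box_capex(num_customers, extra_cost_per_box):
    """Cost of adding a bit of cache storage to each customer box."""
    return num_customers * extra_cost_per_box

if __name__ == "__main__":
    # Hypothetical deployment: 500 switching points serving 100,000 customers.
    at_switch = switching_point_capex(num_switch_points=500,
                                      cache_box_cost=5_000,
                                      dedicated_port_cost=500)
    at_customer = customer_box_capex(num_customers=100_000,
                                     extra_cost_per_box=20)
    print(f"Cache at switching points: ${at_switch:,}")    # $2,750,000
    print(f"Cache in customer boxes:   ${at_customer:,}")  # $2,000,000
```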
Essentially, it is a similar problem to the one bittorrent solves, just at a different scale/locality. It also starts to look like solutions some vendors/ISPs looked into at one point for bittorrent - instead of stopping bittorrent, keep a map of local peers seeding segments and reroute requests for those segments to the local network rather than across the uplink.
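A toy sketch of what that rerouting could look like, assuming a local router keeps a map of which customer boxes currently hold which segments. The class, its announce/route_request interface, and all the names are hypothetical, just to show the idea of answering locally and only falling back to the uplink:

```python
from collections import defaultdict

class LocalSegmentMap:
    """Map of segment -> local boxes holding it; prefer local delivery over the uplink."""

    def __init__(self):
        self.holders = defaultdict(set)  # segment id -> set of local box ids

    def announce(self, box_id, segment_id):
        """A customer box tells the local router which segment it now holds."""
        self.holders[segment_id].add(box_id)

    def route_request(self, segment_id):
        """Serve from a local holder if one exists; otherwise cross the uplink."""
        local = self.holders.get(segment_id)
        if local:
            return ("local", next(iter(local)))
        return ("uplink", "data-center")

# Example: two neighbors already hold segment 42, a third box asks for it.
segment_map = LocalSegmentMap()
segment_map.announce("box-a", 42)
segment_map.announce("box-b", 42)
print(segment_map.route_request(42))   # ('local', 'box-a') or ('local', 'box-b')
print(segment_map.route_request(99))   # ('uplink', 'data-center')
```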
Assume a decent switch with a full mesh backplane. Also assume real switches will be used with real numbers, not my example ones - the analysis will be the same, but the numbers will of course be different.
The fact that makes it work is that not all routes through Google's network are created equal.
Routes that go to and from data centers go a longer distance, through more pieces of equipment, and include busy backbones that you do not want to get overloaded. Routes that stay in a local neighborhood go a short distance and put load on one router which should be able to take it, and totally skip the critical backbone.
From the point of view of the network operator, going to a data center is slow and expensive. Keeping traffic inside a local neighborhood is fast and cheap. Thus they want as much traffic as possible to go the fast and cheap route.
CDNs cache data on local mirrors, and route traffic to them whenever possible because that is faster and cheaper than going all the way to a data center. Every large ISP does this, and it would be shocking if Google didn't follow suit.
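To illustrate the "not all routes are equal" point, here's a minimal sketch of a resolver that picks the cheapest source actually holding the content. The source names, costs, and content IDs are invented for illustration and aren't meant to reflect Google's actual topology:

```python
# Candidate sources ordered only by an illustrative "cost" (hops / backbone usage).
# The data center is assumed to hold everything, so it is always a last resort.
CANDIDATE_SOURCES = [
    {"name": "neighbor-box", "cost": 1,   "has": {"video-123"}},
    {"name": "local-mirror", "cost": 10,  "has": {"video-123", "video-456"}},
    {"name": "data-center",  "cost": 100, "has": {"video-123", "video-456", "video-789"}},
]

def pick_source(content_id):
    """Return the cheapest source that holds the content."""
    viable = [s for s in CANDIDATE_SOURCES if content_id in s["has"]]
    return min(viable, key=lambda s: s["cost"])

print(pick_source("video-123")["name"])  # neighbor-box: traffic stays in the neighborhood
print(pick_source("video-789")["name"])  # data-center: has to cross the backbone
```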
But actually caching data on hardware that is sitting at customers' houses is an interesting twist.
I'll admit that the line is a bit blurry.
Edit: I don't mean to dismiss your idea about the value of Google reserving some space on the disk for their own purposes. Cable & satellite operators already do that today. Technology exists, for example, that allows operators to cache household-targeted TV ads on the disk. These deployments are still small scale, but I think it's highly likely Google is thinking about such things as a way of monetizing their new network. If you're curious about this topic, do a quick search on Google's investment in Invidi and on their partnership with Echostar.