They send it to Google Colab, which lets you run the model on a powerful server for free. A 6 GB download, even at this crazy bandwidth cost, would still be cheap compared to providing that kind of beast of a server to 60k people each day. I remember when I tried it, I saw that it used 10 GB of memory, which was crazy!
You'd save on transferring the bytes around, but now you'd have to self-host Jupyter and the GPUs it uses. That is going to be even more expensive than IP transit, because now you have to have 60,000 12 GB GPUs in your datacenter.
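To make that trade-off concrete, here's a rough back-of-envelope sketch. The per-GB egress rate and the per-GPU price are my own assumptions for illustration, not quoted figures; only the 6 GB download size and 60k users/day come from the thread.

```python
# Back-of-envelope: egress cost vs. self-hosted GPU fleet.
GB_PER_USER = 6            # model download size (from the thread)
USERS_PER_DAY = 60_000     # daily users (from the thread)
EGRESS_PER_GB = 0.08       # assumed cloud egress rate, USD/GB (illustrative)
GPU_UNIT_COST = 2_500      # assumed price of one 12 GB GPU, USD (illustrative)

daily_egress_gb = GB_PER_USER * USERS_PER_DAY        # 360,000 GB/day
daily_egress_cost = daily_egress_gb * EGRESS_PER_GB  # dollars per day
gpu_fleet_capex = USERS_PER_DAY * GPU_UNIT_COST      # one GPU per concurrent user, worst case

print(f"egress: ~${daily_egress_cost:,.0f}/day, GPUs: ~${gpu_fleet_capex:,.0f} up front")
```

Even with generous assumptions, the GPU fleet is orders of magnitude more capital than a day of egress, which is the point: offloading compute to Colab trades a huge hardware bill for a bandwidth bill.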
Like I said before, this is one of those things that wouldn't exist without the Cloud. If you run things on your users' computers, you have to send them a lot of bits. If you run things on your own computers, you're spared that bandwidth, but now have to have enough "computers" to satisfy your users. It's simply something that's not super cheap to run these days.
I will admit that it is surprising that Google <-> Google traffic is billed at the normal egress rates, but the reasoning does make sense -- a 30Gbps flow is nothing to sneeze at. That is using some tangible resources.
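For what it's worth, the 30Gbps figure checks out against the numbers upthread: 6 GB to 60k users/day works out to roughly that as a sustained average rate.

```python
# Sanity check: 6 GB/user * 60k users/day expressed as an average bit rate.
bytes_per_day = 6e9 * 60_000          # total bytes served per day
bits_per_second = bytes_per_day * 8 / 86_400
gbps = bits_per_second / 1e9          # ~33 Gbps, in the ballpark of 30 Gbps

print(f"average rate: {gbps:.1f} Gbps")
```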
This is what makes the price so surprising -- you are copying data from one Google service to another, but it's billed as egress.