What I can tell you is that we published messages in batches, closing a batch every 5 seconds or at 100 tweets, whichever came first (at the highest traffic rate, we were receiving and publishing roughly 100 tweets every 1-2 seconds). This is way below the quota limits.
On the subscriber side, we received batches of at most 10 messages; I didn't investigate whether that's a hard limit or a configurable parameter. Subjectively, the messages arrived pretty much instantly (I was watching two shells side by side, one publishing and one receiving), but again, I don't have precise measurements.
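For anyone curious, the batching behavior I described (flush at 100 messages or after 5 seconds, whichever comes first) boils down to something like the toy sketch below. This is my own illustration, not the actual Pub/Sub client; the class and method names (`BatchPublisher`, `add`, `flush`) are made up:

```python
import time

class BatchPublisher:
    """Toy sketch of the batching described above: a batch is flushed
    when it hits max_messages, or when max_latency seconds have passed
    since its first message, whichever comes first."""

    def __init__(self, publish_fn, max_messages=100, max_latency=5.0):
        self.publish_fn = publish_fn   # called with the list of batched messages
        self.max_messages = max_messages
        self.max_latency = max_latency
        self.batch = []
        self.batch_started = None

    def add(self, message, now=None):
        now = time.monotonic() if now is None else now
        if not self.batch:
            self.batch_started = now
        self.batch.append(message)
        # Simplification: the latency deadline is only checked on add();
        # a real client flushes on a background timer as well.
        if (len(self.batch) >= self.max_messages
                or now - self.batch_started >= self.max_latency):
            self.flush()

    def flush(self):
        if self.batch:
            self.publish_fn(self.batch)
            self.batch = []
            self.batch_started = None

sent = []
pub = BatchPublisher(sent.append, max_messages=3, max_latency=5.0)
for tweet in ["a", "b", "c", "d"]:
    pub.add(tweet)
pub.flush()
print(sent)  # [['a', 'b', 'c'], ['d']]
```

For what it's worth, the official Python client exposes similar knobs via its batch settings (max messages, max bytes, max latency), but check the current docs rather than take my word for it.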
Did you just use Twitter stream API with track?
I am particularly interested in this part. If you are able to share any anecdotal experiences, I'd love to hear them.
Google Cloud Pub/Sub is better aligned with server-to-server messaging, similar to other cloud queuing services, service buses, event logs, or open-source systems such as RabbitMQ or Kafka.
Google Cloud Pub/Sub is HTTP(S), so I guess it's based on PubSubHubbub?
However! I know of a number of PaaS products that provide similar functionality, and with some effort you can assemble it from AWS or Azure services, or build your own on top of RabbitMQ or the Apache projects. The characteristics are going to be different, but it's doable. It might be like a MySQL to Postgres migration, or it might be like a MySQL to Mongo migration, but there _is_ a migration. Using a vendor product with unique advantages as a dependency is a known engineering problem with known risks. Take your dependencies carefully, but it's riskier to take no dependencies and fail to deliver a useful product.
It'd be pretty easy to swap in different techs if all we're talking about is PubSub.
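Right, the surface area you actually depend on is tiny. Something like this hypothetical interface (names are mine, just to make the point concrete) covers most of it, and swapping Cloud Pub/Sub for Kafka or RabbitMQ becomes an adapter, not a rewrite:

```python
from abc import ABC, abstractmethod
from collections import defaultdict
from typing import Callable

class PubSub(ABC):
    """Minimal pub/sub surface an app would code against."""

    @abstractmethod
    def publish(self, topic: str, message: bytes) -> None: ...

    @abstractmethod
    def subscribe(self, topic: str, handler: Callable[[bytes], None]) -> None: ...

class InMemoryPubSub(PubSub):
    """In-process implementation, handy for tests; a Cloud Pub/Sub,
    Kafka, or RabbitMQ adapter would implement the same two methods."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def publish(self, topic, message):
        for handler in self.handlers[topic]:
            handler(message)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

received = []
bus = InMemoryPubSub()
bus.subscribe("tweets", received.append)
bus.publish("tweets", b"hello")
print(received)  # [b'hello']
```

Obviously the real adapters have to deal with acks, retries, and ordering, which is where the vendor differences bite, but the app-facing interface stays this small.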