I'm not sure how you arrived at this 3750 per second number.
Processing 1.5 million messages during ⅓ of the day (1,500,000 messages / 30 d / 8 h / 60 m / 60 s) is an average of 1.736 messages per second across those 8 hours. If the remaining 500k messages were spread evenly over the other 16 hours a day for a month, that's 0.289 messages per second (500,000 / 30 / 16 / 60 / 60) during those hours, and roughly 0.772 messages per second (((0.289 × 2) + 1.736) / 3) averaged over the entire month, which is just 2,000,000 messages / 30 d / 24 h / 60 m / 60 s.
Even if you were to process all 1.5 million messages during the last 8 hours of the month, that is still only 52.08 messages per second (1,500,000 messages / 8 h / 60 m / 60 s) during that 8-hour window.
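To make the arithmetic concrete, here's the same calculation in a few lines of Go (the message volumes are from the thread; the 8-busy-hour/16-quiet-hour split is the assumption stated above):

```go
package main

import "fmt"

func main() {
	const secPerHour = 3600.0

	// 1.5M messages over 8 busy hours a day, for 30 days.
	busy := 1500000.0 / (30 * 8 * secPerHour)

	// The remaining 500k messages spread over the other 16 hours a day.
	quiet := 500000.0 / (30 * 16 * secPerHour)

	// Time-weighted average over the whole day; identical to
	// 2M total messages over the full month.
	avg := (busy*8 + quiet*16) / 24

	// Pathological worst case: all 1.5M messages land in a
	// single 8-hour window.
	burst := 1500000.0 / (8 * secPerHour)

	fmt.Printf("busy=%.3f quiet=%.3f avg=%.3f burst=%.2f msg/s\n",
		busy, quiet, avg, burst)
}
```

Every one of those figures, including the worst-case burst, is orders of magnitude below 3,750 messages per second.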
You dramatically misunderstand VPC.
Everybody on AWS today uses VPC, unless you happened to be grandfathered into EC2 'classic'.
There is NO COST to using VPC (aside from some admin overhead, maybe). It's a huge security upgrade, as it gives you the ability to isolate things much better.
The specifics of HIPAA/HITECH compliance on AWS are available under their BAA (which is confidential), and do require some things that cost more money. VPC is not one of them.
If you have only a quarter rack of servers and want the bare minimums for physical security, yeah, you can probably make that work cheaper than AWS. When you have a whole datacenter's worth of gear and need a more sophisticated security/entry system, etc., the argument becomes much closer.
With that said, your misunderstanding of how VPC works and what it is makes me wonder whether your pricing is coming out right: it's really complicated to price out a full AWS deployment, just as much as to price out your own costs.
The 'calorie count' website score is about the healthiness of the meal, not the quality of meat. McNuggets are the same chicken, just with way more breading, and therefore more fat/salt/etc., so 'less healthy'.
The alternative is what we have in infosec: the folks in charge of 'cyber' in the govt don't have a clue, because they have no background or connections to the 'industry' or technology at all. And it's a total disaster.
I don't buy either side of that statement. Government does hassle the masses, and I'm not sure what incentive private companies have to use data collection against the masses. My argument on the latter is probably a little weak, but on the former, history is certainly clear.
Of course, collecting metadata -- especially via centralized services like Google Analytics -- means you're creating a very tempting trove of information for both the government and private entities that may make use of it to your users' disadvantage.
I've created a simple solution for a few projects I'm working on: a kind of REST service wrapper.
Basically, I have a "service" package that offers:
* mux routing
* Negroni for middleware (with stats and auth middlewares delivered out of the box)
* a centralised logger
* rendering of things to JSON
* an idea of controllers (a controller is a web handler for a given route, handling all verbs)
* an idea of handlers on those controllers (per verb... you basically just load your GET or POST handlers in here)
* HEAD and OPTIONS for free (including automatic Allow headers)
* error handling that returns JSON throughout, including 404s, etc.
* basic filling of structs with POSTed/PUTed JSON
The thing is fairly primitive, and because it sticks to idiomatic Go it doesn't provide a context object for the middleware and handlers, but it lets you rapidly say "this struct, to this controller, on this route" and get the rest for free.
This is just a starting point though, a test of an idea. If I can package it better and make it a little less hacky in nature, I'll look at open-sourcing it. Right now it's really just testing the idea of taking all of the cross-cutting concerns and common issues and solving them in this one service package, with the goal of letting us get on with writing the business logic.
I want to expand it... better descriptions of errors, standard schemas for returned arrays, auto-documentation of the endpoints (perhaps Swagger, though it seems awfully monolithic in nature), etc.
That's a familiar system, but it requires a car to come to nearly a full stop before the sensor picks it up. This often leaves the lights so far out of phase that they produce the maximum delay for both streets.