A better way is to leave the "website" bits of S3 off entirely and handle all of that in CloudFront. You can create an Origin Access Identity, then grant that OAI read access to your S3 bucket (all automated in the wizard when you create a CF dist and specify an S3 origin). Then specify a default root object in your CF dist, and bam, CF is using the S3 REST API over SSL to secure that CF-S3 hop.
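For reference, the bucket policy the wizard attaches for the OAI looks roughly like this (bucket name and OAI ID here are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLE_OAI_ID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

With that in place you can turn on "block all public access" for the bucket itself, and only CloudFront can read the objects.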
That, if monitored and enforced, would stop many data breaches. With some buckets intentionally public, though, enforcement will be difficult.
I switched from gh-pages/Cloudflare to Netlify, and it looks as though page crawl performance has worsened significantly...
(IIRC Netlify also has an option you can enable to serve some assets via CloudFront, so that should speed things up for subresources.)
a) it's totally free: once content is cached at CF there are no AWS bandwidth charges, and no Route 53 charges either, since CF handles the DNS too.
b) it can be used to terminate SSL in front of the S3 bucket (with or without the S3 bucket properly using SSL, depending on whether you're using path-based or host-based bucket access)
c) cache invalidations are stupid fast
d) any CDN changes are applied nearly instantly, vs. "however long" CloudFront takes
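On the path- vs. host-based point in b): S3's wildcard certificate only covers a single label, so host-based (virtual-hosted-style) URLs fail TLS validation when the bucket name contains dots, while path-style URLs stay valid. A sketch with a hypothetical `www.example.com` bucket:

```
# host-based: *.s3.amazonaws.com does NOT match this multi-label hostname
https://www.example.com.s3.amazonaws.com/index.html

# path-style: hostname is s3.amazonaws.com, so the cert matches
https://s3.amazonaws.com/www.example.com/index.html
```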
GitLab Pages offers no IPv6 support. GitHub doesn't officially support IPv6 for custom domains either, but you can easily work around that by adding 2a04:4e42::403 as the AAAA record.
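That workaround is just a manual AAAA record alongside GitHub's published Pages A records. A zone-file sketch (your DNS provider's UI will look different, and the AAAA address is the unofficial one from the comment above, not something GitHub documents):

```
; apex domain pointed at GitHub Pages
@  IN  A     185.199.108.153
@  IN  A     185.199.109.153
@  IN  AAAA  2a04:4e42::403
```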
Granted, I pay for S3 hosting and GitHub...
I'd rather have the site go down than go broke myself, so is it really a good idea?
I think Cloudflare gives more options as a CDN than CloudFront.
Amazon’s pricing is easy to estimate for a simple setup like this.
I doubt dang is going to walk you through how they detected it either. No need to make people's fraud easier in the future.
Just take your licks and move on.