Hacker News
Abusing Terraform to Upload Static Websites to S3 (tangramvision.com)
64 points by grschafer 9 days ago | 11 comments





As fun as it is to do this stuff, it's better to first think about what your unit of deployment is. Conceptually, it is important to separate the things that change rarely (e.g., the bucket name, the CNAME, etc.) from the things that change on every deployment (the website content).

The role of Terraform (or any other infrastructure tool) is to make sure the place where you put the website content exists (and the correct wiring exists) before you try to put the website content there.

Having a layer of indirection between your infra code and the actual deployment interface helps. At its most basic, you can use terragrunt for this to invoke `aws s3 sync`[1].

[1]: https://docs.aws.amazon.com/cli/latest/reference/s3/sync.htm...
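The layer of indirection described above can be expressed as a Terragrunt after-hook that runs `aws s3 sync` once Terraform has finished creating the bucket. A minimal sketch, assuming a hypothetical `static-site` module, local `./public` directory, and bucket name:

```hcl
# terragrunt.hcl — illustrative only; the module path, content directory,
# and bucket name are placeholders, not taken from the article.
terraform {
  source = "../modules/static-site"

  # After `terragrunt apply` finishes creating/wiring the bucket,
  # push the site content into it.
  after_hook "sync_site_content" {
    commands = ["apply"]
    execute  = ["aws", "s3", "sync", "./public", "s3://example-static-site", "--delete"]
  }
}
```

This keeps the rarely-changing infrastructure in the Terraform module while the frequently-changing content goes through a plain CLI sync step.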


AWS CDK makes this much easier, including automatically invalidating the CloudFront distribution: https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws...

Love this. It's pretty obviously the wrong tool for the job, but I learned a ton about Terraform from reading about how they got it to do this.

Bit disappointed that the Netlify Terraform provider[1] isn't supported any more, because it would solve this exact use case (and more complicated ones!) in a much less cumbersome way. I’d love to be able to point a Netlify redirect at an IP that’s output from another Terraform resource.

[1]: https://github.com/hashicorp/terraform-provider-netlify


A while back I made a quick shell script[1] to easily create a static Hugo blog hosted on S3 with a Cloudfront distribution. All you need to bring is a domain name. The AWS CLI is extremely capable.

[1] https://github.com/sa7mon/orchestra


I created a repo that uses Terraform to create:

AWS CloudFront website

ACM certificate

Route53 DNS entries

S3 bucket

___

It also creates a default index.html file and makes www.example.com/something automatically redirect to www.example.com/something/index.html.

S3 bucket permissions are private and only accessible by the CloudFront distribution via an OAI.
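The OAI wiring described above can be sketched in Terraform roughly as follows; the resource names here are illustrative, not taken from the linked repo:

```hcl
# Origin Access Identity so CloudFront can read a private bucket.
resource "aws_cloudfront_origin_access_identity" "site" {
  comment = "Access identity for the static site bucket"
}

# Bucket policy granting read access only to that identity.
resource "aws_s3_bucket_policy" "site" {
  bucket = aws_s3_bucket.site.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.site.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.site.arn}/*"
    }]
  })
}
```

The CloudFront distribution's S3 origin then references the same identity, so requests that bypass CloudFront and hit the bucket directly are denied.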

After purchasing a new domain in Route53, you can have a new website up and running in about 5 minutes.

___

https://github.com/jftuga/terraform_cloudfront_builder


Uuh, don't? Data inside S3 buckets has a different lifecycle from anything else you'd be doing in Terraform.

Not necessarily. Consider a design where each bucket is immutable for its entire lifetime. (Think: domain squatting "This domain is for sale" pages. Or domains that only exist to serve a redirect. Or, say, example.com. Or the Web 1.0 category of "per-subdomain, immutable, fixed format, non-interactive, user-generated content" sites, e.g. YTMND.)

For such setups, it could be easier to think of an object-storage bucket as, itself, a fully-static resource, even a content-addressable resource — where the bucket name contains an asset-pipeline fingerprint hash of the content of the bucket. In such cases, whether the bucket exists is the same as asking whether the content is deployed.
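One way to express that "bucket name = content fingerprint" idea in Terraform is to derive the name from a hash of the site files, so a content change yields a brand-new bucket. A sketch, with an illustrative hashing scheme:

```hcl
# Fingerprint the site content; any change produces a new bucket name,
# making each bucket effectively immutable once created.
locals {
  site_files = fileset("${path.module}/site", "**")
  site_hash = substr(sha1(join("", [
    for f in sort(local.site_files) : filesha1("${path.module}/site/${f}")
  ])), 0, 12)
}

resource "aws_s3_bucket" "immutable_site" {
  bucket = "site-${local.site_hash}"
}
```

With this scheme, asking "does the bucket exist?" really is the same question as "is this version of the content deployed?"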

Then, rather than relying on S3-or-equivalent's own HTTP hosting (which requires that your CNAME also be the name of the bucket), you'd give the bucket a well-known "symbolic link" name by either configuring your own load-balancer with a hostname ⇒ bucket-content-address mapping (think k8s Ingress resources for this); or by configuring a third-party like Cloudflare to route the subdomain to a Worker that in turn makes requests to the specific bucket, passing the actual internal hostname. In the latter case, the per-subdomain Worker script would also be part of the Terraform deployment.
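The Cloudflare side of that "latter case" could itself live in the same Terraform configuration via the Cloudflare provider. A hedged sketch — resource arguments vary by provider version, and the Worker body (which proxies requests to the specific bucket) is omitted:

```hcl
# Hypothetical names throughout; the Worker script in router.js would
# forward requests to the content-addressed bucket's internal hostname.
resource "cloudflare_worker_script" "site_router" {
  name    = "site-router"
  content = file("${path.module}/router.js")
}

resource "cloudflare_worker_route" "site" {
  zone_id     = var.zone_id
  pattern     = "example.com/*"
  script_name = cloudflare_worker_script.site_router.name
}
```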


This is actually a really good example of why Terraform shouldn't be used in deployments. Terraform is best used for things that almost never need to change, or that you want to be able to re-create very quickly in the event that something bad happens. (It's also good for fancy CI/CD jobs that build new infra as part of a pipeline and destroy it afterwards, but that can also lead to running into AWS service quotas and other issues.)

Interesting. I do this on a regular basis with Pulumi and Cloudflare Workers KV.



