The role of terraform (or any other infrastructure component) is to make sure the place where you put the web site content exists (and the correct wiring exists) before you try to put the web site content there.
Having a layer of indirection between your infra code and the actual deployment interface helps. At its most basic, you can use Terragrunt for this, with a hook that invokes `aws s3 sync` after the infrastructure applies.
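As a sketch of that indirection, a Terragrunt `after_hook` can push the site content once the apply succeeds. The module path, bucket name, and build directory here are illustrative, not from any real setup:

```hcl
# terragrunt.hcl (illustrative; source path and bucket name are assumptions)
terraform {
  source = "../modules/static-site"

  # After a successful `terragrunt apply`, sync the built site into the bucket.
  after_hook "deploy_content" {
    commands     = ["apply"]
    execute      = ["aws", "s3", "sync", "./public/", "s3://www-example-com-site/", "--delete"]
    run_on_error = false
  }
}
```

This keeps the "content push" step versioned alongside the infra code without baking it into the Terraform resources themselves.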
- AWS CloudFront website distribution
- Route53 DNS entries
It also creates a default index.html file and makes www.example.com/something automatically resolve to www.example.com/something/index.html.
The S3 bucket permissions are private; the content is accessible only through the CloudFront distribution, via an Origin Access Identity (OAI).
After purchasing a new domain in Route53, you can have a new website up and running in about 5 minutes.
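A condensed Terraform sketch of the stack described above (the domain, zone ID, and resource names are placeholders; a real setup also needs an ACM certificate for the custom domain and a CloudFront function or S3 website config for the index-document rewrite):

```hcl
# Illustrative only: domain, zone ID, and names are assumptions.
resource "aws_s3_bucket" "site" {
  bucket = "www-example-com-site"
}

resource "aws_cloudfront_origin_access_identity" "site" {
  comment = "Access to the private site bucket"
}

# Bucket stays private; only the CloudFront OAI may read objects.
data "aws_iam_policy_document" "site" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.site.arn}/*"]
    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.site.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "site" {
  bucket = aws_s3_bucket.site.id
  policy = data.aws_iam_policy_document.site.json
}

resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"
  aliases             = ["www.example.com"]

  origin {
    domain_name = aws_s3_bucket.site.bucket_regional_domain_name
    origin_id   = "s3-site"
    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.site.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    target_origin_id       = "s3-site"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    forwarded_values {
      query_string = false
      cookies { forward = "none" }
    }
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }

  viewer_certificate {
    cloudfront_default_certificate = true # swap for an ACM cert on the real domain
  }
}

# Point the subdomain at the distribution.
resource "aws_route53_record" "www" {
  zone_id = "Z123EXAMPLE" # assumption: your hosted zone ID
  name    = "www.example.com"
  type    = "A"
  alias {
    name                   = aws_cloudfront_distribution.site.domain_name
    zone_id                = aws_cloudfront_distribution.site.hosted_zone_id
    evaluate_target_health = false
  }
}
```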
For such setups, it could be easier to think of an object-storage bucket as, itself, a fully-static resource, even a content-addressable resource — where the bucket name contains an asset-pipeline fingerprint hash of the content of the bucket. In such cases, whether the bucket exists is the same as asking whether the content is deployed.
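One way to express that idea in Terraform is to make the fingerprint an input and derive the bucket name from it, so a new content hash forces a new bucket. The variable and naming scheme here are illustrative:

```hcl
# Assumption: the asset pipeline hands Terraform a content fingerprint.
variable "content_hash" {
  type        = string
  description = "Fingerprint of the built site (e.g. a truncated sha256 of the content)"
}

# The bucket's existence now answers "is this exact content deployed?"
resource "aws_s3_bucket" "release" {
  bucket = "site-content-${var.content_hash}"
}

output "release_bucket" {
  value = aws_s3_bucket.release.bucket
}
```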
Then, rather than relying on the object store's own HTTP hosting (which, for S3, requires that your CNAME also be the name of the bucket), you'd give the bucket a well-known "symbolic link" name in one of two ways: either configure your own load-balancer with a hostname ⇒ bucket-content-address mapping (think k8s Ingress resources for this), or configure a third party like Cloudflare to route the subdomain to a Worker that in turn proxies requests to the specific bucket, passing the actual internal hostname. In the latter case, the per-subdomain Worker script would also be part of the Terraform deployment.
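With the Cloudflare Terraform provider, the per-subdomain piece could look roughly like this. The variables and the `proxy.js` file are assumptions, and exact resource names vary between provider versions, so treat this as a sketch:

```hcl
# Assumption: proxy.js forwards requests to the bucket's internal hostname.
resource "cloudflare_worker_script" "site_proxy" {
  account_id = var.cloudflare_account_id
  name       = "www-example-com-proxy"
  content    = file("${path.module}/proxy.js")
}

# Route the subdomain's traffic through the Worker.
resource "cloudflare_worker_route" "site" {
  zone_id     = var.cloudflare_zone_id
  pattern     = "www.example.com/*"
  script_name = cloudflare_worker_script.site_proxy.name
}
```

Redeploying the site then means applying a new bucket plus updating the Worker's target, all in one Terraform run.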