
Connecting a Git Repository to Amazon S3 and AWS Services - freedomben
https://aws.amazon.com/about-aws/whats-new/2017/09/connect-your-git-repository-to-amazon-s3-and-aws-services-using-webhooks-and-new-quick-start/
======
Kpourdeilami
If you're planning to use this as a way to deploy your static website to S3, I
highly recommend using Netlify[0] instead. I have configured it to
automatically publish anything that I put on the `deploy` branch, and it
supports Let's Encrypt certificates as well as rewrite rules.

0: [https://www.netlify.com](https://www.netlify.com)

~~~
marban
+1 Also way better than surge.sh in this area.

------
berlam
A few months back I created a tool [0] which deploys static content from
GitHub to S3, using GitHub push events and AWS Lambda. It was more of a PoC
but worked quite well for me. During development I was very happy with
JGit, which allowed me to stay in the Java stack (even if it is not
the perfect choice for Lambda, imho). There was, and still is, much on the to-do
list. If you are interested in the sources, you can take a look at it.

[0] [https://github.com/berlam/github-bucket](https://github.com/berlam/github-bucket)
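The flow described above (GitHub push event, then a Lambda, then S3) can be sketched in a few lines of Python. This is a hypothetical handler for illustration, not the Java/JGit code from the linked repo; the secret is a placeholder, and only the payload fields follow GitHub's push-event format:

```python
# Hypothetical AWS Lambda handler for a GitHub push webhook delivered via
# API Gateway. Secret and S3 handling are sketched, not production code.
import hashlib
import hmac
import json

def verify_signature(secret, body, signature_header):
    """Check GitHub's X-Hub-Signature header (HMAC-SHA1 of the raw body)."""
    expected = "sha1=" + hmac.new(secret, body, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, signature_header)

def changed_files(push_event):
    """Collect paths to upload and to delete from a push-event payload."""
    upload, delete = set(), set()
    for commit in push_event.get("commits", []):
        upload.update(commit.get("added", []) + commit.get("modified", []))
        delete.update(commit.get("removed", []))
    return upload - delete, delete

def handler(event, context):
    # API Gateway passes the raw webhook body and headers through `event`.
    body = event["body"].encode()
    header = event["headers"].get("X-Hub-Signature", "")
    if not verify_signature(b"my-webhook-secret", body, header):
        return {"statusCode": 403}
    upload, delete = changed_files(json.loads(body))
    # A real implementation would now fetch each changed file from the
    # repository and put/delete the corresponding S3 keys (e.g. via boto3).
    return {"statusCode": 200,
            "body": json.dumps({"upload": sorted(upload),
                                "delete": sorted(delete)})}
```

A deployment would register the API Gateway endpoint as the repository's webhook URL, with the same secret configured on both sides.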

~~~
falsedan
Nice! I'm planning to make my travis build handle the publishing to S3.
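A minimal sketch of that idea, assuming a Travis-style environment (`TRAVIS_BRANCH`, `TRAVIS_PULL_REQUEST`) and the `aws` CLI on the build image; the bucket name and build directory are placeholders:

```python
# Hypothetical CI deploy script, e.g. run from a Travis after_success step:
# publish the built site to S3 only for plain pushes to the deploy branch.
import os
import subprocess

def should_deploy(branch, pull_request):
    # Travis sets TRAVIS_PULL_REQUEST to the literal string "false"
    # for ordinary pushes, and to the PR number for pull requests.
    return branch == "deploy" and pull_request == "false"

if __name__ == "__main__":
    branch = os.environ.get("TRAVIS_BRANCH", "")
    pr = os.environ.get("TRAVIS_PULL_REQUEST", "false")
    if should_deploy(branch, pr):
        # --delete removes S3 objects no longer present locally
        subprocess.run(["aws", "s3", "sync", "public",
                        "s3://my-site-bucket", "--delete"], check=True)
    else:
        print("skipping deploy for branch", branch or "(unknown)")
```

Gating on both the branch and the pull-request flag avoids publishing from PR builds, which run with the target branch's name.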

------
rubenbe
From the title of this post, I was expecting something like Dulwich [0], which
has the ability to store individual objects in a Swift object store.

But instead it is a webhook which reacts to pushes on a repo.

[0] [https://github.com/jelmer/dulwich/blob/master/README.swift.m...](https://github.com/jelmer/dulwich/blob/master/README.swift.md)

Edit: title has been improved in the meantime

~~~
falsedan
I thought it would be about jgit's S3 support[0], or git-remote-helpers[1].

0: [http://download.eclipse.org/jgit/docs/jgit-2.0.0.20120613090...](http://download.eclipse.org/jgit/docs/jgit-2.0.0.201206130900-r/apidocs/org/eclipse/jgit/transport/AmazonS3.html)
1: [https://git-scm.com/docs/git-remote-helpers](https://git-scm.com/docs/git-remote-helpers)

------
EgoIncarnate
and a 3rd party git repository:
[https://aws.amazon.com/quickstart/architecture/git-to-s3-usi...](https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/)

------
ge96
I like the free t2.small EC2 instance, pretty sweet. Got a private git server
on there, but still figuring out how to actually use Git.

~~~
jshmrsn
Just to make sure you're aware, your free 750 hours per month will only be
available for the first 12 months of your AWS account. According to the AWS
website, it's t2.micro instances that have the available free hours, not
t2.small. Make sure you're not getting unexpected costs in the billing
section.

See [https://aws.amazon.com/free/](https://aws.amazon.com/free/)

~~~
ge96
Yeah, you're right, sorry, I said the wrong one (t2.micro). It's funny when you
look at it: it seems cheap, like $0.012/hr, but you multiply that by 750 and
yeah... I briefly went up to t2.medium and I was like, damn, $30/mo for a
server, that's a lot (to me).

OVH has nicer prices for roughly the same specs, e.g. 2 cores/8GB RAM at I think
$16/mo, but AWS's detachable/scalable storage is really nice, no data loss on
server config changes.

Yeah, good point about the year. I was surprised when I was charged the $99.99
for Amazon Prime, haha.
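The back-of-the-envelope numbers above are easy to check. The hourly rates here are illustrative 2017-era on-demand figures, not authoritative pricing:

```python
# Rough monthly cost from an hourly on-demand rate: an instance left
# running all month accrues about 730 hours (24 * 365 / 12).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate):
    """Approximate monthly cost in dollars for an always-on instance."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

print(monthly_cost(0.012))  # t2.micro-class rate: under $10/month
print(monthly_cost(0.047))  # t2.medium-class rate: low-to-mid $30s/month
```

So a t2.medium-class rate does land in the "$30/mo for a server" territory mentioned above.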

~~~
foepys
I am paying 4,63€ for a VPS that would cost me $30 on AWS. If you don't need
scalability, always look into traditional (VPS/bare-metal) hosting.

~~~
ge96
Yeah, paying in advance helps too. I do 1 year for like $40-$50: single core,
10GB disk, 2GB RAM.

edit: the one for $30/mo on AWS is 2 cores, 4GB RAM, like 30GB space. What
specs are you getting for that price? For me, roughly that (though the $ is not
as strong) is what I mentioned above through OVH.

~~~
foepys
I'm at Hetzner and I'm using it for some bandwidth-hungry stuff. Hetzner
includes 2 TB egress traffic per month which alone would cost me about $60 on
AWS. It's easy to forget about the data transfer when looking at AWS/GCP/Azure
prices.

~~~
ge96
Wait, is data transfer equivalent to bandwidth? I never even look at bandwidth.
Sorry if that's obvious; I have seen that "metric" before and it didn't click
till now.

~~~
vidarh
For really low-bandwidth stuff (<1GB/month) it won't hit you with AWS. But as
soon as you're over that, it gets real expensive real quick. E.g. 1TB of
traffic _out_ of AWS costs you $90 in most regions, I believe, while you often
get at least that much included elsewhere.

For Hetzner, as mentioned, once you exceed the included bandwidth (which for
most VPSs and servers is at least 2-3TB per month), it costs less than 2
euro/month extra.

This isn't likely to affect you for a personal git repo, but I mention it
because it's one of the things that often burn people with AWS. The AWS
bandwidth rates are crazily high, to the point where for high-bandwidth
applications, even if you want/need to use AWS, it's sometimes cheaper to run
your own CDN in front of it to reduce bandwidth charges.

My order of preference if I don't _need_ AWS specific services tends to be
Hetzner, OVH and Digital Ocean, depending on what additional geographical
regions I need (Hetzner is only in Germany) or other service requirements.

AWS is great for specific use-cases as long as you keep a close eye on costs.
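A rough model of the egress pricing being discussed, assuming roughly the first GB free and about $0.09/GB for the first paid tier (2017-era figures; real AWS pricing has more tiers and varies by region):

```python
# Simplified AWS data-transfer-out cost model: first GB free, then a
# flat per-GB rate for the first pricing tier.
FREE_GB = 1
RATE_PER_GB = 0.09

def egress_cost(gb):
    """Approximate monthly egress cost in dollars for `gb` of traffic out."""
    return round(max(gb - FREE_GB, 0) * RATE_PER_GB, 2)

print(egress_cost(0.5))   # low-bandwidth use: free
print(egress_cost(1000))  # ~1TB out: close to the $90 estimate above
print(egress_cost(2000))  # ~2TB out: often included for free elsewhere
```

At these rates a single terabyte of egress can cost more than the VPS itself, which is the trap being described.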

~~~
ge96
Wasn't able to respond to you earlier.

Thanks for the tip, I wasn't even considering bandwidth. I think when I first
was setting up EC2 I got a "bad batch", if that's a thing; I should say
instance. I couldn't figure out why the TTFB was varying from 3-5 seconds. I
tried so many things; eventually I had to switch, and since I didn't have an
elastic IP, I think, I switched servers (regions) to whatever I was assigned on
restart, and then it was instant/correct, like in the 10s/100s of ms range.
Anyway, I'll keep an eye on that. I don't have users yet, but it's something to
keep in mind; thankfully I'm not really serving media, mostly text.

~~~
foepys
For very fresh comments, HN disables the reply link on the comment overview to
deter flame wars. You can click on the comment date to see only the single
comment and then the reply link will be visible. Or just wait 2 minutes.

~~~
ge96
That is interesting, I like it.

I don't know, I am often wrong; I just need to learn how to deal with it. The
proper thing is to accept/thank/learn and move on, haha.

------
rsync
I don't know about Lambda functions, but I _think_ you can do all of this with
rsync.net, since we have both s3cmd and git in our environment:

    
    
      ssh user@rsync.net s3cmd get s3://rsync/mscdex.exe
    
      ssh user@rsync.net "git clone git://github.com/freebsd/freebsd.git freebsd"
    

So you could run a cron job that would clone a repo to your rsync.net account
and then publish (or backup) that repo to an s3 account.

Or whatever.
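The clone-then-publish cron job might look something like this sketch; the repo URL, paths, and bucket are placeholders, and it assumes `git` and `s3cmd` are on the PATH as described:

```python
# Hypothetical mirror script for a cron job: clone (or update) a Git repo,
# then publish the working tree to an S3 bucket with s3cmd.
import os
import subprocess

def mirror_commands(repo_url, clone_dir, bucket):
    """Build the shell commands a single mirror run would execute."""
    if os.path.isdir(os.path.join(clone_dir, ".git")):
        # Already cloned: just fast-forward to the latest upstream state.
        git_cmd = ["git", "-C", clone_dir, "pull", "--ff-only"]
    else:
        git_cmd = ["git", "clone", repo_url, clone_dir]
    # s3cmd sync only transfers files that changed since the last run.
    return [git_cmd, ["s3cmd", "sync", clone_dir + "/", bucket]]

def mirror(repo_url, clone_dir, bucket):
    for cmd in mirror_commands(repo_url, clone_dir, bucket):
        subprocess.run(cmd, check=True)

# Example crontab entry (daily at 03:00):
#   0 3 * * * python3 /home/user/mirror_to_s3.py
```

The same two commands from the howto page above do the real work; the script just decides between clone and pull.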

~~~
korzun
This allows you to hook your repository directly to Amazon AWS.

You are suggesting the exact opposite.

~~~
rsync
I'm not suggesting anything. Those two tools (s3cmd and git) are in place, and
you can do whatever you'd like with them in whatever order.

I just pasted those two example commands in that order because that's the
order they appear in on our howto page...

------
hgontijo
All of this just to avoid creating a Git connector?!

~~~
korzun
You should understand what problem they are solving first; then comment.

