
AWS CodePipeline - jeffbarr
https://aws.amazon.com/blogs/aws/now-available-aws-codepipeline/
======
darksaints
Internally at Amazon, Pipelines (which inspired this service) was a lifesaver.
Apollo (which is the inspiration for CodeDeploy) was also helpful, but should
probably just be replaced by Docker or OSv at this point.

But if they ever release a tool that is inspired by the Brazil build system,
pack up and run for the hills. When it takes a team of devs over two years to
get _Python_ to build and run on your servers, you know your frankenstein
build system is broken. It could be replaced by shell scripts and still be
orders of magnitude better. Nobody deserves the horror of working with that
barf sandwich.

~~~
mirceal
hah. I believe you either don't understand Brazil (the build system) or you
had to deal with inexperienced people who don't get it. IMHO the internal
Amazon tooling is THE reason Amazon is able to build things so fast and to
iterate so quickly.

~~~
SpikeGronim
I love Brazil and Apollo. As an ex-Amazonian I miss them on a weekly basis.

~~~
coreyoconnor
Check out nix / nixpkgs. :-) That's the closest to Brazil IMO.

~~~
mirceal
looks pretty awesome. never heard of this before. I'm sure it has some
caveats...

~~~
coreyoconnor
Yep. Definitely! One caveat, in short: everything involved in the build must
be placed under the nix store (/nix/store). Which, for some software, can
cause issues: they might have a hardcoded runtime path that references
/usr/lib. Nix provides tools to resolve these assumptions, but they can still
be a sticking point at times. There is also, more or less, an unsafePerformIO
that can be used, though it is discouraged. Still, for private builds the cost
can be acceptable.
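
To make that concrete, here's a toy Python analogue (hypothetical file
contents and store path, nothing from Nix itself) of the kind of path-rewriting
Nix's tooling does to fix up hardcoded references:

```python
import os
import tempfile
from pathlib import Path

# Toy analogue of Nix-style path fixup: rewrite a hardcoded runtime
# path so it points into the (hypothetical) store instead of /usr/lib.
def substitute_in_place(path, old, new):
    text = Path(path).read_text()
    Path(path).write_text(text.replace(old, new))

fd, cfg = tempfile.mkstemp()
os.close(fd)
Path(cfg).write_text('plugin_dir = "/usr/lib/foo"\n')

substitute_in_place(cfg, "/usr/lib", "/nix/store/abc123-foo/lib")
patched = Path(cfg).read_text()
os.remove(cfg)

assert "/nix/store/abc123-foo/lib/foo" in patched
```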

~~~
mirceal
do you know if there is a way to cache things if used across multiple apps?

~~~
coreyoconnor
Assuming I understand the question correctly: Yes.

Nix uses a pure evaluation model, so it enjoys the property that an artifact
in the nix store is uniquely identified by the closure used to build it. For
any artifacts A and B, the artifact file paths are equal if and only if the
closures used to build them are equal.

This creates opportunities for sharing between builds that can be hard to
achieve in other systems. One form of sharing "referencing a derivation in
multiple apps" works as expected, just like other systems: Each app will
reference the same artifact.

(a derivation is the closure to be evaluated to build an artifact in the nix
store. Well, an attribute set of closures.)

Suppose a derivation is assigned to the variable "commonData" and two
derivations "appX" and "appY" reference this closure. "commonData" will be
built once and both app derivations will receive a path to the same file in
the nix store.

The other form of sharing comes from the equality comparison being based on
the closure and not the name used to reference the closure.

Ehh.. I'm butchering the explanation... I think there is a succinct PL term
that covers this.

Suppose we have a derivation:

  let x = mkDerivation { name = "foo"; builder = aBuilder; src = /share/src/foo; };

which is referenced by another derivation

  let y = mkDerivation { name = "bar"; builder = aBuilder; src = /share/src/bar; inherit x; };

"y" will force the evaluation of the "x" derivation's closure. The source
directories, since they are not in the nix store, will be copied to the nix
store first. (By an implicit conversion between local files and nix store
paths)

So far so good, but what happens if there is another derivation like so?

  let z = mkDerivation {
    name = "zab"; builder = aBuilder; src = /share/src/zab;
    somethingLikeX = mkDerivation { name = "foo"; builder = aBuilder; src = /share/src/foo; };
  };

"somethingLikeX"'s equation is equal to "x" but not the same reference.

What will happen if z is evaluated after y? (assuming aBuilder is the same)
First, the derivation "somethingLikeX" will be evaluated. Ah ha! That closure
is equal to the closure for "x" above! Which has already been evaluated. So
that evaluation result will be shared. Even though "z" does not directly
reference "x".

This can result in more sharing than the developer explicitly requested: Equal
closures are shared.
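
The content-addressing behind that can be sketched in a few lines of Python (a
toy model, not Nix's actual hashing scheme; the paths and names here are made
up for illustration):

```python
import hashlib
import json

# Toy model of a content-addressed store: the "store path" of an
# artifact is derived from a hash of everything used to build it.
store = {}

def store_path(derivation):
    # Serialize the derivation deterministically and hash it, roughly
    # analogous to how Nix derives /nix/store/<hash>-<name> paths.
    digest = hashlib.sha256(
        json.dumps(derivation, sort_keys=True).encode()
    ).hexdigest()[:12]
    return f"/nix/store/{digest}-{derivation['name']}"

def build(derivation):
    path = store_path(derivation)
    if path not in store:  # equal closures -> same path -> built once
        store[path] = f"built {derivation['name']}"
    return path

x = {"name": "foo", "builder": "aBuilder", "src": "/share/src/foo"}
something_like_x = {"name": "foo", "builder": "aBuilder", "src": "/share/src/foo"}

# Two syntactically separate definitions, but equal inputs share one artifact:
assert build(x) == build(something_like_x)
assert len(store) == 1  # only one build actually happened
```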

------
felipesabino
I wonder why GitHub specifically and not just Git repos in general? Isn't it
weird?

It means they don't even support their own new "Git" product AWS CodeCommit
[1]

[1] [https://aws.amazon.com/blogs/aws/now-available-aws-codecommit/](https://aws.amazon.com/blogs/aws/now-available-aws-codecommit/)

~~~
andrewguenther
My guess would be WebHooks support.

~~~
felipesabino
That could be; I just noticed that CodePipeline's Post-Receive Hooks are still
a work in progress

~~~
shekhargulati
Yes, you have to poll CodePipeline every second or so to get the jobs and
perform your task. Currently, webhooks are either not implemented or still a
work in progress.
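
That polling loop looks roughly like this (a Python sketch; the call names
mirror the CodePipeline API actions PollForJobs / AcknowledgeJob /
PutJobSuccessResult, but the client here is a stand-in, not a real boto3
client, and the action type values are made up):

```python
import time

def run_worker(client, action_type_id, handle_job, interval=1.0, max_polls=None):
    """Poll for CodePipeline jobs and process them.

    `client` stands in for a CodePipeline client; with boto3 it would be
    boto3.client("codepipeline") and these method names match its API.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        resp = client.poll_for_jobs(actionTypeId=action_type_id, maxBatchSize=1)
        for job in resp.get("jobs", []):
            # Claim the job so no other worker processes it, then report back.
            client.acknowledge_job(jobId=job["id"], nonce=job["nonce"])
            handle_job(job)
            client.put_job_success_result(jobId=job["id"])
        time.sleep(interval)

# A fake client to show the flow without talking to AWS:
class FakeClient:
    def __init__(self):
        self.results = []
        self._jobs = [{"id": "job-1", "nonce": "n1"}]
    def poll_for_jobs(self, actionTypeId, maxBatchSize):
        jobs, self._jobs = self._jobs, []
        return {"jobs": jobs}
    def acknowledge_job(self, jobId, nonce):
        pass
    def put_job_success_result(self, jobId):
        self.results.append(jobId)

client = FakeClient()
run_worker(client,
           {"category": "Test", "owner": "Custom",
            "provider": "MyRunner", "version": "1"},
           handle_job=lambda job: None, interval=0.01, max_polls=2)
assert client.results == ["job-1"]
```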

------
atmosx
This is interesting for lone developers, but I'm not sure about the pricing:

 _You’ll pay $1 per active pipeline per month (the first one is available to
you at no charge as part of the AWS Free Tier). An active pipeline has at
least one code change move through it during the course of a month._

Does this mean that every time you run a session you pay $1 no matter how
many stages the session has (pull, compile/build, test (multiple tests), and
deploy)?

~~~
BillinghamJ
No, you pay $1/month/pipeline for the pipelines which you've used during the
month.

Pipeline used 0 times - $0, 1 time - $1, 500 times - $1

~~~
mirceal
10000 times - $1 :) this is not to make money. It's to drive adoption of AWS
and get [even] more people on it by making their lives easier.

------
jtwaleson
Is there any way to integrate this with ECS? That would be a great feature for
me.

------
jayonsoftware
Can we build .NET code ?

~~~
cschneid
From my reading, it appears to hook up to a Jenkins server, potentially one
that you control, so it can do anything Jenkins can, which includes building
Windows artifacts.

------
pragar
Thanks. I was eagerly waiting for this :)

------
maikklein
Could I install Unreal Engine 4 on CodePipeline so that I can build my game
remotely?

~~~
jordanthoms
As far as I can tell this is more about orchestrating builds; to actually
build your game you'd need to run Jenkins on an EC2 instance and connect it to
CodePipeline.

------
dynjo
Amazon seriously need to hire some good UI designers. They produce great stuff
but it all looks like it was designed by developers in 1980.

~~~
softawre
It just needs to be usable, which it is in my experience.

Not that it matters, but I think they use GWT.

~~~
andybak
The initial impression of a new AWS console is pretty overwhelming. I've stuck
with DigitalOcean and CloudFlare over Amazon alternatives partly because every
time I sign into AWS all I can think is "Heck. Which of these product names
and acronyms matches the thing I came here to do?"

~~~
Someone1234
I will fully admit that when I started using AWS, it took me a while to learn
Amazon's "language." Unfortunately Amazon does a bad job of distinguishing
between their core offerings (e.g. S3, EC2, Route53, etc) and their very niche
ones (e.g. EMR, Kinesis, SWF, etc), so new users are left scrambling to figure
out what they need to know.

I understand their desire to create unique "products" that people can use in a
conversation (e.g. "Have you considered Route53 for your DNS?"), but
ultimately mixing common and niche things together and giving everything
confusing names is likely doing Amazon more harm than good.

That all being said, Amazon are slowly improving. See this page[0]. They now
have a list of their products and how they fit into different categories. But
the console can still be a jumbled mess of different acronyms and made up
words.

[0] [https://aws.amazon.com/](https://aws.amazon.com/)

~~~
rjbwork
This is why I REALLY like the MS/Azure way of doing things.

"I need to host a web app" "Okay there's Azure Web Apps for that"

"I need to store lots of files" "There's Azure Storage/Blob Storage for that"

"I need a SQL Database" "There's Azure SQL for that"

"I need a VM" "There's Azure Virtual Machines for that"

"I need a Data Lake" "There's Azure Data Lake for that"

"I need a Data Lake" "There's Azure Data Lake for that"

"I need a Data Warehouse" "There's Azure Data Warehouse for that"

"I need a Cache" "There's Azure Redis Cache for that"

I could go on, but you get the picture. Cute names are not the way to go when
you're offering dozens of services which may overlap with each other somewhat.
I can just scroll down a list of things MS offers on Azure and easily pick out
the things I need to use by their names alone.

~~~
Artemis2
As someone who does not use Azure, what is a Data Lake? Is it like a Cloud
but liquid?

~~~
redwards510
"A massive, easily accessible data repository built on (relatively)
inexpensive computer hardware for storing "big data". Unlike data marts, which
are optimized for data analysis by storing only some attributes and dropping
data below the level aggregation, a data lake is designed to retain all
attributes, especially so when you do not yet know what the scope of data or
its use will be."

~~~
Artemis2
> data marts

So it's just deeper down the rabbit hole.

~~~
rjbwork
The intended use is to be able to use things like Hadoop or other tabular text
processing systems to glean information from enormous amounts of data, then
once valuable insights are found, use the Data Lake source to process it into
a form suitable for a data mart, or preferably, a data warehouse.

------
ebbv
What's up with the Amazon spam? There are 5 different submissions on the front
page right now. They could have all been one. Bad form, AWS team.

~~~
andyjohnson0
AWS Global Summit [1] is taking place in NY at the moment. Hence the new
product announcements.

[1] [https://live.awsevents.com/](https://live.awsevents.com/)

~~~
ebbv
That doesn't address my question: why are they posted as 5 separate
submissions instead of as 1 LiveBlog post or something?

~~~
ceejayoz
Each one is a separate post, about a separate service, which would logically
generate separate, product-specific discussions.

~~~
andybak
I agree. A single post would just end up unmanageable and dominated by whoever
had the top comment. (I always yearn for a 'by date' sort for HN comments as
often the top comment hijacks the discussion on a tangent and it's annoying to
find people discussing the primary topic)

~~~
dang
> I always yearn for a 'by date' sort for HN comments

We can't make a big flat list and sort those by date without losing the
threadedness of the comments and thus their context. But we could sort
siblings by date at each level of the tree, including the top level. Is that
what you mean?

~~~
andybak
> without losing the threadedness

Sometimes this would be a good thing. Top-level (and in general 'higher up the
tree') comments completely dominate. It's rare for the direction of an HN
thread to shift much after a reasonable number of comments, as everything gets
buried quite far down.

I'd really like a 'what changed since I was last here' button. Quite often I
come back to a long thread I was especially interested in and I'm completely
unable to find out if anything of value has been added.

> But we could sort siblings by date at each level of the tree

More options = better. The intended audience of HN can cope with a few more
things to click. Treat us like power-users.

~~~
dang
HN has always treated users like power users. That's part of its design. But
there are other considerations, such as its minimalism and the fact that there
is a single shared view of the content.

~~~
andybak
> a single shared view of the content

So is the lack of options to resort and reorder comments an intentional thing?

It would be interesting to hear the reasons for it and what benefits you
consider it brings.

