
AWS CodeArtifact: A fully managed software artifact repository service - rawrenstein
https://aws.amazon.com/about-aws/whats-new/2020/06/introducing-aws-codeartifact-a-fully-managed-software-artifact-repository-service/
======
WatchDog
This is a fairly obvious service that has been missing for a while; nice
to see them provide a solution.

Most dependency management tools have some kind of hacky support for using S3
directly.

Full fledged artifact management tools like Artifactory and Nexus support S3
backed storage.

Interesting to see that the pricing is approximately double that of S3, for
what I imagine is not much more than a thin layer on top of it.

~~~
ludjer
Considering the price of Nexus and Artifactory, this is way cheaper for a SaaS
offering with SLAs. I imagine Artifactory is really going to have to up their
product offering, or at least lower their entry prices.

~~~
hn_throwaway_99
GitHub already released their package repo last year (and have since purchased
npm). If anything, I imagine that had Artifactory pretty scared more than this
does. If your company already uses GitHub, it's a hard sell to say why you'd
need something like Artifactory over the GitHub package repo.

~~~
Turbots
And since I've been trying GitHub Actions, I don't know why you would need
Artifactory, Nexus, or this AWS service anymore. GitHub offers private
repositories, releases, project pages, and CI/CD through Actions, and Microsoft
is offering plenty of deployment options on Azure with AKS or plain Azure
Compute.

~~~
weehack
[https://twitter.com/pahudnet/status/1271141814389493760](https://twitter.com/pahudnet/status/1271141814389493760)

------
antoncohen
The login credentials expire after 12 hours (or less)[1], just like with their
Docker registry (ECR). That makes it pretty annoying to use, especially on
developer laptops.
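
For context, the flow the docs describe goes through the AWS CLI; a sketch like
the following (domain, account, and repository names are placeholders) rewrites
your pip config with a token that is good for at most 12 hours, and has to be
re-run once it lapses:

```shell
# Fetch a temporary auth token and point pip at the CodeArtifact repository.
# "my-domain", "my-repo", and the account ID are placeholder values.
# The token expires in <= 12 hours, so this must be re-run when it lapses.
aws codeartifact login --tool pip \
    --domain my-domain \
    --domain-owner 111122223333 \
    --repository my-repo
```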

GCP has a similar offering[2]. And GitHub[3].

[1] [https://docs.aws.amazon.com/codeartifact/latest/ug/python-configure.html](https://docs.aws.amazon.com/codeartifact/latest/ug/python-configure.html)

[2] [https://cloud.google.com/artifact-registry](https://cloud.google.com/artifact-registry)

[3]
[https://github.com/features/packages](https://github.com/features/packages)

~~~
blaisio
I could not disagree more re. the expiring credentials. It is a bad practice
to have credentials that never expire, especially on developer laptops,
especially credentials of this nature. Developers frequently store this stuff
in plain text in their home directory or as environment variables. That's a
huge security risk! This service manages the process of generating and
expiring credentials automatically, which is awesome.

~~~
antoncohen
This service is for code artifacts. What credentials do the developers use to
access source code? Do those expire?

It is common for developers to use Git to store source code, in a hosted
service like GitHub. It is common to use SSH keys to access Git. Frequently
those SSH keys are generated without passphrases. Those are non-expiring
credentials stored on disk. If HTTPS is used to access Git, it will likely be
with non-expiring credentials.

I'm not saying short-lived credentials are bad, not at all. I'm pointing out
how this service differs from similar services, requiring a change in
workflow, which might be annoying to some people. Not everyone is operating
under the same threat model.

~~~
tiew9Vii
Your source code may reference a shared library at a specific version from a
trusted source to build. This trusted source is CodeArtifact.

The short-lived passwords are a non-issue, and a good thing. Your dependency
resolver should handle fetching the new password, and most orgs I’ve worked at
had scripts dealing with short-lived passwords/IAM.
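
Those scripts tend to be thin wrappers; a rough sketch (every name and account
ID here is made up) that mints a fresh token before each install:

```shell
#!/bin/sh
# Sketch of a wrapper that fetches a fresh CodeArtifact token before a build.
# Domain, repo, region, and account values are placeholders.
TOKEN=$(aws codeartifact get-authorization-token \
    --domain my-domain --domain-owner 111122223333 \
    --query authorizationToken --output text)

# Use the token as the password for the repository's pip index URL.
pip install --index-url \
    "https://aws:${TOKEN}@my-domain-111122223333.d.codeartifact.us-east-1.amazonaws.com/pypi/my-repo/simple/" \
    -r requirements.txt
```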

~~~
antoncohen
> Your dependency resolver should handle fetching the new password

According to AWS's documentation, none of the supported dependency resolvers
will fetch the new password[1][2][3].

If they were capable of automatically fetching the new password without human
intervention, it would mean they have credentials for generating credentials.
If this isn't on an EC2 instance (where an IAM role can be used), that means
there are long-lived credentials (probably written to disk) used to generate
short-lived credentials.

This would be the case if you are using a hosted CI service that doesn't run
on your own EC2 instances. You would probably be providing an AWS key and
secret, which would then be used to generate the short-lived credentials. But
the key and secret won't be short-lived, and will have at least the same
access as the short-lived credentials (probably more access).
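
In a hosted CI job that usually looks something like the sketch below (the
long-lived key/secret sit in the CI service's secret store; domain, repo, and
account values are illustrative), following the env-var approach in [3]:

```shell
# CI step: AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY come from the CI
# secret store -- these are the long-lived credentials being discussed.
# They are used to mint a short-lived token for npm.
export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token \
    --domain my-domain --domain-owner 111122223333 \
    --query authorizationToken --output text)

# Point npm at the repository and attach the short-lived token.
npm config set registry \
    "https://my-domain-111122223333.d.codeartifact.us-east-1.amazonaws.com/npm/my-repo/"
npm config set \
    //my-domain-111122223333.d.codeartifact.us-east-1.amazonaws.com/npm/my-repo/:_authToken \
    "$CODEARTIFACT_AUTH_TOKEN"
npm ci
```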

> Your source code may reference a shared library at a specific version from a
> trusted source to build. This trusted source is CodeArtifact.

HTTPS is what forms the trust between you and the artifact repository. Short-
lived passwords don't do anything to ensure you are talking to the real
trusted source. They may make it so the artifact repository can better trust
you are who you say you are, but I don't see what that has to do with safely
getting a specific version of a library.

[1] [https://docs.aws.amazon.com/codeartifact/latest/ug/python-configure.html](https://docs.aws.amazon.com/codeartifact/latest/ug/python-configure.html)

[2] [https://docs.aws.amazon.com/codeartifact/latest/ug/npm-auth.html](https://docs.aws.amazon.com/codeartifact/latest/ug/npm-auth.html)

[3] [https://docs.aws.amazon.com/codeartifact/latest/ug/env-var.html](https://docs.aws.amazon.com/codeartifact/latest/ug/env-var.html)

~~~
shortj
> that means there are long-lived credentials (probably written to disk) used
> to generate short-lived credentials.

In terms of local development experience, most mature organizations will have
these "long lived" credentials still require MFA at least once per day, and
lock them down to particular IP addresses, before they can be used to get the
temporary credentials.[1]

> This would be the case if you are using a hosted CI service that doesn't run
> on your own EC2 instances.

Typically you'd want to see third-party platforms leveraging IAM cross-account
roles these days to fix the problem of them having static credentials.
Granted, many of them are still using AWS access key and secret.

This is still not a "solved" area though, and a point of concern I wish would
get more aggressively addressed by AWS.

[1] [https://github.com/trek10inc/awsume](https://github.com/trek10inc/awsume),
[https://github.com/99designs/aws-vault](https://github.com/99designs/aws-vault),
and a few other tools make this much easier to deal with locally.

------
pskinner
Is it just me or is this missing plain artifacts - those that are not packaged
for a specific tool? I'm thinking of plain binaries and resources required for
things like db build tools and automated testing tools - just files really.
How do I publish a tarball up to this, for example?

Also the lack of nuget is a major issue.

~~~
greyskull
I think CodeArtifact loses value when you aren't using a package manager; the
benefit is an API-compatible service with various controls and audits built on
top.

Out of curiosity, what would you want from this service for the "plain binary"
use-case when S3 already exists?

~~~
ec109685
It’s nice having the metadata around the push available versus raw blobs to
s3.

~~~
bostik
Objects in S3 can have custom metadata associated with them. Look at the
returned data for the HeadObject call.[0]

It's not advertised in the documentation, but HeadObject(Bucket,
Key)['Metadata'] is a neat dictionary of custom values.
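
For example (bucket, key, and metadata values here are made up), you can attach
version info at upload time and read it back without downloading the object:

```shell
# Attach custom metadata (stored as x-amz-meta-* headers) at upload time.
# Bucket name, key, and values are placeholders.
aws s3api put-object --bucket my-artifacts --key builds/app-1.2.3.tar.gz \
    --body app-1.2.3.tar.gz \
    --metadata version=1.2.3,git-sha=abc1234

# Read the metadata back via HeadObject, without fetching the object body.
aws s3api head-object --bucket my-artifacts --key builds/app-1.2.3.tar.gz \
    --query Metadata
```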

0: [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)

------
tkinz27
It’s frustrating to not see more system package management (deb, rpm) from
these new services (GitHub and GitLab, for instance).

Are others not packaging their code in intermediate packages before packing
them into containers?

~~~
manigandham
What's the purpose of intermediate packages if you're already using
containers?

~~~
tkinz27
Very large c++/python/cuda application that is packed into various different
images (squashfs images, but functionally the same).

We end up having a lot of libraries that are shared across multiple images.

~~~
manigandham
Would it not be easier to just pack into different base images? Docker is very
efficient with reusing these layers.

------
FrenchTouch42
I'd really like to see more language support added (Ruby, etc.). It could be a
great alternative to Artifactory.

------
scarface74
No C#/Nuget support? Really?

~~~
tkahnoski
AWS products always take an MVP approach; the rest is driven by customer
feedback on the roadmap. CodeGuru/CodeProfiler/X-Ray similarly launched with
limited language support that they've built out over time.

Whenever I see a product announcement like this missing something I need to
use it, I immediately ping our Technical Account Manager to get the vote up
for a particular enhancement.

~~~
swyx
sounds surprisingly manual. has AWS not tried to formalize some sort of
feature voting system?

~~~
tkahnoski
Some products have started doing public GitHub “roadmaps”, using GitHub issues
to get more accessible public feedback, but who knows how that gets processed
internally.

------
lflux
You know it's an AWS service when you look at it and go "Huh, it's only 2x the
price of S3, what a bargain!"

~~~
setheron
2x the price of S3 is very cheap.

------
andycowley
No deb, RPM, or nuget. Half a product really. As annoying and expensive as
Nexus and Artifactory are, at least they're more fully featured.

------
soygul
Seems like a direct competitor to Artifactory and Nexus. I wonder if it is
profitable for them to create an inferior alternative to fully fledged
artifact managers, or if they are doing this for product-completeness of AWS.

------
doliveira
I'd wait a few years for it to be ready; AWS developer tools are really crude.
Last year I had to build a Lambda just to be able to emit multiple output
artifacts in CodePipeline.

------
saxonww
Appears to support ivy/gradle/maven, npm/yarn, and pip/twine only.

------
StreamBright
What is wrong with S3?

------
dahfizz
I don't get it.

The git server you use supports artifacts already. You could also just put all
of your artifacts in an S3 bucket if you needed somewhere to put them, which
is exactly what this is, but more expensive. I don't understand when this would
save you money or simplify devops.

~~~
cle
It’s not “exactly what this is”. Every time AWS or Azure or GCP releases a
service, there are droves of people on HN decrying it as “just <something
I’m familiar with>”, without bothering to understand whether that’s actually
true. It’s not.

Skim the docs and you will see it is not “just S3”.

~~~
thayne
Yep. I've worked on a solution that "just uses S3". It is not trivial.

