GitLab Container Registry (about.gitlab.com)
449 points by _abnl on May 23, 2016 | 118 comments



Kind of unrelated, but I'm super happy to be working at GitLab.

Nearly everything we do is out in the open, and most features are available in CE, which is fully open source (https://gitlab.com/gitlab-org/gitlab-ce). I've wanted great, open source tools that don't look like dung for a long long time, and GitLab is definitely reaching that goal. Can't help but gush about the product we're building. :D

Here are some of my favorite upcoming features: https://gitlab.com/gitlab-org/gitlab-ce/issues/17575, https://gitlab.com/gitlab-org/gitlab-ce/issues/14661, https://gitlab.com/gitlab-org/gitlab-ce/issues/15337


Damn, you are one lucky person! Getting paid to work on OSS is a dream come true.

Keep up the awesome work everyone :)


Thanks! And I think Connor created his own luck. People were assuming he already worked at GitLab since he was so active on social media and on our issue trackers. If you consistently contribute to GitLab it is very hard for us not to hire you.


That's a pretty effective hiring method in my opinion. You guys should publish a blog post or something on your hiring practices and results - I'm sure the HN crowd would enjoy it.


Thanks, our hiring practices are detailed at https://about.gitlab.com/handbook/hiring/. What kinds of results should we include?

I'm also thinking about writing about an open organization, where the handbooks and issue trackers are public. Would that be interesting?


Yeah, definitely. It's an interesting concept. I didn't know about the handbooks being public; seeing employees interact on issues in the open and seeing the company's strategy posted publicly is very cool.


Not to jump onto the GitLab bandwagon here, but: Adam from Skymind (YC W16) here.

I just want to say the last 4 hires we've made (we're only 11, soon 12) have been from around the world (we're in 4 countries), and this has worked great for us. A thriving open source community is key to hiring for us. A lot of our team does some sort of traveling.

I myself will be working from Tokyo (currently in Korea, actually), moving from SF.

We make the time zones work :D. Key to that for us has been Gitter, where we can collaborate with and support the open source community while also using it to collaborate with partners and run things like training classes.

Just wanted to provide another example to prove GitLab isn't an anomaly. The same is true for a lot of the larger open source organizations (e.g. Canonical, Red Hat, Automattic).


Thanks for chiming in, Adam. Good to hear you're making it work. I think having an HQ is overrated. Technology is shrinking distance and making it easier to avoid the commute.


It's such a crazy effective way to hire people. Luckily most companies don't understand it ;-)


I want to work on growth for GitLab. Where are you guys seeing problems in market penetration, so I can fill out an application[0]?

[0] pun intended.

edit: To anyone interested, this is the public haml file that generates their website, to which they encourage contributions. So a pull request job application it is.


I'm running GitLab CE off a NAS to host my projects, and it works really well. Thank you all for making it freely available.


Glad to hear that, thanks for using GitLab! For other people, I think there are Synology packages on https://www.synology.com/en-us/dsm/app_packages/Docker-GitLa...


Unfortunately only for the Intel processor units. I didn't realize this until my ARM-powered device arrived and the package wasn't available.


Not to mention the awesome team at GitLab and the Core Team from the community. Awesome progress, quite transparent, and with an amazingly open approach. Like Buffer, but with a product one wants to work on.


Thanks for recognizing our Core Team, too. They're an integral part of our community. :)


Sorry if this is the wrong venue to ask this, but any idea if/when Codeship will natively support you guys?

It's honestly my one hangup since we do all of our deploys there.


Codeship founder here :) We've been getting a lot of requests for a GitLab integration recently. We're looking into it and hope to be able to build it soon (I wish it were already there).


I want GitLab + CodeShip so badly but I won't be holding my breath.

This has been a very standard response for the last 2 years.

https://twitter.com/codeship/status/454327470402322433


I know :/ Wish we had built it already.


Let me know if there's any way we can help! job at gitlab!


Fantastic, Gitlab are absolutely nailing it at the moment. Don't think there's any other hosted service offering so many CI/CD features for free under one roof.

It feels like what Github did for SCM, Gitlab are doing for CI.


Thanks Luke! We'll try to keep this going. We're working on deployment environments for our next release so you can see what is deployed in each environment. After that we can do deployments that need to be confirmed manually.


Wow, that's great news. Exactly what we are looking for. I was investigating using GitLab CI to build and then only deploy commits with a certain tag, but a dedicated workflow for deployments is much more convenient and less error-prone.


Glad to hear that. By the way, the 'only deploy tags' strategy is already possible today if you want to do it that way.
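
For reference, a minimal sketch of that strategy in .gitlab-ci.yml; the script line is a placeholder for whatever your deploy actually runs:

  # deploy only when a tag is pushed
  deploy:
    stage: deploy
    script:
      - ./deploy.sh   # hypothetical deploy script
    only:
      - tags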


I agree, this is awesome. Their post yesterday hinted at something, and I honestly didn't think it would be this exciting. I can't wait to use it.


Thanks, glad you like it. BTW I wonder how many people understood our hint yesterday: "We almost can't _contain_ our excitement about how our announcement on Monday will _register_ with you!".


Heh, I didn't read the hint until after the registry announcement, but saw what you did and appreciated it with the benefit of hindsight!


These days it seems that Docker is everywhere. I am new to Docker, and it seems to me like one more complicated system that developers now need to learn in order to deploy their applications.

How useful do you find Docker for applications which can be deployed on Heroku or Beanstalk? I can understand using Docker for a language ecosystem that is not supported on a public PaaS, or for people for whom a public PaaS is not an option.

I would like to know about the experience of using Docker in day-to-day development from people who have used Docker in team environments. How was your experience of converting the team to use Docker instead of their regular development/deployment? For example, at our company, for LAMP, MEAN, or Java stacks, we have defined a procedure for setting up dev machines to include all these tools, and developers know how to manage these on their own machines. Once the code is pushed to Jenkins, there are deploy scripts written by tech leads which take care of deployment to various servers or to Heroku.

In your Docker-driven development process, did everybody start using Docker on their development machines too? Or did everybody just keep using their local setup, with only one or two people handling the task of creating the final Docker images? Or do you just use your CI server to handle Docker image creation and deployment from that point onwards?


I am also pretty new to Docker and it took a few days to get my head into it, but I am using it mainly as a contained dev environment, similar to the way we use Vagrant. I am finding that Docker is much better suited to complex environments, and a lot easier to update.

Contrary to your fears of everyone having to learn Docker to develop, I see it as a tool that allows every developer on a team to be brought into new technology without having to learn a whole new toolchain. We are using docker-compose for this. Everyone has to install Docker and have some cursory knowledge of docker-compose commands, but that's it.

As an example, I recently shoehorned an Angular 2 front end into an existing project. Without Docker, this means every developer needs Node and all the dependencies before they can begin to work. With docker-compose, the next time they start up the environment, the npm container installs all of the dependencies, and the TypeScript compiler container listens for and compiles all changes to the webroot. The other developers don't even need to know what Node is, or that it's being used! See the sketch below.
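
Roughly, such a docker-compose.yml can look like the following; the service names, images, and paths here are illustrative, not our actual setup:

  # hypothetical docker-compose.yml; images and paths are illustrative
  version: '2'
  services:
    npm:
      image: node:5
      working_dir: /app
      volumes:
        - .:/app
      command: npm install
    tsc:
      image: node:5
      working_dir: /app
      volumes:
        - .:/app
      # watch the sources and compile TypeScript changes into the webroot
      # (assumes source files are configured via a tsconfig.json)
      command: sh -c "npm install -g typescript && tsc --watch --outDir webroot"

Everyone just runs docker-compose up and the containers take care of the rest.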

Docker is a very powerful tool with a lot of untapped potential I'm sure. I've no experience with docker in production, however.


> In your Docker-driven development process, did everybody start using Docker on their development machines too?

Yes. The build scripts produce an image as well as a JAR.

It doesn't do anyone any good to have the knowledge of how to package your code for production locked in just a few people's heads.


It seems like every time I set up some supporting infrastructure it ends up rolled into the next gitlab release a week later, not that I'm complaining (much).

If you haven't checked out gitlab in a while, you definitely should. It's been moving fast and come a long way lately.

Thank you Gitlab team for making an open source, self hosted platform and all the recent improvements you've made.


Thanks for your kind words. May I ask what supporting infrastructure you'll set up 3 weeks from now? We'll have to add it to the schedule for 8.9.


Well since you asked, I'm working on a system that writes perfect bug-free code for me, but if you can implement that I'll save myself the effort.

But really, the next thing on the roadmap is to get tighter integration between our CFM (saltstack/salt-api) and gitlab. Tie it into deploys, auto-open issues on failed states, etc., though that's gonna be more on the salt side than the gitlab side and likely wouldn't make sense for you guys to implement anyways.

I'm sure I'll end up setting up whatever does make it into 8.9 a week before it's released, though.


It would be nice to upload Salt modules directly from GitLab.

Since you can deploy from GitLab too, auto-opening issues for failed builds makes sense. I opened an issue on https://gitlab.com/gitlab-org/gitlab-ce/issues/17771

I predict you'll be setting up different environments a week before 8.9 since that is what we'll ship.


At this pace of innovation, GitHub will soon be a part of history! GitHub has done a tremendous job with forking and pull requests, but, at least recently, they've seemed disoriented! I hope they get back into shape!


I really don't know what to hope for.

All I know is that I really enjoy watching what looks like a full-on feature battle between GitLab and GitHub at this moment. As a guy who uses both (GitLab for my job, GitHub personally), I feel like I'm getting the best of both worlds.

Although, to be perfectly honest, it feels like, at this point of the "feature battle", GitLab has started embracing and actually improving every single thing I liked about GitHub, to the point where I'm questioning my stay on GitHub for personal projects.


Considering GitLab is the more open source of the two, the answer always has to be GitLab.

That said, it still sucks that most free software communities cannot use self-hosted GitLab because too many features are stuck in the enterprise edition, but it's way better than what GitHub is offering, which is nothing.


now if only they could get gitlab.com page loads consistently under 1 second...

x_x


This is my first week at Gitlab. I've joined along with a few other people (one of whom has a great big bushy beard) to specifically make you happy.

There are interesting challenges that currently make this non-trivial, but expect things to improve greatly in the near future.


Yes, we should fix the slow page loads on GitLab.com. We're working on it in https://gitlab.com/gitlab-com/operations/issues/42. I'm sorry for the current slowness.


Yeah the site is taking a ridiculous amount of time to load.


Yup! It supposedly is written in a much more efficient language, but it's severely slower than GitHub and Bitbucket.


It's written in Ruby like GitHub, but the latter has considerably more experience running high-traffic systems.

Just perused their jobs page and will apply this week. Sounds like they could use some Varnish and Ceph talent :)


For sure we're interested in people that have experience with Ceph.


Oh, I confused it with Gogs, sorry!


Everyone at GitLab has been doing an amazing job shipping some really cool features.

I'm really looking forward to trying out the new container registry and move away from my hack-y solution that I use right now to build Docker images on my own VM, and move back to the shared runners :)


Thanks Andrew, I hope your move goes smoothly. We want to allow everyone to go from idea to production without friction. We're extending GitLab so this can be done simply, without having to script things together yourself.


I'm excited for this. It seems like GitLab is moving more toward an all-in-one solution, compared to GitHub, which focuses on "social coding", whatever that means now.

So to try out this new feature (together with the pipelines), I tried setting up a simple project that uses a docker image to serve a simple html page.

However, it seems like it's not possible to build/push from the CI system (unless you set up a self-hosted runner), which kind of leaves this "Container Registry" without value, because I still need to manually build/push my images from one machine...


Thanks, our idea is indeed to build an all-in-one solution. For more information see https://about.gitlab.com/direction/#scope

You should be able to build/push from the CI system, see the example GitLab CI configuration file in the blog post.

EDIT: For GitLab.com users using shared runners this is waiting on the Runner upgrade that we'll do in the coming days. If you use your own runner that runs 1.2 it should be fine.


Does this work for autoscale runners? Are those privileged by default?


It is about the Runners needing to be on version 1.2. We'll upgrade the .com autoscaling ones in the coming days. If you run your own autoscaling runners you can upgrade yourself.


Ah, thanks to sytse, Snappy, and tmaczukin for the replies. I missed that.

My configuration currently looks like this (and I'm guessing I'll hold off for a few days for the shared workers to get updated):

  image: docker:latest
  services:
  - docker:dind
  stages:
    - build
    - deploy
  build:
    stage: build
    script:
      - docker build -t registry.gitlab.com/victorbjelkholm/deploy-html-test:latest .
    only:
      - master
  deploy:
    stage: deploy
    script:
      - docker push registry.gitlab.com/victorbjelkholm/deploy-html-test:latest
    only:
      - master


If you're going to use the shared runners, I don't think that configuration is going to work. The build step will build the image locally, and even though you named it for the registry, it won't actually push to the registry automatically. By breaking the push out into a separate step, you're actually going to run that deploy stage on a fresh DigitalOcean instance, so your previously-built image will be lost.

That's why the example pushed the image right after the build step.

Having said that, if you're literally using that script, you don't even have tests, so you may as well just put both steps into a single build stage like the simplified example. :)

For now, you'll need the explicit `docker login` as well. We'll work to remove that.
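
Putting that together, a sketch of the corrected configuration: a single stage that logs in, builds, and pushes. The gitlab-ci-token user and CI_BUILD_TOKEN variable follow the blog post's example:

  image: docker:latest
  services:
    - docker:dind
  build_and_push:
    stage: build
    script:
      # log in with the per-build token; this explicit step should become unnecessary later
      - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
      # build and push in the same job so the image isn't lost between stages
      - docker build -t registry.gitlab.com/victorbjelkholm/deploy-html-test:latest .
      - docker push registry.gitlab.com/victorbjelkholm/deploy-html-test:latest
    only:
      - master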


Thanks! You didn't miss anything; we updated the blog post after the fact: https://gitlab.com/gitlab-com/www-gitlab-com/commit/c758f6b2...


Our shared runners should be updated within a week. Then you will be able to use Container Registry on GitLab.com without self-hosted runner.


Yeah, sorry about that. We'll update the blog post to clarify. We're updating the CI runners this week to support this.



After GitHub moved to unlimited repositories, I started to question if our small team should stay on GitLab - and this just triple-confirmed that we will.

Awesome feature for an already awesome product. Great job!


Thank you very much. I hope your switch goes smoothly.


Up until recently I never even considered using CI in my projects because I just didn't have the time. The way GitLab are implementing all these tools makes it not only easy to use, but fun.


Glad to hear that! This was our intention. CI and CD are things everyone knows they should do, but they aren't always done. By making them easier to set up, we try to get more people to do them and feel good about themselves.


Interesting move.

Seems GitLab is moving fairly fast (compared to the competition) and going for that all-in-one ecosystem (if you want it), from SCM to deployment, with the range of products they've introduced.


We are thinking big. The integrated Container Registry is part of our plan. We will soon have integrated deployments with easy-to-access environments. We also plan to introduce manual actions on pipelines, allowing you to execute arbitrary things, e.g. promote to production, but also merge back to master.

https://gitlab.com/gitlab-org/gitlab-ce/issues/17009 https://gitlab.com/gitlab-org/gitlab-ce/issues/17010


Other good news for GitLab: Zach Holman just joined them as an advisor!

https://mobile.twitter.com/holman/status/734842346278244352


HN discussion of that tweet is at https://news.ycombinator.com/item?id=11756674. BTW, that post and this post are #1 and #2 on HN right now.


Gitlab is what Github was 3 years ago.

They are on a roll, rolling out features left and right, and seem to nail what every engineer needs.

Kudos to the team! Keep rocking!


Thank you, we're very determined to keep shipping. Please let us know if there is something you can use that we're not doing yet.


I'm really impressed by the hard work of GitLab team, keep it up!

If anybody from GitLab is reading: any plans for rkt support?


Thanks!

No current plans for rkt, but feel free to create an issue to discuss it.


For those who don't know what rkt is, I am presuming that it is the App container runtime: https://coreos.com/rkt/ https://github.com/coreos/rkt


I am also interested in rkt support, so I created an issue like you suggested. https://gitlab.com/gitlab-org/gitlab-ce/issues/17784


Can I use gitlab container registry with gitlab.com? Can I use private images in said registry with public gitlab.com ci builders? I haven't found a way to use private containers with gitlab.com ci yet without spinning up a worker for every project. Maybe a way to register a "team wide" builder?


Currently not. The limitations can be found here: http://docs.gitlab.com/ce/container_registry/README.html#lim... We plan to improve that in one of the upcoming releases.


So awesome! I've been invested in the Docker ecosystem lately, and one of the next things I've wanted to set up is a decent CI workflow.

I wish GitHub would also ship more features at the pace GitLab is. The only issue I've had with GitLab is the response times of their UI. Hopefully they sort that out soon.


I suppose you are using GitLab.com. We're working on improving the speed [0]. If you are willing to run your own instance, it should be fast.

[0]: https://gitlab.com/gitlab-com/operations/issues/42


It's happening: GitLab has basically closed the feature gap with GitHub and is surpassing it.


Once the performance gap closes it'll be even more exciting.

I just worry about 10, 15, 20 years in the future when ArchiveTeam has to back up GitHub and GitLab and whatever else... there's so much data!


There will also be bigger storage :)


I wish GitLab had an easy, clean way to import from GitHub. The current method involves setting up OAuth information in the config file and restarting GitLab. Not nice :/ I fail to see why they can't just work with GitHub API tokens for import.


This is great! But I'm curious if anyone at GitLab can comment on what this gives you over running the existing Docker registry? Can we configure this to use S3 to store the resulting images?

Our runner image has Docker; we run our tests with docker-compose, and if they pass we push the images to our existing registry. In fact, our .gitlab-ci.yml looks very similar to the example under "elaborate example" in the blog post.

Just wondering what we'd be missing out on if we didn't switch to Gitlab Container Registry.


The biggest plus of using the integrated registry is that you get GitLab's integrated authentication and authorization, which follows the groups and members assigned to your GitLab projects, making it really easy to have private container repositories stored in the registry.

Second, the built-in registry is really easy to configure and maintain: you only have to specify the address and provide a certificate to start using it. You also use the same backup mechanism as for your GitLab installation.

In the future we may make it possible to use external storage for container images, like S3. This is something that is already supported by docker/distribution.


I am super stoked to see this and will be using the crap out of it and pointing others to it (the registry is kind of a pain to get going)! It looks like this is just the v2 registry (from Distribution) integrated into Gitlab, so I'm wondering what's stopping me from backing this registry with S3? Is it just not supported by the Gitlab config yaml? I back my private registry with S3 and it's just a couple of config options to enable it. Or am I misunderstanding some fundamental concept here? Thanks for the awesome work!


Glad to hear you're super stoked! I think you're on the money regarding the S3 backing: it's a matter of making that configuration accessible. I expect you can work around that by doing it yourself.
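
For anyone wanting to try that workaround, a sketch of the relevant storage section in docker/distribution's config.yml; the credentials and bucket name are placeholders:

  # registry config.yml storage section; values below are placeholders
  storage:
    s3:
      accesskey: AWS_ACCESS_KEY
      secretkey: AWS_SECRET_KEY
      region: us-east-1
      bucket: my-registry-bucket
      encrypt: true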


It really comes down to cost and convenience. We don't charge anything additional to run the container registry (including unlimited private projects for personal or business use), and it's already installed with GitLab.

Having said that, we do love deep integration, so we'll continue to improve it going forward. If you have any ideas for improvements, please do create an issue!


Along the same lines, has someone already worked on integration with a Debian repository (with aptly or similar) and has some links to share on how to do it the smart way?

We're thinking about using FPM to create the Debian package, which is then retrieved as an artifact of the gitlab-ci build stage,

and then having a separate service that receives a webhook at the end of the (successful) build, retrieves the artifacts, and updates a repository using aptly.

Does that seem like the right way, or is there some much simpler solution?
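
For what it's worth, a sketch of what that FPM build stage could look like in .gitlab-ci.yml; the package name, version scheme, and paths are hypothetical:

  # build a deb with FPM and keep it as a build artifact
  package:
    stage: build
    image: ruby:2.3          # FPM is a Ruby gem; the image choice is an assumption
    script:
      - gem install fpm
      # package the (hypothetical) ./build directory as /opt/myapp
      - fpm -s dir -t deb -n myapp -v "0.1.$CI_BUILD_ID" ./build=/opt/myapp
    artifacts:
      paths:
        - "*.deb"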


At my work, we have a setup close to this. On a successful Jenkins build (as a post-build action), a deb is generated with FPM (awesome piece of software, btw) and uploaded to an S3 bucket. Then on another machine, a cron job that runs every minute syncs the S3 bucket to a local aptly repository and publishes it to another S3 bucket, which is the real Debian repository.

This works well, but the S3+cron part is probably not the smartest way, and publication can take some time (like 5-10 min).


I know it probably doesn't matter too much for internal use, but what does Lintian say about your packages?


Many bad things! I didn't check until now, but it's a bunch of "non-standard-dir-perm" and "dir-or-file-in-var-www" and things like "maintainer-name-missing". I think it's possible to build correct Debian packages with FPM, but for our use we just want to easily build native deb packages.


> I think it's possible to build correct Debian packages with FPM [...]

Of course it is possible, it's just a ridiculous amount of work. It's much easier to build the package with the proper tools in the first place.


Aptly is a piece of cake compared to reprepro, because of the REST API. I just run it on a separate box, and make curl invocations from Jenkins jobs to post debs to it at the end of the build.

Depending on what it is you're building, using the proper Debian tooling (dpkg-buildpackage, etc.) is not that much harder than FPM, and cooperating with the system gives you a lot of goodies for free (source debs, cowbuilder, etc.).
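
As a sketch, the curl side of that is only a few calls against aptly's REST API; expressed here as a GitLab CI publish job, where the host, upload directory, repo, and distribution names are all placeholders:

  # hypothetical publish stage posting a deb to aptly's REST API
  publish:
    stage: deploy
    script:
      # upload the package to aptly's incoming files area
      - curl -fsS -X POST -F file=@myapp_0.1.0_amd64.deb http://aptly.internal:8080/api/files/myapp
      # add the uploaded file to the repo, then re-publish the distribution
      - curl -fsS -X POST http://aptly.internal:8080/api/repos/myrepo/file/myapp
      - curl -fsS -X PUT -H 'Content-Type: application/json' --data '{}' http://aptly.internal:8080/api/publish/:./jessie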


I've tried several times to figure out how to get Docker working in a situation like mine. And we've been considering GitLab as well. So this is likely a good time to experiment.

Doing all the research on how to integrate Docker into my particular situation has been daunting. I really need to track down some online courses or something. The articles just aren't cutting it. Or I need to find a mentor just to ask stupid questions to.


Looks great. One thing that does not appear to be clear, though, is what happens if you build multiple Docker images from the same project.

Our deployment workflow at the moment builds two docker images, one for the web app and another for the background services. Both share the same code.

It would appear you can only have one docker image per project?


AFAIK you can just have 2 stages to build each image and push both under different tags.
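
Something like this, as a sketch (assuming the docker:dind setup and docker login from earlier in the thread; the image names and Dockerfile paths are placeholders):

  # two jobs pushing the web and worker images as different tags of one repository
  build_web:
    stage: build
    script:
      - docker build -t registry.gitlab.com/group/project:web -f Dockerfile.web .
      - docker push registry.gitlab.com/group/project:web
  build_worker:
    stage: build
    script:
      - docker build -t registry.gitlab.com/group/project:worker -f Dockerfile.worker .
      - docker push registry.gitlab.com/group/project:worker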


You can do this, but currently you can have only one image per project. We did this to keep it simple. You can use a dummy project to work around it, or consider making the web app and the background services separate projects.


Am I correct that this is only for on-prem deployments, and that they aren't going to be offering a public container registry a la Docker Hub or Quay?


You can enable Container Registry for your projects on GitLab.com :)

And in a few days our shared runners should be updated to support the full docker workflow with the Container Registry.


That's really amazing! Are there size limits?


Our soft limit is 10GB per project for GitLab.com https://about.gitlab.com/gitlab-ci/


The Container Registry is available on our free GitLab.com.


This might sound like a silly question, but does this require gitlab, or can it work standalone?


You install it by installing the GitLab Omnibus package, but you can push and pull to it from other applications too; there's no need to use GitLab.


When can we expect to see this on githost.io?


We are investigating what it will take to enable this on GitHost.io. At first glance, it doesn't seem to be too difficult. I created https://gitlab.com/gitlab-com/githost/issues/12 to track this feature request.


These guys are f*cking killing it. Way to go!


Awesome, a new place to host your unpatched, opaque disk images!


I think the patching of Docker container images is a big problem. We try to provide a completely automated flow in GitLab using CI, CD, and the container registry. I would love to have a big button that says 'update all my containers'. I've made an issue https://gitlab.com/gitlab-org/gitlab-ee/issues/592


The problem is that Docker, and more generally using stacks of binary disk images, is fundamentally flawed with respect to security and reproducibility. It's nontrivial to inspect these images for vulnerabilities because there is nothing that specifies the precise set of software that is in that image. Some stuff is compiled from source, some stuff is installed via one or more package managers, each of which may bundle additional software that one may not know about if they didn't inspect each piece of software carefully. Furthermore, one cannot even verify the image reasonably because the results are different depending on when it was built.

In short, container images as popularized by Docker are insecure by design.


Don't you think that gitlab adding container registry support will encourage more people to build their own images, as opposed to trusting "black-box" images from elsewhere? Combining a base image with the deployable artifact is much more efficient than baking amis or other images. You then know exactly what your image contains and nothing more, because you built it yourself.

Of course I'm looking at this from the perspective of deploying micro-services and immutable infrastructure. Your use-case may be different.


Any nontrivial image relies on a large number of other images as a base, and just because you built it yourself doesn't mean that you didn't just download and install software with known security vulnerabilities.


Yes, but you will ALWAYS have that issue with software you build yourself. This is primarily why I maintain an in-house yum repository for anything that we use out of our distribution repositories; it takes a little more effort to build RPMs, but it's worth it from a maintainability and security aspect. (As I type this, I'm working on a Salt-based package management tool, somewhat akin to Katello/Satellite, that I can use to manage upgrades that Puppet doesn't.)


The thought is you use a vanilla LTS image as a base. If that needs to be updated, then your CI can easily do it.

You then combine that with your installable artifact (deb, rpm). It's the same exact process as using classic images, so I'm not sure what the complaint about Docker specifically would be.


At least by building images by myself I'm taking responsibility in a way for the security of them - rather than relying on the (in)security of whatever base image.

I did see a feature somewhere for scanning container images for security vulnerabilities, but I think something closer to FreeBSD's pkg-audit is needed.


I agree that building your own images is the future. We want to make this as easy as possible. I think it is essential for security.


Can I play too?

The problem is that unix, and more generally any operating system, is fundamentally flawed with respect to security and reproducibility. It's nontrivial to inspect these installations for vulnerabilities because there is nothing that specifies the precise set of software that is in that installation. Some stuff is compiled from source, some stuff is installed via one or more package managers, each of which may bundle additional software that one may not know about if they didn't inspect each piece of software carefully. Furthermore, one cannot even verify the installation reasonably because the results are different depending on when it was installed.

In short, operating systems are insecure by design.


Indeed. I recommend you look into one of the projects that is trying to mitigate this problem by allowing you to precisely specify, inspect, and change the dependency graph of software on your computer. That is, Guix or Nix.

Guix (and I presume Nix) can then take advantage of that transparency and control to build containers, virtual machines, and bare-metal systems to your specifications.


If you're building Go or Rust packages, a lot of the time all you need is the output executable alone. :-) The host OS and Docker version provide most of the low-level bits.


The registry is just that: a registry. You can push a Guix-generated image to it if you wish and use it with any container runtime. rkt supports the same API; the Docker relation is by name only.

In fact, being able to push static images directly to such a registry with Guix would be a great feature. Both for continuous integration (without needing Hydra) and for coupling with a cluster scheduler such as Kubernetes.






