Kind of unrelated, but I'm super happy to be working at GitLab.
Nearly everything we do is out in the open, and most features are available in CE, which is fully open source (https://gitlab.com/gitlab-org/gitlab-ce). I've wanted great, open source tools that don't look like dung for a long long time, and GitLab is definitely reaching that goal. Can't help but gush about the product we're building. :D
Thanks! And I think Connor created his own luck. People were assuming he already worked at GitLab since he was so active on social media and on our issue trackers. If you consistently contribute to GitLab it is very hard for us not to hire you.
That's a pretty effective hiring method in my opinion. You guys should publish a blog post or something on your hiring practices and results - I'm sure the HN crowd would enjoy it.
Yeah, definitely. It's an interesting concept. I didn't know about the handbooks being public; seeing employees interact on issues in the open and seeing the company's strategy posted publicly is very cool.
Not to jump on the GitLab bandwagon here, but this is Adam from Skymind (YC W16).
I just want to say that the last 4 hires we've made (we're only 11, soon to be 12) have been from around the world (we're in 4 countries), and this has worked great for us. A thriving open source community is key to hiring for us. A lot of our team does some sort of traveling.
I myself will be working from Tokyo (currently in Korea, actually), moving from SF.
We make the time zones work :D. Key to that for us has been Gitter, where we can collaborate with and support the open source community while also using it to collaborate with partners and run things like training classes.
Just wanted to provide another example to show GitLab isn't an anomaly. The same is true for a lot of the larger open source organizations (e.g. Canonical, Red Hat, Automattic).
Thanks for chiming in, Adam. Good to hear you're making it work. I think having an HQ is overrated. Technology is shrinking distance and making it easier to avoid the commute.
I want to work on growth for GitLab. Where are you guys seeing problems in market penetration, so I can fill out an application[0]?
[0] pun intended.
edit: To anyone interested, this is the public haml file that generates their website, to which they encourage contributions. So a pull request job application it is.
Not to mention the awesome team at GitLab and the Core Team from the community. Awesome progress, quite transparent, and an amazingly open approach.
Basically Buffer, but with a product one wants to work on.
Codeship founder here :) We've been getting a lot of requests for a GitLab integration recently; we're looking into it and will hopefully be able to build it soon (I wish it were already there).
Fantastic, GitLab are absolutely nailing it at the moment. I don't think there's any other hosted service offering so many CI/CD features for free under one roof.
It feels like what GitHub did for SCM, GitLab are doing for CI.
Thanks Luke! We'll try to keep this going. We're working on deployment environments for our next release so you can see what is deployed in each environment. After that we can do deployments that need to be confirmed manually.
Wow, that's great news. Exactly what we are looking for. I was investigating using GitLab CI to build and then only deploy commits with a certain tag, but a dedicated workflow for deployments is much more convenient and less error-prone.
Thanks, glad you like it. BTW I wonder how many people understood our hint yesterday: "We almost can't _contain_ our excitement about how our announcement on Monday will _register_ with you!".
These days it seems that Docker is everywhere. I am new to Docker, and so far it seems like one more complicated system that the developer now needs to learn in order to deploy his or her application.
How useful do you find Docker for applications which can be deployed on Heroku or Beanstalk? I can understand using Docker for a language ecosystem which is not supported on a public PaaS, or for people for whom a public PaaS is not an option.
I would like to know about the experience of using Docker in day-to-day development from people who have used Docker in team environments. How was your experience of converting the team to use Docker instead of their regular development/deployment workflow? For example, at our company, for LAMP or MEAN or Java stacks, we have defined a procedure for setting up the dev machines to include all these tools, and developers know how to manage these on their own machine. Once the code is pushed to Jenkins, there are deploy scripts written by tech leads which take care of deployment to various servers or to Heroku.
In your Docker-driven development process, did everybody start using Docker on their development machine too? Or did everybody just keep using their local setup, with only one or two people handling the task of creating the final Docker images? Or do you just use your CI server to handle the Docker image creation and deployment from that point onwards?
I am also pretty new to Docker and it took a few days to get my head into it, but I am using it mainly as a contained dev environment, similar to the way we use Vagrant. I am finding that Docker is much better suited to complex environments, and a lot easier to update.
Contrary to your fears of everyone having to learn Docker to develop, I see it as a tool that allows every developer on a team to be brought onto new technology without having to learn all about a new toolchain. We are using docker-compose for this. Everyone has to install Docker and have some cursory knowledge of docker-compose commands, but that's it.
As an example, I recently shoehorned an Angular 2 front end into an existing project. Without Docker, this means every developer needs Node and all the dependencies before they can begin to work. With docker-compose, the next time they start up the environment, the npm container installs all of their dependencies, and the TypeScript compiler container watches and compiles all changes to the webroot. The other developers don't even need to know what Node is, or that it's being used!
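To give a rough idea, a setup along those lines might look something like the compose file below. This is a hypothetical sketch rather than our actual file; the image tag, paths, and the `tsc` npm script are assumptions.

```yaml
# docker-compose.yml -- hypothetical sketch of a Node/TypeScript dev
# environment; image tag, paths, and npm scripts are assumptions.
version: '2'
services:
  # Installs JS dependencies into the shared volume on startup.
  npm:
    image: node:6
    working_dir: /app
    volumes:
      - .:/app
    command: npm install
  # Watches the source tree and recompiles TypeScript on every change.
  tsc:
    image: node:6
    working_dir: /app
    volumes:
      - .:/app
    command: npm run tsc -- --watch
```

Everyone else just runs `docker-compose up` and never has to install Node locally.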
Docker is a very powerful tool with a lot of untapped potential I'm sure. I've no experience with docker in production, however.
It seems like every time I set up some supporting infrastructure, it ends up rolled into the next GitLab release a week later. Not that I'm complaining (much).
If you haven't checked out GitLab in a while, you definitely should. It's been moving fast and has come a long way lately.
Thank you, GitLab team, for making an open source, self-hosted platform, and for all the recent improvements you've made.
Well since you asked, I'm working on a system that writes perfect bug-free code for me, but if you can implement that I'll save myself the effort.
But really, the next thing on the roadmap is to get tighter integration between our CFM (SaltStack/salt-api) and GitLab. Tie it into deploys, auto-open issues on failed states, etc., though that's gonna be more on the Salt side than the GitLab side and likely wouldn't make sense for you guys to implement anyways.
I'm sure I'll end up setting up whatever does make it into 8.9 a week before it's released, though.
At this pace of innovation, GitHub will soon be a part of history! GitHub has done a tremendous job with forking and pull requests, but, at least recently, they've seemed disoriented! I hope they get back into shape!
All that I do know is that I really enjoy watching what looks like a full-on feature battle between GitLab and GitHub going on at this moment. As a guy who uses both (GitLab for my job, GitHub personally), I feel like I'm getting the best of both worlds.
Although, to be perfectly honest, it feels to me like, at this point of the "feature battle", GitLab has started embracing and actually improving every single thing I liked about GitHub, to the point where I'm questioning my stay on GitHub for personal projects.
Considering GitLab is the more open source of the two, the answer always has to be GitLab.
That said, it still sucks that most free software communities cannot use self-hosted GitLab because too many features are stuck in the Enterprise Edition, but it's way better than what GitHub is offering, which is nothing.
Everyone at GitLab has been doing an amazing job shipping some really cool features.
I'm really looking forward to trying out the new container registry, moving away from the hacky solution I use right now to build Docker images on my own VM, and moving back to the shared runners :)
Thanks Andrew, I hope your move goes smoothly. We want to allow everyone to go from idea to production without friction. We're extending GitLab so this can be done simply, without having to script things together yourself.
I'm excited for this. It seems like GitLab is moving more toward an all-in-one solution, compared to GitHub, which focuses on "social coding", whatever that means now.
So to try out this new feature (together with the pipelines), I tried setting up a simple project that uses a Docker image to serve a simple HTML page.
However, it seems like it's not possible to build/push from the CI system (unless you set up a self-hosted runner), which kind of leaves this "Container Registry" without value, because I still need to manually build/push my images from one machine...
You should be able to build/push from the CI system; see the example GitLab CI configuration file in the blog post.
EDIT: For GitLab.com users on shared runners this is waiting on the Runner upgrade that we'll do in the coming days. If you use your own runner running 1.2, it should be fine.
It is about the Runners needing to be on version 1.2. We'll upgrade the .com autoscaling ones in the coming days. If you run your own autoscaling runners you can upgrade yourself.
If you're going to use the shared runners, I don't think that configuration is going to work. The build step will build the image locally, and even though you named it to use the registry, it won't actually push to the registry automatically. By breaking the push out into another step, you're actually going to run that deploy stage on a fresh DigitalOcean instance, so your previously built image will be lost.
That's why the example pushed the image right after the build step.
Having said that, if you're literally using that script, you don't even have tests, so you may as well just put both steps into a single build stage like the simplified example. :)
For now, you'll need the explicit `docker login` as well. We'll work to remove that.
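For anyone following along, the single-job approach sketched below is roughly what that looks like. Treat it as a sketch: the registry host and image path are placeholders, the token variable is whatever your instance provides, and it assumes a runner (1.2+) that can run `docker` commands.

```yaml
# .gitlab-ci.yml -- sketch only; registry host, image path, and the
# token variable are placeholders for whatever your instance uses.
stages:
  - build

build_image:
  stage: build
  script:
    # Authenticate against the integrated registry (still explicit for now).
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.example.com
    # Build and push in the same job so the image isn't lost between stages.
    - docker build -t registry.example.com/group/project:latest .
    - docker push registry.example.com/group/project:latest
```

Keeping the build and push in one job avoids the problem described above, where a later stage lands on a fresh machine that no longer has the freshly built image.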
After GitHub moved to unlimited repositories, I started to question if our small team should stay on GitLab - and this just triple-confirmed that we will.
Awesome feature for an already awesome product. Great job!
Up until recently I never even considered using CI in my projects because I just didn't have the time. The way GitLab are implementing all these tools makes it not only easy to use, but fun.
Glad to hear that! This was our intention. CI and CD are things everyone knows they should do, but they aren't always done. By making them easier to set up, we try to get more people to do them and feel good about themselves.
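For anyone who hasn't tried it yet: a minimal `.gitlab-ci.yml` really can be a single job. The sketch below uses a placeholder image and commands, and assumes a Docker-based runner.

```yaml
# Minimal .gitlab-ci.yml sketch -- image and commands are placeholders
# for whatever your project actually uses.
test:
  image: ruby:2.3
  script:
    - bundle install --path vendor
    - bundle exec rspec
```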
Seems GitLab are moving fairly fast (compared to the competition) and going for that all-in-one ecosystem (if you want it), from SCM to deployment, with the range of products they've introduced.
We are thinking big. The integrated Container Registry is part of our plan. We will soon have integrated deployments with easy-to-access environments. We also plan to introduce manual actions on pipelines, allowing you to execute arbitrary things, e.g. promote to production, but also merge back to master.
Can I use the GitLab Container Registry with GitLab.com? Can I use private images in said registry with the public GitLab.com CI builders?
I haven't found a way to use private containers with GitLab.com CI yet without spinning up a worker for every project.
Maybe a way to register a "team wide" builder?
So awesome! I've been invested in the Docker ecosystem lately, and one of the next things I've wanted to set up is a decent CI workflow.
I wish GitHub would also ship features at the pace GitLab is. The only issue I've had with GitLab is the response times of their UI. Hopefully they sort that out soon.
I wish GitLab had an easy, clean way to import from GitHub. The current method involves setting up OAuth information in the config file and restarting GitLab. Not nice :/ I fail to see why they cannot just work with GitHub API tokens for import.
This is great! But I'm curious if anyone at GitLab can comment on what this gives us over running the existing Docker registry. Can we configure this to use S3 to store the resulting images?
Our runner image has Docker, we run our tests with docker-compose, and if they pass we push the images to our existing registry. In fact, our .gitlab-ci.yml looks very similar to the example under "elaborate example" in the blog post.
Just wondering what we'd be missing out on if we didn't switch to the GitLab Container Registry.
The biggest plus of using the integrated registry is that you get GitLab's integrated authentication and authorization, following the groups and members assigned to your GitLab projects, which makes it really easy to have private container repositories stored in the registry.
Second, the built-in registry is really easy to configure and maintain. You only have to specify the address and provide a certificate to start using it. You also use the same backup mechanism as for your GitLab installation.
In the future we may make it possible to use external storage for container images, like S3. This is something that is already supported by docker/distribution.
I am super stoked to see this and will be using the crap out of it and pointing others to it (the registry is kind of a pain to get going)! It looks like this is just the v2 registry (from Distribution) integrated into GitLab, so I'm wondering what's stopping me from backing this registry with S3? Is it just not supported by the GitLab config YAML? I back my private registry with S3 and it's just a couple of config options to enable it. Or am I misunderstanding some fundamental concept here? Thanks for the awesome work!
Glad to hear you're super stoked! I think you're on the money regarding the S3 backend: the missing piece is making that configuration accessible in GitLab's config. I expect you can work around that by configuring the registry yourself.
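For reference, with a standalone docker/distribution deployment the S3 backend is just a `storage` section in the registry's own config.yml, roughly like the sketch below (credentials, region, and bucket are placeholders); the open question is simply whether the GitLab-bundled configuration exposes this yet.

```yaml
# registry config.yml (docker/distribution) -- storage section sketch;
# credentials, region, and bucket name are placeholders.
storage:
  s3:
    accesskey: AKIAEXAMPLE
    secretkey: SECRETEXAMPLE
    region: us-east-1
    bucket: my-registry-images
    encrypt: true
```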
It really comes down to cost and convenience. We don't charge anything additional to run the container registry (including unlimited private projects for personal or business use), and it's already installed with GitLab.
Having said that, we do love deep integration, so we'll continue to improve it going forward. If you have any ideas for improvements, please do create an issue!
Along the same lines, has someone already worked on integration with a Debian repository (with aptly or similar) and has some links to share on how to do it the smart way?
We're thinking about using FPM to create the Debian package,
which would then be retrieved as the artifacts of the gitlab-ci build stage,
and then having a separate service that receives a webhook at the end of the (successful) build, retrieves the artifacts, and updates a repository using aptly.
Does that seem like the right way, or is there some much simpler solution?
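Concretely, we were picturing something roughly like the sketch below; the package name, aptly host, repo, and distribution are all made up, and it assumes fpm and curl are on the runner and that the aptly repo has already been published once.

```yaml
# .gitlab-ci.yml sketch -- package/repo names and the aptly host are
# hypothetical; assumes fpm and curl are installed on the runner.
stages:
  - package
  - publish

package:
  stage: package
  script:
    # Wrap the build output into a .deb with FPM.
    - fpm -s dir -t deb -n myapp -v 1.0.0 build/=/opt/myapp
  artifacts:
    paths:
      - "*.deb"

publish:
  stage: publish
  script:
    # Upload the deb via aptly's REST API, add it to the repo, re-publish.
    - curl -fsS -X POST -F "file=@$(ls *.deb)" http://aptly.example.com:8080/api/files/incoming
    - curl -fsS -X POST http://aptly.example.com:8080/api/repos/myrepo/file/incoming
    - "curl -fsS -X PUT -H 'Content-Type: application/json' -d '{\"Signing\":{\"Skip\":true}}' http://aptly.example.com:8080/api/publish/myprefix/xenial"
```

This would cut out the S3-plus-cron hop entirely, at the cost of the runner needing network access to the aptly box.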
At my work, we have a setup close to this.
On a successful Jenkins build (as a post-build action), a deb is generated with FPM (awesome piece of software, btw) and uploaded to an S3 bucket.
Then, on another machine, a cron job that runs every minute syncs the S3 bucket to a local aptly repository and publishes it to another S3 bucket, which is the real Debian repository.
This works well, but the S3+cron part is probably not the smartest way, and publication can take some time (like 5-10 minutes).
Many bad things! I didn't check until now, but it's a bunch of "non-standard-dir-perm" and "dir-or-file-in-var-www" warnings, and things like "maintainer-name-missing". I think it's possible to build a correct Debian package with FPM, but for our use we just want to easily build a native deb package.
Aptly is a piece of cake compared to reprepro, because of the REST API. I just run it on a separate box, and make curl invocations from Jenkins jobs to post debs to it at the end of the build.
Depending on what you're building, using the proper Debian tooling (dpkg-buildpackage, etc.) is not that much harder than FPM, and cooperating with the system gives you a lot of goodies for free (source debs, cowbuilder, etc.).
I've tried several times to figure out how to get Docker working in a situation like mine. And we've been considering GitLab as well. So this is likely a good time to experiment.
Doing all the research on how to integrate Docker into my particular situation has been daunting. I really need to track down some online courses or something. The articles just aren't cutting it. Or need to find a mentor just to ask stupid questions to.
You can do this, but currently you can have only one image per project. We did this to keep it simple. You can use a dummy project to work around it, or consider making the web app and the background services separate projects.
We are investigating what it will take to enable this on GitHost.io. At first glance, it doesn't seem to be too difficult. I created https://gitlab.com/gitlab-com/githost/issues/12 to track this feature request.
I think the patching of Docker container images is a big problem. We try to provide a completely automated flow in GitLab using CI, CD, and the container registry. I would love to have a big button that says 'update all my containers'. I've made an issue https://gitlab.com/gitlab-org/gitlab-ee/issues/592
The problem is that Docker, and more generally using stacks of binary disk images, is fundamentally flawed with respect to security and reproducibility. It's nontrivial to inspect these images for vulnerabilities because there is nothing that specifies the precise set of software that is in that image. Some stuff is compiled from source, some stuff is installed via one or more package managers, each of which may bundle additional software that one may not know about if they didn't inspect each piece of software carefully. Furthermore, one cannot even verify the image reasonably because the results are different depending on when it was built.
In short, container images as popularized by Docker are insecure by design.
Don't you think that GitLab adding container registry support will encourage more people to build their own images, as opposed to trusting "black-box" images from elsewhere? Combining a base image with the deployable artifact is much more efficient than baking AMIs or other images. You then know exactly what your image contains and nothing more, because you built it yourself.
Of course I'm looking at this from the perspective of deploying micro-services and immutable infrastructure. Your use-case may be different.
Any nontrivial image relies on a large number of other images as a base, and just because you built it yourself doesn't mean that you didn't just download and install software with known security vulnerabilities.
Yes, but you will ALWAYS have that issue with software you build yourself. This is primarily why I maintain an in-house yum repository for anything that we use out of our distribution repositories; it takes a little more effort to build RPMs, but it's worth it from a maintainability and security aspect (as I type this, I'm working on a Salt-based package management tool somewhat akin to Katello/Satellite that I can use to manage the upgrades that Puppet doesn't).
The thought is that you use a vanilla LTS image as a base. If that needs to be updated, then your CI can easily do it.
You then combine that with your installable artifact (deb, rpm). It's the exact same process as using classic images, so I'm not sure what the complaint about Docker specifically would be.
At least by building images by myself I'm taking responsibility in a way for the security of them - rather than relying on the (in)security of whatever base image.
I did see a feature somewhere for scanning container images for security vulnerabilities, but I think something closer to FreeBSD's pkg-audit is needed.
The problem is that unix, and more generally any operating system, is fundamentally flawed with respect to security and reproducibility. It's nontrivial to inspect these installations for vulnerabilities because there is nothing that specifies the precise set of software that is in that installation. Some stuff is compiled from source, some stuff is installed via one or more package managers, each of which may bundle additional software that one may not know about if they didn't inspect each piece of software carefully. Furthermore, one cannot even verify the installation reasonably because the results are different depending on when it was installed.
In short, operating systems are insecure by design.
Indeed. I recommend you look into one of the projects that is trying to mitigate this problem by allowing you to precisely specify, inspect, and change the dependency graph of software on your computer. That is, Guix or Nix.
Guix (and I presume Nix) can then take advantage of that transparency and control to build containers, virtual machines, and bare-metal systems to your specifications.
If you're building Go or Rust packages, all you need is the output executable a lot of the time. :-) The host OS and Docker version provide most of the low-level bits.
The registry is just that: a registry. You can push a Guix-generated image to it if you wish and use it with any container runtime. rkt supports the same API; the Docker relation is by name only.
In fact, being able to push static images directly to such a registry with Guix would be a great feature. Both for continuous integration (without needing Hydra) and for coupling with a cluster scheduler such as Kubernetes.
Here are some of my favorite upcoming features: https://gitlab.com/gitlab-org/gitlab-ce/issues/17575, https://gitlab.com/gitlab-org/gitlab-ce/issues/14661, https://gitlab.com/gitlab-org/gitlab-ce/issues/15337