Nearly everything we do is out in the open, and most features are available in CE, which is fully open source (https://gitlab.com/gitlab-org/gitlab-ce). I've wanted great, open source tools that don't look like dung for a long long time, and GitLab is definitely reaching that goal. Can't help but gush about the product we're building. :D
Here are some of my favorite upcoming features: https://gitlab.com/gitlab-org/gitlab-ce/issues/17575, https://gitlab.com/gitlab-org/gitlab-ce/issues/14661, https://gitlab.com/gitlab-org/gitlab-ce/issues/15337
Keep up the awesome work everyone :)
I'm also thinking about writing about an open organization, where the handbooks and issue trackers are public. Would that be interesting?
I just want to say that the last 4 hires we've made (we're only 11, soon 12) have been from around the world (we're in 4 countries), and this has worked great for us. A thriving open source community is key to hiring for us. A lot of our team does some sort of traveling.
I myself will be working from Tokyo (currently in Korea, actually...) after moving from SF.
We make the time zones work :D. Key to that for us has been Gitter, where we can collaborate with and support the open source community while also using it to collaborate with partners and run things like training classes.
Just wanted to provide another example to show GitLab isn't an anomaly. The same is true for a lot of the larger open source organizations (e.g. Canonical, Red Hat, Automattic).
Pun intended.
edit: To anyone interested, this is the public haml file that generates their website, to which they encourage contributions. So a pull request job application it is.
It's honestly my one hangup since we do all of our deploys there.
This has been a very standard response for the last 2 years.
It feels like what GitHub did for SCM, GitLab is doing for CI.
How useful do you find Docker for applications that can be deployed on Heroku or Beanstalk? I can understand using Docker for a language ecosystem that isn't supported on a public PaaS, or for people for whom a public PaaS isn't an option.
I would like to hear about the experience of using Docker in day-to-day development from people who have used Docker in team environments. How was your experience of converting the team to use Docker instead of their regular development/deployment setup? For example, at our company, for LAMP, MEAN, or Java stacks, we have defined a procedure for setting up dev machines with all these tools, and developers know how to manage them on their own machines. Once the code is pushed to Jenkins, deploy scripts written by tech leads take care of deployment to various servers or to Heroku.
In your Docker-driven development process, did everybody start using Docker on their development machine too? Or did everybody keep using their local setup, with only one or two people handling the task of creating the final Docker images? Or do you just use your CI server to handle Docker image creation and deployment from that point onwards?
Contrary to your fear that everyone would have to learn Docker to develop, I see it as a tool that lets every developer on a team be brought onto new technology without having to learn a whole new toolchain. We are using docker-compose for this. Everyone has to install Docker and have some cursory knowledge of docker-compose commands, but that's it.
As an example, I recently shoehorned an Angular 2 front end into an existing project. Without Docker, that would mean every developer needs Node and all the dependencies before they can begin to work. With docker-compose, the next time they start up the environment, the npm container installs all the dependencies, and the TypeScript compiler container watches and compiles all changes to the webroot. The other developers don't even need to know what Node is, or that it's being used!
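For anyone curious, a compose file for that kind of setup can be sketched roughly like this; the service names, the node image tag, and the paths are illustrative, not our actual config:

    # Rough docker-compose sketch of the setup described above.
    # Service names, the node image tag, and paths are illustrative.
    # Note: depends_on only orders startup; you may still need to wait
    # for npm install to finish before the compiler finds tsc.
    version: '2'
    services:
      npm:
        image: node:6
        working_dir: /app
        volumes:
          - ./webroot:/app
        command: npm install
      tsc:
        image: node:6
        working_dir: /app
        volumes:
          - ./webroot:/app
        command: ./node_modules/.bin/tsc --watch
        depends_on:
          - npm

With something like that checked in, `docker-compose up` is about all the other developers ever have to run.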
Docker is a very powerful tool with a lot of untapped potential I'm sure. I've no experience with docker in production, however.
Yes. The build scripts produce an image as well as a JAR.
It doesn't do anyone any good to have the knowledge of how to package your code for production locked in just a few people's heads.
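To make that concrete, a pipeline along these lines could live in .gitlab-ci.yml; the job names, the Gradle command, and the image path below are assumptions rather than the actual scripts:

    # Hypothetical sketch: one stage builds the JAR, the next packages the image.
    # Assumes a runner that can run docker (shell executor or docker-in-docker).
    stages:
      - build
      - package

    build_jar:
      stage: build
      image: java:8
      script:
        - ./gradlew assemble
      artifacts:
        paths:
          - build/libs/app.jar

    build_image:
      stage: package
      script:
        - docker build -t registry.example.com/group/app:$CI_BUILD_REF .
        - docker push registry.example.com/group/app:$CI_BUILD_REF

Because the whole recipe lives in the repository, anyone on the team can read or change how the production artifact is produced.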
If you haven't checked out GitLab in a while, you definitely should. It's been moving fast and has come a long way lately.
Thank you, GitLab team, for making an open source, self-hosted platform, and for all the recent improvements you've made.
But really, the next thing on the roadmap is to get tighter integration between our CFM (SaltStack/salt-api) and GitLab: tie it into deploys, auto-open issues on failed states, etc. That's going to be more on the Salt side than the GitLab side, though, and likely wouldn't make sense for you guys to implement anyway.
I'm sure I'll end up setting up whatever does make it into 8.9 a week before it's released, though.
Since you can deploy from GitLab too, auto-opening issues for failed builds makes sense. I opened an issue on https://gitlab.com/gitlab-org/gitlab-ce/issues/17771
I predict you'll be setting up different environments a week before 8.9 since that is what we'll ship.
All I know is that I really enjoy watching what looks like a full-on feature battle between GitLab and GitHub at this moment. As a guy who uses both (GitLab for my job, GitHub personally), I feel like I'm getting the best of both worlds.
Although, to be perfectly honest, it feels like, at this point in the "feature battle", GitLab has started embracing and actually improving every single thing I liked about GitHub, to the point where I'm questioning whether to stay on GitHub for personal projects.
It still sucks that most free software communities can't use self-hosted GitLab because too many features are stuck in the Enterprise Edition, but it's way better than what GitHub is offering, which is nothing.
There are interesting challenges that currently make this non-trivial, but expect things to improve greatly in the near future.
Just perused their jobs page and will apply this week. Sounds like they could use some Varnish and Ceph talent :)
I'm really looking forward to trying out the new Container Registry, moving away from the hacky solution I currently use to build Docker images on my own VM, and moving back to the shared runners :)
So to try out this new feature (together with the pipelines), I tried setting up a simple project that uses a Docker image to serve a simple HTML page.
However, it seems it's not possible to build/push from the CI system (unless you set up a self-hosted runner), which kind of leaves this "Container Registry" without value, because I still need to manually build/push my images from one machine...
You should be able to build/push from the CI system, see the example GitLab CI configuration file in the blog post.
EDIT: For GitLab.com users using shared runners this is waiting on the Runner upgrade that we'll do in the coming days. If you use your own runner that runs 1.2 it should be fine.
My configuration looks like this currently (and I'm guessing I'll hold off for a few days for the shared runners to get updated):
    - docker build -t registry.gitlab.com/victorbjelkholm/deploy-html-test:latest .
    - docker push registry.gitlab.com/victorbjelkholm/deploy-html-test:latest
That's why the example pushed the image right after the build step.
Having said that, if you're literally using that script, you don't even have tests, so you may as well just put both steps into a single build stage like the simplified example. :)
For now, you'll need the explicit `docker login` as well. We'll work to remove that.
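For reference, a single-stage job with the explicit login could look roughly like this; the gitlab-ci-token / $CI_BUILD_TOKEN login is my reading of the blog post's example, so double-check it against the docs:

    # Minimal sketch: log in, build, push, all in one build stage.
    build_image:
      script:
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t registry.gitlab.com/victorbjelkholm/deploy-html-test:latest .
        - docker push registry.gitlab.com/victorbjelkholm/deploy-html-test:latest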
Awesome feature for an already awesome product. Great job!
GitLab seems to be moving fairly fast (compared to the competition) and going for that all-in-one ecosystem (if you want it), from SCM to deployment, with the range of products they've introduced.
They are on a roll, rolling out features left and right, and seem to be nailing what every engineer needs.
Kudos to the team! Keep rocking!
If anybody from GitLab is reading: any plans for rkt support?
No current plans for rkt, but feel free to create an issue to discuss it.
I wish GitHub would also ship features at the pace GitLab is. The only issue I've had with GitLab is the response times of their UI. Hopefully they sort that out soon.
I just worry about 10, 15, 20 years in the future when ArchiveTeam has to back up GitHub and GitLab and whatever else... there's so much data!
Our runner image has Docker, we run our tests with docker-compose, and if they pass we push the images to our existing registry. In fact, our .gitlab-ci.yml looks very similar to the example under "elaborate example" in the blog post.
Just wondering what we'd be missing out on if we didn't switch to Gitlab Container Registry.
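For context, the shape of our .gitlab-ci.yml is roughly the following; the registry address, image name, and test command here are placeholders rather than our real values:

    # Rough outline: docker-compose runs the tests, a later stage pushes the image.
    stages:
      - test
      - release

    test:
      stage: test
      script:
        - docker-compose build
        - docker-compose run --rm app ./run_tests.sh

    release:
      stage: release
      script:
        - docker build -t registry.example.com/myorg/app:$CI_BUILD_REF .
        - docker push registry.example.com/myorg/app:$CI_BUILD_REF
      only:
        - master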
Second, the built-in registry is really easy to configure and maintain. You just have to specify the address and provide a certificate to start using it. You also use the same backup mechanism as for your GitLab installation.
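Roughly, for an Omnibus install that means something like the following in /etc/gitlab/gitlab.rb, followed by `gitlab-ctl reconfigure` (the hostname and certificate paths are just examples):

    # Example only: point the registry at its own hostname and certificate.
    registry_external_url 'https://registry.example.com'
    registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/registry.example.com.crt"
    registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/registry.example.com.key"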
In the future we may make it possible to use external storage for container images, like S3. This is something that is already supported by docker/distribution.
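For anyone wondering what that would involve, docker/distribution already accepts an S3 storage driver in its config.yml, along these lines (values are placeholders):

    # docker/distribution storage section with the S3 driver (placeholder values).
    storage:
      s3:
        accesskey: AKIAEXAMPLE
        secretkey: EXAMPLESECRET
        region: us-east-1
        bucket: my-registry-bucket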
Having said that, we do love deep integration, so we'll continue to improve it going forward. If you have any ideas for improvements, please do create an issue!
We're thinking about using FPM to create the Debian package, which would then be retrieved as the artifact of the gitlab-ci build stage. A separate service would receive a webhook at the end of a (successful) build, fetch the artifact, and update a repository using aptly.
Does that seem like the right way, or is there some much simpler solution?
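Concretely, the build stage I have in mind would look something like this; the package name, version scheme, and paths are placeholders, and the webhook/aptly side would live outside GitLab:

    # Sketch of the packaging stage: FPM builds the .deb, CI keeps it as an artifact.
    package:
      stage: build
      image: ruby:2.3
      script:
        - gem install fpm
        - fpm -s dir -t deb -n myapp -v "0.1.$CI_BUILD_ID" --prefix /opt/myapp build/
      artifacts:
        paths:
          - "*.deb"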
This works well but the s3+cron part is probably not the smartest way and publication can take some time (like 5-10min).
Of course it is possible, it's just a ridiculous amount of work. It's much easier to build the package with proper tools in the first place.
Depending on what it is you're building, using the proper Debian tooling (dpkg-buildpackage, etc.) is not that much harder than FPM, and cooperating with the system gives you a lot of goodies for free (source debs, cowbuilder, etc.).
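As a sketch of what that could look like in the same kind of pipeline, assuming the repository already carries a debian/ directory (the image and package details are assumptions):

    # Sketch: build the .deb with the native Debian tooling instead of FPM.
    package:
      stage: build
      image: debian:jessie
      script:
        - apt-get update && apt-get install -y build-essential devscripts equivs
        - mk-build-deps -i -r -t "apt-get -y --no-install-recommends"
        - dpkg-buildpackage -us -uc -b
        - mkdir -p artifacts && mv ../*.deb artifacts/
      artifacts:
        paths:
          - artifacts/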
Doing all the research on how to integrate Docker into my particular situation has been daunting. I really need to track down some online courses or something. The articles just aren't cutting it. Or need to find a mentor just to ask stupid questions to.
Our deployment workflow at the moment builds two docker images, one for the web app and another for the background services. Both share the same code.
It would appear you can only have one docker image per project?
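If it really is one image repository per project right now, one workaround I can think of is to push both images as differently-named tags of that single repository; the group/project path and tag names below are hypothetical:

    # Hypothetical workaround: keep both images as tags of one repository.
    - docker build -t registry.gitlab.com/mygroup/myproject:web -f Dockerfile.web .
    - docker build -t registry.gitlab.com/mygroup/myproject:worker -f Dockerfile.worker .
    - docker push registry.gitlab.com/mygroup/myproject:web
    - docker push registry.gitlab.com/mygroup/myproject:worker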
And in a few days our shared runners should be updated to support the full docker workflow with the Container Registry.
In short, container images as popularized by Docker are insecure by design.
Of course I'm looking at this from the perspective of deploying micro-services and immutable infrastructure. Your use-case may be different.
You then combine that with your installable artifact (deb, rpm). It's the exact same process using classic images, so I'm not sure what the complaint about Docker specifically would be.
I did see a feature somewhere for scanning container images for security vulnerabilities, but I think something closer to FreeBSD's pkg-audit is needed.
The problem is that Unix, and more generally any operating system, is fundamentally flawed with respect to security and reproducibility. It's non-trivial to inspect these installations for vulnerabilities because nothing specifies the precise set of software in the installation. Some of it is compiled from source, some is installed via one or more package managers, each of which may bundle additional software that one may not know about without inspecting each piece of software carefully. Furthermore, one cannot even verify the installation reasonably, because the results differ depending on when it was installed.
In short, operating systems are insecure by design.
Guix (and I presume Nix) can then take advantage of that transparency and control to build containers, virtual machines, and bare-metal systems to your specifications.
In fact, being able to push static images directly to such a registry with Guix would be a great feature, both for continuous integration (without needing Hydra) and for coupling with a cluster scheduler such as Kubernetes.