
This is great news for developers. The trend has been to combine version control and CI for years now. For a timeline see https://about.gitlab.com/2019/08/08/built-in-ci-cd-version-c...

This is bad news for the CI providers that depend on GitHub, in particular CircleCI. Luckily for them (or maybe they saw this coming) they recently raised a Series D https://circleci.com/blog/we-raised-a-56m-series-d-what-s-ne... and are already looking to add support for more platforms. It is hard to depend on a marketplace when it starts competing with you, from planning (Waffle.io), to dependency scanning (Gemnasium, acquired by us), to CI (the Travis CI layoffs were especially sad).

It is interesting that a lot of the things GitHub is shipping are already part of Azure DevOps https://docs.microsoft.com/en-us/azure/architecture/example-... The overlap between Azure DevOps and GitHub seems to be increasing instead of decreasing. I wonder what the integration story is and what will happen to Azure DevOps.

It's a horrible trend. CI should not be tied to version control. I mean we all have to deal with it now, but I'd much rather have my CI agnostic and not have config files for it checked into the repo.

I've browsed through the article you linked to; one of the subtitles was "Realizing the future of DevOps is a single application". Also a horrible idea: I think it locks developers into a certain workflow which is hard to escape. If you have an issue with your setup that you can't figure out - happened to me with GitLab CI - sorry, you're out of luck. Every application is different; DevOps processes are something to be carefully crafted for each particular case with many considerations: large/small company, platform, development cycle, people's preferred workflows, etc. What I like to do is have small, well-tested parts constitute my DevOps. It's a bad idea to adopt something just because everyone else is doing it.

To sum it up: code should be separate from testing, deployment, etc. On our team, I make sure developers don't have to think about DevOps. They know how to deploy and test, and they know the workflow and commands, but that's about it.

I'm an operations guy, but I think I have a different perspective. The developers I work with don't have to think about CI/CD either, but the configuration still lives in the repo; I'm just a contributor to that repo like they are.

Having CI configuration separate from the code sounds like a nightmare when a code change requires the CI configuration to be updated. If a new version of the code requires a new dependency, for instance, there needs to be a way to tie the CI configuration change to the commit that introduced that dependency. That comes automatically when they're in the same repo.
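As a rough illustration (a hypothetical GitLab-CI-style `.gitlab-ci.yml`; the job name, image, and commands are made up), the commit that bumps the dependency can carry the matching CI change in the same diff:

```yaml
# .gitlab-ci.yml — lives next to the code, so this edit ships in the
# same commit (and the same review) as the dependency change itself
test:
  image: python:3.8
  before_script:
    - pip install -r requirements.txt   # picks up the new dependency
  script:
    - pytest
```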

Having CI configuration inside the codebase also sounds like a nightmare when changes to the CI or deployment environment require configuration changes or when multiple CI/deployment environments exist.

For example, as a use case: software has dozens of tagged releases; the organization moves from deploying on AWS to deploying in a Kubernetes cluster (requiring at least one change to the deployment configuration). Now, to deploy any of the old tagged releases, every release has to be updated with the new configuration. This gets messy because there are two orthogonal sets of versions involved: first, the code being developed has versions; second, the environments for testing, integration, and deployment also change over time and have versions to be controlled.

Even more broadly, consider multiple organizations using the same software package. They will each almost certainly have their own CI infrastructure, so there is no one "CI configuration" that could ever be checked into the repository along with the code without each user having to maintain their own forks/patchsets of the repo with all the pain that entails.

> organization moves from deploying on AWS to deploying in a Kubernetes cluster

I had (and still have) high hopes for circleci's orbs to help with this use case. Unfortunately, orbs are private - which makes it a no-go for us.

But, in my dream world, we have bits of the deploy configuration that can be imported from elsewhere - and this is built right into the CI system.

In practice, for my org, the code and configuration for CI come from both the "infra" repo and the "application" repo. The configuration itself is stored in the app repo, but then there's a call like `python deploy_to_kubernetes.py <args>`. The `deploy_to_xxx.py` script lives in the "infra" repo.
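For what it's worth, a minimal sketch of what such a `deploy_to_xxx.py` script might look like (the `kubectl set image` approach, argument names, and defaults are my assumptions, not the poster's actual script):

```python
# Hypothetical sketch of an "infra"-repo deploy script, invoked from
# the app repo's CI as `python deploy_to_kubernetes.py <args>`.
import subprocess

def kubectl_args(image, namespace="staging", deployment="app"):
    """Build the kubectl command that rolls a deployment to a new image."""
    return ["kubectl", "--namespace", namespace, "set", "image",
            "deployment/" + deployment, "app=" + image]

def deploy(image, namespace="staging", deployment="app"):
    """Run kubectl; raises CalledProcessError if the rollout command fails."""
    subprocess.run(kubectl_args(image, namespace, deployment), check=True)
```

Keeping the command construction in a separate function makes the infra-repo script easy to unit test without a live cluster.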

It also depends on your workflow - do you change the common deploy infrastructure more often, or the application-specific deploy infra?

Yeah, writing code to deploy code is sometimes fun, but sometimes nasty.

You can create a separate repo with your own CI config that pulls in the code you want to test, and thus ignore the code's CI config file. When something breaks, you'd then need to determine which repo changed: the CI config repo or the code repo. And then you have CI events attached to PRs in the CI config repository.

IMHO it makes sense to have CI config version controlled in the same repo as the code. Unless there's a good tool for bisecting across multiple repos and subrepos?

I hear you. I thought about it and I think you need to reframe the problem. If a change in the application breaks your CI, it means you need to adjust the CI so it doesn't break when such changes are introduced. In my experience, these kinds of things happen very rarely.

In my experience maintaining an active project at a large enterprise, these kinds of things happen nearly daily. Sometimes I wake up and our EMEA team has merged in a change that requires a CI change as well, and they are able to self-service those through their PRs.

I'll give you a counter example: whenever I change CI workflow for something that has nothing to do with the repo - like a new deployment scheme to staging - I have to go and ask people to merge/rebase from master into their branches or they won't be able to deploy. It happens pretty often and I'd rather avoid this.

To fix this problem you can set up your CI server to merge into the base branch automatically before running the build (in fact you should probably do this by default for other reasons).

This way your devs won't have to merge, they can just rerun their tests, which should be the same workflow as if your CI config is separate from your codebase.

If your tests are lightweight and fast, you could even trigger this automatically.
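Concretely (an illustrative GitLab-CI-style job; the branch name and commands are placeholders, and many CI servers can perform this merge for you automatically), the build can merge the base branch in before testing:

```yaml
# Illustrative CI job: merge the base branch into the build's checkout
# before running tests, so feature branches pick up CI/deploy changes
# from master without developers manually rebasing.
test:
  script:
    - git fetch origin master
    - git merge --no-edit origin/master
    - make test
```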

I 100% agree and am building https://boxci.dev to solve this problem for myself and I hope others too.

It's a CI service that lets you run your builds however you want, on any machine you want (cloud VM, in-house server, your laptop), using an open-source CLI that just wraps any shell script/command and streams the logs to a service that gives you all the useful stuff like build history, team account management, etc.

In other words, how you configure and run your builds is up to you - scripts in the same repo, in another repo, in no repo - whatever you want, since you can just git clone or otherwise copy the source from wherever it is. There's no 1-1 relationship between the source repo and the CI, unless you want that. It'll be launching very soon :-)
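A toy sketch of the wrapping idea (all names here are made up; a real agent would stream each line to the service's API as it arrives, rather than just collecting and echoing it):

```python
import subprocess
import sys

def run_build(command):
    """Run an arbitrary build command, streaming its output line by line.

    Returns (exit_code, captured_lines). A real CI agent would POST each
    line to the hosted service for live build logs; here we only echo it.
    """
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    lines = []
    for line in proc.stdout:
        lines.append(line)
        sys.stdout.write(line)  # local echo; stand-in for streaming upload
    return proc.wait(), lines
```

Because the wrapper only needs a shell, it runs the same way on a cloud VM, an in-house server, or a laptop.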

Looking forward to it, very interested. However, I'm going to be honest with you: it'd be a hard decision to rely on a service rather than open-source software. You guys could go bust at any time, and while the runner is indeed open source, as I imagine, the management console for it won't be. I understand that's what people are paying for, but perhaps consider licensing it, rather than running it from your own servers and charging for the service.

This is a really good point and something I'd been thinking about, thanks. I'll have to think about the licensing option - it's a good idea and something I'd not considered.

My initial thought was a guarantee that, should the company not work out, the management console software will be completely open sourced. Obviously this would just be relying on trust, though, which I can see could be an issue.

You might appreciate the SourceHut approach (which I am the author of). Each build job is decoupled from git and can be submitted ad hoc, with zero or more git repos declared in the manifest. Your manifest is machine-editable too, so you can add extra build steps or tweak various options before submitting it in specific cases. postmarketOS does this, for example, to automate builds of all of their packages. SourceHut itself does this to fill the demand created by this trend, too, by pulling build manifests out of your git repo, tweaking the sources to check out specific commits, and then submitting them over the API.
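For readers unfamiliar with it, a builds.sr.ht manifest is roughly this shape (the image name, repo URL, and task contents here are placeholders):

```yaml
image: alpine/latest
sources:
  - https://git.sr.ht/~user/project
tasks:
  - build: |
      cd project
      make
  - test: |
      cd project
      make check
```

Since the manifest is plain YAML submitted to the build service, a script can rewrite the `sources` list (e.g. to pin a specific commit) before submission.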



Having all the tooling being integrated makes it a lot easier to offer good feature interaction. Right now the atomisation of "software project management features" means that your project is run over 10 servers all held together by flaky webhooks, and requests that should be instant instead take 30 seconds because everything has to be remotely queried.

Though really the thing that should happen in this case is that we should be seeing more tooling that can be run on a local machine and has a deeper understanding of what a "project" is. Git is great for version control, but stuff like issue tracking and CI also exists. It would be great if there were some meta-tool that could tie all that together.

A bonus: if you make some simple-ish CLI tool that ties all this together, the "GitHub as the controller of everything" risk goes down, because it would become easier for other people to spin up all-encompassing services.

A tool like this would do to project hosting what Microsoft's language server tooling has done to building IDEs. A mostly unified meta-model would mean that we wouldn't spend our time rewriting things to store issue lists.

Even if code and deployment config are different (and managed by different teams), there's no reason why they can't be stored in the same repo.

Where else would you put these configs?

Into a separate repo. See my other reply: https://news.ycombinator.com/item?id=20647649

Why is it horrible?

Technically, version control lends itself naturally as part of the now well-accepted infrastructure-as-code mantra.

Operationally, version control is the interface developers interact with most; shifting these interactions to that interface would be beneficial to users.

Of course, DevOps as a skill set is becoming less and less relevant, given the increasingly integrated tooling that interfaces directly with developers - that's for sure.

So, it's a very interesting situation. On one hand, I agree with you: when I start a new project, I am the master of it, I take care of everything and I need to make sure deployment and testing are implemented as early as possible. But this needs to scale. When we hired more people, I quickly realized no one wanted to deal with CI. They wanted it to just work and I wanted people to work on features and bugs, not fighting CI. So you can call me a devops guy by accident (turns out, that's a huge chunk of what I do as a CTO - remove obstacles by implementing and managing devops).

I think my ideal devops situation is the one where you start with a simple deployment script from your local machine when it's a one man show and then scale it to a large organization not by switching to a large piece of devops software, but by gradually adding pieces that you need as your team grows and requirements change. Exactly what happened to us and I think our devops workflow is really great and I'm very proud of it.

I'm interested to know how many different teams use your CI system, as well as how many different platforms (operating system, distro, programming language) you support.

Some companies have tons of old and new projects with very heterogeneous technologies in use. Imagine 50+ teams, several different programming languages, and things being deployed to different "hardware" (bare metal, cloud VMs, kubernetes, etc). It just seems like a lot of work to manage CI configs for all those different teams/cases, handle "support" requests from different teams, fix issues, and so forth. Hence, why the "easy way" out is to have each team manage CI configuration themselves as much as possible, to spread the maintenance cost across many capable developers.

Why DevOps, if everything can be dev'ed? If the dev experience is good enough, does anyone need ops?

I don't think it's that bad for CircleCI. CircleCI's focus is CI/CD and it's highly unlikely GitHub is going to do it as well as them. It's hardly commoditized technology. Their current customers are already heavily integrated with them and their orbs offering is further solidifying that relationship. Also, GitHub is expanding the CI/CD market with Actions so competitors in this space are likely to benefit.


- waffle.io was acquired and shut down by the acquirer

- TravisCI was sold to a private equity firm and lost their way

> I don't think it's that bad for CircleCI. CircleCI's focus is CI/CD and it's highly unlikely GitHub is going to do it as well as them.

I don't think it's bad for entrenched CircleCI users, sure, but I do think it's bad for prospective CircleCI users, who are likely using GitHub already - so why would they not use something tightly integrated?

If you don't think GitHub is going to try and increase adoption of this tool then why did they build it?

(Work at GitLab; Opinions are my own)

> it's highly unlikely GitHub is going to do it as well as them.

They don't need to. They only need to be able to run tasks in sequence when triggered by some event, and the rest just builds itself.

An integrated solution that bundles issue tracking, CICD pipelines, and package/container repository always beats spreading each feature throughout multiple separate service providers.

> An integrated solution that bundles issue tracking, CICD pipelines, and package/container repository always beats spreading each feature throughout multiple separate service providers.

I agree, but with GitHub Actions it's also possible for CircleCI to build a tight integration. When it comes down to it, each CI system has its idiosyncrasies, so choosing a CI system isn't as simple as how tightly integrated it is (e.g. how flexible it is with Docker, the way it builds containers, or running multi-container setups).

I think a good strategy for Microsoft would be to reuse as much CI/CD code as possible from GitHub in Azure DevOps. Azure DevOps probably doesn't need to be as flexible, as long as it is robust and just works. GitHub will probably be the place where experiments happen.

Azure Pipelines is currently considered the best CI/CD solution. (For example see GitLab issues about how users would like GitLab to improve theirs.)

[citation needed]

I'm in the process of replacing TeamCity with Azure Pipelines (not my choice). It's OK, but I doubt it's best in class; TeamCity is far better IMO (more flexible, more information/actions on one screen, fewer clicks, better log display, better test results display, live test results, etc). I wish they had a hosted offering, and YAML configuration.

Can't find the discussion now on GitLab, but I recall comments claiming that Azure Pipelines is very flexible, and GitLab CI's new task DAG should be modeled after that.

Last time I checked Azure Pipelines still doesn't have build caching


Full disclosure: I work for Codefresh, a CI/CD solution.

You can bodge your own solution if you use your own build agent.

For hosted build agents, this is about to change, as a new caching task is in preview. I've tried it, and it's unstable at the moment, so I'd recommend just waiting until it's out of preview.

They just added it in preview within the last couple weeks.

Any reason CI providers couldn't start adding repo hosting?

Sure! This is how Heroku works. Although, if you also mean change management (issues, code review, browsing, etc), that's a much bigger lift, especially if your intention is to outdo GitHub.
