As a developer, in the short term I love this. Fewer things I need to cobble together and worry about how to integrate. I mean, it's already the case that if GitHub goes down that my CircleCI jobs won't work, so having one company to yell at and monitor alone is a plus.
But long term it makes the competitive ecosystem much less robust. And as a startup employee, it makes me feel like disrupting established platform competitors gets that much more difficult - even if you have a better product, it's hard to fight against the "platform" because they have more integrated points of value.
In this case, GitHub is actively competing and adding features that help all of their users. If that means that some other companies lose market share then it's just a sign that the value proposition has changed. I see no problem with this, it's how the market works.
My argument is every single monopoly eventually becomes 'bad' and makes progress stagnate and prices rise.
Just look at Google and Amazon. Up until I'd say late-ish 00s, nearly everyone loved Google and Amazon. I remember when Google search, GMail, Maps, street view, etc. first came out. I loved them, and I felt they were so much better than the competition. Same thing with Amazon - their selection was huge, great delivery and customer service.
I still use Google and Amazon for so much, but it's usually with reluctance and distaste. Check out Google search results these days. The ENTIRE first page, down to about halfway past the fold, is ads, or AMP carousels, or shit I just want out of my way. But I still use them because with their heft and scale no one does long tail searches as comprehensively.
I think you've acknowledged that Google and Amazon provide the best services of their respective classes. Can we find a way to maintain that high standard without the crud that comes with it?
The web app is bloated and slow. The ads. The awkward design. Privacy issues. There's a bunch of benefits someone could offer. Fastmail tried but I still think it could be better and not based in Australia where privacy has been eroded.
The barrier to entry for doing categorization, spam detection, search, autosuggestions, and even complex JS frontends is significantly lower than it's ever been. You could even offload some of the above challenges to the client with prepackaged ML models. The spam filtering stuff could even be open sourced and community run like Adblockers. Abuse could still be handled server side.
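To illustrate the client-side idea, here's a toy sketch of spam scoring against a prepackaged word-weight model that could ship to the client. The words, weights, and threshold are all made up for illustration - a real system would use a trained model:

```python
# Toy client-side spam scorer: the "model" is just word weights
# shipped with the client (values here are invented for the sketch).
MODEL = {"winner": 2.0, "free": 1.5, "meeting": -1.0, "invoice": -0.5}

def spam_score(text: str) -> float:
    """Sum the weights of known words; unknown words contribute 0."""
    return sum(MODEL.get(w, 0.0) for w in text.lower().split())

def is_spam(text: str, threshold: float = 2.0) -> bool:
    return spam_score(text) >= threshold

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Agenda for tomorrow's meeting"))           # False
```

The community-run angle would then be about maintaining the shared model/ruleset, much like adblock filter lists, while abuse handling stays server side.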
I mean, I can assume you don't care for the change, but it's there. Streetview was launched in 2007.
However just recently they added an AR navigation to the mobile apps.
GitHub has gained near-monopoly status because it has provided a superior user experience at a great price. If either of those value propositions starts to crumble, switching away is as simple as:
git remote add ...
(One issue is that such a DSL could only provide lowest common denominator functionality... or else, suppose CI providers 1 and 2 offer feature X but 3 and 4 don't, then if you use feature X, you can switch between 1 and 2, but the tool would give an "unsupported feature" error if you tried to generate output for 3 or 4.)
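The "unsupported feature" behavior described above can be sketched in a few lines. The provider names and feature sets here are entirely hypothetical:

```python
# Sketch of a common-denominator CI DSL translator: generating output
# for a provider fails if the config uses a feature it lacks.
# Provider and feature names are invented for illustration.
SUPPORTED = {
    "provider1": {"matrix", "cache"},
    "provider2": {"matrix", "cache"},
    "provider3": {"cache"},
    "provider4": {"cache"},
}

def generate(config_features: set, target: str) -> str:
    missing = config_features - SUPPORTED[target]
    if missing:
        raise ValueError(f"{target} does not support: {sorted(missing)}")
    return f"# config for {target}"

print(generate({"matrix"}, "provider1"))  # works: provider1 has "matrix"
# generate({"matrix"}, "provider3")       # raises ValueError
```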
I personally find it almost scarily ambitious and am a bit skeptical because of the medium-term maintenance load.
My argument is that technological platforms are pretty special when it comes to monopoly power because their lock in/switching costs are so high that they can last long after they've stagnated in terms of innovation.
And they just buy up all their competition.
Surprised no one has taken facebook's launch strategy. Only allow it in one school. Then only ivy league colleges. Then all colleges. Then everyone. Also allow users to import their email contacts on signup.
That strategy created press / word of mouth and kept it youthful until they were ready for the big launch.
IE stagnated for years after MS killed Netscape. No company has ever really challenged Windows on desktop PCs. Only when new platforms like the Internet and mobile computing came along and provided space for upstarts did MS start competing again.
I think if anything, the problem today is companies that could be challengers just end up selling to the very incumbent they're competing against.
A company can be a tech giant and still not be classed as a monopoly - which is the kind of example you're thinking of.
In fact the few areas where you actually see Microsoft improve things are in areas where they have experience but don’t actually have a monopoly (eg IDEs).
Epic appears to be firmly pro-developer, which no matter what Tim Sweeney says isn't currently translating into an improved consumer experience.
Being too pro-consumer on the other hand... well, that gives you mobile gaming with heavy focus on free-to-play and plenty of ads.
Be careful what you wish for :)
I don't think it's an evil plan by any means, though, and their competition has been nothing but good for the ecosystem. Remember, they were evil about Windows because so many people didn't have any other real choice. They don't have a monopoly over the cloud, so they can't be bullies (and management is totally different now, too, probably as a result).
However, on first impression, the tools they're rolling out look kind of like Azure DevOps under the hood. There's a good chance they'll leave the GitHub brand alone, though, and just make it better.
Personally, I've been happy working in the MS stack. They really want to earn the love and mindshare of developers.
Github could definitely win out for small projects and startups.
In trying to migrate a fairly popular open source library I maintain from TravisCI to CircleCI, I bumped into the following two problems:
1. setting up a build matrix with various configurations (JVM version, compiler version, coverage on or off, etc) is a pain in the ass (possible, but a pain nonetheless), a configuration that TravisCI has nailed
2. basic functionality, like triggering a notification on Gitter via a simple HTTP hook when the build has finished, does not work
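For comparison, the kind of matrix TravisCI makes easy is just a few lines of `.travis.yml` (the versions below are illustrative, not from the project in question):

```yaml
language: scala
jdk:
  - openjdk8
  - openjdk11
scala:
  - 2.12.10
  - 2.13.1
env:
  - COVERAGE=true
  - COVERAGE=false
# Travis expands this into a 2 x 2 x 2 = 8-job build matrix automatically
```

Reproducing the same expansion in CircleCI means spelling out jobs and workflows by hand, which is where the pain comes from.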
I do love CircleCI for our work project, as it doesn't need a build matrix or notifications to Gitter and thus was fairly easy to set up.
But I also love to see some competition for my public projects.
This problem is what I'm trying to solve with https://boxci.dev for myself and hopefully others too. The idea is it'll do all the management stuff that a CI service should do, but let you run the actual builds however and wherever you like, on your own machines, using an open-source cli. Check it out, perhaps you'll find it useful. It's launching very soon!
That all being said, I'm not sure how the long run comes into play here? It's bad if we extrapolate a long run prediction where no one else innovates or creates something in this space. But in reality, someone's going to be annoyed by it, someone's going to start up a different solution, and whether it works or not...it'll be there. Whether it's a small startup or a big enterprise challenging - if there's market gains to be had from adding this, it'll get added.
Don't get me wrong, you're on point with a lot of things in tech. Monoculture and monopolies are becoming super "in-your-face" considering just how much control the FAANG companies have in influencing tech and the like. However, I do find it odd that a single feature release by a larger company results, more often than not, in much of the HN community losing their minds over prospective world domination. (not you op, but just in general... granted similar outrage happens to small open source stuff just as much so maybe this point is moot)
As John Maynard Keynes put it, "In the long run we are all dead." But in the short run, this just seems like a super useful feature added to a super useful product that will just make things easier for all developers.
These moves happen routinely. And being in a strategically foundational position makes them straightforward.
In the developer tooling market, GitHub is foundational, obviously. So it naturally will expand, and can also easily do so.
And it's at moments like these that I lament Google's inability to buy GitHub. All this stuff just repeats what Google has had internally for a decade; they would have been the perfect marriage, but Google refuses to pay big money...
We use Jira for ticketing and Github for code storage, like many companies. We've had SOC2 auditors tell us that they'd prefer to see us on Bitbucket just because the integration with Jira is more seamless; it guarantees a much deeper audit trail to go from RFC in Confluence -> Ticket in Jira -> Code in Bitbucket -> Deploy in whateverthefuck atlassian does for CI/CD. They didn't dock us, certainly.
But they're not wrong! I agree with them entirely! There's a part of me that hates that monopolization of our stack into one company, but there's another part of me that's like "that auditability and integration is so nice, and we literally will never see it if we're using ten different SaaS providers for each thing". How do you reconcile those competing viewpoints? I don't know if you can.
(Caveat: import is broken. Worked for a few tiny repos, silently failed for our most important one, failing to import issues. Gitlab support just say "watch the open source issue tracker", which is not really acceptable but at least more open than GitHub)
This is bad news for the CI providers that depend on GitHub, in particular CircleCI. Luckily for them (or maybe they saw this coming) they recently raised a Series D https://circleci.com/blog/we-raised-a-56m-series-d-what-s-ne... and are already looking to add support for more platforms. It is hard to depend on a marketplace when it starts competing with you, from planning (Waffle.io), to dependency scanning (Gemnasium, acquired by us), to CI (the Travis CI layoffs were especially sad).
It is interesting that a lot of what GitHub is shipping is already part of Azure DevOps https://docs.microsoft.com/en-us/azure/architecture/example-... The overlap between Azure DevOps and GitHub seems to be increasing rather than shrinking. I wonder what the integration story is and what will happen to Azure DevOps.
I've browsed through the article you linked to; one of the subtitles was "Realizing the future of DevOps is a single application". Also a horrible idea: I think it locks developers into a certain workflow which is hard to escape. If you have an issue with your setup you can't figure out - happened to me with Gitlab CI - sorry, you're out of luck. Every application is different; DevOps processes are something to be carefully crafted for each particular case with many considerations: large/small company, platform, development cycle, people's preferred workflow, etc. What I like to do is to have small, well tested parts constitute my devops. It's a bad idea to adopt something just because everyone is doing it.
To sum it up, code should be separate from testing, deployment etc. On our team, I make sure developers don't have to think about devops. They know how to deploy and test and they know the workflow and commands. But that's about it.
Having CI configuration separate from the code sounds like a nightmare when a code change requires CI configuration to be updated. If a new version of the code requires a new dependency, for instance, there needs to be a way to tie the CI configuration change to the commit that introduced that dependency. That comes automatically when they're in the same repo.
For example as a use case: Software has dozens of tagged releases; organization moves from deploying on AWS to deploying in a Kubernetes cluster (requiring at least one change to the deployment configuration). Now, to deploy any of the old tagged releases, every release now has to be updated with the new configuration. This gets messy because there are two different orthogonal sets of versions involved. First, the code being developed has versions and second, the environments for testing, integration, and deployment also change over time and have versions to be controlled.
Even more broadly, consider multiple organizations using the same software package. They will each almost certainly have their own CI infrastructure, so there is no one "CI configuration" that could ever be checked into the repository along with the code without each user having to maintain their own forks/patchsets of the repo with all the pain that entails.
I had (and still have) high hopes for circleci's orbs to help with this use case. Unfortunately, orbs are private - which makes it a no-go for us.
But, in my dream world, we have bits of the deployed configuration that can be imported from else where - and this is built right into the CI system.
In practice, for my org, the code and configuration for the CI comes from both the "infra" repo as well as the "application" repo. The configuration itself is stored in the app repo, but then there's a call `python deploy_to_kubernetes.py <args>`. The `deploy_to_xxx.py` script would be in the "infra" repo.
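A sketch of that split, as it might appear in the app repo's CI config - the repo URL and script arguments are hypothetical stand-ins for whatever your org actually uses:

```yaml
deploy:
  script:
    # pull the shared deploy tooling from the separate "infra" repo
    - git clone https://example.com/org/infra.git
    # the app repo only stores the app-specific configuration/arguments
    - python infra/deploy_to_kubernetes.py --app myapp --env staging
```

The app repo versions *what* to deploy; the infra repo versions *how* deployment works, so a platform migration touches one repo instead of every tagged release.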
It also depends on your workflow - do you change the common deploy infrastructure more often, or do you change the application specific deploy infra more often.
Yeah, writing code to deploy code is sometimes fun, but sometimes nasty.
IMHO it makes sense to have CI config version controlled in the same repo as the code. Unless there's a good tool for bisecting across multiple repos and subrepos?
This way your devs won't have to merge, they can just rerun their tests, which should be the same workflow as if your CI config is separate from your codebase.
It's a CI service that lets you run your builds however you want, on any machine you want (cloud vm; in house server; your laptop) using an open source cli that just wraps any shell script/command and streams the logs to a service to give you all the useful stuff like build history, team account management, etc.
In other words, how you configure and run your builds will be up to you - scripts in the same repo, in another repo, in no repo - whatever you want, since you could just git clone or otherwise copy the source from wherever it is - there's no 1-1 relationship between the source repo and the CI, unless you want that. It'll be launching very soon :-)
My initial thought was a guarantee that should the company not work out, the management console software will be completely open sourced. Obviously this would just be relying on trust though, which I can see could be an issue.
Though really the thing that should happen in this case is that we should be seeing more tooling that can be run on a local machine that has a deeper understanding of what a "project" is. Git is great for version control, but stuff like issue tracking and CI also exists. So it would be great if there were some meta-tool that could tie all of that together.
A bonus: if you make some simple-ish CLI tool that ties all this together, the "Github as the controller of everything" risk goes down because it would become easier for other people to spin up all-encompassing services.
A tool like this would do to project hosting what Microsoft's language server tooling has done to building IDEs. A mostly unified meta-model would mean that we wouldn't spend our times rewriting things to store issue lists.
Where else would you put these configs?
Technically, version control lends itself naturally as part of the now well-accepted infrastructure-as-code mantra.
Operationally, version control is the interface developers work with most, so shifting these interactions to that interface would be beneficial to users.
Of course, DevOps as a separate skill set is becoming less and less relevant given the increasingly integrated tooling that interfaces directly with developers, that's for sure.
I think my ideal devops situation is the one where you start with a simple deployment script from your local machine when it's a one man show and then scale it to a large organization not by switching to a large piece of devops software, but by gradually adding pieces that you need as your team grows and requirements change. Exactly what happened to us and I think our devops workflow is really great and I'm very proud of it.
Some companies have tons of old and new projects with very heterogeneous technologies in use. Imagine 50+ teams, several different programming languages, and things being deployed to different "hardware" (bare metal, cloud VMs, kubernetes, etc). It just seems like a lot of work to manage CI configs for all those different teams/cases, handle "support" requests from different teams, fix issues, and so forth. Hence, why the "easy way" out is to have each team manage CI configuration themselves as much as possible, to spread the maintenance cost across many capable developers.
- waffle.io was acquired and then shut down by the acquirer
- TravisCI was sold to a private equity firm and lost their way
It's not bad for entrenched CircleCI users, sure, but I do think it's bad for prospective CircleCI users, who are likely using GitHub already - so why would they not use something tightly integrated?
If you don't think GitHub is going to try and increase adoption of this tool then why did they build it?
(Work at GitLab; Opinions are my own)
They don't need to. They only need to be able to run tasks in sequence when triggered by some event, and the rest just builds itself.
An integrated solution that bundles issue tracking, CI/CD pipelines, and package/container repositories always beats spreading each feature across multiple separate service providers.
I agree, but with GitHub Actions it's also possible for CircleCI to build a tight integration. When it comes down to it, each CI system has its idiosyncrasies, so choosing a CI system isn't as simple as how tightly integrated it is (e.g. how flexible it is with Docker, or the way it builds containers, or running multi-container setups)
(Full disclosure: I work for Codefresh, a CI/CD solution)
For hosted build agents, this is about to change, as a new caching task is in preview. I've tried it, and it's unstable at the moment, so I'd recommend just waiting until it's out of preview.
There are issues open to look into it but no fix in sight yet. While this announcement sounds useful, don't throw away your current CI/CD tooling which is probably a lot nicer to use.
Lastly, I really dislike how pretty much every really useful action is created and maintained by a single person. There are some actions I'd want to see supported by GitHub itself; I don't want to have to hand over things like Slack access keys to a non-trusted third party just to post messages.
Every time I try to use Actions I'm surprised it was launched in its current obscure, unpolished state.
Also, I am not even sure what the appropriate syntax to use is with all the mixed messaging and examples (YAML or the other thing? Which do I use!?).
Regardless of which variant of syntax I attempted, the actions UI told me there was some generic error and that nothing was to be done. One additional problem I noticed is that if you have a protected master branch, you are going to be forced to get code reviews from your team every single time you try to iterate on the workflow script. There is no apparent way to test or validate actions without committing directly to master and seeing what the result may be.
All around, a complete mess in my estimation. I will be sticking with Jenkins for the foreseeable future. This GH feature is apparently not designed for people who care about straightforward solutions to simple problems:
git clone <repo>
<if failure, flag build, create issue, send email, etc>
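For what it's worth, that simple case maps to a short workflow file under the new YAML syntax - the build command and failure step below are placeholders, not a recommendation:

```yaml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: make build          # your actual build/test command here
      - if: failure()            # runs only when a previous step failed
        run: echo "flag build / create issue / send email here"
```

Whether that counts as "straightforward" compared to a Jenkins freestyle job is, of course, the commenter's whole point.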
Anyone have thoughts on how this compares to e.g. Google Cloud Builder in terms of functionality? Being integrated into the GH backend seems like a big perk, rather than having to use webhooks for everything.
Seems like you can do things like build your Docker containers (https://developer.github.com/actions/creating-github-actions...).
One thing that's great about Gitlab is the Gitlab server/runner split, where you can run workers in your own VPC, but still use their hosted offering. This makes it easier to keep your deploy secrets (aka the keys to the kingdom) locked down, as they never leave your cloud provider.
It's actually Cloud Build under the covers. Their Actions library sure beats having to figure out how to write the configs yourself for GCB though.
Nope, it's actually something based on Azure Pipelines' code (on a different infrastructure)
That's always a concern with platforms with attached marketplaces (heck, it's even potentially an issue with building apps for OS’s, even if they don't have an attached marketplace—ask Netscape.)
Does anyone know how they are going to bill for the compute used in the CI?
It is totally free for public repos. For private repos:
- Free accounts get 2000 free minutes
- Pro accounts get 3000 free minutes
- Team accounts get 10k free minutes
- Enterprise accounts get 50k free minutes
Additional runner minutes are:
- Linux: $0.008 per min
- Windows: $0.016 per min
- macOS: $0.08 per min (yeah that's not a typo, it is copied straight from the page, macOS is mad expensive)
I'm not sure if for large open source projects, these machines will be nearly enough to run the CI jobs.
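Using the per-minute rates quoted above, a quick back-of-the-envelope overage calculation (the quota numbers plugged in below are from this thread, not official documentation):

```python
# Per-minute overage rates quoted in the thread (USD)
RATES = {"linux": 0.008, "windows": 0.016, "macos": 0.08}

def monthly_cost(minutes_used: int, included_minutes: int, os: str = "linux") -> float:
    """Cost of runner minutes beyond the plan's included quota."""
    overage = max(0, minutes_used - included_minutes)
    return overage * RATES[os]

# e.g. a Pro account (3000 included minutes) running 10,000 Linux minutes:
print(round(monthly_cost(10_000, 3_000), 2))  # 56.0
```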
The price for Linux seems quite steep compared with, for example, what you pay on GCP.
It will be interesting to see the Github security teams catching those "public" repos doing nasty stuff like mining crypto - even with hard timeouts on each job it will be cool to see how this plays out!
Roughly in line with what other CI providers charge for macOS per minute. CircleCI's cheapest plan works out to $0.0498 per minute, but you need to pay US$249 per month to get up to 5,000 minutes - and you only hit that rate if you use all of your minutes.
For teams, azure devops costs between 6-52/month/user and includes many features not needed for a standard GitHub+CircleCI (or gitlab) project.
No doubt there will soon be a Gitlab blog post passive aggressively complaining about Github copying them again.
They should put that kind of energy into creating teaching content.
What's the typical latency for these kinds of features to arrive?
- Actions can fail, but still continue (more like an additional success/failure status)
- Manually triggered actions (maybe with parameters that need to be entered by the user)
- Artifacts attached to actions especially HTML reports (next to plain text, this is the universal output type for a lot of quality tools)
Thanks for your interest in GitHub Actions.
You can set a property `continue-on-error: true` for each step and the runner will ignore a failed result and continue the workflow. For more details on workflow configuration please see https://help.github.com/en/articles/workflow-syntax-for-gith...
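A minimal sketch of that setting in a workflow step (the step names and commands are placeholders):

```yaml
steps:
  - name: optional lint
    run: ./lint.sh
    continue-on-error: true   # a failure here is recorded but the job keeps going
  - name: build
    run: make build
```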
Workflow runs can be triggered via a `repository_dispatch` event with a custom payload. Using this model you could create a tool to allow for manual triggering. However, we do expect to provide a more integrated experience for triggering manual runs with custom inputs.
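A sketch of the workflow side of that model - the event type name `manual-deploy` and the payload field are examples, chosen by whoever calls the dispatch API:

```yaml
# Runs only when triggered via the repository_dispatch REST API
on:
  repository_dispatch:
    types: [manual-deploy]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploying to ${{ github.event.client_payload.env }}"
```

The triggering tool would POST to the repo's `/dispatches` endpoint with `event_type: manual-deploy` and whatever `client_payload` the workflow expects.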
More capabilities for actions to post artifacts and reports as part of our experience is absolutely on our radar.
We are working to bring as many new users into the beta as quickly as we can and we look forward to your feedback.
Azure Pipelines remains available and it offers extra features that won’t be in GH Actions - like support for repos outside of GitHub (BitBucket, or Git anywhere)... it also supports SVN and TFVC.
Pipelines is also tightly integrated with Azure and has a richer CD experience (eg the K8s integration)
While writing your own actions was painful (at least for me), reusing actions that other people wrote worked like magic. I think the reusability aspect is going to be huge when Actions get more and more popular.
The next logical step after Github Actions for CI/CD is to offer Github as a place to run the production code too. You would be buying Azure with fewer steps. It's pretty arbitrary that we develop code in one place while then going off and using a separate suite of tools and processes to run it. Having a suite for everything in one place could be very appealing to the market.
I know some people use the service with a static site generator as a free way to host a blog, but it’s not really the same thing at all.
However i wouldn't consider it for mission critical stuff - github's infrastructure can't be compared to actual paid-for cloud hosting.
I think the idea in this thread is that since github is owned by MS, and they have Azure, that won't be an accurate statement for very long. Github doesn't have to build that infrastructure, they just need to competently integrate into MS existing infrastructure.
I can't believe I didn't piece this together sooner but I agree with this threads premise. I'm already hosting my source on Github so why not build it there? And if I'm building it there then why not deploy it there? Rather than manage a complicated pipeline, I just `git push` and everything else just works ... all the way to massive scale.
As someone who has worked on hand-built github to AWS pipelines ... I can actually see this being the killer feature Azure needs to actually win a large market share.
I'm sure there are warts but the current develop, deploy, debug(in prod!) workflow with Azure and VSCode is supposed to be pretty awesome.
The brand is already MSFT Github, it just hasn't changed the site masthead. MSFT's core strategy and future goals aren't opaque. There are only two paths that maximize profit. Either MS bundles it up into a MS toolchain alongside hobbled alternate toolchain support, or abandons Github in parts until it's decommissioned (no need to dilute your brand). Maybe I won't be alive to see it, but I left for Gitlab because I believe this is how Github will maybe survive.
That's a good example of a tiny change Microsoft could do, which would make me want to leave github, hehe :)
- Market segmentation, the people on Azure DevOps are not likely to use GitHub Actions and vice versa.
- Long term plan might be to bring both of them closer together.
- Build good will (marketing).
One small improvement I would really appreciate is a reduction in the number of clicks required to reach the real build status page. Clicking "Details" on a status row for a travis build takes me directly to the summary on travis, whereas for azure it leads to a redundant summary page hosted on github first, and requires another click to actually reach azure.
If github gets bloated with bad ideas and microsoft specific features the community will leave.
True? microsoft person on stage https://devblogs.microsoft.com/devops/author/jeremy-eplingou...
Sorry Github! Not good!
However, it's not clear what happens to existing actions and workflows. Do they just stop working? Can actions still be made from a dockerfile and entrypoint script?
Already used the "migrate-actions" binary on a few projects, and while I don't have the new version enabled (you'll get a notification when the repo is available for an upgrade), this is better in terms of simplicity since you don't have to manage action references or the "resolves" field (when you were manually editing the hcl).
The only downgrade is that it doesn't look like you can run actions in parallel within the same "job"; so you can't have e.g. "cd project1 -> npm i" and "cd project2 -> npm i" running at the same time and then have a third action that can use the output/filesystem of both of those commands. Now the "job" will run one, wait for completion, then run the other, and only then can you have an action that uses the changes of both.
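One workaround under the new syntax is to split the parallel work into separate jobs, which do run concurrently - though note that jobs don't share a filesystem, so a combined step would need artifacts uploaded and downloaded between them. Job names below are illustrative:

```yaml
jobs:
  install1:
    runs-on: ubuntu-latest
    steps:
      - run: cd project1 && npm i
  install2:
    runs-on: ubuntu-latest
    steps:
      - run: cd project2 && npm i
  combine:
    needs: [install1, install2]   # waits for both parallel jobs
    runs-on: ubuntu-latest
    steps:
      - run: echo "both installs finished"
```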
“CircleCI has been building a CI/CD platform since 2011, and GitHub has been a great partner. GitHub Actions is further validation that CI/CD is critical for the success of every software team. We believe that developers thrive in open, connected ecosystems, and we look forward to working with GitHub to lead the evolution of CI/CD.”
CEO of CircleCI
Although it's possible they can successfully position themselves as the premium upgrade, it's hard to see how this isn't a threat to CircleCI.
One of our main goals when creating Codefresh was to make plugins that are not tied to Codefresh itself. As a result we followed the obvious model with plugins where they are just docker images and nothing else.
We are very glad to see that Github actions follows the same model. This means that we instantly get all Github actions as possible Codefresh plugins (and the opposite).
I would be really happy if other CI companies follow suit so that eventually a central CI/CD plugin repository can be created for all competing solutions.
I am mostly just excited for it to be released so I can try it.
Travis CI has a CLI that allows you to encrypt your PyPI password and stick it into the Yaml file (I think it works by encrypting your password and then uploading the decryption key to your Travis account). Will GitHub Actions have something similar, somehow?
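For reference, the Travis flow described here looks roughly like this - `travis encrypt` writes the encrypted blob into the config, and all the values below are placeholders:

```yaml
# .travis.yml deploy section after running:
#   travis encrypt <your-pypi-password> --add deploy.password
deploy:
  provider: pypi
  user: your-pypi-username
  password:
    secure: "base64-encrypted-blob..."
```

The GitHub Actions equivalent would presumably be repository-level secrets exposed to workflows, but the question of exact parity is a fair one.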
We spent a ton of time working through the syntax changes in HCL, YAML, and other languages. What we found was that while HCL and other languages are amazing for expressing pure configuration, software development lifecycle and continuous integration workflows are a blend of configuration and pseudo-scripting, and we were only able to come up with a clean syntax for this in YAML.
For example, although HCL supports heredocs, we felt that they suffered from readability problems versus multi-line strings in YAML when you're just trying to read through a workflow file and see what it does.
There are many, many other reasons we felt that YAML was actually the right choice, and I'm optimistic others will come around to it in the same way I have.
Eventually they started using an in-repo config file like GitLab but only supported about 20% of the GUI features. It's still abysmal. I wouldn't mind when Azure DevOps (Server) were to be replaced by GitHub (Enterprise).
Ahh, now I see, support for the main.workflow syntax will be removed, and you must migrate:
Did they get negative feedback from beta users?
Or is it related to the recent release of HCL2?
"Real" programming languages have good compilers, good error messages, debuggers, IDEs with syntax highlighters and so on. json/yaml/toml/xml/whatever always feels like you are poking something and then trying it, checking for errors from typos and starting over. This feels quite backwards if you are used to developing with a good strict compiler and debugger.
There are projects like pulumi that provide APIs for multiple languages and hook into almost any service (e.g. Azure, AWS)
A lot of times you really only have a list of tasks that are executed one by one, and the nature of that doesn't really change. I think it's overkill to introduce a new wrapping layer in those cases (and that is what Pulumi really is, it mostly just wraps Terraform if I understand right). But yes, sometimes it's better to have 'real' code at your disposal. It's just a matter to know when's when. Like always :-)
Also, debugging those yaml problems or real code doesn't really seem to make a difference for me. It's both equally annoying, and since there is a 'declarative' layer involved most of the time anyway, just a bit deeper down, debugging the 'real' issues (not typos and such -- btw. check out yamllint if you don't know it) is not any easier with wrappers like Pulumi. But maybe we have just worked on different type of complexities in our pasts. That usually explains different preferences for tooling. Just saying that this is not a thing we should/can generalize.
As for a % badge, you can use our new Coveralls GitHub Action to post your coverage data to https://coveralls.io, then add the badge to your readme:
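The badge snippet follows Coveralls' usual markdown pattern - `OWNER`, `REPO`, and the branch name are placeholders to fill in for your project:

```markdown
[![Coverage Status](https://coveralls.io/repos/github/OWNER/REPO/badge.svg?branch=master)](https://coveralls.io/github/OWNER/REPO?branch=master)
```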