
The git commit, push, wait loop is terrible UX. Users deserve portable pipelines that run anywhere, including their local machines. I understand Act [1] goes some way toward solving this headache, but it's by and large not a faithful reproduction of the hosted environment.

There are many pipelines you can't run locally because they touch production, for example, but there's no reason we can't capture those workflows and run them locally at less-critical stages of development. Garden offers portable pipelines and adds caching across your entire web of dependencies. Some of our customers see 80% or greater reductions in run times, and devs get immediate feedback on which tests pass or fail without pushing to git first, using our Garden Workflows.

We're OSS. [2]

[1] https://github.com/nektos/act

[2] https://docs.garden.io




If folks just had actions target make or bash scripts, instead of turning the actions themselves into bash scripts, none of this would be an issue. Your CI/CD and your devs should all use the same targets/commands, like `make release`.
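A minimal sketch of that pattern, where CI is a thin wrapper over the same target devs run locally (the target names and helper scripts here are illustrative, not from any particular project; recipe lines in a Makefile must be indented with tabs):

```make
# Makefile — single source of truth for build commands
.PHONY: lint test release

lint:
	./scripts/lint.sh

test:
	./scripts/test.sh

release: lint test
	./scripts/package.sh
```

```yaml
# .github/workflows/release.yml — CI just calls the shared target
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make release
```

The point is that nothing in the workflow file carries build logic, so `make release` behaves the same on a laptop and on a runner.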


I'm actually confused and scared by how often this isn't the case. What are people doing in their actions that isn't easily doable locally?


A huge portion of my actions are for things like caching or publishing artifacts, which are unique to actions itself.
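For reference, that split usually looks something like this: the build step stays portable, while the artifact upload is Actions-specific with no local equivalent (the artifact name and path below are illustrative):

```yaml
steps:
  - uses: actions/checkout@v4
  - run: make build                      # portable: runs locally too
  - uses: actions/upload-artifact@v4    # Actions-only plumbing
    with:
      name: dist
      path: dist/
```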


I'd assume you would be able to publish and deploy locally before setting up actions. Such that those are likely targets in your build system?

Caching, I can mostly understand as unique there. Though, I think I'm living with whatever the default stuff in actions is. Slow for builds that don't happen often, of course, but not so slow that I care.


Unfortunately, my team has some builds that take ~25 min without caching and maybe 2 min with caching.

I'm still not entirely sure why it's the case, but the connection to the package registry is incredibly slow, so downloading all dependencies takes forever.


I'm fortunate in that 25-minute builds just don't matter for us. The long pole on every build is still the code review that goes with it, so I just don't care about driving that time down.

That is, I am assuming that a CI build is not on the immediate dev loop, such that the person pushing it doesn't have to wait for the build before they prepare a review on it.


Why should caching in the cloud be any different than caching locally?


There isn't any local cache in GHA after the runner exits


It's the cloud. Runners are ephemeral (pretend, but still) with no persistent storage. This makes you either rebuild everything at every release stage (bad) or put artifacts in S3 or whatever (also bad); this is especially painful for intermediate artifacts like dependency bundle caches.

As much as I like make, it just doesn't work with the typical cloud stateless-by-default configs. If it works for you, your project is small enough; try to keep it that way.


Rebuilding at every stage shouldn't be too bad, with pinned dependencies. I can see problems with it, of course. That said, using a private package registry seems like the correct path? That isn't too difficult to set up, is it?

That said, I'm still not clear on what difficulties folks are worried about. I'm also not sure I care about the mess of commits made while getting things working. The initial commits of getting anything working are almost always a mess, such that worrying about that seems excessive.


> with no persistent storage

There's https://github.com/actions/cache though?
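A typical use of it: cache a dependency directory keyed on the lockfile, so the cache only invalidates when dependencies change (the path and key here are illustrative, assuming an npm project):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```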


They're saying that unless you use Actions, you don't get the cohesive cache and artifact support, and that replicating it in the cloud or locally is a PITA. Thus people end up using the GH Actions vendor-specific tooling that way.


Just run the GitHub cache action on your build directory and then run make inside it?


All the linting checks and end to end tests I don’t want to bother setting up locally for every repo I touch.


Aren't these just other targets in whatever build system you are using, though?


This is how it should be done. It was trivial to port my company's CI from Jenkins to Gitlab because we did this.

Confusion arises when developers don't realise they are using something in their local environment, though. It could be some build output that is gitignored, or some system interpreter like Python (especially needing a particular version of Python).

Luckily, with something like Gitlab CI it's easy to run stuff locally in the same container it will run in on CI.


Well… yeah?

My GitHub Actions workflows consist of calls to make lint, make test, make build, etc. Everything is usable locally.

There are just some specificities when it comes to booting the dependencies (I use a compose file locally and GitHub Actions services in CI, I have caching in CI, etc.), but all the flows use make.
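That split can look something like this: a compose file for local runs and the equivalent `services:` block in CI, with `make test` shared between them (the postgres image, port, and password are illustrative):

```yaml
# docker-compose.yml (local)
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    ports: ["5432:5432"]
```

```yaml
# .github/workflows/ci.yml (CI)
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports: ["5432:5432"]
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

Only the plumbing differs; the test entry point is identical in both environments.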

This is not a technical problem, you’re just doing it wrong if you don’t have a Makefile or equivalent.


Yeah, it seems like we lost a lot of the "CI shouldn't be a snowflake" when we started creating teams that specialize in "DevOps" and "DevOps tools." Once something becomes a career, I think you've hit the turning point of "this thing is going to become too complicated." I see the same thing with capital-A Agile and all the career scrum masters needing something to do with their time.


Act's incompleteness has had me barking up the wrong tree many times. At this point I've temporarily abandoned using it in favor of the old cycle. I'm hoping it gets better in time!


I don't get why GitHub doesn't adopt it and make it a standard. Especially the lack of caches is annoying.


We need Terraform for build pipelines and God help you if you use Bitbucket lol


FYI garden.io’s landing page appears to be broken on iOS. It runs off the page to the right.


Thanks for flagging! We'll fix that.



