Ask HN: What would your dream CI/CD pipeline tool be like?
7 points by chupa-chups on Oct 5, 2019 | hide | past | favorite | 9 comments
If you could wish for whatever you wanted, what would your perfect CI/CD tool be like?

- how would you create new builds?

- what would you expect to have to do to create dev, stage, and prod deployments, and how would you set them up?

- what would you pay for a solution like the one you imagine?

- would you pay a premium if the solution didn't lock you in to itself, to a specific cloud, or to anything at all?

If you're a professional DevOps/full-stack engineer, please try to look at it as if you're "just" an application engineer, i.e. not into Terraform, Ansible, etc.




Push to a git repo which contains a dotfile indicating that CI/CD should act on it. Possibly from subdirectories if it has multiple artifacts or is in a monorepo.
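For illustration, a minimal sketch of what such a dotfile might look like (the filename and keys here are hypothetical, not any existing tool's format):

```yaml
# .ci.yml: hypothetical marker file at the repo (or subdirectory) root.
# Its presence tells the CI/CD system to act on this directory.
artifact: my-service
build: make build
test: make test
environments: [dev, stage, prod]
```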

Understands how to fetch/publish packages for common ecosystems (e.g. gems, jars, npm, go) and extensible to add others.

Define multiple environments with selectable automatic/manual progression to the next environment upon passing the test suite. Secrets can come from a separate repo with more restricted access. Each test/deploy should record the version hashes of both the code and secrets repos.
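A hedged sketch of how those environments and the secrets repo might be declared (all names here are invented for illustration):

```yaml
# Hypothetical pipeline config: promotion rules per environment.
environments:
  - name: dev
    promote: automatic      # advance when the test suite passes
  - name: stage
    promote: manual         # require a human click before prod
  - name: prod
secrets_repo: git@example.com:org/secrets.git   # separate, restricted repo
# Each test/deploy records the commit hashes of both the code and secrets repos.
```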

Fast, parallel execution of the tests in the suite. Automatic retry of failed tests, with metrics collected to identify flaky code/tests.
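The retry-and-measure idea could be sketched like this in Python (a minimal sketch; the function names and return values are made up): rerun a failing test, and count pass-after-retry events so flaky tests surface in the metrics.

```python
# Hypothetical sketch: retry failed tests and record flakiness metrics.
from collections import Counter

flaky_counts = Counter()  # test name -> times it passed only on retry

def run_with_retry(name, test_fn, retries=1):
    """Run test_fn; on failure, retry up to `retries` times.
    A pass after a failure is recorded as a flaky occurrence."""
    for attempt in range(retries + 1):
        try:
            test_fn()
        except AssertionError:
            if attempt == retries:
                return "failed"
        else:
            if attempt > 0:
                flaky_counts[name] += 1
                return "flaky-pass"
            return "passed"
```

The dashboard part is then just reading `flaky_counts` sorted by count.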

Perform blue/green deploys in each environment. Audible text-to-speech notification of test failures delivered to an individual dev or a group (via Slack and/or other means).
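A blue/green flip can be sketched as two identical stacks and an atomic traffic pointer. This toy Python model (all names hypothetical) shows the key property: a failed deploy never touches the live side.

```python
# Hypothetical sketch of a blue/green flip: deploy to the idle color,
# check health, then atomically repoint traffic.
class BlueGreen:
    def __init__(self):
        self.versions = {"blue": None, "green": None}
        self.live = "blue"

    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, healthy):
        """`healthy(color)` stands in for a real smoke test against that stack."""
        target = self.idle()
        self.versions[target] = version
        if healthy(target):
            self.live = target      # the actual traffic switch
            return True
        return False                # live environment untouched on failure
```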

Quick one-button revert to an earlier version in any environment. Historical record of which version is running in every environment. Also have a button to force-promote a failed stage to the next environment.
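A toy sketch of the history-plus-revert idea (hypothetical names): if every deploy is recorded per environment, revert is just redeploying the previous entry, and the revert itself stays in the history.

```python
# Hypothetical sketch: keep a per-environment deploy history so a one-button
# revert is just "deploy the previous recorded version again".
class Environment:
    def __init__(self, name):
        self.name = name
        self.history = []           # every version ever deployed, in order

    def deploy(self, version):
        self.history.append(version)

    def current(self):
        return self.history[-1] if self.history else None

    def revert(self):
        """Redeploy the previous version; the revert is recorded too."""
        if len(self.history) >= 2:
            self.deploy(self.history[-2])
        return self.current()
```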

Periodic health checks running a subset of test suite with dashboard, alerts, and history. Audible text-to-speech announcement of health-check failures to dev/group.

It should deploy and test artifacts in the cloud of my choosing, in the deployment unit of my choosing (EC2, EKS, etc) to my own AWS (or GCP) account.


The developers I work with in other teams have figured out how to do CI/CD with the tools that come with AWS and GCP. Throw in GitLab and GitHub doing their own CI/CD integrations, and I don't know if there's a niche left for you to go after. As a DevOps person, CI/CD is not rocket science; it's a pretty straightforward concept that doesn't take long to set up and works automatically afterwards.


- treat steps of the process as functions that return data, instead of jobs that return at most success/fail, then provide me with ways to harness, manipulate, track, and display that data. (And a documented structure by which I can define what data I choose to return from a given step; I'm thinking mostly about two scenarios, the first being metrics about a build, and the second being structured information about errors/stacktraces)
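A minimal sketch of "steps as functions that return data" (the step names and data shapes are invented for illustration): each step returns structured results, and the pipeline collects them for display or querying instead of reducing everything to pass/fail.

```python
# Hypothetical sketch: a step returns structured data, not just pass/fail.
def build_step():
    # a real step would actually compile; here we just return example metrics
    return {"step": "build", "ok": True,
            "data": {"artifact_size_kb": 1042, "warnings": 3}}

def test_step():
    # structured information about failures instead of a bare red X
    return {"step": "test", "ok": False,
            "data": {"failures": [{"test": "test_login",
                                   "error": "AssertionError"}]}}

def run_pipeline(steps):
    results = [s() for s in steps]
    # downstream tooling can now chart metrics or group errors by type
    return {r["step"]: r for r in results}
```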

- provide me a way to use your tool locally during development, in place of (or wrapped around) the standard build tool for my language, and a guarantee that (within reason) it will give the same results on my machine as it does when running an automated job managed by the server. Give me clean, detailed ways to inspect the difference in situations where the behavior differs between my machine and the server. This would in theory help me to reduce the incidence of "works on my machine" sorts of errors.

- Give me a well-documented HTTP(S) API, and a way to add arbitrary endpoints to that API which trigger arbitrary scripts/jobs (or just query arbitrary subsets of the data from my first point above). I should be able to decide whether my new endpoint is specific to some arbitrary grouping defined for my org (and thus only visible to members of that grouping) or available to anybody who can reach the endpoint. (I'm thinking of teams practicing radical transparency, who might want to, say, provide a public endpoint anybody can hit that serves various build metrics; or teams who want to give their product owners or business stakeholders a controlled window into parts of the process, without having to build their own services to provide the API for that window.)
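The arbitrary-endpoint idea could look roughly like this tiny registry sketch in Python, with a visibility scope per endpoint (everything here is hypothetical; a real HTTP server and auth layer would sit in front of it):

```python
# Hypothetical sketch: user-defined endpoints registered with a visibility
# scope, dispatched by a tiny router.
endpoints = {}

def endpoint(path, visibility="team"):
    """Register a handler; visibility is 'public' or a team/group name."""
    def register(fn):
        endpoints[path] = (visibility, fn)
        return fn
    return register

@endpoint("/metrics/build-times", visibility="public")
def build_times():
    return {"p50_s": 41, "p95_s": 120}   # made-up example data

def handle(path, caller_groups=()):
    visibility, fn = endpoints[path]
    if visibility == "public" or visibility in caller_groups:
        return fn()
    return None   # caller is not allowed to see this endpoint
```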

- give me an easy way to wrap a debugger around a given job or step, locally or remotely, and a clean interface for stepping through it and (if appropriate) walking the stack.

- save a run as an artifact, so that it can be marked, saved, exported, reviewed, downloaded, and stepped through safely at a later date and on a different machine. I should be able to see responses that we got from remote resources without having to touch those resources when reviewing a job saved in this manner.
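A record/replay sketch of that last point (hypothetical names): responses from remote resources are captured during the run, exported with the artifact, and served from the recording on later review, so the real resources are never touched again.

```python
# Hypothetical sketch of record/replay for saved runs.
import json

class Recorder:
    def __init__(self):
        self.tape = {}          # request key -> recorded response

    def fetch(self, url, real_fetch):
        resp = real_fetch(url)  # hit the real resource during the live run
        self.tape[url] = resp
        return resp

    def export(self):
        return json.dumps(self.tape)   # save alongside the run artifact

class Replayer:
    def __init__(self, exported):
        self.tape = json.loads(exported)

    def fetch(self, url, real_fetch=None):
        return self.tape[url]   # never calls out; safe to review offline
```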


Heroku, free as in free software, self-hosted.


There's Dokku for single-server and Flynn for a more flexible, multi-server setup. I'm curious about other options.

I wish there were an easy way to get Netlify-style preview deploys for all branches with none of the hassle.


In my experience Dokku is slightly broken and shitty compared to Heroku. So I guess you get what you pay for.


can you elaborate on what's broken?


Flynn just became a Kubernetes distribution, IIRC.


www.harness.io is awesome, definitely willing to pay for that. Also loving Azure DevOps Pipelines.



