Waypoint: Build, deploy, and release applications across any platform (hashicorp.com)
549 points by jacobwg on Oct 15, 2020 | hide | past | favorite | 135 comments



From https://www.ottoproject.io

    It was an ambitious project, and we feel that it has not
    lived up to expectations. Rather than having a public project
    that does not meet HashiCorp standards, we have decided to
    close source the project so we can rethink its design and
    implementation. The source code is still available for
    download, but Otto will no longer be actively maintained or
    supported.
It looks like this is the successor to Otto, just with a bit of a different architecture.

The thing I'm worried about in terms of considering trying/adopting it for any project is whether this will suffer the same fate, if it's not as successful as Hashicorp hopes. I don't want to learn new tools that give a small increase in efficiency if I don't have a guaranteed return on that effort and some reasonable confidence using that tool will help me in other areas too.

Especially if the code behind the thing has the risk of going closed-source... it's kind of the Google problem (http://killedbygoogle.com), and I really hope Hashicorp can avoid that (which they have, as they continue to support well-used even if not-flashy-anymore tools like Vagrant).


The similarities between Otto and Waypoint are only skin deep: they both use the word "deploy." The way they deploy, their philosophies, etc. are COMPLETELY different. I talk about that a bit in this comment: https://news.ycombinator.com/item?id=24790491

Waypoint is fully open source. There is no risk of it going closed source. If you are alluding to Otto here: Otto's source was never removed either; it is in the Git history of the project that's still on GitHub, and we provided a one-click download link on the Otto website when we shut it down.

EDIT to add: If you're afraid of us shutting down projects, please note Otto is the only project in our entire history we shut down like that. So we don't (or shouldn't!) have the same reputation Google has around that. Terraform was effectively failing (no growth) for almost 12 months and we stayed committed to it.


The concern here is probably more that they become effectively dead without vendor support. While some projects escape that fate, those seem to be rare.


As ever, but chicken, meet egg.


I imagine that year of lag time is what they call a lagging indicator. I think with something like Terraform it can take some time before it picks up serious steam that stays steady.


Is there anything else they've discontinued though? Vagrant has waned in popularity given Docker, but it's still supported. Nomad is much less popular than Kubernetes, but it's still being actively developed, with features that make sense for its niche.


Sorry, I think my original comment is easy to misinterpret. I'm not saying I'm very worried that will happen, but I am mostly suggesting Hashicorp needs to be very careful post-launch to make sure people know this thing will be more of a Vagrant and less of an Otto :)

(And note that Vagrant was _extremely_ popular/'hot' for a time. It's still incredibly useful and widely used, even if it's not a slick new JS-based tool that everyone fawns over these days.)

Nobody wants another Google in terms of product support over time!


Nomad is still technically being supported, but looks to be low in priority and resources at the moment.


We've continued to grow the Nomad team, and are working towards the big 1.0 release milestone later this month. OSS usage continues to grow by double digits every quarter and our commercial offering generates millions in revenue. Nomad is also the backbone of both HashiCorp Cloud Platform and Terraform Cloud. Suffice to say, we continue to support Nomad and depend on it!


That's great to hear. I want to thank you for your work on Nomad. It's been a pleasure to use and has allowed us to scale without unnecessary complexity.


Awesome! I like Nomad a lot, I just know how hot and cold particular spaces can be.


On the contrary, the development of Nomad has really taken off over the last 9-12 months.

At the beginning of the year, when my new team was starting to dig into the technologies we were going to be using in our new (to us) ecosystem, Nomad was hovering at the bottom of the list, mainly because of the lack of critical features we would be needing (namely persistent storage solutions for containerized jobs, but there were some other reasons I can't remember off the top of my head).

It was, I believe, around the 0.10 release, when they laid out a roadmap for implementing the CSI spec on top of things like adding autoscaling, that we decided to go with it. This was back sometime before COVID lockdowns started, and since then we've deployed it, alongside Consul and Vault, to run our new internal production metrics/code coverage services.

Given we have committed to it, I've closely followed and participated in its development (in the form of a documentation contribution as well as assisting with a few issues related to the CSI implementation) and definitely believe the product's development is moving much faster than it had been.

Keep in mind, Cloudflare is now using Nomad in (almost?) all of its data centers, and Roblox has built out its game server infrastructure using Nomad as its orchestrator too. Hashicorp would be insane to throw away a product being used at that kind of scale.


Thanks, it's great to hear this kind of commitment. While it would certainly be possible to switch to Kubernetes if Nomad was ever abandoned, the thing is that I really wouldn't want to. Nomad, along with the rest of the Hashistack, is so much fun to use and feels so polished. Keep up the great work!


That's great to hear! I try not to buy into the "what's hot right now" mentality, but I'm just cautious after being left in the dark on products I've depended on. Glad they're continuing to invest in it.


When I think back to the era Otto was released in, it was a time when all sorts of vendors were feeling the container adrenaline rush that made them think all they needed were containers to replicate the Heroku experience. Otto largely reflects that time, in my opinion, and a lot has happened in the container and control plane ecosystem since then. I respect HashiCorp's decision to pull the plug on it. (That's coming from someone who works somewhere that takes the complete opposite stance on these things, so I'm fully aware of the trade-offs.)

I've only skimmed the release blog post for Waypoint but from where I'm sitting it seems like a "meta Control Plane" which makes it a wildly different tool compared to Otto.


Please don't quote text in code blocks. Makes it incredibly difficult for mobile users to read. Reformatted for mobile users:

> It was an ambitious project, and we feel that it has not lived up to expectations. Rather than having a public project that does not meet HashiCorp standards, we have decided to close source the project so we can rethink its design and implementation. The source code is still available for download, but Otto will no longer be actively maintained or supported.


I wish HN had direct support for quotations. It would benefit the discussion culture and solve these formatting problems.


+1

It would also be nice if HN could fix the formatting of code blocks.


Hello HN! I'm the founder of HashiCorp.

Waypoint is our 2nd day HashiConf announcement and I'm excited to share and talk about it! Compared to Boundary, Waypoint is definitely weirder, it's trying to do things differently. I'll be around here to answer any questions.

I think the most common question will be what is this and why? I cover that in detail on the intro page here so I recommend checking that out: https://www.waypointproject.io/docs/intro

Here are some major things Waypoint is trying to do:

* Make it easier to just deploy. If you're bringing a Ruby app to Kubernetes for example, Waypoint uses buildpacks and an opinionated approach to deploy that app to Kubernetes for you. You don't need to write Dockerfiles or any YAML. You can `waypoint up` and go (there's a minimal config sketch after this list). If you have existing workflows already, you can use a plugin that just shells out to `kubectl`. The important thing here is you get a common workflow "waypoint up" and your future apps should be much easier to deploy.

* Provide common debugging tools for deployments. Waypoint gives you `waypoint exec`, `waypoint logs`, etc. and these work on any platform Waypoint supports. So while K8S for example may provide similar functionality, you can now get identical behavior across K8S and serverless, or your VMs, etc.

* Build a common workflow that we can plugin other tools around. This is similar to Terraform circa 2015. There wasn't a consistent way then to think about "infrastructure management" outside of a single target platform's tools. With Waypoint, we're trying to do something similar but for the application deployment lifecycle.
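
To make that first point concrete, here is a minimal sketch of a waypoint.hcl for the buildpacks-to-Kubernetes case. The project, image, and registry names are placeholders and exact plugin options may differ from the docs, but the overall shape is build/registry, deploy, release:

    # hypothetical waypoint.hcl -- names are placeholders
    project = "my-project"

    app "web" {
      build {
        # Cloud Native Buildpacks: no Dockerfile required
        use "pack" {}

        registry {
          use "docker" {
            image = "registry.example.com/web"
            tag   = "latest"
          }
        }
      }

      deploy {
        # explicitly opt into the Kubernetes plugin
        use "kubernetes" {}
      }

      release {
        use "kubernetes" {}
      }
    }

From a file like that, `waypoint up` runs the build, deploy, and release steps in one go.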

As always, a disclaimer that Waypoint is a 0.1 and there is a lot we want to do! We have an exciting roadmap[1] that includes promotion, config syncing with KV stores, Vault and others, service brokering for databases, etc.

And also, lots of jokes about Otto in the comments here. :) I think Waypoint and Otto similarities end at "they both deploy" (and Otto with HEAVY quotes around "deploy"). They're totally different tools, one didn't inspire the other, though we did make some major changes to avoid Waypoint hitting the same pitfalls as Otto.

Super excited to share Waypoint today!

[1]: https://www.waypointproject.io/docs/roadmap


Hi Mitchell,

Could you please elaborate further on how you see Waypoint integrating with CI systems?

It seems like you expect developers using Waypoint to deploy straight into production from their laptops, which is an anti-pattern, particularly for any company larger than a few people. Companies really need to make sure that code is checked into version control and that tests are run before code is deployed into production.

How do you see Waypoint being used in companies where there are more structured pipelines to build code?


We have a section dedicated to that, with specific CI systems called out: https://www.waypointproject.io/docs/automating-execution

Generally speaking, I like to describe Waypoint in CI similarly to Terraform for infra. Prior to Terraform, all CI environments had this messy step that was like "create/manage infra", and it was mostly homegrown scripts that were cloud-specific.

Today, CI systems often have a "deploy" step which is similar: home-grown logic that is platform-specific (kubectl for K8S, packer/terraform for VMs, etc.). We believe the "deploy" step in CIs can be replaced with "waypoint up" or "waypoint deploy".

The link above shows you how to do this.


I think I see what you're getting at - you're trying to create a standard way to build and deploy applications, in the way that Terraform is a standard way to build and deploy infrastructure, so to speak.

However, I think that one of the big differences here is that a lot of the reason why infrastructure scripts are messy is that they have to deal with state that's hidden behind a remote API, and that Terraform took the imperative APIs for providers like AWS and abstracted them behind a declarative configuration model that also kept track of state. So this was a huge value add compared to infrastructure scripts that would often get corner cases involving state wrong.

I'm not sure how much that's the case with application development though. Dockerfiles are declarative, as are Kubernetes manifests. The state of the environment is less of an issue - if a new version is being built, that's what gets deployed, and it overwrites what previously exists.

If there's any state management around application deployment, it's around detecting errors and rolling back if there are any. If Waypoint would move in the direction of standardizing that, so that developers could encode smoke tests for their application that Waypoint is constantly checking, then that would be huge - but it doesn't seem to be on your roadmap?


> I'm not sure how much that's the case with application development though.

Not all apps fit neatly into a container. In the data warehousing space an app could consist of client code, middleware, and database objects, and then there’s probably data in object storage, and ETL code to deal with. There’s code in your workflow/scheduling system to be updated. The CI and test frameworks need to be updated.

And it all has to be deployed and tested in a synchronised way, especially if you want it to be reproducible.

I’d rather use a purpose built tool for this, than the mashup of tools and scripts that I currently use.


Hey Mitchell,

It's an interesting idea, but as a DevOps Lead, I'm not sure this is anywhere near our biggest pain point. We've been in the midst of a DevOps transition from manually managed EC2 and Chef towards CI/CD and infrastructure as code for two years. Terraform solves one of our biggest problems - consistently representing infrastructure as code. But Waypoint won't solve the other. In fact, Waypoint doesn't seem to touch on anything that's actually much of a problem.

The deployment step isn't the problem - it's everything surrounding it. See, we're not on CI or CD yet. We're working our way towards them from a legacy system. In that intervening time, what we need is a generalized build tool that provides the ability to do CI or CD, but also allows you to just write general build and deployment pipelines that allow you to gradually automate that process and run them manually outside the context of an automatically triggered pipeline. We've been in this stage for two years, and the only tool that actually does this coherently is Jenkins. But Jenkins is an absolute nightmare to work with on many levels. Our only other real alternative would be to cobble together a mess of bash/python/nodejs scripts or use chat ops. There isn't a good solution in this space.

Gaia Pipeline, an open source project in development by a small team led by a Hashicorp engineer, looks like it could be our long-wished-for Jenkins killer. I'd really love to hear your thoughts on whether Hashicorp might be willing to throw its full weight behind it: https://github.com/gaia-pipeline


Depending on where you are in the transition, Garden (https://docs.garden.io/basics/how-garden-works) might be a good fit. It definitely touches on all of the surrounding problems, such as managing dependencies, integration tests, and running arbitrary tasks (think DB migrations).

You define one part of your system at a time and Garden compiles that into a graph of modules that it knows how to build, test, and deploy. This means you can adopt it piecemeal and use it to run an entire CI pipeline from any given context, e.g. your local machine, VM or actual CI for that matter.

Full disclosure: I'm affiliated with the project, but your comment describes a problem we see from a lot of our users, so I couldn't help but reply.


You can check out Drone.io. It's based around plugins, which are basically Docker containers which accept env variables for configuration and a volume with the code (which Drone clones from git). You can easily write custom plugins in any language you want, test them, and run them locally outside of Drone with little configuration.


Totally agreed! I still haven’t found something other than Jenkins that would work for our use case, which is very similar to what you stated.


I'm not Mitchell but I think this page already answers your question:

https://www.waypointproject.io/docs/automating-execution

So basically you can integrate waypoint to your existing CI/CD solution.

Kind of using it as a Makefile for pipelines, if I understand correctly.


Ooops, burned :P


Hi Mitchell!

A problem I had with Gitlab Auto CI/CD was that even though it fully abstracted away Kubernetes and Helm, when something went wrong I was suddenly confronted with having to learn and understand all of it.

Then when I fully understood it, writing my own simple .gitlab-ci.yml was orders of magnitude easier and simpler than using the big abstracted auto ci/cd system.

Waypoint seems to be similarly abstracting away a lot of complicated details. Is there a philosophy about what waypoint expects of its users, especially when deployments break out of the cookie cutter standard situations?


The level of Waypoint abstraction is the same as Terraform. You are not expected to use Waypoint and not understand the underlying platform. This is similar to Terraform: you still have to know something like AWS well to use Terraform.

With Waypoint, we aren't doing any magic, you specifically opt into a Kubernetes plugin for example. If something goes wrong, you'll have to understand Kubernetes. Waypoint isn't trying to replace that, we're just trying to provide a common workflow on top of these platforms (same as Terraform with infra).


Product Leader from GitLab here - In support of what Mitchell said. One lesson we learned from Auto DevOps is that composability and transparency were key. Just like you mentioned tinco - users struggled when things eventually broke - or they wanted to customize beyond the out of the box customizations that were available. That's why we evolved to have composable Auto DevOps[1] and Helm installs via CI/CD pipelines.[2]

[1]https://docs.gitlab.com/ee/topics/autodevops/customize.html#... [2]https://docs.gitlab.com/ee/user/clusters/applications.html#i...


Hey, yeah sorry I didn't mention that. We were a pretty early adopter I guess, we've been on it for well over a year now and it's evolved a lot since then. That it's now split up in separate modules makes it much easier to work with. Something that I couldn't find back then was tutorials or blog posts on how to really work with it, not just deploy happy case apps. The product wasn't very popular yet so there was really no community producing that kind of blog post.

Also, the fact that the GitLab issues have now been merged into one repository has made reporting an issue so much clearer. You guys are constantly improving at such a rapid pace, definitely kudos for that.


I'm quite curious about the URL service when deploying the server onto Kubernetes. How is the public `waypoint.run` able to access deployments in a Kubernetes cluster? I know it uses Horizon. But, are the requests somehow proxied through the deployed Waypoint server?


We talk a bit about how this works here: https://www.waypointproject.io/docs/url

Our entrypoint reaches back out to the URL service; that’s how it works. This feature is optional. On our roadmap page you can see that we are planning various improvements to continue to make this more secure as well.


Is the source for waypoint.run open source? Can we run our own?

I've not had a chance to dig for the source and it wasn't obvious when I skimmed the project.


Yes it is: https://github.com/hashicorp/horizon and https://github.com/hashicorp/waypoint-hzn

Note, we haven’t tried to make that easier to self-run, but we didn’t purposely make it difficult either. It just wasn’t a priority for an initial launch. We’ll continue to improve this going forward.


Could you be more transparent with the roadmap for Waypoint and Boundary?

I couldn't find a roadmap anywhere. I'd like to know a rough timeline if I am to consider these even for non-prod for the projects that I do.

I understand these are both v0.1. I'm just humbly asking for some transparency in regards to the future.



Maybe I'm an idiot but there's no timeline anywhere in those links.


That's likely intentional? My employer doesn't give timelines either because of how hard it is to predict software development times.


Yes this is intentional. There are a couple reasons:

(1) It really depends on how much community feedback we get. If we’re planning feature A but suddenly loads of people want feature B, we’re gonna reprioritize and change timelines.

(2) People get really, really, really upset when you don’t hit your timeline. It’s sort of a lose/lose for us. So we keep them close to heart.


Curious if you are familiar with Habitat (https://community.chef.io/products/chef-habitat/) and how this compares?


Great work! Just curious - could you describe your product development timeline for Waypoint at a high level?

How you came up with the idea, how you iterated on it, and how you decided what features to include and not include in the initial release?


Is there a complex build example, say non-trivial Java (e.g. Spring, Jetty, etc.) or Rust?


I work in Product Mgmt at HashiCorp. There is a Java Spring example here that uses buildpacks from Heroku, Cloud Foundry Paketo (the VMware Tanzu team that works closely with Spring), or GCP Buildpacks.

https://github.com/hashicorp/waypoint-examples/tree/main/doc...

You don't have to use buildpacks, you could use the Docker plugin with a Dockerfile as in this reactjs example: https://github.com/hashicorp/waypoint-examples/tree/main/doc...


I think these are still trivial though in that they don't use any external resources like a database. Are there no examples that do that?


I'm late in the thread and this will probably get buried but:

I think this looks amazing. Lots of comments here are saying it's yet another useless abstraction.

Sure, maybe it doesn't do anything you can't already do yourself, but the experience looks like one I would enjoy and could see myself using.

Obviously I can't pass judgement that quickly so will reserve further comments until I can spend some time this weekend experimenting, but it's an experiment I look forward to.


Can't wait for the next layer on top of Waypoint which will allow me to build, deploy and release applications everywhere with one click!


Or the one after that where I can do it from my Amazon smart speaker!


Alexa, Deploy Application.

"Ordering Applebee's"


I always love HashiCorp products!

But...

This feels like Otto v2 to me, and it seems like it hasn't actually solved the underlying problem with Otto: that it was simpler to just use and learn the underlying tools instead of a very specific DSL that transformed into them. If I have to look up how to use Waypoint to create Dockerfiles/Kubernetes manifests anyway, why not just learn how to use Dockerfiles and Kubernetes manifests?

I'm pretty excited by Boundary, but I don't really see the point of this.


> why not just learn how to use Dockerfiles and Kubernetes manifests?

The problem I see with the entire Docker ecosystem is that the Docker/Dockerfile build system is fundamentally terrible, principally because the cache system assumes a linear dependency tree and build graphs are, well, graphs. To get reasonable build performance, you need a tool that understands this. This is something Nix gets right, and it even lets you build Docker images without the Docker/Dockerfile build system.

I'd also like to see something that takes it a step further--build the docker image AND a set of kubernetes manifests that reference that docker image (a "kubernetes app package" if you will) such that you have a single artifact that represents your application that a "kubernetes package manager" can deploy (the current crop of k8s package managers altogether punt on the contents of docker images).

I understand that this is a very different way of thinking about Kubernetes and Docker builds than what we practice today, but I'd really like to see this in practice or hear some debate about its merits (or lack thereof).


Broadly, I agree, and I've written a lot about it in the past[0].

OCI images are not quite as bad as they used to be. Linear caching is no longer baked directly into the image format; instead, it's an assumption that has been carried forward by Dockerfiles and docker build.

Docker folks are working on at least the docker build part in buildkit[1]. In the meantime though I prefer Cloud Native Buildpacks, which are able to perform layer rebasing as an update operation.

Disclosure: I have previously worked on buildpacks technology for Pivotal, now VMware.

[0] https://docs.google.com/document/d/1M2PJ_h6GzviUNHMPt7x-5POU...

[1] https://github.com/moby/buildkit


> I'd also like to see something that takes it a step further--build the docker image AND a set of kubernetes manifests that reference that docker image

You might want to check out Garden for that (https://docs.garden.io/basics/how-garden-works). It thinks about your stack as a graph and manages build and deploy dependencies. And it supports the use case you're describing, i.e., you can combine a Docker image and a set of K8s manifests into a single "module" and the manifests can reference the Docker image.

Full disclosure: I'm affiliated with the project.


I believe BuildKit solves the dependency graph problem. Docker has shipped with it since 18.09. It is opt-in for now so you have to use e.g. `DOCKER_BUILDKIT=1 docker build ...`.

https://github.com/moby/buildkit


I had looked into this a while ago, and I forget why, but I thought it was a mismatch; however, looking again at the list of downstream projects, this one at least seems compelling for the use case I described:

https://docs.earthly.dev/examples/monorepo

At least it addresses building the right Docker image--it doesn't seem to address building the broader Kubernetes manifests, but I'll take what I can get at this point.


CNAB[0] gives you the kubernetes app package, and can push that and the underlying images to a container repo.

[0] https://cnab.io/


Hello! Compared to Otto there are some major simplifications we made:

* Waypoint does not own/manage any infrastructure; Otto attempted to manage the infrastructure. With Waypoint, you have to bring this yourself.

* Waypoint has pluggable components for build/deploy/release (see the sketch at the end of this comment). Otto focused on our tools (mainly because we had to manage the infra, point 1). This makes it much easier to slide into existing ecosystems.

On your question of abstractions: this same argument can be made for Terraform, so I think my answer there would be the same but for applications. The idea is that Waypoint can give you a consistent workflow to use different tools, plus additional value-add features such as deployment URLs, exec, logs, etc. that work across platforms.
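
As a rough illustration of the pluggable components point (a sketch only, with plugin options omitted): retargeting a deploy is meant to be a matter of swapping the platform plugin in the deploy stanza rather than rewriting the workflow, e.g.:

    # same build/registry stanzas as before; only the platform plugin changes
    deploy {
      use "nomad" {}   # was: use "kubernetes" {}
    }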


Thanks for the response Mitchell, and I love your work!

You make an excellent point re: abstractions and consistent workflow. But for Terraform, you still need to know the underlying provider; to use Terraform in AWS I have to know what AWS resources I'm looking for, Azure resources in Azure, and so on. But Terraform adds value through abstraction. I don't need to learn the specific AWS/Azure calls; Terraform does it for me. Terraform provides a sane, consistent syntax. And it encourages a declarative workflow that the cloud providers themselves don't do very well/at all.

I don't necessarily see Waypoint providing that same value. You need to know the underlying provider to know what you want to do with it, but the abstraction seems to make it more difficult to use that provider, not easier. But I am a devops professional, not an application developer, so I might just be the wrong market for it.

Either way, congratulations on the release, and I'm excited to see where Waypoint goes from here.


Given the prevalence of Kubernetes, I am not sure more abstractions are needed. Kubernetes, if anything, is too complete in how it manages its deployments, and all this does is add another level of abstraction on top of that, while encouraging people not to think about the operational part of the loop. For example, no health or readiness checks are defined as part of the deployment examples in this link. If you are still running multi-deployment with Ansible/Puppet/CloudFormation, etc., I see value, but I think most are probably better off focusing on going to K8s.


Managing Kubernetes configuration is a task unto itself. I'd say one of the best things that "deserves to be built" is a user-friendly layer that generates Kubernetes configuration that can start simple (e.g. total boilerplate) and be configured by hand as you have deeper requirements. Additionally, build steps (e.g. how your containers get built), etc., can't be ignored. This all tallies up, for me, to about a day of engineering work to get a project set up with all the bells and whistles every time I want to start a new serious project on K8s. I definitely think Waypoint (or a similar product) could reduce this time to minutes, so long as I can customize more as the project evolves.


As a System Architect, I would never be responsible for starting the use of Kubernetes. So there's a clear market for replacing it.


As a not-that-old-but-still-grizzled desktop/embedded developer, I suggest this headline be changed to:

"Waypoint: Build, deploy, and release _web_ applications across any _cloud_ platform"


These don't have to be web applications. Maybe desktop applications should be referred to as ... "desktop applications".


Yes, it's very misleading. I thought it would be about apps for desktop and mobile.


This is great. I'm glad a good company is backing an open source project like this. I've built abstractions like this at two companies, and I hope one day I can stop writing my own. Some comments:

I see "promotion" on the roadmap, so this is probably in the works. But the current "Workspaces" each have their own build. I really want to be able to promote the same artifact from one environment (staging) to another (production). I'd also like to be able to choose the artifact build/version to deploy, not just what happens to be in my local repo.

The concept of multiple environments also brings up the need to vary things by environment. App config (env vars) are obvious. But also settings like number of replicas or auto scaling min/max (auto scaling is also required).

The biggest thing most of the tools in the space lack, like all the ones that try to copy the docker-compose.yml syntax, is standardized reusable app settings. Imagine I have two common types of app, "API services" and "background workers". They have common settings, like maybe they all default to using /healthz for health checks and auto-scaling at 70% CPU. Then within each group they vary, "API services" use internal load balancers and "background workers" don't.

I don't want every individual "API service" app to have to say "my health check endpoint is /healthz" and "I run on port 8080 and need a load balancer". Those are the defaults, and if the app uses the defaults it shouldn't need to be configured. But at the same time I want the app to be able to override the defaults. Within a well standardized environment 90% of app infra settings (not env vars) are the same, or can use the same template (like the image is docker.example.com/svc/{app_name}:{build_id}). I want to be able to reuse the settings.
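
To put that wish in pseudo-config form (purely hypothetical, not real Waypoint syntax), the kind of inheritance being asked for might look something like:

    # hypothetical, NOT valid waypoint.hcl -- just illustrating the inheritance idea
    app_defaults "api-service" {
      health_check  = "/healthz"
      port          = 8080
      load_balancer = "internal"
      image         = "docker.example.com/svc/${app_name}:${build_id}"
    }

    app "billing" {
      inherits = "api-service"   # pick up the shared defaults
      port     = 9090            # override only what differs
    }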


Happy to say that everything you brought up has also been brought up internally prior to release and is on our roadmap.

(1) Promotion - yes, exactly as you described.

(2) Shared app config - kind of already there in the API but not exposed yet.

(3) Inherited settings - little further out, but some concept of module or parent Waypoint config to inherit from is something we'd like to do. We'd also like to be able to allow people to /force/ this by platform, i.e. "ALL K8S apps must have these annotations." And so on.

This is all very possible with Waypoint and I expect everything you mentioned will exist by this time next year.


I think what you're looking for is micro => https://github.com/micro/micro or the hosted version https://m3o.com. Our goal is basically to abstract the entirety of this away to a common experience from the developer's viewpoint.


Possibly, thanks, I'll look into it. One key is that I'm not looking for an abstraction, I'm looking to provide an abstraction. I run infrastructure teams that provide the infrastructure as a product to other engineering teams, including build and deployment systems. So I need a system that is flexible enough that I can create the abstraction I want to provide. Most of the tools in this space either abstract away everything, in an opinionated way that I don't agree with, or they abstract away too little, so I would have to write a heavy abstraction over the abstraction. Waypoint, with its plugin system, and if the HCL syntax is expanded to support reusable templates/modules, may provide a good building block for me.


Garden (https://docs.garden.io/basics/how-garden-works and https://garden.io/) might actually be a good fit. Some of our current users (I'm affiliated) actually use Garden for just the use case you're describing.

But to your point, abstractions are hard, and we've tried to design Garden such that it takes care of the _what_ and _when_ but allows the users to decide the _how_.


If you are a builder of platforms then I agree. You are like me, except building it for your own audience, in which case Waypoint is quite useful for tying together a number of the pieces.


So that's what they've been doing with all the Heroku people they've been hiring.

http://hashiroku.com/


Another tool that seems quite heavily focused around making the simplest possible cases look snappy and simple. The simple cases aren't the problem. And when it comes to the complex cases, I have much more faith in the extensibility and power of Nix, which is designed as an actual real programming language to give users the power to solve problems the designers had never envisioned for themselves, than HCL, which seems to be arbitrarily restricted and half-thought-out at the best of times.


Author of Waypoint seems to be interested in Nix https://github.com/hashicorp/waypoint/commit/f58c13d9fe3581a... :')


I'm not buying the premise that there is a need for yet-another-abstraction over a high level tool like Kubernetes. You need to learn your underlying deployment platform anyway, introducing yet another tool feels like a distraction.


I disagree for the most part. Developers shouldn’t have to know and understand their infrastructure. It’s great if they do, but making it a requirement is just adding complexity for complexity’s sake.

Kubernetes is high level if you’re an infrastructure or ops person. It’s not high level if you’re a developer. This argument sounds an awful lot like “learn C before you learn JavaScript because you’re going to need to know C anyway” combined with “C is a high level language”. That might technically be a true statement but it’s only true for a very small number of people.

In my opinion the goal should always be to have a platform where developers don’t need to worry about the underlying infrastructure. Software development is hard enough as is, and Kubernetes isn’t exactly easy.


You can't use this tool if you don't know the platform you are deploying to.


This is pretty interesting.

I spent the last year building an in-house, Kubernetes based PAAS that has a lot of similar functionality. What we don't do, however, is support multiple execution environments -- we're tightly coupled to Kubernetes.

The fact that it's OSS makes it a compelling starting point for other organizations who are beginning their own in-house PAAS efforts (or restarting them). If I were starting over I'd definitely dig into this before deciding to roll my own.

I know I'm certainly going to need to make some time to learn more about Waypoint. It might be too much pain to migrate to, as our in-house system is pretty mature -- but it'll at the very least serve as a source of inspiration. Nice work!

Do you know how you plan to monetize this in the long run? I assume it's via a managed offering, where the end-user isn't responsible for hosting the software themselves and instead pays a monthly rate (or pays per use). Knowing what that trajectory looks like would help me feel more comfortable using it, as it'd clarify how it will / will not change.


I work in Product Mgmt at HashiCorp.

We do hope to have Waypoint as a HashiCorp Cloud offering in the future. Yet we expect to have the core workflows available in open source. There may someday be features in a paid Enterprise edition, similar to the approach we've used well with Vault.


Gotcha, thanks!


This seems handy for teams that aren't using Kubernetes. If you're already on Kubernetes, there are better tools out there like Argo, Skaffold, Harness, etc. with a tighter focus. Curious to know why `waypoint test` is missing, that could be interesting...


This is clearly the successor to Otto. I think the key point here is this:

> "This workflow is consistent across any platform, including Kubernetes, Nomad, EC2, Google Cloud Run, and over a dozen more at launch. Waypoint can be extended with plugins to target any build/deploy/release logic."

It makes a lot of sense. You provision your infrastructure with Terraform and deploy with Waypoint. This is basically Terraform but for Deployments.

I think that's pretty cool. I'm wondering how clever this is, though. If I have something running locally and deploy it with Waypoint, does it figure out all the configuration automatically?

One big challenge that I have always had with fresh deployments into new environments is making them work out of the box. I don't remember the last time I have deployed something without having to change a configuration or mess with the command line of my host to make it work.


If I understand how it works, there's something I find fundamentally wrong about it, which is that an application should not know where or how it's deployed. It has to know how it's built, of course, but it should be deployment agnostic.

Anyhow, I tend to like all Hashicorp projects and I'm grateful for their work.


I will drill deeper into the documentation during the weekend, but if Mitchell is still around I have a few questions about how this might work for some apps I'm building.

1. Is there ECS support for Fargate, specifically, and how does waypoint exec (for a shell) work if Fargate is supported? We hacked a bastion container together using SSM remote activation, and it feels so brittle.

2. If I want to deploy multiple versions of my app into the same VPC is that possible? My use case is having 1 giant VPC/RDS that multiple app/feature versions will take a slice off and use. Rather than setting up a VPC + all services per staging/feature environment.

3. Is there support for dynamic URLs using native cloud tools? For example, I'd love to use ALB auto-provisioned certs and carve off dynamic URLs on a subdomain.

Thanks :)


1. ECS yes, Fargate I don't think so yet but we're looking into it.

2. Yes, you want this: https://www.waypointproject.io/docs/workspaces

3. Yes: https://www.waypointproject.io/plugins/aws-ec2#aws-alb-relea... with more on the way.


Thanks for the responses and pointing me in the right direction, I'm super excited about this project and I love the tools your company builds.


Oh man, I got excited and thought you guys were finally gonna start developing a CI product. As a heavy Jenkins user, I like how extensible it is, but it could be simpler and work better and more easily with tools like Nomad.

I hope you guys hop into the CI server scene. A man can wish.


After reading through some examples, I came away with the same thought. I hope that the roadmap extends this into the CI space further!


There are way too many CI tools out there, covering myriads of use cases and niches.

Jenkins doesn't do anything better than any other tool.


Man, people are still deploying Docker applications on a single server with docker-compose or docker-swarm for its simplicity. Some of my personal projects are still deployed like this as it is really simple (it's Docker with a very thin YAML), works exactly like on my machine, and is, against my expectations, really stable.

Is there some plan to support docker-compose or something like this? The "magic" of why it works so well for simple solutions is the service discovery. Since version 1.10, DNS resolution by container name in user-created Docker networks has been possible, which would be enough to replicate most of the functionality of docker-compose. The ability to start multiple containers for an application is also required. Is there some plan to support a lot more Docker configuration options? Or is the Docker mode just a small neat feature which will not receive much development, as Kubernetes and Nomad will be the main deployment systems for most of the users?


How exactly are you deploying with docker compose? In my head the only approach would be to somehow sync the docker compose file(s) to a server, have them use the latest tag, and then have something like watchtower regularly check for updates on the images and restart the service.

Is there some well defined way of doing this remotely from say a Gitlab pipeline?


I am not doing this automatically, and Waypoint seems to follow the same manual approach. I just SSH into my tiny VM, change the Docker tag within the env, and restart everything.

This step is so small it could even be automated, but I prefer to control the deployment manually. And every release has a different tag to be able to roll back to an old version, which is not easily doable if you just deploy the latest tag.


I really like the focus on dev ergonomics, but as someone who works on a crufty older application, the problem I have is not deploying, it's managing a build system that needs to work around existing application code. If I could just refactor the application to use buildpacks, I wouldn't have a problem in the first place!


The documentation says that Waypoint does not want to replace PaaS, but rather to integrate with both PaaS and lower-level infrastructure like Kubernetes.

However, the documentation also says that Waypoint injects a “smart entrypoint” into the containers it builds, and that entrypoint seems very intrusive. Among other things, it calls home to the Waypoint server and registers on a “URL service” which will then route traffic back to it. That is basically a PaaS service mesh.

So my understanding is that Waypoint, while claiming to be complementary to PaaS, is actually a trojan horse to inject its own PaaS into my infrastructure. I understand why that is valuable to Hashicorp; and I may even be interested in evaluating a Hashicorp PaaS. But I am not a fan of the way this is being snuck in. Injecting a service mesh into my stack is a major change; please be upfront about it so I can compare it to other PaaS.


Hi, we are not intentionally hiding how Waypoint works. We attempted to make the Entrypoint functionality very clear in the documentation with explicit instructions for how you can turn off Entrypoint injection either during build time (it's never put into the artifact) or at deploy time (a configurable on-off switch). If you turn it off you lose features like the URL Service, logs and exec, but the build, deploy and release workflow is still there.

https://www.waypointproject.io/docs/entrypoint/disable#disab...

https://www.waypointproject.io/docs/entrypoint/disable#disab...


Thanks. My issue is not with the feature being hidden, but the positioning being misleading.


The waypoint server is run on your infrastructure.


I am aware of that. What’s your point?


What is that conf format? It looks like neither YAML nor JSON. (If your conf files are not a standard format, this project is likely not going to go anywhere, IMHO.)

edit: Apparently non-standard [0].

[0] https://github.com/hashicorp/hcl


We use HCL broadly across the HashiCorp portfolio. It was popularized by Terraform, but is also used in Vault, Consul, Nomad, and Packer.

It's important to note that HCL is designed to be completely interoperable with JSON. In practice, we find that HCL tends to be more human readable and writable and is significantly less verbose.

However, if you are doing any sort of machine generation, it can be convenient to generate JSON and feed that in.
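
For a rough sense of the difference (a generic illustration, not Waypoint-specific configuration), the same structure in both forms:

    # HCL form
    app "web" {
      env  = "production"
      port = 8080
    }

    # Roughly equivalent JSON form (HCL parsers also accept JSON input):
    # {
    #   "app": {
    #     "web": {
    #       "env": "production",
    #       "port": 8080
    #     }
    #   }
    # }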


Terraform uses HCL, and it’s done pretty well! I think it’s significantly more ergonomic than YAML and JSON (and even slightly more than TOML).


And yet terraform uses the same format and is the leader in its space?


HCL is a superset of JSON, so tools can still output JSON. Having worked with HCL before for Terraform, I can say it is way nicer for human-edited config files than either YAML or JSON.


I’m glad to see new tooling targeting developers and the local development loop, which was neglected because historically it was hard to monetize. As a consultant involved in cloud migrations and CI/CD, I do see the value of a common language for build/test/deploy; it’s a mess in enterprises nowadays. I’m curious to learn what HashiCorp’s opinion is about CNAB (which somehow tries to solve the same problem). As Waypoint is complementary to CI tools, do you see this as also complementary to the GitOps tooling which is gaining traction in the Kubernetes world?


Hopefully someone at Hashicorp can double check my read: I feel this could be compared to Skaffold or Tilt, but with a heavier emphasis on a plugin architecture. Is that right?


Hey Jacques! Good to chat with you again :)

Yes, the plugin architecture is an essential part of Waypoint and we hope to see a large ecosystem of plugins. Rather than focusing exclusively on k8s, Waypoint is designed to center on the build, deploy, release workflow to any platform via plugins. There are a lot of plugins already built in [1] and the Plugin SDK [2] makes it pretty straightforward to build new ones.

[1] https://www.waypointproject.io/plugins [2] https://www.waypointproject.io/docs/extending-waypoint


Nice to see you too! And thanks for the answer.


> You can use exec to open up a shell in your app

Ok, maybe we don't exactly speak the same language, but shouldn't that be "a shell in your server running your app"?


I'm totally buying into the vision. I was discussing this with a co-worker a couple of months ago: there hasn't been much innovation w.r.t. deploying apps within the last ~4 years (for on-premise at least). Still the same tools with bad UX, scripts, CLIs. Hope Waypoint can improve the status quo; it sounds like the missing glue.


Hopefully the next revolution is rediscovering the wonders of dynamic linking.


Which tools are you talking about? I'm new to this and I'm just looking around for better options.

My use case is app releases and deployments across multiple on premise servers behind private vpn.

Deployments are a very manual process, due to the vpn, but we use git to pull the project, docker for managing all the different services, and ansible to manage config per host and run deployment playbooks.

What other/better options are there for a very small team of fullstack devs?


Wow, this looks great! I'm a big fan of Azure DevOps for CI/CD, but I love the idea of something as powerful as Waypoint, but is open source and I can run it anywhere.

I wonder if it might be a good tool for deploying infrastructure too - I've been missing something simpler than Ansible.

I'll definitely be dabbling with it.


Don’t get too attached to AzDO. Microsoft is trying their damnedest to kill it; they’re doubling down on GitHub.


Well, now I know what I'll be doing over the weekend. Especially using it in tandem with Nomad.


Waypoint feels similar to Skaffold mixed with some Eirini or Google’s dead kf project.


What I see here is that commands "logs" and "exec" are being commoditized and consumers (developers) are expecting these features out of their platform, whether it's kubernetes or anything else.


How many wrappers of abstractions of wrappers of abstractions do we need?


I think codified infrastructure has too much cognitive load/learning curve. It always gets messy, hard to maintain and understand, especially for new developers.

Building a multi-cloud UI tool is the future.


is this Otto 2.0 ;-) https://www.ottoproject.io


Is Otto going away?


Work on Otto ceased many years ago


Is this kind of like Capistrano on steroids?


Cap was a motivation to a certain extent for sure. Both myself and Evan Phoenix (the first two core members of the team) come from heavy Ruby backgrounds and we yearned for the days of cap.


A run-through of the docs really gives me Capistrano vibes.


In my experience with HashiCorp products, things don't work like the page says (Vault), while others are sometimes half-finished (Terraform will let you create, but not destroy, resources). Edit: To clarify, I ran into some cases where the creation code for a resource existed, but not the destroy code.

I might wait on this and read the Github issues for a few months before trying it


Terraform does let you destroy resources: https://www.terraform.io/docs/commands/destroy.html


I wish Terraform still allowed destroy-time provisioners to access variables, it seems that it went away due to some refactoring, and it isn't coming back.

https://github.com/hashicorp/terraform/issues/23679


Yes, I know. I meant for some resources (the code for creating was added, but not for destroying those resources). I had to write a custom provider to temporarily fix it


I have been using Terraform since 0.11 and it's always allowed to destroy resources, what do you mean?


This reminds me a bit of Chef Habitat and Automate.


Great that it makes life easier for developers. Will be a keyword for logging in the future.


This feels a bit like that XKCD strip about fourteen competing standards.

But I'd love to be proven wrong!


May be worth mentioning that Hashicorp has a lot more sway and influence in this arena than most who have attempted, are attempting, or may have thought about attempting this in the past. There may be something to be said for that. I know Google and AWS are pretty all in on Kubernetes, but this does offer an alternative & a layer of abstraction on top of that.

My bigger worry is that all these deployment abstractions might lead to a scenario where, when something breaks very badly, nobody really knows how to fix it other than to nuke the whole thing and try again. I feel like we're already there with that in a lot of ways with these devops systems. The weird kind of lock-in they have, to me, is that they aren't enabling smaller teams to easily adopt these things; it's aimed at either cloud providers or large institutions that can afford to pay a dedicated engineer to manage these systems.

I'd love to see something aimed at smaller teams that makes these orchestrations easy and multifunctional, and works across more platforms, like DO. HashiCorp et al. know what the best practices are and how to keep things secure; just give me something that enforces those things on any fleet of servers/containers/whathaveyous I want to use.

As a developer on a smaller team who spends part of my time working on these concerns, that would help immensely.


It's exactly that! https://xkcd.com/927/



