The learning curve for AWS is steep, but tools with a great developer experience, like Heroku and Vercel, are limited to small projects. Teams end up choosing AWS or other big cloud providers, partly because of free credits, but mostly because they know they'll eventually need something from them that a PaaS cannot provide.
And then there is the huge cloud-native ecosystem: infrastructure-as-code, Kubernetes, and all that. If you want to do DevOps right, PaaS doesn't seem to be a good option.
So it's either move fast or build a future-proof stack. We thought that was a false choice, and built Digger.dev.
Digger.dev automatically generates infrastructure for your code in your AWS account, as Terraform. So you can build on AWS without having to deal with its complexity.
You can launch in minutes – no need to build from scratch, or even think of infrastructure at all!
- Easy to use Web UI + powerful CLI
- Deploy webapps, serverless functions and databases: just connect GitHub repositories
- Multiple environments: replicate your entire stack in a few clicks. Dev / staging / production; short-lived for testing; per-customer
- Zero-configuration CI with GitOps: pick a branch for each environment and your services will be deployed on every git push
- Logs, environment variables, secrets, domains: never touch AWS again!
I personally have a lot of interest in this space and used to work at AWS. Feel free to contact me at the email in my profile if I can ever be helpful.
It seems like the product is either:
(a) A tool for indie or small dev teams to build infrastructure before they have learned the AWS stack.
(b) A tool for small DevOps teams to simplify managing and developing their Terraform deployments.
The marketing of the site seems to be selling scenario (a), but the paid plans make it seem like you'll only be making money in scenario (b).
If I imagine myself being in scenario (a), I can see becoming pretty disillusioned the second my first issue or wall popped up, since I am not going to be supported by AWS or the product. It seems someone in this scenario is way better served by choosing a managed hosting solution of some kind.
As someone personally in scenario (b), the idea of "do more without understanding it" is a very off-putting sales pitch. Terraform and AWS have far too many gotchas to fully abstract away all but the simplest implementations. Sweeping those under the rug with abstractions is too much risk if the team doesn't understand what's happening. If the pitch were something more like "speed up your team's Terraform development and management experience", it would be a lot more interesting.
We may well be wrong. But we believe that "learned the AWS stack" is not something most software engineers should do, ever. If you look at what DevOps originally was, it was about culture, not job specialty. But it became a specialty anyway, because it's so complex. Currently it's the only job in the spectrum that is "second-order", in the sense that for developers to be productive, the DevOps folks first need to do some work. So it ends up as a permanent bottleneck, unless companies create in-house PaaS-like tools for developers to self-serve. And big, well-funded companies end up doing it over and over again. We did it at Palantir and Fitbit; Uber did it, Shopify did it, and dozens more.
So you can think of Digger as such a "PaaS builder", in a sense. AWS is still available for all kinds of troubleshooting - Digger doesn't make it any harder. But in 90% of scenarios people won't need it. This allows reducing the DevOps-to-software-engineer ratio from the current 1:10 to something like 1:100.
But the onboarding flow is brutal IMO. The splash page doesn't help me understand when I should reach for Digger - as a customer with an AWS account, I've obviously had to learn enough to be functional in AWS. I would like it if you described a common use case to help me understand when I should be considering Digger.
Once I actually try it out, it's very sterile and I feel lost in Apps and Environments and the UI is mentioning commits for some reason. The docs focus a lot on what Digger is, but I'm really missing an onboarding guide that orients me with a step-by-step guide of how to set up my first environment.
You still need lots of DevOps knowledge to use Terraformer. None is needed with Digger - you can ignore this part of your stack entirely until you need to customize something specific. And then you actually can customize anything.
I find this statement from the documentation unfair, given that the "target" concept this introduces seems to be mainly based on Terraform modules to _reuse code and expose an interface_. Terraform has its problems, but this doesn't seem to be right.
At best, this seems to be a curated set of Terraform modules and a managed CD pipeline execution SaaS. I get that it is supposed to simplify things, but it is lacking documentation for what it will do to an AWS account (you'll still pay for it, after all) and even provides documentation on how to drop "raw" Terraform into it. Why not go with Terraform directly then instead of sending your AWS credentials to a SaaS?
A raw Terraform module is quite hard to reuse out of context for someone who isn't familiar with DevOps / sysadmin concepts. What's a VPC? A security group? An ACL? Each service exposes a bunch of config options that won't make sense to people facing it for the first time. Terraform mimics the AWS interface, and it's more like a pilot's cockpit than a car interior. Every tool imaginable is there, but you've got to know what you're doing to use it.
Targets, on the other hand, expose high-level concepts only. How many services? Is it a container or a function? Enable or disable the database? Got it, start building. More like a car interior or a phone UI, which you can figure out by doing.
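For concreteness, here is roughly what the "cockpit" side looks like: a sketch reusing the popular community VPC module (inputs from terraform-aws-modules/vpc/aws), where the caller still has to make every networking decision themselves:

```hcl
# Reusing a typical community VPC module: the caller has to know what
# CIDRs, AZs and NAT gateways are before anything can be built.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-app"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
}
```

A target, by contrast, would only ask the "car interior" questions: how many services, container or function, database on or off.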
The current implementation of Targets is very simplistic. It does the job, but not much more. In Targets v2 we are planning to introduce proper dynamic generation with a "stack state" API that will allow creating truly encapsulated, smart components that adapt to any number of environments.
Maybe you have great ideas for this target concept, but the claims in your documentation that this is new and the inference that Terraform isn't capable of this don't hold up:
> it describes a particular architecture that can produce many possible variations for a wide variety of stacks, depending on configuration.
You can do exactly that, with Terraform modules, today, no digger needed.
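A minimal sketch of that in plain Terraform (variable names are made up for illustration): one module describing a particular architecture, with configuration picking the variation:

```hcl
# One module, many variations driven by configuration. Conditional
# resources via count -- standard Terraform, no extra tooling.
variable "environment"     { type = string }
variable "enable_database" { type = bool }

resource "aws_db_instance" "db" {
  # The database only exists in variations that enable it.
  count = var.enable_database ? 1 : 0

  identifier        = "app-${var.environment}"
  engine            = "postgres"
  allocated_storage = 20
  username          = "app"

  # Size the instance differently per environment.
  instance_class = var.environment == "production" ? "db.m5.large" : "db.t3.micro"

  # Let RDS manage the master password instead of passing one in.
  manage_master_user_password = true
}
```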
I am speaking from my own experience as a former front-end dev, and making a bold assumption that there are many others like me. Whenever I'm using Terraform, even ready-made modules, I find myself thinking of things that I neither want nor need to be thinking about. Most of my brainspace is occupied by frontend intricacies; however, I still want control of the entire stack. The further a tool is from my primary competence, the less capacity I have for its details. I want my webapps and containers to work somewhere, that's all. But when I'm facing a problem - a specific problem - I also want it to be solvable. Like autoscaling or load balancing. And I want it to be solvable in a way that doesn't go against industry best practices. Because today I may have a team of 3, but in a couple of years that may be a team of 300. I don't want to have to rebuild from scratch halfway through. But I also don't want to waste time building something future-proof on day 1.
I think the documentation is making several technical claims (in the quotes I've provided) that are factually false. You're agreeing that it CAN be done with Terraform. The documentation isn't discussing best practice; it claims that reuse isn't possible.
Granted, I'm not your target audience, but I would recommend that you a) rephrase those claims so they're closer to the truth and b) start documenting the architecture of your targets and the quality of your Terraform code (does it pass tfsec checks, for example?).
If someone asked me to review this product for their startup, I would primarily see Terraform modules with unknown quality or architecture.
What I mean by "interface" is "My stack needs infrastructure for 3 containers and 2 webapps and container A needs a Postgres DB and container B needs a queue"
In today's IaC, including Pulumi, you actually need to specify _which particular_ way of running containers, with all the configuration details. Same for the database. That's implementation. Switching languages doesn't make it any simpler.
The exact same stack can be run on one EC2 box via docker-compose, and on a Kubernetes cluster with managed databases. Same interface, different implementations. What Digger accomplishes is allowing you to swap implementations at any time, as long as the interface stays the same.
Switching languages does not make this simpler. Switching the _implementation_ of an interface does. For example, I could implement a "queue" interface three times - once for Confluent Cloud's Kafka, once for Kinesis and once for EC2 instances that run OSS Kafka. The interface remains stable, the implementation changes. This can also be done across clouds.
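In Terraform terms, that interface/implementation split can be sketched like this (the module paths here are hypothetical):

```hcl
# Same "queue" interface, swappable implementations: change the source,
# keep the inputs and outputs stable.
module "queue" {
  # source = "./modules/queue-kinesis"     # managed implementation
  # source = "./modules/queue-confluent"   # Confluent Cloud Kafka
  source = "./modules/queue-kafka-ec2"     # OSS Kafka on EC2

  name            = "orders"
  retention_hours = 24
}

# Every implementation is expected to expose the same outputs.
output "queue_endpoint" {
  value = module.queue.endpoint
}
```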
I think it's worth you doing some more research into what Pulumi opens up before using it as an example like this in marketing material.
I have a few small feedback items:
- The AWS Account ID is not very well blanked out in your documentation. I can easily see what the actual digits are (under the red scratched out parts).
- I realise English is not your first language, but there are many typos and mistakes in the documentation. Once you get a bit further on, it'll be worth sending it to someone to do an edit pass to clean it up a little :)
- Some of the AWS terms are incorrectly written in documentation. For example 'SecureSecret' instead of 'SecureString'.
- On the subject of secrets, would a better option not be to store a Secret using AWS Secrets Manager with the value you need to acquire? Also, I know you mention that the secret value is used and never stored, but how do we know that? If you have access to the secret via ARN and IAM policy, then in theory if your SaaS was compromised, the secret is still retrievable from the customer's account. How about using something like Vault to store secrets?
You could do that, but you could also throw money in the bin. Secrets Manager is basically a paid-for wrapper around SSM Parameter Store. Last I checked, the only nice thing it had was automatic key rotation. The price for that? 50 cents per secret per month. That adds up pretty quickly.
If Parameter Store goes down or suffers a huge slowdown, well, that's just your problem.
If Secrets Manager goes down or suffers a huge slowdown, then you’ve got some recourse to support — and getting your money back.
Parameter Store is also a one-by-one thing, one parameter for each and every secret you want to store, whereas Secrets Manager lets you store a whole bunch of components inside one "secret".
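For reference, here's what both options look like side by side in Terraform (a sketch; `var.db_password` stands in for however you actually source the value):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# Parameter Store: one SecureString per value, no per-secret fee for
# standard parameters.
resource "aws_ssm_parameter" "db_password" {
  name  = "/my-app/prod/db_password"
  type  = "SecureString"
  value = var.db_password
}

# Secrets Manager: several components bundled into one "secret" as JSON,
# at a flat monthly fee per secret, with rotation support and an SLA.
resource "aws_secretsmanager_secret" "db" {
  name = "my-app/prod/db"
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = "app"
    password = var.db_password
  })
}
```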
It’s your choice either way, but for me personally, I’d rather use a service that has an SLA.
On the Enterprise plan we are more flexible, and you get things like PCI-DSS, SOC2 etc. We could also act as an "automated DevOps consultancy" with a legal arrangement similar to that of an agency (with liabilities on) but without actually providing services beyond enterprise-level support.
Using AWS managed services is a huge win for maintainability. A lot of host-your-own PaaS tools are spinning up EC2 instances that you're then responsible for maintaining/patching/securing.
how does it compare to convox? (never used it, but I think it’s similar?)
What we do in Digger is automating this glue. Kubernetes or not actually doesn't matter. Our default orchestration engine is ECS Fargate just because it's so worry-free. But you can totally switch to K8S. The value of Digger is automating DevOps - not automating K8S cluster management.
Conversely, if you do take advantage of any special features of AWS, then you can’t use Terraform.
So, why should I use your tool on AWS versus any other provider?
Quick question - would you do compliance infrastructure?
E.g PCI DSS, iso 27001, HIPAA, etc ?
There is also AWS config to check the configuration. But I would pay for a tool that creates it in the first place in an AWS-Config compatible manner.
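As a sketch of the "check" side: an AWS Config managed rule in Terraform (this assumes a Config recorder is already running in the account):

```hcl
# Flag S3 buckets that allow public read access, using one of the
# AWS-managed Config rules.
resource "aws_config_config_rule" "s3_public_read" {
  name = "s3-bucket-public-read-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }
}
```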
I would have loved a startup friendly plan here.
Does Digger infer all the terraform necessary from a Dockerfile?
When you connect your repositories Digger asks you to confirm a few basic options like your container port or build command for your webapp. By connecting a repository you define a Service. You can also define Resources like databases. This way you describe the "logical structure of your stack". No infrastructure is created at this point yet.
Then you create environments - and it is at this point Terraform is generated, combining the "logical structure" of the stack with this particular environment's configuration.
More here: https://learn.digger.dev/overview/how-it-works.html
That said, if your entire project, or most of it, is a WordPress or Drupal site, you could be better off using specialised hosting like Kinsta or WP Engine. They provide lots of nice extras specifically for WP, whereas Digger is more for building SaaS applications with a bunch of backend services and webapps.
"Don't be snarky."
Particularly please don't do this in Show HN threads. We don't want a putdown culture on HN, especially when people are sharing their work.
We are a compiler of higher-level concepts into Terraform, with an option to just go and write Terraform if you need something very custom.
My problem with this product is selling you the idea that you can evade the responsibility of properly managing your AWS account.
Doing it primarily because multiple layers suck when used for their intended purpose may very well be useful, but it's not a sign of health.
This is like having three layers of assembly that are all horrible to write and are built one on top of the other, all operating at roughly the same "level" of the hardware/software stack, and then coming along and writing C for the third one, which will in turn generate the second, and that, the first, which will, finally, actually generate something a processor understands.
Whatever demand there is for this isn't this product or company's fault, but it's definitely a sign that something's not right. Config primitives being driven by orchestration scripts being driven by orchestration scripts being driven by orchestration scripts.
The product may be entirely fine, but the situation is ridiculous.