
You know what I've realized is really important? We need more AWS tutorials. There are numerous new programmers who want to learn AWS but can't finish building anything because they get buried in documentation.

I find there are a lot of high-level, abstracted tutorials, but for the new services there aren't many detailed ones.

For instance, implementing a Cognito -> API Gateway -> Lambda -> DynamoDB flow is really hard for a newbie.
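
To be clear, the Lambda piece itself is only a few lines; it's the wiring around it (Cognito authorizer, gateway integration, IAM roles) that buries people. A rough sketch of the handler, assuming Python/boto3, a Lambda proxy integration, and a Cognito user pool authorizer in front (the table and field names here are just placeholders):

    import json
    import boto3

    # Placeholder table name; the table and its IAM permissions must already exist.
    table = boto3.resource("dynamodb").Table("pets")

    def handler(event, context):
        # With a Cognito user pool authorizer, API Gateway passes the caller's
        # identity claims through in the request context.
        user_id = event["requestContext"]["authorizer"]["claims"]["sub"]
        item = json.loads(event["body"])
        item["owner"] = user_id
        table.put_item(Item=item)
        return {"statusCode": 201, "body": json.dumps(item)}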




I agree that a lot more AWS tutorials and cookbooks would be helpful, but one reason I find so many developers having trouble wrapping their heads around AWS concepts is that most developers lack a basic understanding of what they're actually doing with AWS. Programmers that just write code don't really need to know ANYTHING about AWS, because someone else should be setting it up for them. I see so many people doing the most ridiculous things, or leaving things completely wide open, because they have no concept of network topology, firewalls, segments, VPCs, security groups, deployment templates, etc. I would not trust the vast majority of developers I've ever met to properly set up a simple architecture in AWS, because the things you NEED to know aren't covered by what they've been focusing on for most of their careers; that knowledge has traditionally lived on the operations/networking/sysadmin side of things.

I think there's a big need for a crash course for devs that starts with all the crap they previously ignored or had someone else do for them, and I say this as someone who has always written code first and done sysadmin second.


> There are numerous new programmers who want to learn AWS but can't finish building anything because they get buried in documentation.

I partially agree. In the end, the documentation is what you are going to have to read sooner or later, or training equivalent to that documentation. Good tutorials are great for getting started, but they don't make you a professional.

I guess that in 10 years anyone will be able to create websites for billions of internet-connected users. But for now, as easy as it is, it is still complicated enough to require an expert, in the same way that 20 years ago you needed an expert to make a 3D game, whereas nowadays there are plenty of technologies that let you do it with a limited amount of programming knowledge.

I have seen horrible things, usually security related, because untrained people think they can achieve anything with just standard configurations and quick tutorials. And it looks like they were able to do it, until something really bad happens. Even people with long experience can make mistakes, because it is a complex thing.


AKA what happens when you take a bunch of programmers with a god complex and set them loose operating production infrastructure on the Internet?


The real issue is that AWS isn't designed as a tool for product developers. Product developers get asked to use it but do not usually have a clue about good systems engineering. AWS was designed for ops and systems engineers first and foremost.


> Product developers get asked to use it but do not usually have a clue about good systems engineering. AWS was designed for ops and systems engineers first and foremost.

Yeah that line is blurring, too.


That may have been true 5 years ago, but a lot of the newer services only require developers to read the documentation, not to have a serious background in setting up the service they are using. Look at things like ECS and Lambda for compute, SQS for messaging, etc.


I found this series very helpful: https://medium.com/aws-activate-startup-blog


There's a tutorial on the AWS Labs GitHub that goes through this exact scenario: https://github.com/awslabs/api-gateway-secure-pet-store.


I definitely agree that better tutorials and/or a simpler interface would make AWS more accessible and user-friendly.

A YC S15 startup, Convox (http://convox.com/), aims to "make AWS as easy as using Heroku." It looks really promising.


Convox member here. Thanks for the shout out.

This is definitely a goal of Convox: to remove as much AWS complexity as possible.

Our approach matches this guide to a tee. We are using CloudFormation to set up a private app cluster, as well as to create and update (deploy) apps. We are also using ASGs.

The instance utilization point is spot on too. The first thing Convox does to make this easy is provide a single command to resize your cluster safely (no app downtime).

Coming next is monitoring of ECS and CloudWatch, with Slack notifications if we detect over- or under-utilization.

I strongly believe that these AWS best practices can and should be available to everyone, whether you're starting from scratch or migrating apps off a platform or EC2 Classic onto "modern" AWS.


I hadn't heard of Convox before, but it sounds interesting.

I'd like to use AWS more, but each time I tried to get into it I felt overwhelmed. I currently use PagodaBox a lot, which is great (most of the time) because it handles a lot of the complexity for me, but it can often be expensive. How does Convox compare to PagodaBox?


I've never used PagodaBox but it looks like a nice PaaS.

Convox has the same goal as a PaaS: to give you and your team an easy way to focus on your code and never worry about your infrastructure.

One big difference with Convox is that we accomplish this with single-tenant AWS resources. You and your team's deployment target is an isolated VPC, ECS (EC2 Container Service), and ELB (load balancers).

If you're asking for a cost comparison, we're building Convox to be extremely cost competitive by unlocking AWS resource costs for everyone.

It's easiest to compare the cost of memory across platforms, though it's not always apples to apples...

The base Convox recommendation is 3 t2.smalls, which is 6 GB of memory and costs about $100/month. If your app can be sliced up into 512 MB processes, you can easily run 10 processes, which could be 2 to 5 medium-traffic PHP apps on the cluster.

I'm finding the PagodaBox pricing calculator a bit confusing, but 6 512 MB processes, so 3 GB of memory, come out to $189.
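
Back-of-the-envelope per GB-month, using the figures above:

    # Rough per-GB-month comparison from the numbers in this thread
    convox_per_gb    = 100 / 6.0   # ~$17 (3 t2.smalls = 6 GB for ~$100/month)
    pagodabox_per_gb = 189 / 3.0   # $63  (6 x 512 MB processes = 3 GB for $189/month)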


So eventually Convox will have to make money, but I'm not sure I see the path... given that I have free access to the software and the infra is provided by AWS.

Do you plan to eventually charge a monthly fee for using the command-line tool?


Thanks for your question. A much more thorough pricing page is in the works.

The most straightforward model, and where we are already making some money, is running Convox as a managed service.

In this setup you and your team get Convox API keys. Convox installs, runs and updates everything for you in our accounts. You get a monthly bill that's your AWS resource costs plus a percentage to Convox for management.

We will be tweaking this model to sell packages so bills are really easy to understand.

Some other experiments we're doing...

We sell support packages and professional services for app setup, migration and custom feature development.

We have a per-seat model for productivity features. Private GitHub repos and Slack integrations are $19 / user / month. There are more closed SaaS tools like this coming.

Infra is trending to commodity prices industry wide.

We'll be selling SLAs, support, productivity tools on top of that infra.

You'll get a cutting edge private platform without hiring and managing your own devops team to build and maintain it.

Open source users will help grow the user base and make the platform better without us running a freemium platform.


Any plans to support the other public cloud providers?


Short term, no.

The plan is to get the Convox API locked in while mastering advanced AWS services like VPC, ECS, ELB, Kinesis and Lambda behind the scenes.

Long term, yes, in tandem with the other cloud providers leveling up. For example, Google Cloud Logging (for container logs on GCE) is still in beta.


As an alternative, Cloud Foundry already drives AWS, as well as Azure, vSphere and OpenStack. Those coming from Heroku will find most of what they want, including buildpacks. Those who want to skip buildpacks can use docker images instead.

Disclaimer: I work for Pivotal, who donate the majority of the engineering effort to CF.


Convox member here.

CloudFoundry is a really solid platform, but there is a very important distinction between CloudFoundry and Convox.

Convox is a very thin layer on top of "raw" AWS. It gives you a PaaS abstraction, but behind the scenes it is well-configured VPC, ECS, Kinesis, Lambda, KMS, etc.

For those of us with no need to run on multiple clouds, using pure AWS is simpler, cheaper and more reliable than a middleware like CloudFoundry or Deis.

If you want to run a private platform without bringing in operational dependencies like etcd (Deis) or Lattice (CloudFoundry), give Convox a look.


how does convox scale when moving to non-trivial aws scenarios? if i have three autoscale groups running seven different apps with associated kinesis streams, s3 buckets, elasticache, rds and elasticsearch instances, how cleanly does convox handle this? what about monitoring and reporting?


We are working to normalize the AWS scenario for all apps.

Every Convox cluster is a cluster of ECS instances managed by an autoscale group.

Every Convox app gets its own Kinesis stream for logs, ELB for load balancing, and S3 buckets for settings, build artifacts and encrypted environment. And the app processes are run via ECS.

So I'm confident we could handle the 7 apps in a single cluster, scale the cluster instance size and count, scale any individual app process type, and handle any individual app load balancing or log throughput.

You can provision some services like RDS with our tooling, which makes it really easy to link them to apps. You can also bring your own services, like elasticsearch or a pre-existing RDS instance, and set them in an app's environment to use them.

There are a couple monitoring tools built in.

We automatically monitor AWS events like ECS capacity problems and send Slack notifications.

You can also use our tooling to forward all your logs to Papertrail and configure your searching and alerting there.

More CloudWatch Logs and Metrics work is coming in the near future.


i appreciate you taking the time to reply, thanks.

some additional questions though:

1. are security groups opaque within convox or are they exposed to developers?

2. when you say monitoring tools are built in and you have tools for logging, does this lock me in to the convox log pipeline and monitoring? what if i want to use sensu on my instances? do i have to add sensu to every container? if you run, for instance, five containers per vm do i pay the overhead of five separate sensu instances? same question for something like logstash?

3. you mention vpc. aws has proven stingy with vpc service limit requests in the past. i have trouble getting them to grant more than low double digits per region/account. can i run multiple convox racks per vpc or is one vpc per rack a hard requirement?


1. Devs don't have to worry about security groups with `convox install && convox deploy && convox ssh`. But security groups are created with CloudFormation and you can use your AWS keys to introspect and change them.

2. Currently every app gets a Kinesis stream and we tail all Docker logs and put them into Kinesis. Then `convox logs` can stream logs from Kinesis, and `convox services add papertrail` adds a Lambda / Kinesis event source mapping to emit the stream as syslog to Papertrail (a rough sketch of that forwarding pattern is at the end of this comment).

I'm pretty happy with this setup and think it represents a good default infrastructure that is still extensible.

Would Kinesis -> Lambda -> Sensu make sense too? It's a pretty new pattern but this seems a lot saner to me than per-container log agents, or even bothering with custom logging drivers.

That said, one user has been using logstash by bringing a custom AMI with his logstash agent and creds baked in.

3. It's one VPC per rack, but I could see modifying that. We've already started to parameterize some VPC settings like the CIDR block to help integrate with your existing VPC usage.

https://github.com/convox/rack/blob/master/api/dist/kernel.j...
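
Regarding 2, the forwarding pattern is roughly the following (a sketch, not our actual code; the Papertrail host/port are placeholders and the syslog framing is stripped down):

    import base64
    import os
    import socket
    import ssl

    # Placeholders: Papertrail assigns you a host and port per log destination.
    HOST = os.environ.get("PAPERTRAIL_HOST", "logsN.papertrailapp.com")
    PORT = int(os.environ.get("PAPERTRAIL_PORT", "12345"))

    def handler(event, context):
        raw = socket.create_connection((HOST, PORT))
        conn = ssl.create_default_context().wrap_socket(raw, server_hostname=HOST)
        try:
            for record in event["Records"]:
                # Kinesis event source mappings deliver records base64-encoded.
                line = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
                # Minimal framing; a real forwarder would add syslog priority,
                # timestamp and hostname fields.
                conn.sendall((line.rstrip("\n") + "\n").encode("utf-8"))
        finally:
            conn.close()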


Obviously I disagree with the specifics of the relative merits. As a nitpick, Lattice was a project to extract core components out of Cloud Foundry into a self-contained unit intended for experimentation, not a standalone component folded into CF.

The feedback was that Lattice is not what developers wanted, so it's been wound up in favour of MicroPCF[1], which is a single VM image that runs an entire, actual Cloud Foundry installation.

When developers decide they want to scale up to any size, they simply retarget a regular AWS/vSphere/OpenStack/Azure CF API server and push again.

I'm sure Convox has a single-VM version I can tinker with on my laptop.

[1] https://github.com/pivotal-cf/micropcf


There are also quite a few Ansible core modules for EC2 if you're looking for a uniform way to manage your resources outside of the AWS GUI, although to be honest I'm not sure if that's "easier" so much as more flexible. Still solves the problem with mistake #1.


Would recommend Terraform here - https://terraform.io/


Try Kubernetes, maybe?


That's for containers. It isn't used to, say, set up SQS.


As much as I agree with your point about it being hard, I'd also like to point out that there isn't anything even remotely similar to documentation on how a pure dev could do this outside of AWS, given the lack of managed Cognito, API Gateway + Lambda, and DynamoDB elsewhere.

By making those notions and technologies easy and cheap to access, AWS suddenly gave devs the idea that they can roll out complex infrastructure on their own, much like copy-pasting a piece of code. Well, it is still a bit harder than that, and if you are that kind of dev (which I'd applaud), you'd better dedicate some time to learning those technologies.


This is probably one of my biggest gripes. Our CTO/PMs are like "you weren't able to containerize and get our app autoscaling in a few days? I don't get it? How long will it take? While you're at it, can you implement kinesis/lambda/microservices for the parts of our app that are taking more than 3000ms?"

Me: "I just said I was interested in DevOps and would like to give it a shot, I don't know how long it'll take, I'm working on it."


Great point. Also, the pipeline you mention removes many risks related to the 'common' mistakes mentioned in the article (scaling, monitoring, provisioning; it's all done for you, or almost).

We're using gateway -> lambda -> dynamodb, and there are tons of gotchas and small things that AWS needs to iron out, especially with gateway -> lambda.


Agreed. I was looking into that for a serverless model inspired by [1], but there are still loads of things they can't or won't do.

[1] https://github.com/serverless/serverless


Working on it :) We are hacking away full time on the Serverless Framework and have some big features coming in the next few days.


I have some good experience with qwiklab. Their self-paced labs start with the UI, then move to the AWS CLI. They also give a nice lesson on CF.


CloudAcademy provides several good ones.


acloud.guru has some pretty decent video tutorials. It was easier for me to start with those tutorials before diving into the AWS documentation.



