From the docs: "search for a Policy named 'AdministratorAccess', click next and confirm".
It would be much more convenient to request only the needed privileges while teaching the user what the tool is doing in the background.
That is, to function, these tools need actions like iam:CreateRole, iam:CreatePolicy, and the equivalent attachment actions (since your functions need a role and policy to execute under).
The AWS policy descriptors are not sufficiently robust to allow the creation of roles/policies that, themselves, may only have certain policies/rules attached to them. That is, you either have the ability to create a policy granting anything, or you do not have the ability to create a policy, period. Same with role creation.
Because of this, having the necessary permissions the framework needs to do what it says on the tin necessarily means it also can give itself -any other permission-. You can't give it the ability to create and assign policies, but prevent it from accessing, say, EC2 instances, because the role the framework operates under can create a policy for all EC2 actions, for all resources, and assign it to its own role.
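To make that concrete, here's a minimal sketch of the kind of IAM policy such a framework would need (the action names are real IAM actions; granting them on "Resource": "*" is effectively what these frameworks require):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "iam:CreateRole",
      "iam:CreatePolicy",
      "iam:AttachRolePolicy",
      "iam:PutRolePolicy"
    ],
    "Resource": "*"
  }]
}
```

Any principal holding these actions can author a new policy allowing, say, ec2:* on all resources and attach it to a role it controls, which is why this set is effectively AdministratorAccess no matter how narrowly you enumerate it.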
So while, sure, it would be nice to enumerate just the permissions it actually needs, and what's going on behind the scenes, just so the user knows exactly what is going on, that set of permissions is still effectively administrator access. There's no security benefit.
To back up my first point, the framework owner could easily instruct users to set up a temporary user/role that creates the initial function roles/policies as one-time setup. Then a user/role with only the service-level permissions (Lambda etc.) could manage all the actual resources.
The devs who build these frameworks want them to be super simple to use, otherwise they won't retain users!
And to your first point, it's not just about being super simple; fundamentally, you have to have that admin access to be able to deploy initially.
After that, sure, provided you make no changes to the policies, the framework could theoretically diff the policies with just the iam:Get* actions applied, ensure they're the same, and deploy without touching it. But you get into weird situations where you decide "oh, this function now needs access to Dynamo, so let me modify the policy for the role the lambda is executing under", and now you have to have admin credentials again. Any new resources in AWS you want access to require you to have the iam:PutRolePolicy action, which, given you already created it and attached it, is again equivalent to admin rights, and the ability to do anything.
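For that no-changes deploy path, a read-only-plus-PassRole credential is roughly what's needed. A sketch (the account ID and role name here are placeholders, not real values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:GetRole", "iam:GetRolePolicy", "iam:ListRolePolicies"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/my-function-role"
    }
  ]
}
```

The moment a function needs a new service, though, you need iam:PutRolePolicy again, and the escalation problem comes right back.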
dawson will soon switch to using an AWS CloudFormation Service Role, which allows us to require users to grant fine-grained permissions.
We will then provide a copy-pasteable Policy for users to set, and update the documentation accordingly.
Currently, since CloudFormation runs with the CLI user's AWS credentials, that user needs to be granted permission to do every action, including, for example, managing DynamoDB tables, S3 buckets, etc.
Also, imagine you're adding an S3 bucket as a custom resource to your app. dawson will create that S3 bucket using CloudFormation, and CloudFormation needs to be run by a user with the s3:CreateBucket permission. This applies to each resource managed by dawson/CloudFormation.
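For illustration, such a bucket is just a resource declaration in the CloudFormation template (the logical name and bucket name below are invented for the example); whoever runs the stack update must hold s3:CreateBucket:

```yaml
Resources:
  MyAppUploadsBucket:            # example logical name
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-app-uploads-example   # placeholder bucket name
```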
Again, thanks for your valuable input and feedback!
~ The maintainers,
NONE of your app's code will run under the "AdministratorAccess" policy. Each function runs under its own IAM Role, with limited permissions defined by the developer.
AdministratorAccess is currently required only for the CLI but, as said in my previous comment, we'll eventually move to using a Service Role and provide a more restrictive policy.
EDIT: Not global admin privileges, just full read/write to whatever AWS service I am basing the tutorial on.
There are a lot of good IAM policy setting tutorials out there that I usually point them to.
It is actually quicker to point to one of the built-in or pre-made AWS policies, which are usually fairly generic.
If you mean some of my AWS tutorials, a couple are:
Speaking from experience, the developers themselves probably don't know offhand. It's one of those things where they probably selected the AdministratorAccess permissions with the plan to later take inventory of the actual permissions used, but never ended up doing that second step. If they want people to think their framework is polished, that second step is necessary.
If the tutorial provides a policy ready to copy & paste with all the required privileges, you will be able to customise it better or just use it as it is.
They're working on a thing called SAM which is built on CloudFormation: https://github.com/awslabs/serverless-application-model
More analysis here, which basically notes that SAM is more declarative in nature, and will likely push 3rd party frameworks to be more cross-platform and/or application-centric in focus: https://techbeacon.com/aws-open-sources-serverless-model-wha...
After reInvent, I was pretty excited about SAM because it seemed like a perfect system to integrate with the Code Build/Pipeline tools, but after spending a week on it and not getting anywhere, I couldn't really recommend it to anyone.
Either way, it's a confusing term for the uninitiated and sort of presumes that only the web-dev world exists (because as you say, serverless already has a meaningful definition in the computing world)
I am new to the whole serverless concept, can someone explain how it works? Let's say I have to do some processing on the server (schedule some cron jobs, generate a PDF or send an email), how do I go about doing that? Do I use AWS Lambda for that?
But seriously what they mean is that you rent an allotted amount of resources on a server farm to run your application for a short period of time, and then the resources are freed again. Not all applications will fit this environment, such as any embedded databases, persistent connections, and so on.
But that's a mouthful, so "serverless" seems to fit the bill outside of what feels a bit like pedantry.
Given all the work this appears to save one, there's a curious explosion of "serverless" articles, frameworks, APIs, 'practices', crash-courses, workshops, talks .. ah well, par for the course with anything tech that gains a catchy moniker.
(Perhaps that's why FP doesn't take off quite as much, "Monads" should have been called "Promises" 30 years ago, and Peano Numbers "numberless maths"!)
"Run code without provisioning or managing servers [...] pay only for the time you consume"
Makes sense, but based on the name, I initially expected to see a framework that lets you spin up an app without writing any backend code, perhaps like using Firebase but more framework and less service. Then I thought there are even more ways to remove the server.
Maybe there should be multiple levels of "serverless". Type 0 serverless is an html file. Type 1 serverless is a static web page. Type 2 serverless is a dynamic app with a backend service; no writing backend code. Type 3 serverless is a dynamic app plus backend code with a backend management service; no provisioning or managing backend instances.
Essentially, the motivation is to abstract away from caring about the underlying operating system and virtual machine. That is, in the same way that a VM attempts to abstract away from the underlying hardware (so that you, the system operator, need not care about it), serverless attempts to abstract away from the underlying VM and operating system. You the developer shouldn't care about that; you should just care about your code.
So how is that achieved? Well, in AWS, Lambda is a key portion of it. Lambda functions are small units of execution, defined by arguments in, arguments out, and side effects (same as a function in any language). They have access to a set amount of CPU/memory resources, as well as a bit of hard disk scratch space (note: this space is -not- guaranteed to exist across function invocations; any state you wish to persist between invocations must be stored in another service, such as RDS, Dynamo, S3, etc). So you can certainly create a function that, when called, executes some bit of functionality (based on the input, or not, as appropriate).
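As a sketch of that "arguments in, arguments out" shape (Python, using the standard two-argument Lambda handler signature; the doubling logic is just an example):

```python
import json

def handler(event, context):
    """Arguments in, arguments out: double a number passed in the event.

    Anything written to /tmp (the scratch space mentioned above) is NOT
    guaranteed to survive to the next invocation; persistent state belongs
    in another service (RDS, Dynamo, S3, ...).
    """
    n = event.get("number", 0)
    return {"statusCode": 200, "body": json.dumps({"doubled": n * 2})}
```

Locally you can just call `handler({"number": 21}, None)` to exercise it; in AWS the exact same function is invoked by whatever trigger you wire up.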
What then matters is how it is called. Per another comment, you can trigger them via Cloudwatch to execute at a scheduled time (so perfect for cron jobs). You can also cause them to trigger based on certain events in AWS (such as S3 uploads; whenever a file lands in S3, spin up a lambda to process it in some way). But more than that, you can stick an HTTP(S) endpoint in front of them to create a RESTful API. You can do this via the AWS API Gateway. You map how the HTTP request gets passed into the lambda, and then your code can process it whenever the HTTP(S) endpoint is hit.
So with this you effectively have a REST API that will scale predictably. There is cost associated with it (and you have to evaluate whether the cost makes sense; it may not if you have already invested in sysops style resources, or if your load is so high that the Lambda + API Gateway cost is prohibitive compared with running equivalent EC2 instances), but that cost scales with use. And there are a few other caveats as well to learn about and be aware of. But if you then rely on other AWS services for things like data storage, messaging, etc (including email sending; have a look at AWS SES) you can create an entire backend without any servers that you, personally, have to maintain or care about.
'Serverless' frameworks tend to focus on the Lambda + API Gateway integration, to provide you with a developer environment that you can run locally, and then with a single command deploy to AWS, getting both your function and your API in place. They generally have no knowledge of other related infrastructure (which you will likely need, though you can likely configure it just once and be done with it; unlike a lambda, you're not changing it frequently).
It has a very active community, is extremely powerful (because for AWS it is built on AWS CloudFormation), and very robust. I wrote my own framework and eventually moved to Serverless.
Not to mention it supports other languages (although the Python documentation is lacking... Java has good documentation though) and other providers (Azure & Google).
For a quick comparison:
Dawson - 4 contributors on GitHub, and almost 650 commits
Serverless - Over 200, and almost 6000 commits
I don't want to be the type of person that discourages innovation. Sometimes you need to innovate because the current state of affairs is broken. But in this case Serverless Framework is not broken... it is very very good.
Footnote: I'm not a contributor but I am a long time user. I don't have any personal stake in the Serverless project.
Anyway, if someone's interested in diving in deeper, instead of just counting our stars, this deck might be a good resource: http://slides.com/lanzone31/dawson20170303
A few corrections to your comment:
>is extremely powerful (because for AWS it is built on AWS CloudFormation)
dawson is also built entirely upon CloudFormation
>Not to mention in supports other languages
we'll merge Python into master in a few weeks
I wasn't counting stars, I was counting contributions. You have some catching up to do.
For what it's worth, I tried to weigh my post because I think the project looks pretty good and I don't want to discount that achievement. It's just that Serverless is mature and used by a lot of people in production / live environments, so you have a lot of work to do to catch up.
One place where you can potentially not just meet Serverless but win is getting started documentation.
And I don't mean just hello world.
For example: how do I create and use a DynamoDB database in your framework? Serverless certainly supports that, but there's a learning curve and it trips up new users.
Another concern I have is the edge cases. Stuff Serverless isn't even good at. For example:
- CloudFormation has a 200-resource limit. How do you deal with that? Serverless hasn't solved it yet.
- How do Lambda functions get versioned? Serverless does this very well.
- How do I hook up a Lambda to an SNS topic? Serverless does this really well too, but the documentation is poor.
One thing Serverless does very well is letting you set up various Lambda triggers without having to know much of the internals.
Just as a caveat, though, they've had a lot of breaking changes in the last few releases. Each release, it does feel broken for a while.
Hasura is a PaaS (based on Docker+Kubernetes) + BaaS. There are some built-in backend components (like auth and a data service). You can also write any code and deploy (via docker images) to multiple clouds.
Disclaimer: I work at Hasura.
Depends on what you consider "unopinionated"...
It handles all the deployment lifecycle across all providers but won't abstract away all the implementation details. It is not possible to "lift & shift" from one provider to another without making some changes.
At the implementation level, the providers do differ. From small things (like the function interface) to bigger things (like event services).
I did a talk on this a while ago:
1.) Ability to provide a custom domain for http triggers
2.) Scheduling/cron to trigger functions. E.g. trigger function foo() every 22 minutes; trigger function bar() on April 10th at 09:24:23.
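On AWS, both of those map onto CloudWatch Events schedule expressions (a sketch; note these expressions only have minute granularity, so the :23 seconds in the second example can't be expressed):

```
rate(22 minutes)       # trigger foo() every 22 minutes
cron(24 9 10 4 ? *)    # trigger bar() at 09:24 UTC every April 10th
```

The cron fields are Minutes Hours Day-of-month Month Day-of-week Year, which differs slightly from Unix cron.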
I think serverless is a sensationalistic and ultimately unhelpful name. I don't blame new projects for adopting the term since it's already in the public awareness, but I can't say I'm a fan.
(no videos though)
# run your app locally
$ dawson dev
> You should be able to start using dawson without creating any configuration file
The configuration obviously exists. It is scattered around as
* environment variables for AWS secrets
* the api property of functions for defining path, params, etc 
* a package.json for everything else, maybe not necessary for a minimal deployment without db 
- allows running of lambda functions locally, which significantly speeds up development
- provides you with a full production-ready environment (CloudFront + S3 + API Gateway + Lambda + ...) out of the box
- just a few cents of S3 Storage (if you're out of the free tier) to host versioned Lambda deployment packages - which will expire automatically anyway.
- in production, by default, dawson creates an AWS WAF (Web Application Firewall) WebACL which costs ~$5/mo. You can opt out of this behavior if you don't need one.
Website is cool though :)