So basically, instead of embedding the AWS access keys in HTML, you use AWS.config.credentials = new AWS.WebIdentityCredentials(...) with the OAuth access tokens you get from Google or Facebook.
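A minimal browser-side sketch of that pattern, assuming the AWS SDK for JavaScript is loaded and a Google sign-in has already produced an ID token (the role ARN and role name below are hypothetical placeholders):

```javascript
// Build the parameters for AWS.WebIdentityCredentials from an OAuth token.
// The role ARN is a hypothetical placeholder; you'd create your own IAM role
// that trusts the identity provider.
function webIdentityParams(idToken) {
  return {
    RoleArn: 'arn:aws:iam::123456789012:role/MyWebAppRole', // hypothetical
    WebIdentityToken: idToken
  };
}

// In the browser, after Google sign-in:
//   AWS.config.region = 'us-east-1';
//   AWS.config.credentials = new AWS.WebIdentityCredentials(
//     webIdentityParams(googleUser.getAuthResponse().id_token));
// Subsequent S3/DynamoDB calls then use temporary STS credentials,
// with no long-lived access keys anywhere in the HTML.
```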
For RDS, I suspect DocSavage will soon learn that browsers don't speak RDBMS wire protocols; they'll need at least a CRUD wrapper. The canonical AWS "serverless" solution would be API Gateway + Lambda.
You can always find somewhere else to host your stuff, but I’d say I’m pretty happy with a deployment workflow that’s basically
bundle exec jekyll build
For now, I favor the traditional REST API layer because REST APIs are well understood. They also may require less effort to access from IoT devices and to expose publicly for other developers to hack on.
Overall, I'm interested in whatever works best. If you have thoughts on how to incorporate this approach into JAWS for optimal effect, I'm definitely paying attention.
As far as JAWS goes, the next step is to be able to define your API in JSON via Swagger, and then import it into AWS API Gateway to instantly create/update your API. This should dramatically reduce development time and make it much easier for everyone building their JAWS REST API :)
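As a sketch of what that import might look like, here's a minimal Swagger 2.0 document wired to a Lambda function via API Gateway's x-amazon-apigateway-integration extension (the account ID and function name are hypothetical placeholders):

```json
{
  "swagger": "2.0",
  "info": { "title": "my-jaws-api", "version": "1.0.0" },
  "paths": {
    "/users": {
      "get": {
        "x-amazon-apigateway-integration": {
          "type": "aws",
          "httpMethod": "POST",
          "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:GetUsers/invocations"
        }
      }
    }
  }
}
```

Importing this (or re-importing an updated version) would create or update the whole API in one shot.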
I've checked with AWS people and asked on Stack Overflow, but it seems it's not possible yet. They suggested using Lambda to delete unwanted content after it's uploaded, which surely works, but is a waste of bandwidth and Lambda time. Did you run into the same problems? Any other solution?
Besides, it looks like a great approach, but it surely ties you to AWS far more than other approaches do.
> Bucket Limit Increase: You can now increase your Amazon S3 bucket limit per AWS account. All AWS accounts have a default bucket limit of 100 buckets, and starting today you can now request additional buckets by visiting AWS Service Limits.
Second, there is already a product out there called JAWS: http://www.freedomscientific.com/Products/Blindness/JAWS
The last time I checked (and my information is very dated, 5+ years), this was a fairly popular piece of software in the blind community. I also do not know the status of the trademark around the term. If you were going to change names, it's early days for you, so now would be a good time to do it.
I doubt that FS will come after this project for trademark infringement, since it's not a threat to them, not even in the same field.
(pronounced "J-Sauce" in case you were wondering)
I'm not affiliated with Amazon, nor do I know anyone over there. I'm a freelancer in Oakland, CA, and my goal is merely to offer some open source code to help people take advantage of emerging tech so that they can build ridiculously cheap and scalable web apps.
When you get a chance, think of this post and come back and read it again!
AWS introduced API Gateway because they want back-ends built on AWS Lambda. They will raise the limits, but this is all bleeding-edge tech and they are likely moving forward with caution.
We'd like the "Total size of all the deployment packages that can be uploaded per account" increased. Since you have to "require in" (as you put it) all your dependencies with each published function, it's plausible to hit that limit pretty quickly.
The average node module is 1.6MB. If your functions have 3 dependencies, you're near 5MB. That limits you to ~300 functions. That initially seems like a lot until you realize you may have to start breaking up complex logic into multiple functions to stay within time limits. You also might need to maintain multiple versions for backwards compatibility (think v1 endpoints). And if you use Java 8, you're just screwed. That size limit can be hit quickly.
 - https://web.archive.org/web/20150315092544/http://docs.aws.a...
 - https://www.npmjs.com/package/npmjs-stats
I could also see Lambda adding a paid-for option to keep a function "warmed up" at all times.
It's going to get crazy up in here...
The "no server" movement definitely seems to be gathering steam, with Firebase and Parse on the data side and Lambda (augmented with JAWS and others) on the computation side. I'm excited to see what this area will bring, in particular due to the combination of scalable hosting and containerisation.
(Plug - we're working on StackHut (http://www.stackhut.com) in the "no server" space that provides a language-agnostic Lambda-like platform (currently JS/ES6 and Python with Ruby soon) that's powered by Docker and is accessible via JSON-RPC over HTTP. Would love to hear your thoughts on this and the "no server" space in general.)
I'm presuming you're on Sky in the UK? They are currently blocking HTTPS on various CloudFlare endpoints (such as ours!) We're attempting to get this sorted ASAP. Let me know if this isn't the case!
Cheers -- Leo
Is there some blog/resource that provides an estimate of the pricing when we go this route, instead of a separate server for the APIs?
I would like to know the rough pricing at an order-of-magnitude level.
Would it be twice as expensive as a Django app for an application with light usage, etc.?
That plan hasn't changed! Just need to find more time for open-source projects :P
We'll get there!
AWS Pop-Up Loft
AWS has an office/loft in downtown San Francisco where you can meet with an AWS engineer for free to discuss your project and your use of AWS services. It's an incredibly valuable resource, but I don't think many people know about it.
All you have to do is schedule a meeting in advance. Meetings can last up to 1 hour and can be scheduled M-F during normal business hours.
Further, the office/loft is also a free co-working space, as long as you are using AWS services. They have some nice couches and desks, and it's a great way to network with other developers who use AWS services.
Again, all of this costs nothing. If you're in SF, or traveling there, check it out!
But seriously, there aren't any servers. There is a local development server included, to help you code your web site locally. But, it's not a server you use for production.
Much of my time as a developer is spent deploying/configuring/monitoring/managing/scaling/paying for servers. I'm tired of that. Further, I'm broke. AWS Lambda and on-demand cloud computing in general can reduce and eliminate a lot of that.
Much love for servers, much love for containers, but AWS is putting out technologies that are more efficient. The goal of JAWS is to help everyone take advantage of this new tech.
But my worry has been on the performance side of Lambda: that near-instant feel is going to be compromised if much latency is introduced, and I still haven't done any performance testing to see how much latency Lambda adds. Have you done any exploration along these lines? In particular, I'll need to figure out what the consistent overhead is, and how often Lambda doesn't reuse previous instances and forces DB reconnects and the like.
What you say makes no sense. There is a server; you just don't manage it yourself.
> it's a function on the backend
And where is the "backend"? That's right, on a server.
> It's being called by the JS SDK on the browser,
What does the SDK call? A server, where your functions live.
Lambda + API Gateway are a BaaS. You might have some logic running in the browser, but the Lambda functions run at AWS. So there is a server.
It's a bit like Couch Apps, by the way, except that you can scale at the 'design document level', not just at the 'node level'.
Specifically, each Lambda function is a containerized app...
> no server
But you have to rely on AWS infrastructure. So what you meant is no server management.
This is very cool. I would love to see more aws resources exposed as well through JAWS like dynamo or RDS.
So this would be cheaper than just having an EC2/DO instance running flask with a couple of endpoints?
You pay for EC2/DO even if your app isn't used. With JAWS, you pay only when your app is used.
Here is Lambda's pricing, FYI: https://aws.amazon.com/lambda/pricing/
However, the benefit here is that you don't have to provision to peak capacity needs + (some safety fudge factor), thereby paying for idle capacity.
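For a rough sense of the numbers, here's a back-of-the-envelope calculation using Lambda's published prices at the time ($0.20 per million requests, $0.00001667 per GB-second, ignoring the free tier); the traffic figures are made up, so plug in your own:

```javascript
// Rough monthly Lambda cost from request count, average duration, and
// memory size. Prices are the published 2015 rates; free tier ignored.
function lambdaMonthlyCost(requests, avgDurationSec, memoryGB) {
  var requestCost = (requests / 1e6) * 0.20;           // $0.20 per 1M requests
  var gbSeconds = requests * avgDurationSec * memoryGB; // compute consumed
  var computeCost = gbSeconds * 0.00001667;             // $ per GB-second
  return requestCost + computeCost;
}

// e.g. 1M requests/month averaging 200ms at 128MB:
//   lambdaMonthlyCost(1e6, 0.2, 0.125)  → about $0.62/month,
// versus roughly $9-10/month for an always-on t2.micro.
```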
It's a very intriguing proposition for new services, where you may not be able to optimize costs as much for fear of losing uptime, but I'm sure it makes a lot of financial sense for AWS. Nothing stopping this from being a win/win though.
Getting bogged down in managing servers, or a fleet of instances, or the devops stack to automate a fleet of instances; it's all a time sink with either an actual cost or at the very least an opportunity cost, orders of magnitude beyond that $10/month.
This is especially true for organisations where provisioning an instance requires an epic chain of approvals "because it's a server". Unfortunate but all too common.
Of course, if you were planning to become (or already are) a competent devopsy admin, then provisioning instances is not necessarily a waste of your time. For everyone else, platform-as-a-service such as Lambda can be a huge win. Sure, there's still a tipping point, but it's much greater when time, rather than simply price, is considered.
On the other hand, and contrary to the Amazon-is-scalping-me gripes I hear on HN all too frequently, my experience has consistently been that AWS want you to minimise costs by design. Every well-run business knows the long-term virtuous cycle of satisfied & continuing customers far outweighs a short-term revenue grab.
ObDisclosure: I am a former AWS architecture manager but do not speak on behalf of my ex-employer.
It would be nice to integrate some ideas from LambdAuth into JAWS.
To get around execution time limits, invoke another Lambda within your current Lambda function.
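A sketch of that chaining pattern, assuming the aws-sdk module that ships in the Lambda Node.js runtime; the function name and payload shape are hypothetical:

```javascript
// Build the parameters for handing remaining work to a fresh invocation.
// InvocationType 'Event' makes the call asynchronous, so the current
// function can return before its own timeout fires.
function continuationParams(remainingWork) {
  return {
    FunctionName: 'myWorkerFunction',        // hypothetical function name
    InvocationType: 'Event',                 // fire-and-forget
    Payload: JSON.stringify({ work: remainingWork })
  };
}

// Inside the running Lambda (the Node.js runtime ships with aws-sdk):
//   var lambda = new (require('aws-sdk')).Lambda();
//   lambda.invoke(continuationParams(leftoverItems), function (err) {
//     context.done(err);
//   });
```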
As far as legal stuff goes, the American Federation of the Blind has been known to litigate against technology companies.
The real challenge is how to manage within the 60 second, 1024 thread limits in a sane way, however. You have to approach application decomposition in a much more rigorous way.
While AWS continues to improve Lambda performance (it's still early tech), I've started an "Optimization" section in the wiki, detailing how everyone can get better performance out of JAWS.
Further, JAWS is optimized for AWS Lambda performance.
Could you share some of your gulp optimizations so that I can incorporate them into the project?
You don't have to share your code if you don't want to, but if you could write down what the gulp tasks are, I could incorporate them into the JAWS Command Line Interface.
Thanks for the positive feedback btw :)
Reminds me of Divshot, a Heroku-like service for hosting static sites. It's great that you can roll your own site, and don't need a middleman to host for you.
Perhaps I am exposing ignorance about the needs of the blind community, but it would seem to me that a primarily important piece of software for them could be more confusing to find and update and so on if another piece of software appears with the same name. Maybe you should consider changing the name of your software before it's too late? It's possible that your Jaws may, in the long run, cause considerable confusion for the users of the pre-existing Jaws software.
I'm making a case based on empathy rather than trademark law and so on.
Seriously though, these kinds of architectures come at a cost. The website's claims are either misleading, or the authors haven't understood the problem domain. E.g.:
Each of your API URLs points to one of these Lambda functions. This way, the code for each API Route is completely isolated, enabling you to develop/update/configure/deploy/maintain code for specific API urls at any time without affecting any other part of your application(!!!).
Even if you add more exclamation marks, you'll still end up with front end code depending on the interface exposed by your backend code, meaning you'll have to do versioned rollouts, except you no longer have a single deployable but a zoo of store proce^W^Wlambda functions, with unclear interdependencies and (usually) no proper versioning or tracking.
This might be the right architecture for certain apps, but it's not the easy win/win that's presented here.
I'm not really sure what you mean by "tracking," but you can log anything you like from a Lambda function, and include version information in those logs.
I do agree with your analogy and caution, though.
Relying on stored procedures in a hosted database system also means you cannot write locally reproducible, reasonably hermetic tests.
It also (obviously) completely binds you to a single vendor.
In short, it has all the drawbacks this architecture had when it was still Oracle database servers and stored procedures in PL/SQL (and/or the equivalent from MS, IBM, ...). It also has similar benefits (no need to manage a separate server, simpler architecture, often simpler to get things done for novices).
There are good reasons why that architecture largely lost in the market by now, I'd be very careful to make sure this is a good fit for your problem. But of course only you can be the judge of that.
Swagger JSON can be imported to create/update your REST API instantly via AWS API Gateway.
Meaning, you can still do larger, versioned roll-outs by simply updating the JSON in your Swagger file.
This will give beautiful structure and simplicity to the JAWS workflow.
The single deployable server you advocate is a single point of failure.
I'm not talking about a deployable server, I'm talking about a deployable as a unit of deployment, e.g. a war archive. A deployable is a compile-time artefact; a single point of failure is a runtime concept. In that sense, saying a deployable is a single point of failure is like saying a line of code is a single point of failure.
Please reconsider leaving this comment here.