Edit: adding linebreaks...
2nd edit: Realized there were AWS access tokens in there. It's probably not a good idea for those to be exposed to the environment they're running this user-generated code in. They should probably be wrapping the user code in vm.runInNewContext()
In this case, while these environment variables appear to be sensitive, they are ephemeral keys, uniquely issued to the Function as part of running within Lambda. Their associated actions and permissions have been reviewed, approved and are required to run the Function. There is no risk of any Twilio customer’s Function being accessed or modified by disclosure of these keys.
That said, we very much appreciate the community raising potential concerns. If y’all ever see anything that looks like a vulnerability, we’d love for you to submit it to our Bug Bounty program: https://bugcrowd.com/twilio
Maybe they actually intend for people to be able to use all the functionality of Lambda, some of which (I guess) requires using the AWS access key?
Or are people naively assuming their own Lambda container is a reasonable sandbox for third-party code? If so, yikes.
Another take on this is that they're creating value for customers by shifting the reliability requirements from the customers themselves to AWS / lambda
... which I think is brilliant.
I feel like this new feature will make it super easy to work with Twilio. It's almost a no brainer.
But despite all that, I feel like setting everything up in one place would be a lot simpler.
Now if Twilio had a way to deploy serverless functions via git... that would be a clincher.
Also, a hell of a lot of work went into this feature. You can try setting this infrastructure up yourself if you think it will save you money. You already know they're using Lambda!
Sometimes systems that are secure with two parties become horribly insecure when you add a third. I don't know a lot about IAM, but generally I'd be very cautious about using standard access control mechanisms to implement sandboxing while running the code "in your account". Often you find resources are made available to "the whole account" with no further access control settings, because the system designers didn't imagine customers running true third-party code.
That said Amazon has some pretty thorough access control features and I'm sure Twilio has looked into it and figured out something reasonable.
If you think about this product, you are paying for the service, so if you tried anything malicious, the worst you would do is impact yourself, and ultimately pick up a bill for your usage at the end of the month, heh.
From what I can tell, API Gateway costs $3.50 per million calls, so unless they've negotiated some deal with AWS or something...
Happy to see we've come to the same conclusions about utility independently :).
If you have a big enough user base/set of use cases, the next step after opening an API to your service might be adding a BaaS/FaaS, so people with small use cases or prototypes can get started easily without using a server.
But it's not like Linkedin or Zenefits users would get more value out of "Linkedin Functions". It probably makes more sense if you can "eat" what would have been a separate (micro)service.
The one thing I don't see, though, is if somebody is sold enough on FaaS, why would they use your siloed Twilio FaaS instead of AWS Lambda/whatever, where you can interact with Twilio _and_ e.g. Shopify, Instagram, other APIs...
Maybe then an interesting idea for the big FaaS providers duking it out for market share right now might be an ability for SaaS providers to build "$YOUR_SAAS Functions" on top of their offering.
The end-user would see Lambda/Function/whatever templates pre-filled with the right API calls and a UI for choosing the different API-related triggers. This could be like the AWS marketplace, but for FaaS.
So I guess the answer to my clickbait-ey question at the top is most likely no, but maybe, let's see it play out. It definitely feels like a good way to sell more API usage.
"Today, we’re excited to announce Twilio Functions, a serverless environment to build and run Twilio applications so you can get to production faster."
I did just that ... I signed up for Twilio, wrote some simple TwiML inside their web editor, and attached it to a phone number.
No servers were involved (I am not sure how I could involve a server) and I "got to production" very fast.
What would I do differently now?
 "Ring Forever"
Like communicating with a separate service or database within a given request from an SMS or Voice callback, etc... functionality that until today would have required webhooks running on some other server (or cloud function)?
Wrote a bit of behind the scenes just the other day https://zapier.com/engineering/behind-the-cli/ if this sort of stuff tickles your fancy.
Could they really set up an AWS account, Lambda function, IAM role, API Gateway, with CloudWatch logs? I've done this in AWS. You have to use 5+ AWS services that have funny names and look nothing alike, and make them all work together.
Or let's say you wanted to buy a domain, hosting for it, and put your code on there for Twilio to hit with a callback URL. That's hard too.
If you are going to do either of the two approaches above, how many hours are you going to spend getting the callback URL foundation set up (and working) before you even get to use a Twilio product?
If callback URLs, SSL, etc. are difficult to understand, I consider that a sufficient barrier which requires people to have a clue about what they're building and understand the implications of their choices.
If the callback URL can use an IP address and doesn't require https, you could set up a Digital Ocean droplet for it in a couple of minutes. Or you could set up a port-forward on your router and run an HTTP server on your machine, but I think the Digital Ocean route might be quicker.
This, coupled with just giving them a raw IP address, has been good enough for me so far.
Twilio is interesting because they're offering this as a business-specific offering i.e., integrate with Twilio directly (SMS, voice), which on its face is actually more valuable than, say, a "generalized compute platform" (which we've referred to ourselves as, at times). I think it'll be really interesting to see how Twilio markets + plays with this model --- theoretically if it sells their other services they could get away with this being a loss-leader, which is an intriguing concept.
I'm thinking you mean the COST of compute here. If that's the case, I actually disagree on cost trend direction. I think the raw cost of compute will slowly increase. Long story but most of it borders on the economics of cloud computing.
So you can figure out their loss/margin based off the public Lambda pricing model.
Lambda costs $0.00001667 per GB-second... so if each Twilio Function used 1 GB of memory and ran for 1 second, it would cost them about $0.17 to serve 10,000 requests a month.
Since the max memory in Lambda is 1.5 GB and the max runtime is 5 minutes... the worst case is that Twilio is spending $75 for those 10k requests. I assume they were smart enough to use lower amounts of memory and set the maximum runtime of the Lambda function pretty low, like a few seconds.
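Writing that back-of-envelope math out (using the 2017 Lambda price quoted above and the 10k-request figure referenced in the thread):

```javascript
// Back-of-envelope check of the numbers above, using the 2017 Lambda
// price of $0.00001667 per GB-second.
var pricePerGbSecond = 0.00001667;
var requests = 10000;

// Optimistic case: 1 GB of memory, 1-second runs.
var typical = 1 * 1 * pricePerGbSecond * requests;       // ~ $0.17

// Worst case under Lambda's 2017 limits: 1.5 GB, 300-second timeout.
var worstCase = 1.5 * 300 * pricePerGbSecond * requests; // ~ $75

console.log('typical: $' + typical.toFixed(2) +
            ', worst case: $' + worstCase.toFixed(2));
```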
Good on them for integrating it.
Or is that just not a thing and it has to be done OOB?
It's not at all common in the US, and Twilio doesn't seem to support it anywhere.
Also, anyone know why all these serverless environments that are coming out focus on Node.js?
AWS has the resources to grow teams around different languages / runtimes, smaller players (or recent IPOs) do not in the same capacity. Targeting anything other than Node would be "the largest" disservice to your customer base in the case of Twilio, and would artificially slow any other player.
I'm betting AWS Lambda, and Twilio's offering as well, are mostly built around such a model. Mind you, Lambda now also offers a "prefork" container-based stack for function execution, but it didn't at first, and for good reason: it's harder to build, harder to operate, and has much higher overhead than a cluster of regular processes with internal sandboxing. This is much the same reason that Heroku started with their Aspen stack—a cluster of internally-partitioned Ruby processes, and boy was that a feat—before building their container service.
† Given that you could pretty easily multitenant-sandbox Lua evaluation... it'd be quite easy to extend OpenResty (https://openresty.org/) into being a FaaS, wouldn't it? Anyone hacking on this?
It's not using OpenResty though (although OpenResty is very cool)
We are going to be adding Node as well though, for the reasons you mention above around Ecosystem.
Tropo certainly lacked some things compared to Twilio while I was there, but this is undoubtedly an area where Twilio is playing catch up.
I'd add that "X already does this" comments like this are a routine feature of Twilio HN threads, as long as the company's been alive. https://kev.inburke.com/kevin/six-years-of-hacker-news-comme...
I don't think they are too bothered by us pointing out things like this.
Their documentation is horrible and you are extremely limited when running the code on their servers. If you need to do anything complicated, you have to have things running through their WebAPI.
They have made breaking changes to their API without communicating with their users.
Except for the 6 billion SMS messages sent each day.
We've got a trial account that will let you play around before spending any money. You'll only be able to text numbers that you've verified with us, but it should give you a feel for things (this is what the vast majority of students at Hackathons use).
If you want to give it a try: twilio.com/try-twilio
When you're ready to make the leap, $20 buys you about 2500 text messages in the US.
(disclaimer: work on Developer Community team)
If every engineer with a six-figure salary decided they would pay the couple bucks it costs to test drive a product, the customer long-tail for dev products would add up to something appreciable and as a result you might actually see a lot more innovation around dev tools. As it stands, one of the best developer products of our generation, Heroku, even had a hard time pricing / selling to their long tail and we saw shutdowns of great products like Parse.
Twilio actually has a very reasonable model. $20 deposit, non-recurring, is way less than you'd spend on a Thursday night date, for example, and you get to build something with it. Hell, speaking of building, you've probably spent more on a Lego set. (I have. As an adult... for myself.)
(What actually ends up happening is developer companies go full OSS, give everything away for free for massive adoption, then AWS figures out how to productize it and sell to Enterprise. You can trace the points from Docker's inception to AWS Lambda processing trillions of trades per day.)
This is an amazing move on their part. I think this lowers the barrier to use their services and it was already stupid simple to set up.
Since you have no need for their service, not having your account on their system is best for Twilio.
If they built this ON lambda then I'd bet against it very strongly, but I can't imagine smart engineers doing that in 2017 (ha...)
Serverless functions on Twilio (or any other platform for that matter) is not competing with Lambda.
The point of serverless functions is to take your Twilio app that you were already creating and to completely eliminate the back-end.
Now you can make a fully functioning Twilio app without messing around with heroku.
In a vague roundabout way both industries are used for sleeping.
But the connection is so remote that the answer is no.
So Lambda and Twilio functions are similarly "competing". But in reality there is less than 1% of situations where someone would seriously be deciding between the two.
Edit: It is using Lambda!
If you had to work around the 5 minute limit then lambda may have been the wrong technology choice. It's really neat for short quick bursts of compute, not long running processes.
What are some of the shortcomings you ran into using it?