
How to build and deploy a simple Facebook Messenger bot with Python and Flask - sarumantlor
http://tsaprailis.com/2016/06/02/How-to-build-and-deploy-a-Facebook-Messenger-bot-with-Python-and-Flask-a-tutorial/
======
mcescalante
If you have something built in Flask that you want to run on your own VPS
rather than Heroku or another IaaS, it's incredibly easy using gunicorn or
uwsgi with nginx. Of the two, gunicorn is a bit faster and easier to get
started with.

Here are some rough instructions in a gist, and you can find plenty of great
guides on Google, of course:
[https://gist.github.com/mcescalante/5db616b9a826605f1df35f79...](https://gist.github.com/mcescalante/5db616b9a826605f1df35f79b09cf6f6)

~~~
TTPrograms
I've mostly been using nginx for reverse proxying, but what's the purpose of
using something like gunicorn or uwsgi over bare Flask? I'm not very familiar
with best practices in the area. Is the goal to filter requests for
performance, or something else?

~~~
mcescalante
The server config in the gist is definitely just a proxy, forwarding all of
the requests through to gunicorn. Many people add more to the configuration
block, to do something like serve static assets with nginx rather than their
Flask application.
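
As a sketch, an nginx block that proxies to gunicorn and serves static assets
directly might look like the following (the domain, paths, and port here are
assumptions for illustration, not taken from the gist):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    # Serve static assets straight from nginx instead of the Flask app
    location /static/ {
        alias /var/www/myapp/static/;  # hypothetical path
    }

    # Forward everything else to gunicorn
    location / {
        proxy_pass http://127.0.0.1:8000;  # gunicorn's default bind address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```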

From the top paragraph of
[http://flask.pocoo.org/docs/0.11/deploying/](http://flask.pocoo.org/docs/0.11/deploying/):

"While lightweight and easy to use, Flask’s built-in server is not suitable
for production as it doesn’t scale well and by default serves only one request
at a time. Some of the options available for properly running Flask in
production are documented here."

In short, the Flask development server is not really written or tuned for
production, so we use gunicorn or uwsgi, which are production-ready WSGI
containers :)
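
To make the distinction concrete, here's a minimal sketch of a bare WSGI
application, the callable that gunicorn or uwsgi actually serves (all names
here are illustrative):

```python
# A minimal WSGI application: a callable taking (environ, start_response).
# This is the interface gunicorn/uwsgi speak; Flask apps expose it too.
def app(environ, start_response):
    body = b"Hello from a WSGI app"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# For local testing only, the stdlib ships a single-threaded WSGI server:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8000, app).serve_forever()
# In production you'd instead run something like:
#   gunicorn -w 4 module:app
```

Since a Flask application object implements exactly this interface, pointing
gunicorn at it (e.g. `gunicorn myapp:app`) works without any glue code.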

------
tshtf
So I can deploy a Facebook Messenger bot on commodity hardware and services;
yet I cannot run Facebook Messenger on my mobile browser or write a compatible
Messenger app for iOS or Android?

~~~
swsieber
I was just thinking about this. I have to wonder if it's possible to
create/run a personal Matrix integration server.

Edit: which of course would let you use any compatible chat app, including
Vector.

~~~
lsseckman
Surely someone's done this, but I can't find any examples.

------
webo
I started using AWS API Gateway + Lambda for these types of problems, where I
don't actually need a 24/7 running server.

Bots / webhook listeners really fit the API Gateway + Lambda use case.
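
A rough Python sketch of that pattern (the event shape follows API Gateway's
proxy integration; the echo logic and field names are made up for
illustration, not Facebook's actual webhook schema):

```python
import json

# Hypothetical Lambda handler for a Messenger-style webhook.
# API Gateway's proxy integration passes the HTTP body through as a string.
def handler(event, context):
    payload = json.loads(event.get("body") or "{}")
    text = payload.get("message", "")
    reply = {"reply": "You said: " + text}
    # API Gateway expects statusCode/headers/body in the return value.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(reply),
    }
```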

~~~
LAMike
Are you using Javascript for that? I thought the API needed to have a server
that's always running so the chats can be instant, but is it possible to do
that with Lambda?

I'm currently using Modulus and I think it will get pretty expensive if I keep
making more bots, so a cheaper way to do it would be nice!

Edit: Found a tutorial: [https://medium.com/@igorkhomenko/run-facebook-messenger-chat...](https://medium.com/@igorkhomenko/run-facebook-messenger-chat-bot-on-aws-lambda-2fa800a67d76#.4v5va9iaz)

~~~
happyslobro
Lambda functions initialize in <10ms when warm, <100ms otherwise, in my
experience. They seem to stay warm for 15 minutes or so after a cold start.

The excellent Serverless framework is pure JS, but you can also deploy jars to
Lambda, which lets you use any JVM language. Clojure Lambdas have a more
extreme profile: ~500ms for a cold start, <1ms once warm. You could keep it
warm with a timer event source, which could do double duty as a watchdog /
heartbeat thing.
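
The warm-up idea can be sketched like this: a scheduled CloudWatch event
invokes the function periodically, and the handler short-circuits on it
(checking `source == "aws.events"` is a common convention; the exact fields
are assumptions here):

```python
# Sketch: use a scheduled (CloudWatch timer) event to keep a Lambda warm.
def handler(event, context):
    if event.get("source") == "aws.events":
        # Scheduled warm-up ping: return immediately.
        # This is also a natural place to emit a heartbeat/watchdog log line.
        return {"warmup": True}
    # ... real webhook processing would go here ...
    return {"statusCode": 200, "body": "handled"}
```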

