Hacker News

We're using FastAPI in production at InvestSuite. It's a highly productive framework with very well-written documentation. You get jump-started right away.

The only caveat is async Python. It can be tricky when using 'sync' libraries: it's not always straightforward, and you'll find yourself wondering why your server is blocked from time to time. It's not a problem with FastAPI, but you need to be aware that if you make a blocking call (a function not defined with 'async') to a db, it blocks the event loop.




I'm glad FastAPI is useful! And thanks to InvestSuite for being one of the FastAPI gold sponsors.

If you are having problems, you can ask in GitHub issues.

But for the async stuff, a simple rule of thumb is to always use normal def functions and blocking (non async) libraries, that way FastAPI will do the right thing and make sure to run it in a threadpool (thanks to Starlette, the underlying library).

And for the specific path operations (endpoints) where you need to optimize performance, then you can use async and carefully choose async libraries, or run the blocking code with run_in_threadpool, but you can leave those details and possible extra complexity for the cases that actually need the extra performance or async support.


We use fastapi in production too, but the problem we faced with sync stuff in combination with sqlalchemy was that the sessions (which we inject using Depends) were created before all the actual sync functions were executed, so the connection pool ran dry and everything became unresponsive. With flask I had a better experience because it creates the session in the same thread as the function that will handle the request. If you overload it a bit (say, 100 concurrent requests with a connection pool of 30) all the Depends calls will block because there are no threads left in the pool to actually handle the requests.

I understand that fastapi is more suited for async stuff, for which it truly works great, but it would be nice if there were an idiomatic solution within fastapi and/or starlette that prevents these kinds of problems.

Great work otherwise!


Thanks, it would be great if you could create an issue with a simple way to replicate the problem so that I can check it out properly.


Any plans for gevent compatibility?


Yes and it's made worse by the fact that there's no way to get the raw body of the request in a synchronous endpoint.

I also ran into some really bad validation / serialization performance degradations for large response bodies. Serializing responses with a few hundred small objects or neural network embeddings would blow a function that takes 7ms up to 100-200ms.


My understanding was that if you write a regular function (`def` rather than `async def`) then FastAPI (or really Starlette which it uses under the hood) executes the function in a thread pool so that no blocking of the main event loop should occur.


I didn't explain it well in my comment. Consider the following example:

    import time
    from fastapi import FastAPI

    app = FastAPI()

    def blocking():
        time.sleep(5)

    @app.get("/")
    async def index():
        blocking()

The `blocking` function will block the event loop. This is something you need to be aware of. Gist with a few scenarios: https://gist.github.com/lukin0110/0074ec5325224674010193bb95...


Isn't the point that you should be using `def`, not `async def` here?


Yes. But this is a very basic example. When you have an async function with `await` statements in it, and later in that function you call a blocking function, you need to be aware that you have to run it in the threadpool.

You don't always know that a function call is blocking, because you don't always know what is happening behind the scenes of that function or what it depends on.


What is the benefit of a threadpool though? Am I understanding it correctly that due to the GIL, python will just keep switching between the threads, so instead of running A then B each at 100% speed, both will run concurrently at 50% speed (+/- overhead)?


Only if you're CPU bound. Usually your webserver is blocking on disk IO or database calls or whatever, not calculating stuff, in which case the GIL doesn't matter.
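A rough timing sketch of that: time.sleep releases the GIL just like blocking IO does, so two "IO waits" on separate threads overlap instead of running back to back.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound():
    # time.sleep releases the GIL, like a socket or DB driver
    # waiting on the network would.
    time.sleep(0.5)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    for f in [pool.submit(io_bound), pool.submit(io_bound)]:
        f.result()
elapsed = time.perf_counter() - start

# Two 0.5s waits overlap: total is ~0.5s, not ~1.0s.
print(f"{elapsed:.2f}s")
```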


You can get benefits with IO bound work.



