
> In some cases, for pure data validation and processing, you can get performance improvements of 20x or more. This means 2,000% or more.

Amazing! Excited to try it out.

Slightly OT, but what are some use cases where you'd still use Flask over FastAPI? I really like FastAPI's DevEx and don't see myself going back to Flask anytime soon. Curious to hear what others think.




Flask has been around much longer than FastAPI and, as a result, is a much more mature framework. Some examples:

- There's a memory leak with a particular combination of packages in FastAPI [0]

- Before Pydantic v2, you would validate your data on input (when it's stored in the db) and then again every single time on retrieval. There was no way to skip validation, for example when generating a response from data that was already validated when it was persisted to the db. [1]

- FastAPI has documentation only in the form of tutorials. There is no API reference, and if something is not clear, reading the source code is the only option

- You need ORJSON for maximum serialisation performance (perhaps this has changed with Pydantic v2) [2]

- Using FastAPI with uvicorn doesn't respect log format settings [3]

I don't mean to imply that FastAPI is a bad framework. The Flask ecosystem has had over a decade to mature. FastAPI and the ecosystem will get there but it _needs_ time.

- [0] https://github.com/tiangolo/fastapi/discussions/9082

- [1] https://github.com/pydantic/pydantic/issues/1212

- [2] https://fastapi.tiangolo.com/advanced/custom-response/#use-o...

- [3] https://github.com/encode/uvicorn/issues/527
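The double-validation pattern from the second bullet can be sketched in plain Python. This is a hypothetical illustration with made-up names (`User`, `save`, `load`), not Pydantic's API: the point is that the same checks run on construction both when a row is stored and again when it is read back unchanged.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

    def __post_init__(self):
        # Validation runs on every construction...
        if not isinstance(self.age, int) or self.age < 0:
            raise ValueError("age must be a non-negative int")

def save(user: User, db: dict) -> None:
    db[user.name] = {"name": user.name, "age": user.age}

def load(name: str, db: dict) -> User:
    # ...including here, where the row was already validated on save.
    return User(**db[name])

db = {}
save(User("ada", 36), db)
print(load("ada", db))
```

Skipping that redundant re-check on trusted data is exactly the escape hatch the linked issue asks for.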


> - You need ORJSON for maximum serialisation performance (perhaps this has changed with Pydantic v2) [2]

The common orjson trick no longer works in v2 and will throw warnings, but it appears it's no longer necessary, since the JSON formatting leverages the native serializer, which happens in Rust-land.


Exactly my issue too. Why is there no API documentation for FastAPI? It is very difficult to know what is and isn't available beyond the tutorial-style docs.

(I love FastAPI and use it for all my projects, this one little thing troubles me sometimes)


Well, the performance increase is so huge because Pydantic v1 is really, really slow. And for using Rust, I'd have expected more, tbh…

I've been benchmarking pydantic v2 against typedload (which I write) and despite the rust, it still manages to be slower than pure python in some benchmarks.

The benchmarks on the website still compare against v1, because v2 was not out yet at the time of the last release.

pydantic's author will refuse to benchmark any library that is faster (https://github.com/pydantic/pydantic/pull/3264 https://github.com/pydantic/pydantic/pull/1525 https://github.com/pydantic/pydantic/pull/1810) and keeps boasting about amazing performance.

On pypy, v2 beta was really really really slow.


This is simply not true. So sad.

We removed benchmarks from the docs completely when the rule of "only show benchmarks with comparatively popular or more popular libraries" no longer made sense, and maintaining benchmarks with many hobby packages was obviously going to become burdensome.

Please show me a sensible benchmark where your library is faster than pydantic?


> This is simply not true. So sad.

Ah sorry, so, just coincidentally pydantic happened to be slower than any other library that had a PR to be added to the benchmark, but that was not the reason they were rejected.

Better now?

> Please show me a sensible benchmark where your library is faster than pydantic?

    $ python3 perftest/realistic\ union\ of\ objects\ as\ namedtuple.py --pydantic
    (1.2192879340145737, 1.2595951650291681)
    $ python3 perftest/realistic\ union\ of\ objects\ as\ namedtuple.py --typedload
    (1.0874736839905381, 1.114147917018272)
I'm not a math genius but I'm fairly sure that 1.08 is less than 1.21.

So much for your invite to be gentle and cooperative :D (https://github.com/ltworf/typedload/pull/422)
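For context, paired timings like the ones above are typically produced with something like `timeit.repeat`, keeping the best (least noisy) runs. A minimal stdlib sketch; the workload here is purely illustrative, not typedload's actual perftest script:

```python
import timeit

def workload():
    # Stand-in for "load a realistic union of objects".
    sum(i * i for i in range(1000))

# Each element of `times` is the total seconds for `number` calls;
# the smallest values are the most representative.
times = timeit.repeat(workload, number=1000, repeat=3)
best_two = tuple(sorted(times)[:2])
print(best_two)
```

Comparing the best runs of two libraries on the same machine is what makes "1.08 < 1.21" a meaningful statement.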


¯\_(ツ)_/¯ - I get different results, see the PR.


It seems you're running on Apple hardware. I really can't reproduce, since I don't own any, and unless I get one as a gift I never will.

Anyway, no server code runs on Apple hardware, so winning benchmarks only on Apple isn't that important, I think.


Well that is a bad look. I am sure the highlighted performance metrics have had a lingering impact on library decisions.


Is it still a bus factor of one? I veer to the side of boring technology and FastAPI is still too in flux for me. I do not ever want to be the vanguard discovering novel problems with my framework.


Definitely this. Flask is old and well-tested, with a solid feature set and little need to change how it works.

Also you'd use Flask for basically anything that isn't an "API", but where you still want something lighter-weight than Django. I believe other traditional Python web frameworks like Pyramid fall into the same category.

The "Fast" in FastAPI refers to the speed of getting a working prototype running, specifically for an API that accepts and emits JSON and implements an OpenAPI schema. If that's not your use case, then you might not need or want FastAPI.


Litestar.dev has a team of dedicated developers and a similar API to FastAPI. You can watch the commit activity and the teamwork.


Coming from Django primarily, but I've written some Flask code. I would say the ecosystem. As of my last try, adding authentication (via cookies) to a FastAPI project was somewhat cumbersome.

Usually as the projects grow, and I start reinventing the wheel, I come to regret not going for a "full" framework.


Not to be excessively negative, but this really means very little without more context. Maybe it was very slow before, or it's a particularly rarely exercised scenario. I'm always skeptical when people write such praise of their own software without giving a comparison point.


> Maybe it was very slow before

That is at least partly the case. I maintain msgspec[1], another Python JSON validation library. Pydantic V1 was ~100x slower at encoding/decoding/validating JSON than msgspec, which was more a testament to Pydantic's performance issues than msgspec's speed. Pydantic V2 is definitely faster than V1, but it's still ~10x slower than msgspec, and up to 2x slower than other pure-python implementations like mashumaro.

Recent benchmark here: https://gist.github.com/jcrist/d62f450594164d284fbea957fd48b...

[1]: https://github.com/jcrist/msgspec
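What "encoding/decoding/validating JSON" means in these benchmarks can be sketched in pure stdlib Python. The names below (`Point`, `decode_validated`) are illustrative, not msgspec's API; libraries like msgspec do this same parse-then-type-check loop in compiled code:

```python
import json
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

def decode_validated(raw: str) -> Point:
    obj = json.loads(raw)
    # Minimal "validation": check field types before constructing.
    if not (isinstance(obj.get("x"), int) and isinstance(obj.get("y"), int)):
        raise TypeError("bad payload")
    return Point(obj["x"], obj["y"])

print(decode_validated('{"x": 1, "y": 2}'))
```

The per-message overhead of these checks in pure Python is where the ~10x gaps between implementations come from.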


Eeey hello :D

Eeh, come on, I think it's a bit unfair to compare, because msgspec doesn't support regular Python union types… which are the number one source of slowness… at least in my real-world use case of the thing. I've got hundreds of classes with abundant nesting and unions.

In pydantic v2 they did the same thing I've been doing in typedload for a few versions already: check the field annotated with a Literal and directly pick the correct type, rather than doing trial and error. So now the speed for unions has become better.

Even so, for binary vs pure Python, I'd have expected much more.
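The Literal-tag dispatch described above can be sketched in pure Python. The types here are illustrative, not typedload's or pydantic's internals: instead of trying each union member until one validates, a tag → type table built from the `Literal` annotations picks the right type in one lookup.

```python
from typing import Literal, NamedTuple, get_args, get_type_hints

class Cat(NamedTuple):
    kind: Literal["cat"]
    lives: int

class Dog(NamedTuple):
    kind: Literal["dog"]
    good: bool

# Build the tag -> type table once, from each member's Literal annotation.
DISPATCH = {
    get_args(get_type_hints(cls)["kind"])[0]: cls
    for cls in (Cat, Dog)
}

def load(data: dict):
    # O(1) pick via the tag, instead of trial-and-error over the union.
    return DISPATCH[data["kind"]](**data)

print(load({"kind": "dog", "good": True}))
```

With hundreds of classes in a union, this turns an O(n) scan per object into a dict lookup, which is why it helps so much in deeply nested data.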


Pydantic was a pure-Python library and was recently rewritten in Rust. To be fair, I have seen some critiques of this rewrite, specifically that the validation model could have been much faster in Python and that switching languages papers over the deficiencies. I'm not in a good position to judge whether this is true.


I wrote so in other comments… I was surprised to see that for the benchmarks of my library (typedload), it now manages to win a few… but not all of them.


Would love to see a benchmark where typedload is faster than Pydantic v2. Could you share a link?


You realise that you released version 2 three days ago?

I re-do the benchmarks of typedload when I make a release. The benchmarks will be updated when the next release happens.

I will not do a new release because you need new benchmarks after 3 days. You are free to include benchmarks on your own website (but we both know you won't do that).

This is because of how my whole setup works, requiring a git tag and a finished CHANGELOG. Running the command to regenerate the website would cause documentation from the master branch to be published.

The benchmarks will be here, as usual. https://ltworf.github.io/typedload/performance.html

I run them just getting the latest available version. But since I can't time travel, I can't get versions from the future to appease you, sorry.

I just ran them locally (like you could do by yourself) https://news.ycombinator.com/item?id=36644818


Yes it was incredibly slow and inefficient.

I maintain typedload (a similar project that I started before pydantic's first release), and pydantic 2 somehow still manages to be slower than a pure-Python library that got no funding to improve performance.


You can use gevent, with no need to replicate every library under the sun for async IO.



