> FastAPI is already being used in production in many applications and systems. And the test coverage is kept at 100%. But its development is still moving quickly. New features are added frequently, bugs are fixed regularly, and the code is still continuously improving. That's why the current versions are still 0.x.x, this reflects that each version could potentially have breaking changes.[1]
What kind of weird reasoning is this? This should be why you use actual semver, and you use the major version to indicate backward-compatibility-breaking changes, minor versions for new features, and patch versions for bug fixes that don't actually change the public API surface.
If you don't want to use semver, just don't use semver.
Sure, depending on how you interpret initial development. FastAPI has 60k stars on GitHub, has been extremely popular for at least four years, and is widely used in production by a lot of people. It's the maintainers' decision what their goals are for a v1 release, but I was personally surprised to learn that it hasn't had one yet. I can see why one might argue that they're not following the spirit of semver at this point.
As long as the developers/maintainers feel like "Anything MAY change at any time" and "The public API SHOULD NOT be considered stable" is true, both the "spirit" and specification (if we may call it so) say it should be on 0.x.z
> If your software is being used in production, it should probably already be 1.0.0. If you have a stable API on which users have come to depend, you should be 1.0.0. If you’re worrying a lot about backward compatibility, you should probably already be 1.0.0.
Yeah, that doesn't square well with "FastAPI is already being used in production in many applications and systems. And the test coverage is kept at 100%. But its development is still moving quickly." now does it?
It doesn't matter how quickly you move, you can apply real semver numbering just fine. Five years in, a 0.100 release obeys the letter of the law while being utterly ridiculous.
If that were the case, then the docs would call that out and warn people not to use their project in production. Instead, it's listed in a way that any normal person will read as a point of pride.
This is a good example of people confusing the letter of the law with the spirit of the law.
Semver is not a goal unto itself. Semver expresses a process to help consumers of an interface infer the implications of an upgrade without having any context on what specifics went into a release. Major changes imply breaking changes, minor changes imply the addition of backward-compatible features which can prevent future downgrades, and patch releases mean a drop-in replacement that fixes bugs.
Each and every single one of these scenarios reflects decisions made by product maintainers on their work's reliability. Semver is the tool, not the cause.
If product maintainers do not care about stable versions, it makes no difference if they comply with semver or not. What number they tack onto a release means nothing. They might as well tag a timestamp.
If they cared about assuring their consumers that their work is reliable and stable but still reserved the right to make breaking changes, they could blindly do major version releases and even drop any contract testing from their pipelines. They could even go a step further and claim they only support the N last major releases.
Nevertheless, competent maintainers and product managers know beforehand what is supposed to ship with a release, and even plan when and how to ship those changes. If anyone knows beforehand what changes go in a release, they can easily tell beforehand if that release should be major, minor, or patch. This is not rocket science.
> If product maintainers do not care about stable versions, it makes no difference if they comply with semver or not.
Exactly. Here, the maintainer reserves the freedom to break anything at any moment, and correctly uses semver to signal that through version number.
(Not) using semver as a versioning scheme is orthogonal to whether an API is stable.
As a potential user of said package, seeing this (among other things), I choose to use another package (Django REST Framework) whose maintainers do care about not breaking the API.
You're a few years behind: people actually went "haha, good joke... for real tho that's actually how we version, let's make it official" and then started using it in earnest.
There've been plenty of projects that officially use 0.x versioning for years now.
Yeah, really not though: you clearly haven't seen projects that actually use that as their official versioning policy ever since semver.org got published. So no, the joke didn't go over my head, because zerover stopped being a joke years ago. Unfortunately for everyone, the world took it seriously.
> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
That said, I'd argue that this is a little silly. Maybe a better design is to use something I am calling "zero-calver": 0.YYYY.MM.DD. Then use semantic versioning once stability is reached.
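A minimal sketch of the "zero-calver" idea, assuming plain numeric components: tags of the form 0.YYYY.MM.DD already sort correctly under ordinary component-wise comparison, so no special tooling is needed.

```python
# Sketch: "zero-calver" tags (0.YYYY.MM.DD) compare correctly with
# plain numeric component-wise ordering.
def parse(tag: str) -> tuple[int, ...]:
    return tuple(int(part) for part in tag.split("."))

tags = ["0.2023.7.1", "0.2022.12.25", "0.2023.1.9"]
ordered = sorted(tags, key=parse)
# oldest to newest: 0.2022.12.25, 0.2023.1.9, 0.2023.7.1
```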
This is using semver in the exact way it’s intended but with predictably silly outcomes. For several years now I’ve been of the opinion that this particular aspect of semver is … very bad actually.
For personal projects I skip any zero major prefixes. In their place, I use alpha/beta suffixes and version Major/minor/patch from an initial 1.0 alpha. Hasn’t caused any problems for me.
And to clarify the approach, the equivalent version would be 1.0.0-alpha.100.0. Clear in its meaning, clear in versioning resolution, ambiguous only in its maturity but that’s what the project is trying to do.
Edit: or probably it’s 1.0.0-beta.100.0. That’s still much clearer
Which sucks, to be honest. Just accept that your major can go up to 23.8.3, and it does not make it less professional than 0.231.0. Actually it's better, because it says which releases broke backward compatibility and which ones did not.
Using semver doesn't magically make it okay to make breaking changes constantly. It's still a pain for anyone using your library.
Semver explicitly encourages you to use major version 0 during early development for this exact reason; it's up to the maintainers to decide when they can be more stable.
Actually, semver would make sense if people used it correctly.
Changing the major means that it breaks backward compatibility. Changing the minor means that it has new features. Changing the patch means that it had bugfixes.
That's useful for libraries. Of course it's a bit less useful for executables. I just increase the major for executables, but if I need to backport a feature or a fix (to somebody who still uses an older version), then I use the minor/patch.
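The mechanics described above can be sketched as a tiny helper (a hypothetical `bump` function, not any real tool's API): once you know what kind of change a release contains, the next version number follows mechanically.

```python
# Hypothetical helper: given the known contents of a release,
# the next semver number follows mechanically.
def bump(version: str, change: str) -> str:
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":   # backward-incompatible change
        return f"{major + 1}.0.0"
    if change == "feature":    # backward-compatible addition
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # bug fix only
```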
https://litestar.dev is also reaching 2.0. It is a lot faster than FastAPI yet much better maintained. It has DTOs, events and channels, and a Repository/Service DDD-style framework built in. No promotional commits.
I had written about the reasons for moving away from FastAPI here, along with an intro to Litestar 2.0.
I think you raised some valid concerns there. I see 18 open issues in the FastAPI repo; what's going on there? Are they just moving everything into discussions? A bit concerning.
There are thousands of issues, about 30% of which are actual bugs, including some we found in production and reported. The maintainer wrote a script that converted them all to discussions and never looked back, so we never looked back either.
We have quite powerful channel and event systems that go along with websockets and realtime systems too.
Plus, if you use the Repository + SQLAlchemy plugin and DTOs, you can also work with CRUD events: before_update/after_update/before_insert/after_insert, etc.
Starlette can be considered a pure server framework, along the lines of CherryPy or Werkzeug in the WSGI/sync world.
Litestar is a lot more batteries-included, with built-in integration with SQLAlchemy and many other ORMs as plugins, plus built-in security and authentication middleware.
Join our Discord, we have a good community there too.
We haven't tested Django Ninja in terms of performance.
But from our experience with Django 4.2 async, Django isn't truly asynchronous yet. Since the ecosystem is not asynchronous, I am not sure how an asynchronous API would benefit when the extensions aren't.
> In some cases, for pure data validation and processing, you can get performance improvements of 20x or more. This means 2,000% or more.
Amazing! Excited to try it out.
Slightly OT: But what are some use-cases where you'd still use Flask over FastAPI? I really like FastAPI's devEx and don't see myself going back to Flask anytime soon. Curious to hear what others think.
Flask has been around much longer than FastAPI and, as a result, is a much more mature framework. Some examples:
- There's a memory leak with a particular combination of packages with FastAPI [0]
- Before Pydantic v2, you would validate your data on input (when it's stored in the db) and then again every single time on retrieval. There was no way to skip validation, for example when generating a response from data that was already validated when it was persisted to the db. [1]
- FastAPI has documentation only in the form of tutorials. There is no API documentation, and if something is not clear, looking through the source code is the only option
- You need ORJSON for maximum serialisation performance (perhaps this has changed with Pydantic v2) [2]
- Using FastAPI with uvicorn doesn't respect log format settings [3]
I don't mean to imply that FastAPI is a bad framework. The Flask ecosystem has had over a decade to mature. FastAPI and the ecosystem will get there but it _needs_ time.
> - You need ORJSON for maximum serialisation performance (perhaps this has changed with Pydantic v2) [2]
The common orjson trick no longer works in v2 and will throw warnings, but it appears it's no longer necessary, since the JSON formatting leverages the native serializer, which happens in Rust-land.
Exactly my issue too. Why is there no API documentation for FastAPI? It is very difficult to know what is available and what is not beyond the tutorial-style docs.
(I love FastAPI and use it for all my projects, this one little thing troubles me sometimes)
Well, the performance increase is so huge because pydantic v1 is really, really slow. And for something rewritten in Rust, I'd have expected more, tbh…
I've been benchmarking pydantic v2 against typedload (which I write) and despite the rust, it still manages to be slower than pure python in some benchmarks.
The ones on the website are still about comparing to v1 because v2 was not out yet at the time of the last release.
We removed benchmarks from the docs completely when the rule of "only show benchmarks with comparatively popular or more popular libraries" no longer made sense, and maintaining benchmarks with many hobby packages was obviously going to become burdensome.
Please show me a sensible benchmark where your library is faster than pydantic?
Ah sorry, so, just coincidentally pydantic happened to be slower than any other library that had a PR to be added to the benchmark, but that was not the reason they were rejected.
Better now?
> Please show me a sensible benchmark where your library is faster than pydantic?
Is it still a bus factor of one? I veer to the side of boring technology and FastAPI is still too in flux for me. I do not ever want to be the vanguard discovering novel problems with my framework.
Definitely this. Flask is old and well-tested, with a solid feature set and little need to change how it works.
Also you'd use Flask for basically anything that isn't an "API", but where you still want something lighter-weight than Django. I believe other traditional Python web frameworks like Pyramid fall into the same category.
The "Fast" in FastAPI refers to the speed of getting a working prototype running, specifically for an API that accepts and emits JSON and implements an OpenAPI schema. If that's not your use case, then you might not need or want FastAPI.
Coming from django primarily, but written some flask code. I would say the ecosystem. As of my last try, adding authentication (via cookies) to a fastapi project was somewhat cumbersome.
Usually as the projects grow, and I start reinventing the wheel, I come to regret not going for a "full" framework.
Not to be excessively negative, but this really means very little without more context. Maybe it was very slow before, or it's a particularly rare scenario. I'm always skeptical when people write such praises of their own software without giving a comparison point.
That is at least partly the case. I maintain msgspec[1], another Python JSON validation library. Pydantic V1 was ~100x slower at encoding/decoding/validating JSON than msgspec, which was more a testament to Pydantic's performance issues than msgspec's speed. Pydantic V2 is definitely faster than V1, but it's still ~10x slower than msgspec, and up to 2x slower than other pure-python implementations like mashumaro.
Eeh come on, I think it's a bit unfair to compare, because msgspec doesn't support regular python union types… which are the number 1 source of slowness… at least in my real world use case of the thing. I've got hundreds of classes with abundant nesting and unions.
In pydantic v2 they did the same thing i've been doing in typedload for a few versions already: check the field annotated with a Literal and directly pick the correct type, rather than do try and error. So now the speed for unions has become better.
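A minimal pure-Python sketch of that optimization (the names here are illustrative, not pydantic's or typedload's internals): instead of attempting each union member in turn and catching failures, read the tag field annotated with a `Literal` once and pick the right type directly.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Dog:
    kind: Literal["dog"]
    sound: str

@dataclass
class Cat:
    kind: Literal["cat"]
    sound: str

# Tag-based dispatch: a single dict lookup replaces trial-and-error
# over every member of the Dog | Cat union.
DISPATCH = {"dog": Dog, "cat": Cat}

def load_pet(data: dict):
    return DISPATCH[data["kind"]](**data)

pet = load_pet({"kind": "cat", "sound": "meow"})
```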
Even so, for being binary vs pure python, I'd have expected much more.
Pydantic was a pure python library and was rewritten in Rust recently. To be fair, I have seen some critiques of this rewrite. Specifically saying that the validation model could have been much faster in Python and switching languages papers over the deficiencies. I'm not in a good place to judge if this is true or not.
I wrote so in other comments… I was surprised to see that for the benchmarks of my library (typedload), it now manages to win a few… but not all of them.
I re-do the benchmarks of typedload when I make a release. The benchmarks will be updated when the next release happens.
I will not do a new release because you need new benchmarks after 3 days. You are free to include benchmarks on your own website (but we both know you won't do that).
This is because of how my whole setup works, requiring a git tag and a finished CHANGELOG. Running the command to regenerate the website would cause documentation from the master branch to be published.
I maintain typedload (a similar project, that I started before pydantic's first release) and pydantic 2 somehow still manages to be slower than a pure-Python library that got no funding to improve performance.
The Python ecosystem is strange. Where other dev communities will embrace new ways of doing things faster than most people can keep up, the Python community needs to be pulled kicking and screaming into the light once every decade or so. Python 2 to 3: ~10 years.
async/await has been in Python since 2015, it feels like it's going to be another 5 years before we see people taking async seriously in the big packages. Same problem we had during the 2/3 transition. No library support, no developer support.
None of the mini-frameworks based on Starlette comes close to Django's fullstack facilities, but the upcoming Litestar 2.0 has a lot of features that Django people desire.
We have a DTO + Repository + SQLAlchemy plugin system: just from the definition of a SQLAlchemy 2.0 model, it will give you a DDD-style Repository/Service with
CRUD, filtering, pagination, and many common APIs by default. Those need extensions in Django.
For fullstack experience see :
https://github.com/cofin/litestar-fullstack - it has users and auth, roles, teams, tagging, data migration, caching, background worker services, background scheduling, a manage.py-like CLI for creating users, admin users, and Docker containers.
The Starlette dev is the founder of Django REST Framework, which is my least favorite part of working with Django. I really wish Django shipped with its own REST framework.
I’ve been keeping an eye on FastAPI, Starlette, and a few other libraries but, compared to Django, they’re mostly powered by hype. I found a bunch of Pydantic bugs 2(3?)+ years ago that are only now getting addressed in v2.
There are a lot of other things to consider. Django only supports GET and POST out of the box and Django forms aren’t serializers. Those are just the obvious ones. Just returning JSON isn’t practical.
I’m not a fan of the complexity of containers or kubernetes. Everything I’ve built scales vertically on bare OS very nicely.
Regarding litestar, if I wanted to do Python web development I’d use Starlette, which I really liked, but I’ll probably stick with nodejs and typescript from here for web applications.
I think complexity can have multiple interpretations, especially in a discipline as fragmented as software engineering.
However, at least for me, containers solve three main sources of problems in shipping code in the current software environment: (1) reasonable consistency of the runtime environment across production, development, and staging (homologation); (2) a portable way to deliver software (just create the image); and (3) packaging source code together with its runtime.
I started my career in the late 2000s, and at least in my experience the code itself was the least of the issues: we had to develop in one environment for delivery to a runtime in another, transfer the files via FTP or replace files with _.old, and make sure it would work in all places.
K8s is complicated, but Docker (docker compose) solves a lot of problems, especially when you are working with a team.
Just 5-10 lines of YAML in your docker-compose.yaml file gives you a local infrastructure of db, redis, node, python, nginx and many other things. Then you can share it with anyone on your team, and it works on their machines too.
Don't be afraid of time-saving tech.
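For illustration, a compose file of roughly that size; the service names, image versions, and ports here are arbitrary examples, not a recommended production setup:

```yaml
# docker-compose.yaml - a minimal local stack (images/ports are examples)
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  redis:
    image: redis:7
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on: [db, redis]
```

Run `docker compose up` and every teammate gets the same stack on their machine.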
We have used nodejs/ts, but nothing meets the DX of Python, or Python + mypy, yet, and the ecosystem is unmatched.
They're totally different in terms of the scope of the project they're meant to be used for.
FastAPI is much closer to Flask in that it's trivial to throw up a single file with a couple of routes, and you can use something like SQLite yourself for persistence to disk, or install a couple of libraries for authentication or such (not sure if that's built in by now).
Django, on the other hand, requires a multi-step process to even start a project and creates a dozen files, most of which are boilerplate, before you can even see a "hello world" route.
But in exchange, you get not just a program which deals with routing and templates and status codes, but much more - a world-class ORM which integrates pretty much transparently with multiple data stores like Postgres or SQLite, an amazing dashboard out of the box which is really handy for sharing with non-techies, a very mature ecosystem, and perhaps most importantly, the "one right way to do things", which makes it a lot more effective for collaboration between a bunch of engineers. It's of course not infallible, has a learning curve, and comes with a good few footguns that get the uninitiated, but the upside is a real upside. Migrations alone might justify it - for all the grief of merge conflicts between migrations right before a code freeze, I can't imagine how much worse it would be to not have them.
So I wouldn't say there's significant advantages to using either - they're different tools for different use cases. If you want to get something up quick that isn't too complex, FastAPI is great for that. If you know you're eventually going to rewrite Django but worse, you may as well use the real thing.
I've just made a first project in FastAPI, and it was trivially simple compared to my previous adventures in DRF. Plus async from the get-go, which was also pleasant. I had a task as part of this that takes 10 seconds to run, and there's a built-in way to allow it to run in the background.
Yeah, at a past job I had to build out a way to run launchdarkly in FastAPI and it was very unpleasant. Launchdarkly only ships a sync python client so we ended up having to run it in another process to avoid blocking all requests
I found DRF to be great at generating fairly simple APIs very quickly. But complicated stuff gets... more complicated, due to inheritance and magic. I love me some DRF, govscent uses it: https://govscent.org/api/
All from a few lines of code.
But I've mostly switched to django-ninja which is more type safe and faster.
The rustification of Python libraries and tooling continues and it has been brilliant. In the past 6 months I have personally switched projects to ruff[0], polars and now - as of this morning[1] - pydantic 2 and FastAPI 0.100
[0] has replaced pylint, flake8, pyupgrade, isort, mccabe and pydocstyle
[1] bump-pydantic worked well, after porting settings to pydantic_settings.
At the cost of bigger downloads, not working with other python implementations, and in the case of pydantic, the performance gains aren't that impressive compared to pure python libraries.
I still can't stand Pydantic's API and its approach to non-documentation. I respect the tremendous amount of hard work that goes into it, but fundamentally I don't like the developer experience and I don't think I'll ever feel otherwise. I use it because my coworkers like it and I've learned its advanced features because I had to in order to get things done, not because I like it.
I would love to see a FastAPI alternative still using Starlette internally, but using Attrs + Marshmallow + Cattrs + Apispec instead of Pydantic. It would be a little less "fast" to write a working prototype, but I'd feel much more comfortable working with those APIs, as well as much more comfortable that my dependencies are well-supported and stable.
The problem of course is not that gluing those things together is hard. The problem is that now someone has put untold hundreds of person-hours into FastAPI, and replicating that level of care, polish, bugfixes, feature requests, etc. is difficult without putting in those hundreds of person-hours yourself.
Could you simplify your point? I was an ardent marshmallow user and when I finally switched to pydantic, it felt like I finally sat down in my life after standing forever. The documentation sounds good enough to me, but importantly the interface pydantic provides to define your json schema is the most elegant interface I’ve seen in any language and miles better than the mess marshmallow provided.
For many of us especially in the SaaS side, speed of these operations is a distant third priority compared to ease of writing and understanding the code, and ensuring reliable less buggy code. The actual compute happens on a cluster with spark or snowflake anyway.
There is no reference doc. The docs cover a lot of material in a small amount of space, burying important pieces of information and mixing up a large number of topics under unintuitive headlines. Reading the source code is occasionally necessary just to figure out how it all works.
The API is a little weird, particularly around defining validators. The parameter name-matching is an "interesting" design choice. Accessing "values" as a dict[str,Any] is messy if you care about static typing, although I can understand why they did it.
Furthermore, the behavior of validators and the exact sequence in which they run is not defined by the docs. It's not that hard to figure out, but it also might change at any time because there's no user contract. Attrs is significantly nicer in just about all respects here, especially their attention to detail in their extensive user guide and reference docs.
Speaking of user contract, there's no clear separation between private and public. Without a reference doc it all looks like fair game, but without a reference doc it also might all change at any moment. Either you stick to the examples, or you're off doing a guess-and-check dance and hoping something doesn't break.
Even with the Mypy plugin, I often have to write `if TYPE_CHECKING` all over any nontrivial Pydantic class consuming data from external sources. Variable annotations in Pydantic are fundamentally not PEP 484 type hints. That's fine, but it's confusing that they're almost the same, and, as above, it's almost entirely up to you to figure out how it all works, either by trial and error or by digging around in the issue tracker and StackOverflow.
Ease of writing and reliability is precisely my big area of annoyance and concern. Speed of (de)serialization is comparatively unimportant (although I don't like the huge amount of overhead involved and I avoid using it in hot code paths).
I also don't like using Pydantic-defined classes very much, because the actual init method signature is just *args, **kwargs, which doesn't work well with any tooling. It feels like being back in the Tornado & PyMongo dark ages where everything is dynamic or dynamically-generated and classes are just glorified hash tables.
I agree that the JSONSchema integration is outstanding. BaseSettings is also a tremendous productivity improvement, I love that I can define a class and immediately get a proper app-wide config reading from both env vars and a dotenv file. I also like the default error messages that tell you exactly which field failed validation. I also like the validator system (once I figured out how it worked), respecting the order in which I define the validators as well as supporting validators that run before or after the default set of validators (pre=True and pre=False respectively). I was probably being a little too negative before, but my annoyance level with the developer-facing API and documentation remains high, and I will gladly jump to an Attrs-based alternative as soon as one exists.
Please please take a look at V2, both the code and the documentation (although I admit, the documentation for V2 isn't finished).
I (the developer of Pydantic) had many of the same frustrations with Pydantic V2 which is why I've spent so long rewriting it to try and fix these concerns.
In particular:
* we now have API documentation [1]
* we have first class support for validating `TypedDict` which gives you a typing-valid dict representation of your data straight out of validation
* we now have strict mode
* we're working hard to define an exact spec for what validates to what [2]
* we have a strict separation between public/private - everything private is in a `pydantic._internal` module, and we have unit tests that everything which can be publicly imported is explicitly public
* we now use `Annotated[]` for defining custom validations/constraints, together with annotated-types [3]
* the protocol for customising validation and serialization has been significantly improved [4]
I'd really love to hear your feedback on V2 and what more we can do to improve it - your feedback seems unusually reasonable for HN ;-) - please email samuel@pydantic.dev or create an issue/discussion if you have any thoughts.
I too have made similar observations regarding pydantic and FastAPI.
I was evaluating various Python async http frameworks and landed on a similar stack:
- attrs/cattrs for models
- starlette+uvicorn for HTTP/websocket
- validation I’m still on the fence about. I’ll see how far I get with the built in validators offered by attrs. I use voluptuous at work and generally like the DX but it’s in maintenance mode.
This is purely personal preference, I'm sure devs using fastapi+pydantic are more productive in the long run. It almost feels like I'm hand rolling my own fastapi implementation, but at the same time I don't want to be too locked in to frameworks like that.
I've been burnt by magic frameworks that do too much behind the scenes and there's something nice about fully understanding what's going on when you hand stitch libraries yourself.
If you like cattrs, you _might_ be interested in trying out my msgspec library [1].
It works out-of-the-box with attrs objects (as well as its own faster `Struct` types), while being ~10-15x faster than cattrs for encoding/decoding/validating JSON. The hope is it's easy to integrate msgspec with other tools (like attrs!) rather than forcing the user to rewrite code to fit the new validation/serialization framework. It may not fit every use case, but if msgspec works for you it should be generally an order-of-magnitude faster than other Python options.
This looks like exactly what I've been looking for. I just want strong typing, json <-> struct and validation. Seems like it ticks all the boxes + speed benefits which is always nice. I especially find it useful that I can use messagepack for internal service chatter but still support json for external stuff and dump astuple to sqlite.
Depending on how far in you are, starlite/litestar has good documentation and offers another "batteries included" framework. Performance wise it's about the same and the stack is about the same. Fastapi suffers from the "one solo dev in Nebraska" paradigm (check out open prs and old tickets). For me the main draw of litestar is the batteries + better docs + more active development with multiple developers vs most other python web frameworks.
+1 for litestar[1]. The higher bus-factor is nice, and I like that they're working to embrace a wider set of technologies than just pydantic. The framework currently lets you model objects using msgspec[2] (they actually use msgspec for all serialization), pydantic, or attrs[3], and the upcoming release adds some new mechanisms for handling additional types. I really appreciate the flexibility in modeling APIs; not everything fits well into a pydantic shaped box.
I haven't heard of Starlite or Litestar before. Is one a fork of the other? Their documentation intro text is identical:
> {Litestar|Starlite} is a powerful, flexible, highly performant, and opinionated ASGI framework, offering first class typing support and a full Pydantic integration.
>
> The {Litestar|Starlite} framework supports Plugins, ships with dependency injection, security primitives, OpenAPI schema generation, MessagePack, middlewares, and much more.
[1] https://fastapi.tiangolo.com/deployment/versions/