Hacker News
FastAPI 0.100.0 release notes (tiangolo.com)
129 points by constantinum on July 7, 2023 | 110 comments



> FastAPI is already being used in production in many applications and systems. And the test coverage is kept at 100%. But its development is still moving quickly. New features are added frequently, bugs are fixed regularly, and the code is still continuously improving. That's why the current versions are still 0.x.x, this reflects that each version could potentially have breaking changes.[1]

What kind of weird reasoning is this? This is exactly why you should use actual semver: the major version indicates backward-compatibility-breaking changes, minor versions new features, and patch versions bug fixes that don't actually change the public API surface.

If you don't want to use semver, just don't use semver.

[1] https://fastapi.tiangolo.com/deployment/versions/


This is semver:

> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

(from https://semver.org/)


Sure, depending on how you interpret initial development. FastAPI has 60k stars on GitHub, has been extremely popular for at least four years, and is widely used in production by a lot of people. It's the maintainers' decision what their goals are for a v1 release, but I was personally surprised to learn that it hasn't had one yet. I can see why one might argue that they're not following the spirit of semver at this point.


As long as the developers/maintainers feel that "Anything MAY change at any time" and "The public API SHOULD NOT be considered stable" are true, both the "spirit" and the specification (if we may call it that) say it should be on 0.y.z


From the same link:

> How do I know when to release 1.0.0?

> If your software is being used in production, it should probably already be 1.0.0. If you have a stable API on which users have come to depend, you should be 1.0.0. If you’re worrying a lot about backward compatibility, you should probably already be 1.0.0.


Yeah, that doesn't square well with "FastAPI is already being used in production in many applications and systems. And the test coverage is kept at 100%. But its development is still moving quickly." now does it?

It doesn't matter how quickly you move; you can apply real semver numbering just fine. Five years in and sitting at 0.100 is obeying the letter of the law while being utterly ridiculous.


Or maybe the people using a thing under development that could constantly change in production are ridiculous?


If that were the case, then the docs would call that out and warn people not to use their project in production. Instead, it's listed in a way that any normal person will read as a point of pride.


The point is that it shows the maintainers have a severe lack of judgement


> This is semver:

This is a good example of people confusing the letter of the law with the spirit of the law.

Semver is not a goal unto itself. Semver expresses a process to help consumers of an interface infer the implications of an upgrade without having any context on what specifics went into a release. Major changes imply breaking changes, minor changes imply the addition of backward-compatible features which can prevent future downgrades, and a patch release means a drop-in replacement that fixes bugs.

Each and every one of these scenarios reflects decisions made by product maintainers about their work's reliability. Semver is the tool, not the cause.

If product maintainers do not care about stable versions, it makes no difference if they comply with semver or not. What number they tack onto a release means nothing. They might as well tag a timestamp.

If they cared about assuring their consumers that their work is reliable and stable while still reserving the right to make breaking changes, they could blindly do major version releases and even drop any contract testing from their pipelines. They could even go a step further and claim they only support the N last major releases.

Nevertheless, competent maintainers and product managers know beforehand what is supposed to ship with a release, and even plan when and how to ship those changes. If anyone knows beforehand what changes go in a release, they can easily tell beforehand if that release should be major, minor, or patch. This is not rocket science.
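The point above ("if anyone knows beforehand what changes go in a release, they can easily tell beforehand if that release should be major, minor, or patch") really is mechanical. A minimal sketch, with function and parameter names of my own invention:

```python
def next_version(current: str, has_breaking: bool, has_features: bool) -> str:
    """Given the planned contents of a release, compute the semver bump.

    Breaking changes force a major bump, new backward-compatible
    features a minor bump, and anything else (bug fixes) a patch bump.
    """
    major, minor, patch = map(int, current.split("."))
    if has_breaking:
        return f"{major + 1}.0.0"
    if has_features:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("1.4.2", has_breaking=True, has_features=True))    # 2.0.0
print(next_version("1.4.2", has_breaking=False, has_features=True))   # 1.5.0
print(next_version("1.4.2", has_breaking=False, has_features=False))  # 1.4.3
```

That's the whole decision procedure; everything else is the maintainers' willingness to commit to it.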


> Semver is the tool, not the cause.

> If product maintainers do not care about stable versions, it makes no difference if they comply with semver or not.

Exactly. Here, the maintainer reserves the freedom to break anything at any moment, and correctly uses semver to signal that through version number.

(Not) using semver as a versioning scheme is orthogonal to whether an API is stable.

As a potential user of said package, seeing this (among other things), I choose to use another package (Django REST Framework) whose maintainers do care about not breaking the API.


FastAPI doesn't use Semver; it uses ZeroVer: https://0ver.org/


No, they don't. They literally link out to semver.org


Zerover is satire.


You're a few years behind: people actually went "haha, good joke... for real tho that's actually how we version, let's make it official" and then started using it in earnest.

There've been plenty of projects that officially use 0.x versioning for years now.


The joke

.

.

.

Your head


Yeah, really not though: you clearly haven't seen projects that actually use that as their official versioning policy ever since semver.org got published. So no, the joke didn't go over my head, because zerover stopped being a joke years ago. Unfortunately for everyone, the world took it seriously.

Do try to catch up.


https://www.scienceofpeople.com/sarcasm-why-it-hurts-us/

Sarcasm isn’t funny to everyone; GP was understandably confused. This isn’t Reddit.


Semver specifically allows you to use 0.x for this purpose: https://semver.org/#spec-item-4

> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

That said, I'd argue that this is a little silly. Maybe a better design is to use something I am calling "zero-calver": 0.YYYY.MM.DD. Then use semantic versioning once stability is reached.
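"Zero-calver" is the commenter's own coinage, not an existing scheme, but a sketch shows how such versions would sort. One catch worth noting: you have to compare numerically, since naive string comparison misorders month 12 before month 7.

```python
from datetime import date

def zero_calver(d: date) -> str:
    """Render the hypothetical 0.YYYY.MM.DD scheme proposed above."""
    return f"0.{d.year}.{d.month}.{d.day}"

def as_tuple(version: str) -> tuple[int, ...]:
    """Numeric tuple comparison gives correct chronological ordering."""
    return tuple(int(part) for part in version.split("."))

v1 = zero_calver(date(2023, 7, 7))   # "0.2023.7.7"
v2 = zero_calver(date(2023, 12, 1))  # "0.2023.12.1"
assert as_tuple(v1) < as_tuple(v2)   # later date sorts later, as intended
assert not (v1 < v2)                 # plain string comparison gets it backwards
```

Zero-padding the month and day ("0.2023.07.07") would sidestep the string-ordering problem at the cost of slightly odd-looking components.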


Just use semver from the moment it is usable by a third party.

Then maybe it goes up to 41.x.x until stability is reached. So what? It's not less professional than keeping 0.x.x forever.


Because it seems like devs believe that 1.0.0 means "the API is perfect, it won't break ever again".

Somehow they feel like having a version 15.2.3 looks unprofessional (because the API got broken 14 times), but 0.100.0 is perfectly fine.

I just don't get it.


pydantic has made breaking changes 42 times but is on version 2 :)

https://docs.pydantic.dev/latest/changelog/


Just had this conversation the other day. It's become so unusual to see high major versions that it's basically become taboo.


This is using semver in the exact way it’s intended but with predictably silly outcomes. For several years now I’ve been of the opinion that this particular aspect of semver is … very bad actually.

For personal projects I skip any zero major prefixes. In their place, I use alpha/beta suffixes and version Major/minor/patch from an initial 1.0 alpha. Hasn’t caused any problems for me.


And to clarify the approach, the equivalent version would be 1.0.0-alpha.100.0. Clear in its meaning, clear in versioning resolution, ambiguous only in its maturity but that’s what the project is trying to do.

Edit: or probably it’s 1.0.0-beta.100.0. That’s still much clearer


Not if they've dropped support for several versions of Python over the years, no. More like v5.10.12

This software is absolutely not in alpha anymore, it's used in production and the project even acknowledges this explicitly.


FWIW other frameworks (namely, flask) took a similar approach of not moving to 1.0 for a while.


Which sucks, to be honest. Just accept that your major can go up to 23.8.3; it does not make it less professional than 0.231.0. Actually it's better, because it says which releases broke backward compatibility and which ones did not.


Seriously, this 0ver nonsense makes it so much harder to deal with upgrading packages. Who cares if you bump the major version frequently?


Using semver doesn't magically make it okay to make breaking changes constantly. It's still a pain for anyone using your library.

Semver explicitly encourages you to use major version 0 during early development for this exact reason; it's up to the maintainers to decide when they can be more stable.


I prefer straight integer version numbers.

It's a tendency of developers to make things complex.


Actually, semver would make sense if people used it correctly.

Changing the major means that it breaks backward compatibility. Changing the minor means that it has new features. Changing the patch means that it had bugfixes.

That's useful for libraries. Of course it's a bit less useful for executables. I just increase the major for executables, but if I need to backport a feature or a fix (to somebody who still uses an older version), then I use the minor/patch.
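The "useful for libraries" part is concrete: a consumer can decide whether an upgrade is safe from the version numbers alone. A minimal caret-style compatibility check (illustrative only; real resolvers like pip or cargo do far more):

```python
def compatible(installed: str, required: str) -> bool:
    """Semver's promise: same major version and at least the required
    minor/patch means a backward-compatible upgrade."""
    inst = tuple(int(p) for p in installed.split("."))
    req = tuple(int(p) for p in required.split("."))
    return inst[0] == req[0] and inst >= req

assert compatible("1.7.0", "1.5.2")        # minor bump: new features only
assert not compatible("2.0.0", "1.5.2")    # major bump: may break callers
assert not compatible("1.4.9", "1.5.2")    # older than required

# The trap under major version zero, and the thread's whole point:
# this check happily accepts 0.99 -> 0.100, even though semver
# promises nothing at all for 0.y.z releases.
assert compatible("0.100.0", "0.99.0")
```

Which is exactly why a perpetual 0.x makes the version number useless to tooling and humans alike.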


https://litestar.dev is also reaching 2.0. It is a lot faster than FastAPI yet much better maintained. It has DTOs, events and channels, and a Repository/Service DDD-style framework built in. No promotional commits. I wrote up my reasons for moving away from FastAPI, and an intro to Litestar 2.0, here:

https://dev.to/v3ss0n/litestar-20-beta-speed-of-light-power-...


I think you raised some valid concerns there. I see 18 open issues in the fastapi repo; what's going on there? Are they just moving everything into a discussion? A bit concerning.


There are thousands of issues, about 30% of which are actual bugs, including some we found in production and reported. The maintainer wrote a script that converted them all to discussions and never looked back, so we never looked back either.


Does Litestar have CBVs (class-based views)? FastAPI refuses to implement them, so we built [1]. Or proper lifetime events [2]?

[1] https://github.com/KiraPC/fastapi-router-controller [2] https://github.com/tiangolo/fastapi/issues/617


Yes, Litestar has had class-based controllers since day one, and class-based views with HTMX/Jinja are also available in 2.0, which is now nearing release. Here is lifecycle management: https://docs.litestar.dev/2/usage/the-litestar-app.html#star...

  app = Litestar(on_startup=[get_db_connection], on_shutdown=[close_db_connection])
We have quite powerful channels and event systems that go along with websockets and realtime systems too.

Plus, if you use the Repository + SQLAlchemy plugin and DTOs, you can also work with CRUD events: before_update/after_update/before_insert/after_insert, etc.


Just looked into the docs [1] and indeed it has CBVs ... I will definitely take a closer look.

[1] https://docs.litestar.dev/latest/#feature-comparison-with-si...


How does it compare to Starlette? (lib FastAPI uses under the hood) I've used vanilla Starlette for recent projects and it's been great.


It was based on Starlette at first, but all the functionality has been rewritten, ending up with better code quality. https://github.com/orgs/litestar-org/discussions/612

Starlette can be considered a pure server framework, along the lines of CherryPy or Werkzeug in the WSGI/sync world.

Litestar is a lot more batteries-included, with built-in SQLAlchemy integration, many other ORMs as plugins, and built-in security and authentication middleware.

Join our Discord; we have a good community there too.


Hi, Litestar maintainer here.

Starlette and Litestar are very different. Starlette is closer to Flask, or actually Werkzeug: a micro framework / toolkit for building apps.

While we don’t aim to develop "The next Django", Litestar offers way more out of the box than Starlette and other micro frameworks.


Ok that makes sense! I'll check it out. I pretty much just use Starlette + Mangum + Marshmallow for basic api functionality.


Nice, thanks. I wonder how performance compares to django-ninja.


We haven't tested Django Ninja in terms of performance. But from our experience with Django 4.2 async, Django isn't truly asynchronous yet. Since the ecosystem is not asynchronous, I am not sure how an asynchronous API would help when the extensions aren't.


> In some cases, for pure data validation and processing, you can get performance improvements of 20x or more. This means 2,000% or more.

Amazing! Excited to try it out.

Slightly OT: But what are some use-cases where you'd still use Flask over FastAPI? I really like FastAPI's devEx and don't see myself going back to Flask anytime soon. Curious to hear what others think.


Flask has been around much longer than FastAPI and, as a result, is a much more mature framework. Some examples:

- There's a memory leak in FastAPI with a particular combination of packages [0]

- Before Pydantic v2, you would validate your data on input (when it's stored in the db) and then every single time on retrieval. There is no way to skip validation, for example, when you are generating a response on data that was already validated when it was persisted to the db. [1]

- FastAPI has documentation only in the form of tutorials. There is no API documentation, and if something is not clear, looking through the source code is the only option

- You need ORJSON for maximum serialisation performance (perhaps this has changed with Pydantic v2) [2]

- Using FastAPI with uvicorn doesn't respect log format settings [3]

I don't mean to imply that FastAPI is a bad framework. The Flask ecosystem has had over a decade to mature. FastAPI and the ecosystem will get there but it _needs_ time.

- [0] https://github.com/tiangolo/fastapi/discussions/9082

- [1] https://github.com/pydantic/pydantic/issues/1212

- [2] https://fastapi.tiangolo.com/advanced/custom-response/#use-o...

- [3] https://github.com/encode/uvicorn/issues/527


> - You need ORJSON for maximum serialisation performance (perhaps this has changed with Pydantic v2) [2]

The common orjson trick no longer works in v2 and will throw warnings, but it appears it's no longer necessary, since the JSON formatting leverages the native serializer, which happens in Rust-land.


Exactly my issue too. Why is there no API documentation for FastAPI? It is very difficult to know what is available and what is not beyond the tutorial-style docs.

(I love FastAPI and use it for all my projects, this one little thing troubles me sometimes)


Well, the performance increase is so huge because pydantic v1 is really, really slow. And for using Rust, I'd have expected more, tbh…

I've been benchmarking pydantic v2 against typedload (which I write) and despite the rust, it still manages to be slower than pure python in some benchmarks.

The ones on the website are still about comparing to v1 because v2 was not out yet at the time of the last release.

pydantic's author will refuse to benchmark any library that is faster (https://github.com/pydantic/pydantic/pull/3264 https://github.com/pydantic/pydantic/pull/1525 https://github.com/pydantic/pydantic/pull/1810) and keep boasting about amazing performances.

On pypy, v2 beta was really really really slow.


This is simply not true. So sad.

We removed benchmarks from the docs completely when the rule of "only show benchmarks with comparatively popular or more popular libraries" no longer made sense, and maintaining benchmarks with many hobby packages was obviously going to become burdensome.

Please show me a sensible benchmark where your library is faster than pydantic?


> This is simply not true. So sad.

Ah sorry, so, just coincidentally pydantic happened to be slower than any other library that had a PR to be added to the benchmark, but that was not the reason they were rejected.

Better now?

> Please show me a sensible benchmark where your library is faster than pydantic?

    $ python3 perftest/realistic\ union\ of\ objects\ as\ namedtuple.py --pydantic
    (1.2192879340145737, 1.2595951650291681)
    $ python3 perftest/realistic\ union\ of\ objects\ as\ namedtuple.py --typedload
    (1.0874736839905381, 1.114147917018272)
I'm not a math genius but I'm fairly sure that 1.08 is less than 1.21.

So much for your invite to be gentle and cooperative :D (https://github.com/ltworf/typedload/pull/422)


¯\_(ツ)_/¯ - I get different results, see the PR.


It seems you're running on Apple hardware. I really can't reproduce that, since I don't own one, and unless I get one as a gift I never will.

Anyway, no server code runs on Apple hardware, so winning benchmarks only on Apple isn't that important, I think.


Well that is a bad look. I am sure the highlighted performance metrics have had a lingering impact on library decisions.


Is it still a bus factor of one? I veer to the side of boring technology and FastAPI is still too in flux for me. I do not ever want to be the vanguard discovering novel problems with my framework.


Definitely this. Flask is old and well-tested, with a solid feature set and little need to change how it works.

Also you'd use Flask for basically anything that isn't an "API", but where you still want something lighter-weight than Django. I believe other traditional Python web frameworks like Pyramid fall into the same category.

The "Fast" in FastAPI refers to the speed of getting a working prototype running, specifically for an API that accepts and emits JSON and implements an OpenAPI schema. If that's not your use case, then you might not need or want FastAPI.


Litestar.dev has a team of dedicated developers and a similar API to FastAPI. You can watch the commit activity and the teamwork.


Coming from Django primarily, but I've written some Flask code. I would say the ecosystem. As of my last try, adding authentication (via cookies) to a FastAPI project was somewhat cumbersome.

Usually as the projects grow, and I start reinventing the wheel, I come to regret not going for a "full" framework.


Not to be excessively negative, but this really means very little without more context. Maybe it was very slow before, or it's a particularly uncommon scenario. I'm always skeptical when people write such praises of their own software without giving a comparison point.


> Maybe it was very slow before

That is at least partly the case. I maintain msgspec[1], another Python JSON validation library. Pydantic V1 was ~100x slower at encoding/decoding/validating JSON than msgspec, which was more a testament to Pydantic's performance issues than msgspec's speed. Pydantic V2 is definitely faster than V1, but it's still ~10x slower than msgspec, and up to 2x slower than other pure-python implementations like mashumaro.

Recent benchmark here: https://gist.github.com/jcrist/d62f450594164d284fbea957fd48b...

[1]: https://github.com/jcrist/msgspec
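For readers who want to sanity-check claims like these themselves, the usual shape of such a micro-benchmark is best-of-N timing over a fixed payload. A stdlib-only sketch (this is not the linked gist's harness; the payload and iteration counts are arbitrary):

```python
import json
import timeit

# A toy payload loosely resembling a validation/decoding workload.
payload = json.dumps(
    [{"id": i, "name": f"user{i}", "active": i % 2 == 0} for i in range(1000)]
)

def decode() -> list:
    return json.loads(payload)

# Take the minimum of several repeats to reduce scheduler noise,
# as most benchmark suites do.
best = min(timeit.repeat(decode, number=100, repeat=5))
print(f"stdlib json: {best:.4f}s per 100 decodes")
```

Swapping `decode` for the equivalent msgspec/pydantic/typedload call against the same payload is what makes the comparison apples-to-apples.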


Eeey hello :D

Eeh come on, I think it's a bit unfair to compare, because msgspec doesn't support regular python union types… which are the number 1 source of slowness… at least in my real world use case of the thing. I've got hundreds of classes with abundant nesting and unions.

In pydantic v2 they did the same thing I've been doing in typedload for a few versions already: check the field annotated with a Literal and directly pick the correct type, rather than doing trial and error. So now the speed for unions has become better.

Even so, for being binary vs pure python, I'd have expected much more.


Pydantic was a pure python library and was rewritten in Rust recently. To be fair, I have seen some critiques of this rewrite. Specifically saying that the validation model could have been much faster in Python and switching languages papers over the deficiencies. I'm not in a good place to judge if this is true or not.


I wrote so in other comments… I was surprised to see that for the benchmarks of my library (typedload), it now manages to win a few… but not all of them.


Would love to see a benchmark where typedload is faster than Pydantic v2. Could you share a link?


You realise that you made version 2 three days ago?

I re-do the benchmarks of typedload when I make a release. The benchmarks will be updated when the next release happens.

I will not do a new release because you need new benchmarks after 3 days. You are free to include benchmarks on your own website (but we both know you won't do that).

This is because of how my whole setup works, requiring a git tag and a finished CHANGELOG. Running the command to regenerate the website would cause documentation from the master branch to be published.

The benchmarks will be here, as usual. https://ltworf.github.io/typedload/performance.html

I run them just getting the latest available version. But since I can't time travel, I can't get versions from the future to appease you, sorry.

I just ran them locally (like you could do by yourself) https://news.ycombinator.com/item?id=36644818


Yes it was incredibly slow and inefficient.

I maintain typedload (a similar project that I started before pydantic's first release), and pydantic 2 somehow still manages to be slower than a pure Python library that got no funding to improve performance.


You can use gevent, and then there's no need to replicate every library under the sun for async IO.


I wish Django would take async more seriously. This comment gives a pretty good overview of the current situation (some points are more valid than others): https://github.com/encode/django-rest-framework/discussions/...

The Python ecosystem is strange. Where other dev communities embrace new ways of doing things faster than most people can keep up, the Python community needs to be pulled kicking and screaming into the light once every decade or so. Python 2 to 3: ~10 years.

async/await has been in Python since 2015, it feels like it's going to be another 5 years before we see people taking async seriously in the big packages. Same problem we had during the 2/3 transition. No library support, no developer support.


None of the mini-frameworks based on Starlette comes close to Django's full-stack facilities, but the upcoming Litestar 2.0 has a lot of features that Django people desire. We have a DTO + Repository + SQLAlchemy plugin system: just from the definition of a SQLAlchemy 2.0 model, it will give you a DDD-style Repository/Service with CRUD, filtering, pagination, and many common APIs by default. Those need extensions in Django.

For CRUD : https://github.com/litestar-org/litestar-pg-redis-docker/

For the full-stack experience see https://github.com/cofin/litestar-fullstack - it has users and auth, roles, teams, tagging, data migration, caching, background worker services, background scheduling, a manage.py-like CLI for creating users and admin users, and Docker containers.


Have a look at Starlette, it’s by the guy who made Django.

FastAPI is built on Starlette and adds more batteries included.

If you’re interested in async you’re far better off to go async native than with a framework that’s synchronous.

Asyncpg is the fastest Python Postgres driver there is, works well with sanic, Starlette or FastAPI.


Correction: Starlette is made by the guy who made the Django REST Framework, not Django. DRF is a separate project from Django.


Thanks.


The Starlette dev is the founder of Django REST Framework, which is my least favorite part of working with Django. I really wish Django shipped with its own REST framework.

I’ve been keeping an eye on FastAPI, Starlette, and a few other libraries but, compared to Django, they’re mostly powered by hype. I found a bunch of Pydantic bugs 2(3?)+ years ago that are only now getting addressed in v2.


Django does come with its own REST framework built in. Instead of rendering to templates, just return JSON.


There are a lot of other things to consider. Django only supports GET and POST out of the box and Django forms aren’t serializers. Those are just the obvious ones. Just returning JSON isn’t practical.


Over 14 years or so I’ve developed major applications with flask, bottle, Falcon, FastAPI, Django, Sanic and Starlette.

My preferred back end web server is now nodejs with typescript and plain old Postgres SQL queries, no ORM. Caddy web server with auth sub requests.


Traefik would be a much better proxy if you are using container-based dev.

Also, please give Litestar a try.


I’m not a fan of the complexity of containers or kubernetes. Everything I’ve built scales vertically on bare OS very nicely.

Regarding litestar, if I wanted to do Python web development I’d use Starlette, which I really liked, but I’ll probably stick with nodejs and typescript from here for web applications.


Great, so we just gotta "ship your bare OS" to prod.

That's exactly what a container is for.

A container is just a logical isolation tool that works at the distribution/deployment level.


> Great, so we just gotta "ship your bare OS" to prod.

Back in 2008, we shipped by dd'ing the whole disk :D


What’s wrong with shipping source code?

Containers add huge complexity, for what?


I think complexity can have multiple interpretations, especially in a discipline as fragmented as software engineering.

However, at least for me, containers solve three main sources of problems in shipping code in the current software environment: (1) some consistency in the runtime environment between production, development, and homologation (staging), (2) a portable way to deliver software (just create the image), and (3) packaging source code together with its runtime.

I started my career in the late 2000s, and at least in my experience the code itself was the least of the issues: we developed in one environment to be delivered to a runtime in another, transferred the files via FTP or replaced files with _.old, and had to make sure it would work in all places.


K8s is complicated, but Docker (docker compose) solves a lot of problems, especially when you are working with a team. Just 5-10 lines of YAML in your docker-compose.yaml file gives you a local infrastructure of db, redis, node, python, nginx, and many other things. Then you can share it with anyone on your team, and it works on their machine too. Don't be afraid of time-saving tech.
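A sketch of the kind of compose file described above; the service names and image tags are illustrative, not taken from any particular project:

```yaml
# docker-compose.yaml: local db + cache + app stack in a few lines.
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
    ports: ["5432:5432"]
  cache:
    image: redis:7
  app:
    build: .            # builds the app from the repo's Dockerfile
    depends_on: [db, cache]
    ports: ["8000:8000"]
```

`docker compose up` brings the whole stack up on any teammate's machine.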

We had used nodejs/ts, but nothing meets the DX of Python, or Python + MyPy, yet, and the ecosystem is unmatched.


I'm really curious about why you would want to use FastAPI over Django REST Framework. Are there significant advantages to FastAPI?


They're totally different in terms of the scope of the project they're meant to be used for.

FastAPI is much closer to Flask in that it's trivial to throw up a single file with a couple of routes, and you can use something like SQLite yourself for persistence to disk, or install a couple of libraries for authentication or such (not sure if that's built in by now).

Django, on the other hand, requires a multi-step process to even start a project and creates a dozen of files, most of which are boilerplate, before you can even see a "hello world" route.

But in exchange, you get not just a program which deals with routing and templates and status codes, but much more - a world-class ORM which integrates pretty much transparently with multiple data stores like Postgres or SQLite, an amazing dashboard out of the box which is really handy for sharing with non-techies, a very mature ecosystem, and perhaps most importantly, the "one right way to do things", which makes it a lot more effective for collaboration between a bunch of engineers. It's of course not infallible, has a learning curve, and comes with a good few footguns that get the uninitiated, but the upside is a real upside. Migrations alone might justify it - for all the grief of merge conflicts between migrations right before a code freeze, I can't imagine how much worse it would be to not have them.

So I wouldn't say there's significant advantages to using either - they're different tools for different use cases. If you want to get something up quick that isn't too complex, FastAPI is great for that. If you know you're eventually going to rewrite Django but worse, you may as well use the real thing.


I've just made my first project in FastAPI, and it was trivially simple compared to my previous adventures in DRF. Plus async from the get-go, which was also pleasant. I had a task as part of this that takes 10 seconds to run, and there's a built-in way to let it run in the background.

And the documentation is excellent.


> I've just made a first project in fastAPI, and it was trivially simple…

My experience has been, as Seth Godin says, “the long-cut is the most direct route to get to where you seek to go”

Every time I started with Django, I hated the feeling of sitting in boilerplate hell early on.

Every time I started with FastAPI/Flask, I get something working quickly, then hit a wall of recreating everything that comes with Django.

The only solution I’ve found is: embrace boilerplate [1], automate the boilerplate.

[1] Django, or whatever batteries included framework you like (Rails, Laravel, Phoenix, etc)


Yeah, at a past job I had to build out a way to run LaunchDarkly in FastAPI and it was very unpleasant. LaunchDarkly only ships a sync Python client, so we ended up having to run it in another process to avoid blocking all requests.


I found DRF to be great at generating fairly simple APIs very quickly. But complicated stuff gets... more complicated, due to inheritance and magic. I love me some DRF, govscent uses it: https://govscent.org/api/

All from a few lines of code.

But I've mostly switched to django-ninja which is more type safe and faster.


The rustification of Python libraries and tooling continues, and it has been brilliant. In the past 6 months I have personally switched projects to ruff[0], polars, and now - as of this morning[1] - pydantic 2 and FastAPI 0.100.

[0] has replaced pylint, flake8, pyupgrade, isort, mccabe and pydocstyle

[1] bump-pydantic worked well, after porting settings to pydantic_settings.


At the cost of bigger downloads, not working with other python implementations, and in the case of pydantic, the performance gains aren't that impressive compared to pure python libraries.


You might want to check Pylyzer then (https://github.com/mtshiba/pylyzer).

I'm not involved at all. It is still very, very early in development. But as it is in the same vein, I thought I'd mention it here.


Have you actually got it working as an LSP server? I tried about a month ago without success.

Edit: Scratch that, had a go now with no trouble, passing `pylyzer --server` from my editor (helix).


This project has 433 pull requests.

It used to have literally thousands of open issues. Where did they go? Fixed?

It also had a project owner who refused to form a team of people responsible for the project. How did that pan out?


Not long ago the lead author converted all issues to discussions. I think most of the issues were actually questions.


Did not take long at all to see Pydantic version 2 support. Nice!


I still can't stand Pydantic's API and its approach to non-documentation. I respect the tremendous amount of hard work that goes into it, but fundamentally I don't like the developer experience and I don't think I'll ever feel otherwise. I use it because my coworkers like it and I've learned its advanced features because I had to in order to get things done, not because I like it.

I would love to see a FastAPI alternative still using Starlette internally, but using Attrs + Marshmallow + Cattrs + Apispec instead of Pydantic. It would be a little less "fast" to write a working prototype, but I'd feel much more comfortable working with those APIs, as well as much more comfortable that my dependencies are well-supported and stable.

The problem of course is not that gluing those things together is hard. The problem is that now someone has put untold hundreds of person-hours into FastAPI, and replicating that level of care, polish, bugfixes, feature requests, etc. is difficult without putting in those hundreds of person-hours yourself.


Could you simplify your point? I was an ardent marshmallow user and when I finally switched to pydantic, it felt like I finally sat down in my life after standing forever. The documentation sounds good enough to me, but importantly the interface pydantic provides to define your json schema is the most elegant interface I’ve seen in any language and miles better than the mess marshmallow provided.

For many of us, especially on the SaaS side, the speed of these operations is a distant third priority compared to ease of writing and understanding the code, and ensuring reliable, less buggy code. The actual compute happens on a cluster with spark or snowflake anyway.


There is no reference doc. The docs cover a lot of material in a small amount of space, burying important pieces of information and mixing up a large number of topics under unintuitive headlines. Reading the source code is occasionally necessary just to figure out how it all works.

The API is a little weird, particularly around defining validators. The parameter name-matching is an "interesting" design choice. Accessing "values" as a dict[str,Any] is messy if you care about static typing, although I can understand why they did it.

Furthermore, the behavior of validators and the exact sequence in which they run is not defined by the docs. It's not that hard to figure out, but it also might change at any time because there's no user contract. Attrs is significantly nicer in just about all respects here, especially their attention to detail in their extensive user guide and reference docs.

Speaking of user contract, there's no clear separation between private and public. Without a reference doc it all looks like fair game, but without a reference doc it also might all change at any moment. Either you stick to the examples, or you're off doing a guess-and-check dance and hoping something doesn't break.

Even with the Mypy plugin, I often have to write `if TYPE_CHECKING` all over any nontrivial Pydantic class consuming data from external sources. Variable annotations in Pydantic are fundamentally not PEP 484 type hints. That's fine, but it's confusing that they're almost the same, and, as above, it's almost entirely up to you to figure out how it all works, either by trial and error or by digging around in the issue tracker and StackOverflow.

Ease of writing and reliability is precisely my big area of annoyance and concern. Speed of (de)serialization is comparatively unimportant (although I don't like the huge amount of overhead involved and I avoid using it in hot code paths).

I also don't like using Pydantic-defined classes very much, because the actual init method signature is just `*args, **kwargs`, which doesn't work well with any tooling. It feels like being back in the Tornado & PyMongo dark ages where everything is dynamic or dynamically-generated and classes are just glorified hash tables.

I agree that the JSONSchema integration is outstanding. BaseSettings is also a tremendous productivity improvement, I love that I can define a class and immediately get a proper app-wide config reading from both env vars and a dotenv file. I also like the default error messages that tell you exactly which field failed validation. I also like the validator system (once I figured out how it worked), respecting the order in which I define the validators as well as supporting validators that run before or after the default set of validators (pre=True and pre=False respectively). I was probably being a little too negative before, but my annoyance level with the developer-facing API and documentation remains high, and I will gladly jump to an Attrs-based alternative as soon as one exists.
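(For anyone who hasn't used the validator system described above, here's a minimal sketch using the Pydantic v1-style `@validator` API; the `User` model and field names are invented for illustration:)

```python
from pydantic import BaseModel, validator

class User(BaseModel):
    name: str

    # pre=True: runs before Pydantic's own type coercion/validation
    @validator("name", pre=True)
    def strip_whitespace(cls, v):
        return v.strip() if isinstance(v, str) else v

    # pre=False (the default): runs after the built-in validation,
    # and multiple post-validators run in definition order
    @validator("name")
    def must_not_be_empty(cls, v):
        if not v:
            raise ValueError("name must not be empty")
        return v

print(User(name="  alice  ").name)  # -> alice
```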


Please please take a look at V2, both the code and the documentation (although I admit, the documentation for V2 isn't finished).

I (the developer of Pydantic) had many of the same frustrations with Pydantic V2 which is why I've spent so long rewriting it to try and fix these concerns.

In particular:

* we now have API documentation [1]

* we have first class support for validating `TypedDict`, which gives you a typing-valid dict representation of your data straight out of validation

* we now have strict mode

* we're working hard to define an exact spec for what validates to what [2]

* we have a strict separation between public/private: everything private is in a `pydantic._internal` module, and we have unit tests that everything which can be publicly imported is explicitly public

* we now use `Annotated[]` for defining custom validations/constraints, together with annotated-types [3]

* the protocol for customising validation and serialization has been significantly improved [4]

I'd really love to hear your feedback on V2 and what more we can do to improve it - your feedback seems unusually reasonable for HN ;-) - please email samuel@pydantic.dev or create an issue/discussion if you have any thoughts.

1: https://docs.pydantic.dev/latest/api/main/

2: https://docs.pydantic.dev/latest/usage/conversion_table/

3: https://github.com/annotated-types/annotated-types

4: https://docs.pydantic.dev/latest/usage/types/custom/
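(To illustrate the `Annotated[]` + annotated-types style from the list above, a rough sketch; the `Item` model and its fields are made up for the example:)

```python
from typing import Annotated

from annotated_types import Gt, MaxLen
from pydantic import BaseModel, ValidationError

class Item(BaseModel):
    # constraints live in the type annotation itself,
    # rather than in a Field(...) call or a custom validator
    quantity: Annotated[int, Gt(0)]
    sku: Annotated[str, MaxLen(32)]

item = Item(quantity=3, sku="abc-123")
print(item.quantity)  # -> 3

try:
    Item(quantity=0, sku="abc-123")
except ValidationError as e:
    print(e.error_count())  # -> 1
```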


I too have made similar observations regarding pydantic and FastAPI.

I was evaluating various Python async http frameworks and landed on a similar stack:

- attrs/cattrs for models

- starlette+uvicorn for HTTP/websocket

- validation I’m still on the fence about. I’ll see how far I get with the built-in validators offered by attrs. I use voluptuous at work and generally like the DX, but it’s in maintenance mode.

This is purely personal preference; I’m sure devs using fastapi+pydantic are more productive in the long run. It almost feels like I’m hand-rolling my own fastapi implementation, but at the same time I don’t want to be too locked in to frameworks like that.

I’ve been burnt by magic frameworks that do too much behind the scenes, and there’s something nice about fully understanding what’s going on when you hand-stitch libraries yourself.
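(For what it's worth, the core of that hand-stitched approach fits in a few lines of stdlib: dataclasses for the models, plus a small converter that walks the type hints, which is roughly the niche cattrs fills. All names here are made up, and it only handles flat, non-generic hints:)

```python
from dataclasses import dataclass, fields, is_dataclass
from typing import get_type_hints

@dataclass
class User:
    name: str
    age: int

def structure(data: dict, cls):
    """Convert a plain dict into a dataclass instance, with cheap type checks."""
    hints = get_type_hints(cls)
    kwargs = {}
    for f in fields(cls):
        value = data[f.name]
        expected = hints[f.name]
        if is_dataclass(expected):  # recurse into nested models
            value = structure(value, expected)
        elif not isinstance(value, expected):
            raise TypeError(f"{f.name}: expected {expected.__name__}, "
                            f"got {type(value).__name__}")
        kwargs[f.name] = value
    return cls(**kwargs)

print(structure({"name": "alice", "age": 30}, User))
# -> User(name='alice', age=30)
```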


If you like cattrs, you _might_ be interested in trying out my msgspec library [1].

It works out-of-the-box with attrs objects (as well as its own faster `Struct` types), while being ~10-15x faster than cattrs for encoding/decoding/validating JSON. The hope is that it's easy to integrate msgspec with other tools (like attrs!) rather than forcing the user to rewrite code to fit a new validation/serialization framework. It may not fit every use case, but if msgspec works for you it should generally be an order of magnitude faster than other Python options.

[1]: https://github.com/jcrist/msgspec

</blatant-evangelism>


This looks like exactly what I've been looking for. I just want strong typing, json <-> struct and validation. Seems like it ticks all the boxes + speed benefits which is always nice. I especially find it useful that I can use messagepack for internal service chatter but still support json for external stuff and dump astuple to sqlite.


Depending on how far in you are, starlite/litestar has good documentation and offers another "batteries included" framework. Performance wise it's about the same and the stack is about the same. Fastapi suffers from the "one solo dev in Nebraska" paradigm (check out open prs and old tickets). For me the main draw of litestar is the batteries + better docs + more active development with multiple developers vs most other python web frameworks.


+1 for litestar[1]. The higher bus-factor is nice, and I like that they're working to embrace a wider set of technologies than just pydantic. The framework currently lets you model objects using msgspec[2] (they actually use msgspec for all serialization), pydantic, or attrs[3], and the upcoming release adds some new mechanisms for handling additional types. I really appreciate the flexibility in modeling APIs; not everything fits well into a pydantic shaped box.

[1]: https://litestar.dev/

[2]: https://github.com/jcrist/msgspec

[3]: https://www.attrs.org/en/stable/


I haven't heard of Starlite or Litestar before. Is one a fork of the other? Their documentation intro text is identical:

> {Litestar|Starlite} is a powerful, flexible, highly performant, and opinionated ASGI framework, offering first class typing support and a full Pydantic integration.

> The {Litestar|Starlite} framework supports Plugins, ships with dependency injection, security primitives, OpenAPI schema generation, MessagePack, middlewares, and much more.


starlite was the original name; it was recently renamed to litestar due to comments about how easily confused "starlette" and "starlite" are.


data point in favor of the renaming: while reading GP, I assumed that "starlite" was a typo for "starlette."


My only frustration with fastapi is the lack of API documentation.

Usually docs just specify the API; this project goes to the other end of the spectrum and has examples for everything.

Whilst that is nice, the APIs themselves are undocumented, and it is a bit harder to grok the project without a list of the available functions and methods.


FastAPI is a joy to use. Tried using `bump-pydantic` and it worked flawlessly. Thankful for the work by this team.



