I agree, it's unreasonable to expect devs to know the whole standard library. The VSCode extension Pylance does give a warning when this happens. I thought linters might also check this; the one I use doesn't, but maybe the issue[0] I just created will lead to it being implemented.
But the problem remains, because these warnings - whether they come from linters or Python itself - can only flag conflicts with existing stdlib modules. I'm not aware of any way to guard against conflicts with stdlib modules that get added in the future.
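For anyone who hasn't been bitten by this, a minimal repro of the shadowing problem (the file names are just for illustration):

    # queue.py -- an empty file in your project that shadows the stdlib "queue" module

    # main.py, in the same directory, run with `python main.py`:
    import queue  # resolves to the local queue.py, not the stdlib module

    q = queue.Queue()  # AttributeError: module 'queue' has no attribute 'Queue'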
I've used FastAPI, but haven't done much with Starlette directly. If you're building a full-stack app, I imagine the integration between FastAPI and Pydantic makes it easier to work with the data you want to render in the HTML you generate with htpy?
htpy is just server-side rendering of HTML. Your routes return strings instead of structured data, so from the perspective of responses you're not going to be using Pydantic at all. That doesn't stop you from using it to validate objects you're passing around in your server, but I personally wouldn't, because Pydantic can have a pretty hefty memory footprint. I've seen over-reliance on Pydantic lead to plenty of OOMKilled errors.
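To make it concrete, roughly what an htpy + Starlette route looks like (untested sketch; assumes htpy's element-subscript syntax and plain Starlette routing):

    from starlette.applications import Starlette
    from starlette.responses import HTMLResponse
    from starlette.routing import Route
    from htpy import body, h1, html, p

    async def homepage(request):
        # htpy elements render to plain strings, so the response is just HTML text,
        # no Pydantic response model anywhere
        page = html[body[h1["Hello"], p["Rendered server-side with htpy"]]]
        return HTMLResponse(str(page))

    app = Starlette(routes=[Route("/", homepage)])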
It's a bit different for requests though. FastAPI will let you define your request schema (application/json or application/x-www-form-urlencoded) and validate it using Pydantic, but Starlette doesn't do that out of the box. It's trivial to implement yourself though, and if it were me I'd probably do that rather than deal with FastAPI's inflexibility.
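Something along these lines (rough sketch; SignupForm and its fields are made up for illustration):

    import json
    from pydantic import BaseModel, ValidationError
    from starlette.applications import Starlette
    from starlette.responses import JSONResponse
    from starlette.routing import Route

    class SignupForm(BaseModel):
        email: str
        age: int

    async def signup(request):
        form = await request.form()  # parses application/x-www-form-urlencoded bodies
        try:
            data = SignupForm(**dict(form))
        except ValidationError as exc:
            # exc.json() gives a JSON string of the validation errors
            return JSONResponse({"errors": json.loads(exc.json())}, status_code=422)
        return JSONResponse({"email": data.email, "age": data.age})

    app = Starlette(routes=[Route("/signup", signup, methods=["POST"])])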
Nice to see! Over the past few months I've replaced pandas with Ibis in all new projects, and I am a huge fan!
- Syntax in general feels more fluid than pandas
- Chaining operations with deferred expressions makes code snippets very portable (quick sketch below the list)
- The DuckDB backend is super fast
- Community is very active, friendly and responsive
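To give a flavour of the chaining (minimal sketch; the file and column names are made up):

    import ibis

    con = ibis.duckdb.connect()  # in-memory DuckDB
    t = con.read_csv("orders.csv")

    # the expression is deferred: nothing executes until .to_pandas()
    summary = (
        t.filter(t.amount > 0)
        .group_by("customer_id")
        .aggregate(total=t.amount.sum())
        .order_by(ibis.desc("total"))
    )
    print(summary.to_pandas())

Point the same expression at a different backend (Postgres, BigQuery, ...) and it runs unchanged.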
I'm trying to promote it to all my peers, but it's not a very well-known project in my circles. (Unlike Polars, which seems to be the subject of 10% of the talks at every Python conference.)
As a freelancer, I use Moneybird[0] for this purpose. There's a free tier of up to 4 invoices, 10 incoming invoices, and 10 bank transactions per month. Most months I fit within the free tier; some months I pay for a single month of the subscription.
Maybe a model like this would work for you as well? The users you are replying to might switch and sometimes pay if they have a busy month.
Could this be why people have recently been saying they see more weird results in ChatGPT? Maybe OpenAI is trying out different quantization methods for the GPT-4 model(s) to reduce ChatGPT's resource usage.
I'd be more inclined to believe that they're dropping down to gpt-3.5-turbo based on some heuristic, and that's why it sometimes gives you "dumber" responses. If you can serve 5 out of 10 requests with 3.5 by swapping out only the "easy" messages, you've just cut your costs by nearly half (3.5 is something like 5% of the cost of 4).
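Back-of-the-envelope: 0.5 × 1.0 + 0.5 × 0.05 = 0.525 of the original spend, i.e. roughly 47% saved.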
Serving me ChatGPT 3.5 when I'm explicitly requesting ChatGPT 4 sounds like a very bad move? They're not marketing it as "ChatGPT Basic" and "ChatGPT Pro".
Now I'm wondering how much you'd look like tiangolo if you wore a moustache.