Greetings! A 2.5-weekend project to teach myself newer Python features (>= 3.10). Conditions are written as lambda expressions that annotate parameters and return types, and coexist with type annotations. Symbols for sharing values between conditions are also supported, to a limited extent.
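To illustrate the general idea (this is a minimal sketch with hypothetical names, not this project's actual API): a decorator can pick callables out of `__annotations__` and evaluate them as pre- and postconditions.

```python
import functools
import inspect

def contract(func):
    """Evaluate callable annotations as preconditions and the 'return'
    annotation (if callable) as a postcondition. Sketch only."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        # Preconditions: any non-class callable annotated on a parameter
        for name, value in bound.arguments.items():
            cond = func.__annotations__.get(name)
            if callable(cond) and not isinstance(cond, type):
                assert cond(value), f"precondition failed for {name!r}"
        result = func(*args, **kwargs)
        # Postcondition: a non-class callable annotated as the return type
        post = func.__annotations__.get("return")
        if callable(post) and not isinstance(post, type):
            assert post(result), "postcondition failed"
        return result

    return wrapper

@contract
def isqrt(x: lambda x: x >= 0) -> (lambda r: r >= 0):
    return int(x ** 0.5)
```

Calling `isqrt(9)` returns 3, while `isqrt(-1)` raises an `AssertionError` before the body runs.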
I love DbC and use it in Racket all the time. It saves me so much time & frustration. So yay, and congrats on making this.
How does using this affect performance?
[edit: strike that, I see that in another comment you said you don't know]
I remember someone wrote one of these for Ruby, and the performance impact of their implementation was just brutal: something like 2x slower (it was a long time ago). I don't know whether it was a bad implementation or just something about Ruby.
Thanks! I will have to run some tests very soon, I guess. I usually handle functions that do heavy number crunching, where the overhead matters less. I expect some impact but also see room for optimization through caching.
To me, it does not seem smart to use this. It makes the code so complicated that it becomes unreadable. Normally the code should speak for itself; that is a key strength of Python.
Also, you then need the annotations plus tests to validate those annotations!
The example of dimension checking is handy, especially in fields where the same name for an idea has multiple incompatible instantiations, or where there's an arbitrary choice that some subfields make one way and others make differently (left vs. right multiplication as a convention, left- vs. right-handed coordinates, tensor ordering, ...). Sometimes better names help, but often they're just a lot of visual noise for the tiny bit of additional information they convey. Usually I use strings as type annotations for that purpose (compactly describing common constraints like the shape's shape, which dimensions are in common, ...) to communicate to other developers, but I could see a use case for some actual tooling, especially to the extent that somebody could build on top of the library and tailor it to some problem domain.
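The string-annotation convention described above might look like the following sketch (the function and annotation strings are hypothetical): the annotations document which dimensions must agree, and a runtime assert backs up the shared dimension.

```python
# Annotations here are plain strings read by humans (and, potentially, tooling):
# they name the dimensions and make the shared inner dimension k explicit.
def matmul_rows(a: "array (n, k)", b: "array (k, m)") -> "array (n, m)":
    """Matrix product on nested lists; the inner dimension k must agree."""
    assert len(a[0]) == len(b), "inner dimensions must match (k)"
    return [
        [sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
        for row in a
    ]
```

A checker built on top of such a library could parse these strings and verify the constraints at call time instead of relying on the manual assert.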
Seems you are referring to a domain-specific language in the annotations. This is exactly what https://github.com/AndreaCensi/contracts does (it's an amazing project, but it has a very large code base). However, I wanted to achieve something similar in pure Python and as compact as possible, so there are compromises (like no real symbolic calculus).
Fair. I personally prefer to see the expectations on the arguments (especially in a scientific / data-science context) over looking into asserts in the function body, which distract from the algorithm in there. Originally, I planned to automatically include the lambdas in the docstring/Sphinx output, but unfortunately Python does not keep the source code of a lambda (only the whole line, and parsing it would be overkill).
I did not understand your second remark; could you clarify, please?
Let's say I did not put a focus on speed, and there will be a performance overhead on top of the actual checks. In a production system, you should probably consider deactivating it in performance-critical parts (evaluate=False).
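One way such a switch can work (a sketch; the `evaluate` flag name comes from the comment above, the rest is hypothetical): when checks are off, the decorator returns the original function, so there is zero call-time overhead.

```python
import functools

def checked(condition, evaluate=True):
    """Decorator factory: wrap func with a precondition check,
    unless evaluate=False, in which case func is returned untouched."""
    def decorate(func):
        if not evaluate:
            return func  # checks disabled: no wrapper, no overhead

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            assert condition(*args, **kwargs), "contract violated"
            return func(*args, **kwargs)

        return wrapper
    return decorate

# With evaluate=False, the condition is never evaluated:
@checked(lambda x: x > 0, evaluate=False)
def reciprocal(x):
    return 1.0 / x
```

Here `reciprocal(-1)` returns -1.0 without any contract check; flipping `evaluate` back to True would make the same call raise an `AssertionError`.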