> @overload is a decorator from Python’s typing module that lets you define multiple signatures for the same function.
I'll be honest, I've never understood this language feature (it exists in several languages). Can someone honestly help me understand? When is a function with many potential signatures more clear than just having separate function names?
It's an implementation of "ad-hoc polymorphism", where, for example, it may make sense to "add" (+) together numbers of various types: integers, floating points, rational numbers, etc.
Thus the (+) operator for addition is "overloaded", or "polymorphic", in the types of numbers that can be added together.
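A minimal sketch of what that looks like with Python's typing.overload (the `add` function and `Number` alias are just illustrative names I'm making up, not anything from the article):

```python
from fractions import Fraction
from typing import Union, overload

Number = Union[int, float, Fraction]

# One name, several static signatures: the type checker picks the matching
# overload, so add(1, 2) is known to be an int, add(1.5, 2.5) a float, etc.
@overload
def add(a: int, b: int) -> int: ...
@overload
def add(a: float, b: float) -> float: ...
@overload
def add(a: Fraction, b: Fraction) -> Fraction: ...
def add(a: Number, b: Number) -> Number:  # single runtime implementation
    return a + b

print(add(1, 2), add(1.5, 2.5), add(Fraction(1, 3), Fraction(1, 6)))  # 3 4.0 1/2
```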
The argument for having a polymorphic signature rather than just multiple separate "monomorphic" functions is similar to that for "generics," otherwise known as "parametric polymorphism": why not just have a function `forEachInt` for iterating over lists of ints, a separate function `forEachChar` for iterating over lists of characters, and so on?
The answer is the same: higher levels of abstraction and generality, less boilerplate, and less coupling to any particular choice of data structure or implementation.
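For concreteness, here's roughly what that looks like in Python with a type variable: one generic for_each instead of a family of per-type functions (the names are mine, just to mirror the forEachInt / forEachChar example):

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

# One generic for_each instead of forEachInt, forEachChar, ...:
# the element type is a parameter, so the same code works for every T.
def for_each(items: Iterable[T], action: Callable[[T], None]) -> None:
    for item in items:
        action(item)

for_each([1, 2, 3], lambda n: print(n * n))        # ints
for_each(["a", "b"], lambda s: print(s.upper()))   # strings
```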
You could of course go the route of golang, which for years just had you write "monomorphized" versions of everything; several years later generics were added to the language.
Alternatively, you throw static typing out entirely and never have to worry about type signatures or polymorphism, at the cost of static safety.
I would recommend reading the article's example once more. Going from that example, without the overload the function would return a union type, which means any time you use the function you have to put a type check on the result to know whether the output is a list or not. With overload, as soon as an argument to the function is a certain type, the output type is determined, so you won't need to type check.
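The article's function isn't quoted in this thread, so the names below are invented, but a minimal sketch of that pattern looks something like this: the overloads pin the return type per argument, so callers don't need an isinstance check.

```python
from typing import Literal, Union, overload

@overload
def tokenize(text: str, as_list: Literal[True]) -> list[str]: ...
@overload
def tokenize(text: str, as_list: Literal[False]) -> str: ...
def tokenize(text: str, as_list: bool) -> Union[str, list[str]]:
    # Single implementation; without the overloads its return type is the
    # union, and every caller would have to narrow it themselves.
    return text.split() if as_list else text

words = tokenize("a b c", True)    # checker infers list[str]
line = tokenize("a b c", False)    # checker infers str
print(words, line)
```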
I think that GP's point is that you could accomplish the same thing by simply having separate functions with different names, signatures, and return types.
> subtle AI-written bugs slipped through unnoticed, and we (humans) increasingly found ourselves rubber-stamping PRs without deeply understanding the changes.
If your CI/CD process were able to fully verify a fix, it would have stopped the bug from making it to production the first time around, and the Jira ticket that was handed to multiple LLMs never would have existed.
Meh. I've been using 2.5 with Cline extensively and while it is better it's still an incremental improvement, not something revolutionary. The thing has a 1 million token context window but I can only get a few outputs before I have to tell it AGAIN to stop writing comments.
Are they getting better? Definitely. Are we getting close to them performing unsupervised tasks? I don't think so.
I very much agree. I've been using Gemini 2.5 Pro for coding and I've always given it one simple instruction: never write comments. It will stop writing them for a time, but that time is nowhere near the 1M-token context window.
Now maybe this is more a lack of instruction following than of context length, but the fact that it works at first and then starts going downhill quickly makes me wary of how much it will pay attention to other details further back in the context.
Remote work did a number on middle management. That's when many of them realized that if they aren't the strategic brain at the top, and they aren't individual contributors, and they can't supervise butts in chairs, then they aren't actually providing that much value.
So adapt. Learn to curate your team and their work. Lead by helping people organize, getting obstacles out of their way, and shielding them from alarmist BS from higher management, and stop worrying about butts in seats. Focus on agreements, goals, commitments, accountability, growth, and coaching.
Feedback appreciated - https://proxymock.io/