
I've seen a bunch of code like this, usually written by juniors once they get a bit of experience and discover the concept of types.

A surefire way to make your Python code terrible. The whole point of Python is the duck typing. If you're not going to use that, you might as well use a real programming language.




Typically, the choice of programming language is predetermined in brownfield projects, leaving little room for choosing the "ideal" language; migrating is only worth it when absolutely necessary (Go can be a suitable option for migrations).

Code like the OP's should be valued and encouraged because it prevents future bugs. Incorporating tools like mypy in a pre-commit hook and Pyright in the editor can make a significant difference, and it eliminates the need to write unit tests whose only job is to check for None.
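
To make that concrete, here's a minimal sketch (function names invented) of the kind of bug mypy flags before a single unit test runs:

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # .get() returns None when the user is missing
        return {1: "alice"}.get(user_id)

    def greet(user_id: int) -> str:
        name = find_user(user_id)
        # mypy: Unsupported operand types for + ("str" and "None")
        return "Hello, " + name

Once the checker forces you to handle the None branch, the "does it blow up on None?" unit test has nothing left to test.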

As a mid-senior developer, I always appreciate the use of types in Python. If you are confident your inputs are predictable, you can rely on duck typing, primarily for scripting purposes.


> Typically, the choice of programming language is predetermined in brownfield projects, leaving little room for choosing the "ideal" language; migrating is only worth it when absolutely necessary (Go can be a suitable option for migrations).

Yep - or it's the standard for some tasks, and the important thing is being able to use the ecosystem of external packages that everyone else uses. In this case, the author is working on AI and computer vision, for which everyone else uses Python. Python could be a much worse language than it is and it would still be the right choice in this scenario.


In computer vision, you use Python for prototyping and research. It of course gets rewritten if it's to be actually used; think, for example, of the guidance system in a car, plane, or weapon.

It's pretty much the same for AI, except that AI is increasingly used by people who do not have the skills to do the productionization.


I agree that in most cases it should be rewritten for production, but in my experience that quite often doesn't happen, and the AI code stays in Python even in production.


Using static typing in Python will, generally speaking, significantly increase the number of bugs in the codebase.


Could you elaborate on that statement? How? My experience says otherwise. I'm interested to hear your viewpoint and experience.


Of course. Statically typed code usually runs around 3 times the number of lines of its dynamically typed equivalent.

This is due to lots of boilerplate, complex types, heavy use of abstract base classes, and code added to the production codebase purely for testing because language features such as mocking get disregarded, etc.

The development time and the number of bugs in a program are directly proportional to the total number of lines written.

If you write 3x the number of lines in a verbose style, then you get 3x the number of bugs. Static type checking only catches around 5% of bugs; it's not a significant debugging tool.

If, instead of writing static types and all the boilerplate involved in making them work, you invested that time in unit tests and proper QA, you would always come out ahead.


I'm pretty sure any serious company uses C++ for the code that matters, and Python for orchestration (assuming no high scalability requirements), GUI, offline data analysis and calibration, etc.


Your idea of "serious company" is outdated by at least 20 years.


Can you enumerate the languages that you deem "real" and those you don't? I have a very hard time understanding what meaning you give to that word.


Languages that you would build serious production software in, rather than some script.

C, C++ are the main ones.


So you deny that the industry has used Python to build serious production software?

Understood, no need to argue then; I don't think any amount of proof would change your mind...


I also do not consider websites or smartphone apps to be serious software, despite the huge amount of work that happens to be done there.

You know the rule: 99% of everything is crud.


I find it hard to understand how you can believe something like Dropbox isn't "serious production" just because it involves websites and apps.


The program that does the synchronization of files in the background is.


Last time I checked, that part was written in Python...



And the fact that they're spending so much time on it and were afraid to rewrite it tells you all you need to know.

Rewriting it in C++, leveraging the platform-specific APIs for disk I/O (sync_file_range on Linux, not even wrapped by a Python module) would yield not only much better performance, but also much higher reliability.

As it is, this daemon is a recipe for page thrashing. Not that I would expect a Python dev to actually know how a kernel works.


Most kernel devs are Python devs.

You ship C code that is extensively tested by Python code.

Python is simple to use, has good C interop, and the performance of your test frameworks hardly matters.
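
For example, even sync_file_range (mentioned upthread as not being wrapped by any Python module) is reachable from a Python test harness via ctypes. A Linux-only sketch, assuming glibc:

    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    fd = os.open("data.bin", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"x" * 4096)

    # int sync_file_range(int fd, off64_t offset, off64_t nbytes, unsigned int flags)
    # flags=2 is SYNC_FILE_RANGE_WRITE: kick off writeback of the dirty pages
    ret = libc.sync_file_range(fd, ctypes.c_int64(0), ctypes.c_int64(4096), 2)
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    os.close(fd)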


Lots of production software is written in Python, for better or for worse. In fact, production software gets written in every language eventually, no matter the intent of its designers. Maybe you only write your production software in C and C++, in which case good for you.


I do write Python code used in production, but that doesn't make it serious production software.

It takes a lot of effort to make a Python program not break on some inputs, so it's not really fire-and-forget like it is with C++. It's possible, but it requires many more iterations.


> ...so it's not really fire-and-forget like it is with C++.

A seriously questionable assertion, to my eyes. Maybe I just haven't used C++ in way too long to appreciate that it's true. I have heard lots of people say that modern C++ is really great.


Fire and forget? I'm not even going to bother listing all the "modern" C++ software with serious bugs or security defects...


So, you realize Instagram, Pinterest, and a whole host of other unicorn software companies started out with Python (often using Django), right? And many, many current startups continue to do so.


They're websites, come on.


Instagram is a website with billions of people on it.


Out of genuine curiosity, what do you consider serious production software? C and C++ probably help in embedded, but I highly doubt they are used in every other domain.


The whole point of Python is to interoperate with existing Python code, which is unfortunately already everywhere and won't go away.


A senior Python dev knows the real answer is to sprinkle 'assert's around the place sporadically and start using optional arguments like 'failonerror=False'.

Realistically, I think it depends on what you are doing. For some code, type hinting is useful; for some code, it is not.
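
For anyone who hasn't had the pleasure, the pattern being lampooned looks roughly like this (names made up):

    def load_config(path, failonerror=False):
        assert path, "path must not be empty"  # silently stripped under python -O
        try:
            with open(path) as f:
                return f.read()
        except OSError:
            if failonerror:
                raise
            return None  # the caller can guess-and-check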


> I've seen a bunch of code like this, usually written by juniors

Yes, and I'm going to be honest: my first worry when type hints came around was that the language would be inundated by Java drones who would enforce this 100% of the time.

Instead, it seems people are mostly using them where they make sense (APIs, etc.)

Type hints are a great asset. You don't have to use them everywhere, and it should remain that way.


> The whole point of Python is the duck typing.

Duck typing implies that "Should this work?" has one true answer: "Did it work?" I am at work to work, not to play guess-and-check with upstream libraries.


When you couple that with an interpreted language with no compile step, finding the answer to "Did it work?" is often faster than answering "Should it work?".

Using duck typing in a scripting language is the correct choice.


(Sure, no compile time. But how much time do you think humanity collectively wastes waiting for CPython to import things? On. Every. Run.)

"Did it work?" is a point-in-time observation. It is always situational. To find out, I have to construct esoteric objects with unobservable internal state and hope it is the state that I actually care about.

"Should it work?" has an answer of infinite duration. I can recall what I learned years later without needing to re-run some experiment.


Yes but "Did it work?" can be discovered faster than "Should it work?". This means companies that ask "Did it work?" are more commerically competitive than companies that ask "Should it work?". The companies that ask "Should it work?" will eventually be eliminated by the market place.


Replace "real" with "different" and I'll fully agree with you.


Type hints communicate intents. They can be used for many things.

Django Ninja uses them to do dependency injection and to generate schemas for your API (using Pydantic); autodoc uses them to generate the documentation of your functions; etc.
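
A minimal sketch of the Django Ninja side (the endpoint and fields are invented; this would live in an existing Django project's urls.py):

    from ninja import NinjaAPI, Schema

    api = NinjaAPI()

    class Item(Schema):
        name: str
        price: float

    @api.post("/items")
    def create_item(request, item: Item):
        # the Item type hint tells ninja to parse and validate the JSON body,
        # and it lands in the generated OpenAPI schema for free
        return {"name": item.name, "price": item.price}

    # hook it up with: urlpatterns = [path("api/", api.urls)]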

It does not replace duck typing; proof here: https://peps.python.org/pep-0544/
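
Concretely, a typing.Protocol is duck typing the type checker can see; a minimal, self-contained sketch:

    from typing import Protocol

    class Quacker(Protocol):
        def quack(self) -> str: ...

    class Duck:
        def quack(self) -> str:
            return "quack"

    class Robot:  # satisfies Quacker without inheriting from it
        def quack(self) -> str:
            return "beep"

    def make_noise(q: Quacker) -> str:
        return q.quack()

    print(make_noise(Duck()), make_noise(Robot()))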


Most people don't know a real programming language. (And using types in Python/PHP is a good way to start.)


Hmm, could you explain what makes it terrible?



