
> the features that make Python quite good at prototyping make it rather bad at auditing for safety and security

What's an example that makes it bad? Is it a case of the wrong tool for the job?

For example, I understand that garbage-collected languages shouldn't be used for hard real-time systems like flight controllers.




What auditors need most is the ability to analyze the system discretely and really "take it apart" into pieces they can apply pass/fail metrics to (e.g., conformance to a coding style, the number of branches and loops, when memory is allocated and released).

Python is designed to be highly dynamic and to allow more code paths to be taken at runtime, by interpreting and reacting to live data - "late binding" in the lingo, as opposed to the "early binding" of a Rust or Haskell, where you specify as much as you can up front and have the compiler check that specification at build time. Late binding creates an explosion of potential complexity and catastrophic failures because it tends to kick the can down the road: the bug is introduced in one place, but the failure shows up somewhere else, because the interpreter is very permissive and assumes that what you meant was whatever allows the program to continue running, even if that leads to a crash or bad output later.

Late binding is very useful - we need to assume some of it to have a live, interactive system instead of a punchcard batch process. And writing text and drawing pictures is "late binding" in the sense of the information being parsed by your eyes rather than a machine. But late binding also creates a large surface area where "anything can happen" and you don't know if you're staying in your specification or not.
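A tiny illustration of how late binding kicks the can down the road (the `Config` class here is made up for the example): a typo'd attribute name doesn't fail at the assignment, it silently creates a new attribute, and the bug only surfaces wherever the stale value is eventually read.

```python
class Config:
    def __init__(self):
        self.timeout = 30

cfg = Config()

# Typo: "timeOut" instead of "timeout". Python doesn't reject this -
# it silently creates a brand-new attribute on the instance...
cfg.timeOut = 60

# ...so the program keeps running, and the failure shows up far from
# its cause, wherever the stale value is actually used.
assert cfg.timeout == 30  # the "update" went nowhere
```

An early-bound language would have rejected the misspelled field at compile time; here the interpreter assumes you meant to add a new attribute.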


Interesting. What kinds of software get this level of audit scrutiny?


There are many examples, but take, for instance, the fact that Python has privacy by convention, not by semantics.

This is very useful when you're writing unit tests, or when you want to monkey-patch a behavior and don't have time for the refactoring it would deserve.

On the other hand, this means that a module or class, no matter how well tested and documented and annotated with types, could be entirely broken because another piece of code is monkey-patching that class, possibly from another library.

Is that the case? Probably not. But how can you be sure?
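A minimal sketch of the problem (the `Account` class is hypothetical): nothing in the language stops any imported module from rebinding a "private" method on a class it didn't define, silently invalidating every test that was run against the original.

```python
class Account:
    """A small, "well-tested" class. The leading underscore on _check
    marks it private - but only by convention."""

    def __init__(self, balance):
        self._balance = balance

    def _check(self, amount):
        return amount <= self._balance

    def withdraw(self, amount):
        if not self._check(amount):
            raise ValueError("insufficient funds")
        self._balance -= amount
        return self._balance

# Any other module that runs - a dependency, a plugin, a downloaded
# model's loading code - can rebind the "private" method at runtime:
Account._check = lambda self, amount: True

acct = Account(100)
acct.withdraw(1000)           # the tested invariant no longer holds
assert acct._balance == -900  # overdrawn, no error raised
```

The class's own tests still pass in isolation; the breakage only exists in the composed system, which is exactly what makes it hard to audit.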

Another (related) example: PyTorch. Extremely useful library, as we have all witnessed for a few years. But that model you just downloaded (dynamically?) from Hugging Face (or anywhere else) can actually run arbitrary code when it's loaded, possibly monkey-patching your classes (see above).

Is that the case? Probably not. But how can you be sure?
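The mechanism is that PyTorch checkpoints have historically been Python pickle files, and unpickling can execute arbitrary code: pickle calls an object's `__reduce__` to decide how to rebuild it, and whatever callable it returns is invoked at load time. A minimal, harmless sketch (the `record` function and `Payload` class are illustrative, not part of any real model format):

```python
import pickle

log = []

def record(msg):
    log.append(msg)

class Payload:
    def __reduce__(self):
        # pickle serializes this as "call record(...) to rebuild me" -
        # so the callable runs during loading, before any model code.
        return (record, ("arbitrary code ran during load",))

blob = pickle.dumps(Payload())

# Merely *loading* the bytes executes the callable:
pickle.loads(blob)
assert log == ["arbitrary code ran during load"]
```

A malicious checkpoint would return `os.system` or similar instead of a logger, which is why loading untrusted weights is a supply-chain risk in itself.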

Cue supply chain attacks.

That's what I mean by auditing for safety and security. With Python, you can get quite quickly to the result you're aiming for, or something close. But it's really, really, really hard to be sure that your code is actually safe and secure.

And while I believe that Python is an excellent tool for many tasks, I am also something of an expert in safety, with some experience in security, and I consider that Python is a risky foundation to develop any safety- or security-critical application or service.


Thanks for this, super insightful perspective




