Type hinting feels like a bandaid for a fundamental limitation of dynamic languages. I've just gotten back to a complex, experimental codebase after only a couple months of absence, and am refactoring it to accommodate the implementation of a number of previously unplanned features. Even with type hinting and heuristic linting it's such a huge pain! After making a large number of changes I end up having to rerun the code repeatedly to find and squash bugs, and that says nothing of the code paths I don't end up taking. Is there a better way to get the convenience of Python for experimental code without running into the scalability issues of large Python codebases?
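To be fair, where annotations are complete the tooling does flag breakage statically - a made-up sketch of the good case (function and module names invented):

    def load_rows(path: str, limit: int) -> list[dict]:
        ...

    rows = load_rows("data.csv")  # mypy: Missing positional argument "limit"

But the moment values flow through untyped or dynamically constructed code, that signal disappears and I'm back to rerunning things.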

Contrast this with my experience with C# in Visual Studio (not Visual Studio Code, which is inferior). The state of flow I can get into when performing large-scale refactoring, with immediate, accurate feedback in the form of a clickable, line-by-line list of what's broken, is unmatched. I would love such a pleasant experience in a Python IDE, but I'm not sure it's possible because of the nature of duck typing. It's like a real-time, automatically generated checklist of exactly where to propagate changes; it takes a massive load off my working memory and provides an uninterrupted flow of dopamine. A true state of zen if I've ever experienced one.




Counterpoint: I feel like static typing is a bandaid for the fundamental problem that the language isn't powerful enough to allow one's code to truly be OAOO.

Back when I wrote C#, static typing was indeed helpful for making large-scale changes and providing an automated check that this wasn't destroying everything. But then, it's only in languages like C# that I have to make these kinds of large-scale changes. There are no macros or syntactic abstraction. When you want to construct the same category of operation in different contexts, and functions and classes don't operate on the right axis, you're SOL. You just end up repeating yourself.

When I write Lisp, I never have to make changes that cover more than their one area of responsibility. At worst, I'll rename a function, and a simple textual find-and-replace is more than sufficient -- better than most refactoring browsers, even, since it will hit my comments and documentation. Do fancy refactoring tools update your README and docs/ folder yet?

(In one case, I watched an experienced C# programmer build and compile an expression at run-time using LambdaExpression [1]. It takes about 10 times as many lines to achieve the same thing, and you have to write in a style that looks nothing at all like a normal function, so in practice nobody ever does this. In contrast, the way to accomplish this in Lisp takes one character, and the code looks identical to a normal function, so it's not unheard of.)

I feel that Python, in many ways, combines the worst aspects of both worlds. It's not strict enough to be a good static language, and not powerful enough to be a good dynamic language. Sadly, the most popular dynamic languages today are Python and JS and PHP, so a lot of failures and limitations of these languages get blamed on dynamic languages in general.

[1]: https://docs.microsoft.com/en-us/dotnet/api/system.linq.expr...


> Python ... is not strict enough to be a good static language

Yes, but Python was never meant to be a static language. The static type annotation feature was borrowed from static languages to reduce bugs, not to make Python a static language.

> Python ... not powerful enough to be a good dynamic language.

I strongly disagree with this. Python is a very powerful dynamic language. It lets you code with any paradigm that suits the problem you’re trying to solve. Functional, OOP, procedural, or whatever. CPython has hidden features that allow for low level customization of how it works (when it is appropriate).
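A tiny sketch of what I mean - the same language handles a functional one-liner and low-level operator customization via dunder hooks (names invented):

    from functools import reduce

    # functional style
    total = reduce(lambda acc, x: acc + x, [1, 2, 3], 0)

    # low-level customization: a class can define what '+' means for it
    class Vec:
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __add__(self, other):
            return Vec(self.x + other.x, self.y + other.y)

    v = Vec(1, 2) + Vec(3, 4)  # Vec(4, 6)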

> Sadly, the most popular dynamic languages today are Python and JS and PHP

There is a big difference between Python and those other languages. JavaScript and PHP became popular because they had to be used to write frontend code and backend code respectively for websites on affordable web hosts. Python became popular because programmers chose to use it; because it’s really good.

Perhaps you think Lisp is better; that’s fine, but the notion that Python is not good or not powerful is just wrong.


As someone who mostly writes functional languages, I always have a hard time writing decent functional code in Python. There are almost no facilities to support modern functional programming, so it just doesn't feel idiomatic in Python.


Agreed. Javascript certainly has more sharp edges, but if you stick to the good parts (or better, use Typescript) and throw in an fp library like Ramda, it's a lot more nimble for working with complex data structures than Python imho. And leveraging concurrency is of course much easier in Node-land too, especially since async/await have become standard fare. That said, Python has some great features (list comprehensions are awesome) and tons of great libraries, so it's still a solid choice for many use cases in spite of its data-wrangling limitations.


The functional paradigm is a matter of taste. Python is deliberately limited in its functional capabilities.

You may not like it, but it's not an error, it's a design decision.


I think it wasn’t the OP’s point to question the design process behind Python — he was mostly just reacting to the commenter above who claimed that:

> Python is a very powerful dynamic language. It lets you code with any paradigm that suits the problem you’re trying to solve. Functional, OOP, procedural, or whatever.

Which simply isn’t true (as is proven by you as well) — Python was never designed to “let you code with any paradigm that suits the problem”. It has severely limited functional and (nonexistent?) metaprogramming capabilities, so the claim might be better phrased as follows:

> Python is a very powerful dynamic language. It lets you code with any paradigm that suits the problem you’re trying to solve, as long as it’s OOP.

(Please excuse the sarcasm: I work with Python and I like it for its simplicity, good tooling and a myriad of other things, but “flexibility”, for lack of a better word - especially compared with LISP - is not one of them.)


It's perfectly possible to be both a design decision and an error. False dichotomy, as they say :-)


Again, a programming paradigm is like code formatting rules or IDE choice. It's a matter of taste. Saying one is an error is just reenacting vim vs Emacs. It goes nowhere.


And anonymous functions are basically non-existent. Yes, lambda works, but it only allows a single expression, which forces you to name the function anyway.
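A quick sketch of the limitation: a single expression is fine, but anything with statements has to be pulled out and named:

    names = ["Lovelace ", "Hopper"]

    # fine: a lambda is limited to one expression
    ordered = sorted(names, key=lambda n: n.lower())

    # needs statements, so it must become a named function
    def normalize(n):
        trimmed = n.strip()
        return trimmed.lower()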


I find that a good thing rather than as a shortcoming. JS/TS looks horrendous with the amount of function nesting. Luckily, it looks like futures and await are resolving some of that mess.

In my mind, the following is better than the JS equivalent using closures:

    response = await AsyncHTTPClient().fetch("http://www.google.com")
    self.result = json.loads(response.body)  # Tornado responses expose the bytes as .body
You see exactly what is going on: no additional nesting, and no semi-hidden inner state that complicates reasoning about state. With callbacks, people get tempted to put code after the registering call, thinking it'll run after the previous line.

I may get some flak for this or my language, but I have absolutely no idea why anyone bothers so much with the closure garbage outside of select places where they make sense. It's non-intuitive and looks bad. Heck, I have a hard enough time explaining to seasoned devs the difference between threads and async concepts. Then you throw that sort of stuff at a junior JS dev and chaos will most certainly ensue, with a buggy FE where no one can reason about state and there are null checks all over the place, just because. I've seen it in static FE languages like C#, too. State becomes too difficult to reason about, so null checks are required everywhere.

Edit. Formatting.


This is by design; GvR was opposed to functional programming. Example: https://www.artima.com/weblogs/viewpost.jsp?thread=98196


Seconding. I write a Lisp a lot (including professionally), and I don't think there was even one time I wished for the kind of automated refactoring I had available when I worked in Java. 99% of the time, manual fixing, Emacs search-and-replace (or editable Occur), or grep are enough. The remaining 1% of cases are things like renaming a slot in a class, where I need to track down and fix all the relevant slot-value and with-slots calls, but somehow this never ends up being too big of a problem.

I haven't figured out why things are like this. I'm guessing something about Lisp makes me write software in a way that defeats the need for automated refactoring. It's not just that reliable refactoring tools are impossible to implement[0]; I don't find myself needing them at all.

Also, spot on with OAOO. I have cases where equivalent Java code could occasionally benefit from automatic refactoring of a couple dozen functions and classes, because they're all hand-written. My Lisp code is usually one macro invocation that generates all these functions/classes, so "automated refactoring" involves changing the macro definition (and rerunning the top-level form using it, in the live image I'm working in).

--

[0] - Good luck automatically refactoring calls to slot-value that use slot names generated at runtime, or any kind of non-trivial macro. That said, I sometimes wish this wasn't the case, because I could use better autocomplete, or more reliable "who references this" queries.


Interesting. I did two years of Clojure after a decade plus of Java and C#. It never particularly clicked for me. This was developing miscellaneous LOB apps with a bunch of business logic. Maybe the wrong domain for a Lisp to shine. Boilerplate wasn't much of an issue in that domain, and without type safety I constantly felt like I was driving blindfolded. Given that feeling, I felt the density and power of Clojure was actually a disadvantage relative to, say, Python.

The clincher was after rewriting a large app in F# and having two working versions for a while, I decided to move to async calls. In F# I was able to change a few core functions and then follow the red squiggly lines, wrapping logic into async and asyncSeq monads until everything compiled again. And after a day of doing so, everything worked the first time. With Clojure I couldn't even find the courage to begin something like that. It would have to be a complete rewrite. I never touched Clojure again. Maybe Lisp is different.


Type safety is definitely something I generally miss with Lisps too.

Doing Common Lisp these days, I put a lot of type declarations on function signatures - the implementation I work with (SBCL) uses them for both compile-time type checking (with type inference) and generating more efficient native code. So I have some of that back, though it gives nowhere near the confidence I had with Java.

I haven't yet worked with languages like F# or Haskell so I can't comment on it, but I very much like what you've described in the second paragraph.


I had to look up OAOO.

For others - http://c2.com/xp/OnceAndOnlyOnce.html


Finally a decent critique of Python. I have read so many type and whitespace complaints that aren’t particularly useful. “Middle-brow dismissal”, I think it’s been called here. But this one - wow, it made me think.


There is a better approach: statically typed functional languages like F#, OCaml, or Haskell.

They don’t need type declarations, as most things can be inferred.

So you get the feeling of writing Python, with the full suite of benefits of a strongly typed functional language, when using, say, F#.


I actually find that excessive type inference is much harder to understand. It’s almost like the worst of dynamic and static types. You have no idea what the types are, but you know it won’t compile because of a cryptic error message.


You can usually get the compiler to tell you the inferred type of an expression, if only by annotating it with one that's obviously wrong, e.g. (), and looking at the resulting error message. Some languages, e.g. Haskell, support a "holes" mechanism that formalizes and expands on this trick to enable a kind of 'dynamic', exploratory programming even in a wholly static language.
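Python's checkers have grown an analogue of this, for what it's worth: mypy understands a reveal_type() call and reports the inferred type (it's a checker-only construct; on Python 3.11+ there's also typing.reveal_type for runtime use):

    xs = [1, 2, 3]
    doubled = [x * 2 for x in xs]
    reveal_type(doubled)  # mypy: Revealed type is "builtins.list[builtins.int]"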


Fair point. Perhaps my issue was less about the visibility of the annotation and more that the types in functional languages tend to be more abstract or complex (e.g., monads, functors, etc) and harder (for me) to reason about despite the code being terser.


Sounds like C++ templates; it is really the worst of both worlds. When something throws an error you don't know if it's the caller, the callee, or the provided type that is wrong, and usually these errors surface five gazillion layers down the call stack. C++20 is introducing 'concepts' to remedy this by constraining the type at the usage site.


With F#, the IDE can annotate the types of expressions for you.


I’d be very interested if any of these decided on non-atrocious syntax. Any progress on that front?


Can you expand on this? Haskell and F# have less noise than OCaml. Plus, with F# you can just pipe things neatly. I started with and still love Python, but these languages have a lot of stuff going for them that blows the standard languages used nowadays out of the water.

Download .NET Core and set up VSCode + the Ionide plugin and you’re ready to rock and roll (fastest IDE setup). As you write your code, you’ll see the function signature and what the compiler can infer from your code.

Also check out the basic intro; the syntax is very clean and straight to the point.

https://docs.microsoft.com/en-us/dotnet/fsharp/introduction-...


I think the issue with Haskell's syntax is it has always been a research language that has evolved very rapidly, and it shows. Honestly, I'm more impressed than anything that it manages to remain quite usable in spite of this.

There are small warts, like unary negation; others are things like templates, which were added later.

The most obvious, and most syntactic, is having both braces and indentation-based layout. I've found, after using mostly indentation layout, that I've forgotten exactly which structures are blocks.

Others are simply bad designs, like data record accessors. A `data` declaration is declaring accessor functions for me, whether I ask for them or not, putting stuff into my namespace. Then there are more kludges to work around the obvious problem of having fields with conflicting names. (Oddly, there's a very elegant update syntax, and pattern matching works beautifully. It seems like a simple tweak to that could have avoided accessor functions entirely!)

Some issues are inherent to the design, and reflect a conflict between the language designers and the users.

Ubiquitous currying is a great example. Making all functions unary is mathematically very elegant, but it doesn't reflect the way we usually think.

If I intend to declare a function f(a, b), I simply can't. Its signature must be A -> B -> C. That's especially odd when you're declaring a binary operator.

I'm looking at this through the lens by which Scheme reformed LISP's defun:

    (defun square (x)
      (* x x))

    (define (square x)
      (* x x))
It's subtle, but it reflects a nice minimalism in syntactic design as it uses the same structure for declaration and for application.


I can't understand what you mean, because

> A `data` declaration is declaring accessor functions for me, whether I ask for them or not, putting stuff into my namespace

It won't if you don't use record syntax. If you want the accessors, use record syntax. If you don't, don't.

> If I intend to declare a function f(a, b), I simply can't

Sure you can.

    f(a, b) = a + b
> it uses the same structure for declaration and for application

The same as Haskell


There's a core dispute here, and it's a pretty common one that can never quite be resolved.

The practicality argument I think you're making is necessary when you're working on an existing language feature, and that's because you're invariably forced into tradeoffs.

The argument from design I'm making is more appropriate to a completely new language feature; the more your design is based on math, and especially on the linguistics and psychology of users' intent, the fewer clever hacks the user has to make and the fewer tradeoffs you'll be making in the future.

Haskell did get a lot of design choices right because they had smart people who worked through the math, but I think regarding linguistics and psychology, it's much more of a "greybeard" language.

> It won't if you don't use record syntax.

"Don't use that feature" is simply acknowledging a feature is broken, it doesn't make it not broken.

And you typically need named fields because you have complex code and want it to be maintainable; that's also when you don't want your namespace polluted.

> Sure you can. f(a, b) = a + b

How does that work when declaring an operator?

You're trying to write a binary function, but to do it you have to do something else: write a unary function that accepts a tuple.

It's hard to claim that "oh, n-ary functions are really just functions that accept n-tuples" when Haskell plainly doesn't believe it. That's evident in the fact that you lose the benefits of currying, sections, etc.

> > it uses the same structure for declaration and for application

> The same as Haskell

Nope.

    f :: Int -> (Int -> (Int -> Int))  -- Parens for clarity
    f a b c = a + b + c

    x = ((f 3) 5) 2
So we have two mismatches here:

1. The associativity of the type is the reverse of application.

2. Neither of them reflects the simple case where I want to call a function with three arguments.

And, look, I get that math is what makes #1 an issue. I'm more an advocate of ubiquitous partial application:

    f :: (Int, Int, Int) -> Int
    f(a, b, c) = a + b + c
    x = f(3, 5, 2)
    y :: (Int) -> Int
    y = f(3, 5, ...)
    z = y(2)
It's stating exactly what you mean, it still has the benefit of currying, and it's clearer to the reader what's a function and what's a value.

Function types become more complex, so I get why currying is attractive, but, again arguing from design, that's letting implementation drive interface.

And the math behind currying is certainly sound, but it's obscuring the fact that an n-ary function plainly isn't a unary function. They're just two different things.


Yeah, F# is not bad. How is it on Linux; are there enough third-party libs? Not inclined to use MS tools unless both reqs are met.


I’ve used it on OS X and it runs great, actually, and the Linux story is the same. The standard .NET libraries are obviously sharpened steel, and the third-party libs are fewer than, say, NPM, but higher quality, and they address some of the missing pieces. Though again, the F# batteries are great, and the .NET libs cover most if not all of your needs.

Check it out. You get sweet syntax, great compiler and tooling, and you won’t suffer things like nulls etc.

Also, parallel and/or concurrent work is a breeze, since everything is immutable by default, and the language makes it ugly to fall back into a mutable procedural or OOP style.

I highly, highly suggest you check it out. It’s the stepchild that Microsoft refuses to acknowledge in full force. But I think that story will change soon as they realize how GIANT the Python sector is, and F# is a great gateway drug into the .NET world.

edit:

By the way I used to hate on MSFT for the longest and I haven’t used Windows in over a decade. But F# is fire.


Cool, yes MS has made great strides but I still don’t trust their telemetry needs.


Yes, refactoring tools can update text in comments and docs, and in other files across your project; they provide plenty of options to specify what to change, and previews before “enacting” the changes. One of the many reasons I find IDEs superior to plain editors.


I highly disagree with using macros in place of actual refactoring. Macros can have their place, I guess, but when you use them instead of refactoring, you’re simply making code less straightforward for other people to read, creating layers of indirection that have to be fully understood before one can grok the code. Basically every IDE for a modern language quickly and correctly refactors, without any risk that you accidentally change a string value, for instance, or have to worry about scope rules; then you have code that is understandable without indirection or resorting to clunky text replacement.


>Do fancy refactoring tools update your README and docs/ folder yet?

Yes actually. ReSharper plus Swagger handle 95% of that for me automatically.

And I absolutely do not agree with your point about syntactic abstraction. Interfaces, dependency injection, generics, very light use of inheritance/abstract base classes, and extension methods generally provide all the abstraction I need to avoid repeating myself.


> [Python is] ... not powerful enough to be a good dynamic language.

Could you elaborate? I guess you're comparing it with Lisp -- what makes it significantly more powerful than Python? (I read your paragraph about Lisp and I don't see why it couldn't just as easily apply to Python, apart from `lambda` being 6 characters instead of 1.)


OAOO, and the power (expressiveness) required to actually achieve it, reminds me of the Alan Kay paper about DSLs (STEPS paper from VPRI https://news.ycombinator.com/item?id=11686325 - the breakthrough is that it should be easy to make DSLs)

And of course macros are useful for making DSLs.

Rust's macro systems are pretty powerful (ergonomic, hygienic, typesafe, simple, and as expressive and powerful as you wish via procedural macros, which can do almost anything with the AST).

Safely and cleanly abstracting any kind of program, even with DSLs, is rarely easy because of the need to pass the right context, find the right interfaces, and manage data/state/context/system dependencies; something like Scala 3's implicits might help with that. (Though maybe simply more vigorous refactoring might also work to keep data-passing to a minimum. But then I fear that turns into an over-abstraction fest, and we get back Java's FactoryFactory-like monsters.)


LambdaExpression is part of the dynamic language runtime (DLR) and is absolutely useful if you are into run-time code generation. The analogous construct in the Lisp world would be quoting, but it wouldn't be as fine-grained as the DLR, since it limits what abstraction can be applied (e.g. if you want to compute what parameters or what operations are needed at run-time).

Given the performance it allows for run-time generated code, nothing in production really beats the DLR. It is a shame Microsoft nerfed it in UWP because they ripped out the jit in favor of ahead of time compilation.


I'm not familiar with how .NET runtime works, but why does AOT compilation nerf runtime code generation?

Take SBCL - an open source Common Lisp implementation that compiles everything by default. That includes any run-time generated code. The code is simply AOT-compiled at runtime, at the point of generation.
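Even CPython does a modest version of this: source generated at runtime is compiled - to bytecode, not native code - at the point of generation. A minimal sketch:

    # build source as a string, compile it at runtime, then call the result
    src = "def triple(x):\n    return 3 * x\n"
    ns = {}
    exec(compile(src, "<generated>", "exec"), ns)
    print(ns["triple"](14))  # 42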


They don’t include a JIT in UWP, so run time generated code is interpreted rather than dynamically compiled.

Run-time generated code in the DLR is very programmatic, using the full C# language (e.g. expressions are values that you can store in a field to be retrieved later and completed); they aren’t just simply quoted templates. So AOT compilation of run-time generated code is simply impossible (consider if you used the DLR to execute code for your own live programming language layered on top...).


> they aren’t just simply quoted templates

how is that related to Lisp? Is code generation in Lisp limited to 'quoted templates'?


It is if you want ahead of time compilation to work on your generated code. People often confuse macros with full programmatic code generation, but they aren’t equivalent at all, of course.


How is that possible, given that macros are arbitrary Lisp procedures, which are transforming arbitrary input expressions (and world state) to output expressions (and new world state)?

Also given that code generation is not limited to macros in Lisp. Any function can generate code and hand it over to the embedded AOT compiler.


I’m talking about the quoting used inside macro implementations; it doesn’t allow for generalized run-time code generation.

An embedded AOT compiler is compiling code at run-time, which isn’t allowed without writable code pages (which similarly makes the JIT disallowed as well).


Sorry, I still don't understand. What do you mean by quoting vs. generalized code generation? In Lisp the macro can be a procedure, which takes expressions and then uses an arbitrary computation to generate new expressions. Using a quoting mechanism and code templates is fully optional.


It can be, but generally isn’t as most macros don’t require generalized code generation, so quoting works just fine. Going back to the first comment I was replying to:

> (In one case, I knew an experienced C# programmer to build and compile an expression at run-time using LambdaExpression [1]. It takes about 10 times as many lines to achieve the same thing, and you have to write in a style that looks nothing at all like a normal function, so in practice nobody ever does this. In contrast, the way to accomplish this in Lisp takes one character, and the code looks identical to a normal function, so it's not unheard of.)

I assume they mean that they didn’t need to manipulate expressions in a general way because what they wanted was a simple macro that lisp supports with nice template/quote syntax, whereas C# doesn’t support macros at all but you can do something hacky with general run-time code generation.


For others reading this, this was for security reasons (UWP sandbox restrictions). Turns out having memory that is both writable and executable isn’t great for sandboxes.


It's a sad state of things. Given that for any runtime-generated code, you can make an equivalent by building an interpreter and interpreting data - only significantly slower - this doesn't seem to me to be buying any security at all, and the cost is a deal-breaking level of inconvenience for the programmer.


There are at least two considerations here.

If you’ve found an exploitable memory safety bug, the goal is usually to execute some attack payload and do something useful to an attacker. This generally involves injecting some sort of code. If you can write to executable memory, this is easy. If the attack target contains an interpreter, and you can convince the interpreter to interpret your payload, you also win. If the target doesn’t contain an interpreter, your job is considerably harder.

In locked-down environments, e.g. iOS but also plenty of SELinux-ish things, there may be no interpreters available and no ability to execute newly delivered code. This does add some degree of security, and it requires a lack of JIT support.


We live in an age where we can circumvent processor hardware security using prefetching. I’m not sure what we can really say about security anymore.

Also, the DLR wasn’t a very popular feature among .NET developers, even if it was really well done (wrt performance and static-type compatibility). None of the DLR languages ever took off (IronPython and IronRuby), and very, very few use the API to generate code, so its nerfing isn’t inconveniencing many programmers.


It's inconveniencing those hoping for a proper Lisp on CLR :).


Sounds like you might like Hy: http://docs.hylang.org/en/stable/


I disagree. On the other side is static languages adding things such as "auto" and other ways to automate trivial tasks. Type annotations in dynamic languages is the flip-side, trying to reach this "ideal middle" from the other end.

What's great about Python type annotations is that you can give as much as you want, and the checker will do the best it can with that. If you don't want to give any types, then don't. If you want to give types for one function only, then do just that. You get to choose how deep down the well you want to go.
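A minimal sketch of that gradualism (checked with mypy; names invented):

    def parse_port(value: str) -> int:   # annotated: fully checked
        return int(value)

    port = parse_port(8080)              # mypy: expected "str", got "int"

    def untouched(x):                    # unannotated: left alone by default
        return x * 2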


I disagree. Type inference in statically typed languages (e.g. auto or var) does not weaken static typechecking guarantees. So it is not the same as writing type annotations only as much as you want. It is stronger.


I never said it weakens it, but it's trying to keep the strong type checking while making the language easier. Python is the opposite, it's easy to write but doesn't have the strong guarantees, and it's trying to become safer while keeping the ease of writing.

This ideal "middle state" I speak of is a language that is both as easy as a dynamic language but as safe as a static language. Static languages are making it easier to write and dynamic languages are making it safer.


If you limit the model to source code diagnostics, you have a good point.

But statically typed languages also use the types for other things: optimization, generic code instantiation, symbol resolution, and even certain kinds of metaprogramming. Optional type checking is a nice way to reduce certain kinds of errors and tighten up some underspecified designs, but it falls very much short of actual static type checking.

That being said, most of the unique things static typing provides aren't needed for typical CRUD apps and glue logic, so type annotated python isn't a bad choice.


The way Python's type checking is implemented (it's literally just annotations), it can be used for all of the above.

There are already libraries, like Numba, that accelerate specific functions. Those libraries could switch to using type annotations instead.

Similarly, IDEs already use type annotations to enhance their symbol resolution and so on.

Lastly, I'm not sure what you mean by "falls very much short of actual static type checking". With mypy and PyType, you can already catch many bugs that you wouldn't have otherwise, without running the code. That's the definition of static type checking. It may not do as much as C++, but it still catches significantly more than most other purely dynamic languages with no types.
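For reference, Numba today takes types through its own decorator (explicit signatures or inference at first call) rather than reading PEP 484 annotations - a minimal sketch:

    import numpy as np
    from numba import njit

    @njit  # compiled to machine code on first call; types inferred from arguments
    def dot(a, b):
        total = 0.0
        for i in range(a.shape[0]):
            total += a[i] * b[i]
        return total

    dot(np.ones(3), np.ones(3))  # 3.0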


There are other implications, but binary linking is fraught. Minor errors corrupt entire executables, systems, and data stores, including entirely unrelated subsystems. Often statically typed systems use (at the binary level!) type information to trade off defensiveness (checking that array indexing is in bounds, maybe) for other things (latency, throughput, etc.).

Point being, the type information is used for more things than just telling coders about API mismatches. Maybe some libraries could use type information to dynamically use better code paths, but statically typed languages use this information far before that... when the equivalent of a wheel is created, for instance.


Easier to learn != easier to write.

For a person like me who has learnt a few languages with static type systems like Scala or C++ first, which are also arguably harder to learn than Python, it is easier to write and read Scala or C++ than Python.

This is because static type annotations serve as documentation and they enable various helpful IDE features like code completion, find usages, safe refactoring, real time correctness checking etc. These features work best when the type information is complete.

For some projects I had to use JS / Python / PHP, and I found writing these much harder. Indeed, it was very easy to learn the core of the language, but then not having reliable IDE support made writing the actual code (and using libraries or digging into existing code) slower.


If you didn't have reliable IDE support while developing python then you didn't use a good IDE or the code you were working with had too much dynamic voodoo nonsense. One must never go "full dynamic", because you then get the worst of both worlds.

I've generally had reliable IDE support for python for almost a decade now. First in the form of PyDev on Eclipse, and now PyCharm.


IMHO PyCharm is good among dynamic languages, but it's still far from the level of IDE support you get for statically typed languages. It is very conservative in the number of bugs it highlights. Same problem in VSCode and JS. While autocomplete is quite nice and works most of the time, they fail to catch most problems as I type.


`auto`, type inference, etc. has nothing to do with dynamic types. The types are still 100% static, not some sort of compromise or middle ground between static and dynamic.


You missed my point. Static languages try to make it easier to use, while dynamic languages try to become safer. They each try to keep what makes their side good, while getting some hints of what the other side has.


I guess I didn’t understand your point because I think statically typed languages are more easy to use than Python, not less, as soon as the program is longer than 500 lines or so.


Right, but in Python, especially with the use of third-party libraries, I can glue something together in less than 20 lines of code.

I once had a script that recorded the microphone, converted the waveform, passed it to speech-to-text to turn it into text, translated it, then converted it back into audio and played it. All in under 30 lines of Python.

Meanwhile, with C++, just trying to manipulate some strings will take that many lines. I guess we have different definitions of "easy to use".


C++ is not the only statically-typed language.

Can you elaborate on why Python's lack of type checking specifically (as opposed to any of the other ease-of-use features of Python) made it easier for you to write this program? Were there cases where you were using something that a statically typed language would have considered the "wrong" type for a function argument (or anything else), but it worked fine in Python?


This isn't the first time I've seen this argument - clearly there are people it rings true for, but I just have a hard time seeing it. The benefits you get from types start to diminish very quickly as soon as you start adding untyped values into the mix. Sure, there are odd cases where you might happen to have a typed function and a typed value and the checker can find a bug for you, but my experience writing Python tells me these cases are vanishingly rare in reality.

And the cost of this flexibility seems to be quite high. The overall quality of type checking that mypy gives you seems to be pretty abysmal, and it requires quite a bit more handholding and awkward workarounds to deal with the proliferation of Anys and uninferable types than a language that just starts with types in the first place.


Not really. Enable something like mypy or PyType on a big enough codebase with zero explicit annotations and it'll already find plenty of bugs and unhandled cases from the inferred types alone. Some of these are things a strong IDE may catch too (using the wrong function name, passing the wrong number of args, etc.), but some others are actually deeper in the code.

So with zero annotations you already get value out, let alone once you type a few tricky variables that are harder for the checker to track.
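Two kinds of zero-annotation catches, as a sketch (the second relies on arity checking of untyped defs, which I believe mypy does by default):

    x = "3"
    total = x + 1   # mypy: Unsupported operand types for + ("str" and "int")

    def scale(v):
        return v * 2

    scale(1, 2)     # mypy: Too many arguments for "scale"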


I've never seen that personally, but most of my python work has been on small to medium sized projects (generally in the 5-10 KLOC range).


What kind of "untyped data" are you dealing with?


That's like saying the value of unit tests quickly diminish once you add non tested code to the mix.

For fearless refactoring, sure, you need quite high type-coverage to not get a false sense of security, but still, the more you have the safer it is. For just finding bugs and code navigation, any addition of types is going to be a win.


> What's great with Python type annotation is that you can give it as much as you want

That's also what's Not Great about this whole 'annotation' or 'gradual typing' approach. Having only some arbitrary part of the program be statically typed introduces a huge amount of 'interaction' or 'blame' points between the static and the dynamic portion of your program, that significantly impact the usual benefits of static typing. Dynamic types should be used only when strictly necessary, everything else should be made static as soon as feasible.


Hard disagree: interfaces should be statically typed, but implementations need not be unless it adds value.

Languages like Python with a typechecker are much closer to Rust (safe, but with the ability to have unsafe areas) than to true statically typed langs, because static languages don't give easy escape hatches (you end up having to cast everything; no static lang I'm aware of ships with an Any type like Python's).


>no static Lang I'm aware of ships with an Any type like python's).

Isn't that similar to the dynamic type C# has had long before Python?

https://docs.microsoft.com/en-us/dotnet/csharp/programming-g...


Looks like it, TIL. Thanks for the link.


Rust is a true statically typed language; `unsafe` in rust relates to code that violates memory safety guarantees, but you can't escape the static type checking using it.

Rust actually has an `Any` trait which allows for a certain amount of runtime type asserting if you don't know what concrete type you'll be working with.


This is a mostly semantic distinction. Rust provides safety guarantees, but you're able to opt out of some of those in certain well defined ways.


Never actually seen it used, but: https://en.cppreference.com/w/cpp/utility/any


Sounds like a good rationalization. I really think it's hinting for an IDE so that it knows where and when UDTs are used.

The cool thing about Python is that the stdlib is readily memorizable. It's a little idiosyncratic, but it's relatively small. The onus is on the IDE - if you want one - to be smart. Type hinting probably doesn't catch many bugs, especially with metaprogramming, but it might add some value to large codebases or team development.


It’s the same feeling I have watching people work in Java; their IDE and whole language development experience is a tier above Python. I am surprised that the world decided on Python as the informal lingua franca of science.


As a Java developer, I still like Python as a language for small to medium-sized programs - or scripts, if you will. To me, it's just quick to iterate. The condition, though, is that I need to be able to quickly run the program to validate that all of it works. In other words, when it comes to test coverage, longevity, and reliable refactoring, it gets hard. I'm sure these things are possible in Python too (I know there are testing frameworks, although it seems more of an eclectic mess than Java, which has mostly settled on a small set of established testing libraries), but somehow the Python projects I stumble upon have less of a testing, and thus maintainability, culture than the Java projects I stumble upon. I have often wondered why that is, and I keep coming back to culture. Maybe that's too simple; maybe the testing is just fine for business applications, and maybe my general usage of Python is too much command-line scripting, where testing and mocking is just a little harder to do...


> The condition is, though, that I need to be able to quickly run the program to validate just all of it to work.

That's a condition for all TDD.

> I know there's testing frameworks, although it seems more of an eclectic mess than Java

If you look at the test suites for popular libraries like numpy, django, airflow it's mostly `pytest`, `unittest` (part of the standard library), and `nose`.

> where testing and mocking is just a little harder to do...

Mocking is actually pretty easy to do in Python using pytest.monkeypatch or unittest.mock. Compared to mocking in a strongly typed language like C++ (and I assume Java), if an object you're mocking implements a particular interface, you only have to mock out the parts that get exercised by the codepath in the tests you care about.
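For instance (billing and its functions are invented for the sketch) - you replace only the attribute the codepath touches and let the rest run for real:

    from unittest import mock

    import billing  # hypothetical module under test

    def test_invoice_total():
        # stub out just the network call; everything else in billing runs as-is
        with mock.patch.object(billing, "fetch_rates", return_value={"EUR": 1.1}):
            assert billing.invoice_total(100, "EUR") == 110.0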


> That's a condition for all TDD.

I was saying, run the program, not run a test in a larger suite.

> > where testing and mocking is just a little harder to do...

You were quoting me trying to say that mocking out system interactions, such as I/O and things with external side effects, tends to be harder, regardless of Python versus Java.

> Mocking is actually pretty easy to do in python

Let's disagree. As a Java developer, doing some Python takes me a moderate amount of online searching, unless I'm writing the test code, during which the online searching and associated trial and error skyrockets.

Just my modest experience.


The problem is small and medium programs have a tendency to metastasize into large ones.


There's a controlled way of doing that. Prototype in python, as soon as it's clear what/how things need to be done, implement in C++, add python bindings for manual testing, unit-tests to solidify things.

The bindings will end up being throw-away code, with https://pybind11.readthedocs.io that's not too bad in terms of time spent.


Hopefully you would implement new, greenfield projects in Rust or perhaps Go, not C++. There's also a side-benefit in that Rust interoperates more easily with Python and similar languages, due to its features being a closer match to the C ABI.


Rust still needs to do a lot to catch up with 30 years of tooling and libraries.

And since we are speaking about Python integrations, Rust is still at the departure line to anything GPGPU related, even something like .NET has better tooling currently.


A modern C++ is head and shoulders above Rust. Rust might get there in a decade or two; not today.

Go is a complete joke if it wasn't so sad. Just no.


Sure, but isn't any code hygiene a matter of discipline, in this case switching over before it gets out of hand? The only reason my small programs are in Python and not in BASH is because of the same discipline. Also, being in a post-monolith era I hope this is all less of an issue.


In my opinion, there is always a lot of pressure that keeps people from maintaining good discipline, so there's a lot of value in tying your hands up-front in a way that ensures a bare minimum of maintainability.


Python has the unittest module, which is a port of JUnit, in the standard library. (And a mocking library as well.)

The other popular testing library is pytest. So that's more choices than in Java but fewer than in C#.


> So that's more choices than in Java but fewer than in C#.

There are far more unit testing libraries available on the JVM than just JUnit.


Yes, I meant "more choices that are popular" - of those libraries JUnit is the obvious default choice that most people use, unlike in C# where you have NUnit, xUnit and MSTest all enjoying comparable popularity (to each other).

My point is that the Python situation, with two popular choices, is hardly "an eclectic mess". Although strictly speaking it might be true that two choices is twice as eclectically messy as one choice, it's still fewer choices and thus less of a mess than C#'s three choices (and I haven't heard anyone calling that a mess).


For the combination of JUnit, Mockito, Hamcrest, Rest Assured and Spring Test (each of which plays its own role in testing) I am able to find answers more reliably than for the combinations of Python libraries that I seem to run into. They may be fewer in absolute numbers, but (my perception) they appear in too many permutations, and it's often hard to find the right answers.


JUnit -> unittest (std lib)

Mockito -> mock (std lib)

Rest Assured -> Convenience library, not needed for testing APIs in python.

Spring Test -> Only needed for the Spring framework.

Hamcrest -> No idea wtf this is or why it's needed for unit-testing, but there is a python port: PyHamcrest

The two actual libraries needed for unit-testing are there, fully-featured, and part of the standard library. The other examples you cite are completely irrelevant to normal unit-testing and seem more borne out of the Java ecosystem or your particular methodology for unit-testing web-apis, rather than actually performing a role that can be defined as existing "cross-language". So no, from my observation, there does not appear to be any "mess" in the python unit-testing ecosystem.

However, in general, there are a lot of python libraries out there and they all solve similar problems in different ways. If you go out searching for the "how to do this unit-testing convenience feature" in python you're of course going to find a lot of answers. The same way I found inconclusive results for java when researching this response.


An equivalent to Spring Test + JUnit in Python might be Django's testing framework, which extends the standard library's unittest module. REST Assured might not need an equivalent - the documentation says it "brings the simplicity of using [dynamic languages such as Ruby and Groovy] into the Java domain". As for Hamcrest, I don't know what you would use with unittest (other than the built-in assertions), but I think pytest does some clever introspection to give similar results when it comes to error reporting.

Since I've only tried the boring default options, I'd be interested to learn about the more esoteric ones people are using. Could you give examples of some of the permutations of Python test libraries you have run into in practice?


It was very surreal moving from a team using IntelliJ IDEA for Java development to a team using Vim for Python development.

A lot of the smartest people are super productive and super considerate of all the possible codepaths when making changes. Bugs still occur due to statically discoverable coding errors.

Just how much harder it is to navigate a new codebase is staggering.


There are many tools available outside the editor, as well as testing, to catch these errors.


>their IDE and whole language development experience is a tier above Python. I am surprised that the world decided on Python as the informal lingua franca of science.

That's because, however good the IDEs are, the language is full of ceremony and stiff OO abstractions, and Python can achieve in 10 lines what takes Java 100.

Plus it interfaces much better with C/C++/Fortran/etc code, which serious data science libraries are written in.

This makes it a much better fit for science than Java. Plus, a lot if not most science code is short scripts and one-offs, so the benefits of types for larger projects don't come in at all.

Unlike enterprise users, scientists wanting to write a program have no time for 100 lines of ceremonial crap to make it look "enterprisy", nor much tolerance for a language with much less expressive power.


Petzold offered the canonical IDE rant => http://charlespetzold.com/etc/DoesVisualStudioRotTheMind.htm...


I think it may be partly to do with the fact that a lot of people do not like using an IDE. Then they don't get as much of the benefit. (Particularly a few years ago there was a lot of negativity about IDEs on forums like this)


Unfortunately to get that tier you have to put up with a gargantuan high-latency IDE that thinks a lot. Helpful yes, but not a clear win on non-huge projects.


It's a short-term vs long-term trade-off.

Learning a dynamically typed language like Python is moderately easier than learning a statically typed language, especially old-fashioned languages like Java without type inference. OTOH using a statically typed language, once you've learned how to handle it, makes for a more powerful software engineering experience, but you'll have to climb the hill of typing first.


How tall is that hill? I wrote C# professionally every day for 3.5 years. The last day was as painful as the first.

I can appreciate making an investment in tools that make me more productive in the long run, but at some point it has to start paying off. Programming languages have finite useful lifetimes. Common Lisp took me years to learn well, too, but it started paying big dividends after only a couple weeks.


Are you still a Common Lisper, and what do you use it for?


People don't use dynamic typing over static typing because dynamic typing is "easier" to learn. Types are not a difficult concept.


I think what replaces static typing as a safety rail in dynamic languages is testing, and the rise of dynamic languages over the last 1-2 decades has been made possible by the parallel rise of automated test suites.

Personally, I would not attempt large refactorings without a decent test suite. I consider enabling fast and safe refactoring to be the main advantage of test suites!


Except that as anyone on large enterprise projects knows, unit testing is the bullet point that comes just after having documentation ready.


I have to disagree. Unit testing comes as soon as you write _any_ code. You write a new function/class? Your pull request better have unit tests for it or I'm rejecting that shit.


Is the customer paying for the work hours doing code reviews?

If they do, usually code reviews come after unit tests.


I don't doubt that true in many places.

Writing tests is also an absolute must in many organizations.

My advice to you is to look for work in organizations that are not garbage. We're out here!


Agree about the dopamine flow from working with a good compiler/IDE. Golang's compiler is super fast and super fun, with very helpful messaging. Enjoyable refactoring process, to be sure.

In terms of dynamic languages, the Clojure ecosystem has a lot of techniques that help with dynamic language codebase scalability. Very few are exhibited in Python codebases I see, I imagine for various reasons.

The most important is short, single-purpose functions arranged in layers - essential for working at the REPL, where you may want to inspect or interject at any point in the data flow.

Related is a clear segregation between code and data, and explicit data flow/state control. I see this violated all the time in Python classes, which are very clever but should almost never be used.

Clojure Spec is also a revelation, much more expressive than a static type system and also (IMO) much more comprehensible. Nothing equivalent in the Python world.

Python works hard to be readable in the small, but it is a struggle in the large, both because the language does not help you and because the ecosystem is still pretty immature.


I concur. I am writing a new project in Python after some months writing exclusively Clojure. I decided to write the code using small, pure functions, avoiding classes unless strictly necessary, and it makes the codebase much easier to maintain in the long run.

However, I don't think the Python ecosystem is immature; for me it is the contrary. This project started with Python since I didn't find a good way to reduce Clojure's startup time, and believe me, I tried everything I found (from GraalVM to ClojureScript). And there are gaps everywhere in the Clojure ecosystem when you want to create command-line tools, while Python has literally everything I needed.


BTW, if there is any way I can get Clojure startup time below 0.1s (actually, up to 0.2s wouldn't be so bad, but this is my limit) that doesn't involve something like a running daemon, I would be all ears. I would really like "real" Clojure in this case (i.e. not ClojureScript), since Clojure's ecosystem is much better.


Other options like SBCL would allow this easily:

  $ echo "(format t \"Hello\!~%\")" | cat - > /tmp/test.lisp
  $ time sbcl --script /tmp/test.lisp
  Hello!

  real 0m0.014s
Executables are easily created, though they are not small in SBCL.

  $ sbcl
  This is SBCL 1.5.9, an implementation of ANSI Common Lisp.
  More information about SBCL is available at <http://www.sbcl.org/>.

  SBCL is free software, provided as is, with absolutely no warranty.
  It is mostly in the public domain; some portions are provided under
  BSD-style licenses.  See the CREDITS and COPYING files in the
  distribution for more information.
  * (sb-ext:save-lisp-and-die "howfast"
       :toplevel (lambda () (format t "Hello!~%"))
       :executable t)
  [undoing binding stack and other enclosing state... done]
  [performing final GC... done]
  [defragmenting immobile space... (fin,inst,fdefn,code,sym)=1026+935+18027+18435+25326... done]
  [saving current Lisp image into howfast:
  writing 0 bytes from the read-only space at 0x20000000
  writing 432 bytes from the static space at 0x20100000
  writing 26804224 bytes from the dynamic space at 0x1000000000
  writing 1990656 bytes from the immobile space at 0x20300000
  writing 11935744 bytes from the immobile space at 0x21b00000
  done]
  
  $ time ./howfast
  Hello!

  real 0m0.016s


Did Graal not work because of limitations in the AOT compiler? I haven't spent a lot of time with it, but the few experiments with e.g. little command-line utils yield startup indistinguishable from C or Go apps. (The compilation time required by Graal is still completely non-competitive, but it could potentially work as a packaging step...)


Yeah. When you try anything remotely complex with GraalVM it starts to break, i.e. anything related to eval (and this wasn't my code actually; however, a good part of the Clojure ecosystem depends on eval).


https://planck-repl.org/

Might help? Clojurescript, but fast start up at least...


Planck/Lumo are really slow for actual projects (0.5s of startup time) and I don't know why. For some reason the REPL is fast, and I didn't investigate why the REPL is fast while an actual project is slow.


Probably by not using Clojure and using another Lisp with a good startup time instead.


Using a Common Lisp or a Scheme with an AOT compiler, like SBCL or Chez, would have sorted it out.


> Related is a clear segregation between code and data, and explicit data flow/state control. I see this violated all the time in Python classes, which are very clever but should almost never be used.

Could you expand on this one?


Appreciate the question. The crux of this observation is that the easy opportunity Python affords to create class-based encapsulations introduces a tension between a world where data flow and state are hidden, implicit, and private - in service of attempting to define precise type-like abstractions - and a world where data flow and state are explicit and public.

Although Python does a lovely job in defining type-like protocols that classes can participate in- e.g. all the special dunder methods- most Python code in the wild that I see is not beautiful-in-the-large, well-developed, reusable, leverageable abstractions. It's just business logic, conditionals, utility code, scripts-turned-into-apps, etc. Plenty of reusable functions, very few reusable types.

Putting business logic mush into classes with some state and calling it a type is a straight line to unmaintainability. It becomes very difficult to extend such a system with new "types" or to modify behavior of existing types, because there is a lack of clarity about the semantics of the existing types.

Rather than trying to define types with state, much better to treat state as data- the difference is that the term state implies some special smart type-like semantics, while data is just dumb keys and values.

The comparison I drew was with Clojure, which doesn't make it so easy to make classes that hide state or create fake types. Instead it has graduated options for lightly packaging data elements together- start with maps/dicts and move onto records- and optimizes for just chaining functions that operate on those data blobs. "Simple", as Rich Hickey famously says. There is no spaghetti-inducing tension between keeping logic "encapsulated" vs coding it "in the open."

For regular line of business apps, the latter is so much better when it comes to maintainability, but the presence of classes provides a constant temptation to find reusability/encapsulation/abstraction where it doesn't actually exist. In Python, IMO, classes should only rarely be used.
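As a sketch of the alternative (invented domain) - state as dumb data flowing through small pure functions, no class required:

    def apply_discount(order, pct):
        return {**order, "total": order["total"] * (1 - pct)}

    def add_tax(order, rate):
        return {**order, "total": order["total"] * (1 + rate)}

    order = {"id": 7, "total": 100.0}
    order = add_tax(apply_discount(order, 0.10), 0.21)  # {"id": 7, "total": 108.9}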

Hope that helps?


I would love to see clojure spec in python.


Pyflakes should get you ~90% of the way; PyCharm etc. can get farther. Eventually you fall back on your automated testing and manual spot checks as you approach the quality target.

It's not quite the burden HN-ers make it sound like.


> Type hinting feels like a bandaid for a fundamental limitation of dynamic languages.

What...? Everything you describe here is the symptom of a bad codebase, not the symptoms of a dynamic language...

> I end up having to rerun the code repeatedly to find and squash bugs, and that says nothing of the code paths I don't end up taking


I prefer tools that don't optimize for the unicorn "good" codebase but rather help me with real-world code. Also, many problems are irreducibly complex. Just because there are a lot of code paths doesn't mean it's bad code. Not every software project is a toy.

Everything in their post describes real-world code and projects beyond toy size.


> Is there a better way to utilize the convenience of python for experimental code without running into the scalability issues of large python codebases?

Type hints and doctests.

If that doesn't get you most of the wins of static typing, then you are probably just dealing with a toxic codebase or something.
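A sketch of the combination (the function is hypothetical, just to show the shape): mypy checks the annotations statically, and the doctest exercises the code for the cases the types can't express.

    def mean(values: list[float]) -> float:
        """Return the arithmetic mean of a non-empty list.

        >>> mean([1.0, 2.0, 3.0])
        2.0
        """
        return sum(values) / len(values)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()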


I'm a Python programmer who has to interoperate with a messy legacy C# codebase. Consequently I don't have a good impression of C#.

Can you suggest some modern, best practice C# projects for me to see how the language should be used?


Jellyfin is modern and written in C#, but I'm not sure if it follows any best practices. https://github.com/jellyfin/jellyfin


It mightn't be the best example in the world. It's a fork of Emby, and one of the Jellyfin devs on the Jellyfin subreddit says it's been a lot of work to clean up.


> Can you suggest some modern, best practice C# projects for me to see how the language should be used?

I'm curious about this as well.


What are the issues you’re running into?


Sure, but would you rather have a bandaid or nothing at all?

I don't think many people disagree that strong typing is a requirement for huge projects.


I keep jumping between C++, Python and Java for various projects, and really, they are different worlds.

In C++ you think hard about your problem, create strong abstractions that translate into fast execution. That's a highly satisfying engineering job.

In Python you do happy exploratory programming: changing data structures on the fly, hacking your way through unexpected problems by doing crass things like adding members to a class at runtime, as in the snippet below.
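For example, the kind of crass-but-handy move meant here (illustrative snippet):

    class Sample:
        pass

    s = Sample()
    s.label = "outlier"  # bolt a member onto an instance at runtime
    Sample.describe = lambda self: "Sample(%s)" % self.label  # ...or onto the class
    print(s.describe())  # Sample(outlier)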

You can do exploratory programming in C++, but that requires a lot of additional work to experience a fraction of that freedom. And you can do solid engineering in Python, but that likewise requires a lot of additional work.

Different tools for different things. Python is the off-road bike you take for a bit of fun in the forest. C++ is the family car you take to do the week's grocery.


> In C++ you think hard about your problem, create strong abstractions that translate into fast execution. That's a highly satisfying engineering job.

C++ is an awful language that requires too much thinking. Make a mistake and you'll pay for it with hidden memory leaks and segfaults. It offers no help. No other mainstream language is quite so unforgiving. Rust, Go, and Swift will hopefully relegate it to the dustbin. (Of the three, only Rust is suitably non-GC'd for bare metal requirements.)

> C++ is the family car you take to do the week's grocery.

C++ is the Soyuz space capsule. You'd better pack a gun, because there might be bears where you land.


Especially when one insists on coding C++ like C.


> It offers no help.

Try using a compiler that wasn't made in the last millennium.


Let me preface this by saying that this post isn't claiming Python is better than C#. I worked extensively in C# before I worked in Python, and if it weren't for licensing, I would still be working in C#. Both languages have tradeoffs, and they are different enough that they're just hard to compare. Looking strictly at the development experience of using the language, I'd be hard-pressed to say one is better than the other.

> Type hinting feels like a bandaid for a fundamental limitation of dynamic languages. I've just gotten back to a complex, experimental codebase after only a couple months of absence, and am refactoring it to accommodate for the implementation of a number of previously unplanned features. Even with type hinting and heuristic linting it's such a huge pain! After making a large number of changes I end up having to rerun the code repeatedly to find and squash bugs, and that says nothing of the code paths I don't end up taking. Is there a better way to utilize the convenience of python for experimental code without running into the scalability issues of large python codebases?

What's your unit test coverage like? I'm not asking for coverage percentages (those are effectively useless); I'm asking: have you built out test coverage to the point that you trust your test suite?

C# is probably the best mainstream example of a strongly-typed, statically-typed language[1]. People coming from strongly-typed, statically-typed languages tend to lean heavily on the type system. So when they come to a dynamically-typed language, they feel like a vital tool for preventing bugs and structuring code has been taken away from them.

In his essay "Yes, We Have Noticed the Skulls"[2], Scott Alexander notes that outsiders often look at a group and see a pile of "skulls" in the group's wake (problems the group has had in the past) without realizing that those problems are just as obvious and concerning to the group's insiders, and have already been addressed.

The Python community is no exception: we know that a lack of static types makes our code prone to bugs. And the solution the Python community has come up with is automated testing. It's no mistake that test coverage, test-driven development, behavior-driven development, etc., were popularized in communities built around dynamic languages, and there's extensive, effective tooling for these in Python.

And here's the thing: automated testing gets you more than static typing does. Haskell programmers in particular like to claim that "if it compiles, it's correct," but even with Haskell's type system, which is stronger than C#'s, that's just not true. Types aren't a silver bullet: there's no shortcut around running your code to test it, and if you have to run your code to test it, then it makes sense to automate that running.
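A trivial illustration of why "it compiles" isn't "it's correct" (contrived on purpose):

    def absolute(x: int) -> int:
        # Type-checks cleanly under mypy, but the logic is inverted.
        return -x if x > 0 else x

    print(absolute(-3))  # -3, not 3: only running the code reveals the bug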

And when it comes to unit tests, the static type system gets in your way. Yes, you absolutely can (and should!) write unit tests in C#. But compilation slows down your test cycle. A static type system makes it harder to tell where you should write a test and where you can rely on the type system. You might want to pass a null value to a function in a test because the parameter doesn't matter for that test, but if that parameter can't be null in production, you don't want to make it nullable (you want the type system to catch that), which means you have to hydrate a potentially complex object just to satisfy the type system. Mocking is much simpler in dynamic languages; in C# you end up creating a lot of interfaces that are implemented exactly once, just so you can mock them. The list goes on.
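To illustrate the mocking point from the Python side, using the standard library's unittest.mock (the payment-gateway names are made up):

    from unittest.mock import Mock

    def charge_customer(gateway, amount):
        # Business logic depending on some payment gateway object.
        if amount <= 0:
            raise ValueError("amount must be positive")
        return gateway.charge(amount)

    # No interface needed: any object with a .charge attribute will do.
    fake_gateway = Mock()
    fake_gateway.charge.return_value = "ok"
    assert charge_customer(fake_gateway, 10) == "ok"
    fake_gateway.charge.assert_called_once_with(10)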

The result is a sort of Pareto 80/20[3] situation: for equivalent C#/Python codebases with high reliability, C#'s type system gets you 80% of the reliability with 20% of the work, but unit testing to cover the last 20% ends up filling up the other 80%.

There is probably an upper bound to this. If you really need 100% reliability, it will probably take an extra 200% effort in C#, but it might not be possible with current tooling in Python. But very few projects fall into this category.

My impression of type hinting in Python is that the endgame for type hinting actually has little to do with finding bugs: it's more about optimization. The fact that type hints can be used for code verification is just a happy side effect.

As a final note, I'll add that assertions are an extremely useful tool that I feel is underused in both the C# and Python communities. In C#, assertions can fill a lot of the gaps where the type system can't catch errors. In Python, assertions can be used to verify constraints without the boilerplate of a unit test (the tradeoff being that an assertion doesn't inherently run when you run your test suite). In both languages, assertions increase the value of your unit tests, because they check their constraints whenever the code runs under a test, even if the constraint isn't what that test is intended to exercise. And in both languages, assertions make debugging easier because they move the error detection closer to where the bug originates.
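A small Python example of the pattern (the overbooking invariant is invented for illustration):

    def allocate(seats_available: int, requested: int) -> int:
        assert requested > 0, "caller bug: non-positive request"
        remaining = seats_available - requested
        # Fail where the bug originates, not three call frames later.
        assert remaining >= 0, "overbooked by %d" % -remaining
        return remaining

    allocate(10, 4)    # fine
    # allocate(3, 5)   # would fail fast with "overbooked by 2"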

[1] There are better examples of strongly-typed, statically-typed languages, but they aren't mainstream. I'm not criticizing anyone's language, so please don't bite my head off about this.

[2] https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-th...

[3] https://en.wikipedia.org/wiki/Pareto_principle



