
Implement with Types, Not Your Brain - Tehnix
https://reasonablypolymorphic.com/blog/typeholes/
======
quickthrower2
> I don't need to use my brain to do programming

...

    
    
    hoistStateIntoStateT
        :: Sem (State s ': r) a
        -> S.StateT s (Sem r) a
    hoistStateIntoStateT (Sem m) = m $ \u ->
      case decomp u of
        Left x -> S.StateT $ \s ->
          liftSem . fmap swap
                  . weave (s, ())
                          (\(s', m') -> fmap swap
                                      $ S.runStateT m' s')
                          (Just . snd)
                  $ hoist hoistStateIntoStateT x
        Right (Yo Get z _ y _)     -> fmap (y . (<$ z)) $ S.get
        Right (Yo (Put s) z _ y _) -> fmap (y . (<$ z)) $ S.put s
    
    

...

I definitely need to use my brain to _read_ that code though.

~~~
munchbunny
I have a similar experience with mathematics. "The differential form
expressions for Maxwell's equations are incredibly simple!" Yup, they're
simple after you've done all of the work to wrap your mind around differential
forms in the first place. I did it at some point. It was rewarding, and it was
elegant, and it was even _fun_ , but it was anything but easy on my brain.

Haskell feels a lot like that. That code snippet gave me the same feeling of
having to climb a conceptual mountain to appreciate the metaphorical view at
the top. I have no doubt that there is something beautiful there, but of all
of the conceptual, programming related mountains I need to climb, the language
abstraction one is pretty low priority, because the language I am using really
is not my limiting factor. I appreciate that others who care spend a lot of
time thinking about this problem, but Haskell's got a long way to go before
I'm convinced that it's worth using in everyday contexts.

~~~
schwurb
Note that Haskell makes very few assumptions about its users; the quality and
complexity of Haskell code vary greatly. The author of this blog post is
dealing with an abstract topic, which results in somewhat arcane-looking code
(I am sure that after some refactoring it would read more nicely). It is,
however, very possible to write easy-to-understand Haskell code by ignoring
its fancier features and sticking to the most-used abstractions: Monads,
Applicatives and Functors. They alone are enough to write great code; no
category theory, effect library, or type magic is needed. Simple, real-world
Haskell exists, it is just not often blogged about.

~~~
munchbunny
Fair enough, I just disagree with the premise that heavily relying on the type
system actually lets you think less hard about how to implement your code. In
my experience, defining the types you'll use at module boundaries is the
actual hardest part. The stuff in the middle is usually straightforward until
you realize you need sub-modules, at which point you're defining module
boundary types again.

~~~
seanmcdirmid
A good type is like a heads up display about where you are in your
implementation. Haskell types are often far away from being that, however.

------
tempsolution
Sorry, but this is just crazy. I always wonder whether people writing blog
posts like this have ever worked in a big company. You need to spend at least
2-3 hours a day doing code reviews, and most of the rest writing code, which
also means reading code you wrote yourself and code others have written.

Now, we currently use Java here. I have worked with other programming
languages in companies too, for instance C and C++, which were a nightmare
without equal. Don't get me wrong: when I was in college I loved creating
crazy things with template metaprogramming and preprocessor metaprogramming.
Once you mature out of it, you realize it is about the craziest thing you
could ever do.

The reason is simple. Even plain, stupid Java is already incredibly hard to
verify during a code review. Spend two hours in a row trying to find problems
with other people's well-written Java code and your brain feels like it will
explode. With C++ the same task is almost impossible and will likely require
eight hours of your day for the same code volume. With Haskell I would go out
on a limb here and say it would require 24 hours of your day to handle the
same code volume...

Not to mention that most programmers, even at Google and Amazon, would not be
able to read or write such code, and people like me, who could and did in the
past, would just shake their heads and move on. Nobody wants to deal with such
code outside of a university...

~~~
mabbo
> most programmers even at Google and Amazon would not even be able to either
> read or write such code

Hello, I am at Amazon. I spend a decent portion of my time working with an in-
house Clojure library (not entirely dissimilar to Haskell) which handles some
mission critical business logic my team owns.

It terrifies me. I hate it.

For every hour I spend working with it, I spend another hour trying to
convince everyone we need to rewrite it in Java, immediately, or replace it
with a different library. None of us have a clue how it works. We can't debug
it. The guy who wrote it quit three years ago.

I mean, it works. It works very well. But the day it doesn't work, we're not
going to have a clue why that is. And then we're in real trouble.

~~~
rfrey
You could use those hours currently spent lobbying for a rewrite to, you know,
learn clojure.

~~~
tempsolution
Yet another myth that comes from not working in big companies... Learning a
language there is mostly useless, for several reasons: 1) You will be the only
person willing to learn it. 2) You will have no capable reviewers, and you
usually need about two for each CR. 3) Your team has no production knowledge
or muscle memory for that language, so pretty much everything you do will be
garbage for a very long time and a huge operational risk. 4) Other teams at
your company will not know the language either.

Bottom line: Pick the language most teams at your company are familiar with.
At Amazon that simply is Java for anything that is not frontend, period. Even
getting into Kotlin presents a major obstacle. Clojure? Yeah right.

Edit: Just to clarify. Working in a big company is not meant as a restriction.
In fact, having worked in several startups before, these would do well to
adopt the same principles for programming and prohibit this "university
graduate" way of developing software. In the end, if your startup is going to
survive, it's going to become a big company itself someday.

~~~
ulucs
Learning a language that is almost completely useless for your company is
still useful for you. It adds to your skill set, increases your market value
and keeps you learning.

Only learning corporate-approved languages is a bit shortsighted. Tech
companies, development paradigms and programming languages all have shorter
lifespans than humans. COBOL and Fortran, once very successful and popular
languages, stopped being mainstream a while ago. And those two were the
survivors of their batch; who knows how many then-popular languages have faded
into obscurity?

Also you have to keep learning even if you are to stick to the same language.
Java is not 1.6 anymore. Languages keep evolving by borrowing good ideas from
other languages. So why not use your learning time to stay ahead of the curve?

Edit: All languages have their places, even in big companies. Could Whatsapp
and Discord have grown at the same pace while using Java/C#? BEAM is the right
tooling for them and it has paid off.

~~~
apta
> Could Whatsapp and Discord have grown at the same pace while using Java/C#?

Arguably, yes. The JVM and .NET CLR are both superior platforms when it comes
to performance. They also both offer actor models with supervision (e.g.
Akka/Akka.NET).

------
kazinator
> _Which means you can slowly use type holes to chip away at a difficult
> implementation, without ever really knowing what you're doing._

Yeah, look! I started with a function signature and a little _. Lo and behold,
fifty-seven holes later, I have a graph-coloring optimal register allocator
for MIPS for my compiler back end. Don't ask me how it works, but the type
system assures it.

~~~
state_less
This type system, is it powered by AI?

~~~
mrkeen
There's nothing fuzzy going on. If I said f(x) = 2, you could quickly work
backwards from that and state: 2 is an Int; x isn't used, so it could be any
type, let's say a. That means f has type (a -> Int).
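A minimal sketch of that reasoning in actual Haskell (the name `f` comes from the comment above; the signature is, up to renaming and specialisation, the one GHC infers if you omit it):

```haskell
-- x is unused, so it can be any type; the literal 2 forces the result to
-- be numeric. With no signature, GHC infers (Num b => a -> b); fixing the
-- result to Int gives exactly the type worked out above.
f :: a -> Int
f _x = 2
```

In GHCi, `:type f` reports the signature, and `f` returns `2` no matter what type of argument it is given.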

------
hardwaresofton
This post is _not_ for beginners/non-haskellers, I don't understand why it's
being posted here. Is this intentional or are people in the haskell community
_this_ tone deaf? It's like we're determined to prevent new people from
joining.

Dear everyone, code like this is _not_ where Haskell starts to get useful,
it's useful much earlier. I won't reproduce it here but I _just_ wrote a post
with some practical code (and how haskell helped) taken from a project of
mine:
[https://news.ycombinator.com/item?id=20260095](https://news.ycombinator.com/item?id=20260095).

If you are using Coyoneda in your code, please do not use it to extol the
virtues of Haskell, except if you're sure your audience is expert
haskellers/mathematicians/researchers/others who can actually gain something
from the post, or if the writing is so sublime that it has reduced the
complexity to near zero.

Please OP, you're actively hurting the adoption of Haskell by posting stuff
like this in the wrong places/context.

The author of the article (Sandy Maguire) created a library called polysemy
that improved on the efficiency of a concept called Free Monads in Haskell --
their main benefit is that they let you write functions that deal _only_ in
your business domain. A rough corollary would be a dependency-injected
function that _could not_ do anything outside of what the dependency-injected
pieces do, but that also allows different dependency implementations to be
mocked out entirely within the language (i.e. _without_ the XML/annotation
hell that is Spring DI magic).

~~~
danharaj
> This post is not for beginners/non-haskellers, I don't understand why it's
> being posted here. Is this intentional or are people in the haskell
> community this tone deaf? It's like we're determined to prevent new people
> from joining.

There are plenty of submissions on HN that are aimed at specialists of a
field. They get to the front page because people are curious about specialized
fields. So, even though I disagree from the pits of my soul with the content
of the submission (as a professional Haskell developer, I should add), I think
you're being very unfair to say that stuff like this shouldn't be shared with
a broader audience.

~~~
xelxebar
Oh man, I'd absolutely _love_ to work in Haskell professionally. Mind if I ask
for your advice? As someone with ~4yrs in industry as a mobile developer and a
master's in math, what are some routes I should look into?

~~~
danharaj
So, I got my first job by going to my local Haskell meetup and chatting with
someone who would end up being my boss, and I've stuck with that crew since. I
think that's rather serendipitous but it definitely doesn't hurt to go hang
out with the Haskellers in your area. There's a regular meetup in many major
cities, including Boston, SF, NYC, London and Zurich off the top of my head.
Our shop has also hired people who respond to our job postings (on Reddit, for
example), as well as people who participate in the community on IRC and
Discord either by word of mouth or recommendation.

Now, as someone who evaluates resumes and interviews people, the only Haskell
specific thing I care about is making sure the candidate knows how to use the
core of the language, which is basically Haskell2010 + some indispensable
extensions. I expect fancier stuff to require training, and to be honest it's
sometimes better to properly contextualize that stuff for someone instead of
having them come in already thinking they know how to use the footguns :^). My
company asks candidates to attach or link code samples if they have them. If
they don't, we ask them to write a small Reflex [0] app, because we're
primarily a Reflex shop.

Now, every shop is going to be different and expect different things from
their candidates. That's one reason why getting involved in the community is
so useful. If you hear directly from people who employ Haskellers in your area
what they want, then you can build your credentials for that. Besides that,
you just gotta apply. If you're a strong candidate in another tech stack,
you're a strong candidate for a Haskell stack too if you can demonstrate you
can use the language. There's currently a remote job posting on r/haskell if
you want to give it a shot. I wish you the best of luck :)

[0] [https://reflex-frp.org/](https://reflex-frp.org/)

------
sfink
This reminds me of solving physics problems by getting the units right.

Blah blah ...15 meters per second... blah blah ...after 10 minutes... blah
blah ...how far... blah blah => aha! I probably want to multiply 15 by (10 ×
60).
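The same bookkeeping can be made mechanical with types; a sketch using hypothetical unit newtypes (none of these names come from the article):

```haskell
newtype MetersPerSecond = MetersPerSecond Double
newtype Seconds         = Seconds Double
newtype Meters          = Meters Double
  deriving (Eq, Show)

-- The units leave only one sensible operation: multiply speed by time.
distance :: MetersPerSecond -> Seconds -> Meters
distance (MetersPerSecond v) (Seconds t) = Meters (v * t)
```

`distance (MetersPerSecond 15) (Seconds (10 * 60))` gives `Meters 9000.0`, mirroring the 15 × (10 × 60) above.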

~~~
tome
Yes, very much the same flavour! In each case although it doesn't guarantee
the right answer it gets you closer.

~~~
dllthomas
Obligatory: [https://xkcd.com/687](https://xkcd.com/687)

------
jrochkind1
Are they suggesting writing code that you can't later understand or debug?

Does this actually work out for Haskellers? I mean, I guess I'll believe you
if you tell me it does, Haskell is different enough from the kind of
programming I do that I don't think I know anything at all about it.

~~~
jeremyjh
Some of the type-machinery code (hint: it has words like "hoist" or "lift"
that describe type operations, not data operations) _is_ like that, and it's
really not a problem, because it does the only thing it can do given the
types, and what it does has nothing to do with your business domain or I/O,
so it's never going to change unless a library you are using changes. When
that happens it's annoying, but you just play type tetris again until the red
squiggly lines go away, and then you are fine for another few years. It isn't
something I'd really brag about though, as it's really hard to explain the
benefits of this stuff to non-Haskellers, and you can end up wasting infinite
amounts of time polishing these abstractions instead of getting work done.

~~~
jrochkind1
> because it does the only thing it can do given the types

Okay, this may be (probably is) a stupid question, because I know no Haskell
whatsoever, but... if there's really only one thing it can do given the types,
why does the code have to be there at all? Why can't you just write "Do the
only thing you can do given these types"? If it's code humans aren't meant to
read anyway, and it really is the _only_ thing that could be done... why can't
the compiler just _do_ it without you having to put the non-human-intelligible
code in your source files?

~~~
syrak
This is an interesting question.

Technically there is more than one possible implementation, but you would also
have to go out of your way to get it wrong. Automatically determining what
constitutes "going out of your way" so that the code can be generated
automatically is an active area of research (program synthesis), but as this
post already shows, for "simple" cases, we're getting pretty close.

As the post mentions, there are certain dead giveaways of an incorrect
implementation, such as unused variables. Conversely one may still have some
rough idea of the desired code to guide the implementation. The point is not
about entirely removing one's mind from the process, but to allow oneself to
only think about the choices that matter, which are few.

Alternatively, another way to look at the problem is that these types, while
already being quite precise, are still not as precise as they could be,
because the type system is not sufficiently expressive. And even with the
ability to describe the desired properties to uniquely determine the
implementation:

\- it may not be obvious that the implementation is in fact uniquely
determined;

\- a solution, unique or not, may not be easy to find automatically. Program
synthesis is essentially the same problem as proof search (cf. the
Curry-Howard correspondence); knowing that there is a proof of Fermat's last
theorem is not sufficient to construct an actual proof of it from scratch.

For these reasons, it may still be desirable to nail down the implementation
explicitly even if it is hard to read.

It's also worth considering the fact that polysemy (the package that the
controversial snippet at the beginning of the post comes from) is very much an
implementation of a state-of-the-art effect system using state-of-the-art
features of Haskell's type system, so it can be expected that the abstractions
to make this code more digestible (for the right audience) are still missing,
because no one has ever thought of how to express them yet.

------
mpweiher
This piqued my interest, because that's the same way TDD works for me.

And then it starts with "Let's go through an example together. Consider the
random type signature that I just made up:"

Hmm...I _never_ have a type signature as a starting point, random or not. I
have some sort of requirement, some sort of _functionality_ I want out of the
code. Soft fuzzy requirements.

And the types rarely if ever tell me what the function _does_. Heck, a
function that's _int x int -> int_ could be just about _anything_. How about
_string -> int_? Does it convert the string to an integer or count the
vowels?
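Two illustrative functions sharing the type `String -> Int` (both made up here) make the point concrete:

```haskell
import Data.Char (isDigit)

-- Same type, entirely different behaviour: the signature alone cannot
-- tell you which of these a "string -> int" function is.
parseInt :: String -> Int
parseInt s
  | not (null s) && all isDigit s = read s
  | otherwise                     = 0

countVowels :: String -> Int
countVowels = length . filter (`elem` "aeiou")
```

`parseInt "42"` is `42`, while `countVowels "hello"` is `2`.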

~~~
golergka
> Heck, a function that's int x int -> int could be just about anything. How
> about string -> int?

That's because using types like "int" and "string" is not a real, valid use of
a good type system. Instead, you'd typically have a function of type `newton
-> sqMeter -> pascal`, or `username -> accessLevel`. All these types would be
implemented on top of the basic int and string types, but declaring them
explicitly, with very limited conversions in between, actually uses the type
system to verify the correctness of your code.
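A sketch of the `username -> accessLevel` idea from the comment above (the names and the access rule are purely illustrative):

```haskell
newtype Username    = Username String
newtype AccessLevel = AccessLevel Int
  deriving (Eq, Show)

-- Conversions live in one place; elsewhere a bare String can no longer be
-- passed where a Username is expected, and vice versa.
accessLevel :: Username -> AccessLevel
accessLevel (Username name)
  | name == "admin" = AccessLevel 9
  | otherwise       = AccessLevel 1
```

The wrappers cost one line each, and any accidental mixing of a raw `String` with a `Username` becomes a compile error.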

~~~
mpweiher
So you're going to not have arithmetic or string manipulation?

~~~
tathougies
Not really. I may have vector length manipulation and address formatting
though. But vector lengths are not integers (or natural for that matter) and
an address is certainly not a string. Perhaps you could use these _primitives_
to represent these things but we should not confuse representation with
equality.

~~~
AgentOrange1234
Yes so much.

An integer can represent all sorts of things, so forever programmers have been
using integers to represent all sorts of things.

If instead of “int age, int size, int color” you have “age_t age, size_t size,
color_t color”, and a type system that helps keep these apart, so much
confusion can be avoided...

------
noncoml
I really wanted to get into the promised land of Haskell, and tried hard to,
but my enthusiasm went south when I found out I could hit bugs with lazy IO
that the compiler wouldn't warn me about.

~~~
jose_zap
Lazy IO was an unfortunate historical accident, one which is much preached
against in the Haskell community. Alternative standard libraries that remove
this accident abound in the ecosystem.

------
slifin
I've never read a code base where I thought the type system was doing a good
job of being a DSL for describing business requirements.

Types are great, IMO, for checking against primitives, because the semantics
of int, string, array, etc. are strong. But most languages encourage wrapping
primitives in some user-defined wrapper, and then the semantics of the wrapper
are implicit, brittle and known only to the author.

The semantics of your Bob class are more likely to change as it's used across
many different contexts than the int type is. If the int type is being swapped
out, it's usually because the wrong data is being used -- which is what we are
using types to protect against -- not because there's a problem with what int
is.

~~~
syrak
> I've never read a code base where I thought the type system was doing a good
> job of being a DSL for describing business requirements

Would refinement types help in that respect? For example, the F* language.
[https://www.fstar-lang.org/](https://www.fstar-lang.org/)

I don't have any experience with "business" code, but my naive understanding
is that application-level code is typically very monomorphic, which is exactly
where refinement types are useful, whereas the OP leverages polymorphism to a
large extent (though it doesn't have to), and that works well for
general-purpose libraries, which don't and must not care about their users'
data.

------
dan-robertson
I read the argument in the article as something like: "strong, advanced type
systems like Haskell's are good because instead of manually writing hard
functions, one can just write the type (i.e. the meaning of the function) and
do a bunch of menial compiler-directed work to build up one's function in
little steps. This is great because one doesn't need to 'actually' write such
horrible functions."

I find it slightly silly because one still has to read such functions so it is
still hard to know that they do the right thing. For example, consider the
function in the article which has type:

    
    
         Sem (State s ': r) a
      -> S.StateT s (Sem r) a
    

It seems pretty obvious what this morally should do.

In some cases it is possible for polymorphism to force a value of a certain
type to always behave a certain way. E.g. a value of type [a] must be the
empty list, because that is the only (total) value with that type. On the
other hand, there are multiple values of type [[a]] (i.e. an empty list, an
infinite list of empty lists, or anything in between). A more complicated
example is that the only thing a function with the signature of compose
((b -> c) -> (a -> b) -> (a -> c)) can do is compose functions, whereas a
function of type (Int -> Int) -> (Int -> Int) -> (Int -> Int) can do just
about anything. Outside of abstract Haskell libraries, code tends to look more
like the second case than the first.
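The contrast can be shown directly; a sketch in which the monomorphic definition is deliberately perverse to demonstrate that it still type-checks:

```haskell
-- Parametricity leaves essentially one total implementation:
compose :: (b -> c) -> (a -> b) -> (a -> c)
compose f g = \x -> f (g x)

-- The concrete types constrain almost nothing; this "composition"
-- ignores both arguments and still compiles:
composeInt :: (Int -> Int) -> (Int -> Int) -> (Int -> Int)
composeInt _ _ = \_ -> 42
```

`compose (+1) (*2) 3` is `7`, as composition demands; `composeInt (+1) (*2) 3` is `42`, and the compiler is perfectly happy with both.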

In this case it is hard to be sure that the function must behave in the
correct way because of its type. I think it hinges on whether it is possible
for Put to change the type of the state. If it can, then the types force you
to always take the newly put value and return it with Get; otherwise it would
be possible for Put to, e.g., only sometimes work. So if one has to check that
the code is correct, one must either read the code very carefully, or very
carefully read the type signature and deduce that the code is correct.

In the first case, the fact that the compiler made the code easy to write
doesn't really help much. Perl 5 taught us that just because a program was
easy to write does not mean it will be easy to read/check.

In the second case, why do we have a program at all? If there is only one
correct program we could have written then it seems to me that the program is
in fact the type and the compiler ought to have been more clever and written
it itself. (There are of course plenty of issues with that statement).

It seems to me that claiming “Haskell is great because it helps automatically
write difficult functions which are impossible to check” or “Haskell is great
because it makes me do the manual steps in writing a function that can only
ever do one thing and mixes that pointless code in with the thing I care
about” is a bit silly.

This all being said, I do think holes are a useful feature but I don’t think
they are useful for entirely writing a function for you. They are particularly
useful in proof assistants like Agda where one must manually exhibit a million
trivial propositions and holes help fill in all the boring gaps. Agda doesn’t
really suffer from the “multiple things a type could mean” issue.

~~~
syrak
> In this case it is hard to be sure that the function must behave in the
> correct way because of its type.

I would argue that in this case it is quite easy to ensure that it does the
right thing, because even if the implementation is not unique, the number of
possibilities is extremely limited. In the case of `Put` you can return either
the new state or the old one. It takes a single unit test to ensure it's
putting the new one, with parametricity to generalize from one case to all
cases, still without looking at the implementation.
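That unit test is a one-liner; a sketch using a hand-rolled State monad (defined inline so as not to assume any particular effect library):

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State mf <*> State ma = State $ \s ->
    let (f, s')  = mf s
        (a, s'') = ma s'
    in (f a, s'')

instance Monad (State s) where
  State ma >>= f = State $ \s -> let (a, s') = ma s in runState (f a) s'

get :: State s s
get = State $ \s -> (s, s)

-- The one behavioural choice the types leave open: does a later get see
-- the newly put state, or the old one? This put installs the new one.
put :: s -> State s ()
put s' = State $ \_ -> ((), s')
```

The single test `runState (put 5 >> get) 0 == (5, 5)` settles the question, and parametricity generalizes it from one case to all.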

> In the second case, why do we have a program at all? If there is only one
> correct program we could have written then it seems to me that the program
> is in fact the type and the compiler ought to have been more clever and
> written it itself. (There are of course plenty of issues with that
> statement).

Program synthesis being basically proof search, even after the right
automation is developed it may still be more practical to write the program by
hand.

> They are particularly useful in proof assistants like Agda where one must
> manually exhibit a million trivial propositions and holes help fill in all
> the boring gaps.

I think the same mechanisms and benefits are at play here in a non-
dependently-typed setting, even considering "entirely writing a function for
you" as an unwarranted exaggeration.

------
Gormisdomai
I would really love to see an IDE that auto suggests / autocompletes code
based on typed holes like this.

Does such a thing exist?

~~~
gcommer
Holes in Agda are a good bit more powerful than in Haskell, and Agda's Emacs
mode has a bunch of very powerful commands for working with them, e.g.
listing possible values, automatically filling them in, splitting them by
case, etc.

For example, for this post's "jonk" example I just had to copy the type into
Emacs, reformat it a bit into Agda syntax, then press C-c C-a, and it
automatically figured out the solution that the author worked through
manually: λ z z₁ z₂ → z₁ (λ z₃ → z₂ (z z₃))

See the full list of commands at
[https://agda.readthedocs.io/en/v2.5.2/tools/emacs-mode.html](https://agda.readthedocs.io/en/v2.5.2/tools/emacs-mode.html)
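For comparison, the Agda term above transliterates directly into Haskell (the `jonk` signature is reconstructed from that term; GHC's typed holes report the goal type and in-scope bindings at each `_`, which is enough to reach the same definition by hand, just without the automatic search):

```haskell
-- λ z z₁ z₂ → z₁ (λ z₃ → z₂ (z z₃))  becomes:
jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)
jonk f k = \g -> k (\a -> g (f a))
```

For instance, `jonk (+1) ($ 10) (*2)` evaluates to `22`.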

------
wellpast
> Once you really understand the typesystem, most of the time, types really
> are the best documentation --- they often tell you exactly what the function
> does, in a way that English comments never will.

Snake Oil.

Apply this claim to any set of real world functions, however well designed
they are.

This is classic Haskellism. Obsessed with the lattice of types in the room.
Door closed. Real world outside.

~~~
quickthrower2
Not completely. Types are documentation. When going from JS -> TS you suddenly
have the ability to create types that allow you to easily discover what is in
an object. You can now confidently say foo.bar.replace(...) and know it'll
work at runtime.

Otherwise you hope that someone has written in a doc somewhere that bar is a
string. And judging by the npm docs I've seen, that is very unlikely. So you
debug at runtime, head over to the GitHub repo, or do a
console.log(typeof(... hmm!

~~~
Scooty
My biggest problem with this claim is typescript doesn't include any support
for runtime type checking.

My experience with TS has been great until I need to deal with data that comes
from an external system (API, client, database). Then I either have to cast
the type (which can lead to bugs where compile and run time types don't match
and IMO isn't an option server-side) or I have to effectively duplicate my
types by writing type guard functions.

~~~
kyllo
Speaking of TS, here is a good argument for static typing:

>Change a front-end React prop, then follow static type errors through the
statically typed API, into the backend, all the way down into the database
where they force a change in the Postgres schema. The system won't even
compile unless React can talk all the way down to Postgres.

[https://twitter.com/garybernhardt/status/1140695491685933060](https://twitter.com/garybernhardt/status/1140695491685933060)

~~~
wellpast
You’ve just described hell.

~~~
wellpast
Okay, let me be more specific. I've built complex React apps full-stack down
to Postgres and maintained them over ever-evolving/increasing product
requirements. To schedule. With a tiny team, part-time dedicated. And little
maintenance/defects/outages in between feature pushes.

The success here lay in using various tools, applied _pragmatically_ at each
step according to the given context. Probably the most important tool is
defensive coding and building up a system from decoupled, almost visibly-
correct components. Then some nice technical tools thrown in: dynamic
development with hot reloading, some modest storybooking of components, a
REPL, good backend libraries, conscious and measured unit testing, strong
build/deploy/release tools.

The ability to turn on a dime and incrementally refactor things was paramount
to the success and efficiency of this development.

Then someone says we need to use strong static typing. All the way. No
judicious application of this tool; we'll use it everywhere. We'll make
everything cohere to the type system. This is no pragmatic choice. This is
religion. You know it's religion because it's an ideology that pervades the
whole system. There is no judicious application. The extreme tax of contending
with types _as your system evolves_ is high. I've only ever seen strong-typing
enthusiasts deny this truth. Will one enthusiast be honest here? You can
literally stand over a strong-typing enthusiast's shoulder as they spend an
hour running all over their system adjusting their types for one minor change
in business requirements, and they'll still say, "No, no, types don't cost
anything..."

~~~
kyllo
You sound smart, but I don't want to have to be that smart when I'm
refactoring code. I want the compiler to do as much thinking for me as
possible. I want to be able to change it in one place and have type errors to
remind me of any other places in the codebase I forgot to change.

Static analysis doesn't solve all your problems, but it solves enough of them
to be a very useful technique.

~~~
wellpast
Static typing cannot ensure about your code what the coder can and should
ensure themselves.

And often -- very often, especially in strongly typed languages -- the static
checker will reject code that is _otherwise_ valid. First point.

Second point: statically typed PLs (especially strongly typed ones) enforce
the verifications _across the whole codebase_, with no (at least no
non-ridiculous) way for the coder to be judicious about where to apply them.

Third point: statically typed languages have weaker runtime
features/abilities: polymorphism, homoiconicity, a true REPL (i.e. true
read->eval), etc.

All of these points add up to MORE cognitive complexity for the coder who is
trying to develop non-trivial business applications -- not less cognitive
complexity.

It's not about being smart. It's learning. Once you learn algebra, say, many
classes of problems _without algebra_ would involve too much cognitive
complexity.

Type systems are focused on one thing: type coherence. But runtime dynamics
are what the customer is paying for; being as close to that need as possible
IS lower cognitive impedance.

~~~
kyllo
"enforce the verifications across the code" is the point.

I've worked with a ton of Python code that looks like this:

    
    
        def foo(bar):
            # do stuff
            return baz(bar)
    

What does `foo()` return? I have to go read the body (or docstring, if I'm
lucky) of `baz()` to know. And `baz()` might be another level of indirection
to `quux()`. If I change what `baz()` returns, now I need to grep my codebase
for all call sites of `baz()` and verify that the new return type is
acceptable at each of them, and make changes if not. This is super time-
consuming and error-prone, especially when a compiler (especially with an IDE
refactoring tool) could take care of it for me in seconds. It's easier if I
have a good test suite, but that just means I'm implementing static type
checking with runtime tests, which is more code I have to maintain.

~~~
wellpast
You're looking at this Python code and blaming its deficiencies on a lack of
types. Its deficiencies are not a lack of types, but a lack of separation of
stable vs. volatile code, a lack of contract specification and documentation,
and a lack of refactoring and versioning idioms (see
[https://www.youtube.com/watch?v=oyLBGkS5ICk](https://www.youtube.com/watch?v=oyLBGkS5ICk)).

Why does the blame always go to lack of types?

Whether or not you have types, you're still going to suffer if you don't have
the other things I mention. And if you have the other things I mention, then
static typing becomes much more of a nuisance. Ergo...

