
Cold Showers: For when people get too hyped up about things - vthriller
https://github.com/hwayne/awesome-cold-showers
======
Multicomp
I love that SQLite article. It seems like "everyone" is certain that SQLite
can only be used for up to a single query per second; anything more and you
need to spin up a triple-sharded Postgres or Hadoop cluster because it 'needs
to scale'.

I love being able to show that study: if you properly architect your SQLite
system and are willing to purchase hardware, you can go a long, long way, much
further than almost all companies go, with your data access code needing
nothing more than the equivalent of System.Data.Sqlite

~~~
bob1029
SQLite is incredible. If you are struggling to beat the "one query per second"
meme, try the following two things:

1. Only use a single connection for all access. Open the database one time at
startup. SQLite operates in serialized mode by default, so the only time you
need to lock is when you are trying to obtain the LastInsertRowId or perform
explicit transactions across multiple rows. Trying to use the
one-connection-per-query approach with SQLite is going to end very badly.

2. Execute one time against a fresh DB: PRAGMA journal_mode=WAL
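A minimal sketch of both tips, using Python's stdlib sqlite3 module for illustration (the file name and schema are made up; the same pattern applies with System.Data.SQLite):

```python
import sqlite3

# Tip 1: one connection, opened once at startup and shared by all code.
# check_same_thread=False lets multiple threads share it; take your own
# lock around an insert + lastrowid (the LastInsertRowId equivalent).
conn = sqlite3.connect("app.db", check_same_thread=False)

# Tip 2: execute once against a fresh DB; the setting persists in the file.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
assert mode == "wal"

conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)")
cur = conn.execute("INSERT INTO orders (item) VALUES (?)", ("widget",))
conn.commit()
print(cur.lastrowid)
```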

If at this point you are still finding that SQLite is not as fast as or faster
than SQL Server, MySQL, et al., then I would be very surprised.

I do not think you can persist a row to disk faster with any other traditional
SQL technology. SQLite has the lowest access latency that I am aware of.
Something about it living in the same process as your business application
seems to help a lot.

We support hundreds of simultaneous users in production with 1-10 megs of
business state tracked per user in a single SQLite database. It runs
fantastically.

~~~
saberience
Are you living on another planet? Are you seriously suggesting people replace
SqlServer or MySQL with SQLite?

If you can come to my company and replace our 96-core SqlServer boxes with
SQLite I'll pay you any salary you ask for.

~~~
bob1029
I can assure you that I live on the same planet as everyone else posting here.

Whether or not I could perform this miracle depends entirely on your specific
use cases. Many people who have this sort of reaction are coming from a place
where there is heavy use of the vendor lock-in features such as SSIS and
stored procedures.

If you are ultimately just trying to get structured business data to/from disk
in a consistent manner and are seeking the lowest latency and highest
throughput per request, then SQLite might be what you are looking for.

The specific core counts or other specifications are meaningless. SQLite
scales perfectly on a single box, and if you have some good engineers you
might even be able to build a clustering protocol at the application layer in
order to tie multiple together. At a certain point, writing your own will get
cheaper than paying Microsoft for the privilege of using SQL Server.

~~~
nicoburns
Is SQLite likely to be faster than Postgres? In terms of ease of use / admin
overhead I consider them mostly equivalent. I thought the main problem with
SQLite was that it was slow with concurrent writers, whereas the "bigger" SQL
databases have code that allows concurrent writes.

~~~
bob1029
WAL mode is how you address this problem with SQLite.

See: [https://www.sqlite.org/wal.html](https://www.sqlite.org/wal.html)

"Write transactions are very fast since they only involve writing the content
once (versus twice for rollback-journal transactions) and because the writes
are all sequential. Further, syncing the content to the disk is not required,
as long as the application is willing to sacrifice durability following a
power loss or hard reboot."
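In WAL mode that durability trade-off is an explicit knob, PRAGMA synchronous. A small sketch in Python's stdlib sqlite3 (file name made up):

```python
import sqlite3

conn = sqlite3.connect("wal.db")
conn.execute("PRAGMA journal_mode=WAL")
# synchronous=NORMAL skips the fsync on every commit in WAL mode: a power
# loss can lose the last few transactions, but cannot corrupt the database.
conn.execute("PRAGMA synchronous=NORMAL")
print(conn.execute("PRAGMA synchronous").fetchone()[0])  # 1 == NORMAL
```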

~~~
JAlexoid
As long as you don't mind volatile memory issues, that is.

------
Taikonerd
I read this and thought, "oh, the author is calling out formal verification as
overhyped? Hillel Wayne ([https://hillelwayne.com/](https://hillelwayne.com/))
is going to be angry! Wait, who wrote this..."

~~~
wenc
Me too.

I recall that F. Scott Fitzgerald said "the test of a first-rate intelligence
is the ability to hold two opposed ideas in mind at the same time and still
retain the ability to function."

~~~
Taikonerd
In Hillel's defense, maybe they're not opposed ideas... this article talks
about problems with formal _verification_, but I think his thing is more
formal _modeling_ (with TLA+).

~~~
hwayne
If I find any cold showers with formal modeling, I will absolutely include
those too. :D

------
ravenstine
I think the best takeaway from this is that the software industry makes lots
of claims about development processes, but so little actual research is done
in trying to validate those processes. It's all mostly based on opinion.

~~~
vii
It's hard to do objective research. Some studies try A/B tests on student
volunteers, but that setting is clearly different from professional teams
working on a project for a long time.

The value of new methodologies, languages, and techniques is partly that
their enthusiastic proponents are given a chance to prove that there is
value, and so become motivated to go the extra distance to achieve the
project-specific outcome.

This value is destroyed if people are forced to use the technique instead of
championing its introduction. So measurement is made even harder!

------
david-cako
It's really hard to point at studies to evaluate these types of hyped
development paradigms. Some thoughts, as someone who loves static typing and
microservices:

My favorite thing about static typing is that it makes code more self-
documenting. The reason I love Go specifically is that if you have 10
people write the same thing in Go, it's all going to come out relatively
similar and use mostly built-in packages. Any API requests are going to be
self-documenting, because you have to write a struct to decode them into. Any
function has clear inputs and outputs; I can hover a parameter in my IDE and
know exactly what's going on. You can't just throw errors away; you are
always aware of them, and any functions you write should bubble them up.

TypeScript addresses this somewhat, but basically offsets that complexity with
more configuration files. I like TypeScript in use, but I can't stand the fact
that JavaScript requires configuration files, transpilers, and a million
dependencies. Same for Python and mypy.

Yes, I could just look at class members in a dynamic language, but there's
nothing that formally verifies the shape of data. It's much more annoying to
piece apart. I don't use static analyzers, but my guess is that languages like
Go and Rust are the most compatible with them. Go programs are the closest
thing to a declarative solution to a software problem, of any modern language
IMO. As we continue experimenting with GPT-generated programs, I think we're
going to see much more success with opinionated languages that have fewer
features and more consistency in how finished programs look.
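The struct-decoding point is not Go-specific; a rough Python analogue (the request type and its fields are invented for illustration):

```python
import json
from dataclasses import dataclass

# The decode target documents exactly which fields the handler expects,
# much like a Go struct with json tags does.
@dataclass
class CreateUserRequest:
    name: str
    email: str
    admin: bool = False

def decode(body: str) -> CreateUserRequest:
    # A missing or unexpected key raises TypeError right here,
    # instead of surfacing later as a silent None.
    return CreateUserRequest(**json.loads(body))

req = decode('{"name": "alice", "email": "a@example.com"}')
print(req.admin)  # False: default applied, shape statically known
```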

Microservices are also great at making large applications more maintainable,
but add additional devops complexity. It's harder to keep track of what's
running where and requires some sort of centralized logging for requests and
runtime.

~~~
networkimprov
You certainly can throw errors away in Go -- in various ways. It's one of the
notable flaws in a largely cohesive, sensible language. (Which I use daily.)

    
    
        success, err := fail()
        do(success)
    
        success, _ := fail()
    
        fail()

~~~
jchw
The second one is pretty intentional - it'd be annoying if it were downright
impossible.

The first and third ones fail on go vet. The first one also fails to compile
if you never read from err in the entire function.

~~~
networkimprov
Vet accepts the third, and may (I forget) accept the first if err is
redeclared or simply assigned.

~~~
jchw
My bad, I was thinking of errcheck, which is more thorough and integrated into
most Go linting tools like golangci-lint.

[https://github.com/kisielk/errcheck](https://github.com/kisielk/errcheck)

In any case, it’s trivial to detect via static analysis.

------
rwoerz
We software engineers are still more like alchemists than chemists.

That list reminds me of [1], which rants about this state of affairs, and [2],
which puts many beliefs to the test.

[1] [https://youtu.be/WELBnE33dpY](https://youtu.be/WELBnE33dpY)

[2] [https://www.oreilly.com/library/view/making-software/9780596...](https://www.oreilly.com/library/view/making-software/9780596808310/)

~~~
fmakunbound
In addition to that, I also feel calling ourselves engineers is a stretch.

~~~
hwayne
I used to think this, and then I interviewed people who did both traditional
and software engineering professionally, and now I'm not so sure. I did a
first draft of what I learned here:
[https://www.youtube.com/watch?v=3018ABlET1Y](https://www.youtube.com/watch?v=3018ABlET1Y)

I'm hoping to have a written version by the end of September.

~~~
ozim
This is great. People romanticize construction, mechanical, and other
engineering as if there were no failures in those disciplines. Buildings
collapse; machines break down in unforeseen circumstances. My pet theory is
that in software it is just a lot easier to create a lot of stuff, so it is
also a lot easier to create issues.

You can add that in eastern Europe you can get an engineering degree which is
a "technical bachelor" from a technical university, so I am a software
engineer, as it is printed on my diploma.

~~~
gjulianm
It's not about the failures, it's about the modes of failure. I assume that
the modes of failure of a bridge, or of a building, are pretty well
understood.

Software has far more distinct pieces than any other product you can find
anywhere (maybe the human body?) so it's impossible to completely check the
modes of failure. I was just reading before about a hardware corruption bug
due to a kernel feature [1] and it's hard to imagine the same chain reaction
in other engineering areas.

In software it's also really hard to model behavior. In engineering you'll get
tolerances, strength and other features of the pieces you use. In software,
you can't even benchmark something and expect the same benchmark to translate
to a different computer.

1: [https://lwn.net/Articles/304105/](https://lwn.net/Articles/304105/)

------
timClicks
Hillel (the editor of this list) is one of the people in this industry that is
going to make a tremendous difference to the world. His ability to make formal
verification understandable, and therefore useful in practice, is
unparalleled.

~~~
carlmr
I think the biggest issue with formal verification is that you need to
rewrite the important parts of your code in (for example) TLA+. If it's
integrated into the language, like Ada's SPARK, you don't need to learn as
much additional syntax or rewrite parts of your codebase in a language you
rarely use (given that you already work in Ada).

~~~
pydry
My biggest issue with formal verification after doing it a couple of times was
how absurdly complex the specification needed to be for it to work.

If the spec is 5x more complicated than the code would be then I'm not sure I
see much of a point coz you're just creating different spaces for bugs to hide
in.

~~~
mbrock
The aim is to have a spec that is much LESS complex than the code, written at
a higher level, abstracting away details. If the spec is 5x more complex than
the code then indeed there’s no point.

~~~
qznc
Here is a counter argument: [http://www.pathsensitive.com/2018/10/book-review-philosophy-...](http://www.pathsensitive.com/2018/10/book-review-philosophy-of-software.html)

My summary would be: the spec must cover all possible implementations, so it
is usually larger than the simplest one.

An example from there:

> The authors of SibylFS tried to write down an exact description of the
> `open` interface. Their annotated version of the POSIX standard is over 3000
> words. Not counting basic machinery, it took them over 200 lines to write
> down the properties of `open` in higher-order logic, and another 70 to give
> the interactions between open and close.

> For comparison, while it’s difficult to do the accounting for the size of a
> feature, their model implementation is a mere 40 lines.

~~~
jacobr1
How much of that is open/close being a poor abstraction, or overloading a
bunch of semi-related functionality vs a more generalizable consideration
systems designed with verification in mind.

------
hansitomani
"Scalability! But at what COST?" is a very good example of how frustrating it
can be.

We are throwing a lot of resources at a problem because we are not able to
educate people well enough to understand basic performance optimizations.

You are a Data Scientist/anyone else and you don't understand your tooling?
You are doing your job wrong.

~~~
Ar-Curunir
Maybe the available tooling for extracting good performance is not friendly to
people that aren’t already familiar with it?

------
quonn
I wish it were possible to have better studies for that. I believe that
static typing has huge benefits as software scales. I also believe that the
type system of TypeScript is actually stronger in practice than the Java or C#
one (despite theoretical weaknesses). It has the right tradeoffs (e.g.
structural equivalence, being able to type strings, being able to check that
all cases are handled, etc.)

It would be nice to have proper studies, but it's difficult to control the
other variables ...

~~~
GlennS
People who like static typing seem to _really_ like static typing.

I'm honestly not convinced it helps that much. And it seems to cost a lot to
me.

I like database and API schemas though. And I like clojure.spec and function
preconditions a lot.

~~~
pantaloony
I don’t get the cost claims. The time it takes to note which type I intend
something to be is mostly either so low that I recover it via improved hints
and such very quickly, or larger but only because I’m documenting something
complex enough that I should have documented it anyway, whether or not I was
using static types, because it’ll be hell for other people or future-me to
figure out otherwise. It seems like a large time _savings_ to me—throw in
faster and more confident refactoring and stuff like that, and it’s not even
close.

I just don't get how people are working such that it represents a time _cost_
rather than a large time _savings_. I don’t mean that as a dig, I just mean I
genuinely don’t know what that must look like. And I’ve written a lot more
code in dynamic languages, and got my start there, so it’s not like I “grew
up” writing Java or something like that.

~~~
simiones
I think the general feeling is that there are some code patterns that are safe
and easy to do with dynamic typing, but impossible with simple type systems
and more complex with more advanced type systems.

An example would be Common Lisp's `map` function [0] (it takes a number of
sequences and a function that has as many parameters as there are sequences).
It would be hard to come up with a type for this in Java, and it would be a
pretty complicated type in Haskell.

Another example of many people's experience with static typing is the Go style
of language, where you can't write any code that works for both a list of
strings and a list of numbers. This is no longer common, but it used to be
very common ~10-15 years ago and many may not have looked back.

[0] [http://www.lispworks.com/documentation/HyperSpec/Body/f_map....](http://www.lispworks.com/documentation/HyperSpec/Body/f_map.htm#map)
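For comparison, Python's built-in map is variadic in exactly this sense, and typing it statically runs into the same difficulty (its type stubs fall back to a family of fixed-arity overloads):

```python
# An n-ary function applied across n sequences, like CL's map
# (iteration stops at the shortest sequence).
print(list(map(max, [1, 2, 3, 4], [5, 3, 1, 2])))  # [5, 3, 3, 4]
print(list(map(lambda x, y, z: x + y + z, [1, 2], [3, 4], [5, 6])))  # [9, 12]
```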

~~~
Tyr42
Haskell's not too bad once you understand ZipList

[http://learnyouahaskell.com/functors-applicative-functors-an...](http://learnyouahaskell.com/functors-applicative-functors-and-monoids)

    
    
        max <$> ZipList [1,2,3,4,5,3] <*> ZipList [5,3,1,2]  
        > [5,3,3,4]

~~~
tome
Yeah it's not at all complicated in Haskell. I'm not sure what GP is talking
about.

~~~
simiones
I replied to the parent as well, but not only is the solution the parent
showed significantly more complex than the CL version, I'm not even sure it
actually does what I asked.

More explicitly, the expression there seems to rely on knowing the arity of
the function and the number of lists at compile time. Basically, I was asking
for a function cl_map such that:

    
    
        cl_map foo [xs:[ys:[zs:...]]] = foo <$> xs <*> ys <*> zs <*> ...
    

Edit: found a paper explaining that this is not possible in Haskell, and
showing how the problem is solved in Typed Scheme:
[https://www2.ccs.neu.edu/racket/pubs/esop09-sthf.pdf](https://www2.ccs.neu.edu/racket/pubs/esop09-sthf.pdf)

~~~
tome
Sure it's possible in Haskell. I'm not sure where in that paper you got the
impression it isn't. Of course one can't define variadic functions in Haskell,
but that's a more fundamental difference from Clojure, not a "code pattern
that [is] safe and easy to do with dynamic typing, but impossible with simple
type systems or more complex with more advanced type system."

    
    
        > traverse_ print (sequenceA [ZipList [1,2], ZipList [3,4]])
        [1,3]
        [2,4]

~~~
simiones
As far as I can tell, your example calls a unary function on each element of a
list of lists. It's solving the variadic part of map, but not the part where I
can call an N-ary function with each element of N lists.

Basically, instead of your example I would like to do something like this:

    
    
        > cl_map (+) [ZipList [1,2,3], ZipList [4,5,6]]
        [5,7,9]
    
        > cl_map (+ 3) [ZipList [1,2,3]]
        [4,5,6]
    
        > cl_map max3 [ZipList [1,2], ZipList [3,4], ZipList [5,6]] where max3 x y z = max x (max y z)
        [5, 6]
    

Can this be done? What is the type of cl_map?

Note: If this doesn't work with ZipList, that's ok - the important part is
being able to supply the function at runtime. Also, please don't assume that
the function is associative or anything like that - it's an arbitrary function
of N parameters.

~~~
tome
The functions in those examples have fixed numbers of arguments, so one would
use the original formulation shown by Tyr42.

    
    
        > (+) <$> ZipList [1,2,3] <*> ZipList [4,5,6]
        ZipList {getZipList = [5,7,9]}
    
        > (+3) <$> ZipList [1,2,3]
        ZipList {getZipList = [4,5,6]}
    
        > let max3 x y z = max x (max y z)
        > max3 <$> ZipList [1,2] <*> ZipList [3,4] <*> ZipList [5,6]
        ZipList {getZipList = [5,6]}
    

If you want to use "functions unknown at runtime that could take any number of
arguments" then you'll have to pass the arguments in a list. Of course these
can crash at runtime, which Haskellers wouldn't be happy with given an
alternative, but hey-ho, let's see where we get.

    
    
        > let unsafePlus [x, y] = x + y
        > fmap unsafePlus (sequenceA [ZipList [1,2,3], ZipList [4,5,6]])
        ZipList {getZipList = [5,7,9]}
    
        > let unsafePlus3 [x] = x + 3
        > fmap unsafePlus3 (sequenceA [ZipList [1,2,3]])
        ZipList {getZipList = [4,5,6]}
    
        > unsafeMax3 [x, y, z] = x `max` y `max` z
        > fmap unsafeMax3 (sequenceA [ZipList [1,2], ZipList [3,4], ZipList [5,6]])
        ZipList {getZipList = [5,6]}
    

So the answer to your question is that

    
    
        cl_map :: ([a] -> b) -> [ZipList a] -> ZipList b
        cl_map f = fmap f . sequenceA
    

except you don't actually want all the elements of the list to be of the same
type, you want them to be of dynamic type, so let's just make them Dynamic.

    
    
        > let unwrap x = fromDyn x (error "Type error")
        >
        > let unsafeGreeting [name, authorized] =
        >    if unwrap authorized then "Welcome, " ++ unwrap name
        >                         else "UNAUTHORIZED!"
        >
        > fmap unsafeGreeting (sequenceA [ZipList [toDyn "tome", toDyn "simiones", toDyn "pg"]
        >                               , ZipList [toDyn True,   toDyn True,       toDyn False]])
        ZipList {getZipList = ["Welcome, tome","Welcome, simiones","UNAUTHORIZED!"]}
    

and the type of cl_map becomes

    
    
        cl_map :: ([Dynamic] -> b) -> [ZipList Dynamic] -> ZipList b
        cl_map f = fmap f . sequenceA
    

One could polish this up a bit and make a coherent ecosystem out of it, but
Haskell programmers hardly ever use Dynamic. We just don't come across the
situations where Clojurists seem to think it's necessary.

~~~
simiones
So in the end, as I claimed initially, this function can't be written in a
simple, safe way in Haskell; and as the article I linked claims, Haskell's
type system can't encode the type of the cl_map function.

It's nice that Haskell does offer a way to circumvent the type system to write
somewhat dynamic code, but it's a shame that in order to write a relatively
simple function we need to resort to that.

Note that the type of cl_map is perfectly static. It would be `Integer N =>
(a_0 ->... a_N -> r) -> [a_0] ->... [a_N] -> [r]` assuming some fictitious
syntax.

~~~
tome
> So in the end, as I claimed initially, this function can't be written in a
> simple, safe way in Haskell

Steady on! You posed a question and I gave an answer. You weren't happy with
that answer. I think it's a bit premature to conclude that "this function
can't be written in a simple, safe way in Haskell".

> as the article I linked claims, Haskell's type system can't encode the type
> of the cl_map function.

Could you say where you see that claim in the article? I can see three
mentions of "Haskell" in the body, two of them mentioning that one
researcher's particular implementation doesn't handle this case, but not a
claim that it can't be done.

> Note that the type of cl_map is perfectly static. It would be `Integer N =>
> (a_0 ->... a_N -> r) -> [a_0] ->... [a_N] -> [r]` assuming some fictitious
> syntax.

OK, fine, it's a bit clearer now what you are looking for. How about this:

    
    
        > cl_map (uncurry (+)) ([1,2,3], [4,5,6])
        [5,7,9]
        > cl_map (+3) [1,2,3]
        [4,5,6]
        > let max3 (x, y, z) = x `max` y `max` z
        > cl_map max3 ([1,2], [3,4], [5,6])
        [5,6]
    

Notice that the function arguments have different, statically-known types!
The type of this miracle function?

    
    
        cl_map :: Default Zipper a b => (b -> r) -> a -> [r]
    

And the implementation?

    
    
        -- Type definition
        newtype Zipper a b = Zipper { unZipper :: a -> ZipList b } deriving Functor
    
        -- Instance definition
        instance a ~ b => D.Default Zipper [a] b where def = Zipper ZipList
    
        -- These three instances are in principle derivable
        instance P.Profunctor Zipper where
          dimap f g = Zipper . P.dimap f (fmap g) . unZipper
    
        instance Applicative (Zipper a) where
          pure = Zipper . pure . pure
          f <*> x = Zipper (liftA2 (<*>) (unZipper f) (unZipper x))
    
        instance PP.ProductProfunctor Zipper where
          purePP = pure
          (****) = (<*>)
    

Given that the only two lines that actually matter are

    
    
        newtype Zipper a b = Zipper { unZipper :: a -> ZipList b } deriving Functor
        instance a ~ b => D.Default Zipper [a] b where def = Zipper ZipList
    

and the rest is boilerplate that could be auto-derived, I think this is
pretty satisfactory. What do you think?

~~~
simiones
First of all, thank you for bearing with me this long!

Still, you haven't written exactly the function I was asking for. You require
a manual, compile-time step of transforming the N-ary function to a unary
function taking a tuple. Still, it's impressive that this can define variable-
length, variable-type tuples. Unfortunately I am not able at all to follow
your solution, as it's using too many types that I'm not familiar with, and it
seems to require some external packages, so I can't easily try it out in an
online compiler to understand it better (as I have been doing so far).

Either way, I would say we are well outside the limits of an easy to
understand way of specifying this kind of function - even if you are only
showing 2 lines of code, it seems that your definition requires, outside of
lists and functions (the objects we intended to work with): ZipList, Default,
Functor, Profunctor, ProductProfunctor, Applicative, and a helper type. Even
if these were derivable, someone seeking to write this function would still
need to be aware of all of these types, some of which are not even part of the
standard library; and of the way they work together to magically produce the
relatively simple task they had set out to do.

> Could you say where you see that claim in the article?

The claim is presented implicitly: for one, they conjecture that, were Haskell
or SML to "pragmatically support" such a feature, it would be used more often
(offering as argument the observation that both Haskell's and SML's standard
libraries define functions that differ only in the arity of their arguments,
such as zipWith/zipWith3 in Haskell). This implies that, to their knowledge,
it is not pragmatically possible to implement this in Haskell.

Similarly, given that in their "Related Works" section they don't identify any
complete implementation of variadic polymorphism, it can be assumed that they
claim at least not to have found one.

~~~
tome
> Still, you haven't written exactly the function I was asking for

I'm afraid I'm now completely stumped about what you're asking for. If you
have a function with a known arity and want to apply it to a known number of
arguments then you can use the original formulation:

    
    
        f <$> args1 <*> args2 ... <*> argsN
    

You then asked what happens for unknown numbers of arguments, so I produced a
solution that works with lists, which isn't very Haskelly, but does the job.
After that you said you wanted something with a more specific type, so I came
up with the answer that works generally over tuples (or indeed any type that
contains a sequence of arguments). That's not satisfactory either! It seems
you literally want a function with type `Integer N => (a_0 ->... a_N -> r) ->
[a_0] ->... [a_N] -> [r]`. Well, I don't know how to do that in Haskell --
maybe my most recent solution extends to that -- but nor do I know why you'd
want to do that! If you have a known number of arguments the first solution
works fine. If you have an unknown number of arguments then you must have them
all together in one datastructure, so the most recent solution works fine.
Haskellers would be very happy with either of those and I don't see how we're
missing out on programming convenience because of that. Maybe you could
elaborate?

------
j1elo
I'm already thrilled waiting for an addition with a "Kubernetes everywhere"
_ice_ shower.

~~~
hwayne
I don't have the necessary background to find a good "Kubernetes everywhere"
ice shower, but if someone else found one and submitted it I think I could
evaluate it.

~~~
j1elo
I said it jokingly, but it would really be a nice read.

The hype on that train is huge, but I'm sure there must be lots of
professionals who have already learned about its shortcomings. Not sure if
proper studies exist about Kubernetes yet, though. Hopefully you'll get a PR
with some content.

------
beh9540
I was really surprised Docker or Kubernetes wasn't one of the items on here.
While I use both, they could definitely both use cold showers to make sure
they provide value.

~~~
hwayne
I seeded most of the list and know basically nothing about docker or
kubernetes, so don't know of any cold showers myself. But I would be more than
happy to edit a submission by someone who knows the space!

------
moby_click
There should be a section about the benefits of cold showers.

~~~
gonzo41
Is taking cold showers like a machismo SV thing?

~~~
dTal
Is thinking that everything is a Silicon Valley thing, a Silicon Valley thing?
Is it really plausible that no one thought to feel manly about cold showers
until a bunch of nerds came along?

~~~
coldtea
> _Is it really plausible that no one thought to feel manly about cold showers
> until a bunch of nerds came along?_

No, but it's quite plausible that it was a niche thing that might have been a
fad at some points in the past, only to be revived by a new generation that
includes many fad-chasing types, SV people, and BS-artists (aka
influencers)...

~~~
luckylion
I'm pretty sure cold showers have been a thing to show your ability to live
without comforts ever since hot showers became a possibility.

And the term fits here, I believe: cold showers do very much wake you up and
bring you into reality quickly. There's no dreaming about hypes when you're
under a cold shower.

------
babbledabbler
Cold showers are awesome! Get outta here!

[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5025014/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5025014/)

------
the_af
A Cold Shower for (early) testing of software, maybe:

There used to be an often-cited paper by Boehm about the cost of catching bugs
early vs. late in production, usually mentioned by advocates of testing early,
where the quoted conclusion was something like "studies show it's 10 times
more costly to catch bugs late in production" or something like that. This is
a very well-known study, I'm likely misquoting it (the irony!), and readers
here are probably familiar with it or its related mantra of early testing.

I haven't read the paper itself (I should!), but later someone claimed that
a) Boehm doesn't state what people quoting him say he said, b) the relevant
studies had serious methodological problems that call into question the
conclusion he did state, and c) there are plenty of examples where fixing
bugs late in production wasn't particularly costly.

edit: I'm not arguing testing isn't necessary, in case that upset someone
reading this post. I'm not really arguing anything, except that the study by
Boehm that most people quote was called into question (and was probably
misquoted to begin with). This doesn't prove/disprove anything, except maybe
hinting at a possible Cold Shower. It does show that we have a serious
problem in software engineering with backing up claims with well-designed
studies and strong evidence, but this shouldn't come as a surprise to anyone
reading this.

~~~
hwayne
Laurent Bossavit tears the Boehm paper apart in his book "Leprechauns of
Software Engineering"[1]. It's a good read for anyone interested in the
empirical side of software research.

[1]: [https://leanpub.com/leprechauns](https://leanpub.com/leprechauns)

~~~
the_af
Thanks! I think it might have been an extract from Bossavit's book that I'm
struggling to remember now. The title definitely rings a bell.

------
dvfjsdhgfv
Regarding the bare metal issue, there are many more caveats, see for example:
[https://jan.rychter.com/enblog/cloud-server-cpu-performance-...](https://jan.rychter.com/enblog/cloud-server-cpu-performance-comparison-2019-12-12)

------
Laakeri
I was expecting it to have a section about deep learning hype.

------
ajbonkoski
This gets at the heart of one of my big gripes about how we talk about
engineering and technology.

Often a fancy new thing is introduced with a very long list of pros: "fast,
scalable, flexible, safe". Rarely is a list of cons included: "brittle, tough
learning curve, complicated, new failure modes".

This practice always strikes me as odd because the first law of engineering is
"everything is a trade-off". So, if I am going to do my job as an engineer I
really need to understand both the "pros" and "cons". I need to understand
what trade-off I'm making to get the "pros". And only then can I reason about
whether the cost is justified.

------
vharuck
>Researchers had programmers fix bugs in a codebase, either where all of the
identifiers were abbreviated, or where all of the identifiers were full
words. They found no difference in time taken or quality of debugging.

I would not have expected that. Still, I prefer to use full(er) identifiers. I
don't like to guess how things were abbreviated, especially when consistency
isn't guaranteed. If I were using a different language and IDE, this might be
better.
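For reference, the two conditions in that study look roughly like this
(hypothetical snippet, not taken from the paper); the consistency worry above
is that nothing forces abbreviations like `Prc` or `Rt` to be spelled the same
way everywhere in a codebase:

```typescript
// Abbreviated identifiers:
function calcTotPrc(qty: number, unitPrc: number, taxRt: number): number {
  return qty * unitPrc * (1 + taxRt);
}

// Full-word identifiers - same logic, but no abbreviation scheme to guess:
function calculateTotalPrice(
  quantity: number,
  unitPrice: number,
  taxRate: number
): number {
  return quantity * unitPrice * (1 + taxRate);
}

// Both compute the same result; only readability differs.
console.assert(calcTotPrc(3, 250, 0.2) === calculateTotalPrice(3, 250, 0.2));
```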

------
api
The big data one is outstanding.

If you don't have more data than can fit on a reasonably large hard drive, you
do not have big data and you are likely able to process it faster and cheaper
on one system.

Today that threshold would be around 10TiB.

~~~
julienfr112
It takes hours to read a whole 10TiB hard drive. Depending on the use case,
that can be unacceptable.
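The claim checks out with back-of-the-envelope arithmetic; the throughput
figures below are assumed round numbers for illustration, not measurements:

```typescript
// Rough time for one full sequential scan of 10 TiB at assumed read speeds.
const TIB = 2 ** 40; // bytes
const dataBytes = 10 * TIB;

const throughputs: Array<[string, number]> = [
  ["spinning disk, ~200 MB/s", 200e6],
  ["SATA SSD, ~500 MB/s", 500e6],
  ["NVMe SSD, ~3 GB/s", 3e9],
];

for (const [device, bytesPerSec] of throughputs) {
  const hours = dataBytes / bytesPerSec / 3600;
  console.log(`${device}: ~${hours.toFixed(1)} hours`);
}
```

So a full pass over 10 TiB on one spinning disk is indeed an overnight job,
though a single machine with NVMe storage brings it down to roughly an hour.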

------
theptip
> Agile Methods: The Good, the Hype and the Ugly (Video)
> ([https://www.youtube.com/watch?v=ffkIQrq-m34](https://www.youtube.com/watch?v=ffkIQrq-m34))

Thoughts on this one? I found the presentation to be somewhat mixed.

I found the initial comb through of the agile principles to be needlessly
pedantic ("'Simplicity... is essential' isn't a principle, it's an
assertion!"); anyone reading in good faith can extract the principle that's
intended in each bullet of that manifesto.

The critique of user stories (~35 mins in) was more interesting; it's
something we've been bumping up against recently. I think the agile response
would be "if your features interact, you need a user story covering the
interaction", i.e. you need to write user stories for the cross-product of
your features, if they are not orthogonal.

I'm not really convinced that this is a fatal blow for user stories, and
indeed in the telephony example it is pretty easy to see that you need a
clarifying user story to say how the call group and DND features interact. But
it does suggest that other approaches for specifying complex interactions
might be better.

Maybe it would be simpler to show a chart of the relative priorities or
abstract interactions? E.g. thinking about Slack's notorious "Should we send a
notification" flowchart ([https://slack.engineering/reducing-slacks-memory-
footprint-4...](https://slack.engineering/reducing-slacks-memory-
footprint-4480fec7e8eb)), I think it's impossible (or at least unreasonably
verbose) to describe this using solely user stories. I do wonder, though, if
that means it's impossible for users to understand how this set of features
interacts.

Regarding the purported opposition in agile to creating artifacts like design
docs, it's possible that I'm missing some conversation/context from the
development of Agile, but I've never heard agile folks like Fowler, Martin,
etc. argue against doing technical design; they just argue against doing too
much of it too early (i.e. against waterfall design docs and for
lean-manufacturing-style just-in-time design), and that battle seems to have
largely been won, considering what the standard best practices were at the
time the Agile manifesto was written vs. now.

------
sgt101
Yet again, "we don't need big data" is demonstrated on an example that fits on
a single disk. Big data is north of 30TB.

------
pragmatic
I think functional reactive programming belongs on this list.

Rxjs, etc.

Angular uses typescript and rxjs excessively and, while I used to like
typescript, the combo has made me reconsider.

Rxjs seems like an overcomplicated way to do common tasks. Has FRP caught on
anywhere else? Is there a usage that doesn't suck?

------
jdmoreira
> Static vs Dynamic Typing

All research is inconclusive? Sure. I wonder what kind of type systems were in
there? I guess Java and similar languages are accounted for, and yet I
wouldn’t put any faith in them. ML, Swift, Haskell... now that’s something
else.

~~~
pantaloony
It doesn’t account for the communication value of static types. Personally, I
consider static types _primarily_ a communication tool, so IMO the review’s
interesting but not very useful _per se_. Also the main point of it seems to
be “research on this topic is mostly bad, so far, so who the hell knows what’s
true”. It could be that the research has sucked, not that there’s little
discernible difference between the two on the dimensions measured.

~~~
archarios
It seems to me that tests are equally good as a communication tool.

~~~
wtetzner
My experience is that tests tend to be much harder to read, and take more
effort to understand, than types. Types are a higher-level approximation for
your program.
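A small sketch of that difference (hypothetical `formatPrice` function): the
type signature states the contract for every possible call, while a test can
only communicate it one concrete case at a time.

```typescript
// The signature alone tells a reader the whole contract: an integer amount
// in cents plus a supported currency code in, a formatted string out.
function formatPrice(amountCents: number, currency: "USD" | "EUR"): string {
  const units = (amountCents / 100).toFixed(2);
  return currency === "USD" ? `$${units}` : `€${units}`;
}

// Tests communicate the same contract only by example, case by case.
console.assert(formatPrice(1999, "USD") === "$19.99");
console.assert(formatPrice(500, "EUR") === "€5.00");
```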

~~~
archarios
That's fair

------
Robin_f
The first link to the PDF unfortunately doesn't work.

------
arendtio
Reads like a recipe for flame wars. Pick any of those topics and you can be
sure to find someone with an opinion opposite to your own ;-)

------
andikleen3
I can confirm the issues with formal methods.

I was working on a new type of locking mechanism and thought I would be smart
by modelling it in Spin [[http://spinroot.com](http://spinroot.com)], which
has been used for this kind of thing before.

I ended up with a model that was proven in spin, but still failed in real
code.

Granted, that's anecdata with a sample size of 1, but it was still a valuable
experience for me.

------
red_admiral
First link (formal verification) currently gives a 404 for me. Someone not
wanting the extra publicity?

------
touchpadder
Hype: this document

Shower: Out-of-date sources

------
esquire_900
The title doesn't really relate to its content very well; the concept of
taking cold showers has some scientific backing ([1] & [2]), and is also
slightly hyped. After taking cold showers and getting some (minor) benefits
for some years, the term "cold shower" started to get a positive association
in my mind.

This article is about neither showers nor positive results, which makes the
title quite confusing :)

[1]
[https://www.medicalnewstoday.com/articles/325725](https://www.medicalnewstoday.com/articles/325725)
[2]
[https://www.wimhofmethod.com/science](https://www.wimhofmethod.com/science)

~~~
ahelwer
Only on Hacker News could you find someone taking issue with the widely-used &
understood phrase "cold shower".

~~~
eitland
Reminds me about one of my favorite HN comments of all time:

[https://news.ycombinator.com/item?id=8289007](https://news.ycombinator.com/item?id=8289007)

Topic: The curious case of the cyclist’s unshaven legs

From a comment (this part clearly intended to be witty I think): Really, I
thought it was weird, and probably inappropriate, to mix in so much of an
outsider's amateur and unsupported opinion about science into an otherwise
interesting story about leg hair drag.

