
Be Nice and Write Stable Code - jaytaylor
http://technosophos.com/2018/07/04/be-nice-and-write-stable-code.html
======
ShaneWilton
A lot of people are commenting that SemVer doesn't work, because it's still at
the mercy of humans choosing good version numbers.

Elm's package manager, elm-package, actually tries to remove humans from the
equation, by automatically choosing the next version number, based on a diff
of the API and the exported types of a package: [https://github.com/elm-
lang/elm-package#publishing-updates](https://github.com/elm-lang/elm-
package#publishing-updates)

It's not perfect, but it's better than anything else I've seen.

~~~
User23
Is API compatibility computable in general? My instinct is that it is, but
I’ve never seen a theorem.

~~~
dbaupp
No, it isn't computable in general (that is, correctly determining one of
"these functions behave the same" or "these functions behave differently",
and never answering "unknown"), as it is equivalent to the halting problem.
Consider these two versions of a function: are they API compatible?

    
    
      def foo():
        return True
    
      def foo():
        return halts("some turing machine program")
    

They're only truly API compatible if the program halts, but the program can be
arbitrary, so proving that two 'foo's of this style are equivalent is solving
the halting problem.

Of course, one can still likely get useful answers of "definitely
incompatible" etc., with much more tractable analyses. AIUI, the Elm version
ends up just looking at the function signature, and catches things like
removing or adding arguments: for appropriately restricted languages, it is
likely to even be possible to determine if downstream code will still
_compile_ , but that's not a guarantee it will behave correctly.
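As an illustration of that restricted, signature-only analysis (a hypothetical Python sketch, not Elm's actual implementation), a tool can conservatively suggest the smallest allowed version bump:

```python
# Hypothetical sketch of signature-based version bumping, in the spirit of
# elm-package: it only compares declared signatures, so it can prove
# "definitely incompatible" but can never prove behavioural equivalence.
def suggest_bump(old_api, new_api):
    """old_api/new_api map function names to tuples of parameter names."""
    removed = old_api.keys() - new_api.keys()
    changed = [name for name in old_api.keys() & new_api.keys()
               if old_api[name] != new_api[name]]
    added = new_api.keys() - old_api.keys()
    if removed or changed:
        return "major"   # existing callers may no longer compile
    if added:
        return "minor"   # new surface area, old callers unaffected
    return "patch"       # identical signatures; behaviour may still differ!

old = {"foo": ("x",), "bar": ("a", "b")}
new = {"foo": ("x",), "bar": ("a", "b"), "baz": ("q",)}
print(suggest_bump(old, new))  # minor
```

Note the comment on the last branch: identical signatures only justify "patch" if you also trust that behaviour was preserved, which is exactly the part no tool can verify in general.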

~~~
gmueckl
I wonder if tighter guarantees can be given if tools can work with constructs
like D's contracts. These are extra sections in each function that are
intended to check invariants. If these invariants change, then they were
either broken and needed fixing or the function had a semantic change.

~~~
dbaupp
That's an interesting idea. It seems like it would be a way for tools to flag
"this function is likely to have changed in an interesting way", but changing
invariants doesn't necessarily mean the function breaks semver.

For one, an invariant might be changed syntactically, without actually
changing what it is asserting, reducing to exactly my example above: in its
simplest form, the contract _could_ go from in(true) to in(halts("...")).

Secondly, a contract could have been made more permissive, e.g. in(x > 1)
becoming in(x > 0), or out(y > 0) becoming out(y > 1). Assuming violating a
contract is regarded as a programming error (as in, it isn't considered
breaking to go from failing an assertion to not failing), these are also non-
breaking changes.

Lastly, changing behaviour doesn't necessarily mean changing
invariants/contracts.
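To illustrate the second point with plain assertions standing in for contracts (a hypothetical Python sketch, not D's in/out syntax): widening a precondition keeps every previously valid call valid.

```python
# Old version: precondition requires x > 1
def sqrt_recip_v1(x):
    assert x > 1, "precondition: x > 1"
    return 1 / (x ** 0.5)

# New version: precondition widened to x > 0. The contract text changed,
# but every call that satisfied the old contract still satisfies the new
# one, so for existing callers this is a non-breaking change.
def sqrt_recip_v2(x):
    assert x > 0, "precondition: x > 0"
    return 1 / (x ** 0.5)

print(sqrt_recip_v2(4))     # 0.5 -- old callers keep working
print(sqrt_recip_v2(0.25))  # 2.0 -- newly allowed input
```

A contract-diffing tool that only compared the assertion text would flag this as a change, even though no caller can observe a regression.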

------
gregmac
I love the MaxInboundMessageSize example. I've run into that many times.

Often there will be a note in the release notes about it, and I know I
_should_ read the release notes in detail when I upgrade dependencies, but
like many people I don't always. Sometimes it's just laziness or complacency
-- especially for "utility" libraries like for compression or encoding -- but
other times it's a challenge with release notes:

* Each version has each release note published independently (or worse: only on the Releases tab in GitHub, and you have to click to expand to read each)

* The release notes are really long or dense, and breaking changes are easily missed

There's also worse problems:

* The release notes don't actually call out the breaking change (you have to read each ticket in detail)

* The release notes just say "Bug fixes" or there are no release notes

I think that along with the suggestions in this article, library authors
should also put effort into making good release notes. This includes realizing
that some people are upgrading from a version that is a couple of major
versions and/or years old.

~~~
CoryG89
While it is a good example, you could also use the same example and conclude
that the problem was inadequate tests. SemVer is great, but you can't count on
dependencies that you do not control actually adhering to it, either
intentionally or unintentionally.

The only thing that could have prevented something like this for sure was
mentioned:

> And while nothing in our early testing sent messages larger than 256k, there
> were plenty of production instances that did.

To me, this was the clear failure; not the fact that some dependency broke
semver. Their production system relied on being able to send messages larger
than 256k, and their tests did not.

~~~
gregmac
While it's easy to say, how far do you go? Do you test every bit of every
upstream library you use? The ideal is probably yes, but the reality is this
rarely happens.

Even with a test, you may not find this. In the IOException example, the
author calls out why:

> When we upgraded, all our tests passed (because our test fixtures emulated
> the old behavior and our network was not unstable enough to trigger bad
> conditions)

The only way to catch this type of thing is to emulate the entire network side
of things, and that's still only as good as _your_ simulation of the real
world. Again, reality is even if you test your upstream to this extent, you're
probably mocking a bunch of things, and that may mask something in a way you
won't see until possibly production use.

~~~
Swizec
> While it's easy to say, how far do you go? Do you test every bit of every
> upstream library you use? The ideal is probably yes, but the reality is this
> rarely happens.

It depends. I have a friend who used to work on banking systems. They had full
test coverage of every dependency. Even standard lib functions and language
features.

One time they found a bug in the md5 implementation in a minor version of a
popular database.

~~~
nostalgeek
> It depends. I have a friend who used to work on banking systems. They had
> full test coverage of every dependency. Even standard lib functions and
> language features.

these are not dependencies anymore then; they are part of your source code and
should be vendored with it. I don't know what language your friend was using,
but I'm pretty sure most standard libraries and languages already have tests
with very good coverage.

> One time they found a bug in the md5 implementation in a minor version of a
> popular database.

Every piece of code can have bugs. 100% code coverage doesn't eliminate bugs;
it just says all code paths are tested. An algorithm can still be wrong for
some values even if 100% of code paths are tested.
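A minimal illustration of that last point (hypothetical Python): the function below gets 100% line coverage from the tests shown, yet is wrong for other inputs, because it uses bitwise OR instead of addition.

```python
def add(a, b):
    # Buggy: | is bitwise OR, not addition
    return a | b

# These tests execute every line, so coverage reports 100%...
assert add(1, 2) == 3  # 0b01 | 0b10 == 0b11, which happens to equal 3
assert add(4, 1) == 5

# ...but the algorithm is still wrong whenever the bit patterns overlap:
print(add(1, 3))  # 3, not 4
```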

~~~
Swizec
The lesson learned isn’t about code coverage or paths tested. It’s to not
blindly trust 3rd-party anything, even standard libraries that “already have
tests with very good coverage”, when the stakes are high.

If billions of dollars are riding on your code, you better be damn sure you
trust everything it relies on.

Fun side note: every piece of internal code was always developed in parallel
to the same spec by 3+ teams so they could cross validate. If all 3 functions
don’t return the same value for the same input, every team gets to build it
again until all implementations behave the exact same.

High reliability engineering sounds “fun”

~~~
zaarn
Certainly it's fun from the pure engineering perspective but I guess also
somewhat tedious.

On the other hand, if a billion dollars depend on your code working or not, or
in other cases human lives like in space rockets, you don't get a second
chance. If you fuck up, lots of important things get flushed down the toilet,
usually including your job.

So you have 3 teams work in parallel to be 99.999999999% certain that it'll
work as advertised. It's also sorta why banks are slow to adopt new changes:
they want to be sure that whatever is going on, it'll work and not flush down
Grandma's rent.

------
int_19h
> Stop trying to justify your refactoring with the "public but internal"
> argument. If the language spec says it's public, it's public. Your
> intentions have nothing to do with it.

This is so wrong. APIs are for people, not tools, so intent is primary. When
tools are not expressive enough to capture and enforce intent, you document
it, but it's still primary. Someone using a "public" API that clearly says
"for internal use only" is no different from a person who uses workarounds
like reflection or direct memory access, and there is no obligation to keep
things working for them.

~~~
p1necone
What reason would you have for publishing something in a public API if it
actually is for "internal use only"?

~~~
habosa
In some languages / project structures you need a way for internal components
to connect that happens to be "public" but is not meant for public use.

I see this a lot in Java libraries, for instance.

~~~
gmueckl
C++ and C# have the same kind of problem: except for the iffy friend
declaration in C++, there is no way in the language to denote that some method
is not meant for use in other modules. C# has the internal scope for each
assembly, but this breaks in combination with unit tests placed in separate
testing assemblies.

Generally, proper unit testing is at odds with strict scope restrictions in
the tested code. I guess we need more allowances for unit testing at the
language level to fix that. E.g. allow testing code to be marked as such and
to ignore that certain things are declared private, but in turn only allow it
to run in a testing context, not in regular builds, to prevent abuse.

~~~
pjmlp
> C# has the internal scope for each assembly, but this breaks in combination
> with unit tests placed in separate testing assemblies.

That is what _[assembly: InternalsVisibleTo()]_ is for.

~~~
gmueckl
This only covers a small part of the problem. Things that should be private
and require separate testing are still required to be more visible than they
are supposed to be.

~~~
pjmlp
I am of the opinion that this is the job of functional tests anyway.

Unit tests should only exercise public interfaces, with internals and private
parts being tested as side effect of calling them.

~~~
gmueckl
This simply cannot work in many cases. It is quite unrealistic to completely
test complex logic that is hidden behind a narrow interface. You are hit with
the full combinatorial complexity of what is behind that interface, even if it
consists of independent parts internally. If you can test these parts
independently, the number of required tests is a fraction of what a black-box
approach requires.

Another situation is checking numerical code for correctness and accuracy.
There it is extremely advantageous to have testable small functions that map
to individual mathematical expressions. But these are again implementation
details that need to be hidden behind interfaces.

~~~
pjmlp
That leads to programming for the unit tests, exposing parts that shouldn't be
visible in the first place.

Your numerical code example can be achieved with an Assembly of internal
functions/methods, exposed only to the implementation and unit tests.

Of course, this is easy to do in a greenfield project from the start, not so
easy on legacy code.

~~~
gmueckl
Your first statement is exactly what I've arrived at. It's just not avoidable
in general.

I have to clarify that I'm not fixated on C#. Sure, you could create a helper
assembly in .NET that is essentially a mess of disembodied functions for
computing every slightly-more-complex function that happens to be in your
program. But this breaks OOD.

In C/C++ you can't do quite the same. The best you could do there is break OOD
and try to hide these global functions by using private headers (which are
ugly in their own ways).

------
rtpg
Django is my gold standard for this. They have great deprecation policies
where they deprecate something in the same release in which they add
alternatives (allowing you to fix things up before upgrading Django), they
document these changes liberally and offer alternatives, they make good use of
the warnings system (meaning you can run tests in "deprecated functions not
allowed" mode to catch stuff), and they are generally careful.

I'm still shocked at the number of projects that make breaking changes without
first releasing a "support both versions" release that lets people test their
changes easily. Especially frustrating when you have really basic environment
variable renames that could support the deprecated name as a one-liner so
easily.

Give people the space to upgrade please!
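That warnings-based flow can be sketched in a few lines of Python, using the generic `DeprecationWarning` rather than Django's release-specific warning classes (the function names here are hypothetical):

```python
import warnings

def list_items_with_limit(query, limit=0, offset=0):
    return f"items for {query!r} (limit={limit}, offset={offset})"

def list_items(query):
    # The old entry point keeps working, but is flagged for removal.
    warnings.warn(
        "list_items() is deprecated; use list_items_with_limit()",
        DeprecationWarning, stacklevel=2)
    return list_items_with_limit(query)

# A test suite can turn deprecations into hard errors -- the equivalent
# of running `python -W error::DeprecationWarning`:
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        list_items("q")
    except DeprecationWarning as exc:
        print("caught:", exc)
```

That "error mode" run is what gives downstream users a release in which both the old and new names exist, so they can migrate before the old one disappears.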

~~~
sametmax
Same. They are not only very patient, but provide migration documentation and
helpers.

It's also amazing how long they stayed at 0.96 despite being very stable. Then
1.x for a long time too.

It has a cost though: Django has a hard time going async, since it breaks
everything.

All in all, the python community has a good culture for this. Even the 2 to 3
migration was given more than a decade to proceed.

Yet, I feel like we still get a lot of complaints, while in JS or Ruby land
you can break things regularly and fans scream that it's fine.

------
laurence-myers
SemVer is a social construct, not a contract. It's nice when it applies, but
you cannot rely on other developers to adhere to it.

One man's bugfix is another man's breaking change. If product A implements a
workaround for a bug in product B, but the bug gets fixed in a patch version,
it could break product A's code, so it becomes a breaking change. The only way
to anticipate these changes is reading the change logs/release notes, and
thorough automated regression testing. (Obviously unfeasible for every
dependency.)

Maybe versions should be a single number, like a build number. It just gets
tricky when you have multiple versions out there, each requiring patches.

~~~
nemothekid
> _SemVer is a social construct, not a contract. It's nice when it applies,
> but you cannot rely on other developers to adhere to it._

What's the solution here? Fuck standards? Imagine if we had that same attitude
with regards to HTTP.

~~~
pvorb
A problem with SemVer I often see is that it's unclear whether a project
adheres to it. You just can't assume that every project with x.y.z version
numbers uses semantic versioning.

~~~
hinkley
If the version number is less than 3.1, odds are very good they don’t.

And I just described 80% of the node module ecosystem...

~~~
lbm
Not necessarily. There are plenty of projects in their infancy that follow
semver correctly. I'd argue that a project with a high major number is more
likely to be indicative of improper usage.

~~~
wild_preference
I don’t think a high major version number tells you that. Maybe they left
0.x.y (unstable) too early and have just been honest about their churn ever
since, which is as semver as you can get.

But one of the main semver violations I see in the wild is a project slotting
major changes into the minor version number, because they want to avoid high
major version numbers for some reason or have some romantic idea of what a
major version bump “should be”.

------
jarfil
This is all great, but I feel like all these problems could be caught just
with properly written tests. If your tests correctly cover the API usage of
your code, and I mean both your code complying with the intended API and the
API complying with the intended usage, then the implementation behind that API
should be totally transparent. No need to check versions, release notes, or
any of that, just run the API compliance tests on the new version, and if it
works then your code should work too.

~~~
jakobegger
Relying on tests is naive. Your tests can't cover every case. The article even
mentions this -- their tests passed, but it failed in production.

~~~
crdoconnor
They built tests that explicitly assumed that the library's interface wasn't
going to change:

>When we upgraded, all our tests passed (because our test fixtures emulated
the old behavior)

Doing that, upgrading your dependencies and expecting everything to work just
because those tests passed? _That's_ naivete.

If they'd built decent integration tests that used the actual library (instead
of "assume nothing changes" fixtures) and made more of an effort to simulate
realistic scenarios then their tests probably would have flagged up most of
the issues they had.

Alas, this seems to be one of the side effects of following the "test pyramid"
"best practice".

~~~
jakobegger
I wasn't talking about that paragraph, but the following paragraph where they
had tests, but they didn't test with large enough packets.

Tests can never cover every scenario. They are very useful, and they catch a
lot of unexpected regressions. But they're just a part of the puzzle, not a
replacement for good development practices.

Updating a dependency without bothering to read the release notes because you
have tests -- maybe naive is the wrong word, maybe hubris fits better.

~~~
crdoconnor
Tests can't cover every scenario, no, but had they made a bit more of an
effort to test realistically then it's absolutely possible that they could
have covered every scenario that mattered here.

Over-reliance on unrealistic unit tests (which is likely what led to them not
testing large packets) is a pattern I've seen cause issues like this many,
_many_ times before.

I upgrade pretty regularly without reading release notes - relying on
realistic tests to catch everything. What they do catch is usually not in the
slightest bit obvious from release notes (often a regression in the
dependency). Call it hubris if you like, but it works for me.

------
hackernoon
Argh! I love the deprecation example! How elegant! How did I never think of
this (or why did I never think to ask)!

edit: pasted here –

    
    
      func ListItems(query Query) Items {
        return ListItemsWithLimit(query, 0, 0)
      }
    
      func ListItemsWithLimit(query Query, limit int, offset int) Items {
      // ...
      }

~~~
SamuelAdams
Wouldn't it be easier to write optional variables, allowing you to keep the
same method name? This results in cleaner code that doesn't break existing
usage.

For example:

    
    
      func ListItems(query Query, limit int = 0, offset int = 0) Items {
      // ....
      }

~~~
jniedrauer
This only works in languages that support optional variables or overloading.
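For instance, in Python the same evolution needs no second function name at all (hypothetical names, mirroring the example above):

```python
def fetch_items(query, limit=0, offset=0):
    # Existing one-argument call sites keep working; new callers can
    # opt in to limit/offset without a new function name.
    return f"items({query!r}, limit={limit}, offset={offset})"

print(fetch_items("q"))            # old call sites unchanged
print(fetch_items("q", limit=10))  # new capability, same name
```

In Go (the language of the article's example) there are no default parameters, which is exactly why the author reaches for a second, wider function instead.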

------
jve
> But for less intrusive changes, I personally feel like you can make some
> minor SemVer transgressions provided...

Kind of contradicts "Often, it seems that version numbers are incremented by
"gut feel" instead of any consistent semantic: "This feels like a minor
version update."

> The value of MaxBufferSize was adjusted downward to 2048 because we
> discovered a buffer overflow in a lower level library for any larger buffer
> size. See issue #4144

Technically it's a major version bump, as I understand it. But security is
important, so what should we do here, in addition to writing it down as the
first sentence of the release notes? Perhaps having an excuse to potentially
break downstream code in the name of security should be OK, as long as it is
well communicated (i.e. in the readme)?

------
timwis
It would be helpful to mention that these guidelines -- and semver generally
-- don't really apply to applications from what I've read. Versioning
applications still feels like a gut feeling.

~~~
Zanni
I'd assume marketing considerations dominate the version number discussion for
(consumer) applications -- whether you want the release to be seen as the good
old whatever you know and trust, or as new and improved.

------
asimpletune
This is something whose importance I cannot stress enough, yet it rarely ever
receives the recognition that it should. Being able to work delicately and
compassionately on old code, while still bringing valuable updates, is hard.

It’s even harder to make your changes look easy and obvious in hindsight.

This is something that most engineering organizations are not equipped to
recognize and promote as a virtue - it’s sort of hard to explain as it is. If
anything, this patience can be considered an enemy to progress.

When you see people who do this well, take note.

------
tabtab
Regarding exception handling, letting internal exceptions define external
behavior is perhaps a bad idea. The possible exception types can be wide and
change over time as new parts or features are added. Example:

    
    
        // pseudo-code 
        qry = new query(sql=theSql, dbConfig=DB_FOO);
        if (! qry.Execute()) {
           errMsg = "Something went wrong during your query. ";
           if (qry.errorExceptionName=="DB_Busy") {
              errMsg += "The database appears to be busy.";  // append more 
           }
           displayAlert(errMsg);
        } else {
           processQuery(qry.resultRows);
        }
    

Here any fatal errors are caught inside the query object, but details are
available if and when you wish to take advantage of them outside the query
object. The query object (API) user doesn't have to know all possible
exception types in order to handle an exception properly (or at least in a
good-enough way).

~~~
jdbernard
It also bugged me that he was upset when the behavior they were relying on was
an internal detail not included in the API's contract.

 _(Incidentally, the API itself did not change because it was something like
func Read(in Reader) error, where error was a parent of all exceptions)_

That's not incidental. In my opinion, the authors of the API were completely
within their rights to change the internal detail of which specific exception
type was thrown, because their public API never made a guarantee beyond it
being an instance of error.

~~~
tabtab
Future-friendly error-handling can indeed be tricky. I ran into this trying to
make a lasting email-sending API. I was hesitant to depend on the API's
specific exception types, and so considered mapping them to more general
categories, yet still giving details for troubleshooting.

    
    
     // pseudo-code
     err = new Error(hasError=false); // innocent until proven guilty
     try {
       sendEmail(...);
     } catch (e in excptn1, excptn6, excptn7) {  // dummy names
       err.hasError=true;
       err.recipientProblem=true;
       err.errorType = e;
     } catch (e in excptn2, excptn4, excptn9) {
       err.hasError=true;
       err.contentProblem=true;
       err.errorType = e;
     } catch (e) {  // anything else
       err.hasError=true;
       err.errorType = e;
     }
     ...
     return(err);
    

One could make an enumerable list of error categories, but in this case I
wasn't even sure they were mutually exclusive, because it still sends to the
rest of the recipients if one recipient is bad.

------
GlitchMr
What counts as a breaking change? I would say that, strictly speaking,
anything could be a breaking change if a user is crazy enough, so increasing
the major version all the time is not particularly useful - I mean, you could
do that, but what is the purpose of semver then? Consider the following
function.

    
    
        f(arg: String): Output
    

Now, let's say that we change it to be generic. Let's assume that `String`
implements `SomeInterface<Output>`.

    
    
        f<T: SomeInterface<U>, U>(arg: T): U
    

A user is crazy and passes an empty generic type list, and their code breaks
as there are now two generic parameters instead of none.

    
    
        f<>("a")
    

Or for a different example, let's say that you want to introduce a new
function, `g`, but the user does the following.

    
    
        import yourlibrary.*
        import otherlibrary.*
    

A user is using the function `g` from `otherlibrary`, and their code doesn't
compile anymore due to an ambiguity.

I would say it's a minor change, but in theory it's possible for it to be a
breaking change. I've often had situations like this where something could be
breaking, but the code had to be really unusual for it to break (and if it did
break, it would be a compilation error).
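The wildcard-import case has a close analogue in Python, where the failure is silent shadowing rather than a compile error. A self-contained simulation (toy modules with hypothetical names):

```python
import sys
import types

# Toy stand-ins for the two libraries (names are hypothetical).
yourlibrary = types.ModuleType("yourlibrary")
otherlibrary = types.ModuleType("otherlibrary")
otherlibrary.g = lambda: "otherlibrary.g"
sys.modules["yourlibrary"] = yourlibrary
sys.modules["otherlibrary"] = otherlibrary

# Before the "minor" release: only otherlibrary exports g.
ns = {}
exec("from yourlibrary import *\nfrom otherlibrary import *", ns)
print(ns["g"]())  # otherlibrary.g

# yourlibrary adds g in a minor release; now import order silently
# decides which g the user gets -- there is no ambiguity error.
yourlibrary.g = lambda: "yourlibrary.g"
ns = {}
exec("from otherlibrary import *\nfrom yourlibrary import *", ns)
print(ns["g"]())  # yourlibrary.g
```

Whichever star-import runs last wins, so adding `g` to a library can change behaviour for some users without any diagnostic at all.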

------
planetmaker
It's a bit funny that the title reads "write nice and stable code" - but it's
actually a plea for "proper" versioning.

Starting with the versioning, a few examples are given - and I read it as
"this is not good versioning" as it continues with a very _basic_ description
of how semantic versioning works. Yet all those examples given above are
excellent examples of semantic versioning at its best:

> 10. Build metadata MAY be denoted by appending a plus sign (...) Examples:
> (...) 1.0.0-beta+exp.sha.5114f85.

And this metadata, referencing the used library versions and the actual hash
of the commit used in the build, is given there - exactly this meta info on
used library versions can help a great deal when it comes to checking bug
reports, as programme behaviour may differ between library versions even
though the code works with any of them.

------
denkmoon
That headline is a summary of all the things I try to do but fail.

------
wool_gather
> To that end, these are safe as part of a feature release:
>
> Adding a field or method to a struct/class/enum/etc.

Um, adding fields is a breaking change if you do a binary distribution.

> Z is the patch version. Changes to this indicate that internal changes were
> made, but that no changes (even compatible changes) were made to the API.

then, contradicting that

> Mark a thing as deprecated as soon as it is considered deprecated, even if
> that is a patch or minor release.

Deprecation should be considered a change to the API. I'm not going to thank
you for filling up my build logs with warnings when I pull in a patch update
of your library.

> Deprecation, after all, is a warning condition, not an error condition.

No, not if you use -Werror or equivalent.

------
brian_herman
[https://sentry.io/welcome/](https://sentry.io/welcome/) The best way I have
found to write stable code is to capture all exceptions with sentry and just
fix them.

------
unrealchild
I try to encourage everyone to define a contract, code to that contract, and
update the contract when it is no longer accurate. The exact nature of the
contract is contextual, sometimes a schema, sometimes a well documented
comment header or perhaps a project README.

------
torstenvl
Sooo adding a field to a struct is supposedly a non-code-breaking feature
(yeah, right) but tweaking a constant isn't?

This does not fit with my experience of the world.

~~~
uryga
Could you describe some situations where adding a field broke stuff? I can
only imagine that happening if your code cares about the exact binary
representation/layout of a struct - so serialization, FFI, or just
bit-twiddling.

~~~
torstenvl
Serialization is one, but anything involving casting from one struct to
another (e.g., if the Berkeley sockets implementation changed the size of
struct in_addr), anything requiring careful management of memory alignment
(e.g., SSE), etc. could potentially be broken. Never mind the inevitable
reckless coding practices that arise in the wild all the time (someone decides
to #define a magic number for allocation purposes instead of using sizeof...).

On the other hand, a large number of constants are _supposed_ to be tweakable,
to the extent that many are designed to be set at compile time.

Anyway. OP article had good points overall, I'm just not sold on some of the
specifics.

------
blablabla123
I think the title should rather be "write stable architecture", this is kind
of misleading. The code can still be unstable according to this ;)

------
h8liu
fwiw, I maintain a website called smallrepo
([https://smallrepo.com](https://smallrepo.com)). It builds go language code
together, and maintains an always buildable commit set as a "super repo". If
you sync to smallrepo (rather than using go get), it can shield you from many
unexpected build breakages.

------
wintorez
Or at least "Be Stable and Write Nice Code"

------
askmike
I'm maintaining an open source project[0] and I'm struggling with using SemVer
because my "app" doesn't have a single API but a few:

At its core it's a node app, though I also include a small web server that
wraps around it (and a UI frontend).

1. It allows people to write scripts (js) that receive inputs and pass on
events based on an API (the strategy API)[1].

2. It has extensive configuration[2] that sometimes changes form (the config
API).

3. It talks to a number of external services (crypto exchanges) over a
"common" protocol called the "exchange wrapper API"[3] (I am ignoring the
version of the exchange API being consumed).

4. The wrapped webserver comes with an API (REST + WS)[4].

5. The "core app" is a chain of plugins; when they change, the required
config/events also change (usually breaking changes to 1 and 4)[5].

I could take all of these components apart (microservice way) and version them
separately, but I like the monorepo style I use now where pulling one repo
means that everything is working together. Also the fact that (in bug reports)
people only have to refer to one version (and when on nightly maybe the git
commit if I need more details).

But versioning is a mess.

[0]: [https://gekko.wizb.it/](https://gekko.wizb.it/)

[1]:
[https://gekko.wizb.it/docs/strategies/creating_a_strategy.ht...](https://gekko.wizb.it/docs/strategies/creating_a_strategy.html)

[2]: [https://github.com/askmike/gekko/blob/develop/sample-
config....](https://github.com/askmike/gekko/blob/develop/sample-config.js)

[3]:
[https://gekko.wizb.it/docs/extending/add_an_exchange.html#Ge...](https://gekko.wizb.it/docs/extending/add_an_exchange.html#Gekko-39-s-expectations)

[4]: [https://gekko.wizb.it/docs/internals/server_api.html#REST-
AP...](https://gekko.wizb.it/docs/internals/server_api.html#REST-API)

[5]: [https://gekko.wizb.it/docs/internals/events.html#List-of-
eve...](https://gekko.wizb.it/docs/internals/events.html#List-of-events-
emitted-by-standard-plugins)

-------

This is not a criticism; I typed this out in the hope that someone can point
me in a sane direction (given the discussion on versioning).

~~~
drblast
Your project looks cool. Here's some overly harsh criticism from an old dude
in no particular order.

The problem I see upon an extremely cursory view of your project is that it's
trying very, very hard not to be a sellable product.

I'm a good programmer. Why do I want to learn your API/library instead of
calling the exchange API's directly? Is this saving me time? Is it saving me
time long term, even when your code changes and breaks things?

If you had to make this into a single web API and sell access to _that_ as
your product, what would it look like? There's your versioning and design
answer.

Why is the app a "chain of plugins?" If the app breaks when the plugins change
then they're not really plugins, are they? Does this design solve a problem or
did it just seem like a cool way to do it?

Also, anything that relies heavily on configuration to work is a fundamentally
broken design in my opinion. Configuration is global state that is hard to
change, and the heavy-handed presence of it in a project is usually an
indicator that the abstractions are wrong and most of the code probably relies
on some hidden state that's really hard to debug unless you're the code
author. If there's a legitimate runtime choice you don't want to make for the
user, that's a function parameter. If that looks messy, you probably left too
many decisions for the user to make.

An ideal library is stateless so that the user can handle wrapping it with
simpler calls and configuration settings. Make building blocks, not
skyscrapers.

Sometimes I want to grab all of you young people by the shoulders and shake
you until you stop reinventing ever more convoluted ways to do RPC.

Finally, take everything I say with a grain of salt because I'm heavily
medicated right now.

~~~
askmike
Woah great feedback. This is very much appreciated!

I'm not sure if going into all of your points specifically right here is the
best way forward, but suffice it to say I am very happy to hear them :)

> the abstractions are wrong and most of the code probably relies on some
> hidden state that's really hard to debug unless you're the code author.

This is very much spot on, definitely something I want to work on.

-----

The main reason that everything is so spread out (plugins, web API, internal
API, etc. etc) is because a ton of people are doing different things with it.

99% of the people only touch the basics, and they don't need to touch any
config file; they can go through the UI that handles all of it automatically
(Gekko is focused on tech-savvy but not per se professional programmers). They
don't know what (my concept of) a plugin is, and they don't care about any API
(nor any version, for that matter).

It's about the other 1% who are kind of spread out over:

- people who want to hook into certain lifecycles (to push certain data to
Google Spreadsheets[1], for example)

- people who only use subparts of the app, for example to have something that
can fetch normalized market data from a number of different websites <- this
is a big part of the project, but not the sole part, hence it should not
dictate versioning.

- Or people who only want to create their own prediction-making logic (with
AI or whatever) and use the execution logic of my app <- I'm in the process of
pulling this out as a standalone library.

- people building tools on top of the web API that brute-force a problem
space to figure out new solutions[2].

So all the people that care about the versioning (not the 99%) are exactly the
hobby DIY hacking people who want to open it up and take it apart. And it
feels impossible to steer them into "don't touch this because the interface is
not a standardized API".

-----

The main thing I am going to do now is rethink the entire config strategy,
because it's a huge mess and I think I am the only one who understands it[3].

[1]: [https://github.com/RJPGriffin/google-forms-gekko-
plugin](https://github.com/RJPGriffin/google-forms-gekko-plugin)

[2]:
[https://forum.gekko.wizb.it/thread-56589.html](https://forum.gekko.wizb.it/thread-56589.html)

[3]:
[https://github.com/askmike/gekko/issues/956](https://github.com/askmike/gekko/issues/956)

------
3pt14159
This is a great piece. I think my only addition to it is to point out two
small insights.

1. SemVer works great in some communities (Ruby) and shit in others (Python).
Generally, the more computer-sciency the community, the better I find the
SemVer'ing. Python has a bunch of scientists using it, so it's less reliable,
even if many of the core libraries follow it pretty well.

2. Apps, plugins, frameworks, and libraries are different things with
different SemVer strategies. With a paid app, versioning is often a marketing
decision. Framework plugin versioning is often a "match the framework to
reduce mental burden" decision. Whereas in frameworks and libraries I find
much higher adherence to what SemVer strives to do.

