The ladder's inputs and outputs almost become like a set of invariants, like you would find in a functional program, but the monad for state is implicit.
// a page of code
foo = bar + 5;
// more code

versus:

// a page of code
int foo = bar + 5;
// more code
I feel like the topic of inlined code is similar, but not quite the same.
I once spent weekends and late nights fixing a bug which, if it had shipped, would have eventually bricked an embedded product. It caused the software to hang in a way which evaded every watchdog and reset timer. This bug was only barely caught by the edge of a multi-week stress test. As it turned out, the bug itself wasn't in any of the code I had written, but in an open-source dependency.
There are several ways to approach this problem. Besides simply testing your product over an extended period of time, you can build a test harness which simulates time at an accelerated rate. Another method is to periodically reboot your system from scratch after the longest period of uptime you're able or willing to test. It also helps to use software and hardware watchdogs which automatically reset your system, in whole or in part, if it becomes unresponsive (fails to "feed" the watchdog).
However, watchdogs by themselves aren't foolproof - in my case, the program's main I/O loop kept running, but a critical part of the stack no longer functioned correctly.
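A minimal sketch of that failure mode in C, with hypothetical poll_io(), network_stack_healthy(), and feed_watchdog() stand-ins for the real driver calls: if the feed lives only in the main loop, the watchdog proves the loop is spinning, not that the rest of the stack is healthy.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real hardware/driver calls. */
static void feed_watchdog(void)         { puts("watchdog fed"); }
static void poll_io(void)               { /* one unit of main-loop I/O work */ }
static bool network_stack_healthy(void) { return true; /* subsystem liveness check */ }

int main(void)
{
    for (int tick = 0; tick < 3; ++tick) {   /* bounded here so the sketch terminates */
        poll_io();

        /* Feeding unconditionally here only proves the main loop is spinning.
           If another subsystem silently wedges, the watchdog never fires --
           exactly the failure mode described above. */
        /* feed_watchdog(); */

        /* One partial fix: feed only when every critical subsystem also reports
           liveness, so a wedged subsystem eventually forces a reset. */
        if (network_stack_healthy())
            feed_watchdog();
    }
    return 0;
}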
And two from 2015:
Maybe it’s because C is a lower-level language, but for me these limits seem much looser than what is discussed for higher-level languages, specifically Ruby.
Take Sandi Metz's rules for practical object-oriented [quality] code.
1. Classes can be no longer than one hundred lines of code.
2. Methods (functions) can be no longer than five lines of code.
It's a long way from a 5-line constraint to a 60-line constraint.
I agree with Sandi that smaller ‘single purpose’ functions lead to less coupled, easier-to-maintain code.
Is the drop from 60 lines to 5 just a progression of thought, with 60 lines being what used to be acceptable, and possibly still acceptable with ‘older languages’?
Is one better than the other? How to explain the difference?
If your code does a 30-line operation exactly once, it can be inlined. If you want to move it into a function to validate error handling, that is great, and now your unit tests help validate code paths.
The fact that it is 30 or 100 lines is immaterial. Making "micro functions" to keep each one small just makes it harder to follow the stack. Readability is key, and micro functions hurt readability.
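To make the "extract it when you want to test it" case concrete, here is a small sketch in C; the record format and the validate_header() name are invented for illustration. The function is pulled out because its error paths are worth exercising directly, not because a line counter demanded it.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Invented record format: a one-byte version followed by a length. */
typedef struct { uint8_t version; uint8_t length; } header_t;

/* Extracted so the error-handling paths can be unit-tested directly;
   left inline, they could only be exercised through the whole parser. */
static bool validate_header(header_t h)
{
    if (h.version != 1) return false;   /* unsupported version */
    if (h.length == 0)  return false;   /* empty record        */
    return true;
}

int main(void)
{
    /* The "unit tests" that motivate the extraction. */
    assert( validate_header((header_t){ .version = 1, .length = 4 }));
    assert(!validate_header((header_t){ .version = 2, .length = 4 }));
    assert(!validate_header((header_t){ .version = 1, .length = 0 }));
    return 0;
}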
In fact, the Rule of Three all but says that it’s okay to repeat yourself once, twice is not good, and three times is right out. That’s substantially more repetition than DRY prescribes.
Instead, and I admit this sounds a little vague, the code should say what it does. The bigger the thing it’s trying to do, the harder it is to state that clearly, or to verify, so you break it down into separate concepts and string them together. And then you notice how awkward it is to state the same thing many times so you are highly motivated to reuse those statements, refine them, and make them as accurate as possible.
Inlining frequently fails this test because you lose the boundaries of the “thing” that is happening, and people start slipping non sequiturs in, start rambling, which makes it very hard to follow their reasoning.
If you can’t follow their reasoning, you can’t defend it. You won’t defend it. And pretty soon it doesn’t say what it used to, and that’s where you get regressions. So if you care about that, you want to write your code so that they understand it, in which case obscuring your intentions becomes pretty indefensible.
It is idiotic to treat, say, the implementation of some kind of complex 3D rendering algorithm in exactly the same way as you would treat a UI view for a login form.
Guidelines should come with a rationale so that, at the very least, following them can be justifiably avoided when doing so would contravene their intent.
If I can't understand and appreciate the rationale behind a guideline, and I'm not required by coding standards or tooling to follow it, I'm not going to.
Here's a guideline: don't surrender your common sense and let someone's generalized ideology dictate the design of your program.
While I see advantages to minimizing the length of functions, I can't imagine a well-written program following this five-line rule.
My concerns are that it increases the length of the source code, which damages readability, that it scatters functionality and obscures control flow, and that it unnecessarily requires the formation of many interfaces.
The rule is far too general in its application ("all functions") and at the same time too specific ("5") to be useful.
If you said instead "in general, try to make your functions do one thing well and divide and conquer the problems until each function is not very hard for you or someone else to understand", that would be a far more useful guideline.
Admittedly, Erlang's powerful pattern matching makes it easier, but it can definitely be applied to multiple languages. The biggest problems I've found when trying to apply it are the lack of pattern matching in most languages and the problem of naming.
For example, imagine a very simple bytecode interpreter: In C, you need a function to dispatch all of your bytecodes to the correct implementation functions with the right arguments. In Common Lisp, this is a prime target for a macro to turn a configuration file into actual code using fill-in-the-blanks boilerplate iterated a few times. In C, you have to write the switch statement by hand, which leads to writing a function as long as the number of individual bytecodes your interpreter knows about. Exceeding sixty lines is entirely possible assuming formatting directives are adhered to such that you can't bunch statements up on a single line. However, that switchyard function is the simplest way to write that code; breaking it up would only make things needlessly complicated and harder to check.
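For illustration, a sketch of such a switchyard with a made-up four-opcode machine; a real interpreter repeats the same pattern once per opcode, which is how the dispatch function grows past sixty lines while remaining trivially easy to check.

#include <stdint.h>
#include <stdio.h>

/* A made-up, minimal instruction set; real interpreters have dozens of opcodes. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const uint8_t *code)
{
    int64_t stack[64];
    int sp = 0;        /* stack pointer   */
    size_t pc = 0;     /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:                     /* push the next byte as a value  */
            stack[sp++] = code[pc++];
            break;
        case OP_ADD:                      /* pop two values, push their sum */
            sp--;
            stack[sp - 1] += stack[sp];
            break;
        case OP_PRINT:                    /* print the top of the stack     */
            printf("%lld\n", (long long)stack[sp - 1]);
            break;
        case OP_HALT:
            return;
        /* ...one case per remaining opcode; the function's length tracks the
           size of the instruction set, not any structural complexity. */
        }
    }
}

int main(void)
{
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);   /* prints 5 */
    return 0;
}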
The line-limit rule sounds to me like a CS professor's homework assignment instruction that escaped into the workplace.
You want something that allows you to implement the smallest unit of complexity without having to artificially break it down. If you implement an FFT with just 5-line functions, the result is probably going to be messier than if you did it with less restrictive requirements.
Flexibility increases when there are more things the code can do. Verifiability increases when there are more things the code can't do.
Of course most code could be simultaneously improved in flexibility and in verifiability, but ultimately there's a tradeoff.
Abstract problems have the complexity removed from them, which leads to smaller chunks of code (be they functions, classes, modules, declarations, whatever). Those are also more reusable and flexible, because they are focused on general features.
Concrete problems cannot have the complexity removed from them. That leads to large chunks of code and an implementation that cannot be used for different things. That is the code that is expected to specialize those abstract chunks from above into something that solves a real problem.
All the people who naively claim that they've "seen short code and it is so much simpler to understand. Why can't everybody just write short code?" are completely missing the point.
As an example, DGEMM is a very abstract function, applicable to rotation in 3-D space, Markov-chain occupancy computations, finite-state machine simulation, ANNs, and numerous other applications. But it doesn't use indirection. And it's not a particularly short function. (The same is true of most of the rest of LAPACK.)
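As a sketch of that generality (assuming the CBLAS binding and a linked BLAS library), the same cblas_dgemm call that would drive a Markov-chain or ANN computation can rotate a batch of 3-D points:

/* Build with something like:  cc rotate.c -lcblas   (or -lopenblas) */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    /* 90-degree rotation about the z axis, stored row-major as a 3x3 matrix. */
    const double R[9] = { 0.0, -1.0, 0.0,
                          1.0,  0.0, 0.0,
                          0.0,  0.0, 1.0 };

    /* Two points as the columns of a 3x2 matrix: (1,0,0) and (0,1,0). */
    const double P[6] = { 1.0, 0.0,
                          0.0, 1.0,
                          0.0, 0.0 };
    double out[6];

    /* out = 1.0 * R * P + 0.0 * out; with other shapes the identical call
       covers the Markov-chain, FSM, and ANN uses mentioned above. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                3, 2, 3,        /* M, N, K       */
                1.0, R, 3,      /* alpha, A, lda */
                P, 2,           /* B, ldb        */
                0.0, out, 2);   /* beta, C, ldc  */

    for (int i = 0; i < 2; ++i)
        printf("point %d -> (%g, %g, %g)\n", i, out[i], out[2 + i], out[4 + i]);
    return 0;
}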
IMO, splitting architecture into functions is more about reusability than some arbitrary metric of the "one thing" a function is supposed to do, since you can make that "one thing" as granular as you want, which leads to a lot of terrible code.
For cars, there is an ISO standard for SW (ISO 26262), and the rules are likely a lot stricter than these. I wonder: will there ever be an ISO standard for space travel?
The software design aspects of these rules have probably evolved in conjunction with DO-178, the industry standard for designing safe computing systems for avionics.
It doesn't require MISRA, but it does actually mention it as an example.