Like everything in software development, it's about picking the right methodology for each problem rather than hitting every screw with the same hammer.
For instance, caching is functionality that could be backed by Redis, memcached, etc., and I think it makes sense to create a cache API in most cases. On the other hand, if I am processing data with Kafka, I want to interface with Kafka directly, for all the special functionality and nuance that makes it great for particular use cases.
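A minimal sketch of what such a cache API might look like in C; every name here (cache_t, cache_ops, lookup_user) is hypothetical. The point is just that callers never mention Redis or memcached directly:

    #include <stddef.h>

    typedef struct cache cache_t;

    /* The interface the rest of the codebase programs against. */
    struct cache_ops {
        int  (*get)(cache_t *c, const char *key, void *buf, size_t len);
        int  (*set)(cache_t *c, const char *key, const void *val, size_t len);
        void (*close)(cache_t *c);
    };

    struct cache {
        const struct cache_ops *ops;  /* filled in by the backend       */
        void *backend_state;          /* e.g. a Redis connection handle */
    };

    /* Application code only ever touches the interface: */
    static int lookup_user(cache_t *c, const char *id, char *out, size_t n)
    {
        return c->ops->get(c, id, out, n);
    }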
I always write simple functions like "CustomerFunctions.GetCustomersByCountry" and keep all SQL in that file. Makes the code more readable and testable.
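A sketch of that layout in C, using SQLite; the table, column, and function names are all made up. The idea is that this one file is the only place that knows the customers schema:

    #include <sqlite3.h>

    /* customer_functions.c: all customer SQL lives here. */
    int customers_get_by_country(sqlite3 *db, const char *country,
                                 void (*on_row)(const char *name, void *arg),
                                 void *arg)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
            "SELECT name FROM customers WHERE country = ?",
            -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_bind_text(stmt, 1, country, -1, SQLITE_STATIC);
        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
            on_row((const char *)sqlite3_column_text(stmt, 0), arg);
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }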
I hear a lot of frustration in your words, as if you have had difficulties explaining to your colleagues why a certain thing could be done better in a certain way. I think that sometimes they might be right: sometimes reinventing the wheel is pointless, sometimes the wheel needs to be invented, and sometimes you need to add that layer of complexity. It seems that information is not shared well at your workplace, as not everyone is aware of the same problem to solve; or maybe not everyone has the same skills, in which case you need to mentor and explain; or you don't have constructive design discussions.
And no, this does not depend on the company/team size. In larger teams it's probably actually easier to convince people, as there is generally a process in place for things like design, refactoring, etc.
In the latter scenario, generating momentum is damned difficult, as everyone is trying to herd cats to their own priorities.
Smaller teams in that regard can be more flexible / nimble, as there are fewer total competing priorities, but that isn't to say that it is always true in either direction.
But seriously, that sounds like a very good idea if one has to target multiple platforms or work with a change-prone API.
We wrapped it into our engine language. Even though we were a Sony-only shop, we still wrapped our own stuff. For instance, we didn't reference any of the input APIs directly; we just had a global input object g_Input (of class Input) that did all of the input logic. It was pretty easy to make this work for DirectInput on Windows vs. the state-machine polling system on the PS3.
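A sketch of that kind of wrapper in C; all the names below are invented. Nothing outside the per-platform implementation files ever sees a platform input API:

    /* input.h */
    typedef struct {
        float    stick_x, stick_y;
        unsigned buttons;          /* bitmask, one bit per button */
    } Input;

    extern Input g_Input;          /* the single global the game reads */

    void input_poll(void);         /* fills g_Input; implemented once per
                                      platform (DirectInput on Windows,
                                      pad polling on PS3) */

    /* Game code then just does: */
    void player_update(void)
    {
        input_poll();
        if (g_Input.buttons & 0x1) {
            /* jump, say */
        }
    }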
However, I'm surprised the author did not point out that they are calling malloc() without checking for a NULL return value.
To quote malloc(3):
> On error, these functions return NULL. NULL may also be returned by a successful call to malloc() with a size of zero, or by a successful call to calloc() with nmemb or size equal to zero.
If you port to some new platform, of course you need to go back and check the docs for new error codes.
For instance, I generally abort() on NULL after printing a message.
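That pattern, sketched ("xmalloc" is the traditional name for such a wrapper; the message text is made up). Note the n != 0 test: per the man page text quoted above, malloc(0) may return NULL on success:

    #include <stdio.h>
    #include <stdlib.h>

    void *xmalloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL && n != 0) {
            fprintf(stderr, "fatal: out of memory (%zu bytes)\n", n);
            abort();
        }
        return p;
    }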
Ok, but...how are you printing the message?
On many platforms, with many standard libraries (glibc, for instance), printf et al. will themselves malloc memory. So you're testing for the system telling you it couldn't allocate memory, and then, in response, potentially triggering further quite-likely-to-fail allocations that may or may not be handled by the library.
This is why people generally don't bother to check for NULL mallocs. Even if you're on a system that will even tell you you're in that situation (rare, these days, outside of the embedded world), there's almost nothing useful you can do.
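That said, the message itself can be emitted without allocating. A minimal sketch for POSIX systems: write(2) is a plain system call (and async-signal-safe), so it doesn't go through malloc:

    #include <stdlib.h>
    #include <unistd.h>

    static void die_oom(void)
    {
        static const char msg[] = "fatal: out of memory\n";
        (void)write(STDERR_FILENO, msg, sizeof msg - 1);
        abort();
    }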
A few years back I read quite a few papers on NASA’s development standards and did quite a bit of kernel dev also.
I no longer do any allocations in daemon or driver code like this. It's far better to preallocate pools, store temporary objects on the stack, and generally never malloc outside of the pool management.
When we hit the alloc wall, we now have well-defined behaviors and can gracefully choose whether to reject, clean or error out. Additionally our code is generally more performant, less buggy, and easier to debug.
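A minimal fixed-pool sketch of that approach; the slot size and count are made up, and a real pool would add locking and bounds checks:

    #include <stddef.h>

    #define POOL_SLOTS 1024

    typedef struct { char bytes[256]; } Slot;

    static Slot  pool[POOL_SLOTS];   /* allocated once, up front */
    static Slot *freelist[POOL_SLOTS];
    static int   nfree;

    void pool_init(void)
    {
        for (nfree = 0; nfree < POOL_SLOTS; nfree++)
            freelist[nfree] = &pool[nfree];
    }

    /* NULL here is the "alloc wall": the caller decides whether to
       reject the request, clean something up, or error out. */
    Slot *pool_get(void)
    {
        return nfree > 0 ? freelist[--nfree] : NULL;
    }

    void pool_put(Slot *s)
    {
        freelist[nfree++] = s;
    }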
Honestly, in my own code I try to avoid lots of little heap allocations (duh), and when writing embedded things I don't have printf or malloc anyway.
- Come up with a requirement for handling the bad result
- Put in the code that does it
- Test the code
> I generally abort() on NULL after printing a message.
On most modern systems, accessing the null pointer will also abort; you just won't get the nice error message. It will look the same as if the program had hit runaway recursion or corrupt memory. The location in the program where it happened is equally traceable either way (with a core dump or debugger). Neither a segfault nor abort() does any graceful cleanup of anything. Basically, this is only a tiny increment in software quality over not handling the NULL at all.
This is only true on Linux, and isn't even guaranteed there. I'm pretty sure the default overcommit handling mode is still to refuse allocations that are much larger than there is free memory to support. (This was the case around 2.6.)
It's generally preferable for software to check if malloc() failed and have a chance to recover.
The only time you should avoid checking malloc() return values is for special mallocs that really cannot return NULL. One example is FreeBSD's kernel malloc() when called with the M_WAITOK flag.
I've encountered many situations where malloc will return NULL in real-world applications. In some of them we really wanted to handle that scenario and fail gracefully, or continue with less memory usage (and thus less performance).
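For the "continue with less memory" case, the shape is roughly this (the sizes are illustrative):

    #include <stdlib.h>

    /* Try a big buffer first and fall back to smaller ones; the
       caller runs with whatever it got, just more slowly. */
    char *alloc_cache(size_t *got)
    {
        static const size_t tries[] = { 64u << 20, 8u << 20, 1u << 20 };
        size_t i;
        for (i = 0; i < sizeof tries / sizeof tries[0]; i++) {
            char *p = malloc(tries[i]);
            if (p != NULL) {
                *got = tries[i];
                return p;
            }
        }
        *got = 0;
        return NULL;  /* caller fails the request gracefully */
    }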
> Checking the return of malloc() for NULL is not currently considered good practice
This is simply false. I was a TA for the intro C class at CMU last year, and we still teach to check for NULL. In fact, we mark down students who do not do so.
The way they are calling malloc() is very common in professional code bases and is a safety measure; I'm surprised the author was surprised.
I am not used to that style so it seems odd to me, but I suppose if it was common in a codebase one would get used to it.
I've spent most of my life dealing with C code dating from 1986 (and earlier) up to the present day. Once you get used to the stylistic conventions, one of the things that falls out is the simplicity, even for huge, old legacy code bases. Often little more than grep or etags is sufficient to competently navigate and follow the flow of this kind of code. Simple things are simple: what call stacks can end up here? etc. The code often has the property that if you printed it out, you'd still be able to navigate it without issue. Given a random page and a line, getting somewhere would be possible.
In comparison, so many modern code bases are just completely incomprehensible without a tool in the form of an IDE (and often still quite challenging with one, because even the IDE isn't sure, so you must resort to runtime analysis). So many "tricks" are used that require tools, and require those tools to be bug-free and robust.
The closest I have come to "pick it up and you can read and understand it without assistive technologies" is Go.
It's confusing for readers and new team members, and it can even lead to strange bugs when someone forgets to type "struct".
So why would that be "the preferred way"? I've been writing C for some time and no one who knows C casts the return value of malloc.
Also, the fact that they cast the result of malloc makes me think they were using a C++ compiler, which will complain if there's no cast.
Ah ok, makes more sense in re-reading now. I was lazy and went off of a child comment.
>Also, the fact that they cast the result of malloc makes me think they were using a C++ compiler, which will complain if there's no cast.
Definitely could be that.
Is it really right? In C, isn't it considered bad to cast the result of malloc, since the cast can mask mistakes (classically, a forgotten #include <stdlib.h>, which a C89 compiler would otherwise flag)? Also, the code doesn't check the malloc result for NULL.
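For reference, the contrast being discussed, sketched:

    #include <stdlib.h>

    struct point { double x, y; };

    void example(void)
    {
        /* Preferred in C: no cast, and sizeof on the object, not the type. */
        struct point *p = malloc(sizeof *p);

        /* Legal but discouraged: if <stdlib.h> were missing, a C89
           compiler would implicitly declare malloc as returning int,
           and this cast would hide the resulting diagnostic. */
        struct point *q = (struct point *)malloc(sizeof(struct point));

        free(p);
        free(q);
    }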
Now the profligate use of leading underscores. That is an issue.
This is a pretty common way of shoehorning access control into languages that don't support it natively.
And anyway, it is dirty - like crossing the street without looking for cars because traffic around here is very mellow, or peeing without washing your hands afterwards.
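For reference, the convention under discussion looks like this. One caveat: C reserves some underscore-prefixed forms to the implementation (anything like _Foo or __foo anywhere, and _foo at file scope), so struct members are the safe place for it:

    /* A leading underscore marks members callers shouldn't touch. */
    typedef struct {
        int   _refcount;   /* "private": managed by the library only */
        char *name;        /* "public" */
    } Handle;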
(That being said, I once helped somebody to write a program whose task it was to deliberately get the host system to the point where malloc(3) failed. It was not as trivial as one might think.)
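One way to do that on Linux/POSIX, sketched below: cap the address space with setrlimit(2) first, since overcommit can otherwise let allocations "succeed" long past physical memory. The 64 MiB figure is arbitrary:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl = { 64u << 20, 64u << 20 };  /* 64 MiB cap */
        size_t total = 0;

        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        while (malloc(1u << 20) != NULL)  /* leak on purpose */
            total += 1u << 20;
        printf("malloc failed after ~%zu MiB\n", total >> 20);
        return 0;
    }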
There is already a well-known and popular library for graph analysis (in C++) named "Lemon Graph Library". It is 10 years old. As far as I can tell, this new library is not related in any way. (Am I mistaken?)
Did they not simply google for the words "lemon graph" before they published this code base? Or am I missing something?
I've been using vim and cscope forever, but I'm wondering if there's anything new and interesting out there.
...or even screencast recommendations of folks that are particularly proficient at this.