The macro does not necessarily look like a bug; it simply does the counterintuitive thing of zeroing everything, including the capacity. Maybe the way it is used, the capacity is saved, then the array cleared, then the capacity set back; or, more likely, it is used only right after the object is allocated, when everything needs to be zeroed (but if that is the case it should be called "_init", not "_clear", for clarity). Not the soundest interface, of course. Also, this is the kind of thing that should not be a macro regardless of speed... and should only be turned into a macro as part of very aggressive profiler-driven optimization.
If the way it is used requires the user to break the abstraction/encapsulation and manually save some fields in order not to corrupt the data structure or leak memory, I would call that a bug.
There is one use of sc_array_clear() in the test code [1] that really does make it look as if it is being used in a way that (again, I haven't single-stepped this code, only read it) leaks memory.
I agree about the pain of everything being macros; it's more pain than it's worth, I think, and will likely lead to code duplication (and more pain in debugging, probably).
I would even go so far as to think that this kind of single-file design, where each file is independent of the others, makes it harder and more annoying to implement more complicated data structures.
I'm too lazy to look at the code that does the alloc, but if this came my way in a PR I'd ask if the code doing the allocation is using malloc or realloc.
If all allocations are performed using realloc[1] then I have no problem with that macro.
[1] Sometimes it's just easier. Why conditionally call malloc/realloc when you can simply call realloc all the time? realloc(NULL, size) is equivalent to malloc(size).