I don't like the use of macros for things like this. Macros in general should be used rarely and sparingly.
I'm also not certain one should use "end pointers." Conventionally, it seems more advisable to use `size_t capacity`, `size_t length`, and `void *data`.
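To make the convention concrete, here's a minimal sketch of that layout with a growable push operation. This is illustrative, not from any particular library; the names `vec` and `vec_push` are my own.

```c
#include <stdlib.h>
#include <string.h>

/* Minimal dynamic array using the conventional capacity/length/data
 * layout (names are illustrative). */
typedef struct {
    size_t capacity;  /* allocated element count */
    size_t length;    /* elements in use */
    void  *data;      /* heap storage */
} vec;

/* Append one element of `elem_size` bytes, doubling the allocation
 * when full. Returns 0 on success, -1 on allocation failure. */
int vec_push(vec *v, const void *elem, size_t elem_size) {
    if (v->length == v->capacity) {
        size_t new_cap = v->capacity ? v->capacity * 2 : 8;
        void *p = realloc(v->data, new_cap * elem_size);
        if (!p) return -1;
        v->data = p;
        v->capacity = new_cap;
    }
    memcpy((char *)v->data + v->length * elem_size, elem, elem_size);
    v->length++;
    return 0;
}
```

Keeping `capacity` and `length` as `size_t` rather than end pointers means nothing dangles when `realloc` moves the allocation.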
Great use of Cunningham's Law, though! I appreciate C posts on Hacker News.
The lexer makes progress by consuming bytes from the buffer, incrementing the read position. The buffer is reallocated whenever such a read would overrun the write position: I/O functions are called to refill the buffer and push the write position further ahead.
Had they been pointers into the [buffer, buffer + capacity) range, they would have become invalid whenever the buffer was reallocated for expansion, since its location in memory could change. So I implemented the read and write heads as offsets from the buffer's base address, computing the actual pointers only when they're needed.
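A rough sketch of that scheme (the struct and function names are assumptions, not the actual code): the heads are `size_t` offsets, so they remain valid across a `realloc` even when the base address moves, and pointers are derived on demand.

```c
#include <stdlib.h>
#include <string.h>

/* Growable I/O buffer with offset-based read/write heads.
 * read_pos/write_pos survive a realloc because they are offsets,
 * not pointers into the old allocation. */
typedef struct {
    char  *buf;
    size_t capacity;
    size_t read_pos;   /* next byte the lexer will consume */
    size_t write_pos;  /* next byte I/O will fill */
} lexbuf;

/* Pointer to the current read head, computed only when needed. */
char *lexbuf_read_ptr(lexbuf *b) {
    return b->buf + b->read_pos;
}

/* Grow the buffer; offsets remain valid even though buf may move.
 * Returns 0 on success, -1 on allocation failure. */
int lexbuf_grow(lexbuf *b, size_t min_capacity) {
    if (b->capacity >= min_capacity) return 0;
    size_t new_cap = b->capacity ? b->capacity : 64;
    while (new_cap < min_capacity) new_cap *= 2;
    char *p = realloc(b->buf, new_cap);
    if (!p) return -1;
    b->buf = p;
    b->capacity = new_cap;
    return 0;
}
```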
Yes, but GLib arrays are also lossy. They require you to externally manage capacity or to know with perfect foresight exactly what your capacity will be for the lifetime of the memory allocation.
I don't think those are impossible scenarios, but the cost of one additional pointer buys you a lot more functionality. Saving one pointer of size by dropping capacity makes GLib arrays nearly useless, which I find confusing.
You could simply pass a pointer and a size around instead. Which is what most people actually do when they don't need resizable data layouts.
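For the fixed-size case, that pointer-and-size pair is often wrapped in a small "slice" struct so it can be passed as one value. A minimal sketch (the `int_slice` name is illustrative):

```c
#include <stddef.h>

/* A plain pointer-and-size pair: no capacity, no resizing,
 * just a view over existing memory. */
typedef struct {
    const int *data;
    size_t     len;
} int_slice;

/* Example consumer: functions take the slice by value. */
long slice_sum(int_slice s) {
    long total = 0;
    for (size_t i = 0; i < s.len; i++)
        total += s.data[i];
    return total;
}
```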
If you're working with C, I think you have to just accept that void pointers happen. Working around losing compile-time type data requires you to create runtime structures, which I don't find acceptable.