
C is fundamentally confused, because it offers (near) machine-level specifications but then leaves just enough wiggle room for compilers to "optimize" (through alignment and such), ruining the precision of the specification. You end up not getting exactly what you want at the machine level. It's infuriating.

The bitfield stuff in C would be fantastic if it weren't fundamentally broken. E.g., some Microsoft compilers in the past interpreted bit fields as signed...always. In V8 we had a workaround with templates to avoid bitfields altogether. Fail.
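To make the signedness trap concrete: whether a plain int bit field is signed or unsigned is implementation-defined in C, so the same code can print -1 under one compiler and 7 under another. A minimal sketch:

    /* Whether a plain "int" bit field is signed or unsigned is
       implementation-defined, so the assignment below may wrap. */
    #include <stdio.h>

    struct flags {
        int          maybe_signed : 3;  /* implementation-defined signedness */
        unsigned int portable     : 3;  /* always unsigned */
    };

    int main(void) {
        struct flags f;
        f.maybe_signed = 7;  /* -1 if the field is signed, 7 if unsigned */
        f.portable     = 7;  /* always 7 */
        printf("%d %d\n", f.maybe_signed, f.portable);  /* both promote to int */
        return 0;
    }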




I think the problem with this is the C compiler has to find a solution that works with all the architectures it is expected to support. In order to achieve this, it must generalize in some areas and have flexibility in others. C programmers are required to be familiar with both the specifics of the architectures they are building for and the idiosyncrasies of their compiler. I always assumed most other compiled languages were like this, since I started with C and moved to x86 assembly from there. However, the more I read about people disliking C for this reason, the more I believe this may not be the case.

> The bitfield stuff in C would be fantastic if it weren't fundamentally broken

Bitfields in C can be manageable. Each compiler has its own set of rules for how it prefers to arrange and pack them. Despite them not always being intuitive, I use them regularly since they are so succinct. If you are concerned about faithful and predictable ordering, you generally just have a test program that uses known values to verify your layout as part of your build configuration or test battery.
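For example, a minimal sketch of that kind of check (the field layout here is invented for illustration). It assumes little-endian, LSB-first allocation, which is typical of GCC/Clang on x86, and fails if the compiler packs differently:

    /* Layout check: write known values through the bit fields, then
       verify the raw word matches the packing this build expects.
       The layout is invented for illustration. */
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    struct ctrl {
        uint32_t enable : 1;   /* expected at bit 0 */
        uint32_t mode   : 3;   /* expected at bits 1-3 */
        uint32_t divide : 8;   /* expected at bits 4-11 */
        uint32_t        : 20;  /* pad out the storage unit */
    };

    void check_layout(void) {
        struct ctrl c;
        uint32_t raw;
        memset(&c, 0, sizeof c);
        c.enable = 1;
        c.mode   = 0x5;
        c.divide = 0xAB;
        memcpy(&raw, &c, sizeof raw);
        assert(sizeof(struct ctrl) == 4);
        /* Assumes little-endian, LSB-first bit field allocation. */
        assert(raw == (1u | (0x5u << 1) | (0xABu << 4)));
    }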

> ... some Microsoft compilers ...

I've used many C compilers, but I have always avoided Microsoft ones, going so far as to carry a floppy disk with my own when working in the lab at school.


> Bitfields in C can be manageable.

For the use case of specifying a more efficient representation of a fiction confined to the program: no harm, no foul. But for the use case of specifying a hardware-, network-, or ABI-specified data layout, you need those bits in exactly the right spot, and the compiler should have no freedom whatsoever. (I'm thinking of the case of network protocol packets and hardware memory-mapped registers.)
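This is why for wire formats I'd rather skip bit fields entirely and do explicit shifts and masks on a byte buffer, where the standard actually pins the behavior down. A sketch for the first byte of an IPv4 header (version in the high nibble, IHL in the low):

    /* Explicit shifts/masks: unlike bit fields, this reads the same
       bits on every conforming compiler. */
    #include <stdint.h>

    static inline uint8_t ipv4_version(const uint8_t *pkt) {
        return pkt[0] >> 4;     /* high nibble: version (4 for IPv4) */
    }

    static inline uint8_t ipv4_ihl(const uint8_t *pkt) {
        return pkt[0] & 0x0Fu;  /* low nibble: header length in 32-bit words */
    }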


This shouldn't be an issue unless you are attempting to have a bit field span your type's boundary. Bit fields are by definition constituents of a type. That doesn't restrict where you can put the bits, only how you accomplish it. In this case, you'd either have to split the value across the boundary into two fields (sketched below) or use a combination of structs/unions to create an aliased bit field using a misaligned type (architecture permitting, of course). You sacrifice either some convenience (split value) or some readability (union), but it is still reasonable.

The compiler itself is not taking a liberal approach to bit field management; it is only working within the restrictions of the type (I am speaking for GCC here; I can't vouch for others). But if you think of bit fields as an interface for storing packed binary data freely, without limitations, I can understand why they seem frustrating. They are much more intuitive when you consider them as being restricted to the type.
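For concreteness, a sketch of the split-value approach mentioned above (field names and layout invented): a 12-bit value that would straddle a 16-bit storage unit is stored as two fields and stitched together with accessors:

    /* A 12-bit value straddling the 16-bit unit boundary, stored as
       6 low bits + 6 high bits. Names are invented for illustration. */
    #include <stdint.h>

    struct split {
        uint16_t other    : 10;
        uint16_t value_lo : 6;   /* low 6 bits of the 12-bit value */
        uint16_t value_hi : 6;   /* high 6 bits, in the next unit */
        uint16_t rest     : 10;
    };

    static inline uint16_t get_value(const struct split *s) {
        return (uint16_t)(s->value_lo | (s->value_hi << 6));
    }

    static inline void set_value(struct split *s, uint16_t v) {
        s->value_lo = v & 0x3Fu;
        s->value_hi = (v >> 6) & 0x3Fu;
    }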


I’ve seen them used for hardware register access a lot, but there was usually a separate set of header files for when you are not using GCC/Clang. I didn’t look at those.
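Roughly what those GCC/Clang-targeted headers tend to look like (the register layout and address here are invented for illustration):

    /* Memory-mapped register idiom: a volatile bit field struct
       overlaid on a fixed address. Layout and address are invented;
       real vendor headers match the silicon and are verified per
       compiler. */
    #include <stdint.h>

    typedef struct {
        volatile uint32_t enable : 1;
        volatile uint32_t irq_en : 1;
        volatile uint32_t mode   : 2;
        volatile uint32_t        : 28;
    } uart_ctrl_t;

    #define UART0_CTRL (*(volatile uart_ctrl_t *)0x40001000u)

    void uart_init(void) {
        UART0_CTRL.mode   = 2;  /* read-modify-write on the register */
        UART0_CTRL.enable = 1;
    }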



