> Wrapping around is what happens when you decrement the minimum value of any integer type, including signed types.
No, signed wraparound is undefined behavior in C, whereas unsigned types are defined to wrap around. With -ftrapv, signed overflow becomes an immediate abort().
While C, here as in many other places, declines to define the correct behavior (so as not to shame the processor or compiler makers that fail to provide it), there are only two correct behaviors on overflow and underflow, i.e. when incrementing the biggest representable number or decrementing the smallest.
Both for signed integers and for non-negative integers, the two correct behaviors on overflow and underflow are either to raise an exception or to saturate the result at the biggest or smallest representable number.
Wraparound is the correct behavior for integer residues, which are a distinct data type from either signed integers or non-negative integers.
While some people criticize C for making it easy for careless programmers to introduce certain kinds of bugs, like out-of-bounds accesses, those are easily mitigated with appropriate compiler options.
For me a much more serious defect of C is the confusion it promotes in the heads of most programmers, who do not understand what the fundamental integer types are or what the correct conversions between them would be, because C uses "unsigned" in place of at least 3 distinct types that it does not have: bit strings, integer residues, and non-negative integers. More rarely, "unsigned" stands in for 2 other missing types: binary polynomials and binary polynomial residues. All 5 of these must be primitive types in a programming language, because all modern processors implement distinct hardware operations for all 5, which can be accessed only through assembly language when the types are missing.
When I was building my computing stack out of x86 machine code I noticed that even if my high level language only had signed numbers (I'm still pretty brainwashed by C, which leads to the conclusions of OP), I still needed the ISA's unsigned jumps to deal with addresses (which can have the MSB set). So my big "insight" was to name unsigned comparisons "address comparisons".
This feels like you’re trying to “gotcha” me, but I didn’t say C specifies two’s complement wraparound, I said that’s what happens in virtually every language (including the major C/C++ implementations). It seems like you are “violently agreeing” with me.
Which makes them even less safe than unsigned, where it is defined, yes? The optimizations that signed-overflow UB can lead to are incredibly hard to predict.
Besides, for safety there are much clearer options, like wrapping_add / saturating_add. Aborting is great as a safety tool though, agreed - it'd be nice if more code used it.
You can keep the trap enabled in production, and then it is safer. If you need to catch the problem at run-time, there are checked integer options in C that you can use.
MEMS oscillators do use a vacuum, and that's why they're susceptible to helium. A helium atom is tiny compared to an oxygen or nitrogen atom, and can leak through many otherwise-perfect seals.
So, a (badly-designed) MEMS oscillator is basically a region of vacuum enclosed by a membrane that only helium can permeate. Of course exposing it to helium is going to permanently change its behavior! Once the helium gets in, there's no reason for it to leave, due to the vastly higher atmospheric pressure outside.
Very interesting. Good info about both the gyroscopes and oscillators. I didn't know that much progress had been made.
I remember when MEMS started getting capable of micro fluid handling, but wasn't aware of what they were doing with "lack of fluid" :)
Really bringing the little vacuum chambers out of the big vacuum chamber and onto the street.
Over the decades I have often thought about doing something like that, but not really micro.
Now I'm suspicious about something I hadn't considered before at a previous employer's chem lab. Last month, when they called me back in, inconsistent oscillation was being applied (sometimes not at all) to a key analog sensor on one instrument, which is about the same vintage as the iPhone 6 where the problem with early MEMS oscillators showed up. These instruments have been there for over a decade, and it may be some other electronic problem, but it does coincide with a couple of additional gas analyzers they brought in a few months ago, which are now wasting about 5x more helium than I was using when I controlled it, exhausting into one lab without a completely isolated ventilation system.
Now I know something to try next time and that's not even why they called me in this time.
This could actually be one of those problems that shows up intermittently depending on which way the "wind" blows ;)
Sometimes the best way to fix things is to wait until you're smarter :0
You don't have to 'build' anything. Just spin up a GitLab docker container. Bonus: If you put it behind a VPN, you never have to worry about updating it.
Edit: to be clear for anyone reading, don’t do this, it’s a really bad idea. You need to patch your stuff, even behind a VPN. If you’re unsure, just google: “examples of security breaches of unpatched software behind VPNs”
Yeah I literally work on that side of things, Platform/etc. Managing GHE, CI across so many CI platforms over the past almost two decades, etc. It's a disaster thinking you can "just spin up X". Sure, if you're a two person operation, but scale becomes a factor real fast. Too many people on HN are giving away they have little to no experience. And I'm the one that made the initial comment in this thread, lol. It's frustrating, but I don't want to go back to "managing build agents" and the control plane (aka whatever your central Git Host is) for everything and all jobs, etc. That's a pain in the ass and 90% of environments have no fucking clue how to maintain that shit.
If someone wants to pay me to lead such an effort, I'd do it, and yes, it'll all be done and managed in code, and it will be done better than 90% of folks could do (I've seen the production build systems most environments have built, some big names too, and they're trash...in no small part because it's simply hard to do), but it'll take a while to smooth the experience out for all of your requirements. You're gonna be paying, and very well, for a while.
> Exactly, the application logic is the target. Actually doing seccomp bpf base but for managed bindings (Java, Node, Go, ...) add a lot of complexity....
Maybe proofread the slop before posting it next time?
Just bad English on my part. But yes, the application logic is where the vulnerability can occur. I am adding support for seccomp-BPF, but it is complicated for managed runtimes like Go, the JVM, Node, and Python.
I am a lead consultant and, depending on the size of the project, I do have a team of less senior consultants who are dotted-line reports. If I ever found one of my reports missing deadlines because they refused to do something as simple as launch Claude (with our $5,000-a-month allowance apiece) and ask it to debug an issue, yes, I would write them up.
There is a direct, easy-to-measure line from anyone below me to the revenue they make the company. My revenue per hour isn't as exact, since I support pre-sales and follow-on work.
Sure, but that operator can propagate a carry all the way to the most significant bit, so a check for bitwise equality after "strip[ping] few least significant bits" will yield false in some cases. The pathologically worst case for single precision, for example, is illustrated by the value 2.0 (bitwise 0x40000000) and its predecessor, which differ in all bits except the sign.