There's a reason Zig made unsigned overflow undefined: it allows for more optimisations (Zig can also afford it thanks to its wrapping addition operator, +%).
No, I really want to point into the stack and walk stack frames, look at the guts of objects, arrays, map memory, forge pointers to new code (because I wrote a JIT compiler), etc. That is brazen UB in C/C++ and compilers will do horrible things to you.
If you really want guaranteed frame/stack layout I don't think there is any other way than writing your own compiler/IR. No optimizing compiler written in the last 50 years will give you such a guarantee.
If you "just" want introspection, that's a little more reasonable; in principle something like DWARF unwind info could be extended for that. But a) that's a lot of work for an extremely niche problem, and b) there is no guarantee that the in-memory representation of objects is stable at instruction boundaries; I think you would need something like GC write barriers.
With C there is the ABI, which is platform-specific but can't change without all hell breaking loose. Also, after twenty terrible years for profilers, frame pointers are returning.
So the ABI allows you to do this sort of stuff reliably as long as the compiler isn't doing inane things with UB.
The ABI doesn't mandate where locals are located in a stack frame though, so I'm not sure how you would inspect those.
If you simply meant that you want to rely on an ABI, then that's fine: although relying on those details might be UB for standard-conforming code, it is obviously well defined for compilers that conform to the ABI. Just because it is undefined from the standard's point of view doesn't mean it is undefined for a specific compiler + platform.
You will still need to use compilation firewalls and barriers to avoid aliasing or pointer provenance issues of course.
It was announced after the end of Leap was proclaimed, again feeding suspicion that Leap 16 is meant to save some goodwill rather than part of a solid plan of supporting users.
It all looks opportunistic, the total opposite of what users of such distros expect. You can build out your use of it at your peril.
I got a lot of unhappy feedback from SUSE for it, but they did not deliver the solid tech info I asked for, even when given two days to do so.
TL;DR summary: they're focussing on immutable distros now, but unlike rival efforts such as Endless (Debian + OStree) or immutable Fedora (OStree all the way down) or Ubuntu Core (Snap all the way down), SUSE implemented transactional packaging using Btrfs snapshots and plain old RPM.
So, underneath, it's structured pretty much the same as conventional SUSE. That means you can turn the immutability function off, if desired, and be left with something quite conventional.
Depends on the context but in general Zig wants you to be explicit. Writing 0.0 is fine at comptime or if you're e.g. adding it to an already known float type, but defining a runtime variable (var x = 0.0; ...) will not work because x doesn't have an explicit type (f16, f32, f64, f80, or f128?). In this case, you would need to write "var x: f32 = 0". You could write "var foo = @as(f32, 0)" but that's just a weird way of doing things and probably not what OP meant.
> Growing and scaling biocomputers is straightforward as it is just the result of natural expansion. This process is significantly simpler than scaling silicon based CPUs and GPUs.
I am by no means proficient in the field of neuroscience, but aren't signals in the nervous system sent by pumping Na+ and K+ ions in and out of the dendrites? From what I was taught in my high-school biology classes, responses to stimuli take on the order of milliseconds to fade out.
It may use several orders of magnitude less power, but I don't see how they can get compute that comes close to even decades old CPUs. Are signals in the brain sent through an entirely different process, or are they using a completely different feature of brain tissue that solves some problem we have in silicon-based processors?
Don't go for the more niche distros. The majority of them are based on another (better-supported, more stable) distro with slight configuration changes that don't matter most of the time.
Linux Mint is probably the only exception I'd make to this rule, because they have been around for long enough and have proven themselves to be stable.
I am not familiar with the Android development ecosystem, but if you need relatively recent packages (<1 year old), the most suitable distros are rolling-release ones like openSUSE Tumbleweed and Arch (although the latter requires quite a bit of setup).
Stay away from Manjaro and Pop as they have a history of breaking packages, and in the case of Pop not contributing upstream and causing drama.
> Stay away from Manjaro and Pop as they have a history of breaking packages
Further to that, if you intend to use valgrind as part of your development workflow, note that valgrind sometimes stops working on Manjaro and languishes that way for months. For complicated but ultimately boring reasons that you can research for yourself, the issue isn't resolved by reverting to a previously working valgrind package. I was on the verge of switching from Manjaro to Arch for that reason, but lately it started working again so I'm giving it a reprieve. If I were starting fresh I'd use Arch.
It's just like Wirth's law: software will become slower faster than hardware becomes faster.