
And by the time technology has adapted, there will be an order of magnitude more people using said technology.

It's just like Wirth's law: software becomes slower more rapidly than hardware becomes faster.


Well, depending on how much Python is layered around the C library, you can get a lot of overhead.

There's also the fact that compiling everything together lets the optimizer get a lot more work done.
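A toy illustration (my own sketch, not from the parent comment; the names are made up): when the optimizer can see a function's body, either in the same translation unit or across units with LTO, it can inline and fold calls away, which is impossible through an opaque Python-to-C boundary.

    /* With the body visible, the call constant-folds; behind a
       dynamic-library or FFI boundary it stays a real call. */
    static inline int square(int x) { return x * x; }

    int nine(void) {
        return square(3); /* the optimizer turns this into "return 9" */
    }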


    if (x > INT_MAX - n) /* overflow for x + n */

Your definition of a far more complicated dance is absurd.
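For completeness, a sketch of the full pre-check (my illustration, not from the thread; the one-liner above assumes n >= 0, and a negative n needs the mirror test against INT_MIN):

    #include <limits.h>
    #include <stdbool.h>

    /* True if x + n would overflow an int, without ever computing it. */
    bool add_would_overflow(int x, int n)
    {
        if (n > 0)
            return x > INT_MAX - n; /* would exceed INT_MAX */
        return x < INT_MIN - n;     /* would fall below INT_MIN */
    }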

Signed integer overflow being undefined actually makes a lot of integer math faster: https://kristerw.blogspot.com/2016/02/how-undefined-signed-o...
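A classic instance (my sketch of the kind of optimisation that post describes, not an example taken from it): because signed overflow is UB, the compiler may assume i + 1 never wraps, conclude the loop is finite, and unroll or vectorize it.

    /* With signed i, i++ is assumed not to wrap, so the loop provably
       terminates. With unsigned i, i <= n is never false when
       n == UINT_MAX, so the compiler must allow for an infinite loop. */
    long sum_to(int n)
    {
        long s = 0;
        for (int i = 0; i <= n; i++)
            s += i;
        return s;
    }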

There's a reason Zig made unsigned overflow undefined: it allows for more optimisations (Zig can also afford it thanks to its wrapping addition operator, +%).


Yes, thanks.

With C23 there are now also checked-arithmetic macros (ckd_add, ckd_sub and ckd_mul in <stdckdint.h>) for overflow-checked integer operations.
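For example (a minimal sketch, assuming a C23 toolchain that ships <stdckdint.h>):

    #include <limits.h>
    #include <stdckdint.h> /* C23 checked-arithmetic macros */
    #include <stdio.h>

    int main(void)
    {
        int sum;
        if (ckd_add(&sum, INT_MAX, 1)) /* true means it overflowed */
            puts("overflow detected");
        else
            printf("%d\n", sum);
    }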


In my experience, the majority of the behaviour you actually want to use is implementation-defined, as opposed to UB.

No, I really want to point into the stack and walk stack frames, look at the guts of objects, arrays, map memory, forge pointers to new code (because I wrote a JIT compiler), etc. That is brazen UB in C/C++ and compilers will do horrible things to you.

See for example: https://github.com/titzer/virgil/blob/master/doc/tutorial/Ra...


If you really want a guaranteed frame/stack layout, I don't think there is any way other than writing your own compiler/IR. No optimizing compiler written in the last 50 years will give you such a guarantee.

If you "just" want introspection, that's a little bit more reasonable; in principle something like dwarf unwind info could be extended for that. But, a) that a lot of work for an extremely niche problem and b) there is no guarantee that the in-memory representation of objects is stable at instruction boundaries, I think you would need something like GC write barriers.


With C there is the ABI, which is platform-specific but can't change without all hell breaking loose. Also, after twenty terrible years for profilers, frame pointers are returning.

So the ABI allows you to do this sort of stuff reliably as long as the compiler isn't doing inane things with UB.
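As a sketch of what that looks like (my illustration; it assumes the x86-64 System V ABI, frame pointers preserved with -fno-omit-frame-pointer, and the GCC/Clang __builtin_frame_address builtin, so it's well outside the standard):

    #include <stdio.h>

    /* Each frame begins with the saved caller %rbp; the return address
       sits just above it. A robust walker would also bounds-check
       against the stack; this one just stops at a null frame pointer. */
    void walk_frames(void)
    {
        void **fp = __builtin_frame_address(0);
        while (fp && fp[0]) {
            printf("frame %p, return address %p\n", (void *)fp, fp[1]);
            fp = (void **)fp[0]; /* follow the saved frame pointer */
        }
    }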


The ABI doesn't mandate where locals are located in a stack frame though, so I'm not sure how you would inspect those.

If you simply meant that you want to rely on an ABI, then that's fine: although relying on those details might be UB for standard-conforming code, it is obviously well defined for compilers that conform to the ABI. Just because something is undefined from the standard's point of view doesn't mean it is undefined for a specific compiler + platform.

You will still need to use compilation firewalls and barriers to avoid aliasing or pointer provenance issues of course.


How so? 15.6 was just released.

It was released after community pushback on 15.5 being the last.

And now they are calling something Leap 16, but it hasn't much to do with Leap: https://www.theregister.com/2024/01/17/opensuse_confirms_lea...


It seems that SUSE wants to shift its enterprise offering to ALP (the Adaptable Linux Platform) instead of the legacy SLE platform to better support modern workloads. https://www.suse.com/c/suse-salp-raises-the-bar-on-confident...

It makes sense to then also shift the open-source stream in that direction, which is happening with Leap 16.


It was announced after the end of Leap was proclaimed, again feeding the suspicion that Leap 16 is meant to salvage some goodwill rather than being part of a solid plan for supporting users.

It all looks opportunistic, the total opposite of what users of such distros expect. You can build out your use of it at your peril.


That's my article, FWIW.

I got a lot of unhappy feedback from SUSE for it, but they did not deliver the solid tech info I asked for, even when given two days to do so.

TL;DR summary: they're focussing on immutable distros now, but unlike rival efforts such as Endless (Debian + OStree) or immutable Fedora (OStree all the way down) or Ubuntu Core (Snap all the way down), SUSE implemented transactional packaging using Btrfs snapshots and plain old RPM.

So, underneath, it's structured pretty much the same as conventional SUSE. That means you can turn the immutability function off, if desired, and be left with something quite conventional.


The only standard explicit memset, memset_explicit, arrived in C23.
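A minimal sketch, assuming a C23 toolchain:

    #include <string.h> /* C23: memset_explicit */

    void wipe_secret(char *buf, size_t len)
    {
        /* Unlike a plain memset on memory that is never read again,
           this call may not be elided by the optimizer. */
        memset_explicit(buf, 0, len);
    }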

If you want bounds-checked C, use Dlang's -betterC. It's no use trying to convert C into something it isn't.

I'm not very in the loop regarding Zig at the moment, but coming from C I would think you could just use 0.0 or 0.0f? Is that not the case?


Depends on the context, but in general Zig wants you to be explicit. Writing 0.0 is fine at comptime, or if you're e.g. adding it to a value of an already-known float type, but defining a runtime variable (var x = 0.0; ...) will not work because x doesn't have an explicit type (f16, f32, f64, f80, or f128?). In that case you would need to write "var x: f32 = 0". You could write "var foo = @as(f32, 0)", but that's just a weird way of doing things and probably not what OP meant.


From FinalSpark's website:

> Growing and scaling biocomputers is straightforward as it is just the result of natural expansion. This process is significantly simpler than scaling silicon based CPUs and GPUs.

I am by no means proficient in the field of neuroscience, but aren't signals in the nervous system sent by pumping Na+ and K+ ions in and out of the dendrites? From what I was taught in my high school biology classes, responses to stimuli take on the order of milliseconds to fade out. It may use several orders of magnitude less power, but I don't see how they can get compute that comes close to even decades-old CPUs. Are signals in the brain sent through an entirely different process, or are they using a completely different feature of brain tissue that solves some problem we have in silicon-based processors?


Don't go for the more niche distros. The majority of them are based on another (better-supported, more stable) distro with slight changes in configuration that don't matter most of the time.

Linux Mint is probably the only exception I'd make to this rule, because they have been around for long enough and have proven themselves to be stable.

I am not familiar with the Android development ecosystem, but if you need relatively recent packages (<1 year old), the most suitable distros would be rolling-release ones like openSUSE Tumbleweed and Arch (although the latter requires quite a bit of setup).

Stay away from Manjaro and Pop, as they have a history of breaking packages and, in Pop's case, of not contributing upstream and causing drama.


> Stay away from Manjaro and Pop as they have a history of breaking packages

Further to that, if you intend to use valgrind as part of your development workflow, note that valgrind sometimes stops working on Manjaro and languishes that way for months. For complicated but ultimately boring reasons that you can research for yourself, the issue isn't resolved by reverting to a previously working valgrind package. I was on the verge of switching from Manjaro to Arch for that reason but lately it started working again so I'm giving it a reprieve. If I were starting fresh I'd use Arch.

