That's a fair workaround for my specific example. But I believe it's possible to contrive a different example where such a solution would not be possible. Put differently, I only tried to convey the overall idea of what I think is a shortcoming in Rust at the moment.
Edit: Also, I believe your code would fail my second section, as the `convert` function would have difficulty accepting a `[u8]` slice. Converting `[u8]` to `[MaybeUninit<u8>]` is not safe per se.
Yeah, you’d need to do something like accept an enum that is either &mut [u8] or &mut [MaybeUninit<u8>], and make a couple of impl From<>’s so callers can .into() whatever they want to pass…
But I don’t think this is really a shortcoming, so much as a simple consequence of strong typing. If you want to take “whatever” as a parameter, you have to spell out the types that satisfy it, whether it’s via a trait, or an enum with specific variants, etc. You don’t get to just cast things to void and hope for the best, and still call the result safe.
Wow, it’s funny because the last time I ran gkrellm was 23 years ago when I first started using Linux and I thought I was a l33t h4x0r…
And just today, now that I actually write code for a living and use Linux on my work machine, I found myself really wanting a good display to tell me when my memory usage was growing.[0] I was using the gnome activity monitor but it takes up way too much screen space and was always behind the window I was using. It looks like this could actually be useful for me to run now!
[0] I was running a local kubernetes cluster with an opentracing implementation, where I hadn’t quite worked out the configs for memory usage yet, and it kept spiking and OOMing when I wasn’t looking. It’s fun when your mouse cursor just stops moving and you’re wondering whether you need to hold down the power button or what…
Mapping a Vec<T> to Result<U, E> and collecting them into a single Result<Vec<U>, E> made me feel like a ninja when I first learned it was supported. I’m a little worried it’s too confusing to read for others, but it works so well.
Combine it with futures::try_join_all for async closures and you can use it to run a bunch of fallible tasks in parallel too; it’s great.
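For anyone who hasn’t seen it, a minimal sketch of the collect trick (nothing here beyond std):

```rust
fn main() {
    let good = vec!["1", "2", "3"];
    // An iterator of Result<T, E> can be collected into Result<Vec<T>, E>:
    // the first Err short-circuits and becomes the overall result.
    let ok: Result<Vec<i32>, _> = good.iter().map(|s| s.parse::<i32>()).collect();
    assert_eq!(ok, Ok(vec![1, 2, 3]));

    let bad = vec!["1", "oops", "3"];
    let err: Result<Vec<i32>, _> = bad.iter().map(|s| s.parse::<i32>()).collect();
    assert!(err.is_err()); // "oops" fails to parse, so the whole thing is Err
}
```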
My 2¢, it’s perfectly reasonable to bring up other languages in defense of criticism, because it explains why these decisions were made in the first place. GP literally said that rust isn’t a good fit for you if you’re in a position to use a GC. The comparison to C++ is important because it’s one of very, very few contemporary languages that also doesn’t require a GC/refcounting everywhere. So it’s useful to compare to how C++ does it.
Yet another issue with people who criticize rust: they don’t want anyone to defend rust, and they complain loudly that anyone defending rust is a literal problem with the language. You can do better than that.
It’s important that a for loop takes ownership of the vec, because it’s the only way you can call things inside the loop body that require the element itself to be moved.
If you don’t want the loop to take ownership of the vec, there’s literally a one character change: put a & before the thing you’re iterating (ie. for x in &y). That borrows the vec instead of moving it.
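A minimal sketch of the difference:

```rust
fn main() {
    let v = vec![String::from("a"), String::from("b")];

    // Borrowing: the loop sees &String, and v is still usable afterwards.
    for s in &v {
        println!("{s}");
    }

    // Owning: each String is moved out of the vec, so it can be passed
    // to anything that consumes it. v itself is gone after this loop.
    for s in v {
        drop(s); // stands in for any call that takes the String by value
    }
}
```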
You seem to want rust to decide for itself whether to borrow or own the contents, but that way lies madness… it will be really hard to reason about what more complicated code is doing, and changes would have very non-local effects on how the compiler decides to use your code.
For me, move-semantics-by-default is the key idea that rust got right, and it’s a very simple concept. It’s not intuitive, but it’s the key idea behind all of rust’s benefits for memory management and preventing concurrency bugs. “Learn one simple but non-intuitive thing and you get these huge benefits” is a tradeoff I’m very much willing to make, personally.
> You seem to want rust to decide for itself whether to borrow or own the contents, but that way lies madness…
Most of what Rust does already feels like madness, like the concept of implicit moves, etc., but I understand your point. I don't think the reasoning really makes sense in terms of actual logic, but as I wrote in another comment: It's possible that I've misunderstood the sales pitch of Rust trying to be GC-less GCd language.
> For me, move-semantics-by-default is the key idea that rust got right, and it’s a very simple concept. It’s not intuitive, but it’s the key idea behind all of rust’s benefits for memory management and preventing concurrency bugs. “Learn one simple but non-intuitive thing and you get these huge benefits” is a tradeoff I’m very much willing to make, personally.
I can respect that, and seen this way (where we accept that we’re simply going to have unintuitive and incorrect rejections of programs), it does make a lot more sense.
C++ “move” semantics are quite complicated. That said, those C++ semantics are much better at handling some edge cases in systems software that Rust largely pretends don’t exist. It is a tradeoff. C++ is much uglier but also much better at handling cases where ownership and lifetimes are intrinsically ambiguous in a moved-from context because hardware has implicit ownership exogenous to the code.
The equivalent of the C++ move in Rust is the take/replace family of functions (like mem::replace and Option::take).
And it is fully memory safe.
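A minimal sketch of that pattern, using only std:

```rust
use std::mem;

fn main() {
    let mut s = String::from("hello");
    // The C++-move analogue: pull the value out, leave a valid one behind.
    let taken = mem::take(&mut s); // leaves String::new() in s
    assert_eq!(taken, "hello");
    assert_eq!(s, ""); // the "moved-from" value is still usable, by construction

    let mut slot = Some(42);
    let got = slot.take(); // Option::take leaves None behind
    assert_eq!((got, slot), (Some(42), None));

    // mem::replace is the general form: swap in any replacement you like.
    let mut v = vec![1, 2, 3];
    let old = mem::replace(&mut v, Vec::new());
    assert_eq!((old.len(), v.len()), (3, 0));
}
```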
You can build all the ownership you want by using raw pointers in Rust. And there is nothing wrong with a specific problem requiring unsafe because the problem cannot be taught to the borrow checker. But there is a point in your stack of abstractions where you can expose a safe and ergonomic API.
If you have a concrete example I would love to get a crack at it.
> Remind me again how move, copy and clone works in C++ /s
Sarcasm, but it’s worth outlining… C++ “move semantics” are (1) precisely the opposite of rust, and (2) not move semantics at all.
- Rust doesn’t let you override what happens during a move, it’s just a memcpy
- C++ has an rvalue reference (&&) constructor, which lets you override how a thing is moved
- Rust doesn’t let you use the moved-from value
- C++ absolutely has no problem letting you use a value after wrapping it in std::move (which is really just a cast to an rvalue reference)
- Rust uses moves to allow simple memcpy’ing of values that track resources (heap space, etc) by simply making sure nobody can access the source, and not calling Drop on it.
- C++ requires you to write logic in your move constructor that “pillages” the moved-from value (for instance in std::string it has to set the source string’s pointer to nullptr and its length to 0.) This has the consequence of making the moved-from value still “valid”
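To make the Rust side concrete, a minimal sketch:

```rust
fn main() {
    let s = String::from("hello");
    let t = s; // the move is a plain memcpy of String's (ptr, len, cap) triple
    // println!("{s}"); // compile error: borrow of moved value `s`
    println!("{t}"); // only t owns the heap buffer now; s is never Drop'd
}
```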
For copies:
- Rust’s Copy is just “memcpy, but you can still use the original value”. Basically any type that doesn’t track some resource that gets freed on Drop. Rust simply doesn’t let you implement Copy for things that track other resources, like heap pointers.
- C++’s copy happens implicitly when you pass something by value, and you get to write arbitrary code to make it work (like copying a string will malloc a new place on the heap and copy the buffer over)
- Rust has an entirely different concept, Clone, which lets you write arbitrary code to duplicate managed resources (analogous to how you’d use a C++ copy constructor)
- C++ has nothing to help you distinguish “deep copy that makes new copies of resources” from “dumb copy that is just a memcpy”… if your type has an expensive concept of deep copying, callers will (perhaps inadvertently) use it every time they pass your type by value.
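And the copy side, as a minimal sketch:

```rust
// Copy: plain memcpy, and both values stay usable. Only allowed for types
// that don't manage a resource freed on Drop.
#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // memcpy; p is still usable afterwards
    println!("{} {}", p.x, q.y);

    // Clone: an explicit, possibly expensive deep copy (the analogue of a
    // C++ copy constructor), invoked only when you write .clone().
    let a = String::from("hi");
    let b = a.clone(); // allocates a new heap buffer and copies the bytes
    println!("{a} {b}");
}
```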
IMO C++’s “move” still letting you touch the moved-from value is what made me realize how much C++ had lost the plot when C++11 came out. Rust’s semantics here are basically what happens when you look at what C++ was trying to do, and learn from its mistakes.
Completely unrelated tangent: Jesus Christ Reddit is such a cesspit.
Tried tapping that link on mobile, got a screen to view the corresponding post. Tapped it, and I got taken to the App Store. No thanks, force quit the App Store and go back.
Now I get a full screen notice on the original Reddit tab saying “didn’t go where you expected? Next time try the long press!” With instructions to not use private browsing and to long press any link and open in Safari. (Wha? You, Reddit, are the one trying to force me to use your app!)
So I long press like they say, open in new tab, and what do I see? A large blank page that just says “REDDIT” in all caps, with the button “Get the app” on the bottom. The link was just to “reddit.app.link” the whole time.
Can’t a company who has a website just … let me use the website? At every possible turn, Reddit HATES anyone using Reddit from a browser. They will ruin every single aspect of the website they possibly can to try to push you to the app. The entirety of reddit.com seems to be just a broken honeypot to get you to use the app instead. I just can’t fathom how a company can be that broken.
Just delete the Reddit website, it would make more sense.
> The entirety of reddit.com seems to be just a broken honeypot to get you to use the app instead. I just can’t fathom how a company can be that broken.
It's their intention to have the website be a funnel so that they can get more mobile users.
I sometimes use https://old.reddit.com, though it doesn't look that great on mobile, maybe there are some other alternatives.
I know Reddit will connect accounts together based on device ID. I wonder if their data becomes more valuable if you can tie multiple independent accounts together into one profile?
It’s a site where users will often have multiple logins for different subjects of discussion.
> Tried tapping that link on mobile, got a screen to view the corresponding post. Tapped it, and I got taken to the App Store.
It's obnoxious, but if you really want to view the post you can switch the screenshot page to desktop mode, and the "View post" button shouldn't redirect to the App Store. The result isn't pretty but it's readable in a pinch.
(They're still not desperate enough to track the UA and detect the switch.)
The article you posted describes a patient using ChatGPT to get a second opinion from what their doctor told them, not the doctor themself using ChatGPT.
The article could just as easily be about “Delayed diagnosis of a transient ischemic attack caused by talking to some rando on Reddit” and it would be just as (non) newsworthy.
Malloc and free aren’t handled by the operating system, they’re handled in user space.
Underneath malloc is mmap(2) (or in older unices, brk/sbrk), which actually requests the memory. And with delayed/lazy allocation in the OS, you can just mmap a huge region up front, and it won’t actually do anything until you write to or read from the individual pages.
Point is, you only need one up front call to mmap to write to any page you want.
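For example, a sketch in Rust (assuming the `libc` crate and a Unix target; the same call exists in C):

```rust
fn main() {
    // Reserve 1 GiB of address space up front. With lazy allocation,
    // no physical pages are committed until they're first touched.
    let len = 1usize << 30;
    let ptr = unsafe {
        libc::mmap(
            std::ptr::null_mut(),
            len,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        )
    };
    assert_ne!(ptr, libc::MAP_FAILED);

    // Touching one byte faults in just that single page, nothing else.
    unsafe { *(ptr as *mut u8).add(4096 * 1000) = 42 };
}
```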
The SNES doesn’t have any concept of user space. Your program has full control of the hardware. You can do whatever you want. There is no operating system at all.
I was responding to your second paragraph, where you talk about modern programs having to request memory from the OS with malloc and free. This isn’t true, malloc and free are not operating system concepts, they are ways for your program to divide up memory address space that is already mapped to you.
To bring this back to the SNES, you could totally use malloc and free on the SNES, but it would be just vending pointers to the address space you can already use. But my point is that this is no different from a modern OS, because malloc and free are just managing the address space you already got from the OS using mmap. And plenty of malloc implementations avoid repeated calls to mmap by mapping a large amount of space up front.
My point is, “having full access to the hardware” is completely orthogonal to whether malloc and free are a good idea. You can use malloc/free on a flat address space, just like you can use them on a big fat mmap() region. The reason you’d generally avoid malloc/free on the SNES is that the amount of physical memory is so tiny that any dynamic memory management is a bad idea. Instead you want fixed regions representing in-game entities and logic, and the memory addresses you use should be managed manually in fixed-size buffers.
(If you’re still not convinced, consider that malloc and free work just fine in DOS, where there’s also no virtual memory and you have total access to the physical memory space in your program. DOS doesn’t have mmap, and malloc implementations on DOS just stick to managing the flat, physical address space. No MMU or virtual memory needed.)
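To illustrate, here’s a toy bump allocator over a fixed region (a hypothetical `Bump` type, just a sketch): this is all “malloc” has to be on a flat address space.

```rust
struct Bump {
    buf: [u8; 1024], // stand-in for the machine's free RAM region
    next: usize,
}

impl Bump {
    // Vend the next `size` bytes of the region, or None if we're out.
    fn alloc(&mut self, size: usize) -> Option<*mut u8> {
        if self.next + size > self.buf.len() {
            return None;
        }
        let p = unsafe { self.buf.as_mut_ptr().add(self.next) };
        self.next += size;
        Some(p)
    }
}

fn main() {
    let mut heap = Bump { buf: [0; 1024], next: 0 };
    let a = heap.alloc(16).unwrap();
    let b = heap.alloc(32).unwrap();
    // Allocation is just handing out offsets into memory we already had.
    assert_eq!(b as usize - a as usize, 16);
}
```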
> If you’re still not convinced, consider that malloc and free work just fine in DOS, where there’s also no virtual memory and you have total access to the physical memory space in your program.
The point is that "the system has only one program running at all times" is not an explanation for why there's no dynamic memory allocation, because modern operating systems use virtual memory to give the illusion of a flat address space that the program is in full control over. You can use the .data/.bss sections of an executable exactly as you would use memory in a SNES game.
And in fact, on many game consoles newer than the SNES (such as the PS1, N64, GC/Wii, DS/GBA, etc.) there's no operating system and the game is in full control of the hardware and games frequently and extensively use dynamic memory allocation. Whether you manage memory statically or dynamically & whether you have an operating system or not below your program are almost completely orthogonal.
Rather, the reason why SNES games don't use dynamic memory management is because it's impossible to do efficiently on the SNES's processor. Dynamic memory management requires working with pointers, and the 65816 is really bad at handling pointers for several reasons:
- Registers are 16 bits while (far) addresses are 24 bits, so pointers to anything besides "data in a specific ROM bank" are awkward and slow.
- There are only three general-purpose registers, so register pressure is extreme. You can store pointers in the direct page to alleviate this, but addressing modes relative to direct-page pointers are slow and extremely limited.
- There is no adder integrated into the address-generation unit. Instructions that access memory at an offset from a pointer have to spend an extra clock cycle or two going through the ALU.
- Stack operations are slow and limited, so parameter passing is a pain and local variables are non-existent.
All of these factors mean that idiomatic and efficient 65xx code uses static, global variables at fixed addresses for everything. When you need dynamism, you make an array and index into it instead of making general-purpose memory allocations.
But as you get into more modern 32- or 64-bit processors, this changes. You have more registers and better addressing modes, so the slowness and awkwardness of working with pointers is gone; and addresses are so long that instructions operating on static memory addresses are actually slower due to the increased code size. So, idiomatic code for modern processors is pointer-heavy and can benefit from dynamic memory allocation.
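To illustrate the “fixed arrays plus indices” style (a sketch in Rust rather than 65816 assembly, with made-up names):

```rust
const MAX_ENTITIES: usize = 16;

#[derive(Clone, Copy, Default)]
struct Entity { x: u8, y: u8, alive: bool }

fn main() {
    // All storage is fixed up front; "allocating" means claiming a free
    // slot, and entities refer to each other by index, never by pointer.
    let mut entities = [Entity::default(); MAX_ENTITIES];

    let slot = entities.iter().position(|e| !e.alive).expect("out of slots");
    entities[slot] = Entity { x: 10, y: 20, alive: true };

    // "Freeing" is just clearing the flag; the storage itself never moves.
    entities[slot].alive = false;
}
```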
I think bash has an alias “rehash” that does the same as hash -r too. But zsh doesn’t have it, so “hash -r” has entered my muscle memory, as it works in both shells.
Bah, you’re right! I got it backwards, it’s zsh that has rehash, bash does not. And hash -r works in both.
I guess I’ve been using zsh longer than I thought, because I learned about rehash first, then made the switch to hash -r later. I started using zsh 14 years ago, and bash 20+ years ago, so my brain assumed “I learned about rehash first” must have been back when I was using bash. zsh is still “that new thing” in my head.
The odd thing is, at some point I ended up with `hash -R` as muscle memory that I always type before correcting it to a lowercase r, and I’m not sure why; I can’t remember any shell that uses `-R`.
Something like:
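(Sketch, assuming an enum over the two slice types with From impls as described upthread; names like `Out` and `convert` are placeholders.)

```rust
use std::mem::MaybeUninit;

// Either an initialized or an uninitialized destination buffer.
enum Out<'a> {
    Init(&'a mut [u8]),
    Uninit(&'a mut [MaybeUninit<u8>]),
}

impl<'a> From<&'a mut [u8]> for Out<'a> {
    fn from(s: &'a mut [u8]) -> Self { Out::Init(s) }
}

impl<'a> From<&'a mut [MaybeUninit<u8>]> for Out<'a> {
    fn from(s: &'a mut [MaybeUninit<u8>]) -> Self { Out::Uninit(s) }
}

// Writes into whichever kind of buffer the caller handed us.
fn convert<'a>(dst: impl Into<Out<'a>>) {
    match dst.into() {
        Out::Init(s) => s.fill(0x42),
        Out::Uninit(s) => {
            for b in s.iter_mut() {
                b.write(0x42); // initializes each byte
            }
        }
    }
}

fn main() {
    let mut a = [0u8; 4];
    convert(&mut a[..]);
    assert_eq!(a, [0x42; 4]);

    let mut b = [MaybeUninit::<u8>::uninit(); 4];
    convert(&mut b[..]);
}
```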
(Honest question, actually… because the above may be impossible to write and I’m on my phone and can’t try it.)

Edit: it works: https://play.rust-lang.org/?version=stable&mode=debug&editio...