To be fair, it has been persistently offline for the past year or so every time I checked, so I'm not surprised that other people would jump to the same assumption.
I recall towel.blinkenlights.nl mentioned you would get a different version of the video (with colors) if you connected over IPv6. I've found rips online of the plain grayscale version, but not the colored one.
> There are services which allow you to upload via CLI and download via web browser, but they host your file so you have to wait for the full upload to finish before sharing the link.
There are exceptions to this; I've been making copyparty[1], an httpd which lets you start downloading a file that is still being uploaded[2]. If you catch up with the uploader, it'll throttle the speed so the browser doesn't drop the connection. Uploads and downloads can be done through the browser and/or the CLI.
I recall there was at least one other alternative with similar functionality somewhere on the awesome-selfhosted list, but I'm failing to find it right now... It was prominently mentioned in the project's readme, but maybe that's no longer the case.
If you stop sending data entirely, browsers tend to drop and reopen the connection after a few minutes, assuming something got stuck along the way. Throttling seems to prevent that nicely.
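The tail-and-throttle idea can be sketched roughly like this (a hypothetical illustration, not copyparty's actual code; `done` is an assumed hook that tells us the upload has finished):

```python
import time

def tail_stream(path, done, chunk=65536, throttle=0.1):
    """Yield chunks of a file that may still be growing.

    When the reader catches up with the writer, sleep briefly and retry
    instead of ending the stream: trickling data out keeps the client's
    connection alive, where total silence would make the browser give up.
    """
    pos = 0
    with open(path, "rb") as f:
        while True:
            f.seek(pos)
            data = f.read(chunk)
            if data:
                pos += len(data)
                yield data
            elif done():
                return  # upload finished and we've sent everything
            else:
                time.sleep(throttle)  # caught up; wait for more data
```

An actual server would also need to handle the uploader aborting mid-transfer, but the core loop is just "read, yield, or wait".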
Unfortunately this seems like it’s something I’d need to host myself, and that’s something I specifically don’t want to do. I only need to share on occasion, and always want to do it with the least hassle possible.
Yep, and the server bandwidth can become a bottleneck if the peers are fast enough, so true peer-to-peer is still the better choice, or something webtorrent-based if multiple people are grabbing the same file.
But there's been enough last-minute submissions of DJ material by now that I'm still happy it was added as an option :-)
There's also the fact that an NVRAM variable is never overwritten in place; the new value is written elsewhere, and the pointer is updated to point at the new value's address. This is probably mainly for wear-leveling, but I guess it could also introduce fragmentation?
Just an observation from when I was debugging a board that self-destructed when booting a particular EFI file, so I had to dig into the flash contents to figure out why; but I think this particular code was straight from TianoCore.
Probably for atomicity. Likely only a pointer-sized block can be updated atomically, so in order to safely update a value that may be larger, you write it somewhere else and atomically update the pointer. That way you can only observe the old or the new value, and never some intermediate result if power was lost partway through writing the new value. The same technique is used in journaling file systems.
I'm curious how the NVRAM is actually stored. In embedded systems on microcontrollers, the constraints of the HW make you do what we called "Emulated EEPROM". You are able to erase data in increments of, for example, 4 KiB, and write it in increments of 16 bytes (though it varies with the implementation). So on write you just say "this is block foo and it stores value bar" and append that after the last written data in the block. When you recover data, you look for the latest valid value of foo and say "the value of foo is bar". You might have multiple instances of foo written, but only the latest one is valid. Once the active block is full, you copy the current values of all the NvM blocks over to the next erasable HW block.
Yes, this achieves atomicity; yes, this gets you wear leveling (with the caveat that the more data you store, the worse the lifetime gets, because you need to do more swaps). But it is also a consequence of HW constraints, and the approach flows directly from them. It might be the result of HW/SW co-design at some point in the past as well, but I have no idea whether that is true.
This information is based on my experience in automotive.
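The append-and-recover scheme described above can be sketched like this (a toy model with made-up record layout and sizes, not any real firmware's code; a bytearray stands in for one erasable flash block):

```python
RECORD = 16  # write granularity in bytes (varies by flash part)

class EmulatedEeprom:
    def __init__(self, block_size=4096):
        self.block_size = block_size  # erase granularity, e.g. 4 KiB
        self.block = bytearray()      # stands in for the active flash block

    def write(self, var_id: int, value: bytes):
        # Never overwrite in place: append a new record for this id.
        assert len(value) <= RECORD - 2
        rec = bytes([var_id, len(value)]) + value
        rec = rec.ljust(RECORD, b"\xff")       # pad to write granularity
        if len(self.block) + RECORD > self.block_size:
            self._swap()                       # block full: migrate live values
        self.block += rec

    def read(self, var_id: int):
        # Scan all records; the last match is the current value.
        latest = None
        for off in range(0, len(self.block), RECORD):
            rec = self.block[off:off + RECORD]
            if rec[0] == var_id:
                latest = bytes(rec[2:2 + rec[1]])
        return latest

    def _swap(self):
        # Copy only the latest value of each id to a fresh block
        # (here we just reset the bytearray; real HW erases the next block).
        live = {}
        for off in range(0, len(self.block), RECORD):
            rec = self.block[off:off + RECORD]
            live[rec[0]] = bytes(rec[2:2 + rec[1]])
        self.block = bytearray()
        for vid, val in live.items():
            self.write(vid, val)
```

Reads get slower as the block fills (linear scan), which real implementations usually mitigate with a RAM index built at boot.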
Automotive did 'roll your own' flash handling since almost forever...
I have a Toyota where the odometer resets back to 299,995 miles on every power cycle, because whoever designed the wear leveling didn't plan that far ahead...
True; I was trying to find the variable storage requirements in the UEFI specification but couldn't (is it Section 3? 8?), so I resorted to linking to the struct definition in the EFI Shell package that the author used.
The amount of visual flair that thing managed to produce in the heyday of WinXP was stunning; it felt like operating a sci-fi movie prop, with all of the alpha-blended animations shooting out of the player window at times.
One trick not mentioned in the article is to repeat the ddrescue run with the same CD in different drives; this recovered almost all of my unreadable CDs, as no single drive could read any of them entirely.
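The multi-drive trick works because GNU ddrescue records progress in a mapfile; reusing the same mapfile and output image means each subsequent drive only attempts the sectors the previous ones failed on. Something like this (device paths and filenames are examples):

```shell
# First pass in drive A; 2048-byte sectors for data CDs.
# The mapfile records which sectors were rescued and which failed.
ddrescue -b 2048 /dev/sr0 disc.iso disc.map

# Move the same disc to drive B and rerun with the same image and
# mapfile: ddrescue skips everything already rescued and retries
# only the sectors still marked bad.
ddrescue -b 2048 /dev/sr1 disc.iso disc.map
```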
My biggest savior was a Pioneer BDR-AD07BK which managed to read some discs that other drives couldn't even get the TOC off!