
> What, specifically, are we learning?

That some countries are willing to hold tech companies accountable (or at least claim the intent to) for breaking the law. In turn, this could lead to court cases deciding whether companies can hide behind "It wasn't us, our AI did it!" or whether they have to take responsibility for the thing they made.


Well FWIW, the original poster's anti-C++ statements aside, removing the borrow checker does nothing except allow you to write thread-unsafe (or otherwise race-prone) code. Therefore, the only change this really makes is allowing you to write slightly more ergonomic code that could well break unexpectedly at some point.

Nope. Anything which wouldn't pass the borrow checker is actually nonsense. The fantasy that it would just magically lose thread safety or gain race conditions is just that: a fantasy.

The optimiser knows that Rust's mutable references have no aliases, so it needn't safeguard mutation, but without borrow checking this optimisation is incorrect and arbitrary undefined behaviour results.
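To make that failure mode concrete, here is a minimal sketch (illustrative only, not from the thread) of the kind of aliasing the borrow checker forbids; rustc rejects it with E0499 precisely because codegen assumes `&mut` never aliases:

    // Two live mutable references to the same value. The borrow checker
    // rejects this (error[E0499]); if it compiled anyway, the optimiser
    // would still assume `a` and `b` cannot alias (Rust marks `&mut` as
    // noalias), so writes through one could be reordered or cached past
    // writes through the other, and the behaviour would be undefined.
    fn main() {
        let mut x = 0_i32;

        let a = &mut x;
        let b = &mut x; // error[E0499]: cannot borrow `x` as mutable more than once at a time

        *a += 1;
        *b += 1;

        println!("{x}");
    }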


I know a lot of people who refuse to use Ubuntu outright specifically and solely because of snaps and how awful they are. Our developer laptops at work are meant to be running Ubuntu and I have some coworkers who only begrudgingly switched over after discovering how to prevent the 'fake snap firefox' package from being installed[0].

I get what they're going for - a way to ship self-contained (usually end-user-facing) applications with any dependencies they need without any risk of breaking other applications in the system. Unfortunately, it just results in breaking those applications specifically instead, in weird and stupid ways that are difficult to debug.

I think if snaps did the Flatpak thing - extract to a local directory instead of living on squashfs forever, or even store them as an uncompressed disk image instead of squashfs - it might be more reasonable, but at that point you may as well just use Flatpaks like everyone else wants.

[0] - Add the following to `/etc/apt/preferences.d/no-ubuntu-firefox`:

    Package: firefox
    Pin: release l=Ubuntu
    Pin-Priority: -1
Then install the apt repository as described here: https://support.mozilla.org/en-US/kb/install-firefox-linux#w...

This will make any `firefox` package from any repository with the `Ubuntu` label (i.e. an official Ubuntu repository) have a priority of -1, which apt treats as 'never install ever'.
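If you want to confirm the pin took effect, `apt-cache policy firefox` should show the Ubuntu-labelled version listed at priority -1 and the Mozilla repository's package as the install candidate.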


Keep in mind that for a lot of Chinese companies, it's difficult to (legally) access some outside resources.

My company hosts our docker images on quay.io and docker hub, but we also have a tarball of images that we post to our Github releases. Recently our release tooling had a glitch and didn't upload the tarballs, and we very quickly got Github issues opened about it from a user who isn't able to access either docker registry and has to download the tarball from Github instead.

It doesn't surprise me that a lot of these companies have the same "release process" as Wii U homebrew utilities, since I imagine there aren't a lot of options unless you're pretty big and experienced (and fluent in English).


That's the list of hardware they've explicitly tested on. Always bear in mind that, for any given standard, no matter how straightforward, there are going to be dozens of vendors who screw it up for no real reason other than incompetence or malice.

The older a piece of software is, the more workarounds it will have accrued for various hardware bugs or vendor misdeeds, so it's reasonable for the project to disclaim that it's only been tested on a small number of physical hardware devices even if, in theory, it should work out of the box on all of them.


I think what it means is:

1. It's an EFI application

2. It doesn't require any external runtimes, any setup, etc. (i.e. your UEFI system can boot straight into it without anything in between).

At least, that seems to be the case.


New tools during the transition, hopefully fewer tools in the long run. Also things making a lot more sense in the long run.


I was hired at a small startup (~15 employees total) and one of the first things I did was to migrate their SVN repository to Git. Not too difficult: I brought over the history and then had to write a bunch of tooling to handle the fact that not all of the source code was in one giant hierarchy anymore (since everything was microservices and self-contained libraries, it made sense to split them out).

After I left that company I ended up at a larger company (~14k employees), in part because I'd worked on SVN-to-Git migrations before. Definitely a different beast, since there were a huge number of workflows that needed changing, 10 years of SVN history to import (some of which used to be CVS history), inadvertently added VM images and ISOs to prune out, tons of code in their Jenkins instance to rewrite, etc.

All this on top of installing, configuring, and managing a geographically distributed internal Gitlab instance with multiple repositories in the tens or hundreds of gigabytes.

It was a heck of a ride and took years, but it was a lot of fun at the same time. Thankfully, since 'the guy who suggested the transition' was the CEO (at the first company) or the CTO (at the second), nothing went wrong, no one got thrown under buses, and both companies are still doing a-okay (as far as source control goes).


I don't understand the code itself, but here's Debian's patch to detect overlapping zip bombs in `unzip`:

https://sources.debian.org/patches/unzip/6.0-29/23-cve-2019-...

    The detection maintains a list of covered spans of the zip files
    so far, where the central directory to the end of the file and any
    bytes preceding the first entry at zip file offset zero are
    considered covered initially. Then as each entry is decompressed
    or tested, it is considered covered. When a new entry is about to
    be processed, its initial offset is checked to see if it is
    contained by a covered span. If so, the zip file is rejected as
    invalid.
So effectively it seems as though it just keeps track of which parts of the zip file have already been 'used', and if a new entry in the zip file starts in a 'used' section then it fails.
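A rough sketch of that bookkeeping (in Rust for illustration; the actual patch is C inside unzip's extraction code): keep a list of covered byte ranges and reject any entry whose starting offset already falls inside one.

    // Illustrative only - not the unzip patch itself.
    // `spans` holds half-open [start, end) byte ranges of the zip file that
    // have already been accounted for (central directory, processed entries).
    struct Coverage {
        spans: Vec<(u64, u64)>,
    }

    impl Coverage {
        fn new() -> Self {
            Coverage { spans: Vec::new() }
        }

        // Returns false (reject as a possible zip bomb) if the new entry
        // starts inside bytes that an earlier entry already used.
        fn try_add_entry(&mut self, offset: u64, compressed_len: u64) -> bool {
            if self.spans.iter().any(|&(s, e)| offset >= s && offset < e) {
                return false;
            }
            self.spans.push((offset, offset + compressed_len));
            true
        }
    }

    fn main() {
        let mut cov = Coverage::new();
        assert!(cov.try_add_entry(0, 100));   // first entry: fine
        assert!(!cov.try_add_entry(50, 100)); // starts inside the first: rejected
        assert!(cov.try_add_entry(100, 100)); // adjacent, non-overlapping: fine
        println!("overlap check behaves as described");
    }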


I wonder if this has actually been used for backups in real use cases (think of how LVM or ZFS do snapshotting)?

I.e. an advanced compressor could abuse the zip file format to share base data for files which only incrementally change (get appended to, for instance).

And then this patch would disallow such a practice.


Debian's `unzip` utility, which is based on Info-ZIP with a number of patches, errors out on overlapping files, though not before creating a 21 MB file named `0` - presumably the only non-overlapping file.

    unzip zbsm.zip
    Archive:  zbsm.zip
      inflating: 0
    error: invalid zip file with overlapped components (possible zip bomb)
This seems to have been done in a patch to address https://nvd.nist.gov/vuln/detail/cve-2019-13232

https://sources.debian.org/patches/unzip/6.0-29/23-cve-2019-...


Yep, these kinds of format shenanigans are increasingly rejected for security reasons. Not zip bombs specifically, but to prevent parser mismatch vulnerabilities (i.e. two parser implementations decompressing the same zip file to different contents, without reporting an error).


I think these mitigations are misguided, and I've had false positives at least once. Rather than caring about structural details (overlapping files, etc.), decompressors should just limit the overall decompression ratio by default (bytes in vs. bytes out). It shouldn't matter how the ratio is achieved.
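A minimal sketch of that idea (hypothetical helper, not an existing unzip or libarchive option): cap each entry's output at some multiple of its compressed size and abort once the cap is exceeded.

    use std::io::{self, Read};

    // Illustrative only: limit how much a single entry may expand,
    // regardless of how the compression achieves its ratio.
    fn read_with_ratio_limit<R: Read>(
        decoder: &mut R,      // stream of decompressed bytes for one entry
        compressed_size: u64, // size of that entry's compressed data
        max_ratio: u64,       // e.g. 1000 for a 1000:1 cap
    ) -> io::Result<Vec<u8>> {
        let limit = compressed_size.saturating_mul(max_ratio).max(1);
        let mut out = Vec::new();
        let mut buf = [0u8; 8192];
        loop {
            let n = decoder.read(&mut buf)?;
            if n == 0 {
                return Ok(out);
            }
            out.extend_from_slice(&buf[..n]);
            if out.len() as u64 > limit {
                return Err(io::Error::new(
                    io::ErrorKind::InvalidData,
                    "decompression ratio limit exceeded (possible zip bomb)",
                ));
            }
        }
    }

    fn main() {
        // Stand-in "decoder": pretend a 100-byte entry inflates to 1 MiB.
        let mut fake_decoder = io::repeat(0u8).take(1024 * 1024);
        match read_with_ratio_limit(&mut fake_decoder, 100, 1000) {
            Ok(data) => println!("ok, {} bytes", data.len()),
            Err(e) => println!("rejected: {e}"),
        }
    }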

