nithssh's comments | Hacker News

The post had some nice structural discussion about digital forensics.


The author wants DRM that won't stop working on older platforms. He does want DRM. The problem is that the DRM is on a rolling release, and non-latest versions are not supported. His concerns would be addressed if games were able to pin to a specific DRM version/policy.


OK, but what does that look like? Does Valve monitor every single game? Do they force developers to provide snapshots of every single version? Or is it up to the developers to support it, with Valve just giving them the tools to do so? In which case basically no one would do it, because it's a lot of work.

Also, what does it look like if you use multiple DRMs? Does Apple FairPlay have a carve-out in its contract that prevents me from also using Denuvo? This is honestly a minefield, and as smart as the poster might be, I don't think he has the answers he would like for all these questions.


Many of the key people working on things like SDL, which Valve relies on very heavily, are also very vocal critics of the Steam Input API. Technically, Steam Input is a bastardization of SDL.

I have personally benefited from the Steam Input feature in niche cases, but the way it hijacks everything by default, even when not enabled, does seem like poor engineering. I have run into issues when doing controller management within games, only to realize it had to be solved at the Steam level.


This is the best explanation I've read of this limitation.


I used to wonder what the play was with pouring so much money into making a React framework the default. Now I understand the play.


Sounds interesting. Is there anything online about this?


It would be more grey on white than you think. It's not just Gaussian blur; they also have noise layers in the shader.


Very enlightening piece on how corporate sponsorship of OSS works. Thanks to the author for writing this.


There are a lot of reasons why just copying the files you need to another FS is not sufficient as a backup; clearly this is one of them. We need more checks to ensure integrity and robustness.

BorgBackup is clearly quite good as an option.


> BorgBackup is clearly quite good as an option.

Once one enables checksums with rsync, doesn't Borg have the same issue? I believe Borg now needs to do the same rolling checksum over all the data as well?

ZFS sounds like the better option -- just take the last local snapshot transaction, then compare to the transaction of the last sent snapshot, and send everything in between.

And the problem re: Borg and rsync isn't just the cost of reading back and checksumming the data -- for 100,000s of small files (1000s of home directories on spinning rust), it's also the slowness of all those metadata ops.
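
Concretely, the two approaches look something like this (the dataset, path, and host names are placeholders, not anything from the article):

    # rsync: the default quick check trusts size+mtime; --checksum forces
    # reading and hashing every file on both ends, which is the expensive part
    rsync -a --checksum /home/ backuphost:/backups/home/

    # ZFS: an incremental send only walks blocks written between the two
    # snapshots, so unchanged files are never read or even stat'd
    zfs snapshot tank/home@today
    zfs send -i tank/home@yesterday tank/home@today | ssh backuphost zfs receive -u backup/home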


As with rsync, Borg does not read files if their timestamp/length have not changed since the last backup. And for a million files on a modern SSD, it takes just a few seconds to read their metadata.
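
(For reference, that heuristic is tunable via Borg's files cache; the flags below are from memory and the repo path is a placeholder, so double-check against your Borg version's docs.)

    # default: skip files whose ctime, size, and inode are unchanged
    borg create --files-cache=ctime,size,inode /path/to/repo::home-{now} /home
    # mtime-based variant, closer to rsync's default quick check
    borg create --files-cache=mtime,size /path/to/repo::home-{now} /home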


> As with rsync, Borg does not read files if their timestamp/length have not changed since the last backup.

...but isn't that the problem described in the article? If that is the case, Borg would seem to be the worst of all possible worlds, because now one can't count on its checksums?


If one worries about bitrot, backup tools are not a good place to detect it. Using a filesystem with native checksums is the way to go.

If one worries about silent file modifications that alter content but keep timestamp and length, then that sounds like malware, and backup tools are not the right tools to deal with it.
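
For the bitrot case, on ZFS for example that amounts to something like the following (the pool name is a placeholder):

    # re-read every block and verify it against its stored checksum
    zpool scrub tank
    # report any checksum errors and, with -v, the affected files
    zpool status -v tank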


> If one worries about bitrot, backup tools are not a good place to detect it. Using a filesystem with native checksums is the way to go.

Agreed. But I think that elides the point of the article which was "I worry about backing up all my data with my userspace tool."

As noted above, Borg and rsync seem to fail here, because it's wild how much the metadata can screw with you.

> If one worries about silent file modifications that alter content but keep timestamp and length, then that sounds like malware, and backup tools are not the right tools to deal with it.

I've seen this happen all the time in non-malware situations, in what we might call broken-software situations, where your packaging software or update app tinkers with mtimes.

I develop an app, httm, which prints the size, date, and corresponding locations of the available unique versions of files residing on snapshots. And this makes it quite effective at demonstrating how often this can happen on Ubuntu/Debian:

    > httm -n --dedup-by=contents /usr/bin/ounce | wc -l
    3
    > httm -n --dedup-by=metadata /usr/bin/ounce | wc -l
    30


The latter type of case is what the article is talking about, though. At the same time, as the article also discusses, it's unlikely to have actually been caused by malware rather than something like a poorly packaged update.

Backup tools should deal with file changes that lack corresponding metadata changes, even if it's more convenient to say the system should just always work ideally. At the end of the day, the goal of a backup tool is to back up the data, not to skip some of it because that's faster.
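
To be fair to Borg, it can at least be told to ignore its files cache and re-read everything, which catches exactly this class of change at the cost of a full read pass (flags from memory; the repo path is a placeholder):

    # treat every file as modified: re-read, re-chunk, and re-hash it all
    borg create --files-cache=rechunk,ctime /path/to/repo::home-{now} /home
    # or bypass the files cache entirely
    borg create --files-cache=disabled /path/to/repo::home-{now} /home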


Amen!


This is great for web developers who have to manually write cross-browser-compatible code. Fat frameworks might take care of the compatibility stuff, but for those raw-dogging it, this will be good.

