Came here to write all of this. Even once a day is totally worth it.

I got myself a Waterpik (water flosser) and got a sore throat for a few days after I first started using it. I'm going to assume this was due to the crud moving out from between my teeth. I recommend water flossers; I much prefer them to conventional floss.

A warning though: water won't wash away 100% of the scum. I recall reading an article somewhere saying that dentists still recommend conventional floss once in a while to remove any stubborn scum if you're using a Waterpik. (They don't say you should stop using the Waterpik, though.)


Thank you @EGreg for sharing this.


Def. I geek out on this stuff, as I am building my own distributed systems. I have had discussions with a lot of people in the space, like Leslie Lamport, Petar Maymounkov etc.

You might like this interview: https://www.youtube.com/watch?v=JWrRqUkJpMQ

This is what I’m working on now: https://intercoin.org/intercloud.pdf





Ways to control etag/Additional Checksums without configuring clients:

CopyObject writes a single-part object and can read from a multipart object, as long as the parts total less than the 5 GiB limit for a single part.
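
Roughly, with boto3 (bucket/key names are placeholders, and the in-place copy details are assumptions worth checking):

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "path/to/object"  # hypothetical names

    # Rewrite a multipart object in place as a single-part object.
    # Only valid while the object is under the ~5 GiB single-part limit.
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        # Copying an object onto itself requires changing something;
        # REPLACE satisfies that but drops user metadata unless re-supplied.
        MetadataDirective="REPLACE",
        # Optionally attach an Additional Checksum to the rewritten object.
        ChecksumAlgorithm="SHA256",
    )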

For future writes, the s3:ObjectCreated:CompleteMultipartUpload event can trigger a CopyObject, or else a defrag into policy-sized parts. Boto3's copy() with multipart_chunksize configured is the most convenient implementation; other SDKs lack an equivalent.
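
A minimal boto3 sketch of that managed copy (the 64 MiB "policy" part size and names are made up):

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Re-chunk anything above the threshold into policy-sized parts.
    policy_part_size = 64 * 1024 * 1024  # hypothetical policy value
    config = TransferConfig(
        multipart_threshold=policy_part_size,
        multipart_chunksize=policy_part_size,
    )

    s3.copy(
        CopySource={"Bucket": "my-bucket", "Key": "path/to/object"},
        Bucket="my-bucket",
        Key="path/to/object",
        Config=config,
    )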

For past writes, existing multipart objects can be selected from S3 Inventory by filtering for ETag values longer than 32 characters. Dividing object size by part size might hint at whether the part size matches policy.
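
Something like this, assuming an inventory CSV that includes Key, Size and ETag columns (the column positions below are hypothetical; adjust to your manifest):

    import csv, gzip

    KEY, SIZE, ETAG = 1, 2, 5  # hypothetical column positions

    with gzip.open("inventory.csv.gz", "rt", newline="") as f:
        for row in csv.reader(f):
            etag = row[ETAG].strip('"')
            if len(etag) > 32:  # multipart ETags look like "md5hex-partcount"
                print(row[KEY], row[SIZE], etag)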


> Dividing object size by part size

Correction: and also compare against the part quantity (parsed from the ETag).
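
i.e. something like this (policy_part_size being the assumed part size you defragment to):

    import math

    def matches_policy(object_size: int, etag: str, policy_part_size: int) -> bool:
        # Multipart ETags are assumed to look like "md5hex-partcount".
        if "-" not in etag:
            return False  # single-part object
        parts_from_etag = int(etag.rsplit("-", 1)[1])
        expected_parts = math.ceil(object_size / policy_part_size)
        return parts_from_etag == expected_parts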


Don't the SDKs take care of computing the multi-part checksum during upload?

> To create a trailing checksum when using an AWS SDK, populate the ChecksumAlgorithm parameter with your preferred algorithm. The SDK uses that algorithm to calculate the checksum for your object (or object parts) and automatically appends it to the end of your upload request. This behavior saves you time because Amazon S3 performs both the verification and upload of your data in a single pass.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/checki...
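
e.g. with boto3 (names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # The SDK computes the SHA-256 while streaming the body and appends it
    # as a trailing checksum; S3 verifies it during the same upload pass.
    with open("local-file.bin", "rb") as body:
        s3.put_object(
            Bucket="my-bucket",
            Key="path/to/object",
            Body=body,
            ChecksumAlgorithm="SHA256",
        )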


It does, and it has a good default. An issue I've come across, though: if you have the file locally and want to check it against the ETag, you have to compute the ETag locally first and then compare it to the value stored on the S3 object.
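
For reference, reproducing a multipart ETag locally is MD5-of-part-MD5s plus a part-count suffix; a sketch (the part size must match whatever the uploader used, 8 MiB being boto3's default):

    import hashlib

    def multipart_etag(path: str, part_size: int = 8 * 1024 * 1024) -> str:
        # MD5 of the concatenated per-part MD5 digests, suffixed with the
        # part count; single-part objects get a plain MD5.
        part_md5s = []
        with open(path, "rb") as f:
            while chunk := f.read(part_size):
                part_md5s.append(hashlib.md5(chunk).digest())
        if not part_md5s:
            return hashlib.md5(b"").hexdigest()  # empty object
        if len(part_md5s) == 1:
            return part_md5s[0].hex()
        combined = hashlib.md5(b"".join(part_md5s)).hexdigest()
        return f"{combined}-{len(part_md5s)}"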


https://github.com/peak/s3hash

It would be nice if this got updated for Additional Checksums.
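
If it does get updated: my understanding is the composite Additional Checksum is analogous, i.e. base64 of the hash of the concatenated per-part digests plus a part count. A rough sketch for SHA-256 (treat the exact format as an assumption to check against the docs):

    import base64
    import hashlib

    def composite_sha256(path: str, part_size: int = 8 * 1024 * 1024) -> str:
        # Assumed format: base64(SHA-256 of concatenated per-part SHA-256
        # digests) + "-partcount", mirroring the multipart ETag scheme.
        part_digests = []
        with open(path, "rb") as f:
            while chunk := f.read(part_size):
                part_digests.append(hashlib.sha256(chunk).digest())
        combined = hashlib.sha256(b"".join(part_digests)).digest()
        return base64.b64encode(combined).decode() + f"-{len(part_digests)}"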



It's similar, but not really the same thing. It has to be set up front by initiating a multipart upload. The parts are still technically accessible as S3 objects, but through a different API. And the biggest limitation is that each part has to be at least 5 MB (except for the final part).
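
(The "different API" being, as far as I know, the PartNumber parameter on GetObject/HeadObject; names below are placeholders.)

    import boto3

    s3 = boto3.client("s3")

    # HeadObject with PartNumber=1 also reports PartsCount for multipart objects.
    head = s3.head_object(Bucket="my-bucket", Key="path/to/object", PartNumber=1)
    print(head.get("PartsCount"))

    # Fetch only the bytes of part 2 of the original upload.
    part = s3.get_object(Bucket="my-bucket", Key="path/to/object", PartNumber=2)
    data = part["Body"].read()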


It's a totally different thing and requires a special way to initiate the multipart upload.


totally different how?


More constraints?


I too have scripted time(1) in a loop badly. perf stat is more likely to be already installed than hyperfine. Thank you for sharing!

