But this is literally the point of MCF: third-party logistics for selling off-Amazon. All these brands chose not to sell on Amazon for one reason or another and yet, without explicit opt-in, were being surfaced on a marketplace they didn’t want to be on. If I send a video through Gmail, that doesn’t mean I want it on YouTube.

I haven’t tried this out yet, but my gripe with pandoc is that it produces LaTeX (and Typst) that no human would ever write. It looks messy and is annoying to share with coauthors.

This is not to say that pandoc isn’t a fantastic tool.


I recently had a baby. My wife and I are not on socials but wanted to share pics with close friends and family, so we created an iCloud shared album. We only realized that these shared albums had likes and comments once someone asked us if we had seen that my grandmother had left a comment.

I think the shared album is almost the perfect form of social media. We invited about 20 people who we all know well. This “community” has a singular and shared purpose of being interested in our baby. Content is presented in chronological order. There are no ads, no other content, not even suggested posts or “you may also like…”. If you want to see more you just swipe to the next picture and perhaps read what other people you know have to say about the funny face our child is making.

The author observes that social media creates bubbles and that people are tired of socializing. In some ways the shared album is the ultimate bubble and provides only a very limited way for our community to socialize. Nonetheless many of our friends, also twenty-somethings, have told us how lovely it is to interact with us and each other on such a limited platform.

I think—well, maybe I hope—that the future of social media is “hyperlocal” like this. It will not be as easy to meet people and find new perspectives, sure, but it will let the internet serve its purpose of connecting people who are physically far away but still very much in each other’s thoughts.


I wasn’t aware of the EFAIL disclosure timeline. Apparently Koch responded to the report by noting that GPG prints an error when the MDC is stripped, which has eerie parallels to the justification behind the recent gpg.fail WONTFIX response (see https://news.ycombinator.com/item?id=46403200).

I think the two cases are different. The EFAIL researchers were suggesting that the PGP code (whatever implementation) should throw an error on an MDC integrity error and then stop. The idea was that this would fix EFAIL by failing safe: the modified message would never be passed on to the rest of the system, and so could never reach the HTML interpreter.
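As a rough sketch of that fail-closed idea (the function names here are hypothetical, not GnuPG’s actual API):

    class IntegrityError(Exception):
        pass  # raised when the MDC check fails; no plaintext is released


    def pgp_decrypt(ciphertext, key):
        # Stand-in for a real PGP implementation; assume it returns
        # (plaintext, mdc_ok).
        raise NotImplementedError


    def decrypt_for_mail_client(ciphertext, key):
        # Decrypt, verify the MDC, and only then release the plaintext.
        # On an integrity failure nothing is returned, so a tampered
        # message can never be handed to the HTML interpreter downstream.
        plaintext, mdc_ok = pgp_decrypt(ciphertext, key)
        if not mdc_ok:
            raise IntegrityError("MDC integrity check failed")
        return plaintext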

In the gpg.fail case the researchers suggested that GPG should return an MDC integrity error instead of the actual message structure error (a compression error in their case). I am not entirely clear why they thought this would help. I am also not sure if they intended all message structure errors to be remapped in this way or just the single error. A message structure error means that all bets are off, so it is in a sense more serious than an MDC integrity error. So the suggestion here seems to be to downgrade the seriousness of the error. Again, I am not sure how that would help.

In both cases the researchers entirely ignored regular PGP authentication. You know, the thing that specifically is intended to address these sorts of things. The MDC was added as an afterthought to support anonymous messages. I have come to suspect that people are actually thinking of things in terms of how more popular systems like TLS work. So I recently wrote an article based on that idea:

* https://articles.59.ca/doku.php?id=pgpfan:pgpauth

It has occurred to me that the GnuPG people may be unfairly criticized because of their greater understanding of how PGP actually works. They have been doing this stuff forever. Presumably they are quite aware of the tradeoffs.


The PGP code obviously should throw an error on any MDC integrity failure!

Thank goodness it’s real JavaScript and not that knockoff js unscrupulous vendors are using to cut costs.


Thank you for writing one of my favorite blog posts of all time! I am curious: What is your favorite thing you’ve written?


I have an Equinox EV which, as I understand it, just runs Android and has no CarPlay or Android Auto support. I thought that I would miss CarPlay but I don’t at all from a UX point of view.

What I do miss is the carefree feeling that I can drive somewhere without my car telling GM and Google about it. It is extraordinarily creepy that they know when I’ve had a bad morning and need a McMuffin or whatever. I use a burner Google account, but given that the GM account must be associated with my actual identity, I don’t think it does much.


> making the timing of the next run independent of the last via the memoryless property of the exponential distribution

Nit: you’re not relying on the memoryless property here but just plain old independent sampling. You’re right that memorylessness means that the elapsed time since the last job provides no information on when the job fires next, but this is orthogonal to the independence of the sleep intervals.
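Concretely, here is a minimal sketch of the independent-sampling version (the function name and the mean interval are mine, not from the post):

    import random
    import time

    def run_forever(job, mean_interval_s=60.0):
        # Each gap is a fresh, independent draw, so the timing of the next
        # run never depends on previous runs. That independence comes from
        # resampling, not from memorylessness: sampling any distribution
        # (uniform, say) the same way would be just as independent.
        while True:
            time.sleep(random.expovariate(1.0 / mean_interval_s))
            job()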


I must correct you! Wayland has not and indeed cannot remove features because Wayland is a “protocol”. It is the compositors that are removing features.

This dilution of responsibility should make you feel much better.


To answer the second part of the question: there is no closed-form solution. Since floating-point math is not associative, there is no O(1) optimization that preserves the exact output of the O(n) loop.
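A quick illustration (my own numbers, not from the question) of why reassociating the additions changes the result:

    a, b, c = 0.1, 0.2, 0.3

    left_to_right = (a + b) + c   # 0.6000000000000001
    reassociated = a + (b + c)    # 0.6

    print(left_to_right == reassociated)  # False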


Technically there is a closed-form solution as long as the answer is less than 2^24 for a float32 or 2^53 for a float64, since below those bounds every integer is exactly representable, and integer addition in floating point is exact as long as the result stays below those caps. I doubt a compiler would catch that one, but it technically could do the optimisation and get the exact same bits. Of course, if the result were initialised to a non-integer value, this would no longer hold.
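A small check of that claim (the choice of n is arbitrary; every partial sum here is an integer well below 2^53):

    n = 1_000_000

    loop_sum = 0.0
    for i in range(1, n + 1):
        loop_sum += float(i)

    closed_form = float(n * (n + 1) // 2)

    print(loop_sum == closed_form)  # True: both are exactly 500000500000.0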


A very good point! I didn’t think of that.


This is why you have options like -ffast-math, which allow more aggressive optimizations that are not guaranteed to produce bit-identical results.


You can split the problem into chunks, where each chunk has the same exponents all the way through. It doesn't get you O(1), but it gets you O(log(n)).

