Amazonbot doesn't respect the `Crawl-Delay` directive. To be fair, Crawl-Delay is non-standard, but it is claimed to be respected by the other 3 most aggressive crawlers I see.
And how often does it check robots.txt? ClaudeBot will make hundreds of thousands of requests before it re-checks robots.txt to see that you asked it to please stop DDoSing you.
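For reference, here's roughly what honoring it would look like; a minimal sketch using Python's standard-library urllib.robotparser (the bot name, delay, and URLs are made up for illustration):

```python
# Minimal sketch of a polite crawler honoring Crawl-delay, using only the
# Python standard library. The user-agent name, delay, and URLs are illustrative.
import time
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: ExampleBot
Crawl-delay: 10
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

delay = rp.crawl_delay("ExampleBot") or 1  # seconds between requests, if honored

for url in ["https://example.com/a", "https://example.com/b"]:
    if rp.can_fetch("ExampleBot", url):
        print(f"fetching {url}, then sleeping {delay}s")
        # ... actual fetch would go here ...
        time.sleep(delay)
```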
Very glad to see the work that byroot is doing as the new ruby-json maintainer!
Since I was mentioned by name in part 3, perhaps I can provide some interesting commentary:
> All this code had recently been rewritten pretty much from scratch by Luke Shumaker ... While this code is very clean and generic, with a good separation of the multiple levels of abstractions, such as bytes and codepoints, that would make it very easy to extend the escaping logic, it isn’t taking advantage of many assumptions convert_UTF8_to_JSON could make to take shortcuts.
My rewritten version was already slightly faster than the original, so I didn't feel the need to spend more time optimizing it, at least until the simple version got merged; I had no idea when that would be, because of silence from the then-maintainer. Every optimization would have been an opportunity for more pain when rebasing away merge conflicts, which was already painful enough the 2 times I had to do it while waiting for a reply.
> One of these for instance is that there’s no point validating the UTF-8 encoding because Ruby did it for us and it’s impossible to end up inside convert_UTF8_to_JSON with invalid UTF-8.
I don't care to dig through the history to see exactly what changed when, but: At the time I wrote it, the unit tests told me that wasn't true; if I omitted the checks for invalid UTF-8, then the tests failed.
> Another is that there are only two multi-byte characters we care about, and both start with the same 0xE2 byte, so the decoding into codepoints is a bit superfluous. ... we can re-use Mame’s lookup table, but with a twist.
I noted in the original PR description that I thought a lookup table would be faster than my decoder. I didn't use a lookup table myself (1) to keep the initial version simple, to make code review simple, to increase the likelihood that it got merged; and (2) because the old proprietary CVTUTF code used a lookup table, and I was so familiar with the CVTUTF code that I didn't feel comfortable being the one to re-add a lookup table. Glad to see that my suspicion was correct and that someone else did the work!
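For anyone following along without reading the gem's C: the two characters in question are U+2028 and U+2029 (the JavaScript-unsafe line/paragraph separators), which encode as 0xE2 0x80 0xA8 and 0xE2 0x80 0xA9. Here's a rough Python sketch (not the gem's actual code, and simplified to just those two plus the mandatory ASCII escapes) of the byte-level shortcut being described:

```python
# Rough sketch of escaping already-valid UTF-8 while only treating 0xE2
# specially: U+2028/U+2029 are the sole multi-byte characters that need
# escaping, and both encode as 0xE2 0x80 0xA8 / 0xE2 0x80 0xA9.
def escape_json_bytes(s: str) -> bytes:
    buf = bytearray(b'"')
    data = s.encode("utf-8")          # assumed valid; no re-validation here
    i = 0
    while i < len(data):
        b = data[i]
        if b == 0x22:                 # '"'
            buf += b'\\"'
        elif b == 0x5C:               # '\'
            buf += b'\\\\'
        elif b < 0x20:                # ASCII control characters
            buf += b"\\u%04x" % b
        elif b == 0xE2 and data[i+1:i+3] in (b"\x80\xa8", b"\x80\xa9"):
            buf += b"\\u2028" if data[i+2] == 0xA8 else b"\\u2029"
            i += 3
            continue
        else:
            buf.append(b)             # every other byte passes through untouched
        i += 1
    buf += b'"'
    return bytes(buf)

print(escape_json_bytes("a\u2028b"))  # b'"a\\u2028b"'
```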
Thanks so much for your work, and also thanks for some insight into choices you made.
I'm not familiar with the internals of the JSON gem, but in general... yeah, it's funny right? PRs are almost never ideal. Always some compromise based on time available, code review considerations, etc.
Thanks for your work. I understand why you tried to keep it simple. Getting ignored or rejected by a maintainer is one of the least fun things I've ever experienced. Takes real skill to get something merged in, and not just technical skill.
Western Digital themselves are literally calling the WD_BLACK line their gaming line[1], and their page for the SN850X in particular is dripping with "gaming"[2].
Maybe that doesn't make it a bad comparison, but the SN850X is def intended to be a gaming part.
What is a competing part that you think would be more comparable?
Gamers are the ones buying expensive parts, so it makes sense to market to them. The next tier up is basically server-class $10-20k machines, which Apple is definitely not competing with (and SSDs are not really that much better in that class anyway). Dismissing SSDs as “gaming” parts, as if that diminishes their quality, misunderstands what’s happening here. It would be one thing if WD were ignoring fsyncs to achieve this performance, but gamers don’t care that much about writes anyway, and there’s no indication WD did that.
Source: I have the WD and Samsung parts as well as cheapo random SSDs.
The other product lines would be the WD Blues (marketed at "creative professionals working with large files") and WD Reds (marketed specifically for use in NASes), but neither of these really supports the argument that the SN850x isn't a good comparison: both the Blue and Red lines are cheaper and less performant (and the Blues are even rated for less longevity), which just makes it seem like Apple is price gouging even more.
The point I was trying to make by pointing out that the SN850x isn't a "gaming part" is that the SN850x is literally the top-of-the-line, most expensive consumer SSD sold by WD, and has practically the same specs as other top-of-the-line, most expensive competing parts like the Samsung 990 Pro. Being one of the most expensive SSDs on the market means that saying that the SN850x is a bad comparison because it's supposedly "lower price" is just false on its face.
Ahh, you misunderstood what the lower price is in reference to. Gaming parts often carry a real premium; it’s specifically the price at a specific performance level where they perform well.
To be more clear, getting equal performance without sacrificing anything would raise costs even further.
I personally don’t think anything is a great comparison.
It’s easy to say a moderate premium over normal business-grade SSDs, but that doesn’t mean any specific number is correct. I’d say the equivalent of a $130 to $220 SSD, assuming a standalone equivalent existed, but the actual number depends on info Apple isn’t sharing. And yes, the range is both above and below the specific part suggested.
Counter-flow heat exchangers. A parallel-flow heat exchanger would result in the average, as you say; but a counter-flow exchanger means that as the formerly-warm air gets progressively cooler, it is exposed to progressively colder air.
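A quick ε-NTU sanity check of that intuition (a textbook approximation, assuming balanced flows and sensible heat only; the NTU values are just illustrative):

```python
# epsilon-NTU comparison, assuming balanced flows (Cr = 1), which is roughly
# the ERV case. NTU values are illustrative.
import math

def eff_parallel(ntu, cr=1.0):
    # parallel-flow effectiveness; caps at 1/(1+Cr) = 50% for Cr = 1,
    # i.e. the two streams can only meet in the middle (the average)
    return (1 - math.exp(-ntu * (1 + cr))) / (1 + cr)

def eff_counter(ntu, cr=1.0):
    # counter-flow effectiveness; for Cr = 1 this is NTU / (1 + NTU),
    # which approaches 100% as the exchanger gets longer
    if abs(cr - 1.0) < 1e-9:
        return ntu / (1 + ntu)
    return (1 - math.exp(-ntu * (1 - cr))) / (1 - cr * math.exp(-ntu * (1 - cr)))

for ntu in (1, 3, 10):
    print(f"NTU={ntu:>2}: parallel {eff_parallel(ntu):.0%}, counter {eff_counter(ntu):.0%}")
# NTU= 1: parallel 43%, counter 50%
# NTU= 3: parallel 50%, counter 75%
# NTU=10: parallel 50%, counter 91%
```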
I've got a counter-flow heat exchanger, but it looks like they're using a different design:
> Each OpenERV TW4 module has a very quiet pair of fans, pointed in opposite directions, and a heat exchanger in a 6 inch pipe, that goes through a wall. The hot, polluted air from inside goes out for 30 seconds, and the heat from it is stored in the heat exchanger.
> Then, the fan reverses direction, moving clean air from outdoors to the indoors. On its way in, it picks up that heat from the heat exchanger. This type of heat exchanger is called a regenerative heat exchanger, or less commonly, a regenerator. The kind shown in the video is a recuperative type, not regenerative. Recuperative types are what most people think of, consisting of a thin layer of material that separates two gas streams. Regenerative heat exchangers are different. They briefly store the energy while air flows in one direction, then release it when the air flow reverses.
> The OpenERV TW4 modules are made to always work in pairs. One always sucks air while the other blows air, synchronized over WiFi. This should be done, or hot air would be pushed out from the building through the walls during the ingress phase, causing heat loss.
Counter-flow heat exchangers can be very efficient, without a heater or cooler attached. That said, I don't think I've seen a commercial ERV claim to be more than 80% efficient, so I'm skeptical of the 90% measurement.
(I've seen ERVs with heaters attached, but only for the purpose of avoiding frost buildup when it's below freezing outside.)