The problem it was originally fixing is bad scrapers hitting dynamic site content that's expensive to produce, like trying to crawl every diff in a git repo or every MediaWiki oldid.
Now it's also used on mostly static content because it's effective against scrapers that otherwise ignore robots.txt.
Several sites using Cloudflare have been partially down since just after 18:00Z.
AO3_Status reports it's Cloudflare routing issues for IPv4 traffic (though the Cloudflare status page reports nothing useful).
I'm using Windows on this laptop. This seems to affect all sites with any sort of Cloudflare protection, including Cloudflare's own support and Discourse sites, pubs.acs.org, and, annoyingly, some sites I need to access daily for my current projects.
To clarify, this is the "Verifying your connection is secure" captcha before the site is even displayed, not a reCAPTCHA after the site is loaded.
I've tried clean browser profiles with no difference. On my normal profile I just use uBlock, Tree Style Tabs (FF)/Tabs Outliner (Chrome), and Tampermonkey for some sites (but not for the ones affected by the captcha loop, in this case).
I mean, technically they have? Maybe not in this update, but they include local AI models for webpage translation. It's pretty useful (when it's a supported language pair, at least)
Yes, you'd just need to have a way to provide input (probably easiest as a demo file), and a machine big enough to simulate the pattern.
You'd definitely be waiting for a while to see any action, though :P
If you're interested in dives that deep, you might like Gynvael Coldwind's dive into "hello world" in Python on Windows [1]. It goes through CPython internals, the Windows conhost, font rasterization, and GPU rendering, among other things.
The `single_ref` field is a fixed-size array in both of the objects referenced on this line, so the line can't panic, and no bounds checks are involved: the compiler sees at compile time that the index is less than the length, so it doesn't need to emit one (although I think rustc still does, and it's actually LLVM that removes it).
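As a rough sketch of what that looks like (hypothetical type and field names, only loosely modeled on the code in question):

```rust
struct Header {
    // Fixed-size array: the length (4) is part of the type.
    single_ref: [u32; 4],
}

fn read_first(h: &Header) -> u32 {
    // The constant index 0 is provably < 4 at compile time, so this
    // can't panic; any bounds check rustc emits is trivially removed
    // by LLVM during optimization.
    h.single_ref[0]
}

fn main() {
    let h = Header { single_ref: [10, 20, 30, 40] };
    println!("{}", read_first(&h));
}
```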
Causing memory leaks is possible in safe Rust even without any arcane invocations: you can construct a cycle of reference-counted Rc<T> objects. There's even a perfectly safe Box::leak in the standard library that gives you a &'static reference to any object by leaking it.
Preventing leaks is outside the scope of Rust's safety system.
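Both are easy to demonstrate in a few lines; here's a minimal sketch (the `Node` type is just made up for illustration):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can optionally point at another node, making cycles possible.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // Two Rc nodes referencing each other: their reference counts never
    // drop to zero, so neither is ever freed -- a leak in 100% safe code.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    // Box::leak is also perfectly safe: it trades the allocation for a
    // reference with 'static lifetime that is simply never freed.
    let forever: &'static mut String = Box::leak(Box::new(String::from("leaked")));
    println!("{}", forever);
}
```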
In my experience, if you have to build an older Maven project, you have to jump through a lot of painful hoops, mostly around HTTP vs HTTPS repositories and Java source/compiler versions.
I have encountered many old projects that don't build out of the box, and of those, I've only managed to get about half working.
I went back and forth, both ways.
I started out in Python with snake_case identifiers, then moved to JavaScript with camelCase, and now I mostly use Rust with snake_case for names and UpperCamelCase for types.
I don't find either style better than the other; the only time I notice anything is when I switch between conventions (e.g. between JS and Rust or C++), because I have to break the habit of typing one way rather than the other.
Still, I find the readability of the two styles pretty much identical once you're used to both.
I mostly find the readability the same, except for acronyms in the middle. RemoteHTTPIPAddr vs remote_http_ip_addr. I prefer the latter, but my go-to language these days encourages the former.
I work in telecoms, where there are so many acronyms that snake_case is really painful (but we still use it).
As for your second example, why not remote_HTTP_IP_addr?
Even more readable IMHO.