The A17 Pro (originally in the iPhone 15 Pro; now also in the iPad Mini) and A18 Pro (currently only in the iPhone 16 Pro) are the only chips Apple has produced with a "Pro" suffix.
Apple used to use the X suffix for bigger versions of their phone processors that went into iPads (starting with the A5X); that went away when the M-series was introduced.
And the "Pro" suffix itself doesn't seem to denote anything in particular-- there was never a non-Pro A17, and the "A17 Pro" going into the iPad Mini is itself a cut-down version of the chip that went into the iPhone 15 Pro (it has one GPU core disabled).
Monopoly was patented-- in 1935. So that's long-expired.
And many of the non-Hasbro -opoly games (that use that as part of their name) actually are licensed. Hasbro's been known to go after unauthorized users of the name for trademark infringement.
WSL2 is "just" Linux running in a Hyper-V VM (with some special sauce on top of it to handle things like interacting with the filesystem or doing Wayland and X11 graphics, plus containerization stuff to allow multiple distributions to be installed and run under one VM and one kernel).
WSL1 was a completely different approach, adding a Linux compatibility layer to Windows itself. There, you never had a Linux kernel running at all-- Linux syscalls would call into WSL, which would talk directly to the NT kernel to do whatever that syscall needed to do.
WSL1 didn't last very long (still present, but not actively being developed)-- turns out that reimplementing one operating system on top of another is a Hard Problem (see also: Wine). WSL2 avoids this entirely, and also avoids most of the impedance mismatches that you get when trying to reimplement POSIX on top of NT. WSL2 solved a whole bunch of compatibility problems essentially overnight that WSL1 never even got around to.
Windows 10 only supports 16-bit DOS and Windows apps in its 32-bit version, so only that version has a VM layer for 16-bit apps. (On x86, NTVDM uses the processor's virtual 8086 mode to do its thing; that mode doesn't exist in 64-bit mode on x86-64, and MS didn't build an emulator for x86-64 like they did for other architectures back in the NT-on-Alpha/PowerPC era, so there's no DOS or 16-bit Windows support on 64-bit Windows at all.)
Unlikely. Localhost can be a secure context because localhost traffic doesn't leave your local machine; .internal names have no guarantees about where they go (not inconceivable that some particularly "creative" admin might have .internal names that resolve to something on the public internet).
One can resolve "localhost" (even via an upstream resolver) to an arbitrary IP address. At least on my Linux system "localhost" only seems to be specially treated by systemd-resolved (with a cursory attempt I didn't succeed in getting it to use an upstream resolver for it).
So it's not a rock-hard guarantee that traffic to localhost never leaves your system. It would be unconventional and uncommon for it to, though, except for the likes of us who like to ssh-tunnel all kinds of things on our loopback interfaces :-)
The sweet spot of security vs. convenience, in the case of browsers awarding "secure origin" status for .internal, could perhaps be found dynamically, case by case, at connect time:
- check if it's using a self-signed cert
- offer TOFU procedure if so
- if not, verify as usual
Maaaaybe check whether the connection is to an RFC1918 private range address as well. Maybe. It would break proxying and tunneling. But perhaps that'd be a good thing.
This would just be for browsers, for the single purpose of enabling things like serviceworkers and other "secure origin"-only features, on this new .internal domain.
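The connect-time flow sketched above could look roughly like this. This is a hypothetical illustration only: `Certificate`, `is_self_signed`, and the `tofu_store` pinning dict are stand-ins, not any real browser API, and real self-signed detection would check the signature rather than just comparing names.

```python
# Hypothetical sketch of a per-connection decision for .internal origins.
# None of these names correspond to a real browser API.
from dataclasses import dataclass


@dataclass
class Certificate:
    subject: str
    issuer: str


def is_self_signed(cert: Certificate) -> bool:
    # Simplified heuristic: self-signed certs have issuer == subject.
    return cert.subject == cert.issuer


def decide(cert: Certificate, tofu_store: dict) -> str:
    """Return how the browser should treat this .internal connection."""
    if not is_self_signed(cert):
        return "verify-chain"          # normal CA chain validation
    if tofu_store.get(cert.subject) == cert:
        return "trusted-on-first-use"  # matches the previously pinned cert
    return "prompt-tofu"               # offer the user a pin-on-first-use prompt
```

A connection with a CA-issued cert goes down the usual validation path; a self-signed cert either matches a prior pin or triggers the TOFU prompt.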
> One can resolve "localhost" (even via an upstream resolver) to an arbitrary IP address. At least on my Linux system "localhost" only seems to be specially treated by systemd-resolved (with a cursory attempt I didn't succeed in getting it to use an upstream resolver for it).
The secure context spec [1] addresses this-- localhost should only be considered potentially trustworthy if the agent complies with specific name resolution rules to guarantee that it never resolves to anything except the host's loopback interface.
No, you can't. Besides the /etc/hosts point mentioned in the sibling, localhost is often hard-coded to use 127.0.0.1 without doing an actual DNS lookup.
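You can see what your own resolver stack does with "localhost" from any scripting language; on a typical system the answer comes from /etc/hosts (via NSS) rather than DNS, and everything returned is a loopback address. A small probe, assuming a conventionally configured machine:

```python
import ipaddress
import socket


def localhost_addresses() -> set[str]:
    """Ask the system resolver (hosts file / NSS, normally not DNS)
    what "localhost" maps to."""
    infos = socket.getaddrinfo("localhost", None)
    return {info[4][0] for info in infos}


def all_loopback(addrs: set[str]) -> bool:
    """True if every address is in a loopback range (127.0.0.0/8 or ::1)."""
    return all(ipaddress.ip_address(a).is_loopback for a in addrs)
```

On a standard install this reports only 127.0.0.1 and ::1 — though, per the thread above, a deliberately miswired /etc/hosts could change that, which is exactly why the secure-context spec demands guaranteed-loopback resolution.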
You shouldn't do this if you value things working, though-- this is a pretty rare configuration (you have to go way out of your way to get it), so many developers won't test with it and it's not unheard of for applications to break on case-sensitive filesystems.
If you absolutely need case-sensitivity for a specific application or a specific project, it's worth seeing if you can do what you need to do within a case-sensitive disk image. It may not work for every use-case where you might need a case-sensitive FS, but if it does work for you, it avoids the need to reinstall to make the switch to a case-sensitive FS, and should keep most applications from misbehaving because the root FS is case-sensitive.
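If you're juggling case-sensitive and case-insensitive volumes, it's handy to check which kind a given directory is actually on. A minimal cross-platform probe (hypothetical helper, not Apple tooling): create a file, then see whether a differently-cased name resolves to it.

```python
import os
import tempfile


def is_case_sensitive(path: str) -> bool:
    """Probe whether the filesystem at `path` distinguishes file
    names that differ only by case."""
    with tempfile.TemporaryDirectory(dir=path) as d:
        probe = os.path.join(d, "CaseProbe")
        with open(probe, "w") as f:
            f.write("x")
        # On a case-insensitive FS, the lowercased name finds the same file.
        return not os.path.exists(os.path.join(d, "caseprobe"))
```

Pointing this at a mounted disk image vs. the root volume will tell you whether your project actually landed on the case-sensitive side.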
Most things work fine, but it will break (or at least did break at one point) Steam, Unreal Engine, Microsoft OneDrive, and Adobe Creative Cloud. I'm rather surprised about the first two, since they both support Linux with case-sensitive filesystems. I took the opposite approach from you, though: making my root filesystem case-sensitive and creating a case-insensitive disk image if I ever needed those broken programs.
I keep a case-sensitive volume around to check out code repositories into.
For everything else I prefer it insensitive, but my code is being deployed to a case-sensitive fs.
And the data that's collected here includes the full page URL-- which, in this case, includes the fragment and therefore whatever data is being "stored", at the time of capture.
This is probably beyond the author's control, but they shouldn't host it somewhere that can inject scripts outside their control (like Cloudflare) and then claim "privacy".
(The Cloudflare script makes a request to `/cdn-cgi/rum`, with the full page URL in its JSON payload at `timingsV2.name`.)
That'd pretty much be Portal; the technique he's using to render there is basically the same as the one Portal uses (the Portal engine can do "invisible" portals like this too, but there's only one point across both games where they're used in the finished game).
I suppose something around non-Euclidean levels without user-created portals could be interesting, but I think it'd be hard to flesh that out to a point where there's enough there for a whole game without coming up with some other gimmick.
I know it was left as an exercise for the reader, but I never worked out where that was used. (The turret live fire exercise comes to mind, but I thought I'd ruled that out.) Do you happen to know?
> They are also used twice in the campaign (contrary to the commentary's claim that they are only used once): in the GLaDOS wakeup sequence where they are used to connect the incinerator shaft to GLaDOS' chamber, and in Finale 2 where they are used to connect the "trap" chamber to the main map. These are the only uses of this entity in the final game.
I was in Nashville for the last Brood XIX emergence (2011)-- where they actually emerge is pretty patchy, but they were pretty dense around my office building, to the point where you couldn't walk down the sidewalk without stepping on cicadas or cicada shells with almost every step. Went to an area renaissance fair that same month, and they were just about deafening in the forest around the site (thankfully weren't swarming too badly where people were, at least).
I'm not looking forward to this May. Not sure what to expect at home this time around (still in Nashville, but a different part of town)-- hopefully they're not too bad, but I suspect my noise-cancelling headphones will be getting a workout while they're out.