And they can certainly be reversed: They're just Ethernet ports on a Linux box [with a front-end routing-oriented configuration/distro widget called OpenWRT]. They can do whatever tasks a person wishes to assign them.
And the limitation of 1x 2.5G Ethernet, for those to whom it actually makes a difference, is broadly sidestepped by using a managed switch and VLANs (just like ye olde Linksys WRT54G -- a device that really only had a single Ethernet interface -- did internally).
One can have as many 2.5G ports as one wishes -- just add more/bigger switches. They can be WAN or LAN or whatever you want. (They'll all share the same full-duplex 2.5G pipe to/from this router and the outside world beyond, but so what? OpenWRT makes it easy.)
Is the idea to have the modem and router on the same VLAN connected to that switch, and the LAN traffic on another VLAN? So only one 2.5G port would be enough for everything?
Switches vary in their nomenclature and defaults, but in the switch it could look something like this:
Port 1: Uber-fast WAN-device. VLAN 10 [WAN], untagged.
Port 2: Router-box. VLANs 1 [LAN, default] and 10 [WAN], tagged.
Ports 3-eleventy: Everything LAN. VLAN 1, untagged (the default out-of-the-box port config; it works for my stuff at home without any other considerations). More (cheap! dumb!) switches can be daisy-chained here if one desires[1].
This keeps the traffic secure and separated. VLAN 1 [LAN] can't talk to VLAN 10 [WAN], and VLAN 10 can't talk to VLAN 1, but the router on port 2 can talk to both VLANs and it treats them as separate networks.
In OpenWRT, these individual VLANs show up almost like any other network interfaces do. The difference between a physical port and a VLAN member is pretty much just the name of that port. That part is easy -- in fact, it's all pretty easy.
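For concreteness, a minimal sketch of what the router side could look like in /etc/config/network -- the interface name eth0, the addresses, and the VLAN IDs are assumptions, chosen to match the switch-port example above:

config interface 'lan'
    option device 'eth0.1'       # VLAN 1, tagged on the trunk to the switch
    option proto 'static'
    option ipaddr '192.168.1.1'
    option netmask '255.255.255.0'

config interface 'wan'
    option device 'eth0.10'      # VLAN 10, tagged on the same trunk
    option proto 'dhcp'

From there, firewall zones, DHCP, and everything else treat 'lan' and 'wan' as if they were separate physical ports.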
---
And at the end of the day, this provides (up to[0]) ~2.5Gbps of WAN bandwidth to/from the LAN, in both directions concurrently, minus whatever pittance of data is used for things like LAN DHCP or whatever other services run on the router-box itself.
[0]: The most severe bottleneck here seems more likely to be the packet-processing capabilities of the dual-core 1.3GHz Cortex-A53 CPU than anything else. For instance: On the surface, it seems unlikely to have enough grunt to be able to do things like SQM at these kinds of data rates. But that bottleneck would exist with physical ports, too.
[1]: And in the context of home networking, daisy-chaining switches is not quite optimal, but it's also just fine, and it's far better and more-consistent than wifi. I just moved into a new place and I've already got 3 different network switches in use, for convenience as much as anything else, to get hardwired network access where it needs to be while running a minimum of cabling. (Maybe some day I'll go full structured cabling on the place with a rack of switches and shit somewhere, but meh: Just because I can doesn't mean that I should.)
I can only see one scenario where it doesn't work out to 2.5Gbps: with a symmetrical 2.5Gbps WAN, a LAN device uploading and downloading at 2.5G simultaneously wouldn't reach 2.5 each way, because the router has to use that single link for the ingress and egress of both flows.
Ah. Yes, that's a thing. WAN <-> LAN traffic crosses that single trunk twice, so its bandwidth use is multiplied by a factor of 2. I didn't work the loop all the way out. (Apologies -- I don't work with constrained networks at this level every day, and I appreciate the correction.)
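To make the doubling concrete, a back-of-the-envelope sketch in Python (the trunk speed is from this thread; the traffic numbers are made up for illustration):

# Router-on-a-stick: every WAN <-> LAN byte crosses the single trunk twice --
# once between switch and router, and once again on the way back out.
TRUNK_GBPS = 2.5  # full duplex: 2.5 in each direction

def fits(download_gbps, upload_gbps):
    # A download occupies the trunk once inbound to the router (from WAN)
    # and once outbound (to LAN); an upload is the mirror image. So each
    # direction of the trunk carries download + upload.
    return download_gbps + upload_gbps <= TRUNK_GBPS

print(fits(2.5, 0.0))  # True: a full-rate download alone still fits
print(fits(1.5, 1.5))  # False: concurrent up + down shares the one pipe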
It may be fine for "normal" use: 2.5 in, or 2.5 out, or a mix of some of each. Unless we're heavy into torrenting, I'd like to suggest that most households aren't maxing out both at once: We're doing either a big upload or a big download (but usually not both at once), plus whatever baseload we have for background noise (streaming services, browsing, and whatnot).
It's good enough, maybe? Especially for asymmetric sources?
But really: If a person needs symmetric 2.5, then maybe they should be shopping for a router-like device that costs more than $90. This little box from the OpenWRT folks is pretty neat, but it's still certainly built down to a price. :)
«The YAML 1.1 specification was published in 2005. Around this time, the developers became aware of JSON. By sheer coincidence, JSON was almost a complete subset of YAML (both syntactically and semantically).»
The YAML 1.0 spec says no such thing; it doesn't even mention JSON. Neither does the YAML 1.1 spec. The YAML 1.2.0 and 1.2.1 specs do say exactly that. 1.2.2 no longer does, but it reiterates that the primary focus of YAML 1.2 was to make it a strict superset of JSON.
Your post's remark about YAML 1.2 being opt-in with a "%YAML 1.2" directive is true for the parser you are using (LibYAML), but it is not compliant with the 1.2 spec. The spec says parsers should assume 1.2 by default, and that 1.1 should be the opt-in.
$ python3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import yaml
>>> yaml.safe_load('{"a": 1e2}')
{'a': '1e2'}
>>>
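For contrast, a parser that defaults to YAML 1.2 resolves 1e2 as a float under the 1.2 core schema. A quick sketch -- this assumes ruamel.yaml, which targets 1.2 by default, is installed:

from ruamel.yaml import YAML

yaml12 = YAML(typ='safe')         # ruamel.yaml parses as YAML 1.2 by default
print(yaml12.load('{"a": 1e2}'))  # expected output: {'a': 100.0}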
--
> The spec specifies it should assume 1.2 and 1.1 should be opt-in.
The spec for 1.2 says that, but that's exactly why parsers can't upgrade to 1.2: changing the default version would cause backwards-incompatible parsing changes. Without the version directive there's no way for the parser to know which version was intended, so it defaults to 1.1 -- and so people writing YAML write YAML 1.1 documents, because that's what the parsers expect.
The only way YAML 1.2 is going to displace older versions is in a greenfield ecosystem that has all its tools using YAML 1.2 from the beginning, but that requires an author who both (1) cares a lot about parser correctness, and (2) wants to use YAML as a config syntax, which isn't a large population.
It is indeed true for LibYAML, which is used by Python, Ruby, and the Haskell yaml package. The majority of YAML implementations I've seen that aren't based on LibYAML implement 1.2. I'm pointing it out because it's highly implementation dependent.
I have extremely fond memories of the ranch. It was what got Throttled Pro off the ground and was by far the best Objective-C resource I could have possibly imagined.
Aaron will forever be a hero in my book. Best of luck in the future.
This model should be done more. Develop the product in the open, offer a fully functional open source version, and then offer a paid version on commercial app stores.
This can bring in some extra revenue for the project, plus covers the app store dev work.
I bet a bunch of users would buy it just to support the dev team. Plus I would argue an app store purchase (one click) can be done faster/easier than any donation flow I have seen for an open-source project.
Always loved the model on macOS where developers charge a small one-time fee, usually in the range of $5 - $20, to support them in building relatively good apps. It pushes up the overall quality of the third-party app ecosystem, instead of the either-crappy-free or corporate-built big/slow/expensive mess on other platforms.
Anki has a similar-ish model. Free on desktop and web, a community-made Android app compatible with the free cloud saving, and a paid iOS app by the team. If you can’t afford the iOS app you can use the web version.
And when that's not enough to cover the yearly $100 fee, and Apple makes it purposely difficult to ship compiled binaries in any other way, you get no (niche) OSS software on the platform.