If you're interested in similar edge cache programs:
https://www.facebook.com/peering/ (though I don't see FNA specifically mentioned there)
(I suppose the flippant argument is 'the end user' but that's not quite what I mean...) :)
You're right that powering racks isn't free either though: if the cache operator doesn't utilize the rack well and isn't offloading enough traffic to it, the ISP will give them the boot (as in, "we wheeled your rack out to our docking bay, come pick up your piece of junk if you want it back"). It's uncommon since _most_ of the orgs running these racks are competent, but does happen from time to time.
Also, even though it's a net win for the ISPs, there are still cases where the operator ends up paying them a fee. This has less to do with the economics of edge caching within an ISP network and more to do with the bargaining power certain ISPs have. The 2014 Netflix/Comcast peering agreement is a good example of how those things sometimes pan out.
This lets me deduce the ISP is way over-subscribed on their upstream internet connection, but also has a Netflix appliance.
What's frustrating is if they complain enough, the company will send a tech out who will "adjust" their antenna, or suggest it's a line-of-sight problem and they need a bigger tower, should cut down some trees, etc.
If it is the wireless link, getting link budget up does really help.
Sounds like my experience on Three UK here in London. Just awful! Vodafone and O2 are at least 20X faster at peak times despite Three actually having a significantly stronger 4G signal at my flat. Absolute joke of a network. I'm so angry that I was basically tricked into a 12 month contract with them.
Some good news: they are aggressively rolling out SDL (supplementary downlink) which will really improve things. They also announced today that their 5G network (which has by far the most spectrum of all UK carriers) has gone live. I would expect insanely good speeds on that network.
But yeah, turns out selling unlimited data/tethering/roaming/text/minutes for as low as £11/month isn't a good business strategy.
Generally though: EE is good everywhere. Vodafone is good in North London, O2 better in South London. Three generally awful everywhere.
No complaints, 100-300Mbps on average.
They seem to have done great at the 5G spectrum auctions, they’re the only UK carrier with 100+ MHz.
I’m also using them for mobile 4G and yes that’s often awful, as others are reporting. I should probably switch to giffgaff (safer to use different providers anyway in case Three has a catastrophic outage), but Three’s "Feel at home" free roaming in many countries beyond the EU (including the US) is super handy and has no equivalent that I know of.
But this is not really the Three network. It's a separate network with a different network ID and cannot be accessed with ordinary Three devices/SIM cards.
It's based on the network formerly known as "Relish", which was bought by Three in 2017. The areas covered by Three Broadband's 5G are basically exactly the same areas which were always covered by Relish. They just upgraded the equipment to use 5G radio.
Relish held a lot of spectrum in the 3.5 GHz (n78) band. Three, by arrangement with Ofcom, added Relish's holding to their own 3.5 GHz spectrum won at auction to give them the contiguous 100 MHz.
There is an additional 200 MHz of 5G spectrum being auctioned this year, which should bring the other operators' 5G holdings up to comparable levels with Three's:
I'm not in the UK and ask purely out of curiosity, but what SIM/configuration do you use then?
I suppose this will improve when they build out the network more, but even in locations with strong 5G signal the speed seems to vary a lot.
Seems like 3's 5G network has still not actually launched, but they're now promising it "by the end of February".
Is 3's SDL in addition to their carrier aggregation ("4G+") rollout? Because I already have that and it doesn't seem to have helped much!
Yes, SDL is on LTE1500, LTE Band 32. It is another carrier to aggregate with the other ones. Both 3 and Vodafone have some spectrum in that band, IIRC 3 has a lot more. It will really help with overloaded cells in the short term.
Vodafone floats between bands 7 (20 MHz channel!), 20, and 1, but gets good speeds on any of them.
Three always slow at peak times, sometimes unusably slow, despite having by far the strongest signal.
The network sucked at first but they are slowly improving things and offered far better customer service and contracts than the existing 2-3 monopolies.
But sadly if you really want the cutting edge of speed and spectrum you need to use one of the monopolies.
Isn’t this argument basically debunked? Places like South Korea and Japan have no issues whatsoever with extremely fast wireless internet at a fraction of the price.
It's pretty clear that Three's spectrum is woefully lacking compared to the others, except on 5G and the LTE SDL band 32 (both not widely deployed yet).
Especially problematic when you consider that Three are the ones who have been selling super cheap unlimited data packages for the last couple of years!
On one hand you have people saying 4G is good enough and that 5G is pure hype (it is overhyped, but it's not just hype); on the other hand you have people complaining about 4G capacity.
It seems most people discussing 4G or 5G have absolutely no idea what capacity and bandwidth are about. They only care about absolute speed.
Good thing the net neutrality meme has died down; it would be quite the interesting PR head-scratcher otherwise.
Some of your ISP's connectivity probably comes via settlement-free peering, yet we pay for that too. None of this is a secret, it's just how the internet's plumbing works.
It seems like their niche is as an easy to deploy, more traditional looking CDN that can run out in the RAN, which does still have value.
Edge nodes don't have to be just caches. Plenty of major sites do compute too.
Something I wish they'd do for an iOS Time Capsule.
The main requirement is how much traffic you need to be doing with the peer before they give you a rack - Apple states 25Gbps; more mature programs like Netflix are 5Gbps. Sometimes that's negotiable and they'll hand them out for even smaller traffic amounts (I've heard less than 1Gbps for some of those listed, though won't mention which ones). Like most things in this area of networking, there's a lot of variability and personal contacts involved.
Everyone also meets up twice a year at NANOG.
Popular app and OS updates can be cached on a local Mac, with the recommended machine being on the network using a wired connection and always connected (desktop/Mac mini)
I'm thinking more generally userbase wise and not quite the tech centric crowd we have here on HN.
I would expect most people wouldn't notice.
It has the same information as Apple's official documentation, but it highlights the advanced options.
Apple TV+ costs a flat fee for unlimited access to a (for now very limited) collection of original content. Surely they plan to launch more shows.
Exactly how SSL is handled (or not) varies by provider, but one thing I'll mention is that these will typically never have a cert so important that it can't be easily revoked, and the most important data will likely not be flowing across them - exposure is very limited. Using video streaming as an example, one option is to only do TCP termination for example.com (or to not even terminate that domain on the local cache, but back at your main datacenter), then use subdomains with individual certs for the local cache (eg. isp1.cache.example.com). In that case, service calls like login, retrieving the manifest, etc. are secured by the certs you're keeping in your primary dc, then the manifest has a set of https://isp1.cache.example.com URLs pointing to the local cache only for video segments.
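The split described above (service calls secured by the origin's cert, bulk segments pointed at a per-ISP cache subdomain) can be sketched roughly like this. All names here (example.com, isp1.cache.example.com, the seg*.ts paths) are illustrative, not any provider's actual scheme:

```python
# Hypothetical sketch: the manifest is fetched over the origin's cert, but the
# segment URLs it contains point at a per-ISP cache subdomain whose cert can be
# cheaply revoked without impacting the primary domain.
def build_manifest(isp_cache_host: str, segment_ids: list[int]) -> dict:
    """Return a manifest where only bulk video bytes hit the local cache."""
    return {
        # Secured by the cert kept in the primary datacenter:
        "login_url": "https://example.com/api/login",
        "manifest_url": "https://example.com/api/manifest",
        # Bulk segments served by the in-ISP rack with its own limited cert:
        "segments": [
            f"https://{isp_cache_host}/video/seg{n}.ts" for n in segment_ids
        ],
    }

m = build_manifest("isp1.cache.example.com", [1, 2, 3])
print(m["segments"][0])  # https://isp1.cache.example.com/video/seg1.ts
```

The design point: if a rack's cert is compromised, you revoke isp1.cache.example.com and start handing out a different subdomain in new manifests; login and the manifest itself were never exposed to the rack.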
Another tricky aspect is making sure that your main network treats them as untrusted so someone with local access can't use it to get a foothold into the rest of your infra.
I think this is specifically addressed with the introduction of TLS Delegated Credentials. This allows the CDN edge to use a very short lived credential in the place of the certificate's private key.
It's already supported in evergreen browsers and in certificate profiles from commercial CAs like Digicert.
The delegated creds draft that regecks mentioned is also relevant. That will make issuing lighter weight, so this sort of 'burn the cert and roll the DNS name' procedure becomes significantly cheaper operationally.
Apple owns the intelligent layer, the one that holds the API. Once you query it, it answers with the location and the hash, which allows you to download the content from the distributed box and safely verify it.
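The verification step in that scheme is simple to sketch: the trusted API hands back an expected hash, and the client rejects any bytes from the untrusted box that don't match. This is a generic illustration, not Apple's actual protocol:

```python
import hashlib

def verify_download(data: bytes, expected_sha256_hex: str) -> bool:
    """Reject content whose hash doesn't match what the trusted API returned."""
    return hashlib.sha256(data).hexdigest() == expected_sha256_hex

# The hash comes over the secure channel to the API; the bytes come from the
# local, less-trusted cache box.
payload = b"some app update bytes"
expected = hashlib.sha256(payload).hexdigest()
assert verify_download(payload, expected)
assert not verify_download(b"tampered bytes", expected)
```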
- TLS is still important to stop tampering of video content or images, as well as user privacy over what content was specifically viewed.
- Some ISPs have (and still do) intercept plaintext video content -> transcode to a much lower bitrate -> cache that for their users. That hurts the content provider, as they lose visibility (logs/metrics), and the user who may suffer a reduced experience that the content provider can’t easily fix. End-to-end TLS solves that.
You bring up a good point about user privacy, and for sure a key can help with that, but at the end of the day there's not much you can do once the physical server is somewhere else. TLS or not, you'll need to trust whoever physically holds that server not to violate user privacy.
I would still suggest a TLS connection for that server, but it would most likely be self-signed with a different root certificate, to prevent someone else from being able to pass themselves off as Apple over content that isn't verified against hashes coming from Apple-owned platforms.
Then the client can reject any tampered stuff even without encryption on the transport.
TLS does wonders for preventing transparent proxies, DPI & ISPs from modifying or tracking what you see.
You won't have tampered content if the data is rejected, that's absurd. You can do the same with TLS by the way, that's the whole point of TLS, being able to verify (and thus reject or not) data.
Sure, it breaks the client, but you can do that in any situation; you just have to unplug a wire ;).
Note that I understand the importance of securing private keys in general. I'm just asking about ISPs because you specifically mentioned them.
It's not necessarily that the ISPs will want to break into equipment, just that they are the weakest link bad actors can use to target Apple/Apple Customers.
Being able to sign some things as Apple / MITM some Apple URLs strikes me as interesting enough to get state-security level interest, at which point you fall back to ISPs being made of people, and people being coercible and bribeable.
Also, there's a surprising number of colo facilities that are pretty lax about who is allowed to come and go.
They announced Keyless SSL way back when: https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...
I'm a fan of Facebook's take on this, though: https://engineering.fb.com/security/delegated-credentials/
Also see, Google's answer to JWTs: Macaroons-- https://hackingdistributed.com/2014/05/16/macaroons-are-bett...
A single domain name and DNS to route is uncommon because it doesn't give you fine-grained control of load - you need to be mindful of the rack's capacity, and you also need to make sure that most of that ISP's customers go to the rack/people who aren't that ISP's customers don't go to it.
Anycasting isn't going to be great for traffic management or long-lived TCP conns, and if you can avoid the complexity of each rack needing a bgp session into the ISP's network you're going to be much better off.
Typically this is going to be directly routed to the rack via a unique DNS name after some form of service call.
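That steering logic (service call returns a rack-specific hostname, with the provider keeping an eye on the rack's capacity) might look something like this sketch. The ISP names, hostnames, and capacity numbers are all made up for illustration:

```python
# Illustrative sketch: a service call maps the client's ISP to its in-network
# cache rack, but only while that rack has headroom; everyone else (and any
# overloaded rack's users) falls back to the origin over transit.
RACKS = {
    "isp1": {"host": "isp1.cache.example.com",
             "capacity_gbps": 40, "load_gbps": 12},
}

def pick_host(client_isp: str) -> str:
    rack = RACKS.get(client_isp)
    if rack and rack["load_gbps"] < 0.9 * rack["capacity_gbps"]:
        return rack["host"]           # on-net user, rack has headroom
    return "origin.example.com"       # off-net user or rack near capacity

print(pick_host("isp1"))  # isp1.cache.example.com
print(pick_host("isp2"))  # origin.example.com
```

This is the fine-grained control anycast can't give you: the decision is per-request and can consider rack load, client network, and content availability.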
Also they could just be signing content at the application layer like APT does, and the transport is plain HTTP.
I’ve never seen a download from the store faster than about 50 megabits per sec even on a gigabit line.
Not to be flippant, but I recognized your name and when I visited your bio I see you now work at Netflix. You could probably find an excellent answer as to why Apple would choose to start its own program rather than rely on existing CDN colocation by asking your internal team why Netflix does the same. My guess, without inside knowledge, is it is a combination of price and performance.
What I'm saying is that Apple is probably already doing the same thing (using their own servers in colocations). They're just opening it up to the public now.
I'm not sure what you mean that this is being "opened up to the public now". This seems to be limited to ISPs as far as I can tell. I'm not sure who, other than an ISP, would be doing 25Gb/s of Apple-content related traffic.
For example IMGX uses Macs in its data center for image processing.
I remember reading somewhere the performance they got with CoreImage was worth the extra hardware cost.
Must have been a heartbreaker sitting on all that infrastructure in July 2019 when the cheese grater was announced and the newest trashcans were still selling for full price with a six-year-old processor.
(Here's a comment from 2015 which expresses these concerns in the future tense: https://news.ycombinator.com/item?id=9500301)
These poor saps bet BIG on a non-standard, and seem to have discovered why we like standards after all.
> In the end they are now effectively at 200Gb/s encrypted video streaming from FreeBSD per server.
RHEL for WebObjects and other servers.
Alas, it is but a dream.
Apple obviously isn’t beholden to the licensing terms that they release OS X to their consumers under.
If they want to run OS X on commodity hardware they can, and moreover it doesn’t change their positioning to the outside at all.
It probably wouldn’t look good if they hosted this on third party machines running Mac OS.
For all other apps (which load static image assets and much smaller dynamic response payloads), meeting 25Gbps minimum peak is going to be a challenge.
Let's do some rough math. Say your app needs to load 10MB of assets in every user session; 10MB is 80Mb, so the least number of new sessions needed to drive 25Gbps of traffic is 25,000Mbps / 80Mb ≈ 313 new user sessions per second. To sustain that for 5 minutes or so to register as a peak, that's 313 sessions/second * 300 seconds ≈ 94,000 sessions in the window. And since your users realistically have something like 10Mbps of speed, each session takes 8 seconds to pull its 10MB, which means roughly 313 * 8 ≈ 2,500 concurrent sessions in flight at any moment, against your static assets alone.
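The back-of-envelope numbers above check out like so (all inputs are the illustrative figures from the comment, not anything Apple publishes):

```python
# Rough capacity math: how many app sessions does 25Gbps of traffic imply?
TARGET_GBPS = 25        # peak traffic threshold (illustrative)
SESSION_MB = 10         # assets loaded per user session
USER_MBPS = 10          # realistic per-user downlink
PEAK_SECONDS = 5 * 60   # window over which the peak must be sustained

session_megabits = SESSION_MB * 8                         # 10 MB -> 80 Mb
new_sessions_per_sec = TARGET_GBPS * 1000 / session_megabits
session_duration_s = session_megabits / USER_MBPS         # 8 s per session
concurrent = new_sessions_per_sec * session_duration_s
sessions_in_peak = new_sessions_per_sec * PEAK_SECONDS

print(new_sessions_per_sec)  # 312.5
print(concurrent)            # 2500.0
print(sessions_in_peak)      # 93750.0
```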
There are two routes; the key differentiator is whether the resulting name hierarchy is public (like .com) or yours alone, jealously guarded from all others (like .mil).
If the former ICANN will also require you to do a bunch of legal work to ensure that when you fail (because realistically you will) any names can be scooped up and preserved by a new operator of the TLD.
If the latter you're likely to have a tougher time defending why you should own this, unless you're a huge global brand.
Then you need to either spend a lot of money (again estimate $1M a year at least at first) yourself on infrastructure to serve your TLD, or you need to pay somebody else with relevant experience to do it for you.
A surprising number of companies bought vanity TLDs which they then don't use at all because of course they're much less convenient than a short name in an existing TLD. For example the KerryProperties TLD isn't used at all, kerryprops.com is much easier.
Why so much money?
I wish there was something up at https://google.google or https://apple.apple
$ whois goog
% IANA WHOIS server
% for more information on IANA, visit http://www.iana.org
% This query returned 1 object
organisation: Charleston Road Registry Inc.
address: 1600 Amphitheatre Parkway Mountain View, CA 94043 US
address: United States
name: Domains Policy and Compliance
organisation: Google Inc
address: 601 N. 34th Street
address: Seattle, WA 98103
address: United States
phone: 1 202 642 2325
fax-no: 1 650 492 5631
Sadly it always defaults to the first logged in user, which makes it almost entirely useless to me.
> If your business meets these requirements, request an invitation.
Doesn't that defeat the purpose of being "invitation only" which, to me at least, implies the other party knows who they want to invite? That is, invitation only implies hand picked, or pre-chosen by some prior criteria to me. If it's exclusive to select ISPs that meet the criteria and they have to apply, why not just say that instead of the using wording that requires additional explanation to get past people's likely initial interpretation?
The whole point of saying the party is invitation only is so random people don't ask you to come.
I think the way this is normally done is to say "we're being very selective with who we partner with at this point. Please apply if you think you qualify and we'll get back to you."
That they've chosen not to do the obvious seems purposeful, and it's odd that they made it so jarring on purpose.
Why now? Apple has had hundreds of millions of iOS users for years and is fast approaching a billion. Why didn't they do this earlier? Or did they, and it just wasn't public?
What are the chances this is a Mac Pro rack, even though it is highly unlikely to be running on macOS ?
Do they cache iCloud Backup, Photos, and uploads with these edge appliances? Same as macOS Server?
Q: are you getting paid for allocating such a cache? Or should you feel honoured that Apple thinks you are eligible to freely distribute their content?
Imagine that your Internet service is just an Ethernet cable that goes to a router in a datacenter. This is Apple offering to plug their servers into that router. Now you can get to their servers without going out to "the Internet" via a Tier 2 or Tier 1 ISP. That is where the word "internet" comes from -- interconnected networks. More connections is more internetting.
This is all super common. Many big companies are happy to peer with small ISPs if they're already in the same building.
Edit to add: The edge cache thing that this article is about is similar, but not quite the same as what I'm describing. Instead of connecting you to their network, they just put some of their servers in the same datacenter as your network. Even less latency!
Site owners generally hated it, too, since tampering proxies were a perennial source of compatibility bugs and protocol violations even before you had things like the ones which tried to “optimize” images by recompressing them, giving everyone on that ISP a bad experience which you don't know about. Stack Exchange has a number of threads where someone was trying to figure out why only some customers complained about months-stale content (Hi, Telemundo!), low-quality images (Hi again, Telemundo!), mismatched languages, truncated/corrupted contents, etc.
That makes me wonder... I wonder if there is a process by which providers would sign certs to individual ISPs and providers to let them intercept low/medium security content like streaming.
Like, if Netflix is going to serve streams over TLS for philosophical/privacy-from-government/privacy-from-wifi reasons, but wants to let ISPs cache data, they could create a certificate for each ISP/organization and provide the keys to that org.
Then, if NF can identify you are coming from a particular ISP, they can have your content served from the ISP's netflix subdomain, and the ISP could intercept/cache/re-encrypt the data.
Just a thought.
The first kind is the ones that Apple wants - they are the likes of Comcast/Optimum/etc. These are the ISPs that have lots of eyeballs and a captive audience. If Apple has shitty connectivity to them, it is going to be bad for Apple (because the consumer cannot replace such an ISP - there's no real competition in those markets). These are also ISPs that happen to peg their transit PNIs or free peering PNIs as much as possible, forcing others to buy paid peering to have non-shitty connectivity to them. These ISPs are going to charge Apple for colo, power and bandwidth, and Apple is going to pay, and it is going to pay through the nose (just like Netflix does and just like Akamai does).
The other set of ISPs are the ones that want Apple a lot more than Apple wants them. There's no way Apple will pay for access to their customers. Those ISPs are going to give Apple space, power and even bandwidth for free. Hell, they may even have to pay Apple.
Source: did that for both ISP side and content provider side (at different times)
Edit: the flip side of this relationship is also interesting - if Apple or whoever doesn't offload enough of their traffic to the rack, then it isn't cost effective and can really annoy the ISP. I've known some ISPs to boot these caches out of their network when the related company wasn't utilizing it effectively.
No peering on immense libraries of HD video content?
Don't be silly. Obviously this is a business arrangement from which both parties expect to benefit, it's governed by a binding contract, etc... It probably even does have a set of billing rates for stuff at the margin, even. But no, it probably doesn't involve much real money flowing.
None of this is "for free" in any kind of economic sense.
- End users get a better experience because there are fewer hops between the content and their computer.
- ISP gets to pay less on transit
- The content provider also spends less on transit while providing a better end-user experience
Do they have something against using dashes?
That's what they do elsewhere:
For less ambiguity, I should have used the term "hyphen," but imagined it would be clear.
The figure dash _literally_ serves the same purpose as the hyphen:
ASCII was limited, compromises were made, and "dash" and "hyphen" became interchangeable.
Most style guides accept both under the term "dash":
* Unicode has changed this somewhat, though not perfectly.
Hyphens are a form of dash (-) which we use between words or parts of words.
"Apple-supplied and -managed hardware"
The product they're presenting isn't significant, and other people had already made fine points on it, so I pointed out something that I care about that was on-topic instead.
Further, the product is not technically interesting. It's a CDN with an Apple logo on it.
This sounds horrifying.