Going back to David’s post: edge caching in particular, while useful for static content that is mostly consumed rather than produced (e.g. video), does not help with most Cloud applications. Cloud applications largely require two-way sending and receiving of data, so if we are to see more businesses able to run interactive applications on the Cloud, more local PoPs are required which carry the full application stack. For example, your Google Photos viewing may be served from a cache, but to upload photos you still have to send them to a datacenter in Europe. In David Weekly’s tests, Google and Apple content is actually served off a local edge cache in Nairobi; however, that cache does not serve interactive (send-receive) traffic such as DNS queries to Google’s famous 8.8.8.8 address. Another tricky problem is if people use 8.8.8.8 – a CDN provider such as Akamai will see your request as coming from Europe, and will serve the request out of a European datacenter as opposed to the node in Nairobi.
CloudFlare will be rolling out edge hardware in Africa this year to address the latency problem. We will also be rolling out DNS in Africa, so that DNS queries will not need to leave the continent (or, in many cases, even the country).
For dynamic content we will be supporting our Railgun technology to help alleviate the backhaul latency from these edge machines to origin servers.
These problems are solvable and Africa has not been forgotten (at least by us). We have many clients in Africa and, of course, many people accessing CloudFlare managed sites from African countries.
> Another tricky problem is if people use 8.8.8.8 – a CDN provider such as Akamai will see your request as coming from Europe, and will serve the request out of a European datacenter as opposed to the node in Nairobi.
Correct. EDNS0 is useful, but CloudFlare doesn't need to rely on it. A name like news.ycombinator.com resolves to the same IP address wherever you are in the world; we then route that IP address to a nearby PoP using BGP Anycast, based on where the user is. Thus the 8.8.8.8 problem is irrelevant for us (it might not be for legacy providers).
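If you want to see this from your own connection, a rough sketch like the one below compares the answers you get through a local resolver versus through 8.8.8.8. It needs the dnspython package, and both the in-country resolver address and the second hostname are placeholders, not real infrastructure; an anycast-fronted name comes back the same either way, while a GeoDNS-based CDN name may not.

    # Rough sketch: compare A records returned via two different recursive resolvers.
    # Requires dnspython 2.x (pip install dnspython). The 192.0.2.53 resolver and
    # cdn.example.com are placeholders for an in-country resolver and a GeoDNS CDN name.
    import dns.resolver

    def answers(name, nameserver):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [nameserver]
        return sorted(rr.address for rr in r.resolve(name, "A"))

    for name in ("news.ycombinator.com", "cdn.example.com"):
        local = answers(name, "192.0.2.53")   # placeholder in-country resolver
        google = answers(name, "8.8.8.8")     # Google Public DNS (anycast)
        verdict = "same everywhere" if local == google else "resolver-dependent"
        print(f"{name}: {verdict}")
        print(f"  local   -> {local}")
        print(f"  8.8.8.8 -> {google}")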
It looks like Google Global Cache and other edge caches are proxying everything, including the DNS queries, in Europe, so what's returned is the proxy's IP, which, of course, points to Europe.
I remember when CloudFlare came to Johannesburg sometime last year. Lots of sites got very dramatic speed boosts.
Latency also came down to about 80ms, which is a huge deal, and on ADSL it is sub-20ms. Of the major websites, only Google was local at that point.
We are already rolled out in South Africa and will be adding PoPs in two locations in North Africa, plus one each in East Africa and West Africa.
The idea is to get good latency coverage across African countries. Once we are rolled out in all those locations we'll be looking at latencies from specific countries to see where we should add extra PoPs. The actual locations of our PoPs depend partly on the political geography and partly on the Internet geography. Our goal is minimal latency for the maximum population. For example, it might not make sense to have (hypothetically) a PoP in Ouagadougou if the Internet connectivity to Accra is fantastic, yet despite the relative proximity it might turn out that a PoP in Abuja is better than trying to serve Nigeria from Ghana. We'll monitor performance and see where we should be.
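Roughly speaking, the measurement behind a decision like that is just timing connections from in-country vantage points to candidate locations. A toy sketch of the idea (the hostnames are placeholders, not actual PoPs or monitoring infrastructure):

    # Toy sketch: median TCP connect latency from this machine to candidate endpoints.
    # Hostnames are placeholders; real measurements would run from many in-country probes.
    import socket
    import statistics
    import time

    CANDIDATES = {"Johannesburg": "jnb.example.net", "Accra": "acc.example.net"}

    def connect_ms(host, port=80, samples=5):
        times = []
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=5):
                pass
            times.append((time.monotonic() - start) * 1000.0)
        return statistics.median(times)

    for city, host in CANDIDATES.items():
        try:
            print(f"{city}: {connect_ms(host):.1f} ms")
        except OSError as err:
            print(f"{city}: unreachable ({err})")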
Thanks for the clarification, and the link. I'm from Zimbabwe; I guess due to our political geography we can wave goodbye to any dreams of a PoP here, lol.
These costs trickle down to end users. Even in "progressive" (heh) South Africa, 3G can cost as much as ~$150 per GB (so-called "out-of-bundle" rates).
I was trying to set up IMAP in Thunderbird for my parents (who are forced onto 3G by their location); the obvious merit there is the cloud backup of their emails. To conserve bandwidth they previously had POP3 set up to download headers only. That is impossible with IMAP because Thunderbird apparently considers HDD space more expensive than bandwidth, and so re-downloads messages if you choose to keep headers only. This is a well-known piece of software.
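For what it's worth, the IMAP protocol itself has no problem serving headers only; the limitation is the client. A rough sketch with Python's standard imaplib (server name and credentials are placeholders) that pulls just the headers of unread mail:

    # Rough sketch: fetch only message headers over IMAP to save bandwidth.
    # Server name and credentials are placeholders.
    import imaplib

    with imaplib.IMAP4_SSL("imap.example.com") as conn:
        conn.login("user@example.com", "app-password")
        conn.select("INBOX", readonly=True)
        status, data = conn.search(None, "UNSEEN")
        for num in data[0].split():
            # BODY.PEEK[HEADER] returns headers only and leaves the message unread.
            status, parts = conn.fetch(num, "(BODY.PEEK[HEADER])")
            headers = parts[0][1].decode(errors="replace")
            print(headers.split("\r\n")[0])  # e.g. the Return-Path / first header line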
Keep the high costs of poor little Africa in mind when designing your software. Many of us are lucky enough to have access to "high-speed" 10Mbps uncapped copper, but there are significant numbers of people who are outright extorted by the cellphone networks.
Please design your software to be kind with bandwidth. Latency isn't the only issue; in fact, of all the people I know, only gamers care about latency (because we have the bigger bandwidth/speed problem to deal with). The average African is quite happy to wait 1s for a website to load; they aren't happy if that page costs them $1 to load.
I agree 100% with your general sentiment, but just to nitpick:
You would have to really cherry-pick the worst possible scenario to be paying R1 per MB, and even then R1024 per GB is about $82.44.
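The arithmetic, with an approximate exchange rate:

    # Worst-case out-of-bundle arithmetic; the exchange rate is an approximation.
    zar_per_mb = 1.00                      # assumed worst-case rate of R1/MB
    zar_per_gb = zar_per_mb * 1024         # = R1024 per GB
    zar_per_usd = 12.42                    # rough ZAR/USD rate at the time (assumption)
    print(f"R{zar_per_gb:.0f}/GB is roughly ${zar_per_gb / zar_per_usd:.0f}/GB")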
Contracts come with data included, all the networks have data bundles, most of them run frequent 2-for-1 style promotion deals and you could always shop around.
Once you factor in all of that, mobile bandwidth is pretty much on par with most of the world. Really expensive when you take into account people's incomes, though.
ADSL and other fixed line / point-to-point options are much cheaper once you require anything more than a tiny amount of data, but still really expensive considering the typical South African's income.
Things are very different from the first world, but you have to factor in physics too: we're at the opposite end of the world from just about anything you would want to connect to (latency!), and our neighbours didn't all have connections already set up that we could just piggy-back off, etc.
In Ghana and Kenya, Open Learning Exchange tackles the bandwidth cost and latency issue by using Ground Computing to host applications that require two-way data interactions and adhere to eventual consistency. What eventual consistency means here is that the apps are hosted in several communities on Ground Servers, but "eventually" each Ground Server connects to the Internet and syncs with a central node, where other Ground Servers are also syncing. This isn't so great if the two-way latency you care about is between someone in Nairobi and someone in Atlanta, but it is a big improvement for two-way interactions between people in the same community using the same Ground Server. And there are certain kinds of two-way data where the gaps between syncs matter less, as is the case for the book and movie ratings in the Open Source Learning Management System that we built and host in schools, libraries, and community centers.
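A rough sketch of the idea (the names and data shapes here are made up for illustration, not our actual code): each Ground Server keeps its local writes and, whenever connectivity is available, exchanges them with the central node using a simple last-write-wins merge per record, so everything converges even though the servers are rarely online at the same time.

    # Rough sketch of eventually consistent syncing between a Ground Server and a
    # central node, using last-write-wins per record. Names and structures are illustrative.
    import time

    def merge(local, remote):
        """Merge two {key: (timestamp, value)} stores, keeping the newest value per key."""
        merged = dict(local)
        for key, (ts, value) in remote.items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
        return merged

    def sync(ground_server, central_node):
        """Run whenever the Ground Server gets connectivity; both sides converge."""
        combined = merge(ground_server, central_node)
        ground_server.clear(); ground_server.update(combined)
        central_node.clear();  central_node.update(combined)

    # Example: a book rating recorded offline in one community eventually reaches
    # the central node, alongside ratings synced from other Ground Servers.
    ground = {"rating:book42:student7": (time.time(), 5)}
    central = {"rating:book42:student3": (time.time() - 3600, 4)}
    sync(ground, central)
    print(sorted(central))  # both ratings are now present on the central node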
> Another tricky problem is if people use 8.8.8.8 – a CDN provider such as Akamai will see your request as coming from Europe, and will serve the request out of a European datacenter as opposed to the node in Nairobi.
This is a tricky problem, but there's a solution for it: edns-client-subnet, which lets the DNS provider pass along your subnet to services that don't return the same response for everyone.
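A rough sketch of what that looks like on the wire, using dnspython (the hostname and client subnet below are placeholders): the client-subnet option rides along with the query, so the authoritative server can pick an answer for the user's network rather than the resolver's.

    # Rough sketch: send a DNS query carrying an edns-client-subnet (ECS) option.
    # Requires dnspython; the hostname and client subnet are placeholders.
    import dns.edns
    import dns.message
    import dns.query

    # Advertise the client's /24 so a GeoDNS CDN can answer for the user's network,
    # not the resolver's location. 198.51.100.0/24 is a documentation prefix.
    ecs = dns.edns.ECSOption("198.51.100.0", 24)
    query = dns.message.make_query("cdn.example.com", "A", use_edns=0, options=[ecs])
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    for rrset in response.answer:
        print(rrset)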
I find it interesting that the assumption is that the cloud needs to be pushed out to the edges, as opposed to making the endpoints smarter and able to speak directly to each other. To me this looks like a large factor in favour of (for example) dedicated mobile apps with their own caching logic and background processing over a one-size-fits-all HTML browser approach.
There's a reason Amazon/Google and the other major providers tend to push their availability zones to the edge. It's because this matters. You can't always cache in the background especially for multimedia content. It's simply too heavy.
Not all content providers will be able to run the infrastructure that Google does, which is why it makes sense for cloud service providers to do the same.
The difference in user experience between HTML apps that don't even _try_ to cache anything locally vs mobile apps that sync in the background and cache things locally is night and day and it just gets bigger and bigger as your bandwidth quality gets worse.
This is the biggest reason why I'm concerned for the future of the open web.
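A rough sketch of the cache-locally-and-refresh-in-the-background pattern (the URL, file name, and timings are placeholders): show whatever is already on disk immediately, and only spend bandwidth when the cached copy has gone stale.

    # Rough sketch: serve from a local cache immediately and refresh in the background
    # only when the cached copy is older than MAX_AGE. URL and paths are placeholders.
    import json, threading, time, urllib.request
    from pathlib import Path

    CACHE = Path("feed-cache.json")
    MAX_AGE = 6 * 3600  # seconds before we spend bandwidth on a refresh

    def load_cached():
        return json.loads(CACHE.read_text()) if CACHE.exists() else None

    def refresh(url):
        body = urllib.request.urlopen(url, timeout=30).read().decode()
        CACHE.write_text(json.dumps({"fetched_at": time.time(), "body": body}))

    def get_feed(url="https://example.com/feed.json"):
        cached = load_cached()
        if cached and time.time() - cached["fetched_at"] < MAX_AGE:
            return cached["body"]                      # no network traffic at all
        if cached:
            threading.Thread(target=refresh, args=(url,), daemon=True).start()
            return cached["body"]                      # show stale data now, refresh quietly
        refresh(url)                                   # first run: nothing cached yet
        return load_cached()["body"]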