Back in the day (old Unix), the sync call would return right away and the kernel would sync in the background -- unless a background sync was already in progress, in which case sync would block until the first one finished. That's why you would type two syncs in a row. The third sync was thrown in just for luck.
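The old ritual still works today, though on modern kernels each call waits for the writeback itself, so the repetition is purely ceremonial. A minimal sketch of the idiom:

```shell
# Classic idiom: historically the second sync blocked until the first
# (background) sync finished; the third was pure superstition.
sync && sync && sync && echo "flushed"
```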
> The only reason telcos talk about heavy users is that they want to engage in price discrimination and they know it confuses people who are used to dealing with commodities whose dominant cost is the unit cost rather than ones whose dominant cost is the fixed cost of building a distribution system.
I'd say it's reasonable to charge heavy users more. Firstly, the cost is not totally fixed for the ISP: higher usage drives investment in its own infrastructure (routers, transit, etc.). Secondly, it's arguable that heavy users derive greater utility from the service so won't object to higher prices.
> Firstly, the cost is not totally fixed for the ISP: higher usage drives investment in its own infrastructure (routers, transit, etc.).
You want to find out how small a portion of the total cost that actually is? Require the ILECs to lease out the physical wire from the customer premises to the central office, plus space in the central office for the lessee's terminating equipment; prohibit the last-mile provider from sharing ownership with a backhaul provider; and then let the likes of Level 3 and Verizon compete with each other to sell connectivity from your local central office to the wider internet.
> Secondly, it's arguable that heavy users derive greater utility from the service so won't object to higher prices.
Yeah, you can. The functionality is called "services". I'm not sure how auto-versioning would work with git, though. I have packages building from SVN, and OBS updates the spec file automagically to set the version to the SVN revision.
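As a rough sketch (the service names and parameters here are from memory, so check them against the OBS source-service documentation), a `_service` file can use `tar_scm` to embed version information at checkout time. With git there is no monotonic revision number like SVN's, so people usually fall back to something like a commit timestamp plus short hash:

```xml
<services>
  <service name="tar_scm">
    <param name="scm">git</param>
    <!-- placeholder URL -->
    <param name="url">https://example.com/project.git</param>
    <!-- %ct = commit timestamp, %h = short hash; with SVN you'd use the revision instead -->
    <param name="versionformat">%ct.%h</param>
  </service>
  <service name="recompress">
    <param name="compression">xz</param>
    <param name="file">*.tar</param>
  </service>
  <service name="set_version"/>
</services>
```

The `set_version` service is what rewrites the Version: field in the spec file to match the generated tarball.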
In both cases, it depends on the nature of your work and the kind of client. If your work varies, you may want to highlight the areas of your portfolio most relevant to a specific pitch. Similarly, a quote is rarely as simple as "you want a website/design/some sysadmin work, that'll be $price". You need to evaluate the task in order to give a sensible idea of your costs.
No, you could still have one cached copy shared by everyone. The SSL termination happens before the user's request reaches the caching server, so as far as the cache is concerned it is a regular HTTP request. The only problem is that you cannot have generic caches living closer to the end user; the cache has to be controlled by whoever controls the SSL termination.
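As a sketch of that arrangement (the hostnames, paths, and cache zone name are placeholders), an nginx instance can terminate TLS and cache the decrypted requests, with the origin behind it speaking plain HTTP:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=edge:10m;

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        proxy_cache edge;                        # one shared cache entry per URL
        proxy_cache_valid 200 10m;               # keep successful responses for 10 minutes
        proxy_pass http://origin.internal:8080;  # plain HTTP behind the TLS terminator
    }
}
```

Because decryption happens before the cache lookup, every user's request for the same URL hits the same cached object.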
cURL doesn't ship a CA bundle any more; it's the job of your OS to provide one. As I understand it, all tools that provide SSL support will fail safe (refuse to connect) if there are no root CAs on your system.
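You can see the fail-safe behaviour directly by handing curl an empty CA bundle with the real `--cacert` flag (the URL here is just an example host):

```shell
# With no usable CA certificates, curl refuses to complete the TLS
# handshake rather than connecting insecurely.
curl --silent --cacert /dev/null https://example.com/ || echo "refused: no trusted CAs"
```

On a normally configured system you'd instead point `--cacert` at your distribution's bundle, e.g. /etc/ssl/certs/ca-certificates.crt on Debian-style systems.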
It might be worth trying again: in theory, you should get correct CDN endpoints whatever happens. I suppose there might be an exception if a CDN has edge nodes inside your ISP, though. There's a bit more detail at https://developers.google.com/speed/public-dns/faq#cdn