There's a fundamental disconnect between what the ISPs are saying and what the users are saying.
Being in a fairly rural area, my options are currently limited to Clearwire, a pre-WiMAX service that advertises 1.5 Mbit/s for around $40/month.
What you actually get is something of a "soft cap". Download more than a few GB a day, and you get a phone call threatening to cut you off (and forcing you to pay the contract ETF). So what they are actually selling you is a rolling-capped service, while what you thought you were buying--what they were advertising, in fact--was throughput-based service. And, FYI, this isn't published anywhere, not even in the fine print: just an extremely vague AUP that says something like "Don't use the network too much."
This is on top of the packet shaping and interfering with torrent traffic and (somewhat legitimate) issues like transfer power and crosstalk.
The statement that "If they really had to guarantee bandwidth it would cost way too much" is totally beside the point. What's happening is that consumers are being sold a service with advertising that is false and misleading. I might buy the argument that "up to" speeds are fair enough for a normal person to interpret, but when you throw packet shaping and soft caps into the mix, "up to" advertising is flat-out lying. It's fraud.
SLAs aren't necessarily 100% guarantees. It's not unreasonable to expect a broadband provider to deliver advertised speeds some reasonable percentage of the time - say "100% of advertised speed for 95% of the month, and below 75% of advertised speed for no more than two hours a month" - or some such SLA. I'd be willing to pay a bit more for those assurances.
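To make that concrete, here's a rough Python sketch of how such a check could work against periodic speed samples; the advertised rate and sample interval are my assumptions, and the 95% / 75% / two-hour thresholds are just the hypothetical numbers above, not anything a provider actually offers.

    # Rough sketch of checking the hypothetical SLA above against periodic
    # throughput samples. The advertised rate and sample interval are assumed
    # for illustration; the 95% / 75% / two-hour thresholds are from the text.

    ADVERTISED_MBPS = 6.0        # assumed advertised downstream rate
    SAMPLE_INTERVAL_MIN = 5      # assumed measurement interval, in minutes

    def sla_met(samples_mbps):
        """samples_mbps: one throughput reading every SAMPLE_INTERVAL_MIN."""
        total = len(samples_mbps)
        at_full_speed = sum(1 for s in samples_mbps if s >= ADVERTISED_MBPS)
        badly_degraded = sum(1 for s in samples_mbps if s < 0.75 * ADVERTISED_MBPS)

        full_speed_fraction = at_full_speed / total
        degraded_minutes = badly_degraded * SAMPLE_INTERVAL_MIN

        # Met the advertised speed 95% of the month, and spent no more than
        # two hours below 75% of it.
        return full_speed_fraction >= 0.95 and degraded_minutes <= 120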
Suggesting that broadband providers should be held accountable, at least to some degree, for their advertised speeds doesn't cut it with me.
The thing about SLAs - as they are conventionally offered in the mainstream - is that they're largely meaningless in terms of the provider's liability if they fail to honour the commitment.
Not only is the burden of proof on the user to show that the provider did not meet the committed information rate at a particular time, but even then you don't really get to collect anything.
Forget residential/consumer-grade broadband connections; even the business-class, "dedicated" connections like point-to-point leased circuits (e.g. T1s) that typically do come with SLAs don't give you much bang. A typical SLA about T1 uptime, for instance, will offer to credit the user by prorating their monthly rate proportionally to any downtime.
OK, so, I'm paying $400/mo for a T1 and it goes down for a whole day. 400 / 30 = $13.33. So, I get about $13 back. If I've got an office of 10 people whom I'm paying an average of $16/hr to do Internet-dependent work, then, conservatively, I just paid $1,280 for 10 people to sit on their butts for the day, not to mention lost business and other arguable opportunity costs, etc, etc. In any event, it's obvious that $13 doesn't cover the business impact of the outage, even in a commonsensical sort of way.
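Just to put the arithmetic in one place, here's a quick Python sketch using the numbers above; the eight-hour working day is my assumption.

    # Prorated SLA credit vs. rough business cost of a one-day T1 outage,
    # using the figures from the comment above. The eight-hour working day
    # lost to the outage is an assumption.

    monthly_rate = 400.00                        # $/month for the T1
    days_in_month = 30
    credit = monthly_rate / days_in_month        # prorated credit: ~$13.33

    staff = 10
    hourly_rate = 16.00                          # average $/hr per person
    hours_idle = 8                               # assumed working day lost
    lost_productivity = staff * hourly_rate * hours_idle   # $1,280

    print(f"SLA credit for the day:  ${credit:,.2f}")
    print(f"Lost productivity alone: ${lost_productivity:,.2f}")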
No provider would take the risk of crediting you the entire rate - or even half - for a day's outage or something like that.
Besides, if you're dealing with anyone but the telco and/or cable MSO that actually owns and operates most of the facilities involved, the issues involved in circuit repair are so phenomenally out of the control of the ISP that the exposure would just be insane. And if you are dealing with the actual megacorp, their repair processes are so bureaucratised and slow that they wouldn't take the hit on the MTTR (Mean Time To Resolution) either.
The goal of an SLA is to ensure that the service is up most of the time and can provide a given bandwidth most of the time. If you really want high uptime you need 2 or more independent connections. T1 + cable works just fine. It's not about preventing downtime so much as reducing the window for a double failure.
PS: You can get an SLA that covers downtime with a per-incident penalty; the ISP really will hop to fix the problem then, but it costs even more.
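For what it's worth, here's a rough Python sketch of the double-failure math; the per-link availability figures are assumptions, not numbers any provider actually commits to.

    # Why two independent links shrink the window for a double failure.
    # Both per-link availability figures below are assumed for illustration.

    t1_availability = 0.999      # assumed: ~43 minutes of downtime a month
    cable_availability = 0.995   # assumed: ~3.6 hours of downtime a month

    # If the failures are truly independent (different facilities, different
    # last mile), the service is only dead when both links are down at once.
    both_down = (1 - t1_availability) * (1 - cable_availability)
    combined_availability = 1 - both_down

    print(f"Chance both are down at once: {both_down:.6%}")             # 0.000500%
    print(f"Combined availability:        {combined_availability:.4%}")  # 99.9995%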
Oversubscription is an interesting question that always came up in my ISP days. It's also at the heart of a great deal of economic and engineering endeavour in telecommunications in general; after all, the T1 circuit was developed as a means of time-division multiplexing interoffice trunk lines onto a digital facility. Oversubscription and contention are at the heart of everything. I think the statement that it "results in greater efficiency and lower overall costs" doesn't really do it justice; it doesn't just increase efficiency somewhat or lower costs somewhat. There's simply no way an ISP can offer any number of megabits/s to a residential user at the kind of price point that makes it affordable without gargantuan oversubscription.
I think this statement is a little misleading, though:
"This may sound egregious, but it’s really not. It’s a reflection of typical usage patterns on broadband networks."
I don't think it's so much usage patterns as it is the nature of the traffic - specifically, its bursty nature. I remember when we were running a small local ISP here in Georgia with about 180 DSL customers on 1.5 down/256 up or 3.0 down/384 up connections. The resulting peak usage coming out of the back of the broadband aggregation router was about 5 Mbit/s. It has to do with the statistical dampening that comes from the fact that packets - even in what looks to the user like a relatively continuous data stream - just don't come flooding in all at once. The closest you could get to that is if every user were literally running a huge download that maxed out their pipe, and even then there's a lot more dampening than meets the eye. Besides, that's just not the way people use their Internet connections. Surfing over to this web page and looking at it for a few seconds, clicking on another link, etc. all leave gigantic gaps of time in which other people loading other web pages get served, and that depresses the overall numbers.
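To put a number on the oversubscription those figures imply, here's a quick Python sketch; the even split between the two plans is my assumption.

    # Effective oversubscription implied by the numbers above: ~180 DSL
    # customers on 1.5 or 3.0 Mbit/s plans, peaking at about 5 Mbit/s of
    # aggregate traffic. The even split between plans is assumed.

    customers_15 = 90                # assumed number on 1.5 Mbit/s down
    customers_30 = 90                # assumed number on 3.0 Mbit/s down
    peak_aggregate_mbps = 5.0        # observed peak at the aggregation router

    sold_mbps = customers_15 * 1.5 + customers_30 * 3.0   # 405 Mbit/s "sold"
    ratio = sold_mbps / peak_aggregate_mbps               # ~81:1

    print(f"Downstream capacity sold:   {sold_mbps:.0f} Mbit/s")
    print(f"Actual peak usage:          {peak_aggregate_mbps:.0f} Mbit/s")
    print(f"Effective oversubscription: {ratio:.0f}:1")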
But the nature of the traffic really does matter, especially as applications get more diverse. There are other economic factors at play in the oversubscription game.
Take, for example, one of the reasons VoIP is such a huge headache for traditional IP network operators. It's not because it takes up a lot of bandwidth; the canonical clear-channel, completely uncompressed codec is ITU-T G.711u/A, which is basically the packetised form of the quantised PCM data in a synchronous DS0 TDM channel; 64 kbit/s (8 kB/s) per call. OK, OK, if you add Layer 2/link layer framing overhead it becomes more like 80 kbit/s if Ethernet is the encapsulation, and perhaps a little more with the encapsulation layers that go into typical aggregation and service delivery infrastructure for ADSL, which usually involves ATM/AAL5 for transport, L2TP tunnels inside it, and PPPoE sessions inside that.
The real issue is the packet-per-second throughput. Normal data packets for application-level protocols like HTTP are full to the brim with payload (meaning the smallest MTU in the path, which on Ethernet is 1500 bytes, minus headers), which is comparatively efficient. But you can't buffer voice for long; it introduces way too much delay for comfortable conversation once you factor in the actual network latency as well. Humans won't put up with much more than 100 ms round trip time. So you pretty much have to buffer just a tiny bit of audio and ship it right off. Most VoIP infrastructure uses a packetisation interval of 10 to 30 ms, and 20 ms is canonical, more or less regardless of codec; every media packet contains 20 ms of audio. That means 50 packets per second. In each direction. So, about 100 PPS just for one conversation. Times several hundred for a somewhat busy subscriber-facing voice edge.
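Here's the per-call arithmetic as a quick Python sketch, assuming plain G.711 over Ethernet at 20 ms packetisation; the ADSL encapsulation layers mentioned above would add a bit more on top.

    # Per-call packet rate and wire bandwidth for G.711 over Ethernet at the
    # canonical 20 ms packetisation. Header sizes: RTP 12 + UDP 8 + IPv4 20 +
    # Ethernet header/FCS 18 bytes.

    codec_rate_bps = 64_000          # G.711: 8,000 samples/s x 8 bits
    packetisation_ms = 20

    payload_bytes = codec_rate_bps / 8 * packetisation_ms / 1000   # 160 bytes
    header_bytes = 12 + 8 + 20 + 18                                # 58 bytes

    pps_per_direction = 1000 / packetisation_ms                    # 50 pps
    pps_per_call = 2 * pps_per_direction                           # 100 pps

    wire_bps = (payload_bytes + header_bytes) * 8 * pps_per_direction
    print(f"Packets per second per call: {pps_per_call:.0f}")
    print(f"Wire rate per direction:     {wire_bps / 1000:.1f} kbit/s")  # ~87 kbit/s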
Every packet requires a certain amount of overhead for a router (or firewall, or any other Layer 3 device) to process; at the very least, the header must be inspected and some sort of forwarding decision must be made by consulting the routing table, not to mention any applicable ACLs, firewall rules, stateful NAT connection tracking, and other lookups. While a lot of higher-end routers have ASIC-assisted hardware forwarding planes and CAM for doing a lot of this stuff, it's still a substantial additional CPU burden for every stream. It's the same reason BGP has different (slower) convergence properties than a fast IGP like OSPF or EIGRP: Internet core routers couldn't handle the volume of updates required to maintain a 250,000+ route table with the latter protocols.
The PPS requirements of VoIP have forced considerable backplane and CPU upgrades as routers strain to handle the same amount of bandwidth split into many times more packets. A router that can handle a clear-channel DS3's worth of conventional data traffic (~45 Mbit/s) is very likely to fall over handling 45 Mbit/s of VoIP calls without some serious horsepower investment. Hardware firewalls are even worse. And, of course, all this has serious QoS implications, since a media stream held up by the buffering and processing of a large packet - say, HTTP - at the aggregate level is going to suffer jitter (arrival of media packets at an inconsistent temporal delta) and other things that make users cringe and whine about static.
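As a rough illustration of the difference, here's a short Python comparison of the packet rates a router sees for 45 Mbit/s of bulk data versus 45 Mbit/s of G.711 calls, reusing the ~218-byte per-packet wire size from the sketch above.

    # Packets per second for 45 Mbit/s of full-size data packets vs. the same
    # bandwidth carried as small G.711 media packets (~218 bytes on the wire,
    # from the earlier sketch). Rough numbers, ignoring L2 overhead on the
    # data side.

    link_bps = 45_000_000

    data_packet_bytes = 1500         # full-size data packets
    voip_packet_bytes = 218          # 160 B payload + 58 B of headers

    data_pps = link_bps / (data_packet_bytes * 8)   # ~3,750 pps
    voip_pps = link_bps / (voip_packet_bytes * 8)   # ~25,800 pps

    print(f"Bulk data: {data_pps:,.0f} packets/s")
    print(f"VoIP:      {voip_pps:,.0f} packets/s")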
Anyway, the point of this rant is that as applications diversify and more things converge in this world, more factors have to be taken into account than just bandwidth. It's one of the reasons the various broadband operators have such an ambivalent attitude toward VoIP and video that isn't generated from their own gear (i.e. comes through their IP upstream, which they pay for by peak usage) and provisioned via dedicated facilities.
Oversubscription based on expected use is no excuse for routinely not providing what was promised. It's the ISP's job to measure actual patterns and make sure enough capacity is there. How would these ISPs feel if their banks told them "sorry, we've loaned out the money in your account and don't have the liquidity to cover your withdrawal"?