Basically it is a counter-strategy to the net neutrality people. While net neutrality is usually argued to be a proven way to get the best possible outcome for the entire market as a whole, zero rating counters this by appealing to the individual greed of the small-minded ("But I like free YouTube now more than your lofty it's-gonna-be-better-for-all-in-the-end future utopia!").
While I strongly detest it, using this strategy in this context is a stroke of genius. The base strategy is already generally proven to work in all target demographics, but here it's applied in a way in which the modern, urban, don't-need-to-own-stuff-cause-sharing-economy-and-streaming-exists metropolitans, who are traditionally rather opposed to old-school big-corp power grabs, actually get something immediately valuable out of the deal, which additionally boosts its effectiveness. A big fraction of the people who would otherwise take part in the movement to advance the net neutrality cause are now placated by endless Spotify and Netflix on their phones.
Moreover, congestion isn't the only cost to account for. Take the total capital and maintenance cost of the network over its useful life, in dollars, and divide by the total number of bits sent during that useful life. That produces a cost per bit that seems quite reasonable to apply to customers based on how many bits they send.
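As a back-of-the-envelope sketch of that division (every figure below is a made-up assumption chosen only to show the arithmetic, not a real network's cost structure):

```python
# Illustrative cost-per-bit calculation; all inputs are placeholder assumptions.
capex_and_maintenance_usd = 500_000_000  # total cost over the network's useful life
useful_life_years = 20
subscribers = 1_000_000
avg_rate_bps = 1_000_000                 # average sustained throughput per subscriber

seconds = useful_life_years * 365 * 24 * 3600
total_bits = subscribers * avg_rate_bps * seconds

cost_per_bit = capex_and_maintenance_usd / total_bits
cost_per_gb = cost_per_bit * 8e9         # 1 GB = 8 * 10^9 bits
print(f"cost per GB: ${cost_per_gb:.4f}")
```

With these assumed inputs the amortized cost comes out to well under a cent per GB; the interesting argument is over whether retail per-GB prices bear any relation to a number like this.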
The problem with your roads analogy is that there's more than one road to take. Not all roads are tolled.
And the cure isn't getting rid of data caps, the cure is increasing competition on a local level.
You have effectively unlimited RF bandwidth assuming ever smaller cell locations. Bluetooth is the equivalent of a new cell under every 10 meters. And that's with a tiny slice of the RF spectrum.
But how does that affect the anticompetitive argument? I don't think it does. It still holds if you're the only game in town.
That would mean a 10Mbit connection == 3.24TB/month
That's definitely a cap that is directly related to line speed. But these 150GB caps are purely because the internet companies are also content companies - and their content channels are zero rated.
And with a 10Mbit connection, it only takes 33 hours to exceed their arbitrary cap over the whole month.
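Both figures check out; a quick sanity check of the arithmetic:

```python
# Sanity-check the 10 Mbit/s figures above: TB transferred in a 30-day month
# at full line rate, and hours needed to hit a 150 GB cap.
line_bps = 10e6                 # 10 Mbit/s
month_seconds = 30 * 24 * 3600

tb_per_month = line_bps * month_seconds / 8 / 1e12  # bits -> bytes -> TB
hours_to_cap = 150e9 * 8 / line_bps / 3600          # 150 GB, expressed in bits

print(f"{tb_per_month:.2f} TB/month")  # 3.24
print(f"{hours_to_cap:.1f} hours")     # 33.3
```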
The best system would be to charge based on bandwidth usage, and raise and lower prices based on congestion per tower. But most ISPs don’t have the billing capability to support that, and bandwidth caps sort of approximate it.
On the other hand, something like this might incentivize developers to take advantage of lower pricing during periods of lower congestion. Mobile OSes tend to provide the user a choice between downloading updates anytime or only when on WiFi, but if the pricing structure made this useful, there’s no reason they couldn’t download updates overnight too.
And we did have developers take advantage of that; for example, there was a popular fork of eMule that had extra scheduling features, so it could automatically run just in that period.
I try very hard to be civil on HN, but everyone defending artificial data caps is an idiot (in the original sense of the Greek word: a private person who takes no part in public life).
Not that hard I guess.
Data caps are the only reason you have affordable consumer internet, because they allow a significant amount of oversubscription, which matches the mostly idle, bursty behavior of consumers.
You can get leased lines from ISPs with no caps easily. You just won't like the real price that comes with guaranteeing a customer that kind of bandwidth.
Back in the day, we used to take a dim view of companies that amassed horizontal or vertical monopolies, and we broke them up. Well, they're like the replicators from Stargate SG-1 and have reassembled into something even uglier.
And yet when we discuss this, you fall back on caps and justification by contention ratio... But why are their services zero rated, eh? Of course we know why - it's to kill the opposing services like Netflix, Hulu, or others... except for their services!
Logically, the actual lines should belong to the people (the government). The money we (the royal we) have paid has been above and beyond what a REMC-style internet build-out would have cost. I don't want the US or state governments being ISPs themselves, but muni-style arrangements have worked out well in the past, except where Comcast and AT&T have lobbied them out of existence.
I mean, there is also the fact that TV and broadband run on entirely separate infrastructure that just happens to share a physical wire. On FiOS, for example, TV is a separate wavelength of laser and is a broadcast signal, not an IP signal.
The whole concept of separation seems like a dodge. Is there any real advantage to using separate equipment for TV vs. using the same hardware resources to provide more data capacity and using IP multicast? Or is that what they are doing, and just doing it on separate hardware in order to claim separation?
No, a huge part of the cost is the active equipment, which also requires maintenance and upgrades. Have you priced aggregation routers with 10G ports recently?
> Is there any real advantage to using separate equipment for TV vs. using the same hardware resources to provide more data capacity and using IP multicast?
Yes, injecting the TV signal as RF over glass is vastly simpler, cheaper, and more reliable than using IP multicast. There is also the fact that it pre-dates streaming services, and is an outgrowth of the analog CATV distribution systems that were in place for decades before streaming Internet TV.
At something like $1000/port, for 100 Mbps symmetric broadband customers that's $10/customer divided by the oversubscription ratio, per hardware refresh cycle (say three years). The percentage of the customer's monthly bill rounds to zero.
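Written out (the 10:1 oversubscription ratio and the three-year refresh cycle are the assumptions here):

```python
# Per-customer share of a $1000 aggregation port for 100 Mbps subscribers.
# The oversubscription ratio and refresh cycle are illustrative assumptions.
port_cost_usd = 1000
port_gbps = 10
customer_mbps = 100
oversub_ratio = 10
refresh_months = 36

customers_per_port = (port_gbps * 1000 // customer_mbps) * oversub_ratio  # 1000
monthly_cost = port_cost_usd / customers_per_port / refresh_months
print(f"${monthly_cost:.3f} per customer per month")
```

At roughly three cents per customer per month under these assumptions, it does indeed round to zero against a typical bill.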
And the maintenance cost may be significant, but it also shouldn't really be proportional to the traffic level.
> Yes, injecting the TV signal as RF over glass is vastly simpler, cheaper, and more reliable than using IP multicast.
It just seems like the opposite of how everything else is going. Virtualization, SDN, etc. Get everything running on the same kind of hardware so everything is fungible, failed hardware can be swapped out with universal replacements, idle resources can be reallocated to other workloads without changing hardware, etc.
> There is also the fact that it pre-dates streaming services, and is an outgrowth of the analog CATV distribution systems that were in place for decades before streaming Internet TV.
This would be easier to believe from Comcast than Verizon.
If you're only building enough active equipment to support 100 mbps service, then your point about sharing capacity on the fiber itself with RF TV is entirely irrelevant--you've got nowhere near enough active equipment to exceed the capacity of even a single pair of wavelengths.
Compare that to something like Comcast's Gigabit Pro. Each user gets their own port on a 10G aggregation router. The CPE is a Juniper ACX-2100. And you need a beefy router after that to do traffic shaping. (Due to the way the traffic policing works, Comcast will happily deliver 25 MB at 10 Gbps before the policer kicks in, which completely overwhelms any consumer router you put after the Juniper.) That's thousands of dollars per user in active equipment--and you're still using just a single pair of wavelengths.
> It just seems like the opposite of how everything else is going. Virtualization, SDN, etc. Get everything running on the same kind of hardware so everything is fungible, failed hardware can be swapped out with universal replacements, idle resources can be reallocated to other workloads without changing hardware, etc.
There is nothing cheap, simple, or reliable about SDN. Flexible yes, but not those other things. It takes enormous amounts of hardware to, e.g. build an SDN switch with VPP that can compare to a $1,000 Juniper 10G switch.
RF over Glass is super easy and cheap. You get a feed from a head-end, and you inject it into a passive fiber network. There is no IP, no routing, nothing active in the delivery network until you get to the CPE.
Either way, it doesn't mean they aren't competing. The legacy TV business hates Internet video, even though they know full well they are finished: either they'll go bust or they'll need to switch to Internet video themselves.
Early on (1998-1999) ISPs axed their newsgroup servers. Nobody paid them for those CDNs. But soon after, CDN companies paid ISPs to install their content.
Money. It's all about money.
Traffic out to the internet involves way more than your last mile connection.
The natural data cap is the line speed of the ISP's peering links to upstream providers divided by the number of active customers. But you don't need monthly limits for that, it's true all the time inherently.
It's also kind of a scam on the ISP side because content providers are generally happy to peer with ISPs at no cost to the ISP, so it's not as if there's a non-artificial bottleneck there.
I have no doubt this is a reason, but most ISPs also oversell their bandwidth. If every customer with a 10Mbit line is only using it 10% of the time, your upstream lines only 'need' to be 10% of 10Mbit * the number of customers (plus whatever margin for spikes).
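The sizing math for that oversell (the customer count, duty cycle, and spike margin below are illustrative assumptions):

```python
# Upstream capacity needed under oversubscription; all numbers are illustrative.
customers = 10_000
line_mbps = 10
duty_cycle = 0.10        # each line busy ~10% of the time
spike_margin = 1.5       # headroom for simultaneous peaks

upstream_gbps = customers * line_mbps * duty_cycle * spike_margin / 1000
print(f"{upstream_gbps:.0f} Gbit/s upstream")  # vs. 100 Gbit/s with no oversell
```

So 15 Gbit/s of upstream covers what would naively require 100 Gbit/s, which is the entire economic case for contention ratios.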
What I do care about is fraudulent business practices. I expect a minimum speed alongside a maximum speed. And if their contention ratio is 10:1 then I expect 1Mbit-10Mbit for that connection.
But no, the content/internet media companies play insane games, zero rating their own stuff, enforcing arbitrary 'kill Netflix' limits, and doing evil layer 7 filtering. They need to be broken up: lines owned by the state, service over those lines sold by whoever provides it (like an ISP or a content company), and customers leasing the lines, like how power works.
These megagiant media corps should have never owned the physical connections. At all.
As a result of this, people who use more total data generally incur higher infrastructure costs to the provider than those who use less, even if those two groups have the exact same link speed.
The only thing that can be viewed as network management is bandwidth limiting based on the current network load. I.e. your realtime bandwidth limit.
Limiting monthly data has nothing to do with network management IMHO and is simply a method to fleece users and push them to pay more for less-capped plans.
This is why the trick is to deliver data asap to low usage users, and throttle high usage users to limit the amount of total load that they take up. This is of course a problem if they lie and advertise "unlimited" data.
Sure it does, but not limiting it will make the network die even faster if it can't handle said simultaneous usage.
Data caps on the other hand can't help with that at all. I.e. users with such caps connecting at the same time will produce the same negative effect as ones without caps.
The only proper solution is to build up the network, if it's plagued by congestion all the time. Once it's congested - it's already too late and you can only make it degrade gracefully until it's built up.
Telecom companies could work around this by making it cheaper to use data in off hours, but that makes it complicated. Most electricity providers don't have that for consumers either, even though it's actually super expensive to temporarily shut down many forms of electricity generation (and they can't keep them running either, because that would fry the network).
I mean, seriously? Just because it is possible for someone to buy a 1 TB package and then use it all on the first day of each month ... doesn't mean that most average people don't have relatively predictable usage patterns and relatively constant use throughout the month, does it?
Packets have, at best, a nominal per-mbps cost to transmit.
Like, what? What do you consider a "real" cost of producing a kWh?
Packets do not have a meaningful marginal cost. Metering is a crap model.
OK, so in the case of wind and solar power, what is burned (not literally) is hydrogen (being fused into helium), which no one is paying for, so there is zero cost for what is being burned to produce each kWh of wind or solar power. So, if you are being billed per kWh for wind or solar power, is that fleecing the customer then?
Also, for coal, the coal per kWh costs about 3 cents, you pay at least three times that for electricity, usually more--is that also fleecing the customer?
> Packets do not have a meaningful marginal cost. Metering is a crap model.
What do you mean by "meaningful"? I mean, either they have a marginal cost, or they don't right?
At any time other than a load emergency, yes. The ideal pricing mechanism for a 100% renewable grid would be to pay a flat rate based on your service size (100 amp, 200 amp, etc.) and then get paid by the grid for dropping your load when the grid is overloaded. That would give the power company the incentive to prevent overloads by providing adequate capacity.
> Also, for coal, the coal per kWh costs about 3 cents, you pay at least three times that for electricity, usually more--is that also fleecing the customer?
That price doesn't include externalities. It can make sense to use pricing to discourage usage of resources with otherwise unpriced negative externalities.
> What do you mean by "meaningful"? I mean, either they have a marginal cost, or they don't right?
Network equipment may use more electricity when forwarding packets than not. But when this is in the amount of pennies per TB of data, the accounting overhead of keeping track of it would exceed the measured dollar amount.
So, do I understand you correctly that someone who uses their electric oven (so, ~ 3 kW) once a day for half an hour but uses no other electricity should pay the same total as someone who runs some 3 kW machine 24/7, unless the grid is overloaded, in which case the latter pays less?
> That price doesn't include externalities. It can make sense to use pricing to discourage usage of resources with otherwise unpriced negative externalities.
Except that's not what is happening here?
> Network equipment may use more electricity when forwarding packets than not. But when this is in the amount of pennies per TB of data, the accounting overhead of keeping track of it would exceed the measured dollar amount.
Except that that's a minor part of the marginal cost of moving a packet?
Yes, exactly. Because they both want to come home and use 3 kW for a half an hour at the same time, so the grid needs 6 kW of capacity just then. But if it has 6 kW just then with a source that generates 24/7 the same amount at no marginal cost, the person who is also using that amount the whole day isn't costing the power company anything more. The capacity was needed for that half hour regardless of what you do the rest of the day, so why should what you do for the rest of the day change what you pay?
> unless the grid is overloaded, in which case the latter pays less?
No, you get compensated for not using what you paid for, not for not using what you normally use. If they're both not using then they both get paid. Obviously this gives the power company a good incentive not to have an overloaded grid.
Now in practice what's going to happen is that the person who uses 3 kW for a half hour a day isn't going to buy 30 amp service for that, they're going to get 10 amp service and a 5 kWh battery that can run their oven or clothes drier from time to time and then slowly charge back up over 24 hours.
The analogy for broadband is you pay for 50Mbps service but it will burst up to 500Mbps for a few seconds at a time, so the person who doesn't use 24/7 has de facto 500Mbps instantaneous service for the price of 50Mbps.
And there are some complexities in power generation that don't apply to broadband, e.g. grid-scale batteries and the fact that solar doesn't actually generate 24/7. But even that doesn't really change that much, especially if you tie the service level to time of day, e.g. you can order 10 amp from 4PM to 10PM and 100 amp from 10PM to 4PM and that costs a lot less than 100 amp 24/7, but it means you're not allowed to use more than 10 amps during peak hours.
And it would make sense to sell broadband that way as well, with Mbps rather than amperes. Why can't I get 10Mbps during peak hours and gigabit the rest of the time? And then someone who needs 20Mbps during peak hours can get that and pay twice as much as me, even if I'm using a thousand times more data than them during off peak times.
> Except that's not what is happening here?
It's not administratively what's happening with power generation, but it's de facto what's happening, and so there isn't a lot of cause to change it just because we're doing something sensible and calling it something else.
By contrast, there is no comparable negative externality caused by sending data.
> Except that that's a minor part of the marginal cost of moving a packet?
That's the entire marginal cost of moving a packet. The infrastructure cost is a fixed cost. You don't get it back if you don't send the packet and the network goes underutilized.
Except that's not how it works on the large scale. If you have 1000 consumers coming home and using their 3 kW ovens for half an hour "at the same time", you very reliably get nowhere near 3 MW of load on the grid. For one, people do not in fact come home at the exact same time, and then, the exact switching intervals of the oven thermostats are essentially random, so the actual simultaneous load on the grid is pretty close to the thermal loss of all ovens combined, rather than the total peak power that they could consume if synchronized.
> No, you get compensated for not using what you paid for, not for not using what you normally use. If they're both not using then they both get paid. Obviously this gives the power company a good incentive not to have an overloaded grid.
So, the power company calculates the maximum energy that you could use given the service that you are buying, makes up some per-kWh price, multiplies the two, and then bills you that amount minus the kWh that you didn't use, and that gives the power company a good incentive not to have an overloaded grid?
Could you provide an example where the total of that bill wouldn't be identical to just billing the same per-kWh price multiplied with the kWh used?
> Now in practice what's going to happen is that the person who uses 3 kW for a half hour a day isn't going to buy 30 amp service for that, they're going to get 10 amp service and a 5 kWh battery that can run their oven or clothes drier from time to time and then slowly charge back up over 24 hours.
Well, that depends on the pricing? If it costs the same to buy 30 amp service and use 10 kWh per day as it costs to buy 10 amp service and use 10 kWh per day, why would I buy a 5 kWh battery?
> The analogy for broadband is you pay for 50Mbps service but it will burst up to 500Mbps for a few seconds at a time, so the person who doesn't use 24/7 has de facto 500Mbps instantaneous service for the price of 50Mbps.
So, the analogy to a hard rate limit where you have to operate your own energy storage is a setup where you can burst the service? I am not sure I get what you are trying to say.
> And it would make sense to sell broadband that way as well, with Mbps rather than amperes. Why can't I get 10Mbps during peak hours and gigabit the rest of the time? And then someone who needs 20Mbps during peak hours can get that and pay twice as much as me, even if I'm using a thousand times more data than them during off peak times.
Well, sure, that certainly could make some sense, but is completely orthogonal to whether billing happens based on Wh/bytes or W/bps? You also could have a time-dependent per-kWh or per-GB price, couldn't you? (And in the case of electricity, you usually can.)
> That's the entire marginal cost of moving a packet. The infrastructure cost is a fixed cost. You don't get it back if you don't send the packet and the network goes underutilized.
Well, infrastructure costs are fixed costs, yes, but infrastructure costs aren't fixed, as any given infrastructure has limited capacity. So, the marginal cost of one additional packet while you are under the capacity limit is a few femtocents (whatever the additional energy cost of moving that packet is, I haven't done the calculation), but the marginal cost of the next packet after you reach the capacity limit of some component is a few, possibly many, thousand bucks. Thus, the marginal cost that has to be used for calculating customer prices is the average over the range of realistic infrastructure expansion, which is quite a bit higher than the energy cost for an additional packet on existing infrastructure.
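That step-function shape can be sketched as a toy function (the dollar figures are placeholders, not real costs):

```python
# Toy model of the marginal cost described above: near-zero while the link is
# below capacity, then one large lumpy upgrade cost once capacity is exhausted.
def marginal_cost_usd(current_load_bps, capacity_bps,
                      energy_cost_per_bit=1e-17,   # placeholder, "femtocents"
                      upgrade_cost=50_000):        # placeholder upgrade price
    if current_load_bps < capacity_bps:
        return energy_cost_per_bit
    return upgrade_cost  # the next bit forces a capacity expansion

print(marginal_cost_usd(5e9, 10e9))   # effectively zero below capacity
print(marginal_cost_usd(10e9, 10e9))  # the full upgrade cost at capacity
```

The pricing question in this thread is essentially how to average that step function over customers and time.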
Also, wouldn't it follow from your argument that internet connectivity should be provided for free to most people? I mean, once you have the network built, it's a fixed cost, and providing service to an additional customer has nearly zero cost in most cases, so shouldn't people be able to sign up for free?
Which is why they don't actually need 30 amps of capacity, only something less than that and a battery to smooth out the load. Or for the power company to sell "10 amps capacity" as a five-minute average that allows for temporary surges above that capacity as long as the average stays below it.
You're also focusing on one thing and ignoring the many other things that are synchronized. Grids often have problems on hot days. Why? Because everyone wants to run their A/C at the same time. It'd be fine if half the people would do it at night or three days from now, but the grid needs that much capacity now. If you don't have it right now, you can't make it back up over the rest of the month no matter how much people don't use.
> So, the power company calculates the maximum energy that you could use given the service that you are buying, makes up some per-kWh price, multiplies the two, and then bills you that amount minus the kWh that you didn't use, and that gives the power company a good incentive not to have an overloaded grid?
> Could you provide an example where the total of that bill wouldn't be identical to just billing the same per-kWh price multiplied with the kWh used?
You only get money back for not using when the grid is overloaded, which if the power company is doing their job should rarely if ever happen. And if it does it's because most people are using their full capacity, so the people who stop get paid to stop and the people who weren't to begin with get paid to not start.
> Well, that depends on the pricing? If it costs the same to buy 30 amp service and use 10 kWh per day as it costs to buy 10 amp service and use 10 kWh per day, why would I buy a 5 kWh battery?
Which is why per-kWh pricing is problematic. It's better for the grid for you to buy the battery and reduce your peak usage, and $-per-kWh gives you no incentive to do that. So then the grid needs more capacity because you consume more at peak times, which is more expensive for everyone. Meanwhile the cost of that would go on the person who is productively using a lot of zero-marginal-cost power during off-peak hours.
> So, the analogy to a hard rate limit where you have to operate your own energy storage is a setup where you can burst the service? I am not sure I get what you are trying to say.
Suppose you have 10 amp service, but a 5 kWh battery that will charge whenever you're using less than 10 amps. Then if you've been using only 3 amps for 20 hours, your battery is charged and you can run your 3 kW oven for half an hour even though it uses more than 10 amps.
Suppose you have 50 Mbps service, but the rate cap is a rolling average over 60 seconds. Then if you haven't downloaded anything in 60 seconds and you go to download a 100 MB file, you get the whole thing at link speed (e.g. 1000 Mbps) because your 60 second average is still below 50 Mbps so the cap hasn't kicked in yet.
I suppose the closer analogy is that you can do exactly the same for both, i.e. you get a 50 Mbps rolling average or a 10 amp rolling average and you can go over for a few seconds at a time as long as the short-term average stays below the rated capacity.
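A toy simulation of that rolling-average cap, using the parameters from the example (50 Mbps cap, 60 s window, 1000 Mbps link speed; one tick = one second):

```python
# Rolling-average rate cap: send at link speed while the trailing 60 s average
# is under the cap, throttle down to the cap once it isn't.
from collections import deque

def download_time_s(file_mb=100, cap_mbps=50, link_mbps=1000, window_s=60):
    history = deque()                 # (tick, megabits sent that tick)
    t, remaining_mbit = 0, file_mb * 8
    while remaining_mbit > 0:
        while history and history[0][0] <= t - window_s:
            history.popleft()         # drop usage older than the window
        avg = sum(mb for _, mb in history) / window_s
        rate = link_mbps if avg < cap_mbps else cap_mbps
        sent = min(rate, remaining_mbit)
        history.append((t, sent))
        remaining_mbit -= sent
        t += 1
    return t

print(download_time_s())  # an idle user gets the 100 MB file in one tick
```

A user who downloads continuously quickly drives the trailing average to the cap and is throttled to 50 Mbps, while the bursty user effectively gets link speed, which is the point of the scheme.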
> Well, sure, that certainly could make some sense, but is completely orthogonal to whether billing happens based on Wh/bytes or W/bps? You also could have a time-dependent per-kWh or per-GB price, couldn't you? (And in the case of electricity, you usually can.)
Only the ISPs aren't doing that. And even then, it would mean that the off-peak price per-kWh or per-GB should be zero, which it isn't.
But even that doesn't quite capture it, because it's not just about peak hours in a given day, it's about peak hours in a given year. The grid has more load at 7PM in the fall than at noon in the fall, but it has more load at noon on the hottest day of the year than at 7PM in the fall.
If you want to run your A/C on the hottest day of the year or live stream the Superbowl then the network needs that much capacity on that day, which means it has that much capacity on every day. But none of the other days require capacity to be expanded further to support them.
> Well, infrastructure costs are fixed costs, yes, but infrastructure costs aren't fixed, as any given infrastructure has limited capacity. So, the marginal cost of one additional packet while you are under the capacity limit is a few femtocents (whatever the additional energy cost of moving that packet is, I haven't done the calculation), but the marginal cost of the next packet after you reach the capacity limit of some component is a few, possibly many, thousand bucks. Thus, the marginal cost that has to be used for calculating customer prices is the average over the range of realistic infrastructure expansion, which is quite a bit higher than the energy cost for an additional packet on existing infrastructure.
What you're really getting at is that marginal cost when the network is below capacity is much different than marginal cost when the network is at capacity. But that's the point -- almost all of the time, the network is below capacity and there is immaterial marginal cost.
> Also, wouldn't it follow from your argument that internet connectivity should be provided for free to most people? I mean, once you have the network built, it's a fixed cost, and providing service to an additional customer has nearly zero cost in most cases, so shouldn't people be able to sign up for free?
This is what we generally do with roads. The main cost is the fixed cost and the marginal cost is trivial, so the government pays for them from taxes and everyone can use them for free. This is also why toll roads are very stupid -- you pay the same fixed cost and then discourage use of an already paid for resource with a trivial marginal cost, by charging a non-trivial marginal cost.
But the fixed costs still have to be paid somehow. Having the first customer pay ten billion dollars and all the others pay nothing doesn't exactly work in practice. It's also not how the cost structure works, because a huge part of the infrastructure cost is per-customer -- if you want to service twice as many customers you have to wire twice as many streets.
On the other hand, having everyone who wants the same capacity pay the same monthly fee works pretty well. It still discourages people from signing up compared to the public roads model, and it would be better if it didn't, but probably doesn't discourage very many because the value of having internet service is much greater than the cost.
By contrast, charging high prices per byte at anything other than the all-time peak consumption period does in practice discourage productive use for no benefit.
On the one hand, you suggest that billing should be based on peak power, because it is supposedly better for the customer to run their own battery, but then it would also be OK for the power company to offer bursting with billing based on peak 5 minute average power, which essentially means that the power company is selling you use of their battery. But then, if they are selling you use of their battery, why only for 5 minutes? What is wrong with them selling use of their battery for all your storage needs? In particular when the power company often has options at their disposal that are far cheaper than actual batteries to achieve the same goal, such as simple averaging over a large number of customers, or moving loads that aren't time critical to otherwise low-load times, which don't need any actual storage at all.
You seem to be focused on incentivizing everyone to flatten their own load curve as a supposed method for optimizing the utilization of capacity. While it is obviously true that if everyone had a flat load curve, global utilization would be maximized, you seem to be simply ignoring the fact that there are some real-world requirements behind people's load curves, so the actual power or bandwidth consumption of each individual cannot be flattened. At best, people could pay for some sort of storage (in the case of electricity) that transforms their actual load curve into a flat curve on the grid. But at the same time, it is perfectly possible to flatten the global load curve without flattening every individual load curve. In fact, you can flatten the global load curve by making some individual load curves more bursty: if you sell electricity cheaper at night, that incentivizes some people to install storage heating, creating an artificial load peak at night--and making the global load curve flatter.
The ability to increase utilization by combining different load curves is one of the great possibilities of many users sharing a common infrastructure, so why would we possibly want to disincentivize that?!
It's two different use cases. It's confusing that we keep talking about the same load, so let's change that.
On the one hand, you have an electric oven. It uses 30 amps, but only while it's heating, which it only does for 30 seconds out of every 90 even when it's in use. So the average is 10 amps. The power company sells you 10 amp service, your average over a few minutes is indeed 10 amps and the power company can average this out with other customers with similar loads, so you don't need to pay more for 30 amp service even though you're periodically using 30 amps for a few seconds at a time.
On the other hand, you have a data center. The servers use 30 amps at all times. It has a UPS, so you've already paid for an inverter, and now you buy twice as many batteries as you would have. Then you order 5 amp peak and 40 amp off-peak service, which is much cheaper than 30 amp all the time service, and run the datacenter on the batteries during peak usage hours. The power company couldn't have averaged this over other customers because the average customer wants to use more at peak hours than off peak, by definition.
And the method by which they get loads to move to other times is by not limiting usage at other times at all, which is what's happening here -- when you order 5 amp service you get 5 amps during peak hours, but during off peak hours you can use however much you like, including to charge your batteries to reduce the peak consumption rate you'd otherwise need to buy.
> While it is obviously true that if everyone had a flat load curve, global utilization would be maximized, you seem to be simply ignoring the fact that there are some real-world requirements behind people's load curves, so the actual power or bandwidth consumption of each individual cannot be flattened.
"Making the curve flatter is more efficient" is true even if you don't ever actually completely flatten it. If people use a total of 500 GWh during peak hours and 200 GWh during off-peak hours, and you can get that to 450 GWh and 300 GWh, you've reduced the required generation capacity by 10%.
There are a lot of pricing structures that can achieve this. A lot of them are really just the same thing using different terms. One of the better ones is to price based on "maximum average consumption rate during peak hours", i.e. the most of the resource you're entitled to use over a few minute period during peak hours -- the amount you'd need to air-condition your place during peak hours on the hottest day of the year. Because that's how much capacity they need to build, and then have all the rest of the time too.
But the single thing that all of the good pricing strategies will have in common when there is no marginal cost and you're just amortizing the fixed cost of building capacity is that off peak usage is unmetered. Which is specifically the thing that "GB per month" caps get completely wrong.
Well, it may well be one of the better ones, depending on what you are comparing it to, I guess, but I would think it has a pretty serious flaw: Anyone who has the option to not use any power during peak hours would get to extract value from the grid without contributing anything to its construction or maintenance, so those people or businesses who are in the unlucky position of needing power at certain fixed (peak load) times would end up paying the full cost of building and maintaining the grid that everyone is using. That doesn't exactly sound like a fair way of sharing the costs, does it?
While the cost of building the grid is determined by peak load, it's not like building a grid that could only deliver the base load would be free to build. So, while it makes sense to have a price structure so that users who cause the load to exceed the base load pay for the additional costs of building a higher power grid, I don't see why it is appropriate to make them pay the total cost of building the grid, let alone how that leads to optimal use of resources.
Now, your argument might be that anyone is free to just buy a bunch of batteries for their peak-load needs and thus avoid paying for their electricity (up to the point when everyone does so, so the global load curve becomes flat, thus it's peak load time 24/7, and everyone will start paying based on the energy taken from the grid after all ...)--which is true, of course. But what's the point of that? What is the problem with the grid having storage built-in and billing you for the use of that capability, rather than incentivizing/forcing you to install your own storage? In particular when some forms of storage can be cheaper than batteries, but completely unrealistic for personal use (such as pumped-storage hydroelectric).
And all of that is completely ignoring that a flat load curve isn't actually desirable anymore with renewable sources, as the generation capacity is just not capable of providing that (without massive overprovisioning). In particular the A/C example that was a big problem for traditional power grids has the nice property that you need A/C roughly proportionally to the intensity of sunshine--and luckily, the generation capacity of solar panels is also roughly proportional to the intensity of sunshine. And not only does this mean that additional power from that source is available exactly when it is needed for that purpose--it doesn't even need a stronger distribution system in the case of roof-top solar, as the power is generated right where it is needed, so no need to move it long distances.
> But the single thing that all of the good pricing strategies will have in common when there is no marginal cost and you're just amortizing the fixed cost of building capacity is that off peak usage is unmetered. Which is specifically the thing that "GB per month" caps get completely wrong.
Well, yes, "GB per month" caps don't create any incentive towards a particular shape of the customer's load curve on any scale smaller than a month, that's maybe not quite optimal. But off-peak usage being unmetered is a pretty bad solution as well in that regard, as that obviously doesn't amortize anything, and creates an incentive to avoid certain useful investments because of a free-rider problem (you won't start a business that needs bandwidth at peak times if the fact that you have to subsidize other users of the infrastructure makes your business unprofitable).
Also, while that approach doesn't solve that problem, that doesn't make it useless. Most consumers as a matter of fact have a relatively flat load curve on the scale of a month (both for electricity and for bandwidth), and a cap influences the amplitude of the curve, and thus does influence infrastructure costs. And realistically, at least most bandwidth uses of consumers have little opportunity for incentivizing a flatter load curve. Much of the consumer traffic is videos and streams on demand, which users generally don't want to watch at 3 am, and also don't want to pre-order to watch the next day. Of course, it would still be nice to have the option of buying cheap traffic during the night for uses that can profit from that, which obviously would also benefit the ISPs to some degree.
Data caps are, without any question, fleecing. Not a need-driven idea.
Or to express it slightly differently: if there weren’t any data to move you wouldn’t have to build any infrastructure and you wouldn’t have any costs. If you want to move data you have to build the infrastructure which cost money, which means that it costs money to move data.
Not every single bit adds to the overall cost, but in general, more data sent means more spending.
It's perfectly reasonable to charge for data. Though that doesn't mean Comcast isn't charging for anti-competitive reasons.
Given the current obscene prices that ISPs are already charging, they have enough money to upgrade the infrastructure. They just need to line their pockets less and actually spend money on upgrades.
I doubt whether the entire uplink of my first ISP was 1 Gb/s. Many colleges' uplinks sure weren't.
Somebody's out there making Internet infrastructure better some of the time.
Too bad GF fizzled out, but it had a positive effect, disrupting slumbering monopolists.
Yeah, Verizon jumped to gigabit to match Google’s marketing. It was a non-event. (After about 100 mbps the shittiness of the modern web stack makes further upgrades pointless unless you’re a huge downloader. I’ve got gigabit FiOS load-balanced with 2-gigabit Comcast fiber at my house. I literally cannot tell the difference from the 150 mbps FiOS I had before, except in speed tests.)
Google Fiber really has nothing to do with these increases. That’s total make believe. There is an upgrade treadmill for DOCSIS that the industry follows, just like for CPU fabrication. And like improvements in fabrication technology, staying on the treadmill requires massive continual investment (faster versions of DOCSIS only work if you keep building fiber closer to the house and decreasing the amount of coax and the degree of fan-out). ISPs were spending that money before Google Fiber, and continue to spend that money now Google Fiber is on life support.
You sound like McAdam. First, he conceitedly claimed that no one needs or will need gigabit so customers can get lost expecting it. And then he "magically" changed his tune. Which is totally the indirect effect of Google Fiber, which affected Comcast and AT&T which affected Verizon. McAdam had to swallow his conceit, shut up and deploy gigabit.
All this "no one needs" bunk falls apart very quickly even at the slightest sight of looming competition. Imagine what could have been if there was real competition around.
And my totally uninformed understanding is that the ongoing maintenance costs, and the costs of expanded service are both pretty minimal.
And ongoing maintenance and support costs are very high. Even if you don't trust Verizon's SEC disclosures (showing 5% or less in operating profit for wireline), look at the financial statements for something like Chattanooga's EPB. The vast majority of revenue goes to ongoing costs, before you even get to paying down the initial build-out.
And on top of that, most simply prefer to pocket them instead of investing into the network, with "no one needs it" excuse. Something they would never have done with healthy competition.
It's analogous to product dumping.
(Did you want to say that it's a symptom of the disease of any price structure under which users get charged extra on top of their subscription fee, according to some function of their data use?)
It'd actually be kind of neat if bloatware were discouraged this way, since currently there is no cost to bloatware as long as whatever it is remains within the acceptably performant range.
This hardly works for the connection which is actually providing your WiFi, or if you want to try to use cellular exclusively.
It's more the trust-building that's going on between the giants of industry. It's the 1920s all over again, just with different tech.
Taking "necessary" out of "necessary evil" leaves just evil.
Why would the providers need to buy statistics on their customers? They already know who is listening to and watching what content.
There is no way to opt out of that practice as it is included in most contracts by default.
T-Mobile has a setting where you can turn it off and on.
Setting up these methods is also not trivial for companies.
Setting up adaptive streaming based on bandwidth available has been a solved problem since RealVideo in the late 90s. All providers do it now. Anyone can set this up with WireCast.
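For illustration, the core of bandwidth-adaptive streaming is just picking the highest rendition that fits the measured throughput, with some safety headroom. This is a toy sketch; the bitrate ladder and headroom factor are made-up values, not any real provider's configuration:

```python
# Toy adaptive-bitrate selection: choose the highest rendition that
# fits within the measured throughput, keeping 20% headroom.
RENDITIONS_KBPS = [350, 700, 1500, 3000, 6000]  # hypothetical ladder

def pick_rendition(measured_kbps: float, headroom: float = 0.8) -> int:
    budget = measured_kbps * headroom
    fitting = [r for r in RENDITIONS_KBPS if r <= budget]
    # Fall back to the lowest rung if nothing fits.
    return fitting[-1] if fitting else RENDITIONS_KBPS[0]

print(pick_rendition(2000))  # 1500
print(pick_rendition(100))   # 350
```

Real players (HLS, DASH) re-run this decision every few seconds per segment, but the selection logic is essentially this simple.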
The selection process is opaque (there is no information available about it), and looking at the participating services, most providers probably don't bother, whether due to cost, restrictions, or administrative reasons.
Every streaming provider in the US took advantage of it.
Streaming your own library is excluded in the contract, so it is not music, only what T-Mobile says is music. I call that censorship.
You can stream your own audio through Apple Music through the Music Match (?) Service.
This is all really a moot point now that T-Mobile only sells unlimited plans, and if you really want to opt out of compressed video you can pay $10 more.
Wireless is different: no matter how much money a wireless carrier is willing to spend, there is a finite limit on how much data can be transferred through the air and only a limited amount of spectrum that is good for cellular service.
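That finite limit is quantified by the Shannon capacity theorem, C = B * log2(1 + SNR). A rough single-channel illustration with assumed figures (a 20 MHz channel at 20 dB SNR; not any real carrier's allocation):

```python
import math

# Shannon capacity bound for one radio channel: C = B * log2(1 + SNR).
# Figures are illustrative, not tied to any real carrier's spectrum.
bandwidth_hz = 20e6           # a 20 MHz channel
snr_linear = 10 ** (20 / 10)  # 20 dB signal-to-noise ratio

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"{capacity_bps / 1e6:.0f} Mbit/s")  # ~133 Mbit/s ceiling
```

No amount of money raises that ceiling for a given slice of spectrum; carriers can only buy more spectrum, improve SNR, or shrink cells so the same slice is reused more often.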
The video/music producer can opt out, be treated like all other data, and watch their customers prefer providers who went through the trouble of filling out some paperwork; or they can be on the same level playing field as their competitors.
"In comparison, the countries with prevalent zero rating practices from their wireless carriers consistently saw data prices increase. This makes sense; carriers have an incentive to raise the costs of exploring alternatives in order to make their preferred, zero-rated choice of content more attractive."
Did you even RTFA, or did you come here to defend T-Mobile?
Facebook tried to build a walled-garden development platform on top of TCP/IP, bundle it with free low-bandwidth internet necessities (consisting of Facebook and several other deliberately non-Google properties), and ram it down the throats of a billion impoverished and technologically unfamiliar new internet users in India.
At that time in 2013, FB's market cap was $100B, GOOG's was $282B. FB had 1.1B users with ARPU of $1.63; GOOG had 1.3B users with ARPU of $10.09. Looking to avoid a market correction, FB aimed to add 1B new users from India, simultaneously prevent them from becoming new Google users, and disguised the scheme as philanthropy. It didn't work.
FB was then forced to move fast and break: data access control policies, respect for their users, expectation of privacy, and lots of pesky regulations. By distributing user data for free as an investment in the future, then buying the competition to control the demand, FB cemented their position as a gatekeeper of the online commons and dictator of social media.
Insights gained from this freely available, or loosely guarded user data helped explode demand for the user manipulation as a service offering FB had newly monopolized.
Gloves now off, FB leveraged this position and achieved hockey-stick profit growth after just one US congressional election season, a midterm year at that.
FB's Q4 2018 ARPU ~ $7.37, MAU 2.23B.
Step 2: Cripple the regulatory powers of the government by convincing everyone that government is the problem.
Step 3: Soak your now-captive customers in a regulatory-friendly / competition-free environment.
Additionally, in any given location most of the spectrum is unused but reserved, smarter devices could take advantage of this, but current regulations prohibit this from becoming a reality. With certain portions of cellular spectrum in particular this could be hugely advantageous to consumers, at the cost of governments who wouldn't be able to make money selling that spectrum.
There are alternative solutions - like opening up wide swaths of spectrum to smarter devices that are able to share that spectrum broadly.
Government enforced monopolies present the worst of both corporate and regulatory worlds - a disaster for the consumer. Shouldn't we care more about the market for services the consumers get than the market for spectrum that providers buy?
That is what happens for every natural resource. Your gov establishes rules for land and ground water just like that. Actually, even for IP and patents...
I don't see how you get from "gov regulated process of resource allocation" to "gov enforced monopolies". Yes, we should care about the value generated to consumers. The theory is that companies use exactly this money from consumers to get the spectrum.
> There are alternative solutions - like opening up wide swaths of spectrum to smarter devices that are able to share that spectrum broadly.
That only works with strong regulation (I'd guess you don't want gov involved in details?) or in situations where cooperation will always win out. Otherwise you'd usually end up with some kind of Tragedy of the Commons / Prisoner's Dilemma like situation. Like every neighbor here upping their WLAN power...
Patents and copyrights are an intentional breaking of this market to encourage innovation - explicitly with the intention of creating a temporary monopoly. One can hope that government doesn't go out with the goal of creating a monopoly on cell service.
A good regulation would be one which ensures competition, for instance by ensuring infrastructure in rural areas can be used by multiple companies instead of making entry into the market expensive.
As it is now we have "innovators" who have access to private bands, and should know better (the cellular telecoms), threatening to trash the ISM bands with 5G coverage. It's a travesty since the result will be cellular Big Co basically squashing your Wi-Fi and the smaller players who can't afford private spectrum.
For instance, T-Mobile's "Music Freedom" zero-rates a whole bunch of music streaming services. In the US, where data is expensive, that could easily cause someone to pick T-Mobile over one of the other providers, if they listen to a lot of music. With "Music Freedom" I can get by on the smallest data plan. Without it, I'd have to step up, maybe even to unlimited.
In a country where data is cheap, something like "Music Freedom" wouldn't make much difference, and so I could see fewer ISPs bothering with the technical and administrative overhead of having such a program.
Also keep in mind that zero rating is itself an explicit admission that network capacity and overhead aren't factors in the price. The whole deal is that the wireless company lets customers on those plans use unlimited data at no extra charge as long as it's for zero rated content. Allowing customers at that same price point to use that same unlimited data without arbitrary restrictions would ultimately be just as profitable.
The same thing could be accomplished without zero-rating if they made their data plans something like N gig of high speed data and unlimited low speed data, and then the phones provided some way for applications to specify whether a given connection should use high speed data or low speed data.
For some applications that would be easier for the developer. Music streaming apps could always ask for a low speed connection. But what about file download apps? Whether they should use my limited high speed data or my unlimited low speed data is probably not something the app can determine on its own, because it depends on how much of a hurry I'm in. So a lot of apps would probably need to expose this decision making to the user.
That wouldn't have any net neutrality issues, but I bet it would be a UI nightmare.
(Actually a lot of carriers do kind of do that. A lot of unlimited plans are N gigs of high speed plus unlimited low speed, except rather than trying to optimize which is used on a per connection basis it simply uses high speed until you've run out and then uses low speed for the rest of the month).
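As a sketch, such a per-connection choice might look like the following. The `DataClass` enum and `open_connection` factory are entirely hypothetical; no carrier or mobile OS exposes an API like this today:

```python
from enum import Enum

class DataClass(Enum):
    HIGH_SPEED = "high"  # counts against the N GB bucket
    LOW_SPEED = "low"    # unmetered but throttled

def open_connection(url: str,
                    data_class: DataClass = DataClass.HIGH_SPEED) -> dict:
    """Hypothetical connection factory: tags the connection so the
    modem could route it over the metered or unmetered bucket."""
    return {"url": url, "class": data_class.value}

# A music streaming app would always pick the unmetered bucket:
conn = open_connection("https://stream.example.com", DataClass.LOW_SPEED)
print(conn["class"])  # low
```

For a file download app the right choice depends on how much of a hurry the user is in, which is exactly why this decision would tend to leak into the UI.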
No, that's not what that means. You can easily take special means to get direct peering to zero-rated partners or install CDNs so that zero-rated traffic doesn't have any impact on peering links. Congestion at the last mile is only a small part of what an ISP deals with.
Note, that in spite of my opposition to net neutrality, I strongly support using traditional antitrust mechanisms to prevent firms' excessive market power and last mile monopolies from leading to unfair prices.
"Using applications participating in the DPP is two up to 77-fold cheaper compared to using applications via general data volume. This strong incentive for customers to use participating applications infringes on the rights of consumers to use applications of their choice and the rights of CAPs to provide services independent of the origin of their users."
Up to 77 times more for neutral data than data to their partners - sounds scary, but how do they get that figure? Well, they take MEO's smallest month-to-month contract which offers 250 minutes + SMS + 500 MB of data + free in-network calls, divide the amount of data by the total cost, and compare this with the nominally 10 GB Smart Net addon which only offers data to the included services. That is, they're treating the phone and SMS part of the all-internet plan as though it costs nothing when it definitely does not.
I think the two-fold cheaper figure on the lower end is wrong too - on paper the non-neutral Smart Net is more like three times cheaper than comparable prepaid data, at least for people who make good use of the Smart Net data limit. Bear in mind that as I understand it each Smart Net plan is for access to one of Messaging, Social, Video, or Music, which includes a handful of the main sites in that category. I imagine most people will have usage that is relatively low and spread across multiple categories plus some outside-of-package usage, in which case a general internet access plan will work out cheaper.
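To see how dividing a bundle's full price by its data allowance inflates the ratio, here's the shape of the calculation with made-up prices (the real MEO tariffs aren't given here; these euro figures are purely illustrative):

```python
# Hypothetical figures to illustrate the methodology critique above;
# these are NOT MEO's real prices.
bundle_price_eur = 12.0  # 250 min + SMS + 500 MB bundle
bundle_data_gb = 0.5
addon_price_eur = 6.0    # 10 GB zero-rated Smart Net addon
addon_data_gb = 10.0

# The study's method: charge the ENTIRE bundle price to the data.
naive_ratio = (bundle_price_eur / bundle_data_gb) / \
              (addon_price_eur / addon_data_gb)
print(naive_ratio)  # 40.0

# If, say, 9 EUR of the bundle actually pays for calls and SMS,
# the data-only comparison shrinks dramatically:
data_only_price = bundle_price_eur - 9.0
fair_ratio = (data_only_price / bundle_data_gb) / \
             (addon_price_eur / addon_data_gb)
print(fair_ratio)   # 10.0
```

The headline multiple is thus very sensitive to how much of the bundle price you attribute to voice and SMS, which the study treats as free.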
I filed a complaint with the California PUC and they told me they don't regulate Comcast/Xfinity because they are not a landline telephone service.
It seems horrible that there might not be any regulator that is keeping Comcast/Xfinity from harming consumers like myself.
It seems like the problem as you're phrasing it is that they are deliberately harming consumers -- it costs them the same to deliver the content whether via Netflix/Hulu or their own services, but they are choosing to overcharge customers to force them into their own service.
An alternative way of looking at it is that their own service is offering a subsidy -- bumping you up implicitly to a higher tier in exchange for using their service.
In the latter case, it's not really abusive; they're offering an enhancement; the "actual" price is the next tier up, but you're getting a "discount" in exchange for preferring their own video service.
Which is to say that this is awful behavior, and nothing is more frustrating than the fact that (the lack of) competition makes it so that it is nearly impossible to switch to another internet provider that does not engage in this kind of monkey business. The other problem is that I think many customers prefer this model -- they're willing to make the sacrifice and get the subsidized package rather than shell out the extra money. I'm certainly guilty of this; avoiding paying minuscule subscription fees for websites even though I hate the ads. The value of my hatred is still lower than the cost of the subscription.
I feel you are out of the loop with the whole net neutrality fiasco (https://www.eff.org/issues/net-neutrality)
The current FCC may be too industry friendly, but it will not be that way forever. We, as consumers, need to continue to fight against these monopoly providers that are harming us.
Given the repeal of federal net neutrality regs and California putting its net neutrality rules on hold pending the result of a federal lawsuit, under what active law is this illegal?
The really hard work of the regulator is to ensure that telcos don't abuse their access to spectrum and other resources. IMHO the best way to do this is to force telcos to give each other access to their infrastructure at a reasonable price. For example, when margins* are high enough, new virtual telcos must be able to start up with minimal infrastructure.
The consumer side does not need a lot of regulation. If there is enough competition, consumers will vote with their money.
Edit: Changed "prices" to "margins".
In the case of wireless, where there is no natural monopoly, the best approach is to simply open up lots of spectrum and ensure there are a sufficient number of competing carriers. There is a ton of spectrum being wasted for things like television that could be used for broadband instead.
On my last visit to the US, I used both "Straight Talk" and "Trac Phone". These are virtual telcos that use Verizon/AT&T and T Mobile infrastructure. New customers can choose a SIM card before activating the service.
Surely the prices paid by these virtual telcos are set by the regulator.
Here in South Africa, the third and fourth mobile operators roam on the first and second mobile networks. AFAIK, the regulator forced these roaming agreements upon the operators. (Here I can get 50 GB of prepaid data for only R500 = $38. Much cheaper than the US!)
So there are ways to set prices.
That was the basis of a lot of early 20th century regulatory thinking. That was thoroughly discredited because it turns out that government price controls are worse than duplicated infrastructure.
Governments only have an incentive to keep prices low, which is not the same as the economically efficient price. From the government's point of view, it's better to have cheap 3G networks forever than to have prices that lead to investment into 4G and 5G networks. Low prices kill innovation, which is why all the innovation is happening in iPhones and Macs and Surface tablets and not cut-rate Acer and HP products.
MVNO pricing is not set by regulators. The FCC’s last major foray into rate regulation, in connection with DSL loop unbundling, ended up killing DSL as a viable competitor to cable because wholesale prices were set so low there was no incentive to upgrade DSL networks.
I'm sorry, I don't buy it.
Especially when you consider that in many western countries people are experiencing congested lines at peak hours. Duplicated infrastructure in these cases is not a negative but a positive.
So without duplicated infrastructure, we would be experiencing even more congestion, so I don't get what you're not buying.
Ideally wholesale prices should be set before the infrastructure is built. Then telcos know what they get themselves into.
In South Africa, DSL was never unbundled. But it's still in decline due to competition from 4G and fiber.
IMHO a lot of innovative business models are built on zero rating. Based on what the article says about prices and your thoughts on innovation, zero rating may actually be a good thing.
Repairs can be either spread evenly or by amount of infrastructure owned.
When I travel to Japan or Germany, it makes me cry to come back to American public roads and trains. But not privatized American broadband. We're doing something wrong, and it isn't giving the government too little involvement in the construction of infrastructure.
While there were some improvements, customers in general do not get better service today. I think overall using a train got more expensive, to the degree that you should think about using a plane instead.
Infrastructure investments were low before and are low right now.
I believe this is what we in America had with dial-up and DSL up until about 2005, when DSL was reclassified as a title I information service. I seem to recall having a hell of a lot more ISP choices/competition in those days.
I have 300 mbit link for 50 USD and no caps.
I have full lte and 2TB downlink cap for 25 USD.
But we do have strong competition, and it seems it was never about capacity. It's about who offers more. It's obvious that they can afford this since nobody is losing money, and all of this on a very small and marginal market where ISPs' purchasing power is small.
You are being bullshitted to, dear Americans.
Then they only show two years, 2015 vs 2016, where there is a slight increase of 2% in prices, without error bars. Then there is this:
> we repeated our analysis for zero-rating offers introduced in 2016 or 2017. However, initially this did not produce statistically significant results in any category. Closer examination of the data however revealed Finland to be an outlier market, in which the replacement of a single offer significantly changed the prices in almost all data volume baskets. This is likely due to the fact that unlimited data plans, which do not sensibly admit a price per gigabyte calculation, are prevalent in Finland. We therefore repeated the analysis but excluded Finland from our dataset. In this case, we found a statistically significant result (p=0.04) for markets in which zero-rating was introduced between 2015 and 2016. These markets showed a 1% price increase between 2016 and 2017, whereas markets without zero-rating in both cases showed a 10% price decrease.
I think they are stretching it with p=0.04 on a cherry-picked sample of n=30, and present a rather peculiar conclusion about their data. Zero rating is obviously marketing garbage, but I am very unconvinced that it is the reason why ISPs are not investing in their networks.
(It also took 10 minutes to download their 5MB pdf - talk about bad internet ;) )
I agree. They even seem to admit it's a pretty suspect analysis:
"However, since zero-rating offers are now prevalent in almost all EU countries this analysis cannot be extended into the future."
Is this expensive?
I pay $50 for 1GB in Canada.