There are both technical and business reasons for the gap between upstream and downstream speeds.
There are a few technical reasons. First, there is simply less spectrum available in the upstream: 42 MHz is the standard upstream/downstream split, 85 MHz is a mid-split, 200 MHz is a high split, while the downstream extends up to 1 GHz or even 1.2 GHz. Second, upstream modulation runs at 64 QAM (~27 Mbps per channel) while downstream modulation runs at 256 QAM (~38 Mbps per channel). The upstream is also more susceptible to interference and impairments. So the technology is inherently imbalanced.
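The per-channel numbers above fall out of the modulation order and symbol rate. Here's a rough sketch of that arithmetic in Python; the symbol rates and the ~12% framing/FEC overhead are illustrative assumptions, not exact DOCSIS figures:

```python
import math

def qam_throughput_mbps(qam_order, symbol_rate_msym, overhead=0.12):
    """Approximate channel throughput: symbols/s * bits/symbol, minus overhead."""
    bits_per_symbol = math.log2(qam_order)  # 256-QAM -> 8 bits, 64-QAM -> 6 bits
    return symbol_rate_msym * bits_per_symbol * (1 - overhead)

# Downstream: 256-QAM in a 6 MHz channel (~5.36 Msym/s assumed)
ds = qam_throughput_mbps(256, 5.36)   # roughly 38 Mbps
# Upstream: 64-QAM in a 6.4 MHz channel (~5.12 Msym/s assumed)
us = qam_throughput_mbps(64, 5.12)    # roughly 27 Mbps
```

So even channel-for-channel, an upstream channel carries only about 70% of what a downstream channel does, before you account for there being far fewer of them.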
From a business perspective it is very difficult to cost-justify. Moving the upstream/downstream split and/or upgrading to DOCSIS 3.1 is very expensive. The reality is that the typical customer uses very little upstream bandwidth. It isn't uncommon for me to see a node with 24 or 32 downstream channels at 80% usage while its 4 upstream channels sit at 25% utilization or less. We see a downstream-to-upstream usage ratio of about 15:1. Moving the US/DS split means taking away available downstream frequency, which means less downstream data bandwidth or fewer video channels. It's hard to make a business case to invest in increasing upstream bandwidth when the data says people aren't going to use it.
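To put those channel counts in perspective, here's the raw capacity imbalance on such a node, using the approximate per-channel rates mentioned above (~38 Mbps down, ~27 Mbps up); ballpark figures, not measurements:

```python
# Aggregate node capacity from the channel counts in the comment above.
ds_capacity_mbps = 32 * 38   # 32 downstream channels -> 1216 Mbps
us_capacity_mbps = 4 * 27    # 4 upstream channels   -> 108 Mbps
ratio = ds_capacity_mbps / us_capacity_mbps   # ~11:1 raw capacity imbalance
```

So the plant itself is provisioned at roughly 11:1 before you even look at utilization, which is why a 15:1 usage ratio doesn't pressure anyone to change it.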
Newer technology like DOCSIS 3.1, DOCSIS FDX, and Remote PHY, along with modern plant designs like N+0 or N+1 (which mean fewer customers per node), will bring higher upstream bandwidth, but it will be a while before we see anything close to symmetrical speeds.
Most of their passive and active equipment in areas which have been fully "digitalized" is 3.1 compliant; they just need to finish the whole city, and then they will start marketing the offer. Modems will have to be changed, as well as equipment on the customer's side, which they haven't done yet, so it will take quite some time.
They "promised" it by the end of 2019 (back in 2016); now it seems it will be 2025. By that time the Chinese government and Telekom Srbija will have FTTH to every home.
If they did upgrade all their actives, then 3.1 upstream is likely in the works. DOCSIS has some pretty amazing diagnostics and troubleshooting: I can look at a modem and say "approximately 71 ft from the modem there is a break in the cable causing an RF impairment" or "these 14 modems in the same neighborhood show the same impairment, so there is likely a fault at this specific segment of cable". DOCSIS 3.1 expands on that even more, and as more 3.1 CPEs get deployed, operators are able to leverage that additional data to get the plant ready for 3.1 upstreams. OFDM frequencies are also not backwards compatible with 3.0 CPEs, but a 3.1 CPE can bond both types of channels, so as the ratio of 3.1 CPEs increases, operators can shift more frequencies from legacy modulation to newer 3.1 modulation.
How is 300/15 better than 1000/50?
edit: the ratio is the same, I just haven't powered up my brain yet.
I’m not trying to disagree with this, especially since you are talking about coax, but the new mmWave WiFi installations at places like the Ohio State stadium have shown the opposite. The vast majority of their traffic was photo and video uploads to social media. Consumers want upload; it’s just overshadowed by streaming.
On an unrelated note, I've been pretty disappointed with the WiFi at The Shoe. The first game I tried to use it I couldn't even join the network; the second time I could join but it wasn't usable. It was early in the season though, so they could have still been working out the bugs. I can't even imagine how much goes into supporting a system like that. Trying to get 100k devices on a WiFi network is impressive.
Edit: Although it wouldn't be fair to leave out that it's a EuroDOCSIS 3.0 setup with 16/4 downstream/upstream, with very limited 3.1 being used on a small scale.
IIRC, Australia has limited broadband options; they're expensive, and data is expensive relative to other countries. Also, isn't data transiting outside of AU more expensive than intra-national data?
In the US, there's a 1 GbE variant of Comcast "cable modem" service called Xfinity that often has a 1000 GiB "data plan" (varying by state, some states have unlimited data) that imposes a charge per GiB beyond that. Where I use it, the network is all optical with copper from poles to customer. During power outages, it only lasts about a day... and we now have 3-6 day "public safety" outages in the summer.
Though on second thought, it might help out your ratios.
I wouldn't call it inherent, or even a "technical reason".
Nothing stopped the standard from splitting the frequency bands for upstream and downstream more evenly, or even swapping them. They're simply configured/designed this way because of the business reasons.
"I should also note that since the cable operators' infrastructure was built around sending the same TV signals to all houses, they basically just run one cable to your neighborhood, and then use passive splitters and dumb amplifiers to split that one cable to all the houses in your neighborhood. So you don't have a dedicated coax cable directly from your house to their headend equipment, and they can't change the frequency split on a house-by-house basis."
There are passive devices out in the field that split the upstream and downstream (such as directional couplers). Passive devices are favored over active devices whenever possible because they are much less likely to fail, are less expensive, and require less maintenance.
It’s not about demand, it’s about climbing poles and rolling bucket-truck crews to replace filters and other expensive equipment on a massive scale.
The top comment in the link there is spot-on.
I was involved with the “digital switchover”, moving the traditional analog channels over to the digital QAM channels, potentially freeing those now duplicated analog channels for upload channels. Maybe more upload is technically possible but the work on the poles is going to be expensive.
1000 Mbit/s downstream, 35 Mbit/s upstream is nearly a 30:1 ratio, far more extreme than any DSL profile.
Other interesting info: ADSL is dreadfully slow by today's standards, but VDSL can actually hit some decent speeds. The big problem with both is that attenuation over copper is significant at the distances the lines generally travel, much more so with VDSL due to its higher frequency range. That's why companies like AT&T, with their U-verse brand, bring fiber to the neighborhood to lop off a significant distance of copper. They can increase the distance they serve by bonding two lines together. AT&T now takes a portion of the legacy ADSL spectrum and puts VoIP over it, with the VDSL service having more than half its capacity generally devoted to IPTV bandwidth. A single good-condition line could generally carry 118 Mbps, but they would split that up for TV, leaving only 45 Mbps. Now they are mostly pushing FTTH/FTTP, leaving no option for POTS/copper. That is all good, though, because most of the copper plant is such garbage that it absolutely sucks being a technician. You would be gigged for line runs getting water in them and making service drop out, even if everything tested great when you installed, or even if you turned in a ticket the line techs might kick it back and say everything is fine.
Our national telecom operator, Telekom Srbija, had a monopoly on internet access in most areas due to the fact they own all the phone lines. In cooperation with Huawei and the Chinese and Serbian governments, they made a plan to kill POTS service per line and use ALL of the bandwidth for VDSL, as well as building DSLAMs on street level, to provide very high speeds (100/10 Mbps was the max they achieved before stopping) with short copper lengths. Phones would work over VoIP, so all new modems were VDSL compliant with IAD (integrated access device) "certification" (a SIP client and 2 POTS ports).
In the end the Chinese lost interest and it was all forgotten and buried. The new idea, which is why I say it failed in a good way, is to skip the street-level DSLAM buildout and just do FTTH. And they did, though not with symmetric gigabit speeds, because reasons (bla bla customer demand, costs, etc.). The highest speed is 1000/400 Mbps for $120/mo, which is super expensive compared to what was promised by the Chinese and also compared to general purchasing power in the country.
For example, in Niš, which is the most developed city when considering internet infrastructure and the number of ISPs, you could have 600/400 Mbps for $10/mo with ~300 TV channels on unlimited* TVs. You'd even get a [VoIP] landline.
* all the TVs in your home, basically up to 5 for 99% of users
Then Telekom Srbija got the Chinese money and bought the 10 largest ISPs other than themselves and SBB (the other side of the duopoly), and now there's 200/20 Mbps on FTTH for $30/mo on up to 3 TVs, with 260 channels and a landline that has call-routing problems. Users who sign new contracts are downgraded and pay more; those who don't sign suffer with TV and Internet dying and are told to "sign or sue us".
Ironically, I chose the second option, and won (https://news.ycombinator.com/item?id=21733553), but what I "won" was free contract termination and a refund of fees that were charged despite not being mentioned or required under the TOS and contract at the date of signing (given I never signed anything else giving permission or agreeing to anything new, and they never officially forced new contracts on people, precisely so that people couldn't cancel and run away)...
(GPON, mentioned above, is mostly time-division multiplexing. I have worked for 2 ISPs that use GPON, and we've never sold anyone an asymmetric plan, except for the "free plan" that Google Fiber had. GPON is perfectly happy to be symmetric, but it does have a limited amount of bandwidth shared between multiple subscribers.)
The biggest problem I've found is that the Internet isn't really ready for customers that can download at 10 Gbps. When I set up a 10 Gbps connection for myself, I had a lot of trouble finding anything on the Internet that would send me data that quickly. Even my own servers were AWS instance types that only supported 5 Gbps burst. So that is probably the reason why you aren't seeing consumer ISPs selling you 10 Gbps circuits: even the servers don't have 10 Gbps! A new wave of upgrades is needed for this to become viable; 100 Gbps networking equipment is still pretty expensive.
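Part of why single streams can't fill a pipe like that is the TCP window required: the sender has to keep a full bandwidth-delay product in flight. A quick sketch, with an illustrative cross-country RTT as the assumption:

```python
def bdp_bytes(link_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the link."""
    return link_bps * rtt_s / 8

# 10 Gb/s at an assumed 50 ms RTT:
window = bdp_bytes(10e9, 0.050)   # 62.5 MB of unacknowledged data
```

That's far beyond default socket buffer sizes on most systems, so without tuning (or many parallel streams) a single TCP connection won't come close to 10 Gbps over wide-area paths.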
TCP is dumb and if it can saturate your connection it will increase your latency. I feel like being unable to saturate a connection is a good thing.
There are other measures you can take to avoid saturation, but simply having an oversized pipe is hard to beat.
Fast links are a simple solution to a lot.
To me, I heard "A full 5% of our users want this badly enough to hack around and get this for themselves"
This would be like Comcast saying "Less than 1% of our customers are using 10 gigabit connections, so clearly this isn't a thing people want"... You don't even offer that speed, so of course the user base is low (assuming even top businesses can get those speeds). It's your job to increase the user base.
Yes, I want symmetrical broadband. But I would rather have 200/20 than 10/10. Don't mistake that for "lack of demand" but rather "settling for what the area monopoly provides".
Many ISPs who are also cable companies sell the content that people pirate, so they definitely do care.
In Canada, Rogers and Bell (two major quad-play telcos) are at the forefront of pushing for legislation to crack down harder on piracy by eroding privacy rules that protect consumers' Internet usage.
Just think, who do you know outside of some geeks that bother with torrents anymore? Even most geeks who know how to pirate video and music don’t bother. They just give their $5 to $15 to netflix/hulu/amazon/spotify and don’t bother with the hassle.
The only caveat I’ve heard of is that in some older buildings WaveG offers only non-symmetric 1 Gb down / 100 Mb up, but I have yet to encounter that myself. I didn’t look into it, but I have a feeling that one might actually be DOCSIS instead of real fiber.
There is measurably no demand even if you look at fiber. Most fiber deployments are GPON, which is asymmetrical as well. Symmetrical deployments do exist, but the shared upstream means you can't (in theory) all actually use much of your upload at once. However, since almost nobody uses their upload, nobody notices.
In Austria even fiber uses a very low upload (500/70, 1000/100 are common) because cable also doesn’t have more and there is little customer demand to increase upload.
I personally stopped working on a few P2P communication and personal server projects in the late 90s. Letting people communicate and share files with their friends directly with their own local private server could have been a way to fight back against the FAANGs centralizing the internet. Unfortunately, the limitations from the ISPs made this impossible for most people.
I received a timeout, not a block, when I queried gmail on port 25. If it were blocked, I would have gotten a TCP reset.
Nowhere near all consumer ISPs block port 25, but a lot do.
At the same time, fiber is full duplex so this is much less of an issue.
The difference is that applying a voltage at the transmission side in coax will be noticed by the receiver, (cause electricity goes both ways) whereas shining a light into fiber doesn't get to the detector (because light only goes one way). At least, this is my limited understanding. Would love if anyone more knowledgeable could correct me.
Cable operates in RF bands, and frequencies are denoted in MHz and sometimes GHz (as in the linked answer).
Fiber operates in optical bands, and channels are denoted in nanometers (wavelength): downstream data, downstream video, and upstream data all run on different wavelengths over the same piece of glass, and the light reaches both the transmitting end and the receiving end.
They are typically separated and filtered by optical prisms to ensure the correct wavelength hits the correct receiver (most receivers are wide-band and will accept a wide range of optical wavelengths).
The medium is typically shared at a neighborhood level, in that a single optical strand may be split 16 or more ways to serve multiple houses. Bandwidth on that strand is shared by the houses it serves, typically via TDM (time-division multiplexing).
End users don't typically notice this because the line rates are higher than the maximum bandwidth package sold, e.g. a downstream rate of 2.5 Gb/s and upstream of 1.5 Gb/s, with a maximum package of 1 Gb/s.
Edit: removed incorrect statement regarding gepon/csma
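The oversubscription math above is easy to make concrete. A minimal sketch, assuming the standard GPON downstream rate and a 32-way split (both figures illustrative):

```python
# One GPON downstream (~2.488 Gb/s) shared among all ONTs behind a splitter.
down_gbps = 2.488
splits = 32
fair_share_mbps = down_gbps * 1000 / splits   # per-subscriber share if all max out
```

The fair share comes to under 80 Mbps per subscriber, yet gigabit packages work fine in practice because of statistical multiplexing: subscribers almost never pull their full rate at the same time.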
DOCSIS is somewhat inherently asymmetric due to the nature of the CATV network. Even if the upstream and downstream channels had the same bandwidth, the upstream direction tends to have significantly worse SNR. Historically this was caused by splitters along the way, but the cable modem is also a relatively cheap device installed on customer premises, and thus has inherently lower transmit power and worse SNR than the CMTS. This is the main reason why CMTSes tend(ed) to have more upstream receivers than downstream transmitters.
Simplex fiber, that is, point-to-point, may in fact be able to use the same frequency in both directions, but it's not done in practice, possibly due to the difficulty of directing the outgoing and incoming signals to different pieces of hardware (receiver and transmitter).
GPON networks (typically used by Fios, etc.) have multiple passive splitters on the strand. For example, a common PON splitter would have 1 strand in from the OLT (optical line terminal, essentially the head-end) and 32 strands out. The light received from the OLT is split 32 ways and sent to each downstream port. The devices on the downstream ports (ONU/ONT, optical network terminal) receive this signal and also transmit on the same strand. That transmitted signal must be coupled back into the single upstream port, from every attached ONT. I imagine there is significant difficulty getting the light to traverse the splitter and exit only via the upstream port. So a different wavelength is used so that it doesn't matter if the light is received by the other units, and also so it can be more easily filtered (by wavelength instead of direction).
My knowledge of optics is not sufficient to fully explain why it's easier to filter by wavelength instead of direction.
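One part of this that is easy to quantify is the power cost of the splitter itself: an ideal N-way splitter divides the optical power N ways, so each leg loses at least 10·log10(N) dB (real splitters add excess loss on top):

```python
import math

def split_loss_db(n):
    """Ideal insertion loss of an n-way passive optical splitter, in dB."""
    return 10 * math.log10(n)

loss_32 = split_loss_db(32)   # a bit over 15 dB per leg, before excess loss
```

That ~15 dB is a large chunk of a PON's optical power budget, which is part of why split ratios are capped at 32 or 64 ways.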
Reciprocity. For most optical devices, if x% of light coming in port A comes out port B, then x% of light coming in port B comes out port A. Non-reciprocal materials exist, but they’re unusual and may be rather inefficient and complicated to work with. Wavelength splitters are reciprocal and are relatively easy to construct. In fact they’re cheap enough that people use them for art. (The usual splitter is a piece of dichroic glass.)
The Wikipedia article is so-so.
Therefore it would seem strange to me to claim that a single strand of fiber can do full duplex, as frequency "B" of light could instead be used to send data in the same direction as frequency "A". So really you are just choosing to allocate some part of the overall capacity of the single fiber to return traffic.
A common use is carrying Ethernet over coax when retrofitting analog CCTV cameras to IP cameras, and you don’t want to run new cable.
It is certainly POSSIBLE to do WDM on a PON, but you need separate matched pairs of transceivers for each connection, which kind of defeats the purpose (you now need a mux/demux at each end, which adds more attenuation to the line, as well as more expensive electronics, since you went from needing one GPON port to N GbE ports). As such, wavelength division is typically used over point-to-point links to get more discrete links over a single fiber (or pair).
There are typically 3 wavelengths (frequency division): two for data (downstream and upstream) and one for video (downstream).
The downstream channels (both video and data) are broadcast, not time-divided; the streams are filtered by the ONT. The upstream data channel is time-divided to allow multiple devices to transmit without collision.
DOCSIS 3.1 allows it on paper, yes. But in the real world, bandwidth is a huge constraint, especially if you’re also serving digital cable. No cable ISP in North America today on D3.1 has a network that can support it even if they wanted to.
For example, Comcast has 4 upstream 6 MHz SC-QAM channels (which uses basically none of the advantages offered by 3.1) and 32 downstream, and their video service is 720p across the board and overly compressed. They are constantly trying to free up data channels, migrate to newer codecs, and eventually move to IPTV, but that requires changing hardware both in their network and in the customer’s home, or the customer’s TV goes black, their internet goes out, and you get to hear their rage when you tell them they have to spend a few hundred dollars on new cable boxes/modems so that Comcast can build out its network. Using DOCSIS 3.1 to its full potential is a massive infrastructure upgrade, further complicated by the need to support all of the legacy hardware during a transition.
Add to the mix that upstream channels are far more susceptible to unwanted ingress, most homes are full of amateur RG59 coax cable, and it becomes a support/troubleshooting nightmare as well. When you get into higher upstream speeds, these deficiencies become a much bigger problem.
I have observed speed tests on an uncapped DOCSIS 3.1 modem on the Comcast network getting full bandwidth on all channels, and it was still well below 100 Mbps upstream (around 80 Mbps).
They argue they will be able to have symmetry on 1Gbit on HFC when they complete their DOCSIS 3.1 rollout, but that has been going on for years with no news. Ironically their own FTTH footprint is pretty small so they end up, in most cases, providing better upload speeds when reselling Telefonica infrastructure than with their own.
The most demanding thing that more than, probably, 0.01% of customers do is livestreaming. If your line can actually sustain 10 Mbps (vs. advertised/claimed), you're good. The only other real scenario is someone learning, for the first time, that they have to back their data up, which is really a once-in-a-lifetime use case for fast upload.
What would likely improve the experience of 99%+ of users is lower latency, where fiber has consistently proven to be king (for me at least).
ISPs should be regulated like public utilities.
Cable TV didn't have any upstream signaling once upon a time; it was purely a system of repeating and replicating an analog signal in the downstream direction.
Early scrambling methods for "pay TV" channels could be undone using circuits that were published in electronics magazines.
But upstream signalling predates cable modems; originally the upstream was intended for things such as city public-access stations to feed back to the headend (to then be transmitted back out to customers). Its existence was certainly handy for cable modems.
Where lies the falsehood?
In most cases, downstream frequencies run from 54 MHz up to 1 GHz.
I have symmetrical gigabit fiber at my house. When a new Ubuntu torrent drops I’m able to seed a TB relatively quickly. Bandwidth caps would make it so much less useful and interesting.
The only way to peer for free is if your traffic profile is 50% send and 50% receive. Cable companies offer these asymmetric pipes to their customers to make this balanced traffic profile impossible to achieve, so everyone ends up paying the cable companies big bucks for transit on their network.
There are better exceptions, like Altice who are in the process of replacing their coaxial network footprint with fiber optics. Interestingly, according to them it will actually cause them to save money in the long run due to lower maintenance costs for fiber optics.
I guess other coaxial ISPs are too greedy for short term profits to look ahead.