Every single layer, and every single generation, is broken.
Example: The encryption has been home-grown in every generation, and every generation has been broken. They keep reinventing their own shit, even though EVERYONE knows you DO NOT DO THAT.
Another example: The backbone of cross-operator traffic has ZERO authentication. If you're lucky it has ACLs on IP addresses. (and if you thought BGP hijacking on the internet was lax and unmonitored...)
Another: The GTP protocol on this network has a "high security" mode, where it only allows clients who set the "yes, I'm authenticated" bit in the header. Yes, really. A bit.
And operationally like half the nodes in phone networks have a password of "letmein", "password", or "Secret" (capital 's', very high security).
I've seen companies accidentally log in to their competitors' nodes, because they both used "letmein" as the password!
There is NO POSSIBLE WAY anyone can be this incompetent. I give the benefit of doubt, but we're approaching half a century of EVERY SINGLE THING, standards, implementation, policies, and operations, being completely broken. At what point can we say for certain that this is malice, this is deliberate backdooring of all phone infrastructure?
All this stuff is designed and built by hardware companies. I haven't used a single piece of software written by a hardware company that didn't suck (not counting Apple as a hardware company). If hardware companies can't (or don't care about) writing decent software even when a bad user experience can affect their bottom line, I can't imagine they're particularly likely to write secure software.
This probably won't change unless hardware companies stop paying so poorly, as anyone who can code well can get a huge salary increase just by moving from a hardware company to a software company.
The cynic in me, of course, suspects this is no accident.
The hardware integrators have a 90% chance of being from the world of industrial systems, where closed networks and point-to-point runs mean they get to pretend security isn’t their problem. They apply the same principles to their hardware installs.
AT&T wants to be able to administrate and manage their new hardware without keeping 1000 hardware integrators on payroll, so they have their contractors’ contractors develop a manual that can be used by AT&T Network Services, but this is out of the initial CAPEX quota, so it’s done on the cheap and no one does any real work. They just grab all the default passwords and architecture estimates from the integrators and put it into a set of hundred page books for different parts of the network.
The government, customers, and shareholders choose to not hold AT&T accountable for any failures in their network. They instead accept it as “the way things must be”, and AT&T is happy to buy into that story and do the exact same thing on every capital project and maintenance.
This is how you get telcos failing at security every time for decades. Their incentives as a customer-facing company running the world's largest infrastructure projects encourage them to farm out all the infrastructure work to several layers of contractors, with no real responsibility enforced by anyone.
Like I said, absolutely everything is broken. I've not seen this in other industries.
I've seen a vendor tell a telco "if you change the password from 'Secret' then your support contract is void", so yes, this part is completely broken.
But it's everything. Standards are broken, implementation is broken, one chassis running Linux with different architectures on the chassis, control card, and line cards (x86, MIPS, and Sparc, not in that order). Another solution had three boxes where they shipped the org chart: one box ran Linux, another FreeBSD, and the third OpenBSD. Why? "Uhm… well, we have an OpenBSD core committer employed, so…".
And honestly that last example was one of the better ones. I had to read them the RFCs to convince them their implementation was broken, but still.
Oh no, now you got me ranting about all sorts of things. Back on topic.
Take crypto. Some team designed the crypto in every single generation of phones. For more than 40 years the best possible interpretation of what they did is that they gave the job to someone fresh out of school, who said "neat! Crypto, that sounds exciting. I'll make some shit up, it'll be great!". But even then that can't be true. The crypto is too good to have been made by a newgrad. A newgrad's crypto would have been broken in 5 minutes by experienced attackers, but it took quite a while, and some amount of resources, to crack e.g. A5/1.
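For reference, A5/1's structure is public: three short LFSRs combined with an irregular majority-clocking rule. Here's a rough, unvetted Python sketch of that structure (register lengths, taps, and clocking bits taken from public descriptions; not checked against test vectors), just to show it's not something a newgrad improvises in an afternoon:

```python
# Rough structural sketch of the A5/1 keystream generator. Illustrative
# only: assembled from public descriptions, not a vetted implementation.

REGS = [  # (length, feedback taps, clocking-bit index)
    (19, (13, 16, 17, 18), 8),
    (22, (20, 21), 10),
    (23, (7, 20, 21, 22), 10),
]

def _clock(state, length, taps):
    # Galois-style step: XOR the tap bits, shift left, insert feedback.
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << length) - 1)

def keystream(key64: int, frame22: int, nbits: int = 114):
    r = [0, 0, 0]

    def load(word, count):
        # Clock every register, XORing the input bit into the new bit 0.
        for i in range(count):
            bit = (word >> i) & 1
            for j, (ln, taps, _) in enumerate(REGS):
                r[j] = _clock(r[j], ln, taps) ^ bit

    def majority_step():
        # Only registers whose clocking bit agrees with the majority move.
        cb = [(r[j] >> REGS[j][2]) & 1 for j in range(3)]
        m = 1 if sum(cb) >= 2 else 0
        for j, (ln, taps, c) in enumerate(REGS):
            if ((r[j] >> c) & 1) == m:
                r[j] = _clock(r[j], ln, taps)

    load(key64, 64)
    load(frame22, 22)
    for _ in range(100):  # mixing cycles, output discarded
        majority_step()

    out = []
    for _ in range(nbits):
        majority_step()
        bit = 0
        for j, (ln, _, _) in enumerate(REGS):
            bit ^= (r[j] >> (ln - 1)) & 1  # XOR of the three MSBs
        out.append(bit)
    return out

ks = keystream(0x0123456789ABCDEF, 0x2F1)
print(len(ks), ks[:16])
```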
It's baffling. It's both a lot of effort to make their shitty crypto, and also completely wasted. If they wanted to be lazy they should have slapped AES-ECB on it and called it a day. It would still have been shitty, but it would have been done in 5 minutes.
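For what "lazy but off-the-shelf" would have looked like: ECB's defining flaw is that equal plaintext blocks always encrypt to equal ciphertext blocks. A toy demonstration below uses a hash as a stand-in keyed block function, since the stdlib has no AES (a real block cipher is an invertible permutation, but the determinism property being shown is the same):

```python
import hashlib

# ECB's flaw with a toy keyed block function. The same holds for real
# AES-ECB: identical plaintext blocks -> identical ciphertext blocks.

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a 128-bit block cipher (not invertible, illustration only)
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % 16 == 0
    return b"".join(toy_encrypt_block(key, plaintext[i:i + 16])
                    for i in range(0, len(plaintext), 16))

key = b"0123456789abcdef"
msg = b"ATTACK AT DAWN!!" * 3  # three identical 16-byte blocks
ct = ecb_encrypt(key, msg)
blocks = [ct[i:i + 16] for i in range(0, len(ct), 16)]
print(blocks[0] == blocks[1] == blocks[2])  # True: the structure leaks
```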
But maybe you're onto something there too. You can't make the cash by slapping AES-ECB on anything. You need to make it look like it took all the billable time you charged for. And it probably did take all that time. I couldn't make something as good as A5/1 from scratch if my life depended on it.
But that is malice. And that's what I'm getting at. Whatever teams designed the crypto in phones for over 40 years have been malicious. They are at best liars and fraudsters, and at worst compromised by spy agencies and organized crime.
I can understand one generation. If it's just once you're just incompetent. But every single generation for half a century? Now you know what you're doing, and it's malicious.
There are people not born the first time these people screwed up, who now have grandchildren.
Yes, all big infrastructure has waste and corruption, but mobile networks don't have a single competently designed piece of infrastructure.
The concept of "lawful intercept" is baked into the networks from a fundamental standpoint.
This might be a reason why there's less care about these things.
At least for police powers. For intelligence agencies, sure.
Anything that gets standardized will see wide use. What if the standardization picks a technology you are a market leader in? What if the standardization picks a technology you have a patent on? Yeah, you will be forced to let people license the patent. But you will be getting licensing fees.
Anything that needs to be standardized and stable for decades needs to be simple, and it shouldn't matter whether there are vulnerabilities, because those are inevitable.
Also, because airtime is a scarce resource, it's not as simple as "just give me a lower layer and I'll run VoIP". The requirements (and performance and reliability) of voice calls are higher than Skype over an IP network on mobile.
E.g. there's a reason SCTP is actually used here. Phone networks are in some ways rightly very different from pure packet Internet. Sometimes just for historical reasons from the olden times, but often also justifiably so.
I could go on and on, but tl;dr: it's not that simple, but you're also not wrong.
> it wouldn't take any time for someone to brute force any of these nodes and cause a ruckus.
1) Who says it doesn't happen?
2) Generally these things aren't on "the internet". They're behind firewalls and on this "other internet" I mentioned between the operators. You can legit buy access to this network for a few thousand dollars, sure. But if you're that serious you're probably not a rando after "rampant vandalism".
* Academic researchers excl.
But... what is it? Higher bandwidth? Lower latency? Is it the IoT dream, my smart microwave connects to a cell tower instead of my private subnet? Does it replace my wired home internet connection?
And, bonus question - what's the theoretical bandwidth limit per person for, say, a football stadium full of people? Does this limit improve on 5G vs older specs? At what point does physics prevent us from having better standardized wireless networks?
5G uses the same radio waves that 4G does, in many cases - T-Mobile US, for example, uses 600 MHz and 2.5 GHz frequencies for 5G (and 4G). Sprint has been using 2.5 GHz for 4G since 2008.
The biggest change that 5G could bring today honestly is capacity - if you've ever tried to use LTE in a busy train station, you can tell the impact that congestion has on that network's subscribers. Thousands of people connected to a few cells leads to significant slowdown. Generally, higher frequencies lead to shorter range and higher throughput, so in specific circumstances like Airports with multiple antennas, 5G can allow for much higher throughput to many devices at once, alleviating congestion.
5G can also more efficiently make use of spectrum, which means 5G networks can reach further than 4G networks built on the same frequency.
There's a lot more to this, and I'd recommend reading into the Wikipedia page on 5G for an in-depth look if you have time - but the basics are, 5G is a standard, not any one set of devices or antennas or expectations.
There's a feature that allows devices to go into a low power mode. The tower can then "wake up" a device remotely. It's designed for a variety of IOT usecases.
Unfortunately, although listening for these paging messages requires less power than having a full connection, it's still non-zero.
For really low-power applications, 5G (and I think some of the later 4G extensions) supports Mobile-Initiated Connection Only (MICO), which essentially means the device goes into low-power mode but doesn't even listen for paging messages - instead, it wakes up occasionally (maybe even just once a day) to send and receive messages. The tower knows not to even bother trying to page it.
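A back-of-envelope comparison of the two modes. Every current-draw and timing number below is an illustrative assumption, not from any spec or datasheet; the point is just how much the duty cycle dominates battery life:

```python
# Rough battery-life estimate: paging mode (wake briefly every cycle to
# listen) vs MICO (radio off, one check-in per day). Numbers are assumed.

BATTERY_MAH = 1000.0

def battery_life_days(avg_current_ma: float) -> float:
    return BATTERY_MAH / avg_current_ma / 24.0

# Paging: 10 uA sleep floor + a 20 ms listen burst at 50 mA every 1.28 s.
paging_avg_ma = 0.010 + 50.0 * 0.020 / 1.28
# MICO: 5 uA sleep floor + one 5 s check-in per day at 100 mA.
mico_avg_ma = 0.005 + 100.0 * 5.0 / 86400.0

print(f"paging: {battery_life_days(paging_avg_ma):8.0f} days")
print(f"MICO:   {battery_life_days(mico_avg_ma):8.0f} days")
```

Under these assumptions the paging device lasts weeks, while the MICO device's radio budget stretches to years (at which point battery self-discharge becomes the real limit).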
It is a staple of any always-available battery powered device.
A DAS can certainly be an answer but it's never been a very attractive one, and that's from when there weren't other options on the horizon.
True enough. In some sense, 5G is just the next version of the protocol that mobile operators run. It's more than just a technical protocol, though; it comes with organizational practices as well.
However, the current standard was made with much intentionality. It was very much designed to enable gigabit wireless speeds, low latency and smart microwaves.
This part I don't understand. I spend a lot of time on business and pleasure in places where cellular coverage is unavailable or unreliable. I thought that 5G signals don't go as far as 4G, so how can they reach "further" into towns and places that don't have cellular service?
(FWIW, there are a number of places in my regular [pre-pandemic] travels where the 3G signal is better and even faster than 4G signals.)
2G and 3G can sometimes reach further than LTE for a similar reason: it's easier to "hang onto" a 2/3G signal. Why it's easier is a different matter, though - not because 3G is more efficient, but because it's less complex. This reddit thread explains it better than I can, so I'll paste a comment from it here:
>>> The modulation scheme (how the digital "data" is packed into the "analog" wave to transmit it over the air) is simpler for [2G], which requires a lower wave quality to decode. It's the same reason you are more likely to get an [2G] signal farther away than LTE
Note that the reason 3G might be "faster" is probably due more to the congestion issue I talked about before - when the LTE network is oversubscribed, meaning too many people are connected to it and are slowing it down, sometimes dropping back to 3G (which very few people are connected to in 2021) can lead to you fighting less over your data.
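The quoted point can be put in numbers. As a hedged illustration using the square-QAM family (GSM actually uses GMSK, but the trade-off is the same): at equal average transmit power, a denser constellation packs its points closer together, so it needs a much cleaner received signal to decode. For square M-QAM normalized to unit average symbol energy, the minimum distance between points is sqrt(6/(M-1)):

```python
import math

# More bits per symbol -> points closer together -> higher SNR needed.
# d_min for square M-QAM at unit average symbol energy.

def d_min(M: int) -> float:
    return math.sqrt(6.0 / (M - 1))

for name, M in [("QPSK (4-QAM)", 4), ("16-QAM", 16),
                ("64-QAM", 64), ("256-QAM", 256)]:
    print(f"{name:13} bits/symbol={int(math.log2(M))}  d_min={d_min(M):.3f}")
```

So the simple scheme decodes at signal qualities where the fast one has long since fallen apart, which is exactly the "hangs on further out" behavior.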
So in rural areas 5G signals would still use frequencies similar to 4G, so the more efficient use of spectrum will improve coverage and speed.
Regarding the observation that sometimes 3G signals are better than 4G: that may well be because 4G has congestion problems when many clients are connected to the same base station. That's one of the areas 5G is improving, too.
5G on the same frequencies as existing mobile service has about the same penetration, but because it's more efficient, it allows towers to increase power to expand their coverage area. Generally towers will modulate their output power to reduce coverage when congested, hoping devices will attach to other towers; that works well when there are enough towers with overlapping coverage, but not as well when towers are sparse.
It doesn't have to be purely power either, antenna angle makes a big difference, and phased antennas mean you can change effective angle without mechanically changing the angle.
This will rely heavily on the network design decisions made. Where I live, the digital TV switchover happened after the LTE buildout was well on its way, so all those juicy 700 MHz bands went to LTE. I literally can't remember the last time I saw my phone on 3G here, even out in the countryside with marginal coverage/dropouts. It's been at least 4 years.
For instance, one of the biggest benefits of 5G is that channels (bandwidth) can be much wider, and several can be stacked together, which means more data can be transferred. But even though that can be done, there may not be enough spectrum at a specific frequency to be able to take advantage of that.
Then the high-band (millimeter wave) can have even more channels than the low- and mid-band 5G. But high-band doesn't travel far and it doesn't penetrate walls well.
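The channel-width point can be made concrete with the Shannon limit, C = B · log2(1 + SNR): at the same signal quality, doubling the channel width doubles the throughput ceiling. The bandwidths below are typical of LTE vs 5G NR deployments, but the SNR is an assumption for illustration:

```python
import math

# Shannon capacity: wider channels raise the ceiling linearly at fixed SNR.

def shannon_mbps(bandwidth_hz: float, snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

for label, bw in [("LTE 20 MHz", 20e6),
                  ("NR mid-band 100 MHz", 100e6),
                  ("NR mmWave 400 MHz", 400e6)]:
    print(f"{label:20} ~{shannon_mbps(bw, snr_db=20):6.0f} Mbit/s ceiling")
```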
If you want a good primer on it that is accessible, I recommend the regularly updated "What Is 5G?" article from Sascha Segan at PCMag. I think he's the best journalist writing about 5G.
The first is eMBB: Enhanced Mobile Broadband. In other words faster mobile internet. This is where most operators start.
The second is URLLC: Ultra-Reliable Low Latency Communications. This is mainly aimed at using 5G for things like self-driving cars. But also things like long distance remote control. This is where people see potential for innovation without being clear what the exact innovation will be.
The third is mMTC: Massive Machine Type Communications. This is meant for IoT but also for factory control. The IoT thing is mostly about allowing extra-low battery usage, low speed, and cheap connectivity. The factory control thing is about getting the advantages of 5G (e.g. URLLC) and allowing a factory to quickly set up its own private 5G network.
This is on the consumer facing side. On the operator facing side, infrastructure is moving more towards virtualization and decoupling. Trying to make it easier to use multiple vendors, and stop requiring custom made hardware. And in general, moving towards commodity hardware and something closer to 'infrastructure as code'.
This also helps roaming and virtual operators (for e.g. the factory control). It also helps a bit with the ultra low latency part by decentralizing the routing part and moving it closer to the devices.
So "what is 5G gonna do for me" is mostly the 'faster internet'. But the idea is that it will enable widespread innovation that you can later use. With some luck (governments are thinking) being ahead in deploying 5G might also help boost your economy by boosting innovation.
Where does their own WiFi network fail here?
This granularity can mean more precise location data/telemetry and some interesting opportunities for edge caching and edge compute.
Existing GPS, in my experience, is far from perfect for geocoding more dense areas, so the idea that 5G can reliably put you out in front of a restaurant, or even in a particular floor and room of a building is promising (and a bit scary).
What if your games were streamed from a local edge node, and you only played with people on the same node at near-zero latency? Or maybe you're at a stadium, and your phone is streaming replays of the game directly from the stadium without going over the internet. And your phone knows exactly where the nearest vending machine is, and the vending machine is used as an edge device to give you live stock data and process the transaction.
I think it's a good supplement to LTE. People are going crazy because it's not an in-your-face speed improvement, but the reality is that it can mature to keep dense urban areas connected in a way that LTE wasn't really designed for.
In terms of it replacing WiFi/fixed line, I think one good reason it might is that it's simple. Down the line, some people might look at the process of "getting internet installed" and setting up a modem/access point as archaic, when you can just buy a device and have it connect. I kind of like having a separate fixed internet line though, because if one goes down for some reason, I still have the other.
The features of 5G are higher bandwidth (especially in situations with high interference/poor reception), a higher density of supported users (up to 100k users per square kilometer, iirc), and better performance at high speeds (e.g. on bullet trains).
The Wikipedia page is pretty good.
For the details below I'm going to not use the term "5G", 5G like 4G is marketing. The technical specifications that more or less make up "5G" are the 3GPP standards releases. The 3GPP is the standards body that ratifies the wireless network standards that nearly the entire world uses. For this discussion I'll ignore alternatives since "5G" effectively means the 3GPP standard.
The standards of 3GPP Release 15 (and newer) are improvements that build on the existing LTE standards (Releases 8-14). It's an evolution of the standard, much like 3GPP Release 8 (the first LTE release) was an evolution of Releases 5-7 (HSDPA/HSDPA+). While Release 15+ is evolutionary, it is not revolutionary: there is no magically discovered new physics behind it. The improvements largely lie with increased support for higher modulation levels (256-QAM was introduced with Release 14 LTE-Advanced), increased spectrum efficiency (variable-sized framing allowed across different devices and upload/download), mixing upload/download division types (i.e. using TDD for download and FDD for upload), improved MIMO (up to 64x64 in massive MIMO), improved beam-forming, and additional frequencies.
Some of these improvements in Release 15+ were available in Release 14 or unofficially rolled out in release 14 + NR draft. I know one carrier that was pushing 64x64 MIMO for TDD LTE.
The new frequencies, many in the "millimeter wave" range, will help with congestion in the "football stadium". There are two main limitations at high-capacity events. The first is backhaul: you have to connect the stadium back to the core, and this is _always_ a bottleneck. The second is available spectrum: no matter how many antennas you have in the DAS, there is a physical limit to the amount of data that can be sent over the frequencies. The new millimeter wave bands help here, because while they're very short range, their large bandwidth allows for a significantly higher number of concurrent connections.
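That physical limit can be roughed out with arithmetic, which also answers the "bandwidth per person in a stadium" question earlier in the thread. Every number below (crowd size, deployed spectrum, average spectral efficiency, spatial reuse factor) is an illustrative assumption, not a measurement:

```python
# Per-person share of cell capacity in a packed stadium, under assumed
# numbers. MHz * (bit/s/Hz) = Mbit/s of cell capacity.

def per_user_mbps(spectrum_mhz: float, bits_per_hz: float, users: int) -> float:
    return spectrum_mhz * bits_per_hz / users

STADIUM_USERS = 50_000

# LTE-ish DAS: 60 MHz deployed, ~2 bit/s/Hz average, ~10 sectors reusing it.
lte = per_user_mbps(60, 2.0, STADIUM_USERS) * 10
# mmWave: 400 MHz, ~4 bit/s/Hz; short range allows ~20 small cells of reuse.
nr = per_user_mbps(400, 4.0, STADIUM_USERS) * 20

print(f"LTE share:    {lte * 1000:6.0f} kbit/s per person")
print(f"mmWave share: {nr * 1000:6.0f} kbit/s per person")
```

The interesting part is that the mmWave win comes as much from spatial reuse (many tiny cells, each reusing the same spectrum) as from the wider channels themselves.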
The new frequencies, along with increased efficiency in existing frequencies, plus core changes are the main driver for the "latency" and "bandwidth" improvements. The "connected cars" and "connected IoT to cell network" are just marketing/sales departments pushing for new customers. The main "advantage" "5G" brings here is an increased capacity in the network to handle this.
A few other notes. Unlike the "3G"->LTE transition, the upgrade to Release 15+ will be a lot smoother for carriers. First, everyone is now on LTE, the precursor, so there are no incompatible CDMA/EVDO networks that need rip-and-replace plus compatibility modes (eHRPD). Second, "NR" is designed to be compatible and multiplexable with existing LTE/LTE-Advanced eNodeBs. This means in one area you can have NR and LTE towers, and the NR towers can broadcast LTE for devices that are LTE-only. This was not the case with the original eNodeBs, which could not handle backwards compatibility without physically separate BTS/NodeBs. Third, the new core for Release 15 is designed to be backwards compatible with existing eNodeBs, unlike the previous transition, which required a new core that was largely incompatible due to major design changes. So with "NR" RAN elements and existing LTE eNodeBs, the core can be seamlessly upgraded without having to run two complete networks for multiple years like in the LTE transition.
TDD -> Time Division Duplex, FDD -> Frequency Division Duplex. Most LTE networks are FDD; a few (i.e. Sprint, Softbank, China Mobile...) have certain spectrum they use as TDD. The difference is that with TDD you use the exact same frequencies for upload and download, but divide them by time: so t0->t2 is for download, t3->t4 is for upload, etc. With FDD the frequency or "band" is divided into two parts, one for upload and one for download. There is no time division with FDD, but each direction only gets part of the channel.
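The trade described above, as a trivial sketch (bandwidth and spectral-efficiency numbers are illustrative):

```python
# Same total spectrum, two ways to split it between download and upload.

def fdd_throughput(band_mhz, bits_per_hz, dl_fraction):
    # Band split in frequency: each direction gets its slice full-time.
    return (band_mhz * dl_fraction * bits_per_hz,
            band_mhz * (1 - dl_fraction) * bits_per_hz)

def tdd_throughput(band_mhz, bits_per_hz, dl_time_fraction):
    # Whole band, but each direction only transmits part of the time.
    return (band_mhz * bits_per_hz * dl_time_fraction,
            band_mhz * bits_per_hz * (1 - dl_time_fraction))

# 20 MHz at 5 bit/s/Hz. FDD pairs are usually symmetric; TDD can skew
# the time split toward download to match real traffic patterns.
print("FDD 50/50 down/up:", fdd_throughput(20, 5.0, 0.5))   # (50.0, 50.0)
print("TDD 75/25 down/up:", tdd_throughput(20, 5.0, 0.75))  # (75.0, 25.0)
```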
5G also comes with a lot of changes higher in the stack, many of them essentially internal to the telcos themselves. Parts we might notice are new protocols for registering a device with a cell tower, and new protocols for roaming or for requesting higher speeds (with more packet drops) for specific applications.
Other parts are having lower latency because the packets get routed closer to the cell-tower. Having your own private 5G network at your job-site with guaranteed uptime. Having more reliable service in busy moments because the operator back end is more virtualized and can thus spin up more capacity when needed.
The use of a different radio band, therefore less contention in the existing mobile bands - less congestion results in better speeds overall.
Reducing the range of base stations: shorter range means fewer clients, less congestion, and therefore better speeds, while also deploying them more densely to cope with wider areas and higher bandwidth densities. Also, shorter range reduces the power requirements, meaning mobile devices will have longer battery life (nothing magical, probably not even noticeable to the average user), or it can be built into smaller/low-power devices such as IoT.
Ultimately, 5G is irrelevant to end users until it's actually deployed widely. Just like 3G and 4G, the end user has no impact on the deployment of the network other than the demand for it. So, all the hype around 5G is almost entirely marketing, politics etc. It only really matters once 5G is deployed across the areas you visit daily, and until then the previous generations of mobile connectivity will continue to serve just fine.
Your suggestion about a football stadium is an interesting test case. Ideally, an area that size would be served by up to a dozen base stations, spread throughout the stadium. Compare this with a single 3G base station that would cover the stadium, plus a large portion of the local area, and you can see the pros/cons fairly easily. But how many people are surfing the web whilst watching a game? or taking calls, answering texts etc. Very few during active play time, but there'll be large bursts of traffic in any breaks in play which will stress the older mobile generations to breaking point whereas 5G is designed to deal with that scenario fairly well.
The jury is still out on real-world mmWave 5G becoming widespread any time soon outside a few exceptionally crowded public places. Besides network support, a lot of phones don't support it either.
To me the whole angle of this seems wrong. Who out there has a solid LTE signal and is going “oh, if only this were faster”?
On the other hand, when I have 1 bar I might as well have nothing at all. Shortening the range of the base stations doesn’t seem like it would help this.
In the US I went from unlimited data on 3G to 10 GB during the 4G LTE days, down to eventually 5 GB (Verizon).
There are many I know with the same personal experience.
I have no doubt that big data plans will one day be ubiquitous -- but I have much more doubt that mobile providers are actually trying to provide me with a better experience and more freedom to do what I want.
They care about profit, and that's about it.
They gave away big data plans when few people cared about actually using them, and now that the phones and the userbase have caught up to those numbers the providers pull the rug from under them in order to secure further profits -- god forbid user demand forces upgrades, that'd ruin the profits even further.
10Gbps would, of course, be even better, at home and mobile.
20-40 Mbps is more than enough for streaming. So I suppose you’re regularly downloading very large files or something?
Genuinely curious what use cases you notice a difference.
Or maybe it’s the better latency of your gigabit that you notice more so than the throughput?
I was referring to a laptop in my original statement. I usually plug it into the LTE router directly with a gigabit cable, or use Wi-Fi which generally exceeds the uplink capacity. 5G fixes that, for a wireless LAN.
As for why you'd need higher bandwidth on a mobile device, it is simple: to live-stream the 2160p@60fps captured from the device's sensors. Another good reason is app updates: doing app updates on a mobile device frequently includes a few gigabytes of downloads. Same with laptops, of course, which are increasingly connected via mobile data. Many AAA games have updates in the 20-200GB range.
A lot of this kind of stuff assumes that someone is on mobile temporarily until they get back to a "real" wifi connection (iOS didn't let you download any apps over 2GB on mobile data for a long while). For some of us, or all of us at some times, there isn't a "real" connection to go back to.
I suspect it's a small enough minority nobody is too worried about it from a policy point of view.
I'm sure that number pales in comparison to the number of truckers, oil workers, and construction types globally that spend weeks or months on the road at a time. It's probably easily 100 million people that will immediately directly benefit from increased mobile bandwidth.
That's not even counting the dozens of developing countries where they just skipped cabling altogether and mobile data is the only internet access available. That probably boops the figure up to a billion or more.
However, a huge chunk of the people you mention are nothing like mobile-only, and for many the model of doing heavier downloads on wifi works just fine.
My point was particularly about catering to the mobile only crowd, which is pretty small (US/ EU etc., anyway).
It's important to understand the positive aspects of the consequences of data collection. Of course information is power and it short circuits democracy. But on the other hand, computers give the extraordinary possibility to gain an insight through actual statistics. It's a loss for a gain. Maybe that's how democracy will evolve, and maybe it will help voters, in the end?
It's Murphy's law.
Telematics in cars will be mandated shortly and enable stuff like road vs fuel taxation and congestion pricing. That enabled regulatory changes that basically eliminated most local autonomy over cellular tower placement. Basically, the FCC is “yimby” for anything 5G, and used national security regulations to limit permitting, taxation, etc.
They can’t really avoid running fibre with mmWave cos they have to backhaul it. Sure there is point to point radio, but in the main they’ll need to get fibre almost as close to you as with a fixed line direct to you. But instead it’ll be fibre to base stations on top of every building? It’s almost the same cost in terms of fibre infra.
I think they use wireless backhaul too.
That’s pretty much it. Some telecoms seem to be positioning this as an opportunity to provide home internet access running through 5G infrastructure which would cut down on last mile costs, but at the same time it seems like it would saturate the spectrum pretty quickly.
During all the 5G hype I’ve been buying up stocks of companies based on how much backbone fiber they own, because as far as I can tell that’s where the real staying power is anyway.
Reminds me of when Bill Gates was on breakfast television flogging Intel's Pentium processor. My mom was suddenly of the opinion that all of my computer equipment was obsolete and that this one chip was going to solve all of the world's problems.
Are there any technical mitigations coming? Or just heavier phones with bigger batteries?
The 4G speeds nowadays are fast enough that they're usually not the limiting factor in day-to-day mobile internet usage. The advantages of 5G aren't that big for the average mobile user; there are more drawbacks than advantages to using 5G, at least for the upcoming 5-10 years.
I said that it's capitalism marketing bs, because that's what it is: hyping the technology to more than it is for the sole purpose of increasing profits for telecom and hardware manufacturers.
Does this imply that in the near future 5G will have better coverage and will be cheaper than 4G?
In that case it feels like maybe it will just replace 4G.
This criticism could be made of almost all of the technological advances made possible by the capitalist system. Long live the "marketing bs" that allows incremental improvements like this one.
Exactly, it is an incremental improvement that is marketed as a revolutionary one. When 4G and 3G came out they were simply stating that it was much faster, but now with 5G everywhere you see how it will revolutionize the world and make new things possible that were never possible before, like remote surgery, articles like this one: https://www.digi.com/blog/post/5g-and-the-future-of-telemedi...
If you want the lowest latency, use a wired connection, which has existed for many years. Why would you use a more unreliable connection like 5G, which might drop when someone waves a hand, instead of a faster, more reliable wired one? There are tons of other examples like this.
For the average user, in most cases the download speed is actually limited at the server end, not at their end, so even with a 10 Gbit/s downlink they won't be able to use it. Not only that, but data caps, storage write speed, infrastructure, and other current limitations make the promoted benefits of 5G simply nonexistent for at least several years.
If you want a future-proof phone, yes you can get a 5G one, only that it's not future proof. Other components of the device must be drastically improved too before they can take advantage of 5G, which means that you would still have to change your device before you can actually use the promised benefits.
I love technological advancements, but I hate it when the public is tricked into thinking something will greatly and instantly improve their life when in reality it won't change anything.
This one annoys me so much. I've seen this exact promise every other year for 20 years. And it's never going to happen (outside of some PR stunts maybe).
5G seems like it's just an improvement over 4G. It can use the same frequencies, but uses them more efficiently. The millimeter-wave communication that made many don tinfoil hats probably won't be used much. I think maybe it will be useful in open places with a lot of people, like stadiums.
I think most of the improvements in 5G won't really be noticeable to the average user.
EDIT: Link to Verizon talking about it. It could all just be hype to make people want 5G and to get governments to invest in it, I don't know.
"Today, internet-connected cars rely on 4G LTE technology to stream music and engage other connected services, but 5G will usher in a step change not only for in-car connectivity, but for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication.
The implication is clear: Cars will not only “talk” with one another in near-real time, but also with sensors installed in streets and traffic lights, sharing information on roadways and weather conditions, and alerting drivers on the same stretch of highway to potential hazards. Connected vehicles will be able to crowdsource near-real-time routing information to avoid backups and streamline traffic flow. Next-generation networks should also lead to improvements in driver safety by helping mitigate the unknown—a truck, for example, sensing that its driver is about to run a red light and alerting other vehicles approaching the intersection of the hazard. The National Highway Safety Administration has concluded that the introduction of systems to prevent collisions at intersections alone could save 1,300 lives a year."
I’ve heard this a lot, but it doesn’t ring true. I believe I’m in the same category as many others whose work involves configuring networks, especially LANs, where you are often entering IP addresses by hand.
If you're configuring LANs you're unlikely to be configuring anything deeper than a /64 per LAN, so the effort is approximately the same as IPv4 (four numbers, except that each number is four hex digits instead of three decimal digits).
Similarly, if you're setting up IP rules on a firewall, you're unlikely to care about anything smaller than a /64. If you want to ban a bad actor, blocking a specific /128 isn't going to achieve anything, since the bad actor likely has the ability to use any address within the /64 (SLAAC). You'd just ban the /64.
Lastly, if you're picking your /128s like the static DHCP leases case I mentioned above, nothing prevents you from zeroing all the segments you don't care about. Each of my static leases has all zeroes in the lower /64 except for the last hex digit. Net result is 2001:db8:1234:1::1, 2001:db8:1234:2::1, 2001:db8:1234:3::1, etc. The 2001:db8:1234::/48 is what I get from my ISP so it's already in my muscle memory, so it's negligible extra effort to remember individual machines' IPs.
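To illustrate the points above, here's a short sketch using Python's standard `ipaddress` module (the `2001:db8::/32` prefix is the reserved documentation range, matching the example addresses; the specific "bad actor" addresses are made up):

```python
import ipaddress

# A static lease with the lower 64 bits mostly zeroed compresses nicely,
# so the address you actually type is barely longer than an IPv4 one:
addr = ipaddress.ip_address("2001:0db8:1234:0002:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8:1234:2::1

# Firewall rules rarely need anything narrower than a /64. A SLAAC host
# can hop to any address inside its /64, so you ban the whole prefix:
banned = ipaddress.ip_network("2001:db8:dead:beef::/64")
bad_actor = ipaddress.ip_address("2001:db8:dead:beef:aaaa:bbbb:cccc:dddd")
print(bad_actor in banned)  # True

# When the same host picks a fresh SLAAC address, it's still caught:
moved = ipaddress.ip_address("2001:db8:dead:beef:1111:2222:3333:4444")
print(moved in banned)  # True
```

The membership test is the whole firewall argument in miniature: a /128 ban misses the second address, the /64 ban catches both.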
It's one of the reasons I moved from Virgin Media (cable in the UK) to Zen (FTTP) ... proper dual stack, so I have native IPv4 AND IPv6.
No, CGNAT-only is the worst solution.
> everyone is going through CGNAT for the parts of the internet that are IPV4 only.
What choice do they have? There are more humans than IPv4 addresses, so if every ISP handed a public IPv4 address to every customer, the price would go to infinity. It would be more productive to complain about CGNAT-only ISPs than about those who are actually trying to fix the problem.
idk if they sell user-specific data, but German operators DO at least sell aggregate info from cell-tower usage.
Couple of sources, in Dutch:
Here's an anonymized email address, please tell me how to de-anonymize it:
Don't believe "anonymous" data is shared like that? Look at what New York Times could do when purchasing one of these datasets.
<redacted>@gmail.com + pfundstein is how it almost always fails.
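The usual "anonymization" is just an unsalted hash of the address, and that falls to a dictionary attack the moment the buyer holds any list of candidate emails (a breach dump, a marketing list). A sketch of the attack, assuming the common unsalted-SHA-256 scheme; the addresses are invented for illustration:

```python
import hashlib

def anonymize(email: str) -> str:
    # How a data broker typically "anonymizes" an email:
    # normalize, then take a plain unsalted hash.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The "anonymous" record as sold in the dataset:
anonymized = anonymize("Jane.Doe@example.com")

# The attacker hashes every candidate address and matches them up:
candidates = ["john@example.com", "jane.doe@example.com", "bob@example.com"]
lookup = {anonymize(c): c for c in candidates}
print(lookup.get(anonymized))  # jane.doe@example.com
```

Because the hash has no salt and the input space is guessable, the same pipeline that "anonymized" the data de-anonymizes it.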
Go and read the peer reviewed publications about 5G/6G. Localization, identification and "sensing" are what the whole project is about. There's no requirement that someone have a phone for the antenna arrays to be able to perform those functions either. I saw one industry whitepaper that mentioned collecting "physiological data" as an explicit goal. The mm-wave and sub mm-wave frequencies used in these projects are specifically good for this purpose, for relatively high resolution sensing. Electromagnetic waves in these spectra have nearly optical properties, and definitely resolve enough information to be highly invasive. So who's getting access to this information?
Maybe this specific vulnerability wasn't intentional, but somebody is meant to get this data. That's the whole point of this project.
In my uninformed opinion this is a likely reason for Five Eyes nations to put up such a strong fight against Huawei 5G hardware.
> This opens the door to situations where if an attacker manages to compromise an operator’s edge network equipment, they could abuse 5G protocol functions to launch denial of service attacks against other network slices or extract information from adjacent 5G network slices, including customer data such as location tracking information.
(Well, if you have perfect reception of my signal and are following me around constantly, you may be able to track my specific signal, but once you lose me for a second in a group of people, you don't know which one of them I am when I come out.)
Like, high-precision and real-time enough to kill anyone from a satellite/drone/missile/etc at any time with no additional effort?