Having worked in the telco space, I'm not sure there were ever truly "bellheads" and "netheads". It was mostly an industry of carriers trying to charge as much as possible while delivering as little as possible. As one example, CallerID cost almost nothing to offer, but rather than just making it a free convenience service, telcos charged ~$10/mo. for you to be able to see data that was already present anyway.
As internet connectivity became a standard utility for most households, telcos were forced to offer it, or die. Voice-specific equipment then became more of a liability than an asset, so we saw the networks switch to being primarily IP carriers, and voice calls got degraded to using things like SIP out of convenience for the carriers.
> I'm not sure there were ever truly "bellheads" and "netheads".
In my experience, I've seen these terms used to refer to circuit-switched/deterministic vs packet-switched/opportunistic routing.
Bellheads, having done frequency-division and then time-division multiplexing, derisively referred to packetized voice as "statistical multiplexing" and ridiculed its pathetic jitter characteristics. (Radio stations still use ISDN for some things because it guarantees latency and jitter performance that Ethernet can't.)
Netheads, convinced that when all you have is a hammer, everything looks like a packet, said bellheads and their obsessions with latency and jitter were stuck in the past. Throw enough bandwidth at the problem, was the theory, and it's just not a problem anymore. Literally everything that matters to anyone could be stuffed into packets, and if that failed to meet some QoS requirements, then the requirements were wrong. And nobody would care because it'd be so much cheaper.
Ultimately the latter approach seems to have panned out, but IMHO, more's the pity. I still miss the imperceptible latency of a good old POTS connection. The way we all step on each other in Zoom calls today is a special hell we could've avoided.
Oh I'm aware, but it's a field-bus, rarely seen more than a few hundred meters in diameter. You can't order an EtherCAT circuit with one end in Montreal and the other end in Dallas and maintain the same determinism. That used to be as simple as dialing a phone.
I very specifically used, and heard used around me, these exact verbatim terms, and I can tell you the cultural divide was palpable between the two camps: one from Valley startups, the other from post-monopoly AT&T. This influenced decisions on everything from building wiring all the way up to WAN/transmission and standards. The Class 5 PSTN approach vs. the best effort of VoIP packet switching, IP-as-Ethernet-everywhere vs. ATM and Frame Relay nailed-up circuits, the ITU vs. the IETF overlaps. H.323 vs. SIP is a perfect example of this tension, and in fact the categories still exist. H.323/Megaco/QSIG et al. reference designs are precursors of the spaghetti of 5G/IMS etc., and SIP is still the simplified out-of-band signaling template for many Web protocols.
Back then there was more friction because Internet and data comms were the new kid on the block, basically disrupting a fairly established PSTN / Post & Telegraph infrastructure. And I say all this as a NetHead who had many BellHead friends, so it never got in the way of going out to lunch together.
This reminds me of the switch from circuits to packet-switched networks. Basically what happened is that telco operators only wanted to sell circuits, which, like a circuit board, were essentially a closed loop. This is why T1s were always dedicated 1.5 Mbps symmetrical links with essentially unshared bandwidth. You got 1.5 Mbps because that's what AT&T provisioned (until they started overprovisioning the headend/NOC and sold 'shared T1s', but that's a different topic).
What happened, as I understand the lore, is that one year at NANOG a bunch of young network operators basically got together and said "well, we're done with this circuit stuff, we're moving to packets, where a network of routers makes decisions about how each chunk of data is moved through the network based on headers." The telco operators laughed at them, because with packet routing it's much harder to guarantee delivery than with a circuit, and they didn't understand why people would want cheaper, unreliable data routing for their internet service.
Of course, over time, packet routing has won for everyone except the largest orgs, who have dedicated circuits that aren't shared bandwidth. For anyone who has used a direct 10 Gbps fiber line that isn't shared vs. a cable modem that is overprovisioned to hell, the dedicated circuit is a whole different class of internet access (and 100-1000x the cost).
They did also sell X.25 (not circuit switched), but generally it was not competitive from a price/performance perspective with an IP network built from T1s.
People wanted packet-switched service, but found it was cheaper/better to build it themselves as an overlay network from circuits leased from the telcos.
> It was mostly an industry of carriers trying to charge as much as possible, while delivering as little as possible.
I remember when carriers upped the cost of a text message from 15c to 25c per message. All four major carriers, IIRC, changed their pricing within a month of one another. This is another service that costs them almost nothing. It's one of those "trout in the milk" pieces of circumstantial evidence. I am positive they all got together and decided to raise rates, but nobody went to prison for this collusion.
I mean I wouldn't have minded if they only charged for messages sent but they charged me for incoming messages as well. My friends had unlimited texting and as a poor college student I was trying to save money. I was so glad when grandcentral/google voice became available. I joined it and never looked back.
>I am positive they all got together and decided to raise rates but nobody went to prison for this collusion.
Do you think it could have been as simple as them all realizing that the market value of sending a text message had increased? If you recognize that a competitor is able to make more money by raising the price of something you also sell, you can use that as a case study for why you should raise your own rates.
Theoretically it should be a market advantage to charge less (while still being profitable) than your competitor for the same service so you can get more customers.
It would matter how much of an advantage it would be. It isn't good enough to just be profitable, they want to find the most profitable thing to do. You would need to get enough people to text more / new customers to offset the potential profit you would have gotten from charging more.
At one point, two weeks of SMS traffic alone generated more revenue than all of Hollywood did in an entire year.
What the OP fails to emphasize is that SIP was used to perpetuate the business model and all-important pricing. That’s why it was allowed to prevail: it did not challenge the status quo that really mattered.
In my country (Italy) now providers are starting to switch users from copper to fibre optic (and thus VOIP) at no extra cost. In a couple of years this switch will be mandatory, and the copper network will be completely dismantled (as far as I know they are already doing that in big cities).
For VDSL services (also called FTTC: fiber to the cabinet and then copper to the house) we already switched to VOIP, even though in theory the copper could carry both the VDSL signal and the old analog signal, since it's cheaper to give users a router with VOIP integrated than to do the conversion on the cabinet side.
Same in the UK - POTS (plain old telephone service) is being withdrawn at the end of 2025. Copper will still be used for broadband, but analogue phone service won't be supplied any more. Customers still using analogue phones will be given a device capable of converting it to VoIP locally.
The distinction was more of a research thing... it was the distinction between Bell Labs' networking efforts and the various internet groups, and in some sense very specifically between people like Sandy Fraser (who did the research behind ATM) and people like Jon Postel. The essential disagreement was where intelligence in the network lies... naturally the bellheads (like Sandy) wanted a smart network and dumb endpoints, while the netheads (like Postel) wanted a dumb network and smart endpoints.
I personally think the bellheads were right (putting intelligence in the endpoints is why the Linux Kernel needs such a massively complicated IP stack), but obviously that's not what happened in the real world, perhaps for the best.
Arguably, putting the TCP protocol in the Linux kernel is an example of bellhead thinking. The machines are now so large that in-kernel networking is no longer on the edge. Networking in the kernel on a machine with hundreds of CPUs—and hundreds of unrelated tenants—amounts to a smart network. Putting the network stacks in the user processes, and leaving the kernel as a dumb pipe, works better and is more aligned with the end-to-end principle.
Since you can never have "fully dumb" endpoints, this model requires upgraded end devices to make use of upgraded networks, and upgraded networks to enable newer end devices. That's an inherent economic disadvantage from the start.
Also, the computational and memory complexity of maintaining the state of every existing connection at least at some point in the network sounds baffling.
I've heard Douglas Comer talk about how he used to get ribbed by the Bell Labs guys about "when are you going to come work on The Network, instead of that 'internetwork' toy?" There definitely was a cultural divide between the two groups, at least in the early days.
That said, I'd believe it if someone told me that people kept talking about the "bellheads vs netheads" divide long after the divide stopped actually existing.
Comer taught at my school and I interned at a company making hardware to locate in RBOCs back when local loop unbundling was a thing and there definitely was a divide.
The general thought of bellheads was that voip was much lower audio quality and that nobody wanted to have to reboot their phone.
Both are true, but people got used to lower quality with mobile phones, and it's not clear that they ever wanted to pay a premium for reliable voice quality, when they can get "probably good enough" for substantially cheaper.
I should also say that Comer had his own biases. He had an extremely dim view of ATM[1], but ATM is a great protocol if you want to keep POTS[2] latency low and still share the network with non POTS data. ATM was a good technical solution to the specific problem. Whether or not it was a problem worth solving is another question.
Oh, there totally were. I started my career at the CNET, France Télécom R&D. At one point they had proudly produced a glossy brochure extolling their “Web protocol without IP networking” project, essentially running HTTP over ATM switched virtual circuits, because TCP/IP had cooties or something. That was so many layers of wrong I can’t even begin to unpack it.
[SIP author here, for better or worse] SIP's victory over H.323 was definitely an odd story. Henning Schulzrinne wrote a draft called SCIP, which was essentially HTTP for multimedia calling. SCIP overlapped with what Eve Schooler and I were already doing. We thought his version was too complicated (ha!), so I rushed out a draft based on what I had been coding in the MBone session directory sdr, which was a simpler UDP-based SIP. We presented them both at the same IETF meeting in 1997 where I was already standardizing SDP, and got general support from the room, but the main message we took home was there can be only one! So we met up at Columbia a month later and spent a day arguing out which features of the two protocols to keep. The resulting protocol is the basis of the SIP we know today.
By this time, H.323 was gaining lots of momentum and Microsoft and Intel had adopted it. What chance for a few academics against Microsoft and Intel? But I was chairing the MMUSIC WG in the IETF and, quite simply, no-one told us to stop. So with the hubris of youth, we just carried on anyway. Loads of people told us we couldn't possibly win, but H.323 was just too ugly and telco-like, and (in the early versions) took so many round trip times to do anything at all, that we really didn't like it.
Around this time it started to occur to some of the telcos that Internet traffic would before long exceed phone traffic, and it would then make no sense to run two networks. In particular Henry Sinnreich at Internet MCI started to look around for what they should do, and he thought SIP's proxy architecture would fit their needs better than H.323. As a result I went down to Dallas and gave a couple of days tutorial to all the Internet MCI folks about how we saw IP-based internet multimedia working out. After that, Internet MCI started to push SIP, and other telcos noticed. Increasingly it seemed our main allies were telcos. And so it turned out that in the early VoIP space, telcos held more sway than Microsoft and Intel. Eventually Microsoft moved to SIP too. The downside was that SIP rapidly accumulated lots of telco cruft, and the original peer-to-peer email-address-based nature of SIP wasn't what actually took off.
So, H.323 came from bell-heads, but got backed by net-heads. SIP came from a few net-head academics, but got backed by bell-heads. In the end, for better or worse, the latter won. Ever since an IETF meeting around 2000 where there were more than 100 SIP-based Internet-Drafts, I've regarded SIP as being my success disaster. It's a huge success, but it could have been so much simpler and cleaner.
Need to connect two two-way radio systems? It's SIP in a fancy dress.
Need to connect a two-way to a console system? It's SIP in a different colored fancy dress.
NXDN is completely SIP internally. It's not standard SIP, but you can still construct a call ladder with conventional tools (or just read the captures) and have a rough idea of what it's doing, even if the headers are weird. P25 and DMR use SIP in various other places, either for logging or for console connections. The workaround for having Calling Party ID on radio systems is to embed the metadata for a talkspurt inside the audio stream.
The one gripe I have about SIP is that (I suspect mostly, but not entirely, because of the telco cruft) you can have two standards-compliant SIP implementations that can't interop in a meaningful way.
The lack of a required minimum codec does add to those challenges, but I've also seen SIP fail to work even when both sides have the same codecs, because one side does not emit the headers the other side expects (for example, when OPTIONS must occur before REGISTER).
Gripes aside, I will take a SIP derived protocol any day over a binary protocol that needs some sort of magic decoder ring, I have to work with those, and I loathe them for obvious reasons. So thank you for the work you did to standardize something that has touched my daily working life for 15 years now.
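To make the "no magic decoder ring" point concrete, here's a rough sketch of building and reading a minimal SIP request. The header names (Via, From, To, Call-ID, CSeq) are real SIP headers from RFC 3261, but the addresses, tags, and branch value are made up for illustration, and the parser is deliberately simplified (it ignores folded and multi-valued headers):

```python
# Sketch: a minimal SIP OPTIONS request as plain text, to show why a
# text-based protocol can be debugged by eye, unlike binary protocols.

def build_options(from_uri: str, to_uri: str, call_id: str) -> str:
    """Build a minimal SIP OPTIONS request (illustrative values)."""
    lines = [
        f"OPTIONS {to_uri} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds",
        f"From: <{from_uri}>;tag=1928301774",
        f"To: <{to_uri}>",
        f"Call-ID: {call_id}",
        "CSeq: 63104 OPTIONS",
        "Max-Forwards: 70",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

def parse_headers(raw: str) -> dict:
    """Parse the header block into a dict (simplified: no folding,
    no repeated headers)."""
    head, _, _ = raw.partition("\r\n\r\n")
    request_line, *header_lines = head.split("\r\n")
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers

msg = build_options("sip:alice@example.com", "sip:bob@example.org",
                    "abc123@client.example.com")
hdrs = parse_headers(msg)
print(hdrs["CSeq"])  # prints "63104 OPTIONS"
```

Even a non-standard dialect (like the NXDN internals mentioned above) stays roughly readable in a packet capture precisely because the wire format is lines of `Name: value` text.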
> you can have two standards compliant SIP implementations that can't interop
I spent years in telco. You would not believe how painful getting telco A to talk to telco B was, despite it all being SIP. There was a bazillion-dollar market for session border controllers that did nothing but protocol fixup all day.
Incredibly, one of the things that SBCs had to do was transcode audio in near-real-time between one telco and another. This was very technically demanding since end users don't like glitchy audio, but also, in some sense, totally a waste of everyone's time and expensive DSP resources, since the two endpoints spoke the same codec but had to cross internal networks that did not.
That doesn't surprise me; there is a market at least for conversion between u-law and A-law. I've used Asterisk (as well as some other boxes from AudioCodes and another manufacturer whose name escapes me) in a similar role.
This is a really good read. My team has been implementing VoIP/IM protocols since around 2003, and we started off with H.323, then ISUP/PRI then H.324, then (briefly) IAX, then SIP, then IMS, then RCS, then added XMPP, and then decided to start afresh and we created Matrix in 2014.
The really frustrating thing was how SIP just mimics the 1:1 circuit switched calling semantics of the old PSTN. Stuff like messaging and group-calling and group-conversations is bolted on badly. Stuff like E2EE barely exists at all. From our perspective, SIP monumentally failed: it had an incredible opportunity to create an open decentralised communication layer for the internet using "nethead-friendly" culture... but all it did was reinvent the semantics of the PSTN across private federations.
Meanwhile, the actual communication semantics that people expect today (and for the last ~10 years) don't resemble the PSTN at all: they expect synchronised conversation history across multiple devices; typing notifications; read receipts; presence; file transfer; multiway video conferencing; stickers; reactions; E2EE etc. etc. In other words, they want the iMessage/Facetime/WhatsApp/Slack/Discord/Teams/Telegram featureset - they do not want something that pretends to be a 1890s vintage candlestick telephone.
So this is why Matrix attempts to provide a nethead-friendly open standard for the featureset that users expect for communication these days: i.e. conversation history synchronised between devices; E2EE; group conversations as a first class citizen; and we even provide an open standard (possibly the first one?) for multiparty voice/video calls: https://github.com/matrix-org/matrix-spec-proposals/blob/Sim....
> The really frustrating thing was how SIP just mimics the 1:1 circuit switched calling semantics of the old PSTN. Stuff like messaging and group-calling and group-conversations is bolted on badly.
Yeah, I remember the period when XMPP was unsuccessfully trying to grow audio/video extensions (eventually fixed by Google's contribution of their Jingle efforts). Meanwhile the SIP ecosystem was unsuccessfully trying to grow messaging features. The two worlds (messaging and telephony) were for a long time like oil and water.
I'm glad those days seem to be behind us, though the tendency towards a WebRTC monoculture (which in turn descends from Jingle) does concern me a bit.
> (possibly the first one?) for multiparty voice/video calls
> The protocol for multi-party Jingle was specified by the Jitsi team back in 2011:
oops, I stand corrected. I don't think I've ever seen a heterogeneous XMPP conference call though (e.g. a Jitsi call but with participants using entirely different conferencing clients from different vendors?). Whereas MSC3401 and MSC3898 are implemented by Element Call, Hydrogen, Third Room, maybe nheko, and maybe some proprietary Matrix clients now, and folks regularly mix & match clients (e.g. voice-calling into a Third Room world). Either way, it's cool that the Jitsi folks standardised their signalling. Last I checked with them, they seemed to be all about "URL based calling", and less about supporting interoperability with other clients.
> Indeed, the only thing that remains “circuit switched” about the PSTN today is the per-minute billing model — based on telcos mutually pretending to one another that they're still operating a circuit switched network that justifies this kind of billing.
When was this published?
The reason I ask: I sat through biz meetings at Bell Labs in the late 90s, back when long-distance calls were still being billed on a per-minute basis, and the market price was approaching $0.05/minute. This was a big deal because the cost to bill per minute (all the tracking and back-end collection work) also worked out to about $0.05/minute, so it was clear that per-minute billing was going away by y2k.
So I'm a little amazed that per-minute billing is still viable more than two decades later.
Edit: It makes me wonder if the author is confusing "usage" billing (use measured in minutes) with "per-minute" billing (an itemized list of call times and phone number destinations).
On a 4G "mobile call" you get 8 kbps - 64 kbps for the voice.
> In fact, VoIP over 4G actually offers higher call quality than standard mobile calling. Mobile calling compresses voice to about 8 Kbps, while VoIP calls can use up to 64 Kbps with a professional provider such as IDT. Instead of the ‘tinny’ voice typical of mobile calls, clients called from VoIP on 4G will hear you clearly.
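For what it's worth, the 64 Kbps figure in the quote is just the raw rate of G.711, the standard narrowband telephony codec (8000 samples/s at 8 bits each); the 8 Kbps mobile figure is taken from the quote, and real mobile codecs such as AMR vary. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope arithmetic behind the 8 kbps vs 64 kbps figures.
sample_rate_hz = 8000   # narrowband telephony sampling rate
bits_per_sample = 8     # G.711 uses 8-bit companded samples
g711_bps = sample_rate_hz * bits_per_sample
print(g711_bps)         # 64000 -> the "64 Kbps" VoIP figure

mobile_bps = 8000       # the ~8 kbps figure quoted above
print(g711_bps // mobile_bps)  # 8x less voice data per second

# Audio payload per 20 ms RTP packet at the G.711 rate:
frame_ms = 20
payload_bytes = g711_bps // 8 * frame_ms // 1000
print(payload_bytes)    # 160 bytes of audio per packet
```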
Interesting idea. You'd need some kind of system to modulate the sound wave to carry the data, and then demodulate it at the other end to get the data back out.
> ..."a little amazed that per-minute billing is still viable more than two decades later."
I'm not so amazed.... the answer tends to be: because they can. That is, because they can charge someone, and those someones either don't know better and pay, or are not given a choice and must pay. I guess I could have just replied with something like: because capitalism.... but now I'm not sure which sounds worse/sadder for society. :-(
I was involved in the IETF in the 1990s and had a good friend who was one of the architects of SIP. I was not involved in SIP myself but marveled at how complicated the protocol was. I believe RFC 2543 (SIP) was at the time the longest RFC ever published. I had the impression that the complication was due to undue influence from bellheads. But looking back it was probably more due to the necessity of interoperation with the PSTN.
SIP didn't start out complicated. If you go back to the original internet drafts Henning and I wrote, it was really really simple. Well, our original UDP-based SIP was simpler than Henning's TCP-based SCIP, but neither was complicated. It gained a little complexity when we merged our protocols, but it really gained complexity along the way to standardization as people kept wanting to cover telco-like corner cases.
At the start, Microsoft and Intel had backed H.323, and SIP was just us academics doing our own thing. Everyone told us we had no hope against Microsoft and Intel. But SIP started to get traction when the telcos (particularly Internet MCI) started to get interested in VoIP and concluded that SIP (especially SIP proxies) fitted what they wanted better than H.323. The downside of having your main allies be telcos was that telco cruft kept creeping in, until in the end SIP was no longer simple.
That has always been my impression of SIP. Every time I deal with it I think, "There is no way you would invent something this complicated in a greenfield IP-native environment; it is too full of odd gadgets inherited from telco land."
> Microsoft no longer has the mental dominance among developers, or even a sufficiently significant minority of them, to be able to dictate an independently viable culture; instead, they must appeal to the crowd to whom UNIX is the normal way of doing things.
They don't among developers, but they do among system administrators. As far as I can tell, most corporations still run on the Microsoft stack (Windows/Exchange/Office/MS365), although there is some competition from Google.
This is true, and with MS Teams gaining popularity I think Microsoft has a significant "telecom" presence in the days of average users that cannot be ignored.
However, unlike in the early 2000s, a lot of software works despite Windows rather than based on Windows. Microsoft's own software stack integrates with Windows directly most of the time, but if you need something Microsoft doesn't offer, the chances of that software using the specific MS APIs are significantly lower.
Libraries like OpenSSL have replaced CryptoAPI in many places. Video decoding support is nice but why bother learning the MS API when you can just plug in FFMPEG. The Windows GUI API makes it possible to make incredibly fast, light-weight, and responsive applications, but why would you when Qt/Electron are right there?
The .NET GUI ecosystem is the last developer space where I think Microsoft matters all that much, but even there they're losing the battle against team browsers-as-a-GUI-library.
Had Microsoft won then Gmail would be running on Windows Server right now. Instead, Azure advertises support for Google's Kubernetes product when it comes to attracting customers. Between an entire generation growing up with Chromebooks, Apple's MDM slowly growing towards feature parity and Microsoft starting to move AD and other management features to the cloud, I don't know if the monopoly the MS stack holds over business will be around for all that long.
With the way things are developing, I think Excel (the world's most used programmable database that's not a database) will be the last standard MS will be able to use for staying relevant, and even LibreOffice is good enough to do most people's common Excel work these days.
The [mid sized, ~6000 employees] company I work for just switched last week to Teams from Slack. Teams sucks. The reason it is gaining popularity is that Microsoft basically makes it free for companies that are already using things like Office 365. Slack couldn't compete with that, and our IT folks don't care about the experience, they are being judged strictly on the dollars.
Just add it to the very long list of reductions in quality of life we've been subjected to because productivity is difficult to quantify as dollars.
I completely agree: Teams is bad. However, it's the best piece of software that does what it does. It's an unholy Office+Skype+MSN hybrid that integrates incredibly well with the systems many sysadmins already use. Chat seems to have been added as an afterthought with how many people report issues about notifications.
GSuite has similar integration, but it's not as tight. It's also equally terrible at least, but in its own different ways.
However, I'm not exactly optimistic about Slack either. The only good thing about it that I can name is that at least I can bridge it to Matrix easily. The frontends for Slack are so mediocre that I would consider running an internal XMPP/Matrix server before I would recommend Slack to anyone. I'd rather have Teams if I had to make a choice between the two.
There are a couple things that Teams hurts my feelings on, coming recently from Slack. The biggest one, by far, is the terrible Outlook meeting handling. We had a Slack plugin that would reliably tell me about meetings before they started, give me a link to go directly to the zoom call, notify me instantly when I was invited to a new meeting and let me decide on the spot if I wanted to decline or accept. All within Slack. Great for portability.
The other one is notifications. I'm on a Mac, and I have no idea why Teams can't make basic notifications work. Slack would do the red dot on the icon for non-urgent notifications, and then bounce the icon for DMs and mentions. Teams will make a noise, but the majority of the time it will not reliably even put a notification dot on the icon, much less bounce it. Pretty sure it's because I have two instances of Teams running on different computers, so it gets confused.
I'm back to missing meetings sometimes, unfortunately, because Outlook has always been hit-or-miss for me on meeting reminders. It does not help that I have a 49-inch screen, so the little pop-up is way off to the right. When I'm focused, I can miss that for many minutes. And even when I do look in Teams for an Outlook meeting, it actually presents the URL for the Zoom meeting as plain text, not clickable. Sigh. Such a terrible user experience.
Slack has warts, too, but Teams feels like state of the art from about 10 years ago.
Microsoft’s customers are central IT, not the end users.
This strategy is great and is why windows phone crushed apple’s attempt with that laughably keyboardless “i phone”. I expect the same level of success with Teams: users universally consider it shit, but they don’t pay the bill, do they?
> Moreover it's designed to support federated calls between internet domain names, just as email is federated by domain names. Theoretically SIP could have been deployed on the public internet in much the same way email is today, with email addresses reused as SIP identifiers, meaning that you could call an email address directly and without contracting with or paying any intermediary.
What hurdles are there to this happening and phone numbers being replaced with email addresses?
The mother of all network effects. People ask for your phone number based on the PSTN format tel:800-NNN-XXXX. 0% of people have a sip:username@company.com. Such a transition would rely on phone companies voluntarily relinquishing their control of the address space, spending huge sums of money on training customers, to de facto commoditize their service. Every single phone with just a number pad would have to be replaced. Hence this will not happen.
You can already bypass the PSTN by using Facetime, Signal, Whatsapp, etc. But when someone doesn't use the same service or doesn't own a smartphone at all, good luck telling them to set it up or not contact you at all.
Nothing at all. However, I made the mistake, when I set up my SIP system, of using names; it turns out there is a really awkward disconnect between the UI of traditional phones and using names: something about it being very hard to type in a name from a numeric keypad.
However, you don't have to use names; IP addresses work just as well as phone numbers, and you can type them on a traditional phone keypad.
They skip the history of ATM, the bellheads' attempt to transfer the SS7 signalling system to modern technology. But ATM was never more than a transport technology for IP.
ATM was a horrible transport technology for IP. Start with cells carrying 48-byte payloads, then add virtual circuits because those cells don't leave enough room for data. Then make the virtual circuits semi-permanent because setting them up adds too much latency.
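The awkward fit is easy to quantify as the so-called "cell tax". A sketch of the arithmetic, assuming IP carried over AAL5 (53-byte cells, 48 usable bytes each, plus an 8-byte AAL5 trailer per packet):

```python
# "Cell tax" for IP over ATM/AAL5: fraction of wire bytes that are
# not the IP packet itself, due to cell headers and padding.
import math

CELL_SIZE = 53       # bytes on the wire per ATM cell
CELL_PAYLOAD = 48    # usable payload bytes per cell
AAL5_TRAILER = 8     # AAL5 adds an 8-byte trailer per packet

def atm_overhead(ip_packet_bytes: int) -> float:
    """Overhead fraction for one IP packet carried over AAL5."""
    total_payload = ip_packet_bytes + AAL5_TRAILER
    cells = math.ceil(total_payload / CELL_PAYLOAD)  # pad to whole cells
    wire_bytes = cells * CELL_SIZE
    return 1 - ip_packet_bytes / wire_bytes

# A 40-byte TCP ACK fits exactly one cell: 53 wire bytes, 40 useful.
print(round(atm_overhead(40), 3))    # 0.245, i.e. ~25% overhead
# A 1500-byte packet needs 32 cells (1696 wire bytes):
print(round(atm_overhead(1500), 3))  # 0.116, i.e. ~12% overhead
```

Compare that steady ~10-25% tax to Ethernet framing, where the per-packet overhead on a 1500-byte packet is a few percent.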
+ Telcos built massive businesses as monopolies in regulated enviros. Their budgets, people and processes were built in this biome.
+ Meanwhile, their services and OSS/BSS were built in the circuit-switching biome.
Internet was a K-T boundary meteor to this biome. Many telco engineers were brilliant. It didn't matter (1) because the DNA of the business around them could not adapt fast enough to match the speed and execution of the startups which built their DNA in the new biome.
(1) at the business and systems level... plenty of telco tech concepts were adapted into VoIP concepts, and even at lower levels into H.323, SIP and WebRTC.
Simplex SIP is so frustrating coming from my debut in the tech stack as a helpdesk grunt installing, punching down, toning out - analog.
VoIP is best effort and it sucks to talk over each other.
Analog is an actual circuit that completes, not this wishy-washy best-effort internet crap, AND the kicker is I can talk over people and they shut up fast.
I’m not a big fan of what has happened to telephones.
Preferred the legacy on-premises SIP PBX I was sold 72 or 80 months ago. The cloud-hosted bullshit powered by session border controllers needs to stop.
Finally made the swap to cloud PBX and it absolutely sucks ass looking at you Spectrum Enterprise
"Analog is an actual circuit that completes, not this width washy best effort internet crap AND the kicker is I can talk over people and they shut up fast."
And then data communications ate voice communications.
It's a post I wish I'd written. I represented my employer at the time (98-00) at GSMA, MWIF, 3GPP/3GIP. The difference in cultures in the "big room" was a shock to this young engineer and product manager. Of course, the really interesting stuff happened in smaller ante-rooms of the forum. I left that period orders of magnitude less naive about such large bodies, politics, vested interests, lobbying, and the standards-making process in general. In case you hadn't guessed, I was part of the small delegation of "netheads".
Bellheads gave us a network that was almost perfectly reliable, with latency as close to zero as the propagation of electrons through a wire (or light through a pipe) would allow.
Honestly, this is true. In some ways, it reminds me of grow-up technologies v. grow-down technologies [1]; the highly-reliable high-cost technologies ("grow-down" technologies) tend to get out-competed by cheaper and less reliable but good enough technologies ("grow-up" technologies). After those technologies win, the need for the high reliability of the grow-down technologies still exists for some use cases, so the grow-up technology tends to have facilities to allow it to attain that level of reliability tacked on. This generally seems to get the grow-up technology almost, _but not quite_, to the original level of reliability of the grow-down technology. The IP version of this for VoIP is "use a private network, carrier hotels, QoS", etc.
On all aspects but reliability I don't see much to advocate for in the PSTN though. The lack of separation between the network and the applications running over it is particularly awful (the modern materialisation of this issue is that you might be able to call number X, but not SMS it, or vice versa, because the way these different applications are transported and routed is to my knowledge completely different.) We saw what the telco conception of networking looked like with things like ISDN or X.25. I'm pretty happy that vision didn't win.
> On all aspects but reliability I don't see much to advocate for in the PSTN though.
VOIP latency is significantly worse too. Reliability is good enough, IMHO, but I think most people are using 20ms samples and two samples per packet, which is 40ms behind, plus a jitter buffer, etc. On the plus side, nethead routing may be using better routes than bellhead, but that's probably only saving 10ms (if that) on trip from one coast to the other.
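The back-of-the-envelope latency figures above can be sketched out. This is a rough budget only: the 20 ms frame size and two frames per packet come from the comment, while the jitter buffer depth and network transit time are illustrative assumptions, not measurements.

```python
# Rough one-way VoIP latency budget. Frame size and frames-per-packet are
# from the comment above; jitter buffer and network figures are assumed.

frame_ms = 20          # codec frame (sample) size
frames_per_packet = 2  # packetization: two frames per RTP packet
jitter_buffer_ms = 40  # assumed jitter buffer depth
network_ms = 30        # assumed coast-to-coast propagation + queuing

# The sender must buffer a full packet's worth of audio before sending,
# so packetization alone puts the receiver 40 ms behind.
packetization_ms = frame_ms * frames_per_packet

total_ms = packetization_ms + network_ms + jitter_buffer_ms
print(f"packetization delay: {packetization_ms} ms")  # 40 ms
print(f"one-way total:       {total_ms} ms")          # 110 ms
```

Even before any network or buffering delay, the packetization step alone already exceeds the end-to-end latency of a TDM circuit.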
This is a good point. Latency tends to be the Achilles heel of any digital technology not specifically designed to keep latency low. In true "grow-up" vein Ethernet has had to try and improve in this regard with TSN, etc. The IETF also has a 'deterministic networking' WG.
You can find all sorts of dead interconnect technologies which genuinely offered better latency/jitter/etc. than grow-up technologies like Ethernet or USB, like Fibre Channel or Firewire. Interestingly if you go digging, there was once some specification for using Fibre Channel for audio/video transmission... wonder if anyone still has any of that equipment lying around. It's pretty sad how these things die off when they can offer superior performance, but it seems to end up just not being better _enough_.
VoIP latency is definitely a pity. My guess is we could probably improve the situation with specialised codecs which focus on latency rather than bandwidth efficiency, but I can't imagine ever beating the latency of TDM. The synchronicity of TDM networks is certainly one of the more interesting aspects of them; you can read a book about T1/E1 now and be struck by the synchronicity of it all relative to Ethernet. To my knowledge a major motivation for Ethernet's TSN extensions is to allow clock signals to be reliably propagated to cell carrier hardware which is switching from T/E-carriers to Ethernet, and which is accustomed to being able to transfer not just data but a clock reference via their uplinks.
Betamax was technically inferior in one way that mattered quite a lot.
VHS could play a two-hour movie with a single cassette right from the start. Betamax at the time was limited to one hour, so anyone who bought or rented a typical movie of more than one hour but less than two had to switch cassettes midway through. VHS could play the whole thing in one go.
Even for the rare movie that was more than two hours, this meant two VHS cassettes vs. three Betamax cassettes.
Betamax II and III were introduced later, with longer running times due to slower tape speed (and lower quality), but these were too late to the party.
Like digital TV's channel-switch delay, there's some inherent delay to VoIP. It's not noticeable if it's designed correctly. The wheels come off when there are way too many hops and layers and best practices are not followed. For example a customer has their cellphone on speakerphone in a weak signal area while driving, and the employee is using an in-browser softphone on VPN on Wi-Fi on cable Internet. That's when you get 500 ms of delay that's like satellite.
The "Bellhead" network was a machine for extracting as much surplus value as possible while preventing unlicensed innovation. The internet let people invent services without having to pay the monopolist.
A similar thing played out with smartphones; telcos wanted to capture the entire value, until they were outcompeted by Apple capturing all (or at least 30%) of the value through the app store instead.
Government regulation gave you that - it was the price the company had to pay for a monopoly, driven by utility commissions.
I had to take an escalation on-call shift today. I’m sitting on top of a ski mountain right now, outside, on HN waiting for my family after dealing with a problem. Network latency didn’t even enter into my mind in the last month.
In the Bellhead world I’d be sitting in my living room chained to the phone or on a $200/mo phone from my employer close to home. Making our modern technology stack wouldn’t have represented the best return on assets for AT&T and its successor Bell companies. Why would you deliver 500 Mbps service to a mountain?
People generally prefer an 80% functionality solution at 50% cost. Hence MP3 music downloads at 128 Kbps, the same smartphone optimized layout being reused as the desktop website, and Electron apps.
> If you look at the web, it's very much centralized today.
The web is not the network. The network, the Internet, is a collection of decentralized networks peering with one another. There are nearly 90000 ASNs in use, each of which represents one or more networks controlled by a single entity, routed globally via BGP.
I don't know about you, but nearly 90000 entities does not sound very centralized to me.
Your average user will transit at minimum 3 networks to reach any given website. The network of their ISP, the network of at least one transit ISP, and the network of the host for that site (which may also be the destination). As peering relationships for some host networks like Netflix, Google, Facebook, and AWS get deeper, it's possible that they may not need a transit ISP to connect your ISP into their host network. That said, with the exception of Google Fiber customers (which is anyway still in a separate ASN), your ISP and the destination are separate and are inter-networked in a distributed fashion.
While the "web" is relatively centralized (although has a /very/ long tail), the networks that underlay it are not.
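The "at minimum 3 networks" point can be illustrated by counting distinct ASNs in a BGP AS_PATH, as a looking glass or route server would display it. The path below is entirely made up, using ASNs from the documentation range; collapsing repeats models AS-path prepending.

```python
# Toy illustration: count the distinct networks (ASNs) a packet crosses,
# given an AS_PATH string as a looking glass might show it. The path is
# hypothetical, using documentation-range ASNs (64496-64511).

def distinct_asns(as_path: str) -> list[int]:
    """Collapse consecutive repeats (prepending) and return unique ASNs in order."""
    out: list[int] = []
    for token in as_path.split():
        asn = int(token)
        if not out or out[-1] != asn:
            out.append(asn)
    return out

# Hypothetical path: your ISP's upstream view toward a content host.
# 64496 = eyeball ISP, 64497 = transit ISP (prepended), 64511 = host network.
path = "64496 64497 64497 64497 64511"
hops = distinct_asns(path)
print(hops, "->", len(hops), "networks")  # [64496, 64497, 64511] -> 3 networks
```

Deep peering shortens that list: when the host network peers directly with the eyeball ISP, the transit ASN drops out entirely.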
It's actually incredible how accessible BGP is these days. Anyone can get their own "personal" ASN. Try a RIPE LIR even if you're outside Europe, it's easier and cheaper than dealing with ARIN in the US. They'll require you to have a European presence (a VPS counts!) You can then peer with various providers all over the place (either VPS or dedicated), use your IPs there, or tunnel your connectivity back to wherever you want.
I'd much rather deal with the occasional spam message than have monopolies charge everyone through the nose per kilobyte and decide which applications are dignified enough to even make it onto their shiny network.
Also, ironically, the only spam I get these days is via text or robocalls.
I'd gladly get rid of my phone number altogether, if it wasn't for businesses insisting on using it as a primary user identifier, verification method, and communications channel.
I receive at least 10-20 scam, robocall, or hangup calls per day. That’s hardly “occasional”.
At least we both agree on how we would get rid of our phone numbers. But the reason businesses use it as a verification method is because it is hard to get one - and it costs $$$. It’s basically an outsourced, ubiquitous, federated human identity service.
Notice how they all filter out voip numbers - because voip numbers are too cheap to acquire to rely on as a filter. They assume that only humans would pay the monthly charge for a landline or mobile number. So $$$ to pay for phone service is used as the universal “captcha” of last resort.
I think of the difference more as a Kuhnian paradigm shift, rather than as clan warfare.
The whole value chain changed. PSTN is a highly fragile, highly standardized, high expertise technology that was possible using very rudimentary but custom hardware.
IP telephony is essentially a repurposing of commodity IP connectivity, that uses modern commodity hardware (highly complex but well encapsulated) and requires a fraction of the personnel with a lot less years of expertise to work.
> Voice calling between domains without prior contractual relationship, as is the case for email, is nowhere in sight.
This is exactly one of the things that I tried to build first with Communick after my time working at Deutsche Telekom. They spent so much time and money building products around the idea of letting people buy DIDs, or to let people continue pretending that they care about their phone number, and I was there asking "why don't we just provide voice call by SIP and let people create their own addresses? Then you could get customers even from competing phone companies."
In retrospect I wonder if VoIP was worth it. Phone calls now have worse quality. Many of the Internet backbones merged with telcos and got infected with their gold-plating mindset so now the Internet costs more (although it's probably negligible compared to last-mile costs).
In my opinion, it is very important to remember what the "net neutrality" regulations gave us in terms of progress, especially when you recall the fight at that time, with lobbyists pretending very convincingly that it would hinder innovation and drive all actors into bankruptcy!
Once there are universally reachable cellular data networks, the days of the phone network are numbered. A "phone number" is an antiquated concept, and the network has been ruined by spam. Messaging apps will prevail.
There will never be universal coverage. Even in the largest cities there are dead zones.
A car rental business that assumes 100% Internet access availability discovered the problem of customers parking the cars in parking garages (signal dead zones). They did not have an offline activation mode and had to tow the car. When any given customer triggered too many tows, they fired the customer.
The PSTN is not going anywhere. You can choose to cut yourself off from it by refusing to take calls unless it's via an app.
I expect we will see a low-frequency supplemental band carved out to ensure low-bandwidth connectivity in cellars and carparks before too long. You won't be streaming movies over it, but they'll be able to talk to that car.
With all the incumbent users lower than 600 MHz, I don't know how realistic that is. They've already been pushed out of 600/700/800 MHz. Those pushed out won't want to invest in another move.
So far I've received all B2C messages (e.g. dentist appointment reminders, airline boarding notifications, promotional spam etc.) via SMS, and none via any messaging app (and registered with my phone number with almost all of them).
> A "phone number" is an antiquated concept
As much as I wish that was true, my observation is that phone numbers aren't going anywhere, both as an addressing and authentication layer.
For the last 2 years I've received 100% of my B2C messages via email or whatsapp, and none via SMS. I keep a phone number as a DID on a voip service, and can receive SMS through it (to email), but the only actual texts I get are personal (and those from the handful of friends who aren't on telegram or whatsapp). I almost never take phone calls, because they're universally spam; I chat to family & workmates on whatsapp, zoom & gmeet.
> as an addressing and authentication layer
Phone numbers are a horrendous authentication layer. That'll hopefully die sooner than phone numbers for addressing.
The Microsoft paragraph was pointless and wrong in many places. NT as a crossover of DOS and VMS? It is more that the VMS tech virtualized DOS, like a good VMS stack at that time did with many things (as NT did with POSIX, Win32, and OS/2), and underneath ran on Intel, MIPS, Alpha, and whatever the fourth was.
Probably 95% of businesses and 99% of businesses with >200 employees use O365. They aren’t going anywhere except to move more services to Microsoft cloud.
Sure they eventually mandated all-IP core, but the overall approach still reeks of the old telco, voice-dominated approach. They’re still trying to resist the idea of the “dumb network”.
Looking at phone plans in the EU, I'm not sure I'd say net neutrality has won. Many different networks offer plans that make Facebook and a handful of other services free, but not other data.
It’s way too early to declare victory against the business model where you pay extra for different services to the network.
As soon as the networks can re-assert their power they will, they still own the network. The companies that need the network have lost their collective hold on the psyche of Americans.
There are so many streaming services now it’s worse than cable, no one will care if Verizon takes a chunk of Netflix’s revenue.
Even if SHAKEN/STIR was added to SS7 and H.323 it's not clear it would be added to any of the software for the remaining TDM switching hardware. Those platforms are in sustaining support and many of them are unlikely to see any software updates at all in the remaining ~15 years they have before being turned down.
SIP was supposed to be the equivalent of email, but instead we got a bunch of proprietary systems which do not interoperate well or at all (and sometimes even offer SIP as an extra-cost, limited-feature addition).
[In a former life, Director of SWE for Call/Media Processing at a large SIP consumer service provider.]
From our standpoint, Bellhead vs. Nethead was very obvious. But it was also pretty obvious that the Netheads were driven by an egalitarian, idealistic dream that nobody actually wanted.
LECs were the old, fossilized enemy with ponderous protocols driven by processing constraints that no longer existed. A system to track phone number assignments. Another system to track where calls to those numbers should be directed. A system for authenticating outgoing calls. A system for authenticating incoming calls. A system for call control. A system for media control. A system for actually transmitting the media. System, system, system. All good ideas once upon a time, but driven by long-gone resource constraints and most importantly by incremental layering. (Think USB-C x1000.)
On the other hand, the Netheads dreamed of a completely decentralized network. No evil central power controlling a customer and limiting his power. Customers would directly plug endpoints into the Internet, and anyone could directly call anyone. Maybe some common directories, but even less centralized than DNS. NAT? Never.
The result was a system that is in some ways just as complicated as the bad-old telcos'. Per SIP, no one player really has a complete view of the state of a call. It's smeared out between the caller endpoint, the callee, gateways, proxies, registrars. Most of SIP's complexity is in the negotiations between all the parties, detecting inconsistent views, and re-converging. Most of our software headaches were from needing to cajole hardware (much of which we controlled!) to pretty please do what was wanted.
For example: there is a SIP "Best Current Practice" to park a call. It requires a special-purpose service endpoint to merely hold onto the call. The call-control flow diagram for it is... bloated. [1]
In reality, customers didn't want unfettered access to their phones across the Internet. They wanted a centralized service provider who (a) held multiple users and devices under an "account" umbrella, and knew how it was configured ("I want both my phones to be do-not-disturb."), (b) knew global state ("Someone's on phone 1, roll it over to phone 2."), (c) performed verification and filtering of incoming calls ("Oh, that's Alice at internal ext. 251", "What! My local hospital is calling??!!"), (d) served as a shielding proxy (don't tell them my IP). And (e), especially (e) "I just want to plug the blasted box in and have it work. I'm not a phone administrator."
As an implementor, what I really wanted for us-to-customer was a simple "make phone ring", "phone wants to call", "send media", "receive media", and "receive button pushes". For us-to-trunk it would be "initiate call leg", "receive call leg", and send/receive media. Nowadays, audio media would just be Opus. Ah well, I could wish...
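The wished-for minimal us-to-customer surface could be sketched as a handful of primitives. Everything here — the names, the event shapes, the `CustomerLine` class — is hypothetical illustration, not any real SIP stack's API.

```python
# Sketch of the minimal customer-facing call interface the comment wishes
# for. All names and signatures are hypothetical, for illustration only.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CustomerLine:
    """Provider-side handle for one customer device."""
    on_ring: Callable[[str], None] = lambda caller: None
    events: list = field(default_factory=list)

    # --- us-to-customer primitives ---
    def ring(self, caller_id: str):            # "make phone ring"
        self.events.append(("ring", caller_id))
        self.on_ring(caller_id)

    def place_call(self, callee: str):         # "phone wants to call"
        self.events.append(("call", callee))

    def send_media(self, opus_frame: bytes):   # "send media" (audio = Opus)
        self.events.append(("media", opus_frame))

    def button(self, digit: str):              # "receive button pushes"
        self.events.append(("dtmf", digit))

line = CustomerLine()
line.ring("+1-555-0100")
line.button("1")
print(line.events)  # [('ring', '+1-555-0100'), ('dtmf', '1')]
```

The point of the sketch is the contrast: five verbs on the customer side, three on the trunk side, versus SIP's distributed negotiation between endpoints, proxies, and registrars.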
There is a table of contents here: https://www.devever.net/~hl/. This is the most recent post, and the date there matches the bottom of the page (January 27, 2023).
VoIP won, at the cost of making mostly everyone just ignore all calls now. The calls are incredibly cheap, so 99.997% of calls the average person gets are VoIP spam.
Funnily enough, this problem is very localized. I've received two or three spam calls in my life, all of them foreign numbers hanging up immediately and hoping I'd call them back so they can charge excessive service fees.
Call spam is more of a cultural problem than a technological one, the relevant culture being the government's and the telcos' culture of not giving a damn about people when there's money or bribes/"campaign investments" to be made.
Compare, for instance, your average messaging app: WhatsApp/Facetime/Signal/Telegram offer calls for no cost at all, yet spam is almost entirely non-existent there, because these companies have an interest in keeping their customers happy.
I think you are right. My parents' German landline received way more spam calls in the 1990s than now. Now it's almost non-existent, despite the number having been unchanged for 40 years and being listed in public telephone books.
- Unsolicited commercial calls got outlawed.
- Consent to being called for advertising can't be bundled with other consents.
- Foreign telecom networks can't pretend to have a German number. Local providers are required to strip the number, making the call look less trustworthy.
- The per-minute price has to be announced when calling an expensive number.
I'm in the Netherlands, but other European countries seem to report very similar things.
Business phone numbers, especially mobile numbers, get attacked by scammers constantly, but even those aren't always bad enough to warrant a USA-level call filter.
Plenty of other scams (for example, targeting the elderly through their landlines or over WhatsApp) but non-business numbers get very few scam phone calls.
Adding to this, the last time it was mentioned here I started watching SIP SRV requests on my DNS servers and sure enough it's non stop. I added bogus records and the bots follow them and try to connect.
Now I just need to find a medium interaction SIP honeypot / tarpit. I already do this for SMTP. Perhaps there is a way to run Asterisk in some debugging mode to accept anything, never tried.
There are Kamailio setups that just respond to every request with a 200 OK, and then log the source in a DB. If you want to get fancy with that, you could make it reply to invites with 100 Trying and make them wait a bit for whatever timeout they want to have. The problem with tarpitting them, though, is that it is hard to make them tie up more resources than you spend on it. SMTP can abuse TCP to send replies one character at a time with long delays between each one; SIP over TCP doesn't really have the same capability.
With SIP you'll probably need to exploit SIP directly, not the underlying protocol.
Things like redirect loops may confuse callers. With a bit of luck you can redirect the caller to call sip:defaultspamuser@localhost so it calls itself, or redirect it to another bot you've detected to make the spammers bother each other.
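A minimal version of the "answer everything with 200 OK and log the source" approach described above can be sketched in a few lines. This is a toy, not a real SIP implementation: it does no real parsing beyond echoing back the handful of headers a response needs, and the header handling is a best-effort assumption rather than a careful reading of the spec.

```python
# Toy low-interaction SIP honeypot sketch: answer every UDP probe with
# "200 OK" and log the scanner's address. Illustration only; do not rely
# on this for real SIP compliance.

import socket

def sip_ok(request: bytes) -> bytes:
    """Build a 200 OK, echoing back the request's routing/dialog headers."""
    headers = []
    for line in request.decode(errors="replace").split("\r\n"):
        name = line.split(":", 1)[0].lower()
        if name in ("via", "from", "to", "call-id", "cseq"):
            headers.append(line)
    return ("SIP/2.0 200 OK\r\n" + "\r\n".join(headers) +
            "\r\nContent-Length: 0\r\n\r\n").encode()

def serve(host: str = "0.0.0.0", port: int = 5060):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(4096)
        # Log the probe's source and request line for later stats.
        print("scanner:", addr, data.split(b"\r\n", 1)[0])
        sock.sendto(sip_ok(data), addr)
```

Swapping the 200 OK for a 100 Trying (and simply never sending a final response) gives the waiting-game variant, at the cost of keeping a little per-transaction state on the scanner's side only.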
I will give Kamailio a shot. It's at least a starting point so I can get some stats on what a majority of them are doing. It seems Kamailio is already in the Alpine Linux repository so that saves me a step.
An extra "overlay signaling network" has been added: nobody answers a phone call unless they received an email/signal/SMS/slack to let them know it is coming.
I get about 4 or 5 "probably a scam" calls a day. It's incredible. I used to bait the scammers and waste their time, but I think they put me on their shit list so I get even more calls now.