Single Pair Ethernet (SPE) – a leap forward in technology (2020) (digikey.com)
147 points by Brajeshwar on Feb 13, 2022 | 95 comments



So that's three separate single-pair standards that I'm aware of, and I can't get _anyone_ to tell me if they interoperate. Perhaps they're different names for the same thing? Perhaps the wheel just keeps being reinvented?

One is the Single Pair Ethernet industrial partner network mentioned in this post:

https://www.single-pair-ethernet.com/en

which seems to be standardized as 802.3cg and calls itself 10base-T1L.

The second is the One-Pair EtherNet (OPEN) Alliance:

http://www.opensig.org/

which seems to be standardized as 802.3bw (100base-T1), with 1000base-T1 following under 802.3bp. I believe this is the one I see deployed in production cars today.

The third is HDMI Ethernet, which rides pins 14 and 19 of the HDMI connector. It is absolutely impossible to find information on. Does it have a standard? Does it use a standard MAC? Where can I buy the PHY?

What a stinking mess.


100base-T1 and 1000base-T1 are indeed the old OPEN, and are indeed what's deployed in production cars. 100 Mb PHYs are available from multiple vendors, and are mature and quite pleasant to work with (built-in TDR for detecting wiring issues, etc). 1000 Mb PHYs are rarer (I've only seen Broadcom in person), but it's clearly coming. The 100- and 1000- protocols are pretty different; both are really focused on allowing cheap, crappy wiring, but this is hard enough for 1000- that things like FEC are added. The equivalent of PoE for -T1 is PoDL (power over data lines), which is much more civilized than PoE (no transformers); and since the 1000- signal spectrum starts at a higher frequency than 100-, it's significantly easier to get power over it with smaller inductors.

I'm not too familiar with the current state of -T1L, but the little I've seen about it in the past was a focus on taking the advantages of -T1 (low wiring cost, simplicity, robustness, PoDL) and applying them to industrial applications, where MUCH longer cable runs than in automotive are desired. While I was following it, this was mostly done by adding a 10base-T1L physical layer, giving up an order of magnitude of speed in exchange for an order of magnitude or two more run length. Don't know if there's any intent to have faster -T1Ls; haven't followed that far.

10base-T1L, 100base-T1, and 1000base-T1 are all pretty different physical layers, although they share some principles. I /think/ the Broadcom 1000base-T1 PHY can run in 100base-T1 mode, and maybe even auto-negotiate (yuck), but they're quite different under the hood.


Thanks for sharing your experience! What is the advantage of using single pair ethernet in, for example, cars? My company owns a largish drone and it has a regular CAT5 network, so surely it can't be weight or size.


I've spent ten years designing electronics for cars. It's very often harness reduction under various motivations. The bundle needs to be small to fit through chassis holes (which are kept small to keep structures strong). Keeping the pin count low increases available connector choices and reduces connector costs (especially where they must be waterproof). Off in the distant corners of minds is weight.

Your drone would probably do the same if the manufacturer had more engineers to spend on optimizing the design. But it's probably also sold in lower volume than car parts. Alternatively, maybe it's using CAT5 in a nonstandard way. For example, maybe two links or non-PoE power or even not Ethernet at all.


How is communication done in modern cars? Is there a single cable run from the ECU in the front to the cabin and one to the back, and then some kind of switch connects stuff like the gauge cluster, AC, buttons, etc. in the cabin, and lights, cameras, etc. in the back? How about fiber cable, is there any movement towards that? Some cars have fancy 360 camera systems that I imagine can use a lot of bandwidth.


Yes, pretty much. There are usually 5-10 different CAN or FlexRay strings, each for a "topic" (Powertrain, Convenience / Controls, Sensors, Infotainment, etc.), all linked together by a Gateway module.

Then for high bandwidth needs there is a separate set of buses, usually optical. For audio this has been fiber-optic MOST for years. For video it's a grab-bag of weird optical standards with occasional LVDS links being used for shorter runs.

Besides lights which are still often driven by a centralized Body Control Module with a huge number of wires coming off of it, most analog wiring is localized to a much smaller area than in old cars. For example, now you'll have a Door Control Module which connects to the window switches, locks, mirrors, and window motor, instead of a bunch of wires running between the door and relays under the dashboard.


Cars use CAN for most controller networks. It can go up to 1 Mbps, but a lot of the time it runs at maybe 250 kbps. It has the advantage of being broadcast, so all devices get messages at the same time. The wiring tends to have noise-reduction properties and floating reference voltages, as I recall. For cameras I'd imagine they use something else, for bandwidth and ease-of-use reasons.


CAN is content addressed rather than sender/receiver addressed. The address is also used to arbitrate transmit priority on every single message frame. This allows for messages to be sent with well controlled latency, which is important for stuff like drive-by-wire steering and braking.
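To make the content-addressing idea concrete, here's a minimal Linux SocketCAN sketch (assuming a Linux box with a can0 interface; real ECUs use vendor CAN stacks, so this is illustrative only): the node subscribes to a message ID rather than to a particular sender, and that same ID doubles as the arbitration priority.

```c
/* Minimal SocketCAN sketch: "subscribe" to CAN ID 0x123 on can0.
 * Illustrative only -- real ECUs use vendor CAN drivers, not Linux sockets. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { .can_family = AF_CAN,
                                 .can_ifindex = ifr.ifr_ifindex };
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    /* Content addressing: accept only frames whose 11-bit ID is 0x123.
     * The ID also sets arbitration priority -- a lower ID wins the bus. */
    struct can_filter flt = { .can_id = 0x123, .can_mask = CAN_SFF_MASK };
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_FILTER, &flt, sizeof(flt));

    struct can_frame frame;
    if (read(s, &frame, sizeof(frame)) > 0)
        printf("ID 0x%03X, %d bytes\n", (unsigned)frame.can_id, frame.can_dlc);
    return 0;
}
```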

CAN bus wiring isn't floating, in the sense that it isn't isolated. The transceivers are required to deal with a large common-mode voltage range, though, and there are isolated transceivers for specific use cases. The noise-reduction properties are simply that the digital signal is encoded as the difference in voltages between two wires, and those wires are supposed to be twisted together. Also, the typical behavior is for a CAN controller peripheral to automatically re-transmit frames that didn't make it onto the bus, for example if there was so much noise that a receiving node got a CRC mismatch.

A video feed from a camera would be way too much bandwidth to run over a typical CAN setup. For reverse cameras, it's probably HDMI or something. For more elaborate safety and convenience systems, it's likely either close enough to the computer that it's a parallel bus, or far enough from the computer that it's a differential high speed interconnect that's qualitatively similar to HDMI.


There are a lot of approaches for various subsystems. The cameras that you mention are quite likely some kind of serialized MIPI (e.g., using some generation of FPD-Link or GMSL to tunnel asymmetrical bidirectional data across a micro-coax cable). An SoC somewhere (often the same one driving the display) will receive the deserialized MIPI streams and do whatever video processing is desired and display the output stream. The display is typically eDP or sometimes HDMI.

Old-school hardware that doesn't have large bandwidth requirements will usually use CAN, a multi-drop bus that is capable of up to 1000 kbps but is typically used at 125, 250, or 500 kbps. Its advantage is simple UTP cabling. Its disadvantage is that it's a planned bus - you need to have termination resistors in known locations and you have to plan the addresses of devices in advance. It's kind of brittle and the tiny packet size means folks jump through hoops to send larger blobs of data. This is probably one of the great barriers to being able to buy some random car part from a manufacturer as Joe Q Public and use it. The OEM is used to making lightly-customized firmware for every customer.
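One of those hoops, roughly sketched: a transport layer such as ISO-TP splits a blob into a first frame carrying the total length, followed by consecutive frames carrying 7 bytes each with a rolling sequence number. The sender side might look something like this (simplified; the real ISO 15765-2 adds flow control, timing, and padding rules):

```c
/* Simplified ISO-TP-style segmentation of a blob into 8-byte CAN frames.
 * Real ISO 15765-2 adds flow control, timeouts, and padding rules. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void send_frame(const uint8_t f[8]) {   /* stand-in for a CAN driver call */
    for (int i = 0; i < 8; i++) printf("%02X ", f[i]);
    printf("\n");
}

static void isotp_send(const uint8_t *data, uint16_t len) {
    uint8_t f[8];
    if (len <= 7) {                      /* single frame: PCI 0x0N = length N */
        f[0] = (uint8_t)len;
        memcpy(f + 1, data, len);
        memset(f + 1 + len, 0xAA, 7 - len);
        send_frame(f);
        return;
    }
    /* first frame: PCI 0x1LLL = 12-bit total length, then 6 data bytes */
    f[0] = 0x10 | (uint8_t)(len >> 8);
    f[1] = (uint8_t)(len & 0xFF);
    memcpy(f + 2, data, 6);
    send_frame(f);
    uint16_t off = 6;
    uint8_t seq = 1;
    while (off < len) {                  /* consecutive frames: PCI 0x2S, 7 bytes each */
        uint16_t n = (uint16_t)((len - off > 7) ? 7 : (len - off));
        f[0] = 0x20 | (seq & 0x0F);
        memcpy(f + 1, data + off, n);
        memset(f + 1 + n, 0xAA, 7 - n);
        send_frame(f);
        off += n;
        seq++;
    }
}

int main(void) {
    uint8_t blob[20];
    for (int i = 0; i < 20; i++) blob[i] = (uint8_t)i;
    isotp_send(blob, sizeof(blob));      /* 20 bytes -> 1 first frame + 2 consecutive frames */
    return 0;
}
```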

A typical car is a mix of hub-and-spoke topology and a daisy chain. There will be a handful of CAN buses that go to various devices around the car. You might have a CAN-connected cabin controller that has a bunch of motor control chips for seat adjustments and seat heaters or window heaters and which daisy chains with door controllers and mirror controllers on the cabin bus. There might be another daisy chain in the engine compartment; one device might control the HVAC system and another controls the engine. Usually the hub is some higher-performance device or at minimum a gateway that taps each of the various low-speed buses and is used to provide a firmware flashing and diagnostic gateway. That's increasingly frequently the same computer that runs the infotainment system. So let's say you're at the end of the factory line, you can plug in a thumb drive via USB or drop an Ethernet connection to the car and rapidly upload all the firmware blobs to the infotainment computer and it'll run a program that passes all of them along to the various widgets.

10Base-T1{S,L} might start to displace CAN in some of these widgets as long as MCUs and PHYs are very cheap. 100Base-T1 has already seen some adoption in things like backup cameras that use chips that also implement video compression and streaming. I believe BMW and Mercedes have been using standard Ethernet for their diagnostic gateway since ~2007, bridging to 100Base-T1 internally between their higher-bandwidth widgets. Personally, I'd love it if everything ended up on faster IP networks and no longer needed custom firmware on a per-implementation basis.

As of a few months ago, I am no longer in the automotive business.


Having single-pair vs. cat5 is kind of a bonus, for wiring simplicity and weight. I could also see the power delivery part being useful for things like sensors and other oddly placed/small powered devices.

The main benefit is Ethernet. It has already started, but in the next few model years of cars, there is a large transition happening from CAN -> Ethernet as the primary communication bus. The simplest reason for that is software + bandwidth + integration.

Implementing semantics like an RPC call across two modules in a car via CAN is kind of ugly and almost always results in awkward software interfaces and extremely inflexible implementations. Whereas with Ethernet you can now do something like use gRPC and protobufs, which is entirely flexible and results in well-defined/formed software interfaces. CAN will still be used for a while for what it is best at: low-level, reliable, low-bandwidth chatty data interfaces that make sense for electro-mechanical parts - especially those that don't have (or need) anything more than a very bare-bones microcontroller. Cost is still a huge factor in automotive design, because a $1 difference x 1 million cars blah blah it makes a difference.

Bandwidth is obvious. CAN can't do video, and in practice can't do audio either (although it's theoretically possible, I guess). Cars have lots of audio and video devices now, and it is unnecessarily complicated to always have command/control and media data on separate buses. Imagine if your computer used one physical network interface for HTTP requests and another interface for streaming video. Sure, it'd work fine, but you'd have a bunch of extra cables, ports, hardware, and software complexity to make it work.


Surprised it isn't running a CANBUS system.


CAN is slow compared to Ethernet and really optimized for tiny payload sizes, so for plenty of purposes the former isn't that useful. (Which is also why Ethernet is pushing into automotive.)


> I think Broadcom 1000base-T1 PHY can run in 100base-T1 mode

Marvell claims their 88Q211x PHYs can do that. No auto-negotiation.

[0] https://www.marvell.com/content/dam/marvell/en/public-collat...


Putting HDMI ethernet channel thoughts here, since I have no first-hand experience with it. My understanding is that it's basically 100base-TX under the covers, and that's vaguely confirmed by [1] and others. The big difference, as you mentioned, is that it uses a single pair, instead of separate RX/TX pairs like 100base-TX usually does. Rather than doing the 1000base-T trick (share the lines with echo cancellation), it looks like it uses an analog hybrid [2]. Still never seen it in the wild though.

[1] https://www.semiconn.com/ep91h1 [2] https://en.wikipedia.org/wiki/Telephone_hybrid


One of the key differentiators is that most of these standards are point-to-point--they only connect 2 devices. So, it doesn't work very well for replacing things like CAN.

10Base-T1S is multidrop on single pair--which is basically a CAN replacement.

Unfortunately, I haven't seen much movement at all in the T1S space.


It's able to supply power over the same two wires (in both the point-to-point case and the multidrop case); IEEE calls it PoDL (power over data line), and I think it supports about 100W in total for one pair. Even in the point-to-point case, this could offset the demerit of losing multidrop. Multidrop with 100Mbps and 100W of power would be a gamechanger.

About 2 years ago I tried to make a prototype using 100baseT1 with PoDL but most of the components weren't on the market yet. I did manage to get samples of some but not all parts, and gave up and kept using this[1] tiny switch. There's also a 1000base-T version.

One thing to note about using ethernet in embedded applications, and also the reason why you can't find many small microcontrollers that support it, is that ethernet uses a lot of power. About 0.5W per port (meaning 1.0W per link) for 100Mbps, 0.7W per port for 1Gbps, and 4W per port for 10Gbps. There are low-power standards that work by idling when not in use, but I've yet to see any devices that support them. The 2-wire standards are designed for much, much lower power consumption.

[1]https://gadgetsmyth.com/product/ethos-lite/


I have yet to see a reasonable case for CAN replacement, as it's good enough. Maybe we'll get more wireless connectivity from the IoT space, but that's probably not immediate.

There's really no goal for these solutions to do anything other than PtP.


CAN is lousy if your messages are larger than 8 bytes. And, invariably, your messages eventually end up larger than 8 bytes. Then you have to deal with CANopen and your life becomes hell.

I also find some of the CAN architecture decisions kind of questionable nowadays. A lot of stuff like ACK and retransmit in the hardware made sense back when microcontrollers were expensive. Nowadays, it's just something that chip implementors are likely to screw up that you can't fix with software.


Module reflashing. The effective data rate of a 1Mbps CAN network is about 180kbps of payload. Barely faster than dialup. And CAN FD doesn't improve things as much as you'd think, because all the overhead still runs at 1Mbps.
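Rough back-of-envelope for where a number in that ballpark comes from (my own arithmetic, not the parent's): a classic CAN frame with an 11-bit ID and a full 8-byte payload is about 111 bits on the wire before bit stuffing and approaches ~135 bits worst case with stuffing, so even a fully saturated 1 Mbps bus moves well under 600 kbps of raw payload; layer ISO-TP framing, flow control, and realistic bus loading on top and you land in the low hundreds of kbps.

```c
/* Back-of-envelope CAN throughput, classic 11-bit frame with 8 data bytes.
 * Field sizes: SOF 1, ID 11, RTR 1, IDE 1, r0 1, DLC 4, data 64,
 * CRC 15, CRC delim 1, ACK 1, ACK delim 1, EOF 7, interframe space 3. */
#include <stdio.h>

int main(void) {
    int payload_bits = 64;
    int frame_bits   = 1 + 11 + 1 + 1 + 1 + 4 + 64 + 15 + 1 + 1 + 1 + 7 + 3; /* = 111 */
    int stuffable    = 1 + 11 + 1 + 1 + 1 + 4 + 64 + 15;   /* stuffing applies SOF..CRC = 98 */
    int worst_stuff  = (stuffable - 1) / 4;                 /* ~24 extra bits worst case */

    double best  = 1e6 * payload_bits / frame_bits;                 /* ~577 kbps */
    double worst = 1e6 * payload_bits / (frame_bits + worst_stuff); /* ~474 kbps */
    printf("raw payload on a saturated 1 Mbps bus: %.0f - %.0f kbps\n",
           worst / 1e3, best / 1e3);

    /* Reflashing moves data via ISO-TP: only 7 useful bytes per 8-byte frame,
       plus flow-control frames, plus you rarely get 100% bus utilization.
       At ~180 kbps of useful payload, a 4 MB image already takes ~3 minutes: */
    double rate_bps = 180e3;
    printf("4 MB at ~180 kbps: %.0f s per module\n", 4.0 * 8e6 / rate_bps);
    return 0;
}
```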

If you've ever moved a few meg of firmware over CAN, you know how agonizingly slow that turns out to be in practice. Which is fine enough in development, okay, go get a coffee and chat with the other engineers for a bit, it'll be done soon enough. But in the field, incentives change:

At the dealer, reflashes are usually not paid by the customer, so time is money in the negative sense. Get the car done as fast as possible and go back to work that pays.

For over-the-air reflashing, it's even worse. You have to guess that the car is gonna be parked for a while (or get user consent), and perform the reflash before they change their mind and decide to go for a drive again. A 2-minute reboot on a phone is obnoxious enough, a 20-minute reflash for one module is a death knell for OTA, and if you need to update several modules at once in order for their versions to be in sync, that can well take an hour or longer. Absolutely unacceptable.

PtP is fine because it's just one pair, and you're running back to a gateway module anyway. The gateway module houses the flash memory lake that stages the OTA update packages. When you get the go-signal, blast those out to all the affected modules, finish in 2 minutes, and the car is driveable again.

This is HUGE. Everyone on HN mocks automakers-other-than-tesla for not having OTA figured out yet, and Ethernet is a key enabling technology for it.


The use case for 10BASE-T1S isn't quite clear to me. I get that you can do multidrop ethernet, which would simplify things a lot; no need to add switches or daisy chain anymore. And it's also not too slow, and you only need two wires.

But are microcontrollers with ethernet common? How cheap are they? A MAC needs a lot of pins compared to CAN, which will increase footprint. And to really take full advantage of ethernet, you need a good networking stack and a solid set of utilities and applications on top. So Linux is the way to go in that case. But then you need a big processor with external RAM, eMMC, a PMIC, a multi-layer board, and don't forget a couple of Linux weirdos (speaking as one) who can actually develop in that environment. A lot of embedded guys can understand CAN pretty easily; show them gRPC and their heads may explode.

So I guess the use case is: embedded system with a lot of Linux boards that don't need that much bandwidth? Does that accurately reflect the architecture of a modern car?


> But are microcontrollers with ethernet common? How cheap are they?

Define "common". However, they are available. I see some right around $4 from ST.

> A MAC needs a lot of pins compared to CAN which will increase footprint.

Normally the pins on the connector are far more critical than the pins on the microcontroller. RMII takes up 9 pins on the microcontroller.

> And to really take full advantage of ethernet, you need a good networking stack and a solid set of utilities and applications on top. So Linux is the way to go in that case. But then you need a big processor with external RAM, eMMC, PMIC, several layer board, and don't forget a couple of Linux weirdos (speaking as one) who can actually develop in that environment. A lot of embedded guys can understand CAN pretty easily; show them gRPC and their heads may explode.

You are WAY too high-level. UDP works just fine for a lot of applications and takes next to nothing in terms of resources. And TCP doesn't require as many resources as you think (remember TCP dates back to the 1980s, when memory was scarce). Even straight HTTP isn't too bad.
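To make that concrete, here's a bare-bones UDP sender in plain BSD sockets (the same sockets API lwIP exposes on small RTOS/MCU targets); the address, port, and payload are made up for illustration:

```c
/* Minimal UDP telemetry sender -- BSD sockets, also available via lwIP's
 * sockets API on small RTOS/MCU targets. Address, port, payload are made up. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9000);
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);

    const char msg[] = "wheel_speed_fl=1432";   /* fictitious sensor reading */
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));

    close(s);
    return 0;
}
```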

The biggest problem is HTTPS. That invokes a whole crapload of overhead.


CAN doesn't scale to high data rates.


But the underlying RS-485 has nice noise rejection and is dead easy to manage.


Are you confusing CAN with ModBus-RTU perhaps? The wires are similar, but RS485 uses differential encoding for both 0 and 1 while CAN only uses it for 1. In CAN, 0 is represented by lack of difference between the two wires.


Correcting myself, actually CAN is the opposite: 1 (recessive state) is lack of difference, 0 (dominant state) is presence.


Yes, I more recently used modbus and lazily conflated them in my memory.


Lots of the PHYs I've seen are 2-port switches for daisy-chaining. Cost about 50% more.


> One is the Single Pair Ethernet industrial partner network mentioned in this post:

> which seems to be standardized as 802.3cg and calls itself 10base-T1L.

> The second is the One-Pair EtherNet (OPEN) Alliance:

> What a stinking mess.

One is high-speed point-to-point, to replace expensive proprietary crap like MOST, FlexRay, etc. The other is to replace slow buses, like CAN bus and Modbus.


There's also EFM/2BASE-TL which was a sort of IEEE rebadge of the ITU DSL standards with the OAM layer replaced with Ethernet OAM. I never quite figured out if there was any point to it beyond NIH.


For what it's worth, I clicked some links on the posted page and the first datasheet I opened was for a 10base-T1 transceiver. So I believe this "single pair ethernet" encompasses both T1 and T1L, with L probably standing for "long distance". It's confusing, but it is nice to have a family of standards that hopefully work the same but have different physical layers for different use cases.


The key differentiator here is reach.

10Base-T1L (the subject of the article) is really an industrial/sensing application and supports cables up to 1km in length.

The 100Base-T1 and 1000Base-T1 standards are targeting the transportation market with cables <50m in length.


The article states that the technology being referred to is 1Gbps, plus power, over a distance of up to 40m.

(Although it is an 18 month old article, so things may have changed since).


Good luck finding any consumer products that ever supported HDMI Ethernet Channel (HEC). In contrast, HEC capable HDMI cables with pins 14 and 19 in a twisted pair were widely available to the confusion of many.

Audio Return Channel (ARC) uses the same pins as HEC. Enhanced ARC (eARC), available on some newer TVs and AV Receivers/preamps/soundbars, takes advantage of the twisted pair to support greater audio bandwidth and surround channels (ARC was limited to 5.1), so it’s a happy ending for folks who bought HEC compatible HDMI cables.


T1S is a thing, but the only chip on the market is bugged beyond being usable, and Microchip wants $4 for it.


>Is SPE Replacing Existing Ethernet Infrastructure?

>Most likely not. In the current state of development and standardization, SPE transmits 1 GBit/s only up to a distance of 40 meters. 8-wire Ethernet, however, up to 100 meters transmission length. In addition, a large number of interfaces and devices would have to be converted, which is not necessary. Ethernet via just one pair of wires is used where advantages arise through enormous space and weight savings. These are, for example, applications in the railway industry. Here, less weight means enormous cost saving potential.

Interestingly, as I was looking into single pair ethernet, I found there is even a standard for 2.5Gbps and 5Gbps Ethernet to bring single-pair working within 15m.

[1] https://blog.siemon.com/standards/ieee-p802-3ch-multi-gig-au...


Do you get to send in both directions over the same pair simultaneously, like DSL, or do you take turns, like coax Ethernet?


The technical term is full duplex if you want to look it up.


Simultaneously, like 1000base-T.


That uses 4 wires. I was curious to find out how it's done with two. Sites promoting Single Pair Ethernet are amazingly unhelpful.

The relevant standard is IEEE Std 802.3ch™‐2020 [1]. This is actually the automotive standard for 10Gb/s over a single pair, but the industrial version seems to be the same thing with different connectors. The relevant section is 149.1.3, "Operation of 2.5GBASE-T1, 5GBASE-T1, and 10GBASE-T1".

It's a single twisted pair of wires, but with a grounded shielding braid. It's not an unshielded untwisted pair, like DSL, or unshielded twisted pairs, like CAT 5 and up. The shield is grounded through the connector. That keeps the noise level down to where 10Gb/s can work. The cables will be more expensive and will be harder to make up than CAT 5, etc. Cable length limits are rather short.

It uses a modem, of course. The basic signal is 4-level pulse amplitude modulation. There's echo cancellation, so sending doesn't blind receiving. That's how it does full duplex over one pair. One end is master, the other is slave, and this is negotiated automatically at startup. The master end provides the bit clock, and the slave end synchronizes to it. That simplifies echo cancellation; the interference is the same on every bit. (OK, symbol.) At startup there's a link characterization process, where each end finds out exactly what voltage levels it is getting from the other end, and adjusts for that. That's called "training" for historical reasons. There's 340 bits of forward error correction, because crosstalk from other cables might be a problem.

Although this is nominally a baseband system, actually all the information is in the 1GHz - 4 GHz part of the spectrum. The data is, as with most modems, XORed with a cyclic scramble pattern to eliminate the DC component. That leaves room for DC power over the same wires.

One cute feature - if the two wires are reversed, that's supposed to be detected at startup and corrected. But the connector, which came after the PHY layer spec, isn't reversible.

[1] https://standards.ieee.org/ieee/802.3ch/7058/ Requires an IEEE account.
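For a feel of what "4-level pulse amplitude modulation" means in practice, here's a toy PAM-4 mapper: two bits per symbol onto the levels {-3, -1, +1, +3}. This is just the generic idea with a Gray mapping I picked for illustration; the actual 802.3ch symbol mapping, precoding, and FEC framing are in the standard.

```c
/* Toy PAM-4 mapper: 2 bits -> one of 4 amplitude levels.
 * Gray mapping chosen for illustration only; see 802.3ch for the real one. */
#include <stdio.h>
#include <stdint.h>

static int pam4_level(unsigned two_bits) {
    static const int levels[4] = { -3, -1, +1, +3 };      /* bits 00, 01, 11, 10 (Gray) */
    static const unsigned gray_index[4] = { 0, 1, 3, 2 }; /* bit value -> level index */
    return levels[gray_index[two_bits & 3]];
}

int main(void) {
    uint8_t byte = 0xB4;                 /* 1011 0100 */
    /* One byte becomes four symbols, so the symbol rate is roughly
       one quarter of the bit rate (before FEC/framing overhead). */
    for (int i = 6; i >= 0; i -= 2) {
        unsigned two_bits = (byte >> i) & 3;
        printf("bits %u%u -> level %+d\n",
               (two_bits >> 1) & 1, two_bits & 1, pam4_level(two_bits));
    }
    return 0;
}
```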


1000BASE-T uses 4 pairs and each pair is simultaneously used in both directions. 250Mbps per pair, 1Gbps total, full duplex.

You could trivially chop the 1000BASE-T standard down to 1 pair and get 250Mbps full duplex. That would be similar to what SPE does. Everything you just described applies to bog standard 1000BASE-T too - PAM, echo cancellation, error correction, scrambling, etc.


I didn't realize that 1000BASE-T used the pairs bidirectionally. Thanks.


Stupid question, how would you transmit the XOR scrambling pattern?


I haven't looked it up for that standard, but the pattern is often built into the protocol: it doesn't matter if it repeats after a while.

That topic reminds me of the interesting visualization posted for GPS 26 days ago: https://ciechanow.ski/gps/ (see "chips")


It's a good question. There are a couple of ways to do it:

https://en.wikipedia.org/wiki/Scrambler#Types_of_scramblers
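To make the second option (a multiplicative / self-synchronizing scrambler) concrete: the transmitter XORs each data bit with taps of its own previous output, and the receiver applies the same taps to the bits it receives, so no pattern ever needs to be sent and the descrambler locks on after a handful of bits no matter where it starts listening. A toy version with the x^7 + x^4 + 1 polynomial (a classic small choice, not the one any particular Ethernet PHY uses):

```c
/* Toy self-synchronizing scrambler/descrambler, polynomial x^7 + x^4 + 1.
 * Illustrative only -- real PHYs use much longer polynomials. */
#include <stdio.h>
#include <stdint.h>

/* Scramble: output = data ^ out[n-7] ^ out[n-4]; shift the output into the state. */
static uint8_t scramble_bit(uint8_t bit, uint8_t *state) {
    uint8_t out = bit ^ ((*state >> 6) & 1) ^ ((*state >> 3) & 1);
    *state = (uint8_t)((*state << 1) | out) & 0x7F;
    return out;
}

/* Descramble: same taps, but the shift register is fed with the RECEIVED
 * (scrambled) bit -- which is why no seed ever needs to be transmitted. */
static uint8_t descramble_bit(uint8_t bit, uint8_t *state) {
    uint8_t out = bit ^ ((*state >> 6) & 1) ^ ((*state >> 3) & 1);
    *state = (uint8_t)((*state << 1) | bit) & 0x7F;
    return out;
}

int main(void) {
    uint8_t tx_state = 0x5A;   /* arbitrary seed */
    uint8_t rx_state = 0x00;   /* receiver starts with a DIFFERENT state */
    const char *msg = "1101001110100110110100111010011011010011";

    printf("in :  %s\nout:  ", msg);
    for (const char *p = msg; *p; p++) {
        uint8_t s = scramble_bit((uint8_t)(*p - '0'), &tx_state);
        uint8_t d = descramble_bit(s, &rx_state);
        putchar('0' + d);      /* matches the input after the first 7 bits */
    }
    putchar('\n');
    return 0;
}
```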


10base-t1l would be very useful for me, if there were consumer priced products available. I've got a single extra pair running to a gate controller that can do 10baseT ethernet, which would be a perfect fit for this as a bridge.

With the power over data line standard, it could be super easy. One side has a power input, an RJ45 connector, and a two-pin screwdown (or whatever); the other side just the RJ45 and the two-pin. There's a product out there that's using an analog hybrid to run 10baseT over a single pair, but my run length seems to be just a bit too long for it to work.



You could also run SDSL modems, which have been the traditional way to do this for years. They'll adapt to the line's properties and give you as much throughput as they can. I'm not sure if any commercial offerings have a way to superimpose power, but I presume there should be some sort of power available at the gate too, no?


> presume there should be some sort of power available at the gate too, no?

There is and there isn't. The gate controller does have power, but the manual says not to tap into it, and it's fed low voltage a/c from a transformer elsewhere. The SDSL modems are usually too big to fit inside the box with the controller too (and it's metal, which makes wifi not an option). And, of course, it's only nice to have, so limited desire to make things fit or drill holes for antennas.


You are likely going to be waiting years for this. In the meantime, a directional CPE pair[0] as a WiFi bridge would probably do the trick for you in almost any scenario. You can run 24v down the single pair for power, if needed.

Then again, for a gate controller, I would probably prefer to see if it used 900/433mhz and do that instead directly from the house instead of doing a 5GHz bridge. You could also use the existing pair as a serial bridge, or even an antenna for the 433/900.

0: https://store.ui.com/collections/operator-airmax-devices/pro...


So coax is back, but it’s not coax. But 1000Base-2 sounds nice.


Could I use passive devices to break out a single run into 4 RJ45 connections, and so move the switches back to my wiring cupboard but keep multiple devices in the remote rooms? I only wired one RJ45 per (bed)room.

Domestic, so <20m runs of cat5e/cat6. Don't need more than 100mbps per host.


If you don't need more than 100mbps, you can run two parallel 10/100 connections per wire. There are dedicated splitters for this on AliExpress.


Yes, the cabling will handle it, but you'll need a media converter (e.g. [1]) for each device, so probably not a win for most use cases.

[1] https://phytools.com/products/100base-t1-media-converter


> media converter

A proper 4-end-device-port media converter:

https://www.amazon.com/NETGEAR-5-Port-Gigabit-Ethernet-Unman...


No win. The device needs power. I'd hoped for a passive splitter.


There is the UniFi NanoSwitch[0] if a dumb switch with passthrough power will work for your usecase. Works on anything 12-24v.

0: https://store.ui.com/collections/operator-airmax-and-ltu-acc...


Yeah, PoE makes this sort of thing much easier. The Unifi Switch Flex Mini [1] is even cheaper, but requires "real" PoE (not passive 24V), and doesn't pass through PoE. The various in-wall access points [2] are PoE powered, PoE pass-through, flush mount to a wall plate, and have an integrated four-port switch -- depending on use case the WiFi is almost just a bonus. If you have ethernet cabling in your place, it's definitely worth putting PoE on it for the options it opens up.

[1] https://store.ui.com/collections/unifi-network-switching/pro... [2] https://store.ui.com/collections/unifi-network-wireless/prod...


I'd be interested in connecting two computers over existing unused POTS wiring. Would this even work on the existing wiring? I believe it's simply 22 gauge wire.

I can't seem to find any charts (BW vs length) for SPE.



All of this mess of absurd mutually incompatible standards for single pair on copper is never seen in the ISP world, if we want a single pair of anything we use 9/125 singlemode fiber.


This would be useful for the gigabit runs in my house that aren't quite up to spec. Usually it's a single pair that was damaged by a finishing contractor.


I remember 3 years ago I moved houses and had 1 jack with really bad performance. Looked in the attic and found that somebody had (for no reason) spliced the cable. With wire nuts. 1 per wire.


I just moved into a house where the contractor thought you could daisy chain ethernet like a standard phone line. The ports at the end of the chain did work amazingly, but unsurprisingly, the ones in the middle with tens of feet of unterminated stubs... Didn't.


That is kind of amazing that it worked at all.


10/100BaseT is quite a robust one.

I did a Y-splice for three computers, without a switch/hub, and it worked.

I once saw a damaged link with 50% loss on a 3Com 905 at the server end; RDP was unusable. Replaced it with an RTL8139 (the other side was the same lost-to-time noname unmanaged switch) - less than 5% loss with ICMP ping, and RDP worked fine.


Same here, house was built around 1900 and they really never intended you to run anything more than power... Basically impossible to get anything else through without tearing up the flooring/wall, and the rooms are askew too.

Damaged a CAT-8 a little while ago, trying to pull it into my office.


Why spend on cat8? You have that bandwidth on a home connection? Or a nas capable of serving it?


Given the effort required to put cabling in (e.g. mine is underneath plaster), future-proofing it as best you can could save aggravation over the kind of timeframe that people live in houses for.


The best future proofing is running a conduit, not just copper.


> future proofing

The best future-proofing thing you can do in a house you're living in is lay down at least 2x cables per drop.


It wasn't that much more expensive and I knew that it was going to be such a pain that I was happy to do it once and future-proof. Probably won't max it out in my lifetime, but it's nice to have options.


PowerLine adapters exist.

Not the best solution, and usually quite pricey, but it works. If your power cabling is not from 1900 too. *grin*


Funny enough, I use a powerline adapter for a run to my standalone garage and I get about 20mbps. Enough for a light wifi backhaul, but that's about it.


Guys building RC submarines are managing 300m with working camera feed over HomePlug.


Is this used by starlink? A recent video review was complaining about a proprietary Ethernet cable with a weird adapter and high cost.


That Starlink adaptor is just a regular RJ45 in a waterproof housing. The complaint about it has no merit, as it's meant to connect to the dish outside the house.


There are two major versions of Dishy (and two variants of the first with the same cabling). The first round Dishy has what looks like an ordinary Ethernet cable. But there's a surprise inside! 100W of PoE with wiring incompatible with every other PoE standard. The second square Dishy has a proprietary connector. But the cable itself is just a Cat5E cable. Again with unusual PoE wiring and a lot of power.

The square Dishy v2 ships with no ethernet ports btw; you're supposed to use the proprietary cable between their router and Dishy, then use WiFi to the router. Yes, everyone hates it. You can buy an ethernet adapter for cheap but it's a few weeks to get it shipped. Speculation is this is all related to a parts shortage.

Details on the proprietary cable: https://www.reddit.com/r/Starlink/comments/s0rih3/gen2_dishy...


No, that's all firmly within the consumer class end of the spectrum.

IIUC (this is speculative presumption, the worst kind :P) the Ethernet port may use a form of USB, or at least what looks like a USB-like connector, possibly to provide power to the module so it can do its <proprietary>-to-Ethernet thing. (I am of course boggled at how the proprietary thing is cheaper than just adding Ethernet full stop.)

It would seem sufficiently low numbers of users want an Ethernet port that this strategy is actually cost effective. I'm a little surprised at this to be honest, all the CPE (consumer premises equipment) junk/ewaste I've ever seen is typically littered with 100/1000Mbps Ethernet ports (usually four), regardless of Wi-Fi capability.

And this is not to mention the fact that StarLink is touted as a low-latency network, so wouldn't it only make sense to proactively maintain that status quo over the last few feet?

I definitely would get just a tad uncomfortable if I were ever handed eg a Wi-Fi only fiber modem...


> (I am of course boggled at how the proprietary thing is cheaper than just adding Ethernet full stop.)

I think their hope was that if the cord were to be yanked with enough force, it would not damage the expensive dish module and force the customer to mail that whole thing back for repairs.

If you yank hard enough on an RJ45 cable, it will probably damage the female jack. That jack is cheap and can be replaced, but not by customers in the field.

The dongle doesn't appear to have any kind of retention clip, so it should just yank out without any damage to anything. And if the dongle gets damaged, it's cheaper and faster to ship overnight than a whole new dish.

This is just my theory.


When/where can we buy this for use at home?


(2020)


How will this impact FTTx?


It won't, as all of these standards are shorter than the necessary pole-to-house link.

If you have a situation which needs relatively high-speed short-range networking between subsystems and is space-constrained for the wiring -- inside a vehicle, mostly -- these will be helpful.

Fiber is great at everything except installation and robustness, and if you know the distance you'll be going and/or can coil up extra, installation is solved with pre-terminated connectors.


if weight is the factor... what about optical fiber?


MOST150 was a previous attempt at this, tunneling 100 Mbit ethernet and a separate realtime channel over a 150 Mbit incredibly-cheap plastic optical fiber. It's pretty well cost-optimized -- designed to use single quantum well LEDs instead of laser diodes, 1 mm POF with pretty lax coupling, etc. Hard to beat super-cheap twisted pair copper, though.


Hmmm, this could be interesting for high-voltage/current isolation applications, eg generator or battery bank control/monitoring.

Would be really cool if there were prefabricated Ethernet-to-<this> boards out there.

A quick check of Amazon finds 100Mbps Ethernet to fiber adapters for around the $20-40 range, but that doesn't cover the cost of the (mainstream) fiber cable they use.


This also carries power over the wire, which optical can't do.


Power over fibre does exist, but it's only used for niche applications (I think some MRI machines use it, for example) due to the low efficiency. I think there are also problems with heat dissipation.


What I think is interesting is that you get 50W of power. You could run lights and small appliances off that. Since it's low voltage, it wouldn't require an electrician to install it either.


Where is Panduit?!?


I certainly believe SPE is the future, but not as an immediate replacement for ethernet, for the following reasons:

- BASE-T1 PHYs and connectors are going to be more expensive than cheapo BASE-T PHYs and RJ-45s for a while.

- The entire industry is based around BASE-T; try finding off-the-shelf mobile imaging hardware (or other embedded sensors) that supports BASE-T1.

- BASE-T technology is less fragile; ultimately a single pair is more susceptible to signal loss / reflections and needs more complex encoding/transmission schemes to overcome this. (For example, 100BASE-TX uses three-level MLT-3 at 125 MBd with a fundamental around 31 MHz and a dedicated pair per direction; 100BASE-T1 squeezes a similar fundamental, ~33.33 MHz of PAM-3 at 66.67 MBd, onto a single pair carrying both directions at once. This reduces the tolerable noise floor and requires the use of better cabling; see the symbol-rate sketch after this list.) 100Mbit ethernet is damn-near indestructible, I've found. (There is another alternative here which I will get to in a moment.)
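Rough arithmetic behind that single-pair tradeoff, as I understand it (my numbers, not from the article): 100BASE-T1 packs 3 bits into 2 ternary (PAM-3) symbols, so it needs 66.67 MBd on one pair to carry 100 Mbps, whereas 100BASE-TX spreads the job across a dedicated pair per direction.

```c
/* Sanity check: why 100BASE-T1 runs at 66.67 MBd of PAM-3 on one pair.
 * 2 ternary symbols give 3^2 = 9 >= 8 combinations, i.e. 3 bits,
 * so each symbol carries 1.5 bits. (Mapping details live in 802.3bw.) */
#include <stdio.h>

int main(void) {
    double bit_rate        = 100e6;          /* 100 Mbps payload */
    double bits_per_symbol = 3.0 / 2.0;      /* 3B2T: 3 bits -> 2 PAM-3 symbols */
    double symbol_rate     = bit_rate / bits_per_symbol;

    printf("100BASE-T1 symbol rate: %.2f MBd on a single pair\n",
           symbol_rate / 1e6);               /* 66.67 MBd */

    /* Compare 100BASE-TX: 4B5B -> 125 MBd, but MLT-3 keeps the fundamental
       around 31.25 MHz and each direction gets its own dedicated pair. */
    printf("100BASE-TX: 125 MBd MLT-3, ~%.2f MHz fundamental, per direction\n",
           125.0 / 4.0);
    return 0;
}
```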

Despite these issues, SPE's benefits will win out for mobile applications that have size and weight requirements. Gigabit speeds (or even up to 10 gigabit) on a single pair is just too damn tempting and can translate directly into better products. Want an example? Imagine you're making a payload for a drone; you probably use slip rings and your electrical contacts are limited. Using SPE means you can cram more data down the same slip rings, resulting in higher-resolution imaging compared to a competitor using standard ethernet.

My understanding is that 10BASE-T1L is designed for long-reach applications like factory automation, to connect between clusters of devices using 10BASE-T1S (all sitting on the same pair of wires). 100BASE-T1 and 1000BASE-T1 are ideal for mobile applications.

Anyway this is all theoretical; it so happens that I run a hardware startup called BotBlox where I design compact ethernet hardware. I've designed a compact 1000BASE-T to 1000BASE-T1 converter, partly as a working prototype to get used to this technology. I believe there needs to be affordable, compact bridging hardware between BASE-T and BASE-T1, and I intend to develop it.

My current tests with 1000BASE-T1 show that the range is nowhere near 40m; I managed to get around 10m before the link dies. I need to do more testing to see whether it's the cabling or the firmware I made.

I have a 10BASE-T to 10BASE-T1L board in manufacture because I'm ridiculously excited about the prospect of 1.6km copper transmission. We'll see how this goes.

Anyway, there's a link below. The chip shortage has destroyed my ability to scale up on this board, hence the high price. I expect that once chip shortages abate, a $100 price point on this kind of board is achievable.

https://www.botblox.io/products/tiny-single-pair-ethernet-co...


One point I completely forgot was Gigabit Home Networking (G.hn) as an alternative to 1000BASE-T1. It essentially does the same thing and can achieve long ranges. Ultimately this technology is designed to pump 1000Mbps down nasty home wiring, so using it on a single pair of wires is probably going to result in a very good range.

The encoding scheme here is OFDM, i.e., break the data into many parallel signals at different frequencies and send them down the wire together.
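As a toy illustration of that idea (not G.hn's actual parameters, which involve thousands of subcarriers, cyclic prefixes, and adaptive bit loading): map a handful of bits onto QPSK subcarriers and sum them with an inverse DFT, so each chunk of bits rides its own frequency.

```c
/* Toy OFDM modulator: QPSK on N subcarriers combined by a naive inverse DFT.
 * G.hn's real parameters are far richer; this just shows the
 * "many narrow signals at different frequencies" idea. Compile with -lm. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8   /* subcarriers */

int main(void) {
    int bits[2 * N] = { 0,1, 1,1, 0,0, 1,0, 0,1, 1,0, 1,1, 0,0 };
    double complex X[N], x[N];

    /* 2 bits per subcarrier -> one QPSK constellation point */
    for (int k = 0; k < N; k++)
        X[k] = ((bits[2*k] ? -1.0 : 1.0) + (bits[2*k+1] ? -1.0 : 1.0) * I) / sqrt(2.0);

    /* inverse DFT: subcarrier k becomes a tone at frequency k/N of the symbol rate */
    for (int n = 0; n < N; n++) {
        x[n] = 0;
        for (int k = 0; k < N; k++)
            x[n] += X[k] * cexp(2.0 * M_PI * I * k * n / N);
        x[n] /= N;
        printf("sample %d: %+.3f %+.3fi\n", n, creal(x[n]), cimag(x[n]));
    }
    return 0;
}
```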

It's more robust than SPE, but it requires more costly and larger ICs than SPE. When you can define a connector and cable, SPE is still better in terms of cost and size I believe.

G.hn could have its uses in retrofitting older wiring systems.


BTW, since I have the hardware on hand, let me know if y'all want me to conduct specific tests. I have a beauty of an oscilloscope that can get very good sampling resolution on this.


> Transitioning to SPE from standard Ethernet

What's this "standard ethernet"? Would that be the "standard" version that used fat coax and multidrop? (which happens to be two-wire)

I thought the essence of ethernet was the protocol, not the physical transmission media. You could set up an ethernet circuit using sound waves, microphones and speakers.


There’s only one L2 Ethernet, but there are many many L1 Ethernets. 1000BASE‑T and 10GBASE-LR are by far the most popular L1 standards and that’s probably why you don’t hear about the other 1000 standards.



