In my case, Cat 6 cables plus the added price of a 10G RJ-45 switch would have been more expensive than a 10G SFP+ switch with some twinax and fiber cables. Amazon has finished SFP+-to-SFP+ assemblies for €10 (~$12), and TP-Link's RJ-45 gear runs about 1.5x the price of the equivalent SFP+ equipment. RJ-45 also has a fixed minimum latency, because it has to stay backwards-compatible with 1G, 100M, etc. That means you need much larger send/receive buffers to saturate the link as the ping goes up; especially if you need multiple hops, fiber is just faster. In my tests: 0.2 ms vs 3 ms round-trip time for 10G SFP+ vs. 10G RJ-45.
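To put numbers on the buffer point: the window you need to keep a pipe full is the bandwidth-delay product. A minimal sketch, plain arithmetic with no assumptions beyond my measured RTTs and the 10 Gbit/s line rate:
```
# Bandwidth-delay product: bytes that must be in flight to keep the
# link saturated. 10 Gbit/s line rate; RTTs are the ones I measured.
def bdp_bytes(link_bps, rtt_s):
    return link_bps / 8 * rtt_s

for rtt_ms in (0.2, 3.0):
    mb = bdp_bytes(10e9, rtt_ms / 1000) / 1e6
    print(f"{rtt_ms} ms RTT -> {mb:.2f} MB TCP window needed")
```
So the RJ-45 path needs a ~3.75 MB window where the fiber path gets by with ~0.25 MB.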
If I remember correctly, the IEEE 802.3an line-coding overhead is around 2.6 ms for each conversion between SFP+ and RJ-45. That's in line with my measurements:
SFP+ switch to OM3 fiber to SFP+ PCIe card => 0.2 ms
SFP+ switch to RJ-45 cable to RJ-45 PCIe card => around 3 ms
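That per-hop latency also caps any protocol that waits for each reply before sending the next request (sync NFS/SMB metadata traffic, for instance). A back-of-the-envelope illustration:
```
# With one outstanding request, a request/response protocol completes
# at most 1/RTT operations per second.
for rtt_ms in (0.2, 3.0):
    print(f"{rtt_ms} ms RTT -> at most {1000 / rtt_ms:.0f} round trips/s")
```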
NAS-[10GbE]->switch-[1GbE]->ASUS router, two-hop ping test:
```
yatli@yatao-nas ~ % ping 192.168.50.1
PING 192.168.50.1 (192.168.50.1) 56(84) bytes of data.
64 bytes from 192.168.50.1: icmp_seq=1 ttl=64 time=0.236 ms
64 bytes from 192.168.50.1: icmp_seq=2 ttl=64 time=0.195 ms
64 bytes from 192.168.50.1: icmp_seq=3 ttl=64 time=0.300 ms
64 bytes from 192.168.50.1: icmp_seq=4 ttl=64 time=0.271 ms
64 bytes from 192.168.50.1: icmp_seq=5 ttl=64 time=0.153 ms
64 bytes from 192.168.50.1: icmp_seq=6 ttl=64 time=0.216 ms
64 bytes from 192.168.50.1: icmp_seq=7 ttl=64 time=0.300 ms
64 bytes from 192.168.50.1: icmp_seq=8 ttl=64 time=0.165 ms
64 bytes from 192.168.50.1: icmp_seq=9 ttl=64 time=0.163 ms
64 bytes from 192.168.50.1: icmp_seq=10 ttl=64 time=0.265 ms
64 bytes from 192.168.50.1: icmp_seq=11 ttl=64 time=0.174 ms
64 bytes from 192.168.50.1: icmp_seq=12 ttl=64 time=0.272 ms
64 bytes from 192.168.50.1: icmp_seq=13 ttl=64 time=0.397 ms
64 bytes from 192.168.50.1: icmp_seq=14 ttl=64 time=0.256 ms
^C
--- 192.168.50.1 ping statistics ---
14 packets transmitted, 14 received, 0% packet loss, time 13168ms
rtt min/avg/max/mdev = 0.153/0.240/0.397/0.065 ms
```
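If you want to repeat this kind of measurement across a few hosts, here's a minimal sketch that shells out to ping and scrapes the summary line; it assumes the Linux iputils output format shown above:
```
import re
import subprocess

def ping_rtt(host, count=14):
    # Run ping and parse the "rtt min/avg/max/mdev" summary (iputils format).
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
    if not m:
        raise RuntimeError(f"no rtt summary in ping output for {host}")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

print(ping_rtt("192.168.50.1"))
```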
Yeah, I think so. There might also be better converter modules. Anyway, my setup is the result of an oversight: not burying light pipes into the floor. Otherwise I guess I'd have gone with SFP+ too. I'm using the TP-Link TL-ST1005, which is the cheapest 10GbE switch out there, and it cost about $150. Also, RJ-45 adapters/switches anecdotally run hotter than SFP+ ones.
That's what I did. A few months ago I bought two 10GbE NICs off eBay to connect my main desktop and basement server directly. I already had Cat 6a running between them.
Some advice: make sure you know the form factor of your NICs. I accidentally bought FlexibleLOM cards; they look suspiciously like PCIe x8 but won't quite fit. FlexibleLOM-to-PCIe-x8 adapters are cheap, though.
I use Intel 82599ES SFP+ NICs and a TL-SX3008F switch. But let me warn you: the gear is affordable, but NOT consumer-friendly. I had to study the 500-page PDF manual and do basic link configuration through telnet over the USB console before I could reach the switch via Ethernet and finish the setup in its web GUI.
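In case it helps anyone scripting that first step: a hypothetical sketch of driving the USB console with pyserial. The device path, baud rate, and commands below are placeholders, not the actual TL-SX3008F CLI syntax; the real commands are in that 500-page manual.
```
# Hypothetical out-of-band setup over the USB console (pip install pyserial).
# /dev/ttyUSB0, 38400 baud, and the commands are assumed placeholders --
# check the TP-Link CLI reference for the real syntax.
import serial

PLACEHOLDER_COMMANDS = [b"enable", b"configure"]  # hypothetical commands

with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as console:
    for cmd in PLACEHOLDER_COMMANDS:
        console.write(cmd + b"\r\n")
        print(console.read(4096).decode(errors="replace"))
```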
How's the routing performance on that switch? I'm really struggling to find a device that fits my needs (routing, 6-8 ports, Ethernet, at least two 2.5/5/10G ports).
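For whoever ends up testing one: I'd probably measure it with iperf3 between two hosts on different VLANs, so every frame crosses the switch's routing engine. A sketch using iperf3's JSON output; the peer address is a placeholder for a host running `iperf3 -s` on the other routed VLAN:
```
import json
import subprocess

PEER = "192.168.60.10"  # placeholder: iperf3 server on the other routed VLAN

# -J emits JSON; the receiver-side bits_per_second is the routed throughput.
out = subprocess.run(["iperf3", "-c", PEER, "-t", "10", "-J"],
                     capture_output=True, text=True, check=True).stdout
gbps = json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"routed TCP throughput: {gbps:.2f} Gbit/s")
```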
Wow, that is surprising. Maybe it's time I started upgrading my LAN...