240-Gbit/s sub-THz wireless communications using ultra-low phase noise receiver (jst.go.jp)
120 points by ulrischa 7 months ago | 63 comments



Apropos of absolutely nothing, but IIRC there was a "Star Trek" movie where a screenshot of one of the displays on some ship or another showed the shield frequency, which allowed the enemy to match it and disable the shields. Back in the 80s(?) when this came out, I remember thinking "237 GHz" (or something like that) was an insanely high frequency and doubting it could ever be reached with any degree of stability. Physics: HMB


I thought it might be Generations, but that was 257.4 MHz https://www.youtube.com/watch?v=6A9IZHWz45Q&t=20s


That's the scene. Says it's from 1994.

After that, I'm sure the Owner's Manual that comes with your new Enterprise-class starship now has a "DO NOT DISPLAY YOUR SHIELD FREQUENCY" entry in the "Warnings" section.


To be fair, we don't know if Starfleet ever figured out how the Klingons bypassed the shields. Nor do we know if they ever discovered that someone installed a rootkit on Geordi's visor.

Also, given Starfleet's continued examples of not learning their lessons from previous encounters, I suspect all ships still display the shield frequency in big, bold font. Possibly even visible from the viewscreen of the bridge on one of the rear consoles.


I mean I get that series like Star Trek rely on technobabble and suspension of disbelief, but surely an enemy can just... detect the frequency, or rapidly adjust their own like tuning in until it matches?


That would work, but modern starships use a rotating phased tachyon field to prevent that.


With an inverted polarity!


And fitted with six hydrocoptic marzlevanes to the ambifacient lunar waneshaft to effectively prevent side fumbling.


go home, Scotty, you are drunk!


For those who have never had the opportunity to learn of the marvels of the Turbo Encabulator: https://www.youtube.com/watch?v=Ac7G7xOG2Ag


Yes. It happens every third episode.


That's the shield modulation frequency though, so the actual frequency of whatever field the shield consists of is unknown. It's also entirely possible the numeric strings displayed in the bottom part of the image constitute the secret key to a pseudorandom binary sequence that is overlaid onto the useful shield signal, and that's what the Klingons actually use to penetrate them.


Something not totally clear from the title, but it seems the claimed rate was actually achieved with a transmitter structure similar to what you would find in coherent optics (see figure 4). Instead of coupling to fiber, they couple to a high-speed photodiode that radiates at the lasers' ~140 GHz difference frequency.

EDIT: After a closer reading of the paper, I noticed the real goal was to assess the LO phase-noise improvement when moving from an RF synthesizer to an SBS-laser-plus-photodiode LO.
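
For anyone unfamiliar with photonic RF generation, here is a minimal sketch of the beat-note idea described above (all numbers are illustrative assumptions, not the paper's values): two lasers offset by the target carrier illuminate a fast photodiode, whose photocurrent contains a component at their difference frequency.

    # Minimal sketch of a photonic LO: two optical tones beating on a
    # photodiode yield an RF tone at their difference frequency.
    # All numbers below are illustrative assumptions, not the paper's values.
    C = 299_792_458                      # speed of light, m/s

    lambda_1 = 1550.00e-9                # laser 1 wavelength, m (assumed)
    f1 = C / lambda_1                    # laser 1 optical frequency, Hz
    f2 = f1 - 140e9                      # laser 2 tuned ~140 GHz below laser 1

    # The photodiode's output current has a term at |f1 - f2|, which is
    # amplified and fed to the antenna (or used as the receiver LO).
    f_beat = abs(f1 - f2)
    print(f"beat frequency: {f_beat / 1e9:.1f} GHz")   # -> 140.0 GHz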


While it looks like they got lower phase noise using the optical LO than with the traditional source, there are better conventional sources, like the LNS Series from NoiseXT.

I applaud their novel optics-based system for generating the RF that is then transmitted via the antenna. The world is getting quite interesting, when you can actually buy test equipment these days that will tell you the frequency (down to the Hertz!) of blue laser light.
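
For scale, the frequency being counted there is enormous; a quick back-of-the-envelope conversion (assuming a 445 nm blue line, which is my own example):

    # f = c / wavelength for an assumed 445 nm blue laser line
    C = 299_792_458          # m/s
    wavelength = 445e-9      # m (assumed)
    print(f"{C / wavelength / 1e12:.0f} THz")   # ~674 THz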


They make a big point of using HD-FEC rather than SD-FEC.

To me, that makes no sense - the extra power usage of SD-FEC can be tiny, even at high data rates. The FEC problem can also be parallelized (if the protocol is designed for this), so individual decoder instances don't need to run at line rate.

(SD-FEC lets you get substantially more data throughput in a given channel).
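
For readers unfamiliar with the distinction: a hard-decision decoder sees only the sliced bit, while a soft-decision decoder also sees a per-bit reliability (LLR) it can weight, which is where the extra coding gain comes from. A minimal sketch for BPSK over AWGN (illustrative only; not the codes or modulation used in the paper):

    # Hard vs soft decisions for BPSK over AWGN (illustrative sketch).
    import random

    noise_var = 0.5
    bits = [random.randint(0, 1) for _ in range(8)]
    tx = [1.0 if b else -1.0 for b in bits]                  # BPSK mapping
    rx = [s + random.gauss(0.0, noise_var ** 0.5) for s in tx]

    hard = [1 if y > 0 else 0 for y in rx]                   # 1 bit per sample
    llrs = [2.0 * y / noise_var for y in rx]                 # reliability per sample

    print("tx bits       :", bits)
    print("hard decisions:", hard)
    print("LLRs          :", [round(l, 2) for l in llrs])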


I can think of a number of cases where using a hard-decision decoder can be a better choice. Power would be one factor, and I would strongly disagree that the power delta between soft and hard decoders at these data rates is small. Unfortunately, I cannot find any public data to back up that claim for what appear (based on coding gain and overhead) to be an HD-FEC using RS and an SD-FEC using braided BCH or LDPC.

Other factors can include the reduced routing complexity and area requirements of HD, since you don't have to shuffle soft information around. Extra die space is expensive, so you want to avoid it if possible.

However, I think the most likely reason is the latency reduction you get when using HD-FEC. I know that some applications of microwave links are extremely latency sensitive; it could be that this research is targeting one of those applications.


Guessing that needs line of sight & a stable environment?


Does 20-m here stand for 20 meters or something else?

"We also demonstrate successful 20-m transmission at a data rate of over-200 Gbit/s data rate."


Yes, it's referring to transmitting with 20 meters between transmitter/receiver. Their highest speed transfer was done at 30 centimeters between transmitter/receiver.
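
For a sense of why the longer link is harder, here is the free-space path loss at the 275 GHz carrier mentioned elsewhere in the thread, for the two distances above (a rough illustration only; it ignores antenna gains and atmospheric absorption):

    # Free-space path loss at 275 GHz for ~0.3 m and 20 m links.
    import math

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        c = 299_792_458
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    for d in (0.3, 20.0):
        print(f"{d:>5} m: {fspl_db(d, 275e9):.1f} dB")
    # ~71 dB at 0.3 m vs ~107 dB at 20 m, i.e. roughly 36 dB more loss to make up.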


275 GHz. Getting close to terahertz. Nice.


I'm curious about the use cases for 275 GHz wireless communication. Are microwave links still common? It's almost far infrared.


> are microwave links still common?

Yes, they have their purposes. I work in finance and we use microwave links to talk between systems in Chicago & New York. Fiber links are the backup here! The reason is purely latency, as the bandwidth of the fiber links is orders of magnitude higher. The major downside is that the microwave links drop quite frequently, as they're sensitive to the totality of the weather between data centers (e.g. any sort of precipitation as the crow flies between NYC & CHI and there's potential for dropped links).
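
A rough sketch of the latency math behind that (my own assumed figures: ~1,150 km great-circle distance for the microwave path, a longer ~1,300 km fiber route, and a fiber group index of ~1.47):

    # One-way propagation delay, Chicago <-> NYC, assumed figures only.
    C_KM_S = 299_792.458

    microwave_ms = 1_150 / C_KM_S * 1_000           # RF in air travels at ~c
    fiber_ms = 1_300 / (C_KM_S / 1.47) * 1_000      # light in glass is ~32% slower

    print(f"microwave: {microwave_ms:.1f} ms one-way")   # ~3.8 ms
    print(f"fiber:     {fiber_ms:.1f} ms one-way")       # ~6.4 ms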


The same use as optical wireless like LiFi. Higher speed over short distances.

Optical has the advantage of bouncing off walls, but EHF may have bandwidth advantages.

This wouldn't be useful for microwave links since it is absorbed by the atmosphere and has short range.

275 GHz is on the edge of EHF. Terahertz is 300 GHz to 3 THz. Above that is infrared.
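
Putting a wavelength on that, using the band edges given above:

    # Where 275 GHz sits, using the band edges from the comment above.
    C = 299_792_458
    f = 275e9
    print(f"wavelength: {C / f * 1e3:.2f} mm")               # ~1.09 mm
    print("band:", "EHF" if 30e9 <= f < 300e9 else "THz")    # -> EHF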


> This wouldn't be useful for microwave links since it is absorbed by the atmosphere and has short range.

Nit-pick: 275 GHz isn't terrible for atmospheric absorption. What gets you is the phase dispersion when the humidity is high.


From the PDF:

> Requirements for 6G include extremely high data rates (>100 Gbit/s), ultra-low latency (<0.1 ms)....

> For the wireless communication of the 6G era, new radio frequency bands in sub-THz, ranging from 100 GHz to 300 GHz, have been identified as one of the most promising bands.

> There is also a technical challenge for seamlessly connecting wireless communication systems with fiber-optic communication networks


By far most communication is in the microwave bands, such as WiFi, cell phones, and Bluetooth.


sure, but wifi doesn't go over 6 GHz, for plenty of reasons

edit: forgot about 60 GHz wifi


60 GHz wifi products are pretty common now, with the notable quirk of having difficulty passing through paper.


Paranoiacs will finally be able to wear regular hats!


Seems useful as a cable-replacement for e.g. uncompressed video; DisplayPort tops out at 10.8 Gbps, so even allowing a 19x reduction for "real world" limitations, you could send DP over 20 meters with this.
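
The arithmetic behind that claim, using the comment's own numbers (the 19x derate is the commenter's assumption, not a measured link budget):

    # 240 Gbit/s derated by 19x still clears a 10.8 Gbit/s DisplayPort stream.
    raw = 240.0      # Gbit/s demonstrated at short range
    derate = 19.0    # assumed "real world" penalty
    dp = 10.8        # Gbit/s, the DisplayPort figure cited above
    print(raw / derate)           # ~12.6 Gbit/s
    print(raw / derate >= dp)     # True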


WiFi is microwaves.


My ISP actually uses microwave links! I get consistent gigabit speeds with them.


Satcom downlinks for laser connected sats


Microwave links are very common


When incorporated into Wifi 18 in the year 2046, it will prove to realistically get around 500 MB/s, and have a hard time getting through most walls.


They are only transmitting 30 mm; when they moved to transmitting 20 meters, it had to drop from 64QAM to 32QAM and lost some throughput. It's also laser-generated, so I don't believe it's omnidirectional, just point-to-point.
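
The throughput cost of that modulation drop at a fixed symbol rate (a rough illustration; it ignores the paper's exact baud rate and FEC overhead):

    # 64QAM -> 32QAM at the same symbol rate costs one bit per symbol.
    import math

    bits_64 = math.log2(64)   # 6 bits/symbol
    bits_32 = math.log2(32)   # 5 bits/symbol
    print(f"throughput drop: {(1 - bits_32 / bits_64) * 100:.0f}%")   # ~17%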


Sounds more practical for satellite-to-satellite communication than for consumer devices, then, if it's being sent with lasers.


Might work well in open offices and large public spaces though; there are quite a lot of things in between consumer and space.


IIRC there are modern enterprise wi-fi APs that sit in the rafters, and form star-topology laser relays between devices.


The transmitted signal is diode generated, not laser-generated, and it's coupled with a typical EHF horn antenna.


Jokes aside, I get a consistent 1600 Mbps over wifi 6E, which is good enough for me.

Of course you need an AP in every room to get consistent performance, but wifi has always been like that. And unless your applications use forward error correction, wifi will always have latency spikes from L2 retransmissions if there's even one wall between the AP and the device.


An AP in every room, incorporated into the light fixture. Sounds like a business model just waiting for some funding.


Why incorporated into the light fixture?

I paid ~$280 for each U6 Enterprise and that's the only thing that limits how many I have. I'm sure it's the same for anyone that cares about wifi performance. How much of a premium would you pay for better aesthetics?


You placed one in each room, right? Each room has at least one light fixture. The AP becomes invisible and saves space.


Space on the ceiling where I have essentially unlimited space?

I'm not saying there aren't advantages to this idea. E.g. powering the APs with 120V from the light fixture would mean they can use fiber instead of PoE 2.5G ethernet. That would save me the cost of an extra switch and also reduce power consumption on both sides of the cable. But I'm sure a combined light+AP unit would be expensive enough to make that a moot point. If anything, I'd prefer to sacrifice even more aesthetics and use a bare PCB to save money if I could.


Already exists


Could you share a link?


If you live in a small/medium apartment then 6E is quite good. I've been using one of those "enterprise" tri-band APs for around a year and my experience has also been amazing, with just one AP in the entire flat.

Enabling all bands should allow your device to just drop to 5GHz/2.4GHz when needed. This has been seamless for me.


Depends on the wall construction. Bricks or some AAC will be fine. Steel-reinforced concrete panels, though - and you might as well call it a Faraday cage.


If I may ask, what is your hardware setup like if you're achieving consistent 1.6Gbps? Is that a reproducible, every day speed? Is that only for LAN or both LAN+WAN?


> what is your hardware setup

U6 Enterprise AP, Hasivo ethernet switch from aliexpress (used as a media converter from the AP's 2.5G copper to 10G fiber), MikroTik RB4011iGS for NAT (router on a stick), 56G Mellanox SX6036 for wired LAN. 56G optics from eBay and 10G optics from fs.com

>only for LAN or both LAN+WAN?

Wired LAN is 56gbps nominal.

Wireless LAN is 1.6gbps actual throughput.

WAN is 1.4gbps actual throughput (limited by Comcast DOCSIS)


Consider: the faster the packets go, the sooner the channel frees up to let other stations sharing it talk. A big office with a lot of computers still experiences QoS losses over 6E when someone starts watching a 4K video. Get those bursts of video-buffer-filling done faster, and other traffic will stay smooth.


Oh absolutely! That's the main reason why every room needs an AP. A 20 Mbps 4K video only uses up about 1% of the airtime on wifi 6E. Any more than that will noticeably increase p99 latency. A device outside the room could easily use 10x as much airtime for the same bitrate.
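
The airtime math behind that, with assumed numbers (~1.6 Gbit/s of usable throughput near the AP, ~10x less through a wall):

    # Airtime fraction = stream bitrate / usable link rate (assumed figures).
    def airtime_pct(stream_mbps: float, link_mbps: float) -> float:
        return stream_mbps / link_mbps * 100

    print(f"near the AP:    {airtime_pct(20, 1600):.1f}% of airtime")   # ~1.3%
    print(f"through a wall: {airtime_pct(20, 160):.1f}% of airtime")    # ~12.5%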


> wifi has always been like that

not for me


Nor for me; I have a 3000 square-foot house and run a single access point; I ran Ethernet to a very central place (we have a centrally located open stairwell between the two floors, which helps) and got a PoE WAP that covers essentially the whole house (if you hold your phone in the very corner of the corner rooms, you can get it to drop).


> if you hold your phone in the very corner of the corner rooms, you can get it to drop

See, clearly we have very different standards. If you're seeing packet loss in the corner of the room, there will also be a fuckton of L2 retransmissions throughout the room. The latter will not be visible as ping loss %, but it has the same effect on p99 latency.


I guess I don't do things on my phone that are that latency sensitive. Web browsing used to be latency sensitive, but now the typical website does nothing for 250-750ms when I click on a link, and (seemingly randomly) takes multiple seconds about 5% of the time even if I'm connected via GigE, so any network latency is masked by that pretty well.


Every website I use (besides reddit and wikia/Fandom) loads almost instantly.

> typical website does nothing for 250-750ms

Two common causes:

- Your HTTP cache is slow because your workload size is larger than your SSD's SLC cache. Cheap 1TB SSDs only have ~50GB of SLC cache, which gets nuked with every background software update, so buy a better one.

- Your p99 DNS response time is slow. Recall that many websites require many DNS queries to randomly generated subdomains, and the whole page is limited by the slowest response. Set prefetch: yes in unbound and use multiple DNS servers in parallel with pihole/dnsmasq to eliminate that issue.
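
If you want to check the DNS half of that yourself, here is a minimal sketch using only the Python standard library (hostnames and repeat count are placeholders; note that repeated lookups will mostly hit your local cache, so this measures the warm path):

    # Rough p50/p99 of name resolution through the OS resolver path.
    import socket, statistics, time

    hosts = ["example.com", "example.org", "example.net"]   # placeholders
    samples = []
    for _ in range(30):
        for h in hosts:
            t0 = time.perf_counter()
            socket.getaddrinfo(h, 443)
            samples.append((time.perf_counter() - t0) * 1000)

    samples.sort()
    print(f"p50: {statistics.median(samples):.1f} ms")
    print(f"p99: {samples[int(len(samples) * 0.99)]:.1f} ms")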


> … have a hard time getting through most walls.

Thankfully, the Class Wars of 2037 had caused most of society to move into tents, so signal penetration through walls was no longer an issue.


> have a hard time getting through most walls.

Dial up the TX power.


Skip ads in 5,4,3,2,1,...


Would like to see the environmental impact and public health studies done with this


Poe's law at work? There are people who unironically want to hold back faster wifi and wait for environmental impact studies



