
Super-fast Wi-Fi coming: 802.11ac-2013 - tanglesome
http://www.zdnet.com/super-fast-wi-fi-coming-802-11ac-2013-7000025352/
======
zanny
I read this stuff and just think "so the SoCs just have to be 3x as expensive
and power consuming to manage double / triple the channels, while still
broadcasting at a higher frequency, plus doing computations for beamforming
and its ilk".

Pretty much all this guarantees is that wireless-n routers will stay $50, AC
will stay $100-150, and AC-2013 will be $200+, due to complexity. Maybe in a
few years demand for n routers will wind down and market effects will drop the
price of AC, but then I worry about spectrum limitations. We already see huge
issues with n routers congesting the 2.4 GHz band, and making channels even
wider for AC means congestion leads to even worse bandwidth and range loss.
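
The channel-count squeeze can be made concrete: count how many non-overlapping
channels of a given width fit in each band. The usable band widths below are
rough US figures, an assumption for illustration only:

```python
# Rough, illustrative count of non-overlapping Wi-Fi channels per band.
# Band widths are approximate US allocations (an assumption for this sketch):
# ~72 MHz usable at 2.4 GHz, ~500 MHz usable at 5 GHz.
BANDS_MHZ = {"2.4 GHz": 72, "5 GHz": 500}

def non_overlapping_channels(band_mhz: int, channel_mhz: int) -> int:
    """How many non-overlapping channels of the given width fit in the band."""
    return band_mhz // channel_mhz

for band, width in BANDS_MHZ.items():
    for ch in (20, 40, 80, 160):
        print(f"{band}: {non_overlapping_channels(width, ch)} x {ch} MHz channels")
```

At 2.4 GHz you get three 20 MHz channels, one 40 MHz channel, and zero 80 MHz
channels, which is exactly why wider channels only make sense at 5 GHz.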

~~~
yaantc
802.11ac is the same as 11n in the 2.4 GHz band. All the new features are for
the 5 GHz band, and its support is required for 11ac. At 5 GHz there's plenty
of spectrum so wider channels still make sense there. But not at 2.4 GHz
indeed.

------
mkautzm
I honestly don't see the 2.4Ghz standards going away any time soon.

Sure, 2.4Ghz is hilariously congested, but as 5Ghz devices are sold it will
become less so. What I half expect is that when people start trying to beam
5Ghz through three office walls (and fail), they'll want to know why and
eventually find themselves back on a 2.4Ghz solution, simply because it's not
so sensitive to solid objects.

~~~
jseliger
_I honestly don't see the 2.4Ghz standards going away any time soon._

I don't either, which is part of the reason I've spent a couple dollars on
ethernet cables that connect some of the obvious places. A year ago I wrote
this comment
([https://news.ycombinator.com/item?id=5192451](https://news.ycombinator.com/item?id=5192451)):

"Ethernet was developed in the context of the internet with its seven levels
of the ISO reference model," he said. "So we had a job to do at level one and
two, and we didn't burden Ethernet with all the other stuff that we knew would
be above us. We didn't do security, we didn't do encryption, we didn't even do
acknowledgements."

Very interesting story—I wish it had been more detailed in places, but I love
this middle section, about how simplicity sometimes beats complexity.

It's also amazing how prevalent Ethernet still is, even when wireless is a
competitor. The other day I left this comment:
[http://news.ycombinator.com/item?id=5052448](http://news.ycombinator.com/item?id=5052448)
on HN, because in some circumstances running a cheap ethernet cable from a
router to a desk, couch, or other work station can still be a real win,
especially given how inexpensive even very long ethernet cables are from
Monoprice.com.

They last forever, aren't subject to the level of interference wireless is,
and, in many conditions, have faster data transfer speeds. Ethernet is still
great.

~~~
yaantc
I have to disagree with your comments on Ethernet:

\- there is security at the Ethernet level with 802.1X. It's mostly used in
professional settings, to prevent anyone with physical access to an area from
plugging into an Ethernet jack, sniffing traffic via ARP poisoning, or
accessing anything;

\- the ACK support in WiFi at L2 is perfectly fine. The reason Ethernet
doesn't do ACKs is that the packet error rate (PER) is very low, so there's
really no need for them. The reason all wireless standards do some form of ACK
is that wireless is very lossy, and relying on higher layers for
retransmission would give poor performance (retransmission at L2 is faster,
and transparent to TCP). So there's ACK in WiFi, and in wireless WAN (e.g.
4G/LTE), which is also lossy and has relatively high latency, you even have
two levels of ACK: HARQ at L1/L2 and ARQ at L2. Different contexts, different
mechanisms.
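
The L2 retransmission logic described above can be sketched as a toy
stop-and-wait ARQ over a lossy link (the loss model, seed, and retry limit
here are invented for illustration; real 802.11 ACKs are per-frame with
timers):

```python
import random

def send_with_arq(frame: str, loss_rate: float, max_retries: int = 7,
                  seed: int = 42) -> int:
    """Toy stop-and-wait ARQ: retransmit at L2 until an ACK would arrive.

    Returns the number of transmissions used; raises after max_retries.
    Higher layers (e.g. TCP) never see the intermediate losses.
    """
    rng = random.Random(seed)
    for attempt in range(1, max_retries + 1):
        if rng.random() > loss_rate:  # did the frame survive the link?
            return attempt            # receiver ACKs; done
    raise TimeoutError(f"frame dropped after {max_retries} tries: {frame!r}")

# On a near-lossless wired link one transmission almost always suffices
# (why Ethernet skips L2 ACKs); on a lossy radio link retries are routine.
print(send_with_arq("data", loss_rate=0.3))
```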

------
arbuge
That's all great, but without a commensurate increase in broadband speeds,
will real life consumers see any benefit? I'm thinking you might if you live
in S Korea, but probably not the USA.

~~~
selectodude
I bounce a lot of files and do a lot of screen sharing between my laptop and
my AppleTV - 802.11n can get bogged down with that - and so the increased
speeds would be something I could get behind. Unfortunately neither my laptop,
nor my router, nor my AppleTV support it so it's a "moo" point.

------
nawitus
I'd be fine with a 100Mbps Wi-Fi which would be as reliable as a wired
connection.

~~~
sliverstorm
What do you mean by reliable?

My main snag with WiFi right now is its half-duplex nature and dropped
packets. 100Mbps WLAN is horrendously worse than 100Mbps LAN for networked
storage.

~~~
georgemcbay
OP was a bit oddly worded but I think nawitus meant that he or she would be
happy with 100Mbps wireless _if_ it were as reliable as a wired connection, so
basically exactly what you're saying.

I personally ended up using power-line ethernet to replace a lot of the
wireless throughout my apartment. While the wireless connection was much
faster in theory (and in practice when everything was perfect) than the
powerline ethernet speeds, trying to stream HD movies and such was always an
exercise in frustration with the wifi no matter how good the routers and/or
repeaters were on the wifi connection and how well they were set up. With the
powerline ethernet adapters, OTOH, I just plugged them in and everything Just
Worked, much slower peak speed, but nice and consistent and more than enough
bandwidth for HD movie streaming.

In any case the horrible state of US ISPs is the bottleneck for networking at
home for me right now regardless of how I wire myself internally. My two
options are Time Warner Cable (which cuts out between the modem and their
headend for minutes multiple times a day and was never fixed despite half a
dozen on-site technician visits) or AT&T DSL which has been much more reliable
but is annoyingly slow for big downloads and such. I'm currently using the
AT&T connection, but both options are terrible in their own way and nothing
else usable is available to me.

Fuck you, US ISPs.

~~~
tanzam75
> _I personally ended up using power-line ethernet to replace a lot of the
> wireless throughout my apartment. ... With the powerline ethernet adapters,
> OTOH, I just plugged them in and everything Just Worked, much slower peak
> speed, but nice and consistent and more than enough bandwidth for HD movie
> streaming._

Powerline networking is still a shared medium. You're sharing the available
bandwidth with all of the neighboring houses on the same utility transformer.

It does have the benefit that signal strength drops off with linear wiring
distance. Thus, you're farther from your neighbors than you would be with
Wi-Fi.

However, as with 5 GHz, you're getting good results right now because
powerline networking isn't too popular. But if every house were to get
powerline networking, then you would start encountering interfering networks.

~~~
sliverstorm
Circuit breakers have pretty dramatic attenuation. You go through at least two
to get from your power sockets to your neighbor's.

~~~
tanzam75
> _Circuit breakers have pretty dramatic attenuation. You go through at least
> two to get from your power sockets to your neighbor 's._

Yes, but not enough attenuation to block powerline communications.

This is not a theoretical concern. People have tested HomePlug units and
successfully connected to neighboring houses. There's even explicit provision
in the spec for neighboring networks, so that they share the bandwidth instead
of stomping all over each other.

------
shmerl
How is Linux support for these?

I had tough luck getting even 802.11n speeds on Linux. Are there any USB WiFi
adapters which actually work? I got a D-Link DWA-160, but it never connects on
the n band, only on b/g, which limits it to approx. 50 Mbit/s. (I'm using the
rt2800usb driver.)

------
outside1234
Super-fast is not what we need. We need super-low-power for the next tier of
internet of things battery devices - when are we getting that? :)

~~~
angersock
Why does everybody have such a raging hard-on for the IoT these days?

Especially when we know machines are routinely compromised to circumvent user
freedom, why on earth would you want to help give a foothold to others in your
personal space computationally?

~~~
IgorPartola
Atomic power gave rise to both the nuclear bomb and the nuclear power plant.
Does one justify the other? Probably not, but it sure is nice to have a source
of clean energy available.

I don't think that Internet of Things is that big of a practical gain, but I
do see the potential there for some pretty cool applications. Imagine if every
power switch in your house could be controlled from your phone. Or if you
could do inventory in a warehouse by asking each crate what its contents were.
Or a farm where every tomato plant can tell you if it needs more water.

The privacy issues are separate from the possibilities that the technology
provides. Surely we can figure out a way to build a system that's secure. Or
choose not to introduce these things into our daily lives. I think just the
industrial uses are very interesting here.

~~~
angersock
Atomic tech is a red herring here--note that many are still not sold on the
utility of nuclear power plants (even if those people are being silly). This
is a very real, well-documented problem, and one we have at least a chance of
defusing.

Let us draw a difference between private and industrial IoT. IIoT (your tomato
example, maybe shipping, etc.) is done for practical logistical reasons, and
likely as not will end up on a private network or embedded system. There is
little likelihood that a government acquiring the tomato files is going to
ever truly hurt the consumer, because there is an airgap of commerce between
that information and the consumer--I buy the tomato from a supermarket with
cash, and the trail goes cold.

PIoT, though, means that I have dozens of little spies all reporting on every
aspect of my life, and that eventually that data is collected and correlated
in one place--as it must, if it is to be of maximum utility. And that data,
once concentrated, is something worthy of scrutiny by third parties. It
suddenly makes it a lot more useful to spend a little effort to break into a
management system or throw the book at a service provider, because the return
is so great.

Moreover, even without extracting the data, the devices required afford
computational capabilities, and the only truly trustworthy computer systems
that exist are those that are powered off and disassembled--anything else is
suspect (for the truly paranoid).

I just don't think that the personal internet of things is a good idea.

~~~
IgorPartola
So we agree that the technology behind IoT (IPv6, low-power connectivity,
low-power small-form-factor yet highly performant computers) is valuable. We
also agree that industrial usage of IoT is probably a net gain. We disagree
about the PIoT. I believe it can in fact be built to be secure from spy
agencies provided that the hardware and software are OSS. You believe it will
be too much of a target for the spy agencies to ignore (I agree with this) and
that the agencies will find a way to break any implementation (I disagree).

~~~
angersock
In essence, yep. The tech behind IoT is basic enough that it is widely
valuable--especially because it is not solely for IoT; instead, IoT is a side-
effect of those technologies.

I would be curious how you think that open source will make PIoT any less
risky or a compromise any less dangerous.

~~~
IgorPartola
The compromise would be very dangerous. It's something to design against.
Using vetted OSS software, running atop open hardware, and being able to
verify that neither has been tampered with is the way to do it. If I decide to
run an IoT based on some very basic microcontrollers + Bluetooth adapters + a
Cubieboard, running OSS software I verified myself, behind a firewall I set up
myself, I'd give it a decent amount of trust (all of the above are examples
and can of course be swapped for parts we trust more).
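
One small, concrete piece of that verification is checking an image against a
digest published out of band; a minimal sketch (the path and digest in the
usage comment are placeholders):

```python
import hashlib

def verify_image(path: str, expected_sha256: str) -> bool:
    """Compare a firmware/software image against a digest obtained out of band."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # hash in chunks
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Example usage with a placeholder path and digest:
# verify_image("firmware.bin", "<digest from the project's signed release notes>")
```

Of course this only pushes trust onto wherever the digest came from, which is
why signed releases and reproducible builds matter for the OSS part.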

------
IgorPartola
Is my understanding that higher frequencies mean less range and more
interference correct? I want good coverage and uniform performance over extra
bandwidth. I also want good latency over speed in lots of cases.

~~~
yaantc
Higher frequency means less range (everything else being equal) but not more
interference. Interference depends on how crowded a given piece of spectrum
is, and the 5 GHz band is much less crowded than 2.4 GHz: there is more
spectrum there, fewer 5 GHz devices, and because the range is smaller,
interference from neighbors is lower.
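
The range difference follows from the free-space path loss formula,
FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. A quick sketch shows
5 GHz costs roughly 6.4 dB extra, independent of distance:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 0.02  # 20 m, a typical indoor distance (an assumed example value)
extra_loss = fspl_db(d, 5000) - fspl_db(d, 2400)
print(f"5 GHz loses {extra_loss:.1f} dB more than 2.4 GHz in free space")
```

Walls add further frequency-dependent attenuation on top of this, which is the
part the formula doesn't capture.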

------
zurn
Seems the article confuses the raw symbol rate with delivered L3 bandwidth (as
evidenced by the testbench comment in the beginning).

See eg.
[http://www.oreillynet.com/pub/a/wireless/2003/08/08/wireless...](http://www.oreillynet.com/pub/a/wireless/2003/08/08/wireless_throughput.html)
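
A toy model shows why delivered throughput trails the raw rate: every frame
pays a fixed airtime cost (preamble, interframe spacing, ACK) that does not
shrink as the PHY rate grows. The overhead figure below is an assumed round
number for illustration, not an exact 802.11 value:

```python
def goodput_mbps(phy_rate_mbps: float, payload_bytes: int = 1500,
                 fixed_overhead_us: float = 100.0) -> float:
    """Delivered throughput when each frame pays a fixed per-frame airtime cost.

    fixed_overhead_us lumps together preamble, DIFS/SIFS and the ACK frame
    (an assumed round number, not an exact 802.11 figure).
    """
    payload_us = payload_bytes * 8 / phy_rate_mbps  # airtime for the data bits
    return payload_bytes * 8 / (payload_us + fixed_overhead_us)

for rate in (54, 300, 1300):
    print(f"PHY {rate:5d} Mbps -> ~{goodput_mbps(rate):.0f} Mbps delivered")
```

The efficiency collapses as the PHY rate climbs, which is why frame
aggregation became mandatory in 11n/11ac.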

------
ksec
I would be glad if I could get 1 Gbps real-world speed from behind a wall. Add
low latency, and I could finally ditch the cable and use WiFi instead.

Then I would hope router vendors would work on their software so routers don't
crash every month or so. It would be great if they could restart themselves
every week at night when there is zero activity.

Add a decent implementation of USB 3.0 / 3.1, so I could use an external USB
HDD as a NAS instead of buying a separate unit.

------
rektide
I don't suppose beamforming also self-moderates the power used to send?

I have plenty of 100mW APs, and AFAIK they use that power all the time. Sure
would be considerate if they'd scale down to use the appropriate amount of
power required to reach each particular node.

~~~
yaantc
The transmit power budget allowed by the standard is shared across the various
antennas, at least. Even if a given transmission uses the maximum power to
maximize the modulation and coding scheme used, BF will at least reduce the
interference to other APs and devices, so it still has a positive impact on
interference.

------
higherpurpose
Wait - they integrated WiGig into 802.11ac? Interesting, and a better way to
go than releasing a separate 802.11ad actually (even though the routers would
obviously have both ac and ad, just like we see b/g/n support today).

~~~
rektide
Not at all. Channel sizes in 802.11ad's 60GHz spectrum are 2.16GHz wide, and
most countries can fit four of those channels. 802.11ac still uses 80MHz
channels, and now 160MHz channels, in the 5GHz space, and can only fit a
handful of those "big" channels. 802.11ad is definitely a major user of
beamforming, relying heavily on reflectance, as it doesn't penetrate solid
objects very well.

802.11ad also has engineering work being poured into protocol adaptation
layers, so interconnects such as DisplayPort or PCIe can run atop it, and
presumably atop the 802.11 MAC that's supposed to be core to 802.11ad.

------
sp332
_A further doubling of the data rate is achieved by increasing the maximum
number of spatial streams to eight._

Does that mean you need 8 antennas to max out the range and bandwidth? Cool!
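
Roughly: to use eight spatial streams you need at least eight antennas at each
end. The peak 11ac PHY rate then follows from the top MCS parameters (468 data
subcarriers at 160 MHz, 8 bits per subcarrier for 256-QAM, rate-5/6 coding,
3.6 µs symbols with short guard interval):

```python
def phy_rate_mbps(streams: int, data_subcarriers: int = 468,
                  bits_per_subcarrier: int = 8, coding_rate: float = 5 / 6,
                  symbol_time_us: float = 3.6) -> float:
    """Peak 802.11ac PHY rate: data bits per OFDM symbol over symbol time."""
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return streams * bits_per_symbol / symbol_time_us  # Mbit/s

print(phy_rate_mbps(1))  # ~866.7 Mbps per stream
print(phy_rate_mbps(8))  # ~6933 Mbps with all eight streams
```

That ~6.93 Gbps figure is the headline number for the full 11ac feature set;
actual throughput is well below it (see the symbol-rate-vs-goodput comment
elsewhere in the thread).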

------
goggles99
Does anyone really need this kind of speed in their home? A fast internet
service may provide 20 Mbit/s, so no benefit is coming from the internet side.
Streaming music locally does not consume much bandwidth either. Locally hosted
video is the only place I could see this making much sense, and that is
somewhat of a niche market. How many people rip their DVD and Blu-ray disks
for local hosting? Not that many. I suppose more people pirate HD MKVs, but I
guess the underlying question is: does the performance difference really make
a significant difference? The movie may start half a second sooner - so what?

Network backup is another potentially time consuming task, but this is
typically done late at night. Enterprise WiFi adoption is unlikely because
this makes a company vulnerable to hacking and DOS (via interference or
jamming) attacks.

