
A deep dive into why Wi-Fi kind of sucks - nikbackm
https://arstechnica.com/information-technology/2017/03/802-eleventy-what-a-deep-dive-into-why-wi-fi-kind-of-sucks/
======
willidiots
I build public Wi-Fi networks; you've probably used one of them. AMA if you
have Wi-Fi questions.

To say it "sucks" is a bit harsh. It's delivering multiple hundreds of Mbps to
you via an unlicensed contention-based medium. The air interface is like
sending packets over a noisy Ethernet hub; it's impressive it works as well as
it does. That said, this article's a good primer on some of the protocol's
fundamental challenges.

In the coming years we'll hear more about 802.11ax, which is thankfully
focused on efficiency vs. raw numbers, but likely won't be ratified until
2019/2020.

~~~
koolba
Which router do you recommend?

Is the 5G band consistently worse than the default (i.e. constant connection
drops) or is that just my experience?

~~~
willidiots
An unfortunate side effect of being in the industry is that I constantly have
a surplus of enterprise-grade hardware at home. It's been a while since I
looked at consumer hardware. Apple's Airports were solid, but have been
discontinued. I've used both of the OnHubs and both were performant / stable.
I've heard generally good things about Google's new Wi-Fi pucks, and
Netgear's Nighthawk series.

5G is the better of the two bands - it gives you many more channels and
there's generally less interference. That sounds like something particular to
your environment.

If you have a Mac, I recommend installing WiFi Signal
([https://itunes.apple.com/us/app/wifi-signal/id525912054?mt=12](https://itunes.apple.com/us/app/wifi-signal/id525912054?mt=12)) - it gives you much greater insight into what's
happening on the air. I'd start by installing something like that, and
monitoring for correlation with your drops - what else happens when your
connection drops? Does the SNR drop? Does the AP change channels? etc.
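
If you'd rather script the monitoring, here's a minimal sketch using the
undocumented macOS `airport` CLI (the path and field names below are its
usual ones, but Apple doesn't guarantee them across macOS versions):

```python
#!/usr/bin/env python3
"""Poll the macOS airport CLI and log SNR so drops can be correlated."""
import re
import subprocess
import time

AIRPORT = ("/System/Library/PrivateFrameworks/Apple80211.framework/"
           "Versions/Current/Resources/airport")

def snapshot():
    out = subprocess.check_output([AIRPORT, "-I"], text=True)
    fields = dict(re.findall(r"^\s*(\w+): (.+)$", out, re.MULTILINE))
    rssi = int(fields.get("agrCtlRSSI", 0))    # signal strength, dBm
    noise = int(fields.get("agrCtlNoise", 0))  # noise floor, dBm
    return rssi, noise, fields.get("channel", "?")

while True:
    rssi, noise, channel = snapshot()
    print(f"{time.strftime('%H:%M:%S')} ch={channel} "
          f"rssi={rssi} noise={noise} snr={rssi - noise} dB")
    time.sleep(5)
```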

~~~
rsync
"An unfortunate side effect of being in the industry is that I constantly have
a surplus of enterprise-grade hardware at home."

Great. Which do you like best?

------
dom0
There is a German tech saying that goes "Wer Funk kennt, nimmt Kabel" ("Those
who know wireless, use wires").

~~~
agumonkey
Funk means wireless ??

~~~
dom0
Funk means RF/radio/wireless. The term originates from the first radio
transmitter ever, which used spark gaps (spark = Funke, gap = Strecke/Spalte)
to generate detectable RF.

So German Funk is actually directly related to the English funk (as in music)
etymologically.

~~~
agumonkey
Thanks for the detailed explanation.

------
olivierlacan
This article should be read by everyone working in open space offices across
the world expecting to get decent speed and reliability out of Wi-Fi with more
than a dozen people in a single room.

For a company that depends on the Internet for any of its work, an open
space office (or any office) without Gigabit Ethernet cables sticking out of
every workstation is pretty damn foolish.

~~~
fulafel
5 GHz has plenty of non-overlapping channels (24?), so in this situation you
can use multiple APs in different room corners set to low power.

~~~
scott_karana
Only if the other floors in your building aren't doing the same thing. Which
is exactly why this approach is no longer viable with 2.4GHz. ;)

------
coleca
I once had the opportunity to listen to an extremely talented WiFi expert
from Aruba Networks explain on a whiteboard, to a rapt audience of
infrastructure engineers, how this works, and how adding multiple SSIDs is a
contributing factor to the problem.

The most fascinating piece was what he called the "butter factor". The closer
a substance is in consistency to butter, the more it will absorb WiFi signals.
Aruba had one heck of a challenge installing a WiFi network in the Land
O'Lakes manufacturing facilities. They had to use directional antennae
mounted at eye level down each aisle of the factory.

~~~
swiley
> They had to use directional antennae mounted at eye level down each aisle
> of the factory.

Did they ever mention the permanent, cumulative damage concentrated 2.4GHz
RF does to the human eye?

~~~
sathackr
Care to cite a credible source for this claim?

~~~
JBReefer
I have a counter claim: near face high powered WIFI router, 20/10 vision.

Non-ionising radiation is safe to fairly high levels.

~~~
sathackr
Agreed. The primary concern is tissue heating that cannot be sufficiently
dissipated. This is nearly impossible at "wifi" power levels (below 1 watt)
without carefully contrived situations -- such as placing your eyeball (which
is the least-equipped organ to dissipate heat) at the exact focal point of a
specially designed parabolic dish.

I have been safely working around thousands of watts of RF power for 20+ years
and have maintained my 20/15 vision.

------
bsenftner
My day-to-day happiness quotient shot up the day I ditched wifi and ran
Ethernet cables through my spaces. I'd realized I was constantly noticing
wifi issues, and an Apple MBP I use for email seemed to drop wifi every half
hour. But no longer, with them all wired up! Realizing there are
USB3-to-Ethernet gadgets really hits home how market-driven-dumb and
anti-consumer our modern technology is: modern laptops don't even have
Ethernet ports anymore!

~~~
emodendroket
I splurged on one of the expensive tri-band routers and honestly now the
wireless is good enough that I don't care.

~~~
Klathmon
I used to live about 1/4 mile away from any other houses, and I never got why
people always complained about wifi.

In my experience it always worked well, was fairly foolproof, and had fast
enough speeds that it was never an issue.

2 years ago I moved into a condo that has 25 wifi APs within range of me right
now. Now I get it.

Getting one of the high-end tri-band routers as you said did help, but it's
still a difficult experience.

~~~
extra88
I never heard of a tri-band router before. It's a misnomer, because there is
no third band; it just means the router handles 2.4GHz but will use two
channels of 5GHz at once. There have been other routers in the past that used
more than one channel, but it sounds very selfish to use one in a dense
apartment/condo living situation.

~~~
Klathmon
Well luckily there is a massive chunk of 5GHz channels that nobody is using
right now, so I'm okay, and the routers that use the two 5GHz channels will
actually step down to one if they see someone else stepping on either of
them.

But just the fact that the higher end routers handle the extra congestion
much better was more the point.

------
nerdbaggy
As somebody who deploys high density wifi for a living I can agree that WiFi
sucks. 5GHz is already super crowded and it's only getting worse. I think the
new LTE over 5GHz is going to kill the band once it becomes deployed. Some of
the cameras in arenas run on 80MHz-wide 5GHz channels that hop around and
can't be channel planned. They are the worst.

~~~
petra
You mean LTE-U, right?

Why is there a difference between the interference from LTE-U and the
interference from paid wifi, like wifi offload? And how big is that
difference?

~~~
nerdbaggy
The biggest issue is that we would be the company providing both the WiFi
offload and the paid WiFi, so we have control over most of the frequency and
can channel plan and all that stuff. With LTE-U it's going to be a lot harder
to channel plan with the cell phone carriers. I bet a lot of the LTE-U
installs are going to be contracted out, so getting to the right people to
channel plan will mean going through a lot of layers. And they may not even
want to channel plan; since it's unlicensed, some people refuse to.

~~~
sathackr
And they have a monetary interest in wifi not working. Of course, any
intentional act to cause interference with wifi users would be illegal, but
there are tons of things they can do or not do that will cause interference
with wifi and might not be provable as intentional interference.

You can't add more users to a frequency without the existing users losing
something, regardless of how much the LTE-U people sugar coat it.

------
r1ch
Lots of legacy tech is also one of the reasons why Wi-Fi sucks. If you do one
thing today, disable 802.11b on your router. 802.11b beacons alone can
completely jam a 2.4 GHz channel in dense deployments, exacerbated by those
ISPs that broadcast their own SSIDs from your home router.

I wrote a more in-depth blog post about this at
[https://r1ch.net/blog/wifi-beacon-pollution](https://r1ch.net/blog/wifi-beacon-pollution)

~~~
extra88
Huh, I have an older Airport Extreme, it doesn't look like I can disable
802.11b. I can't disable the 2.4GHz band either. Airports were never known for
their configurability.

~~~
watersb
On my iPad right now. My first-generation "Time Capsule" has four options I
can set via the iOS Airport Utility:

* 802.11n (b/g compatible)
* 802.11n only (2GHz)
* 802.11n (a compatible)
* 802.11n only (5GHz)

I gave away my AirPort a while ago; it was also first-gen of its kind, so I
think it only did 802.11a.

------
anf
Sounds like "everything is amazing and nobody is happy" syndrome [1] :-)

[1] [https://www.youtube.com/watch?v=dgEvjW1Pq4I](https://www.youtube.com/watch?v=dgEvjW1Pq4I)

------
d33
It also sucks from the security point of view, even though the problems
could in most cases be fixed with solutions known to the current state of
cryptography:

[https://github.com/d33tah/call-for-wpa3](https://github.com/d33tah/call-for-wpa3)

~~~
jradd
* True, but I doubt anybody is going to sniff my wireless, force me to de-auth, and capture the 4-way handshake, which she saves to crack offline in the comfort of her home using cloud computing or GPUs. [0]

* The WPS attack doesn't work with _every_ router that supports it, but allows for an easy way to compromise most modern routers. [1]

* RADIUS/EAP-TTLS is still rock solid. We all know WEP has already been broken and forgotten.

[0]: [https://wiki.installgentoo.com/index.php/Breaking_WPA2](https://wiki.installgentoo.com/index.php/Breaking_WPA2)
[1]: [https://docs.google.com/spreadsheets/d/1uJE5YYSP-wHUu5-smIMTmJNu84XAviw-yyTmHyVGmT0/edit#gid=0](https://docs.google.com/spreadsheets/d/1uJE5YYSP-wHUu5-smIMTmJNu84XAviw-yyTmHyVGmT0/edit#gid=0)

~~~
d33
She doesn't necessarily have to go offline, given that everyone has a phone
with internet access nowadays. Also, with weak passwords, dictionary attacks,
rainbow tables, etc., you might actually get compromised in minutes. And
instead of de-auth, she might just as well wait.

As for "I doubt anybody would do that" - it's not really a good security
argument when we have the means easily available.

------
searchfaster
Many people also don't understand how easy it is to forge a deauth packet
and disconnect clients from APs. Hotels have been known to use this to kick
you off your own personal hotspot on your phone. Thankfully 802.11w PMF
solves this, and the FCC has started imposing fines on hotels doing this.
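
For the curious, here's a sketch of how little is in such a frame, using
scapy's 802.11 layers (the MACs are placeholders; building a frame to inspect
it is harmless, transmitting it at networks you don't own is illegal, and
802.11w-protected clients will simply discard it):

```python
# Pre-802.11w management frames are neither encrypted nor authenticated,
# so any radio can claim to be the AP. This just builds one for inspection.
from scapy.all import RadioTap, Dot11, Dot11Deauth

client = "aa:bb:cc:dd:ee:ff"  # placeholder victim MAC
ap     = "11:22:33:44:55:66"  # placeholder AP BSSID

frame = (RadioTap()
         / Dot11(addr1=client, addr2=ap, addr3=ap)  # to client, "from" AP
         / Dot11Deauth(reason=7))  # 7: class 3 frame from nonassociated STA

frame.show()  # inspect the handful of fields involved; do not transmit
```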

------
Tempest1981
If anyone is involved in the standards committees -- please try to make the
naming more user-friendly. My non-techie friends are totally confused by A,
AC, AX, B, G, N, WiMax.

No need to reveal the inner workings of the standards committees to the
public. Simple numbering would help.

~~~
hchenji
You mean like 3G/4G/LTE?

------
scurvy
The author kinda bungles his analogy/explanation of collision avoidance and
detection. Wireless networks don't use CSMA/CD. They use CSMA/CA. There's a
huge difference, and it's one big reason why wireless throughput won't ever
come close to PHY speed.

Wired ethernet uses CSMA/CD and it's one of the reasons it won the LAN
networking wars of the 80's and 90's.
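
For intuition, here's a toy simulation (an idealized slotted model with
binary exponential backoff, not real 802.11 DCF timing) of what contention
costs as stations are added:

```python
import random

def simulate(n_stations, rounds=100_000, cw_min=16, cw_max=1024):
    """Each round, every station draws a backoff slot from its contention
    window; the smallest slot wins the channel. Ties collide, and colliding
    stations double their window (binary exponential backoff). Returns the
    fraction of channel acquisitions that actually carry data."""
    cw = [cw_min] * n_stations
    success = collisions = 0
    for _ in range(rounds):
        slots = [random.randrange(cw[i]) for i in range(n_stations)]
        first = min(slots)
        winners = [i for i, s in enumerate(slots) if s == first]
        if len(winners) == 1:
            success += 1
            cw[winners[0]] = cw_min           # reset window after success
        else:
            collisions += 1
            for i in winners:                 # colliders back off harder
                cw[i] = min(cw[i] * 2, cw_max)
    return success / (success + collisions)

for n in (2, 5, 10, 25, 50):
    print(f"{n:3d} stations: {simulate(n):.0%} of acquisitions succeed")
```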

~~~
dboreham
Do today's wired networks (1G+/FD/Cat6/fiber) still use CSMA/CD? I was under
the impression that they don't.

~~~
hchenji
No, since wired networks are full duplex and are their own collision domains.
No need to do carrier sensing at all.

~~~
scurvy
Not entirely true. You can run half duplex gigabit connections on hubs. Not
sure why you would, but you can. IEEE 802.3 covers all Ethernet standards and
it still lists CSMA/CD. Largely for historical purposes, but it is in the
spec.

Starting with 10gig, full duplex is required so that was the first IEEE speed
that did away with CSMA/CD.

------
rconti
_adjusts Meraki APs to auto power, 40MHz channel width on 5GHz_

I was dismayed at how terrible range was in my new, Silicon Valley-sized house
(~1100sqft). Even with a Meraki AP at the front wall, it would work line of
sight about 30 feet and start having issues as soon as I stepped behind a
wall.

Having a couple APs has mostly solved my issues, but even so, it feels like
overkill for such a tiny house. But the neighbors on either side are pretty
close, and I see a lot of interference. Worked a case with Meraki support for
a long time, and that seems to be the real problem.

I didn't realize I couldn't "shout over" my neighbors though, so I had signal
strength set to max.

Back when I was troubleshooting, I tried everything. 5GHz only. 2.4GHz only.
Tweaking channels manually. Tweaking everything manually. The funny thing
was, nothing helped... but when I set things back to auto (except max power),
it all got better. Every incremental change I made caused slightly worse
performance, but not enough so to notice. Going back to auto fixed it all.

Hoping auto power helps as well.

------
sniglom
What's going on in this article? Is it the author speaking about having bad
hardware and configuration?

> In real life, if you had your devices close enough to each other and to the
> access point, about the best you could reasonably expect [with 802.11b] was
> 1 Mbps—about 125 KB/sec.

I used 802.11b a lot. In a non-crowded situation, reaching ~5.5 Mbit/s was
not a problem at all. I remember seeing transfer speeds of about 700KB/s.

Why the author ignores the theoretical top speed, which is something around
60% of 11 Mbit/s, is beyond me.

Then the author continues with the same thing again:

> your best case scenario [with 802.11g] tended to be about a tenth of that—5
> Mbps or so

This again is not true. In a non-crowded situation I had no issues reaching
2-3MB/s, which is closer to the theoretical limits of 802.11g after factoring
in some signal loss.

Surely, today when everybody has wifi, you would probably not reach 700KB/s
on 802.11b or 3MB/s on 802.11g, but back when it began it was actually
feasible.

------
matwood
I suggest reading up on how WiFi works and some of its problems, like hidden
nodes [1]. Sometimes I'm amazed WiFi works at all.

[1]
[https://en.wikipedia.org/wiki/Hidden_node_problem](https://en.wikipedia.org/wiki/Hidden_node_problem)

~~~
mhandley
The classical hidden terminal problem, where two clients of the same AP can't
receive each other and so collide when they transmit, isn't such a big deal
with WiFi. First, there are, by definition, no hidden terminals for the
downlink (and most traffic is downstream), or the clients couldn't associate
with the AP. Second, although two clients can't receive each other's
transmissions, if they're associated with the same AP they can usually hear
each other well enough for carrier sense to work.

The usual problem these days is too many overlapping networks. Different APs
on the same channel will defer to each other when they can hear each other,
but because an AP tends to use a shorter contention window than clients, when
two APs do transmit they still collide with each other with moderately high
probability. Worse, modern 802.11n and 802.11ac only get good
performance by forming aggregates of many packets (up to 64KB in 802.11n, more
in ac) to reduce the overhead of medium acquisition. Often they don't use
RTS/CTS because this reduces performance in benchmarks. When such aggregates
collide you lose the whole aggregate, not just one packet.
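
A back-of-the-envelope model of that trade-off (the PHY rate and per-access
overhead below are illustrative assumptions, not measured 802.11n timings):
aggregation amortizes the fixed cost of each channel acquisition, but every
collision now burns the whole aggregate's airtime:

```python
def goodput_mbps(agg_bytes, p_collision, phy_mbps=300, overhead_us=150):
    """Expected goodput for one transmitter: every channel access pays a
    fixed overhead (backoff, preamble, ACK), then sends one aggregate;
    a collision wastes the entire access."""
    airtime_us = overhead_us + agg_bytes * 8 / phy_mbps  # Mbit/s == bit/us
    useful_bits = (1 - p_collision) * agg_bytes * 8
    return useful_bits / airtime_us

for agg in (1_500, 16_000, 64_000):  # one MTU vs. progressively bigger A-MPDUs
    clean = goodput_mbps(agg, 0.00)
    busy = goodput_mbps(agg, 0.15)   # overlapping-AP collision rate (assumed)
    print(f"{agg:>6} B: {clean:5.1f} Mbps clean, "
          f"{busy:5.1f} Mbps at 15% collisions")
```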

~~~
matwood
Thanks for the explanation. It's more evidence to my point that I'm always
amazed it works at all :)

------
apenwarr
Some even more in-depth slides about why wifi doesn't always perform:
[http://apenwarr.ca/diary/wifi-data-apenwarr-201602.pdf](http://apenwarr.ca/diary/wifi-data-apenwarr-201602.pdf)

------
ksec
Is there any reason why we can't use LTE as a standard for the Wi-Fi use
case? So we'd have an in-house LTE router with a wired connection, and when
you are out of range you'd still be on LTE with your carrier. This is not
LTE-U in Rel 12 or LAA in Rel 13, which both require a functioning LTE
connection as an anchor point.

WiFi used to be good in the 3G/WCDMA days, but I think the appearance of LTE,
with constant innovation and advances by both carriers and phone makers, has
made the LTE experience so much better. And it will only get better with LTE
Advanced Pro and 5G.

------
xbryanx
What software tools do people use (OS X, Linux, Windows) to test out and debug
wifi connections?

~~~
lucb1e
On GNU/Linux (either Android/Cyanogenmod or Debian) I use wavemon. It's just
an apt-get away and tells me more than I can understand (which is fairly rare,
especially compared to Android apps which are universally underwhelming).

------
hchenji
The article misses out on explaining WHY we cannot get the promised bitrates.
The answer is Shannon's limit on channel capacity, which mandates that you
pay in bandwidth, higher power, or lower noise (i.e., better SNR) to get
higher capacity.
Now these WiFi devices have internal rate adaptation algorithms that choose a
particular modulation and coding scheme (MCS) index based on the measured SNR.
A higher MCS index means more bits per symbol (denser modulation), lower-
overhead code rates, and more antennas (spatial streams), which is how you get
the xx Gbps bandwidth advertised on the box. List of MCS indices:
[http://mcsindex.com/](http://mcsindex.com/)

In today's devices, interference is considered as noise, which means that SNR
simply drops to a point where the higher MCS indices are not chosen at all.
So, even though the device is capable of the advertised XX Gbps bitrate, the
SNR isn't high enough to switch to those higher rates.
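
To put rough numbers on it, here's a quick sketch of the Shannon bound,
C = B * log2(1 + SNR) (the 433.3 Mbps rate referenced in the code comment is
the published 802.11ac 80 MHz single-stream MCS 9 figure; the rest is just
the formula):

```python
import math

def shannon_mbps(bandwidth_mhz, snr_db):
    """Shannon bound C = B * log2(1 + SNR), SNR as a linear power ratio."""
    return bandwidth_mhz * math.log2(1 + 10 ** (snr_db / 10))

# Capacity ceiling per spatial stream at plausible indoor SNRs.
for bw in (20, 40, 80):
    row = ", ".join(f"{snr} dB: {shannon_mbps(bw, snr):6.1f}"
                    for snr in (10, 25, 40))
    print(f"{bw:2d} MHz -> {row} Mbps")

# Even this optimum needs ~16 dB SNR before 433.3 Mbps (802.11ac MCS 9,
# 80 MHz, one stream) is possible; real radios need far more, so rate
# adaptation falls back to lower MCS indices.
```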

~~~
freyr
> _interference is considered as noise, which means that SNR simply drops_

You're arguing that PHY data rates are low because SNR is low, and SNR is low
because interference is high. In my experience, this is not the case. PHY
rates can be quite high, but the overall throughput is low due to inefficient
channel access at the MAC layer.

Wi-Fi _doesn't_ consider interference as noise. Since it uses CSMA/CA to
manage channel access, a device can only transmit if no other interfering
device is transmitting, or if background interference is very low. This is why
your device can be operating at a high MCS, but actual throughput will be so
much lower. If your devices are using a low MCS, it's more likely that they're
getting a weak signal from the access point.

~~~
hchenji
In my experience WiFi clients at 2.4GHz never end up using the high MCS
indices whenever they transmit (whenever the channel is clear). This has
something to do with the fact that the CCA thresholds are fairly high, and
that high SNR (30-50dB) is required for activating the high MCS indices[1]. I
don't think these SNRs are achievable in a typical setting.

It is true that MAC backoffs are also a contributing factor to the throughput
being low. The CCA (clear channel assessment) procedure detects both WiFi
preambles and non-WiFi interference (pure energy detection), which is why I
say interference is modeled as noise. CCA does not, for example, have some
intelligent coexistence algorithm for dealing with ZigBee or LTE-U or other
ISM traffic.

[1]: [http://www.revolutionwifi.net/revolutionwifi/2014/09/wi-fi-snr-to-mcs-data-rate-mapping.html](http://www.revolutionwifi.net/revolutionwifi/2014/09/wi-fi-snr-to-mcs-data-rate-mapping.html)

------
davidgerard
My loved one went for a junior sysadmin job. They'd decided to remove all the
wiring and use wifi for everything because of just the sorta hype mentioned
here. Loved one pulled out a Palm Tungsten C and proceeded to crack all their
WEP passwords there in the interview. Got the job too ... and the task of
putting quite a bit of wiring back.

~~~
znewman
WEP passwords? What year is it again?

~~~
extra88
Palm Tungsten C dates the anecdote more than using WEP.

------
amelius
The article doesn't seem to speak about directivity. Sending a narrow,
directed beam of information could reduce contention issues, and reduce power
requirements. But you'd need a more advanced antenna (probably an array), more
advanced signal processing, and smarter software.

~~~
aidenn0
Directivity is great for open-space p2p links, but indoors it's much less
useful as walls scatter 2.4GHz with aplomb.

There's also the small matter that a dipole is already close to the ERP limit
set by the FCC, so most directional antenna setups are not legal.

------
pedalpete
This statement points to why it doesn't suck for most people: "In practice,
it wasn't a whole lot better than dial-up Internet—in speed or reliability."

At the time of adoption, many people were on dial-up or just moving to
slightly faster internet speeds and they were accessing the internet via wifi,
so they didn't notice a drop in performance. Wifi speed increased along with
access to faster internet.

Is it as fast as it can possibly be? No, but it's like having a Ferrari in
highway traffic. Most people can't take advantage of the technical
capabilities of anything that would be considered better.

------
brokenmasonjars
My trading terminal I have directly wired in, as issues with wifi can become
costly in the middle of a trade. That said, for my leisure laptop and
everything else I'm totally fine with wifi. Then again, the leisure laptop
really doesn't get heavy use such as gaming; some lectures on YouTube is
probably the biggest test it gets. My phone (iPhone 6), despite being
relatively new, has always been terrible with wifi, which I always found
weird.

------
tdy721
I think the article is getting bits and bytes a little mixed up. You can
expect about 1/10th the speed?

Most file transfer dialogs I've seen ("real world"?) display transfer rates
in bytes. Advertisers use bits; that little marketing move can actually
explain the speed drop to 1/8th of the "advertised" rate.
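
For concreteness, the arithmetic (the 50% airtime efficiency below is an
illustrative assumption, not a measured figure):

```python
advertised_mbps = 300                # "300 Mbps!" on the box, in megabits/s
dialog_mbytes = advertised_mbps / 8  # file-dialog units: 37.5 MB/s

# Protocol overhead then eats into that (assume 50% airtime efficiency).
realistic_mbytes = dialog_mbytes * 0.5
print(f"{advertised_mbps} Mb/s = {dialog_mbytes:.1f} MB/s raw, "
      f"~{realistic_mbytes:.1f} MB/s after overhead")
```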

~~~
aviraldg
I believe the article takes that into account and describes the speed loss
you'd see over and above the "loss" from conversion. (it does use bytes
somewhere in between, so you know they're aware of the difference)

------
mentat
He's testing performance with cheap laptops and blaming the adapter? Pushing
data isn't free. If you have a bad CPU or memory architecture, or even bad
drivers, you're not going to get the network's rated speeds. This has been
true for wired as well since the beginning of Ethernet.

------
VeejayRampay
Wifi sucks because, to this day, Ethernet is still better in every aspect
but mobility.

~~~
emodendroket
Wow, it's better "in all aspects but mobility"? You're kidding. You may as
well complain that batteries suck because the only advantage they have over
just plugging something in is portability.

~~~
SomeStupidPoint
I mean, isn't that true and why almost all of our electronics plug in?

~~~
woah
One difference is you don't see people going around on forums saying
"batteries suck because you have to charge them"

~~~
SomeStupidPoint
Sure you do: look at forums on portable tools with both battery and cord
versions.

The charging time (and the backups necessary for continuous use), the
weight, and the low power provided are all cited as weaknesses of batteries.

The only advantages batteries have are increased portability and the ability
to provide power without existing infrastructure. Which is why we see
batteries primarily limited to highly portable devices and backup systems
(because they can store power more stably than capacitors).

Wifi sucks for anything that doesn't need to move or doesn't have issues
preventing cabling (e.g., it's infeasible to run a line through walls, but
you can get signal). Batteries suck for anything that doesn't need to move a
lot or doesn't need an independent source of limited power.

~~~
emodendroket
A lot of people choose wireless even for stationary machines because you don't
have to bother running the cable.

------
Filligree
This article is all about unwrapping the marketing copy and explaining why it
sucks.

That's still valuable, but I was hoping for a more technical view on it.
Anyone have an article that explains why the systems are as limited as they
are?

~~~
mckoss
Read more (page 2) - there is a considerable discussion of channel
congestion and interference due to overpowered zones and large numbers of
devices.

~~~
Filligree
I did. It's still fairly superficial, and not the focus of the article.

What I'm looking for is e.g. an explanation of how MIMO works, or why explicit
timeslots were considered useful for 802.11n but not 11b.

~~~
kogepathic
MIMO isn't that complex. It's using multiple antennas to take advantage of
multipath propagation. [0]

If you want a very simplistic explanation, it's like giving you multiple
Ethernet cables to improve throughput. (To gloss over _all_ the details about
RF and boil it down to the practical benefit: more bandwidth)

Per the article, not everything with MIMO is advertised as 1x1, 2x2, etc. You
frequently find WiFi routers mentioning 1T1R (1x1), or 2T2R (2x2). Or maybe
just the Chinese routers I look at.

This article is mostly about interference, which is definitely a big issue
in populated areas. But there are also issues with Linux and the drivers
used by AP manufacturers. There's a project, Make Wi-Fi Fast [1], which aims
to address these issues.

They're making good progress, especially in environments with a lot of
clients. Just having one 802.11b or g client can really ruin throughput for
newer devices due to the timesharing algorithm used by default.

[0]
[https://en.m.wikipedia.org/wiki/MIMO](https://en.m.wikipedia.org/wiki/MIMO)

[1] [https://www.bufferbloat.net/projects/make-wifi-fast/wiki/](https://www.bufferbloat.net/projects/make-wifi-fast/wiki/)
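
If you want one step past the Ethernet-cables analogy, here's the standard
textbook capacity model (a numpy sketch under an i.i.d. Rayleigh-fading
assumption), which shows capacity growing roughly linearly with min(TX, RX)
antennas:

```python
import numpy as np

def mimo_capacity(n_tx, n_rx, snr_db, trials=2000, seed=0):
    """Average capacity (bits/s/Hz) of an i.i.d. Rayleigh MIMO channel:
    C = log2 det(I + (SNR/n_tx) * H H^H), TX power split across antennas."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        h = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        m = np.eye(n_rx) + (snr / n_tx) * (h @ h.conj().T)
        total += np.log2(np.linalg.det(m).real)
    return total / trials

for n in (1, 2, 3, 4):
    print(f"{n}x{n} @ 20 dB SNR: {mimo_capacity(n, n, 20):4.1f} bits/s/Hz")
# nxn capacity scales roughly n-fold -- the "multiple Ethernet cables".
```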

------
ff7c11
Is there a way I can just ignore the collision-avoidance rules and transmit
over someone else anyway? Would that allow me to skip the queue, or would
nobody get any service?

~~~
jakub_g
More likely, not obeying the rules would lead to collisions -> damaged packets
-> retransmissions -> higher channel utilization -> bigger probability of
further collisions; and hence lower speed for everyone.

------
logicallee
Can we have "home" (home Wi-Fi) added to the title? I agree with others here -
the author doesn't talk about how amazing it is that you can connect to a busy
cafe WiFi in a busy street with five other networks coming in strong, and a
dozen (or few dozen) people using the one you're on, and it still more or less
works! (Wow.) Their only focus seems to be on fixed installations,
specifically at home.

------
exabrial
Fortunately my Mac doesn't have Ethernet.

Wait...

~~~
Moru
I think that if I was to buy a computer and notice it's missing an ethernet
port, I would return it as defective...

~~~
bartvk
It's not a big deal. USB-based adapters usually don't need drivers and are
$5 to $15, cheap enough to just put one on every desk you usually sit at.

I've got office space as well as a desk at a regular client. On both desks,
there's a USB hub with an ethernet adapter hooked up. I plunk down my laptop,
connect to the hub and I'm done.

------
bitxbitxbitcoin
Is it right in this instance to lay blame on the marketers?

------
mrmrcoleman
Excellent article. Thanks.

------
employee8000
If it's a mess, then as a consumer they're doing a great job hiding it from
me. I have 20 devices connected to my wifi and everything is running
remarkably smoothly. I did get frustrated at my 802.11n router 4 years ago
and dropped $250 for the best ac wifi router on the market, and I've had no
complaints since then. Sure, it could be more efficient from a technical
standpoint, but that's not something I'm interested in, and I'm happy at how
seamless things are for me right now.

~~~
massysett
Same here. I have a Roku sitting literally four inches from my router and I
haven't hooked it up by wire because I've been too lazy to go get a patch
cable (where "go get" means to my basement, not to the store.) I know wired
would be faster but wireless has long been fast enough for me not to care.

------
draw_down
I hate wires. But the only thing worse than them is wireless technologies.

