
4 geeky laws that rule our world - 001sky
http://www.neatorama.com/2012/09/05/Four-Geeky-Laws-That-Rule-Our-World/
======
btilly
3 of the 4 are good.

The last is not. Reed's law is an extremely optimistic version of Metcalfe's
law, which gives clearly nonsensical results. And Metcalfe's law in turn seems
to be overly optimistic. See
[http://spectrum.ieee.org/computing/networks/metcalfes-law-
is...](http://spectrum.ieee.org/computing/networks/metcalfes-law-is-wrong) for
evidence that the real scaling law tends to be more like n log(n). (See
<http://www.dtc.umn.edu/~odlyzko/doc/metcalfe.pdf> for several other lines of
argument leading to the same result.)

Note that when I say "tends to be" I mean that, depending on the details of a
social network, the scaling law can differ. In particular a network that
relies on relative strangers having interactions with other relative
strangers, as happens with eBay and Airbnb, is going to scale much closer to
Metcalfe's Law. By contrast, for a network that depends on developing a circle
of friends, as happens with Facebook, the scaling law will be closer to n
log(n).

Disclaimer: I have a bias here since I'm one of the co-authors of the n log(n)
law. (Andrew Odlyzko did all of the work. And then we found out that Bob
Briscoe had independently arrived at the same conclusion based on data that he
had access to at British Telecom.)
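
For a feel of just how different these laws are, here's a minimal sketch (my
own illustration, arbitrary n values, not from the paper) comparing the three
candidate value functions:

```python
import math

# Metcalfe's law: value proportional to the number of possible
# pairs of users, ~n^2.
def metcalfe(n):
    return n * (n - 1) / 2

# The n log(n) correction (Odlyzko/Tilly/Briscoe).
def odlyzko_tilly(n):
    return n * math.log(n)

# Reed's law: value proportional to the number of possible
# subgroups of users, 2^n - n - 1, which explodes absurdly fast.
def reed(n):
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(n, metcalfe(n), round(odlyzko_tilly(n), 1), reed(n))
```

Even at n=30, Reed's 2^n term is already in the billions while n log(n) is
barely past 100, which is why the nonsensical results show up so quickly.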

~~~
jcr
The trouble I have with Metcalfe's Law and similar is that all of them are far
too optimistic. Most English speakers have heard the phrase, "Point of
diminishing returns," and such points do exist. Worse yet, there's also the
phrase "Collapse under its own weight," describing the point where something
got too big or widespread to retain its integrity, so its growth essentially
harmed itself and failed.

If you look around, you'll see this kind of diminishing returns and self
destruction in a lot of different places, from "social" environments like this
forum, to your overflowing email box, to the constant interruption of your
cell phone.

~~~
btilly
It is definitely true that a vocal minority of users are destructive to any
network they find themselves in. Examples of negative users include things
like trolls and spammers.

Another problem is that the natural desire of the people running the network
to monetize it creates incentives to weaken the experience.

Both of these can lead to sub-linear growth in overall value as a network
grows. But measuring value is already hard. Modeling all of the ways to screw
up a network is even harder, and I doubt that there is any simple approximate
general law that describes it.

~~~
jcr
Same here. I've never seen a simple general law, rule, or even guideline to
accurately predict the stagnation and decline of complex systems, or more
accurately, growth-decay cycles. It's a bit like knowing the sun regularly
rises somewhere over there ( _jcr points vaguely towards the "East-ish"
direction_ ) but having no idea _why_ it does, and hence, having no way to
accurately predict where it will rise. Trying to figure it out can be both
fascinating and frustrating.

There was a mathematician or scientist who said, "To measure something is to
know it," but unfortunately, I can't remember his name. Anyhow, I agree our
inability to accurately measure and model (or even notice) a lot of the
factors involved in complex systems results in our inability to describe or
predict them.

BTW, I kind of look at "measuring value" and "modeling ways to screw up the
network" to be mostly the same thing. In one case you're identifying,
measuring, and modeling the beneficial (value-increasing) factors, and in the
other, you're identifying, measuring, and modeling the harmful
(value-decreasing) factors. --I have a funny feeling that I've missed something
obvious, so did I misunderstand your statement?

~~~
001sky
_Our inability to accurately measure and model (or even notice) a lot of the
factors involved in complex systems results in our inability to describe or
predict them_

Structured processes | SDIC [1] vs. resolution is a legitimate issue. Not all
processes can be modeled the same way; some are merely "complicated" but
ultimately simple. In the latter case, logic helps "bridge" the resolution
issues. Many deterministic processes cannot be brute-forced, though.

[1] i.e., displaying sensitive dependence upon initial conditions; just how
accurate is your measurement, and can it ever be accurate _enough_ to deduce
an originating function?

~~~
001sky
Edit: If I may expand on the point above, which may seem cryptic.

Complex systems are interesting in that they can be both predictable (in
theory) and not predictable (in practice). What's more, they can be
_mischaracterized_ by analyzing the data: i.e., "this data is a mess, it must
be unpredictable." There is a whole class of deterministic processes that
generate these types of results. Whereas a normal, linear process can be
inferred from medium/high-resolution data, even with higher-resolution data we
can't infer the underlying logic of the complex system. We may, as a result,
either oversimplify the model or proclaim that the data does not support _any
deterministic process_ at all.

The classic example is the class of processes that exhibit sensitive
dependence upon initial conditions, i.e., variations on the notion of
deterministic chaos. Their sensitivity is such that the resolution of the
dataset required to deduce or infer the origination function would never be
feasible unless the dataset was complete. Whereas, with deterministic
processes that are more traditionally tractable, you can make progress in
your knowledge with datasets of increasing resolution: i.e., you can run a
regression to infer y=mx+b, or a Monte Carlo to fit a Gaussian curve, or what
not. But you cannot brute-force a fit to a chaotic process from a Monte
Carlo, because you will never have enough resolution nor enough precision in
your data set to infer an origination function. [1]

The summary thought is that sometimes gaps in data can be bridged with
higher-level logic or heuristics, but this is not always possible (either in
theory or in practice). Yet we should _not_ infer that a problem is unsolvable
or intractable just because of this. =D

[1]
[http://en.wikipedia.org/wiki/Chaos_theory#Sensitivity_to_ini...](http://en.wikipedia.org/wiki/Chaos_theory#Sensitivity_to_initial_conditions)
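
As a minimal sketch of sensitive dependence on initial conditions (the
logistic map used here is a standard textbook example, not anything specific
to this thread), two trajectories of the same deterministic rule, started a
mere 1e-10 apart, decorrelate within a few dozen steps:

```python
# The logistic map with r=4 is a fully deterministic "origination
# function", yet no feasible measurement resolution lets you work
# backwards from its output to the generating rule.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # starting points differ by 1e-10
max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # grows to order 1, despite the tiny initial difference
```

The error roughly doubles each iteration, so a measurement accurate to ten
decimal places buys only a few dozen steps of predictability.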

------
lsc
I am consistently amazed by the durability of this "cloud" hype. I mean, it's
been at the peak of the hype cycle now for what, six years? I have been
expecting a backlash for most of that time, and even now, when I make comments
that are not glowing about "cloud" I get funny looks.

Part of this is that 'the cloud' is now how we refer to perfectly ordinary
services that have been around as long as the web, like FTP space. I mean, I
walked in on a group of guys discussing their 'time machine' backup scheme for
their Mac. Now, being a service provider with a whole lot of disk, my first
thought, of course, was "can I sell some kind of standards-based something
that will let people time-machine to my hard drives in a datacenter?" I got
about half way through asking if such a thing existed, and they said, "Oh, you
mean backup to the cloud?" at which an involuntary scoffing noise escaped my
throat.

(The upshot is that apparently you can back up the Time Machine data to 'the
cloud', but the people I talked to did not know what mechanism was used to
upload/store the data.)

~~~
jcr
I think what we're observing is just the evolution of marketing to people with
increasingly lower amounts of technical acumen. At one point, "on the system"
was understood to mean something is stored on the time-sharing mainframe,
simply because everyone used dumb terminals to access the infamous "system".
As technology progressed and computers became smaller, more affordable, and
more capable, the lingo changed to "on the server" since you actually had the
capacity to store something on your "system". The odd part is we still
expected the "server" to be located locally (in building, on campus, &c.).
When a server wasn't (implied to be) located "locally", it was called an "off-site"
server and was typically accessed over private, dedicated lines/connections.
This lingo survived for a while when access methods changed to using
connections over the public Internet -- TYPICALLY DRAWN AS A CLOUD IN
DIAGRAMS!

Some marketing person who didn't really understand technical diagrams of
networks saw the amorphous "cloud" representing the Internet in the drawings
and started referring to "the Internet" as "the cloud." Needless to say, this
lingo caught on with the non-technical folks, and has been used ever since. I
sincerely doubt we'll get rid of the phrase any time soon since the majority
of people on this planet are non-technical and they tend to use the simple
"hype" names they have learned.

The most interesting bit to all this linguistic history is you can accurately
profile people with it. For example, "Did he capitalize 'Internet'?"
--definitely an old fart.

~~~
bigiain
No, we've already lost to the people using "the cloud".

When you've got players like Microsoft, IBM, Apple, Oracle, Cisco, Amazon,
Sales Force, etc - all referring to important parts of what they do as being
"the cloud" (and all having different explanations of what "cloud computing"
is), it's clear that the terminology is here to stay, no matter what its
origins or the intents/uses people once had for it.

~~~
jcr
If you read my post again, it seems we're "violently agreeing".

As I said, I doubt we'll get rid of the phrase.

Then again, from this point forward, I have every intention of depicting the
Internet in network drawings as a fishing net labeled "Net" with the hope of
tricking the marketing types into using "In the Net" rather than "In the
Cloud". I doubt it will work, but it would still be fun to try. ;)

~~~
bigiain
Heh. I think I might start drawing Visio diagrams with what used to be
labelled "the cloud" instead labelled "the marketing department". When anybody
asks, I'll say "nobody knows or can explain what happens in there or why, we
just know that you put data in, and usually _something_ arrives at the
destination. Company policy dictates that we have to use it, so we send all
our data through it in ways where we can detect and correct any changes made
in transit" ;-)

------
TimPC
Reed's Law is arguing that the number of cliques I can participate in on a
social network is more important than the number of "friends". The number of
cliques tends to grow exponentially in the number of users except:

(i) Social networks are extremely sparse: the most well-connected nodes have
5000 connections out of hundreds of millions of users. This greatly reduces
the number of available cliques (although it does still leave it exponential).

(ii) Many of the connections are quite weak: my interest in a random clique
of a social graph in which I'm a member is almost always zero.

(iii) Most cliques that provide value can be extended into other cliques by
inviting members, so I may only be interested in maximal cliques, a further
significant winnowing.

(iv) Cliques aren't even the greatest representation of this because most
groups I participate in on a social network don't have all-to-all friending.

On the whole, though, I expect that the effect of disinterest in most cliques,
combined with extending the interesting ones to maximality, reduces the
cliques I'm actually interested in to something that grows far slower than
exponentially.
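
A toy brute-force illustration (the six-node graph here is made up): on a
sparse graph, very few of the 2^n possible subgroups are actually cliques,
and fewer still are maximal:

```python
from itertools import combinations

# Hypothetical 6-node "social graph": two triangles joined by one edge.
edges = {(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)}
nodes = range(6)

def connected(u, v):
    return (u, v) in edges or (v, u) in edges

def is_clique(group):
    # Every pair in the group must be directly connected.
    return all(connected(u, v) for u, v in combinations(group, 2))

cliques = [set(g) for size in range(2, 7)
           for g in combinations(nodes, size) if is_clique(g)]
# A clique is maximal if it is not a proper subset of another clique.
maximal = [c for c in cliques if not any(c < d for d in cliques)]

print(2 ** 6, len(cliques), len(maximal))  # 64 possible subgroups, 9 cliques, 3 maximal
```

Sparsity collapses Reed's 2^n subgroups to a handful of cliques, and the
maximality filter from point (iii) winnows those further still.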

------
smoyer
Can we rename "the trough of disillusionment" to "the pit of despair"? That's
how I've felt when I had products trapped there and it's also a great movie
reference.

------
bootload

      'Any sufficiently advanced technology 
       is indistinguishable from magic' 
       Arthur C. Clarke
    

You could add more. I particularly like this Clarke observation. cf
<http://en.wikipedia.org/wiki/Clarke%27s_three_laws>

------
MojoJolo
When I read about geeky laws, the first thing that comes to mind is "Moore's
Law".

~~~
noamsml
Moore's law as commonly understood (single-threaded computing power doubles
every X amount of time) is already false.

Moore's law as originally stated (number of transistors per chip doubles every
X amount of time) is probably only true for a short while longer.
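
As a back-of-the-envelope sketch of what the original statement implies (the
~2-year doubling period is the commonly quoted figure, an assumption here):

```python
# Moore's law as originally stated: transistor counts double every
# `period` years, so growth compounds as 2^(years/period).
def doublings(years, period=2.0):
    return 2 ** (years / period)

print(doublings(10))  # 32.0 -- a decade compounds to a ~32x increase
```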

~~~
dmckeon
A reference to Moore's law in the lead paragraphs of an article in popular
media is a useful diagnostic.

If the law is mentioned at all, usually in ~200 words, I expect the writer to
handwave with low signal/noise about some supposedly latest-and-greatest tech
and I can decide whether to read the rest of the article.

If the writer butchers an explanation of Moore's law, I can simply skip the
rest of the article, confident that there will be enough other errors to drown
out any surviving signal.

~~~
eru
I'd guess, if they know (and mention) Moore's second law, you are almost
guaranteed a good article.

------
bshep
Loved the YouTube clip.

------
monochromatic
That first graph could be a lot clearer.

