
UCLA, Cisco and more join forces to replace TCP/IP - ossama
http://www.networkworld.com/article/2602109/lan-wan/ucla-cisco-more-join-forces-to-replace-tcpip.html
======
erik123
_In NDN, all data is signed by data producers and verified by the consumers,
and the data name provides essential context for security._

Centralizing the concept of security in the network's architecture will create
an intractable problem. Certain parties will still want to impose their desire
to eavesdrop on the data. Therefore, there cannot be any real security in such
a centralized security design.

The _in tempore non suspecto_ in which it was still possible to roll out
security jokes such as SSL is over now. Nowadays, 95% of the world's
population (and their governments) will refuse to adopt any centralized
security design, because they do not trust it.

My impression is that the project is dead on arrival.

~~~
jimmaswell
In what way is SSL a "security joke"?

~~~
coldtea
Since most servers were found to be vulnerable to crazy simple attacks like
this:
[http://en.wikipedia.org/wiki/Heartbleed](http://en.wikipedia.org/wiki/Heartbleed)
or the Apple one: [http://nakedsecurity.sophos.com/2014/02/24/anatomy-of-a-
goto...](http://nakedsecurity.sophos.com/2014/02/24/anatomy-of-a-goto-fail-
apples-ssl-bug-explained-plus-an-unofficial-patch/)

(and tons of others besides, plus crappy code in the most prevalent
implementations used).

~~~
noselasd
And when someone implements new code dealing with security, such bugs are not
possible?

~~~
acdha
He's going to write it, so it'll be better than everything in the past and
unicorns will fly by when you use it.

Hypothetical code is always better than the unpleasant kind we actually write…

~~~
coldtea
I hope fighting that strawman you built was fun, but I really don't know where
the BS snark came from.

I didn't say new code will be flawless. Just that OpenSSL code is bad and
insecure.

> _Hypothetical code is always better than the unpleasant kind we actually
> write…_

Well-written code is always better than the crap that OpenSSL was (crap as
admitted by most of the security experts and groups working with it).

People who cannot tell the difference between code with inevitable bugs and
flaws and crappy code that just welcomes them in don't really belong in the
profession.

~~~
acdha
> I hope fighting that strawman you built was fun, but I really don't know
> where the BS snark came from.

The BS snark which started it was your still-unsupported assertion that “SSL
is a security joke”.

> Well written code is always better than the crap that OpenSSL was (crap as
> admitted by most of security experts and groups working with it).

It's easy to criticize OpenSSL and, well, every other SSL library which has
had problems. It's a lot harder to replace it and actual security experts have
thus far chosen to overhaul OpenSSL rather than trying to replace it from
scratch. I trust the judgment of the OpenBSD and Google security teams over
your assertion that it's so easy to replace.

------
nnain
One of the earliest attempts to replace the TCP/IP model (or rather the lower
layers of the ISO-OSI model) was Asynchronous Transfer Mode (ATM). Despite
being a well-intentioned idea, it failed to see real-world usage because of
its complexity.

Along the way many developments happened. People learned to live and work with
IPv4. Even IPv6 hasn't picked up, despite solving some important problems. So
when it comes to updating the core networking infrastructure, I don't think
TCP/IP is replaceable. It just works _very_ well now -- you can have real-time
chats and high-throughput data lines, there are time-tested code libraries,
and there's a vast amount of knowledge, so you can build apps fast and all
that.

As I understand it, what this 'Named Data Networking' technology proposes is
to replace the IP addressing scheme with names. I'm not sure the whole
internet backbone infrastructure would change its networking strategy now.

The TCP/IP addressing format is very structured, and that's its strength. IMHO
that's actually how communication should take place; not with names that can
have high variation in format.

~~~
ay
"Even IPv6 hasn't picked up" - this needs correction.

[http://6lab.cisco.com/stats/](http://6lab.cisco.com/stats/)

[https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...](https://www.google.com/intl/en/ipv6/statistics.html#tab=per-
country-ipv6-adoption)

[http://www.worldipv6launch.org/measurements/](http://www.worldipv6launch.org/measurements/)

You can see that 9% of internet users in the US are IPv6-enabled. Germany is
over 11%. Belgium is almost 30% (though of course, due to its smaller
population, that's less in absolute host count).

How many millions of IPv6 users that is, is left as an exercise for the reader.

Things are moving very, very fast - lots of large SPs have bumped the values
within this year from low-to-mid single digits to nontrivial double digits,
and lots more are in the pipeline.

All major CDNs support it, helping IPv6-enable thousands of sites that don't
run IPv6 on the server itself. I'm saddened by the fact that the HN site,
being a Cloudflare customer, has not flipped the switch - there's really zero
excuse today. ([http://blog.cloudflare.com/eliminating-the-last-reasons-
to-n...](http://blog.cloudflare.com/eliminating-the-last-reasons-to-not-
enable-ipv6))

(On a side note, there are _today_ millions of users who don't have any IPv4
at all - on T-Mobile's network. See:
[https://conference.apnic.net/data/37/464xlat-
apricot-2014_13...](https://conference.apnic.net/data/37/464xlat-
apricot-2014_1393236641.pdf))

Here's another data point, from my home gateway (I'm in the remaining 70% of
folks in Belgium who don't have IPv6 yet, so I am using a Hurricane Electric
tunnel - Vlan50 is the IPv4-only internet connection, so that counter shows
IPv4 user traffic + IPv4 tunnel traffic - you can count it as "aggregate").

    
    
      ay-home#sh int Tunnel0 | inc packets|escr
        Description: Hurricane Electric -- Paris
        5 minute input rate 5000 bits/sec, 6 packets/sec
        5 minute output rate 5000 bits/sec, 3 packets/sec
           171400464 packets input, 193001468663 bytes, 0 no buffer
           90187695 packets output, 13837665814 bytes, 0 underruns
      ay-home#sh int Vlan50 | inc packets|escr 
        Description: Outside - internet-facing
        5 minute input rate 143000 bits/sec, 24 packets/sec
        5 minute output rate 34000 bits/sec, 25 packets/sec
           618491041 packets input, 607147678054 bytes, 38 no buffer
           390716032 packets output, 83476555174 bytes, 0 underruns
      ay-home#
    

Do the math.

~~~
tgflynn
Yes, but <20% adoption 18 years after the protocol was designed seems a bit
slow, especially given the pace of overall technological change in this
century.

~~~
ay
Let's take a look at the Internet itself.

[https://www.ietf.org/rfc/rfc791.txt](https://www.ietf.org/rfc/rfc791.txt) is
dated 1981.

Figure 1 in
[http://www.census.gov/prod/2013pubs/p20-569.pdf](http://www.census.gov/prod/2013pubs/p20-569.pdf)
shows that the first measurement they recorded was in 1997, at 18%. That's 16
years (actually a bit more, because my understanding is that RFC 791
documents already-running code), and still under 20% - so by the same metric,
the Internet is a failure!

Of course, then we can see the tail of the S-curve: it doubled in the next
three years, then slowed down, and the last measurement is 71% of households
in 2011. That is, 30% of US households had no internet at all in 2011.

Now, let's look at the worldwide IPv6 numbers from the past 6 years, taken
from
[https://www.google.com/intl/en/ipv6/statistics.html#tab=ipv6...](https://www.google.com/intl/en/ipv6/statistics.html#tab=ipv6-adoption):

    
    
      0.05% on 7 September 2008
      0.09% on 31 August 2009
      0.15% on 30 August 2010
      0.34% on 1 September 2011
      0.74% on 30 August 2012
      1.84% on 1 September 2013
      4.42% on 31 August 2014
    

The US figures from
[http://6lab.cisco.com/stats/cible.php?country=US](http://6lab.cisco.com/stats/cible.php?country=US)
(available since 2012):

    
    
      1.32% on 1 September 2012
      4.03% on 1 September 2013
      9.91% on 1 September 2014
    

You can see it's been approximately doubling every year - and in the case of
the US, more than doubling.
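A quick sanity check on the doubling claim, computed from the worldwide
figures quoted above (pure arithmetic, no new data):

```python
# Year-over-year growth factors for the worldwide IPv6 adoption figures
# quoted above (percent of users, Sept 2008 - Aug 2014).
worldwide = [0.05, 0.09, 0.15, 0.34, 0.74, 1.84, 4.42]
ratios = [round(b / a, 2) for a, b in zip(worldwide, worldwide[1:])]
print(ratios)  # growth factor from each year to the next
```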

Understandably, everything starts from zero or a very small number. Double a
very, very small number and it's still a very small number.

But if you keep doubling, at some point the numbers stop being small.

We're at that point now.

I hope these numbers speak for themselves - those who ignore them are welcome
to continue doing so. They make for easier competition for those who don't.

~~~
tgflynn
Yes, but I don't think comparing Internet adoption with IPv6 adoption is
terribly valid.

The first was a radically new technology and it took years for people to
figure out how to best make use of it.

IPv6 was supposed to be a purely technical improvement to deal with some
deficiencies of IPv4, notably address space limitations. It should mostly
concern only network and systems administrators and systems software
developers and be largely transparent to end users.

It's interesting that the two seem to have similar growth curves, but given
the very different audiences involved I'm not sure what to make of that
observation.

Certainly if you asked knowledgeable people in 1996 how long it would take to
achieve near 100% IPv6 adoption I doubt many would have predicted 20 years.

On the other hand in 1981 I suspect few would have predicted that a technology
developed by DARPA would be used by people in 2000 to buy books and manage
their bank accounts.

~~~
runeks
> It should mostly concern only network and systems administrators and systems
> software developers and be largely transparent to end users.

I'm no network engineer, but as I understand it, to support IPv6, companies
need to replace their switches. I think it's fair to say that there are
literally millions of switches that need replacing. We are talking billions of
dollars in total investments. I really don't see how it's surprising that this
will take a while. Billions of dollars don't grow on trees; companies need to
earn the money before they can spend it.

At the same time, because IPv6 is used less frequently, it is more expensive.
The price of electronics is determined by volume: the more you produce, the
cheaper it gets. This means IPv6 has a price disadvantage relative to IPv4,
which is especially noticeable in the early years (of ~0.1% adoption). A
device produced at only 0.1% of the volume of the most popular devices will be
considerably more expensive.

This is, in part, why we see an exponential adoption curve: the more people
who buy IPv6 equipment, the cheaper it gets, and the cheaper it gets, the more
people buy it. This chain reaction helps drive the exponential adoption rate.
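That feedback loop is essentially logistic growth. A toy model (hypothetical
parameters, purely illustrative, not a forecast):

```python
# Toy logistic adoption model: yearly growth is proportional both to current
# adoption (cheaper equipment, more deployment) and to the remaining pool of
# non-adopters. All parameters are made up for illustration.
def simulate(years, adoption=0.001, rate=1.0):
    history = [adoption]
    for _ in range(years):
        adoption += rate * adoption * (1 - adoption)
        history.append(adoption)
    return history

curve = simulate(20)
# Early on the curve roughly doubles each year; later it flattens out
# as the market saturates.
```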

I'm not saying everyone will end up using IPv6, although I think it is likely,
but I'm saying it should be no surprise that replacing billions of dollars
worth of network equipment takes time.

~~~
_cbdev
>companies need to replace their switches.

Actually, most switches are just fine and don't need replacing. IPv6 is a
layer 3 protocol; most "normal" switches operate on layer 2 (the Ethernet
level, which stays the same and, in the best case, neither knows nor cares
what goes on in the layers above). These can stay, and most wouldn't even need
to be reconfigured.

As for layer 3 switches (the ones that do some amount of routing, too), most
"brand-name" models purchased in the last 10 years should support IPv6.

~~~
ay
Most of the hardship, from experience, comes from the apps, especially the
home-grown ones.

Let me back this up with an anecdote from experience in dual-stacking the
websites at my employer (a curious reader might notice that cisco.com,
download.cisco.com, software.cisco.com, tools.cisco.com, cisco-apps.cisco.com
are all dualstack. The last one is interesting because it hosts the ordering
portal, with IPv6 being a transport for a non-trivial portion of the hardware
orders).

While the main cisco.com has been dualstack since the v6 launch, the rest of
the properties required more work, because there's a bazillion different apps
there, so they were launched just about a year ago.

And yet despite all the testing, once we went live, we realized one bug had
slipped through. The name of the error was especially ironic, and the bug,
while in a somewhat infrequently used portion, was very visible to
IPv6-enabled users.

[http://www.gossamer-threads.com/lists/nsp/ipv6/47796](http://www.gossamer-
threads.com/lists/nsp/ipv6/47796) for the full externally visible account of
the matter.

Back then the percentage of IPv6 users accessing the erroring function was low
enough that we did not roll back the entire set of changes; we just had the
fix developed and deployed, and the whole scenario was relatively painless.
(Aside from some semi-friendly ribbing during the IPv6 working group at the
RIPE meeting, where this error showed up vividly since we had an IPv6-only
pilot WiFi SSID along with the usual dualstack one.)

If the same story were to happen at 50% IPv6 adoption? That would hurt way,
way more.

The moral:

If you're a big shop - start auditing your apps now, even if you don't think
you'll need it until 3 years from now. If you're not sure - there's a
bazillion resources and people available to help, both for free and for money.

If you're a small shop and don't have any apps - RTFM, assess, and
JustDoIt(tm), in a staged manner of course, all disclaimers apply, etc. The
sooner you get a (small) chance to make your mistakes while taking the first
steps with IPv6, the cheaper those mistakes will be. Of course it's best to
avoid them, but still.

Ok, I'm officially stepping off my "IPv6 soapbox" in this thread - hopefully
these posts were useful to some folks. ;-)

------
fleitz
I wonder if it's just my naiveté, but it sounds like this is more likely to
produce an X.400 than an SMTP.

The vision seems pretty grand: an all-encompassing, wholesale replacement of
the entire networking stack, rather than a small, easy-to-implement, iterative
approach. It seems that the biggest thing the TCP/IP folks got 'wrong' was the
32-bit address space, and even that small change is taking forever to be
deployed.

Yes you could certainly improve TCP/IP but is it going to be 10X better?

~~~
signa11
> It seems that the biggest thing the TCP/IP folks got 'wrong' was the 32 bit
> address space, and even that small change is taking forever to be deployed.

i guess you are alluding to ipv6 here. and imho, ipv6 provides quite a large
number of changes from vanilla ipv4. it is not _just_ a much larger address
space...

~~~
jacquesm
That is absolutely true, but the main driver behind the replacement is the
increased address space. None of the other changes seems to have been a driver
at all.

So as far as consumers go, IPv4 is 'good enough', and if and when IPv6 finally
takes over, it will remain the de-facto worldwide networking protocol used to
power the internet for a very, very long time.

Cisco attempting to drive a wedge between IPv4 and IPv6 in the midst of this
(very, very slow) transition seems like a very strange move to me - almost
certainly bound to fail, or in the end not replacing IPv4/IPv6 but ending up
as a transport layer underneath them (killing most of the advantages it would
offer in the process).

And that's besides trying to replace TCP, which would require rewriting or
adapting virtually every computer program active on the net today.

~~~
tgflynn
I don't know that they're trying to drive a wedge between IPv4 and IPv6. I
would think that even NDN's supporters see it as a very long-term, post-IPv6
development.

I am surprised however to see Cisco supporting this. It's one thing to have
some academic networking specialists writing papers about NDN, but for a major
corporation to devote resources to a 10+ year development project with an
unproven architectural basis strikes me as odd.

~~~
takeda
Cisco has been involved with it since at least 2012. They actually wrote
software using the protocol as well - for video conferencing, if I recall
correctly.

------
jrapdx3
Maybe it's just my nature to be guarded about grand visions, but does this
idea really have a good chance of succeeding? Will it displace TCP/IP given
the extent of IP deployment around the world?

No doubt there are people here who are network experts who can give a more
learned review than I can after quickly reading the overview on the website.

I have a lot to learn about the subject...

~~~
takeda
NDN was designed around today's most common Internet use cases. If you think
about it, most of the time we are requesting content from a specific place,
but we don't really care where the server is located, what address it has,
etc.; all we care about is the content and whether it comes from the intended
(trusted) source.

Assuming that the same name always references the same data gives an edge,
because routers are now aware of the data, so they can cache content locally;
when someone else requests the same thing, they can just forward what they
have without asking upstream.
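A toy sketch of that in-network caching idea (class and method names are made
up; a real NDN forwarder also has a pending-interest table, FIB, signatures,
etc.):

```python
# Minimal name-based caching: a router satisfies a request ("Interest") from
# its local content store when possible; otherwise it forwards the request
# upstream and caches the returned data for future requesters.
class Producer:
    def __init__(self, contents):
        self.contents = contents          # name -> data
    def request(self, name):
        return self.contents[name]

class Router:
    def __init__(self, upstream):
        self.upstream = upstream
        self.content_store = {}           # the in-router cache
    def request(self, name):
        if name in self.content_store:    # cache hit: no upstream traffic
            return self.content_store[name]
        data = self.upstream.request(name)
        self.content_store[name] = data   # cache for the next consumer
        return data

origin = Producer({"/videos/cats/seg0": b"frame-data"})
edge = Router(upstream=Router(upstream=origin))
first = edge.request("/videos/cats/seg0")   # travels to the origin
second = edge.request("/videos/cats/seg0")  # served from the edge cache
```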

It gives an edge in certain use cases; probably the biggest would be YouTube,
Netflix, etc. On a TCP/IP network a lot of effort goes into providing a great
user experience, through CDNs, anycast routing, and other tricks. With NDN you
already have a network that is very friendly to this and makes CDNs
unnecessary, as long as you design your protocol to take advantage of the
network's properties. Another nice advantage is on lossy networks like
wireless ones. For example, when you request content that goes through many
hops and the response is dropped, caching lets it be resent from the point
where it was dropped, without going all the way back to the source. This might
also help on such networks when the consumer is on the move. NDN also has some
other nice properties: if, for example, a certain name is set up so that it
can be shared by multiple parties, it is possible to implement a chat without
the need for any server, which is quite cool.

For all these benefits, NDN is a double-edged sword: while it makes publishing
content to many people simple, it makes certain tasks harder. For example,
implementing something like SSH over it might be a bit difficult. In fact,
anything that benefits from pushing data/requests (a simple example from one
of the projects: controlling lighting infrastructure) will be complex. It is
still possible to implement, but harder than in TCP/IP.

As for adoption, it is hard to say. It definitely won't be easy. The protocol
is not a drop-in replacement for TCP/IP; everything needs to be reinvented.
You can possibly convert existing applications to work with it, and in fact it
should be possible to carry TCP/IP over NDN, but then you lose all of the
protocol's nice properties. Some things would work better - for example,
stripping out TCP/IP and implementing the HTTP protocol directly on top of
NDN. Some people have already created an NDN<->HTTP gateway.

On the other hand, it could be extremely beneficial for specific use cases we
are somewhat struggling with, like multicasting of video. One strong point is
that the protocol can be implemented on top of TCP/IP, and in fact that's how
the NDN testbed is (or at least was, when I was there) implemented. The
adoption goal is to have a network built on top of TCP/IP, and as it grows big
enough, the TCP/IP layer below will eventually collapse and NDN will take its
place. That's of course assuming NDN handles all of our needs and makes TCP/IP
unnecessary; otherwise it'll be just an overlay network. They are also trying
to avoid other mistakes of IPv6 by concentrating on making it attractive not
just technically but also from a business perspective. That's why they are
partnering with vendors.

Source: I was actually involved in NDN between 2010 and 2012, and I know the
people mentioned in the article in person. One of my projects was video
streaming over NDN.

~~~
SudoNick
Most of the time we do care about (not!) exposing information to third
parties. This even applies to generally lower-importance scenarios such as
watching YouTube and Netflix videos.

ISPs are of special importance, because the exposures can be concentrated. We
make use of end-to-end encryption with specific servers in order to reduce the
information that ISPs (and others) acquire. How will we hide interest data
from ISPs in an NDN world?

~~~
takeda
Oh, this is not a big problem.

You simply encrypt the data; the protocol even has support for marking
encrypted data [1].

That said, NDN does not impose how you do it; it is left to the application.

Now, if the data is only end-to-end, you would probably do something similar
to TLS. If the data is supposed to be accessible by multiple users, then you
encrypt it with a generated key, and then encrypt that key using the public
keys of the intended recipients.

[1] [http://www.ccnx.org/pubs/ccnx-mosko-
tlvmessages-01.html](http://www.ccnx.org/pubs/ccnx-mosko-tlvmessages-01.html)
(Section 3.5.2.2.1. PayloadType)
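The multi-recipient scheme in the last paragraph can be sketched with
stdlib-only code. The XOR "cipher" below is a stand-in so the example runs
anywhere; a real system would use AES for the content and RSA/ECIES per
recipient:

```python
import hashlib
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (SHA-256 in counter mode). NOT secure; it only
    # stands in for real ciphers so the key-wrapping structure is visible.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def publish(payload: bytes, recipient_keys: list) -> tuple:
    content_key = secrets.token_bytes(32)              # fresh per-object key
    ciphertext = xor_cipher(content_key, payload)
    # Wrap the content key once per intended recipient (stand-in for
    # encrypting it with each recipient's public key).
    wrapped_keys = [xor_cipher(rk, content_key) for rk in recipient_keys]
    return ciphertext, wrapped_keys

def consume(ciphertext: bytes, wrapped_key: bytes, recipient_key: bytes) -> bytes:
    content_key = xor_cipher(recipient_key, wrapped_key)
    return xor_cipher(content_key, ciphertext)

alice, bob = b"alice-secret", b"bob-secret"
ct, wraps = publish(b"named data payload", [alice, bob])
```

Each recipient unwraps the content key with their own secret and then decrypts
the shared ciphertext.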

------
kv85s
I believe what they're proposing is largely the same as, if not identical to,
Content-Centric Networking from Xerox PARC.

The central idea is:

    
    
      Instead of asking one particular server for some content, just ask for the content by name.
    

Since the content may come from any handy server, it is up to the receiver to
validate that it is really the content he requested. Nothing about this
implies the evil "centralized security model" people are going on about. Sure,
some bad actor could weasel that in later, but it's not there now.
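One simple flavor of that receiver-side validation is a self-certifying name
that embeds a hash of the content (CCN/NDN actually use per-packet signatures;
this hash-named variant is just the easiest to sketch):

```python
import hashlib

# If the name commits to the content's hash, any cache or server may supply
# the bytes, and the receiver verifies them locally without trusting the host.
def make_name(content: bytes) -> str:
    return "/data/sha256/" + hashlib.sha256(content).hexdigest()

def verify(name: str, content: bytes) -> bool:
    return name == make_name(content)

name = make_name(b"the movie bytes")
```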

~~~
takeda
Yes, CCNx was the first implementation of NDN, and the NDNx they are currently
using is a fork of it.

AFAIK PARC is still receiving part of the NSF funding to continue working on
CCNx.

~~~
j_burke
The ideas are related, but the codebase is now separate: [http://named-
data.net/2014/08/25/first-public-release-ndn-fo...](http://named-
data.net/2014/08/25/first-public-release-ndn-forwarding-daemon-nfd/)

------
Sami_Lehtinen
There's already IEEE 802.1aq, which optimizes routing and allows multiple
parallel routing paths.
[https://en.wikipedia.org/wiki/IEEE_802.1aq](https://en.wikipedia.org/wiki/IEEE_802.1aq)

A lecture well worth watching: Frank Fitzek, Aalborg University: Network
Coding for Future Communication and Storage Systems
[https://www.youtube.com/watch?v=qaJYWrYKVRo](https://www.youtube.com/watch?v=qaJYWrYKVRo)

~~~
pyvpx
This is a layer 2 protocol and has nothing to do with routing, which is layer
3 in the OSI model.

------
higherpurpose
Whatever Cisco plans to do, I won't trust it not to have a back door. After
all, Cisco is the author of the IETF protocol for "lawful intercept" in
routers, and if I'm not mistaken they also have a pretty highly placed
co-chair at the IETF.

[http://www.cisco.com/c/en/us/tech/security-vpn/lawful-
interc...](http://www.cisco.com/c/en/us/tech/security-vpn/lawful-
intercept/index.html)

[https://www.blackhat.com/presentations/bh-
dc-10/Cross_Tom/Bl...](https://www.blackhat.com/presentations/bh-
dc-10/Cross_Tom/BlackHat-DC-2010-Cross-Attacking-LawfulI-Intercept-wp.pdf)

------
radicalbyte
So if I've understood this correctly, NDN works by giving each piece of
content a unique address, instead of stopping at the host?

Basically baking a URI into the low-level protocols?

~~~
tgflynn
Yes, and the contents can be cached by the routers, so to get a piece of a
video you don't need a connection all the way to the source of that video but
only to the nearest router that caches it.

That may make sense for content that has few sources and many users, like
video (although I think CDNs mostly already solve this problem).

I don't think it makes much sense for interactive data and hence I don't think
it's a good basis for implementing all networking protocols.

------
Mawaai
People have been trying to replace IPv4 for almost 15 years. It's almost
impossible: 96% of worldwide traffic is still IPv4.

This project is dead on arrival.

~~~
wernercd
Read the other poster(s) above... IPv6 is on the uptick, especially since IPv4
is more or less out of space.

------
Luker88
This is the old Content Distribution Network idea. It does work -- provided
you can easily identify a resource in the network. URIs are hierarchical, but
do not follow the network-connection hierarchy. Also, now every router needs
to be able to track all the streams that go through it.

In short, everything explodes when you try to scale the thing.

------
allengeorge
Isn't this identical in spirit to Content-Centric Networking (CCN) as espoused
by Van Jacobson?

I distinctly remember watching his Google Tech Talk on the idea
([https://www.youtube.com/watch?v=oCZMoY3q2uM](https://www.youtube.com/watch?v=oCZMoY3q2uM))
when I first joined AeroFS.

~~~
j_burke
Yes - Van is involved in NDN: [http://named-
data.net/project/participants/](http://named-data.net/project/participants/)

------
shmerl
I hope keeping it all patent-free / patent-disarmament style is a requirement
for participation.

~~~
bediger4000
I note that patents/"intellectual property" weren't mentioned in the article
at all. I suspect, based on the participants mostly being corporations, that
the whole thing will be covered by patents.

I think TCP/IP, being non-patented, slipped by the major corporations. A
protocol anyone can implement, and where the "client" and "server" are pretty
hard to tell apart, is disadvantageous to market incumbents and to
surveillance agencies. For instance, nobody can charge fees for implementing
TCP/IP. Nobody can license content servers. Nobody can accurately attribute a
packet to a legally responsible entity ("one neck to wring").

The protocol that replaces TCP/IP will be patent-encumbered, it will make a
complete distinction between "client" and "server", it will be centrally
routed, it will be subject to surveillance, and servers will be licensed and
costly. If NDN doesn't do some or all of these things, it's already dead.

~~~
j_burke
The participants are actually mostly universities. IP is discussed on the
project site:

[http://named-data.net/project/ndn-copyright-and-patents/](http://named-
data.net/project/ndn-copyright-and-patents/) [http://named-
data.net/consortium/](http://named-data.net/consortium/)

------
colanderman
Someone who knows more than me: does this intend to _complement_ TCP, or
_replace_ it? If the latter, how would one use NDN to implement a system that
naturally fits the "conversation" model of TCP, e.g. an MMORPG?

~~~
legomylibrum
I don't think you would use TCP for an MMORPG; UDP is more common in games
because a dropped frame here and there doesn't matter to most games, and it's
worth the lower overhead.
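The difference is visible in miniature with a loopback datagram exchange: no
handshake, no acknowledgement, and a lost packet is simply gone (addresses and
payload below are illustrative):

```python
import socket

# One UDP datagram from "game client" to "game server" on loopback.
# There is no connection setup and no retransmission machinery.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
server.settimeout(2)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"pos:10,20", addr)      # fire-and-forget position update

data, _ = server.recvfrom(1024)
client.close()
server.close()
```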

------
islon
What could possibly go wrong? It's not like the whole internet as we know it
depends at some level on TCP/IP, and there are (probably) billions of lines of
code depending on it.

------
scrame
Yeah, good luck with that. I'm more surprised they didn't say they would fix
it with a MongoDB backed Facebook app written in node.js.

------
jdimov
Umm.. how about we NOT replace TCP/IP with anything, because it may be the
only well-designed thing on the Internet that actually works? If you want an
impossible superhero project to work on, try replacing HTTP instead - at least
you'd actually be solving a problem.

~~~
antocv
What's wrong with HTTP? How would you improve it?

It's a Text Transfer Protocol; you can even build applications with text-only
clients and servers. One only needs echo, bash and netcat to make a server and
client.
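The same point, without netcat: a minimal sketch in Python that speaks
HTTP/1.1 as plain text over a raw socket (a one-shot toy server, not a real
httpd):

```python
import socket
import threading

# Serve exactly one request: read the text of the request, answer with text.
def serve_once(listener):
    conn, _ = listener.accept()
    conn.recv(4096)                    # the request is just readable text
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=serve_once, args=(listener,))
t.start()

client = socket.create_connection(listener.getsockname(), timeout=2)
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = b""
while chunk := client.recv(4096):      # read until the server closes
    response += chunk
t.join()
response = response.decode()
```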

~~~
steveklabnik

> _One only needs echo, bash and netcat to make a server and client._

Not as of HTTP/2.0. :(

------
elmindreda
xkcd 927.

------
SixSigma
> more secure

If something is already secure, how can you make it more so?

What they mean is "less insecure".

~~~
deciplex
Something can be secure enough that the computational power required to break
it is probably not available to various actors, up to and perhaps including
nation-states (e.g. RSA), yet still fall well short of the "requires more
energy to compute than is available in the visible universe" benchmark (e.g.
AES, probably). Yet both could still be regarded as "secure".
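Some back-of-the-envelope arithmetic for the first kind of "secure" (every
figure here is an assumption, chosen only to show the orders of magnitude):

```python
# Brute-forcing a 128-bit keyspace on a hypothetical machine that tries
# 10^18 keys per second (roughly an exascale computer doing nothing else).
ops_per_sec = 1e18
seconds_per_year = 3.15e7
keyspace = 2.0 ** 128
years = keyspace / (ops_per_sec * seconds_per_year)
# years comes out around 10^13 - about a thousand times the age of the
# universe, yet still nowhere near any "energy of the visible universe" bound.
```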

~~~
dchest
Also, protection against different kinds of attacks. For example, we can
consider SHA-3 more secure than SHA-2 because it's not vulnerable to length
extension attacks. Likewise, a system that protects against passive attacks is
_secure_ against passive attacks, but a system that protects against both
passive and active attacks is _more secure_ than it.
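Concretely, length extension is why the naive `sha256(key + msg)` MAC is
dangerous with SHA-2 and why HMAC exists. A sketch of the two constructions
(the attack itself requires reconstructing the hash's internal state and is
omitted):

```python
import hashlib
import hmac

def naive_mac(key: bytes, msg: bytes) -> str:
    # With a Merkle-Damgard hash like SHA-256, an attacker who sees this tag
    # can extend it into a valid tag for msg + padding + suffix, keylessly.
    return hashlib.sha256(key + msg).hexdigest()

def safe_mac(key: bytes, msg: bytes) -> str:
    # HMAC's nested construction blocks length extension. SHA-3 would not
    # need the workaround - that is the "more secure" in question.
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

tag_naive = naive_mac(b"secret", b"amount=10")
tag_safe = safe_mac(b"secret", b"amount=10")
```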

------
ivoras
Nothing except efficiency is preventing us from using names as parts of the
network/subnet hierarchy instead of numbers, e.g. steve.home.town.country
instead of 192.168.5.6 (or the same thing on IPv6), and even the efficiency
could be improved by smart use of hashing... BUT! The major problem I see here
is that there simply _are more numbers than words_.

In practice, especially at large companies, it will _certainly, without a
doubt_, degrade into workstation001, workstation002... workstation999, and
then we're in effect back where we started: using numbers.

This looks like a solution in search of a problem.

~~~
tgflynn
That's not what NDN is about.

NDN assigns names (or addresses) to data contents, not physical
machines/interfaces like IP does. So it's conceptually quite different from
the way IP routing works.

The issue you mention is already solved by DNS.

 _This looks like a solution in search of a problem._

NDN attempts to make content distribution more efficient through caching.
Whether solving that problem justifies rewriting the entire network stack is
highly questionable.

~~~
j_burke
While content distribution is a significant motivation, a fair amount of the
current research is looking at benefits beyond caching: i.e., what do you get
with web-style semantics at the lower layers, per-packet crypto, name- rather
than host-based addressing, etc.

