

Research Scientists to Use Network Much Faster Than Internet - craigjb
http://www.nytimes.com/2015/08/01/science/research-scientists-to-use-network-much-faster-than-internet.html?_r=0

======
sandworm101
> $5 million dollar grant

For fiber? So I assume that they aren't going to be doing much digging; rather,
they are going to string a few more lines along already-existing paths.

>will make it possible to move data at speeds of 10 gigabits to 100 gigabits.

Wow.

In all seriousness: Kudos to whoever wrote the grant application. 5mil will
keep people in work. But this project won't come anywhere near what is
necessary to stream the data from 1% of the LHC's detectors. A private high-
speed network is all well and good, but this isn't anything remarkable.

~~~
pyvpx
perhaps for optical equipment to connect and switch existing dark and lit
fiber. that makes much more sense.

------
fnordfnordfnord
This article is so bad you can't even tell which grant, out of many similar
ones, it probably is. About fifteen years ago, researchers already faced the
problem of moving large data stores: whether it's faster to do it with a
network or with suitcases full of tapes, who pays for the storage, where it's
kept, and who is allowed to access it. These grants are usually denied, so
each group of researchers scrapes up its own little partial datastore and
ships graduate students and postdocs back and forth from the uni to
CERN/DESY/Fermilab/SLAC/wherever, sometimes with suitcases full of tapes on
the return trips. I'm a little surprised that it's still a problem for them.

------
KaiserPro
So are they talking about software or hardware?

They talk about the LHC, but the innovation there was using a different file
system, GPFS
([http://iopscience.iop.org/1742-6596/219/7/072030](http://iopscience.iop.org/1742-6596/219/7/072030)),
which meant that data was sharded, managed by age, and transparently and
intelligently cached.

Are they instead talking about replacing TCP with something more
designed for bulk data transfer?

Or are they talking about lighting up fibre with different transmitter pairs?
(think dwdm x 10
[http://www.webopedia.com/TERM/D/DWDM.html](http://www.webopedia.com/TERM/D/DWDM.html))

for 5 million, I'd assume it's software. if that's the case, it's pretty much
just copy and paste what everyone else has been doing:

[http://filecatalyst.com/](http://filecatalyst.com/)
[http://asperasoft.com/](http://asperasoft.com/)

for opensource there is:
[http://uftp-multicast.sourceforge.net/](http://uftp-multicast.sourceforge.net/)
[http://tsunami-udp.sourceforge.net/](http://tsunami-udp.sourceforge.net/)
[https://github.com/facebook/wdt](https://github.com/facebook/wdt)

And a myriad of others. Multi-stream TCP is fairly simple, as the application
doesn't have to deal with rate limiting or error correction.
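A quick sketch of why those tools open parallel streams: a single loss-limited TCP flow has a throughput ceiling given by the Mathis et al. steady-state model, and N flows raise that ceiling roughly N-fold until the pipe fills. The path numbers below are illustrative, not from the article:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. ceiling for one Reno-style TCP flow:
    rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# Made-up long-haul path: 1460-byte MSS, 60 ms RTT, 1-in-100,000 packet loss.
one_flow = tcp_throughput_bps(1460, 0.060, 1e-5)
print(f"single flow: {one_flow / 1e6:.0f} Mbit/s")        # ~75 Mbit/s
print(f"ten flows  : {10 * one_flow / 1e6:.0f} Mbit/s")
```

At these (hypothetical) numbers, even ten flows don't come close to 100 Gbit/s, which is why the bulk-transfer tools above either run many streams or abandon TCP congestion control entirely.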

~~~
M2Ys4U
> So are they talking about software or hardware?

Well, if you read the very first sentence of the article:

A series of ultra-high-speed fiber-optic cables will weave a cluster of West
Coast university laboratories and supercomputer centers into a network called
the Pacific Research Platform as part of a five-year $5 million dollar grant
from the National Science Foundation.

~~~
jblow
That sentence is not informative. The backbone of the regular internet is
fiber optics as well (usually).

------
stephengillie
Is this related to Internet2?

[http://www.internet2.edu/](http://www.internet2.edu/)

[https://en.wikipedia.org/wiki/Internet2](https://en.wikipedia.org/wiki/Internet2)

Edit: Maybe they are parallel projects? TFA says they have invested in about
100 campuses, but about 250 were already part of Internet2 in 2013.

~~~
meragrin
It doesn't sound that way to me. This network seems to be specifically for
moving research data around among the institutions. Internet2 is essentially a
playground for researching and demonstrating ways to improve the internet.

------
martinald
This really isn't that impressive in my eyes. LINX in the UK offers a 100gigE
peering port for £6,000/month. What am I missing here?

~~~
r1ch
TCP being awful on high bandwidth-delay-product paths, and scientists not
knowing how to use anything more efficient.
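The bandwidth-delay product shows the problem: to keep a pipe full, a sender must have BDP bytes in flight. With illustrative numbers (none are from the article):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

# Hypothetical path: 100 Gbit/s at 60 ms round-trip time.
window = bdp_bytes(100e9, 0.060)
print(f"required window: {window / 2**20:.0f} MiB")   # ~715 MiB
# Classic TCP's 16-bit window field tops out at 64 KiB; even with window
# scaling (RFC 7323) the cap is 1 GiB, and default kernel socket buffers
# are far smaller -- hence the tuning pain at these speeds.
```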

------
jauer
I don't really see why this is NYT-worthy. State paper, sure, but NYT? This
isn't particularly groundbreaking.

There are existing regional sci/edu networks doing 100G as well as field-
specific networks (e.g. ESnet).

~~~
vezzy-fnord
As well as GEANT.

In fact, the list of such NRENs (national research and education networks) is
quite extensive:
[https://en.wikipedia.org/wiki/National_research_and_educatio...](https://en.wikipedia.org/wiki/National_research_and_education_network)

Earlier examples include MIT's Chaosnet.

------
amitparikh
"FedEx is still faster than the Internet" [https://what-
if.xkcd.com/31/](https://what-if.xkcd.com/31/)

Of course, the applications here are probably quite different -- this research
grant may be more geared toward building a faster network to handle large
amounts of streaming input.
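The sneakernet comparison is easy to quantify. A rough sketch with made-up courier numbers (not from the article or the xkcd piece):

```python
# Hypothetical shipment: a box of ten 8 TB drives delivered overnight.
payload_bytes = 10 * 8e12          # ten 8 TB drives
transit_s = 24 * 3600              # 24-hour delivery
gbit_per_s = payload_bytes * 8 / transit_s / 1e9
print(f"effective bandwidth: {gbit_per_s:.1f} Gbit/s")
```

Under those assumptions a single box of drives is competitive with the network in this article, but latency is a day and the data can't be streamed as it's produced.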

------
grondilu
> The challenge in moving large amounts of scientific data is that the open
> Internet is designed for transferring small amounts of data, like web pages

Isn't there FTP for large data transfers?

~~~
stephengillie
I'm sure the File Transfer Protocol can handle 20 petabyte file transfers.
They're probably in 200GB tarballs or something most file systems can work
with.

But even with 1 gigabyte per second of upload and download, you might not want
to wait the better part of a year for the transfer.
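Back-of-the-envelope for the 20-petabyte figure above, assuming the link can be kept saturated:

```python
def transfer_days(size_bytes, rate_bytes_per_s):
    """Days to move size_bytes at a sustained byte rate."""
    return size_bytes / rate_bytes_per_s / 86400

twenty_pb = 20e15
print(f"at 1 GB/s            : {transfer_days(twenty_pb, 1e9):.0f} days")     # ~231
print(f"at 100 Gbit/s (12.5 GB/s): {transfer_days(twenty_pb, 12.5e9):.0f} days")
```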

------
zengargoyle
This is probably what this article is referring to.

NSF Gives Green Light to Pacific Research Platform- UC San Diego, UC Berkeley
lead creation of West Coast big data freeway system.

[http://cenic.org/news/item/nsf-gives-green-light-to-pacific-...](http://cenic.org/news/item/nsf-gives-green-light-to-pacific-research-platform-uc-san-diego-uc-berkeley)

------
mikecmpbll
don't see how this is any different from the educational backbone
infrastructure in the UK, like JANET.

JANET has been around since (and before) I was at school 15 years ago, and has
kept pace; more info at
[https://www.jisc.ac.uk/janet](https://www.jisc.ac.uk/janet)

------
rasz_pl
>100 gigabits

that's almost as fast as some parts of the _ordinary internet backbone_ in
Europe .... so much progress, truly world leader!

~~~
ori_b
It's also shared between a small population of researchers, leading to far
more bandwidth per capita.

------
some301user
Looks like two tier internet doesn't matter any more. All hail the PR gods.

------
hippo8
Won't this make research data from universities even harder to access? Hope
this network becomes more accessible to the public.

~~~
stephengillie
I think this network might be to shuttle data between remote databases and
computing farms. For instance, if you have 20 petabytes of data in Tulsa and
an agreement with CalPoly to crunch the data for you.

------
woah
Unfortunately, due to net neutrality laws, this cannot be connected to the
Internet. If these laws did not exist, "super networks" such as this could be
defined in software and spun up at a moment's notice like some AWS boxes. I'm
guessing that's why the price tag here seems so low: all the hardware is
already installed and ready to go, and this project is just to set up a
network that uses it to its full capacity.

