Going With The Flow: Google’s Secret Switch To The Next Wave Of Networking (wired.com)
262 points by nsns 1046 days ago | comments



There is literally no equipment available on the market that does what these Google switches do. Cisco, Juniper, et al., protecting their technology and their investments in switching over the last 30 years, just didn't have the balls to kill their old lines by doing this wholeheartedly.

Essentially, they have spent the last 20 years building their software, which runs on Motorola, MIPS, PowerPC, etc. and implements arcane switching protocols - not always interoperably, even. And these 'software-less' switches can be made by almost anyone, since the software is the incumbents' secret sauce.

Think of it as going from minicomputers - custom boxes with custom hardware and software from a few vendors - to PCs, which have an 'open' design and are built for interoperability.

That's what OpenFlow does to the switching/networking ecosystem.

And since none of the incumbents want to commit hara-kiri, a few startups are trying to do this: Nicira, BigSwitch, etc. Many others have OpenFlow-compatible switches, but nowhere near the scale that Google would need in its datacenters.

Brilliant stuff. And I'd love to see Cisco die because of this - they've kept the industry back for long enough.

-----


I'm nothing like a network engineer. But is the centralization described in this article a good thing for the Internet as a whole?

I can see that within an organization's internal network, they can assess the importance of different communications, and route accordingly. So on the internal side, it's potentially a big win.

But across the globe, who can assign the priority of traffic accurately and impartially? And isn't the decentralized nature of the current architecture an important feature, because of the way it can route around problems (be they technical or regulatory) of its own accord, without requiring a higher authority to tell it how (and thus without being susceptible to the agenda of that authority)?

-----


Yes, it's great. Not because of anything specific to what they're doing with it, but because it fundamentally changes the game: instead of jumping through whatever flaming hoops the network vendor chooses to provide, you can implement what makes sense for your business.

If you haven't dealt with network gear before, it's like going back 3-4 decades in general computing: bizarre, obscure UIs; features hemmed in by very strict limits; management as a bolt-on afterthought, generally treated as a profit center ("We'll sell you tools to deal with our arcane UI!"); paid bug fixes that have to be installed by hand; very limited interoperability across vendors; etc.

-----


The point is that the switch is programmable. You can implement the centralised behaviour. You can implement a decentralised one. You can implement anything you want.

-----


Are there heuristics for decentralized traffic handling that this can support but that can't be implemented through traditional approaches?

-----


The idea is to build a globally consistent view of the network. Currently, each node builds up its own view of the global state and routes based on it. Sure, distributed protocols allow us to share this information, but it's not guaranteed that the state will be accurate a few hops away.

OpenFlow allows an entity to keep a globally consistent state and calculate the rules by which each of the nodes should forward. This logically centralised control can then enable higher utilisation of the network. Think about traffic reports on the radio: if you are driving and know that there is a bottleneck on one highway, you can take an alternative route.

EDIT: I use "global" in this context to mean within an AS, not necessarily internet-wide.
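To make "logically centralised" concrete, here is a minimal sketch (toy topology and an invented rule format, not any particular controller's API) of a controller that holds the whole graph, computes a path once, and emits per-switch forwarding rules:

    from collections import deque

    # Hypothetical global view held by the controller: switch -> neighbours.
    topology = {
        "s1": ["s2", "s3"],
        "s2": ["s1", "s4"],
        "s3": ["s1", "s4"],
        "s4": ["s2", "s3"],
    }

    def shortest_path(src, dst):
        """BFS over the controller's global view -- no per-hop guessing."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in topology[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    def forwarding_rules(src, dst):
        """Turn one globally computed path into per-switch forwarding rules."""
        path = shortest_path(src, dst)
        return [(hop, {"match_dst": dst, "forward_to": nxt})
                for hop, nxt in zip(path, path[1:])]

    print(forwarding_rules("s1", "s4"))
    # [('s1', {'match_dst': 's4', 'forward_to': 's2'}),
    #  ('s2', {'match_dst': 's4', 'forward_to': 's4'})]

With a distributed protocol, each switch would have to infer this from whatever state happened to reach it; here the path and every rule derived from it come from the same consistent view.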

-----


Think about traffic reports on the radio

Thank you for that analogy, because it actually serves to illustrate my concern.

Here in NJ there's a station that reaches most of the state, and makes a big deal of its every-15-minute traffic reports. I used to listen to these while commuting, until I found from experience that their reports, at least for the roads I deal with, carried data that was either so stale as to be useless, or was just plain wrong. So now I don't listen to that station anymore. Instead, I use an app called Waze for my phone. This uses crowd-sourced data (i.e., decentralized), which also isn't wholly dependable (there's not always another user there ahead of me to make a report, and it's still susceptible to gaming), but on the whole it gives me a better picture of the traffic situation.

-----


Is that analogy necessarily parallel to networking? The radio station communicates with an entire city. The traffic jam only affects traffic within X miles of the bottleneck. I can imagine a car radio that automatically switches to a local radio station that only broadcasts traffic jams relevant to cars in that area, eliminating the need for a larger centralized station.

-----


Well, I think any analogy starts to fall apart when you look too closely. But you've got the right idea. Sure, there's no reason you couldn't distribute it out further. I used the word 'entity' above in an attempt to imply that it could be "one large radio station" or "a group of smaller radio stations"--the point is that the decision-making is abstracted out to somewhere else.

Note also the use of "logically centralized", not "physically centralized".

-----


The point is that this centralization is within an org boundary, not across networks.

Today, to do this, you may need to configure several switches and routers between the server and the source and destination of its traffic, while still not being able to globally optimize.

Depending on security considerations, it may even preclude certain servers from being in certain racks, based on the switch they are connected through.

-----


On the OpenFlow.org site I see that HP has a firmware upgrade for their switches that supports OpenFlow and IBM has an OpenFlow switch.

-----


It seems as though ASICs are the method of choice for every high-performance system. The examples that come to mind are supercomputer interconnects and the engineered-from-scratch Anton[1] machines that DE Shaw Research uses for very specific computational chemistry applications.

[1]: http://en.wikipedia.org/wiki/Anton_%28computer%29

-----


Yes, going from a general purpose CPU to an ASIC will generally improve performance per watt by two or three orders of magnitude.

-----


The packet switching still occurs in the ASIC; it's the population of the routing/forwarding tables which can be done in software.

-----


Sorry, "literally no equipment," except the openflow stuff? Which shipped some time back.

-----


If the customers and the way of buying is the same, Cisco will acquire and proceed. Dramatic technology changes don't kill incumbents if they are "sustaining" to their customers and way of doing business.

-----


Fuck Spanning Tree

edit: downvoters, have you configured STP in a data center? Have you had a single VMware ESX instance shut down the root VLAN in a DC? Spanning tree is being addressed by solutions like this.

-----


There's a big difference between saying "This is good / bad; I've configured X and you have to put up with Y." and an empty "Fuck spanning tree".

You have knowledge. I could learn stuff from you. I learn almost nothing from your comment "fuck spanning tree".

This kind of behind the curtain stuff is mysterious to many people. I would welcome something that taught me more about it. I'd especially welcome informed insights from someone who works with the technology.

-----


Sorry, I forget my audience when I'm on HN. I professionally teach classes regarding L2 networking and servers, so I do hope I have some insight to share. I was a Cisco systems engineer at one point in my life.

Spanning Tree Protocol: http://en.wikipedia.org/wiki/Spanning_Tree_Protocol

STP, as it's called, builds linear networks: simple, single paths through layer 2 (see: Ethernet) networks. Think of spanning tree as a large state table tracking all MAC addresses on a network. If the state table realizes there are duplicate entries (i.e., duplicate paths) for a single MAC address, it literally brings down the entire network to recreate paths without the duplicates.

Most managed (commercial) Ethernet switches speak the Spanning Tree Protocol, which allows synchronization of MAC tables between switches. However, by default, the VMware vSwitch does not speak this protocol. This creates problems when you multi-home servers (connect a single server to multiple switches). The vSwitch does not participate in spanning tree, and the default vSwitch "load-balances" by transmitting frames from the various ports it has accessible. This, in the traditional switches' eyes, constitutes a loop in the network and can bring an entire Ethernet domain down. This is a horrific scenario during which all participating hosts lose network access for 15 seconds or more, depending on the configuration (STP vs. RSTP). If the vSwitch remains active with its default settings, the network may be down until a network engineer realizes the problem or the server is taken offline.

The reason I say "fuck spanning tree" is that, as a network engineer, I've taken entire data centers offline due to a mistaken configuration on an ESX host (which I did not have visibility into at the time). This is obviously not a good way to run production.

Network coordination services, like the one developed by Google, stand a good chance of replacing this antiquated protocol. Everyone in the Ethernet networking world has been plagued by STP and its related quirks. I, for one, am very happy to see its demise and hope for a future clear of such potentially disruptive technology in data centers.
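For readers who haven't run into STP: stripped of the BPDU mechanics, the core idea is just "given redundant layer-2 links, keep a loop-free subset and block the rest." A toy sketch of that idea over a made-up three-switch triangle (this is the concept only; real STP elects a root bridge and converges via BPDU exchange):

    # Made-up triangle of switches: one redundant link, hence one potential loop.
    links = [("sw1", "sw2"), ("sw2", "sw3"), ("sw1", "sw3")]

    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x

    forwarding, blocked = [], []
    for a, b in links:
        ra, rb = find(a), find(b)
        if ra == rb:
            blocked.append((a, b))    # this link would close a loop -> block it
        else:
            parent[ra] = rb           # union the two segments: link stays active
            forwarding.append((a, b))

    print("forwarding:", forwarding)  # [('sw1', 'sw2'), ('sw2', 'sw3')]
    print("blocked:", blocked)        # [('sw1', 'sw3')]

A vSwitch that forwards the same frames out of multiple uplinks without participating in this process looks, to the physical switches, exactly like one of those loop-closing links.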

-----


More info about switching loops in case anyone wonders, "why not turn off STP?"

http://en.wikipedia.org/wiki/Switching_loop

-----


As much as I agree with you, WAN backbones weren't running spanning tree to begin with.

-----


Nope, they still link the instances. L2 domains traverse WANs between DCs nowadays.

-----


Only in some (read: insane) datacenter designs. You don't need spanned layer 2 domains unless you're doing crazy things like long-distance vMotion. There's almost always a better way that invites less pain.

-----


Totally agree, but there are protocols in place to tunnel layer 2 domains so they encompass multiple data centers. vMotion between DCs is not as crazy as it sounds; it's just prohibitively expensive.

-----


Note that the OpenFlow protocol has been implemented in Haskell by Galois and Yale - http://hackage.haskell.org/package/nettle-openflow - and you can also configure OpenFlow networks - http://hackage.haskell.org/package/nettle-netkit / http://hackage.haskell.org/package/nettle-frp

-----


It's interesting that this is now the insurgent, because in some sense this looks like networking done the old way, the telco way: centralised provisioning, aiming for 100% utilisation, etc. The kind of thing that Cisco et al. were originally the small rebels against.

-----


I was thinking the exact same thing. Google controls their own network, so they can implement a centrally-managed, circuit-based networking scheme.

Telephone networks tend to use these (via a signalling scheme called SS7[1]) because in most countries the telephone networks were built by monopolies. It was possible to develop the entire network as a single system and thus to obtain very high efficiencies for certain use cases.

Google goes a step further. What they seem to have done is married circuit-based networking with batch planning. The network itself is circuit based -- rather than each packet "finding" its own way, it can be routed end-to-end by a central plan. But the decision of what to move when can also be planned. Note the reference to "simulating a load". That's similar to what mainframe batch planning achieves.

As usual, everything old is new again.

[1] http://en.wikipedia.org/wiki/Signalling_System_No._7

-----


Lol, interesting. I will have to share this with some of my old-school telco mates :-)

So what's next, Google reinvents X.400 and X.500 (not the special needs version, LDAP)?

-----


The basic idea of OpenFlow is getting the manufacturers to split between making good software and good hardware. If you make great hardware you can continue to operate; someone else will control the forwarding decisions. If you want to make good software, you can use someone else's hardware.

There will be a shakeup, because telling the hardware manufacturers their software isn't good enough isn't a great way to start that conversation, but it's inevitable. They will insist that the software is like that for a reason; telling them otherwise is saying the last twenty years of development have been done the wrong way.

I think it's highly interesting. I went to school for computer science but found computer networking very interesting. There seems to be a certain level of dismissal of the complexity of networking by people who write applications. Writing a one-line Java socket that connects to a TCP port is trivial, but the underlying details are tedious. In the same way, we forget how difficult it is to get phone calls to work because the end result is simple - phones ring.

OpenFlow will need to reinvent the wheel unless the existing hardware manufacturers decide to give it a head start, which is unlikely. If it's open source it will evolve quickly, however. There are many difficult decisions and engineering problems to solve, which I suppose is a good sign.

-----


Google wants to make switching hardware a commodity product. Right now most switching software does not interoperate, at least not on a level Google wants.

Because of this, if you want the latest and greatest features from Cisco you have to run all Cisco. Or Juniper. You can't just buy 10 Cisco switches and 10 Juniper switches and run the same operating system on all of them. Compare this to PC hardware, where I could buy any combination and install any OS I want.

-----


The hardware is commodity, but you've centralized the software/logic. Is this really a net win on a global scale? (It's obviously a net win at the micro level, or else Google wouldn't use it)

-----


The protocol doesn't force this centralization to be physical, just logical -- Hyperflow[1] is one example of how a physically distributed control plane can be created.

[1] http://static.usenix.org/event/inm10/tech/full_papers/Tootoo...

-----


It is in many enterprise environments. I dealt with one place where the security regime had their way with a large network.

End result is that nearly a decade later, physically installing more than 3 servers at any given time is a project that takes many weeks. In one instance, installation of a small cluster took over 3 years.

-----


OpenFlow is really the coolest thing to happen to internet infrastructure in a long time. Just look at this video from Stanford where they route a wireless video stream simultaneously over WiFi and WiMAX: http://www.youtube.com/watch?v=ov1DZYINg3Y This technology is really imperative if wireless networks want to cope with the increased traffic from all mobile devices. How cool will it be when, having a video call, your iPhone naturally switches from the cellular network to your home WiFi when you get home? Or, even better, splits the traffic over both networks.

-----


How cool will it be when, having a video call, your iPhone naturally switches from the cellular network to your home WiFi when you get home.

It might be technically cool in the video, but my cellphone contract costs more than my broadband one and the connection is more stable too. I don't want to pay a phone company while they get to freeload on my ISP contract, and I don't want my phone calls dropping out when my broadband does.

-----


If you're paying AT&T, Verizon, or Sprint per GB, and you're getting 250GB/month from Comcast on your home connection, yeah, you're going to want your iPhone to intelligently switch to the "cheapest" connection available at the time of data usage (if roaming between connections is possible).

-----


I found the licensing interesting:

License

Copyright (c) 2008 The Board of Trustees of The Leland Stanford Junior University

We are making the OpenFlow specification and associated documentation (Software) available for public use and benefit with the expectation that others will use, modify and enhance the Software and contribute those enhancements back to the community. However, since we would like to make the Software available for broadest use, with as few restrictions as possible permission is hereby granted, free of charge, to any person obtaining a copy of this Software to deal in the Software under the copyrights without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The name and trademarks of copyright holder(s) may NOT be used in advertising or publicity pertaining to the Software or any derivatives without specific, written prior permission.

It seems they're rolling their own license, as opposed to adopting any of the other open licensing schemes out there. Any thoughts on why they might do this?

-----


It is legally identical to the MIT license. The prefatory clause is purely hortatory.

-----


It does appear to be largely identical. Is it not common practice to include the name of the MIT license with the distribution, as is the case with the GPL and Apache licenses?

-----


No, not that I have seen. It's only called the MIT license because MIT used it for their software (X, Athena, etc). The BSD license is a similar situation; it doesn't say BSD anywhere, it's only called that because it was used by BSD. Plus, these days, neither of those licenses is overseen by whoever named them - unlike the Apache license and the GNU GPL, which have version numbers and caretakers, if you will.

-----


"In the user-facing network you also need every packet to arrive intact — customers would be pretty unhappy if a key sentence in a document or e-mail was dropped."

Why bother trying to illustrate with examples, if you're gonna write things like that?

-----


It's arguably accurate; a UDP text-based protocol could drop that key sentence. Nonetheless, it's Wired. What did you expect?

-----


Accurate for maybe a text-based talk client, but you wouldn't use UDP for docs or email. I expected more from Steven Levy.

-----


What makes this sillier is that in a decentralised internet, TCP needs dropped packets to detect congestion and keep latency down.

-----


We used to run routing on commodity computers - then came hardware routers because you could do things faster when you specialized. Now we are back to writing new software routers to run on commodity hardware because hardware is now fast enough. Is that right? Looks like http://blog.gardeviance.org/2012/03/tens-graphs-on-organisat...

-----


Sort of, but not really... OpenFlow keeps the specialized network hardware, in the form of switching fabrics etc., but allows a common set of programming instructions to be applied, so routing decisions can be made from simple rules in the hardware, or exported to a computer that decides the routing (for the packet or flow) without the computer itself having to process all the traffic. This is different from using a computer with N Ethernet cards, where all the data has to be marshalled around.

-----


Except Google isn't using software routing. The data plane is still in hardware; they've just replaced crufty distributed control plane software (IOS/JunOS) running on a slow embedded processor with Googly centralized control plane software running on x86 servers.

-----


Nick McKeown's OpenNetworking Summit 2011 presentation entitled "How SDN will Shape Networking"[1] explains very well the abstraction of ideas that Software-Defined Networking provides. OpenFlow is a protocol which implements the idea of SDN.

[1] http://www.youtube.com/watch?v=c9-K5O_qYgA

-----


The article uses the analogy of road traffic congestion being defeated by autonomous vehicles with swarm-like intelligence enabled by centralised computing. Which is interesting.

-----


This also implies that Google has private fiber between their data centers, right?

-----


In a brilliant masterstroke, between 2001 and 2005, Google bought up dark fiber, dirt cheap, from many failed telcos: http://www.eweek.com/c/a/IT-Infrastructure/Google-and-Its-Co...

Now they reap the benefits.

-----


I think this is fairly common. We had this at Bank of America too; our own non-Internet IP-based network. It even did multicast, almost.

-----


Yeah, but did BoA buy actual fiber in the ground? You can have your own non-Internet IP network easily if you lease dedicated physical circuits from telcos. (This is how the military does it)

-----


It is very common for companies to have dark fiber built between data centers. Once you hit a certain threshold, it is more economical than lit services for a given path.

-----


Yep, the cost of installing the cable (digging trenches, pulling through conduit, and whatnot) usually dwarfs the cost of the cable itself. If you're taking the hit anyway, running multiple fibers/conductors is a wise investment.

This is true even on a small scale. When I was upgrading the phone wiring in my house a number of years ago I went ahead and ran ethernet and coax all over the place while I was at it. If you're already on your back in a crawl space, it isn't that much more hassle to drill a few extra holes. :-)

-----


And also there's a glut of dark fibre from the telco bubble a few years ago.

See for example this from a decade ago: (http://www.businessweek.com/bwdaily/dnflash/aug2001/nf200108...)

And this, from about ten years ago: (http://www.usatoday.com/tech/news/2002/04/11/global-crossing...)

> This is true even on a small scale. When I was upgrading the phone wiring in my house a number of years ago I went ahead and ran ethernet and coax all over the place while I was at it. If you're already on your back in a crawl space, it isn't that much more hassle to drill a few extra holes.

I agree. Wired is always handy. Other people - people buying your house - may disagree. Those people might prefer wifi, even if wifi is unsuitable for the building, even if there are very many neighbours with over-powered wifi on nearby channels.

-----


Here's an article from 2007 that says they've been buying fiber for years:

http://www.voip-news.com/feature/google-dark-fiber-050707/

-----


Presumably you could use lambdas or SONET or MPLS circuits or something (at Google-scale that would cost more, but it may be cheaper for smaller networks); it doesn't have to be private fiber per se.

-----


Not only have they been buying fiber - they have been actively installing it in various cities and residential areas....

Remember that AT&T commercial from 1997 where they said "and imagine all these services coming to you from one company" and it showed a data jack - implying that you would get everything from that AT&T cable... well, Google wants to be that port.

-----


I like that Google is working on both sides of the car traffic analogy.

-----


They can probably extend this system (judging purely by the analogy, since I am not a network guy) to their self-driving cars one day. One would think that a similar mechanism would be needed to manage traffic.

-----


Perhaps. I would propose that the application of this paradigm to traffic flows would have each intersection making the decisions for the drivers. A car says "I'm heading to east 42nd", and the intersection says "Sure, go left."

Furthermore, traffic controller(s) would constantly gather information about traffic conditions from each intersection, and tell them how to direct the traffic to optimise road usage.

This way, the traffic controller knows everything about the optimal way traffic should flow, and it only has to make decisions based on the number of intersections rather than the number of cars. If you consider that one intersection could have tens of thousands of cars per day, that provides a huge saving in computation.

This differs from the analogy in the article; Intersection = Router, Car = Packet.

-----


Probably a stupid question: does OpenFlow possibly change my home network setup in the future too? Or for a small business? Coffee shops, etc.?

-----


There hasn't been much research in this direction, but there are probably potential uses for OpenFlow in home networks. For example, apps or devices could use OpenFlow to give your router advice about how to handle their traffic, e.g. firewall or QoS rules. There's already an OpenFlow implementation for WRT54G routers, so it wouldn't be too hard to experiment with it.

-----


So OpenFlow is an alternative to IPv4/v6?

-----


No. The idea behind OpenFlow is having simple routers which only implement basic switching capabilities in hardware. The complex routing rules can then be set in software which allows for much easier and more capable traffic routing. It also allows more innovation in this part of the infrastructure which has traditionally been very resistant to it because of the incumbent players. You can, for example, experiment with new routing protocols without having to do a firmware or even hardware upgrade.

-----


@mgw, thanks for your insightful comments.

Traditionally the argument has been that specialized hardware doing function X will be faster (but not customizable or upgradable) compared to doing function X in software on commodity hardware (slow but upgradable).

And FPGAs were pointed out as somewhere in the middle (some hw customization using sw).

What are your thoughts on the argument that using commodity hw and implementing routing algos, etc., in sw will be more flexible but slower?

Is there a significant performance/speed cost when you implement core networking features in sw?

-----


Specialized hardware will be faster than software for a long time. OpenFlow is smart enough to get around this problem. The routing still takes place in specialized hardware and is therefore just as fast as in a standard device.

The routing layer in hardware acts on a set of cached rules which are very simple. Simplified, you can imagine them to be of the form matching rule -> routing action. The matching rule selects on packet fields such as source port or destination IP. The routing actions could be "forward to port" or "drop packet". All of this is just as fast as in any other commodity router.

What is special in OpenFlow is another possibility for the "routing action" field (or for unmatched packets): you can send certain packets up to the software level, to the OpenFlow controller. This can be a centralized server, with the logic implemented in software. The software decides how to route these kinds of packets and sends the answer back to the router. There the rule is cached again, and from then on the routing for this flow is as fast as for all the other packets.

This last bit is the only part which is slower compared to commodity routers. A really great solution in my opinion.
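A rough sketch of that split, with an invented rule format rather than the real OpenFlow wire protocol: the fast path is a pure table lookup, and only table misses are punted to controller software, whose answer is cached back into the table:

    flow_table = []  # rules cached in the "hardware" fast path

    def matches(rule, pkt):
        return all(pkt.get(field) == value for field, value in rule["match"].items())

    def controller_decides(pkt):
        # Software policy, e.g. centrally computed routing. Hypothetical logic:
        if pkt["dst_ip"].startswith("10.0.1."):
            return {"match": {"dst_ip": pkt["dst_ip"]}, "action": ("forward", 3)}
        return {"match": {"dst_ip": pkt["dst_ip"]}, "action": ("drop",)}

    def handle_packet(pkt):
        for rule in flow_table:            # fast path: table lookup only
            if matches(rule, pkt):
                return rule["action"]
        rule = controller_decides(pkt)     # slow path: punt to the controller
        flow_table.append(rule)            # cache the decision for later packets
        return rule["action"]

    print(handle_packet({"dst_ip": "10.0.1.7", "src_port": 5555}))  # miss: asks the controller
    print(handle_packet({"dst_ip": "10.0.1.7", "src_port": 6666}))  # hit: served from the cached rule

Only the first packet of a flow pays the round trip to the controller; everything after it is handled by the cached rule, which is the behaviour described above.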

-----


As I said in the last thread (http://news.ycombinator.com/item?id=3847934), words like "specialized" and "commodity" are being thrown around in unusual ways. With OpenFlow, your "commodity" hardware is line rate, so it's no slower than whatever it's replacing.

-----


So it's an alternative to BGP for internal networks?

-----


No, it's not a routing protocol, but it makes the router so configurable that you can switch protocols out easily. You could even invent your own for your internal network. I guess this is what Google has done, because they have so much knowledge about their traffic and infrastructure. It's not only about routing protocols, though. It also allows you to do traffic shaping and all kinds of other cool stuff. Just imagine a software router which is just as fast as one which implements the logic in hardware.

-----


My impression is that OpenFlow is most useful for internal networks. Instead of having smart switches, you have controllable switches and a smart controller. But I guess it would also be useful for splitting a border router into a switch and a controller that does the routing and talks BGP.

-----


Think of it as an automated method for providing configuration convergence across an infrastructure, instead of relying on manual input of routing tables.

-----


...and with the switch to IPv6 there'll be lots of people who need to replace hardware...

Perfect timing from Google.

-----


Unfortunately, the current OpenFlow specification (v1.1) doesn't support IPv6...

-----


IPv6 support was added in 1.2. Besides, everyone needs to have already replaced their hardware for IPv6 by now.

-----


Oops, my bad -- I didn't know OpenFlow 1.2 had been released because openflow.org hasn't been updated since early December. (I had it bookmarked and checked it occasionally just to see if IPv6 support ever got added.) The last I had heard/read, IPv6 support was just a proposed feature for a future spec...

-----


Development and releases of the OpenFlow specification have been shifted to the Open Networking Foundation[1], so you can find the latest OpenFlow release (currently 1.2) over there. It has only come out recently though, so actual hardware & software support for the new version of the protocol is still quite young.

[1] http://www.opennetworking.org

-----


As I understand it, no. ipv4/v6 define an endpoint address. This simply defines how to get from one endpoint to another efficiently.

-----


It also moves the decision-making off the switch/router and onto a server. http://www.openflow.org/wp/learnmore/

-----


What's the benefit of keeping the data plane on the Cisco/Juniper hardware? #ports per box?

In my naivety, I'd expect the main benefits of OpenFlow to be on the WAN links, so you could get away with 6 or so ports in a PC-like chassis running software routing, with dumb local switches?

What am I missing?

-----


What's the benefit of keeping the data plane on the Cisco/Juniper hardware?

To be clear, it's not literally Cisco/Juniper hardware, but it's similar (ASIC-based).

on the WAN links you could get away with 6 or so ports in a PC-like chassis running software routing

ASICs are line rate, denser, and cheaper per port than x86 servers. Roughly $15K buys you either an x86 server with 6 10G ports or an OpenFlow switch with 64 10G ports. Since Google has over 100 ports per switch (according to EE Times), they presumably need the ports and the savings is significant at that scale.
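Back-of-the-envelope on those rough figures (the numbers are the ones quoted above, not vendor pricing):

    budget = 15_000        # approximate spend, per the comment above
    server_ports = 6       # 10G ports on the x86 server
    switch_ports = 64      # 10G ports on the OpenFlow switch

    print(budget / server_ports)   # 2500.0  -> ~$2,500 per 10G port
    print(budget / switch_ports)   # 234.375 -> ~$234 per 10G port, roughly 10x cheaper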

-----


#ports/box or more accurately, #ports/rack unit & gigabits of throughput/rack unit.

The switching hardware is ASIC-based and super-optimized, with CAM-based switching on VLAN tags, destination addresses, etc. A 1U switch could have 36 ports @ 1Gbps, and it needs to switch 36Gbps within the switch, plus perhaps 10Gbps upstream.

So line-rate switching of 36Gbps+ in 1U is possible today only with ASICs, typically from Broadcom.

-----


So OpenFlow is useful within a cabinet?

I was thinking you could have "dumb box, many ports" and "smart box, few ports". Each cab needs one of the former, but you could get away with not many of the latter?

-----


OpenFlow is useful across cabinets.

Cisco/Juniper boxes were smart with many ports, but smart only within each box. That is, you had to configure each one individually. You could do static provisioning with a single tool across multiple boxes, but on the order of minutes between changes. If you had to create a VLAN across ten boxes in 5 different racks, you would be spending quite a bit of time doing that, since you would need to find spare VLANs that are unused across the fabric, configure all the switches in between, etc.

Now, with OpenFlow, all the switches are controlled by the central controller. And flow configuration is dynamic - i.e., you don't need to fiddle with individual configs. When flows start, the controller is queried, and if there's an appropriate flow setup on the controller, it will be implemented, with locally free VLANs, etc.

Basically the entire fabric becomes as dynamic & smart as the controller, instead of each switch being smart and static.
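A sketch of that difference (hypothetical data and helper names; a real controller would speak OpenFlow to each switch): the controller's global view lets one piece of software pick a VLAN that is free everywhere and provision it across the fabric in a single step, rather than someone logging into ten boxes by hand:

    # Global view held by the controller: VLANs already in use on each switch.
    vlans_in_use = {
        "rack1-sw": {10, 20, 30},
        "core-sw":  {20, 40},
        "rack3-sw": {10, 40},
    }

    def pick_free_vlan(switches):
        used = set().union(*(vlans_in_use[s] for s in switches))
        return next(v for v in range(2, 4095) if v not in used)

    def provision(switches, tenant):
        vlan = pick_free_vlan(switches)
        # One logical operation instead of a per-box config session:
        return [{"switch": s, "add_vlan": vlan, "tenant": tenant} for s in switches]

    print(provision(["rack1-sw", "core-sw", "rack3-sw"], tenant="analytics"))
    # VLAN 2 is unused everywhere in this toy view, so it is configured on all three switches.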

-----


Finally a real world use case for active networks.

I built a proof of concept for something like this for my master's thesis in 2000.

I wish it had occurred to me that, instead of the user being in control (as in active networks), the network operator could be in control.

Anyhoo ...

-----


Anyone seen any router speed metrics for this? They are absent in the article, and I can't tell if that's just Wired or whether OpenFlow is still not fast enough.

-----


Sounds like most of the OpenFlow concept is a no-brainer improvement, but when they talk about using it for centralized control, that is worrisome.

-----


Think "centralized control over all the routers and other devices that make up a company's global network" (as opposed to hundreds/thousands of routers all making their own decisions)

This is not about Dr. Evil-style centralized control of the whole Internet.

-----



