How Software Will Redefine Networking (gigaom.com)
34 points by jaybol on March 22, 2011 | 8 comments


The idea is to take the intelligence out of the forwarding devices: have the switch handle forwarding while some remote device programs its tables via OpenFlow. It's a logical extension of what the IEEE and MEF have been working towards with the traffic-engineering protocols (PBB-TE, MPLS-TE).

The general idea is that a centralized computer handles logical path creation through the network while the hardware just forwards packets. It obviates the need for static configuration of logical overlay paths by using whatever intelligence is programmed into the OpenFlow controller. It also means you can do really neat stuff like distributed load balancing.
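To make the split concrete, here's a minimal sketch in Python. Everything here (Switch, FlowController, send_flow_mod) is a hypothetical API invented for illustration, not a real OpenFlow library:

    # Hypothetical controller/switch split -- names are invented,
    # not from any real OpenFlow implementation.

    class Switch:
        """Stand-in for a dumb forwarding device."""
        def __init__(self, dpid):
            self.dpid = dpid        # datapath ID
            self.flow_table = []    # entries installed by the controller

        def send_flow_mod(self, match, actions, priority):
            # In real OpenFlow this is a flow-mod message over the
            # control channel; here we just record the entry locally.
            self.flow_table.append((priority, match, actions))

    class FlowController:
        """Central brain: computes paths, programs switches."""
        def push_flow(self, switch, match, actions, priority=100):
            switch.send_flow_mod(match, actions, priority)

    sw = Switch(dpid=1)
    ctl = FlowController()
    # "Send traffic for 10.0.0.0/24 out port 3." The switch never
    # runs a routing protocol to learn this; the controller decides.
    ctl.push_flow(sw, match={"ip_dst": "10.0.0.0/24"},
                  actions=[("output", 3)])

Load balancing then becomes a controller feature: the same push_flow calls, just with the controller spreading flows across paths.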

My question is: how well does it handle convergence? Is this OpenFlow controller just going to fall over at the first link that decides to flap 100 times a second?


Whether this is a problem depends on what you mean. If a port for some strange reason goes up and down 100 times a second, then this will cause 100 "port status" messages to be sent to the controller. This might be a problem, but the controller should be able to handle it; it's still a small amount of traffic and should not require much processing. There are also distributed controllers, for example Onix. If you mean convergence of the network topology in general, then it depends largely on the controller's implementation.
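For the flapping case specifically, a controller can dampen port-status events with a hold-down timer before recomputing anything. A rough sketch (the event handling here is invented for illustration, not spec behavior):

    import time

    HOLD_DOWN = 2.0    # seconds to suppress repeated events per port
    last_change = {}   # (switch_id, port) -> time of last recompute

    def on_port_status(switch_id, port, is_up):
        """React to a port-status message, but recompute paths at
        most once per HOLD_DOWN interval for a flapping port."""
        now = time.time()
        key = (switch_id, port)
        if now - last_change.get(key, 0.0) < HOLD_DOWN:
            return  # port is flapping; wait for it to settle
        last_change[key] = now
        recompute_paths(switch_id, port, is_up)

    def recompute_paths(switch_id, port, is_up):
        state = "up" if is_up else "down"
        print("recomputing paths around", switch_id, port, state)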


What happens if the connection between the controller and the packet forwarder is lost? Would the switch revert to inserting its own hardware entries, without help from the controller software?

I have no experience with OpenFlow. I do have some experience with PBB-TE and MPLS provisioning systems, and this is always my first question: from the switch's perspective, what happens when these controllers are down? Distributing the controller is better, but even a distributed controller might become unreachable.
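For concreteness, here is the decision point I worry about, sketched as a toy state machine (entirely hypothetical, since I don't know what the OpenFlow spec actually mandates):

    # Toy sketch of what a switch might do when the controller drops.
    # These mode names are mine; I don't know what the spec requires.

    FAIL_SECURE = "keep installed entries, drop unmatched traffic"
    FAIL_STANDALONE = "fall back to legacy L2/L3 forwarding"

    def on_controller_lost(switch):
        # Which branch a switch takes decides whether the network
        # keeps forwarding or slowly goes dark as entries time out.
        if switch.can_run_standalone:
            switch.mode = FAIL_STANDALONE
        else:
            switch.mode = FAIL_SECURE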

The concept is great and I hope it gains more traction. But I'm going to remain doubtful until I see more demos of people pulling links, and more real-world implementations.

Disclaimer: My day job is as a Test Engineer for a switch manufacturer. So I just see flaws in things.


For some reason the whitepaper link on the main OpenFlow website is broken, so here are:

- a Coral cache version (http://preview.tinyurl.com/5sr9vsc)
- the Google cache version put through Quick View (http://tinyurl.com/6hxyxz2)

The GigaOM article was far too basic and vague for me, so the whitepaper proved to be a breath of fresh air. Whereas the GigaOM article flounders in explaining potential commercial applications for OpenFlow, the whitepaper takes a far more humble approach:

"Today, there is almost no practical way to experiment with new network protocols (e.g. new routing protocols, or alternative to IP) in sufficiently realistic settings (e.g. at scale carrying real traffic) to gain the confidence needed for their widespread deployment...a more promising approach [then contemporary commerical and research solutions] is to compromise on generality and to seek a degree of switch flexibility that is:

- Amenable to high-performance and low-cost implementations.
- Capable of supporting a broad range of research.
- Assured to isolate experimental traffic from production traffic.
- Consistent with vendors' needs for closed platforms.

This paper describes the OpenFlow Switch - a specification that is an initial attempt to meet these four goals."
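The abstraction the paper builds on those goals is a flow table whose entries pair a header match with an action and counters; the basic actions are forward to port(s), punt to the controller, or drop. Roughly, as a paraphrase in Python rather than the spec's wire format (the field names here are mine):

    # Paraphrase of the whitepaper's flow-table abstraction;
    # field names are invented, not the spec's.

    from collections import namedtuple

    FlowEntry = namedtuple("FlowEntry", ["match", "action", "counters"])

    flow_table = [
        # Forward this host's web traffic out port 2.
        FlowEntry(match={"ip_dst": "10.1.2.3", "tcp_dst": 80},
                  action=("forward", 2),
                  counters={"packets": 0, "bytes": 0}),
        # Punt anything unmatched to the controller -- this is how an
        # experiment gets first look at new flows without touching
        # the production rules.
        FlowEntry(match={},  # wildcard
                  action=("send_to_controller",),
                  counters={"packets": 0, "bytes": 0}),
    ]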


I don't get it. It's a protocol for managing router forwarding rules. Except every provider out there already implements this in the way that suits them best, and that's the kind of capitalist outlook that made the internet as big and useful as it is today.

I don't see how this will redefine anything for anyone, though it might make routing protocols a little bit simpler to manage. Which I guess might be important if you're a huge corporation having problems keeping your clouds from disappearing now and then.


Imagine if you could control Cisco and Juniper devices with the same commands. It makes sense for you to be oblivious to the device's make and model: same command, same effect. What's more, by saying your device supports OpenFlow version X, you can assume it can do this and that. This is unlike your Cisco box: same device, different IOS options, different capabilities.


It's about low cost. You will be able to buy a router that is either completely dumb, with no OS at all, controlled remotely by your cloud control infrastructure, or maybe added into your Linux server as a card. Don't pay for all that Cisco OS, and a CPU to run it, that you don't need. The raw networking silicon plus some ports is pretty cheap and will get cheaper. Commoditization...


I have not read the specification, but it seems like they reinvented Quality of Service and the protocols surrounding it, like RSVP and DiffServ. Of course it gives them flexibility to do a lot more than just QoS control, but that is the part that stands out to me. Big data users might be able to change their bandwidth capacity in real time.
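The "change bandwidth in real time" part could be as simple as re-pointing a customer's flows at a different pre-configured, rate-limited queue. A hypothetical sketch (the tiers, queue IDs, and this flow-mod dict format are all invented; actual queue setup sits outside the flow protocol as far as I know):

    # Hypothetical sketch -- tiers, queue IDs, and the flow-mod
    # format are invented for illustration.

    QUEUES = {"bronze": 0, "silver": 1, "gold": 2}  # rate-limited queues

    def tier_change(customer_subnet, tier, uplink_port=1):
        """Build a rule that re-points a customer's traffic at a
        different queue on the uplink; effective as soon as the
        switch accepts it."""
        return {
            "match": {"ip_src": customer_subnet},
            "actions": [("enqueue", uplink_port, QUEUES[tier])],
            "priority": 200,
        }

    # Customer pays for a burst: one message, no truck roll.
    print(tier_change("192.0.2.0/24", "gold"))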

It will be interesting to see the implementation and deployment scenarios. Does it mean that ISPs will give bigger consumers control over the percentage of bandwidth they use when needed? It would certainly be added revenue for the ISPs, and Cisco and others should like it a lot too.



