Exploring the Impact of Transport SDN


Software-defined networking (SDN) is a promising approach for achieving the cost-effective, end-to-end infrastructure flexibility that both network operators and their users seek.

Moreover, extending SDN to the transport layer would place the transport-network infrastructure under the control of optimization applications running on the SDN controller and network manager. Bandwidth, latency and power consumption could then be tailored across the transport network according to the demands placed on it at any given moment. For example, applications requiring the highest bandwidth and lowest latency could be switched to the optical layer, where they consume less power, while applications that consume less bandwidth are processed and aggregated at the electrical-packet layer.
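
As a rough illustration of the kind of policy such an optimization application might apply, the sketch below assigns each application flow to the optical or the electrical-packet layer based on its bandwidth and latency demands. The class names and thresholds are hypothetical, not part of any particular controller.

```python
# Hypothetical sketch of a layer-assignment policy an SDN optimization
# application might run; thresholds and names are illustrative only.
from dataclasses import dataclass

@dataclass
class AppFlow:
    name: str
    bandwidth_gbps: float   # requested bandwidth
    max_latency_ms: float   # latency the application can tolerate

def assign_layer(flow: AppFlow) -> str:
    """Steer big, latency-sensitive flows to the optical layer (lower power
    per bit); aggregate smaller flows at the electrical-packet layer."""
    if flow.bandwidth_gbps >= 10 and flow.max_latency_ms <= 5:
        return "optical"
    return "packet"

for f in [AppFlow("storage-replication", 40, 2), AppFlow("web-traffic", 1, 50)]:
    print(f.name, "->", assign_layer(f))
```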

Exploring SDN’s transport possibilities

SDN’s promise beyond the data center is so great that the networking industry and research community are exploring the technology’s transport possibilities in various demonstrations around the world. One such SDN infrastructure test bed has been launched at Marist College in Poughkeepsie, New York, in cooperation with IBM.

The test bed leverages data-center switching, server and storage technology, along with long-distance optical networking equipment, across three data centers. An open-source SDN application developed at Marist monitors, manages and manipulates (i.e., creates, modifies and deletes) end-to-end flows across all network layers. Control of the network’s circuit-based optical and packet-based switching layers is integrated under a common, OpenFlow-based umbrella in which the optical-transport network is represented as a flexible interconnect fabric.
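
The create/modify/delete role of that application can be pictured with a small sketch. The data model and function names below are assumptions for illustration, not the Marist code.

```python
# Hypothetical sketch of create/modify/delete operations on end-to-end flows;
# the data model is an assumption, not the actual Marist application.
flows: dict[str, dict] = {}   # flow id -> attributes spanning packet and optical layers

def create_flow(flow_id: str, src: str, dst: str, gbps: float, layer: str) -> None:
    flows[flow_id] = {"src": src, "dst": dst, "gbps": gbps, "layer": layer}

def modify_flow(flow_id: str, **changes) -> None:
    flows[flow_id].update(changes)

def delete_flow(flow_id: str) -> None:
    flows.pop(flow_id, None)

create_flow("f1", "dc-1", "dc-2", 10, layer="packet")
modify_flow("f1", gbps=100, layer="optical")   # promote the flow to the optical layer
delete_flow("f1")
```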


Multiple test cases have been demonstrated. In one, multilayer reconfiguration is initiated from the SDN controller, which, after performing the necessary calculations, sets up a flow between two of the data centers by programming the test bed’s wavelength-division multiplexing (WDM) system and OpenFlow switches. A path and its bandwidth are released to a given application or applications based on a schedule preset through a web portal.
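
A minimal sketch of this schedule-driven workflow follows; the schedule format and helper functions are assumptions standing in for the controller's real path calculation and device programming.

```python
# Hypothetical sketch of schedule-driven multilayer provisioning; the schedule
# format and the helper functions are assumptions, not the Marist application.
from datetime import datetime

def program_wdm_system(src: str, dst: str, gbps: float) -> None:
    print(f"setting up wavelength {src} -> {dst} at {gbps} Gb/s")

def program_openflow_switches(src: str, dst: str) -> None:
    print(f"installing OpenFlow rules for {src} -> {dst}")

# (start hour, end hour, source DC, destination DC, bandwidth) preset via a web portal
schedule = [(22, 23, "dc-a", "dc-b", 100)]

def run_scheduler(now: datetime) -> None:
    for start, end, src, dst, gbps in schedule:
        if start <= now.hour < end:                 # inside the reserved window
            program_wdm_system(src, dst, gbps)      # program the optical layer
            program_openflow_switches(src, dst)     # program the packet layer

run_scheduler(datetime.now())
```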

In another test case, reconfiguration is initiated directly from a cloud-controller application that requests a path and bandwidth; the SDN controller then programs the WDM system and switches, allocating bandwidth as needed. Finally, the “application-aware” controller releases the bandwidth back to the LAN/WAN (local- and wide-area network) pool.
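
The request-and-release cycle in this test case might look roughly like the sketch below, where the bandwidth pool and function names are invented for illustration.

```python
# Hypothetical sketch of the request/release cycle between a cloud controller
# and an SDN controller; all names and numbers are illustrative.
class BandwidthPool:
    def __init__(self, total_gbps: float):
        self.free_gbps = total_gbps

    def allocate(self, gbps: float) -> bool:
        if gbps <= self.free_gbps:
            self.free_gbps -= gbps
            return True
        return False

    def release(self, gbps: float) -> None:
        self.free_gbps += gbps

lan_wan_pool = BandwidthPool(total_gbps=400)

def handle_cloud_request(src: str, dst: str, gbps: float) -> None:
    if lan_wan_pool.allocate(gbps):
        print(f"programming WDM system and switches: {src} -> {dst}, {gbps} Gb/s")
    else:
        print("insufficient bandwidth; request deferred")

def handle_cloud_release(gbps: float) -> None:
    lan_wan_pool.release(gbps)   # bandwidth returned to the LAN/WAN pool
    print(f"released {gbps} Gb/s; pool now {lan_wan_pool.free_gbps} Gb/s")

handle_cloud_request("dc-a", "dc-b", 100)
handle_cloud_release(100)
```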

The Marist College test bed illustrates how this functionality can help automate typical workflows in multi-data-center environments. For example, the movement of virtual machines (VMs) between data centers can be fully automated, including the provisioning of entirely new optical circuits, in response to alarms triggered by VM-monitoring software: a server or storage system nearing maximum capacity could be the catalyst for shifting VMs to less utilized servers, with storage shifting to a remote, cloud-based data center.
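
To make the automation concrete, the sketch below shows how such an alarm could trigger both the circuit provisioning and the VM migration. The threshold, names and helper functions are assumptions, not the test bed's actual monitoring software.

```python
# Hypothetical sketch: a capacity alarm triggers circuit set-up and VM moves.
# The threshold, names and helper functions are assumptions.
CAPACITY_ALARM_THRESHOLD = 0.85   # fraction of server/storage capacity

def provision_optical_circuit(src_dc: str, dst_dc: str) -> None:
    print(f"provisioning new optical circuit {src_dc} -> {dst_dc}")

def migrate_vms(vms: list[str], dst_dc: str) -> None:
    for vm in vms:
        print(f"migrating {vm} to {dst_dc}")

def on_capacity_alarm(server: str, utilization: float, vms: list[str]) -> None:
    if utilization >= CAPACITY_ALARM_THRESHOLD:
        provision_optical_circuit("dc-local", "dc-remote")   # add WAN capacity
        migrate_vms(vms, "dc-remote")                        # shift the load

on_capacity_alarm("server-12", 0.92, ["vm-101", "vm-102"])
```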

Another possible use case involves flooding a connection with VM, storage and/or video traffic: when a link reaches a certain capacity threshold (say, 90 percent), a new wavelength could be activated automatically.
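
A minimal sketch of that trigger, assuming the 90 percent figure from the example and hypothetical monitoring and activation functions:

```python
# Hypothetical sketch of threshold-triggered wavelength activation; the
# 90 percent threshold comes from the example above, everything else is assumed.
LINK_UTILIZATION_THRESHOLD = 0.90

def activate_wavelength(link: str) -> None:
    print(f"activating additional wavelength on {link}")

def check_link(link: str, used_gbps: float, capacity_gbps: float) -> None:
    if used_gbps / capacity_gbps >= LINK_UTILIZATION_THRESHOLD:
        activate_wavelength(link)   # add capacity before the link saturates

check_link("dc-a<->dc-b", used_gbps=92, capacity_gbps=100)
```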


