Software-defined Networks, A History


Against the predominant angst concerning investment in #warbots and Lethal Autonomous Weapon (LAW) technologies, it is intriguing to reflect upon the vision of human-computer symbiosis underlying the dawn of computer networking and the creation of the Internet. In Man-Computer Symbiosis (1960), Joseph Licklider first postulated the concept of computer networks as "thinking centers" that would balance the speed of computers, the cost of gigantic memories, and the sophistication of programs across a number of users. His famous 1963 memo to the members and affiliates of the Intergalactic Computer Network further defined a use case in which users apply networking features, such as information retrieval, through a system of cooperative hardware and software elements coordinated via a common network-control language. The development of ARPANET as a proof of concept for integrating the communication of different people, computers, and software programs across distance is a direct result of this inquiry.

In 1983, the Department of Defense transitioned ARPANET to a new network model based on the software-based, rather than hardware-based, TCP/IP protocols, which enabled not only the networking of different, incompatible computer systems, but also the networking of networks (Hauben, 1998). This model continues to serve as the backbone of today's Internet. Following Internet principles, traditional data networks apply packet-based protocols to transport bits across a connected system of nodes. Although widely implemented since the 1980s, IP networks are inherently unstable complex systems: one small local event can cascade into a severe global meltdown. Technical solutions aimed at securing IP networks against failure, such as the placement of middlebox devices, exacerbate the problems of complexity and lack of coordination between mechanisms. Against this background, software-defined networks (SDNs) emerge as a frontrunner among clean-slate programmable network solutions aimed at improving the management and security of enterprise networks.

Simply stated, traditional network architectures are not designed to meet current requirements. SDNs offer an alternative paradigm for meeting the needs of users, companies, and service providers. In an SDN, the control plane communicates via a southbound interface with all devices on the network, maintains a holistic view of the network's topology, and programs the network from a central point. Applications may thus treat the network as a single logical switch. OpenFlow is the first standard communication protocol interfacing between network devices (the data plane) and the SDN controller, ensuring the interoperability of heterogeneous network devices. Virtual switches may interact with virtual servers within SDN-controlled virtual networks, and some SDN controllers use virtualization to overlay independent virtual networks on physical networks. Examining the development of the SDN within its historical context provides an essential theoretical basis upon which potential SDN solutions to modern problems may be charted.
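As a rough illustration of this control/data split (a minimal sketch, not any real controller's API; all class and field names here are hypothetical), the control plane can be modeled as a controller that holds the network-wide topology and pushes match/action rules down to simple devices over a southbound interface:

```python
# Hypothetical sketch of the SDN split: a controller with a global view
# programs simple data-plane devices over a southbound interface.

class FlowRule:
    def __init__(self, match, action):
        self.match = match    # e.g. {"dst": "10.0.0.2"}
        self.action = action  # e.g. "forward:port2"

class Switch:
    """Data-plane device: no local decision logic, just an installed flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install(self, rule):           # the southbound interface
        self.flow_table.append(rule)

    def handle(self, packet):
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send-to-controller"    # table miss: defer to control plane

class Controller:
    """Control plane: holds a holistic topology view, programs it centrally."""
    def __init__(self):
        self.topology = {}             # device name -> Switch

    def register(self, switch):
        self.topology[switch.name] = switch

    def program_path(self, dst, hops):
        # Program every hop from one central point, so applications can
        # treat the network as a single logical switch.
        for name, port in hops:
            self.topology[name].install(FlowRule({"dst": dst}, f"forward:{port}"))

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.program_path("10.0.0.2", [("s1", "port2"), ("s2", "port1")])
print(s1.handle({"dst": "10.0.0.2"}))  # forward:port2
print(s2.handle({"dst": "10.0.0.9"}))  # send-to-controller
```

The point of the sketch is only the division of labor: devices match and forward, while all policy and topology knowledge lives in one logically central place.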

In the mid-1990s, researchers seeking the freedom to rapidly test and deploy new ideas for improving network services envisioned active networking as a programmable approach to customized functionality within individual networks. In the early 2000s, a second wave of research addressed immediate needs in network management through the separation of the network's control and data planes, and yielded several concepts integral to modern SDN designs. Within the last decade, a third wave of research developed open interfaces that enable network experimentation at scale. In observing where and how earlier initiatives failed to "take root," current researchers might integrate these lessons learned as proactive caveats within their own research designs. Further, in extending the common definition of SDNs to include a broader range of approaches to network programming, this history lends itself to a larger dialogue regarding the potential application of SDN technologies to a more diverse range of problems and use cases.

 

1996

Towards an Active Network Architecture presents DARPA-sponsored research investigating the potential of programmable networks. A precursor to software-defined networks, research into active networking initiated the break with traditional network design, allowing for customized programming at the data layer. The report presents the authors' vision of an active network architecture and documents their approach to deploying an operational ActiveNet. Active networks perform computations on user data as information packets move between clients and servers through the network. There are two approaches to active networking: (1) a discrete approach, in which programs are injected into nodes separately from the packets they process; and (2) an integrated approach, in which every message is a program. Although these approaches are non-exclusive and differ more conceptually than practically, the authors frame their concept in terms of the integrated perspective. One weakness of this research is that it first conceives of the approach and only then hypothesizes its utility; this dominance of theory over application lends the work a sense of non-urgency. Its significance is that new SDN research is returning to explore the potentials of active networks introduced by the DARPA team. Exploring the rationale guiding the thought-experimentation within this source thus supports continued inquiry into the potential applications of SDNs to emerging and future network needs, and the availability of the OpenFlow standard renews the relevance of this vision for the Internet.
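The integrated idea, in which every message is a program evaluated at each node it traverses, can be caricatured as follows (a toy sketch only; real active-network capsules carried mobile code for a node's execution environment, not Python callables):

```python
# Toy model of the integrated active-networking approach: each capsule
# carries a program that every node it traverses executes on the packet.

def make_capsule(payload, program):
    return {"payload": payload, "program": program, "hops": []}

def node_process(node_name, capsule):
    # An active node executes the capsule's own program instead of
    # applying a fixed, hard-wired forwarding function.
    capsule["program"](node_name, capsule)
    return capsule

# Example capsule program: record the path taken and uppercase the
# payload, standing in for arbitrary per-hop computation on user data.
def trace_and_transform(node_name, capsule):
    capsule["hops"].append(node_name)
    capsule["payload"] = capsule["payload"].upper()

c = make_capsule("ping", trace_and_transform)
for node in ["A", "B", "C"]:
    node_process(node, c)
print(c["hops"], c["payload"])  # ['A', 'B', 'C'] PING
```

In the discrete approach, by contrast, `trace_and_transform` would be installed on the nodes out-of-band, and ordinary packets would merely reference it.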

2005

In A Clean Slate 4D Approach to Network Control and Management, Greenberg and colleagues present their 4D network-design alternative, which refactors network functionality into four planes: data, discovery, dissemination, and decision. Traditional networks are constructed in three planes: the data, control, and management planes. In traditional design, the lack of coordination between network mechanisms at the data layer creates exponential complexities that cascade up through the control and management layers. The 4D concept is a radical, bottom-up redesign of network configuration that separates the data and control layers by pulling state and control logic out of the routers and into a decision plane, which operates on a network-wide view of both the topology and its traffic. The design fulfills three essential principles. First, the network is configured to meet network-level performance and reliability objectives that trickle down to command behavior at the data layer. Second, network-wide views must assemble a coherent snapshot of the state of the network's components and their performance. Third, the control and management systems have the ability, and the sole responsibility, for setting the state that directs packet forwarding within the data plane. It is in this paper that the researchers first propose separating the network elements from the decision logic, an approach fundamental to, and now ubiquitous throughout, the subsequent ripple of SDN research still predominant today. Of the papers reviewed here, this work best defines the problems inherent in traditional networks to which programmable networks are aimed as solutions: namely, over-complexity, rigidity, and vulnerability.

2006

In Reflections on Network Architecture, Ken Calvert of the University of Kentucky presents a historical overview of active networking within the context of network research. Active networking emerged in the mid-1990s as a clean-slate approach aimed at remedying shortcomings of the Internet's TCP/IP-based architecture. Oriented around the problem of increasing the ease and timeliness of deploying new protocols and services, it focused on the use of programmable network nodes to provide network services. This work presents both an overview and the evolution of DARPA's architectural framework for active networks, which was initially designed to consolidate the breadth of DARPA's active-network research under a shared, packet-based paradigm. Additionally, the framework defines the functional components of a network node. In a design more akin to a general-purpose computer than to a router, each node of an active network runs one or more execution environments (EEs); each EE defines a virtual machine that operates on packets. The NodeOS allocates computing resources and storage among the EEs at the level of the node. This framework evolved over time to include an application layer, which enables users to execute end-to-end services by programming the execution environment. One drawback of the DARPA program was that its research focus placed disproportionate emphasis on developing a platform rather than on solving particular end-to-end issues. The unfortunate result is that contemporary research into those issues failed to leverage the holistic framework of active networking to resolve local problems.

In traditional networks, complex routing and bridging policies and middlebox interdiction mechanisms attempt to regulate networks through access control. However, the architectural complexity of these mechanisms increases the inflexibility and vulnerability of networks, making them ever more difficult to manage. In SANE: A Protection Architecture for Enterprise Networks, the Stanford Clean Slate research team builds upon the 4D approach in this first attempt to prototype a logically centralized, programmable enterprise network. The SANE architecture attempts to reconcile the security needs of the enterprise with the innovation-supporting principles of openness, decentralization, and cooperation so essential to the early growth of the Internet. Looking to Saltzer and Schroeder's definitive principles of information protection, the authors discuss the five principles underlying the SANE architecture in the context of enterprise networks, then move into a discussion of the design architecture and its properties. Radical for its time, SANE is justified in the context of a specific problem: ensuring enterprise network security. In recognizing the difference between enterprise networks and the Internet, the authors propose a new paradigm that prioritizes security, centralizes control, and elevates the importance of consistent policies. Although other concurrent research efforts also consider the need for unified policy enforcement, SANE is unique in its consideration and elevation of users as entities. Despite these breakthroughs, the researchers recognize their approach to be an "extreme" example of network security. In this way, SANE is designed as a proof of concept rather than as an enterprise-ready solution.

2007

In Ethane: Taking Control of the Enterprise, Stanford's Clean Slate research team presents the Ethane prototype as an original enterprise-level network-management solution. As large networks operating under strict reliability and security constraints, enterprise networks risk inflexibility and fragility and must be managed to prevent both misconfigurations and security break-ins. By allowing only permitted communication between end-hosts, Ethane enables the enterprise to control the network. Ethane comprises two components: a central controller and a set of Ethane switches. The controller acts as a central nervous system, coordinating the forwarding actions of the non-thinking switches. In keeping the switches "simple and dumb," Ethane offers a more cost-effective, more adaptable solution to the problem of network-resource management than competing approaches that add new layers of complexity to networks. Ethane extends the 4D design approach to create a centralized control architecture that enables enterprise management by isolating the network's decision logic from the protocols that govern its interactions. Unlike previous clean-slate approaches, Ethane focuses on enabling incremental deployment, independent of component vendors and without requiring host modifications; as more Ethane switches are deployed within the enterprise network, its manageability increases. Further, Ethane approaches network security as a subset of network management: its design uses the network's policies to aid in both path determination and source authentication. The first of its kind, this programmable network emerged from the specific, relevant problem of network management, and gave rise to the still-popular OpenFlow standard that underlies the current SDN buzz.
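Ethane's permission-based flow setup can be caricatured in a few lines (a sketch under loose assumptions: the host names and the flat allow-set below are invented for illustration, and stand in for Ethane's actual policy language and authentication machinery):

```python
# Caricature of Ethane's flow setup: a "dumb" switch sends the first
# packet of each new flow to the central controller, which consults the
# enterprise policy before the switch may forward that flow directly.

# Hypothetical enterprise policy: communication is denied by default
# unless a (source, destination) pair is explicitly permitted.
ALLOWED = {("alice-laptop", "print-server"), ("alice-laptop", "web-proxy")}

class EthaneController:
    def permit(self, src, dst):
        return (src, dst) in ALLOWED

class DumbSwitch:
    def __init__(self, controller):
        self.controller = controller
        self.flows = set()          # flows the controller has approved

    def forward(self, src, dst):
        if (src, dst) in self.flows:
            return "forwarded"      # known flow: no controller involvement
        if self.controller.permit(src, dst):   # first packet: ask controller
            self.flows.add((src, dst))
            return "forwarded"
        return "dropped"

sw = DumbSwitch(EthaneController())
print(sw.forward("alice-laptop", "print-server"))  # forwarded
print(sw.forward("alice-laptop", "file-server"))   # dropped
```

Because every flow decision passes through one policy-aware point, management and security become two views of the same mechanism, which is the paper's central argument.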

2008

The whitepaper that launched a movement, OpenFlow: Enabling Innovation in Campus Networks is an essential primary source in which Stanford's Clean Slate research team proposes the OpenFlow switch as an enabler of virtualized programmable networks. Aiming to address the research community's need to run quick and cost-effective experiments, the researchers propose OpenFlow switch technology as an alternative to closed, inflexible commercial solutions. The OpenFlow switch exploits a common set of functions underlying many switches and routers to provide a standard, open protocol for programming the flow tables of heterogeneous devices. Rather than requiring commercial vendors to provide open, programmable platforms on their products, researchers can use the OpenFlow protocol to access the internal flexibility already inherent in commercial technologies, without requiring vendors to change their hardware or expose proprietary information. OpenFlow thus presents a compromise between vendors and researchers vital to network experimentation and innovation. The paper presents OpenFlow with much enthusiasm and optimism to a diverse audience. Repeated reference to the Ethane project makes it clear that the OpenFlow concept is grounded in solid research, without bogging the reader down in performance statistics. Uniquely, its authors call on researchers, students, and government agencies to join a consortium aimed at popularizing and promoting the use of the OpenFlow switch within their research. The paper thus functions both to describe the high-level concept to an audience of network researchers and to inform readers of the unfolding research and commercial opportunities enabled by this emerging technology.
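The common abstraction the paper identifies is the flow table: entries that match packet-header fields (with wildcards) and bind them to an action and per-entry counters. A simplified, hypothetical rendering (the field names below echo but do not reproduce OpenFlow 1.0's match tuple):

```python
# Simplified flow-table matching in the spirit of the OpenFlow whitepaper:
# each entry pairs a (possibly wildcarded) header match with an action and
# a packet counter. ANY stands in for a wildcarded field.

ANY = object()

flow_table = [
    # [match fields, action, packet counter]
    [{"ip_dst": "10.0.0.5", "tcp_dst": 80},  "forward:port3", 0],
    [{"ip_dst": "10.0.0.5", "tcp_dst": ANY}, "drop",          0],
]

def lookup(packet):
    for entry in flow_table:
        match, action, _ = entry
        if all(v is ANY or packet.get(k) == v for k, v in match.items()):
            entry[2] += 1          # update this entry's statistics
            return action
    # No matching entry: hand the packet to the controller to decide.
    return "encapsulate-to-controller"

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))   # forward:port3
print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 22}))   # drop
print(lookup({"ip_dst": "10.0.0.9", "tcp_dst": 80}))   # encapsulate-to-controller
```

The whitepaper's insight is that vendors need only expose this narrow match/action/counter interface; everything interesting, including experimental protocols, lives in the controller that populates the table.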

 

References

Calvert, K. (2006). Reflections on network architecture: An active networking perspective. ACM SIGCOMM Computer Communication Review, 36(2), 27-30. doi:10.1145/1129582.1129590

Casado, M., Freedman, M. J., Pettit, J., Luo, J., Mckeown, N., & Shenker, S. (2007). Ethane: Taking control of the enterprise. Proceedings of the 2007 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications – SIGCOMM ’07. doi:10.1145/1282380.1282382

Casado, M., Garfinkel, T., Akella, A., Boneh, D., McKeown, N., & Shenker, S. (2006). SANE: A protection architecture for enterprise networks. USENIX Security Symposium. Retrieved from http://yuba.stanford.edu/~casado/sane.pdf

Greenberg, A., Hjalmtysson, G., Maltz, D. A., Myers, A., Rexford, J., Xie, G., . . . Zhang, H. (2005). A clean slate 4D approach to network control and management. ACM SIGCOMM Computer Communication Review, 35(3), 41-52. doi:10.1145/1096536.1096541

Hauben, R. (1998, June 28). A study of the ARPANET TCP/IP digest and of the role of online communication in the transition from the ARPANET to the Internet. Retrieved from http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt

Licklider, J. C. (2001, December 11). Memorandum for members and affiliates of the Intergalactic Computer Network. Retrieved from http://www.kurzweilai.net/memorandum-for-members-and-affiliates-of-the-intergalactic-computer-network

Mckeown, N., Anderson, T., Balakrishnan, H., Parulkar, G., Peterson, L., Rexford, J., . . . Turner, J. (2008). OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review, 38(2), 69-74. doi:10.1145/1355734.1355746

Tennenhouse, D. L., & Wetherall, D. J. (1996). Towards an active network architecture. ACM SIGCOMM Computer Communication Review, 26(2), 5-17. doi:10.1145/231699.231701
