OpenFlow in Cloud Computing

The role of OpenFlow in cloud computing, and in particular the installation and configuration of Neutron, is covered here. One of the promises of SDN (Software-Defined Networking) and OpenFlow is the improvement they can bring to data centers and cloud computing infrastructure. It is therefore worth covering the use of OpenFlow (for instance, the Floodlight plugin for OpenStack) in data centers, and in particular in OpenStack, one of the most widely used control and management software stacks for cloud computing. A brief introduction to OpenStack, its networking component (called Neutron as of this writing), and its overall architecture is given first, followed by the installation and configuration of the Floodlight OpenFlow controller plugin. Interested readers should treat this material as a pointer to the further details that can be found in the OpenStack Networking documentation.

OpenStack and Neutron

OpenStack is a cloud operating system (a cloud computing OS) that delivers Infrastructure as a Service (IaaS). Released under the Apache License, OpenStack is free and open source software. The OpenStack Foundation, a non-profit corporate entity established in September 2012, manages the OpenStack project and promotes OpenStack and its developer community. OpenStack consists of a set of building-block projects that control pools of compute (processing), storage, and networking resources in a data center.

OpenStack provides a dashboard that enables administrators to control and provision these resources through a web-based graphical user interface (GUI). OpenStack Compute (Nova), the main part of an IaaS system, is the cloud computing fabric controller. Nova is written in Python and utilizes many external libraries, such as SQLAlchemy (for database access), Kombu (for Advanced Message Queuing Protocol communication), and Eventlet (for concurrent programming).

Nova can manage and automate pools of compute resources and can work with widely available virtualization technologies as well as high-performance computing (HPC) deployments. It is designed to scale horizontally on commodity hardware, with no proprietary hardware or software requirements, and to integrate with third-party technologies and legacy systems. KVM and XenServer are typical choices for the hypervisor technology, together with Hyper-V and Linux container technology such as LXC.
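As a concrete illustration of Nova managing compute resources, the following is a minimal sketch of booting an instance with the nova command-line client; the flavor, image, network ID, and instance name used here are illustrative placeholders and not values taken from the text:

 # List the flavors and images available in the deployment
 nova flavor-list
 nova image-list
 # Boot an instance attached to a Neutron network (IDs are placeholders)
 nova boot --flavor m1.small --image <image-name-or-id> \
   --nic net-id=<network-id> demo-vm
 # Check the status of the new instance
 nova list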

OpenStack utilizes two components for its storage management:

• Swift: It is used for object storage management and is also known as OpenStack Object Storage. Swift is a redundant and scalable storage system: files and objects are written to multiple disks across multiple servers in the data center, and the OpenStack software is responsible for ensuring data integrity and replication across the cluster. Storage clusters scale horizontally simply by adding new servers. If a server or hard drive fails, OpenStack replicates its content to new locations in the cluster from other active nodes. Since OpenStack uses software algorithms to ensure data distribution and replication across different devices, inexpensive commodity hard disks and servers can be used for storage management (a brief CLI sketch for both Swift and Cinder follows this list).

• Cinder: It provides persistent block-level storage devices for use with OpenStack compute instances and is also known as OpenStack Block Storage. The block storage system is responsible for managing the creation, attachment, and detachment of block devices to servers. Block storage is suitable for performance-sensitive scenarios such as expandable file systems, database storage, or providing a server with access to a raw block-level storage device. Block storage volumes are fully integrated into Nova (OpenStack Compute) and the OpenStack Dashboard, which enables cloud users to easily manage their own storage requirements. Snapshot management provides powerful functionality for backing up data stored on block storage volumes; snapshots can be restored or used to create a new block storage volume.
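As a rough, hedged sketch of how these two components are used from the command line, the following assumes that credentials have already been sourced into the environment; the container, file, and volume names are arbitrary examples, and the exact flags (for example, --display-name) vary between client versions:

 # Swift: create a container, upload an object, list and download it
 swift post my-container
 swift upload my-container notes.txt
 swift list my-container
 swift download my-container notes.txt
 # Cinder: create a 10 GB volume and attach it to a running instance
 cinder create --display-name data-vol 10
 nova volume-attach <instance-id> <volume-id> /dev/vdb
 # Take a snapshot of the volume and create a new volume from it
 cinder snapshot-create --display-name data-snap <volume-id>
 cinder create --snapshot-id <snapshot-id> 10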

OpenStack Networking Architecture

OpenStack Networking (Neutron) is a virtual network service that provides an API to define the network connectivity and addressing used by devices from other OpenStack services (such as OpenStack Compute). Neutron utilizes a set of backends called plugins that support a growing set of networking technologies; these plugins may be distributed separately or as part of the main Neutron release. The OpenStack Networking API uses virtual network, subnet, and port abstractions to describe networking resources. In the OpenStack networking ecosystem:

• A network is an isolated L2 segment, similar to a VLAN in physical networking.

• A subnet is a block of IPv4 or IPv6 addresses together with the associated configuration state.

• A port is a connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. A port also describes the network configuration parameters (such as the MAC and IP addresses) associated with it; a short CLI sketch using these abstractions follows this list.
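As a minimal sketch of these abstractions in use, the commands below create a network, a subnet, and a port with the neutron command-line client, and then boot an instance attached to that port; the names, CIDR, flavor, and image are illustrative placeholders, and on older releases the client was called quantum rather than neutron:

 # Create an isolated L2 network and an IPv4 subnet on it
 neutron net-create demo-net
 neutron subnet-create demo-net 10.0.10.0/24 --name demo-subnet
 # Create a port on the network; Neutron associates MAC and IP addresses with it
 neutron port-create demo-net --name demo-port
 # Boot a Nova instance attached to that specific port
 nova boot --flavor m1.small --image <image-name-or-id> \
   --nic port-id=<demo-port-id> demo-vm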

By creating and configuring networks and subnets, users can build rich network topologies and then instruct other OpenStack services, such as OpenStack Compute, to attach virtual interfaces to ports on these networks. In particular, Neutron supports multiple private networks per tenant and enables tenants to choose their own IP addressing scheme. The OpenStack Networking service:

• Provides advanced cloud networking scenarios, such as constructing multi-tiered web applications and migrating applications to the cloud without modifying their IP addresses

• Enables cloud administrators to offer flexible and customized network offerings

• Provides API extensions that let cloud administrators expose additional API capabilities. New capabilities are typically introduced as an API extension and gradually become part of the core OpenStack Networking API (a small query sketch follows this list).
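As a small, hedged example of inspecting those extensions, the Neutron API (listening on port 9696 by default) can be queried directly; the token below is a placeholder that would normally be obtained from Keystone:

 # List the API extensions exposed by the running Neutron server
 curl -H "X-Auth-Token: <keystone-token>" http://127.0.0.1:9696/v2.0/extensions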

Neutron plugins

Providing rich cloud networking by enhancing traditional networking solutions is quite challenging: traditional networking was not designed to scale to cloud proportions or to cope with automatic configuration. OpenStack Networking therefore introduces the concept of a plugin, which is a backend implementation of the OpenStack Networking API. A plugin can use a variety of technologies to implement the logical API requests. Some plugins might use Linux iptables and basic VLANs, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow. Plugins differ in their hardware requirements, features, performance, scale, and operator tools. Because OpenStack supports a wide spectrum of plugins, the cloud administrator can weigh different options and decide which networking technology fits a particular use case. Among the different plugins for Neutron, this section considers the Floodlight controller plugin for OpenStack Neutron.
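Conceptually, the plugin is selected in the Neutron server configuration. The snippet below is only an illustrative sketch of the kind of settings that devstack generates when Q_PLUGIN=bigswitch_floodlight is chosen in localrc; the exact plugin class path, section name, and file layout differ between OpenStack releases, so treat these values as assumptions rather than a definitive configuration:

 # neutron.conf (illustrative): select the Big Switch/Floodlight REST proxy plugin
 core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2
 # plugin ini (illustrative): point the REST proxy at the Floodlight controller
 [restproxy]
 servers = 127.0.0.1:8080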

You will need to run a Floodlight controller for the OpenStack Neutron networking support to function properly. The Floodlight controller can run on a separate Floodlight VM, or you can download the Floodlight source as a compressed ZIP file, unzip it, compile it, and run it with the following simple steps on your Ubuntu VM. Make sure you have Internet connectivity before proceeding:

 # Install the build dependencies (Java JDK, Ant, and zip)
 sudo apt-get update
 sudo apt-get install zip default-jdk ant
 # Download and unpack the Floodlight source
 wget --no-check-certificate https://github.com/floodlight/floodlight/archive/master.zip
 unzip master.zip
 # Build Floodlight and run it with the Neutron properties file
 cd floodlight-master; ant
 java -jar target/floodlight.jar -cf src/main/resources/neutron.properties

To confirm that the VirtualNetworkFilter module is successfully activated, enter the following command on your Ubuntu VM:

 curl 127.0.0.1:8080/networkService/v1.1
{"status":"ok"}

Once Floodlight is confirmed running, we are ready to install OpenStack using the install-devstack script. The following are the steps:

1. It configures the OVS switch on the VM to listen to the Floodlight controller.

2. Then, it installs OpenStack and the Big Switch REST proxy plugin on the VM.

3. If you want the OpenStack Grizzly release, use the following commands:

 wget https://github.com/openstackdev/devstack/archive/stable/grizzly.zip
 unzip grizzly.zip
 cd devstack-stable-grizzly

4. If you want the OpenStack Folsom release, use the following commands:

 wget https://github.com/bigswitch/devstack/archive/floodlight/folsom.zip
 unzip folsom.zip
 cd devstack-floodlight-folsom

5. Use your favorite editor to create a file named localrc and fill in the following details. Remember to replace <password> with a password of your choice, and to set BS_FL_CONTROLLERS_PORT to <floodlight IP address>:8080, where 8080 is the Floodlight listening port. If Floodlight runs on the same VM, use 127.0.0.1 as the IP address; otherwise, use the IP address of the VM or host on which Floodlight is running.

disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service neutron
enable_service bigswitch_floodlight
Q_PLUGIN=bigswitch_floodlight
Q_USE_NAMESPACE=False
NOVA_USE_NEUTRON_API=v2
SCHEDULER=nova.scheduler.simple.SimpleScheduler
MYSQL_PASSWORD=<password>
RABBIT_PASSWORD=<password>
ADMIN_PASSWORD=<password>
SERVICE_PASSWORD=<password>
SERVICE_TOKEN=tokentoken
DEST=/opt/stack
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
#IP:Port for the BSN controller
#if more than one, separate with commas
BS_FL_CONTROLLERS_PORT=<ip_address:port>
BS_FL_CONTROLLER_TIMEOUT=10

6. Then, enter the following command:

./stack.sh

Note that the OpenStack installation is a long process that cannot be interrupted; any interruption or loss of network connectivity leaves it in an unknown state from which it cannot be resumed. It is recommended that you take a snapshot with VirtualBox before you begin the installation, so that you can easily power down and restore the original snapshot if the process is indeed interrupted. The install-devstack script requires uninterrupted IP connectivity to run. If the installation completes successfully, OpenStack is up and running with Neutron backed by the Floodlight controller.
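As a quick, hedged sanity check after a successful run (assuming the default devstack layout, which writes an openrc credentials file into the devstack directory), you can source the credentials and list Neutron networks and Nova instances; the user and project names below are devstack defaults:

 # Load the devstack-generated credentials (assumes a default devstack setup)
 source openrc admin admin
 # Verify that Neutron answers API requests and that Nova can list instances
 neutron net-list
 nova list
 # Re-check that the Floodlight virtual network filter is still reachable
 curl 127.0.0.1:8080/networkService/v1.1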


Case Study: How SDN Changed Traditional Enterprise Network Design

Software-Defined Networking is changing everything about how networking is provisioned, monitored, and supported. At a fundamental level, SDN is changing the face of data center operations and providing new opportunities to enhance cloud computing environments, integrate with enterprise data centers, and support IT outsourcing providers. SDN is designed to separate the hardware that forwards data packets (the router and the data switch) from the logic that controls packet forwarding (the control plane). In SDN, the decision-making that controls data forwarding is taken away from the physical switch and given to a piece of software on a centralized server. In other words, SDN is about creating a programmable interface, similar to Software-Defined Storage, to control the network infrastructure.

To make this work, the OpenFlow protocol, which the Open Networking Foundation is in the process of standardizing across hardware devices and manufacturers, is an integral part of SDN and is used for communication between an external controller and OpenFlow-enabled switches. Without the OpenFlow protocol, SDN would not be possible.

The Need for a New Network Architecture

The networking industry, which includes data center operations, is being driven by powerful market trends (e.g., mobility, server virtualization, cloud computing, and data security). These trends have caused the industry to re-examine traditional architectures, which are built on tiers of Ethernet switches. This traditional design made sense when the client-server model was dominant, but it is no longer suitable for the dynamic computing and storage needs of today's data centers.

Traditional networking technologies fall well short of the current needs of the market; it is virtually impossible for traditional architectures to meet all the demands of the new computing trends that are growing and beginning to dominate the industry. Some of the key changes driving the need for a new network paradigm include:

1. Changing traffic patterns: In traditional client-server applications, communication occurs between one client and one server. This is no longer the case. Today’s applications access different databases and servers, creating a flurry of multi-directional machine-to-machine traffic. At the same time, individual users need to access corporate resources from different types of devices, from anywhere, and at any time.

2. The ‘consumerization of IT’: IT is under pressure to accommodate the increasing use of personal smartphones, tablets, and notebooks to access corporate networks. IT is required to provide support services for these devices while protecting corporate data and meeting compliance guidelines.

3. The rise of cloud services: Enterprises need to access applications, infrastructure, and other IT resources in the cloud on demand and à la carte. To make matters even more challenging, cloud services must be provided in an environment of increased security and compliance. Additionally, enabling self-service provisioning, whether in a private or public cloud, requires elastic scaling of computing, storage, and network resources.

4. Big Data means more bandwidth: Handling today’s ‘big data’ requires massive parallel processing on thousands of servers, all of which need network connections to each other. This trend is currently putting enormous pressure on computing networks as ‘big data’ continues to get bigger.

The changes mentioned above are exposing the limitations and weaknesses of traditional networking technologies. Existing network architectures were not designed to meet the requirements of today’s users, enterprises, and carriers. Network designers are constrained by the limitations of current networks, which include:

  • Complexity of networks, which must accommodate data, voice, and video on many different devices.

  • Inconsistent policies, since applying a network-wide policy can require configuring thousands of devices and mechanisms.

  • Inability to scale as the network becomes overly complex.

  • Lack of industry-wide standards and open protocols, which creates vendor dependence.

Software-Defined Networking to the Rescue

SDN architecture is emerging but has not yet become commonplace. That said, there is a tremendous need for SDN, and the future of networking will rely more and more on software to control and manage devices and systems. When this happens on a larger scale, the pace of innovation for networks will accelerate, as it has in the Software-Defined Storage domain. SDN promises to transform today’s traditional networks into flexible, programmable platforms with the intelligence to allocate resources dynamically and scale to support enormous data centers. SDN will also utilize virtualization technologies to support dynamic, highly automated cloud environments. SDN is certainly on its way to becoming the standard for network design in the future.

IT Outsourcing and SDN

As IT Outsourcers evolve to meet the needs of their markets, SDN will certainly be part of their growth strategies. IT Outsourcers have been and will continue to be among the earliest adopters of this technology. SDN will not only enable Outsourcing providers to be more efficient and agile, it will help them reduce costs and become more competitive.
