
OpenStack Neutron and OVS (Open vSwitch) translated to the Network Engineer's language


Introduction to Open vSwitch (OVS)

IaaS (Infrastructure as a Service) is provided by a group of different, interconnected services. OpenStack is an Operating System that makes IaaS possible by controlling the “pools” of Compute, Storage and Networking within a Data Center, all managed through the Dashboard (later we'll discuss some more about what the Dashboard really is).

NaaS (Network as a Service) is the part we will mainly focus on in this post; NaaS is what OpenStack brings to Networking. NaaS is in charge of configuring all the Network Elements (L2, L3 and Network Security) using APIs (Application Programming Interfaces). Users work with NaaS as the interface that allows them to add/configure/delete all the Network Elements, such as Routers, Load Balancers and Firewalls.
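
To make this concrete, here is a minimal sketch of driving the Networking API from Python with python-neutronclient. The credentials, endpoint URL and resource names are placeholders, assuming a Juno-era Keystone v2 setup:

```python
from neutronclient.v2_0 import client

# Placeholder credentials and endpoint -- adjust to your deployment.
neutron = client.Client(username='demo',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Create a tenant network and a subnet on it.
net = neutron.create_network({'network': {'name': 'web-net'}})
net_id = net['network']['id']
subnet = neutron.create_subnet({'subnet': {'network_id': net_id,
                                           'ip_version': 4,
                                           'cidr': '10.0.0.0/24'}})

# Create a router and attach the subnet to it -- no knowledge of the
# underlying L2/L3 technology is required from the user.
router = neutron.create_router({'router': {'name': 'web-router'}})
neutron.add_interface_router(router['router']['id'],
                             {'subnet_id': subnet['subnet']['id']})
```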

Neutron is the OpenStack module in charge of Networking. Neutron does its work through plug-ins. A Neutron plug-in drives a particular external mechanism, such as:

  • Open vSwitch (OVS), or external L2 Agents.
  • SDN Controllers, such as VMware NSX, Cisco ACI, Alcatel Nuage etc.

All of you who know networking will stop here, make a “poker face” and say: “Wait… what?”. This brings us to the KEY point of this post: OpenStack's weak point is the Network. Without an additional SDN controller, it isn't capable of controlling a physical network infrastructure; well, at least not one that exceeds a few Switches.

Neutron consists of the following Logical Elements:

  • neutron-server: Accepts the API calls and forwards them to the corresponding Neutron plug-in. Neutron-server is a Python daemon that exposes the OpenStack Networking API and passes tenant requests to a suite of plug-ins for additional processing. This is what makes Networking as a Service (NaaS) possible: the user gets an interface to configure the network and provision security (add/remove Routers, FWs, LBs etc.) without worrying about the technology underneath.
  • Database: Stores the current state of the different plug-ins.
  • Message Queue: Where the calls between the neutron-server and the agents are queued (see the RPC sketch after this list).
  • Plugins and Agents: In charge of executing different tasks, like plugging/unplugging ports, managing IP addressing etc.
  • L2 Agent (OVS or an external agent): Manages the Layer 2 connectivity of every Node (Network and Compute). It resides on the Hypervisor. The L2 Agent communicates all connectivity changes to the neutron-server.
  • DHCP Agent: DHCP Service for the Tenant Networks.
  • L3 Agent (neutron-l3-agent): L3/NAT Connectivity (Floating IP).  It resides on the Network Node, and uses the IP Namespaces.
  • Advanced Services: Services, such as FW, LB etc.
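
Since the agents and the neutron-server live on different nodes, they talk to each other over the Message Queue using RPC. Below is a minimal sketch of what such a call looks like, built with oslo.messaging (the library OpenStack uses for this). The broker URL, topic name and method name are illustrative placeholders, not Neutron's exact internals:

```python
from oslo_config import cfg
import oslo_messaging

# Placeholder broker URL; in a real deployment this points at the
# RabbitMQ (or other AMQP) server configured in neutron.conf.
transport = oslo_messaging.get_transport(
    cfg.CONF, url='rabbit://guest:guest@controller:5672/')

# The agent side: an RPC client aimed at the server's topic.
# 'q-plugin' and 'update_device_up' are illustrative names here.
target = oslo_messaging.Target(topic='q-plugin')
client = oslo_messaging.RPCClient(transport, target)

# "A port just came up on my integration bridge" -- the call blocks
# until the server side picks the message off the queue and replies.
client.call({}, 'update_device_up', device='tap1234', host='compute-01')
```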


Physically, Neutron is deployed as three isolated systems. In a basic deployment, we will have the Neutron Server connected to a Database. The integration with the L2, L3 and DHCP Agents, as well as with the Advanced Services (FWs, LBs), is done through the Message Queue.

  • Controller Node, which runs the Neutron API Server, so the Horizon and CLI API calls all “land” here. The Controller is the principal Neutron server.
  • Compute Node is where all the VMs run. These VMs need L2 connectivity, and therefore each Compute Node runs the Layer 2 Agent.
  • Network Node runs all the Network Services Agents.

*There is an L2 Agent on every Compute and Network Node.

Layer 2 Agents run on the Hypervisor (there is an L2 Agent on each host), monitor when devices are added/removed, and communicate those changes to the Neutron Server. When a new VM is created, the following happens:

  • Nova plugs the VM's virtual interface (a tap device) into the local integration bridge.
  • The L2 Agent detects the new port and reports it to the neutron-server.
  • The neutron-server answers with the port's network details, and the L2 Agent configures the local switching (VLAN tag or tunnel) and Security Group rules for that port.
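
To give a feel for the mechanics, here is a toy sketch of the detection part of the L2 Agent's job: watch the local OVS integration bridge (br-int) for port changes and report the difference. The real agent reacts to OVSDB events and reports over RPC; the reporting function below is a stand-in:

```python
import subprocess
import time

def ovs_ports(bridge='br-int'):
    """Return the set of port names currently plugged into the bridge."""
    out = subprocess.check_output(['ovs-vsctl', 'list-ports', bridge])
    return set(out.decode().split())

def report_to_neutron_server(added, removed):
    # Stand-in for the RPC call to the neutron-server (see the queue sketch).
    print('added: %s, removed: %s' % (sorted(added), sorted(removed)))

known = ovs_ports()
while True:
    current = ovs_ports()
    if current != known:
        report_to_neutron_server(current - known, known - current)
        known = current
    time.sleep(2)   # the real agent reacts to OVSDB events, not a fixed poll
```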

Layer 3 Agents don't run on the Hypervisor like the L2 Agent, but on a separate Network Node, and they use IP Namespaces. The L3 Agent provides an isolated copy of the Network Stack for each router, so Tenants can re-use the same IP Addresses: the Tenant is what marks the isolation between them. For now, the L3 Agent only supports Static Routes.
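
The Floating IPs mentioned earlier are the L3 Agent's NAT job: conceptually a 1:1 NAT between the Floating IP and the VM's fixed IP, programmed inside the router's Namespace. The sketch below only prints the equivalent iptables commands; the namespace name and addresses are made-up placeholders, and the real agent manages its own iptables chains rather than raw PREROUTING/POSTROUTING:

```python
def floating_ip_rules(ns, floating_ip, fixed_ip):
    prefix = ['ip', 'netns', 'exec', ns, 'iptables', '-t', 'nat']
    return [
        # Traffic arriving at the Floating IP is DNATed to the VM's fixed IP...
        prefix + ['-A', 'PREROUTING', '-d', floating_ip,
                  '-j', 'DNAT', '--to-destination', fixed_ip],
        # ...and traffic leaving the VM is SNATed back to the Floating IP.
        prefix + ['-A', 'POSTROUTING', '-s', fixed_ip,
                  '-j', 'SNAT', '--to-source', floating_ip],
    ]

for cmd in floating_ip_rules('qrouter-1234', '203.0.113.10', '10.0.0.5'):
    print(' '.join(cmd))
```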

Namespace: In Linux, Network Namespaces are used for routing, and the L3 Agent relies on the L2 Agent to populate its “cache”. A Namespace allows the isolation of a group of resources at the kernel level, which is what enables a Multi-Tenant environment from the Network point of view. Only the instances within the same Network Namespace can communicate with each other, even if the instances are spread across OpenStack Compute Nodes.
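
You can reproduce this isolation on any Linux host, without OpenStack at all. The sketch below (needs root) creates two Namespaces with arbitrary names and gives both the SAME address, which is exactly what makes overlapping Tenant addressing possible:

```python
import subprocess

def sh(*cmd):
    subprocess.check_call(cmd)

for ns in ('tenant-a', 'tenant-b'):
    sh('ip', 'netns', 'add', ns)
    sh('ip', 'netns', 'exec', ns, 'ip', 'link', 'set', 'lo', 'up')
    # Both namespaces happily hold 10.0.0.1/24 at the same time, because
    # each one has its own isolated copy of the network stack.
    sh('ip', 'netns', 'exec', ns, 'ip', 'addr', 'add', '10.0.0.1/24', 'dev', 'lo')

# Each namespace sees only its own addresses and routes:
for ns in ('tenant-a', 'tenant-b'):
    sh('ip', 'netns', 'exec', ns, 'ip', 'addr', 'show', 'lo')

# Clean up afterwards with: ip netns del tenant-a && ip netns del tenant-b
```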

Here is one example of how to deploy OpenStack (Juno release) Networking. In this example there is a separate API Node, but in most deployments this node is actually integrated with the Controller Node: