Contiv Intro

What is Contiv?

Contiv is an open source project that delivers policy-based networking for containers. The idea behind Contiv is to make it easier for end users to deploy microservices in their environments.

Contiv provides a higher level of networking abstraction for microservices. It secures your applications with a rich policy framework and provides built-in service discovery and service routing for scale-out services.

With the advent of containers and microservices architectures, there is a need for automated, programmable network infrastructure that caters to the dynamic workloads containers create. With container and microservices technologies, speed and scale become critical, so automation becomes a critical component of network provisioning for these workloads.

Bare-metal hosts, VMs, and containers also introduce different layers of virtualization abstraction, which complicates packet encapsulation. With public cloud technologies, tenant-level isolation is necessary for container workloads as well.

Contiv provides an IP address per container and eliminates the need for host-based port NAT. It works with different kinds of networks, such as pure layer 3 networks, overlay networks, and layer 2 networks, and provides the same virtual network view to containers regardless of the underlying technology. Contiv works with all major schedulers, such as Kubernetes and Docker Swarm: the scheduler provides compute resources to your containers, and Contiv provides networking for them. Contiv supports both CNM (the Docker networking architecture) and CNI (the CoreOS-originated networking architecture used by Kubernetes). Contiv has L2, L3 (BGP), overlay (VXLAN), and ACI modes, and it has built-in east-west service load balancing. Contiv also provides traffic isolation by keeping control and data traffic separate.
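
To make this concrete, the following is a minimal sketch, in Go, of how a tool might ask Netmaster to create a VXLAN-backed network over its REST API. The endpoint path, port, and payload field names are assumptions for illustration and may not match a given Contiv release.

    // netcreate.go - hedged sketch: POST a network definition to Netmaster's REST API.
    // The endpoint path and field names below are assumptions for illustration only.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    type networkSpec struct {
        TenantName  string `json:"tenantName"`
        NetworkName string `json:"networkName"`
        Encap       string `json:"encap"`  // e.g. "vxlan" or "vlan"
        Subnet      string `json:"subnet"` // containers get their IPs from this pool
        Gateway     string `json:"gateway"`
    }

    func main() {
        spec := networkSpec{
            TenantName:  "default",
            NetworkName: "contiv-net",
            Encap:       "vxlan",
            Subnet:      "10.1.1.0/24",
            Gateway:     "10.1.1.254",
        }
        body, _ := json.Marshal(spec)

        // Hypothetical Netmaster endpoint; the real path depends on the Contiv release.
        url := "http://netmaster:9999/api/v1/networks/default:contiv-net/"
        resp, err := http.Post(url, "application/json", bytes.NewReader(body))
        if err != nil {
            log.Fatalf("request to netmaster failed: %v", err)
        }
        defer resp.Body.Close()
        fmt.Println("netmaster responded with:", resp.Status)
    }

The same kind of request could describe a VLAN-backed (L2) or routed (L3) network simply by changing the encapsulation and related fields.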

Contiv is made up of two major components: Netmaster and Netplugin.

The following diagram represents the overall architecture of Contiv and shows how Netmaster and Netplugin are leveraged to provide the overall Contiv solution.

Netmaster and Netplugin

Netmaster:

This one binary performs multiple tasks for Contiv. It is a REST API server that can handle multiple requests simultaneously. It learns routes and distributes them to Netplugin nodes. It acts as the resource manager, allocating IP addresses, VLAN IDs, and VXLAN IDs for networks. It uses a distributed state store such as etcd or Consul to save the desired runtime state of all Contiv objects, which makes Contiv completely stateless, scalable, and restartable. Netmaster has a built-in heartbeat mechanism through which it talks to peer Netmasters, avoiding a single point of failure. Netmaster can also work with an external integration manager (policy engine) such as ACI.
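
As a rough illustration of the "desired state in a distributed store" idea, the sketch below writes a network object under an etcd key prefix and reads it back using the official etcd v3 Go client. The key layout and JSON shape are assumptions for illustration, not Contiv's actual schema.

    // statestore.go - hedged sketch: keep desired state in etcd so that any
    // Netmaster instance can be restarted and rebuild it. Key layout is assumed.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatalf("cannot reach etcd: %v", err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        // Hypothetical key layout: /contiv/networks/<tenant>/<network>
        key := "/contiv/networks/default/contiv-net"
        val := `{"encap":"vxlan","subnet":"10.1.1.0/24"}`
        if _, err := cli.Put(ctx, key, val); err != nil {
            log.Fatalf("put failed: %v", err)
        }

        // A peer or restarted Netmaster can rebuild its view from the prefix.
        resp, err := cli.Get(ctx, "/contiv/networks/", clientv3.WithPrefix())
        if err != nil {
            log.Fatalf("get failed: %v", err)
        }
        for _, kv := range resp.Kvs {
            fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
        }
    }

Because every object lives in the store rather than in process memory, a restarted or newly added Netmaster can rebuild its view simply by re-reading the prefix.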

Netplugin:

Each host agent (Netplugin) implements the CNI or CNM networking model adopted by popular container orchestration engines such as Kubernetes and Docker Swarm. It communicates with Netmaster over the REST interface, and Contiv additionally uses JSON-RPC to distribute endpoints from Netplugin to Netmaster. Netplugin handles up/down events for Contiv networks and groups, and coordinates with the other components to fetch policies, create container interfaces, request IP allocations, and program host forwarding.
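
For a sense of what the CNI side of this model looks like on the host, here is a minimal, generic sketch of a CNI-style executable: the container runtime invokes it with the CNI_* environment variables and passes the network configuration on stdin. This is not Contiv's actual Netplugin code; in Contiv, the ADD step would end with calls to Netmaster to reserve an IP and program the endpoint.

    // cni_sketch.go - hedged sketch of a CNI-style ADD/DEL handler.
    // A real plugin such as Contiv's would talk to Netmaster here to allocate an
    // IP and wire the container interface into OVS; this skeleton only parses input.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // netConf mirrors the minimal fields every CNI network config carries.
    type netConf struct {
        CNIVersion string `json:"cniVersion"`
        Name       string `json:"name"`
        Type       string `json:"type"`
    }

    func main() {
        cmd := os.Getenv("CNI_COMMAND") // ADD, DEL, CHECK or VERSION
        containerID := os.Getenv("CNI_CONTAINERID")
        netns := os.Getenv("CNI_NETNS")   // path to the container's netns
        ifname := os.Getenv("CNI_IFNAME") // interface to create, e.g. eth0

        var conf netConf
        if err := json.NewDecoder(os.Stdin).Decode(&conf); err != nil {
            fmt.Fprintf(os.Stderr, "bad network config: %v\n", err)
            os.Exit(1)
        }

        switch cmd {
        case "ADD":
            // Contiv's Netplugin would now: request an IP from Netmaster, create a
            // veth pair, move one end into netns as ifname, attach the other end to
            // OVS, and print an IPAM result on stdout. Here we print a stub result.
            fmt.Printf(`{"cniVersion":%q,"interfaces":[{"name":%q,"sandbox":%q}]}`,
                conf.CNIVersion, ifname, netns)
        case "DEL":
            // Tear down the endpoint for containerID and release its IP.
            _ = containerID
        default:
            fmt.Fprintf(os.Stderr, "unsupported CNI_COMMAND %q\n", cmd)
            os.Exit(1)
        }
    }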

Netplugin uses Contiv's custom OpenFlow-based pipeline on the Linux host and communicates with Open vSwitch (OVS) through the OVS driver. Contiv currently uses OVS for its data path, and Contiv's plugin architecture makes it easy to plug in other data paths (for example VPP or eBPF).
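
The pluggable data path can be pictured as a small interface that the rest of the agent programs against, with OVS as one implementation and VPP or an eBPF data path as possible alternatives. The interface below is a hypothetical Go sketch for illustration; it is not Contiv's actual ofnet or driver API.

    // datapath.go - hedged sketch of a pluggable data-path abstraction.
    // The interface and method names are hypothetical, for illustration only.
    package main

    import "fmt"

    // Datapath is what the host agent programs, regardless of the backend.
    type Datapath interface {
        CreateEndpoint(containerID, ip, mac string) error
        DeleteEndpoint(containerID string) error
        ApplyPolicy(groupID string, rules []string) error
    }

    // ovsDatapath would drive Open vSwitch via OpenFlow/OVSDB in a real agent.
    type ovsDatapath struct{}

    func (o *ovsDatapath) CreateEndpoint(containerID, ip, mac string) error {
        fmt.Printf("ovs: add port for %s (%s / %s)\n", containerID, ip, mac)
        return nil
    }

    func (o *ovsDatapath) DeleteEndpoint(containerID string) error {
        fmt.Printf("ovs: remove port for %s\n", containerID)
        return nil
    }

    func (o *ovsDatapath) ApplyPolicy(groupID string, rules []string) error {
        fmt.Printf("ovs: program %d flow rules for group %s\n", len(rules), groupID)
        return nil
    }

    func main() {
        // Swapping in a VPP or eBPF implementation would not change the caller.
        var dp Datapath = &ovsDatapath{}
        dp.CreateEndpoint("c1", "10.1.1.2", "02:02:0a:01:01:02")
        dp.ApplyPolicy("web-group", []string{"allow tcp/80 from app-group"})
    }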

Contiv's Custom OpenFlow-Based Pipeline:

Contiv has a custom OpenFlow-based packet pipeline with configurable modes: overlay networking (VXLAN), native L3 mode (BGP integration), and native L2 mode (for classic topologies). The pipeline is built from scratch and is an integral part of Contiv. A packet from a container first hits the input table, and a series of interlinked OpenFlow tables then determines the tenant, network, endpoint group, and policy information needed to realize a policy-driven, multi-tenant packet pipeline.

Contiv currently uses OVS (Open vSwitch) as the data plane (VPP integration as an alternative data plane is in development). The Ofnet library supports multiple software-defined networking paradigms such as VLAN bridge, VXLAN bridge, and vrouter.
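
The table-chaining idea can be modeled conceptually as a series of lookups, each resolving one more piece of context before a forwarding decision is made. The Go sketch below is purely illustrative: the table names, contents, and verdicts are invented, and the real pipeline is programmed as OpenFlow tables in OVS via Ofnet.

    // pipeline.go - hedged, conceptual model of an interlinked-table lookup chain.
    // The real pipeline is programmed as OpenFlow tables in OVS via Ofnet.
    package main

    import "fmt"

    var (
        // input/network table: map an encapsulation ID (VLAN or VXLAN VNID)
        // to a tenant/network.
        networkTable = map[int]string{1001: "default/contiv-net"}
        // endpoint table: map a source or destination IP to its endpoint group.
        epgTable = map[string]string{"10.1.1.2": "web-group", "10.1.1.3": "db-group"}
        // policy table: verdict per (source group, destination group) pair.
        policyTable = map[[2]string]string{
            {"web-group", "db-group"}: "allow tcp/3306",
        }
    )

    func process(vnid int, srcIP, dstIP string) {
        net, ok := networkTable[vnid]
        if !ok {
            fmt.Println("drop: unknown network")
            return
        }
        src, dst := epgTable[srcIP], epgTable[dstIP]
        verdict, ok := policyTable[[2]string{src, dst}]
        if !ok {
            verdict = "drop"
        }
        fmt.Printf("net=%s %s(%s) -> %s(%s): %s\n", net, srcIP, src, dstIP, dst, verdict)
    }

    func main() {
        process(1001, "10.1.1.2", "10.1.1.3") // web -> db: allowed by the policy rule
        process(1001, "10.1.1.3", "10.1.1.2") // db -> web: no rule, so dropped
    }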

Contiv Modes

Contiv can provide native connectivity (traditional L2 and L3 networks) as well as overlay connectivity (the public cloud case; AWS is currently supported). In traditional L2 connectivity, each packet leaving a container is tagged with a specific VLAN so that container workloads fit into a traditional L2 network without any additional settings. For L3 connectivity, Contiv uses BGP to distribute routes over the network.
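
As a hedged illustration of the operator-facing difference between these modes, the sketch below marshals two hypothetical pieces of intent: a VLAN-backed (native L2) network definition and a per-host BGP peering for routed (native L3) mode. The field names are assumptions, not Contiv's exact object schema; in a real deployment this intent is expressed through Contiv's CLI or Netmaster API.

    // modes.go - hedged sketch of native L2 (VLAN) vs native L3 (BGP) intent.
    // Field names are illustrative assumptions, not Contiv's exact object schema.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // l2Network: traffic leaves the host tagged with a VLAN ID, so the physical
    // L2 fabric only needs that VLAN trunked to the hosts.
    type l2Network struct {
        Name   string `json:"name"`
        Encap  string `json:"encap"`
        PktTag int    `json:"pktTag"` // VLAN ID stamped on container traffic
        Subnet string `json:"subnet"`
    }

    // bgpPeer: in routed mode each host advertises its container routes upstream.
    type bgpPeer struct {
        Hostname   string `json:"hostname"`
        RouterIP   string `json:"routerIP"`
        LocalAS    int    `json:"localAS"`
        NeighborIP string `json:"neighbor"`
        NeighborAS int    `json:"neighborAS"`
    }

    func main() {
        l2 := l2Network{Name: "blue-net", Encap: "vlan", PktTag: 100, Subnet: "10.2.1.0/24"}
        l3 := bgpPeer{Hostname: "node1", RouterIP: "172.16.0.11/24", LocalAS: 65000,
            NeighborIP: "172.16.0.1", NeighborAS: 65100}

        out1, _ := json.MarshalIndent(l2, "", "  ")
        out2, _ := json.MarshalIndent(l3, "", "  ")
        fmt.Println("L2 (VLAN) network intent:\n" + string(out1))
        fmt.Println("L3 (BGP) peering intent:\n" + string(out2))
    }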

Contiv + ACI

With the success of Cisco ACI in the market and the rise of microservices, integration between ACI and Contiv was inevitable. ACI addresses use cases such as infrastructure automation, application-aware infrastructure, scale-out models, and dynamic applications, which are key pillars of modern microservices architectures.

Contiv working with ACI demonstrates how this integration can be achieved in a Docker containerized environment, creating the objects and associations that enable containers to communicate according to policy intent.

Contiv and ACI integration is done using the aci-gw Docker container, which uses the APIC Python SDK and allows communication between Contiv and APIC.
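
The aci-gw itself relies on the APIC Python SDK, but the kind of operation it performs can be illustrated with the underlying APIC REST API: authenticate against /api/aaaLogin.json and then push a tenant object under the uni managed-object tree. The Go sketch below (kept in Go for consistency with the other examples) uses placeholder credentials and skips TLS verification, as is common in lab setups with APIC's self-signed certificate.

    // aci_sketch.go - hedged sketch: create a tenant on APIC over its REST API.
    // This illustrates the kind of call aci-gw makes; the actual gateway uses the
    // APIC Python SDK. Credentials and hostnames below are placeholders.
    package main

    import (
        "bytes"
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "net/http/cookiejar"
    )

    func main() {
        jar, _ := cookiejar.New(nil) // keeps the APIC session cookie between calls
        client := &http.Client{
            Jar: jar,
            Transport: &http.Transport{
                // Lab-only shortcut: APIC typically runs with a self-signed cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        apic := "https://apic.example.com"

        // 1) Log in: POST an aaaUser object; APIC returns a session token cookie.
        login := []byte(`{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}`)
        resp, err := client.Post(apic+"/api/aaaLogin.json", "application/json",
            bytes.NewReader(login))
        if err != nil {
            log.Fatalf("login failed: %v", err)
        }
        resp.Body.Close()

        // 2) Create (or update) a tenant under the policy universe "uni".
        tenant := []byte(`{"fvTenant":{"attributes":{"name":"ContivTenant"}}}`)
        resp, err = client.Post(apic+"/api/mo/uni.json", "application/json",
            bytes.NewReader(tenant))
        if err != nil {
            log.Fatalf("tenant create failed: %v", err)
        }
        defer resp.Body.Close()
        fmt.Println("APIC responded with:", resp.Status)
    }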

Below is a diagram that represents a typical workflow in the ACI + Contiv integration:

Step 1: You configure the tenant and its dependent resources in APIC.

Steps 2 and 4: The Contiv Netmaster fetches this information when Contiv is running in ACI mode.

Step 3: A DevOps engineer specifies the policies for the application workloads that developers will use. This is the application intent (a hedged sketch of such an intent follows this list).

Step 5: Developers launch their applications, which are managed by orchestration engines such as Docker Swarm or Kubernetes.

Step 6: The Contiv Netplugin makes sure the policy is implemented correctly. It delegates all policy-related context to APIC so that packet forwarding can be taken care of at the ACI level.
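
To give the application intent of step 3 some shape, the sketch below expresses a simple intent as Contiv-style objects: an isolation policy, a rule allowing database traffic, and a group that binds the policy to workloads. The field names are assumptions for illustration, not Contiv's exact REST schema; in practice this intent is typically entered through Contiv's netctl CLI or API.

    // policy_sketch.go - hedged sketch of application intent (step 3): a policy,
    // one rule, and a group bound to that policy. Field names are illustrative
    // assumptions, not Contiv's exact REST schema.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type policy struct {
        TenantName string `json:"tenantName"`
        PolicyName string `json:"policyName"`
    }

    type rule struct {
        PolicyName string `json:"policyName"`
        RuleID     string `json:"ruleId"`
        Direction  string `json:"direction"` // "in" or "out"
        Protocol   string `json:"protocol"`
        Port       int    `json:"port"`
        Action     string `json:"action"` // "allow" or "deny"
    }

    type group struct {
        TenantName  string   `json:"tenantName"`
        NetworkName string   `json:"networkName"`
        GroupName   string   `json:"groupName"`
        Policies    []string `json:"policies"`
    }

    func main() {
        intent := []interface{}{
            policy{TenantName: "default", PolicyName: "db-policy"},
            rule{PolicyName: "db-policy", RuleID: "1", Direction: "in",
                Protocol: "tcp", Port: 3306, Action: "allow"},
            group{TenantName: "default", NetworkName: "contiv-net",
                GroupName: "db-group", Policies: []string{"db-policy"}},
        }
        for _, obj := range intent {
            out, _ := json.Marshal(obj)
            // A deployment tool would POST each object to Netmaster's REST API.
            fmt.Println(string(out))
        }
    }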

© Copyright Cisco Systems 2017