Cisco ACI and Contiv

Microservices architectures are rapidly changing the way applications are developed. In this lab we will introduce Project Contiv, an open source project that provides a high-level abstraction of networking services. Contiv also provides a rich policy framework to secure these applications.

This lab will guide the student through the installation of Contiv and how to integrate Contiv with ACI. During the lab, students will become familiar with how to navigate a container environment, understand the value of Docker Swarm, and deploy a microservices application in an ACI environment. At the end of the lab, students will understand how to deploy a microservices application in an ACI environment leveraging Project Contiv.


Introduction

This lab was designed to introduce the student to the world of Microservices, Containers, and Container Orchestration systems and how they can be tied into a traditional data center network. The lab will use Docker, Contiv, and ACI to create multi-container applications. Docker Swarm will manage the containers, schedule them over a cluster of Docker Hosts, and then configure the ACI fabric with a whitelist policy to ensure security.

Docker containers and the Docker Swarm orchestration system are cutting edge infrastructure technologies that are almost purpose built to support Microservices. Microservices are the latest generation of application architecture. The concept behind Microservices is to break up a large monolithic application into small independent functional components. The benefits of this architecture are many: small functional Microservices can be scaled independently, written in different languages, and updated independently.

While Microservices offer many benefits to the application, they can create several challenges for the network. A single monolithic application may have used only a handful of IP addresses that were relatively static. Microservices running in containers may consume hundreds or thousands of IP addresses. The containers may scale up/down in response to demand. This is the challenge Contiv was created to solve. Contiv is integrated with Container Orchestration systems including Docker Swarm and Kubernetes. Contiv provides an abstraction of the network and offers built in service discovery and service routing.

Lab Setup Overview

Each POD contains two CentOS Virtual Machines (VMs) that are managed by a vCenter cluster. Each VM has two interfaces: one for the management network, and the other connecting to the ACI leaf. Below is a logical diagram of how the VMs are connected in the lab.

 Warning!

Make sure you are in root@pod01-srv1 during these steps.

Configure base Operating System on pod01-srv1

As with any Linux application installation, there are some operating system parameters that must be modified to facilitate the operation of your application. We have grouped the requirements ahead of each component installation of Contiv.

Use of Centos 7.2 for this lab

CentOS is a Linux distribution that provides a free solution functionally compatible with Red Hat Enterprise Linux.

During this lab, you will be leveraging two CentOS 7 systems in order to install the different components needed, such as Contiv, Ansible, and Docker, that allow us to create our containers.

Step 1 - Configure HOSTS file for Name Resolution

Setting a proper hostname definition plays a key role in the setup of various components of Contiv. There are three things that need to match for Contiv to operate properly. The first is that the hostname is correct in the file /etc/hosts. The second is the environment variable HOSTNAME that is derived by the system. The third is the hostname definition in /etc/hostname.

pod01-srv1
1
# This is the copy group: 1
cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain
10.0.236.17 pod01-srv1 pod01-srv1.ecatsrtpdmz.cisco.com
EOF
pod01-srv1
2
# This is the copy group: 2
cat <<EOF > /etc/hostname
pod01-srv1
EOF
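To double-check that all three values agree before moving on, you can compare them directly (a quick optional check; hostnamectl is assumed to be available on this CentOS 7 image):

cat /etc/hostname
grep pod01-srv1 /etc/hosts
echo $HOSTNAME
hostnamectl status | grep "Static hostname"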

Step 2 - Disable Network Manager Service and enable network

After we have modified the hosts name resolution, we need to restart the network services in order for the previous changes to take effect.

pod01-srv1
3
# This is the copy group: 3
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
systemctl mask NetworkManager.service
chkconfig network on
systemctl restart network

Step 3 - Disable FirewallD

In this step you will be disabling the FirewallD service in order to allow for you to install and set up Contiv:

pod01-srv1
4
# This is the copy group: 4
systemctl stop firewalld
systemctl disable firewalld
pod01-srv1
5
# This is the copy group: 5
systemctl status firewalld
    # systemctl status firewalld
    firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
    

Now flush the iptables chains to ensure that everything has been cleaned up.

pod01-srv1
6
# This is the copy group: 6
iptables -F
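You can confirm that the chains are now empty (an optional check):

iptables -L -n
iptables -S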

Step 4 - Disable SELinux Enforcement

In this step you will be disabling SELinux enforcement. SELinux can operate in one of two runtime modes, enforcing or permissive; the following command switches the running system to permissive:

pod01-srv1
7
# This is the copy group: 7
sudo setenforce 0
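Note that setenforce 0 only changes the mode of the running system; to make the change persist across reboots you would typically also edit /etc/selinux/config (shown here as a hedged sketch, not required for this lab):

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config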
pod01-srv1
8
# This is the copy group: 8
sestatus
    [root@pod01-srv1 ~]# sestatus
    SELinux status:                 disabled  
    

Downloading and Upgrading software required for Contiv

In the following steps we will be downloading some packages needed for Contiv to work properly. It is important to always check the required packages based on your version of code and system.

Step 5 - EPEL packages needed for Contiv

pod01-srv1
9
# This is the copy group: 9
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm

Step 6 - Packages needed for Contiv

pod01-srv1
10
# This is the copy group: 10
yum -y install bzip2
sudo easy_install pip
pip install netaddr
yum -y install python2-crypto.x86_64
yum -y install python2-paramiko

Step 7 - Upgrade system packages

Upgrade all the packages from the original ISO PXEBOOT install disk.

pod01-srv1
11
# This is the copy group: 11
yum makecache
yum -y upgrade

Interface Configuration

Step 8 - Enable Interface

During this step we will be enabling the interface that connects to the ACI fabric. First, check the current state of the interfaces:

pod01-srv1
12
# This is the copy group: 12
ip a

[root@pod01-srv1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:0c:01:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.236.17/24 brd 10.0.236.255 scope global dynamic eth0
       valid_lft 320sec preferred_lft 320sec
    inet6 fe80::250:56ff:fe0c:104/64 scope link
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:50:56:8c:42:0d brd ff:ff:ff:ff:ff:ff
    

Step 9 - Change the physical interface state to operational.

pod01-srv1
13
# This is the copy group: 13
ifconfig eth1 up

Step 10 - Restart network stack to activate interface.

pod01-srv1
14
# This is the copy group: 14
systemctl restart network
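If you wanted eth1 to come up automatically after every reboot, you would normally set ONBOOT=yes in its ifcfg file (a hedged sketch that assumes the default CentOS 7 file name /etc/sysconfig/network-scripts/ifcfg-eth1; it is not required for this lab):

grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth1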

Step 11 - Check status of interfaces after network restart.

Using the command ip a we can see the status of the uplink interfaces that have been defined.

pod01-srv1
15
# This is the copy group: 15
ip a
[root@pod01-srv1 ~]#ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:0c:01:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.236.17/24 brd 10.0.236.255 scope global dynamic eth0
       valid_lft 544sec preferred_lft 544sec
    inet6 fe80::250:56ff:fe0c:103/64 scope link
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:8c:2e:a5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe8c:2ea5/64 scope link
       valid_lft forever preferred_lft forever
    

 Warning!

Make sure you are in root@pod01-srv1 during these steps.

Configure LLDPd for adjacency

LLDP is a vendor-neutral link layer protocol that was standardized to provide functionality similar to the Cisco proprietary CDP protocol. It is transmitted as Layer 2 multicast frames and is used by devices to share identity and capabilities with each other. This makes it easier to identify device locations across networks.

LLDP is also available on servers and can interact with Cisco ACI, making it possible to see the adjacency location of servers relative to ports in the ACI fabric. This relationship can assist you in the Contiv configuration because ACI can use this location information to remove the requirement to manually configure the location of compute hosts.

While it might seem a little confusing at this point of the lab, consider LLDP as a way to assist your Contiv deployment in mapping ports in the ACI fabric to compute hosts. This is identical to the mechanism used to identify location in ACI with the VMware vSphere integration.

Finally, even if you don't use LLDP for the Contiv integration as the mechanism to configure the access policies in ACI, having LLDP enabled assists you as an operator of the ACI fabric. It provides an easy way to script the locations of every device attached to the ACI fabric: ACI keeps a database of adjacencies that you can request and parse to get a list of CDP or LLDP entries for all ports in the fabric. All in all, using LLDP is very useful for the network operator.
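As an illustration of scripting against that adjacency database, the APIC REST API can be queried for its LLDP neighbor objects. The sketch below assumes the lldpAdjEp class name and reuses the lab APIC address and credentials that appear later in cfg.yml; it is not a required lab step:

# authenticate to the APIC and store the session cookie
curl -sk -c cookie.txt -X POST https://10.0.226.41/api/aaaLogin.json \
     -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"cisco.123"}}}'
# list every LLDP adjacency the fabric currently knows about
curl -sk -b cookie.txt https://10.0.226.41/api/node/class/lldpAdjEp.json | python -m json.tool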

Step 1 - Add special repository for LLDPd

The LLDPd package that integrates best with ACI is available in an openSUSE repository. You will add the following repository link to YUM so that it can pull and install the package easily.

pod01-srv1
1
# This is the copy group: 1
cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/home:vbernat/RHEL_7/home:vbernat.repo

Step 2 - Install LLDPd

pod01-srv1
2
# This is the copy group: 2
yum -y install lldpd

Step 3 - Enable and start LLDPd

After LLDPd is installed on the system, use systemctl to enable the daemon so that it starts after reboots, and then start it so that it is running now.

pod01-srv1
3
# This is the copy group: 3
systemctl enable lldpd
systemctl start lldpd

Step 4 - Check LLDP Neighbor

Now that LLDP is functional, you can get the neighbor list with the command lldpcli show neighbor

pod01-srv1
4
# This is the copy group: 4
lldpcli show neighbor

The output of the command should show two active neighbor relationships: one on eth0, to the ACI FEX port used in the lab for the management network of the hosts, and one on eth1, to the ACI leaf used for the data connection.

Note: You may need to re-run the command above while waiting for the LLDP packets to arrive.

[root@pod01-srv1 ~]# lldpcli show neighbor
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface:    eth0, via: LLDP, RID: 2, Time: 0 day, 00:00:19
  Chassis:
    ChassisID:    mac 54:75:d0:21:0c:20
    SysName:      L3
    SysDescr:     topology/pod-1/node-203
    TTL:          120
    MgmtIP:       10.0.226.33
    Capability:   Bridge, on
    Capability:   Router, off
  Port:
    PortID:       local Eth111/1/31
    PortDescr:    topology/pod-1/paths-203/extpaths-111/pathep-[eth1/31]
  Unknown TLVs:
    TLV:          OUI: 00,01,42, SubType: 1, Len: 1 01
    TLV:          OUI: 00,01,42, SubType: 202, Len: 1 01
-------------------------------------------------------------------------------
Interface:    eth1, via: LLDP, RID: 1, Time: 0 day, 00:00:34
  Chassis:
    ChassisID:    mac f8:c2:88:87:71:ad
    SysName:      L1
    SysDescr:     topology/pod-1/node-201
    TTL:          120
    MgmtIP:       10.0.226.31
    Capability:   Bridge, on
    Capability:   Router, on
  Port:
    PortID:       local Eth1/31
    PortDescr:    topology/pod-1/paths-201/pathep-[eth1/31]
  Unknown TLVs:
    TLV:          OUI: 00,01,42, SubType: 1, Len: 1 00
    TLV:          OUI: 00,01,42, SubType: 201, Len: 1 01
    TLV:          OUI: 00,01,42, SubType: 216, Len: 2 00,00
    TLV:          OUI: 00,01,42, SubType: 215, Len: 2 4C,31
    TLV:          OUI: 00,01,42, SubType: 212, Len: 11 53,41,4C,31,38,33,32,59,36,54,4C
    TLV:          OUI: 00,01,42, SubType: 214, Len: 11 4E,39,4B,2D,43,39,33,39,36,50,58
    TLV:          OUI: 00,01,42, SubType: 210, Len: 14 6E,39,30,30,30,2D,31,32,2E,31,28,31,68,29
    TLV:          OUI: 00,01,42, SubType: 206, Len: 11 41,43,49,20,46,61,62,72,69,63,31
    TLV:          OUI: 00,01,42, SubType: 202, Len: 1 01
    TLV:          OUI: 00,01,42, SubType: 205, Len: 2 00,01
    TLV:          OUI: 00,01,42, SubType: 211, Len: 2 0F,7F
    TLV:          OUI: 00,01,42, SubType: 203, Len: 4 00,00,00,C9
    TLV:          OUI: 00,01,42, SubType: 208, Len: 4 0A,09,F0,38
    TLV:          OUI: 00,01,42, SubType: 207, Len: 30,2D,61,63,36,39,2D,31,31,
-------------------------------------------------------------------------------
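If the full TLV dump above is more than you need, lldpcli can usually narrow the output (a hedged example; the exact options supported can vary with the lldpd version):

lldpcli show neighbors summary
lldpcli show neighbors ports eth1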
    

Now you have completed the task of preparing pod01-srv1 by configuring the hostname, downloading the required packages, and enabling both interfaces with LLDP. It is now time to prepare pod01-srv2 with the same requirements.


 Warning!

Make sure you are in root@pod01-srv2 during these steps.

Configure base Operating System on pod01-srv2

Step 1 - Configure HOSTS file for Name Resolution

Setting a proper hostname definition plays a key role in the setup of various components of Contiv. There are three things that need to match for Contiv to operate properly. The first is that the hostname is correct in the file /etc/hosts. The second is the environment variable HOSTNAME that is derived by the system. The third is the hostname definition in /etc/hostname.

pod01-srv2
1
# This is the copy group: 1
cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain
10.0.236.17 netmaster
10.0.236.49 pod01-srv2 pod01-srv2.ecatsrtpdmz.cisco.com
EOF
pod01-srv2
2
# This is the copy group: 2
cat <<EOF > /etc/hostname
pod01-srv2
EOF

Step 2 - Disable Network Manager Service and Enable Network

After we have modified the hosts name resolution, we need to restart the network services in order for the previous changes to take effect.

pod01-srv2
3
# This is the copy group: 3
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
systemctl mask NetworkManager.service
chkconfig network on
systemctl restart network

Step 3 - Disable FirewallD

In this step you will be disabling the FirewallD service in order to install Contiv.

pod01-srv2
4
# This is the copy group: 4
systemctl stop firewalld
systemctl disable firewalld
pod01-srv2
5
# This is the copy group: 5
systemctl status firewalld
    # systemctl status firewalld
    firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
    

Now flush the iptables chains to ensure that everything has been cleaned up.

pod01-srv2
6
# This is the copy group: 6
iptables -F

Step 4 - Disable SELinux Enforcement

In this step you will be disabling SELinux enforcement. SELinux can operate in one of two runtime modes, enforcing or permissive; the following command switches the running system to permissive:

pod01-srv2
7
# This is the copy group: 7
sudo setenforce 0
pod01-srv2
8
# This is the copy group: 8
sestatus
    [root@pod01-srv2 ~]# sestatus
    SELinux status:                 disabled  
    

Downloading and Upgrading software required for Contiv

In the following steps we will be downloading some packages needed for Contiv to work properly. It is important to always check the required packages based on your version of code and system.

Step 5 - EPEL packages needed for Contiv


pod01-srv2
9
# This is the copy group: 9
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm

Step 6 - Packages needed for Contiv

pod01-srv2
10
# This is the copy group: 10
yum -y install bzip2
sudo easy_install pip
pip install netaddr
yum -y install python2-crypto.x86_64
yum -y install python2-paramiko

Step 7 - Upgrade system packages

Upgrade all the packages from the original ISO PXEBOOT install disk.

pod01-srv2
11
# This is the copy group: 11
yum makecache
yum -y upgrade

Interface Configuration

Step 8 - Enable Interface

During this step we will enable the interface that connects to the ACI fabric. First, check the current state of the interfaces:

pod01-srv2
12
# This is the copy group: 12
ip a

[root@pod01-srv2 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:0c:01:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.236.49/24 brd 10.0.236.255 scope global dynamic eth0
       valid_lft 320sec preferred_lft 320sec
    inet6 fe80::250:56ff:fe0c:104/64 scope link
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:50:56:8c:42:0d brd ff:ff:ff:ff:ff:ff
    

Step 9 - Change the physical interface state to operational.

pod01-srv2
13
# This is the copy group: 13
ifconfig eth1 up

Step 10 - Restart network stack to activate interface.

pod01-srv2
14
# This is the copy group: 14
systemctl restart network

Step 11 - Check status of interfaces after network restart.

Using the command ip a we can see the status of the uplink interfaces that have been defined.

pod01-srv2
15
# This is the copy group: 15
ip a
[root@pod01-srv2 ~]#ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:0c:01:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.236.17/24 brd 10.0.236.255 scope global dynamic eth0
       valid_lft 544sec preferred_lft 544sec
    inet6 fe80::250:56ff:fe0c:103/64 scope link
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:8c:2e:a5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe8c:2ea5/64 scope link
       valid_lft forever preferred_lft forever
    


 Warning!

Make sure you are in root@pod01-srv2 during these steps.

Configure LLDPd for adjacency

LLDP is a vendor-neutral link layer protocol that was standardized to provide functionality similar to the Cisco proprietary CDP protocol. It is transmitted as Layer 2 multicast frames and is used by devices to share identity and capabilities with each other. This makes it easier to identify device locations across networks.

LLDP is also available on servers and can interact with Cisco ACI, making it possible to see the adjacency location of servers relative to ports in the ACI fabric. This relationship can assist you in the Contiv configuration because ACI can use this location information to remove the requirement to manually configure the location of compute hosts.

While it might seem a little confusing at this point of the lab, consider LLDP as a way to assist your Contiv deployment in mapping ports in the ACI fabric to compute hosts. This is identical to the mechanism used to identify location in ACI with the VMware vSphere integration.

Finally, even if you don't use LLDP for the Contiv integration to configure the access policies in ACI, having LLDP enabled assists you as an operator of the ACI fabric. It provides an easy way to script the locations of every device attached to the ACI fabric: ACI keeps a database of adjacencies that you can request and parse to get a list of CDP or LLDP entries for all ports in the fabric. All in all, using LLDP is very useful for the network operator.

Step 1 - Add special repository for LLDPd

The LLDPd package that integrates best with ACI is available in an openSUSE repository. You will add the following repository link to YUM so that it can pull and install the package easily.

pod01-srv2
1
# This is the copy group: 1
cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/home:vbernat/RHEL_7/home:vbernat.repo

Step 2 - Install LLDPd

pod01-srv2
2
# This is the copy group: 2
yum -y install lldpd

Step 3 - Enable and Start LLDPd

After LLDPd is installed on the system, use systemctl to enable the daemon so that it starts after reboots, and then start it so that it is running now.

pod01-srv2
3
# This is the copy group: 3
systemctl enable lldpd
systemctl start lldpd

Step 4 - Check LLDP Neighbor

Now that LLDP is functional, you can get the neighbor list with the command lldpcli show neighbor.

pod01-srv2
4
# This is the copy group: 4
lldpcli show neighbor

The output of the command should show two active neighbor relationships: one on eth0, to the ACI FEX port used for the management network, and one on eth1, to the ACI leaf used for the data connection.

Note: You might need to re-run the command above while waiting for the LLDP packets to arrive.

pod01-srv2

# lldpcli show neighbor


    [root@pod01-srv2 ~]# lldpcli show neighbor
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface:    eth0, via: LLDP, RID: 2, Time: 0 day, 00:01:12
  Chassis:
    ChassisID:    mac 54:75:d0:21:0c:21
    SysName:      L3
    SysDescr:     topology/pod-1/node-203
    TTL:          120
    MgmtIP:       10.0.226.33
    Capability:   Bridge, on
    Capability:   Router, off
  Port:
    PortID:       local Eth111/1/32
    PortDescr:    topology/pod-1/paths-203/extpaths-111/pathep-[eth1/32]
  Unknown TLVs:
    TLV:          OUI: 00,01,42, SubType: 1, Len: 1 01
    TLV:          OUI: 00,01,42, SubType: 202, Len: 1 01
-------------------------------------------------------------------------------
Interface:    eth1, via: LLDP, RID: 1, Time: 0 day, 00:01:16
  Chassis:
    ChassisID:    mac 64:12:25:75:0e:3a
    SysName:      L2
    SysDescr:     topology/pod-1/node-202
    TTL:          120
    MgmtIP:       10.0.226.32
    Capability:   Bridge, on
    Capability:   Router, on
  Port:
    PortID:       local Eth1/32
    PortDescr:    topology/pod-1/paths-202/pathep-[eth1/32]
  Unknown TLVs:
    TLV:          OUI: 00,01,42, SubType: 1, Len: 1 00
    TLV:          OUI: 00,01,42, SubType: 201, Len: 1 01
    TLV:          OUI: 00,01,42, SubType: 216, Len: 2 00,00
    TLV:          OUI: 00,01,42, SubType: 215, Len: 2 4C,32
    TLV:          OUI: 00,01,42, SubType: 212, Len: 11 53,41,4C,31,38,31,34,50,54,42,55
    TLV:          OUI: 00,01,42, SubType: 214, Len: 11 4E,39,4B,2D,43,39,33,39,36,50,58
    TLV:          OUI: 00,01,42, SubType: 210, Len: 14 6E,39,30,30,30,2D,31,32,2E,31,28,31,68,29
    TLV:          OUI: 00,01,42, SubType: 206, Len: 11 41,43,49,20,46,61,62,72,69,63,31
    TLV:          OUI: 00,01,42, SubType: 202, Len: 1 01
    TLV:          OUI: 00,01,42, SubType: 211, Len: 2 0F,7F
    TLV:          OUI: 00,01,42, SubType: 205, Len: 2 00,01
    TLV:          OUI: 00,01,42, SubType: 203, Len: 4 00,00,00,CA
    TLV:          OUI: 00,01,42, SubType: 208, Len: 4 0A,09,F0,3A
    TLV:          OUI: 00,01,42, SubType: 207, Len: 123 01,0A,09,00,01,33,66,39,64,30,64
-------------------------------------------------------------------------------

   

Container Intro

What are Containers

Containers are a form of operating system virtualization. Unlike virtual machines, where a hypervisor abstracts the underlying hardware and presents virtual hardware to the guest operating systems, containers share a single operating system and kernel. Containers offer many advantages over virtual machines, including instantaneous startup times (the same as a process), reduced overhead because there is no need to run duplicate operating systems (so significantly more containers than VMs can run on a given host), and reduced management, patching, and updating because there is only one operating system.

Image Source: https://hub.docker.com/r/tplcom/docker-presentation/

Containers are far from new. The technology has been around in various forms in the Unix/Linux world since the early 1980s. The containers in use today are possible because of two relatively new features built into the Linux kernel: namespaces, which isolate what a process can see, and control groups (cgroups), which limit the resources a process can use.

These two features allow a program to run on a Linux host in such a way that it is isolated from all the other running programs. Linux exposes the kernel container features through an interface called LXC.

Generally, containers are associated with the Linux operating system; however, Windows Server 2016 now offers similar functionality as well as Docker support.

What is Docker

Docker is an open-source project that automates the deployment of applications inside containers. The idea behind Docker is to "build, ship and run anywhere", with a separation of concerns: developers can build an application anywhere, ship it via Docker Hub, and run it on any infrastructure that has the Docker Engine. The concept is illustrated in the graphic below:

Image source: https://www.docker.com/sites/default/files/home-1-solutions-2.jpg

Docker has been wildly successful and as a result enjoys wide support on both Linux and Windows platforms, in container orchestration solutions like Kubernetes, and in most public clouds, including AWS, Azure, and Google.

Docker Architecture

The Docker architecture is composed of three main components:

Image source: https://docs.docker.com/introduction/understanding-docker/

Docker Images

Docker images use the Linux union file system (UnionFS), which allows individual layers to be combined into a single image as depicted below. The individual layers are combined to form a single file system. Changes can be made to individual layers without affecting the other layers. The layers themselves are read-only, with a writeable layer added on top when a container is created.

Image source: http://docs.master.dockerproject.org/terms/layer/

A text file called the Dockerfile describes the layers and acts as a template to build a Docker image. The example below is a Dockerfile used to build an ACI toolkit image. This Dockerfile would create an image with 5 layers.

1) Ubuntu base image

2) Any updates found by running the apt-get update command

3) The apt-get install would create a layer where Git, Python and Python-pip are installed

4) The clone of the actual ACI toolkit

5) Any changes made by running the setup.py install command.

Image source: https://github.com/datacenter/acitoolkit/blob/master/Dockerfile
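Put together, a Dockerfile matching that description would look roughly like the sketch below, written here as a shell heredoc in the style used elsewhere in this lab (reconstructed from the five layers listed above; the real Dockerfile at the link may differ in detail, and building it is not part of this lab):

cat <<'EOF' > Dockerfile
# Layer 1: Ubuntu base image
FROM ubuntu
# Layer 2: refresh the package index
RUN apt-get update
# Layer 3: install Git, Python, and python-pip
RUN apt-get install -y git python python-pip
# Layer 4: clone the ACI toolkit
RUN git clone https://github.com/datacenter/acitoolkit.git
# Layer 5: install the toolkit
RUN cd acitoolkit && python setup.py install
EOF
docker build -t acitoolkit .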

A Docker container is simply a running instance of a Docker image. Multiple unique containers can be started from the same Docker image.

Docker Swarm

Swarm is Docker's integrated clustering solution. Before Swarm, each individual Docker host (that is, a physical or virtual machine running the Docker Engine) operated as an autonomous system and was managed individually. Swarm provides the ability to cluster many hosts together and manage them through a single CLI on the Swarm manager.

Swarm Architecture

The Swarm architecture is depicted in the diagram below. There are two types of nodes, Manager and Worker.

Image Source: https://blog.docker.com/2016/07/docker-built-in-orchestration-ready-for-production-docker-1-12-goes-ga/

As depicted in the following diagram, the Worker nodes simply perform tasks given to them by the dispatcher. In contrast, the Manager nodes provide the API to access the Swarm and handle orchestration, IP address allocation, task dispatching, and scheduling. Today, Swarm offers three scheduling strategies: spread, binpack, and random.

Image Source: https://blog.docker.com/2016/07/docker-built-in-orchestration-ready-for-production-docker-1-12-goes-ga/

The final component in the architecture is a Quorum layer. The managers run an internal state store using the RAFT consensus protocol. One manager is elected as the leader and handles the functions listed in the diagram. All of the non-leaders are available as hot standbys, creating a fault-tolerant architecture.

What is Application Centric Infrastructure (ACI)

Cisco ACI is a new data center architecture designed to address the requirements of today’s traditional networks, as well as to meet emerging demands that new computing trends and business factors are placing on the network.


Application-Centric Policy Model Using Group-Based Policy


To provide agility and simplicity in data center infrastructure, a new language describing the abstracted intent of connectivity is required so that the end user doesn’t need significant networking knowledge to describe the requirements for connectivity. Additionally, this intent should be decoupled from network forwarding semantics so that the end user can describe the policy in such a way that a change in policy need not affect forwarding behavior, and the converse.


Because this abstracted, decoupled policy model did not exist prior to Cisco ACI, Cisco created such a model. It is called group-based policy (GBP) and is a working project in OpenStack and OpenDaylight.


This approach offers a number of advantages, including:



Application-Centric Policy Model



Cisco Application Policy Infrastructure Controller


Cisco APIC serves as the single point of automation and fabric element management in both physical and virtual environments. As a result, operators can build fully automated and scalable multitenant networks.


Cisco APIC is a unified point for policy-based configuration expressed through group-based policy



Cisco APIC attributes and features include the following:


Cisco APIC communicates with the Cisco ACI fabric to distribute policies to the points of attachment and provide several critical administrative functions to the fabric. Cisco APIC is not directly involved in data-plane forwarding, so a complete failure or disconnection of all Cisco APIC elements in a cluster will not result in any loss of forwarding capabilities, increasing overall system reliability.


In general, policies are distributed to nodes as needed on endpoint attachment or by an administrative static binding, allowing greater scalability across the entire fabric.


Cisco APIC also provides full native support for multitenancy so that multiple interested groups (internal or external to the organization) can share the Cisco ACI fabric securely, yet still be allowed access to shared resources if required. Cisco APIC also has full, detailed support for role-based access control (RBAC) down to each managed object in the system, so that privileges (read, write, or both) can be granted per role across the entire fabric.


Cisco APIC also has completely open APIs so that users can use Representational State Transfer (REST)-based calls (through XML or JavaScript Object Notation [JSON]) to provision, manage, monitor, or troubleshoot the system. Additionally, Cisco APIC includes a CLI and a GUI as central points of management for the entire Cisco ACI fabric.




Cisco ACI Fabric




The Cisco ACI fabric is built around a set of hardware to provide the most scalable, extensible, simple, flexible, and efficient network in the industry. The Cisco ACI fabric is designed to address both today's and tomorrow's requirements.

Contiv Intro

What is Contiv?

Contiv is an open source project that delivers policy-based networking for containers. The idea behind Contiv is to make it easier for end users to deploy microservices in their environments.

Contiv provides a higher level of networking abstraction for microservices. Contiv secures your application using a rich policy framework. It provides built-in service discovery and service routing for scale out services.

With the advent of containers and microservices architectures, there is a need for automated or programmable network infrastructure that specifically caters to the dynamic workloads that can be formed using containers. With container and microservices technologies, speed and scale become critical. Because of these requirements, automation becomes a critical component of network provisioning for future workloads.

Also, with bare-metal hosts, VMs, and containers, there are different layers of virtualization abstraction, which complicates packet encapsulation. With public cloud technologies, tenant-level isolation is also necessary for container workloads.

Contiv provides an IP address per container and eliminates the need for host-based port NAT. It works with different kinds of networks such as pure layer 3 networks, overlay networks, and layer 2 networks, and provides the same virtual network view to containers regardless of the underlying technology. Contiv works with all major schedulers, such as Kubernetes and Docker Swarm; these schedulers provide compute resources to your containers and Contiv provides networking for them. Contiv supports both CNM (the Docker networking architecture) and CNI (the CoreOS/Kubernetes networking architecture). Contiv has L2, L3 (BGP), overlay (VXLAN), and ACI modes, and it has built-in east-west service load balancing. Contiv also provides isolation between control and data traffic.

Contiv is made of two major components: Netmaster and Netplugin.

The following diagram represents the overall architecture of Contiv where it shows how Netmaster and Netplugin are being leveraged to provide the overall Contiv solution.

Netmaster and Netplugin

Netmaster:

This single binary performs multiple tasks for Contiv. It is a REST API server that can handle multiple requests simultaneously. It learns routes and distributes them to Netplugin nodes. It acts as the resource manager that allocates IP addresses, VLAN IDs, and VXLAN IDs for networks. It uses a distributed state store such as etcd or Consul to save all the desired runtime state for Contiv objects; because of this, Netmaster is completely stateless, scalable, and restartable. Netmaster has a built-in heartbeat mechanism through which it can talk to peer Netmasters, which avoids the risk of a single point of failure. Netmaster can also work with an external integration manager (policy engine) such as ACI.

Netplugin:

Each host agent (Netplugin) implements the CNI or CNM networking model adopted by popular container orchestration engines such as Kubernetes and Docker Swarm. It communicates with Netmaster over a REST interface. In addition, Contiv uses JSON-RPC to distribute endpoints from Netplugin to Netmaster. Netplugin handles up/down events from Contiv networks and groups, and coordinates with other entities for tasks such as fetching policies, creating container interfaces, requesting IP allocation, and programming host forwarding.

Netplugin uses Contiv's custom OpenFlow-based pipeline on the Linux host. It communicates with Open vSwitch (OVS) over the OVS driver; Contiv currently uses OVS for its data path. Contiv's plugin architecture makes it easy to plug in other data paths (for example, VPP or BPF).

Contiv's Custom Open-Flow based Pipeline:

Contiv has a custom OpenFlow-based packet pipeline. It has configurable modes such as overlay networking (VXLAN), native L3 mode (BGP integration), and native L2 mode (for classic topologies). It was built from scratch and is an integral part of Contiv. A packet from a container arrives at the input table, and then, using multiple interlinked OpenFlow tables, Contiv determines the necessary tenant, network, endpoint group, and policy information to achieve a policy-driven, multitenant packet pipeline.

Contiv currently uses OVS (Open vSwitch) as the data plane (VPP integration as a data plane is in development). The Ofnet library supports multiple software-defined networking paradigms such as VLAN bridge, VXLAN bridge, and vrouter.

Contiv Modes

Contiv can provide native connectivity (traditional L2 and L3 networks) as well as overlay connectivity (the public cloud case; AWS is currently supported). In traditional L2 connectivity, each packet coming out of a container is tagged with a certain VLAN so that container workloads can fit into a traditional L2 network without any additional settings. For L3 connectivity, Contiv uses BGP to distribute routes over the network.
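For reference, these connectivity modes surface later through Contiv's netctl CLI; the installer prints the relevant commands at the end of the installation, repeated here only as a preview (the VLAN range is a placeholder, not a lab value):

netctl global set --fwd-mode routing                              # forwarding mode (default is bridge)
netctl global set --fabric-mode aci --vlan-range <start>-<end>    # ACI mode
netctl net create -t default --subnet=20.1.1.0/24 default-net     # create a default network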

Contiv + ACI

With the success of Cisco ACI in the market and the need for microservices, integration between ACI and Contiv was inevitable. ACI addresses use cases such as infrastructure automation, application-aware infrastructure, scale-out models, and dynamic applications, which are key pillars of modern microservices architectures.

Contiv working with ACI demonstrates how this integration can be achieved in a Docker containerized environment to create objects and associations that enable containers to communicate according to policy intent.

Contiv and ACI integration is done using the aci-gw Docker container. It uses the APIC Python SDK and allows communication between Contiv and the APIC.

Below is the diagram that represents a typical workflow in ACI + Contiv Integration:

Step 1 : You configure tenant and dependent resources in APIC

Step 2 and 4: Contiv Netmaster fetches this information when Contiv is running in ACI mode.

Step 3: DevOps person specifies policies for their application workloads to be used by developers. This is Application intent.

Step 5: Developer launches apps which are managed by orchestration engines like Docker swarm or Kubernetes.

Step 6: Contiv Netplugin makes sure that the policy is implemented correctly. It delegates all policy-related context to the APIC so that packet forwarding can be taken care of at the ACI level.

 Warning!

Make sure you are in the installer-host during these steps.

installer-host Credentials


Contiv Installation

Contiv Installer

The Contiv Swarm installer is launched from a host external to the cluster. It uses Ansible to automate the deployment of Docker, Swarm, and Contiv. Ansible uses SSH connections to connect from the installer host to the cluster hosts. In this lab we will set up SSH key-based authentication between the installer host and the cluster hosts to allow Ansible to reach the cluster hosts from the installer host.

The Contiv Swarm installer uses a Docker container to run the Ansible deployment, which avoids Ansible version dependencies on the installer host. The only prerequisites on the installer host are that Docker is installed and that the installer is run as a user who is part of the docker group.

All the nodes need to be accessible to the installer host. You can have one or many master nodes and any number of worker nodes. The Installer installs the following components:

The following diagram represents the Contiv installer showing the different components and their interaction.


Table of the different versions of code to leverage during this lab.

Component Version
Docker engine 1.12.6
Docker Swarm 1.2.5
etcd KV store 2.3.7
Contiv v1.0.0-alpha-01-28-2017.10-23-11.UTC
ACI-GW container contiv/aci-gw:02-02-2017.2.1_1h

In a production environment, you should not disable the firewall. Instead, you can open the following ports using iptables (see the hedged sketch after the port table below), or use a configuration management tool such as Ansible to do this at the time the nodes are provisioned for Contiv.

Please refer to this for more details: Contiv Ansible example

Software        Port Number   Protocol   Notes
Contiv          9001          TCP        Communication between OVS and Contiv
Contiv          9002          TCP        Communication between OVS and Contiv
Contiv          9003          TCP        Communication between OVS and Contiv
Contiv          9999          TCP        Netmaster Port
BGP Port        179           TCP        Contiv in L3 mode will require this
VxLAN           4789          UDP        Contiv in VXLAN network will use this port
Docker API      2385          TCP        Docker Related
Docker Swarm    2375          TCP        Docker Swarm Related
Consul          8300          TCP/UDP    Consul KV Store related
Consul          8301          TCP/UDP    Consul KV Store related
Consul          8400          TCP/UDP    Consul KV Store related
Consul          8500          TCP/UDP    Consul KV Store related
Etcd            2379          TCP        Etcd KV store related
Etcd            2380          TCP        Etcd KV store related
Etcd            4001          TCP        Etcd KV store related
Etcd            7001          TCP        Etcd KV store related
Auth_proxy      10000         TCP        Contiv authorization proxy
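As a hedged sketch only (in this lab the firewall stays disabled), a few of the Contiv-related ports from the table above could be opened like this:

iptables -I INPUT -p tcp --dport 9001:9003 -j ACCEPT   # OVS <-> Contiv
iptables -I INPUT -p tcp --dport 9999 -j ACCEPT        # Netmaster
iptables -I INPUT -p udp --dport 4789 -j ACCEPT        # VXLAN
iptables -I INPUT -p tcp --dport 10000 -j ACCEPT       # auth_proxy
service iptables save   # persisting rules assumes the iptables-services package is installed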


Step 1 - Public Key installation

installer-host
1
# This is the copy group: 1
mkdir .ssh && chmod 700 .ssh
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

[pod1u1@installer-host ~]# mkdir .ssh && chmod 700 .ssh
[pod1u1@installer-host ~]#
[pod1u1@installer-host ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa  -N ""
Generating public/private rsa key pair.



Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
d4:20:4c:4f:75:be:b9:01:05:92:60:00:87:99:7f:fd root@pod32-srv2.ecatsrtpdmz.cisco.com
The key's randomart image is:
+--[ RSA 2048]----+
|  .=o+=.+oo.o    |
|  +. ..+.+ +     |
|   .   .o o .    |
|    . ...  . o   |
|     .  S.  +    |
|          E  o   |
|            .    |
|                 |
|                 |
+-----------------+
     
installer-host
2
# This is the copy group: 2
sshpass -p cisco.123 ssh-copy-id -i ~/.ssh/id_rsa.pub root@pod01-srv1.ecatsrtpdmz.cisco.com -o StrictHostKeyChecking=no
installer-host
3
# This is the copy group: 3
sshpass -p cisco.123 ssh-copy-id -i ~/.ssh/id_rsa.pub root@pod01-srv2.ecatsrtpdmz.cisco.com -o StrictHostKeyChecking=no
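Before running the installer, you can confirm that key-based login now works to both cluster hosts (a quick optional check):

ssh -o BatchMode=yes root@pod01-srv1.ecatsrtpdmz.cisco.com hostname
ssh -o BatchMode=yes root@pod01-srv2.ecatsrtpdmz.cisco.com hostname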

Step 2 - Download the Contiv installer script

The Contiv installer is located at https://github.com/contiv/install/releases

NOTE: We have downloaded the installer to our local server to speed up the download process

installer-host
4
# This is the copy group: 4
cd ~
wget http://10.0.226.7/nfs/contiv/contiv-full-1.1.7.tgz
[pod1u1@installer-host ~]#wget http://10.0.226.7/nfs/contiv/contiv-full-1.1.7.tgz
--2017-03-14 11:42:58--  http://http://10.0.226.7/nfs/contiv/contiv-full-1.1.7.tgz
Connecting to 10.0.226.7:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3250255 (3.1M) [application/x-gzip]
Saving to: ‘contiv-full-1.1.7.tgz’

100%[============================================================================================================================================>] 3,250,255   --.-K/s   in 0.01s

2017-03-14 11:42:58 (258 MB/s) - ‘contiv-full-1.1.7.tgz’ saved [3250255/3250255]
2017-03-14 11:42:58 (9.38 MB/s) - contiv-full-1.1.7.tgz saved [3265759/3265759]

Step 3 - Untar Contiv installation file

installer-host
5
# This is the copy group: 5
tar -zxvf contiv-full-1.1.7.tgz
[pod1u1@installer-host ~]#  tar -zxvf contiv-full-1.1.7.tgz
./
contiv-1.1.7/
contiv-1.1.7/install/
contiv-1.1.7/install/ansible/
contiv-1.1.7/install/ansible/aci_cfg.yml
contiv-1.1.7/install/ansible/cfg.yml
contiv-1.1.7/install/ansible/env.json
contiv-1.1.7/install/ansible/install.sh
contiv-1.1.7/install/ansible/install_defaults.sh
contiv-1.1.7/install/ansible/install_swarm.sh
contiv-1.1.7/install/ansible/uninstall.sh
contiv-1.1.7/install/ansible/uninstall_swarm.sh
contiv-1.1.7/install/genInventoryFile.py
contiv-1.1.7/install/k8s/
contiv-1.1.7/install/k8s/k8s1.4/
contiv-1.1.7/install/k8s/k8s1.4/aci_gw.yaml
contiv-1.1.7/install/k8s/k8s1.4/cleanup.yaml
contiv-1.1.7/install/k8s/k8s1.4/contiv.yaml
contiv-1.1.7/install/k8s/k8s1.4/etcd.yaml
contiv-1.1.7/install/k8s/k8s1.6/
contiv-1.1.7/install/k8s/k8s1.6/aci_gw.yaml
contiv-1.1.7/install/k8s/k8s1.6/cleanup.yaml
contiv-1.1.7/install/k8s/k8s1.6/contiv.yaml
contiv-1.1.7/install/k8s/k8s1.6/etcd.yaml
contiv-1.1.7/install/k8s/install.sh
contiv-1.1.7/install/k8s/uninstall.sh
contiv-1.1.7/install/generate-certificate.sh
contiv-1.1.7/README.md
contiv-1.1.7/netctl


Step 4 - Create the Configuration File (cfg.yml)

During this step we will be creating the configuration file (cfg.yml). This file contains information about the nodes, such as the hostnames and the control and data interfaces, plus the APIC information. This information is necessary for Contiv to communicate with the ACI controller, the APIC.

installer-host
6
# This is the copy group: 6
cat << EOF > ~/cfg.yml
CONNECTION_INFO:
  pod01-srv1.ecatsrtpdmz.cisco.com:
    role: master
    control: eth0
    data: eth1
  pod01-srv2.ecatsrtpdmz.cisco.com:
    control: eth0
    data: eth1
APIC_URL: "https://10.0.226.41:443"
APIC_USERNAME: "admin"
APIC_PASSWORD: "cisco.123"
APIC_PHYS_DOMAIN: "Contiv-PD"
APIC_EPG_BRIDGE_DOMAIN: "not_specified"
APIC_CONTRACTS_UNRESTRICTED_MODE: "no"
APIC_LEAF_NODES:
  - topology/pod-1/node-201
  - topology/pod-1/node-202
EOF
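You can sanity-check the generated file before launching the installer:

cat ~/cfg.yml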

Step 5 - Time to install Contiv

Contiv Images

Contiv Images are made of two containers contiv_network and aci_gw:

Contiv_network

The repository for contiv_network is located:

https://github.com/contiv/netplugin/releases

ACI-GW

The repository for aci-gw is located:

https://hub.docker.com/r/contiv/aci-gw/tags/

Because the version of ACI that we are using is 3.0, we will be leveraging aci-gw:3.0.1k for our ACI GW:

installer-host
7
# This is the copy group: 7
cd ~/contiv-1.1.7
installer-host
8
# This is the copy group: 8
./install/ansible/install_swarm.sh -f ~/cfg.yml -e ~/.ssh/id_rsa -u root -i -m aci
 Wait
The installation process will take some time to complete. The installation process will display the completed tasks.
[pod1u1@installer-host ~]#   ./install/ansible/install_swarm.sh -f ~/cfg.yml -e ~/.ssh/id_rsa -u root -i -m aci
TASK [auth_proxy : create cert folder for proxy] *******************************
changed: [node1]

TASK [auth_proxy : copy shell script for starting auth-proxy] ******************
changed: [node1]

TASK [auth_proxy : copy cert for starting auth-proxy] **************************
changed: [node1]

TASK [auth_proxy : copy key for starting auth-proxy] ***************************
changed: [node1]

TASK [auth_proxy : copy systemd units for auth-proxy] **************************
changed: [node1]

TASK [auth_proxy : initialize auth-proxy] **************************************
changed: [node1]

TASK [auth_proxy : start auth-proxy container] *********************************
changed: [node1]

PLAY RECAP *********************************************************************
node1                      : ok=10   changed=7    unreachable=0    failed=0   

After the installation process is completed you should see the following message:

Installation is complete
=========================================================

Please export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375 in your shell before proceeding
Contiv UI is available at https://pod01-srv1.ecatsrtpdmz.cisco.com:10000

Please use the first run wizard or configure the setup as follows:
 Configure forwarding mode (optional, default is bridge).
 netctl global set --fwd-mode routing
 Configure ACI mode (optional)
 netctl global set --fabric-mode aci --vlan-range -
 Create a default network
 netctl net create -t default --subnet= default-net
 For example, netctl net create -t default --subnet=20.1.1.0/24 default-net

=========================================================
    

Step 6 - Verify Docker Swarm

We need to export the DOCKER_HOST environment variable so the Docker client knows where the Docker Remote API is located.

installer-host
9
# This is the copy group: 9
export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
installer-host
10
# This is the copy group: 10
docker info
installer-host

[pod01u1@installer-host ~]#docker info
Containers: 8
 Running: 8
 Paused: 0
 Stopped: 0
Images: 10
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
       
        
 pod01-srv1.ecatsrtpdmz.cisco.com: 10.0.236.17:2385
  └ ID: VL2K:KYDV:LJVR:UEIK:TY6V:G4FL:RALX:77IS:ZZIJ:OIUD:3ZJB:VNJ6
  └ Status: Healthy
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.888 GiB
  └ Labels: kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2017-01-12T15:58:47Z
  └ ServerVersion: 1.11.1
 pod01-srv2.ecatsrtpdmz.cisco.com: 10.0.236.49:2385
  └ ID: 7S3J:W5JA:N3XK:PEUN:IKFJ:ELLK:2RDW:6HON:L2HB:CPEF:5ABC:M6JE
  └ Status: Healthy
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.888 GiB
  └ Labels: kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2017-01-12T15:58:46Z
  └ ServerVersion: 1.11.1
Plugins:
 Volume:
 Network:
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 7.775 GiB
Name: c6888c3f2b61
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support

installer-host
11
# This is the copy group: 11
unset DOCKER_HOST
exit

Reference

How to use the Contiv Installer

To get installer please refer https://github.com/contiv/install/releases

Download the install bundle, save it and extract it on the Install host.

Installer Usage:

./install/ansible/install_swarm.sh -f <host configuration file> -e <ssh key> -u <ssh user> OPTIONS

Options:

-f  string                 Configuration file listing the hostnames with the control and data interfaces and optionally ACI parameters
-e  string                  SSH key to connect to the hosts
-u  string                  SSH User
-i                          Install the Swarm scheduler stack

Options:
-m  string                  Network Mode for the Contiv installation (“standalone” or “aci”). Default mode is “standalone” and should be used for non ACI-based setups
-d  string                 Forwarding mode (“routing” or “bridge”). Default mode is “bridge”

Advanced Options:
-v  string                 ACI Image (default is contiv/aci-gw:latest). Use this to specify a specific version of the ACI Image.
-n  string                 DNS name/IP address of the host to be used as the net master  service VIP.

Additional parameters can also be updated in install/ansible/env.json.

Examples:

1. Install Contiv with Docker Swarm on hosts specified by cfg.yml.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i

2. Install Contiv on hosts specified by cfg.yml. Docker should be pre-installed on the hosts.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin

3. Install Contiv with Docker Swarm on hosts specified by cfg.yml in ACI mode.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci

4. Install Contiv with Docker Swarm on hosts specified by cfg.yml in ACI mode, using routing as the forwarding mode.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci -d routing

Uninstaller Usage:

./install/ansible/uninstall_swarm.sh -f <host configuration file> -e <ssh key> -u <ssh user> OPTIONS

Options:

-f  string            Configuration file listing the hostnames with the control and data interfaces and optionally ACI parameters
-e  string             SSH key to connect to the hosts
-u  string             SSH User
-i                     Uninstall the scheduler stack

Options:
-r                     Reset etcd state and remove docker containers
-g                     Remove docker images

Additional parameters can also be updated in install/ansible/env.json file.

Examples:
1. Uninstall Contiv and Docker Swarm on hosts specified by cfg.yml.
./install/ansible/uninstall_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i
2. Uninstall Contiv and Docker Swarm on hosts specified by cfg.yml for an ACI setup.
./install/ansible/uninstall_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci
3. Uninstall Contiv and Docker Swarm on hosts specified by cfg.yml for an ACI setup, remove all containers and Contiv etcd state
./install/ansible/uninstall_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci -r






 Warning!

Don't start this step until you have completed the previous step. Make sure you are in root@pod01-srv1 during these steps.

Docker Commands

During the previous section, "Contiv Installation", multiple components were installed for you automatically, such as the Docker Engine, Docker Swarm, the etcd key-value store, Contiv (Netmaster and Netplugin), and the aci-gw and auth_proxy containers.

In this section we will be running some basic commands to become more familiar with this lab. It is important to note that you can always visit http://docs.docker.com to get more information about these commands and other Docker commands.

Step 1 - Export environments

We need to export the DOCKER_HOST environment variable so the Docker client knows where the Docker Remote API is located.

pod01-srv1
1
# This is the copy group: 1
export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
pod01-srv1
2
# This is the copy group: 2
cd ~
sed -i -e '$a export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375' .bashrc
pod01-srv1
3
# This is the copy group: 3
cat .bashrc
[root@pod-srv1 ~]# cat .bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
    

Step 2 - Docker Version

This command will display the Docker Version your Docker Engine is currently running.

pod01-srv1
4
# This is the copy group: 4
docker --version
pod01-srv1

[root@pod01-srv1 ~]#docker --version
   
    
     Docker version 17.12.0-ce, build c97c6d6
    
   
   

Step 3 - Docker Network

During the installation process, Docker created three default networks on each host: bridge, host, and none.

pod01-srv1
5
# This is the copy group: 5
docker network ls
pod01-srv1

[root@pod01-srv1 ~]#docker network ls
NETWORK ID          NAME                                      DRIVER
8713b6b1a7f6        pod01-srv1.ecatsrtpdmz.cisco.com/bridge   bridge
82497573b10b        pod01-srv1.ecatsrtpdmz.cisco.com/host     host
ae67fd1290a0        pod01-srv1.ecatsrtpdmz.cisco.com/none     null
5d930ac93f1f        pod01-srv2.ecatsrtpdmz.cisco.com/bridge   bridge
89dca9c64e0a        pod01-srv2.ecatsrtpdmz.cisco.com/host     host
37cc782ca877        pod01-srv2.ecatsrtpdmz.cisco.com/none     null


Note: You will notice that both of your servers show up in this output; this is because we are running a cluster. We will explain this further down.

Step 4 - Docker Network Inspect

This command provides container information, the IP address/gateway, and, in our case, information pertaining to the Swarm configuration.

pod01-srv1
6
# This is the copy group: 6
docker network inspect pod01-srv1.ecatsrtpdmz.cisco.com/bridge
pod01-srv1

[root@pod01-srv1 ~]#docker network inspect pod01-srv1.ecatsrtpdmz.cisco.com/bridge
[
    {
        "Name": "bridge",
        "Id": "afb6a74e3fcf0fb2694d1ab5b764b4ce90fd2c5e12c15e723849c7371289e59f",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Step 5 - Docker Pull

This command allows you to download images from either hub.docker.com or your own repository. We will use this command to download the images we will be leveraging during this lab.

The first image we are going to be downloading is a webserver.

pod01-srv1
7
# This is the copy group: 7
docker pull cobedien/ltrcld-2003
pod01-srv1

[root@pod01-srv1 ~]#docker pull cobedien/ltrcld-2003
Using default tag: latest
Using default tag: latest
pod01-srv1.ecatsrtpdmz.cisco.com: Pulling cobedien/ltrcld-2003... : downloaded
pod01-srv2.ecatsrtpdmz.cisco.com: Pulling cobedien/ltrcld-2003... : downloaded


NOTE: We are pulling the images to both of our nodes.


Step 6 - Docker Images

docker images lists all the images that are currently on the system. Note that cobedien/ltrcld-2003 is there.

pod01-srv1
8
# This is the copy group: 8
docker images
pod01-srv1

[root@pod01-srv1 ~]#docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
cobedien/ltrcld-2003   latest              f0e3ee818195        3 days ago          224.7 MB
contiv/aci-gw          02-02-2017.2.1_1h   2db0d82d9452        3 days ago          809.1 MB
contiv/auth_proxy      1.0.0-alpha         0e3e991c962a        5 days ago          28.4 MB
swarm                  1.2.5               f1d5a057a389        5 months ago        19.47 MB
quay.io/coreos/etcd    v2.3.7              e81032a59e55        7 months ago        32.29 MB


Step 7 - Docker Run

docker run starts a container and isolates the process it runs from the rest of the host. It is important to note that each container gets its own isolated file system, network stack, and process tree. This command has several flags; you can execute docker run --help to learn about them.

pod01-srv1
9
# This is the copy group: 9
docker run -it -h=webserver --name=webserver cobedien/ltrcld-2003
pod01-srv1

[root@pod01-srv1 ~]#docker run -it -h=webserver --name=webserver cobedien/ltrcld-2003
root@webserver:/#

NOTE: Now you are inside your first container!!! Congratulations!!! Feel free to navigate and execute commands.


BONUS CHALLENGE: Can you find out which OS this container is running? If you find the answer, please contact the instructor.

There are two ways to exit the container: type exit, which stops the container because the shell is its main process, or press Ctrl+P followed by Ctrl+Q, which detaches and keeps the container running.

For this step, press Ctrl+P followed by Ctrl+Q so that the container keeps running.

Step 8 - Docker ps

Docker ps will show the active/running containers (by default):

pod01-srv1
10
# This is the copy group: 10
docker ps
pod01-srv1

[root@pod01-srv1 ~]#docker ps
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS              PORTS               NAMES
689c6ca1b6b8        cobedien/ltrcld-2003              "/bin/bash"              48 seconds ago      Up 40 seconds       80/tcp              pod01-srv2.ecatsrtpdmz.cisco.com/webserver
406f65539fb7        contiv/auth_proxy:1.0.0-alpha     "./auth_proxy --tls-k"   14 minutes ago      Up 14 minutes                           pod01-srv1.ecatsrtpdmz.cisco.com/auth-proxy
9361de3143ab        contiv/aci-gw:02-02-2017.2.1_1h   "/bin/sh -c '/usr/bin"   16 minutes ago      Up 16 minutes                           pod01-srv1.ecatsrtpdmz.cisco.com/contiv-aci-gw
ea6fb755040d        quay.io/coreos/etcd:v2.3.7        "/etcd"                  28 minutes ago      Up 28 minutes                           pod01-srv2.ecatsrtpdmz.cisco.com/etcd
43e03ce05403        quay.io/coreos/etcd:v2.3.7        "/etcd"                  29 minutes ago      Up 29 minutes                           pod01-srv1.ecatsrtpdmz.cisco.com/etcd

As you can see, the webserver container is still running because in the previous step we detached with Ctrl+P, Ctrl+Q.

Step 9 - Docker stop

docker stop stops a running container.

pod01-srv1
11
# This is the copy group: 11
docker stop webserver
pod01-srv1

[root@pod01-srv1 ~]#docker stop webserver
webserver

Now let's verify the container is no longer running.

pod01-srv1
12
# This is the copy group: 12
docker ps
pod01-srv1

[root@pod01-srv1 ~]#docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS               NAMES
e3671557032a        contiv/auth_proxy:1.0.0-alpha     "./auth_proxy --tls-k"   26 minutes ago      Up 26 minutes                           pod01-srv1.ecatsrtpdmz.cisco.com/auth-proxy
6227ce912b36        contiv/aci-gw:12-01-2016.2.1_1h   "/bin/sh -c '/usr/bin"   27 minutes ago      Up 27 minutes                           pod01-srv1.ecatsrtpdmz.cisco.com/contiv-aci-gw
3f89851a8639        quay.io/coreos/etcd:v2.3.7        "/etcd"                  35 minutes ago      Up 35 minutes                           pod01-srv2.ecatsrtpdmz.cisco.com/etcd
1b70f7a77fe7        quay.io/coreos/etcd:v2.3.7        "/etcd"                  35 minutes ago      Up 35 minutes                           pod01-srv1.ecatsrtpdmz.cisco.com/etcd

As you can see, the webserver container is no longer running. You can execute docker ps --all to show every container, whether it is running or not.

You can start the container again with docker start webserver .

Step 10 - Docker rm

docker rm deletes the container from the host.

pod01-srv1
13
# This is the copy group: 13
docker rm webserver
pod01-srv1

[root@pod01-srv1 ~]#docker rm webserver
webserver
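
For reference, docker rm only removes a stopped container. If a container is still running, the -f flag (which we use later in this lab for cleanup) stops and removes it in one step:

    # Force-remove a running container in a single command
    docker rm -f webserver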

Step 11 - Docker Swarm Commands

Docker info

docker info is probably the most useful command in a Swarm environment. It provides several pieces of information such as the version, the number of nodes, per-node details, status, and more.

pod01-srv1
14
# This is the copy group: 14
docker info
pod01-srv1

[root@pod01-srv1 ~]#docker info
Containers: 8
 Running: 8
 Paused: 0
 Stopped: 0
Images: 10
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
 pod01-srv1.ecatsrtpdmz.cisco.com: 10.0.236.17:2385
  └ ID: VL2K:KYDV:LJVR:UEIK:TY6V:G4FL:RALX:77IS:ZZIJ:OIUD:3ZJB:VNJ6
  └ Status: Healthy
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.888 GiB
  └ Labels: kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2017-01-12T15:58:47Z
  └ ServerVersion: 1.11.1
 pod01-srv2.ecatsrtpdmz.cisco.com: 10.0.236.49:2385
  └ ID: 7S3J:W5JA:N3XK:PEUN:IKFJ:ELLK:2RDW:6HON:L2HB:CPEF:5ABC:M6JE
  └ Status: Healthy
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.888 GiB
  └ Labels: kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2017-01-12T15:58:46Z
  └ ServerVersion: 1.11.1
Plugins:
 Volume:
 Network:
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 7.775 GiB
Name: c6888c3f2b61
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support

As you can see from the above output, both of our servers are part of the cluster and their status is Healthy. The output also shows the number of containers running on each worker node.

Now that you are familiar with Swarm, let's check how containers are distributed across the cluster. We will create four containers and see on which Docker Engine (host) Docker Swarm places each one.

pod01-srv1
15
# This is the copy group: 15
docker run -itd --name=webserver cobedien/ltrcld-2003 sleep 6000
docker run -itd --name=webserver1 cobedien/ltrcld-2003 sleep 6000
docker run -itd --name=webserver2 cobedien/ltrcld-2003 sleep 6000
docker run -itd --name=webserver3 cobedien/ltrcld-2003 sleep 6000
pod01-srv1
16
# This is the copy group: 16
docker ps | grep webserver
pod01-srv1

[root@pod01-srv1 ~]#  docker ps | grep webserver
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS               NAMES
d26fa93c0edb        cobedien/ltrcld-2003         "/bin/bash"              5 seconds ago       Up 1 seconds        80/tcp              pod01-srv1.ecatsrtpdmz.cisco.com/webserver3
979a00046921        cobedien/ltrcld-2003         "/bin/bash"              8 seconds ago       Up 5 seconds        80/tcp              pod01-srv2.ecatsrtpdmz.cisco.com/webserver2
b88f623976fd        cobedien/ltrcld-2003         "/bin/bash"              11 seconds ago      Up 8 seconds        80/tcp              pod01-srv2.ecatsrtpdmz.cisco.com/webserver1
5f1464eb0d3c        cobedien/ltrcld-2003         "/bin/bash"              13 seconds ago      Up 11 seconds       80/tcp              pod01-srv2.ecatsrtpdmz.cisco.com/webserver

You will notice that some containers were placed on pod01-srv1.ecatsrtpdmz.cisco.com and the rest were placed on pod01-srv2.ecatsrtpdmz.cisco.com

Docker Swarm keeps track of container placement and balances new containers across the cluster according to its scheduling strategy (spread, in this lab).
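
As a quick optional check (not a required lab step), you can confirm the scheduling strategy the cluster is using; the value matches the docker info output shown earlier:

    docker info | grep Strategy
    # Strategy: spread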

Let's remove the containers we just created in order to keep our system clean and continue with the lab.

pod01-srv1
17
# This is the copy group: 17
docker rm -f webserver
docker rm -f webserver1
docker rm -f webserver2
docker rm -f webserver3



APIC Navigation

The APIC infrastructure configuration has already been configured for you. This section will show you the necessary steps to follow in order to configure the APIC.

Step 1 - Connect to APIC GUI via web

You can log in to the interface with the provided credentials:

http://10.0.226.41

 Warning!

Make no changes to the APIC controller via the GUI. The lab builds on this configuration, and keeping the infrastructure operational is key to completing it.

Create Physical Domain

The physical domain in ACI defines a pool of resources that ACI can use to communicate with an external domain. The domains are:

Create VLAN Pool

A pool represents a range of traffic encapsulation identifiers (for example, VLAN IDs, VNIDs, and multicast addresses). A pool is a shared resource and can be consumed by multiple domains such as VMM and Layer 4 to Layer 7 services. A leaf switch does not support overlapping VLAN pools. You must not associate different overlapping VLAN pools with the VMM domain. The two types of VLAN-based pools are as follows:

Interface Policy Group

Fabric policies govern the operation of internal fabric interfaces and enable the configuration of various functions, protocols, and interfaces that connect spine and leaf switches. Administrators who have fabric administrator privileges can create new fabric policies according to their requirements. The APIC enables administrators to select the pods, switches, and interfaces to which they will apply fabric policies. The following figure provides an overview of the fabric policy model.

Fabric policies are grouped into the following categories:

Interface Policies - Leaf Profiles

Interface Policies - Leaf Policy Groups

Switch Policies - Profiles - Leaf Profiles

Attach Entity Profile

The ACI fabric provides multiple attachment points that connect through leaf ports to various external entities such as baremetal servers, hypervisors, Layer 2 switches (for example, the Cisco UCS fabric interconnect), or Layer 3 routers (for example Cisco Nexus 7000 Series switches). These attachment points can be physical ports, port channels, or a virtual port channel (vPC) on leaf switches, as shown in the figure below.

An Attachable Entity Profile (AEP) represents a group of external entities with similar infrastructure policy requirements. The infrastructure policies consist of physical interface policies, such as Cisco Discovery Protocol (CDP), Link Layer Discovery Protocol (LLDP), Maximum Transmission Unit (MTU), or Link Aggregation Control Protocol (LACP). An AEP is required to deploy VLAN pools on leaf switches. Encapsulation pools (and the associated VLANs) are reusable across leaf switches. An AEP implicitly provides the scope of the VLAN pool to the physical infrastructure.

 Warning!

Make sure you are in root@pod01-srv1 during these steps.

Contiv Configuration

During these steps we will configure several items in the Netmaster. Contiv will not communicate with APIC until we create the Application Network Profile (ANP).

Step 1 - VLAN configuration

Contiv leverages static VLAN binding in order to talk to ACI. It is important that these VLANs match the APIC configuration shown previously.

pod01-srv1
2
# This is the copy group: 2
netctl global set --fabric-mode aci --vlan-range 500-505
pod01-srv1
3
# This is the copy group: 3
netctl global info
pod01-srv1

# netctl global info
Fabric mode: aci
Forward mode: bridge
ARP mode: proxy
Vlan Range: 500-505
Vxlan range: 1-10000
Private subnet: 172.19.0.0/16

Step 2 - Tenant Creation

A tenant is a logical container for application policies. A tenant represents a unit of isolation from a policy perspective and can represent a customer, a division, a business unit, and so on.

pod01-srv1
4
# This is the copy group: 4
netctl tenant create ContivTN01
pod01-srv1
5
# This is the copy group: 5
netctl tenant ls
pod01-srv1

[root@pod01-srv1 ~]# netctl tenant ls

Name
------
default
ContivTN01

Step 3 - Tenant Subnet

Creating the tenant subnet is the step where the user defines the subnet, gateway, and network name in Contiv. Contiv will then create a Bridge Domain in ACI with the information provided.

pod01-srv1
6
# This is the copy group: 6
netctl net create -t ContivTN01 -e vlan -s 10.0.248.0/29 -g 10.0.248.1 ContivNet01
pod01-srv1
7
# This is the copy group: 7
netctl net ls -t ContivTN01
pod01-srv1

[root@pod01-srv1 ~]# netctl net ls -t ContivTN01
Tenant      Network      Nw Type  Encap type  Packet tag  Subnet         Gateway     IPv6Subnet  IPv6Gateway
------      -------      -------  ----------  ----------  -------        ------      ----------  -----------
ContivTN01  ContivNet01  data     vlan        0           10.0.248.0/29  10.0.248.1

Step 4 - Create Policy

A Contiv policy creates the object to which the user can then attach EPGs and rules.

pod01-srv1
8
# This is the copy group: 8
netctl policy create -t ContivTN01 app2db
pod01-srv1
9
# This is the copy group: 9
netctl policy ls -t ContivTN01
pod01-srv1

[root@pod01-srv1 ~]# netctl policy ls -t ContivTN01
Tenant      Policy
------      ------
ContivTN01  app2db

Step 5 - Create End Point Group

An End Point Group (EPG) is a logical grouping of End Points (EPs) that share the same characteristics. EPGs act as a container for applications. They allow the separation of network policy, security, and forwarding from addressing and instead apply them to logical application boundaries.

During this exercise we will create two EPGs: condb and conapp .

pod01-srv1
10
# This is the copy group: 10
netctl group create -t ContivTN01 -p app2db ContivNet01 condb
netctl group create -t ContivTN01 ContivNet01 conapp
pod01-srv1
11
# This is the copy group: 11
netctl group ls -t ContivTN01
pod01-srv1

[root@pod01-srv1 ~]# netctl group ls -t ContivTN01
Tenant      Group   Network      Policies  Network profile
------      -----   -------      --------  ---------------
ContivTN01  condb   ContivNet01  app2db
ContivTN01  conapp  ContivNet01

Step 6 - Create Rules

Rules are the actions or policies (ACLs) applied between a set of EPGs in order to allow or deny certain traffic.

pod01-srv1
12
# This is the copy group: 12
netctl policy rule-add -t ContivTN01 -d in --protocol tcp --port 6379 --from-group conapp --action allow app2db 1
pod01-srv1
13
# This is the copy group: 13
netctl policy rule-ls app2db -t ContivTN01
pod01-srv1

[root@pod01-srv1 ~]# netctl policy rule-ls app2db -t ContivTN01
Incoming Rules:
Rule  Priority  From EndpointGroup  From Network  From IpAddress  Protocol  Port  Action
----  --------  ------------------  ------------  ---------       --------  ----  ------
1     1         conapp                                            tcp       6379  allow
Outgoing Rules:
Rule  Priority  To EndpointGroup  To Network  To IpAddress  Protocol  Port  Action
----  --------  ----------------  ----------  ---------     --------  ----  ------

Step 7 - Create Application Network Profile (ANP)

This is the step where the integration between ACI and Contiv occurs. After this command is executed, Contiv will send the information to APIC in order to create the objects in ACI.

pod01-srv1
14
# This is the copy group: 14
netctl app-profile create -t ContivTN01 -g conapp,condb APP-TN01
pod01-srv1
15
# This is the copy group: 15
netctl app-profile ls -t ContivTN01
pod01-srv1

[root@pod01-srv1 ~]# netctl app-profile ls -t ContivTN01
Tenant      AppProfile  Groups
------      ----------  ------
ContivTN01  APP-TN01    conapp,condb

Step 8 - New Docker Networks

During this step, we are going to be using docker network ls (as previously explained) to identify the new networks that have been created for the EPG (conapp and condb).

pod01-srv1
16
# This is the copy group: 16
docker network ls
pod01-srv1

[root@pod01-srv1 ~]# docker network ls
NETWORK ID          NAME                                      DRIVER              SCOPE
532dac286f4b        conapp/ContivTN01                         netplugin           global
a7c653149b03        condb/ContivTN01                          netplugin           global
595e958bc90f        pod01-srv1.ecatsrtpdmz.cisco.com/bridge   bridge              local
3f1c2d05057e        pod01-srv1.ecatsrtpdmz.cisco.com/host     host                local
e30eb5255d1a        pod01-srv1.ecatsrtpdmz.cisco.com/none     null                local
59fe7dc02201        pod01-srv2.ecatsrtpdmz.cisco.com/bridge   bridge              local
3a563496bafd        pod01-srv2.ecatsrtpdmz.cisco.com/host     host                local
f8d06f7bae0d        pod01-srv2.ecatsrtpdmz.cisco.com/none     null                local
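
Note that the EPG networks are named in the form group/tenant (for example conapp/ContivTN01); these are the names we will pass to --net when starting containers in the next section. As an optional check, you can inspect one of them just as we inspected the bridge network earlier:

    docker network inspect conapp/ContivTN01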

 Warning!

Make sure you are in root@pod01-srv1 during these steps.

Containers Start

Step 1 - Start up Containers (POD01-srv1)

Now it is time to create some containers and see how ACI and Contiv work together from an integration point of view.

The first container will be added to the conapp EPG.

pod01-srv1
1
# This is the copy group: 1
docker run -itd -h=app --name=app --net=conapp/ContivTN01 cobedien/ltrcld-2003

The second container will be added to the condb EPG.

pod01-srv1
2
# This is the copy group: 2
docker run -itd -h=db --name=db --net=condb/ContivTN01 cobedien/ltrcld-2003

Let's make sure the two containers have started.

pod01-srv1
3
# This is the copy group: 3
docker ps | grep ltrcld-2003
    [root@pod01-srv1 ~]# docker ps | grep ltrcld
    7ff6caba4723   cobedien/ltrcld-2003   "sleep 6000"   12 days ago   Up About a minute   pod01-srv1.ecatsrtpdmz.cisco.com/db
    8b3de8d9ad09   cobedien/ltrcld-2003   "sleep 6000"   12 days ago   Up 3 minutes        pod01-srv2.ecatsrtpdmz.cisco.com/app

    

Step 2 - Accessing the APP Container (POD01-srv1)

During this step we will access the APP container that we started in the previous step. For the purpose of this step, we will access the APP container from [root@pod01-srv1 ~]#, but in a production environment you would be able to access any container from any worker node.

The idea is to show that both containers are working at the same time. We will access the DB container in the next step.

pod01-srv1
4
# This is the copy group: 4
docker exec -it app /bin/bash
 [root@pod01-srv1 ~]# docker exec -it app /bin/bash
root@app:/# 
    

Step 3 - Exporting the DOCKER_HOST in POD01-srv2

 Warning!

Make sure you are in root@pod01-srv02 during these steps.

In order to see the containers from pod01-srv2, we need to export the DOCKER_HOST environment variable.

pod01-srv2
5
# This is the copy group: 5
export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
pod01-srv2
6
# This is the copy group: 6
cd ~
sed -i -e '$a export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375' .bashrc
pod01-srv2
7
# This is the copy group: 7
cat .bashrc
[root@pod01-srv2 ~]# cat .bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
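
As an optional sanity check (not a required lab step), you can confirm that the Docker client on pod01-srv2 now points at the Swarm manager:

    echo $DOCKER_HOST          # should print tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
    docker info | grep Nodes   # should report the two-node cluster (Nodes: 2)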
    

Step 4 - Entering DB container in pod01-srv2

pod01-srv2
8
# This is the copy group: 8
docker exec -it db /bin/bash
 [root@pod01-srv2 ~]# docker exec -it db /bin/bash
root@db:/# 
    

Explore Container Information

Now that we have both containers up and running, let's discover the IP address and default gateway that were assigned by the IPAM.

Step 5 - APP Container (POD01-srv1)

pod01-srv1
9
# This is the copy group: 9
ip a
root@app:/# ip a
1: lo:  mtu 65536 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
41: eth0@if40:  mtu 1450 qdisc noqueue
    link/ether 02:02:04:01:01:01 brd ff:ff:ff:ff:ff:ff
    inet 10.0.248.2/29 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2:4ff:fe01:101/64 scope link
       valid_lft forever preferred_lft forever
    
pod01-srv1
10
# This is the copy group: 10
netstat -rn
root@app:/# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask          Flags   MSS Window  irtt Iface
0.0.0.0         10.0.248.1      0.0.0.0          UG        0 0          0 eth0
10.0.248.0/29   0.0.0.0         255.255.255.248  U         0 0          0 eth0
    

Step 6 - DB Container (POD01-srv2)

pod01-srv2
11
# This is the copy group: 11
ip a
root@db:/# ip a
1: lo:  mtu 65536 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
41: eth0@if40:  mtu 1450 qdisc noqueue
    link/ether 02:02:04:01:01:01 brd ff:ff:ff:ff:ff:ff
    inet 10.0.248.3/29  scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2:4ff:fe01:101/64 scope link
       valid_lft forever preferred_lft forever
    
pod01-srv2
12
# This is the copy group: 12
netstat -rn
root@db:/# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask          Flags   MSS Window  irtt Iface
0.0.0.0         10.0.248.1      0.0.0.0          UG        0 0          0 eth0
10.0.248.0/29   0.0.0.0         255.255.255.248  U         0 0          0 eth0
    

Step 7 - Test Connectivity from the DB container (POD01-srv2)

Now that we have both containers up and running, let's test connectivity to make sure the containers can ping the default gateway and each other.

Ping the Default Gateway

pod01-srv2
13
# This is the copy group: 13
ping -c 5 10.0.248.1
root@db:/# ping -c 5 10.0.248.1
PING 10.0.248.1 (10.0.248.1): 56 data bytes
64 bytes from 10.0.248.1: seq=0 ttl=63 time=0.711 ms
64 bytes from 10.0.248.1: seq=1 ttl=63 time=0.278 ms
64 bytes from 10.0.248.1: seq=2 ttl=63 time=0.255 ms
64 bytes from 10.0.248.1: seq=3 ttl=63 time=0.246 ms
64 bytes from 10.0.248.1: seq=4 ttl=63 time=0.284 ms

--- 10.0.248.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.246/0.354/0.711 ms
    

Ping the APP container

pod01-srv2
14
# This is the copy group: 14
ping -c 5 10.0.248.2
root@db:/# ping -c 5 10.0.248.2
PING 10.0.248.2 (10.0.248.2): 56 data bytes

--- 10.0.248.2 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
    

Step 8 - iPerf Testing (POD01-srv2)

iPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers and protocols (TCP, UDP, SCTP with IPv4 and IPv6). For each test it reports the bandwidth, loss, and other parameters.

For more information about Iperf visit https://en.wikipedia.org/wiki/Iperf

We are going to start an iPerf server in the DB container.

pod01-srv2
15
# This is the copy group: 15
iperf -s -p 6379
root@db:/#  iperf -s -p 6379
------------------------------------------------------------
Server listening on TCP port 6379
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
    

NOTE: The DB container is now waiting for connections on TCP port 6379.
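
Optionally, once the policy is verified in the following steps, you could also drive real traffic through it by running an iPerf client from the APP container (a sketch assuming iperf is available in the image, as it is in the DB container):

    # Run from inside the APP container: connect to the DB container's iPerf server on TCP 6379
    iperf -c 10.0.248.3 -p 6379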

Step 9 - Test Connectivity from the APP container (POD01-srv1)

Let's verify the APP container connectivity.

Ping the Default Gateway

pod01-srv1
16
# This is the copy group: 16
ping -c 5 10.0.248.1
root@app:/# ping -c 5 10.0.248.1
PING 10.0.248.1 (10.0.248.1): 56 data bytes
64 bytes from 10.0.248.1: seq=0 ttl=63 time=0.711 ms
64 bytes from 10.0.248.1: seq=1 ttl=63 time=0.278 ms
64 bytes from 10.0.248.1: seq=2 ttl=63 time=0.255 ms
64 bytes from 10.0.248.1: seq=3 ttl=63 time=0.246 ms
64 bytes from 10.0.248.1: seq=4 ttl=63 time=0.284 ms

--- 10.0.248.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.246/0.354/0.711 ms
    

Ping the DB container

pod01-srv1
17
# This is the copy group: 17
ping -c 5 10.0.248.3
root@app:/# ping -c 5 10.0.248.3
PING 10.0.248.3 (10.0.248.3): 56 data bytes

--- 10.0.248.3 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
    

Step 10 - Netcat Testing (POD01-srv1)

The nc (netcat) utility is used for just about anything involving TCP or UDP. It can open TCP connections, send UDP packets, listen on arbitrary TCP and UDP ports, do port scanning, and deal with both IPv4 and IPv6.

For more information about NC visit https://en.wikipedia.org/wiki/Netcat

If you recall, we started iPerf in the DB container on TCP port 6379. Now we are going to send a TCP request on that port to the DB container.

pod01-srv1
18
# This is the copy group: 18
nc -zvnw 1 10.0.248.3 6379
root@app:/# nc -zvnw 1 10.0.248.3 6379
Connection to 10.0.248.3 6379 port [tcp/*] succeeded!
    

You can verify the connection was successful by going back to the DB container.

root@db:/# iperf -s -p 6379
------------------------------------------------------------
Server listening on TCP port 6379
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.248.3 port 6379 connected with 10.0.248.2 port 46924
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 0.0 sec  0.00 Bytes  0.00 bits/sec
    

NOTE: Why can the containers ping the default gateway but not each other, even though they are in the same subnet? And why is the APP container able to communicate with the DB container on port 6379?

This is how Contiv and ACI work together to create a policy-based container network and an automated way to deploy microservices workloads/applications. We will discuss this concept, and how to modify the policy so the APP and DB containers can ping each other, in the following sections.

Step 11 - Exiting the APP container (POD01-srv1)

Recall from the previous section how we exited the container while leaving it running. We will perform the same key sequence in this step.

Press Ctrl+P followed by Ctrl+Q.


 Warning!

Make sure you are in root@pod01-srv1 during these steps.

Contiv Commands

Step 1 - Inspect VLAN configuration (POD01-srv1)

This command helps us determine which VLANs are configured and which are in use. It is helpful when someone needs to verify that the configured VLANs match APIC, as well as how many VLANs are currently in use.

pod01-srv1
1
# This is the copy group: 1
netctl global inspect

[root@01-srv1 ~]# netctl global inspect
Inspecting global
{
  "Config": {
    "key": "global",
    "arpMode": "proxy",
    "fwdMode": "bridge",
    "name": "global",
    "networkInfraType": "aci",
    "pvtSubnet": "172.19.0.0/16",
    "vlans": "500-505",
    "vxlans": "1-10000"
  },
  "Oper": {
    "clusterMode": "docker",
    "numNetworks": 3,
    "vlansInUse": "500-502"
  }
}
     

Step 2 - Inspect Network configuration (POD01-srv1)

This command shows how the network is configured. It displays the following information:

pod01-srv1
2
# This is the copy group: 2
netctl network inspect -t ContivTN01 ContivNet01

[root@01-srv1 ~]# netctl network inspect -t ContivTN01 ContivNet01
Inspeting network: ContivNet01 tenant: ContivTN01

{
  "Config": {
    "key": "ContivTN01:ContivNet01",
    "encap": "vlan",
    "gateway": "10.0.248.1",
    "networkName": "ContivTN01",
    "nwType": "data",
    "subnet": "10.0.248.0/29",
    "tenantName": "ContivTN01",
    "link-sets": {
      "EndpointGroups": {
        "ContivTN01:conapp": {
          "type": "endpointGroup",
          "key": "ContivTN01:conapp"
        },
        "ContivTN01:condb": {
          "type": "endpointGroup",
          "key": "ContivTN01:condb"
        }
      }
    },
    "links": {
      "Tenant": {
        "type": "tenant",
        "key": "ContivTN01"
      }
    }
  },
  "Oper": {
    "allocatedAddressesCount": 2,
    "allocatedIPAddresses": "10.0.248.1-10.0.248.3",
    "availableIPAddresses": "10.0.248.4-10.0.248.6, -",
    "endpoints": [
      {
        "containerID": "a3b623a60f055b3e1f795accc2668e26c7f325b8ed260c1eef4526ecc2b57f63",
        "containerName": "/app",
        "endpointGroupId": 2,
        "endpointGroupKey": "conapp:ContivTN01",
        "endpointID": "706db466919ef1bab60ba8c5b274ce7213ff21f35a00b3423bf02495bced2e9f",
        "homingHost": "pod01-srv2.ecatsrtpdmz.cisco.com",
        "ipAddress": [
          "10.0.248.2",
          ""
        ],
        "labels": "map[com.docker.swarm.id:af545079a88575bac495f5ceb27280b698c455e5814a493fa017eaa3806d6d97]",
        "macAddress": "02:02:0a:00:f8:02",
        "network": "ContivNet01.ContivTN01",
        "serviceName": "conapp"
      },
      {
        "containerID": "7406a3892ad4d476e40ea19e8777bf2de4f91bea75d248544cbd608f0ae62d4f",
        "containerName": "/db",
        "endpointGroupId": 1,
        "endpointGroupKey": "condb:ContivTN01",
        "endpointID": "ed3fdeac4cf9b2c4926643aa8abb1b361f74440edb8390e6b04d44b2640a7b5a",
        "homingHost": "pod01-srv2.ecatsrtpdmz.cisco.com",
        "ipAddress": [
          "10.0.248.3",
          ""
        ],
        "labels": "map[com.docker.swarm.id:8d5a69b5b4d1ef2f300afc03fa582c8629799393ef133f93fe07a21dd8ab69d8]",
        "macAddress": "02:02:0a:00:f8:03",
        "network": "ContivNet01.ContivTN01",
        "serviceName": "condb"
      }
    ],
    "numEndpoints": 2,
    "pktTag": 500
  }
}
     

Step 3 - Inspect EPG configuration (POD01-srv1)

This command helps us determine how the EPGs are configured. It displays the following information:

pod01-srv1
3
# This is the copy group: 3
netctl group inspect -t ContivTN01 conapp

[root@01-srv1 ~]# netctl group inspect -t ContivTN01 conapp
Inspeting endpointGroup: conapp tenant: ContivTN01
{
  "Config": {
    "key": "ContivTN01:conapp",
    "groupName": "conapp",
    "networkName": "ContivNet01",
    "tenantName": "ContivTN01",
    "link-sets": {
      "MatchRules": {
         "ContivTN01:app2db:1": {
          "type": "rule",
          "key": "ContivTN01:app2db:1"
        }
      }
    },
    "links": {
       "AppProfile": {
        "type": "appProfile",
        "key": "ContivTN01:APP-TN01"
      },
      "NetProfile": {},
       "Network": {
        "type": "network",
        "key": "ContivTN01:ContivNet01"
      },
      "Tenant": {
        "type": "tenant",
        "key": "ContivTN01"
      }
    }
  },

   "Oper": {
    "endpoints": [
      {
        "containerID": "a3b623a60f055b3e1f795accc2668e26c7f325b8ed260c1eef4526ecc2b57f63",
        "containerName": "/app",
        "endpointGroupId": 2,
        "endpointGroupKey": "conapp:ContivTN01",
        "endpointID": "706db466919ef1bab60ba8c5b274ce7213ff21f35a00b3423bf02495bced2e9f",
        "homingHost": "pod01-srv2.ecatsrtpdmz.cisco.com",
        "ipAddress": [
          "10.0.248.2",
          ""
        ],
        "labels": "map[com.docker.swarm.id:af545079a88575bac495f5ceb27280b698c455e5814a493fa017eaa3806d6d97]",
        "macAddress": "02:02:0a:00:f8:02",
        "network": "ContivNet01.ContivTN01",
        "serviceName": "conapp"
      }
    ],
    "numEndpoints": 1,
    "pktTag": 502
  }
}

     

Step 4 - Inspect End Point configuration (POD01-srv1)

This command helps us determine how the End Point is configured, with the following information:

First, we need to query the endpointID

pod01-srv1
4
# This is the copy group: 4
netctl group inspect -t ContivTN01 conapp | grep endpointID

[root@01-srv1 ~]# netctl group inspect -t ContivTN01 conapp | grep endpointID

        "endpointID": "74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b",
     

Once we have gathered the endpointID (in this particular example it is "74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b"), you will need to run netctl endpoint inspect with the endpointID from your own container.


netctl endpoint inspect 74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b
     

netctl endpoint inspect 74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b

{
  "Oper": {
    "endpointGroupId": 2,
    "endpointGroupKey": "conapp:ContivTN01",
    "endpointID": "74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b",
    "homingHost": "pod01-srv2.ecatsrtpdmz.cisco.com",
    "ipAddress": [
      "10.0.248.2",
      ""
    ],
    "labels": "map[]",
    "macAddress": "02:02:0a:00:f8:da",
    "network": "ContivNet01.ContivTN01",
    "serviceName": "conapp",
    "virtualPort": "vvport1"
  }
}

     

Contiv and ACI Automated Policy

As you noticed in the previous section, you were not able to ping between the APP container and the DB container. Recall that during the creation of your tenant we created a policy that only allows TCP port 6379.

pod01-srv1

[root@pod01-srv1 ~]# netctl policy rule-ls app2db -t ContivTN01
Incoming Rules:
Rule  Priority  From EndpointGroup  From Network  From IpAddress  Protocol  Port  Action
----  --------  ------------------  ------------  ---------       --------  ----  ------
1     1         conapp                                            tcp       6379  allow
Outgoing Rules:
Rule  Priority  To EndpointGroup  To Network  To IpAddress  Protocol  Port  Action
----  --------  ----------------  ----------  ---------     --------  ----  ------

In this section we are going to cover how Contiv and ACI work together to provide the automated policy that every data center requires.

APIC Navigation

 Warning!

Make no changes to the APIC controller via the GUI. The lab builds on this configuration, and keeping the infrastructure operational is key to completing it.

The APIC infrastructure configuration has already been completed for you. This section will show you the necessary steps to follow in order to understand how Contiv and ACI provide an automated policy.

Step 2 - Connect to APIC GUI via web

You can log in to the interface with the provided credentials:

http://10.0.226.41

Step 3 - POD Tenant

A tenant is a logical container for application policies that enable an administrator to exercise domain-based access control. A tenant represents a unit of isolation from a policy perspective, but it does not represent a private network. Tenants can represent a customer in a service provider setting, an organization or domain in an enterprise setting, or just a convenient grouping of policies.

Tenants can be isolated from one another or can share resources. The primary elements that the tenant contains are filters, contracts, outside networks, bridge domains, contexts, and application profiles that contain endpoint groups (EPGs). Entities in the tenant inherit its policies. A tenant can contain one or more virtual routing and forwarding (VRF) instances or contexts; each context can be associated with multiple bridge domains. Tenants are logical containers for application policies. The fabric can contain multiple tenants.

 Warning!

Make sure you click on YOUR tenant/POD number (ContivTN01). You may need to go to the next page in order to find your tenant.

Step 4 - Application Network Profile

An Application Network Profile is a collection of EPGs, their connections, and the policies that define those connections. Application Network Profiles are the logical representation of an application and its inter-dependencies in the network fabric.

Application Network Profiles are designed to be modeled in a logical way that matches the way that applications are designed and deployed. The configuration and enforcement of policies and connectivity is handled by the system rather than manually by an administrator.

You should be getting the same Application Network Profile as shown above. What this diagram is showing are the following components:

Step 5 - End Point Groups

As mentioned above, an EPG is a collection of similar End Points (EPs). APIC has knowledge of every EP that is attached to the fabric. This is very important: with this information, individuals can determine the location of hosts for troubleshooting and, more importantly, map the application.

Step 6 - Security Policies

Contracts define inbound and outbound permit, deny, and QoS rules and policies such as redirect.

Contracts allow both simple and complex definition of the way that an EPG communicates with other EPGs, depending on the requirements of the environment. Although contracts are enforced between EPGs, they are connected to EPGs using provider-consumer relationships. Essentially, one EPG provides a contract, and other EPGs consume that contract.

Labels determine which EPG consumers and EPG providers can communicate with one another.

Filters are Layer 2 to Layer 4 fields, TCP/IP header fields such as Layer 3 protocol type, Layer 4 ports, and so forth. According to its related contract, an EPG provider dictates the protocols and ports in both the in and out directions. Contract subjects contain associations to the filters (and their directions) that are applied between EPGs that produce and consume the contract.

In order to identify the contract that is applied to this ANP, hover your mouse over the contract and you should see the following image appear:

As you can see in the diagram, we have created a filter that only allows TCP 6379. Now we need to modify this filter in order to add a policy that allows ICMP between the containers.

Step 7 - ICMP Policy (POD01-srv1)

During this step, we are going to add the necessary policy in order for the containers to be able to ping each other.

 Warning!

Make sure you are in root@pod01-srv1 during these steps.

Let's verify the current policy in our environment.

pod01-srv1
1
# This is the copy group: 1
netctl policy rule-ls -t ContivTN01 app2db
pod01-srv1

[root@pod01-srv1 ~]# netctl policy rule-ls -t ContivTN01 app2db
Incoming Rules:
Rule  Priority  From EndpointGroup  From Network  From IpAddress  Protocol  Port  Action
----  --------  ------------------  ------------  ---------       --------  ----  ------
1     1         conapp                                            tcp       6379  allow
Outgoing Rules:
Rule  Priority  To EndpointGroup  To Network  To IpAddress  Protocol  Port  Action
----  --------  ----------------  ----------  ---------     --------  ----  ------



It is important to note the rule ordering: in this case the TCP 6379 rule is number 1, so the new ICMP rules will be numbers 2 and 3.


pod01-srv1
2
# This is the copy group: 2
netctl policy rule-add -t ContivTN01 -d in --protocol icmp --from-group conapp --action allow app2db 2
netctl policy rule-add -t ContivTN01 -d in --protocol icmp --from-group condb --action allow app2db 3
pod01-srv1
3
# This is the copy group: 3
netctl policy rule-ls -t ContivTN01 app2db
pod01-srv1

[root@pod01-srv1 ~]# netctl policy rule-ls -t ContivTN01 app2db
Incoming Rules:
Rule  Priority  From EndpointGroup  From Network  From IpAddress  Protocol  Port  Action
----  --------  ------------------  ------------  ---------       --------  ----  ------
1     1         conapp                                            tcp       6379  allow
2     1         conapp                                            icmp      0     allow
3     1         condb                                             icmp      0     allow
Outgoing Rules:
Rule  Priority  To EndpointGroup  To Network  To IpAddress  Protocol  Port  Action
----  --------  ----------------  ----------  ---------     --------  ----  ------

Step 8 - Entering APP container (POD01-srv1)

pod01-srv1
4
# This is the copy group: 4
docker exec -it app /bin/bash
 [root@pod01-srv1 ~]# docker exec -it app /bin/bash
root@app:/# 
    

NOTE: You may get an error because the container is no longer running.

pod01-srv1

[root@pod01-srv1 ~]#  docker exec -it app /bin/bash

Error response from daemon: Container 6f096c8fdee94a539e13008f5268f0612f8f0c084618ef108289c2bb1df5f55c is not running


If you get the error then enter the following command:

pod01-srv1
5
# This is the copy group: 5
docker start app
pod01-srv1

[root@pod01-srv1 ~]#docker start app
app

Now you should be able to enter the APP container

pod01-srv1
6
# This is the copy group: 6
docker exec -it app /bin/bash
pod01-srv1

[root@pod01-srv1 ~]#docker exec -it app /bin/bash
root@app:/#

Step 9 - Ping the DB from the APP (POD01-srv1)

We are going to be sending a continuous ping for this step because we will be leveraging these pings for the next exercise.

pod01-srv1
7
# This is the copy group: 7
ping 10.0.248.3
pod01-srv1

root@app:/# ping 10.0.248.3
PING 10.0.248.3 (10.0.248.3) 56(84) bytes of data.
64 bytes from 10.0.248.3: icmp_seq=1 ttl=64 time=0.962 ms
64 bytes from 10.0.248.3: icmp_seq=2 ttl=64 time=0.475 ms
64 bytes from 10.0.248.3: icmp_seq=3 ttl=64 time=0.303 ms
64 bytes from 10.0.248.3: icmp_seq=4 ttl=64 time=0.588 ms
64 bytes from 10.0.248.3: icmp_seq=5 ttl=64 time=0.564 ms

Step 10 - Exit APP Container (POD01-srv1)

Exit the APP container but leave it running by pressing the following key sequence:

Ctrl+P followed by Ctrl+Q

Step 11 - Entering DB container (POD01-srv2)

 Warning!

Make sure you are in root@pod01-srv2 during these steps.

NOTE:

The DB container may be already running.

pod01-srv2
8
# This is the copy group: 8
docker exec -it db /bin/bash
 [root@pod01-srv2 ~]# docker exec -it db /bin/bash
root@db:/# 
    

Step 12 - Ping the APP from the DB (POD01-srv2)

pod01-srv2
9
# This is the copy group: 9
ping -c 5 10.0.248.2
pod01-srv2

root@db:/# ping -c 5 10.0.248.2
PING 10.0.248.2 (10.0.248.2): 56 data bytes
64 bytes from 10.0.248.2: seq=0 ttl=64 time=3.490 ms
64 bytes from 10.0.248.2: seq=1 ttl=64 time=0.666 ms
64 bytes from 10.0.248.2: seq=2 ttl=64 time=0.858 ms
64 bytes from 10.0.248.2: seq=3 ttl=64 time=0.781 ms
64 bytes from 10.0.248.2: seq=4 ttl=64 time=0.338 ms

--- 10.0.248.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.665/1.167/2.873/0.855 ms

Step 13 - Exit DB Container (POD01-srv2)

Exit the DB container but leave it running by pressing the following key sequence:

Ctrl+P followed by Ctrl+Q

Step 14 - Check the new policy in ACI

Log in to APIC as previously described and check the Application Network Profile. You should notice that a new contract has been created with a new ICMP filter, and that a new ICMP filter has been added to the existing contract.




 Warning!

Make sure you are in root@pod01-srv1 during these steps.

Contiv Packet Capture

Now that we are able to ping between the APP and DB containers, the question becomes: what happens if we run into issues between the containers? They can ping each other, but more importantly, how can we capture packets from a container?

In this section we will be exploring some methods on how to capture packets in a Contiv environment.

Step 1 - Identify the Container placement (POD01-srv1)

Since we are using Docker Swarm, we need to identify where the container was placed, because it could have been placed on pod01-srv1 or pod01-srv2. To obtain this information we will run the following command.

pod01-srv1
1
# This is the copy group: 1
docker ps | grep app
[root@pod01-srv1 ~]# docker ps | grep app
bfeeec18bbad        cobedien/ltrcld-2003 "sleep 6000" 17 minutes ago      Up 17 minutes pod01-srv2.ecatsrtpdmz.cisco.com/app
    

NOTE: In this particular example, the app container was placed on pod01-srv2, but in your case it may have been placed on pod01-srv1. Therefore it is important that you gather the right server information.

Step 2 - Go to the right Server (POD01-srv2)

From the previous command you learned where the container was placed. Now go to that server, either srv1 or srv2.

In this particular example app was placed in srv2


Step 3 - Capturing Link Layer Packets (POD01-srv2)

During this example, we will capture the link-layer packets that are leaving the server. We will execute tcpdump with some parameters in order to capture exactly the information we want.

Here is the tcpdump we will be using for this step:

    tcpdump -e -i eth1 icmp -c 5
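    # What each flag does:
    #   -e        print the link-level header (shows the MAC addresses and the 802.1Q VLAN tag)
    #   -i eth1   capture on the server's data interface (the one facing the ACI leaf in this lab)
    #   icmp      capture only ICMP packets
    #   -c 5      stop after five packets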
    

pod01-srv2
2
# This is the copy group: 2
tcpdump -e -i eth1 icmp -c 5
[root@pod01-srv2 ~]# tcpdump -e -i eth1 icmp -c 5
tcpdump: WARNING: eth1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
09:49:17.390979 02:02:0a:00:f8:02 (oui Unknown) > 02:02:0a:00:f8:03 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 502, p 0, ethertype IPv4, 10.0.248.2 > 10.0.248.3: ICMP echo request, id 19, seq 799, length 64
09:49:17.391509 02:02:0a:00:f8:03 (oui Unknown) > 02:02:0a:00:f8:02 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 502, p 0, ethertype IPv4, 10.0.248.3 > 10.0.248.2: ICMP echo reply, id 19, seq 799, length 64
09:49:18.390987 02:02:0a:00:f8:02 (oui Unknown) > 02:02:0a:00:f8:03 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 502, p 0, ethertype IPv4, 10.0.248.2 > 10.0.248.3: ICMP echo request, id 19, seq 800, length 64
09:49:18.391508 02:02:0a:00:f8:03 (oui Unknown) > 02:02:0a:00:f8:02 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 502, p 0, ethertype IPv4, 10.0.248.3 > 10.0.248.2: ICMP echo reply, id 19, seq 800, length 64
09:49:19.390993 02:02:0a:00:f8:02 (oui Unknown) > 02:02:0a:00:f8:03 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 502, p 0, ethertype IPv4, 10.0.248.2 > 10.0.248.3: ICMP echo request, id 19, seq 801, length 64
5 packets captured
5 packets received by filter
0 packets dropped by kernel
     

As you can see from the above output, the capture shows the VLAN tag used by the APP container's traffic. We can verify it with the previously explained command.

pod01-srv2
3
# This is the copy group: 3
netctl group inspect -t ContivTN01 conapp

[root@01-srv1 ~]# netctl group inspect -t ContivTN01 conapp
Inspeting endpointGroup: conapp tenant: ContivTN01
{
  "Config": {
    "key": "ContivTN01:conapp",
    "groupName": "conapp",
    "networkName": "ContivNet01",
    "tenantName": "ContivTN01",
    "link-sets": {
      "MatchRules": {
       "ContivTN01:app2db:1": {
          "type": "rule",
          "key": "ContivTN01:app2db:1"
        }
      }
    },
    "links": {
     "AppProfile": {
        "type": "appProfile",
        "key": "ContivTN01:APP-TN01"
      },
      "NetProfile": {},
      "Network": {
        "type": "network",
        "key": "ContivTN01:ContivNet01"
      },
      "Tenant": {
       "type": "tenant",
        "key": "ContivTN01"
      }
    }
  },
  "Oper": {
     "pktTag": 502
  }
}
     

Step 4 - Capturing Entire Packets (POD01-srv2)

Here are the steps in order to capture the entire packet leaving the container. This could be important when you are troubleshooting complex problems.

pod01-srv1
4
# This is the copy group: 4
netctl group inspect -t ContivTN01 conapp | grep endpointID

[root@01-srv1 ~]# netctl group inspect -t ContivTN01 conapp | grep endpointID

        "endpointID": "74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b",
     

 Warning!

In the next steps you will not be copying/pasting, since the values change per POD. Therefore, pay close attention to your own outputs to complete this section.

In this particular example the endpointID is 74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b.

Full Packet Capture Step 1 -- veth port

Once we gathered the endpointID, we need to get the veth port

 ovs-vsctl list interface | grep -A 14 74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b  | grep name
    
[root@pod01-srv2 ~]#  ovs-vsctl list interface | grep -A 14 74de87dfb60e35c24f77d4ac292441b4f6ce8acb45d62764bd24bf1be870b63b  | grep name
    name                : "vvport1"
    

In this particular example the veth port is vvport1.

Now we have everything necessary to capture the entire packet.

Full Packet Capture Step 2 -- TCPDUMP Command

Here is the tcpdump we will be using for this step:

    tcpdump -i vvport1  -vvv -e -XX -c 2
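    # What each flag does:
    #   -i vvport1  capture on the OVS/veth port identified above for this endpoint
    #   -vvv        most verbose protocol decoding
    #   -e          print the link-level header
    #   -XX         dump each packet, including the link-level header, in hex and ASCII
    #   -c 2        stop after two packets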
    

[root@pod01-srv2 ~]#   tcpdump -i vvport1  -vvv -e -XX -c 2
tcpdump: WARNING: vvport1: no IPv4 address assigned
tcpdump: listening on vvport1, link-type EN10MB (Ethernet), capture size 65535 bytes
10:45:10.227950 02:02:0a:00:f8:02 (oui Unknown) > 02:02:0a:00:f8:03 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 3888, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.248.2 > 10.0.248.3: ICMP echo request, id 19, seq 4151, length 64
        0x0000:  0202 0a00 f803 0202 0a00 f802 0800 4500  ..............E.
        0x0010:  0054 0f30 4000 4001 2773 0a00 f802 0a00  .T.0@.@.'s......
        0x0020:  f803 0800 b398 0013 1037 8677 a458 0000  .........7.w.X..
        0x0030:  0000 477a 0300 0000 0000 1011 1213 1415  ..Gz............
        0x0040:  1617 1819 1a1b 1c1d 1e1f 2021 2223 2425  ...........!"#$%
        0x0050:  2627 2829 2a2b 2c2d 2e2f 3031 3233 3435  &'()*+,-./012345
        0x0060:  3637                                     67
10:45:10.228591 02:02:0a:00:f8:03 (oui Unknown) > 02:02:0a:00:f8:02 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 42770, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.248.3 > 10.0.248.2: ICMP echo reply, id 19, seq 4151, length 64
        0x0000:  0202 0a00 f802 0202 0a00 f803 0800 4500  ..............E.
        0x0010:  0054 a712 0000 4001 cf90 0a00 f803 0a00  .T....@.........
        0x0020:  f802 0000 bb98 0013 1037 8677 a458 0000  .........7.w.X..
        0x0030:  0000 477a 0300 0000 0000 1011 1213 1415  ..Gz............
        0x0040:  1617 1819 1a1b 1c1d 1e1f 2021 2223 2425  ...........!"#$%
        0x0050:  2627 2829 2a2b 2c2d 2e2f 3031 3233 3435  &'()*+,-./012345
        0x0060:  3637                                     67
2 packets captured
2 packets received by filter
0 packets dropped by kernel
    



 Warning!

Make sure you are in root@pod01-srv1 during these steps.

External Connectivity

In the previous exercise, we created two containers (DB and APP) and they were able to communicate with each other via the rules that we added. During this step, we will create a new application that has connectivity to the "outside" world. In this case we will create a web server container.

Step 1 - Create External Rules (POD01-srv1)

External contracts are what allow communication between the webserver and the external world. If you think about it, this is the linkage that Contiv and ACI have to the outside world so that users can connect to the webserver.

It is important to note that we have already created a "common contract" in ACI called "Contiv_Contract". This contract is shared among all the tenants. In this particular case it is important to understand the flags of the netctl external-contracts command.

The external-contracts command has multiple flags:

  • -contract = the distinguished name of the contract that has already been defined in ACI

  • -p = the contract is provided (rather than consumed) by the group that references it

    pod01-srv1
    1
    # This is the copy group: 1
    netctl external-contracts create -t ContivTN01 -p -contract "uni/tn-common/brc-Contiv_Contract" webcontract
    pod01-srv1
    2
    # This is the copy group: 2
    netctl external-contracts ls -t ContivTN01
    pod01-srv1
    
    [root@pod01-srv1 ~]# netctl external-contracts ls -t ContivTN01
    Tenant         Name           Type        Contracts
    ------         ------         ------      -------
    ContivTN01     webcontract    provided    [uni/tn-common/brc-Contiv_Contract]
    

    Step 2 - Create Policy (POD01-srv1)

    pod01-srv1
    3
    # This is the copy group: 3
    netctl policy create -t ContivTN01 webapp
    pod01-srv1
    4
    # This is the copy group: 4
    netctl policy ls -t ContivTN01
    pod01-srv1
    
    [root@pod01-srv1 ~]# netctl policy ls -t ContivTN01
    Tenant          Policy
    ------          ------
    ContivTN01      app2db
    ContivTN01      webapp
    

    Step 3 - Create End Point Group WEB (POD01-srv1)

    pod01-srv1
    5
    # This is the copy group: 5
    netctl group create -t ContivTN01 -e webcontract -p webapp ContivNet01 conweb
    pod01-srv1
    6
    # This is the copy group: 6
    netctl group ls -t ContivTN01
    pod01-srv1
    
    [root@pod01-srv1 ~]# netctl group ls -t ContivTN01
    Tenant     Group   Network     Policies  Network profile
    ------     -----   -------     --------  ---------------
    ContivTN01  condb   ContivNet01  app2db
    ContivTN01  conapp  ContivNet01
    ContivTN01  conweb  ContivNet01  webapp
    

    Step 4 - Create Application Network Profile (ANP) (POD01-srv1)

    pod01-srv1
    7
    # This is the copy group: 7
    netctl app-profile create -t ContivTN01 -g conweb ANP01WEB
    pod01-srv1
    8
    # This is the copy group: 8
    netctl app-profile ls -t ContivTN01
    pod01-srv1
    
    [root@pod01-srv1 ~]# netctl app-profile ls -t ContivTN01
    
    Tenant         AppProfile  Groups
    ------         ----------  ------
    ContivTN01     APP-TN01    conapp,condb
    ContivTN01     ANP01WEB    conweb
    

    Step 5 - Start the container (POD01-srv1)

    pod01-srv1
    9
    # This is the copy group: 9
    docker run -itd -h=webserver --name=webserver --net=conweb/ContivTN01 cobedien/ltrcld-2003

    Let's make sure the webserver container has started.

    pod01-srv1
    10
    # This is the copy group: 10
    docker ps | grep webserver
        [root@pod01-srv1 ~]# docker ps | grep webserver
        6a3699c2b74b       cobedien/ltrcld-2003  "/bin/bash"   12 hours ago   Up 12 hours   pod01-srv1.ecatsrtpdmz.cisco.com/webserver
    
        

    Step 6 - Accessing the Webserver container (POD01-srv1)

    During this step we will access the container and start its web service.

    pod01-srv1
    11
    # This is the copy group: 11
    docker exec -it webserver service apache2 start
    [root@pod01-srv1 ~]# docker exec -it webserver service apache2 start
     * Starting web server apache2                                                                                 AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.0.248.202. Set the 'ServerName' directive globally to suppress this message
     *
    
    pod01-srv1
    12
    # This is the copy group: 12
    docker exec -it webserver /bin/bash
        [root@pod01-srv1 ~]# docker exec -it webserver /bin/bash
    root@6a3699c2b74b:/#
        

    Let's find out the IP address and default gateway that were assigned to the webserver container.

    pod01-srv1
    13
    # This is the copy group: 13
    ip a
    root@webserver:/# ip a
    
    1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    82: eth0@if81:  mtu 1450 qdisc noqueue state UP group default
        link/ether 02:02:0a:00:f8:ca brd ff:ff:ff:ff:ff:ff
        inet 10.0.248.2/29 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::2:aff:fe00:f8ca/64 scope link
           valid_lft forever preferred_lft forever
        
    pod01-srv1
    14
    # This is the copy group: 14
    netstat -rn
    root@webserver:/# netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.0.248.1    0.0.0.0         UG        0 0          0 eth0
    10.0.248.200    0.0.0.0         255.255.255.248 U         0 0          0 eth0
        

    Let's start a continuous ping from the container to the default gateway so that the ACI fabric learns about this new container:

    pod01-srv1
    15
    # This is the copy group: 15
    ping 10.0.248.1
    root@6a3699c2b74b:/# ping 10.0.248.1
    PING 10.0.248.1 (10.0.248.1) 56(84) bytes of data.
    64 bytes from 10.0.248.1: icmp_seq=1 ttl=63 time=0.464 ms
        

    ACI - Connecting to the outside world

    Now that we have created the webserver, it is time to connect the conweb EPG to the external network. To do that, we need to make a few changes in ACI.

    Step 7 - Connect to APIC GUI via web

    Log in to the APIC interface with your lab credentials at:

    http://10.0.226.41

    Step 8 - VRF change

     Warning!

    Make sure you click on YOUR tenant/POD number -- ContivTN01. You may need to go to the next page to find your tenant.

    Once you are inside your tenant ContivTN01, modify the VRF, which is under the Bridge Domain, to default in the common tenant.

    Step 9 - Add L3 Out

    The last step for us to connect to the outside world is to add the L3 Out from the common tenant.

     Warning!

    Make sure to select Contiv -> Common

    Step 10 - View Web Server from the RDP Console

    Using the same Chrome browser you have been using to check ACI, open a new tab and point it to the IP address assigned to the webserver container, 10.0.248.4
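    If you prefer to verify from the command line first, a quick check from pod01-srv1 also works (a minimal sketch; substitute the IP address your container was actually assigned):

    curl -s http://10.0.248.4 | head -n 20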


    Summary

    Congratulations!!! You have completed LTRCLD-2003 - Contiv Installation and Integration with ACI.

    During this lab you completed several tasks that showed the value of integrating ACI and Contiv in a microservices architecture. Contiv creates policy-based container networking to secure your application through a rich policy framework, while ACI addresses the use cases of infrastructure automation, application-aware infrastructure, scale-out models, and dynamic applications, which are key pillars of modern microservices architectures.

    The combination of Contiv and ACI allows any application to be deployed in a secure, reliable, and automated manner that no other technology in the industry can match.

    We hope this lab was beneficial to you! It is challenging to cover every aspect of Contiv and ACI integration in just 4 hours, so we are giving you this lab to take home. The lab is accessible from anywhere at the URL:

    http://contiv.ciscolive.com

    We have also included additional reference sections containing material you may find interesting.

    Complete your session evaluation!

    • Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 gift card
    • Complete your session surveys through the Cisco Live mobile app or on www.ciscolive.com/us
    • Don’t forget: Cisco Live sessions will be available for viewing on demand after the event at www.ciscolive.com/online .

    Contiv Troubleshooting Document

    Information about this installation

    1: netplugin and netmaster - v1.0.0-alpha-01-28-2017.10-23-11.UTC

    netplugin --version or netmaster --version will give you the version of each component running in this setup.

    2: etcd - 2.3.7

    3: Docker Swarm - 1.2.5

    4: OpenVSwitch - 2.3.1-2.el7

    5: Docker Engine - 1.12.6

    Troubleshooting Techniques:

    1: Make sure you have passwordless SSH setup.

    To set up passwordless SSH, see: http://twincreations.co.uk/pre-shared-keys-for-ssh-login-without-password/

    If you have three nodes (Node1, Node2, and Node3), make sure you can SSH without a password from each of the following (see the sketch after this list):

    Node1 to Node1

    Node1 to Node2

    Node1 to Node3
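    A minimal sketch of the key distribution, assuming the root user and default key paths (adjust the hostnames to match your nodes):

    ssh-keygen -t rsa                # accept the defaults and an empty passphrase
    ssh-copy-id root@node1           # repeat for every node, including the local one
    ssh-copy-id root@node2
    ssh-copy-id root@node3
    ssh root@node2 hostname          # should return without prompting for a password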

    2: Make sure your etcd cluster is healthy

    sudo etcdctl cluster-health
    member 903d536c85a35515 is healthy: got healthy result from http://10.193.231.222:2379
    member fa77f6921bc496d6 is healthy: got healthy result from http://10.193.231.245:2379
    cluster is healthy
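    If a member reports as unhealthy, a hedged sketch of the usual first checks (assuming etcd runs as a systemd service on each node):

    sudo systemctl status etcd                        # is the etcd service running on the affected node?
    sudo journalctl -u etcd --since "10 min ago"      # look for recent errors in the etcd logs
    sudo systemctl restart etcd                       # restart the member, then re-check health
    sudo etcdctl cluster-health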
    

    3: Make sure your docker swarm cluster is healthy

    When you run the docker info command, you should see all the nodes that are currently in your cluster.

    For example, here is sample output of the docker info command:

    docker info
    Containers: 8
     Running: 8
     Paused: 0
     Stopped: 0
    Images: 12
    Server Version: swarm/1.2.0
    Role: replica
    Primary: 10.193.231.245:2375
    Strategy: spread
    Filters: health, port, dependency, affinity, constraint
    Nodes: 2
     localhost: 10.193.231.222:2385
      └ Status: Healthy
      └ Containers: 4
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 10.09 GiB
      └ Labels: executiondriver=, kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
      └ Error: (none)
      └ UpdatedAt: 2016-12-25T06:39:35Z
      └ ServerVersion: 1.11.1
     netmaster: 10.193.231.245:2385
      └ Status: Healthy
      └ Containers: 4
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 10.09 GiB
      └ Labels: executiondriver=, kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
      └ Error: (none)
      └ UpdatedAt: 2016-12-25T06:39:44Z
      └ ServerVersion: 1.11.1
    Plugins:
     Volume:
     Network:
    Kernel Version: 3.10.0-514.2.2.el7.x86_64
    Operating System: linux
    Architecture: amd64
    CPUs: 2
    Total Memory: 20.18 GiB
    Name: 95720f4214ca
    Docker Root Dir:
    Debug mode (client): false
    Debug mode (server): false
    WARNING: No kernel memory limit support
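    If docker info on a node does not show the Swarm role and node list, point the Docker client explicitly at the Swarm manager shown as Primary above (a minimal sketch; substitute your own manager address):

    docker -H tcp://10.193.231.245:2375 info | grep -E 'Role|Primary|Nodes'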

    4: Netmaster error:

    If you see something like this:

    TASK [contiv_network : wait for netmaster to be ready] *************************
    FAILED - RETRYING: TASK: contiv_network : wait for netmaster to be ready (9 retries left).
    FAILED - RETRYING: TASK: contiv_network : wait for netmaster to be ready (9 retries left).
    FAILED - RETRYING: TASK: contiv_network : wait for netmaster to be ready (8 retries left).
    FAILED - RETRYING: TASK: contiv_network : wait for netmaster to be ready (8 retries left).
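    Before cleaning up, it can help to confirm why netmaster is not responding (a hedged sketch; netmaster running as a systemd unit and listening on port 9999 are assumptions about this install):

    sudo systemctl status netmaster                     # is the netmaster service running?
    sudo journalctl -u netmaster --since "10 min ago"   # recent netmaster log messages
    ss -tlnp | grep 9999                                # 9999 is the default netmaster API port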

    Please make sure you clean up everything and try running the script again.

    Cleanup Commands:

    sudo ovs-vsctl del-br contivVxlanBridge
    sudo ovs-vsctl del-br contivVlanBridge
    for p in `ifconfig  | grep vport | awk '{print $1}'`; do sudo ip link delete $p type veth; done
    sudo rm /var/run/docker/plugins/netplugin.sock
    sudo etcdctl rm --recursive /contiv
    sudo etcdctl rm --recursive /contiv.io
    sudo etcdctl rm --recursive /skydns
    sudo etcdctl rm --recursive /docker
    curl -X DELETE localhost:8500/v1/kv/contiv.io?recurse=true
    curl -X DELETE localhost:8500/v1/kv/docker?recurse=true
    
    -- Docker Cleanup Steps
    sudo docker kill -s 9 $(sudo docker ps -q)
    sudo docker rm -fv $(sudo docker ps -a -q)
    sudo systemctl stop docker
    for i in $(mount | grep docker | awk '{ print $3 }'); do sudo umount $i || true; done
    sudo umount /var/lib/docker/devicemapper || true
    sudo yum -y remove docker-engine.x86_64
    sudo yum -y remove docker-engine-selinux.noarch
    sudo rm -rf /var/lib/docker
    If the above command does not execute, please reboot the machine and try again.
    
    -- Uninstall etcd
    
    sudo systemctl stop etcd
    sudo rm -rf /usr/bin/etcd*
    sudo rm -rf /var/lib/etcd*
    

    5: Regarding cfg.yml

    Please make sure that the correct data and control interfaces are entered in the cfg.yml file, and verify the APIC details as well.
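    For orientation, a sketch of what the relevant cfg.yml entries typically look like for the Contiv installer; the key names and values below are illustrative assumptions, so verify them against the cfg.yml shipped with this lab:

    CONNECTION_INFO:
      10.0.236.75:                 # node management IP (example value)
        control: eth0              # interface used for control traffic (assumption)
        data: eth1                 # interface connected to the ACI leaf (assumption)
    APIC_URL: "https://10.0.226.41:443"
    APIC_USERNAME: "admin"
    APIC_PASSWORD: "<password>"
    APIC_PHYS_DOMAIN: "<physical-domain-name>"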

    6: Regarding topology of ACI:

    You can provide the ACI topology information to Contiv in the following manner:

    APIC_LEAF_NODES:
        - topology/pod-1/node-101
        - topology/pod-1/node-102
        - topology/pod-1/paths-101/pathep-[eth1/14]
        

    7: Correct version of aci-gw container:

    Make sure that you are using the correct aci-gw version.

    If you are using APIC 2.1_1h, then you should be using
    
    contiv/aci-gw:02-02-2017.2.1_1h
    
    otherwise please use
    
    contiv/aci-gw:latest (the script uses this by default)
    
    The docker ps command will show you which aci-gw version is running on the nodes in your cluster.
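    For example, a quick way to check on any node (a minimal sketch; the IMAGE column of the output shows the aci-gw tag in use):

    docker ps | grep aci-gw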
    

    8: Troubleshooting Datapath of Contiv:

    To find the container ID from the container name:

    Creating container:
    docker run -itd --net="n1/t1" --name=testContainer alpine sh
    
    Finding ID of container:
    docker ps | grep test
    6e329f08fbf0        alpine              "sh"                     28 seconds ago      Up 26 seconds                                test
    

    Find the endpoint ID using the netctl command:

    netctl endpoint inspect 6e329f08fbf0
    Inspecting endpoint: 6e329f08fbf0
    {
      "Oper": {
        "containerID": "6e329f08fbf0646ab7952e360ba9e36a4900bea31b665a1d6c0b9b6373eb4476",
        "containerName": "/test",
        "endpointGroupId": 1,
        "endpointGroupKey": "g1:t1",
        "endpointID": "dc6ddf624ffd68f234806403e66b1afc24009fa4f8a182a7a2f617820451537b",
        "homingHost": "netplugin-node1",
        "ipAddress": [
          "20.1.1.1",
          ""
        ],
        "labels": "map[]",
        "macAddress": "02:02:14:01:01:01",
        "network": "n1.t1",
        "serviceName": "g1"
      }
    }
    [vagrant@netplugin-node1 netplugin]$ netctl endpoint inspect 6e329f08fbf0 | grep endpointID
        "endpointID": "dc6ddf624ffd68f234806403e66b1afc24009fa4f8a182a7a2f617820451537b",

    Find the name of the veth port by matching the endpointID:

    sudo ovs-vsctl list interface | grep -A 14 dc6ddf624ffd68f234806403e66b1afc24009fa4f8a182a7a2f617820451537b | grep name
    name                : "vvport1"
    

    Dump the flow entries in OVS

    sudo ovs-ofctl -O Openflow13 dump-flows contivVlanBridge
    OFPST_FLOW reply (OF1.3) (xid=0x2):
     cookie=0x1e, duration=529.057s, table=0, n_packets=0, n_bytes=0, priority=101,udp,dl_vlan=4093,dl_src=02:02:00:00:00:00/ff:ff:00:00:00:00,tp_dst=53 actions=pop_vlan,goto_table:1
     cookie=0x1c, duration=529.057s, table=0, n_packets=0, n_bytes=0, priority=100,arp,arp_op=1 actions=CONTROLLER:65535
     cookie=0x22, duration=528.031s, table=0, n_packets=0, n_bytes=0, priority=102,udp,in_port=2,tp_dst=53 actions=goto_table:1
     cookie=0x20, duration=528.031s, table=0, n_packets=0, n_bytes=0, priority=102,udp,in_port=1,tp_dst=53 actions=goto_table:1
     cookie=0x1a, duration=529.057s, table=0, n_packets=2102, n_bytes=260304, priority=1 actions=goto_table:1
     cookie=0x1d, duration=529.057s, table=0, n_packets=0, n_bytes=0, priority=100,udp,dl_src=02:02:00:00:00:00/ff:ff:00:00:00:00,tp_dst=53 actions=CONTROLLER:65535
     cookie=0x1b, duration=529.057s, table=1, n_packets=0, n_bytes=0, priority=1 actions=goto_table:3
     cookie=0x2d, duration=172.804s, table=1, n_packets=8, n_bytes=648, priority=10,in_port=3 actions=write_metadata:0x100010000/0xff7fff0000,goto_table:2
     cookie=0x21, duration=528.031s, table=1, n_packets=0, n_bytes=0, priority=100,in_port=1 actions=goto_table:5
     cookie=0x23, duration=528.031s, table=1, n_packets=0, n_bytes=0, priority=100,in_port=2 actions=goto_table:5
     cookie=0x19, duration=529.057s, table=2, n_packets=8, n_bytes=648, priority=1 actions=goto_table:3
     cookie=0x17, duration=529.058s, table=3, n_packets=8, n_bytes=648, priority=1 actions=goto_table:4
     cookie=0x2e, duration=172.804s, table=3, n_packets=0, n_bytes=0, priority=100,ip,metadata=0x100000000/0xff00000000,nw_dst=20.1.1.1 actions=write_metadata:0x2/0xfffe,goto_table:4
     cookie=0x18, duration=529.057s, table=4, n_packets=8, n_bytes=648, priority=1 actions=goto_table:5
     cookie=0x16, duration=529.058s, table=5, n_packets=8, n_bytes=648, priority=1 actions=goto_table:7
     cookie=0x1f, duration=529.057s, table=7, n_packets=8, n_bytes=648, priority=1 actions=NORMAL
    

    To find the uplink ports:

    sudo ovs-vsctl  list interface | grep name | grep "eth[0-9]"
    name                : "eth2"
    name                : "eth3"

    9: ACI Ping problem:

    If ping is not working in the ACI environment, check whether eth1 is enabled.

    ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:50:56:0c:02:27 brd ff:ff:ff:ff:ff:ff
        inet 10.0.236.75/24 brd 10.0.236.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::250:56ff:fe0c:227/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:50:56:8c:7f:04 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::e9dd:85af:62b3:370f/64 scope link 
           valid_lft forever preferred_lft forever

    Also, if eth1 is down, you should see something like this in the log:

    grep "No active interface on uplink. Not reinjecting ARP request pkt" /var/log/messages

    10: Packet capture:

    Let us create tenant t1 and network n1 like this:

    netctl tenant create t1
    netctl network create --tenant t1 --subnet=20.1.1.0/24 --gateway=20.1.1.254 -e "vlan" -p 287 n1

    We are making sure that packets coming from network n1 will carry VLAN 287. Let us verify that.

    1: Check the Docker networks:

    docker network ls
    NETWORK ID          NAME                     DRIVER              SCOPE
    25a009848ff5        n1/t1                    netplugin           global ----  network created by contiv
    7a64108b10a9        netplugin-node1/bridge   bridge              local
    596e79eac171        netplugin-node1/host     host                local
    563e91af3c9e        netplugin-node1/none     null                local
    52ff8f75afaf        netplugin-node2/bridge   bridge              local
    5898b9cd7bd3        netplugin-node2/host     host                local
    197fbc0f0c67        netplugin-node2/none     null                local
    1f33e8e07f67        netplugin-node3/bridge   bridge              local
    145542b9169c        netplugin-node3/host     host                local
    ce75be00b5d7        netplugin-node3/none     null                local

    2: Create docker containers on this network.

    docker run -itd --net="n1/t1" --name=test1 alpine sh
    docker run -itd --net="n1/t1" --name=test2 alpine sh

    Now the test2 container has the IP address 20.1.1.2 and test1 has 20.1.1.1. Let us log in to container test2 and ping the test1 container's IP address:

    docker exec -it test2 sh
    / # ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 02:02:14:01:01:02
              inet addr:20.1.1.2  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::2:14ff:fe01:102/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
              RX packets:42 errors:0 dropped:0 overruns:0 frame:0
              TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:3868 (3.7 KiB)  TX bytes:3868 (3.7 KiB)
    
    / # ping 20.1.1.1
    PING 20.1.1.1 (20.1.1.1): 56 data bytes
    64 bytes from 20.1.1.1: seq=0 ttl=64 time=3.371 ms
    64 bytes from 20.1.1.1: seq=1 ttl=64 time=4.668 ms

    Now, on the host where the test1 container is scheduled (you can use the docker inspect command to find that out), capture the traffic on the data interface:

    sudo tcpdump -e -i eth3 icmp
    tcpdump: WARNING: eth3: no IPv4 address assigned
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on eth3, link-type EN10MB (Ethernet), capture size 65535 bytes
    00:04:31.048495 02:02:14:01:01:01 (oui Unknown) > 02:02:14:01:01:02 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 287, p 0, ethertype IPv4, 20.1.1.1 > 20.1.1.2: ICMP echo reply, id 7680, seq 5, length 64
    00:04:32.049022 02:02:14:01:01:01 (oui Unknown) > 02:02:14:01:01:02 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 287, p 0, ethertype IPv4, 20.1.1.1 > 20.1.1.2: ICMP echo reply, id 7680, seq 6, length 64
    00:04:33.049654 02:02:14:01:01:01 (oui Unknown) > 02:02:14:01:01:02 (oui Unknown), ethertype 802.1Q (0x8100), length 102: vlan 287, p 0, ethertype IPv4, 20.1.1.1 > 20.1.1.2: ICMP echo reply, id 7680, seq 7, length 64
    

    As you can clearly see, the VLAN tag is 287. On this host, eth3 is the data interface where the packet arrived from the other host through the switch. Once the packet reaches the host's data interface, OVS and Contiv ensure that it is delivered to the correct container.

    You can also see that the flow counters on the bridge increase as you ping.

    Before :

    sudo ovs-ofctl -O Openflow13 dump-flows contivVlanBridge
    OFPST_FLOW reply (OF1.3) (xid=0x2):
     cookie=0x1e, duration=2376.096s, table=0, n_packets=0, n_bytes=0, priority=101,udp,dl_vlan=4093,dl_src=02:02:00:00:00:00/ff:ff:00:00:00:00,tp_dst=53 actions=pop_vlan,goto_table:1
     cookie=0x1c, duration=2376.097s, table=0, n_packets=20, n_bytes=904, priority=100,arp,arp_op=1 actions=CONTROLLER:65535
     cookie=0x20, duration=2375.069s, table=0, n_packets=0, n_bytes=0, priority=102,udp,in_port=2,tp_dst=53 actions=goto_table:1
     cookie=0x22, duration=2375.069s, table=0, n_packets=0, n_bytes=0, priority=102,udp,in_port=1,tp_dst=53 actions=goto_table:1
     cookie=0x1a, duration=2376.097s, table=0, n_packets=8682, n_bytes=1073800, priority=1 actions=goto_table:1

    After :

    sudo ovs-ofctl -O Openflow13 dump-flows contivVlanBridge
    OFPST_FLOW reply (OF1.3) (xid=0x2):
     cookie=0x1e, duration=2441.766s, table=0, n_packets=0, n_bytes=0, priority=101,udp,dl_vlan=4093,dl_src=02:02:00:00:00:00/ff:ff:00:00:00:00,tp_dst=53 actions=pop_vlan,goto_table:1
     cookie=0x1c, duration=2441.767s, table=0, n_packets=20, n_bytes=904, priority=100,arp,arp_op=1 actions=CONTROLLER:65535
     cookie=0x20, duration=2440.739s, table=0, n_packets=0, n_bytes=0, priority=102,udp,in_port=2,tp_dst=53 actions=goto_table:1
     cookie=0x22, duration=2440.739s, table=0, n_packets=0, n_bytes=0, priority=102,udp,in_port=1,tp_dst=53 actions=goto_table:1
     cookie=0x1a, duration=2441.767s, table=0, n_packets=8844, n_bytes=1093696, priority=1 actions=goto_table:1
     cookie=0x1d, duration=2441.766s, table=0, n_packets=0, n_bytes=0, priority=100,udp,dl_src=02:02:00:00:00:00/ff:ff:00:00:00:00,tp_dst=53 actions=CONTROLLER:65535
     cookie=0x1b, duration=2441.767s, table=1, n_packets=0, n_bytes=0, priority=1 actions=goto_table:3
     cookie=0x2e, duration=1172.253s, table=1, n_packets=56, n_bytes=5352, priority=10,in_port=3 actions=write_metadata:0x100000000/0xff00000000,goto_table:2
     cookie=0x23, duration=2440.739s, table=1, n_packets=56, n_bytes=5576, priority=100,in_port=1 actions=goto_table:5
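    To watch only the packet counters change while the ping is running, a minimal sketch using standard tools:

    watch -n 2 "sudo ovs-ofctl -O Openflow13 dump-flows contivVlanBridge | grep -o 'n_packets=[0-9]*'"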

    To capture the full packet and display it in bytes:

    sudo tcpdump -i vvport1 -vvv -e -XX
    tcpdump: WARNING: vvport1: no IPv4 address assigned
    tcpdump: listening on vvport1, link-type EN10MB (Ethernet), capture size 65535 bytes
    00:09:50.157540 02:02:14:01:01:02 (oui Unknown) > 02:02:14:01:01:01 (oui Unknown), ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 46251, offset 0, flags [DF], proto ICMP (1), length 84)
        20.1.1.2 > 20.1.1.1: ICMP echo request, id 9472, seq 63, length 64
        0x0000:  0202 1401 0101 0202 1401 0102 0800 4500  ..............E.
        0x0010:  0054 b4ab 4000 4001 5bf9 1401 0102 1401  .T..@.@.[.......
        0x0020:  0101 0800 5d0b 2500 003f ebf9 89bb 0000  ....].%..?......
        0x0030:  0000 0000 0000 0000 0000 0000 0000 0000  ................
        0x0040:  0000 0000 0000 0000 0000 0000 0000 0000  ................
        0x0050:  0000 0000 0000 0000 0000 0000 0000 0000  ................
        0x0060:  0000                                     ..

    Here, vvport1 corresponds to the test1 container running on that host. Contiv creates a virtual Ethernet interface (vvport1, vvport2, etc.) for each container running on that host.