Warning!

Make sure you are in root@pod01-srv1 during these steps.

Start Containers

Step 1 - Start up Containers (POD01-srv1)

Now it is time to create some containers and see how ACI and Contiv work together from an integration point of view.

The first container we are going to add to the conapp EPG:

pod01-srv1
docker run -itd -h=app --name=app --net=conapp/ContivTN01 cobedien/ltrcld-2003
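Here -itd runs the container detached with an interactive TTY, -h sets the container hostname, --name names the container, and --net attaches it to the conapp EPG in the ContivTN01 tenant. As an optional sanity check (a sketch, assuming the Contiv plugin registers its networks with the Docker daemon), you can list those networks:

docker network ls | grep ContivTN01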

The second container we are going to add to the condb EPG:

pod01-srv1
docker run -itd -h=db --name=db --net=condb/ContivTN01 cobedien/ltrcld-2003

Let's make sure the two containers have started:

pod01-srv1
docker ps | grep ltrcld-2003
    [root@pod01-srv1 ~]# docker ps | grep ltrcld-2003
    7ff6caba4723   cobedien/ltrcld-2003   "sleep 6000"   12 days ago   Up About a minute   pod01-srv1.ecatsrtpdmz.cisco.com/db
    8b3de8d9ad09   cobedien/ltrcld-2003   "sleep 6000"   12 days ago   Up 3 minutes        pod01-srv2.ecatsrtpdmz.cisco.com/app
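Optionally, you can confirm which network each container joined using standard docker inspect formatting:

docker inspect --format '{{json .NetworkSettings.Networks}}' app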

    

Step 2 - Accessing the APP Container (POD01-srv1)

During this step we will be accessing the APP container that we started in the previous step. For the purposes of this step we will access the APP container from [root@pod01-srv1 ~]#, but in a production environment you would be able to access any container from any worker node.

The idea is to show both containers working at the same time. We will access the DB container in the next step.

pod01-srv1
docker exec -it app /bin/bash
[root@pod01-srv1 ~]# docker exec -it app /bin/bash
root@app:/# 
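Your prompt changes to root@app:/#, confirming you are inside the container. Keep this session open; in Step 11 we will detach from it with Ctrl+P followed by Ctrl+Q, which exits the shell without stopping the container.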
    

Step 3 - Exporting DOCKER_HOST (POD01-srv2)

Warning!

Make sure you are in root@pod01-srv2 during these steps.

In order to see the containers from pod01-srv2, we need to export the DOCKER_HOST environment variable:

pod01-srv2
export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
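This points the Docker client on pod01-srv2 at the Docker daemon on pod01-srv1 (TCP port 2375). To confirm the variable is set in the current shell:

echo $DOCKER_HOST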
Append the same export to .bashrc so it persists across future logins:

pod01-srv2

cd ~ && sed -i -e '$a export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375' .bashrc
Verify the line was appended:

pod01-srv2

cat .bashrc
[root@pod01-srv2 ~]# cat .bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
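The Docker client on pod01-srv2 should now reach the daemon on pod01-srv1, and the same two containers from Step 1 should be visible from here:

docker ps | grep ltrcld-2003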
    

Step 4 - Entering the DB Container (POD01-srv2)

pod01-srv2
docker exec -it db /bin/bash
 [root@pod01-srv2 ~]# docker exec -it db /bin/bash
root@db:/# 
    

Explore Container Information

Now that we have both containers up and running, let's discover the IP address and default gateway that were assigned by the IPAM.

Step 5 - APP Container (POD01-srv1)

pod01-srv1
ip a
root@app:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
41: eth0@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue
    link/ether 02:02:04:01:01:01 brd ff:ff:ff:ff:ff:ff
    inet 10.0.248.2/29 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2:4ff:fe01:101/64 scope link
       valid_lft forever preferred_lft forever
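The IPAM assigned 10.0.248.2/29 to eth0. To display just the IPv4 configuration of that interface, standard iproute2 usage works as well:

ip -4 addr show dev eth0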
    
pod01-srv1
netstat -rn
root@app:/# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask          Flags   MSS Window  irtt Iface
0.0.0.0         10.0.248.1      0.0.0.0          UG        0 0          0 eth0
10.0.248.0      0.0.0.0         255.255.255.248  U         0 0          0 eth0
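The default gateway, 10.0.248.1, was also assigned by the IPAM. If an image lacks netstat, the same kernel routing table can be read with:

ip route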
    

Step 6 - DB Container (POD01-srv2)

pod01-srv2
ip a
root@db:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
41: eth0@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue
    link/ether 02:02:04:01:01:01 brd ff:ff:ff:ff:ff:ff
    inet 10.0.248.3/29 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2:4ff:fe01:101/64 scope link
       valid_lft forever preferred_lft forever
    
pod01-srv2
netstat -rn
root@db:/# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask          Flags   MSS Window  irtt Iface
0.0.0.0         10.0.248.1      0.0.0.0          UG        0 0          0 eth0
10.0.248.0      0.0.0.0         255.255.255.248  U         0 0          0 eth0
    

Step 7 - Test Connectivity from the DB container (POD01-srv2)

Now that we have both containers up and running, let's test connectivity to make sure the containers can ping the default gateway and each other.

Ping the Default Gateway

pod01-srv2
ping -c 5 10.0.248.1
root@db:/# ping -c 5 10.0.248.1
PING 10.0.248.1 (10.0.248.1): 56 data bytes
64 bytes from 10.0.248.1: seq=0 ttl=63 time=0.711 ms
64 bytes from 10.0.248.1: seq=1 ttl=63 time=0.278 ms
64 bytes from 10.0.248.1: seq=2 ttl=63 time=0.255 ms
64 bytes from 10.0.248.1: seq=3 ttl=63 time=0.246 ms
64 bytes from 10.0.248.1: seq=4 ttl=63 time=0.284 ms

--- 10.0.248.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.246/0.354/0.711 ms
    

Ping the APP container

pod01-srv2
ping -c 5 10.0.248.2
root@db:/# ping -c 5 10.0.248.2
PING 10.0.248.2 (10.0.248.2): 56 data bytes

--- 10.0.248.2 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
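The 100% packet loss is expected at this point; the reason is discussed in the NOTE after Step 10.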
    

Step 8 - iPerf Testing (POD01-srv2)

iPerf is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers, and protocols (TCP and UDP, over IPv4 and IPv6). For each test it reports the bandwidth, loss, and other parameters.

For more information about iPerf visit https://en.wikipedia.org/wiki/Iperf

We are going to enable iPerf in the DB container.

pod01-srv2
iperf -s -p 6379
root@db:/#  iperf -s -p 6379
------------------------------------------------------------
Server listening on TCP port 6379
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
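The iPerf server runs in the foreground, so leave this shell open. To double-check the listener you can open a second shell into the DB container (netstat is available in this image, as used earlier):

docker exec -it db netstat -tln | grep 6379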
    

NOTE: The DB container is now waiting for connections on TCP port 6379.

Step 9 - Test Connectivity from the APP container (POD01-srv1)

Let's verify the APP container connectivity.

Ping the Default Gateway

pod01-srv1
ping -c 5 10.0.248.1
root@app:/# ping -c 5 10.0.248.1
PING 10.0.248.1 (10.0.248.1): 56 data bytes
64 bytes from 10.0.248.1: seq=0 ttl=63 time=0.711 ms
64 bytes from 10.0.248.1: seq=1 ttl=63 time=0.278 ms
64 bytes from 10.0.248.1: seq=2 ttl=63 time=0.255 ms
64 bytes from 10.0.248.1: seq=3 ttl=63 time=0.246 ms
64 bytes from 10.0.248.1: seq=4 ttl=63 time=0.284 ms

--- 10.0.248.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.246/0.354/0.711 ms
    

Ping the DB container

pod01-srv1
ping -c 5 10.0.248.3
root@app:/# ping -c 5 10.0.248.3
PING 10.0.248.3 (10.0.248.3): 56 data bytes

--- 10.0.248.3 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
    

Step 10 - Netcat Testing (POD01-srv1)

The nc (netcat) utility is used for just about anything involving TCP or UDP. It can open TCP connections, send UDP packets, listen on arbitrary TCP and UDP ports, do port scanning, and deal with both IPv4 and IPv6.

For more information about netcat visit https://en.wikipedia.org/wiki/Netcat

If you recall, we started iPerf in the DB container on TCP port 6379. Now we are going to send a TCP connection request to that port on the DB container.

pod01-srv1
nc -zvnw 1 10.0.248.3 6379
root@app:/# nc -zvnw 1 10.0.248.3 6379
Connection to 10.0.248.3 6379 port [tcp/*] succeeded!
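As a contrast, and purely as an illustration (assuming no contract between these EPGs permits other ports), probing a port the policy does not allow should fail rather than succeed:

nc -zvnw 1 10.0.248.3 80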
    

You can verify the connection was successful by going back to the DB container:

root@db:/# iperf -s -p 6379
------------------------------------------------------------
Server listening on TCP port 6379
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.248.3 port 6379 connected with 10.0.248.2 port 46924
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 0.0 sec  0.00 Bytes  0.00 bits/sec
    

NOTE: Why can the containers ping the default gateway but not each other, even though they are in the same subnet? Why is the APP container able to communicate with the DB container on port 6379?

This is how Contiv and ACI work together to create a policy-based container network, providing an automated way to deploy microservices workloads and applications. In the following sections we will discuss this concept and show how to modify the policy so that the APP and DB containers can ping each other.

Step 11 - Exiting the APP container (POD01-srv1)

If you recall, in the previous section we left the container running while detaching from its shell. We will perform the same key sequence in this step.

Press Ctrl+P followed by Ctrl+Q.

© Copyright Cisco Systems 2017