Contiv and ACI Automated Policy

As you noticed in the previous section, you were not able to ping between the APP container and the DB container. If you recall, during the creation of your tenant we created a policy that only allows TCP port 6379.

pod01-srv1

[root@pod01-srv1 ~]# netctl policy rule-ls app2db -t ContivTN01
Incoming Rules:
Rule  Priority  From EndpointGroup  From Network  From IpAddress  Protocol  Port  Action
----  --------  ------------------  ------------  ---------       --------  ----  ------
1     1         conapp                                            tcp       6379  allow
Outgoing Rules:
Rule  Priority  To EndpointGroup  To Network  To IpAddress  Protocol  Port  Action
----  --------  ----------------  ----------  ---------     --------  ----  ------
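For reference, a rule like this is typically created with netctl's policy rule-add command. The sketch below is illustrative only: the policy and rule already exist in the lab, and the --port option is assumed to be available in your netctl build.

# Illustrative only -- the app2db policy and its rule 1 already exist in the lab.
# Create the policy under the tenant, then add an incoming rule that allows
# TCP 6379 from the conapp EPG (--port is assumed to be supported).
netctl policy create -t ContivTN01 app2db
netctl policy rule-add -t ContivTN01 -d in --protocol tcp --port 6379 --from-group conapp --action allow app2db 1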

In this section we are going to cover how Contiv and ACI work together to provide the automated policy that every data center requires.

APIC Navigation

 Warning!

Make no changes to the APIC controller via the GUI. The lab builds on this infrastructure, and keeping it operational is key to completing the remaining exercises.

The APIC infrastructure has already been configured for you. This section shows you the steps to follow in order to understand how Contiv and ACI provide an automated policy.

Step 2 - Connect to APIC GUI via web

You can log in to the interface with the credentials provided:

http://10.0.226.41

Step 3 - POD Tenant

A tenant is a logical container for application policies that enable an administrator to exercise domain-based access control. A tenant represents a unit of isolation from a policy perspective, but it does not represent a private network. Tenants can represent a customer in a service provider setting, an organization or domain in an enterprise setting, or just a convenient grouping of policies.

Tenants can be isolated from one another or can share resources. The primary elements that the tenant contains are filters, contracts, outside networks, bridge domains, contexts, and application profiles that contain endpoint groups (EPGs). Entities in the tenant inherit its policies. A tenant can contain one or more virtual routing and forwarding (VRF) instances or contexts; each context can be associated with multiple bridge domains. Tenants are logical containers for application policies. The fabric can contain multiple tenants.
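On the Contiv side, the ACI tenant you are about to open corresponds to the Contiv tenant you worked with earlier. A minimal sketch of how such a tenant and its network are typically created is shown below; it is illustrative only, since the tenant already exists, and the network name and gateway address are placeholders rather than the lab values.

# Illustrative only -- the tenant and network already exist in the lab.
# Create the Contiv tenant and a network inside it; "contiv-net" and the
# gateway address are placeholders.
netctl tenant create ContivTN01
netctl net create -t ContivTN01 -s 10.0.248.0/24 -g 10.0.248.1 contiv-net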

 Warning!

Make sure you click on YOUR tenant/POD number (ContivTN01). You may need to go to the next page to find your tenant.

Step 4 - Application Network Profile

An Application Network Profile is a collection of EPGs, their connections, and the policies that define those connections. Application Network Profiles are the logical representation of an application and its inter-dependencies in the network fabric.

Application Network Profiles are designed to be modeled in a logical way that matches the way that applications are designed and deployed. The configuration and enforcement of policies and connectivity is handled by the system rather than manually by an administrator.

You should see the same Application Network Profile as shown above. The diagram shows the following components: the endpoint groups in this profile and the contract that connects them, which are described in the next two steps.
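On the Contiv side, the ANP that appears in APIC is generated from a Contiv application profile that groups the EPGs together. A minimal sketch is shown below, assuming an app-profile named contiv-app-profile and the -g flag for listing groups; the profile already exists in the lab, so this is for reference only.

# Illustrative only -- the application profile already exists in the lab.
# A Contiv app-profile ties EPGs together and is pushed to ACI as an ANP;
# the profile name here is an assumption, not necessarily the lab value.
netctl app-profile create -t ContivTN01 -g conapp,condb contiv-app-profile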

Step 5 - End Point Groups

As mentioned above, an EPG is a collection of similar endpoints (EPs). APIC has knowledge of every EP that is attached to the fabric. This is very important: with this information you can determine the location of hosts in order to troubleshoot and, more importantly, to map the application.
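In Contiv terms, each of these EPGs is a group attached to a network, optionally with a policy bound to it. A minimal sketch of how groups such as conapp and condb are typically defined is shown below; it is illustrative only, since the groups already exist in the lab, and the network name contiv-net is a placeholder.

# Illustrative only -- the groups already exist in the lab.
# Each Contiv group becomes an EPG in ACI; "contiv-net" is a placeholder
# network name. Binding the app2db policy to condb is what makes its rules
# control the traffic allowed to reach the DB container.
netctl group create -t ContivTN01 contiv-net conapp
netctl group create -t ContivTN01 -p app2db contiv-net condb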

Step 6 - Security Policies

Contracts define inbound and outbound permit, deny, and QoS rules and policies such as redirect.

Contracts allow both simple and complex definition of the way that an EPG communicates with other EPGs, depending on the requirements of the environment. Although contracts are enforced between EPGs, they are connected to EPGs using provider-consumer relationships. Essentially, one EPG provides a contract, and other EPGs consume that contract.

Labels determine which EPG consumers and EPG providers can communicate with one another.

Filters are Layer 2 to Layer 4 fields, TCP/IP header fields such as Layer 3 protocol type, Layer 4 ports, and so forth. According to its related contract, an EPG provider dictates the protocols and ports in both the in and out directions. Contract subjects contain associations to the filters (and their directions) that are applied between EPGs that produce and consume the contract.
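If you prefer the command line to the GUI, contracts and filters can also be inspected read-only through the APIC REST API. The sketch below only reads objects and makes no changes to the fabric; the admin credentials are placeholders, and the exact object names that Contiv generates may differ.

# Read-only inspection of contracts and filters via the APIC REST API.
# The credentials are placeholders; these calls do not modify the fabric.
curl -s -c cookie.txt -X POST http://10.0.226.41/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}'

# Contracts are class vzBrCP, filters are class vzFilter.
curl -s -b cookie.txt http://10.0.226.41/api/node/class/vzBrCP.json
curl -s -b cookie.txt http://10.0.226.41/api/node/class/vzFilter.json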

In order to identify the contract that is applied to this ANP, hover your mouse over the contract and you should see the following image appear:

As you can see in the diagram, we have created a filter that only allows TCP 6379. Now we need to modify this policy in order to allow ICMP between the containers.

Step 7 - ICMP Policy (POD01-srv1)

During this step, we are going to add the necessary policy in order for the containers to be able to ping each other.

 Warning!

Make sure you are in root@pod01-srv01 during these steps.

Let's verify the current policy in our environment:

pod01-srv1
netctl policy rule-ls -t ContivTN01 app2db
pod01-srv1

[root@pod01-srv1 ~]# netctl policy rule-ls -t ContivTN01 app2db
Incoming Rules:
Rule  Priority  From EndpointGroup  From Network  From IpAddress  Protocol  Port  Action
----  --------  ------------------  ------------  ---------       --------  ----  ------
1     1         conapp                                            tcp       6379  allow
Outgoing Rules:
Rule  Priority  To EndpointGroup  To Network  To IpAddress  Protocol  Port  Action
----  --------  ----------------  ----------  ---------     --------  ----  ------



It is important to note the rule ordering: in this case, the TCP 6379 rule is rule 1, so the new ICMP rules will be rules 2 and 3.


pod01-srv1
netctl policy rule-add -t ContivTN01 -d in --protocol icmp --from-group conapp --action allow app2db 2
netctl policy rule-add -t ContivTN01 -d in --protocol icmp --from-group condb --action allow app2db 3
pod01-srv1
netctl policy rule-ls -t ContivTN01 app2db
pod01-srv1

[root@pod01-srv1 ~]# netctl policy rule-ls -t ContivTN01 app2db
Incoming Rules:
Rule  Priority  From EndpointGroup  From Network  From IpAddress  Protocol  Port  Action
----  --------  ------------------  ------------  ---------       --------  ----  ------
1     1         conapp                                            tcp       6379  allow
2     1         conapp                                            icmp      0     allow
3     1         condb                                             icmp      0     allow
Outgoing Rules:
Rule  Priority  To EndpointGroup  To Network  To IpAddress  Protocol  Port  Action
----  --------  ----------------  ----------  ---------     --------  ----  ------

Step 8 - Entering APP container (POD01-srv1)

pod01-srv1
docker exec -it app /bin/bash
 [root@pod01-srv1 ~]# docker exec -it app /bin/bash
root@app:/# 
    

NOTE: You may get an error because the container is no longer running.

pod01-srv1

[root@pod01-srv1 ~]#  docker exec -it app /bin/bash

Error response from daemon: Container 6f096c8fdee94a539e13008f5268f0612f8f0c084618ef108289c2bb1df5f55c is not running


If you get the error then enter the following command:

pod01-srv1
docker start app
pod01-srv1

[root@pod01-srv1 ~]# docker start app
app


Now you should be able to enter the APP container:

pod01-srv1
docker exec -it app /bin/bash
pod01-srv1

[root@pod01-srv1 ~]# docker exec -it app /bin/bash
root@app:/#


Step 9 - Ping the DB from the APP (POD01-srv1)

We are going to send a continuous ping in this step because we will leverage these pings in the next exercise.

pod01-srv1
ping 10.0.248.3
pod01-srv1

root@app:/#  ping 10.0.248.3
PING 10.0.248.3 (10.0.248.3) 56(84) bytes of data.
64 bytes from 10.0.248.3: icmp_seq=1 ttl=64 time=0.962 ms
64 bytes from 10.0.248.3: icmp_seq=2 ttl=64 time=0.475 ms
64 bytes from 10.0.248.3: icmp_seq=3 ttl=64 time=0.303 ms
64 bytes from 10.0.248.3: icmp_seq=4 ttl=64 time=0.588 ms
64 bytes from 10.0.248.3: icmp_seq=5 ttl=64 time=0.564 ms

Step 10 - Exit APP Container (POD01-srv1)

Exit the APP container, but leave it running, by entering the following key sequence:

Ctrl+P, Ctrl+Q

Step 11 - Entering DB container (POD01-srv2)

 Warning!

Make sure you are in root@pod01-srv2 during these steps.

NOTE: The DB container may already be running.

pod01-srv2
docker exec -it db /bin/bash
[root@pod01-srv2 ~]# docker exec -it db /bin/bash
root@db:/# 
    

Step 12 - Ping the APP from the DB (POD01-srv2)

pod01-srv2
ping -c 5 10.0.248.2
pod01-srv2

root@db:/# ping -c 5 10.0.248.2

PING 10.0.248.2 (10.0.248.2): 56 data bytes
64 bytes from 10.0.248.2: seq=0 ttl=64 time=3.490 ms
64 bytes from 10.0.248.2: seq=1 ttl=64 time=0.666 ms
64 bytes from 10.0.248.2: seq=2 ttl=64 time=0.858 ms
64 bytes from 10.0.248.2: seq=3 ttl=64 time=0.781 ms
64 bytes from 10.0.248.2: seq=4 ttl=64 time=0.338 ms

--- 10.0.248.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.665/1.167/2.873/0.855 ms

Step 13 - Exit DB Container (POD01-srv2)

Exit the DB container, but leave it running, by entering the following key sequence:

Ctrl+P, Ctrl+Q

Step 14 - Check the new policy in ACI

Log in to the APIC as previously described and check the Application Network Profile. You should notice that a new contract has been created with a new ICMP filter, and that a new ICMP filter has been added to the existing contract.
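If you would rather confirm this from the command line, a read-only REST query for filter entries (class vzEntry) whose protocol is ICMP should now return the entries Contiv pushed. This reuses the login cookie from the earlier query and makes no changes; credentials and object names remain placeholders.

# Read-only check: list filter entries (vzEntry) whose protocol is ICMP.
# Reuses the APIC-cookie obtained from the earlier aaaLogin call.
curl -s -b cookie.txt 'http://10.0.226.41/api/node/class/vzEntry.json?query-target-filter=eq(vzEntry.prot,"icmp")'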

© Copyright Cisco Systems 2017