Warning!

Make sure you are on the installer-host during these steps.


Contiv Installation

Contiv Installer

The Contiv Swarm Installer is launched from an external host to the cluster. It uses Ansible to automate the deployment of Docker, Swarm and Contiv. Ansible uses SSH connections to connect from the Installer Host to the Cluster Hosts. In this Lab we will setup SSH key based authentication between the Installer host and the Cluster Hosts to allow Ansible to reach the Cluster Hosts from the Installer Host.

Contiv Swarm installer uses a docker container to run the Ansible deployment to avoid Ansible version dependencies on the Installer host. So the only pre-requisite on the Installer Host is that we have docker installed and the installer is run as a user who is part of the docker group.

All of the cluster nodes need to be reachable from the Installer Host. You can have one or more master nodes and any number of worker nodes. The installer deploys the components listed in the version table below.
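Since the only prerequisites on the Installer Host are Docker and membership in the docker group, a quick pre-flight check can save a failed run. A minimal sketch, assuming a POSIX shell; the `check_docker_group` helper name is ours, not part of the installer:

```shell
#!/bin/sh
# Pre-flight check sketch for the Installer Host (helper name is ours).

# Returns 0 if the given space-separated group list contains "docker".
check_docker_group() {
    echo "$1" | tr ' ' '\n' | grep -qx docker
}

# On a real Installer Host you would feed it the current user's groups:
#   check_docker_group "$(id -nG)" || echo "add this user to the docker group"
check_docker_group "wheel docker users" && echo "docker group: ok"
check_docker_group "wheel users" || echo "docker group: missing"
```

On the real host you would also confirm `docker --version` succeeds before running the installer.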

The following diagram represents the Contiv installer showing the different components and their interaction.


Table of the different versions of code leveraged during this lab:

Component          Version
Docker engine      1.12.6
Docker Swarm       1.2.5
etcd KV store      2.3.7
Contiv             v1.0.0-alpha-01-28-2017.10-23-11.UTC
ACI-GW container   contiv/aci-gw:02-02-2017.2.1_1h

In a production environment, you should not disable the firewall. Instead, you can open the following ports using iptables commands. You can use a configuration management tool such as Ansible to do this when provisioning the nodes for Contiv.

For more details, refer to the Contiv Ansible example.
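Rather than disabling the firewall, the ports in the table below can be opened individually. A dry-run sketch that only generates the iptables commands (our helper; on a real node you would review the output and then run the printed commands as root):

```shell
#!/bin/sh
# Generate iptables ACCEPT rules for the Contiv-related ports (sketch only;
# this prints the commands instead of executing them).
gen_rules() {
    proto=$1; shift
    for port in "$@"; do
        echo "iptables -I INPUT -p $proto --dport $port -j ACCEPT"
    done
}

# TCP-only ports from the table below.
gen_rules tcp 9001 9002 9003 9999 179 2375 2385 2379 2380 4001 7001 10000
# Consul uses both TCP and UDP on its ports; VXLAN is UDP only.
gen_rules tcp 8300 8301 8400 8500
gen_rules udp 8300 8301 8400 8500 4789
```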

Software       Port Number   Protocol   Notes
Contiv         9001          TCP        Communication between OVS and Contiv
Contiv         9002          TCP        Communication between OVS and Contiv
Contiv         9003          TCP        Communication between OVS and Contiv
Contiv         9999          TCP        Netmaster port
BGP            179           TCP        Required when Contiv runs in L3 mode
VXLAN          4789          UDP        Used when Contiv runs a VXLAN network
Docker API     2385          TCP        Docker related
Docker Swarm   2375          TCP        Docker Swarm related
Consul         8300          TCP/UDP    Consul KV store related
Consul         8301          TCP/UDP    Consul KV store related
Consul         8400          TCP/UDP    Consul KV store related
Consul         8500          TCP/UDP    Consul KV store related
Etcd           2379          TCP        Etcd KV store related
Etcd           2380          TCP        Etcd KV store related
Etcd           4001          TCP        Etcd KV store related
Etcd           7001          TCP        Etcd KV store related
Auth_proxy     10000         TCP        Contiv authorization proxy


Step 1 - Public Key installation

installer-host

mkdir .ssh && chmod 700 .ssh
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

[pod1u1@installer-host ~]# mkdir .ssh && chmod 700 .ssh
[pod1u1@installer-host ~]#
[pod1u1@installer-host ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa  -N ""
Generating public/private rsa key pair.



Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
d4:20:4c:4f:75:be:b9:01:05:92:60:00:87:99:7f:fd root@pod32-srv2.ecatsrtpdmz.cisco.com
The key's randomart image is:
+--[ RSA 2048]----+
|  .=o+=.+oo.o    |
|  +. ..+.+ +     |
|   .   .o o .    |
|    . ...  . o   |
|     .  S.  +    |
|          E  o   |
|            .    |
|                 |
|                 |
+-----------------+
     
installer-host

sshpass -p cisco.123 ssh-copy-id -i ~/.ssh/id_rsa.pub root@pod01-srv1.ecatsrtpdmz.cisco.com -o StrictHostKeyChecking=no
installer-host

sshpass -p cisco.123 ssh-copy-id -i ~/.ssh/id_rsa.pub root@pod01-srv2.ecatsrtpdmz.cisco.com -o StrictHostKeyChecking=no

Step 2 - Download the Contiv installer script

The Contiv installer is located at https://github.com/contiv/install/releases

NOTE: We have downloaded the installer to our local server to speed up the download process.

installer-host

cd ~
wget http://10.0.226.7/nfs/contiv/contiv-full-1.1.7.tgz
[pod1u1@installer-host ~]#wget http://10.0.226.7/nfs/contiv/contiv-full-1.1.7.tgz
--2017-03-14 11:42:58--  http://10.0.226.7/nfs/contiv/contiv-full-1.1.7.tgz
Connecting to 10.0.226.7:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3250255 (3.1M) [application/x-gzip]
Saving to: ‘contiv-full-1.1.7.tgz’

100%[============================================================================================================================================>] 3,250,255   --.-K/s   in 0.01s

2017-03-14 11:42:58 (258 MB/s) - ‘contiv-full-1.1.7.tgz’ saved [3250255/3250255]

Step 3 - Untar Contiv installation file

installer-host

tar -zxvf contiv-full-1.1.7.tgz
[pod1u1@installer-host ~]#  tar -zxvf contiv-full-1.1.7.tgz
./
contiv-1.1.7/
contiv-1.1.7/install/
contiv-1.1.7/install/ansible/
contiv-1.1.7/install/ansible/aci_cfg.yml
contiv-1.1.7/install/ansible/cfg.yml
contiv-1.1.7/install/ansible/env.json
contiv-1.1.7/install/ansible/install.sh
contiv-1.1.7/install/ansible/install_defaults.sh
contiv-1.1.7/install/ansible/install_swarm.sh
contiv-1.1.7/install/ansible/uninstall.sh
contiv-1.1.7/install/ansible/uninstall_swarm.sh
contiv-1.1.7/install/genInventoryFile.py
contiv-1.1.7/install/k8s/
contiv-1.1.7/install/k8s/k8s1.4/
contiv-1.1.7/install/k8s/k8s1.4/aci_gw.yaml
contiv-1.1.7/install/k8s/k8s1.4/cleanup.yaml
contiv-1.1.7/install/k8s/k8s1.4/contiv.yaml
contiv-1.1.7/install/k8s/k8s1.4/etcd.yaml
contiv-1.1.7/install/k8s/k8s1.6/
contiv-1.1.7/install/k8s/k8s1.6/aci_gw.yaml
contiv-1.1.7/install/k8s/k8s1.6/cleanup.yaml
contiv-1.1.7/install/k8s/k8s1.6/contiv.yaml
contiv-1.1.7/install/k8s/k8s1.6/etcd.yaml
contiv-1.1.7/install/k8s/install.sh
contiv-1.1.7/install/k8s/uninstall.sh
contiv-1.1.7/install/generate-certificate.sh
contiv-1.1.7/README.md
contiv-1.1.7/netctl


Step 4 - Create the Configuration File (cfg.yml)

During this step we will create the configuration file (cfg.yml). This file contains node information such as the hostname and the control and data interfaces, plus the APIC details. Contiv needs this information to communicate with the ACI controller, the APIC.

installer-host

cat << EOF > ~/cfg.yml
CONNECTION_INFO:
  pod01-srv1.ecatsrtpdmz.cisco.com:
    role: master
    control: eth0
    data: eth1
  pod01-srv2.ecatsrtpdmz.cisco.com:
    control: eth0
    data: eth1
APIC_URL: "https://10.0.226.41:443"
APIC_USERNAME: "admin"
APIC_PASSWORD: "cisco.123"
APIC_PHYS_DOMAIN: "Contiv-PD"
APIC_EPG_BRIDGE_DOMAIN: "not_specified"
APIC_CONTRACTS_UNRESTRICTED_MODE: "no"
APIC_LEAF_NODES:
  - topology/pod-1/node-201
  - topology/pod-1/node-202
EOF

Step 5 - Time to install Contiv

Contiv Images

The Contiv images consist of two containers, contiv_network and aci-gw:

Contiv_network

The repository for contiv_network is located at:

https://github.com/contiv/netplugin/releases

ACI-GW

The repository for aci-gw is located at:

https://hub.docker.com/r/contiv/aci-gw/tags/

Because the version of ACI that we are using is 3.0, we will be leveraging aci-gw:3.0.1k for our ACI GW:
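The install command in the next step uses the installer's default aci-gw image; to pin a specific tag such as 3.0.1k, the installer's `-v` option (described in the Reference section at the end of this page) can be appended. A dry-run sketch that only assembles the command string:

```shell
#!/bin/sh
# Assemble (not run) an install command pinned to a specific aci-gw image.
# The -v flag is documented in the installer usage reference below.
ACI_GW_IMAGE="contiv/aci-gw:3.0.1k"
CMD="./install/ansible/install_swarm.sh -f ~/cfg.yml -e ~/.ssh/id_rsa -u root -i -m aci -v $ACI_GW_IMAGE"
echo "$CMD"
```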

installer-host

cd ~/contiv-1.1.7
installer-host

./install/ansible/install_swarm.sh -f ~/cfg.yml -e ~/.ssh/id_rsa -u root -i -m aci
Wait
The installation process will take some time to complete. It displays each completed task as the run progresses.
[pod1u1@installer-host ~]#   ./install/ansible/install_swarm.sh -f ~/cfg.yml -e ~/.ssh/id_rsa -u root -i -m aci
TASK [auth_proxy : create cert folder for proxy] *******************************
changed: [node1]

TASK [auth_proxy : copy shell script for starting auth-proxy] ******************
changed: [node1]

TASK [auth_proxy : copy cert for starting auth-proxy] **************************
changed: [node1]

TASK [auth_proxy : copy key for starting auth-proxy] ***************************
changed: [node1]

TASK [auth_proxy : copy systemd units for auth-proxy] **************************
changed: [node1]

TASK [auth_proxy : initialize auth-proxy] **************************************
changed: [node1]

TASK [auth_proxy : start auth-proxy container] *********************************
changed: [node1]

PLAY RECAP *********************************************************************
node1                      : ok=10   changed=7    unreachable=0    failed=0   

After the installation process is completed you should see the following message:

Installation is complete
=========================================================

Please export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375 in your shell before proceeding
Contiv UI is available at https://pod01-srv1.ecatsrtpdmz.cisco.com:10000

Please use the first run wizard or configure the setup as follows:
 Configure forwarding mode (optional, default is bridge).
 netctl global set --fwd-mode routing
 Configure ACI mode (optional)
 netctl global set --fabric-mode aci --vlan-range -
 Create a default network
 netctl net create -t default --subnet= default-net
 For example, netctl net create -t default --subnet=20.1.1.0/24 default-net

=========================================================
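A quick way to confirm that a run like the one above succeeded is to check the PLAY RECAP line for non-zero failed or unreachable counts. A sketch against a captured recap line (the sed parsing is ours; in practice you would pipe the installer output through it instead of using a sample string):

```shell
#!/bin/sh
# Parse an Ansible PLAY RECAP line and flag failures (sketch only).
RECAP='node1                      : ok=10   changed=7    unreachable=0    failed=0'

failed=$(echo "$RECAP" | sed -n 's/.*failed=\([0-9]*\).*/\1/p')
unreachable=$(echo "$RECAP" | sed -n 's/.*unreachable=\([0-9]*\).*/\1/p')

if [ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ]; then
    echo "install OK"
else
    echo "install FAILED: failed=$failed unreachable=$unreachable"
fi
```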
    

Step 6 - Verify Docker Swarm

We need to export the DOCKER_HOST environment variable so that the Docker client knows where the Docker Remote API (the Swarm manager) is located.

installer-host

export DOCKER_HOST=tcp://pod01-srv1.ecatsrtpdmz.cisco.com:2375
installer-host

docker info
[pod01u1@installer-host ~]#docker info
Containers: 8
 Running: 8
 Paused: 0
 Stopped: 0
Images: 10
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
 pod01-srv1.ecatsrtpdmz.cisco.com: 10.0.236.17:2385
  └ ID: VL2K:KYDV:LJVR:UEIK:TY6V:G4FL:RALX:77IS:ZZIJ:OIUD:3ZJB:VNJ6
  └ Status: Healthy
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.888 GiB
  └ Labels: kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2017-01-12T15:58:47Z
  └ ServerVersion: 1.11.1
 pod01-srv2.ecatsrtpdmz.cisco.com: 10.0.236.49:2385
  └ ID: 7S3J:W5JA:N3XK:PEUN:IKFJ:ELLK:2RDW:6HON:L2HB:CPEF:5ABC:M6JE
  └ Status: Healthy
  └ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 3.888 GiB
  └ Labels: kernelversion=3.10.0-514.2.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
  └ UpdatedAt: 2017-01-12T15:58:46Z
  └ ServerVersion: 1.11.1
Plugins:
 Volume:
 Network:
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 7.775 GiB
Name: c6888c3f2b61
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support
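In the output above, the fields worth checking are Server Version (swarm/1.2.5), Nodes, and one "Status: Healthy" line per node. A sketch that pulls these out of captured `docker info` output (the sample text is abbreviated from the output above; on the installer-host you would pipe `docker info` directly):

```shell
#!/bin/sh
# Parse captured "docker info" output for Swarm cluster health (sketch only).
INFO='Server Version: swarm/1.2.5
Nodes: 2
  Status: Healthy
  Status: Healthy'

nodes=$(echo "$INFO" | sed -n 's/^Nodes: //p')
healthy=$(echo "$INFO" | grep -c "Status: Healthy")

if [ "$healthy" -eq "$nodes" ]; then
    echo "all $nodes swarm nodes healthy"
else
    echo "only $healthy of $nodes nodes healthy"
fi
```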

installer-host

unset DOCKER_HOST
exit

Reference

How to use the Contiv Installer

To get the installer, refer to https://github.com/contiv/install/releases

Download the install bundle, save it and extract it on the Install host.

Installer Usage:

./install/ansible/install_swarm.sh -f <host configuration file> -e <ssh key> -u <ssh user> OPTIONS

Options:

-f  string                 Configuration file listing the hostnames with the control and data interfaces and optionally ACI parameters
-e  string                  SSH key to connect to the hosts
-u  string                  SSH User
-i                          Install the Swarm scheduler stack

Additional Options:
-m  string                  Network mode for the Contiv installation (“standalone” or “aci”). The default mode is “standalone” and should be used for non-ACI setups
-d  string                  Forwarding mode (“routing” or “bridge”). The default mode is “bridge”

Advanced Options:
-v  string                  ACI image (default is contiv/aci-gw:latest). Use this to specify a specific version of the ACI image.
-n  string                  DNS name/IP address of the host to be used as the netmaster service VIP.

Additional parameters can also be updated in install/ansible/env.json.

Examples:

1. Install Contiv with Docker Swarm on hosts specified by cfg.yml.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i

2. Install Contiv on hosts specified by cfg.yml. Docker should be pre-installed on the hosts.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin

3. Install Contiv with Docker Swarm on hosts specified by cfg.yml in ACI mode.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci

4. Install Contiv with Docker Swarm on hosts specified by cfg.yml in ACI mode, using routing as the forwarding mode.
./install/ansible/install_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci -d routing

Uninstaller Usage:

./install/ansible/uninstall_swarm.sh -f <host configuration file> -e <ssh key> -u <ssh user> OPTIONS

Options:

-f  string            Configuration file listing the hostnames with the control and data interfaces and optionally ACI parameters
-e  string             SSH key to connect to the hosts
-u  string             SSH User
-i                     Uninstall the scheduler stack

Additional Options:
-r                     Reset etcd state and remove docker containers
-g                     Remove docker images

Additional parameters can also be updated in install/ansible/env.json file.

Examples:
1. Uninstall Contiv and Docker Swarm on hosts specified by cfg.yml.
./install/ansible/uninstall_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i
2. Uninstall Contiv and Docker Swarm on hosts specified by cfg.yml for an ACI setup.
./install/ansible/uninstall_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci
3. Uninstall Contiv and Docker Swarm on hosts specified by cfg.yml for an ACI setup, removing all containers and Contiv etcd state.
./install/ansible/uninstall_swarm.sh -f cfg.yml -e ~/ssh_key -u admin -i -m aci -r

© Copyright Cisco Systems 2017