This blog describes the OpenSwitch (OPX) network automation demo based on the Ansible framework, delivered by Dell EMC at AnsibleFest 2017. For more details on the demo, including videos and configuration playbooks, please contact the Open Networking Team at Dell EMC.

Demo Overview

This demo walks through deploying a BGP fabric in a leaf-spine topology using Ansible. A single Ansible playbook configures and brings up BGP across OS9, OS10 Enterprise Edition, and OPX (OpenSwitch).

Ansible Playbook Details

The Ansible playbook is deployed on an Ansible server that is part of the switch management subnet. The server is configured to connect to all the switches over SSH and deliver the configuration.

The playbook deployment includes three primary components: the switch inventory file, the host variable files, and the main playbook. The playbook directory structure is shown below.
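A typical layout for such a deployment looks like the following sketch (file and directory names are illustrative; the actual demo repository may differ):

```
inventory                 # switch inventory file
datacenter.yaml           # main playbook
host_vars/
    leaf1.yaml            # per-node variables (one file per switch)
    leaf2.yaml
    spine1.yaml
    spine2.yaml
roles/                    # Dell EMC roles: bgp, interface, system, version
```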


Switch Inventory File

This file lists the switches to be configured by the playbook. Each node in the data-center topology is listed with its OS name, management IP address, and node name.
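An inventory along these lines would match the demo topology (hostnames, IP addresses, and variable names here are illustrative, not taken from the actual demo files):

```
[spine]
spine1  ansible_host=10.16.148.71  os_name=dellos9
spine2  ansible_host=10.16.148.72  os_name=dellos10

[leaf]
leaf1   ansible_host=10.16.148.73  os_name=opx
leaf2   ansible_host=10.16.148.74  os_name=opx
leaf3   ansible_host=10.16.148.75  os_name=dellos9
leaf4   ansible_host=10.16.148.76  os_name=dellos10

[datacenter:children]
spine
leaf
```

The `datacenter` group aggregates the leaf and spine groups, which is what lets a single playbook target every node in the fabric.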


Host Var Files

Each node in the topology has a host variable file associated with it. The host var file for the leaf1 node is shown below.
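A host var file for a leaf node might look roughly like this (a sketch only; the actual Dell EMC roles define their own variable schema, and these names and values are assumptions):

```yaml
# host_vars/leaf1.yaml -- illustrative values
hostname: leaf1
bgp:
  asn: 64901
  router_id: 10.0.0.1
  neighbors:
    - ip: 172.16.1.0
      remote_asn: 64701
    - ip: 172.16.2.0
      remote_asn: 64702
```

Keeping the BGP AS numbers, router IDs, and neighbor addresses in per-node files is what lets one generic playbook produce a different configuration for each switch.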

Ansible Playbook

The first line of the playbook sets the target hosts to datacenter. From the switch inventory we can see that the datacenter group includes all the leaf and spine nodes.

The roles entry lists the Dell EMC roles for configuring BGP, interfaces, system settings, and version information. Ansible runs each of these roles, using the host vars files to build CLI commands from a set of predefined templates, and delivers them to the devices.
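Put together, the main playbook can be as short as the following sketch (the exact role names and connection settings depend on the Dell EMC role versions in use; the names below follow the Dell Networking Galaxy naming convention and are an assumption):

```yaml
# datacenter.yaml -- minimal sketch
- hosts: datacenter
  roles:
    - Dell-Networking.dellos-system
    - Dell-Networking.dellos-interface
    - Dell-Networking.dellos-bgp
```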

Running the Playbook


The arguments to the ansible-playbook command include the switch inventory file and the playbook file datacenter.yaml. YAML is the data serialization format used to write Ansible playbooks.
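The invocation would look roughly like this (the inventory file name is an assumption):

```shell
# Run the playbook against every host in the inventory
ansible-playbook -i inventory datacenter.yaml
```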

Executing the playbook generates the switch configuration by rendering the Jinja2 templates defined for each role used in the playbook. The commands are then delivered to the devices over an SSH connection.
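For example, a BGP role template might render CLI along these lines (a simplified sketch, not the actual template shipped with the roles; the variable names match the illustrative host vars above):

```
router bgp {{ bgp.asn }}
{% for neighbor in bgp.neighbors %}
 neighbor {{ neighbor.ip }} remote-as {{ neighbor.remote_asn }}
{% endfor %}
```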

Repeated execution of the Ansible playbook only applies changes made to the playbook, the host var files, or the switch inventory file, so it is safe to run again. All the modules have idempotence baked in: running a module multiple times in sequence has the same effect as running it once.

Playbook Results!

A portion of the output on the console is shown below.


As can be seen, all four leaf and two spine nodes that are part of the data-center topology have been configured with Quagga (BGP) to run an L3 network.

You can also look at the CLI command files for each node that were delivered to the devices under the /tmp directory. This is a useful way to troubleshoot a deployment.

For more detailed console output, you can use the -vvvv option. This makes the console output very granular and shows all the steps Ansible takes to deploy to the switches, which is a useful way to debug playbooks.

You can log in to one of the switches to check the deployed configuration. A sample is shown below.
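On an OPX node running Quagga, the BGP configuration can be inspected from the vtysh shell; the relevant portion would look roughly like this (ASNs and addresses are the illustrative values used above, not output captured from the demo):

```
leaf1# show running-config
!
router bgp 64901
 bgp router-id 10.0.0.1
 neighbor 172.16.1.0 remote-as 64701
 neighbor 172.16.2.0 remote-as 64702
!
```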


The demo highlights Ansible as a flexible automation framework for switch management and a simple programmable environment.

Ansible can prove to be quite powerful, and the network as a whole doesn't have to be automated overnight. It's about thinking a little differently and exploring some automation to see whether it makes sense for a given environment. There are incremental steps you can take to learn these new processes and tools. And the best part is that all of these configuration management tools are open source, and OPX can also be tried at no cost.

Questions? Help? Contact: Open Networking Team at Dell EMC