pyATS and Genie: Part 2


It’s been a while since my last post, but I finally found some time to sit down and continue writing this series. Since my last post, I have been spending a lot of time reading through the pyATS docs and experimenting with its modules. In this post, we will go over the basics and how to get started with pyATS. My goal is to have you parse your first ‘show’ command and to see how easy it is to traverse the output using a Python module like Dq (which we will jump into later). Now let’s take a look at how pyATS works and begin testing!

Getting Started with pyATS

Before building any scripts or running commands against any devices, let’s talk about the different libraries that coexist with pyATS. I don’t want to spend too much time on the details, but as network engineers, many of us like to know how things work at the most basic level. Here’s a quick illustration of the different libraries that we use alongside pyATS:

I built this visual because it has helped me understand each library and their respective role within the testing infrastructure. I hope it helps you understand them as well.


The installation process is pretty simple for pyATS and Genie. I recommend using a Python virtual environment, as it isolates your environment from your system-level Python installation. This allows for better dependency management and helps avoid potential conflicts. For my Python virtual environment, I’ll be using Pipenv, but you may also use virtualenv or venv to create your virtual environment. Here’s how to install the necessary packages using virtualenv or venv:

# creates the Python3 virtual environment using virtualenv or venv 
python3 -m virtualenv pyats-venv OR python3 -m venv pyats-venv

source pyats-venv/bin/activate # <--- activates the virtual environment

pip install pyats # <--- installs pyats (includes unicon)
pip install genie # <--- installs genie

Here’s how you would create the same virtual environment using Pipenv:

# install pipenv on the system-level Python3 installation
pip3 install pipenv

pipenv --three # <--- creates a Python3 virtual environment
pipenv shell # <--- activates the virtual environment

pipenv install pyats # <--- install pyats (includes unicon)
pipenv install genie # <--- install genie

I’ve started using Pipenv because of its great benefits. With Pipenv, you no longer need to track your project dependencies using a requirements.txt file. Pipenv automatically tracks the packages installed in your environment. As packages are installed and uninstalled, a file called the Pipfile is automatically updated. Another file, called Pipfile.lock, is also generated to help rebuild the environment in the future. Pipenv can also identify security vulnerabilities in the packages you’re using (via the ‘pipenv check’ command). To learn more, check out the Pipenv documentation.

Now that we know about the pyATS and Genie libraries and have them installed, let’s jump into the fun stuff!

Defining the Testbed

If you’ve used Ansible or another automation platform, you will be familiar with building an inventory file. pyATS has a similar concept called a testbed. The testbed file defines the device names, IP addresses, connection types, credentials, OS, and type. The OS is important because that value is used by the Unicon library to determine how to connect to a device and handle the different CLI prompt patterns. Let’s review a sample testbed found in the pyATS docs:

# Example
# -------
#   an example testbed file - ios_testbed.yaml

testbed:
    name: IOS_Testbed
    credentials:
        default:
            username: admin
            password: cisco
        enable:
            password: cisco

devices:
    ios-1: # <----- must match to your device hostname in the prompt
        os: ios
        type: ios
        connections:
            a:
                protocol: telnet
                ip: 1.1.1.1
                port: 11023
    ios-2:
        os: ios
        type: ios
        connections:
            a:
                protocol: telnet
                ip: 1.1.1.1
                port: 11024
            vty:
                protocol: ssh
                ip: 5.5.5.5

topology:
    ios-1:
        interfaces:
            GigabitEthernet0/0:
                ipv4: 10.10.10.1/24
                ipv6: '10:10:10::1/64'
                link: link-1
                type: ethernet
            Loopback0:
                ipv4: 192.168.0.1/32
                ipv6: '192::1/128'
                link: ios1_Loopback0
                type: loopback
    ios-2:
        interfaces:
            GigabitEthernet0/0:
                ipv4: 10.10.10.2/24
                ipv6: '10:10:10::2/64'
                link: link-1
                type: ethernet
            Loopback0:
                ipv4: 192.168.0.2/32
                ipv6: '192::2/128'
                link: ios2_Loopback0
                type: loopback

I like this example because it touches on every aspect of a testbed. Starting at the top, you have the testbed defined with a set of credentials. Credentials declared under the testbed section are shared by all devices in the testbed. The devices section is where you define the devices you’ll be testing. For each device, you have to define the OS, device type, connections (there can be more than one), and the credentials (if different from the ones defined under the testbed section). The last section, topology, is the most interesting. You are able to define a logical topology in your testbed file, which allows pyATS to understand how these devices are connected in the real world. Using the example testbed above, you’ll see that ‘link-1’ is used to represent a connection between the ios-1 and ios-2 devices. I like to think of it as translating a Visio diagram into a format that pyATS can understand. By allowing pyATS to understand how these devices are connected, the topology provides the foundation for more complex testcases. For example, taking a snapshot of the network before and after losing a link on a specific device to see what’s affected (i.e. link status, routing, etc.).

This brings me to an important point: Everything in pyATS is treated as an object. Take a look at this visual from the pyATS docs:

| Testbed Object                                                           |
|                                                                          |
| +-----------------------------+          +-----------------------------+ |
| | Device Object - myRouterA   |          | Device Object - myRouterB   | |
| |                             |          |                             | |
| |         device interfaces   |          |          device interfaces  | |
| | +----------+ +----------+   |          |   +----------+ +----------+ | |
| | | intf Obj | | intf Obj |   |          |   |  intf Obj| | intf Obj | | |
| | | Eth1/1   | | Eth1/2 *-----------*----------*  Eth1/1| | Eth1/2   | | |
| | +----------+ + ---------+   |     |    |   +----------+ +----------+ | |
| +-----------------------------+     |    +-----------------------------+ |
|                                     |                                    |
|                               +-----*----+                               |
|                               | Link Obj |                               |
|                               |rtrA-rtrB |                               |
|                               +----------+                               |

You can see that everything is stored in the Testbed container object. From there, the device objects (myRouterA and myRouterB) each have two interface objects (Eth1/1 and Eth1/2). The link object does not belong to either device; it is instead shared between the interface objects on both devices. In our example, we will not be including a topology section in the testbed, as I want to keep things simple. However, I wanted to point out the topology section of the testbed because it can extend the functionality of pyATS once you dive into more advanced use cases. Now let’s move on to creating our first testbed and gathering data from our testbed devices.

Building our Testbed

For our example, we’ll be using the Always-on DevNet ‘IOS XE on CSR’ sandbox. The use case we will be looking at is verifying the IOS software version running on our device(s). It’s common for organizations to define a standard IOS software version for their switches and routers. The problem is that it would be a nightmare for a network admin to log in to each device in the network and confirm the IOS version. Besides being a very manual process, it’s also very error-prone. Fortunately, pyATS and Genie provide multiple ways to gather the data AND compare it to our defined standard. Yes, there are a number of off-the-shelf tools that can perform this same function, but I want to show you how easy it is to accomplish with pyATS.

There are two ways to define a testbed: in a YAML file (most common) or directly in a Python dictionary. The YAML file is most common due to its easier readability, but at the end of the day, the testbed is loaded into a Python dictionary. Below is the testbed I’ll be using in our example:

testbed:
  name: DevNet_Testbed

devices:
  csr1000v-1: # <----- must match to your device hostname in the prompt
    os: iosxe
    type: iosxe
    credentials:
      default:
        username: developer
        password: C1sco12345
    connections:
      cli:
        protocol: ssh
        ip: # <----- confirm before testing
        port: 8181
It’s fairly simple to read and understand. I defined a testbed called ‘DevNet_Testbed’ and have one device in it. A few things to note here: The hostname key, which is named ‘csr1000v-1’ in my testbed, must match the hostname shown in the device’s CLI prompt. The reason for this is pyATS is looking for the hostname when logging into the device’s CLI. Another thing to note is that since we are using a public always-on sandbox, the hostname and IP address may change. Before testing, please confirm the hostname (as shown in the CLI prompt) and the public IP address are correct in your testbed. As previously mentioned, we aren’t going to be adding a topology section to our testbed. Now that we have defined our testbed, let’s start writing our first pyATS script!
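One quick aside before we write any code: since the testbed is ultimately loaded into a Python dictionary, the same testbed could be expressed directly in Python. Here’s a minimal sketch mirroring the YAML above (the IP is left as a placeholder string that you must replace, and ‘testbed_dict’ is just an illustrative variable name):

```python
# The DevNet testbed expressed as a plain Python dictionary.
# This mirrors the YAML testbed above; confirm the sandbox IP before testing.
testbed_dict = {
    'devices': {
        'csr1000v-1': {  # must match the device hostname in the prompt
            'os': 'iosxe',
            'type': 'iosxe',
            'credentials': {
                'default': {
                    'username': 'developer',
                    'password': 'C1sco12345',
                }
            },
            'connections': {
                'cli': {
                    'protocol': 'ssh',
                    'ip': 'REPLACE_ME',  # <----- confirm before testing
                    'port': 8181,
                }
            },
        }
    }
}

# Sanity-check a couple of values before handing the dict off
print(testbed_dict['devices']['csr1000v-1']['os'])  # iosxe
```

If your pyATS version supports it, the topology loader can accept a dictionary like this in place of a YAML file path, though the YAML form remains the most common.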

Learning the Network

Before diving into the code, I want to go over the general flow of the script. First, we will load the testbed file into the script (I just called my testbed ‘testbed.yaml’ – I know, very original). Next, we will pull out only the device we want to test against. You may ask, “Why are we pulling out the only device in the testbed?”. Well, down the road when you have a larger testbed file, there’s a good chance that you will only want to test against a subset of devices. I’m showing you how to do that now – you’ll thank me later. Otherwise, your tests would run against all devices in your testbed. After we identify the device we want to test, we connect to the device, run the necessary command(s) (‘show version’ in our case), and disconnect from the device.

In the next few sections, we are going to look at some Genie modules and device methods that help gather and structure the necessary data from a network device. I’ll provide a code example and the respective output for each method.

Now that we have a general idea of how the script will flow, let’s start writing some code!

genie execute

The execute() method instructs pyATS to connect to the device, run the desired command, and return the raw output. Below is example code that identifies the DevNet sandbox CSR in the testbed and assigns it to the variable named csr. We can then use the csr variable to access other device methods, including connect() and disconnect(). How much easier can it get?!

from pyats.topology import loader

# Load the testbed file
tb = loader.load('testbed.yaml')

# Assign the CSR device to a variable
csr = tb.devices['csr1000v-1']

# Connect to the CSR device
csr.connect()

# Issue 'show version' command and print the output
print(csr.execute('show version'))

# Disconnect from the CSR device
csr.disconnect()

Cisco IOS XE Software, Version 16.09.03
Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)
Technical Support:
Copyright (c) 1986-2019 by Cisco Systems, Inc.
Compiled Wed 20-Mar-19 07:56 by mcpre

Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.
All rights reserved.  Certain components of Cisco IOS-XE software are
licensed under the GNU General Public License ("GPL") Version 2.0.  The
software code licensed under GPL Version 2.0 is free software that comes
with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such
GPL code under the terms of GPL Version 2.0.  For more details, see the
documentation or "License Notice" file accompanying the IOS-XE software,
or the applicable URL provided on the flyer accompanying the IOS-XE


csr1000v-1 uptime is 14 minutes
Uptime for this control processor is 15 minutes
System returned to ROM by reload
System image file is "bootflash:packages.conf"
Last reload reason: reload

## Truncated for brevity ##

Looking at the output, you’ll notice that it looks just like it would in an SSH session. The only problem is that, while it may be readable for a human, it’s not in a great format for a computer: the entire output is stored as one long string object in Python. That is not ideal when it comes to parsing out the data we need. Maybe there’s a Genie method that can collect and “parse” the data…. sorry, I know it’s a bad one… let’s move on to Genie’s parse method.
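To make that pain point concrete, here is what pulling even one datapoint out of that raw string would take: a hand-written regex per datapoint. This is an illustrative standard-library sketch (not pyATS code), run against a fragment of the output above:

```python
import re

# A fragment of the raw 'show version' output captured above
raw_output = (
    "Cisco IOS XE Software, Version 16.09.03\n"
    "csr1000v-1 uptime is 14 minutes\n"
)

# Without a parser, every datapoint needs its own hand-written pattern
match = re.search(r'Cisco IOS XE Software, Version (\S+)', raw_output)
version = match.group(1) if match else None
print(version)  # 16.09.03
```

Multiply that by every datapoint and every ‘show’ command, and the appeal of a ready-made parser becomes obvious.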

genie parse

The parse() method performs exactly the same actions as the execute() method, but provides some additional functionality. Along with sending the command to the device and collecting the output, the output is passed along to one of the thousands of available parsers in Genie’s library. For the complete list of available parsers, check them out here. These parsers will automatically break down the long string into a structured Python dictionary. This is where we can begin programmatically interacting with the data, without the need for complex regex. Let’s take a look at the code and structured output:

from pyats.topology import loader
from pprint import pprint

# Load the testbed file
tb = loader.load('testbed.yaml')

# Assign the CSR device to a variable
csr = tb.devices['csr1000v-1']

# Connect to the CSR device
csr.connect()

# Issue 'show version' command and print the output
pprint(csr.parse('show version'))

# Disconnect from the CSR device
csr.disconnect()

{'version': {'chassis': 'CSR1000V',
             'chassis_sn': '9YUMZ3N5W7V',
             'compiled_by': 'mcpre',
             'compiled_date': 'Wed 20-Mar-19 07:56',
             'curr_config_register': '0x2102',
             'disks': {'bootflash:.': {'disk_size': '7774207',
                                       'type_of_disk': 'virtual hard disk'},
                       'webui:.': {'disk_size': '0',
                                   'type_of_disk': 'WebUI ODM Files'}},
             'hostname': 'csr1000v-1',
             'image_id': 'X86_64_LINUX_IOSD-UNIVERSALK9-M',
             'image_type': 'production image',
             'label': 'RELEASE SOFTWARE (fc2)',
             'last_reload_reason': 'reload',
             'license_level': 'ax',
             'license_type': 'Default. No valid license found.',
             'main_mem': '2392579',
             'mem_size': {'non-volatile configuration': '32768',
                          'physical': '8113280'},
             'next_reload_license_level': 'ax',
             'number_of_intfs': {'Gigabit Ethernet': '3'},
             'os': 'IOS-XE',
             'platform': 'Virtual XE',
             'processor_type': 'VXE',
             'returned_to_rom_by': 'reload',
             'rom': 'IOS-XE ROMMON',
             'rtr_type': 'CSR1000V',
             'system_image': 'bootflash:packages.conf',
             'uptime': '3 minutes',
             'uptime_this_cp': '4 minutes',
             'version': '16.9.3',
             'version_short': '16.9',
             'xe_version': '16.09.03'}}

The only change we made to our code is swapping out the execute() method with the parse() method. Also, we imported pretty print (pprint) to make the output easier to read. The biggest difference is the output. In just a few lines of code (minus the comments), we have a structured Python dictionary with datapoints that identify key information you’d find in a ‘show version’ output. You can see the ‘last_reload_reason’, ‘os’, ‘uptime’, and plenty of other valuable datapoints. For our use case, we will be interested in the ‘xe_version’ datapoint.
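Because the parsed output is an ordinary dictionary, grabbing any of those datapoints is plain key access. A quick sketch against a trimmed copy of the parsed output above:

```python
# A trimmed copy of the parsed 'show version' output shown above
parsed_output = {
    'version': {
        'hostname': 'csr1000v-1',
        'os': 'IOS-XE',
        'uptime': '3 minutes',
        'version': '16.9.3',
        'xe_version': '16.09.03',
    }
}

# No regex needed: just walk the dictionary keys
print(parsed_output['version']['xe_version'])  # 16.09.03
print(parsed_output['version']['uptime'])      # 3 minutes
```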

genie learn

If Genie parse takes care of our use case, why do we need to know another method? Well, what if you want to gather data across multiple network devices with different operating systems that require different ‘show’ commands? With our current knowledge, you would have to parse multiple ‘show’ commands for the devices with different OS types in your testbed. Besides that, the output of these different ‘show’ commands may not include the datapoints you are even looking for. Enter Genie learn…

The Genie learn() method allows you to learn a feature of the device. Features can be protocols running on the device (i.e. ospf, eigrp, lisp, dot1x, etc.) or attributes about the device (i.e. platform, interface). These features are broken up into what Genie calls models. For the complete list of available Genie models, check them out here. These models are used to provide a level of abstraction so that you don’t have to worry about what commands to parse for each OS. This allows you to focus more on the output and finding the datapoints you need. I’m not providing a code snippet for the learn functionality, but I do want to show a small example of how Genie learns routing across the different Cisco NOS platforms.

I chose routing because the relevant commands vary across the different platforms, which can trip up even the most experienced engineer. If you are interested in looking at an active, open-source project that takes pyATS and Genie learn to a whole new level, check out Merlin. This project was started by John Capobianco earlier this year and helps network engineers collect and document information about their network using the power of pyATS.

Querying the Data

We’ve made some good progress thus far. The proper data has been collected, but now it’s time to check it against our defined standards. For the sake of our example, I’m going to declare IOS-XE 16.12.05 as our defined standard.

So how are we going to drill down to the datapoint we are interested in? Normally, we would have to use nested for loops to dig through the dictionaries of data, but not anymore. Genie comes with a suite of helpful libraries including one called Dq (dictionary query). The documentation is tough to find because it’s buried in a submenu within the Genie docs, but I wanted to provide a link for convenience: Dq library. If the link doesn’t take you there, you’ll have to click on ‘User Guide’ along the left side and choose ‘Useful Libraries’ towards the bottom of that submenu. Along with the Dq library, there are some other useful libraries, including Diff, Find, Config, Timeout, and TempResult. These libraries are just added bonuses to the already valuable Genie library. Let’s see how we can use Dq to search and locate our desired datapoint.

from pyats.topology import loader
from pprint import pprint
from genie.utils import Dq

# Load the testbed file
tb = loader.load('testbed.yaml')

# Assign the CSR device to a variable
csr = tb.devices['csr1000v-1']

# Connect to the CSR device
csr.connect()

# Issue 'show version' command and parse the output
parsed_output = csr.parse('show version')
# Store the standard IOS version in a variable for future use
standard_os = '16.12.05'
# Look for the 'xe_version' key and see if it contains the proper IOS version
ios_check = Dq(parsed_output).contains(standard_os).get_values('xe_version')

if ios_check:
    print('IOS Check passed!')
else:
    print('IOS Check failed!')

# Disconnect from the CSR device
csr.disconnect()

Notice the two lines that were added to our script in order for us to use the Dq library. The first line imports the library. The second line queries the parsed ‘show version’ output AND performs the comparison for us. Let’s take a minute and look at the magic in that single line of code.

In our code, we convert the parsed ‘show version’ output to a Dq object, which gives us access to all the methods in the Dq library. In our example, we use the contains() method to filter the parsed output down to entries that contain our expected value, and the get_values() method to pull out every value stored under the ‘xe_version’ key. If a value is matched, a Python list with the matched values is returned. If there are no matches, an empty Python list is returned. In our example, that returned list is stored in the ios_check variable. The final if/else statement simply determines whether the list is empty or populated. If the list is populated (meaning there was a match on the IOS versions), then the IOS check passed and we are running the correct version. If the list is empty, the IOS versions did not match and the IOS check failed. Here’s what the script would print if there was a match:

IOS Check passed!

If you were to print the ios_check variable itself, you would see a Python list with one item: the matching string value for the ‘xe_version’ key in the parsed ‘show version’ output.
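To appreciate what that one-liner replaces, here is the manual approach in plain Python: an illustrative recursive search over the nested dictionary, not Dq’s actual implementation (the helper name ‘get_values’ is borrowed for the comparison):

```python
def get_values(data, target_key):
    """Collect every value stored under target_key, at any nesting depth."""
    found = []
    if isinstance(data, dict):
        for key, value in data.items():
            if key == target_key:
                found.append(value)
            found.extend(get_values(value, target_key))
    elif isinstance(data, list):
        for item in data:
            found.extend(get_values(item, target_key))
    return found

# A trimmed copy of the parsed output, plus our defined standard
parsed_output = {'version': {'xe_version': '16.09.03', 'version': '16.9.3'}}
standard_os = '16.12.05'

# Keep only the values that match the standard, like contains() + get_values()
ios_check = [v for v in get_values(parsed_output, 'xe_version') if v == standard_os]
print(ios_check)  # [] -> empty list, so this sandbox device would fail the check
```

Dq collapses all of that traversal (and much more, like wildcards and regex matching) into a single chained call.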

With one line of code, we were able to query a nested dictionary AND determine whether a certain value existed. Hats off to the pyATS team. There will be so much time (and lines of code) saved from using this extraordinary library. It’s also good to note that this library, along with the rest of the Genie library, can be used independently of pyATS. So whether you’re querying your network or working on a separate project with larger datasets, you can use the power of Dq.

If you struggled to follow along, or would like to review the code we used in our example, check out my Github repo linked in the References section at the end of this post.


That wraps up Part 2 of this pyATS and Genie series. I’ve been enjoying writing these posts and I hope you’ve been able to find value in these phenomenal libraries. There’s so much more to the pyATS and Genie libraries; I’ve just scratched the surface. Please check out the docs yourself to see the great features included in these libraries. In Part 3, we are going to take a look at the AEtest testing framework and Easypy runtime environment in pyATS. We may even write our first true testscript.

As always, if you have any questions or just want to chat, hit me up on Twitter (@devnetdan). Thanks for reading!


Github repo: dannywade/learning-pyats: Repo for all pyATS code examples

pyATS docs: pyATS Documentation – pyATS – Document – Cisco DevNet
Genie docs: index – Genie Docs – Document – Cisco DevNet
Unicon docs: Documentation – Unicon – Document – Cisco DevNet

pyATS and Genie: Part 1


Before getting started, I wanted to take the time to address the elephant in the room… how do we even pronounce pyATS? In short, it’s pie-A-T-S. However, the pyATS team has this nice visual on their site:

Now that we are all on the same page, let’s start looking at pyATS and how it fits into the network automation landscape.

“A piece of the pie”

I recently had a “Meet the Engineer” discussion with JB (@jeaubin5 on Twitter) from the pyATS dev team and we talked about the different “domains” that make up a proper network automation ecosystem and ultimately how pyATS fits into it. I always refer back to this visual from a session I attended at Cisco Live 2019 in San Diego.

Screenshot from DEVNET-1204 presentation at CLUS 2019

Each slice of the pie represents a domain in the ecosystem. If you’re just starting your network automation journey, I’m sure you’ve heard of or played with many of the tools found in the ‘Configuration Management’ domain. This domain is by far the most popular when it comes to network automation. Besides having an abundance of tools available, this domain is popular due to the number of use cases it can help solve. Network engineers usually begin looking at network automation when they have a proper use case or problem they’re trying to solve, and most of the time that use case revolves around pushing out configuration at scale. This may be pushing out a mass configuration update or ensuring that devices are adhering to a “golden configuration”. However, this really shouldn’t be how you begin your network automation journey…

As with trying anything new in networking, or tech as a whole, you want to start testing with changes that have the lowest risk and impact. Such changes are usually read-only and do not affect the configuration or operational state of a device. I think most engineers can agree with that… so why do we not take the same approach with network automation? To be fair, configuration management can be limited to only collecting and parsing configuration (no configuration changes). However, most use cases revolve around pushing out configuration (i.e. updating an ACL, global configuration (NTP, DNS, etc.), or simply adding a VLAN to a group of switches). Let’s start our network automation journey the right way by discovering how our network is really operating before pushing out any potentially breaking changes. We will begin by diving into the ‘Stateful Test & Validation’ piece of the pie.

What’s pyATS?

Before pyATS was released to the public in late 2017, the ‘Stateful Test & Validation’ domain was pretty bare. At that time, much of the focus was around configuration management, which made sense. Configuration management was a long-standing problem for many network teams, so solving that problem was a huge and quick win. Fast-forward to the present: infrastructure-as-code and automation as a whole have taken off. We are beginning to view and manage our networks through git repositories instead of SSH sessions. With this rapid adoption of automation, network engineers began figuring out that network automation went beyond just mass configuration pushes or config management. We could begin collecting data from the network via MDT, logs, parsed ‘show’ output, etc., and tell exactly how the network is running and how it reacts to specific changes. Wouldn’t it be great to know that all the routers across your environment had a certain number of BGP neighbors and their associated uptime? Have you ever wondered if that one switchport was down before and after the software upgrade? Wouldn’t it be great to mock up a configuration change, create testcases on specific criteria (i.e. number of routing adjacencies, CPU/memory %, etc.), and confirm it works before pushing it out to hundreds of devices? These are only a few use cases for a network testing and validation tool. Enter pyATS…

pyATS was initially an internal testing tool for Cisco. It was open sourced and released to the public in late 2017. It creates a base testing framework for other tools and libraries. When many people first hear about pyATS, they also hear about its popular counterpart Genie. Genie is a Python library that builds on top of pyATS and provides the tools for pyATS, including network device APIs, parsers, and much more. Using the metaphor from the pyATS docs, think of pyATS as an empty toolbox and Genie being all the tools to put into it. Here’s a quick visual of how pyATS and Genie work together:

Introduction – pyATS & Genie – Document – Cisco DevNet

When I first looked into pyATS and Genie, I was confused because I thought they were one product. It wasn’t until I saw the above diagram that I slowly started figuring out the differences between them. I will emphasize that it’s not super important to understand the differences between the two when you’re first starting out, since you’ll be using them together, but it will matter down the road.


I really thought I could fit everything in one post, but the more I read through the pyATS and Genie docs, the more exciting features I found, and I want to make sure I highlight them all properly. I’ll be creating a second post (Part 2) in the next couple weeks to begin highlighting some of these neat features and diving into the technical details. Stay tuned!

In the meantime, if you have any questions/feedback, please feel free to hit me up on Twitter (@devnetdan)!

DevNet Pro Journey: COMPLETED!

You read that title correctly… I’ve completed my DevNet Pro journey by passing the Cisco DEVCOR exam last week. I’m now officially a certified DevNet Professional!

In this post, I’m going to recap my entire journey, provide personal study tips, and explain how YOU can begin your own DevNet journey!

My Journey

Ever since I passed my CCNA in 2017, I’ve been striving to obtain my CCNP certification. However, I did a lousy job of chasing that achievement. I studied off and on for weeks at a time and never really mapped out my journey. I only took one attempt at the old CCNP Route exam and failed. After failing that, I took a hard look at each of the exam blueprints and made the ultimate decision to pause until the certifications were updated. Fast-forward to Cisco Live US 2019: I was in San Diego, sitting at Chuck Robbins’ opening keynote, where he announced the revamped Cisco certifications and the addition of the DevNet certification track. I was pumped! By this point, I was already doing automation work at my job and loved it. After having a great Cisco Live that week, I began mapping out my journey.

Journey Mapping

This term is used a lot these days to describe and track customer experiences with a product, but I’ll be using it to describe how I got from Point A (CCNA with network automation aspirations) to Point B (CCNP and DevNet Professional).

The most important piece of mapping out your journey is to imagine the person you want to be. For example, I envisioned myself being a network automation engineer with CCNP and DevNet Professional certifications, so that was my “finish line” in this journey. This is the most crucial part of your plan, as it clarifies WHY you’re embarking on this journey. There are many other WHYs that can be included here: job promotion, future opportunities, your family, etc. Another crucial piece of your journey map is setting limitations on what you’re achieving. This may sound weird, because you always hear motivational sayings like “nothing can hold you back” and “the sky is the limit”. However, these limitations apply only to the scope of this specific journey, not your entire life, and they help keep your focus on the journey’s goal and on this specific building block. For example, my limitation for this journey was that I wasn’t going for my CCIE. I knew the temptation would be there once I passed the ENCOR exam, since it’s now the CCIE written exam, but I removed that temptation from the start by stating that only the CCNP certification was in-scope. The CCIE would be its own journey.

My Journey Recap

Once I knew where I wanted to go, I had to figure out how to get there. I started with the DevNet Associate. While studying for the DevNet Associate, the DevNet 500 was announced, which just poured fuel on my motivational fire. I ended up taking and passing the exam on the first day it was available in February 2020, and became part of the DevNet 500. After the DevNet Associate, I went back to the journey map and figured out that I could double up the Cisco ENAUTO exam as my specialist exam for both certifications, the CCNP and DevNet Professional. With DevNet being newer, and the CCNP looming over my head since 2018, I decided to begin with the ENCOR exam. I passed that exam in October 2020 and moved immediately to the ENAUTO exam. I was able to quickly turn that around and complete it a month later in November 2020. I want to stress that I had past experience with 2 of the 3 Cisco product APIs outlined in that exam blueprint. That helped immensely while studying, which led to the quick turnaround in passing the exam. After the ENAUTO exam, I took a break for the holidays and reviewed the final exam’s blueprint, the DEVCOR. This was by far the hardest exam. Besides the number of topics I didn’t know much about (Software Development and Design, K8s, etc.), this exam was also very broad. Like the ENCOR exam, there were a lot of topics covered and you really had to understand each one with some depth. I began studying for the DEVCOR exam in the beginning of January 2021 and was finally able to pass it last week (mid-April 2021).

DEVCOR Exam Tips

Create a Routine

One of the biggest tips I can give you is to create a routine. I mean same time, same place, on a regular cadence (that you choose). Some people start out saying they will study 7 nights a week, 2-4 hours a night, but let’s be realistic… life gets in the way. Look at your calendar from the start and take note of life events that you know will prevent you from studying. For example, there were a few life events that I knew would throw off my schedule: family birthdays, weekend trips, etc. During those weeks, I reduced the amount of material I covered. Read through the blueprint for the exam you’re studying for and assign specific topics to each week. By the end, you should have a detailed schedule of how long it will take you to study (with time to review) before sitting for your first exam attempt.

Once you have a clear schedule (with realistic expectations), you can begin figuring out a specific time to study. This will highly depend on your situation in life, with the biggest factor being family responsibilities. Some people try to add time to their day by studying later at night or earlier in the morning. However, I went with option C and chose to substitute time from my existing schedule. Rather than watching TV or Netflix after dinner every night, I chose to study. Was it tough? Absolutely. I was so used to chilling out on the couch every night and watching my favorite series. There were some nights that I felt like I could blow it off, but my schedule reminded me that I would have to study twice as long the next day if I skipped. The last piece to the puzzle is where to study (stealing a real estate term: location, location, location). This is very important in order to keep concentration during your study time. At first, I studied at my dinner table. I quickly learned that there was too much going on around me and I couldn’t focus. I ended up studying in my office where I could close the door and had proper lighting. This location will be different for everyone. The only thing I suggest is to study only at this location while you’re at home. Your mind will be prepared and it will help you focus while studying.

Study Material

With the DevNet exams being a little over a year old now, there is minimal “official” study material available. This begs the question, where do I look for study material and what material is even good? Before we jump into the available resources out there, I want to go over my approach to choosing study materials. I always use each of the following formats:

  • (2-3) Books or other reading sources (whitepapers, blogs, etc.)
  • (1-2) Video courses
  • Labbing (Crucial piece!)

Now for the specifics: For books, I normally read through the Cisco Official Certification Guide (OCG) for that particular exam (if available), supplemented by Cisco’s online documentation. For video courses, I’ve only used CBT Nuggets and Pluralsight in the past. However, this year, I’ll be checking out INE courses. For labbing, it really depends on the exam. For DevNet exams, I use Cisco DevNet’s free and reservable sandboxes. I used to use EVE-NG, but found myself troubleshooting issues vs. working with the actual products. I still use EVE-NG if I want to test some feature more extensively, but in the context of studying, your best bet is to stick with the sandbox environments.

The only exception to my study material approach is when I studied for the DEVCOR exam. With the extensiveness and depth of the exam, I decided to go all out and purchase Cisco’s Digital Learning DEVCOR course. It’s an online course that provided a combination of reading material, videos, and hands-on labs. I would highly recommend this course for anyone preparing for the DEVCOR exam. I can confidently say it helped me pass the exam. Now let’s talk about how you can get started on your own DevNet journey.

Starting your own DevNet Journey

Since I started my DevNet journey back in December 2018, the DevNet program has grown extensively. DevNet certifications were introduced. More resources have been added to the Cisco DevNet site, including learning labs and sandboxes. The program itself has become more popular due to businesses realizing the value behind automation and engineers beginning to look at their infrastructure as code. Here is how I suggest starting your journey:

  • Look through the available DevNet exam blueprints. I’d suggest starting with the DevNet Associate.
  • Review the learning labs on Cisco DevNet
    • Learning labs provide step-by-step instructions on how to programmatically interact with a specific Cisco product or network device.
  • Create API requests using Postman
    • Postman is a tool used to test and explore APIs. Many learning labs require this piece of software as a pre-requisite. After receiving your first API response through Postman, I promise you, there’s no turning back…
  • Python – Requests library
    • The Requests library is used to programmatically interact with HTTP API endpoints (i.e. Cisco products in our case). This library allows you to collect the API response and perform additional manipulation/validation of the data using the power of other Python libraries.
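To tie that last step together, here’s a minimal sketch of what a first Requests script might look like. The URL, token, and query parameters are placeholders, not a real Cisco endpoint; the request is built but not sent, so you can inspect the pieces before pointing it at an actual sandbox.

```python
import requests

# Hypothetical REST endpoint -- substitute a real Cisco sandbox URL
url = "https://sandbox.example.com/api/v1/devices"

# Build the request without sending it, to see the pieces involved
req = requests.Request(
    "GET",
    url,
    headers={"Authorization": "Bearer <token>", "Accept": "application/json"},
    params={"limit": 10},
)
prepared = req.prepare()

print(prepared.method)  # the HTTP verb
print(prepared.url)     # full URL with the query string appended

# Against a real endpoint, you would send it and parse the JSON body:
# with requests.Session() as s:
#     resp = s.send(prepared, timeout=10)
#     resp.raise_for_status()
#     devices = resp.json()
```

From there, `resp.json()` hands you a plain Python dictionary, which is where the real power kicks in: filtering, validating, and reshaping the API response with ordinary Python.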

It’s worth mentioning that the above suggestions are specific to Cisco’s DevNet program and do not cover network automation as a whole. There are plenty of open-source Python projects out there that can help you get started, such as Netmiko, Nornir, NAPALM, Scrapli, and pyATS, to name a few. These projects allow you to programmatically connect to a network device (via SSH, NETCONF, etc.), collect and parse ‘show’ command outputs, and even push configuration to a device. You may even see a couple of these libraries referenced in the DevNet exam blueprints.


I know this is a longer and more personal post, so I appreciate you reading thus far. I purposely made this more personal and detailed so that you could know my story and hopefully relate in some way. I’ve read many “study tip guides” out there and they all seemed to bullet point the same high-level topics. I figured if I added more personal details and told my story, it could be more relatable to you. If you’re interested in Cisco DevNet or network automation and have questions, please feel free to hit me up on Twitter (@devnetdan).

Thanks for reading!

DevNet Pro Journey: My First DEVCOR Exam Experience


Last week, the day finally came: I took my first swing at the Cisco DEVCOR exam. Going into it, I felt good about the material. I had studied for weeks on end and maintained a consistent schedule, studying about 6 nights a week. Consistency was one of my study goals for this exam. In the past, I would have longer study sessions but only study 3-4 nights a week. I knew I needed to maintain consistency for an exam like DEVCOR. In the final week leading up to the exam, I dedicated 2-3 hours to focused study sessions, which included reviewing my notes, reading additional online documentation, and labbing in the Cisco DevNet sandboxes. After weeks of preparation, I was ready for my first attempt.

The Exam Experience

Surprisingly, I wasn’t nervous the night before or the morning of my exam. Recently, I’ve changed my mindset for taking certification exams. I now look at certification exams as when I’m going to pass vs. if I’m going to pass. This mindset has helped remove the mental obstacle that it’s all over if I don’t pass on the first attempt. Looking forward to when you pass helps ease your mind and ensures that, no matter what, you’ll find a way to pass the exam and obtain your certification. Yes, there is a financial cost associated with each attempt, so don’t take this advice and fall to the other end of the spectrum of “there’s always next time”. You need to find that fine balance between both philosophies.

Now on to the actual exam itself.

I found the exam to be extremely fair. The questions were relatable and much less trivial than Cisco exams in years past. Compared to the DevNet Associate, this went well beyond knowing how a particular product’s API is structured, basic API interactions, and 5-10 line Python scripts. The questions really put you in the driver’s seat and made you make a decision based on a given scenario. Just like in network engineering, you have to know how the underlying technology works before making any higher-level business decisions. The same applies when you are building an application. There are many components to a piece of software, so knowing the underlying technologies and which pieces fit a specific use case is critical. Let’s now take a look at one of the likely reasons for my first-attempt failure.

My [temporary] Kryptonite

I use the word Kryptonite, but that’s a little exaggerated. Kryptonite has a permanence to it. I’ve learned from my experience and I’ll be making an effort to ensure that this issue doesn’t cause problems in my next attempt. So what was my Kryptonite in my first attempt? Time management.

Before taking the exam, I watched Knox Hutchinson’s (Data Knox) video on YouTube, where he reviewed his exam experiences for the DevNet Associate, ENAUTO, and DEVCOR. I did this before taking the ENAUTO and figured it would be good to watch again before DEVCOR. He nailed it right on the head when he mentioned that the only issue he had was time. Like him, I was about halfway through the exam when I realized I only had about 45 minutes left. For a 2-hour exam, that may not seem too bad, but at the rate I was answering questions, I wouldn’t have finished if I didn’t speed things up. I ended up quickly answering 15-20 questions in a row and realized I’d made up the time, which put me in a good position to finish the test (on time). After receiving my test score, I now see that rushing those 15-20 questions may have been what led to my ultimate failure. In preparing for my second attempt, I’ll be cognizant of which topics took me the longest to answer on my first attempt and try to close those knowledge gaps.

So What’s Next?

Failing my first attempt was really a lesson for me. I passed the Cisco ENCOR and ENAUTO exams each on the first try, so this experience taught me a lot about my preparation and reinforced that this truly is a journey. You live and learn from your failures. It isn’t about if I pass the exam; it’s about when I pass the exam and obtain the certification. With that mindset, I’m able to look forward and learn from my mistakes and shortcomings. Immediately following my failed first attempt, I rescheduled the exam for two weeks later. My score was close enough that I believe two weeks will be enough time to close the gaps and be prepared for a second attempt. In about a week, I’ll do a self-assessment to see where I’m at and whether I need to push it out another week, but that’s later down the line.

I wanted to write this post as an appreciation for all the support I received on Twitter and to help elaborate on the details of my exam experience. As always, if you have any questions or would like to talk, hit me up on Twitter (@devnetdan). Thanks for reading!

DevNet Pro Journey: DEVCOR Weeks 5 and 6

Hello again! I hope you all had a great Valentine’s Day weekend! As mentioned in my tweet last week, I skipped last week to finish out my first pass of the blueprint. The topics we will be looking at this week will focus on most of section 4.0 – Application Deployment and Security and all of section 5.0 – Infrastructure and Automation. This closes out the remaining DEVCOR exam topics, so stay tuned for my complete DEVCOR exam blueprint impressions later in this post.


Thank you for being patient with my posting schedule this past month. These exam topics have been extremely tough. In particular, the remaining topics in section 4.0 revolve around security, and I don’t mean encryption/hashing methods, routing protocol authentication, or how to configure IPsec tunnels. These security topics involve how an application is deployed and accessed by the end user. If you went through college/university, you most likely had at least one IT security course that mentioned OWASP threats (cross-site scripting (XSS), CSRF, SQL injection). If so, crack open that old textbook, because that is exam topic 4.10 – Implement mitigation strategies for OWASP threats. Along with that example, there are specific topics for configuring SSL certificates (4.9), implementing a logging strategy for an application (4.6), and explaining privacy concerns (4.7). These topics really shift your mindset towards the “Dev” part of “DevNet” (sorry, I had to). Now let’s jump into the topics!

Week 5 and 6 Topics

There are many topics to cover. Here’s the entire list of topics we will be covering this week, straight from the blueprint:

DEVCOR Exam Topics (screenshot taken from the official exam blueprint)

I will not be able to dive into each of these topics, but I will talk about them in general categories. For example, topics 4.3 – 4.4 relate back to my week 4 topics surrounding application development. Topics 4.5 – 4.11 all concern application security. All of section 5 (5.1 – 5.5) involves infrastructure automation using telemetry, automation platforms (Ansible/Puppet), and building apps ON the network devices using IOx. Now that we’ve grouped the topics, let’s take a look at each category.

The application development topics in section 4 were very interesting. I learned how to develop an application using Docker/Kubernetes and deploy it using a CI/CD pipeline. After learning these topics, you start to understand, at a high level, how companies stay agile in their code development while also maintaining proper checks and balances using CI/CD. I don’t know how others felt while reviewing these topics, but before learning about these technologies, I never saw an application as more than one large monolithic app. I never really considered all the microservices that make up a large website such as Amazon, eBay, and many others. Docker and Kubernetes allow developers to manage all aspects of their dev environment without needing to provision multiple VMs, which ultimately saves time. They also allow developers to modularize their app so that devs can work on different aspects of the same app (i.e. login, shopping cart, etc.) at the same time. This allows for more updates and innovation for each individual component, since a group of developers can be dedicated to that one component. To wrap it all up, I dove into the details of CI/CD. If you’ve read my other blog posts, you’ll have seen that I have some experience with CI/CD in GitLab for managing Cisco network devices in a virtual environment (EVE-NG) on Google Cloud Platform (GCP). I don’t proclaim myself a CI/CD expert or even a novice, but I would say I understand the importance of CI/CD and the steps that make up a pipeline (build, test, deploy). I definitely enjoyed studying these topics and will be reviewing them again on my second passthrough of the blueprint.

Application security was by far the toughest set of topics to get through during my studies. Full disclosure: I skimmed over these topics compared to the others. I did this purposely because I didn’t want to get caught up or discouraged while reviewing them, causing me to abandon my studies altogether. My goal was to ‘check the box’ with these topics on my first passthrough and then hit them HARD on my second. I plan on dedicating multiple days to each of these topics on my second pass. I can openly say that these will be my weak points on this exam. A note to everyone reading: it’s important to identify your weaknesses when studying for a cert (and really anything in life) and know that you will have to focus more time on those topics than others. It sucks because these topics won’t be as enjoyable as the others, but you must do it. Even though these topics are my weakness, I did find them very interesting. In particular, the tenets of the “12-factor app” really open your eyes to the best practices of developing a secure app. This exam topic helps you elevate the scripts you may have created when studying for the DevNet Associate. You should no longer be hardcoding credentials in your scripts; instead, look at using environment variables. I’m glad the DEVCOR exam covers this topic, because I remember googling best practices for securely storing credentials. I had to look it up because I was using my scripts at my job to perform simple tasks, such as gathering CDP neighbor details, and I didn’t want to store my TACACS credentials in plaintext within my script. This topic is just another example of how the DevNet Professional cert prepares you for a career beyond developing simple, one-off scripts, with a focus on implementing app development best practices.
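As a sketch of that last point, here is the environment-variable pattern in Python. TACACS_USER and TACACS_PASS are made-up variable names for illustration; the idea is simply that the secret lives in the shell environment, not in the source file.

```python
import os

# Twelve-factor style: read credentials from the environment instead of
# hardcoding them. TACACS_USER / TACACS_PASS are illustrative names.
username = os.environ.get("TACACS_USER")
password = os.environ.get("TACACS_PASS")

if username is None or password is None:
    print("Set TACACS_USER and TACACS_PASS before running this script")
else:
    # Hand the credentials to your SSH/API library here
    print(f"Connecting as {username}")
```

You’d set the variables in your shell (e.g. `export TACACS_USER=...`) or load them from a local, git-ignored file before running the script, so the credentials never land in version control.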

The last category of topics I reviewed surrounds infrastructure automation. All the topics in this category make up section 5.0 – Infrastructure and Automation. This section felt more like home because I could relate to all of its topics. Many were high-level, such as explaining model-driven telemetry (MDT). MDT is made up of many components, including RESTCONF/NETCONF, YANG data models, gRPC, and other protocols that generate and transport the data to your monitoring server. So in order to explain MDT, you have to know the details of the underlying components. Other topics in this section include constructing workflows using Ansible and Puppet, identifying a configuration management solution based on certain requirements, and describing how to deploy an application on a network device using IOx. Let’s take a quick look at each of these. Building a workflow using Ansible and Puppet was pretty natural for me. I’ve created many Ansible playbooks and understand the overall workflow using variables, roles, plays, and tasks. I have minimal experience using Puppet, but the DSL of Puppet Bolt was pretty straightforward and felt a lot like JSON. If you haven’t had any experience with Puppet or Bolt, I encourage you to take a look. Creating a simple manifest file (Puppet’s version of a playbook) can take less than 5 lines of code using certain modules. The last topic on the blueprint, 5.5 – Describe how to host an application on a network device (including Catalyst 9000 and Cisco IOx-enabled devices), is probably the most interesting topic in section 5. I think the idea of deploying an application (as a Docker container) on a network device is next-level. However, I’m struggling to understand the sustainability of this deployment model. I understand why you might deploy an app directly on a network device, whether for security, data processing, or even troubleshooting purposes.
The issue I have is the maintainability of these applications and the operational responsibility. I’ve read from multiple sources that DNA Center is the obvious answer to maintaining the apps dispersed across all the Cisco Cat 9k devices in your environment. However, one big consideration is vendor lock-in. With the decision to use DNA Center as your automation platform of the future, you must consider the licensing and other commitments you are making to Cisco. There is no wrong answer; it’s just a fact of committing to any tool or software. I personally find DNA Center to be a great platform for centralizing your automation efforts, as long as you are in an all-Cisco environment or plan to refresh your existing infrastructure with Cisco Catalyst 9k switches. It has a very robust API and many built-in integrations with tools such as ServiceNow and Infoblox. The other question mark I have for deploying apps on a network device is the troubleshooting and triage effort when things go south. Who’s ultimately responsible for the uptime of the app? What happens when the application goes down? If the network team needs to upgrade the IOS-XE software on the switch, can the app take downtime? What if the switch loses power for a long period due to an outage? I know the answers will vary from organization to organization, but it brings up some considerations when creating a proper operational procedure for maintaining these apps. Overall, I really enjoyed all the topics in section 5 of the blueprint.

My Week 5 and 6 Impressions

I put a lot of my thoughts inline with each topic throughout the post, but let me highlight my impressions of each category. The application development topics, specifically CI/CD, were enjoyable to review. I already have some experience creating a CI/CD pipeline in GitLab, so studying this topic helped formalize my experience. The application security topics in section 4 are going to be very tough for me. I know I will need to spend at least a week just on those. The last category surrounds infrastructure automation. Like the app development category, these topics came more naturally, as I have some experience with MDT, configuring devices using RESTCONF, and tools such as Ansible. I do find the last topic in this section, deploying an app on a network device, interesting and hope to find more use cases for this technology. As I always say, you should understand the WHY before figuring out the HOW.

DEVCOR Exam Blueprint Impressions

Now that I’ve completed my first passthrough of the entire DEVCOR exam blueprint, I wanted to share my impression of the blueprint in its totality. As a whole, I find this blueprint to be very challenging. I think this exam really lays the foundation for becoming a developer. As I’ve mentioned in a previous post, I found myself in a developer’s mindset throughout my studies. I wasn’t focused on specific networking concepts or technologies like in previous Cisco exams. I didn’t need to know the different timers of EIGRP or OSPF. I didn’t need to understand BGP. This exam puts app development front and center, with networking as something you’re looking to automate using app development best practices. I look forward to my second passthrough of the blueprint and know that I’ll learn something new while reviewing these topics again.


Sorry for the long post this week, but we had a lot of exam topics to cover. With this post completing my high-level review and impressions of the DEVCOR exam blueprint, look forward to shorter posts in the future that will be more technical and focused on specific DEVCOR exam topics. Beginning this week, I will be going through the entire blueprint a second time, but slower and with more focus. I will be using Cisco’s Digital Learning on-demand DEVCOR course as a study guide (check it out here). I’ve heard great things about it, so I figured it would be a great investment. I will most likely include reviews of the course in my future posts. Thank you for reading through this mini-series of my exam blueprint impressions throughout my DEVCOR journey. I hope you’ll stay tuned for the future posts that continue it. If you have any questions/feedback, please hit me up on Twitter (@devnetdan)!

DevNet Pro Journey: DEVCOR Week 4

Hello, it’s been a couple of weeks! I know this isn’t technically week 4 since I didn’t post last week, but I’m calling it that for consistency’s sake. As mentioned in my tweet last weekend, I was busy diving into YANG models and model-driven telemetry (MDT) using the TIG stack (more on that later), so I wanted to wait until I had a little more content to post. Today’s post will mostly revolve around MDT, CI/CD workflows, and my thoughts on deploying an application using Docker and Kubernetes.


This week will be a little different than others. In the past, I reviewed entire sections (i.e. 1.1 – 1.x). My method of studying is to review each topic at a high-level with a video series, then dive into labbing each topic. The topics outlined in DEVCOR exam sections 3.8 and most of section 4 are a little more involved so it may take a few weeks to get through these sections. Every topic has multiple components and requires some additional understanding. For example, exam topic 3.8 Describe steps to build a custom dashboard to present data collected from Cisco APIs requires you to understand a few concepts: NETCONF/RESTCONF, YANG data models, and a technology stack to ingest the data and produce a dashboard (notably the TIG or ELK stack). As you can see, covering exam topic 3.8 is more involved than memorizing steps 1..2..3 for building a custom dashboard. With that being said, let’s jump into the topics!

Week 4 Topics

Here are the topics covered this week:

DEVCOR Exam Topics (screenshot taken from the official exam blueprint)

These were three monster topics. For topic 3.8, I spent two nights just reviewing the structure of YANG data models on IOS-XE devices, and that’s only one piece of the puzzle. The major mystery for me was deploying the appropriate software stack (TIG) to collect the telemetry data from the IOS-XE devices and produce clean dashboards. However, I was very surprised how easy it was deploying a TIG stack (Telegraf, InfluxDB, and Grafana) using Docker. With Docker, you can deploy the entire stack, with each component communicating with one another, using a docker-compose.yml file. Before moving on, here’s a quick summary of the TIG stack and each component:

  • Telegraf – used to collect data and metrics from the IOS-XE devices. Think of this as the entry point for the data being sent.
  • InfluxDB – time-series database that stores the collected data. Time-series databases are helpful in our case since we want to see metrics over a given time period, a query pattern that other database types aren’t optimized for.
  • Grafana – pulls data from InfluxDB and creates the fancy dashboards

There are many examples on the web for building the TIG stack using Docker, so don’t worry about writing your own (unless you want the practice!). I personally used Knox Hutchinson’s example, since I was following his tutorial on CBT Nuggets. Here’s a link to his GitHub Code Samples repo.
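To give a feel for how little it takes, here is a stripped-down sketch of what such a docker-compose.yml can look like. The image tags, ports, and file paths below are assumptions for illustration only; use a published example (like the one linked above) for a working configuration.

```yaml
# docker-compose.yml -- illustrative sketch; image tags, ports, and
# paths are assumptions, not a tested configuration
version: "3"
services:
  telegraf:
    image: telegraf
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    ports:
      - "57000:57000"   # telemetry dial-out from the IOS-XE devices
    depends_on:
      - influxdb
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"     # InfluxDB HTTP API (Telegraf writes here)
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"     # Grafana web UI
    depends_on:
      - influxdb
```

A single `docker-compose up -d` then brings up all three containers on a shared network, with each component reachable by its service name.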

After tackling MDT and deploying a TIG stack using Docker, I started reviewing section 4 topics, beginning with 4.1 Diagnose a CI/CD pipeline failure (such as missing dependency, incompatible versions of components, and failed tests). I felt comfortable reviewing this topic, as I have experience (see previous blog posts) building CI/CD pipelines with Ansible on GitLab. However, CBT Nuggets covered deploying a CI/CD pipeline using Jenkins, which I had no previous experience with. After learning more about Jenkins and creating a Jenkinsfile, I found it to be very similar to GitHub Actions and CI/CD with GitLab. Unless I had a specific use case for Jenkins, I think I would rather use the integrated CI/CD workflows wherever my code is hosted (GitHub or GitLab). However, in an enterprise environment, I could see the use case for Jenkins as the central piece of software managing many CI/CD workflows.
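For reference, the build/test/deploy flow discussed in topic 4.1 maps to a handful of lines in GitLab’s integrated CI. This is an illustrative sketch only; the job names and script commands are placeholders:

```yaml
# .gitlab-ci.yml -- illustrative sketch; commands are placeholders
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - pip install -r requirements.txt   # a missing dependency fails here

test-job:
  stage: test
  script:
    - pytest                            # a failed test stops the pipeline

deploy-job:
  stage: deploy
  script:
    - ansible-playbook deploy.yml
  when: manual                          # require human approval to deploy
```

Each stage lines up with a failure mode named in the exam topic: dependency problems surface in build, failed tests in test, and nothing reaches deploy until the earlier stages pass.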

I’ll admit that I’m not completely finished with the third and final topic: 4.2 Integrate an application into a prebuilt CD environment leveraging Docker and Kubernetes. I do not have any previous experience with Kubernetes or with integrating an application into a CD environment, so this one is taking me a bit longer. I’d like to take this moment to demystify a misconception I had about Docker and Kubernetes: they are not competing container technologies. I used to think there were Docker containers and Kubernetes containers, and that you had to choose which type to deploy. After reviewing this topic, I now understand that Kubernetes helps manage containers and their workloads. Docker has its own container orchestration software called Docker Swarm, which rivals Kubernetes, but there aren’t separate Docker containers and Kubernetes containers. Kubernetes deploys and manages containers within a Kubernetes cluster. While they’re deployed in a cluster, Kubernetes monitors the workload of each container and moves them between physical servers or scales them as needed. On top of managing their workload, Kubernetes makes each workload portable: you can move workloads from your local machine to production without rebooting the entire application. This portability allows for faster integration and deployment cycles (i.e. CI/CD workflows). As you can see, I’m pretty pumped to review this topic more in-depth, so I’ll report back next week with my additional findings.
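To make the “Kubernetes manages containers” point concrete, here is a minimal Deployment manifest sketch. The name, image, and port are made up for illustration; the key line is `replicas`, which tells the cluster how many copies of the container to keep running, rescheduling or scaling them as needed:

```yaml
# deployment.yml -- illustrative sketch, not a production manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: shopping-cart
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
        - name: cart
          image: registry.example.com/cart:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yml`, the cluster continuously reconciles reality against this declared state, which is what makes the workload portable between environments.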

My Week 4 Impressions

Even though this week only consisted of three topics, I still feel like I need to go back and review each one again. There was so much depth to each topic and, honestly, a lot of personal interest. For MDT, I still feel like there are so many more details I need to cover, including diving deeper into each component of the TIG stack. I have some past experience with Grafana, mostly because it’s the component that makes the nice-looking dashboards, but I would personally like to dive more into the database component (InfluxDB). During my initial labbing, I found endless possibilities for the data that can be collected using YANG models on IOS-XE devices. The YANG models unlock so many datapoints that you may not find using traditional SNMP monitoring.

The two topics I covered in section 4 switched my mindset a bit and helped me recognize the thought processes you should have behind deploying an application. Don’t get me wrong, these topics alone won’t teach you everything you need to know about deploying an application, but coming from a network engineering background, it’s very eye-opening. The CI/CD workflows covered in 4.1 are very relevant to how I see network engineers managing network device configuration in the future. Having a single source of truth for all device configurations and having built-in testing/deployment mechanisms with CI/CD workflows will be key to scaling and modernizing networks as companies go through digital transformations.


Thanks for reading my DEVCOR study review this week. There weren’t as many topics this week, but each one was very heavy. Stay tuned for my next post, as I cover more exam topics in section 4.0 Application Deployment and Security. As always, if you have any feedback or questions, you can find me on Twitter @devnetdan.

DevNet Pro Journey: DEVCOR Week 3

Hello again! This week, my main focus in my DEVCOR studies revolved around topics in exam section 3.0 Cisco Platforms. My disclaimer up front is that I didn’t spend as much time studying this week as I would have liked, so I only completed a little over half of the outlined topics.


As mentioned in my previous post, I was going into this week with the idea that studying would be pretty straightforward – learn the different Cisco product APIs. However, I quickly learned that I had to learn more than just the APIs themselves. Coming up, I’ll highlight the topics I reviewed and my impressions on the material.

Week 3 Topics

As always, I’m providing a screenshot directly from the exam blueprint of the topics covered.

Taken from the official DEVCOR exam blueprint

As you can see, there are many more topics in section 3.0 vs last week’s section (2.0). Despite having more topics in this section, I thought this was going to be easier than the past two sections. Section 3.0 topics revolve around Cisco products and their associated APIs. I figured this section would take me back to my DevNet Associate and ENAUTO study sessions, where I focused mostly on understanding the product’s API – I was wrong.

To describe my study issues this week, I found relevance with this tweet from Hank Preston.

“You can’t automate what you don’t understand…”. The biggest issue I had wasn’t parsing through the JSON/XML responses, but rather actually understanding each product and the problems it solves. Technical understanding aside, I didn’t work with these products day-to-day, so learning how to interact with their APIs didn’t do me any good on its own. As a result, much of my time this week was spent going over product white papers and other documentation – not sending API requests. That’s why I wasn’t able to cover every topic in this section (like I thought I would).

I needed to first read up on and understand each product, THEN learn how to interact with it and automate certain workflows. Don’t get me wrong – these products aren’t foreign to me. I had a rough idea about each product going into each study session. What I tried to learn was how each product plays a role in an enterprise environment. For example, why use FDM vs. FMC to manage your devices running FTD (yes, I purposely used all the acronyms I could 🙂 ). Understanding each product and the purpose it serves in an enterprise infrastructure will help you better understand WHY you would want to interact with and automate certain workflows using its APIs.

My Week 3 Impressions

I think I pretty much summed up my impressions of this week already, but as it relates to the actual topics, I was able to review topics 3.1 – 3.5. I was familiar with 3.1 and 3.3 (Webex ChatOps and Meraki APIs), as I have experience using both products and their APIs. On top of that, both APIs are well-documented and easy to pick up (in my opinion). The other topics (3.2, 3.4, 3.5) were a bit tougher to pick up since I haven’t had as much experience with the products outlined in each topic (FDM, Intersight, and UCS Manager).

The main lesson I learned was that understanding a product’s functionality and role within an enterprise environment, versus jumping right into the technical docs, will better prepare you when studying for an exam.


I know this post was shorter than past weeks, but expect these weekly updates to vary depending on the topics or lessons learned from that week. Next week, I’m aiming to have the rest of section 3 and about half of section 4 completed. Section 4 (4.0 Application Deployment and Security) has a total of 11 topics, many being new to me, so I expect section 4 to take a couple weeks to review.

Please comment or hit me up on Twitter (@devnetdan) if you have any questions or feedback. Thanks for reading!

DevNet Pro Journey: DEVCOR Week 2

This week I’m continuing my DevNet Professional journey and discussing the DEVCOR exam topics I reviewed this past week: 2.0 Using APIs.


The topics in this section really make you dive in and think about how users access your application, whereas the topics covered in 1.0 Software Development and Design mostly cover the architecture of applications. To many network engineers, these topics may seem overwhelming and almost unrelated to your current job of designing and building networks… because they are… However, the whole point of the DevNet certification track is to show how network engineering and monitoring can change in the future through automation and software development practices. You must keep your mind open to these new software development concepts.

The surprising thing I found is that you can learn these DevNet topics the same way you did when you studied for the CCNA. I remember studying for my CCNA a little over 3 years ago and thinking how overwhelming the information about each networking topic was. From MAC addresses and layer 2 operations (STP) to layer 3 and routing protocols (EIGRP, OSPF, BGP), I remember thinking, “How does anyone learn all of this and remember every detail?”. Over time, I’ve learned that network engineers do not remember each and every detail, but know where to look and have the intuition to troubleshoot an issue. You can take this same approach when studying the DEVCOR topics. Yes, for the exam, you will be expected to recall details about each topic. However, the goal of the exam is for you to understand the intricacies of an application’s architecture and be able to make proper decisions when building or troubleshooting an application in the future. With that being said, think back to your CCNA days: that exam gave you the ability to identify the intricacies of each layer in the networking stack (OSI layers 1-4) and to make proper decisions when building networks in the future.

Week 2 Topics

For reference, here’s a screenshot from Cisco’s website of the topics covered under 2.0 Using APIs:


As you can see, these topics pivot away from consuming an API and focus more on constructing one. There is a section (3.0 Cisco Platforms) that focuses more on consuming Cisco APIs, but I’ll get into that next week. These topics felt like a step up from the DevNet Associate and ENAUTO API topics. In those exams, you learned how to construct an API request and use Postman or a small Python script to make it. DEVCOR builds on that by introducing a new authorization method (OAuth2) and proper error handling for REST API requests/responses (HTTP error code 429 and other flow control techniques). These topics take your “one-off” scripts and teach you how requests may be handled when integrated into a larger application. For example, you can’t feed data into a function or method and just expect the output to be clean. When first starting out, the “blast radius” is somewhat small: you make one API request and receive an expected response. Once you begin building out a more complete application, you’ll need to add error handling so that your application doesn’t crash.
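To make the 429 handling idea concrete, here’s a minimal sketch of a retry-with-backoff loop. The "API" here is simulated with a list of canned responses so the example is self-contained; in a real script, `request_fn` would wrap an actual HTTP call and you would read the `Retry-After` header from the response.

```python
import time

def call_with_backoff(request_fn, max_retries=3, base_delay=1.0):
    """Call request_fn() until it returns something other than a 429,
    backing off between attempts."""
    for attempt in range(max_retries + 1):
        response = request_fn()
        if response["status"] != 429:
            return response
        # Honor the server's Retry-After hint if present,
        # otherwise fall back to exponential backoff.
        delay = response.get("retry_after", base_delay * (2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("Rate limit persisted after retries")

# Simulated API: rate-limited twice, then successful.
responses = [
    {"status": 429, "retry_after": 0},
    {"status": 429, "retry_after": 0},
    {"status": 200, "body": {"devices": []}},
]
result = call_with_backoff(lambda: responses.pop(0))
print(result["status"])  # 200
```

The same shape works whether the 429 comes from Webex, Meraki, or any other rate-limited REST API – the control flow is what DEVCOR cares about, not the specific endpoint.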

Along with error handling, these topics also cover REST API operations such as pagination and HTTP caching. Pagination is important because it controls the number of results returned per request, which leads to a better user experience. For example, no one likes websites that keep going on and on and on. Personally, if I see a tiny vertical scroll bar, I just hit CTRL+F and hope I find the information I’m looking for with a keyword. HTTP caching is interesting because it can enhance your API’s performance by serving repeat requests more quickly, which also leads to a better user experience. As you can see, all of these topics tie back to enhancing the user experience.
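From the consumer side, pagination usually means following a "next page" pointer until the API says it's done. Here's a small sketch with a fake in-memory endpoint standing in for a real paginated API (the `results`/`next` field names are my own placeholders; real APIs vary):

```python
def get_page(items, offset, limit):
    """Simulate a paginated API endpoint: return one page of results
    plus the offset of the next page (None when exhausted)."""
    page = items[offset:offset + limit]
    next_offset = offset + limit if offset + limit < len(items) else None
    return {"results": page, "next": next_offset}

def fetch_all(items, limit=3):
    """Follow the 'next' pointer until the API reports no more pages."""
    results, offset = [], 0
    while offset is not None:
        page = get_page(items, offset, limit)
        results.extend(page["results"])
        offset = page["next"]
    return results

devices = [f"switch-{n}" for n in range(8)]
assert fetch_all(devices) == devices  # pages of 3, 3, 2
```

The server returns small pages for a snappy UI; the client loops when it genuinely needs the full set.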

My Week 2 Impressions

I definitely felt different while studying this week. At some points, I had to remind myself that this was not a traditional Cisco networking exam. This was the first week that I felt like I was in the software development world. Learning about APIs and the different components to consider when building or consuming one pulled me away from traditional networking. Besides bandwidth and delay considerations for the API’s user experience, I didn’t consider any other network-related topics. I didn’t think about routing convergence, spanning tree, or any other related technologies/protocols. I’m not discounting their importance – I just didn’t need to apply them in my studies, which felt odd once I realized I was studying for a Cisco exam. Understanding the different components considered when constructing a RESTful API has been challenging, but exciting. I’m really beginning to understand API concepts from a developer’s point of view, instead of only from the consumer perspective.


These posts may be shorter than my normal posts, but I wanted to convey my experiences and impressions without rambling too much. Next week, I’ll be covering the exam topics found under 3.0 Cisco Platforms. This section of topics should be interesting because they relate back to the APIs of Cisco products, which I learned about when studying for the DevNet Associate and ENAUTO exams. As always, if you have any questions/feedback, please hit me up on Twitter (@devnetdan).

Thanks for reading and stay tuned for next week’s post!

DevNet Pro Journey: Starting DEVCOR

This week, I’m taking a break from my CI/CD pipeline series to talk about my DevNet Professional journey. For those that don’t know, the DevNet Professional certification is structured like the new CCNP Enterprise: it requires you to pass a “core” exam (DEVCOR), along with a “concentration” or specialist exam (ENAUTO, CLAUTO, DCAUTO, etc.). For a complete list, check out the official DevNet site here: DevNet Professional exam list. In this post, I’m going to talk about where I’m at in my DevNet Pro journey, my current experience studying for the DEVCOR exam, and my ultimate goals.

My DevNet Journey Status

As you may have seen a couple months ago, I passed my ENAUTO exam as part of completing my CCNP Enterprise certification. That was intentional. Many traditional network engineers prefer the Advanced Routing and Services (ENARSI) exam, since it builds on traditional networking concepts (routing protocols, VPN services, etc.). I want to emphasize that I’m not knocking that exam at all. Its topics are foundational for all network engineers – no matter your networking path (WAN, campus, DC, automation, etc.). However, I intentionally went after the ENAUTO exam since my overall goal is to continue down the DevNet certification path and keep learning about network automation.

Now, with ENAUTO completed, my full focus is on the 350-901 DEVCOR exam. Besides the DevNet Associate (200-901) blueprint, this has been one of the only exam blueprints where I actually got excited about every exam topic. To put it into perspective, I yelled “YES!” out loud when I saw on Twitter that CBT Nuggets released their DEVCOR course right before Christmas. It felt like an early Christmas gift. Now that you understand my level of enthusiasm for this content, let’s dive into my impressions after my first full week of studying for DEVCOR.

A Little Background…

While attending Cisco Live US 2019 in San Diego, I spent most of my time in the DevNet Zone. If you haven’t been, it’s one of the most invigorating environments at Cisco Live – an area that lets you learn, connect, and get inspired by network automation. One of the best sessions I attended was titled “Working 9 to 5 as a NetDevOps Engineer” by Hank Preston. Here’s a link to the presentation slides from the Cisco Live website. Before attending this session, I had dabbled with some network automation scripts that I ran locally on my laptop and had begun noticing some of their limitations. This session took all the buzzwords you hear surrounding network automation (Python, git, source control, CI/CD, etc.) and presented them in a big-picture workflow that addressed many of the limitations I had experienced. This was overwhelming in a good and bad way. It was good because it unlocked so many new techniques/processes in the network automation world. However, on the flip side, it gave me serious anxiety about all the new things I had to learn ASAP if I wanted to thrive in this new, automated world.

I wanted to share that small story because it gave me some perspective on my future and the reality that is NetDevOps. I think we can all agree that network automation will creep into every network engineer’s life sooner or later. However, when you first hear about it, the common question comes up: are we network engineers, or are we becoming software developers? This gets the room quiet, because many of us do not want to go down the software development path, which is understandable. However, software development and the tools that come with it (version control (git), automated testing, etc.) are not new to the world. Software developers have been using and refining these techniques for a long time, so why reinvent the wheel? The question then becomes, “What are we?”. The simplest answer in my mind is “network engineers”. Even with automation and programmability in some of our day-to-day work, and on the horizon for many others, we need to understand that automation is going to be another component of network engineering, alongside L3 routing protocols, L2 protocols, network services, etc. The best thing we can do to learn about coding and automation is to understand the best-practice techniques and processes that already exist, which ultimately revolve around software development.

DEVCOR First Impressions

I began my first full week of studying for the DEVCOR exam, mostly using CBT Nuggets as my study resource, and have been quite impressed with the content. They keep it high-level so that you understand each topic and dive into practical examples when needed. So far, I’ve covered all the topics under the first major topic: 1.0 Software Development and Design. These topics help lay a good foundation for how applications are developed and interact at a high-level. It makes you think about the different apps/websites you interact with daily. I began paying closer attention to website URLs and figuring out how the presented webpage was queried on the backend. Trust me, I don’t spend 20-30 minutes looking at the web developer tools in the browser when I’m online shopping, but I do consciously think about how my searches are queried and account information is pulled on the backend.

Going back to my background story, I began understanding why, as network engineers, we would want to begin learning best practices of software development. If we ever want a shot at creating a network automation tool or introducing a new NetDevOps process in the enterprise, we need to learn and apply software development practices.

Future Goals

As you may be able to tell from this post, I’m passionate and persistent about network automation. There are many things I don’t know, but I always try to find the time to read about and learn each new topic. Most recently, I began reading about Kubernetes. Why, you might ask? Microservices (using containers) is a type of application architecture, and Kubernetes is one of the most popular container orchestration tools out there. Have I deployed containers? Yes. Have I tried deploying Kubernetes? Nope. It’s not always about mastering a new topic – it’s about understanding how it applies to your ultimate goal. Will I learn and understand Kubernetes, or more specifically container orchestration, in the future? Yes. Why? Because I plan to be part of a team that builds an application one day, potentially using a microservices architecture. You should always assess each new technology or exam and figure out how it affects YOUR career goals – don’t just accept it because it’s the industry trend.

For my career goals, I plan to obtain my DevNet Professional certification in the first half of 2021, with the ultimate goal of becoming one of the first DevNet Experts (pending the release of the certification).


I hope you continue following along on my DevNet Pro (and later, Expert) journey. This upcoming week, I’ll be diving into the next major topic on the blueprint: 2.0 Using APIs. I will continue tracking my progress and documenting my experience here on my blog. Hit me up on Twitter (@devnetdan) if you have any questions or feedback!

Thanks for reading!

For more context about NetDevOps and the Cisco Live presentation I linked in this post, please check out Cisco DevNet’s NetDevOps Live video series here.

CI/CD Pipelines: Part 2

Welcome to Part 2 of my CI/CD pipeline series! In this post, I will be going over the improvements I’ve made since Part 1, some issues I ran into along the way, and the overall goal(s) of this project.

Let’s start with the project goals. One of my main goals is to bring clarity to CI/CD pipelines as they pertain to network automation, and to document the wins/losses along the way. Diving into NetDevOps and Infrastructure-as-Code (IaC) is pretty refreshing, with a lot of new tools and processes to learn along the way. However, we need to understand why we would do something differently, and go through the experience ourselves, before touting it. I hope to describe my experiences (good and bad) through this series, and also why you and your team might consider adopting these new workflows.

The more technical goal of this project is to build a tool that can be used as a proof-of-concept (PoC) or demo for a network team that is first being introduced to Infrastructure-as-Code. With that being said, I will continue refactoring the logic and code as I learn more about CI/CD and Ansible. From this post forward, I will also provide more technical details about what’s being updated in the project.

So What’s New?

As mentioned in Part 1, I mostly followed a tutorial to build the initial pipeline. In Part 2, I expanded on that by adding the following:

  • Ansible host and group vars
  • Ansible roles

Along with these additions, I also ran into some issues, but I’ll get into those later in the post. First, let’s jump into the details of what’s changed, specifically with Ansible.

Diving into Ansible

Before going into the specific changes, here’s a current snapshot of my repository:

I plan on reviewing and making my repo public very soon (stay tuned to my Twitter!), but I figured I would include my current repo structure and use it as a reference throughout this post.

Host and Group Vars

One of the first improvements I made was to break out the variables for each host and group of hosts. This helped modularize my playbooks and allows for future scalability. Take the below host_vars file and related task as an example:

Host_vars file for R1
Task: Configure Loopback Interfaces

In the above task, a Loopback interface is configured. However, to make this task scalable, we use inherited variables from each device’s host_vars file. This is super important because we now have one playbook that can configure different Loopback interfaces, with different IP addresses, on every device in our inventory. For example, we could add another item to the ‘loopbacks’ list (i.e. lo1 in the example host_vars file) and just re-run the task to configure it. This becomes crucial when you have hundreds of devices with potentially different requirements based on design, geography, etc. With IaC, the host_vars file becomes a source of truth for what’s configured on the device (assuming no one goes rogue). You can take these same concepts and apply them to group_vars. For example, a playbook can inherit the same SNMP server settings or syslog configuration for a specific group of devices based on physical geography, device type, device vendor, or any other criteria. Now, let’s move on to roles!
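In case the screenshots above don’t render for you, here’s a rough sketch of the idea. The variable names follow the ‘loopbacks’ list described above, but the exact values, interface naming, and the use of the cisco.ios ios_config module are my own assumptions, not the literal code from this repo:

```yaml
# host_vars/R1.yml (hypothetical values -- each device gets its own file):
#
#   loopbacks:
#     lo0: { ip: 10.0.0.1, mask: 255.255.255.255 }
#     lo1: { ip: 10.0.1.1, mask: 255.255.255.255 }

# Task: one loop handles however many loopbacks a device defines.
- name: Configure Loopback interfaces
  cisco.ios.ios_config:
    parents:
      - "interface Loopback{{ item.key | regex_replace('^lo') }}"
    lines:
      - "ip address {{ item.value.ip }} {{ item.value.mask }}"
  loop: "{{ loopbacks | dict2items }}"
```

The task itself never changes; adding a loopback is purely a data change in host_vars, which is what makes the file a source of truth.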


This is my first time using the concept of roles in Ansible, so please do not take any of my instructions or code as best practice. I’m still at the beginning of my journey into the advanced features of Ansible. As I read through the docs, I noticed that many of the examples describe different services as roles. For example, the Apache service could be considered part of a ‘webserver’ role. Since I’m just starting out, I figured I would keep it as simple as possible and create roles that revolve around pieces of a network configuration. In my example, I created the following roles: ‘device_mgmt’ and ‘eigrp’. The ‘device_mgmt’ role consists of features around device management: Loopback interfaces, SNMP settings, NetFlow, etc. The ‘eigrp’ role is pretty self-explanatory.

Ultimately, I want to create roles for specific device types in a network (WAN routers, L3 switches, L2 switches, etc.) and shift my current roles (device_mgmt and eigrp) into dependencies of the device type roles. For now, I will be using my current roles in playbooks to configure specific device types. Here’s an example:
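Since the playbook itself is shown as a screenshot, here’s roughly what it amounts to. The role names come from this post; the group name and other details are assumptions:

```yaml
---
# wan_routers.yml -- apply each role's tasks to the 'routers' group
- name: Configure WAN routers
  hosts: routers
  gather_facts: false
  roles:
    - device_mgmt
    - eigrp
```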


The above playbook is designed to configure a device (or group of devices) as a WAN router using the roles shown. As you can see, roles make it easy to digest and understand the different configuration items being applied. I’m running this playbook against a generic group called ‘routers’, but you can imagine the different ways devices could be grouped in your inventory file (i.e. by physical geography, device vendor, role in the network, etc.).

Other Misc. Changes/Issues

Besides applying new Ansible concepts, I did make some other small tweaks and changes related to the CI/CD pipeline. I had to build a new VM to use as a GitLab Runner. My existing devbox VM was having issues starting the GitLab Runner service on startup, which really held me up for a couple hours. Ultimately, I found that there was an open bug in GitLab for the service not starting on Ubuntu 20.04. As a result, I built a new VM using an Ubuntu 18.04 image, which took about 20 secs, since I’m running my entire infrastructure in GCP (perks of the cloud!).

Since I had to build a new runner, I decided to take a look at my .gitlab-ci.yml file and added a new ‘before_script’ section that activates a Python virtual environment before every job is run. This helps eliminate potential issues that could occur if I were using the system Python interpreter, and also helps control the packages installed in the Python environment. I also added more linting tasks to lint all the YAML files in the host_vars and group_vars directories.
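For those following along, a sketch of what those two changes look like in .gitlab-ci.yml is below. The virtual environment path, stage name, and use of yamllint are my assumptions; your pipeline file will differ:

```yaml
# .gitlab-ci.yml (excerpt) -- before_script runs ahead of every job,
# so each job uses the runner's virtual environment, not system Python.
before_script:
  - source ~/ansible-venv/bin/activate

stages:
  - lint

lint_yaml:
  stage: lint
  script:
    - yamllint host_vars/ group_vars/
```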

The last change I made was to add an Ansible configuration file (ansible.cfg) to the repo itself. This allows Ansible to use the same configuration when running through the jobs in the CI pipeline as it does locally.
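For reference, a minimal ansible.cfg along these lines keeps local runs and pipeline jobs consistent (the values here are illustrative, not the exact file from this repo):

```ini
; ansible.cfg -- checked into the repo so the CI runner picks it up
[defaults]
inventory = ./inventory
roles_path = ./roles
host_key_checking = False
```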


This wraps up Part 2 of my CI/CD Pipelines series. In Part 3, we will look at adding additional configuration (i.e. NetFlow, Usernames, etc.), expanding on our existing roles, and adding some more controls to the CI/CD process, including requiring manual intervention before proceeding to the ‘deploy’ stage. Thanks for reading and stay tuned!