pyATS and Genie: Part 4

Introduction

Welcome to the fourth and final part of this blog series! In this part, we are going to wrap up pyATS and Genie by looking at some more useful modules within each of these libraries and taking a quick look at an open-source web user interface (UI) for running our pyATS testcases, because who doesn’t like a nice UI?! I don’t want to take up any more time in this intro, so let’s dive in!

Useful Modules

In the first three parts of this series, I went over the main functionality and key features of both pyATS and Genie. In this post, I want to focus on the “other” modules that are included, but not always talked about. To be more specific, we will be going over the pyATS Clean, Blitz, and Health Check modules. Don’t get me wrong – there are plenty of other features/topics that I would love to cover (Robot Framework, Metaparsers, CLI Auto-Parser, etc.), but I wanted to focus on the modules that I felt would provide the most value to you.

pyATS Clean

The pyATS Clean framework allows you to initialize a device and bring it up with a specific configuration, OS version, etc. Beyond initializing a device, Clean also allows you to remove unwanted configuration from a device and return it to an operational state. This can be helpful when a device is acting up and you aren’t sure what’s going on with it. Instead of parsing through the configuration or checking the operational state via show commands, you can wipe it clean and start over with a given base configuration. Remember, it’s not giving up if you get the device back to an operational state in a shorter amount of time. There will always be (well, there should be :/ ) logs for any post-mortem discussions.

Cleaning a device

So we know what pyATS Clean is, but how do we actually use it? Lucky for us, the pyATS devs made it super easy, and we’re able to use a YAML file to declare our “cleaning”. One important note is that pyATS Clean YAML files are broken down into Clean Stages. These stages are specific to OS and platform types (e.g. OS: iosxe, platform: cat9k). Some stages are common to all supported OS types (labeled as ‘COM’ in the docs), but others are OS-specific. For example, as of the 21.7 release, we only have 5 Clean Stages available for Cat 9k IOS-XE devices. I wanted to point out that they are only for Cat 9k devices because, remember, stages are specific to the OS AND platform type. Here’s a list of the supported OS/platform types and the available Clean Stages for Cat 9k IOS-XE devices (as of v21.7):

Supported OS and Platforms
Clean Stages for Cat 9k IOS-XE devices

You can see that, as of now (v21.7), there aren’t many supported OS and Platform types. However, the best part of open-source is that anyone can contribute and help fill the voids! I do find that, even though there aren’t that many Clean Stages for IOS-XE devices, the ones that are available cover most use cases when initializing a device. Here’s a small example of a Clean YAML file from the pyATS docs:

cleaners:
    PyatsDeviceClean:
        module: genie.libs.clean
        devices: [PE1]

devices:
    PE1:

        images:
        - /path/to/image.bin

        connect:

        copy_to_linux:
            destination:
                directory: /tftp-server
                hostname: 127.0.0.1
            unique_number: 12345

        copy_to_device:
            origin:
                hostname: 127.0.0.1
            destination:
                directory: 'bootflash:'

        change_boot_variable:

        reload:

        verify_running_image:

        order:
        - connect
        - copy_to_linux
        - copy_to_device
        - change_boot_variable
        - reload
        - verify_running_image

As you can see, the YAML file is very readable. It doesn’t take a software developer to understand what’s going on here. The easiest way to follow the flow is by looking at the list under the order key. The order key must always be defined so that no assumptions are made during execution. At a high level, we are performing the following steps:

  • Transfer IOS image (.bin file) from the local machine to a Linux TFTP server (in this case, it’s a directory on the local machine)
  • Copy the IOS image from the TFTP server to the network device
  • Set the boot variables (defaults to 0x2102)
  • Reload the device
  • Verify the new image is the running image

Any stage defined with no values after the colon assumes the default values for each of its arguments. To view the default arguments for each stage, check out the Clean Stages documentation here.
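To make the idea concrete, here’s a tiny, stdlib-only sketch of how a clean engine might walk the order list, filling in defaults for any stage declared with an empty value. The stage names mirror the YAML above, but the defaults and the runner itself are purely illustrative – the real logic lives in genie.libs.clean:

```python
# Illustrative sketch only: stage defaults and runner are hypothetical,
# not the actual genie.libs.clean implementation.

STAGE_DEFAULTS = {
    "connect": {"timeout": 60},
    "reload": {"reload_timeout": 800},
}

def run_clean(device_config: dict, log: list) -> None:
    """Execute each stage named in 'order', merging defaults with overrides."""
    for stage in device_config["order"]:
        # An empty value in the YAML (None here) means "use the defaults"
        overrides = device_config.get(stage) or {}
        args = {**STAGE_DEFAULTS.get(stage, {}), **overrides}
        log.append((stage, args))

log = []
run_clean(
    {
        "connect": None,                       # empty -> all defaults
        "reload": {"reload_timeout": 1200},    # overrides one default
        "order": ["connect", "reload"],
    },
    log,
)
# log[0] == ("connect", {"timeout": 60})
# log[1] == ("reload", {"reload_timeout": 1200})
```

Notice how the order list is the single source of truth for what runs and when, which is exactly why Clean requires you to define it explicitly.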

Power Cyclers

Before we jump to the next section, I wanted to mention a small (but very powerful!) feature included with Clean. Besides cleaning network devices, there is also a list of supported “Power Cyclers”. Power Cyclers are essentially your common OOB, PDU, and UPS devices. Currently, Raritan, APC, Dualcomm, Cyberswitching, and ESXi (for VMs) are supported. I believe this feature shows the amount of detail the pyATS team takes into account when developing modules for this library. Beyond just rolling back the configuration, you can add additional instructions to reboot a device if it gets stuck in a hung state. This is HUGE! You can think of this as your programmatic upgrade from the ‘reload in X’ command in IOS. It’s a major upgrade because of its adaptability: it only activates if it detects that a device is unresponsive. If you’d like to add these Power Cycler devices to your testbed and Clean YAML file, check out the docs here.
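To picture the recovery behavior, here’s a simplified, hypothetical sketch: probe the device a few times, and only fall back to cycling its outlet if it stays unresponsive. The probe and power-cycle callables below are stand-ins, not the actual pyATS recovery API:

```python
import time

def recover(is_reachable, power_cycle, attempts=3, wait=0.01):
    """Probe the device; power-cycle only if it stays unresponsive."""
    for _ in range(attempts):
        if is_reachable():
            return "recovered"
        time.sleep(wait)
    # Device never answered: cycle the outlet as a last resort
    power_cycle()
    return "power-cycled"

# Simulate a hung device that only answers after a power cycle
state = {"hung": True}
result = recover(
    is_reachable=lambda: not state["hung"],
    power_cycle=lambda: state.update(hung=False),
)
# result == "power-cycled"; a follow-up probe would now succeed
```

The key design point is the same one Clean makes: the power cycler is a fallback, not the first move, so it only fires when the device is truly stuck.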

Alright, well now that we’ve covered pyATS Clean, let’s take a look at Blitz!

pyATS Blitz

I like to relate Blitz to an Ansible Playbook. Blitz provides you the ability to create pyATS testscripts with minimal programming experience. Like Ansible Playbooks, Blitz is written as a YAML file. Within the Blitz YAML file, you can perform all the same actions you can perform in pyATS/Genie Python code:

  • Configure a device
  • Parse output
  • Learn a device ‘feature’
  • Use device APIs
  • etc…

The best part about Blitz is that it lowers the barrier to entry, so you can share these YAML files with the other members of your team that may despise programming. Here’s a quick example of a Blitz file from the pyATS docs:

# Name of the testcase
TestBgpShutdown:
    # Location of the blitz trigger - always this same location for all blitz trigger
    source:
      pkg: genie.libs.sdk
      class: triggers.blitz.blitz.Blitz

    # Devices to run on - Default is uut
    devices: ['uut']

    # Field containing all the Testcase sections
    test_sections:

      # Section name - Can be any name, it will show as the first section of
      # the testcase
        - apply_configuration:
            # List of actions
            - configure:
                device: R3_nx
                command: |
                  router bgp 65000
                  shutdown
            - sleep:
                sleep_time: 5

        # Second section name
        - verify_configuration:
            # Action #1
            # Send show command to the device and verify if part
            # of a string is in the output or not
            - execute:
                device: R3_nx
                command: show bgp process vrf all
                include:
                    # Verify Shutdown is within the show run output
                  - 'Shutdown'
                exclude:
                    # Verify Running is not within the show run output
                  - 'Running'
            # Action #2
            # Send show command and use our available parsers to make sure
            # the bgp protocol state is shutdown
            - parse:
                device: R3_nx
                # All action supports banner field to add to the log
                banner: Verify bgp process is shutdown
                command: show bgp process vrf all
                include:
                  - get_values('shutdown')
                exclude:
                  - not_contains('running')
        - Revert_configuration:
            # Configure action, which accepts command as an argument
            - configure:
                device: R3_nx
                banner: Un-Shutting down bgp 65000
                command: |
                  router bgp 65000
                  no shutdown
        - verify_revert:
            # Send show command and verify if part of a string is in the output or not
            - execute:
                device: R3_nx
                command: show bgp process vrf all
                include:
                    # Verify Running is within the show run output
                    - 'Running'
                exclude:
                    # Verify Shutdown is not within the show run output
                    - 'Shutdown'
            # Send show command and use our available parsers to make sure
            # it is the bgp protocol state which is running
            - parse:
                device: R3_nx
                command: show bgp process vrf all

Much of the code is commented and self-descriptive, but essentially the testcase performs a ‘shut’ and ‘no shut’ on the BGP process of the device, with specific BGP verification commands built into the script. This may seem like a pointless use case, but it’s a good example of the different actions used to complete a testcase. In the example, you see keys such as configure, execute, sleep, and parse. All of these are considered Blitz actions. You can find the list of available Blitz actions here. These actions have a direct correlation with the methods we’ve used when writing Python scripts with pyATS/Genie in previous parts of this blog series.
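To visualize that correlation, here’s a toy dispatcher that maps Blitz-style action keys (configure, execute, etc.) onto device methods of the same name. The FakeDevice and dispatch loop are my own illustration – the real dispatch happens inside the Blitz trigger class:

```python
class FakeDevice:
    """Stand-in for a pyATS device; records what each 'action' asked it to do."""
    def __init__(self):
        self.calls = []

    def configure(self, command):
        self.calls.append(("configure", command))

    def execute(self, command):
        self.calls.append(("execute", command))
        return "BGP Protocol State : Shutdown"

def run_actions(device, actions):
    """Dispatch each Blitz-style action dict onto the matching device method."""
    results = []
    for action in actions:
        # Each action is a single-key dict: {action_name: {kwargs}}
        (name, kwargs), = action.items()
        results.append(getattr(device, name)(kwargs["command"]))
    return results

dev = FakeDevice()
run_actions(dev, [
    {"configure": {"command": "router bgp 65000\nshutdown"}},
    {"execute": {"command": "show bgp process vrf all"}},
])
# dev.calls now holds the configure and execute invocations, in order
```

In other words, each YAML action is just a declarative front door to a device method you could call yourself in Python.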

There are many more details surrounding Blitz such as saving output, assigning variables, replying to prompt dialogue, and much more. However, I need to stay the course and provide high-level overviews of these features and why you would want to experiment with them. It’s up to you to dive deeper into the documentation and figure out whether it fits your use case. I’ll have links to all the different features mentioned at the end of this post. Now that we’ve covered Blitz, let’s take a look at a feature that builds on Blitz and is what I consider the nurse of testscripts: pyATS Health Check.

pyATS Health Check

I call this the nurse of pyATS testscripts because it helps keep a close eye on the one thing we care about most: our test device(s). The Health Check feature collects and monitors the state of our test device(s) before and after each test section. There’s also an option to have it continuously monitor our device(s) throughout the entire testscript by using a background process. It can collect metrics such as CPU/memory health, tracebacks, core dumps, and logs. This may sound like, “Oh great, another way to monitor our devices…”. Yes and no… Yes, it monitors your devices while testing is occurring, but more importantly, it will collect all the appropriate data for you automatically (if configured properly) in case of device failure. I don’t know about you, but I’ve been in situations where I wish I could have collected the logs or core dump for TAC before a router or switch rebooted itself. How many hours/days of troubleshooting could have been saved if we had just had the logs/dumps to tell us what went wrong? Here’s another example from the pyATS docs of how you would write a Health Check in YAML:

pyats_health_processors:
  source:
    pkg: genie.libs.health
    class: health.Health
  test_sections:
    - cpu:
        - api:
            device: ASR1K-1 # <<< changed from `uut`
            function: health_cpu
            arguments:
              processes: ['BGP.*']
            include:
              - sum_value_operator('value', '<', 90)
    - memory:
        - api:
            device: ASR1K-1 # <<< changed from `uut`
            function: health_memory
            arguments:
              processes: ['BGP.*', 'OSPF.*']
            include:
              - sum_value_operator('value', '<', 90)

I love using the YAML format instead of the pyATS CLI commands because it’s much easier to read. Taking a look at the above example, this health check runs after every testcase/section. It looks at the BGP-specific processes running on the CPU and in memory. If the BGP CPU processes reach 90% or more, the health check will report a failure. In addition to the BGP processes, the memory check also looks for OSPF-specific processes. If the sum of the BGP and OSPF processes reaches 90% memory utilization, a health check failure will be reported. I think this is a very underrated feature that is often taken for granted until there is a failure on the device.
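If you’re curious what an include expression like sum_value_operator('value', '<', 90) boils down to, it’s essentially: collect the values of the matched processes, sum them, and compare against the threshold. Here’s a stdlib-only approximation (not the real genie.libs.health operator):

```python
def sum_value_ok(samples, threshold=90):
    """Return True when the summed utilization of matched processes stays under threshold."""
    total = sum(s["value"] for s in samples)
    return total < threshold

# Hypothetical per-process CPU readings matched by the 'BGP.*' pattern
bgp_cpu = [
    {"process": "BGP I/O", "value": 2.5},
    {"process": "BGP Router", "value": 4.0},
]

sum_value_ok(bgp_cpu)      # True  -> the health check passes
sum_value_ok(bgp_cpu, 6)   # False -> the health check reports a failure
```

The per-process readings above are made up for illustration; the real check pulls them from the device via the health_cpu and health_memory APIs.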

Wow, can you believe all the features we just reviewed are included in the pyATS library? I think this shows the depth of pyATS and how it’s starting to feel more like a platform rather than just a library, but I digress… Now, let’s shift our attention to a project that I’ve been excited about for a couple years: XPRESSO.

Everyone wants a GUI

Up to this point, we’ve mostly looked at pyATS and Genie as command-line tools and libraries that we’ve imported into our Python scripts. What if we could build and run testscripts using a web UI? How many more engineers could we attract to this great tool if we provided a web dashboard that they could interact with? Meet XPRESSO…

I first heard about XPRESSO at Cisco Live US 2019 in San Diego, CA, where I attended a session about pyATS. XPRESSO was brought up towards the end of the presentation and described as a tool that Cisco had been using internally for quite a while, and that the pyATS team was making progress towards releasing to the public soon. Fast-forward to the end of 2020, when the pyATS team released XPRESSO to the public.

XPRESSO is a web dashboard that provides central management for creating jobs and job schedules, reserving resources, testbed queueing, test result comparison, verification testing, and baselining test results. Those are some of the main features highlighted on its launch page on DevNet.

I don’t want to go into the details of setting up XPRESSO, but you have a couple options to set it up and begin testing it out on your own:

  1. Building it yourself (via Docker)
  2. DevNet Sandbox

Setting up XPRESSO on your own

The pyATS team has provided good documentation around the requirements for XPRESSO and how to set it up using Docker. At a high level, it’s recommended to have a 16-core CPU, 64GB of RAM, and at least 50GB of storage. I will say, I’ve gotten XPRESSO up and running on my home PC, which has a 4-core CPU and 16GB of RAM, but it’s VERYYYY slow. I would not recommend it. On top of that, I was limited in the tasks I could perform before things stopped working. I’ve read of others succeeding with half of the recommended requirements (8-core CPU and 32GB RAM), but don’t get frustrated if things go wrong. I’ll include links to the XPRESSO documentation at the end of this post. Needless to say, I would recommend going with the second option if you just want to take XPRESSO for a test drive.

Using the XPRESSO DevNet sandbox

The DevNet sandboxes are the first place I look when I want to try out a new Cisco technology or platform. If you’re like me, it’s always exciting when you get a chance to play with the newest tech, but it can be a pain to set up. Many times, I call it a day after fighting for hours to get everything set up. To avoid that, I look to the sandboxes. For XPRESSO, I would highly recommend using the sandbox so that you avoid any setup headaches and focus on your main goal: experimenting with XPRESSO. The XPRESSO sandbox requires you to reserve it before gaining access. Currently, there is not an ‘always-on’ instance. Normally, you can reserve it on the spot. To reserve it, go to the DevNet Networking Sandboxes here and log in to your account. Once logged in, search for ‘xpresso’. Here’s what you should see after you search:

Searching for the XPRESSO sandbox

Once you click ‘Reserve’, it will present the available reservation times. Like I mentioned before, most of the time you should be able to reserve it ASAP and have the sandbox for 8 hours. However, I urge you to adjust the reservation time if you don’t plan on using it for the entire duration. These resources are held until your reservation expires. If you’re unsure how long you’ll need it, reserve it for the full 8 hours, but remember to manually end the lab before you sign off. This will prompt the lab to be torn down and free up its resources. Your fellow network engineers thank you!

Conclusion

We have reached the end of the pyATS and Genie series. I’ve learned a lot while writing this series for you and I hope you learned just as much (or hopefully more)! I hope you’ve been able to see the value and benefits of using these libraries. They definitely have a lot of depth and a learning curve compared to other network automation libraries, but my goal for this series was to lower that barrier to entry and provide you enough detail to decide whether these libraries will meet your network testing needs.

Thanks for reading this series and I hope you stay tuned for future posts! If you have any questions, feedback, or just want to chat, please feel free to hit me up on Twitter (@devnetdan).

References

pyATS Clean: https://pubhub.devnetcloud.com/media/genie-docs/docs/clean/index.html
pyATS Blitz: https://pubhub.devnetcloud.com/media/genie-docs/docs/blitz/index.html
pyATS Health Check: https://pubhub.devnetcloud.com/media/genie-docs/docs/health/index.html
XPRESSO Launch Page: https://developer.cisco.com/docs/xpresso/#!overview
XPRESSO Github: https://github.com/CiscoTestAutomation/xpresso
XPRESSO Requirements: https://developer.cisco.com/docs/xpresso/#!operational-requirements-constraints/operational-requirements-and-constraints

pyATS and Genie: Part 3

Introduction

If you’ve been following along in this series, you’ve seen how we can programmatically interact and gather information about our network using pyATS and Genie. That’s great and all, but wouldn’t it be better if we could begin providing true value to ourselves and our team?

As mentioned in Part 2, pyATS is essentially the foundation of our testing environment. It provides a structure for how our devices and topology are defined. In this part, we are going to dive deeper into pyATS and how we can begin writing testcases and testscripts. pyATS provides a standard automation testing framework (or harness, as they refer to it) called AEtest. AEtest, or Automation Easy Testing, comes with defined building blocks for structuring your testscripts. Now before we get deeper, you may be asking, what is a testcase or testscript? We create “testscripts” now for running checks during a change window, such as pinging a device before and after a change to ensure connectivity. No, no, no… a testscript can be much more than that. What if we could check for a specific number of ‘established’ BGP peers AND check the routing table for a specific route before and after a change? If the results don’t meet our expectations, we can roll the changes back. Oh yeah, we can also throw in your ping test at the end too.

On top of it all, AEtest is pythonic in nature with an object-oriented design, which allows you to create relationships amongst objects, add more logic to your tests, and even alter whether a test is run depending on another test’s results. That’s pretty insane! Enough with the intro, let’s dive into building testscripts with AEtest in pyATS.

So what is a Testscript?

AEtest provides a defined structure on how to build your testscripts. This confused me at first, as I thought this structure was a mere suggestion. However, after playing around with it, I quickly realized how important each section is to the overall testscript. Let’s take a look at the different sections of a testscript.

As I’ve said in previous posts, I’m a visual person. I like creating a picture or diagram to understand how something works. The above picture displays each major section of a testscript and why each one is important. Let’s quickly run through each one:

Common Setup

The Common Setup section of an AEtest testscript provides the basic configuration and connectivity to all test devices. For example, you would use this section to connect to each device in your testbed. If you’ve ever worked with a virtual network topology, you’ll know that you have to enable the interfaces when you bring up a virtual device (CSR1000v, Nexus 9Kv, etc.) by performing a ‘no shut’ on each interface. This would be the appropriate section to do that. Just think of this section as the preparation section.

Testcases

The Testcases section is where you write out your individual tests. Tests can include making a configuration change on a device and parsing out a ‘show’ command to confirm the result meets your expectation, or simply running a ping test. The best part about testcases is that you’re in control and can define the specific tests to meet your requirements. pyATS provides the framework, and Genie provides the set of tools to gather and parse the information that’s important to you.

Common Cleanup

The Common Cleanup section is the last section run in a testscript. This section is focused on cleaning up any configuration changes or other changes you made during testing. The goal is to reset the environment to how it was before testing. You can think of this section as the custodian of each testscript… CLEAN UP ON TESTSCRIPT ONE!

There are many more features and levels to each section, but I’m trying to keep it as simple as possible to avoid overwhelming you. However, I’ll include links to the documentation at the end of this post so that you can read more about each section.

Writing our own Testscript

So now that we’ve established testscripts are a little more than running a few ping tests and manually recording the results in Notepad (I know we all run more than a few ping tests for verification, but I’m trying to keep it lighthearted!), let’s jump into writing our first testscript using AEtest.

In our first testscript, we are going to write a testcase to check the software version of an IOS-XE device and confirm whether it meets our defined standard.

from pyats import aetest
from genie.utils import Dq
import logging

logger = logging.getLogger(__name__)

class CommonSetup(aetest.CommonSetup):
    
    @aetest.subsection
    def connect_to_devices(self, steps, testbed, host):
        device = testbed.devices[str(host)]
        
        # Add device as testscript param
        self.parent.parameters.update(device = device)
        
        with steps.start(f'Connect to {device.name}'):
            # Connect to the device
            device.connect()

### TESTCASE SECTION ###
class FirstTestcase(aetest.Testcase):
    
    @aetest.setup
    def setup(self, device, testbed):
        # Confirm we are connected to the device before running commands
        if device.connected:
            self.passed('Successfully connected to the device')
        else:
            self.failed('Could not connect to the device')
    
    @aetest.test
    def verify_ios_version(self, steps, device):
        with steps.start('Checking the IOS version') as step:
            try:
                self.version = device.parse('show version')
                current_dev_ios = Dq(self.version).get_values('xe_version')
                # Checks to see whether the IOS-XE version meets the set standard version of 17.3.3
                ios_check = Dq(self.version).contains('17.03.03').get_values('xe_version')
                # A populated list is returned if the version matched
                if ios_check:
                    step.passed('Device is running the proper IOS version.')
                else:
                    # An empty list is returned if the version didn't match
                    step.failed(f'Device is running {current_dev_ios} instead of the defined standard version.')
            except Exception as e:
                # If an exception is caught, the testcase fails and prints the error
                self.failed(f'Could not parse data due to the following error: {str(e)}')
                
class CommonCleanup(aetest.CommonCleanup):
    
    @aetest.subsection
    def disconnect_from_devices(self, steps, device):
        with steps.start(f'Disconnecting from {device.name}'):
            # Disconnect from the device
            device.disconnect()

There’s a lot to dissect in this testscript. First, I want to point out that I highlighted the major “container classes” that I summarized earlier. These classes are referred to as “containers” because they contain other test sections. This concept took me a while to understand, but let’s take a look at this diagram found in the pyATS docs:

                            +--------------+
                            |  TestScript  |
                            +-------+------+
                                    |
       +----------------------------+---------------------------+
       |                            |                           |
+------+------+            +--------+-------+           +-------+-------+
| CommonSetup |            |   Testcases    |           | CommonCleanup |
+------+------+            +--------+-------+           +-------+-------+
       |                            |                           |
+------+------+                     |                    +------+------+
| subsections |          +----------+-----------+        | subsections |
+-------------+          |          |           |        +-------------+
                     +---+---+  +---+---+  +----+----+
                     | setup |  | tests |  | cleanup |
                     +-------+  +-------+  +---------+

You can see that each testscript has a hierarchy to it. The Common Setup and Common Cleanup sections can be broken down into subsections, and Testcases can be broken down even further into setup, test, and cleanup sections. There is even one more level that’s not depicted here called steps. As mentioned previously, AEtest testscripts are pythonic in nature with an object-oriented design, so each of these sections can inherit from one another. For example, the TestScript object is the parent of the Common Setup, Testcases, and Common Cleanup sections. Any parameters defined at the TestScript level can be accessed by each of the child sections. There’s a LOT more to how these objects can relate to one another, but I’ll save that for another day. My goal is to provide just enough of an overview of the relationships found in testscripts for you to understand further explanations of our first testscript.
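If you’re familiar with Python’s collections.ChainMap, the parameter inheritance works a lot like it: a section looks up a parameter locally first, then falls back to its parent. This is only a rough analogy (the real aetest parameter resolution is more involved), but it captures the idea:

```python
from collections import ChainMap

# Testscript-level parameters (e.g. what CommonSetup stores via
# self.parent.parameters.update(...))
testscript_params = {"device": "ASR1K-1", "timeout": 30}

# A testcase layers its own parameters on top, shadowing the parent
# wherever the names collide
testcase_params = ChainMap({"timeout": 120}, testscript_params)

testcase_params["device"]   # 'ASR1K-1' -- inherited from the testscript level
testcase_params["timeout"]  # 120       -- the testcase's own value wins
```

This is why updating self.parent.parameters in the Common Setup makes the device object visible to every section that runs afterward.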

Now that we have a basic understanding of how a testscript is structured and its relationships, let’s start reviewing our first testscript. In our Common Setup, we connect to a specific host that is defined in our testbed. For reference, this testscript is being used in a web app that allows a user to input the hostname and IP address into a form – more to come on that at a later time :). So we have a host, testbed, and steps that are passed in as parameters to this specific section. The device parameter is added to the testscript parameters, so it can be used anywhere in this testscript. After we define the device, I have a subsection for connecting to the device using the device.connect() function – pretty straightforward. Remember, if you are testing with multiple hosts and need to connect to all of them, this would be the section to do that. This section would also be used to apply general configuration across all of the test devices.

After the Common Setup section, we create our first Testcase section. In the Testcase section, we define smaller sections for setup and the actual test. I decided not to include a cleanup section in the Testcase, as the Common Cleanup section takes care of everything we need for this testscript. In the setup section, we ensure that we’re connected to the device (the device.connected property returns True/False). Once we confirm we are connected to the device, we issue the show version command to the device and parse the output using Genie. After parsing the output, we check to see whether the device contains a specific IOS-XE version using the Dq library, which we reviewed in Part 2 of this series. If you haven’t already, please check out Part 2 or read the Genie docs to understand the fascinating functionality of the Dq library – it will save you time and many lines of code! If there’s a match to the specific IOS-XE version we are expecting, a Python list will be returned with the matching string as the first item in the list. For example, if there was a match in our test, the returned value would look something like this: ['17.03.03']. Given that understanding, we check to see whether a populated list is returned. If so, the step passes and the results are rolled up to the Testcase.
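If you’re wondering what Dq(...).get_values('xe_version') is actually doing under the hood, it’s essentially a recursive search through the nested parser output. Here’s a simplified stand-in (not the real genie.utils.Dq implementation) to demystify it:

```python
def get_values(data, key, found=None):
    """Recursively collect every value stored under `key` in nested dicts/lists."""
    if found is None:
        found = []
    if isinstance(data, dict):
        for k, v in data.items():
            if k == key:
                found.append(v)
            get_values(v, key, found)      # keep searching deeper levels
    elif isinstance(data, list):
        for item in data:
            get_values(item, key, found)
    return found

# Shape loosely based on 'show version' parser output (illustrative only)
parsed = {"version": {"xe_version": "17.03.03", "platform": "Catalyst L3 Switch"}}

get_values(parsed, "xe_version")  # ['17.03.03'] -> truthy, so the step passes
```

A populated list means the key was found somewhere in the structure; an empty list means it wasn’t, which is exactly the truthiness check our testcase relies on.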

The last section is the Common Cleanup section. In this section, we simply disconnect from the device. If any configuration or environmental changes were applied during testing, this would be the section to revert or reset those settings. The goal of this section is to reset the environment to how it looked before testing.

How do I run my Testscript?

So we have a testscript and (for the most part) understand each section of it, but how do we run the actual tests? In the pyATS docs, you’ll find there are two official ways to run an AEtest testscript: Standalone or Easypy execution. I know, you may be saying, “Greattttttt… now I have to pick and choose a method to run my tests”. Well, pyATS provides some good descriptions and weighs the pros and cons of each method.

Standalone

The Standalone method is the more flexible of the two and the easier one to comprehend when first starting out. The docs even suggest this method for “rapid, lightweight script development…”. However, there are some tradeoffs for the ease of use. You are limited to executing a single script, all logging/results are printed to standard output (stdout), and there is no archiving or official report generation. This truly is your grab n’ go testing method. If you’d like to execute your testscript with this method, you simply need to invoke aetest.main() at the end of your testscript file. Here’s a quick example from the pyATS docs:

# Example
# -------
#
#   enabling standalone execution

import logging
from pyats import aetest

# your testscript sections, testscases & etc
# ...
#

# add the following as the absolute last block in your testscript
if __name__ == '__main__':

    # control the environment
    # eg, change some log levels for debugging
    logging.getLogger(__name__).setLevel(logging.DEBUG)
    logging.getLogger('pyats.aetest').setLevel(logging.DEBUG)

    # aetest.main() api starts the testscript execution.
    # defaults to aetest.main(testable = '__main__')
    aetest.main()

You notice on the last line that all you need to do is invoke aetest.main(), since your testscript and all the testcases are defined in the same file. You can also call on a testscript from another Python file using aetest.main(). Here’s a quick example of how to do that:

aetest.main(testable='./testscripts/first_testscript.py',
            testbed=my_testbed, host=device_hostname)

You can see that you must provide some additional arguments. However, the only necessary argument is the testable argument. The testable argument points to the location of the testscript file. By default, it points to ‘__main__’. All other arguments are keyword arguments that can be used later in the testscript – they are loaded in as testscript-level parameters. The Standalone execution method is great for testing and developing scripts, and it may even work for certain use cases, but what if we wanted something a little more? Easypy execution might be what you’re looking for…

Easypy Execution

Easypy execution is the more “official” way to run your testscripts. You would use this method to perform production-grade testing, such as sanity/regression testing. There are many advantages to using Easypy over Standalone execution. With Easypy, you can aggregate multiple testscripts into a single job file. Easypy will take care of logging configuration, with the option for user customization in each job file. A TaskLog object is used to generate results, reports, and archives. A Reporter object is used for reporting results by generating a YAML results file, along with XML summary files.

There are multiple ways to run Easypy jobs. One of the most common is the pyats run job CLI command. This eliminates the need to write any additional code to run your testscript; if you have pyATS installed, you have access to the pyATS command-line. The other option is to use the run() API. Just like with Standalone execution, you pass in the location of the testscript file as the first argument and the execution environment takes care of the rest. Here’s another example from the pyATS docs:

# Example
# -------
#
#   pyats job file example, with script arguments

from pyats.easypy import run

def main():

    # providing a couple custom script arguments as **kwargs
    run(testscript='/path/to/your/script.py',
        pyats_is_awesome = True,
        aetest_is_legendary = True)

# if this job file was run with the following command:
#   pyats run job example_job.py --testbed-file /path/to/my/testbed.yaml
#
# and the script had one testcase that prints out the script's parameters,
# the output of the script ought to be:
#
#   starting test execution for testscript 'a.py'
#   +------------------------------------------------------------------------------+
#   |                          Starting testcase Testcase                          |
#   +------------------------------------------------------------------------------+
#   +------------------------------------------------------------------------------+
#   |                            Starting section test                             |
#   +------------------------------------------------------------------------------+
#   Parameters = {'testbed': <Testbed object at 0xf742f74c>,
#                 'pyats_is_awesome': True,
#                 'aetest_is_legendary': True}
#   The result of section test is => PASSED
#   The result of testcase Testcase is => PASSED

As you can see, the run() API is called and its first argument, testscript, is the file path to the testscript we want to execute. After Easypy execution, the runtime logs and archives are stored in a local directory. You can review each of these files manually or use the pyats logs view CLI command to view them in a nice HTML format on a local web server.
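Putting the two CLI commands together, a typical Easypy workflow looks something like this (the job file name and testbed path are illustrative):

```shell
# Run the job file via the pyATS CLI
pyats run job example_job.py --testbed-file /path/to/my/testbed.yaml

# Browse the archived logs, reports, and results as HTML
# on a local web server
pyats logs view
```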

Now that I’ve reviewed each method of running a testscript, it’s up to you to decide, based on your requirements, which one is best for you.

Running the Testscript

To run my example testscript, I chose to use the Standalone execution and capture the results in the standard output (stdout). I used the ‘IOS XE on CSR Recommended Code Always On‘ sandbox hosted by Cisco DevNet as my test device. Here’s what the results look like for Standalone execution of the testscript:

2021-06-22T11:04:06: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:06: %AETEST-INFO: |                            Starting common setup                             |
2021-06-22T11:04:06: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:06: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:06: %AETEST-INFO: |                    Starting subsection connect_to_devices                    |
2021-06-22T11:04:06: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:07: %AETEST-INFO: +..............................................................................+
2021-06-22T11:04:07: %AETEST-INFO: :                    Starting STEP 1: Connect to csr1000v-1                    :
2021-06-22T11:04:07: %AETEST-INFO: +..............................................................................+

2021-06-22 11:04:07,061: %UNICON-INFO: +++ csr1000v-1 logfile /tmp/csr1000v-1-cli-20210622T110407061.log +++

2021-06-22 11:04:07,061: %UNICON-INFO: +++ Unicon plugin iosxe +++


2021-06-22 11:04:10,775: %UNICON-INFO: +++ connection to spawn: ssh -l developer 64.103.37.51 -p 8181, id: 140725016193680 +++

2021-06-22 11:04:10,775: %UNICON-INFO: connection to csr1000v-1
Password: 

Welcome to the DevNet Sandbox for CSR1000v and IOS XE

The following programmability features are already enabled:
  - NETCONF
  - RESTCONF

Thanks for stopping by.



csr1000v-1#

2021-06-22 11:04:11,234: %UNICON-INFO: +++ initializing handle +++

2021-06-22 11:04:11,297: %UNICON-INFO: +++ csr1000v-1 with alias 'cli': executing command 'term length 0' +++
term length 0
csr1000v-1#

2021-06-22 11:04:11,659: %UNICON-INFO: +++ csr1000v-1 with alias 'cli': executing command 'term width 0' +++
term width 0
csr1000v-1#

2021-06-22 11:04:12,059: %UNICON-INFO: +++ csr1000v-1 with alias 'cli': executing command 'show version' +++
show version
Cisco IOS XE Software, Version 16.09.03
Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2019 by Cisco Systems, Inc.
Compiled Wed 20-Mar-19 07:56 by mcpre


Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.
All rights reserved.  Certain components of Cisco IOS-XE software are
licensed under the GNU General Public License ("GPL") Version 2.0.  The
software code licensed under GPL Version 2.0 is free software that comes
with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such
GPL code under the terms of GPL Version 2.0.  For more details, see the
documentation or "License Notice" file accompanying the IOS-XE software,
or the applicable URL provided on the flyer accompanying the IOS-XE
software.


ROM: IOS-XE ROMMON

csr1000v-1 uptime is 2 minutes
Uptime for this control processor is 3 minutes
System returned to ROM by reload
System image file is "bootflash:packages.conf"
Last reload reason: reload



This product contains cryptographic features and is subject to United
States and local country laws governing import, export, transfer and
use. Delivery of Cisco cryptographic products does not imply
third-party authority to import, export, distribute or use encryption.
Importers, exporters, distributors and users are responsible for
compliance with U.S. and local country laws. By using this product you
agree to comply with applicable laws and regulations. If you are unable
to comply with U.S. and local laws, return this product immediately.

A summary of U.S. laws governing Cisco cryptographic products may be found at:
http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

If you require further assistance please contact us by sending email to
export@cisco.com.

License Level: ax
License Type: Default. No valid license found.
Next reload license Level: ax


Smart Licensing Status: Smart Licensing is DISABLED

cisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.
Processor board ID 9UB7M1TZDUS
3 Gigabit Ethernet interfaces
32768K bytes of non-volatile configuration memory.
8113280K bytes of physical memory.
7774207K bytes of virtual hard disk at bootflash:.
0K bytes of WebUI ODM Files at webui:.

Configuration register is 0x2102

csr1000v-1#

2021-06-22 11:04:13,457: %UNICON-INFO: +++ csr1000v-1 with alias 'cli': configure +++
config term
Enter configuration commands, one per line.  End with CNTL/Z.
csr1000v-1(config)#no logging console
csr1000v-1(config)#line console 0
csr1000v-1(config-line)#exec-timeout 0
csr1000v-1(config-line)#end
csr1000v-1#
2021-06-22T11:04:15: %AETEST-INFO: The result of STEP 1: Connect to csr1000v-1 is => PASSED
2021-06-22T11:04:15: %AETEST-INFO: +----------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: |                       STEPS Report                       |
2021-06-22T11:04:15: %AETEST-INFO: +----------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: STEP 1 - Connect to csr1000v-1                    Passed    
2021-06-22T11:04:15: %AETEST-INFO: ------------------------------------------------------------
2021-06-22T11:04:15: %AETEST-INFO: The result of subsection connect_to_devices is => PASSED
2021-06-22T11:04:15: %AETEST-INFO: The result of common setup is => PASSED
2021-06-22T11:04:15: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: |                       Starting testcase FirstTestcase                        |
2021-06-22T11:04:15: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: |                            Starting section setup                            |
2021-06-22T11:04:15: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: Passed reason: Successfully connected to the device
2021-06-22T11:04:15: %AETEST-INFO: The result of section setup is => PASSED
2021-06-22T11:04:15: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: |                     Starting section verify_ios_version                      |
2021-06-22T11:04:15: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:15: %AETEST-INFO: +..............................................................................+
2021-06-22T11:04:15: %AETEST-INFO: :                  Starting STEP 1: Checking the IOS version                   :
2021-06-22T11:04:15: %AETEST-INFO: +..............................................................................+

2021-06-22 11:04:16,465: %UNICON-INFO: +++ csr1000v-1 with alias 'cli': executing command 'show version' +++
show version
Cisco IOS XE Software, Version 16.09.03
Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2019 by Cisco Systems, Inc.
Compiled Wed 20-Mar-19 07:56 by mcpre


Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.
All rights reserved.  Certain components of Cisco IOS-XE software are
licensed under the GNU General Public License ("GPL") Version 2.0.  The
software code licensed under GPL Version 2.0 is free software that comes
with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such
GPL code under the terms of GPL Version 2.0.  For more details, see the
documentation or "License Notice" file accompanying the IOS-XE software,
or the applicable URL provided on the flyer accompanying the IOS-XE
software.


ROM: IOS-XE ROMMON

csr1000v-1 uptime is 2 minutes
Uptime for this control processor is 3 minutes
System returned to ROM by reload
System image file is "bootflash:packages.conf"
Last reload reason: reload



This product contains cryptographic features and is subject to United
States and local country laws governing import, export, transfer and
use. Delivery of Cisco cryptographic products does not imply
third-party authority to import, export, distribute or use encryption.
Importers, exporters, distributors and users are responsible for
compliance with U.S. and local country laws. By using this product you
agree to comply with applicable laws and regulations. If you are unable
to comply with U.S. and local laws, return this product immediately.

A summary of U.S. laws governing Cisco cryptographic products may be found at:
http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

If you require further assistance please contact us by sending email to
export@cisco.com.

License Level: ax
License Type: Default. No valid license found.
Next reload license Level: ax


Smart Licensing Status: Smart Licensing is DISABLED

cisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.
Processor board ID 9UB7M1TZDUS
3 Gigabit Ethernet interfaces
32768K bytes of non-volatile configuration memory.
8113280K bytes of physical memory.
7774207K bytes of virtual hard disk at bootflash:.
0K bytes of WebUI ODM Files at webui:.

Configuration register is 0x2102

csr1000v-1#
2021-06-22T11:04:19: %AETEST-ERROR: Failed reason: Device is running a different IOS version than the defined standard.
2021-06-22T11:04:19: %AETEST-INFO: The result of STEP 1: Checking the IOS version is => FAILED
2021-06-22T11:04:19: %AETEST-INFO: +----------------------------------------------------------+
2021-06-22T11:04:19: %AETEST-INFO: |                       STEPS Report                       |
2021-06-22T11:04:19: %AETEST-INFO: +----------------------------------------------------------+
2021-06-22T11:04:19: %AETEST-INFO: STEP 1 - Checking the IOS version                 Failed    
2021-06-22T11:04:19: %AETEST-INFO: ------------------------------------------------------------
2021-06-22T11:04:19: %AETEST-INFO: The result of section verify_ios_version is => FAILED
2021-06-22T11:04:19: %AETEST-INFO: The result of testcase FirstTestcase is => FAILED
2021-06-22T11:04:19: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:19: %AETEST-INFO: |                           Starting common cleanup                            |
2021-06-22T11:04:19: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:19: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:19: %AETEST-INFO: |                 Starting subsection disconnect_from_devices                  |
2021-06-22T11:04:19: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:19: %AETEST-INFO: +..............................................................................+
2021-06-22T11:04:19: %AETEST-INFO: :                Starting STEP 1: Disconnecting from csr1000v-1                :
2021-06-22T11:04:19: %AETEST-INFO: +..............................................................................+
2021-06-22T11:04:30: %AETEST-INFO: The result of STEP 1: Disconnecting from csr1000v-1 is => PASSED
2021-06-22T11:04:30: %AETEST-INFO: +----------------------------------------------------------+
2021-06-22T11:04:30: %AETEST-INFO: |                       STEPS Report                       |
2021-06-22T11:04:30: %AETEST-INFO: +----------------------------------------------------------+
2021-06-22T11:04:30: %AETEST-INFO: STEP 1 - Disconnecting from csr1000v-1            Passed    
2021-06-22T11:04:30: %AETEST-INFO: ------------------------------------------------------------
2021-06-22T11:04:30: %AETEST-INFO: The result of subsection disconnect_from_devices is => PASSED
2021-06-22T11:04:30: %AETEST-INFO: The result of common cleanup is => PASSED
2021-06-22T11:04:30: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:30: %AETEST-INFO: |                               Detailed Results                               |
2021-06-22T11:04:30: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:30: %AETEST-INFO:  SECTIONS/TESTCASES                                                      RESULT   
2021-06-22T11:04:30: %AETEST-INFO: --------------------------------------------------------------------------------
2021-06-22T11:04:30: %AETEST-INFO: .
2021-06-22T11:04:30: %AETEST-INFO: |-- common_setup                                                          PASSED
2021-06-22T11:04:30: %AETEST-INFO: |   `-- connect_to_devices                                                PASSED
2021-06-22T11:04:30: %AETEST-INFO: |       `-- Step 1: Connect to csr1000v-1                                 PASSED
2021-06-22T11:04:30: %AETEST-INFO: |-- FirstTestcase                                                         FAILED
2021-06-22T11:04:30: %AETEST-INFO: |   |-- setup                                                             PASSED
2021-06-22T11:04:30: %AETEST-INFO: |   `-- verify_ios_version                                                FAILED
2021-06-22T11:04:30: %AETEST-INFO: |       `-- Step 1: Checking the IOS version                              FAILED
2021-06-22T11:04:30: %AETEST-INFO: `-- common_cleanup                                                        PASSED
2021-06-22T11:04:30: %AETEST-INFO:     `-- disconnect_from_devices                                           PASSED
2021-06-22T11:04:30: %AETEST-INFO:         `-- Step 1: Disconnecting from csr1000v-1                         PASSED
2021-06-22T11:04:30: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:30: %AETEST-INFO: |                                   Summary                                    |
2021-06-22T11:04:30: %AETEST-INFO: +------------------------------------------------------------------------------+
2021-06-22T11:04:30: %AETEST-INFO:  Number of ABORTED                                                            0 
2021-06-22T11:04:30: %AETEST-INFO:  Number of BLOCKED                                                            0 
2021-06-22T11:04:30: %AETEST-INFO:  Number of ERRORED                                                            0 
2021-06-22T11:04:30: %AETEST-INFO:  Number of FAILED                                                             1 
2021-06-22T11:04:30: %AETEST-INFO:  Number of PASSED                                                             2 
2021-06-22T11:04:30: %AETEST-INFO:  Number of PASSX                                                              0 
2021-06-22T11:04:30: %AETEST-INFO:  Number of SKIPPED                                                            0 
2021-06-22T11:04:30: %AETEST-INFO:  Total Number                                                                 3 
2021-06-22T11:04:30: %AETEST-INFO:  Success Rate                                                             66.7% 
2021-06-22T11:04:30: %AETEST-INFO: --------------------------------------------------------------------------------

You can see that the CSR is not running my defined standard version of IOS-XE code (17.3.3), which is why Step 1 under the verify_ios_version test shows a ‘Failed’ result. Instead, the CSR is running IOS-XE 16.9.3. Our testscript worked!
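The comparison at the heart of verify_ios_version can be sketched in a few lines. The parsed dictionary below is a trimmed, hypothetical sample shaped like Genie’s IOS-XE ‘show version’ parser output; in the real testscript, it would come from the device rather than being hard-coded.

```python
# Minimal sketch of the version-check logic (hard-coded sample data).
# In the real testcase, `parsed` would come from the connected device.
STANDARD_VERSION = '17.3.3'   # the org's defined standard

parsed = {
    'version': {
        'os': 'iosxe',
        'version': '16.9.3',  # what the sandbox CSR actually runs
    }
}

running = parsed['version']['version']
result = 'PASSED' if running == STANDARD_VERSION else 'FAILED'
print(f'Running {running}, standard {STANDARD_VERSION}: {result}')
```

With the sample data above, the check fails, mirroring the ‘Failed’ result in the Standalone output.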

Conclusion

There was a lot to go over in this blog post, and I can tell you that I’ve just scratched the surface. AEtest is a phenomenal testing framework and provides a structured, but flexible, way to organize and execute testscripts. I hope you were able to learn something from this post and have the time to review the pyATS docs to learn more. In the final part of this series (Part 4), we are going to go over some hidden gem libraries I found in pyATS and Genie that could be useful.

As always, if you have any questions or would like to have a discussion about any topics in this blog post, feel free to hit me up on Twitter (@devnetdan). Thanks again for reading and I’ll catch you in Part 4!

References

pyATS AEtest docs: AEtest – Test Infrastructure — pyATS Documentation
pyATS – Container classes: Object Model — pyATS Documentation
Easypy Runtime Environment docs: Easypy – Runtime Environment — pyATS Documentation

pyATS and Genie: Part 2

Introduction

It’s been a while since my last post, but I finally found some time to sit down and continue writing this series. Since then, I have been spending a lot of time reading through the pyATS docs and experimenting with its modules. In this post, we will go over the basics of getting started with pyATS. My goal is to have you parse your first ‘show’ command and see how easy it is to traverse the output using a Python module like Dq (which we will jump into later). Now let’s take a look at how pyATS works and begin testing!

Getting Started with pyATS

Before building any scripts or running commands against any devices, let’s talk about the different libraries that coexist with pyATS. I don’t want to spend too much time on the details, but as network engineers, many of us like to know how things work at the most basic level. Here’s a quick illustration of the different libraries that we use alongside pyATS:

I built this visual because it has helped me understand each library and their respective role within the testing infrastructure. I hope it helps you understand them as well.

Installation

The installation process is pretty simple for pyATS and Genie. I recommend using a Python virtual environment, as it isolates your environment from your system-level Python installation. This allows for better dependency management and helps avoid potential conflicts. For my Python virtual environment, I’ll be using Pipenv, but you may also use virtualenv or venv to create your virtual environment. Here’s how to install the necessary packages using virtualenv or venv:

# creates the Python3 virtual environment using virtualenv or venv 
python3 -m virtualenv pyats-venv OR python3 -m venv pyats-venv

source pyats-venv/bin/activate # <--- activates the virtual environment

pip install pyats # <--- installs pyats (includes unicon)
pip install genie # <--- installs genie

Here’s how you would create the same virtual environment using Pipenv:

# install pipenv on the system-level Python3 installation
pip3 install pipenv

pipenv --three # <--- creates a Python3 virtual environment
pipenv shell # <--- activates the virtual environment

pipenv install pyats # <--- install pyats (includes unicon)
pipenv install genie # <--- install genie

I’ve started using Pipenv because of its great benefits. With Pipenv, you no longer need to track your project dependencies in a requirements.txt file. Pipenv automatically tracks the installed packages: as packages are installed and uninstalled, a file called the Pipfile is automatically updated. Another file, Pipfile.lock, is also generated to help rebuild the environment in the future. Another great feature of Pipenv is its ability to identify security vulnerabilities in the packages you’re using in your environment. To learn more about Pipenv, check out this link.
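As a quick illustration of that workflow (the package name here is just an example):

```shell
# Installing/uninstalling a package keeps the Pipfile in sync automatically
pipenv install requests
pipenv uninstall requests

# Scan the environment's packages for known security vulnerabilities
pipenv check
```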

Now that we know about the pyATS and Genie libraries and have them installed, let’s jump into the fun stuff!

Defining the Testbed

If you’ve used Ansible or another automation platform, you will be familiar with building an inventory file. pyATS has a similar concept called a testbed. The testbed file defines the device names, IP addresses, connection types, credentials, OS, and type. The OS is important because that value is used by the Unicon library to determine how to connect to a device and handle the different CLI prompt patterns. Let’s review a sample testbed found in the pyATS docs:

# Example
# -------
#
#   an example testbed file - ios_testbed.yaml

testbed:
    name: IOS_Testbed
    credentials:
        default:
            username: admin
            password: cisco
        enable:
            password: cisco

devices:
    ios-1: # <----- must match to your device hostname in the prompt
        os: ios
        type: ios
        connections:
            a:
                protocol: telnet
                ip: 1.1.1.1
                port: 11023
    ios-2:
        os: ios
        type: ios
        connections:
            a:
                protocol: telnet
                ip: 1.1.1.2
                port: 11024
            vty:
                protocol: ssh
                ip: 5.5.5.5
topology:
    ios-1:
        interfaces:
            GigabitEthernet0/0:
                ipv4: 10.10.10.1/24
                ipv6: '10:10:10::1/64'
                link: link-1
                type: ethernet
            Loopback0:
                ipv4: 192.168.0.1/32
                ipv6: '192::1/128'
                link: ios1_Loopback0
                type: loopback
    ios-2:
        interfaces:
            GigabitEthernet0/0:
                ipv4: 10.10.10.2/24
                ipv6: '10:10:10::2/64'
                link: link-1
                type: ethernet
            Loopback0:
                ipv4: 192.168.0.2/32
                ipv6: '192::2/128'
                link: ios2_Loopback0
                type: loopback

I like this example because it touches on every aspect of a testbed. Starting at the top, you have the testbed defined with a set of credentials. Credentials declared under the testbed section can be used by all devices in the testbed. The devices section is where you define the devices you’ll be testing. For each device, you have to define the OS, device type, connections (there can be more than one), and the credentials (if different from the ones defined under the testbed section). The last section, topology, is the most interesting. You are able to define a logical topology in your testbed file, which allows pyATS to understand how these devices are connected in the real world. Using the example testbed above, you’ll see that ‘link-1’ is used to represent a connection between the ios-1 and ios-2 devices. I like to think of it as translating a Visio diagram into a format that pyATS can understand. By allowing pyATS to understand how these devices are connected, the topology provides the foundation for more complex testcases – for example, taking a snapshot of the network before and after losing a link on a specific device to see what’s affected (i.e. link status, routing, etc.).

This brings me to an important point: Everything in pyATS is treated as an object. Take a look at this visual from the pyATS docs:

+--------------------------------------------------------------------------+
| Testbed Object                                                           |
|                                                                          |
| +-----------------------------+          +-----------------------------+ |
| | Device Object - myRouterA   |          | Device Object - myRouterB   | |
| |                             |          |                             | |
| |         device interfaces   |          |          device interfaces  | |
| | +----------+ +----------+   |          |   +----------+ +----------+ | |
| | | intf Obj | | intf Obj |   |          |   |  intf Obj| | intf Obj | | |
| | | Eth1/1   | | Eth1/2 *-----------*----------*  Eth1/1| | Eth1/2   | | |
| | +----------+ + ---------+   |     |    |   +----------+ +----------+ | |
| +-----------------------------+     |    +-----------------------------+ |
|                                     |                                    |
|                               +-----*----+                               |
|                               | Link Obj |                               |
|                               |rtrA-rtrB |                               |
|                               +----------+                               |
+--------------------------------------------------------------------------+

You can see that everything is stored in the Testbed container object. From there, the device objects (myRouterA and myRouterB) each have two interface objects (Eth1/1 and Eth1/2). The link object does not belong to either device, but is instead shared between the interface objects on both devices. In our example, we will not be including a topology section in the testbed – I want to keep it easy and simple. However, I wanted to point out the topology section because it can extend the functionality of pyATS once you dive into more advanced use cases. Now let’s move on to creating our first testbed and gathering data from our testbed devices.
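To make that containment concrete, here’s a toy model (these are NOT the real pyATS classes, just a sketch of the relationships in the diagram above): one Link object shared by Interface objects that live on two different Device objects, all held by a single Testbed container.

```python
# Toy model of the pyATS object containment (NOT the real classes):
# a shared Link lets a test ask "who is on the other end?" without a diagram.
class Link:
    def __init__(self, name):
        self.name = name
        self.interfaces = []   # endpoints attach themselves here

class Interface:
    def __init__(self, device, name, link):
        self.device, self.name, self.link = device, name, link
        link.interfaces.append(self)

class Device:
    def __init__(self, name):
        self.name = name
        self.interfaces = {}

class Testbed:
    def __init__(self, name):
        self.name = name
        self.devices = {}

tb = Testbed('IOS_Testbed')
link1 = Link('link-1')
for hostname in ('ios-1', 'ios-2'):
    dev = Device(hostname)
    dev.interfaces['GigabitEthernet0/0'] = Interface(
        dev, 'GigabitEthernet0/0', link1)
    tb.devices[hostname] = dev

# Because link-1 is shared, walking it yields both endpoint devices
peers = [intf.device.name for intf in link1.interfaces]
print(peers)  # -> ['ios-1', 'ios-2']
```

The real pyATS topology objects work along these lines, with far richer attributes and methods on each object.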

Building our Testbed

For our example, we’ll be using the Always-on DevNet ‘IOS XE on CSR’ sandbox. The use case we will be looking at is verifying the IOS software version running on our device(s). It’s common for organizations to define a standard IOS software version for their switches and routers. The problem is that it would be a nightmare for a network admin to log in to each device in the network and confirm the IOS version. Besides being a very manual process, it’s also very error-prone. Fortunately, pyATS and Genie provide multiple ways to gather the data AND compare it to our defined standard. Yes, there are a number of off-the-shelf tools that can perform this same function, but I want to show you how easy it is to accomplish with pyATS.

There are two ways to define a testbed: in a YAML file (most common) or directly in a Python dictionary. The YAML file is most common due to its easier readability, but at the end of the day, the testbed is loaded into a Python dictionary. Below is the testbed I’ll be using in our example:

testbed:
  name: DevNet_Testbed

devices:
  csr1000v-1: # <----- must match to your device hostname in the prompt
    os: iosxe
    type: iosxe
    credentials:
      default:
        username: developer
        password: C1sco12345
    connections:
      cli:
        protocol: ssh
        ip: 64.103.37.51 # <----- confirm before testing
        port: 8181

It’s fairly simple to read and understand. I defined a testbed called ‘DevNet_Testbed’ with one device in it. A few things to note here: the hostname key, which is named ‘csr1000v-1’ in my testbed, must match the hostname shown in the device’s CLI prompt. The reason is that pyATS looks for the hostname when logging into the device’s CLI. Also, since we are using a public always-on sandbox, the hostname and IP address may change, so please confirm the hostname (as shown in the CLI prompt) and the public IP address are correct in your testbed before testing. As previously mentioned, we aren’t going to be adding a topology section to our testbed. Now that we have defined our testbed, let’s start writing our first pyATS script!

Learning the Network

Before diving into the code, I want to go over the general flow of the script. First, we will load the testbed file into the script (I just called my testbed ‘testbed.yaml’ – I know, very original). Next, we will pull out only the device we want to test against. You may ask, “Why are we pulling out the only device in the testbed?” Well, down the road when you have a larger testbed file, there’s a good chance you will only want to test against a subset of devices – otherwise, your tests would run against all devices in your testbed. I’m showing you how to do that now; you’ll thank me later. After we identify the device we want to test, we connect to the device, run the necessary command(s) (‘show version’ in our case), and disconnect from the device.

In the next few sections, we are going to look at some Genie modules and device methods that help gather and structure the necessary data from a network device. I’ll provide a code example and the respective output for each method.

Now that we have a general idea of how the script will flow, let’s start writing some code!

genie execute

The execute() method instructs pyATS to connect to the device, run the desired command, and return the raw output. Below is example code that identifies the DevNet sandbox CSR in the testbed and assigns it to the variable named csr. We can then use the csr variable to access other device methods, including connect() and disconnect(). How much easier can it get?!

from pyats.topology import loader

# Load the testbed file
tb = loader.load('testbed.yaml')

# Assign the CSR device to a variable
csr = tb.devices['csr1000v-1']

# Connect to the CSR device
csr.connect()

# Issue 'show version' command and print the output
print(csr.execute('show version'))

# Disconnect from the CSR device
csr.disconnect()
csr1000v-1#
Cisco IOS XE Software, Version 16.09.03
Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2019 by Cisco Systems, Inc.
Compiled Wed 20-Mar-19 07:56 by mcpre


Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.
All rights reserved.  Certain components of Cisco IOS-XE software are
licensed under the GNU General Public License ("GPL") Version 2.0.  The
software code licensed under GPL Version 2.0 is free software that comes
with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such
GPL code under the terms of GPL Version 2.0.  For more details, see the
documentation or "License Notice" file accompanying the IOS-XE software,
or the applicable URL provided on the flyer accompanying the IOS-XE
software.


ROM: IOS-XE ROMMON

csr1000v-1 uptime is 14 minutes
Uptime for this control processor is 15 minutes
System returned to ROM by reload
System image file is "bootflash:packages.conf"
Last reload reason: reload




## Truncated for brevity ##

Looking at the output, you’ll notice that it looks just like it would in an SSH session. The only problem is that, while it may be readable for a human, it’s not in a great format for a computer: the entire output is stored as one long string object in Python, which is not ideal when it comes to parsing out the data we need. Maybe there’s a Genie method that can collect and “parse” the data… sorry, I know it’s a bad one… let’s move on to Genie’s parse method.
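To see the problem concretely, extracting just the version number from that raw string means writing your own regex. Here’s a minimal sketch; the sample string is trimmed from the output above, and the pattern is my own assumption, not something pyATS provides:

```python
import re

# Trimmed sample of the raw string returned by csr.execute('show version')
raw_output = (
    "Cisco IOS XE Software, Version 16.09.03\n"
    "Cisco IOS Software [Fuji], Virtual XE Software "
    "(X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)\n"
)

# Hand-rolled pattern: brittle, and every datapoint needs its own regex
match = re.search(r"Cisco IOS XE Software, Version (\S+)", raw_output)
version = match.group(1) if match else None
print(version)  # 16.09.03
```

Multiply that by every datapoint you care about, across every OS in your network, and the appeal of a real parser becomes obvious.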

genie parse

The parse() method performs exactly the same actions as the execute() method, but provides some additional functionality. Along with sending the command to the device and collecting the output, the output is passed along to one of the thousands of available parsers in Genie’s library. For the complete list of available parsers, check them out here. These parsers will automatically break down the long string into a structured Python dictionary. This is where we can begin programmatically interacting with the data, without the need for complex regex. Let’s take a look at the code and structured output:

from pyats.topology import loader
from pprint import pprint

# Load the testbed file
tb = loader.load('testbed.yaml')

# Assign the CSR device to a variable
csr = tb.devices['csr1000v-1']

# Connect to the CSR device
csr.connect()

# Issue 'show version' command and print the output
pprint(csr.parse('show version'))

# Disconnect from the CSR device
csr.disconnect()
{'version': {'chassis': 'CSR1000V',
             'chassis_sn': '9YUMZ3N5W7V',
             'compiled_by': 'mcpre',
             'compiled_date': 'Wed 20-Mar-19 07:56',
             'curr_config_register': '0x2102',
             'disks': {'bootflash:.': {'disk_size': '7774207',
                                       'type_of_disk': 'virtual hard disk'},
                       'webui:.': {'disk_size': '0',
                                   'type_of_disk': 'WebUI ODM Files'}},
             'hostname': 'csr1000v-1',
             'image_id': 'X86_64_LINUX_IOSD-UNIVERSALK9-M',
             'image_type': 'production image',
             'label': 'RELEASE SOFTWARE (fc2)',
             'last_reload_reason': 'reload',
             'license_level': 'ax',
             'license_type': 'Default. No valid license found.',
             'main_mem': '2392579',
             'mem_size': {'non-volatile configuration': '32768',
                          'physical': '8113280'},
             'next_reload_license_level': 'ax',
             'number_of_intfs': {'Gigabit Ethernet': '3'},
             'os': 'IOS-XE',
             'platform': 'Virtual XE',
             'processor_type': 'VXE',
             'returned_to_rom_by': 'reload',
             'rom': 'IOS-XE ROMMON',
             'rtr_type': 'CSR1000V',
             'system_image': 'bootflash:packages.conf',
             'uptime': '3 minutes',
             'uptime_this_cp': '4 minutes',
             'version': '16.9.3',
             'version_short': '16.9',
             'xe_version': '16.09.03'}}



The only change we made to our code is swapping out the execute() method for the parse() method. We also imported pretty print (pprint) to make the output easier to read. The biggest difference is the output itself. In 5 lines of code (minus the comments), we have a structured Python dictionary with datapoints that identify key information you’d find in a ‘show version’ output. You can see ‘last_reload_reason’, ‘os’, ‘uptime’, and plenty of other valuable datapoints. For our use case, we will be interested in the ‘xe_version’ datapoint.

genie learn

If Genie parse takes care of our use case, why do we need to know another method? Well, what if you want to gather data across multiple network devices whose different operating systems require different ‘show’ commands? With our current knowledge, you would have to parse a separate ‘show’ command for each OS type in your testbed. Beyond that, the output of those different ‘show’ commands may not even include the datapoints you’re looking for. Enter Genie learn…

The Genie learn() method allows you to learn a feature of the device. Features can be protocols running on the device (e.g. ospf, eigrp, lisp, dot1x) or attributes of the device (e.g. platform, interface). These features are broken up into what Genie calls models. For the complete list of available Genie models, check them out here. These models provide a level of abstraction so that you don’t have to worry about which commands to parse for each OS, allowing you to focus on the output and finding the datapoints you need. I’m not providing a code snippet for the learn functionality, but I do want to show a small example of how Genie learns routing across the different Cisco NOS platforms.

I chose routing because the relevant commands differ just enough across the platforms to trip up even the most experienced engineer. If you are interested in an active, open-source project that takes pyATS and Genie learn to a whole new level, check out Merlin. This project was started by John Capobianco earlier this year and helps network engineers collect and document information about their network using the power of pyATS.
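For the curious, the call itself is a one-liner: device.learn('routing') returns a model object whose .info attribute is a plain Python dictionary. The sketch below simulates that .info structure so it runs without a device; the exact keys are an illustrative assumption based on the routing model, not output captured from a real device:

```python
# With a connected device, learning a feature is one call:
#   routing = csr.learn('routing')   # returns an ops model object
#   data = routing.info              # structured, OS-agnostic dict
#
# Simulated .info structure for illustration (keys are assumptions):
data = {
    'vrf': {
        'default': {
            'address_family': {
                'ipv4': {
                    'routes': {
                        '10.0.0.0/24': {
                            'active': True,
                            'source_protocol': 'ospf',
                        },
                    },
                },
            },
        },
    },
}

# The same lookup works whether the device runs IOS-XE, NX-OS, or IOS-XR,
# because the model normalizes the per-OS 'show' commands for you
routes = data['vrf']['default']['address_family']['ipv4']['routes']
print(list(routes))  # ['10.0.0.0/24']
```

That normalization is the whole point: your code queries one structure, regardless of which ‘show’ commands were run under the hood.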

Querying the Data

We’ve made some good progress thus far. The proper data has been collected, but now it’s time to check it against our defined standards. For the sake of our example, I’m going to declare IOS-XE 16.12.5 as our defined standard.

So how are we going to drill down to the datapoint we are interested in? Normally, we would have to use nested for loops to dig through the dictionaries of data, but not anymore. Genie comes with a suite of helpful libraries including one called Dq (dictionary query). The documentation is tough to find because it’s buried in a submenu within the Genie docs, but I wanted to provide a link for convenience: Dq library. If the link doesn’t take you there, you’ll have to click on ‘User Guide’ along the left side and choose ‘Useful Libraries’ towards the bottom of that submenu. Along with the Dq library, there are some other useful libraries, including Diff, Find, Config, Timeout, and TempResult. These libraries are just added bonuses to the already valuable Genie library. Let’s see how we can use Dq to search and locate our desired datapoint.

from pyats.topology import loader
from pprint import pprint
from genie.utils import Dq

# Load the testbed file
tb = loader.load('testbed.yaml')

# Assign the CSR device to a variable
csr = tb.devices['csr1000v-1']

# Connect to the CSR device
csr.connect()

# Issue 'show version' command and parse the output
parsed_output = csr.parse('show version')
# Store the standard IOS version in a variable for future use
standard_os = '16.12.05'
# Look for the 'xe_version' key and see if it contains the proper IOS version
ios_check = Dq(parsed_output).contains(standard_os).get_values('xe_version')

if ios_check:
    print('IOS Check passed!')
else:
    print('IOS Check failed!')

# Disconnect from the CSR device
csr.disconnect()

Two lines were added to our script in order to use the Dq library. The first imports the library. The second queries the parsed ‘show version’ output AND performs the comparison for us. Let’s take a minute and look at the magic in that single line of code.

In our code, we wrap the parsed ‘show version’ output in a Dq object, which gives us access to all of the methods in the Dq library. We chain the contains() method to filter the data down to the portions that “contain” our expected value, then the get_values() method to pull out the values stored under the ‘xe_version’ key. If a value is matched, a Python list of the matched values is returned; if there are no matches, an empty Python list is returned. In our example, that returned list is stored in the ios_check variable. The final if/else statement simply determines whether the list is empty or populated. If the list is populated (meaning the IOS versions matched), the IOS check passed and we are running the correct version. If the list is empty, the IOS versions did not match and the check failed. Here’s what the returned value would look like if there were a match:

['16.09.03']
IOS Check passed!

You’ll notice that there is one item in the returned Python list, which is the matching string value for the ‘xe_version’ key in the parsed ‘show version’ output.

With one line of code, we were able to query a nested dictionary AND determine whether a certain value existed. Hats off to the pyATS team. This library will save you so much time (and so many lines of code). It’s also worth noting that Dq, along with the rest of the Genie library, can be used independently of pyATS. So whether you’re querying your network or working on a separate project with larger datasets, you can use the power of Dq.
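To appreciate how much Dq abstracts away, here’s a rough hand-rolled equivalent of its get_values() behavior: a sketch of the recursive search you’d otherwise write yourself, not Genie’s actual implementation.

```python
def find_values(data, key):
    """Recursively collect every value stored under `key` in nested data."""
    found = []
    if isinstance(data, dict):
        for k, v in data.items():
            if k == key:
                found.append(v)
            found.extend(find_values(v, key))
    elif isinstance(data, list):
        for item in data:
            found.extend(find_values(item, key))
    return found

# Trimmed parsed 'show version' output from earlier
parsed_output = {'version': {'xe_version': '16.09.03', 'os': 'IOS-XE'}}

# Roughly equivalent to Dq(parsed_output).get_values('xe_version')
print(find_values(parsed_output, 'xe_version'))  # ['16.09.03']
```

Even this toy version takes a dozen lines and handles only one of Dq’s methods; chaining filters like contains() on top is where Dq really earns its keep.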

If you struggled to follow along, or would like to review the code we used in our example, check out my Github repo linked in the References section at the end of this post.

Conclusion

That wraps up Part 2 of this pyATS and Genie series. I’ve been enjoying writing these posts and I hope you’ve been able to find value in these phenomenal libraries. There’s so much more to the pyATS and Genie libraries, that I’ve just scratched the surface. Please check out the docs yourself to see the great features included in these libraries. In Part 3, we are going to take a look at the AEtest testing framework and Easypy runtime environment in pyATS. We may even write our first, true testscript.

As always, if you have any questions or just want to chat, hit me up on Twitter (@devnetdan). Thanks for reading!

References

Github repo: dannywade/learning-pyats: Repo for all pyATS code examples

pyATS docs: pyATS Documentation – pyATS – Document – Cisco DevNet
Genie docs: index – Genie Docs – Document – Cisco DevNet
Unicon docs: Documentation – Unicon – Document – Cisco DevNet

pyATS and Genie: Part 1

Introduction

Before getting started, I wanted to take the time to address the elephant in the room… how do we even pronounce pyATS? In short, it’s pie-A-T-S. However, the pyATS team has this nice visual on their site:

Now that we are all on the same page, let’s start looking at pyATS and how it fits into the network automation landscape.

“A piece of the pie”

I recently had a “Meet the Engineer” discussion with JB (@jeaubin5 on Twitter) from the pyATS dev team and we talked about the different “domains” that make up a proper network automation ecosystem and ultimately how pyATS fits into it. I always refer back to this visual from a session I attended at Cisco Live 2019 in San Diego.

Screenshot from DEVNET-1204 presentation at CLUS 2019

Each slice of the pie represents a domain in the ecosystem. If you’re just starting your network automation journey, I’m sure you’ve heard of or played with many of the tools found in the ‘Configuration Management’ domain. This domain is by far the most popular when it comes to network automation. Besides having an abundance of tools available, this domain is popular due to the number of use cases it can help solve. Network engineers usually begin looking at network automation when they have a proper use case or problem they’re trying to solve, and most of the time that use case revolves around pushing out configuration at scale. This may be pushing out a mass configuration update or ensuring that devices adhere to a “golden configuration”. However, this really shouldn’t be how you begin your network automation journey…

As with trying anything new in networking, or tech as a whole, you want to start with changes that have the lowest risk and impact. These changes are usually read-only and do not affect the configuration or operational state of a device. I think most engineers can agree with that… so why don’t we take the same approach with network automation? To be fair, configuration management can be limited to only collecting and parsing configuration (no configuration changes). In practice, though, many use cases revolve around pushing out configuration (i.e. updating an ACL, global configuration (NTP, DNS, etc.), or simply adding a VLAN to a group of switches). Let’s start our network automation journey the right way by discovering how our network is really operating before pushing out any potentially breaking changes. We will begin by diving into the ‘Stateful Test & Validation’ piece of the pie.

What’s pyATS?

Before pyATS was released to the public in late 2017, the ‘Stateful Test & Validation’ domain was pretty bare. At that time, much of the focus was on configuration management, which made sense: configuration management was a long-standing problem for many network teams, so solving it was a huge and quick win. Fast-forward to the present, and infrastructure-as-code and automation as a whole have taken off. We are beginning to view and manage our networks through git repositories instead of SSH sessions. With this rapid adoption of automation, network engineers figured out that network automation went beyond mass configuration pushes or config management. We could begin collecting data from the network via MDT, logs, parsed ‘show’ output, etc. and tell exactly how the network is running and how it reacts to specific changes. Wouldn’t it be great to know that all the routers across your environment had a certain number of BGP neighbors and their associated uptime? Have you ever wondered if that one switchport was down before and after a software upgrade? Wouldn’t it be great to mock up a configuration change, create testcases on specific criteria (i.e. number of routing adjacencies, CPU/memory %, etc.), and confirm it works before pushing it out to hundreds of devices? These are only a few use cases for a network testing and validation tool. Enter pyATS…

pyATS was initially an internal testing tool at Cisco. It was open sourced and released to the public in late 2017, and it provides a base testing framework for other tools and libraries. When many people first hear about pyATS, they also hear about its popular counterpart, Genie. Genie is a Python library that builds on top of pyATS and provides the tools for it, including network device APIs, parsers, and much more. Using the metaphor from the pyATS docs, think of pyATS as an empty toolbox and Genie as all the tools to put into it. Here’s a quick visual of how pyATS and Genie work together:

Introduction – pyATS & Genie – Document – Cisco DevNet

When I first looked into pyATS and Genie, I was confused because I thought they were one product. It wasn’t until I saw the above diagram that I slowly started figuring out the differences between them. To be clear, it’s not super important to understand the differences when you’re first starting out, since you’ll be using them together, but it will matter down the road.

Conclusion

I really thought I could fit everything in one post, but the more I read through the pyATS and Genie docs, the more exciting features I found, and I want to make sure I highlight them all properly. I’ll be creating a second post (Part 2) in the next couple weeks to begin highlighting some of these neat features and diving into the technical details. Stay tuned!

In the meantime, if you have any questions/feedback, please feel free to hit me up on Twitter (@devnetdan)!

DevNet Pro Journey: COMPLETED!

You read that title correctly… I’ve completed my DevNet Pro journey by passing the Cisco DEVCOR exam last week. I’m now officially a certified DevNet Professional!

In this post, I’m going to recap my entire journey, provide personal study tips, and how YOU can begin your own DevNet journey!

My Journey

Ever since I passed my CCNA in 2017, I’ve been striving to obtain my CCNP certification. However, I did a lousy job in chasing that achievement. I studied off and on for weeks at a time and really never mapped out my journey. I only took one attempt at the old CCNP Route exam and failed. After failing that, I took a hard look at each of the exam blueprints and made the ultimate decision to pause until the certifications were updated. Fast-forward to Cisco Live US 2019, I was in San Diego and was sitting at Chuck Robbins’ opening keynote where he announced the revamped Cisco certifications, and the addition of the DevNet certification track. I was pumped! By this point, I was already doing automation work at my job and loved it. After having a great Cisco Live that week, I began mapping out my journey.

Journey Mapping

This term is used a lot these days to describe and track customer experiences with a product, but I’ll be using it to describe how I got from Point A (CCNA with network automation aspirations) to Point B (CCNP and DevNet Professional).

The most important piece of mapping out your journey is to imagine the person you want to be. For example, I envisioned myself as a network automation engineer with CCNP and DevNet Professional certifications, so that was my “finish line” for this journey. This is the most crucial part of your plan, as it captures WHY you’re embarking on the journey. There are many other WHYs that can be included here: a job promotion, future opportunities, your family, etc. Another crucial piece of your journey map is setting limitations on what you’re achieving. This may sound weird, because you always hear motivational sayings like “nothing can hold you back” and “the sky is the limit”. However, these limitations apply only to the scope of this specific journey, not your entire life, and they keep your focus on this specific building block. For example, my limitation for this journey was that I wasn’t going for my CCIE. I knew the temptation would be there once I passed the ENCOR exam, since it’s now the CCIE written exam, but I removed that temptation from the start by stating that only the CCNP certification was in scope. The CCIE would be its own journey.

My Journey Recap

Once I knew where I wanted to go, I had to figure out how to get there. I started with the DevNet Associate. While studying for it, the DevNet 500 was announced, which just poured fuel on my motivational fire. I ended up taking and passing the exam on the first day it was available in February 2020, and became part of the DevNet 500. After the DevNet Associate, I went back to the journey map and figured out that I could double up the Cisco ENAUTO exam as the specialist exam for both certifications, the CCNP and DevNet Professional. With DevNet being newer, and the CCNP looming over my head since 2018, I decided to begin with the ENCOR exam. I passed that exam in October 2020 and moved immediately to the ENAUTO exam, which I completed a month later in November 2020. I want to stress that I had past experience with 2 of the 3 Cisco product APIs outlined in that exam blueprint. That helped immensely while studying, which led to the quick turnaround in passing the exam. After the ENAUTO exam, I took a break for the holidays and reviewed the final exam’s blueprint, the DEVCOR. This was by far the hardest exam. Besides the number of topics I didn’t know much about (Software Development and Design, K8s, etc.), this exam was also very broad. Like the ENCOR exam, it covered a lot of topics, and you really had to understand each one with some depth. I began studying for the DEVCOR exam in the beginning of January 2021 and was finally able to pass it last week (mid-April 2021).

DEVCOR Exam Tips

Create a Routine

One of the biggest tips I can give you is to create a routine. I mean same time, same place, on a regular cadence (that you choose). Some people start out saying they will study 7 nights a week, 2-4 hours a night, but let’s be realistic… life gets in the way. Look at your calendar from the start and take note of life events that you know will prevent you from studying. For example, there were a few events I knew would throw off my schedule: family birthdays, weekend trips, etc. During those weeks, I reduced the amount of material covered. Read through the blueprint for the exam you’re studying for and assign specific topics to each week. By the end, you should have a detailed schedule of how long it will take you to study (with time to review) before sitting for your first exam attempt.

Once you have a clear schedule (with realistic expectations), you can begin figuring out a specific time to study. This will highly depend on your situation in life, with the biggest factor being family responsibilities. Some people try to add time to their day by studying later at night or earlier in the morning. However, I went with option C and chose to substitute time from my existing schedule. Rather than watching TV or Netflix after dinner every night, I chose to study. Was it tough? Absolutely. I was so used to chilling out on the couch every night and watching my favorite series. There were some nights that I felt like I could blow it off, but my schedule reminded me that I would have to study twice as long the next day if I skipped. The last piece to the puzzle is where to study (stealing a real estate term: location, location, location). This is very important in order to keep concentration during your study time. At first, I studied at my dinner table. I quickly learned that there was too much going on around me and I couldn’t focus. I ended up studying in my office where I could close the door and had proper lighting. This location will be different for everyone. The only thing I suggest is to study only at this location while you’re at home. Your mind will be prepared and it will help you focus while studying.

Study Material

With the DevNet exams being a little over a year old now, there is minimal “official” study material available. This begs the question, where do I look for study material and what material is even good? Before we jump into the available resources out there, I want to go over my approach to choosing study materials. I always use each of the following formats:

  • (2-3) Books or other reading sources (whitepapers, blogs, etc.)
  • (1-2) Video courses
  • Labbing (Crucial piece!)

Now for the specifics: For books, I normally read through the Cisco Official Certification Guide (OCG) for that particular exam (if available), supplemented by Cisco’s online documentation. For video courses, I’ve only used CBT Nuggets and Pluralsight in the past. However, this year, I’ll be checking out INE courses. For labbing, it really depends on the exam. For DevNet exams, I use Cisco DevNet’s free and reservable sandboxes. I used to use EVE-NG, but found myself troubleshooting issues vs. working with the actual products. I still use EVE-NG if I want to test some feature more extensively, but in the context of studying, your best bet is to stick with the sandbox environments.

The only exception to my study material approach is when I studied for the DEVCOR exam. With the extensiveness and depth of the exam, I decided to go all out and purchase Cisco’s Digital Learning DEVCOR course. It’s an online course that provided a combination of reading material, videos, and hands-on labs. I would highly recommend this course for anyone preparing for the DEVCOR exam. I can confidently say it helped me pass the exam. Now let’s talk about how you can get started on your own DevNet journey.

Starting your own DevNet Journey

Since I started my DevNet journey back in December 2018, the DevNet program has grown extensively. DevNet certifications were introduced. More resources have been added to the Cisco DevNet site, including learning labs and sandboxes. The program itself has become more popular due to businesses realizing the value behind automation and engineers beginning to look at their infrastructure as code. Here is how I suggest starting your journey:

  • Look through the available DevNet exam blueprints. I’d suggest starting with the DevNet Associate.
  • Review the learning labs on Cisco DevNet
    • Learning labs provide step-by-step instructions on how to programmatically interact with a specific Cisco product or network device.
  • Create API requests using Postman
    • Postman is a tool used to test and explore APIs. Many learning labs require this piece of software as a pre-requisite. After receiving your first API response through Postman, I promise you, there’s no turning back…
  • Python – Requests library
    • The Requests library is used to programmatically interact with HTTP API endpoints (i.e. Cisco products in our case). This library allows you to collect the API response and perform additional manipulation/validation of the data using the power of other Python libraries.
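As a tiny taste of the Requests library, the sketch below builds (but does not send) a GET request so you can inspect what would go on the wire. The URL and token are placeholders, not a real endpoint:

```python
import requests

# Hypothetical endpoint and token -- substitute real sandbox values
url = "https://sandbox.example.com/api/v1/devices"
headers = {"Authorization": "Bearer <token>", "Accept": "application/json"}

# Prepare the request without sending it
prepared = requests.Request("GET", url, headers=headers).prepare()
print(prepared.method, prepared.url)

# Actually sending it (requires network access) would look like:
#   response = requests.Session().send(prepared)
#   response.raise_for_status()   # raise on 4xx/5xx errors
#   data = response.json()        # parsed JSON body as Python objects
```

In day-to-day use you’d simply call requests.get(url, headers=headers) and work with response.json(); the prepared-request form is just a convenient way to see the request before it leaves your machine.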

It’s worth mentioning that the above suggestions are specific to Cisco’s DevNet program and do not encapsulate network automation as a whole. There are plenty of open-source Python projects out there that can help you get started, such as Netmiko, Nornir, NAPALM, Scrapli, and pyATS, to name a few. These projects allow you to programmatically connect to a network device (via SSH, NETCONF, etc.), collect and parse ‘show’ command outputs, and even push configuration to a device. You may even see a couple of these libraries referenced in the DevNet exam blueprints.

Conclusion

I know this is a longer and more personal post, so I appreciate you reading thus far. I purposely made this more personal and detailed so that you could know my story and hopefully relate in some way. I’ve read many “study tip guides” out there and they all seemed to bullet point the same high-level topics. I figured if I added more personal details and told my story, it could be more relatable to you. If you’re interested in Cisco DevNet or network automation and have questions, please feel free to hit me up on Twitter (@devnetdan).

Thanks for reading!

DevNet Pro Journey: My First DEVCOR Exam Experience

Introduction

Last week, the day finally came, I took my first swing at the Cisco DEVCOR exam. Going into it, I felt good about the material. I had studied for weeks on end and maintained a consistent schedule, studying about 6 nights a week. Consistency was one of my study goals for this exam. In the past, I would have longer study sessions, but only study 3-4 nights a week. I knew I needed to maintain consistency for an exam like DEVCOR. The final week leading up to the exam, I dedicated 2-3 hours of focused studying, which included reviewing my notes, additional online documentation, and labbing in the Cisco DevNet sandboxes. After weeks of preparation, I was ready for my first attempt.

The Exam Experience

Surprisingly, I wasn’t nervous the night before or the morning of my exam. Recently, I’ve changed my mindset for taking certification exams. I now look at certification exams as when I’m going to pass vs. if I’m going to pass. This mindset has helped remove the mental obstacle that it’s all over if I don’t pass on the first attempt. Looking forward to when you pass helps ease your mind and ensures that, no matter what, you’ll find a way to pass the exam and obtain your certification. Yes, there is a financial cost associated with each attempt, so don’t take this advice and fall to the other end of the spectrum of “there’s always next time”. You need to find that fine balance between both philosophies.

Now on to the actual exam itself.

I found the exam to be extremely fair. The questions were relatable and much less trivial than Cisco exams in years past. Compared to the DevNet Associate, this exam went well beyond knowing how a particular product’s API is structured, basic API interactions, and 5-10 line Python scripts. The questions really put you in the driver’s seat and made you make decisions based on a given scenario. Just like in network engineering, you have to know how the underlying technology works before making any higher-level business decisions. The same applies when you are building out an application. There are many components to a piece of software, so knowing the underlying technologies and which pieces fit a specific use case is critical. Let’s now take a look at one of the likely reasons for my first-attempt failure.

My [temporary] Kryptonite

I use the word Kryptonite, but that’s a little exaggerated. Kryptonite has a permanence to it. I’ve learned from my experience and I’ll be making an effort to ensure that this issue doesn’t cause problems in my next attempt. So what was my Kryptonite in my first attempt? Time management.

Before taking the exam, I watched Knox Hutchinson’s (Data Knox) video on YouTube, where he reviews his exam experiences for the DevNet Associate, ENAUTO, and DEVCOR. I did this before taking the ENAUTO and figured it would be good to watch again before DEVCOR. He nailed it right on the head when he mentioned that the only issue he had was time. Like him, I was about halfway through the exam when I realized I only had about 45 minutes left. For a 2-hour exam, that may not seem too bad, but at the rate I was answering questions, I wouldn’t have finished if I didn’t speed things up. I ended up quickly answering 15-20 questions in a row and realized I had made up the time, which put me in a good position to finish the test (on time). After receiving my test score, I now see that rushing those 15-20 questions may have been what led to my ultimate failure. In preparing for my second attempt, I’ll be cognizant of which topics took me the longest to answer on my first attempt and try to minimize those knowledge gaps.

So What’s Next?

Failing my first attempt was really a learning lesson for me. I passed the Cisco ENCOR and ENAUTO exams on my first attempts, so this experience taught me a lot about my preparation and, ultimately, that this truly is a journey. You live and learn from your failures. It isn’t about if I pass the exam; it’s about when I pass the exam and obtain the certification. With that mindset, I’m able to look forward and learn from my mistakes and shortcomings. Immediately following my failed first attempt, I rescheduled the exam for two weeks later. My score was close enough that I believe two weeks will be enough time to close the gaps and be prepared for a second attempt. In about a week, I’ll do a self-assessment to see where I’m at and whether I need to push it out another week, but that’s later down the line.

I wanted to write this post as an appreciation for all the support I received on Twitter and to help elaborate on the details of my exam experience. As always, if you have any questions or would like to talk, hit me up on Twitter (@devnetdan). Thanks for reading!

DevNet Pro Journey: DEVCOR Weeks 5 and 6

Hello again! I hope you all had a great Valentine’s Day weekend! As mentioned in my tweet last week, I skipped last week to finish out my first pass of the blueprint. The topics we will be looking at this week will focus on most of section 4.0 – Application Deployment and Security and all of section 5.0 – Infrastructure and Automation. This closes out the remaining DEVCOR exam topics, so stay tuned for my complete DEVCOR exam blueprint impressions later in this post.

Introduction

Thank you for being patient with my posting schedule this past month. These exam topics have been extremely tough. In particular, the remaining topics in section 4.0 revolved around security, and I don’t mean encryption/hashing methods, routing protocol authentication, or how to configure IPSec tunnels. These security topics involved how an application is deployed and is accessed by the end user. If you went through college/university, you most likely had at least one IT security course that mentioned topics around OWASP threats (cross-site scripting (XSS), CSRF, SQL injection). If so, crack open that old textbook because that is exam topic 4.10 – Implement mitigation strategies for OWASP threats. Along with that example, there are specific topics for configuring SSL certificates (4.9), implementing a logging strategy for an application (4.6), and explaining privacy concerns (4.7). These topics really shift your mindset towards the “Dev” part of “DevNet” (sorry, I had to). Now let’s jump into the topics!

Week 5 and 6 Topics

There are many topics to cover. Here’s the entire list of topics we will be covering this week, straight from the blueprint:

DEVCOR Exam Topics (cisco.com)

I will not be able to dive into each of these topics, but I will talk about them within general categories. For example, topics 4.3 – 4.4 relate back to my week 4 topics on application development. Topics 4.5 – 4.11 all surround application security. All of section 5 (5.1 – 5.5) involves infrastructure automation using telemetry, automation platforms (Ansible/Puppet), and building apps ON the network devices using IOx. Now that we’ve grouped each of these topics, let’s take a look at each category.

The application development topics in section 4 were very interesting. I learned how to develop an application using Docker/Kubernetes and deploy it using a CI/CD pipeline. After learning each of these topics, you start to understand at a high level how companies stay agile in their code development, while also maintaining proper checks and balances using CI/CD. I don’t know how others felt while reviewing these topics, but before learning about these technologies, I never saw an application as more than one large monolithic app. I never really considered all the microservices that make up a large website such as Amazon, eBay, and many others. Docker and Kubernetes allow developers to manage all aspects of their dev environment without needing to provision multiple VMs, which ultimately saves time. They also allow developers to modularize their app so that devs can work on different aspects of the same app (i.e. login, shopping cart, etc.) at the same time. This allows for more updates and innovation for each of these individual components, since a group of developers can be dedicated to that one component. To wrap it all up, I dove into the details of CI/CD. If you’ve read my other blog posts, you would’ve seen that I have some experience with CI/CD in GitLab for managing Cisco network devices in a virtual environment (EVE-NG) on Google Cloud Platform (GCP). I do not proclaim myself a CI/CD expert or even a novice, but I would say I understand the importance of CI/CD and some of the steps that make up a pipeline (build, test, deploy). I definitely enjoyed studying these topics and will be reviewing them again on my second passthrough of the blueprint.

Application security was by far the toughest set of topics to get through during my studies. Full disclosure: I skimmed over these topics compared to the others. I purposely did this because I didn’t want to get caught up or discouraged while reviewing them, causing me to abandon my studies altogether. My goal was to ‘check the box’ with these topics on my first passthrough and then hit them HARD on my second passthrough. I plan on dedicating multiple days to each of these topics on my second pass. I can openly say that these will be my weak points on this exam. A note to everyone reading: it’s important to identify your weaknesses when studying for a cert (and really anything in life) and know that you will have to focus more time on those topics than others. It sucks because these topics won’t be as enjoyable as the others, but you must do it. Even though these topics are my weakness, I did find them very interesting. In particular, the tenets of the “12-factor app” really help open your eyes to the best practices of developing a secure app. This exam topic helps you elevate the existing scripts you may have created when studying for the DevNet Associate. You should no longer be hardcoding credentials in your scripts; instead, look at using environment variables. I’m glad the DEVCOR exam reviews this topic because I remember googling best practices for securely storing credentials. I had to look it up because I was using my scripts at my job to perform simple tasks, such as gathering CDP neighbor details, and I didn’t want to store my TACACS credentials in plaintext within my script. This topic is just another example of how the DevNet Professional cert prepares you for a career beyond developing simple, one-off scripts and pushes you toward implementing the best practices of app development.
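As a minimal sketch of that practice (the variable names here are my own, not from the exam or any specific tool), credentials can be read from environment variables instead of living in the script itself:

```python
import os

def get_device_credentials():
    """Return (username, password) read from environment variables
    instead of hardcoding them in the script (12-factor style)."""
    username = os.environ.get("NET_USERNAME")
    password = os.environ.get("NET_PASSWORD")
    if not username or not password:
        raise RuntimeError(
            "Set NET_USERNAME and NET_PASSWORD before running this script"
        )
    return username, password
```

You would then `export NET_USERNAME=...` in your shell (or pull the values from a CI/CD secret store) before running the script, keeping credentials out of version control entirely.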

The last category of topics I reviewed centered on infrastructure automation. All the topics in this category make up section 5.0 – Infrastructure and Automation. This section felt more like home because I could relate to all the topics in this section. Many of the topics were high-level, such as explaining model-driven telemetry (MDT). MDT is made up of many components including RESTCONF/NETCONF, YANG data models, gRPC, and many other protocols that generate and transport the data to your monitoring server. So in order to explain MDT, you have to know the details of the underlying components. Other topics in this section include constructing workflows using Ansible and Puppet, identifying a configuration management solution based on certain requirements, and describing how to deploy an application on a network device using IOx. Let’s take a quick look at each of these other topics. Building a workflow using Ansible and Puppet was pretty natural for me. I’ve created many Ansible playbooks and understand the overall workflow using variables, roles, plays, and tasks. I have minimal experience using Puppet, but the DSL of Puppet Bolt was pretty straightforward and felt a lot like JSON. If you have not had any experience using Puppet or Bolt, I encourage you to take a look. Creating a simple manifest file (Puppet’s version of a playbook) can take less than 5 lines of code using certain modules. The last topic on the blueprint, 5.5 – Describe how to host an application on a network device (including Catalyst 9000 and Cisco IOx-enabled devices), is probably the most interesting topic in section 5. I think the idea of deploying an application (as a Docker container) on a network device is next-level. However, I’m struggling to understand the sustainability of this deployment model. I understand why you may deploy an app directly on a network device, whether for security, data processing, or even troubleshooting purposes. The issue I have is the maintainability of these applications and the operational responsibility. I’ve read from multiple sources that DNA Center is the obvious answer to maintaining the apps dispersed across all the Cisco Cat 9k devices in your environment. However, one big consideration is vendor lock-in. With the decision to use DNA Center as your automation platform of the future, you must consider the licensing and other commitments you are making to Cisco by integrating DNA Center. There is no wrong answer; it’s just a fact of committing to any tool or software. I personally find DNA Center to be a great platform for centralizing your automation efforts, as long as you are in an all-Cisco environment or plan to refresh your existing infrastructure with Cisco Catalyst 9k switches. It has a very robust API and many built-in integrations with tools such as ServiceNow and Infoblox. The other question mark I have for deploying apps on a network device is the troubleshooting and triage effort when things go south. Who’s ultimately responsible for the uptime of the app? What happens when the application goes down? What if the network team needs to upgrade the IOS-XE software on the switch – can the app take downtime? What if the switch loses power for a long period of time due to an outage? I know the answers to these questions will vary from organization to organization, but it brings up some considerations when creating a proper operational procedure for maintaining these apps. Overall, I really enjoyed all the topics in section 5 of the blueprint.

My Week 5 and 6 Impressions

I put a lot of my thoughts inline with each topic throughout the post, but let me highlight my impressions on each category of topics mentioned here. The application development topics, specifically CI/CD, were enjoyable to review. I already have some experience creating a CI/CD pipeline in GitLab, so studying this topic helped formalize my experience. The application security topics found in section 4 are going to be very tough for me. I know I will need to spend at least a week just on these topics. The last category of topics surrounds infrastructure automation. Like the app development category, the topics in this category came more naturally, as I have some experience with MDT, configuring devices using RESTCONF, and tools such as Ansible. I do find the last topic in this section, deploying an app on a network device, interesting and hope to find more use cases for this technology. As I always say, you should understand the WHY before figuring out the HOW.

DEVCOR Exam Blueprint Impressions

Now that I’ve completed my first passthrough of the entire DEVCOR exam blueprint, I wanted to provide my impression of the blueprint in its totality. As a whole, I find this blueprint to be very challenging. I think this exam really creates the foundation for becoming a developer. Like I’ve mentioned in a previous post, I found myself in a developer’s mindset throughout my studies. I wasn’t focused on specific networking concepts or technologies like in previous Cisco exams. I didn’t need to know the different timers of EIGRP or OSPF. I didn’t need to understand BGP. This exam really puts app development front and center, with networking as the domain you’re looking to automate using the best practices of app development. I look forward to my second passthrough of the blueprint and know that I’ll learn something new while reviewing these topics again.

Conclusion

Sorry for the long post this week, but we had a lot of exam topics to cover. With this post completing my high-level review and impressions of the DEVCOR exam blueprint, look forward to smaller posts in the future that will be more technical and focused on specific DEVCOR exam topics. Beginning this week, I will be going through the entire blueprint a second time, but slower and with more focus. I will be using Cisco’s Digital Learning on-demand DEVCOR course as a study guide (check it out here). I’ve heard great things about it, so I figured it would be a great investment. I will most likely include reviews of the course in my future posts. Thank you for reading through this mini-series of my exam blueprint impressions throughout my DEVCOR journey. I hope you’ll stay tuned for future posts as the journey continues. If you have any questions/feedback, please hit me up on Twitter (@devnetdan)!

DevNet Pro Journey: DEVCOR Week 4

Hello, it’s been a couple of weeks! I know this isn’t technically week 4 since I didn’t post last week, but I’m calling it that for consistency’s sake. As mentioned in my tweet last weekend, I was busy diving into YANG models and model-driven telemetry (MDT) using the TIG stack (we will get more into that later), so I wanted to wait until I had a little more content to post. Today’s post will mostly revolve around MDT, CI/CD workflows, and my thoughts on deploying an application using Docker and Kubernetes.

Introduction

This week will be a little different than others. In the past, I reviewed entire sections (i.e. 1.1 – 1.x). My method of studying is to review each topic at a high level with a video series, then dive into labbing each topic. The topics outlined in DEVCOR exam section 3.8 and most of section 4 are a little more involved, so it may take a few weeks to get through them. Every topic has multiple components and requires some additional understanding. For example, exam topic 3.8 Describe steps to build a custom dashboard to present data collected from Cisco APIs requires you to understand a few concepts: NETCONF/RESTCONF, YANG data models, and a technology stack to ingest the data and produce a dashboard (notably the TIG or ELK stack). As you can see, covering exam topic 3.8 is more involved than memorizing steps 1..2..3 for building a custom dashboard. With that being said, let’s jump into the topics!
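To make that concrete, here is a hedged sketch of the data-collection piece of topic 3.8: pulling YANG-modeled operational data from an IOS-XE device over RESTCONF. The host, YANG path, and helper names are my own illustration, not anything prescribed by the blueprint, and `verify=False` is only appropriate for lab gear with self-signed certificates:

```python
import requests

RESTCONF_HEADERS = {
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}

def restconf_url(host, yang_path):
    """Build a RESTCONF URL for a YANG-modeled resource, e.g.
    yang_path='Cisco-IOS-XE-interfaces-oper:interfaces'."""
    return f"https://{host}/restconf/data/{yang_path}"

def get_oper_data(host, yang_path, username, password):
    # RESTCONF returns the YANG-modeled data as JSON, ready to be
    # reshaped and shipped into a time-series database.
    resp = requests.get(
        restconf_url(host, yang_path),
        headers=RESTCONF_HEADERS,
        auth=(username, password),
        verify=False,  # lab devices only; use real certs in production
    )
    resp.raise_for_status()
    return resp.json()
```

A dashboard pipeline would call something like `get_oper_data()` on a schedule (or, better, use true push-based telemetry via Telegraf) and write the results into the database behind the dashboard.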

Week 4 Topics

Here are the topics covered this week:

DEVCOR Exam Topics (cisco.com)

These were three monster topics. For topic 3.8, I spent two nights just reviewing the structure of YANG data models on IOS-XE devices, and that’s only one piece of the puzzle. The major mystery for me was deploying the appropriate software stack (TIG) to collect the telemetry data from the IOS-XE devices and produce clean dashboards. However, I was very surprised at how easy it was to deploy a TIG stack (Telegraf, InfluxDB, and Grafana) using Docker. With Docker, you can deploy the entire stack, with each component communicating with the others, using a docker-compose.yml file. Before moving on, here’s a quick summary of the TIG stack and each component:

  • Telegraf – used to collect data and metrics from the IOS-XE devices. Think of this as the entry point for the data being sent.
  • InfluxDB – time-series database to store collected data. Time-series databases are helpful in our case since we want to see metrics over a given time period. Other database types (relational, for example) aren’t optimized for that kind of time-stamped querying.
  • Grafana – pulls data from InfluxDB and creates the fancy dashboards

There are many examples on the web for building the TIG stack using Docker, so don’t worry about writing your own (unless you want the practice!). I personally used Knox Hutchinson’s example, since I was following his tutorial on CBT Nuggets. Here’s a link to his Github Code Samples repo.

After tackling MDT and deploying a TIG stack using Docker, I started reviewing section 4 topics beginning with 4.1 Diagnose a CI/CD pipeline failure (such as missing dependency, incompatible versions of components, and failed tests). I felt comfortable reviewing this topic, as I have experience (see previous blog posts) with building CI/CD pipelines with Ansible on GitLab. However, CBT Nuggets covered deploying a CI/CD pipeline using Jenkins, which I do not have any previous experience with. After learning more about Jenkins and creating a Jenkinsfile, I found it to be very similar to GitHub Actions and CI/CD with GitLab. Unless I had a specific use case for Jenkins, I think I would rather use the integrated CI/CD workflows wherever my code was hosted (GitHub or GitLab). However, in an enterprise environment, I could see the use case for Jenkins being the central piece of software for managing many CI/CD workflows.

I’ll admit that I’m not completely finished with the third and final topic: 4.2 Integrate an application into a prebuilt CD environment leveraging Docker and Kubernetes. I do not have any previous experience with Kubernetes or integrating an application into a CD environment, so this one is taking me a bit longer. I would like to take this moment to demystify a misconception that I’ve had about Docker and Kubernetes. Docker and Kubernetes are not necessarily competing container technologies. For example, I used to think that there were Docker containers and Kubernetes containers. I believed you had to choose which type of container you wanted to deploy. After reviewing this topic, I now understand that Kubernetes helps manage containers and their workloads. Docker has its own container management software called Docker Swarm, which rivals Kubernetes, but there aren’t specific Docker containers and Kubernetes containers. Kubernetes will deploy and manage containers within a Kubernetes cluster. While they’re deployed in a cluster, Kubernetes will monitor the workload of each container and move them between physical servers or scale them as needed. On top of managing their workloads, Kubernetes makes each workload portable. You can move workloads from your local machine to production without having to reboot the entire application. This greater mobility allows for faster integration and deployment cycles (i.e. CI/CD workflows). As you can see, I’m pretty pumped about reviewing this topic more in-depth, so I’ll report back next week with my additional findings.

My Week 4 Impressions

Even though this week only consisted of three topics, I still feel like I need to go back and review each one again. There was so much depth to each topic and, honestly, a lot of personal interest. For MDT, I still feel like there are so many more details I need to cover, including diving deeper into each component of the TIG stack. I have some past experience with Grafana, mostly because it’s the component that makes the nice-looking dashboards, but I would personally like to dive more into the database component (InfluxDB). During my initial labbing, I discovered the endless possibilities of data that can be collected using YANG models on IOS-XE devices. The YANG models really unlock so many datapoints that you may not find using traditional SNMP monitoring.

The two topics I covered in section 4 switched my mindset a bit and allowed me to recognize the thought and processes you should have behind deploying an application. Don’t get me wrong, these topics alone won’t teach you everything you need to know about deploying an application, but coming from a network engineering background, they’re very eye-opening. The CI/CD workflows covered in 4.1 are very relevant to how I see network engineers manipulating network device configuration in the future. Having a single source of truth for all device configurations and having built-in testing/deployment mechanisms with CI/CD workflows will be key to scaling and modernizing networks as companies go through digital transformations in the future.

Conclusion

Thanks for reading my DEVCOR study review this week. There weren’t as many topics this week, but each one was very heavy. Stay tuned for my next post, as I cover more exam topics in section 4.0 Application Deployment and Security. As always, if you have any feedback or questions, you can find me on Twitter @devnetdan.

DevNet Pro Journey: DEVCOR Week 3

Hello again! This week my main focus in my DEVCOR studies revolved around topics in exam section 3.0 Cisco Platforms. My disclaimer up front is that I didn’t spend as much time as I would have liked on studying this week, so I only completed a little over half of the outlined topics.

Introduction

As mentioned in my previous post, I was going into this week with the idea that studying would be pretty straightforward – learn the different Cisco product APIs. However, I quickly learned that I had to learn more than just the APIs themselves. Coming up, I’ll highlight the topics I reviewed and my impressions on the material.

Week 3 Topics

As always, I’m providing a screenshot directly from the exam blueprint of the topics covered.

Taken from https://learningnetwork.cisco.com/s/devcor-exam-topics

As you can see, there are many more topics in section 3.0 vs last week’s section (2.0). Despite having more topics in this section, I thought this was going to be easier than the past two sections. Section 3.0 topics revolve around Cisco products and their associated APIs. I figured this section would take me back to my DevNet Associate and ENAUTO study sessions, where I focused mostly on understanding the product’s API – I was wrong.

To describe my study issues this week, I found relevance with this tweet from Hank Preston.

“You can’t automate what you don’t understand…”. I found that the biggest issue I had wasn’t parsing through the JSON/RPC+XML responses, but rather actually understanding each product and the problems it solves. Technical understanding aside, I didn’t work with these products day-to-day, so learning how to interact with their APIs didn’t do me any good without that product knowledge. As a result, much of my time this week was spent going over the product white papers and other documentation – not sending API requests. That’s why I wasn’t able to cover every topic in this section (like I thought I was going to).

I needed to first read up and understand each product, THEN learn how to interact and automate certain workflows. Don’t get me wrong – these products aren’t foreign to me. I had a rough idea about each product going into each study session. What I tried learning was how each product played a role in an enterprise environment. For example, why use FDM vs FMC to manage your devices running FTD (yes, I purposely used all the acronyms I could 🙂 ). Understanding each product and what purpose they serve in an enterprise infrastructure will help you better learn WHY you would want to interact and automate certain workflows using their APIs.

My Week 3 Impressions

I think I pretty much summed up my impressions of this week, but as it relates to the actual topics covered, I was able to review topics 3.1 – 3.5. I was familiar with topics 3.1 and 3.3 (Webex ChatOps and Meraki APIs), as I have experience using both products and their APIs. On top of that, both APIs are well-documented and easy to pick up (in my opinion). The other topics (3.2, 3.4, 3.5) were a bit tougher to pick up since I haven’t had as much experience with the products outlined in each topic (FDM, Intersight, and UCS Manager).

The main lesson I learned was that understanding a product’s functionality and role within an enterprise environment, versus jumping right into the technical docs, will better prepare you when studying for an exam.

Conclusion

I know this post was shorter than past weeks, but expect these weekly updates to vary depending on the topics or lessons learned from that week. Next week, I’m aiming to have the rest of section 3 and about half of section 4 completed. Section 4 (4.0 Application Deployment and Security) has a total of 11 topics, many being new to me, so I expect section 4 to take a couple weeks to review.

Please comment or hit me up on Twitter (@devnetdan) if you have any questions or feedback. Thanks for reading!

DevNet Pro Journey: DEVCOR Week 2

This week I’m continuing my DevNet Professional journey and discussing the DEVCOR exam topics I reviewed this past week: 2.0 Using APIs.

Introduction

The topics in this section really make you dive in and think about how users access your application, whereas the topics covered in 1.0 Software Development and Design mostly cover the architecture of applications. To many network engineers, these topics may seem overwhelming and almost unrelated to your current job of designing and building networks… because they are… However, the whole point of the DevNet certification track is to show how network engineering and monitoring can change in the future through automation and software development practices. You must keep your mind open to these new software development concepts.

The surprising thing I found is that you can learn these DevNet topics the same way you did when you studied for the CCNA. I remember studying for my CCNA a little over 3 years ago and thinking how overwhelming the information was about each networking topic. From learning about MAC addresses and layer 2 operations (STP) to layer 3 and routing protocols (EIGRP, OSPF, BGP), I remember thinking, “How does anyone learn all of this and remember each detail?”. Over time, I’ve learned that network engineers do not remember each and every detail, but know where to look and have the intuition to troubleshoot an issue. You can take this same approach when studying the DEVCOR topics. Yes, for the exam, you will be expected to recall details about each topic. However, the goal of the exam is to allow you to understand the intricacies of an application’s architecture and have the ability to make proper decisions when building or troubleshooting an application in the future. With that being said, think back to your CCNA days: that exam gave you the ability to identify the intricacies of each layer in the networking stack (OSI layers 1-4) and to make proper decisions when building networks in the future.

Week 2 Topics

For reference, here’s a screenshot from Cisco’s website of the topics covered under 2.0 Using APIs:

Taken from https://learningnetwork.cisco.com/s/devcor-exam-topics

As you can see, these topics pivot away from consuming an API and focus more on constructing an API. There is a section (3.0 Cisco Platforms) that focuses more on consuming Cisco APIs, but I’ll get more into that next week. These topics felt like a stepping stone from the DevNet Associate and ENAUTO API topics. In those exams, you learned how to construct an API request and use Postman or a small Python script to make the request. DEVCOR builds on that by introducing a new authorization method (OAuth2) and proper error handling for REST API requests/responses (HTTP error code 429 and other control-flow techniques). These topics take your “one-off” scripts and teach you how they may be handled when integrated into a larger application. For example, you can’t feed data into a function or method and just expect the output to be clean. When first starting out, the “blast radius” is somewhat small. You make one API request and receive an expected response. Once you begin building out a more complete application, you’ll need to add in error handling so that your application doesn’t crash.
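For instance, here is a minimal, hedged sketch of handling an HTTP 429 (rate-limit) response with a retry loop. The helper name and retry policy are my own; the only behavior taken from the HTTP spec is honoring the Retry-After header when the server sends one:

```python
import time

def request_with_backoff(send, max_retries=5):
    """Call send() (any function returning a response object with
    .status_code and .headers) and retry on HTTP 429, honoring the
    Retry-After header when the server provides one."""
    for attempt in range(max_retries):
        resp = send()
        if resp.status_code != 429:
            return resp
        # Fall back to exponential backoff if Retry-After is absent
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("Rate limited: retries exhausted")
```

In a real script, `send` might be something like `lambda: requests.get(url, headers=headers)`; wrapping calls this way keeps the rate-limit logic in one place instead of scattered across every request.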

Along with error handling, these topics also cover REST API operations, such as pagination and HTTP caching. Pagination is important because it controls the number of results that are returned, which leads to a better user experience. For example, no one likes to view websites that keep going on and on and on. Personally, if I see a tiny vertical scroll bar, I just hit CTRL+F and hope I find the information I’m looking for by using a keyword. HTTP caching is interesting and can enhance your API’s performance, making it respond more quickly, which also leads to a better user experience. As you can see, all of these topics can be related back to enhancing the user experience.
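As a small sketch of what consuming a paginated API can look like (the "items"/"next" keys are a common but hypothetical response shape; real APIs name these differently, or use a Link header instead), a generator can walk every page so the caller never worries about page boundaries:

```python
def iter_pages(fetch, first_url):
    """Yield items from every page of a paginated API.
    fetch(url) is assumed to return a dict shaped like
    {"items": [...], "next": "<url or None>"} -- adjust the
    keys to match the API you're actually consuming."""
    url = first_url
    while url:
        page = fetch(url)
        yield from page["items"]
        url = page.get("next")  # None ends the loop
```

The same pattern works whether `fetch` is a thin wrapper around `requests.get(...).json()` or a test stub, which also makes the pagination logic easy to unit-test.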

My Week 2 Impressions

I definitely felt different while studying this week. At some points, I had to remind myself that this was not a traditional Cisco networking exam. This was the first week that I felt like I was in the software development world. Learning about APIs and the different components to consider when building or consuming one pulled me away from traditional networking. Besides bandwidth and delay considerations for the API’s user experience, I didn’t consider any other network-related topics. I didn’t think about routing convergence, spanning tree, or any other related technologies/protocols. I’m not discounting their importance – I just didn’t need to apply them in my studies, which felt odd when you realize you’re studying for a Cisco exam. Understanding the different components considered when constructing a RESTful API has been challenging, but exciting. I’m really beginning to understand API concepts from a developer’s point of view, instead of only from the consumer perspective.

Conclusion

These posts may be shorter than my normal posts, but I wanted to convey my experiences and impressions without rambling too much. Next week, I’ll be covering the exam topics found under 3.0 Cisco Platforms. This section of topics should be interesting because they relate back to the APIs of Cisco products, which I learned about when studying for the DevNet Associate and ENAUTO exams. As always, if you have any questions/feedback, please hit me up on Twitter (@devnetdan).

Thanks for reading and stay tuned for next week’s post!