DevNet Pro Journey: DEVCOR Week 4

Hello, it’s been a couple of weeks! I know this isn’t technically week 4 since I didn’t post last week, but I’m calling it that for consistency’s sake. As mentioned in my tweet last weekend, I was busy diving into YANG models and model-driven telemetry (MDT) using the TIG stack (we will get more into that later), so I wanted to wait until I had a little more content to post. Today’s post will mostly revolve around MDT, CI/CD workflows, and my thoughts on deploying an application using Docker and Kubernetes.


This week will be a little different than others. In the past, I reviewed entire sections (i.e. 1.1 – 1.x). My method of studying is to review each topic at a high level with a video series, then dive into labbing each topic. The topics outlined in DEVCOR exam section 3.8 and most of section 4 are a little more involved, so it may take a few weeks to get through these sections. Every topic has multiple components and requires some additional understanding. For example, exam topic 3.8, Describe steps to build a custom dashboard to present data collected from Cisco APIs, requires you to understand a few concepts: NETCONF/RESTCONF, YANG data models, and a technology stack to ingest the data and produce a dashboard (notably the TIG or ELK stack). As you can see, covering exam topic 3.8 is more involved than memorizing steps 1-2-3 for building a custom dashboard. With that said, let’s jump into the topics!

Week 4 Topics

Here are the topics covered this week:

DEVCOR Exam Topics

These were three monster topics. For topic 3.8, I spent two nights just reviewing the structure of YANG data models on IOS-XE devices, and that’s only one piece of the puzzle. The major mystery for me was deploying the appropriate software stack (TIG) to collect the telemetry data from the IOS-XE devices and produce clean dashboards. However, I was very surprised how easy it was to deploy a TIG stack (Telegraf, InfluxDB, and Grafana) using Docker. With Docker, you can deploy the entire stack, with each component communicating with the others, using a docker-compose.yml file. Before moving on, here’s a quick summary of the TIG stack and each component:

  • Telegraf – collects data and metrics from the IOS-XE devices. Think of this as the entry point for the data being sent.
  • InfluxDB – a time-series database that stores the collected data. Time-series databases are helpful in our case since we want to see metrics over a given time period. Other database types, such as traditional relational databases, aren’t optimized for storing and querying time-stamped metrics.
  • Grafana – pulls data from InfluxDB and creates the fancy dashboards.
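To make the stack a little more concrete, here’s a minimal sketch of what a docker-compose.yml for these three components might look like. This is not the exact file from any tutorial; the port numbers, image tags, and database name are assumptions for illustration.

```yaml
version: "3"
services:
  telegraf:
    image: telegraf                # official Telegraf image
    volumes:
      # telegraf.conf defines the telemetry input and the InfluxDB output
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    ports:
      - "57000:57000"              # example gRPC port the IOS-XE devices dial out to
    depends_on:
      - influxdb
  influxdb:
    image: influxdb:1.8            # time-series database for the collected metrics
    environment:
      - INFLUXDB_DB=telemetry      # hypothetical database name
    volumes:
      - influxdb-data:/var/lib/influxdb
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"                # Grafana web UI
    depends_on:
      - influxdb
volumes:
  influxdb-data:
```

With a file like this, a single `docker compose up -d` brings up all three containers on a shared network, which is why the stack felt so easy to deploy.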

There are many examples on the web for building the TIG stack using Docker, so don’t worry about writing your own (unless you want the practice!). I personally used Knox Hutchinson’s example, since I was following his tutorial on CBT Nuggets. Here’s a link to his GitHub Code Samples repo.

After tackling MDT and deploying a TIG stack using Docker, I started reviewing section 4 topics, beginning with 4.1 Diagnose a CI/CD pipeline failure (such as missing dependency, incompatible versions of components, and failed tests). I felt comfortable reviewing this topic, as I have experience (see previous blog posts) building CI/CD pipelines with Ansible on GitLab. However, CBT Nuggets covered deploying a CI/CD pipeline using Jenkins, which I had no previous experience with. After learning more about Jenkins and creating a Jenkinsfile, I found it to be very similar to GitHub Actions and CI/CD with GitLab. Unless I had a specific use case for Jenkins, I think I would rather use the integrated CI/CD workflows wherever my code is hosted (GitHub or GitLab). However, in an enterprise environment, I could see Jenkins being the central piece of software for managing many CI/CD workflows.
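As a rough illustration of where the failures named in topic 4.1 would surface, here’s a minimal declarative Jenkinsfile sketch. The shell commands and stage names are hypothetical, not from any specific course or repo.

```groovy
pipeline {
    agent any
    stages {
        stage('Install dependencies') {
            steps {
                // A missing dependency or an incompatible pinned version
                // fails the pipeline in this stage
                sh 'pip install -r requirements.txt'
            }
        }
        stage('Test') {
            steps {
                // Failed unit tests stop the pipeline here,
                // before anything reaches deployment
                sh 'pytest'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'   // hypothetical deployment script
            }
        }
    }
}
```

Diagnosing a failure largely comes down to reading the console output for the first stage that went red, which maps directly onto the failure categories the exam topic lists.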

I’ll admit that I’m not completely finished with the third and final topic: 4.2 Integrate an application into a prebuilt CD environment leveraging Docker and Kubernetes. I do not have any previous experience with Kubernetes or integrating an application into a CD environment, so this one is taking me a bit longer. I would like to take this moment to clear up a misconception that I’ve had about Docker and Kubernetes: they are not necessarily competing container technologies. I used to think that there were Docker containers and Kubernetes containers, and that you had to choose which type of container to deploy. After reviewing this topic, I now understand that Kubernetes helps manage containers and their workloads. Docker has its own container orchestration software called Docker Swarm, which rivals Kubernetes, but there aren’t separate Docker containers and Kubernetes containers. Kubernetes deploys and manages containers within a Kubernetes cluster. While the containers run in a cluster, Kubernetes monitors the workload of each one and moves them between physical servers or scales them as needed. On top of managing their workload, Kubernetes makes each workload portable. You can move workloads from your local machine to production without having to reboot the entire application. That portability allows for faster integration and deployment cycles (i.e. CI/CD workflows). As you can see, I’m pretty pumped about reviewing this topic more in-depth, so I’ll report back next week with my additional findings.
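The point that Kubernetes manages ordinary containers (rather than some special container type) shows up clearly in a Deployment manifest. Here’s a minimal sketch; the app name, image tag, and port are made up for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app            # hypothetical application name
spec:
  replicas: 3                 # Kubernetes keeps three copies running,
                              # rescheduling or scaling them as needed
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: sample-app:1.0 # an ordinary container image, the same one
                              # you could run with plain `docker run`
        ports:
        - containerPort: 8080
```

The `image` field references the same kind of container image Docker builds; Kubernetes just handles the scheduling, health, and scaling around it.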

My Week 4 Impressions

Even though this week only consisted of three topics, I still feel like I need to go back and review each one again. There was so much depth to each topic and, honestly, a lot of personal interest. For MDT, I still feel like there are so many more details I need to cover, including diving deeper into each component of the TIG stack. I have some past experience with Grafana, mostly because it’s the component that makes the nice-looking dashboards, but I would personally like to dive more into the database component (InfluxDB). During my initial labbing, I found endless possibilities for the data that can be collected using YANG models on IOS-XE devices. The YANG models unlock so many data points that you may not find using traditional SNMP monitoring.

The two topics I covered in section 4 shifted my mindset a bit and helped me recognize the thought and processes that should go into deploying an application. Don’t get me wrong, these topics alone won’t teach you everything you need to know about deploying an application, but coming from a network engineering background, they’re very eye-opening. The CI/CD workflows covered in 4.1 are very relevant to how I see network engineers managing network device configuration in the future. Having a single source of truth for all device configurations, along with built-in testing and deployment mechanisms in CI/CD workflows, will be key to scaling and modernizing networks as companies go through digital transformations.


Thanks for reading my DEVCOR study review this week. There weren’t as many topics this week, but each one was very heavy. Stay tuned for my next post, as I cover more exam topics in section 4.0 Application Deployment and Security. As always, if you have any feedback or questions, you can find me on Twitter @devnetdan.
