Before we dive into the content, let’s take a step back to appreciate where we are in building test network environments. It all started with physical lab environments, where we had to purchase hardware, physical space, power, cables, and the list goes on… After physical labbing became cumbersome and costly, we (network engineers) began looking at virtualized solutions (GNS3, EVE-NG, CML). Beyond the cost savings and quicker turnaround, virtualized lab environments brought network testing labs to the personal computers of many network engineers. Obviously, these virtualized labs still require a good amount of hardware resources, but the point is that anyone can invest in a powerful computer or server to emulate an entire network topology.
Virtualized labs have now become a very popular simulation tool for many engineers. That’s not to say physical labs have gone with the wind (there are many scenarios where actual hardware is required for specific test conditions), but they have become less popular, as virtualized labs have empowered individual network engineers to get lab environments up and running more quickly. So how could this situation get any better? Enter containerized labs… more specifically, containerlab.
So what is containerlab, you may ask? I’d known about it for a while, but it wasn’t until my Twitter pal Julio (@juliopdx) brought it to my attention that I gave it a serious look. He was able to spin up a network topology in a matter of minutes using the power of Docker and containerlab! Beyond the speed, I was also impressed by the simplicity of defining a network topology in a YAML document. My wheels really started spinning when I realized that this topology could be checked into version control (git) and integrated into a CI/CD pipeline/process. So on top of creating automated testing scripts (via pyATS, Batfish, Suzieq), you can also control and configure the network topology as code! That’s pretty cool stuff!
In this post, we are going to install containerlab, create a topology with two Cisco CSR devices (running IOS-XE 17.3.2), apply some basic configuration, and destroy the lab. Here’s a high-level, step-by-step breakdown of what we are going to accomplish (only 10 steps!):
- Install containerlab (on Ubuntu 20.04 VM)
- Clone the forked version of vrnetlab
- Copy the .qcow2 image of the desired device (Cisco CSR running IOS-XE 17.3.2 – requires proper licensing/rights!)
- Create the Docker image for the CSR
- Create a topology file
- Deploy the lab
- SSH to each device and assign IP addresses to the interfaces
- Configure BGP and confirm neighbor adjacency
- Destroy the lab
I know this has been a long overview, but I’m really excited to share the potential future of virtualized network simulation. Now let’s dive into the content!
There are multiple ways to install containerlab: an install script, package managers (Linux), WSL, MacOS, and even as a container! For specific installation instructions, check out this link. I personally used the install script method using curl:
# download and install the latest release (may require sudo)
bash -c "$(curl -sL https://get-clab.srlinux.dev)"
After installation, you’ll be greeted with a nice text banner. To verify the installation was a success, you can check the current version of containerlab using the following CLI command:
~$ containerlab version
(containerlab ASCII art banner)
version: 0.21.0
 commit: e89eeb9
   date: 2021-12-02T19:11:25Z
 source: https://github.com/srl-labs/containerlab
 rel. notes: https://containerlab.srlinux.dev/rn/0.21/
After you verify containerlab is installed, we need to figure out how to get a Cisco CSR 1000v in our topology. As you may or may not know, a Cisco CSR 1000v does not have a native container image, only a VM (OVA) or .qcow2 format. This causes an issue in our “containerized” lab topology. Luckily, containerlab allows us to use VM-based NOSes (network operating systems) in our topologies with other Docker containers via an integration with vrnetlab. The vrnetlab library allows us to run VM-based NOS images in a container (link to the GitHub repo here). To allow deeper integration with containerlab, the containerlab developers forked the vrnetlab project and made the necessary changes (forked project here). For more details on vrnetlab, and how it’s integrated into containerlab, check out the project’s documentation here. Since we now understand how the Cisco CSR 1000v is being integrated into containerlab, let’s jump into setting up the image.
Containerizing a Cisco CSR 1000v
In order to set up a Cisco CSR 1000v in containerlab, I followed the documentation found here. It’s very well written, so I don’t want to rewrite all the instructions, but at a high level, we need to perform the following actions:
- Clone the forked vrnetlab repository from GitHub.
- Copy the Cisco CSR’s .qcow2 image to the proper folder in the repository (in our case, we are copying to the ‘csr’ folder).
- Make the Docker image (details are found in the respective device folder’s README file).
So at the end of all of this, we have a Docker image for the Cisco CSR 1000v. If you followed the instructions, you should be able to issue the sudo docker images command and see this (your image may have a different tag if you’re using a different IOS-XE version):
~$ docker images
REPOSITORY        TAG        IMAGE ID       CREATED      SIZE
vrnetlab/vr-csr   17.03.02   4fcaafe7a196   2 days ago   1.86GB
Ta-da! Now we have a Docker image for our Cisco CSR 1000v! Optionally, you can run this image as a Docker container (since it is just a Docker image) by running the following command that’s found in the README documentation:
docker run -d --privileged --name my-csr-router vrnetlab/vr-csr
Personally, I started the image and confirmed the container reached a ‘healthy’ state, which helped me verify the configuration was correct and that the proper resources were available; I then stopped the container since I didn’t have a topology defined yet. Now that we have our image ready, let’s begin creating our first topology file.
Deploying Your First Lab
Much like an inventory file in Ansible or a testbed file in pyATS, the topology file is what helps describe what the network looks like, written in a text format. The topology file has many options and I highly recommend that you review the different components in the documentation (link). For the sake of this demo, here’s the basic topology I defined:
name: firstlab
topology:
  nodes:
    csr-r1:
      kind: vr-csr
      image: vrnetlab/vr-csr:17.03.02
    csr-r2:
      kind: vr-csr
      image: vrnetlab/vr-csr:17.03.02
      env:
        BOOT_DELAY: 30
  links:
    - endpoints: ["csr-r1:eth1", "csr-r2:eth1"]
Ultimately, it’s a simple topology with two CSRs that have a point-to-point connection between them. The nice part about YAML is its readability; I don’t have to go line-by-line to explain the different components. The only components I want to touch on are kind and links.

kind helps define the specific setup and configuration required for each container type. For example, Nokia and Arista containers may require different startup parameters or packages installed within the containers in order for them to operate properly.

The links component is exactly what you would guess: it describes how the topology nodes are connected to one another. In my demo topology, you’ll see I kept it simple by having one point-to-point link between the Eth1 interfaces on each CSR.
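Because the topology file is plain YAML, it’s easy to inspect or validate programmatically before deploying. As a quick illustration (not part of containerlab itself, and assuming PyYAML is installed), here’s a sketch that parses the topology above and lists its nodes and links:

```python
# Illustrative only: parse a containerlab topology file with PyYAML
# (pip install pyyaml). The YAML below mirrors the demo topology.
import yaml

TOPOLOGY = """
name: firstlab
topology:
  nodes:
    csr-r1:
      kind: vr-csr
      image: vrnetlab/vr-csr:17.03.02
    csr-r2:
      kind: vr-csr
      image: vrnetlab/vr-csr:17.03.02
  links:
    - endpoints: ["csr-r1:eth1", "csr-r2:eth1"]
"""

data = yaml.safe_load(TOPOLOGY)
nodes = sorted(data["topology"]["nodes"])
links = [link["endpoints"] for link in data["topology"]["links"]]
print(nodes)   # ['csr-r1', 'csr-r2']
print(links)   # [['csr-r1:eth1', 'csr-r2:eth1']]
```

A check like this could run as a pre-deploy step in the same CI/CD pipeline that version-controls the topology file.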
One of the main reasons I kept this topology small (besides simplicity) was the amount of resources required for each CSR. At a minimum, each CSR requires 1 CPU and 4GB RAM. That adds up! You may be asking at this point: why even go through the trouble of using containers if the devices are going to eat up so many resources? Remember, the CSRs are VM-based NOSes, not native containers. Container-based NOSes, like Nokia SR Linux or Arista cEOS, have a much smaller footprint and require far fewer resources.
The last step is to deploy the lab. Containerlab makes this super simple:
~$ containerlab deploy --topo <topology_file>.clab.yml
Once the lab is ready, you’ll see log messages and a pretty-printed table in the terminal like this one:
dan@dan-ubuntu:~/clabs$ sudo containerlab deploy -t firstlab.clab.yml
[sudo] password for dan:
INFO Parsing & checking topology file: firstlab.clab.yml
INFO Creating lab directory: /home/dan/clabs/clab-firstlab
INFO Creating docker network: Name='clab', IPv4Subnet='172.20.20.0/24', IPv6Subnet='2001:172:20:20::/64', MTU='1500'
INFO Creating container: csr-r2
INFO Creating container: csr-r1
INFO Creating virtual wire: csr-r1:eth1 <--> csr-r2:eth1
INFO Adding containerlab host entries to /etc/hosts file
+---+----------------------+--------------+--------------------------+--------+---------+----------------+----------------------+
| # | Name                 | Container ID | Image                    | Kind   | State   | IPv4 Address   | IPv6 Address         |
+---+----------------------+--------------+--------------------------+--------+---------+----------------+----------------------+
| 1 | clab-firstlab-csr-r1 | be0e264f7308 | vrnetlab/vr-csr:17.03.02 | vr-csr | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
| 2 | clab-firstlab-csr-r2 | 75b3c3e6534c | vrnetlab/vr-csr:17.03.02 | vr-csr | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
+---+----------------------+--------------+--------------------------+--------+---------+----------------+----------------------+
Now that we have our lab deployed, we need to configure our devices!
Configuration and Verification
With all the virtualization occurring (VM-based NOS running inside of a Docker container on an Ubuntu VM), I thought I would definitely have issues trying to connect to the device via SSH in my host terminal. I was wrong! While deploying the topology, containerlab performs a few tasks in the background. It creates a Docker network that’s used as a management network for each device in the topology and it also adds each device and IP address to the hosts file on your host machine, so that you can connect to each device by their hostname or IP (pretty nifty!). There are some more things that go on in the background, which you can find by reading through the Quick Start documentation here. With all that being said, here’s how you can connect to a specific device:
ssh admin@<hostname or IP>
Simple as that! Alternatively, you can also access the bash shell of the Docker container itself using the following Docker command:
docker exec -it <container_name> bash
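To make the naming concrete: containerlab prefixes each node name with clab-&lt;lab-name&gt;-, and those are the names that land in /etc/hosts. Here’s a rough Python sketch of that mapping (the addresses come from the deploy output above; the formatting of the real /etc/hosts entries may differ):

```python
# Illustrative sketch of the hostname-to-IP mapping containerlab
# writes to /etc/hosts (addresses taken from the deploy output above).
lab_name = "firstlab"
nodes = {"csr-r1": "172.20.20.2", "csr-r2": "172.20.20.3"}

# Each node gets a clab-<lab-name>-<node> hostname on the management network.
hosts_entries = [f"{ip}\tclab-{lab_name}-{node}" for node, ip in nodes.items()]
for entry in hosts_entries:
    print(entry)
```

This is why `ssh admin@clab-firstlab-csr-r1` works straight from the host terminal.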
Configuring BGP and Verifying Connectivity
If you’ve made it to this point, well done! We are in the home stretch! All that is left to do now is configure the devices and verify connectivity, since our new “cabling” is a virtual Docker network (as crazy as that sounds!). For my demo, I configured BGP and LLDP to verify L3 and L2 connectivity, respectively. Honestly, I wanted to use an L2 protocol just to see if it would work (I tested CDP first, and it did not). For brevity, here’s a simple configuration I applied to both CSR devices:
!!! csr-r1 !!!
! Interface configuration
interface Gi2
 ip address 10.1.1.1 255.255.255.0
interface Loopback0
 ip address 1.1.1.1 255.255.255.255
! BGP configuration
router bgp 65001
 bgp log-neighbor-changes
 network 1.1.1.1 mask 255.255.255.255
 neighbor 10.1.1.2 remote-as 65002
! LLDP configuration
lldp run
!!! csr-r2 !!!
! Interface configuration
interface Gi2
 ip address 10.1.1.2 255.255.255.0
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
! BGP configuration
router bgp 65002
 bgp log-neighbor-changes
 network 2.2.2.2 mask 255.255.255.255
 neighbor 10.1.1.1 remote-as 65001
! LLDP configuration
lldp run
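The two configurations differ only in a handful of values, which makes them a nice candidate for templating once the topology is treated as code. As a hypothetical sketch (plain Python string formatting; a real workflow might use Jinja2, and the loopback/AS values here are illustrative):

```python
# Hypothetical sketch: render the per-router CLI config from parameters.
# A production workflow would more likely use Jinja2 templates.
TEMPLATE = """interface Gi2
 ip address {intf_ip} 255.255.255.0
interface Loopback0
 ip address {lo_ip} 255.255.255.255
router bgp {asn}
 bgp log-neighbor-changes
 network {lo_ip} mask 255.255.255.255
 neighbor {peer_ip} remote-as {peer_asn}
lldp run"""

routers = {
    "csr-r1": dict(intf_ip="10.1.1.1", lo_ip="1.1.1.1", asn=65001,
                   peer_ip="10.1.1.2", peer_asn=65002),
    "csr-r2": dict(intf_ip="10.1.1.2", lo_ip="2.2.2.2", asn=65002,
                   peer_ip="10.1.1.1", peer_asn=65001),
}

configs = {name: TEMPLATE.format(**params) for name, params in routers.items()}
print(configs["csr-r1"])
```

From there, the rendered configs could be pushed to the lab devices with a library like Netmiko or via Ansible.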
Let’s verify that BGP and LLDP are working as expected.
!!! csr-r1 !!!
! Review assigned IP addresses
csr-r1#sh ip int bri
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       10.0.0.15       YES manual up                    up
GigabitEthernet2       10.1.1.1        YES manual up                    up
Loopback0              1.1.1.1         YES manual up                    up

! Verify BGP neighbor adjacencies
csr-r1#sh ip bgp summary
Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.1.1.2        4        65002      11      12        3    0    0 00:06:08        1

! Verify LLDP neighbors
csr-r1#sh lldp neighbors
Capability codes: (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
                  (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other

Device ID           Local Intf     Hold-time  Capability      Port ID
csr-r2.example.com  Gi2            120        R               Gi2

Total entries displayed: 1

!!! csr-r2 !!!
! Review assigned IP addresses
csr-r2#sh ip int bri
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       10.0.0.15       YES manual up                    up
GigabitEthernet2       10.1.1.2        YES manual up                    up
Loopback0              2.2.2.2         YES manual up                    up

! Verify BGP neighbor adjacencies
csr-r2#sh ip bgp summary
Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.1.1.1        4        65001       6       5        5    0    0 00:01:11        1

! Verify LLDP neighbors
csr-r2#sh lldp nei
Capability codes: (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
                  (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other

Device ID           Local Intf     Hold-time  Capability      Port ID
csr-r1.example.com  Gi2            120        R               Gi2

Total entries displayed: 1
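Earlier I mentioned pairing containerlab with automated testing tools like pyATS; as a toy illustration of that idea, here’s a sketch that scrapes a BGP summary neighbor line like the one above and checks that the session is established (for an Established session, the State/PfxRcd column is a prefix count rather than a state name):

```python
# Toy parser for a 'show ip bgp summary' neighbor line (matching the
# output above). Real tooling (pyATS/Genie) parses this far more robustly.
line = "10.1.1.2        4        65002      11      12        3    0    0 00:06:08        1"

fields = line.split()
neighbor = fields[0]
remote_as = int(fields[2])
state_pfxrcd = fields[-1]

# A numeric final column means the session is Established.
session_up = state_pfxrcd.isdigit()
print(neighbor, remote_as, session_up)  # 10.1.1.2 65002 True
```

Wrapping checks like this in a test suite is where the “topology as code” workflow really pays off.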
Everything is working – we can see LLDP neighbors present and BGP is established! Obviously, you can extend the configuration or make it something completely different, but for the sake of the demo, I wanted to keep it pretty barebones. Since we are now finished with our demo topology, we need to tear it down.
Tearing Down the Lab
The teardown process is pretty straightforward. In containerlab, the teardown process is commonly referred to as “destroying” the lab (yes, maybe a little harsh). Regardless, you can choose to destroy a specific lab (using the topology name) or destroy all running labs. In my case, I chose the easy route and just destroyed all running labs using the command below:
dan@dan-ubuntu:~/clabs/clab-firstlab$ sudo containerlab destroy -a
INFO Parsing & checking topology file: firstlab.clab.yml
INFO Destroying lab: firstlab
INFO Removed container: clab-firstlab-csr-r1
INFO Removed container: clab-firstlab-csr-r2
INFO Removing containerlab host entries from /etc/hosts file
I appreciate the verbose log messages that describe each of the actions taking place during the lab teardown process. Containerlab does a great job cleaning up after itself. The next section is a little bonus for all of you that made it to the end of this post. Enjoy!
Bonus: Cool Features
Thank you for reading up to this point. As a token of appreciation, I wanted to go over some cool features I found while reading through the containerlab documentation.
Graph is actually a command in the containerlab CLI. There are other neat CLI commands, but I thought graph would be a good one to highlight. When you use the containerlab graph command, containerlab will create a diagram of your lab topology. Here’s an example:
Pretty cool, right?! This feature will be very handy when it comes to creating artifacts for your topology.
The publish feature is a well-thought-out feature. To put it briefly, it allows you to share your lab topology remotely over the Internet. You probably have a lot of security red flags popping into your head right about now, so let me provide a little more detail.
Containerlab is integrated with the online service mysocket.io to provide secure tunneling across the US, Asia, and Europe. To learn more about the mysocket.io service, check out their website here. Once you’re registered, you’ll have a token that can be used to authenticate your lab to the service. Basically, the mysocket.io service securely brokers the connection between your lab environment and the remote users. From the lab’s perspective, you need to “publish” specific ports on each node in your topology that you want to be available for remote users. In addition, you must add a mysocket.io node to your topology. This node allows the specified ports to be “published” and become available across the mysocket.io service. It’s super neat and I’m definitely not doing it justice in this short blurb, so I encourage you to check out the Publish ports page in containerlab’s documentation for more information.
The last cool feature I wanted to highlight is one that invites extensibility. When you spin up your containerlab topology for the first time, a folder is created in the same directory as your topology file, named after the topology. Within that folder, persistent data for the topology is stored, including configuration files, along with a file named ansible-inventory.yml. As you can guess, this file contains an auto-generated Ansible inventory that’s based on the containerlab topology file.
all:
  children:
    vr-csr:
      hosts:
        clab-firstlab-csr-r1:
          ansible_host: 172.20.20.2
        clab-firstlab-csr-r2:
          ansible_host: 172.20.20.3
I said this feature invites extensibility because it allows a network engineer to use the generated Ansible inventory file in another workflow, whether that be a simple Python script or a CI/CD pipeline.
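For example (a sketch, assuming PyYAML is installed), a script feeding another workflow could pull the management addresses straight out of the generated inventory:

```python
# Illustrative: read the containerlab-generated Ansible inventory and
# extract each node's management address. The YAML mirrors the file above.
import yaml

INVENTORY = """
all:
  children:
    vr-csr:
      hosts:
        clab-firstlab-csr-r1:
          ansible_host: 172.20.20.2
        clab-firstlab-csr-r2:
          ansible_host: 172.20.20.3
"""

inv = yaml.safe_load(INVENTORY)
hosts = inv["all"]["children"]["vr-csr"]["hosts"]
mgmt = {name: h["ansible_host"] for name, h in hosts.items()}
print(mgmt)
```

The same dictionary could seed a pyATS testbed, a Nornir inventory, or a simple health-check script in a pipeline.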
I hope this post has opened your eyes to what’s possible when it comes to the future of network simulation. I’m really excited to see where this goes and look forward to continuing to follow the progress of the containerlab project. As always, if you have any questions or feedback, please feel free to hit me up on Twitter (@devnetdan). Until next time!
containerlab docs: https://containerlab.srlinux.dev/