Wednesday, March 21, 2018

Source of Truth

"Imagine walking down the park with your wife, and suddenly seeing your ex. Wife talks automation, she agrees. Wife says intent, she does the same. Wife talks container... and now they are best friends forever."

Since Cisco and Google announced a partnership to deliver a hybrid cloud solution last year, I have been looking back to see what my ex is doing in the software space. During my time at Cisco it used to be a hardware-first company, or a "software solution that must run on our own hardware"-first company, so it is interesting to hear the recent announcement of the Kubernetes-based Cisco Container Platform. It is also great to see new materials from Cisco DevNet helping Network Engineers move their skills towards software and automation, like the awesome Network Programmability Basics video course.

One blog post by Hank Preston about "Network as Code" caught my attention. He laid out three principles of Network as Code: 
  • Store Network Configurations in Source Control
  • Source Control is the Single Source of Truth
  • Deploy Configurations with Programmatic APIs
and now I would like to expand on this Source of Truth in the context of network device config generation.

Source of Truth is the authoritative data source for a piece of information (it is often compared with Source of Record, but let's not go into that discussion). In a network config generation pipeline, the Source of Truth is the place we look for the information needed to generate the config. And I agree with Hank: even though many organizations today use the current running device configuration in the production network as the Source of Truth for network configuration, this is NOT the way to build a reliable system.

One important idea in Site Reliability Engineering is that to have a reliable system, you need to build it out of interchangeable, replaceable parts that can fail at any time. We need to treat network devices as cattle, not pets: look at the network infrastructure as a fleet in which any device can fail and be re-spawned automatically, returning to its state before the failure. If the current running device configuration in the production network is the Source of Truth and a device fails, we cannot use it as the source of information to generate the configuration for the replacement device. You can surely take a backup of the configuration and keep it offline somewhere, but if the active network device fails before its configuration can be backed up, will you use the previous backup as the Source of Truth?

Now, we can use the configuration captured from the current running production network as the Source of Truth IF, and only if, subsequent changes to a network device are made first in that offline configuration. So let's say you have a production network, and you capture all the config from active devices to start creating the Source of Truth. You keep those device configurations in a repository with version control enabled:
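For illustration, such a repository could be as simple as a directory of captured device configs (all names are hypothetical):

```text
network-configs/
├── rtr1.cfg        # captured running config of router 1
├── rtr2.cfg        # captured running config of router 2
└── sw1.cfg         # captured running config of switch 1
```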

If you want to change the configuration in the network, you have to follow the change process (if you have one) for the configuration you put in the repository: create a branch, make the change, and request peer review before your branch is merged back into master.
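That branch-and-merge workflow can be sketched with plain git commands; the repository, file names, and change below are all made up for illustration:

```shell
# Illustrative change workflow for a config repository
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email "netops@example.com"
git config user.name "NetOps"

# seed the repo with a captured running configuration
printf 'hostname rtr1\nntp server 10.0.0.1\n' > rtr1.cfg
git add rtr1.cfg && git commit -qm "capture running config"

# make the change on a branch, not on master directly
git checkout -qb change-ntp
sed -i 's/10.0.0.1/10.0.0.2/' rtr1.cfg
git commit -qam "move rtr1 to new NTP server"

# after peer review, merge the branch back into master
git checkout -q master
git merge -q change-ntp
grep 'ntp server' rtr1.cfg
```

Only after the merge does the repository, not the device, reflect the approved state of the network.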

But it is not always practical to use device configuration, which is vendor-specific and sometimes even platform-specific, as the Source of Truth. Let's say your current production network runs on one device model from a certain vendor. For some reason, during a failure or otherwise, you want to auto-generate the same config for a different device model, or even for a new device from a different vendor. Or perhaps you run a virtualized environment and want to scale your network devices horizontally, for example by spinning up a new virtual router to handle more load, where the new virtual router carries mostly the same configuration as the current one, except for unique items such as the hostname and IP addresses.

Network device configuration has two components: the configuration syntax, which is specific to a vendor or platform, and the data variables, which stay consistent regardless of the syntax. Data variables can be the same for all devices (e.g. SNMP configuration, NTP server, etc.) or unique to each device (e.g. hostname, IP address, etc.). If we use Ansible as the automation platform, for example, we need three kinds of information as data sources to generate configuration: the nodes, the data variables, and the Jinja2 templates.

The inventory file (an INI file) lists the nodes where we want to perform the change. It can be as simple as a list of IP addresses or hostnames of network devices. Data variables can be assigned to a group of devices if they are generic, like the NTP server configuration, or to a specific node if the configuration is unique, such as a loopback IP address. Those variables can be stored in the same INI file or in a set of group variable files. Jinja2 templates provide the configuration syntax per device vendor, stored in a separate file for each vendor.

hostname {{ system.hostname }}
interface loopback 0
 description Management Interface
 ip address {{ system.ipaddr }} {{ system.netmask }}
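The inventory and group variables that feed a template like the one above might look like this (hostnames and addresses are illustrative):

```ini
# inventory.ini (illustrative)
[routers]
rtr1 ansible_host=192.0.2.11
rtr2 ansible_host=192.0.2.12

# group variables: generic settings shared by all routers
[routers:vars]
ntp_server=10.0.0.1
```

Unique values, like each router's loopback address, would typically live in per-host variable files instead of the shared group section.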

An Ansible playbook then uses the template module, with those Jinja2 template files as the source, to render the device configurations into a chosen destination folder: the configuration files are created automatically by inserting the proper data variables into the respective Jinja2 templates.
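A minimal playbook for this rendering step could look like the following sketch (the vendor variable and the file paths are hypothetical):

```yaml
# playbook.yml (illustrative): render one config file per device
- hosts: routers
  gather_facts: no
  tasks:
    - name: Generate device configuration from the Jinja2 template
      template:
        src: "templates/{{ vendor }}.j2"
        dest: "configs/{{ inventory_hostname }}.cfg"
      delegate_to: localhost
```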

As you can see, all configuration artifacts in Ansible, such as the inventory file, group variable files, and even the Jinja2 template files, can be kept in a repository under version control. If you want to modify the configuration of a production device, you update those files (following the change process), generate the new config, and push it to the production device (you may have to push to a staging device first, depending on your release process). Hence, those files are the Source of Truth in this example.

But what if you want to grow bigger than that example? What if you have more data that is needed to generate the network configuration? And what if you want to store the data in different locations beyond some simple files?

Below is my attempt to draw the system for network config generation pipeline to answer those requirements:

I put a human icon at the far left of the drawing to make an argument: we humans are still the ultimate Source of Truth. When a network architect or engineer designs a network, he or she already has an "intent" of how the final design will look, and has already thought about the intended state of the network once it runs. However, we need the designer to describe the network to be built in a data format and structure that a computer can understand. This means even a detailed document such as a Low Level Design document is no longer sufficient.
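As an illustration, a small slice of that intent could be captured in a machine-readable format such as YAML rather than a prose document (every field below is hypothetical):

```yaml
# intent.yml (illustrative): the designer's intended state, as data
site: jkt01
devices:
  rtr1:
    role: edge-router
    loopback: 10.255.0.1/32
    ntp_server: 10.0.0.1
```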

The data required to generate the network config is distributed across different locations and software systems, for example:

1. Inventory Database
It has the list of all hardware (and software) in the organization, whether operational or not. The inventory could be maintained by an operations engineer or even the procurement team, whose focus is on ensuring the hardware/software still has a valid support contract from the vendor, for example

2. Design Rules
This is usually the main content of the Low Level Design document: from the physical design (how ports are allocated, e.g. the first port of router 1 is always connected to router 2 in a pair) to the logical design (e.g. how VLANs are assigned) to traffic policy (e.g. BGP peers and any traffic manipulation for each peer), and so on

3. IP Database
It is common for a large organization to use a dedicated IP address management tool. The tool makes IP allocation planning and auditing easier, ensuring there are no mistakes such as duplication. The same tool may be used to manage VLAN assignments and VRFs, or to track DHCP pool allocations

4. Site Information
Information about physical location, site naming, cabling layout, MDF and IDF locations, rack configuration and so on is stored in drawings, or in another format understood by those who need to work on or maintain the physical facilities. It may even contain information about the environment, such as power and cooling

5. Capacity Planning
Any design has a scaling factor (e.g. a pair of aggregation switches can handle up to 20 access switches; beyond that, a new pair of aggregation switches is required). Capacity planning is also needed to forecast future demand based on organic growth, for example a calculation based on the pattern of traffic utilization growth over time

Again, all the data above can be kept in repositories under version control, so they are the Source of Truth (or System of Record, for some people). And our automation tool can access them through APIs to get the data needed to generate the network configuration.
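As a sketch of that idea, the snippet below stubs out three of the data sources as plain functions and merges their answers into one per-device structure. In a real pipeline each stub would be an API call to the inventory, IPAM, or site database; all names and values here are made up:

```python
# Sketch: aggregate data from several (stubbed) sources to build the
# input needed for config generation.

def get_inventory():
    # stub for the inventory database API
    return [{"hostname": "rtr1", "model": "ISR4451", "site": "jkt01"}]

def get_ip_allocation(hostname):
    # stub for the IP address management tool API
    return {"rtr1": {"loopback": "10.255.0.1",
                     "netmask": "255.255.255.255"}}[hostname]

def get_site_info(site):
    # stub for the site information database
    return {"jkt01": {"ntp_server": "10.0.0.1"}}[site]

def build_device_data(hostname):
    """Merge per-device data from every source into one structure."""
    device = next(d for d in get_inventory() if d["hostname"] == hostname)
    data = dict(device)
    data.update(get_ip_allocation(hostname))
    data.update(get_site_info(device["site"]))
    return data

print(build_device_data("rtr1"))
```

The merged structure is exactly what a template engine would consume to render the final device configuration.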

But what if the configuration generation tool is not the only tool that requires this information? What if we have other tools, such as Build Planning or Network Analytics tools, that are needed for a successful config change to the production network and need information from the data sources listed above? Such a tool can surely consume the information from the data sources directly, but as we add more data sources and more consumers we introduce a many-to-many relationship, and any small change in one component may impact many relationships. We need a single Source of Truth that gives a complete view of the network information, as the only authoritative data source for all consumers. And that single Source of Truth is a model.

A model is a representation of the actual thing. The picture above shows a model of the Internet. For a network automation system, we need several models:

1. Topology Model 
describes the structure of the real network from Layer 1 to Layer 3, using a graph whose edges represent abstract links connecting the nodes on which packets flow. The model can describe low-level information, such as the composition of an individual node (e.g. a multi-linecard switch), up to higher-level abstractions such as tunnels and BGP sessions

2. Configuration Model
describes the configuration data structure and content, representing both configuration intent and generated configuration. The model should be generic, i.e. vendor-neutral data conforming to the OpenConfig YANG data models where possible. OpenConfig is a collection of industry-standard YANG models for configuration and management, intended to be supported natively on networking hardware and software platforms

3. Operational Model
represents the state of the network, and is used to describe the monitoring data structure and attributes. Model-Driven Telemetry is a new approach to network monitoring in which data is streamed continuously from network devices using a push model, providing near real-time access to operational statistics
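To make the three models concrete, here is a minimal sketch in plain Python; the class names and fields are illustrative, not an actual OpenConfig or YANG schema:

```python
# Sketch: the three models as plain Python data structures.
from dataclasses import dataclass, field

@dataclass
class Node:
    hostname: str
    # Operational Model: per-node state (e.g. from streamed telemetry)
    state: dict = field(default_factory=dict)

@dataclass
class Topology:
    # Topology Model: nodes plus edges (abstract links between nodes)
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_link(self, a: str, b: str):
        self.edges.append((a, b))

    def neighbors(self, hostname: str):
        return [b if a == hostname else a
                for a, b in self.edges if hostname in (a, b)]

# Configuration Model: vendor-neutral intent, keyed by hostname
config_model = {
    "rtr1": {"interfaces": {"Loopback0": {"ip": "10.255.0.1/32"}}},
    "rtr2": {"interfaces": {"Loopback0": {"ip": "10.255.0.2/32"}}},
}

topo = Topology(nodes={h: Node(h) for h in config_model})
topo.add_link("rtr1", "rtr2")
print(topo.neighbors("rtr1"))
```

Even in this toy form, the separation is visible: topology describes structure, configuration describes intent, and operational state sits beside each node.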

Some may argue that we can have a single model for all of the above (and so truly have a single Source of Truth). That decision is really up to the designer of the model: combining configuration information into the Topology Model, for example, runs the risk of bloating the model, consequently making its curation and change control even harder. And while the Operational Model seems to serve a specific purpose, all three may be inter-related; for example, the operational state of the network may become the input that updates the Topology and Configuration Models.

If we go back to the network config generation pipeline, the configuration tool should derive information from the model (and from additional policy and template representations) to auto-generate the configuration to be pushed to the production network. The config generation tool should have both unit tests and integration tests to ensure the new configuration can be integrated successfully. There should also be a closed-loop mechanism to provide feedback if the new configuration pushed to production does not bring the network to its intended state. But let's keep the more detailed discussion of how the generated config gets pushed to the device, and how the closed-loop feedback mechanism works, for some other time.
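As a tiny illustration of the unit-test idea, the sketch below checks a rendered configuration against the intent before anything is pushed; render_config here is a hypothetical stand-in for the real Jinja2-based rendering step:

```python
# Sketch: a minimal unit test on a generated configuration.

def render_config(data):
    # stand-in renderer; the real pipeline would render Jinja2 templates
    return (f"hostname {data['hostname']}\n"
            f"interface loopback 0\n"
            f" ip address {data['ipaddr']} {data['netmask']}\n")

def test_loopback_present():
    cfg = render_config({"hostname": "rtr1",
                         "ipaddr": "10.255.0.1",
                         "netmask": "255.255.255.255"})
    # the generated config must contain the intended hostname and address
    assert "hostname rtr1" in cfg
    assert " ip address 10.255.0.1 255.255.255.255" in cfg

test_loopback_present()
print("config checks passed")
```

A failing assertion here stops the pipeline before a bad config ever reaches a staging or production device.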

Sounds too good to be true? Is the system too hard to develop? Does it seem to be just more smoke and mirrors? Well, some large organizations in the world have built it, and they operate such systems every day due to the scale of networking they have to deal with (and I'm only discussing this at a very high level here). Yours may not have similar requirements or need an automation platform at that scale, but at minimum any organization should try to reach Level 2 as described in my Autonomous Network post, using an available tool like Ansible.

If you have read this far and found parts of this post difficult to understand, or feel there are gaps and would like to see more practical examples, I highly recommend reading the new Network Programmability and Automation book. In fact, I highly recommend that any Network Engineer read this book to learn the skills required to become a next-generation Network Engineer.

And if you are someone who wakes up every morning thinking about all the details required to build a real vendor-agnostic, model-driven network automation platform, with a closed loop from streaming telemetry, with the ability to roll back or improve automatically based on feedback, and to make it run in the Cloud, please let me know.

It looks like we share the same Source of Truth.