NSX-T 2.4

The 2.4 release is officially out with lots of new enhancements and a minor architectural change (converged management & control plane cluster). The detailed coverage of the release is available on the VMware blog.  The highlights for me:

– IPv6 support
– Multiple BGP add-ons
– SR-DR merge*
– Proxy ARP on the Edge Node
– Multi-TEP on bare metal Edge
– N-S & E-W service insertion (virtual services appliances)
– Complete LLDP support
– L7 App-IDs for DFWs
– New declarative policy model with simplified UI
– Collapsed NSX Manager with NSX Controllers in a single appliance

The list is almost endless. There is clearly a huge focus on user experience and a quest for simplification. *The objects remain the same; the change only affects how the forwarding and routing tables are presented. My previous NSX post will remain unchanged for the time being.

With the simplified UI, there is also a change in the terminology for some of the constructs. The table below was provided by VMware during a “what’s new in NSX-T 2.4” session at the Top Gun Tech Thursday, and I thought it would be worth sharing. The existing construct scheme is still there when navigating through the Advanced UI.

[Image: NSX-T terminology mapping table provided by VMware]

Another thing to point out is the official “Migration Coordinator”, or v2T, for NSX-v to NSX-T migrations. It supports migration of layer 2 and layer 3 networking, firewall, load balancing, and L2/L3 VPN. The migration does impact data plane traffic, and there is probably more to come on that. As for Multisite support, another hot topic, more documentation has been published, along with a few demo videos covering the supported use cases (Active/Standby, Active/Active, DR) and the manual tweaks that make it “lite”.

To be continued…


Datera and Modern Data Center

Datera presented at Tech Field Day 18 and provided additional insights into their software-defined storage solution as well as perspectives on modern Data Centers. Datera’s team:

– Al Woods, CTO
– Nic Bellinger, Chief Architect and Co-Founder
– Bill Borsari, Head of Field Systems Engineering
– Shailesh Mittal, Sr. Engineering Architect

The presentation started with an introduction to enterprise software-defined storage, the characteristics of a software-defined Data Center, an overview of their data services platform pillars, business use cases, and the current server partners whose hardware their software runs on.


Datera’s definition of a software-defined Data Center includes these characteristics:

1 – All hardware is virtualized and delivered as a service
2 – Control of the Data Center is fully automated by software
3 – Supports legacy and cloud-native applications
4 – Radically lower cost versus hard wired Data Centers

With this in mind, the building blocks of the Datera Data Services Platform architecture were also presented, which is extremely relevant for anyone interested in what is “under the hood”, how it is built, and where standard functions such as dedup, tiering, compression, and snapshots happen. Datera focused on demonstrating how the architecture is optimized to overcome traditional storage management and data movement challenges. This is where some background on storage operations helps to fully understand the evolution to a platform built for always-on, transparent data movement on a distributed, lockless, application-intent-driven architecture running on x86 servers. The overall solution is driven by application service level objectives (SLOs) intended to provide a “self-driving” storage infrastructure.

There were no lab demos during the sessions; however, there were some unique slide animations on what Datera calls continuous availability and perpetual storage service to contextualize how their solution works. The last part of the presentation was about container and microservices applications and how Datera provides enough flexibility and safeguards to address such workloads and their portable nature.

Modern Data Center

In a whiteboard explanation, Datera also shared that they have seen more Clos-style (leaf-spine) network architectures in modern Data Centers, and that they see themselves as “servers” running in a rack-scale design alongside independent compute nodes. The network is the backplane and an integral part of the technology, as compute nodes access the server-based software storage over the network through high-performance iSCSI interfaces. S3 object storage access is also supported.

One of the things I learned during the presentation is their ability to peer directly with the top-of-rack (leaf) switches via BGP. The list of networking vendors is published here. Essentially, Datera integrates SLO-based L3 network function virtualization (NFV) with SLO-based data virtualization to automatically provide secure and scalable data connectivity as a service for every application. It accomplishes this by running a software router (BGP) on each of its nodes as a managed service, similar to Project Calico. Al Woods wrote about the benefits of L3 networking very eloquently in this article. I find it interesting how BGP is making its way inside Data Centers in some shape or form.
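
To make the Calico comparison a bit more concrete, below is a minimal sketch of what a routing daemon on a storage node could look like when peering with its leaf switch. This is not Datera’s actual configuration; it is an FRR-style fragment with made-up AS numbers, interface names, and addresses, just to illustrate the idea of each node advertising its own service address over BGP.

! Illustrative frr.conf fragment on a single storage node (all values are hypothetical)
router bgp 65101
 bgp router-id 10.0.0.11
 ! BGP unnumbered session toward the top-of-rack leaf over the node's uplink
 neighbor eth0 interface remote-as external
 address-family ipv4 unicast
  ! advertise the node's loopback/service address as a /32
  network 10.0.0.11/32
 exit-address-family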

In addition to the L3 networking integration for more Data Center awareness, Datera adopts an API-first design by making everything API-driven, a policy-based approach from day 1 meant for easy operations at scale, and targeted data placement to ensure data is distributed correctly across the physical infrastructure for availability and resilience. This is all aligned with the concept of a scale-out modern Data Center.

As a follow-up, Datera will also be presenting at Storage Field Day 18, and there may be more opportunity to delve into their technology and get a glimpse of the user interface and multi-tenancy capabilities.

Tech Field Days


… a continuation of the beginning.

I had my second appearance at Tech Field Day 18 this week, which is a major accomplishment for someone who for a long time was quiet on social media and the like. Not that I have changed much, but it is definitely a huge step being “outside”. I had the opportunity to meet several very professional, intelligent, and insightful people with great minds and an amazing ability to express themselves (in writing, speaking, improvising, podcasting). It has been an overwhelming learning experience more than anything else. Social (soft) skills do not come overnight, and they are fundamental in building trust and long-term relationships with people who naturally enlighten, inspire, or pave the way for others.

The video presentations have been posted for Datera, NetApp, VMware, and SolarWinds.


ACI Troubleshooting Notes


I attended a 3-day ACI Troubleshooting v3.1 bootcamp this week and I have to say, even though I do not get involved in the actual implementation after the architecture and design, it is always valuable to understand how things (can) break and ways to troubleshoot them. Here are some notes I put together:

Fabric Discovery

I learned that show lldp neighbors can save lives when the proposed diagram does not match the physical topology. Mapping serial numbers to node IDs and names is a must before and during fabric discovery. The acidiag fnvread command is also very helpful during the process.
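
For reference, these are the two checks mentioned above, one from the switch console and one from the APIC CLI (exact output columns vary a bit between versions):

# On a discovered leaf or spine console: confirm the cabling matches the proposed diagram
show lldp neighbors

# On the APIC CLI: map serial numbers to node IDs/names and check registration state
acidiag fnvread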

Access Policies

For any connected endpoint, verification can be done top down, bottom up, or randomly, but regardless of the methodology, always make sure the policies are all interconnected. I like the top-down approach, starting with switch policies (including VPC explicit protection groups) and switch profiles, then interface policies and profiles followed by policy groups. This is where all policies need to be in place (i.e., CDP, LLDP, port-channel mode) and, most importantly, the association to an AEP, which in turn needs to be associated with a domain (physical, VMM, L2, L3) and a VLAN pool with a range. If they are all interconnected, with the AEP bridging everything, then comes the logical part of the fabric.

I can only imagine what a missing AEP association can do in a real world deployment.
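
A quick way to sanity-check parts of that chain from the APIC CLI is moquery. The class names below are the ones I believe correspond to these objects (worth confirming against your version); the output shows whether the AEP, domain, and VLAN pool objects and relations actually exist:

# AEPs and their domain associations
moquery -c infraAttEntityP
moquery -c infraRsDomP

# VLAN pools and their encap blocks
moquery -c fvnsVlanInstP
moquery -c fvnsEncapBlk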

L2 Out

When extending a bridge domain to an external layer 2 network, a contract is required on the L2 Out (external EPG); that much is known. Now, assuming this is a no-filter contract, it can be either provided or consumed, as long as the EPG associated with the bridge domain being extended has the matching contract: if the L2 Out consumes the contract, the associated EPG needs to provide it, and if the L2 Out provides it, the EPG needs to consume it. In short, every time I think I have finally nailed the provider and consumer behavior, I learn otherwise.
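
When in doubt about which side is providing and which is consuming, the contract relations can be listed directly; again, class names to the best of my recollection, so treat them as a starting point:

# Provided and consumed contract relations (covers regular EPGs and the external EPG)
moquery -c fvRsProv
moquery -c fvRsCons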

L3 Out

Assuming all access policies are in place, for an OSPF connection the same traditional checks are required, from MTU to network type. If the external device is using an SVI, the broadcast network type is required on the OSPF interface profile for the L3 Out. I had point-to-point configured for a while. This is probably basic, but sometimes one can spend considerable time checking unrelated configuration.
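
For context, this is roughly what the external side of that handoff looks like; a hedged NX-OS-style sketch with placeholder VLAN, addressing, and area values, where the important line is the OSPF network type matching the L3 Out interface profile:

! Illustrative SVI handoff toward the ACI L3 Out (all values are placeholders)
! assumes feature interface-vlan / feature ospf and an OSPF process tagged 1 already exist
interface Vlan100
 ip address 192.168.100.2/29
 ip ospf network broadcast
 ip router ospf 1 area 0.0.0.0
 no shutdown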

Static Port (Binding)

Basically the solution for any connectivity issue from endpoints behind a VMM domain. I have seen it work with and without static binding of VLANs. In the past, I would associate this with the vSwitch policies: as long as the hypervisor was seeing the leaf in the topology under virtual networking, no static binding was needed. That is not the case anymore. The show vpc extended command is the way to show the active VLANs passing from the leaf to the host.
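
Alongside it, these are the leaf-side checks I keep handy to see which VLANs are actually deployed and with which encapsulation; command names as I remember them, and the output format differs between releases:

# Active VLANs carried on the vPC toward the host
show vpc extended

# VLANs deployed on the leaf, with their encap and associated EPGs
show vlan extended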

API Inspector

It is the easiest way to confirm the specifics of API calls. With Postman, it is just a matter of copying and pasting the method, URL, and payload while the inspector runs in the background for a specific configuration done via the GUI.
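
The same replay works with plain curl. Below is a minimal sketch assuming a made-up APIC address, credentials, and tenant name; the second payload stands in for whatever API Inspector captured for the equivalent GUI action:

# Authenticate and store the session cookie (APIC address and credentials are placeholders)
curl -sk -c cookie.txt -X POST https://apic.example.com/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}'

# Replay a captured call, e.g. creating a tenant named Demo
curl -sk -b cookie.txt -X POST https://apic.example.com/api/mo/uni/tn-Demo.json \
  -d '{"fvTenant":{"attributes":{"name":"Demo"}}}'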

VMM AVE

It is a very similar process to deploying a distributed virtual switch, except that a VLAN or VXLAN mode needs to be defined. If running VXLAN encapsulation, a multicast address is required along with a multicast pool, as well as a firewall mode. The rest of the configuration is the same as far as adding vCenter credentials and specifying the Data Center name and IP address. After going through the process a few times without any success, with the AVE not getting pushed to vCenter, I enabled the infra VLAN on the AEP towards the host, which is a requirement when running VXLAN, and there it went.

Follow-up

The official ACI troubleshooting e-book has screenshots based on earlier versions but is still relevant, as the policy model did not change. For the most up-to-date troubleshooting tools and tips, the BRKACI-2102 ACI Troubleshooting session from Cisco Live is recommended.


Cumulus VX Spine and Leaf

After hearing the word Cumulus twice from different initiatives on the same day, I decided I wanted to know more about Cumulus Networks in general, and playing with VX seemed like a great start. I am already running Vagrant and VirtualBox for other purposes, so adding another box is easy. Well, the idea was just one additional box, but after doing some GitHub investigative work, I found out that there is already a pre-defined Cumulus Linux Demo Framework, or Reference Topology, available for consumption. I quickly followed this repository and built my own spine and leaf architecture:

[Diagram: Cumulus VX spine and leaf reference topology]

The whole process did not take more than 10 minutes. There is a lot that goes on in the background, but still, not bad for a virtual non-prod environment or validation platform that supposedly has the same foundation as the Cumulus Linux and Cumulus RMP versions, including all the control plane elements.

The configuration is done on each of the VMs using the Network Command Line Utility (NCLU) or by editing the /etc/network/interfaces and /etc/frr/frr.conf files. This definitely requires some “essential” Linux skills. Multiple demos are available here to run on this topology, including NetQ. I have tested the config-routing demo and it worked perfectly with two spines, two leafs, and two servers. It uses an Ansible playbook to push the configuration to the spines and leafs, as well as to add new interfaces to the servers for the connectivity test. A nice way to test the OSPF and BGP unnumbered concepts.
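
For a flavor of what the playbook ends up configuring, here is a sketch of BGP unnumbered through NCLU on one of the leafs. The AS number, loopback address, and spine-facing ports (swp51/swp52) are illustrative, so adjust them to whatever the demo topology actually uses:

# Loopback used as the router ID and advertised into BGP
net add loopback lo ip address 10.0.0.11/32

# BGP unnumbered: eBGP sessions over the spine-facing interfaces, no neighbor IPs needed
net add bgp autonomous-system 65011
net add bgp router-id 10.0.0.11
net add bgp neighbor swp51 interface remote-as external
net add bgp neighbor swp52 interface remote-as external
net add bgp network 10.0.0.11/32

# Review and apply
net pending
net commit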

The fundamental piece is FRR (Free Range Routing), responsible for the EVPN, BGP, and OSPF functionality. Pete Lumbis did an excellent whiteboard session at Networking Field Day 17, going over the building blocks and following up with a demo on a similar topology running Cumulus VX.

Ansible Tower on Vagrant

I am still in re-install-apps land on macOS, and this is a mini guide on how to install Ansible Tower using Vagrant for demo/trial usage only.

The first step is to install Vagrant, if it is not already installed for other purposes. Vagrant relies on interactions with third-party systems, known as “providers”, to supply the resources to run development environments. I am running VirtualBox.

To verify the installation of both Vagrant and VirtualBox:

vagrant --version

vboxmanage --version

Once the installation of both Vagrant and VirtualBox is complete, Ansible Tower can be initialized by creating a Vagrantfile with default instructions in the current directory as follows:

vagrant init ansible/tower

vagrant up

The process takes a few minutes the first time, and once complete:

vagrant ssh

The vagrant ssh command will display the admin password and the Tower login URL (https://10.42.0.42). This uses the default (basic) settings in the Vagrantfile, which can be edited further, including setting a more specific name for the Ansible VM.

To verify the Ansible version:

ansible --version

At the moment, there are two trial/demo licenses available: one with enterprise features such as LDAP and Active Directory support, System Tracking, Audit Trails, and Surveys, and one limited to 10 nodes with no expiration date, which, however, does not include the enterprise features just listed. The open-source alternative (or non-enterprise version) with no node limitation is the AWX project.

Below is the main (default) dashboard of Ansible Tower:

[Screenshot: Ansible Tower default dashboard]

And here is a nice walk-through on the GUI: Ansible Tower demo.

Tip: if by any chance 10.42.0.42 cannot be accessed the first time, check the routing table (ip r) and interfaces (ip a show) to see if 10.42.0.0/24 is listed on the Vagrant VM. If it is not listed, reinstall everything.

Apstra in a Whiteboard

As an occasional and very ordinary writer, I think this topic deserves a “direct from the source” approach. I am definitely using credits from my blank-slate bucket. In other words, nothing I write would be better than hearing it from the person who explained on a whiteboard at Networking Field Day 19, so simply and clearly, what the Apstra AOS (Apstra Operating System) is all about and what its building blocks are.

Besides doing a great whiteboard session, @_vCarly also published an outline of that very busy morning in Apstra’s NFD19 Experience. It is a detailed narrative that goes from the reference architecture, made of the AOS server sitting at the orchestration (or management) layer and agents installed on each individual switch (supporting modern leaf-spine designs and extensible to other environments), to the building blocks: logical device, rack type, template, blueprint, interface map, device profile, resources, and managed devices. They are all interconnected, and it makes more sense when delving into the whiteboard. There is no better way to get a clear understanding of Apstra than watching the original video followed by her narrative.


On a side note, this is someone I met in person for the first time, but whom I have known for a while through videos as part of my initial Cisco ACI learning journey. I just wish my whiteboards were that decent and inspirational.

In addition to the whiteboard session, other highlights included the ServiceNow integration delivered jointly with Network to Code, an overview and demo of Day 2 Operations via IBA (Intent-Based Analytics), with a write-up here as well, and a demo of AOS for additional context. The original videos are available at the Networking Field Day 19 portal.