Using the Cisco 3650 Management Port

Configuring some new Cisco 3650s, I wanted to use the management ports rather than setting up management LAN SVIs and so on. This is particularly useful in a DMZ, as the management port sits in a completely separate VRF.

Here’s a short summary of the steps taken to get things working; at first nothing worked because the traffic wasn’t being sourced from within the management VRF. IP addresses are shown as placeholders for example purposes only.

First off, configure the management interface and default route:

interface GigabitEthernet0/0
 description ** Network Management Interface **
 vrf forwarding Mgmt-vrf
 ip address <management-ip> <subnet-mask>

ip route vrf Mgmt-vrf 0.0.0.0 0.0.0.0 <gateway-ip>


logging source-interface GigabitEthernet0/0 vrf Mgmt-vrf
logging host <syslog-server-ip> vrf Mgmt-vrf


ntp server vrf Mgmt-vrf <ntp-server-ip>


ip tftp source-interface GigabitEthernet0/0

AAA needs a modification to work from the management VRF:

aaa group server tacacs+ TACACS_GROUP
 ip vrf forwarding Mgmt-vrf

ip tacacs source-interface GigabitEthernet0/0


snmp-server host <nms-server-ip> vrf Mgmt-vrf version 2c YOURSTRING
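
It’s worth a couple of quick checks to confirm the switch can actually reach its servers from within the management VRF; for example (server IP is a placeholder):

ping vrf Mgmt-vrf <server-ip>
show ip route vrf Mgmt-vrf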

That covers the essentials!

Nexus FEX Bouncing

I came across an odd problem where a FEX was bouncing and was asked to look at it. The logs were a flood of interface up/down and FEX status messages; however, buried amongst them and quite easy to miss was the following, less frequent syslog message:

%SATCTRL-FEX132-2-SATCTRL_FEX_MISCONFIG: FEX-132 is being configured as 131 on different switch

Pretty obvious clue there. Configuration was correct for the uplinks on both 5Ks:

interface Ethernet1/13
  switchport mode fex-fabric
  fex associate 131
  channel-group 131

interface Ethernet1/14
  switchport mode fex-fabric
  fex associate 132
  channel-group 132

Checking the serial numbers of the attached FEXes (show fex detail) confirmed the problem:

First 5K

FEX: 131 Description: FEX213 - CAB 28   state: Offline
  FEX version: 7.1(3)N1(1) [Switch version: 7.1(3)N1(1)]
  FEX Interim version: 7.1(3)N1(1)
  Switch Interim version: 7.1(3)N1(1)
  Extender Serial: FOC00011122

FEX: 132 Description: FEX214 - CAB 28   state: Online
  FEX version: 7.1(3)N1(1) [Switch version: 7.1(3)N1(1)]
  FEX Interim version: 7.1(3)N1(1)
  Switch Interim version: 7.1(3)N1(1)
  Extender Serial: FOC12345678

Second 5K

FEX: 131 Description: FEX213 - CAB 28   state: Registered
  FEX version: 7.1(3)N1(1) [Switch version: 7.1(3)N1(1)]
  FEX Interim version: 7.1(3)N1(1)
  Switch Interim version: 7.1(3)N1(1)

FEX: 132 Description: FEX214 - CAB 28   state: Online
  FEX version: 7.1(3)N1(1) [Switch version: 7.1(3)N1(1)]
  FEX Interim version: 7.1(3)N1(1)
  Switch Interim version: 7.1(3)N1(1)
  Extender Serial: FOC00011122

As we can see above, the same FEX (serial FOC00011122) is associated as FEX 131 on the first 5K and FEX 132 on the second. The solution was to verify which serial number belonged to which FEX in the cabinets, then swap the cables for the two ports around on the incorrectly patched 5K. It looks like someone had been doing some patching and put things back the wrong way around! O_o
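
After re-patching, it’s worth confirming on both 5Ks that each FEX shows Online and that the serial numbers now line up with the right FEX IDs; for example:

show fex detail
show interface fex-fabric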

Replacing a failed Nexus 5K and some bugs

Tasked with replacing a failed Nexus 5596UP (no console output; powers up with fans but no lights except amber on the mgmt module at the back), I quickly ran into some annoying problems trying to configure the FEX uplinks before actually racking it and plugging it in. I wanted to get as much config done beforehand as possible to minimize any interruptions – and I was also a bit nervous, as this unit had been in the vPC primary role before it failed.

The 5Ks are running in Active/Active mode, with each FEX dual-homed to both 5Ks over vPC.

FEX definitions, port channels and other config went in fine, e.g.:

fex 102
  pinning max-links 1
  description "-=FEX-102=-"
  type N2248T

interface port-channel102
  description -= vPC 102 to FEX-102 / SW01 e1/1,e1/2 & SW02 e1/1,e1/2 =-
  switchport mode fex-fabric
  fex associate 102
  spanning-tree port-priority 128
  spanning-tree port type network
  vpc 102

But when trying to add the channel-group to the FEX ports (e1/1 and e1/2), it failed:

interface Ethernet1/1
  channel-group 102
command failed: port not compatible [port allowed VLAN list]

Removing the port channel allowed me to configure the channel-group on only one of the ports; the second member then gave the same error (because the port-channel interface is automatically re-created).

The only way to get around this was to remove the port channel and use a range command:

no interface port-channel102

interface Ethernet1/1-2
 channel-group 102

Then re-add the port-channel 102 config. Obviously you’re stuffed on this version if you split the uplinks across different ASICs.
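
Once the uplinks are cabled and the FEX comes online, the bundle can be sanity-checked with something like:

show port-channel summary
show fex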

This seems to be down to a buggy version of NX-OS: version 5.1(3)N1(1a), which is quite old and already has some other bug warnings against it! Not much choice in this case given the replacement has to run the same config as the peer.

Also, before the FEXes are connected, their ports can only be configured by pre-provisioning the slots:


slot 102
 provision model N2K-C2248T

Cisco links for replacement (some important steps such as role priority):
Replacement Documentation

Full Replacement Procedure
We aren’t running config sync, so role priority wasn’t a problem; I didn’t need to change the priority on the replacement to a higher value. If using config sync, I’d follow Cisco’s guidance. In summary, here are the steps I took (maybe a little paranoid, but they resulted in no outage when the replacement was brought online):


1) Label all cables carefully then disconnect and unrack faulty unit
2) Rack replacement – do NOT connect any cables apart from console
3) Set a management address on the mgmt0 port (I used a different IP to be safe) and set a default route in the management VRF if your FTP server is in a different subnet, e.g.:

vrf context management
  ip route 0.0.0.0/0 <gateway-ip>

4) Connect management cable
5) Upgrade/downgrade the code to the same version as the peer – FTP copy example below, with the images in /nos-images on the FTP server.

copy ftp://user:pass@<ftp-server>/nos-images/[kickstart-filename].bin bootflash:
copy ftp://user:pass@<ftp-server>/nos-images/[nxos-system-filename].bin bootflash:

install all kickstart bootflash:[kickstart-filename] system bootflash:[nxos-system-filename]

6) Shut down the mgmt0 port
7) Pre-provision FEXes, otherwise you won’t be able to paste the config for the FEX interfaces.
Be sure to specify the correct model: N2K-C2248T is correct for the N2K-C2248TP-1GE.

slot 102
  provision model N2K-C2248T
slot 103
  provision model N2K-C2248T

Some config may look like it’s in a different order to the peer when pre-provisioned but it will sort itself out when the FEXes are online.
8) Apply the correct config via console (see the note above regarding the channel-group bug); double-check any long lines such as spanning-tree vlan config. Remove the additional management VRF default route from step 3 if it’s different to the config you put in.
9) Shut down all ports (including mgmt) – use range commands, e.g. int e1/1-48
10) Connect all cables to ports

NB: Cisco say to change the vpc role priority to a higher value on the replacement but this was not necessary as we’re not using config sync. Also, the VPC peer role does not pre-empt, so if replacing primary, for example, the secondary will stay in “secondary, operational primary” state.


Monitor console messages (term mon) for anything strange from the Dist layer. You should see the vPCs come up OK. I also set several pings running to hosts in the row that are connected to the now single-homed FEXes.
1) Open up the mgmt port from the console – it will now have the correct IP. Check you can ping the peer unit.
2) Open up all infrastructure links to the core/dist layer and the peer unit
3) Open FEX links (I did one pair of links at a time to catch any cabling issues – paranoid!)
4) Test!

Useful commands:

show vpc role
show fex
show vpc consistency-parameters vpc xxx
show vpc peer-keepalive
show port-channel database int poXXX

HP NNMi9 False Redundancy Group alerts/Cisco Nexus Duplicate Discoveries

It seems that HP are getting around to fixing the issue of duplicate Cisco Nexus nodes being discovered (it’s not in patch 5), but until then it’s possible to work around it. Duplicate discoveries play havoc with Router Redundancy Group (RRG) alerting, which isn’t funny when someone’s woken up in the middle of the night for it.

To stop NNMi discovering duplicate nodes once you have the devices in your topology, do the following:

1) Create an Auto-Discovery rule with the lowest ordering in the list (e.g. No-AutoDiscover-Rule)
2) Uncheck all options in the left-hand pane and uncheck Enable Ping Sweep
3) Add the IP addresses of all the mgmt0 interfaces in the management VRF (Type: Include in rule)

High CPU on Nexus 5K and no SNMP response – snmpd

A strange issue today with SNMP not responding on a Nexus 5K. I tried removing and re-adding the SNMP config, and even removing the ACL we use to control access altogether – still no joy.

Checking CPU usage, it seemed quite high; show proc cpu sort showed that snmpd was quite busy:

PID    Runtime(ms)  Invoked   uSecs  1Sec    Process
-----  -----------  --------  -----  ------  -----------
 4559           59  991518226      0   44.5%  snmpd
 4605          179        87   2065    9.0%  netstack
 1178         2091  1733135010      0    1.0%  kirqd
    1          157  25653477      0    0.0%  init
    2          837   3474116      0    0.0%  migration/0
    3          600  3970856252      0    0.0%  ksoftirqd/0

I was sure I’d dealt with this before, and it turns out I was hitting a bug.

The official word is that there is a memory leak in one of the processes used by SNMP, called libcmd. The workaround is to enter the hidden command:

no snmp-server load-mib dot1dbridgesnmp
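
After unloading the MIB, snmpd’s CPU usage can be re-checked with something like:

show processes cpu sort | include snmpd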

The best solution, however, is to perform a software upgrade to 5.0(3)N2(2) or later, where this is fixed.