NNMi9 Enabling Node Component Threshold Alerts (iSPI for Metrics)

NNMi9 (and iSPI for Metrics) allows threshold alerting on node components (or interface statistics); however, this must be configured at the interface or node level within the Monitoring Configuration menus.

Example 1: Enabling CPU Threshold Alerting
Configuration > Monitoring Configuration


Edit Routers (or whichever monitoring group refers to the node group you want) and, in the Threshold Settings tab, add a new count-based threshold setting.


Select CPU 5Min Utilization and set the thresholds as desired. It’s best to disable the low value completely by setting Low Value and Low Value Rearm to 0, as we’re only concerned with a high value. You may also want to set the High Trigger Count to 2 or more rather than 1, since config writes and other normal activity can cause CPU spikes.


Now Save and Close down to the Monitoring Configuration menu, then either Save or Save and Close from there.

Example 2: Interface Error Thresholds with iSPI for Metrics
Configuration > Monitoring Configuration > Interface Settings
Edit the desired interface configuration:


Add a new count-based threshold, select Input Error Rate, and set it as follows to alert on any number of errors in a given poll.


Save & Close back to Monitoring Configuration, then Save again. Repeat this for Output Error Rate, then Save & Close from the Monitoring Configuration menu.

To verify Interface Error threshold monitoring is working, you can go to Monitoring > Interface Performance like so:


Now you get alerts like the following in the alarm browser!


NNMi Node “No Status” Despite Monitoring Settings

NNMi 9.22: On occasion, a node in NNMi will absolutely refuse to come out of “No Status” despite belonging to a valid monitoring configuration group. A config poll and a status poll both succeed, but this is never reflected in the map view or node inventory.

Note that after trying each of these steps you should give NNMi at least 3× your polling interval (e.g. 15 minutes) to sort itself out. Also, take a backup BEFORE doing any of this, to be on the safe side.


1) Create a new Node Group called “Monitoring Policy Temp Bucket” or something similar, and add the problem node(s) to this group.

2) Add this node group as the first entry in your Monitoring Configuration > Node Settings. Ensure that you enable SNMP polling, Management Address Polling, and all SNMP Fault Monitoring checkboxes. Changing the Fault Polling interval to 1 minute will also help.

3) Check Configuration Details > Monitoring Settings of the node to ensure it’s in the temp bucket and do a config poll, followed by a status poll once finished.

4) Move the node(s) out of the temporary “Bucket” and check monitoring config and try config/status poll again.

If this doesn’t work (and you’ve waited at least 15 minutes), delete the node(s) and reload them from the command line with nnmloadseeds.ovpl -n [ip address of node]. Wait another 15 minutes.

If that hasn’t fixed it, there is one more drastic step that has worked in the past: click the node in map view and, in the status panel, use the Device Profile link to edit the device profile for that device. Change it (e.g. from Device Category Switch to Device Category Router, Author: Customer), save, run a config/status poll, then change the device profile settings back again. This will almost certainly fix the problem. I have no idea why, since when this occurs it doesn’t happen for all devices with the same profile!

Drilldown on a Single Value Field in Splunk

By default you can’t drill down on a single-value field visualisation in a Splunk view if you are using a rangemap to change colours.

eg: rangemap field=count low=0-0 default=elevated

This can be circumvented with the following addition to the XML in the dashboard (thankfully this works in simplified XML):

      <option name="linkFields">result</option>
      <option name="linkSearch">search index=main c_msg_severity=0</option>
      <option name="linkView">flashtimeline</option>

Splunk Cisco ASA App – Getting it working!

There are some apps on Splunkbase for Cisco firewalls (in particular a Cisco Security Suite and a Cisco ASA app). These work well, but there are a few gotchas that can stop the ASA app from working.

Prerequisites: Install the latest Sideview Utils from http://sideviewapps.com/apps/sideview-utils and install the Google Maps app from splunkbase.

1) Ensure that you have a “firewall” index created and searchable by the appropriate roles. Be careful if the firewall index is owned by another app; if you remove that app then the index will disappear and you’ll wonder why this one is no longer working!

2) Ensure that the source is being sent to the “firewall” index (if using a forwarder, you need to set index = firewall in the monitor stanza).

3) Copy the etc/apps/Splunk_CiscoFirewalls/default/transforms.conf and props.conf files into the etc/apps/Splunk_CiscoFirewalls/local directory, and edit the local copy of transforms.conf so that the asa sourcetype is set correctly. This seems to depend on software version; one variant is commented out here, and you may need to swap them around. Certainly on 8.2 the log format is %ASA- and not %ASA--:

DEST_KEY = MetaData:Sourcetype
REGEX = %ASA-\d+-\d+
#REGEX = %ASA--\d+-\d+
FORMAT = sourcetype::cisco_asa

If you really need to cater for both eventualities, then you could use:

REGEX = %ASA--?\d+-\d+
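As a quick sanity check, the combined pattern can be tested outside Splunk before editing transforms.conf (Splunk transforms use PCRE, which Python’s re module approximates closely enough for this pattern). The sample log lines below are illustrative, not taken from a real firewall:

```python
import re

# Combined pattern from transforms.conf: matches both the "%ASA-" (8.2-style)
# and "%ASA--" (double-dash) message formats.
pattern = re.compile(r"%ASA--?\d+-\d+")

samples = [
    "Jan 01 2015 12:00:00: %ASA-6-302013: Built outbound TCP connection",   # single dash
    "Jan 01 2015 12:00:00: %ASA--6-302013: Built outbound TCP connection",  # double dash
    "Jan 01 2015 12:00:00: %PIX-6-302013: Built outbound TCP connection",   # should NOT match
]

for line in samples:
    print(bool(pattern.search(line)), "->", line[:45])
```

Both ASA variants match while the PIX line is left for the separate force_sourcetype_for_cisco_pix transform, so the combined regex is safe to use if you can’t pin down the software version.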

4) I also came across an issue where the sourcetypes were being correctly set, but the host field was incorrectly being detected as the machine running my light forwarder. I got around this by editing the etc/apps/Splunk_CiscoFirewalls/local/props.conf file and changing the first TRANSFORMS line, adding syslog-host as the final entry:

TRANSFORMS-force-sourcetype_for_cisco_devices = force_sourcetype_for_cisco_pix, force_sourcetype_for_cisco_asa, force_sourcetype_for_cisco_fwsm, force_sourcetype_for_cisco_acs, force_sourcetype_for_cisco_ios, force_sourcetype_for_cisco_catchall, syslog-host

5) This app also has a Cisco “catch-all” sourcetype formatter, which may cause problems with other apps (e.g. they might expect sourcetype=syslog or cisco_syslog). You may want to comment this out, because it’s not exhaustive and will result in some of your Cisco logs being split across sourcetypes:

#DEST_KEY = MetaData:Sourcetype
#REGEX = :\s\%((SNMP|CDP|FAN|LINE|LINEPROTO|RTD|SYS|C\d+_[^-]+)-\d+-\S+)
#FORMAT = sourcetype::cisco
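To judge whether to keep or comment out the catch-all, it helps to see which lines it would re-sourcetype. A rough check of the pattern against some illustrative IOS and ASA messages (the backslash before % in the Splunk config is unnecessary in Python):

```python
import re

# Catch-all pattern from transforms.conf: re-sourcetypes common IOS facilities to "cisco"
catchall = re.compile(r":\s%((SNMP|CDP|FAN|LINE|LINEPROTO|RTD|SYS|C\d+_[^-]+)-\d+-\S+)")

lines = [
    "router1 101: %SYS-5-CONFIG_I: Configured from console by admin",     # caught
    "switch1 202: %LINEPROTO-5-UPDOWN: Line protocol on Gi0/1 changed",   # caught
    "fw1 303: %ASA-6-302013: Built outbound TCP connection 12345",        # not in the list
]

for line in lines:
    print("cisco" if catchall.search(line) else "unchanged", "->", line[:40])
```

Anything matching one of the listed facilities gets sourcetype cisco, while everything else (including facilities the list doesn’t cover) keeps whatever sourcetype it had, which is exactly the split the warning above describes.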

Selectively monitor files in a directory with Splunk Forwarder

Scenario: lots of log files, all in the same directory on a remote host. We don’t want to monitor all of them, and we don’t want to specify a long list of files to monitor in our forwarding configuration.

Solution: use a blacklist entry. The example below monitors all files in the /logs directory, sets a sourcetype of fw_log, and ignores any filenames ending in .LONDONA or .AMSTERDAMA.

Edit file: /home/splunk/opt/splunkforwarder/etc/system/local/inputs.conf

[monitor:///logs]
disabled = false
blacklist = (\.LONDONA$|\.AMSTERDAMA$)
sourcetype = fw_log
index = firewall

Or, to use a wildcard in the monitor path and pick up only certain files:

disabled = false
sourcetype = asa_log
blacklist = (\.LONDONA$|\.AMSTERDAMA$)
index = firewall

Similarly, we can create a whitelist instead:

[monitor:///logs]
whitelist = (\.CDCA$|\.CDCB$)
disabled = false
index = firewall

(we could use add monitor /logs/ -index main -sourcetype fw_log but as we’re blacklisting, we may as well edit manually)
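Since both blacklist and whitelist are ordinary regexes matched against the file path, they can be dry-run against example filenames before restarting the forwarder. The filenames below are illustrative:

```python
import re

# Patterns from the inputs.conf examples above; Splunk matches these
# against the monitored file's path.
blacklist = re.compile(r"(\.LONDONA$|\.AMSTERDAMA$)")
whitelist = re.compile(r"(\.CDCA$|\.CDCB$)")

files = ["fw.log.LONDONA", "fw.log.AMSTERDAMA", "fw.log.PARISA", "fw.log.CDCA"]

for f in files:
    print(f,
          "| blacklist skips:", bool(blacklist.search(f)),
          "| whitelist keeps:", bool(whitelist.search(f)))
```

With the blacklist stanza, .PARISA and .CDCA files would still be monitored; with the whitelist stanza, only the .CDCA/.CDCB files would be.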

Note: forwarder was added with

./splunk add forward-server remotehostname:9997

Check forwarding with:

splunk list forward-server
Splunk username: admin
Active forwards:
Configured but inactive forwards:

Then configure a receiver on port 9997 on the indexer.