Bash multi-threading – parallel SNMP polls

Bit of a misleading title, that. True multi-threading really isn’t possible in bash, and you can’t set variables in the parent shell from the output of background child processes. However, it seems you can fake it if you’re willing to fudge it a bit with temporary files.
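To see why the title is a fudge, here’s a minimal demonstration: a background job runs in a child process, so any variable it sets vanishes when it exits.

```shell
count=0
( count=42 ) &   # the assignment happens in a child process
wait
echo "count is still: $count"   # prints "count is still: 0"
```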

I got annoyed with SNMP polls across a large number of targets being very slow, so decided to write something to get around it. Actually, one of the biggest culprits is the default retry value of the snmp commands, which is set at 5; see the man page for snmpcmd which shows this.
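The arithmetic is straightforward: a dead host costs roughly timeout × (retries + 1) seconds per poll, so with the default -r 5 and -t 1 each unreachable device holds things up for about six seconds. A quick sketch (the host 192.0.2.10 and community public below are placeholder values, not from the script):

```shell
TIMEOUT=1    # -t: seconds per attempt
RETRIES=1    # -r: extra attempts after the first
# Worst-case wait for an unreachable host:
echo "Up to $(( TIMEOUT * (RETRIES + 1) )) seconds per dead host"
# The equivalent poll, using the placeholders above:
#   snmpget -v2c -c public -r $RETRIES -t $TIMEOUT 192.0.2.10 sysDescr.0
```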

Here is a script for grabbing the first line of SNMP get output for the specified OID from a large number of devices. There is no maximum limit here unlike my previous batch script, so if it’s a huge list, run at your own risk. It’s good for checking for things like devices still set to the public read community string. Beware that it uses temporary files to work around bash’s limitations, so bear in mind your user file limits.

snmphosts.txt should contain an IP address or resolvable hostname on each line.

# Parallel SNMP Query for BASH - Who needs multithreading? ;)
# Version: V1.0 - Mark M
# Date:    15/05/2019
# The intention of this script is to get around how slow it is to poll
# a large number of SNMP hosts sequentially. This is achieved by a loop
# which sends each poll to the background which writes its output to
# a unique file suffixed by .$i in folder $OUTDIR. It is not possible
# to populate variables with the results of background child processes 
# in BASH so this is one workaround.
# Once complete, awk is used to pick out the fields of the output files
# to avoid issues with blank responses/lack of newlines. Stderr is redirected
# to the files so that we can see when a poll failed. We only pick out
# the first line of the result with head -1 in the poll to avoid
# loads of extra lines per host with, say, sysDescr.
# Keeping retries low will speed this up even more.
# Caveats: Extremely large lists will generate enough files to hit quotas
# or user max file limits.

# Settings - adjust to suit your environment
SNMPLIST="snmphosts.txt"        # one IP/hostname per line
OUTDIR="/tmp/snmppoll"          # temporary output dir
SNMPVER="2c"
SNMPCOMMUNITY="public"
SNMPRETRIES=1                   # keep low to speed things up

SNMPOID=".1.3.6.1.2.1.1.1.0"    # system.sysDescr.0
# Some additional useful SNMP OIDs below that should usually respond.
#SNMPOID=".1.3.6.1.2.1.1.3.0"   # system.sysUpTime.0
#SNMPOID=".1.3.6.1.2.1.1.5.0"   # system.sysName.0

# Create temporary output dir if required
if [ ! -d $OUTDIR ]; then
   mkdir $OUTDIR
   if [ $? -ne 0 ]; then
      echo "Problem creating temp dir. Quitting."
      exit 1
   fi
fi

# Delete any old temp files
rm -f $OUTDIR/snmpitem* 
if [ $? -ne 0 ]; then
   echo "Error deleting old temp files in $OUTDIR. Exiting."
   exit 1
fi

# Init i for loop
i=0

# Loop through each host, sending query to background.
# Assigning each host to an array element for future use.
# Ignore blank lines and commented lines in $SNMPLIST file.
for host in $(cat $SNMPLIST | egrep -iv "^$|^#"); do
   HOSTS[$i]=$host
   printf "Polling Item $i - $host\n"
   printf "$host:" > $OUTDIR/snmpitem.$i

   # This bit is tricky. We have to redirect stderr to stdout in both instances
   # here to ensure we see if we get no response or some other error.
   snmpget -Ov -v$SNMPVER -r $SNMPRETRIES -c $SNMPCOMMUNITY $host $SNMPOID 2>&1 | head -1 >> $OUTDIR/snmpitem.$i 2>&1 &
   i=$(( $i + 1 ))
done

printf "Queries launched. Waiting..."
# Use BASH builtin to wait for child processes to exit.
wait
printf "Done!\n"

# Count total number in array
SNMPCOUNT=${#HOSTS[*]}
echo "Host Count: $SNMPCOUNT ($i)"

# Use Awk to pick out fields of all files which will avoid
# formatting errors for failures. This will be in same
# order as an ls statement
awk -F":" '{print $1":"$3}' $OUTDIR/snmpitem.*

# Delete temp files
rm -f $OUTDIR/snmpitem* 
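
Once the children have finished, the temp files can be read back into a normal array in the parent shell, which is about as close as bash gets to returning values from background jobs. A sketch using the same snmpitem.* naming as above (the sample file contents here are made up):

```shell
# Gather per-host results back into a parent-shell array.
OUTDIR=$(mktemp -d)
printf 'hostA:STRING: Linux box\n' > "$OUTDIR/snmpitem.0"
printf 'hostB:Timeout\n'           > "$OUTDIR/snmpitem.1"

RESULTS=()
for f in "$OUTDIR"/snmpitem.*; do
   RESULTS+=("$(head -1 "$f")")   # first line only, as in the poll loop
done
echo "Collected ${#RESULTS[@]} results"   # prints "Collected 2 results"
rm -rf "$OUTDIR"
```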

Running tasks in parallel batches in Bash

I had a requirement to run quite a lot of tasks in parallel with varying parameters. Initial experimentation suggested I might end up with a lot of processes running and potentially cause system issues so I looked into creating a script to run things in parallel, albeit in controlled batches.

In this example, I’ve substituted the actual actions I was taking with a random sleep command so that processes will finish at different times. What would probably be best would be to have the actions in another script that will log its output somewhere either by writing to a file or by using logger so syslog deals with the flurry. Typically unix file writes are atomic up to 4KB so having several processes writing at the same time isn’t a huge issue.
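As a sketch of the file-based option: each background worker appends one line with >> (which opens the file with O_APPEND), so short writes land whole rather than interleaved. The log file path and messages here are made up for the example.

```shell
LOGFILE=$(mktemp)

for word in alpha beta gamma delta; do
    (
        sleep 0.$(( RANDOM % 5 ))            # simulate work finishing at random times
        echo "finished action for $word" >> "$LOGFILE"
    ) &
done
wait

wc -l < "$LOGFILE"   # one whole line per worker: 4
rm -f "$LOGFILE"
```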

Bash below:

# Loop through items in word list to run actions
# and process in parallel batches to avoid having
# too many processes.
# v1.0 7/5/2019
USAGE="`basename $0` /path/to/wordlist {batch size}"

if [ ! $1 ] || [ ! $2 ]; then
    echo "$USAGE"
    exit 1
fi

WORDLIST=$1
BATCHSIZE=$2
echo "Using wordlist $WORDLIST in batches of $BATCHSIZE"
i=0
for word in `cat $WORDLIST`; do
    if [ $(( $i % $BATCHSIZE )) -eq 0 ] && [ $i -ne 0 ]; then
       echo "Batch of $BATCHSIZE done... waiting"
       wait
    fi
    # Take actions here and run as background processes
    SLEEPRND=$(( RANDOM % 9 + 1 ))
    echo "Action: $word - Sleeping for $SLEEPRND"
    sleep $SLEEPRND &

    # Increment counter for tracking
    i=$(( $i + 1 ))
done

printf "Waiting..."
wait
printf "All jobs run.\n"


[me@server ~]$ ./ wordlist localhost 5
Using wordlist wordlist against host localhost in batches of 5
Action: a - Sleeping for 2
Action: b - Sleeping for 3
Action: c - Sleeping for 2
Action: d - Sleeping for 2
Action: e - Sleeping for 5
Batch of 5 done... waiting
Action: f - Sleeping for 7
Action: g - Sleeping for 1
Action: h - Sleeping for 1
Action: i - Sleeping for 1
Action: j - Sleeping for 8
Batch of 5 done... waiting
Action: k - Sleeping for 2
Action: l - Sleeping for 1
Waiting...All jobs run.