F5 LTM – Marking a VS down when less than X pool members are available.

UPDATE: FAR MORE ELEGANT SOLUTION FOUND:

Credit to this thread for saving me loads of hassle:
https://devcentral.f5.com/questions/disable-virtual-server-if-active-members-less-than-x

I modified this slightly and it works like a charm! I put each of the two VSs in its own pool, assign a customized monitor to each pool, then put those two pools in the WIP on the GTM.

1) Create iRule on LTM.

when HTTP_REQUEST {
    if { [URI::query [HTTP::uri]] starts_with "p=" } {
        set poolname [URI::query [HTTP::uri] p]
        set minmember [URI::query [HTTP::uri] mmember]
        set response "BIGIP Pool Status - [clock format [clock seconds]]"
        if { [active_members $poolname] < $minmember } {
            append response "DOWN - $poolname\n"
        } else {
            append response "UP - $poolname\n"
        }
        HTTP::respond 200 content $response "Content-Type" "text/html"
    }
}

2) Create a new VS on each site, listening on port 80 with NO MEMBERS. The iRule is its only resource.
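For reference, step 2 could be expressed in tmsh roughly as below. The VS name, the 192.0.2.10 listener address and the iRule name pool_status_check are all made up for the example, and property names can differ slightly between versions, so treat this as a sketch rather than a copy-paste config.

# Hypothetical names and a documentation address - adjust to your environment
tmsh create ltm virtual vs_pool_status \
    destination 192.0.2.10:80 \
    ip-protocol tcp \
    profiles add { tcp http } \
    rules { pool_status_check }

# Quick sanity check from any client that can reach it - the body should
# contain "UP - SITE-A-POOL" or "DOWN - SITE-A-POOL"
curl "http://192.0.2.10/?p=SITE-A-POOL&mmember=3"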

3) On the GTM, two monitors are created, one for each site.

eg: Send string:

GET /?p=SITE-A-POOL&mmember=3 HTTP/1.0

Receive string:

/.*UP.*/

Alias Service Port: HTTP
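I created the monitors in the GUI, but something like the following ought to be close in tmsh. The monitor name is made up, and the exact property names, receive-string format and send-string line endings vary between versions, so check it against your tmsh reference before relying on it:

# Hypothetical name; send/recv strings taken from the GUI settings above
tmsh create gtm monitor http site-a-minmembers \
    send "GET /?p=SITE-A-POOL&mmember=3 HTTP/1.0" \
    recv "UP" \
    destination "*:80"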

4) On the GTM, the WIP contains two distinct pools, each containing the single VS for its site. The LB method is set to Global Availability.

5) Each pool on the GTM then has its respective monitor assigned.

Much better than my solution!

————

This is a bit of a fudge and by no means an elegant solution; however, it was needed as a temporary measure, so I thought I’d document it as it’s a good example of using external monitors.

Scenario: A VS in each site on separate LTMs, each having one local pool for an app. The pool or VS needs to be marked offline when fewer than a certain number of pool members are available. This will make the GTM fail DNS over to the other site.

There’s no inbuilt functionality to do this, and when I was looking at iRules, the LB::down command didn’t look like it’d do what I wanted, as it triggers a monitor. Priority groups won’t work because we only have the local servers in one pool on each LTM.

Solution: Use a dummy node and pool with an external monitor that calls a script to check the available members and enable/disable the VS. I also looked at creating a dependency on the GTM to a dummy VS, which would work as well.

Caveats:

– Using this solution means that before you can manually enable/disable the VS, you need to remove the monitor from the dummy pool to stop the script from running.

– This is only tested on 10.2.4 – version 11 keeps the external monitor scripts in a different location IIRC. You could also end up with both VSs down in a worst-case scenario, which may or may not be desirable.

– Don’t use the monitor on more than one pool/node as the variables are tied to the monitor! A monitor runs for each member in a pool.

Steps:

1) Create a new external monitor (eg: MinMembers-test). We will set three variables, which the script will pick up:

PARENTVS – The name of the VS that needs to be marked down.
THRESHOLD – The minimum number of pool members that need to be available.
POOLNAME – The name of the pool we need to monitor.
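If you’d rather build step 1 from the CLI, the monitor can probably be created along these lines. I set mine up in the GUI, the object names below are examples, and the way the script path is referenced differs between 10.x and 11.x, so double-check the user-defined syntax against your version’s tmsh reference:

# Example values only - substitute your own VS, pool and threshold
tmsh create ltm monitor external MinMembers-test \
    run /config/monitors/testmon \
    user-defined PARENTVS my_app_vs \
    user-defined THRESHOLD 2 \
    user-defined POOLNAME my_app_pool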

2) Create a dummy node, eg: “DummyNode”, IP: 1.1.1.1 – set health monitors to none.

3) Create a dummy pool containing the dummy node above, and set the pool monitor to the name of your monitor (eg: MinMembers-test)
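In tmsh, steps 2 and 3 would look roughly like this. The 1.1.1.1 address and object names match the example above; port 80 on the dummy member and the "monitor none" setting are my assumptions for expressing "health monitors: none", so verify them on your version:

tmsh create ltm node DummyNode address 1.1.1.1 monitor none
tmsh create ltm pool DummyPool members add { DummyNode:80 } monitor MinMembers-test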

4) Add the script (eg: testmon) to the /config/monitors directory. Don’t forget to chmod +x the file. My script looks like this – basic but works. You may want to add proper ltm logging:

#!/bin/sh
#
# External monitor script to mark the dummy member UP or DOWN and to
# enable/disable the parent VS depending on the number of pool members
# available in the specified pool.
#
# Expected variables (set as user-defined variables on the monitor):
#   THRESHOLD    Minimum no. of pool members required to be active
#   POOLNAME     Name of the pool whose members are counted
#   PARENTVS     Parent VS to disable/enable
#
# This needs to be set or tmsh won't work
REMOTEUSER=root
export REMOTEUSER

# Log some information so we can see what the last script run did.
echo "`date +%D" "%H:%M:%S` PARENTVS: $PARENTVS POOLNAME:$POOLNAME"  > /tmp/test-status.log

# use tmsh to check available members
COUNT=`/usr/bin/tmsh show ltm pool $POOLNAME members | grep "Pool member is available" | wc -l`

# Enable/disable the virtual server depending on the outcome. Enabling an
# already-enabled VS doesn't hurt and is less overhead than checking first.
#
if [ "$COUNT" -ge "$THRESHOLD" ]
then
   echo "`date +%D" "%H:%M:%S` pool $POOLNAME UP (Total: $COUNT, min: $THRESHOLD)" >> /tmp/test-status.log
   echo "`date +%D" "%H:%M:%S` VS $PARENTVS will be enabled if needed" >> /tmp/test-status.log
   tmsh modify ltm virtual $PARENTVS enabled >> /tmp/test-status.log
   echo "UP ($COUNT/$THRESHOLD)"
else
   echo "`date +%D" "%H:%M:%S` pool $POOLNAME DOWN (Total: $COUNT, min: $THRESHOLD)" >> /tmp/test-status.log
   echo "`date +%D" "%H:%M:%S` VS $PARENTVS will be disabled if needed" >> /tmp/test-status.log
   tmsh modify ltm virtual $PARENTVS disabled >> /tmp/test-status.log
fi
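To test the script by hand before wiring it into the monitor, export the variables and run it directly (the values here are examples; when the monitor runs the script it also passes the member’s address and port as arguments, which this script simply ignores):

# Example values only - substitute your own VS and pool names
export REMOTEUSER=root
export PARENTVS=my_app_vs POOLNAME=my_app_pool THRESHOLD=2
/config/monitors/testmon
cat /tmp/test-status.log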

If anyone’s done this without such a massive kludge, I’d love to hear how it was done. This is only temporary so I can get away with it.

F5 External Monitor script not working

I was trying to create a custom external monitor on an F5 LTM today that depended on the output of a tmsh command. I ended up tearing my hair out for a while: the script ran fine from the command line when supplied with the correct arguments, but it failed whenever the monitor on the LTM ran it.

Because I was sanitizing the output from the tmsh command for my requirements, I never saw the underlying error.

The cause: the REMOTEUSER variable must be set when using tmsh in a script run by a monitor.

Just set:

REMOTEUSER=root
export REMOTEUSER

in your script and things should work!

I also learned that as soon as anything is written to STDOUT, the monitor is assumed to be successful (UP) and your script is killed off – no further lines are executed! This was also confusing, as I wasn’t getting the log details that were supposed to be written to a file at the end! :)
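In other words, the safe pattern for an external monitor script is: do all your logging to a file first, and only write to STDOUT once, right at the end, and only when the check has passed. A minimal skeleton (the check itself is just a placeholder here):

#!/bin/sh
# Minimal external monitor skeleton: all logging goes to a file, and
# STDOUT is only written once, at the very end, when the check passes.
# Any earlier output to STDOUT marks the member UP and the script is
# terminated at that point.
REMOTEUSER=root
export REMOTEUSER

# Placeholder check - replace with your real test (tmsh, curl, etc.)
if /bin/true
then
    echo "`date` check passed" >> /tmp/mymonitor.log    # safe: file only
    echo "UP"                                            # marks the member UP
fi
# No output to STDOUT at all = member marked DOWN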