Adding VLANs to a trunk on Juniper MX – behaviour

Just a small note on how the CLI on an MX behaves when you add VLANs to an existing trunk, as this sometimes confuses people. It’s not really a problem, since nothing takes effect until commit, but if you’re using set commands this works much like Cisco’s “switchport trunk allowed vlan add xxx” command.

Existing config as below:

fe-0/0/3 {
    unit 0 {
        family bridge {
            interface-mode trunk;
            vlan-id-list [ 1 2 3 ];
        }
    }
}

Adding a vlan like this does not overwrite the existing config:

set interfaces fe-0/0/3 unit 0 family bridge vlan-id-list 5

Config is updated as below:

fe-0/0/3 {
    unit 0 {
        family bridge {
            interface-mode trunk;
            vlan-id-list [ 1 2 3 5 ];
        }
    }
}
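Deleting works per-member in the same way: the following removes only VLAN 5 from the list rather than wiping the whole thing (same interface as above):

```
delete interfaces fe-0/0/3 unit 0 family bridge vlan-id-list 5
```

Running show | compare before committing confirms exactly what will change.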

iptables error: iptables: Setting chains to policy ACCEPT: security raw nat[FAILED]filter

I found that my VPN had stopped working on one of my Linode-hosted VPSs (CentOS release 6.5). I was struggling to suss this out, but the logs suggested some sort of network connectivity issue causing TLS negotiation to fail (“TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)”).

After a bit of head-scratching, I decided to restart iptables, only to be confronted with this error:

iptables: Setting chains to policy ACCEPT: security raw nat[FAILED]filter
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
iptables: Applying firewall rules:                         [  OK  ]

That turned out to be the culprit, and apparently this is a known issue: the newer CentOS 6.5 kernel exposes a “security” table that the stock init script doesn’t know how to handle.

Adding the following “security” case to /etc/init.d/iptables fixed the problem. Search for “Setting chains” (note the capital S) to find the section you need; in my file it was at line 140.

    echo -n $"${IPTABLES}: Setting chains to policy $policy: "
    for i in $tables; do
        echo -n "$i "
        case "$i" in
+           security)
+               $IPTABLES -t filter -P INPUT $policy \
+                   && $IPTABLES -t filter -P OUTPUT $policy \
+                   && $IPTABLES -t filter -P FORWARD $policy \
+                   || let ret+=1
+               ;;
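To see why the new branch stops the [FAILED] marker, here’s a toy sketch of the script’s loop (simplified and with assumed values; the real script reads the table list from /proc/net/ip_tables_names and calls iptables). On the updated kernel that list gains "security", which the iptables 1.4.7 userspace doesn’t recognise, so without a matching case branch the policy command fails:

```shell
policy=ACCEPT
tables="security raw nat filter"   # what the updated kernel reports
ret=0
for i in $tables; do
    case "$i" in
        security)
            # the added branch: it targets -t filter, which the old
            # iptables userspace understands, so the command succeeds
            true || ret=$((ret+1))
            ;;
        *)
            # the pre-existing branches run "iptables -t $i" here;
            # they already succeed for raw/nat/filter
            true || ret=$((ret+1))
            ;;
    esac
done
echo "ret=$ret"   # 0 means no [FAILED]
```

With the security branch in place, ret stays at 0 and the init script reports OK for every table.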

Using the Cisco 3650 Management Port

While configuring some new Cisco 3650s, I wanted to use the dedicated management ports rather than setting up management LAN SVIs and so on. This is particularly useful in a DMZ, as the management port sits in a completely separate VRF.

Here’s a short summary of the steps taken to get around things not working at first, as traffic wasn’t being sourced from within the management VRF. IP addresses are omitted from the examples.

First off, configure the management interface and default route:

interface GigabitEthernet0/0
 description ** Network Management Interface **
 vrf forwarding Mgmt-vrf
 ip address

ip route vrf Mgmt-vrf


Syslog:

logging source-interface GigabitEthernet0/0 vrf Mgmt-vrf
logging host


NTP:

ntp server vrf Mgmt-vrf


TFTP:

ip tftp source-interface GigabitEthernet0/0

AAA needs a modification to work:

aaa group server tacacs+ TACACS_GROUP
 ip vrf forwarding Mgmt-vrf

ip tacacs source-interface GigabitEthernet0/0


SNMP traps:

snmp-server host vrf Mgmt-vrf version 2c YOURSTRING
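To confirm that management traffic really is being handled inside the VRF, these standard verification commands are handy (addresses omitted, as above):

```
show ip route vrf Mgmt-vrf
show ip arp vrf Mgmt-vrf
ping vrf Mgmt-vrf
```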

That covers the essentials!

F5 LTM Single VS to multiple pools port mapping

Scenario: a 1-to-1 mapping of ports on an IP used for SSL termination to corresponding inside ports on a local server.

Rather than creating a VS on the same IP for each individual port, I decided to create pools containing the same node but with individual ports, and manage the VS side with an iRule.


pool0 –
pool1 –
pool2 –
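For reference, pools like these can be built from tmsh; the node address 10.0.0.10 and the ports below are assumptions for illustration only:

```
create ltm pool pool0 members add { 10.0.0.10:5000 }
create ltm pool pool1 members add { 10.0.0.10:5001 }
create ltm pool pool2 members add { 10.0.0.10:5002 }
```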

An iRule was then created and attached to a VS listening on all ports to direct traffic as required. The relevant VS settings:

SSL Profile (Client): [ your SSL profile ]
VLAN and Tunnel Traffic: Enabled on… [ appropriate interface ]
Source Address Translation: Auto Map
Address Translation: [ Should be ticked ]
Port Translation: Tick
Resources: [ Pick the iRule ]


The iRule itself:

when CLIENT_ACCEPTED {
    switch [TCP::local_port] {
        "5000" { pool pool0 }
        "5001" { pool pool1 }
        "5002" { pool pool2 }
        default { reject }
    }
}
Seems to work OK!

NB: I found a gotcha here, as I was replacing an existing VS that listened on a specific port. If you have a VS for a specific port and shut it down, then create a VS on the same IP listening on all ports, incoming connections to the shut-down VS’s port will still be denied! You can get around this by moving the original VS to an unused port (which allows for reverting), or just deleting it.

X11 forwarding over SSH on firewalled CentOS host

I had a few issues with X11 forwarding over SSH on one of my CentOS hosts. After a bit of fiddling, I discovered that there were a couple of things I hadn’t taken into account.

I’d set my PuTTY session up to allow X11 forwarding, and set the X display location to “localhost”. On the server, I installed xclock and its dependencies for testing, and set the following in /etc/ssh/sshd_config:

X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes

I restarted sshd; however, this still wasn’t working.

In short, I was missing two things:

1) xauth wasn’t installed. This is required!
2) I wasn’t allowing connections to localhost in my iptables config. This was fixed in my ruleset with:

iptables -A INPUT -i lo -j ACCEPT

sshd was restarted after installing xauth and adding the firewall rule, and it now works a treat!
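For a quick sanity check after reconnecting (the exact display number depends on X11DisplayOffset and how many forwarded sessions already exist):

```
echo $DISPLAY    # e.g. localhost:10.0
xauth list       # should include a MIT-MAGIC-COOKIE-1 entry for that display
xclock &
```

If the loopback rule was added to a running firewall, service iptables save will persist it across restarts on CentOS 6.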