So I was doing some labbing on the gear I physically have at the moment and came across an unusual and potentially buggy piece of behaviour on a Juniper SRX 110H and a 220H. I'll set the scene really quickly: I have an SRX 110 and an SRX 220, one in Christchurch, New Zealand and the other in Sydney. Between them I am running an RSVP-signalled MPLS LSP over GRE. Both are running in full flow firewall mode, with exceptions for MPLS and so on.
So the aim of the lab was to set up a VPLS between the two sites and test its operation. Fairly standard work for those familiar with it; however, as you may know, it is hard to test end-to-end connectivity with basic tools such as ping or traceroute unless you have devices hanging off interfaces that have been added into the VPN, specially set up and all that. I just wanted a proof of concept, and usually when I'm testing a VPLS end to end, I use a routed interface within the VPLS in the form of an IRB interface, which in turn is added into an L3 VRF. The idea is to ping from the L3 VRF, across the VPLS, and back into the remote side's L3 VRF. Easy, right? Well, the SRX did not like the IRB interface, and I'm not talking about unsupported platforms, software or anything like that.
First off, let's get some config:
set routing-instances CORE_VPLS_LAB instance-type vpls
set routing-instances CORE_VPLS_LAB vlan-id none
set routing-instances CORE_VPLS_LAB routing-interface irb.10
set routing-instances CORE_VPLS_LAB route-distinguisher 192.168.1.1:1000
set routing-instances CORE_VPLS_LAB vrf-target target:64514:1000
set routing-instances CORE_VPLS_LAB protocols vpls site-range 255
set routing-instances CORE_VPLS_LAB protocols vpls no-tunnel-services
set routing-instances CORE_VPLS_LAB protocols vpls site srx-au site-identifier 1
set routing-instances CORE_VPLS_LAB protocols vpls connectivity-type irb
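For completeness, the far side (the Christchurch box in my setup) carries the mirror of this instance. Each VPLS PE needs a unique site name and site-identifier, and by convention a distinct route-distinguisher, while the vrf-target must match. A sketch of the far end, with the site name and RD address assumed for illustration:

set routing-instances CORE_VPLS_LAB instance-type vpls
set routing-instances CORE_VPLS_LAB vlan-id none
set routing-instances CORE_VPLS_LAB routing-interface irb.10
set routing-instances CORE_VPLS_LAB route-distinguisher 192.168.1.2:1000
set routing-instances CORE_VPLS_LAB vrf-target target:64514:1000
set routing-instances CORE_VPLS_LAB protocols vpls site-range 255
set routing-instances CORE_VPLS_LAB protocols vpls no-tunnel-services
set routing-instances CORE_VPLS_LAB protocols vpls site srx-nz site-identifier 2
set routing-instances CORE_VPLS_LAB protocols vpls connectivity-type irb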
..and the L3 VRF side
set routing-instances CORE_L3VPN_LAB instance-type vrf
set routing-instances CORE_L3VPN_LAB interface irb.10
set routing-instances CORE_L3VPN_LAB route-distinguisher 192.168.1.1:1001
set routing-instances CORE_L3VPN_LAB vrf-target target:64514:1001
.. and the irb interface
set interfaces irb unit 10 family inet address 1.1.1.1/24
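For reference, the actual proof-of-concept test is just a VRF-scoped ping towards the remote side's IRB address on the same subnet. The remote address below is assumed for illustration; had the interface come up, it would look something like:

run ping 1.1.1.2 routing-instance CORE_L3VPN_LAB count 5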
Ok, so the interface has been added into the correct VRF/VPN; however, on this SRX the issue I came across was that the system DOES NOT recognise that the interface has been added into a VRF/VPN, and it therefore stays in a DOWN state.
perrin@srx-au# run show interfaces terse irb
Interface               Admin Link Proto    Local                 Remote
irb                     up    up
irb.10                  up    down inet     1.1.1.1/24
.. and here lies the problem:
perrin@srx-au# run show interfaces irb.10
  Logical interface irb.10 (Index 92) (SNMP ifIndex 591)
    Flags: Hardware-Down SNMP-Traps 0x0 Encapsulation: ENET2
    Bandwidth: 1000mbps
    Routing Instance: None Bridging Domain: None
    Input packets : 0
    Output packets: 0
    Security: Zone: Null
    Protocol inet, MTU: 1514
      Flags: Sendbcast-pkt-to-re
      Addresses, Flags: Dest-route-down Is-Preferred Is-Primary
        Destination: 1.1.1/24, Local: 1.1.1.1, Broadcast: 1.1.1.255
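For anyone reproducing this, two sanity checks worth running alongside the output above are the VPLS connection table (to confirm the pseudowire itself is up between the sites) and the VRF's routing table (to see whether the IRB's subnet ever makes it in, given the Dest-route-down flag):

run show vpls connections instance CORE_VPLS_LAB
run show route table CORE_L3VPN_LAB.inet.0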
Essentially I am stuck here; even with a JNCIE-SP glancing over it briefly, we couldn't come up with an instant fix for my testing procedure using an IRB interface. It is certainly an interesting one, and the SRX naturally can't ping the IRB locally either. Now, disclaimer: SRXs are the jack of all trades, master of none (at least that's my opinion of the low-end branch series devices), so it could be something I'm missing that is platform specific and isn't showing up in the usual places, or (help me) I've configured something wrong. Either way, I thought it was certainly of interest, especially to anyone dealing with L2/L3 troubleshooting and deployment. These devices are both running JUNOS Software Release [12.1X44-D25.5].
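One avenue I haven't verified on the 110/220 yet, but which is a known technique on platforms where L3 termination inside a VPLS misbehaves, is stitching the VPLS to the VRF with a logical tunnel (lt-) interface pair instead of an IRB. The shape of the config would be roughly as below; treat the unit numbers and address as assumptions, and note that on an SRX the L3 unit would also need a security zone:

set interfaces lt-0/0/0 unit 0 encapsulation ethernet-vpls
set interfaces lt-0/0/0 unit 0 peer-unit 1
set interfaces lt-0/0/0 unit 1 encapsulation ethernet
set interfaces lt-0/0/0 unit 1 peer-unit 0
set interfaces lt-0/0/0 unit 1 family inet address 1.1.1.1/24
set routing-instances CORE_VPLS_LAB interface lt-0/0/0.0
set routing-instances CORE_L3VPN_LAB interface lt-0/0/0.1

If anyone has tried this on the branch SRX series, I'd be interested to hear whether it behaves any better than the IRB.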