Presenting VPLS to an SRX cluster


Hi there, this is more of an open question on a scenario I’m working through at the moment on a Juniper SRX650 cluster. One particular service being presented to this cluster is a VPLS service which interconnects all the other offices on this network via smaller single-chassis SRXs. I am going to be plugging 2 x 10GbE interfaces from the chassis into the VPLS environment. For redundancy, one link goes off to another POP and the other to a local switch. As far as the SRX is concerned this is just a reth interface with an IP address. What I want to be 100% sure of is that there won’t be any weirdness with the operation of the VPLS as far as the PE routers are concerned when dealing with a clustered SRX: I want the right MAC to show up on the right port, and the traffic to flow to the chassis listed as primary for that particular reth. I’ll edit this with a diagram real soon.
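
For reference, the reth side of things is nothing special. A rough sketch of the kind of config I mean (interface names, redundancy-group numbers and the address here are placeholders, not the real values):

set chassis cluster reth-count 2
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
set interfaces xe-2/0/0 gigether-options redundant-parent reth1
set interfaces xe-11/0/0 gigether-options redundant-parent reth1
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 192.0.2.1/24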

Thanks!

 


IPv6 via Tunnel Broker

One of the things I can’t currently do with the two ISPs I connect with is native IPv6 (with a static prefix). In the meantime, I’ve sourced IPv6 tunnels as close to my physical location as possible. Currently I have this set up in New Zealand, on a Christchurch VDSL connection with a tunnel broker in Wellington. In this post I will demonstrate the configuration required to get this up and running nice and quickly on your network. This will be based on Junos OS. Because I’m working with SRX here, we need to check the status of IPv6 flow forwarding and enable it if it isn’t already:

perrin@srx-nz# run show security flow status 
  Flow forwarding mode:
    Inet forwarding mode: flow based
    Inet6 forwarding mode: drop
    MPLS forwarding mode: drop
    ISO forwarding mode: drop
    Advanced services data-plane memory mode: Default
  Flow trace status
    Flow tracing status: off
  Flow session distribution
    Distribution mode: RR-based

It is disabled, so we need to enable it and then reboot the router.

perrin@srx-nz# set security forwarding-options family inet6 mode flow-based 

perrin@srx-nz# commit check 
warning: You have enabled/disabled inet6 flow.
You must reboot the system for your change to take effect.
If you have deployed a cluster, be sure to reboot all nodes.
configuration check succeeds
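
For completeness, once the change is committed the reboot itself is just the standard request (on a cluster you would repeat it on each node):

perrin@srx-nz# commit and-quit

perrin@srx-nz> request system reboot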

 

The first thing you’ll need to do is sign up with a tunnel broker. I won’t name names; these are easily found with a quick Google search. Basically a tunnel broker provides a means to connect to the IPv6 internet, which as of today (24/1/14) has about 16,482 prefixes. This is achieved by routing the IPv6 traffic through an IPv4 tunnel until it reaches a destination that can route natively over IPv6. This is what I have done via an IPIP tunnel with my tunnel broker as follows:

set interfaces ip-0/0/0 unit 2 description "inet6.0 - TUNNEL_SERVICE - Tunnel to xxxx xxxx"
set interfaces ip-0/0/0 unit 2 tunnel source 202.124.x.x
set interfaces ip-0/0/0 unit 2 tunnel destination 202.21.x.x
set interfaces ip-0/0/0 unit 2 family inet
set interfaces ip-0/0/0 unit 2 family inet6 mtu 1480
set interfaces ip-0/0/0 unit 2 family inet6 address 2001:4428:x:x::2/64

Cool, so if you’re familiar with IPIP tunnels, you’ll know they encapsulate the IP traffic with a new outer IP header carrying a new source/destination. Reasonably simple and with less overhead than a GRE tunnel, which is good for my DSL connections. The tunnel broker assigned me a /64 for the tunnel and then another /64 to assign to my LAN. As you can see above, you specify in the tunnel source and destination fields your local external IPv4 address and the tunnel broker’s IPv4 address, with the tunnel /64 configured under family inet6.
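
One other thing worth mentioning: because this SRX is in flow mode, the tunnel unit still needs to live in a security zone (and be covered by policies) before IPv6 traffic will actually flow through it. A minimal sketch, with hypothetical zone names, would be along these lines:

set security zones security-zone v6-untrust interfaces ip-0/0/0.2
set security zones security-zone v6-untrust host-inbound-traffic system-services ping
set security policies from-zone trust to-zone v6-untrust policy v6-out match source-address any
set security policies from-zone trust to-zone v6-untrust policy v6-out match destination-address any
set security policies from-zone trust to-zone v6-untrust policy v6-out match application any
set security policies from-zone trust to-zone v6-untrust policy v6-out then permit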

set interfaces vlan unit 5 family inet6 address 2001:4428:200:x::x/64

Vlan 5 is the SVI I use on my LAN, which is where I need to hand out IPv6 addresses. To do this, I use router advertisements.

set protocols router-advertisement interface vlan.5 prefix 2001:4428:200:8x::/64

This lets all compatible devices assign themselves global IPv6 addresses via SLAAC.

perrin@server:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr 00:12:79:bd:d6:8b  
          inet addr:192.168.1.50  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: 2001:4428:200:812e:x:x:x:x/64 Scope:Global
**output omitted**
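
On the SRX side, a couple of operational commands are handy for confirming the router advertisements are actually going out and that neighbours are being learnt (I’m just listing the commands here rather than pasting output):

perrin@srx-nz> show ipv6 router-advertisement
perrin@srx-nz> show ipv6 neighbors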

Naturally, to route traffic out of the local network we need a default route pointing at the IPv6 address on the far end of the IPIP tunnel:

set routing-options rib inet6.0 static route ::/0 next-hop 2001:4428:200:x::1

That’s pretty much it to be honest. Simple IPv6 routing. Be sure to find a destination as close as possible to you to eliminate any potential unnecessary latency.

To verify, I would test both on the router and the host that has been assigned the addresses dynamically.

perrin@srx-nz> ping 2001:4428:200:x::1 source 2001:4428:200:x::2  
PING6(56=40+8+8 bytes) 2001:4428:200:x::2 --> 2001:4428:200:x::1
16 bytes from 2001:4428:200:x::1, icmp_seq=0 hlim=64 time=46.450 ms
16 bytes from 2001:4428:200:x::1, icmp_seq=1 hlim=64 time=43.979 ms
16 bytes from 2001:4428:200:x::1, icmp_seq=2 hlim=64 time=45.496 ms
16 bytes from 2001:4428:200:x::1, icmp_seq=3 hlim=64 time=46.494 ms
^C
--- 2001:4428:200:x::1 ping6 statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 43.979/45.605/46.494/1.020 ms

Just a note here: Chorus NZ is running a very harsh DLM profile on my VDSL, resulting in roughly 35ms of last-mile latency… And from the host:

perrin@server:~$ ping6 google.com
PING google.com(2404:6800:4006:806::1000) 56 data bytes
64 bytes from 2404:6800:4006:806::1000: icmp_seq=1 ttl=52 time=78.2 ms
64 bytes from 2404:6800:4006:806::1000: icmp_seq=2 ttl=52 time=78.2 ms
64 bytes from 2404:6800:4006:806::1000: icmp_seq=3 ttl=52 time=78.9 ms
64 bytes from 2404:6800:4006:806::1000: icmp_seq=4 ttl=52 time=81.9 ms
^C
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 78.234/79.351/81.942/1.549 ms
perrin@server:~$ ping6 snap.net.nz
PING snap.net.nz(cookiemonster.snap.net.nz) 56 data bytes
64 bytes from cookiemonster.snap.net.nz: icmp_seq=1 ttl=59 time=47.7 ms
64 bytes from cookiemonster.snap.net.nz: icmp_seq=2 ttl=59 time=47.1 ms
64 bytes from cookiemonster.snap.net.nz: icmp_seq=3 ttl=59 time=47.8 ms
64 bytes from cookiemonster.snap.net.nz: icmp_seq=4 ttl=59 time=47.9 ms
64 bytes from cookiemonster.snap.net.nz: icmp_seq=5 ttl=59 time=49.4 ms
^C
--- snap.net.nz ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 47.150/48.014/49.429/0.771 ms

That’s it. Enjoy, but remember: you’ve now opened up your router to the IPv6 internet, so you should make sure you secure your RE in the same way you would for IPv4 traffic for things like SSH, SNMP and any other protocols you run facing the internet.
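
As a rough starting point, something along these lines on lo0 will limit who can reach the RE over v6 for management. Filter, term and prefix-list names here are made up, and you would want to add terms for neighbor discovery/ICMPv6 rate limiting and any routing protocols you run before the final discard:

set policy-options prefix-list MGMT-V6 2001:4428:x:x::/64
set firewall family inet6 filter PROTECT-RE-V6 term ssh from source-prefix-list MGMT-V6
set firewall family inet6 filter PROTECT-RE-V6 term ssh from next-header tcp
set firewall family inet6 filter PROTECT-RE-V6 term ssh from destination-port ssh
set firewall family inet6 filter PROTECT-RE-V6 term ssh then accept
set firewall family inet6 filter PROTECT-RE-V6 term icmp6 from next-header icmp6
set firewall family inet6 filter PROTECT-RE-V6 term icmp6 then accept
set firewall family inet6 filter PROTECT-RE-V6 term everything-else then discard
set interfaces lo0 unit 0 family inet6 filter input PROTECT-RE-V6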

MP-BGP signalled VPLS and L3 VRFs over MPLS on SRX

My last post covered how to set up an RSVP-based MPLS tunnel between two SRXs. The same general concepts apply to any Juniper router, whether SRX, J or MX, and whether it’s in flow mode like my SRXs or not. In this post, I will set up an L2 VPLS and an L3 VRF and do all the verification to show you the correct operation of these running on the network. (This post covers multipoint VPLS only; in the next one I’ll move into point-to-point L2 tunnels, using LDP and BGP signalling.)

Currently I have a simple MPLS network using RSVP between two Juniper SRX router/firewalls. There is one path in each direction and no complicated knobs or features being utilised at this stage. What I want to achieve in this post is to build a Layer 3 VRF to exchange routes in, and a VPLS to bridge a LAN between the two routers.

perrin@srx-au> show configuration protocols bgp group mpls-core | display set
set protocols bgp group mpls-core type internal
set protocols bgp group mpls-core multihop
set protocols bgp group mpls-core local-address 119.69.0.5
set protocols bgp group mpls-core family inet unicast
set protocols bgp group mpls-core family inet multicast
set protocols bgp group mpls-core family inet-vpn unicast
set protocols bgp group mpls-core family inet6 unicast
set protocols bgp group mpls-core family inet6 multicast
set protocols bgp group mpls-core family inet6-vpn unicast
set protocols bgp group mpls-core family l2vpn signaling
set protocols bgp group mpls-core graceful-restart
set protocols bgp group mpls-core neighbor 119.69.0.1

Just to quickly recap, here is the BGP configuration we will be using to signal our VPNs. Applicable to this situation are “l2vpn signaling” and “inet-vpn unicast”; these enable BGP to carry the correct NLRI for each service we are enabling.

I’ll start with the L3 VRF

perrin@srx-au> show configuration routing-instances |display set 
set routing-instances CORE_L3VPN_LAB instance-type vrf
set routing-instances CORE_L3VPN_LAB interface lt-0/0/0.1
set routing-instances CORE_L3VPN_LAB route-distinguisher 119.69.0.5:1001
set routing-instances CORE_L3VPN_LAB vrf-target target:64514:1001

Not too bad, yeah? I'll explain a few things quickly. Firstly there is the vrf-target. This specifies the BGP community to match when exchanging routes with other PE routers, or in this case, the SRX at the other end. Routes for this VPN are carried across the MPLS network with an inner VPN label; in a larger MPLS network the transport label would be pushed on top of that, and the P routers would use it to pass the traffic through to the correct egress router. The PE exports all routes from this VRF tagged with "64514:1001" and imports them into any VRF with a matching target. The "route-distinguisher" essentially "distinguishes" which prefix belongs to which VRF; the RD is prepended to each route when it is transported across the network. Naturally there is a lot more detail to each of these two components, but I'm trying to keep this a hands-on, "let's actually configure it" exercise.
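
Incidentally, vrf-target is really just shorthand for a pair of vrf-import/vrf-export policies matching that community. If you ever need finer control you can spell it out yourself in place of the vrf-target statement, roughly like this (policy and community names are my own):

set policy-options community L3VPN-1001 members target:64514:1001
set policy-options policy-statement L3VPN-1001-EXPORT term 1 then community add L3VPN-1001
set policy-options policy-statement L3VPN-1001-EXPORT term 1 then accept
set policy-options policy-statement L3VPN-1001-IMPORT term 1 from community L3VPN-1001
set policy-options policy-statement L3VPN-1001-IMPORT term 1 then accept
set routing-instances CORE_L3VPN_LAB vrf-import L3VPN-1001-IMPORT
set routing-instances CORE_L3VPN_LAB vrf-export L3VPN-1001-EXPORT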

To demonstrate the redistribution of routes across this VPN, I have done a simple route import from my inet.0 table on one SRX:

set routing-options interface-routes rib-group inet inet.0-to-CORE_L3VPN_LAB
set routing-options rib-groups inet.0-to-CORE_L3VPN_LAB import-rib inet.0
set routing-options rib-groups inet.0-to-CORE_L3VPN_LAB import-rib CORE_L3VPN_LAB.inet.0

That is a RIB copy of interface routes only, copying from inet.0 into CORE_L3VPN_LAB.inet.0. Here's an example before the RIB copy:

perrin@srx-nz# run show route table CORE_L3VPN_LAB terse 

CORE_L3VPN_LAB.inet.0: 3 destinations, 4 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop         AS path
* 1.1.1.0/24         D   0                       >lt-0/0/0.1   
                     B 170        100            >gr-0/0/0.2       I
* 1.1.1.2/32         L   0                        Local
* 192.168.1.0/24     B 170        100            >gr-0/0/0.2       I

and after the RIB copy is applied

perrin@srx-nz> show route table CORE_L3VPN_LAB terse   

CORE_L3VPN_LAB.inet.0: 34 destinations, 37 routes (33 active, 0 holddown, 1 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop         AS path
* 1.1.1.0/24         D   0                       >lt-0/0/0.1   
                     B 170        100            >gr-0/0/0.2       I
* 1.1.1.2/32         L   0                        Local
* 10.0.2.0/24        D   0                       >lo0.0        
* 10.0.2.1/32        L   0                        Local
* 10.0.252.0/31      D   0                       >gr-0/0/0.3   
* 10.0.252.0/32      L   0                        Local
* 10.0.255.0/31      D   0                       >gr-0/0/0.0   
* 10.0.255.0/32      L   0                        Local
* 10.0.255.2/31      D   0                       >ip-0/0/0.3   
* 10.0.255.2/32      L   0                        Local
* 10.0.255.4/31      D   0                       >gr-0/0/0.2   
* 10.0.255.4/32      L   0                        Local
* 10.0.255.12/30     D   0                       >gr-0/0/0.0   
* 10.0.255.13/32     L   0                        Local
* 10.0.255.240/30    D   0                       >ip-0/0/0.0   
* 10.0.255.242/32    L   0                        Local
* 119.69.0.1/32      D   0                       >lo0.0        
* 172.16.1.0/30      D   0                       >vlan.20      
* 172.16.1.1/32      L   0                        Local
* 172.16.2.0/32      D   0                       >lo0.0        
* 172.16.5.0/31      D   0                       >vlan.30      
* 172.16.5.0/32      L   0                        Local
* 172.16.31.254/32   D   0                       >lo0.0        
* 172.31.255.254/32  D   0                       >lo0.0        
* 172.32.255.254/32  D   0                       >lo0.0        
* 192.168.0.16/30    D   0                       >gr-0/0/0.1   
* 192.168.0.18/32    L   0                        Local
* 192.168.1.0/24     D   0                       >vlan.5       
                     B 170        100            >gr-0/0/0.2       I
* 192.168.1.2/32     L   0                        Local
* 192.168.1.251/32   L   0                        Local
* 202.124.101.64/29  D   0                       >ge-0/0/0.0   
                     D   0                       >ge-0/0/0.0   
* 202.124.101.66/32  L   0                        Local
* 202.124.101.69/32  L   0                        Local

As you can see, we now have a heap of routes populated that are ordinarily interface routes on the srx-nz router. All going to plan, these routes should now have been transported via the MPLS tunnel to the other end.

perrin@srx-au> show route table CORE_L3VPN_LAB.inet.0          

CORE_L3VPN_LAB.inet.0: 17 destinations, 17 routes (17 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.0/24         *[Direct/0] 6d 03:57:59
                    > via lt-0/0/0.1
1.1.1.1/32         *[Local/0] 6d 03:57:59
                      Local via lt-0/0/0.1
10.0.2.0/24        *[BGP/170] 00:01:10, localpref 100, from 119.69.0.1
                      AS path: I
                    > via gr-0/0/0.2, label-switched-path 5-to-1
10.0.2.1/32        *[BGP/170] 00:01:10, localpref 100, from 119.69.0.1
                      AS path: I
                    > via gr-0/0/0.2, label-switched-path 5-to-1
10.0.252.0/31      *[BGP/170] 00:01:10, localpref 100, from 119.69.0.1
                      AS path: I
                    > via gr-0/0/0.2, label-switched-path 5-to-1
10.0.255.0/31      *[BGP/170] 00:01:10, localpref 100, from 119.69.0.1
                      AS path: I
                    > via gr-0/0/0.2, label-switched-path 5-to-1
10.0.255.2/31      *[BGP/170] 00:01:10, localpref 100, from 119.69.0.1
***output ommited***

Excellent, everything is working well. I haven't displayed everything here but you get the idea: all the routes have been learned from BGP and resolve via LSP 5-to-1. If you were using a route reflector for your BGP, the "from 119.69.0.1" would be the reflector's address; in my lab the BGP session is simply point-to-point between loopbacks. By default, BGP is the only protocol that can use the inet.3 table, which is what it uses to resolve these next hops.

perrin@srx-au> show route table inet.3    
inet.3: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

119.69.0.1/32      *[RSVP/7/1] 05:01:26, metric 1
                    > via gr-0/0/0.2, label-switched-path 5-to-1

For a more in-depth look at a route, you can add the extensive flag on the end to view more detail such as BGP attributes and label operations:

perrin@srx-au> show route table CORE_L3VPN_LAB.inet.0 10.0.2.0/24 extensive 

CORE_L3VPN_LAB.inet.0: 17 destinations, 17 routes (17 active, 0 holddown, 0 hidden)
10.0.2.0/24 (1 entry, 1 announced)
TSI:
KRT in-kernel 10.0.2.0/24 -> {indirect(262146)}
        *BGP    Preference: 170/-101
                Route Distinguisher: 119.69.0.1:1001
                Next hop type: Indirect
                Address: 0x15b95bc
                Next-hop reference count: 18
                Source: 119.69.0.1
                Next hop type: Router, Next hop index: 643
                Next hop: via gr-0/0/0.2 weight 0x1, selected
                Label-switched-path 5-to-1
                Label operation: Push 315504
                Label TTL action: prop-ttl
                Protocol next hop: 119.69.0.1
                Push 315504
                Indirect next hop: 1694bc8 262146
                State: 
                Local AS: 64514 Peer AS: 64514
                Age: 53:00 	Metric2: 1 
                Task: BGP_64514.119.69.0.1+179
                Announcement bits (1): 1-KRT 
                AS path: I
                Communities: target:64514:1001
                Import Accepted
                VPN Label: 315504
                Localpref: 100
                Router ID: 119.69.0.1
                Primary Routing Table bgp.l3vpn.0
                Indirect next hops: 1
                        Protocol next hop: 119.69.0.1 Metric: 1
                        Push 315504
                        Indirect next hop: 1694bc8 262146
                        Indirect path forwarding next hops: 1
                                Next hop type: Router
                                Next hop: via gr-0/0/0.2 weight 0x1
			119.69.0.1/32 Originating RIB: inet.3
			  Metric: 1			  Node path count: 1
			  Forwarding nexthops: 1
				Nexthop: via gr-0/0/0.2

That's pretty much that sorted: a fully operational L3 VRF over RSVP-signalled MPLS. Now I'll show you the VPLS.

perrin@srx-au> show configuration routing-instances CORE_VPLS_LAB | display set 
set routing-instances CORE_VPLS_LAB instance-type vpls
set routing-instances CORE_VPLS_LAB vlan-id none
set routing-instances CORE_VPLS_LAB interface lt-0/0/0.0
set routing-instances CORE_VPLS_LAB route-distinguisher 119.69.0.5:1000
set routing-instances CORE_VPLS_LAB vrf-target target:64514:1000
set routing-instances CORE_VPLS_LAB protocols vpls site-range 255
set routing-instances CORE_VPLS_LAB protocols vpls no-tunnel-services
set routing-instances CORE_VPLS_LAB protocols vpls site srx-au site-identifier 5
set routing-instances CORE_VPLS_LAB protocols vpls connectivity-type ce

This example assumes prior knowledge of VPLS operation, things like MAC address learning and the NLRI components that carry the VPLS attributes. With the VPLS configured on both sides, we can verify it with a few commands. Note also that vrf-target and route-distinguisher behave here in much the same way as they do with L3 VRFs.
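
For completeness, the srx-nz end is simply a mirror image of the srx-au config above. Based on the values used in this lab it would look roughly like this (the attachment interface on that side is my assumption):

set routing-instances CORE_VPLS_LAB instance-type vpls
set routing-instances CORE_VPLS_LAB vlan-id none
set routing-instances CORE_VPLS_LAB interface lt-0/0/0.0
set routing-instances CORE_VPLS_LAB route-distinguisher 119.69.0.1:1000
set routing-instances CORE_VPLS_LAB vrf-target target:64514:1000
set routing-instances CORE_VPLS_LAB protocols vpls site-range 255
set routing-instances CORE_VPLS_LAB protocols vpls no-tunnel-services
set routing-instances CORE_VPLS_LAB protocols vpls site srx-nz site-identifier 1
set routing-instances CORE_VPLS_LAB protocols vpls connectivity-type ce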

perrin@srx-au> show vpls connections 
**output ommited**
Instance: CORE_VPLS_LAB
  Local site: srx-au (5)
    connection-site           Type  St     Time last up          # Up trans
    1                         rmt   Up     Jan 20 17:00:59 2014           1
      Remote PE: 119.69.0.1, Negotiated control-word: No
      Incoming label: 262145, Outgoing label: 262157
      Local interface: lsi.1051648, Status: Up, Encapsulation: VPLS
        Description: Intf - vpls CORE_VPLS_LAB local site 5 remote site 1
perrin@srx-au> show route table CORE_VPLS_LAB 

CORE_VPLS_LAB.l2vpn.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

119.69.0.1:1000:1:1/96                
                   *[BGP/170] 05:29:56, localpref 100, from 119.69.0.1
                      AS path: I
                    > via gr-0/0/0.2, label-switched-path 5-to-1
119.69.0.5:1000:5:1/96                
                   *[L2VPN/170/-101] 6d 04:38:11, metric2 1
                      Indirect

Here you can see the remote VPLS site is signalled as Up; that command also shows the label operation and the local label-switched interface. LSIs are used with no-tunnel-services, as you no longer need a tunnel PIC to run VPLS.

We can go into more detail by adding extensive on the end to view things like community tags and local-pref:

perrin@srx-au> show route table CORE_VPLS_LAB extensive 

CORE_VPLS_LAB.l2vpn.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
 119.69.0.1:1000:1:1/96 (1 entry, 1 announced)
        *BGP    Preference: 170/-101
                Route Distinguisher: 119.69.0.1:1000
                Next hop type: Indirect
                Address: 0x15ba0b8
                Next-hop reference count: 5
                Source: 119.69.0.1
                Protocol next hop: 119.69.0.1
                Indirect next hop: 2 no-forward
                State: 
                Local AS: 64514 Peer AS: 64514
                Age: 5:38:48 	Metric2: 1 
                Task: BGP_64514.119.69.0.1+179
                Announcement bits (1): 0-CORE_VPLS_LAB-l2vpn 
                AS path: I
                Communities: target:64514:1000 Layer2-info: encaps:VPLS, control flags:, mtu: 0, site preference: 100
                Import Accepted
                Label-base: 262153, range: 8
                Localpref: 100
                Router ID: 119.69.0.1
                Primary Routing Table bgp.l2vpn.0
                Indirect next hops: 1
                        Protocol next hop: 119.69.0.1 Metric: 1
                        Indirect next hop: 2 no-forward
                        Indirect path forwarding next hops: 1
                                Next hop type: Router
                                Next hop: via gr-0/0/0.2 weight 0x1
			119.69.0.1/32 Originating RIB: inet.3
			  Metric: 1			  Node path count: 1
			  Forwarding nexthops: 1
				Nexthop: via gr-0/0/0.2
**output ommited**

Unfortunately, one really crucial command that in my opinion seems to be missing from the SRX platform is "show vpls mac-table". This can be used to view the bridging table of the VPLS instance: remote sites show up against their appropriate LSI interfaces with the 48-bit MAC addresses, and locally learnt MACs show against the local interface, i.e. ge-0/0/0 or in my case lt-0/0/0.0. Of course, on a proper MPLS router like the MX line these commands are all supported and work well.

That wraps up a quick overview of how to provision simple VPLS and L3 services over MP-BGP signalled MPLS. I can appreciate that this kind of configuration is all over the internet, but here I am trying to demonstrate a working model over some very basic Juniper routers, at low cost and between two countries :) Enjoy, and stay tuned for some reading on LDP-based technology and more advanced RSVP-TE concepts.

RSVP based MPLS over GRE on a Flow based Juniper SRX

Recently I moved from Christchurch to Sydney and wanted to have some fun doing various labs and tests between two SRXs, one over here and the other in Christchurch. These were mentioned in my first blog if you haven’t already seen it. The reason I’m writing this post is that I’ve found a gap online: I can’t seem to find anyone who has written up RSVP-based MPLSoGRE. There are a few variants, some using LDPoGRE or LDPoGREoIPSEC, which is all very good, but if you’re like me, you’ll love the power and granularity of RSVP. The other reason is that my move also took me into a new company that doesn’t use Juniper mainstream on the SP network, and any Junos I could use for anything was keeping me sane!

Aside from doing virtual labs on VMware Firefly machines, I wanted to give this real-world scenario a go: running an MPLS network between Christchurch and Sydney over two simple DSL connections using SRX. Currently, and as far as I know, SRXs can’t do MPLS in flow mode; however, Junos OS allows you to selectively switch traffic to packet mode, which I’ll demonstrate soon. Of course you could run your SRX or J series router entirely in packet mode, but depending on the application of your Juniper device, that may just not suit. First I’ll verify for you that my SRX is in flow mode:

perrin@srx-au> show security flow status 
  Flow forwarding mode:
    Inet forwarding mode: flow based
    Inet6 forwarding mode: drop
    MPLS forwarding mode: drop
    ISO forwarding mode: drop
    Advanced services data-plane memory mode: Default
  Flow trace status
    Flow tracing status: off
  Flow session distribution
    Distribution mode: RR-based

Setting up the configuration

This example will use GRE between two public IP addresses and won’t run over an IPsec VPN. (If you wanted to, you would just set up the GRE tunnel using the two addresses specified inside the IPsec section of your VPN:)

set security ipsec vpn srx_aus ike proxy-identity local 172.32.255.254/32
set security ipsec vpn srx_aus ike proxy-identity remote 172.32.255.255/32

I’ll make another post with more detail on that kind of setup. Anyhow, standard GRE:

set interfaces gr-0/0/0 unit 2 description "inet.0 - TUNNEL_SERVICE - Australia SRX GRE Tunnel for MPLS"
set interfaces gr-0/0/0 unit 2 tunnel source 20.20.20.20 
set interfaces gr-0/0/0 unit 2 tunnel destination 30.30.30.30
set interfaces gr-0/0/0 unit 2 family inet mtu 1516
set interfaces gr-0/0/0 unit 2 family inet address 10.0.255.4/31
set interfaces gr-0/0/0 unit 2 family mpls filter input MPLS-packet-mode

and loopback:

set interfaces lo0 unit 0 family inet address 119.69.0.5/32

I’ll quickly confirm the GRE tunnel operation.

perrin@srx-au> ping 10.0.255.4 source 10.0.255.5 count 5 
PING 10.0.255.4 (10.0.255.4): 56 data bytes
64 bytes from 10.0.255.4: icmp_seq=0 ttl=64 time=75.513 ms
64 bytes from 10.0.255.4: icmp_seq=1 ttl=64 time=74.488 ms
64 bytes from 10.0.255.4: icmp_seq=2 ttl=64 time=74.344 ms
64 bytes from 10.0.255.4: icmp_seq=3 ttl=64 time=73.671 ms
64 bytes from 10.0.255.4: icmp_seq=4 ttl=64 time=74.476 ms

--- 10.0.255.4 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 73.671/74.498/75.513/0.590 ms

You’ll notice I have a firewall filter added to family MPLS on the GRE interface. This is the key to running the MPLS on a flow based SRX.

perrin@srx-nz> show configuration firewall family mpls filter MPLS-packet-mode | display set    
set firewall family mpls filter MPLS-packet-mode term all-traffic then packet-mode
set firewall family mpls filter MPLS-packet-mode term all-traffic then accept

As this filter has no “from” statement, all traffic is matched and then processed by the “then” actions; as you can see, I’m turning packet-mode on for MPLS traffic only.
Because the rest of my SRX environment is still running in flow mode, there are a few small things that need to be accounted for. One of those is “intra-zone” traffic, that is, two interfaces inside the same zone talking to each other. In this particular setup, that is communication between lo0 and gr-0/0/0.2. To keep things simple for this example, I have added an allow-all policy for this zone.

set security policies from-zone Tunnel_Services to-zone Tunnel_Services policy allow_all match source-address any
set security policies from-zone Tunnel_Services to-zone Tunnel_Services policy allow_all match destination-address any
set security policies from-zone Tunnel_Services to-zone Tunnel_Services policy allow_all match application any
set security policies from-zone Tunnel_Services to-zone Tunnel_Services policy allow_all then permit

Here you can see the loopback-to-loopback RSVP traffic transiting gr-0/0/0.2 in the session table:

Session ID: 11663, Policy name: allow_all/9, Timeout: 1772, Valid
  In: 119.69.0.1/1 --> 119.69.0.5/1;rsvp, If: gr-0/0/0.2, Pkts: 1671, Bytes: 373840
  Out: 119.69.0.5/1 --> 119.69.0.1/1;rsvp, If: .local..0, Pkts: 0, Bytes: 0

Here is the security zone config for the interfaces participating in the MPLS:

perrin@srx-au> show configuration security zones security-zone Tunnel_Services | display set
set security zones security-zone Tunnel_Services interfaces gr-0/0/0.2 host-inbound-traffic protocols all
set security zones security-zone Tunnel_Services interfaces lo0.0 host-inbound-traffic protocols all

Now for the basic MPLS and RSVP configuration:

perrin@srx-au> show configuration protocols mpls | display set 
set protocols mpls label-switched-path 5-to-1 to 119.69.0.1
set protocols mpls interface gr-0/0/0.2
perrin@srx-au> show configuration protocols rsvp | display set    
set protocols rsvp interface gr-0/0/0.2

It’s a very simple configuration and I’m only specifying one very basic LSP; at this stage I am not running primary/secondary paths with fast reroute etc. Just keeping it basic for this demonstration (there will be further blogs with more advanced CSPF material). The backbone of any MPLS network in a service provider environment is a solid IGP configuration and, in my case, MP-BGP with all the right knobs turned on for the type of traffic I want to run over this network. Here are my example OSPF and BGP configurations.

perrin@srx-au> show configuration protocols ospf | display set 
set protocols ospf traffic-engineering
set protocols ospf area 0.0.0.0 interface lo0.0
set protocols ospf area 0.0.0.0 interface gr-0/0/0.2 interface-type p2p

perrin@srx-au> show configuration protocols bgp group mpls-core | display set
set protocols bgp group mpls-core type internal
set protocols bgp group mpls-core multihop
set protocols bgp group mpls-core local-address 119.69.0.5
set protocols bgp group mpls-core family inet unicast
set protocols bgp group mpls-core family inet multicast
set protocols bgp group mpls-core family inet-vpn unicast
set protocols bgp group mpls-core family inet6 unicast
set protocols bgp group mpls-core family inet6 multicast
set protocols bgp group mpls-core family inet6-vpn unicast
set protocols bgp group mpls-core family l2vpn signaling
set protocols bgp group mpls-core graceful-restart
set protocols bgp group mpls-core neighbor 119.69.0.1

OSPF here is pretty straightforward. I have changed the interface type to point-to-point mode: usually OSPF speaks to the multicast address 224.0.0.5, but now the SRX will unicast the OSPF packets to the remote end instead, which makes sense when there is only one device listening. You need to make 100% sure you have traffic-engineering turned on or your LSPs won’t come up; OSPF uses Type 10 opaque LSAs to carry the traffic engineering extensions, and this is what populates the traffic engineering database. As you can see above, there are a few more BGP parameters set up, many of them needed for what we are doing here. For example, to enable BGP to carry network layer reachability information (NLRI) for particular traffic types, I’ve configured “inet unicast”, and likewise “inet6” for IPv6 information. What we are going to use here most of all is “l2vpn signaling” and “inet-vpn unicast” for VPLS and L3 VRFs.

Verification

Let’s do some verification of what we have done so far. OSPF and BGP are up:

perrin@srx-au> show ospf neighbor 
Address          Interface              State     ID               Pri  Dead
10.0.255.4       gr-0/0/0.2             Full      111.69.0.1       128    32

perrin@srx-au> show ospf database opaque-area 

    OSPF database, Area 0.0.0.0
 Type       ID               Adv Rtr           Seq      Age  Opt  Cksum  Len 
OpaqArea 1.0.0.1          111.69.0.1       0x8000001e  1076  0x22 0x178f  28
OpaqArea*1.0.0.1          111.69.0.5       0x8000001e  1479  0x22 0x2777  28
OpaqArea 1.0.0.3          111.69.0.1       0x8000002c   120  0x22 0xded0 136
OpaqArea*1.0.0.3          111.69.0.5       0x8000002c   119  0x22 0x7d39 136

perrin@srx-au> show bgp neighbor 119.69.0.1     
Peer: 119.69.0.1+56907 AS 64514 Local: 119.69.0.5+179 AS 64514
  Type: Internal    State: Established    Flags: 
**Output ommited**

Great, everything is working so far. One thing I would recommend checking is that your OSPF database contains the OpaqArea LSA types! If you add detail to the end of that command you will get a world of information about what TE data OSPF is transporting. When checking the BGP neighbour, you can verify all the address families that have been configured:

perrin@srx-au> show bgp neighbor 111.69.0.1 | match "Address families" 
  Address families configured: inet-unicast inet-multicast inet-vpn-unicast inet6-unicast inet6-multicast inet6-vpn-unicast l2vpn-signaling

Right, let’s check the MPLS side of things, starting with whether the LSP is up.

perrin@srx-au> show rsvp interface 
RSVP interface: 1 active
                  Active Subscr- Static      Available   Reserved    Highwater
Interface   State resv   iption  BW          BW          BW          mark
gr-0/0/0.2  Up         1   100%  800Mbps     800Mbps     0bps        0bps  

perrin@srx-au> show mpls lsp 
Ingress LSP: 2 sessions
To              From            State Rt P     ActivePath       LSPname
119.69.0.1      191.69.0.5      Up     0 *                      5-to-1
Total 2 displayed, Up 1, Down 1

Egress LSP: 1 sessions
To              From            State   Rt Style Labelin Labelout LSPname 
119.69.0.5      119.69.0.1      Up       0  1 FF       3        - 1-to-5
Total 1 displayed, Up 1, Down 0

Transit LSP: 0 sessions
Total 0 displayed, Up 0, Down 0

We can also verify the CSPF operation by adding detail on the end of the previous command.

perrin@srx-au> show mpls lsp detail ingress 
Ingress LSP: 2 sessions

111.69.0.1
  From: 111.69.0.5, State: Up, ActiveRoute: 0, LSPname: 5-to-1
  ActivePath:  (primary)
  LSPtype: Static Configured
  LoadBalance: Random
  Encoding type: Packet, Switching type: Packet, GPID: IPv4
 *Primary                    State: Up
    Priorities: 7 0
    SmartOptimizeTimer: 180
    Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 1)
 10.0.255.4 S 
    Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt 20=Node-ID):
          10.0.255.4

You can see here that the LSP has successfully negotiated an ERO (Explicit Route Object) and received a confirming RRO (Record Route Object). To quickly summarise: the ERO lists the loose or strict LSRs the LSP must transit, while the RRO is a record of the LSRs the LSP actually transited while establishing the path, essentially a list of the addresses the path has gone through.
Now remember, when creating an LSP you are building a path in one direction only; this is not automatically a bidirectional path, so you must remember to create a mirrored LSP on your remote device. This can be verified with the “show mpls lsp” command by confirming you have both an ingress and an egress LSP. Naturally, if you were running an LDP-based MPLS topology things would be a little different, but for RSVP, that’s how we take care of business.
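In my case the mirrored config on the Christchurch box is just the reverse of what is shown above, roughly as follows (the GRE unit number on that side is my assumption):

set protocols mpls label-switched-path 1-to-5 to 119.69.0.5
set protocols mpls interface gr-0/0/0.2
set protocols rsvp interface gr-0/0/0.2
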
At this stage, you are pretty much ready to add your L2 and L3 VRFs into the mix and get some traffic running across this link. I will make another post on configuring these and I will go into further detail on setup and verification.

I hope this will serve as a real world working example that will help with studies or maybe even a production application for anyone who is interested. From here, I will be adding some posts on some more advanced features of RSVP like using fast reroute, primary, secondary paths and some other cool stuff like that in the future as well!
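
For the curious, the sort of knobs I mean by primary/secondary paths and fast reroute look roughly like this on an LSP (path names and the hop here are made up, and none of this is applied in the lab above; I’ll cover it properly in a later post):

set protocols mpls path VIA-POP-A 10.0.255.4 strict
set protocols mpls path VIA-POP-B
set protocols mpls label-switched-path 5-to-1 primary VIA-POP-A
set protocols mpls label-switched-path 5-to-1 secondary VIA-POP-B standby
set protocols mpls label-switched-path 5-to-1 fast-reroute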

Source NAT off for SSH access to Junos OS security device

A quick tip if you run an SRX or J series router in an office or home environment and you connect to the device’s external IP address for SSH access from within the internal LAN. So you have some really simple source NAT rules for your trusted network, i.e.:

set security nat source rule-set SRX_LAN-to-InternetCombined from zone SRX_LAN
set security nat source rule-set SRX_LAN-to-InternetCombined to zone InternetCombined
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_1 match source-address 192.168.5.0/24
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_1 match destination-address 0.0.0.0/0
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_1 then source-nat interface

The issue I’ve come across with this setup is that when creating an SSH session from 192.168.5.x to the router’s external IP address, my very generic and broad destination-address rule catches the session and applies source NAT, essentially making it look like (using 1.1.1.1 as my external IP for this example) 1.1.1.1 is trying to SSH to 1.1.1.1. This gets all kinds of annoying, and I don’t want to have to create policies or complicated flow-based workarounds; besides, 1.1.1.1 isn’t inside my SRX_LAN zone if you’re following the from and to statements of the NAT rule-set. So the easiest way is a little bit of source-nat off, which disables the NAT function for whichever term or rule you specify. This makes my config look like this instead.

set security nat source rule-set SRX_LAN-to-InternetCombined from zone SRX_LAN
set security nat source rule-set SRX_LAN-to-InternetCombined to zone InternetCombined
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_0 match source-address 192.168.5.0/24
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_0 match destination-address 1.1.1.1/32
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_0 match destination-port 22
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_0 then source-nat off
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_1 match source-address 192.168.5.0/24
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_1 match destination-address 0.0.0.0/0
set security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_1 then source-nat interface

I’ve created a small rule, inserted before rule Interface_NAT_1, that matches the LAN source of 192.168.5.0/24 and a destination port of 22 and turns source NAT off. Simple and easy for a home user. What actually got me doing this was that I use a connection manager for the devices I need to manage; that list is upwards of 100 devices, so I didn’t want duplicates of the same device just to reach the internal or external interface depending on where I was at the time. Having just the one entry with the external IP address is now all I need.
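
One gotcha: set commands append new rules to the end of a rule-set, so if Interface_NAT_1 already exists you need to reorder things yourself. The configuration-mode insert command should sort that out, something like:

insert security nat source rule-set SRX_LAN-to-InternetCombined rule Interface_NAT_0 before rule Interface_NAT_1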

SRX and IRB interface compatibility

So I was doing some labbing on the gear I physically have at the moment and came across some unusual and potentially buggy behaviour on a Juniper SRX 110H and a 220H. I’ll set the scene really quickly: I have an SRX 110 and an SRX 220, one in Christchurch, New Zealand and the other in Sydney. Between them I am running an RSVP-signalled MPLS connection over GRE. These are running in full flow firewall mode, with exceptions for MPLS etc.

The aim of the lab was to set up a VPLS between the two sites and to test its operation. A fairly standard piece of work for those familiar with it; however, as you may know, it is hard to test end-to-end connectivity with basic tools such as ping or traceroute unless you have devices hanging off the interfaces that have been added into the VPN, specially set up and all that. I just wanted a proof of concept, and usually when I’m testing a VPLS end to end, I use a routed interface within the VPLS in the form of an IRB interface, which in turn is added into an L3 VRF. The idea being: ping via the L3 VRF, across the VPLS, and back into the remote side’s L3 VRF. Easy right? Well, the SRX did not like the IRB interface, and I’m not talking about unsupported platforms, software or anything like that.

First off, let’s get some config:

set routing-instances CORE_VPLS_LAB instance-type vpls
set routing-instances CORE_VPLS_LAB vlan-id none
set routing-instances CORE_VPLS_LAB routing-interface irb.10
set routing-instances CORE_VPLS_LAB route-distinguisher 192.168.1.1:1000
set routing-instances CORE_VPLS_LAB vrf-target target:64514:1000
set routing-instances CORE_VPLS_LAB protocols vpls site-range 255
set routing-instances CORE_VPLS_LAB protocols vpls no-tunnel-services
set routing-instances CORE_VPLS_LAB protocols vpls site srx-au site-identifier 1
set routing-instances CORE_VPLS_LAB protocols vpls connectivity-type irb

..and the L3 VRF side

set routing-instances CORE_L3VPN_LAB instance-type vrf
set routing-instances CORE_L3VPN_LAB interface irb.10
set routing-instances CORE_L3VPN_LAB route-distinguisher 192.168.1.1:1001
set routing-instances CORE_L3VPN_LAB vrf-target target:64514:1001

.. and the irb interface

set interfaces irb unit 10 family inet address 1.1.1.1/24

Ok, so the interface has been added into the correct VRF/VPN; however, on this SRX, the issue I came across is that the system DOES NOT recognise that the interface has been added into a VRF/VPN, and it therefore stays in a DOWN state.
Shown here:

perrin@srx-au# run show interfaces terse irb                 
Interface               Admin Link Proto    Local                 Remote
irb                     up    up  
irb.10                  up    down inet     1.1.1.1/24

.. and here lies the problem

perrin@srx-au# run show interfaces irb.10    
  Logical interface irb.10 (Index 92) (SNMP ifIndex 591) 
    Flags: Hardware-Down SNMP-Traps 0x0 Encapsulation: ENET2
    Bandwidth: 1000mbps
    Routing Instance: None Bridging Domain: None
    Input packets : 0 
    Output packets: 0
    Security: Zone: Null
    Protocol inet, MTU: 1514
      Flags: Sendbcast-pkt-to-re
      Addresses, Flags: Dest-route-down Is-Preferred Is-Primary
        Destination: 1.1.1/24, Local: 1.1.1.1, Broadcast: 1.1.1.255

Essentially I am stuck here; even with a JNCIE-SP glancing over it briefly, we couldn’t come up with an instant fix for my testing procedure using an IRB interface. It is certainly an interesting one, and the SRX naturally can’t ping the IRB locally either. Now, disclaimer: SRXs are the jack of all trades, master of none, at least that’s my opinion of the low-end branch series devices, so it could therefore be something platform specific that I’m missing and that isn’t showing up in the usual places, or (help me) I’ve configured something wrong. Either way, I thought it was certainly of interest, especially to anyone dealing with L2/L3 troubleshooting and deployment. These devices are both running JUNOS Software Release [12.1X44-D25.5].