
I am load balancing traffic over two equal-size links that aggregate into the same VRF on the PE router (Juniper MX5, JunOS 11.4). Traffic from the CE (Cisco) is balancing nicely, but I need to get the reverse direction right.

I am not NATing inside the multi-site network; the only NAT happens on the edge firewall toward the Internet.

I have configured the VRF as follows on the Juniper PE router:

# show routing-instances {client}
instance-type vrf;
.
.
vrf-export {client}-load-balance;
.
.
routing-options {
    static {
        .
        .
        route 10.0.0.0/24 next-hop [ 196.33.144.11 196.33.144.3 ];
        .
        .
    }
}
forwarding-options {
    load-balance {
        indexed-next-hop;
        per-flow {
            hash-seed;
        }
    }
}

and in the main configuration this:

# show policy-options policy-statement {client}-load-balance
then {
     load-balance per-packet;
}

and

# show forwarding-options hash-key
family inet {
    layer-3;
    layer-4;
}

The router still chooses only the 196.33.144.3 next hop for the subnet's (10.0.0.0/24) traffic and does not balance over both links.

Here are some checks:

# run show route forwarding-table table {client}
Routing table: {client}.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
default            user     0 8:5b:e:84:4c:b0    ucst   561     3 ge-1/1/2.3017
default            perm     0                    rjct   961     1
0.0.0.0/32         perm     0                    dscd   959     1
10.0.0.0/24        user     0 196.33.144.3       ucst   589     5 ge-1/1/5.2100
10.0.0.55/32       user     0                    ucst   645     6 gr-1/1/10.1
10.0.0.210/32      user     0                    ucst   645     6 gr-1/1/10.1
10.0.6.0/24        user     0                    ucst   921     3 gr-1/1/10.16
.
.

and

# run show route 10.0.0.0 table {client}.inet.0

{client}.inet.0: 19 destinations, 20 routes (19 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.0.0.0/24        *[Static/5] 3d 07:43:36
                    > to 196.33.144.3 via ge-1/1/5.2100
                      to 196.33.144.11 via gr-1/1/10.1

and

# run show route table {client}.inet.0 detail

{client}.inet.0: 19 destinations, 20 routes (19 active, 0 holddown, 0 hidden)
.
.
10.0.0.0/24 (1 entry, 1 announced)
        *Static Preference: 5
                Next hop type: Router, Next hop index: 1048574
                Address: 0xb6b407c
                Next-hop reference count: 3
                Next hop: 196.33.144.3 via ge-1/1/5.2100, selected
                Next hop: 196.33.144.11 via gr-1/1/10.1
                State: <Active Int Ext>
                Age: 3d 7:46:23
                Task: RT
                Announcement bits (2): 0-RT 2-KRT
                AS path: I
                AS path: Recorded

10.0.0.55/32 (1 entry, 1 announced)
        *Static Preference: 5
.
.

There are guides explaining this for the router's default inet.0 instance, but I can't find examples of it being done inside a VRF.

I am trying the vrf-export statement as an alternative to "forwarding-table export load-balance-policy-name" because the VRF does not have the forwarding-table option.

Any ideas what I can try?

Shawn Gradwell
  • Are both next-hops reachable from the MX? – Jordan Head Jun 10 '15 at 19:47
  • Yes. I can ping both IPs successfully using: # run ping IP routing-instance {client} – Shawn Gradwell Jun 10 '15 at 19:49
  • Okay, let me lab this up - I have a hunch. – Jordan Head Jun 10 '15 at 19:59
  • _"I am trying the vrf-export command as an alternative for `forwarding-table export load-balance-policy-name`"_ That's strange; without modifying your forwarding-table, ECMP isn't going to work. I don't mean to offend you, but are you positive you're trying to put it in under the correct `edit` level? It should be `set routing-options forwarding-table export {client}-load-balance`. – Ryan Foley Jun 10 '15 at 20:21
  • Oh, I totally misread a portion of it. Ryan is absolutely right; you must apply the load-balancing policy to the hierarchy he mentioned. vrf-export isn't for load balancing, it's for things like route targets/distinguishers (see the sketch after these comments). – Jordan Head Jun 10 '15 at 20:26
  • @ShawnGradwell - I lab'd this up and I think your issue might be the fact that you have mixed interface types, one ge- and a GRE tunnel. I'm not entirely sure you can load balance over two different types. I tried it with 2 GRE tunnels, and obviously 2 ge-'s, and it worked. Once I mixed the two it stopped working. I'll try and chase this down. – Jordan Head Jun 10 '15 at 21:00
  • @RyanFoley - I'm not the OP =P – Jordan Head Jun 10 '15 at 21:00
  • Hah, no problem at all :) – Jordan Head Jun 10 '15 at 23:29
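
To illustrate Jordan's point: a vrf-export policy normally just tags routes with the route-target community so they are advertised into MP-BGP; it is not a hook for load balancing. A minimal sketch of such a policy, assuming a hypothetical community name and route-target value:

# show policy-options policy-statement {client}-vrf-export
term add-rt {
    then {
        community add {client}-target;
        accept;
    }
}

# show policy-options community {client}-target
members target:65000:100;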

1 Answer


It appears that you're applying the load-balancing policy to the routing instance via vrf-export. It needs to be applied to the forwarding table, under the main `[edit routing-options]` hierarchy rather than inside the VRF, for ECMP to be installed in the forwarding plane.

routing-options {
     forwarding-table {
          export load-balancing-policy;
     }
}
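
Here `load-balancing-policy` is the per-packet policy you already have; for completeness, something along these lines:

# show policy-options policy-statement load-balancing-policy
then {
    load-balance per-packet;
}

Despite the `per-packet` keyword, Junos on the MX actually hashes per flow when this policy is applied, so packets of an established flow stay on one link.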

To confirm it's working, you should see something similar to this. Note the additional entry in the forwarding table for 10.0.0.0/24 (marked with asterisks).

# run show route forwarding-table table {client}
Routing table: {client}.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
default            user     0 8:5b:e:84:4c:b0    ucst   561     3 ge-1/1/2.3017
default            perm     0                    rjct   961     1
0.0.0.0/32         perm     0                    dscd   959     1
10.0.0.0/24        user     0 196.33.144.3       ucst   589     5 ge-1/1/5.2100 *
10.0.0.0/24        user     0 196.33.144.11      ucst   645     6 gr-1/1/10.1   *
10.0.0.55/32       user     0                    ucst   645     6 gr-1/1/10.1
10.0.0.210/32      user     0                    ucst   645     6 gr-1/1/10.1
10.0.6.0/24        user     0                    ucst   921     3 gr-1/1/10.16
.
.
Ryan Foley
  • This worked! I also added the two next-hop IPs to this specific policy to lock it down to only that route, roughly as sketched below. Great stuff, thanks! – Shawn Gradwell Jun 11 '15 at 05:21
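
For reference, a scoped version of that policy might look something like the following sketch; the `from next-hop` match reflects what Shawn describes adding, and it limits per-packet load balancing to routes pointing at those two next hops:

# show policy-options policy-statement {client}-load-balance
term lb {
    from next-hop [ 196.33.144.11 196.33.144.3 ];
    then {
        load-balance per-packet;
    }
}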