Create two loopback interfaces on a Juniper MX router

Junos does not allow more than one loopback (lo0) unit in the same routing instance, so the configuration example below fails commit check:

root@router# show interfaces lo0
unit 0 {
    family inet {
        address 10.1.1.1/32;
    }
}
unit 1 {
    family inet {
        address 10.1.1.2/32;
    }
}

[edit]
root@ny-edge-r1# commit check
[edit interfaces lo0]
  'unit 1'
    if_instance: Multiple loopback interfaces not permitted in master routing instance
error: configuration check-out failed

[edit]
root@router#

 

Instead, we have to create a separate routing instance, place the second loopback unit there, and use a rib-group to import its direct route into the main routing table, inet.0:

root@router> show configuration interfaces lo0
unit 0 {
    family inet {
        address 10.1.1.1/32;
    }
}
unit 2 {
    family inet {
        address 10.1.1.2/32;
    }
}

root@router> show configuration routing-options
interface-routes {
    rib-group inet GROUP1;
}
rib-groups {
    GROUP1 {
        import-rib [ RI01.inet.0 inet.0 ];
        import-policy LO2-2-INET;
    }
}

root@router> show configuration policy-options policy-statement LO2-2-INET
term 10 {
    from instance RI01;
    then accept;
}
term 100 {
    then reject;
}

root@router> show configuration routing-instances
RI01 {
    instance-type virtual-router;
    interface lo0.2;
    routing-options {
        interface-routes {
            rib-group inet GROUP1;
        }
    }
}
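Once this is committed, the second loopback address should show up in inet.0 as a direct route imported from RI01. A quick way to verify is a standard route lookup:

root@router> show route 10.1.1.2/32 table inet.0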





RIB Group Confusion

This article is taken from http://www.subnetzero.info/2014/04/10/rib-group-confusion/

Continuing on the subject of confusing Junos features, I’d like to talk about RIB groups. When I started here at Juniper, I remember being utterly baffled by this feature and its use. RIB groups are confusing both because the official documentation is confusing, and because many people, trying to be helpful, say things that are entirely wrong. I do think there would have been an easier way to design this feature, but RIB groups are what we have, so that’s what I’ll talk about.


commit synchronize force

To enforce a commit synchronize on the Routing Engines, log in to the Routing Engine from which you want to synchronize and issue the commit synchronize command with the force option:

[edit]
user@host# commit synchronize force
re0:
re1:
commit complete
re0:
commit complete

[edit]
user@host#

 

For the commit synchronization process, the master Routing Engine commits the configuration and sends a copy of the configuration to the backup Routing Engine. Then the backup Routing Engine loads and commits the configuration. So, the commit synchronization between the master and backup Routing Engines takes place one Routing Engine at a time. If the configuration has a large text size or many apply-groups, commit times can be longer than desired.
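If you always want the Routing Engines kept in sync, you can make this behavior the default so that a plain commit acts like commit synchronize (a standard Junos configuration statement):

[edit]
user@host# set system commit synchronize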

SRX cluster initial setup


1. Power on both SRX units. Console to the first one.


2. Enable cluster mode and reboot the devices:
    On device A:    >set chassis cluster cluster-id 1 node 0 reboot
    On device B:    >set chassis cluster cluster-id 1 node 1 reboot
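After both nodes reboot, it is worth confirming that the cluster actually formed before continuing; one node should be primary and the other secondary for redundancy group 0 (a standard verification command):

root> show chassis cluster status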


3. Remove default configuration:
root> configure shared
 delete interfaces
 delete system services dhcp   
 delete security nat
 delete protocols stp
 set protocols rstp
 delete security policies
 delete security zones
 delete vlans

4. Configure authentication and ssh access on each device:
root# set system root-authentication plain-text-password
root# set system services ssh root-login allow

 

5. Configure the device-specific settings, such as host names and management IP addresses. This is the only part of the configuration that is unique to each node. All of the following commands are entered on the primary node:

For node0:
root# set groups node0 system host-name nyc-broadway-451-0
root# set groups node0 interfaces fxp0 unit 0 family inet address 172.25.25.1/24

For node1:
root# set groups node1 system host-name nyc-broadway-451-1
root# set groups node1 interfaces fxp0 unit 0 family inet address 172.25.25.2/24

Then apply the groups:
root# set apply-groups "${node}"


The 'set apply-groups "${node}"' command is required: the "${node}" variable expands to the local node name (node0 or node1) on each member, so each node applies only its own group and the per-node configuration stays local to that node.


6. Configure the FAB links (data plane links for RTO sync, etc):
     set interfaces fab0 fabric-options member-interfaces ge-0/0/2
     set interfaces fab0 fabric-options member-interfaces ge-0/0/3
     
     set interfaces fab1 fabric-options member-interfaces ge-5/0/2     
     set interfaces fab1 fabric-options member-interfaces ge-5/0/3
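With the fabric links configured and committed, the state of the control and fabric links can be checked (a standard verification command):

root> show chassis cluster interfaces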


7. Configure redundancy group 0 for the Routing Engine failover properties. Also configure redundancy group 1 to define the failover properties for the reth interfaces (in this example, all the interfaces will be in one redundancy group).
    
set chassis cluster reth-count 3
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
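Optionally, interface monitoring can be added so that redundancy group 1 fails over when a monitored revenue port goes down. This is a sketch; the interface names and the weight value below are assumptions for this example:

set chassis cluster redundancy-group 1 interface-monitor ge-0/0/15 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-5/0/15 weight 255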

Also configure the switch fabric (swfab) links, used to carry Ethernet-switching traffic between the nodes:
set interfaces swfab0 fabric-options member-interfaces ge-0/0/4
set interfaces swfab1 fabric-options member-interfaces ge-5/0/4


8. Configure interfaces
set interfaces ge-0/0/15 gigether-options redundant-parent reth0
set interfaces ge-5/0/15 gigether-options redundant-parent reth0
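To finish the reth0 configuration, the interface also needs to be tied to a redundancy group and given an address. The redundancy group number matches step 7; the unit and IP address below are assumptions for illustration:

set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 192.0.2.1/24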

Minimum effort SRX Cluster upgrade procedure

This is a minimum effort upgrade procedure for an SRX Branch cluster.

It is assumed that the cluster is managed through a reth interface, so there is no direct access to node1 via fxp0, and that the cluster is running at least Junos 10.1R1, which provides the ability to log in to the backup node from the master node.

For a minimum downtime upgrade procedure instead of a minimum effort one, see Juniper KB17947, or use the cable pulling method described in these forums by contributor rahula.


EX4200 virtual-chassis upgrade with minimal downtime

If the stack is running 12.1 or later code, the least downtime method is to use the nonstop-upgrade option.  That will do the upgrade member by member, taking only one down at a time.  You do need to ensure that you have the virtual chassis configured to run in non-stop mode though.

set chassis redundancy graceful-switchover
set ethernet-switching-options nonstop-bridging
set routing-options nonstop-routing
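Before starting the upgrade, it is worth confirming that nonstop routing is actually active after the commit; one standard check is the task replication state:

show task replication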

Once the stack is configured properly, doing a non-stop upgrade is only slightly different from the method you used.

request system software nonstop-upgrade reboot <package>