SRX240 cluster management interface

http://forums.juniper.net/t5/SRX-Services-Gateway/SRX240-fxp0-management-interfaces-not-working/td-p/76524

You've discovered one of the more aggravating "features" of the SRX products. The fxp0 interfaces become "out of band" management, and I use the quotes because Juniper has a very different opinion of what "out of band" means than many other manufacturers and customers. Personally, I think it's an incredibly impractical way to do management, and I don't even use fxp0 interfaces on my clusters because I can't stand the way Juniper thinks they should work.

Basically, you need a completely separate management network. In other words, your 10.26.4.0/25 network needs to be kept clear of any possible transit traffic through the SRX and routed one hop up from the SRX cluster. If your management PC does not live on that same network, you also need to configure "backup-router" statements in the node groups. For example, if your PC is at 192.168.0.5, you might have something like this:

groups {
    node0 {
        system {
            host-name f1-sou1;
            backup-router 10.26.4.126 destination 192.168.0.0/24;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 10.26.4.2/25;
                    }
                }
            }
        }
    }
    node1 {
        system {
            host-name f1-sou2;
            backup-router 10.26.4.126 destination 192.168.0.0/24;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 10.26.4.3/25;
                    }
                }
            }
        }
    }
}
apply-groups "${node}";
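
If you prefer working in set commands, the equivalent would look roughly like this (a sketch using the same example addresses; substitute your own):

set groups node0 system host-name f1-sou1
set groups node0 system backup-router 10.26.4.126 destination 192.168.0.0/24
set groups node0 interfaces fxp0 unit 0 family inet address 10.26.4.2/25
set groups node1 system host-name f1-sou2
set groups node1 system backup-router 10.26.4.126 destination 192.168.0.0/24
set groups node1 interfaces fxp0 unit 0 family inet address 10.26.4.3/25
set apply-groups "${node}"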

You'll also want to make sure you have system management services enabled on your fxp0 interface, if you plan to use that interface for ssh or web management:

system {
    services {
        ssh;
        web-management {
            http {
                interface fxp0;
            }
            https {
                system-generated-certificate;
                interface fxp0;
            }
        }
    }
}
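
Once that's committed, you should be able to reach each node directly on its own fxp0 address from the management network, something like this (the "admin" account is just a placeholder):

ssh admin@10.26.4.2     (node0)
ssh admin@10.26.4.3     (node1)
https://10.26.4.2/      (J-Web on node0)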

Another option is to use Virtual Chassis mode for the SRX:

http://kb.juniper.net/InfoCenter/index?page=content&id=KB18228&smlogin=true

You might find that this works a little nicer than fighting with the fxp0 nonsense. You can log into the active node via the reth interface, and if you need to access the secondary node, you can use "request routing-engine login node 1". This is how I manage my SRX clusters.
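
In practice it looks something like this (the host names and "admin" account are carried over from the earlier example, so adjust to taste):

{primary:node0}
admin@f1-sou1> request routing-engine login node 1

{secondary:node1}
admin@f1-sou2>

That session rides over the cluster's internal connection, so you don't need a reachable fxp0 on the secondary node at all.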