If the stack is running 12.1 or later code, the lowest-downtime method is the nonstop-upgrade (NSSU) option. It upgrades the members one at a time, so only one is down at any point. You do need to have the virtual chassis configured for nonstop operation first, though:
set chassis redundancy graceful-switchover
set ethernet-switching-options nonstop-bridging
set routing-options nonstop-routing
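Before you start, it doesn't hurt to confirm all three of those statements are actually committed; the plain show configuration commands are enough for that (you're just looking for the lines above in the output):

show configuration chassis redundancy
show configuration ethernet-switching-options
show configuration routing-options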
Once the stack is configured properly, doing a non-stop upgrade is only slightly different from the method you used.
request system software nonstop-upgrade <package> reboot
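For example, with the image already copied to /var/tmp (the filename here is just a made-up placeholder; use whatever package matches your platform and target release):

request system software nonstop-upgrade /var/tmp/jinstall-ex-4200-12.3R3.4-domestic-signed.tgz reboot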
You didn't say what version you are coming from, but judging from the 20-minute time I'm guessing it is pre-10.4R3. Unfortunately, going from 10.4R2 or earlier to 10.4R3 or later, or vice versa, requires that the onboard flash be repartitioned and reformatted. That takes about 20 minutes and there is no way around it. The repartition and reformat is only done when crossing the 10.4R2/10.4R3 boundary; if you are not crossing that threshold, the upgrade should only take about 5 minutes.
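If you aren't sure what the stack is currently running, a quick check before you plan the window:

show version

The whole virtual chassis should be running one version, so that tells you which side of the 10.4R2/10.4R3 boundary you're starting from.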
If this were 3 or more members I would say just take the 20-minute outage. With only 2 members, though, you can minimize the downtime with very careful timing of the reboot commands.
Make sure that you have graceful-switchover enabled; it must be on for this to work with minimal outage. You also need console access to at least one of the members, although both would be better. If you can only get one console, use the member that is currently the backup.
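If you're not sure which member currently holds which role, this shows the master/backup assignments (standard operational command, nothing special to this procedure):

show virtual-chassis status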
Note that the timing of this process is crucial. You want both members rebooting at the same time, just in different stages of the process.
1) copy the install image to /var/tmp on the stack
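From a management host, plain scp to the master works fine, assuming SSH is enabled on the switch; the filename and address here are just placeholders:

scp jinstall-ex-4200-12.3R3.4-domestic-signed.tgz admin@10.0.0.1:/var/tmp/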
2) stage the upgrade to both members. Do NOT add the reboot option!
request system software add /var/tmp/<filename>
3) Reboot the backup routing engine (the other member)
request system reboot member <member-id>
At this point you need to watch the console output of the member that you rebooted. There will be two reboots. The first is the one you just did; that one starts the actual upgrade process. The second is after the upgrade completes and it is rebooting to run the new code. You're watching for the second one to start.
As soon as you see the second reboot start, reboot the currently active routing engine/member. Do NOT wait for the second reboot to complete!
request system reboot local
This will cause an outage of about 2-3 minutes while the upgraded member finishes booting and comes online as the new master.
You want to avoid having the upgraded node come online while the old node is still running as master. Because of the code difference, the upgraded node will not be allowed to join the virtual chassis. Depending on the code version, this can drop the upgraded member into line-card mode, and getting it back to the master/backup roles is annoying. You want it to come online, see that there is no active virtual chassis master, and assume the master role. That's why the timing is crucial.
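Once both members are back up, it's worth confirming that the virtual chassis reformed with one master and one backup, and that you're on the new code:

show virtual-chassis status
show version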
If you have any concerns about using this method, then I strongly recommend that you don't use it. Follow the simple one-step method that upgrades all members at once and just accept that there will be an outage during that time.