High Availability - Installation and Configuration Rls 7.0

Introduction

UCX High Availability Replicated Configuration (HARC) is a software feature that creates UCX redundancy in a cluster of two UCX instances designated as the Active Node and the Standby Node. The purpose of this feature is to protect against outages due to UCX system failures, not against network outages. This implies that the two UCX systems should be co-located, with as few active network components between the two systems as possible.

In the HARC application, the two UCX systems are joined into a cluster and appear to the administrator, users, and all devices that communicate with the cluster as a single UCX system. For communication and maintenance purposes, all actions are performed from the Active Node via the Cluster IP address. If the Active Node system is not available, or if there is a failure at any UCX software component level, then the Standby Node will take over and become the Active Node.

For HARC functionality, the Standby Node must have the same number of licensed extensions as the Active Node. To cost-effectively facilitate the licensing of the Standby Node, E-MetroTel has introduced a reduced-price extension license specifically for HARC configurations: the order code is UCXHARCEXT-1 (please contact your E-MetroTel representative for pricing and additional details).

Both UCX systems must be on the same physical Layer 2 network/subnet. A failure of any active networking device between the two UCX systems can result in service disruption and database corruption in situations where the two systems remain operational but the communication path between them is lost. UCX HARC can be implemented on the Galaxy Expand platform with i5 Processor cards, on Galaxy 3000, UCX450 and UCX1000 Appliances, on UCX Cloud, and on UCX virtual instances. The UCX HARC software can be deployed in combination with the UCX Survivable Remote Gateway (SRG) functionality to increase the overall availability of telephony services across geographically dispersed locations.

HARCwith_i5_processor.png

Note that the High Availability cluster configuration transfers control of all UCX functionality from the active node to the standby node when required. Since physical trunk interfaces based on Dahdi cards connected to the system cannot be connected to both units at once, High Availability functionality cannot be configured on UCX systems with internal Dahdi-based trunk cards (or on systems that are licensed for internal trunk cards).

Configuration

Step One - Prerequisite Configuration for each system

The following prerequisite information contains links to appropriate documentation to support the configuration required during the installation:

  • Both systems must be the same type of deployment architecture - i.e. the same Galaxy Appliance, UCX Virtualization SW, or UCX-Cloud
  • IP Addressing
    • This must be done for both UCX systems
    • Both systems must be on the same local LAN segment
    • At most, the configuration may require six IP addresses: one for each Network Interface (eth0 and eth1) on each UCX system, plus a Cluster IP address for each Network Interface that will "float" between the two systems
    • The IP addresses must be statically assigned or the addresses assigned by a DHCP server must be tied to the MAC address and must not change (also known as DHCP reservation) 
    • You will need two unique node names. They are not required to be accessible from any other internal or external location but they are required for the HA installation
    • Beginning in UCX Release 7.0, MDSE (and only MDSE) can be supported using a separate network interface connection (specifically Ethernet 1). All other connections must use Ethernet 0.
    • Review the information in Step Two - IP Address Planning below before configuring the systems
  • Install license
    • Both systems must have identical licenses
    • The High Availability License must be installed for both UCX systems; the Product Code shown in the Licenses / License Details page on each system must contain the AS tag (refer to License Details for more information).
  • Set password 
    • The passwords on both systems must be set to exactly the same password before you configure High Availability.
    • Once the systems are configured as a cluster, the initial password will be that of the designated Primary system. Subsequent password changes made on the cluster will be propagated to both individual systems.
  • Register on VPN
    • E-MetroTel recommends that both systems be connected to the VPN, so VPN registration must be completed prior to creating the cluster.
  • Update software
    • It is important that each of the systems is updated prior to installing the High Availability addon package. On each of the systems, press the Set Source button on the Updates page on the System tab. This flushes the update cache on the UCX system and reloads the mirror list, effectively resetting it. Afterwards, start the Software Update.
  • Backup systems
    • Create backups of both systems prior to starting the High Availability configuration process. This should always be performed when making major changes to the system configuration.
    • Note that once the two systems are configured in the High Availability cluster, the Secondary node programming will be overwritten by the Primary node settings.
    • Also note that if your hardware system is a model equipped with a second Hard Drive in a RAID configuration, you can temporarily remove the second hard drive from the system to use as a backup should you need to restore the previous configuration. Once you are satisfied with the new High Availability operation you can re-install the RAID drive and it will automatically be updated.
  • Set date and time
    • Ensure that the systems have the same NTP Settings
    • You will not be able to initiate the High Availability cluster unless the timezones match.
  • Configure SMTP settings
    • Ensure the SMTP settings on both systems are configured and tested to confirm they are working correctly prior to initiating the High Availability cluster (a simple verification sketch follows this checklist).
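
The SMTP settings themselves are entered and tested through the Web-based Configuration Utility, but an independent check from another machine on the same LAN can confirm that the mail relay both nodes will use is reachable and accepts mail. The following is a minimal sketch using Python's standard smtplib module; the relay host, port, credentials, and mail addresses are placeholders and must be replaced with the values used in your own SMTP configuration.

    # smtp_precheck.py - confirm the mail relay used by both UCX nodes accepts a test message
    import smtplib
    from email.message import EmailMessage

    RELAY_HOST = "mail.example.com"        # placeholder: the relay configured on both nodes
    RELAY_PORT = 587                       # placeholder: 587 (submission) or 25, per your relay
    USERNAME = "ucx-alerts@example.com"    # placeholder credentials, if the relay requires them
    PASSWORD = "change-me"
    TO_ADDR = "admin@example.com"          # placeholder: where cluster notifications should arrive

    msg = EmailMessage()
    msg["From"] = USERNAME
    msg["To"] = TO_ADDR
    msg["Subject"] = "UCX HARC pre-check: SMTP relay test"
    msg.set_content("Test message sent while preparing the High Availability cluster.")

    with smtplib.SMTP(RELAY_HOST, RELAY_PORT, timeout=10) as smtp:
        smtp.starttls()                    # remove if your relay does not offer STARTTLS
        smtp.login(USERNAME, PASSWORD)     # remove if your relay does not require authentication
        smtp.send_message(msg)

    print("Relay accepted the test message - check the destination mailbox.")

If the destination mailbox receives the message, the relay itself is working and any remaining notification problems can be isolated to the settings entered on the UCX systems.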

Step Two - IP Address Planning

If you are adding the HARC functionality to an existing UCX deployment, up-front planning of the IP addressing of the nodes can save significant time by minimizing the reconfiguration of all other devices, external interfaces, and applications to accommodate the Cluster IP Address. Likewise, if you are deploying the HARC functionality on a new installation but want to validate the UCX operation prior to enabling HARC, the same IP address planning can save time.

IP Addressing Strategy

The HARC configuration requires at least three IP Addresses - one for each of the cluster nodes and one for the cluster itself - and possibly six if MDSE is used on a separate LAN. A short address-plan check follows the planning steps below.

  1. If you already have a working UCX system configured using its default IP Address 192.168.1.200, then all connected devices, external interfaces and applications are already programmed or configured to communicate with that address. While this UCX will be used as the Primary Node, the existing address should be considered the final Cluster IP Address.
  2. Set up a second UCX with its own IP Address such as 192.168.1.202; this will be the Secondary Node IP Address
  3. Choose another address, such as 192.168.1.201, for what will ultimately become the Primary Node IP Address
  4. After all connected devices, external interfaces and applications are verified to be functional with the existing UCX but prior to actually enabling the HARC functionality, change the IP Address of the first system to the new Primary Node IP Address.
  5. After completing Step Three below, configure the IP Addressing on the Cluster Setup page as shown in Step Four, below.
Note that the UCX built-in DHCP server cannot be used on a High Availability system.
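
Because the node and cluster addresses must all sit on the same Layer 2 subnet and must not collide, it can be worth sanity-checking the plan before reconfiguring either system. The sketch below uses only Python's standard ipaddress module to confirm that the planned addresses are unique and share a subnet; the subnet and addresses shown are the example values from the steps above and should be replaced with your own plan.

    # check_ip_plan.py - sanity-check a HARC address plan (example values from this page)
    import ipaddress

    SUBNET = ipaddress.ip_network("192.168.1.0/24")   # the Layer 2 subnet both nodes share

    plan = {
        "Cluster IP (existing UCX address)": "192.168.1.200",
        "Primary Node IP": "192.168.1.201",
        "Secondary Node IP": "192.168.1.202",
    }

    addresses = [ipaddress.ip_address(a) for a in plan.values()]

    # every planned address must be unique
    assert len(set(addresses)) == len(addresses), "Duplicate address in the plan"

    # every planned address must be inside the shared subnet
    for name, addr in zip(plan, addresses):
        assert addr in SUBNET, f"{name} ({addr}) is outside {SUBNET}"
        print(f"{name:40s} {addr}  OK")

    print(f"All planned addresses are unique and inside {SUBNET}.")

A six-address plan for a deployment that uses the second interface for MDSE can be checked the same way, using a second subnet for the Ethernet 1 addresses.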

Step Three - Install package

If you are installing the HARC functionality in an Azure environment, you must install the ucx-azure package on each node as described in the Configuring UCX Hyper-V with a Microsoft Azure Monitoring Agent documentation prior to proceeding with the package installation below.

Install the High Availability addon package on both UCX systems. Refer to the Addon Packages documentation for a complete description of the process.

  • Ensure your systems are each updated to the latest available updates (refer to Software Update)
  • Login to the UCX Web-based Configuration Utility.
  • Select Accessories.
  • Scroll to the High Availability icon (shown here with the "Purchased" filter applied)
    UCX70HighAvailAddonPack.png
  • Click on Install.
  • You will see a Progress Indication during the installation.
    UCX70HighAvailAddonPackInstalling.png
  • Once the installation progress has finished, there will be a temporary indication that the ucx-ha package has been installed
    UCX70HighAvailAddonPackInstalled.png
  • After the package is fully installed, there will be a new option to Uninstall it and the High Availability tab will appear on the top menu.
    UCX70HighAvailAddonPackUninstallOption.png
  • Repeat this process for both UCX systems.

Step Four - Setup Cluster

  • Login to the UCX Web-based Configuration Utility of the UCX system you will consider as the main, or Primary Node.
  • Click on the High Availability tab. You will be automatically directed to the Cluster Setup page. If your systems only have a single Network Interface, the following screen will be shown:
    UCX70HAClusterSetupFirstAccess.png
  • You may also see a page with an error message "The high availability cluster hasn't been configured yet. Use the Cluster Setup page for the configuration."; click on the Cluster Setup link or the HA Cluster sub-menu tab to access this page.
  • If you have two Ethernet interfaces active on your systems there will be an additional IP Address field for the Primary and Secondary nodes and the Cluster IP Address. There will also be a field for selecting MDSE. The system will only use the second (Ethernet 1) interface if you have assigned an IP address in the second Cluster IP Address field.
    UCX70HAClusterSetupTwoNICS.png
  • Fill in all the required fields according to the prerequisites checklist. Note that the Hostnames that you enter here will overwrite the names already entered on the Network page of the System tab on the two UCX systems. Once these fields are populated, click on the Save button.
    UCX70HAClusterSetupFilled.png
  • If there are any package mismatches between the two systems, an error message will be displayed. Resolve the package differences and repeat the cluster setup.
  • Once the cluster has been successfully setup, a message will be displayed.
    UCX70HAClusterSetupSuccess.png
  • Wait for both systems to complete the reboot, then login to the UCX Web-based Configuration Utility using the Cluster IP address.
Note: Once the High Availability cluster becomes operational, your first login will use the Cluster IP address and the password you set for the Primary Node system during the initial configuration. If you change the Admin password while the cluster is operational, the new password will be set for both the Primary and Secondary systems, regardless of which node is active. If you split the cluster, both the Primary Node and Secondary Node systems will require this new password.
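
Both nodes reboot as part of the cluster setup, so there may be a short period where the Cluster IP address does not yet answer. Rather than repeatedly retrying the browser, a simple reachability probe can tell you when the Cluster IP is accepting connections again. The sketch below is a minimal example using Python's standard socket module; it assumes the Web-based Configuration Utility is reachable over HTTPS on port 443 and uses the example Cluster IP address from this page - adjust both for your own deployment.

    # wait_for_cluster_ui.py - poll the Cluster IP until the web UI port answers
    import socket
    import time

    CLUSTER_IP = "192.168.1.200"   # example Cluster IP address used elsewhere on this page
    HTTPS_PORT = 443               # assumption: the configuration utility is served over HTTPS

    while True:
        try:
            with socket.create_connection((CLUSTER_IP, HTTPS_PORT), timeout=3):
                print(f"{CLUSTER_IP}:{HTTPS_PORT} is accepting connections - try logging in now.")
                break
        except OSError:
            print("Cluster IP not answering yet, retrying in 10 seconds...")
            time.sleep(10)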

Step Five - Select Active Node

  • Login to the UCX Web-based Configuration Utility using the Cluster IP address.
  • Click on the High Availability tab and make sure the HA Console sub-menu tab is selected. If the High Availability system is running and both nodes are communicating with each other, the Primary node will be highlighted in green text. See High Availability - Web-based Configuration Utility Rls 7.0 for additional status indications.
    UCX70HAClusterHAConsoletab.png
  • To switch the roles of the two servers (in this case, to make the Secondary system the active node), click on the Switch Node Roles button of the node you want as the active node.
When you swap nodes, ALL CALLS WILL BE DISCONNECTED and NO CALLS will be able to be placed until the swap has completed and the phones and trunks have registered to the new node.
  • You will receive a warning that doing so will drop all active calls!
    UCX70HAClusterManualSwitchWarning.png
  • After clicking OK, the system will display an information notice that the switchover is proceeding.
    UCX70HAClusterManualSwitchingInfo.png
  • The resources will take a few minutes to synchronize. Once the synchronization is complete, the system that previously had been assigned the Standby role will be highlighted green and assigned the Active role.
    UCX70HAClusterManualSwitchComplete.png

Active - Standby Failover

When the cluster loses connection to the active node, it will automatically fail over to the standby node. Refer to High Availability - Web-based Configuration Utility Rls 7.0 for additional details on the status indications used in the Web-based Configuration Utility.
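
When you test a failover (for example, by disconnecting or powering off the active node during a maintenance window), it can be useful to measure how long the Cluster IP address is actually unreachable. The sketch below is one simple way to do that from another machine on the same LAN: it probes the Cluster IP at a fixed interval and logs every transition between reachable and unreachable, along with the approximate outage duration. It uses the example Cluster IP from this page and probes port 443 on the assumption that the Web-based Configuration Utility is an adequate availability indicator; adjust the address, port, and interval for your own deployment.

    # watch_cluster_ip.py - log reachability transitions of the Cluster IP during failover tests
    import socket
    import time
    from datetime import datetime

    CLUSTER_IP = "192.168.1.200"   # example Cluster IP address used elsewhere on this page
    PORT = 443                     # assumption: web UI port used as the availability indicator
    INTERVAL = 2                   # seconds between probes

    def reachable() -> bool:
        try:
            with socket.create_connection((CLUSTER_IP, PORT), timeout=2):
                return True
        except OSError:
            return False

    state = reachable()
    went_down = None
    print(f"{datetime.now():%H:%M:%S} initial state: {'UP' if state else 'DOWN'}")

    while True:
        time.sleep(INTERVAL)
        now_up = reachable()
        if now_up and not state:
            outage = time.monotonic() - went_down if went_down is not None else 0
            print(f"{datetime.now():%H:%M:%S} Cluster IP is back UP (outage ~{outage:.0f}s)")
        elif not now_up and state:
            went_down = time.monotonic()
            print(f"{datetime.now():%H:%M:%S} Cluster IP went DOWN")
        state = now_up

Note that a probe like this only shows when the Cluster IP answers again; phones and trunks may take additional time to re-register with the newly active node.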

 

Page Tags: 
high availability
ha
harc
active standby