Overview

The Quantum Fabric Container Cluster Solution provides a flexible and scalable way to deploy Quantum Fabric on your on-premises setup, including scaling the installation to multi-node systems and robust logging capabilities.

The Quantum Fabric Container Cluster Solution is set up on-premises with minimal manual intervention and leverages the following technologies:

  • Docker - Packages the different components as portable container images (with all the required binaries and libraries).
  • Kubernetes - Orchestrates and maintains all the running containers. It also provides features such as auto-scaling, secrets, deployment upgrades, and rollbacks.

Use this installation if you plan to use Docker to set up an on-premises, production-grade installation of Quantum Fabric. If you want to set up a developer instance, refer to the install instructions at Single container setup (On-Prem).

NOTE:
  • For versions V9 ServicePack 5 or later, containers for the Fabric components run on the Red Hat Universal Base Image (UBI).
  • For versions V9 ServicePack 4 or earlier, containers for the Fabric components run on a Debian image.

Salient Features

The Quantum Fabric Container Cluster Solution provides developers with tools to build applications. It has the following features:

  1. Deploys Quantum Fabric on a Kubernetes environment.
  2. Supports deployment on Linux (CentOS 7.4).

Prerequisites

Software Requirements

  • Install OpenJDK 11.
  • Install the zip and dig utilities (see the example after this list).
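
On CentOS 7, these prerequisites can typically be installed with yum; the following is a hedged example (it assumes the standard CentOS repositories, where the bind-utils package provides dig):

sudo yum install -y java-11-openjdk zip bind-utils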

Supported OS Platform

Fabric Kubernetes cluster is supported on CentOS version 7.4.
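
To confirm the OS version on each node, you can check the release file (a quick sanity check, assuming a standard CentOS installation):

cat /etc/centos-release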

Supported Application Servers

Quantum Fabric Container Cluster Solution supports only the Tomcat Application Server. The Tomcat server comes bundled with the installer.

Docker images for Quantum Fabric are built using Tomcat and JDK as the base image.

  • V9SP2 (or earlier) is built using Tomcat 9.0.22-jdk11
  • V9SP2FP1 to V9SP5 is built using Tomcat 9.0.33-jdk11
  • V9SP6 (or later) is built using Tomcat 9.0.62-jdk11

Supported Databases

Quantum Fabric Container Cluster Solution supports the following database servers:

Database Type          Supported Versions
Postgres               14.4
MySQL                  5.6, 5.7, 8.0.26
Microsoft SQL Server   2016, 2017
Oracle                 Oracle 12c (12.1.0.1.0), Oracle 18c
NOTE:
  • MySQL 8 and MySQL 8 Cluster are supported from certain versions of Quantum Fabric. For more information, refer to the Quantum Fabric - Supported OS, Application Servers, and Databases Guide.
  • Multi-node setups for Quantum Fabric are not supported with Postgres.
  • You must have an existing external database. The Database does not come bundled with the Installer.

Supported Kubernetes Version

Quantum Fabric Container Cluster Solution supports the following Kubernetes package versions:

Kubernetes Package   V9SP2 or earlier   V9SP2FP1 to V9SP5   V9SP6 or later
Docker               18.09.9            19.03               20.10.17
kubectl              1.15.4             1.15.4              1.23.7
kubelet              1.15.4             1.15.4              1.23.7
kubeadm              1.15.4             1.15.4              1.23.7

Date and Time

The date and time should be synchronized across all the nodes in a cluster.
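
A common way to keep clocks synchronized on CentOS 7 is chrony; the following is a hedged sketch (it assumes the nodes can reach the default NTP servers):

sudo yum install -y chrony
sudo systemctl enable --now chronyd
chronyc tracking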

Hardware Requirements

This setup requires a minimum of three machines (1 master node, 1 worker node, and 1 load balancer node) for a development environment, and five machines (3 master nodes, 2 worker nodes, and 1 load balancer node) for a production-grade environment.

The following are the hardware requirements for a development environment setup.

Component          Requirement
RAM                4 GB for master node, 8 GB for worker node
Internal Storage   100 GB
CPU Cores          Dual core or above

The following are the hardware requirements for a production-grade cluster.

Component          Requirement
RAM                20 GB
Internal Storage   100 GB
CPU Cores          Dual core or above

Firewall Settings

When a firewall is enabled on your system, you must ensure that the following ports are open to expose the required services.

Required Open Ports

The following port ranges must be opened before installing Fabric on all nodes in a cluster.

  • 30000-32767/tcp
  • 6443-10252/tcp
  • 2379-2380/tcp

Run the following commands to open the ports:

sudo firewall-cmd --permanent --add-port=30000-32767/tcp --zone=public 
sudo firewall-cmd --permanent --add-port=6443-10252/tcp --zone=public
sudo firewall-cmd --permanent --add-port=2379-2380/tcp --zone=public

The following ports must be opened explicitly when a firewall is enabled on all nodes. These are required by Weave CNI to avoid hostname issues.

  • 6783/udp, 6783/tcp
  • 6784/udp, 6784/tcp

Run the following commands to open the ports:

sudo firewall-cmd --permanent --add-port=6783/udp --zone=public
sudo firewall-cmd --permanent --add-port=6783/tcp --zone=public
sudo firewall-cmd --permanent --add-port=6784/udp --zone=public
sudo firewall-cmd --permanent --add-port=6784/tcp --zone=public

The following ports should be opened on the load-balancer node.

  • 443/tcp
  • 80/tcp

Run the following commands to open the ports:

sudo firewall-cmd --permanent --add-port=443/tcp --zone=public
sudo firewall-cmd --permanent --add-port=80/tcp --zone=public

Finally, run the following command to load the new firewall settings.

sudo firewall-cmd --reload
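
To verify that the rules took effect, you can list the open ports in the public zone with the same firewall-cmd tool:

sudo firewall-cmd --list-ports --zone=public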

Some of the app services in the Fabric cluster use specific NodePorts to expose their services. Therefore, the following ports must be free on all nodes before you install the Fabric Kubernetes cluster.

Port    Service Using the Port
30000   Prometheus
31000   Kubernetes Dashboard (the default port can be changed by modifying the value of KUBERNETES_DASHBOARD_PORT in the config.properties file)
30200   Internal NGINX load balancer
30300   Kibana logging tool
30400   Grafana monitoring tool
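
To confirm that none of these ports are already in use on a node, you can check the listening sockets; this is a hedged one-liner (it assumes the ss utility, which is standard on CentOS 7):

sudo ss -tlnp | grep -E ':(30000|31000|30200|30300|30400)\b'

No output means the ports are free.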

Preinstallation Tasks

The following preinstallation tasks must be performed on all nodes before starting the installation.

  1. All Kubernetes masters and nodes must have swap disabled, which is the deployment recommended by the Kubernetes community.
    1. Run the following command to disable swap.
      sudo swapoff -a
    2. Run the following command to update fstab so that the swap remains disabled even after a reboot.
      sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    3. Restart the node after the swap is disabled.
  2. Download Fabric container artifacts from Quantum Downloads.
  3. If the Fabric cluster is being reinstalled, you must execute the following command on all nodes and delete the .kube folder present in the root directory on all the master nodes (a cleanup sketch follows this list).
    sudo kubeadm reset
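
A hedged sketch of the full cleanup on each master node (it assumes the .kube folder is in the root user's home directory, /root):

sudo kubeadm reset
sudo rm -rf /root/.kube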

Architecture

Following is the architecture diagram of the Quantum Fabric Container Cluster Solution.

The complete Quantum Fabric Container Cluster Solution has a total of six pods.

Following is the list of pods present in the Quantum Fabric Container Cluster Solution:

  • Console
  • API Developer Portal
  • Identity
  • Integration
  • Engagement
  • Database (exits when the execution of the database scripts is complete)

The following is the list of Docker images used for the pods in the Quantum Fabric Container Cluster Solution:

  • Fabric Console (Contains mfconsole.war, workspace.war, and accounts.war)
  • API Developer Portal (Contains apiportal.war)
  • Identity (Contains authService.war)
  • Integration (Contains admin.war, services.war, middleware.war, and apps.war)
  • Engagement (Contains kpns.war)
  • Database (Contains the database migration scripts)

Quantum Fabric images are no longer available to the public. Ensure that you download the tar files and extract the images.
For further information, refer to Extracting Images from tar files and pushing into private registry.
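
As a hedged illustration of that flow (the tar file name, image tag, and registry host below are hypothetical placeholders), loading an image from a tar file and pushing it to a private registry looks like this:

docker load -i fabric-console.tar
docker tag fabric-console:9.0.0.0 registry.example.com/fabric-console:9.0.0.0
docker push registry.example.com/fabric-console:9.0.0.0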

The Quantum Fabric Container Cluster Solution downloadable bundle contains the following directories and files:

IMPORTANT:

For V9SP6 or later, the installation process uses the following Helm charts to deploy the Fabric containers.

  • fabric_app: This artifact is used to deploy Quantum Fabric.
  • fabric_db: This artifact is used to create a database and insert tables into the Fabric database server.

The process of generating and deploying Quantum Fabric artifacts by using Helm charts is automated in the install-fabric.sh script.

  • install-fabric.sh - Installation script.
  • install-actions.sh - Installation actions script that is internally used by install-fabric.sh.
  • templates - Directory that contains the config template files.
  • add-ons - Directory that contains additional deployments and configurations for the Kubernetes cluster.
  • config.properties - Properties file used to pass inputs to install-fabric.sh instead of entering them at shell prompts.
  • lib - Folder containing the fabric-utils.jar file.
  • resources - Folder containing a few resources required for the Kubernetes CNI.
  • setup-loadbalancer.sh - Script that must be executed on a separate machine to set up the load balancer.

Installation Types

Inputs for the installation can be provided in either of the following two modes:

  • Command Line - Quantum Fabric Container Cluster Solution can be installed using the bundled install shell script, which will prompt the user for required values.

    The syntax for installing through the Command Line:

    sudo ./install-fabric.sh

  • Silent Installation - The installation script also supports silent installation if the config.properties file is passed as an argument (for example, /path/install-fabric.sh config.properties). This mode also lets you pass custom Tomcat JAVA_OPTS, heap memory settings, and time-zone settings.

    The syntax for installing using the config.properties:

    sudo ./install-fabric.sh config.properties

Configuration

The following parameters are to be provided by the user during installation (a sample config.properties sketch follows this list):

  1. INSTALL_ENV_NAME - The install environment name can be any lowercase String, for example: dev, qa, prod, or eastusprod.

    NOTE: The Install Environment Name must not contain numbers.

  2. FABRIC_BUILD_VERSION - The build version of Fabric that you want to install. While upgrading, this specifies the build version to which you want to upgrade.

    NOTE: If you build images by using the PreImage kit, ensure that you tag the Docker images in the config.properties file. For more information, refer to PreImage Kit for Quantum Fabric Containers.

  3. FABRIC_DATABASE_BUILD_VERSION - The build version of the database that you want to use with the Fabric installation.
  4. FABRIC_BUILD_TYPE - The type of Fabric environment that must be created. For production environments, the value must be PRODUCTION. For dev, QA, or other non-production environments, the value must be NON-PRODUCTION.
  5. Install Components: The following properties must be set to either Y (yes) or N (no). If ALL_COMPONENTS_ENABLED is set to Y, the rest of the inputs can be left empty. If ALL_COMPONENTS_ENABLED is set to N, at least one of the following input properties must be set to Y.

    • ALL_COMPONENTS_ENABLED
    • INTEGRATION_ENABLED
    • IDENTITY_ENABLED
    • MESSAGING_ENABLED
    • CONSOLE_ENABLED
    • APIPORTAL_ENABLED
  6. Application Server Details
    • SERVER_DOMAIN_NAME: The Domain Name for Quantum Fabric. This value should be the hostname of the LoadBalancer. For example: abc.companyname (DNS name).

      NOTE: Domain name cannot be an IP address or 'localhost'.

    • COM_PROTOCOL: The communication protocol for Quantum Fabric. This value can be either http or https.
    • HTTPS_CERT_FILE: The path to the existing certificate file. This value can be empty if the communication protocol is HTTP. The path should point to a valid pem file.
    • HTTPS_KEY_FILE: The path to the existing key file. This value can be empty if the communication protocol is HTTP. The path should point to a valid pem file.
  7. Database Details:
    • DB_TYPE - This is the Database type you want to use for hosting Quantum Fabric. The possible values are:
      • For MySQL DB server: mysql
      • For Azure MSSQL or SQL Server: sqlserver
      • For Oracle DB server: oracle
    • DB_HOST - This is the Database Server hostname used to connect to the Database Server.
    • NOTE: If the Database Hostname is an IP address, it should be a static IP address.

    • DB_PORT – This is the Port Number used to connect to the Database Server. This can be empty for cloud-managed services.
    • DB_USER - This is the Database Username used to connect to the Database Server.
    • DB_PASS - This is the Database Password used to connect to the Database Server.
    • DB_PASS_SECRET_KEY - This is the decryption key for the database password, which is required only if you are using an encrypted password.
    • IMPORTANT: If you are using an encrypted password, use the values that you receive from the encryption utility. For more information, refer to Encrypting the Database Password.

    • DB_PREFIX – This is the Database server prefix for Quantum Fabric Schemas/Databases.
    • DB_SUFFIX – This is the Database server suffix for Quantum Fabric Schemas/Databases.
    • NOTE:
      • Database Prefix and Suffix are optional inputs.
      • In case of upgrade, ensure that the values of the Database Prefix and Suffix that you provide are the same as you had provided during the initial installation.
    • If DB_TYPE is set as oracle, the following String values need to be provided:
      • DB_DATA_TS: Database Data tablespace name.
      • DB_INDEX_TS: Database Index tablespace name.
      • DB_LOB_TS: Database LOB tablespace name.
      • DB_SERVICE: Database service name.
    • USE_EXISTING_DB: If you want to use existing databases from a previous Quantum Fabric instance, set this variable to Y. If not, set the USE_EXISTING_DB variable to N.

      You must provide the location of the previously installed artifacts (the location should contain upgrade.properties file).

      For example: PREVIOUS_INSTALL_LOCATION = /C/kony-fabric-containers-onprem/kubernetes.

  8. Quantum Fabric Account Registration Details: The following properties are required for owner registration; they are not required for an upgrade. If OWNER_REGISTRATION_REQUIRED is set to Y (yes), then you must provide all the following inputs:
    • OWNER_REGISTRATION_REQUIRED - Y/N.
    • OWNER_USER_ID – E-mail ID used for Quantum Fabric Registration.
    • OWNER_PASSWORD – Password used for Quantum Fabric Registration.
    • OWNER_FIRST_NAME – First Name used for Quantum Fabric Registration.
    • OWNER_LAST_NAME – Last Name used for Quantum Fabric Registration.
    • OWNER_ENV_NAME – Environment name to which the generated applications should be published.
  9. Alertmanager Configuration: The following properties are required to configure the Alertmanager. If ALERTMANAGER_SETUP_REQUIRED is set to Y (yes), then you must provide all the following inputs:
    • ALERTMANAGER_SETUP_REQUIRED - Y/N.
    • SMTP_SMARTHOST – The SMTP host that is used to send emails.
    • RECIPIENT_ADDRESS – The email address to which the notifications are sent.
    • SENDER_ADDRESS - The email address of the sender.
    • SENDER_PASSWORD - The password that is used to authenticate the sender.
  10. TIME_ZONE - The Time Zone of the Database used for the Quantum Fabric installation. The Time Zone variable must be set to maintain consistency between the Application server and the Database server. To determine what value to set for the time zone, refer to the List of tz database time zones on Wikipedia.

    NOTE: The Time Zone is an optional value. If you do not provide any Time Zone, it is set to Etc/UTC.

  11. Readiness and Liveness Probes Details: The following variables are set with default values in seconds. You can modify them in the config.properties file.
    • IDENTITY_READINESS_INIT_DELAY: The readiness probe initial delay for Identity, in seconds. The default value is 180.
    • IDENTITY_LIVENESS_INIT_DELAY: The liveness probe initial delay for Identity, in seconds. The default value is 300.
    • CONSOLE_READINESS_INIT_DELAY: The readiness probe initial delay for Console, in seconds. The default value is 300.
    • CONSOLE_LIVENESS_INIT_DELAY: Liveness probe initial delay for Console, in seconds. The default value is 600.
    • INTEGRATION_READINESS_INIT_DELAY: The readiness probe initial delay for Integration, in seconds. The default value is 300.
    • INTEGRATION_LIVENESS_INIT_DELAY: Liveness probe initial delay for Integration, in seconds. The default value is 600.
    • ENGAGEMENT_READINESS_INIT_DELAY: The readiness probe initial delay for Engagement, in seconds. The default value is 180.
    • ENGAGEMENT_LIVENESS_INIT_DELAY: The liveness probe initial delay for Engagement, in seconds. The default value is 300.
  12. Minimum and Maximum RAM percentage Details: The following variables are set with default String values. You can modify them in the config.properties file.
    • CONSOLE_MIN_RAM_PERCENTAGE: Minimum RAM percentage for Console. The default value is "50".
    • CONSOLE_MAX_RAM_PERCENTAGE: Maximum RAM percentage for Console. The default value is "80".
    • ENGAGEMENT_MIN_RAM_PERCENTAGE: Minimum RAM percentage for Engagement. The default value is "50".
    • ENGAGEMENT_MAX_RAM_PERCENTAGE: Maximum RAM percentage for Engagement. The default value is "80".
    • IDENTITY_MIN_RAM_PERCENTAGE: Minimum RAM percentage for Identity. The default value is "50".
    • IDENTITY_MAX_RAM_PERCENTAGE: Maximum RAM percentage for Identity. The default value is "80".
    • INTEGRATION_MIN_RAM_PERCENTAGE: Minimum RAM percentage for Integration. The default value is "50".
    • INTEGRATION_MAX_RAM_PERCENTAGE: Maximum RAM percentage for Integration. The default value is "80".
    • APIPORTAL_MIN_RAM_PERCENTAGE: Minimum RAM percentage for API Portal. The default value is "50".
    • APIPORTAL_MAX_RAM_PERCENTAGE: Maximum RAM percentage for API Portal. The default value is "80".
  13. Container resource limits for memory and CPU: The following variables are set with default String values. You can modify them in the config.properties file.
    • IDENTITY_RESOURCE_MEMORY_LIMIT: The resource memory limit for Identity. The default value is "1.2G".
    • IDENTITY_RESOURCE_REQUESTS_MEMORY: The resource memory requests for Identity. The default value is "1G".
    • IDENTITY_RESOURCE_REQUESTS_CPU: The resource CPU requests for Identity. The default value is "200m".
    • CONSOLE_RESOURCE_MEMORY_LIMIT: The resource memory limit for Console. The default value is "2.2G".
    • CONSOLE_RESOURCE_REQUESTS_MEMORY: The resource memory requests for Console. The default value is "2G".
    • CONSOLE_RESOURCE_REQUESTS_CPU: The resource CPU requests for Console. The default value is "300m".
    • APIPORTAL_RESOURCE_MEMORY_LIMIT: The resource memory limit for API Portal. The default value is "1.2G".
    • APIPORTAL_RESOURCE_REQUESTS_MEMORY: The resource memory requests for API Portal. The default value is "1G".
    • APIPORTAL_RESOURCE_REQUESTS_CPU: The resource CPU requests for API Portal. The default value is "200m".
    • INTEGRATION_RESOURCE_MEMORY_LIMIT: The resource memory limit for Integration. The default value is "2.2G".
    • INTEGRATION_RESOURCE_REQUESTS_MEMORY: The resource memory requests for Integration. The default value is "2G".
    • INTEGRATION_RESOURCE_REQUESTS_CPU: The resource CPU requests for Integration. The default value is "300m".
    • ENGAGEMENT_RESOURCE_MEMORY_LIMIT: The resource memory limit for Engagement. The default value is "1.2G".
    • ENGAGEMENT_RESOURCE_REQUESTS_MEMORY: The resource memory requests for Engagement. The default value is "1G".
    • ENGAGEMENT_RESOURCE_REQUESTS_CPU: The resource CPU requests for Engagement. The default value is "200m".
  14. Custom JAVA_OPTS Details: The following variables can be set with custom String values. You can modify them in the config.properties file.
    • CONSOLE_CUSTOM_JAVA_OPTS: The custom JAVA_OPTS for Console.
    • ENGAGEMENT_CUSTOM_JAVA_OPTS: The custom JAVA_OPTS for Engagement.
    • IDENTITY_CUSTOM_JAVA_OPTS: The custom JAVA_OPTS for Identity.
    • INTEGRATION_CUSTOM_JAVA_OPTS: The custom JAVA_OPTS for Integration.
    • APIPORTAL_CUSTOM_JAVA_OPTS: The custom JAVA_OPTS for API Portal.
  15. Number of instances to be deployed for each component: The following variables can be set with default integer values. You can modify them in the config.properties file.
    • IDENTITY_REPLICAS: The number of instances of Identity. The default value is 1.
    • CONSOLE_REPLICAS: The number of instances of Console. The default value is 1.
    • APIPORTAL_REPLICAS: The number of instances of API Portal. The default value is 1.
    • INTEGRATION_REPLICAS: The number of instances of Integration. The default value is 1.
    • ENGAGEMENT_REPLICAS: The number of instances of Engagement. The default value is 1.
  16. The port on which the Kubernetes Dashboard can be accessed:

    • KUBERNETES_DASHBOARD_PORT: The default value is 31000.
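
For reference, the following is a minimal, hedged config.properties sketch for a silent installation of all components; every value is illustrative and must be replaced with values for your environment:

INSTALL_ENV_NAME=dev
FABRIC_BUILD_VERSION=9.5.0.0
FABRIC_DATABASE_BUILD_VERSION=9.5.0.0
FABRIC_BUILD_TYPE=NON-PRODUCTION
ALL_COMPONENTS_ENABLED=Y
SERVER_DOMAIN_NAME=fabric.example.com
COM_PROTOCOL=http
DB_TYPE=mysql
DB_HOST=db.example.com
DB_PORT=3306
DB_USER=fabricadmin
DB_PASS=fabricpassword
USE_EXISTING_DB=N
OWNER_REGISTRATION_REQUIRED=N
ALERTMANAGER_SETUP_REQUIRED=N
TIME_ZONE=Etc/UTC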

Installation

Run the Quantum Fabric Container Cluster install script to generate and deploy Quantum Fabric containers.

Steps to Install Quantum Fabric Container Cluster Solution on an On-Premises setup:

  1. Setting up HAProxy LoadBalancer

    NOTE: This step is optional if you have your own load balancer setup. You must provide the pre-configured load balancer hostname as the SERVER_DOMAIN_NAME input in the installation process instead of performing the following instructions.

  2. Setting up Cluster
  3. Resuming installation in Master Node

Setting up HAProxy LoadBalancer

Before starting the cluster installation, a load balancer must be set up; without it, the cluster setup cannot be started. HAProxy is a certified external load balancer for the Fabric container cluster setup. Installation and configuration of HAProxy are taken care of by the script. Initially, the installation script configures the load balancer with one master. You can later edit the haproxy.cfg file to add other masters to the load balancer.

Perform the following steps to set up the HAProxy load balancer through a script file.

  1. Download the kony-fabric-containers-onprem_9.0.0.0_GA.zip from the Download Link and extract it.
    sudo unzip KonyFabricContainersOnPrem-9.0.0.0_GA.zip -d KonyFabricContainerOnPrem-9.0.0.0_GA
  2. Navigate to the Fabric container artifact folder.
    cd KonyFabricContainerOnPrem-9.0.0.0_GA
  3. Run setup-loadbalancer.sh. It prompts for the master node hostname and the communication protocol. The load balancer is then installed and configured.
    sudo ./setup-loadbalancer.sh
  4. Check whether haproxy started successfully by executing the systemctl status haproxy command. If it throws a "cannot bind socket 0.0.0.0:xxxx" error (see https://stackoverflow.com/questions/34793885/haproxy-cannot-bind-socket-0-0-0-08888), run the following commands and restart haproxy.
    sudo setsebool -P haproxy_connect_any=1
    sudo systemctl restart haproxy

Before setting up the master nodes, the IPs of all master nodes should be configured in the HAProxy load balancer. If a master node IP is not listed and you try to set up that master node, it will not be added to the cluster. To add master node IPs, edit the /etc/haproxy/haproxy.cfg file and make the following two changes for each master node IP to be added (an illustrative excerpt follows this list).

  1. In the Configure HAProxy SecureBackend section, add the following line at the end. List all the other master node IPs in the same way.
    server k8s-api-2 <IP-ADDRESS>:6443 check
  2. In the Configure HAProxy Fabric Backend section, add the following line at the end. List all the other master node IPs in the same way.
    server fabric-1 <IP-ADDRESS>:30200 check
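
For orientation, the following is a hedged sketch of how those haproxy.cfg sections might look after a second master is added (the backend names and IP addresses are illustrative; the file generated by setup-loadbalancer.sh may differ in detail):

# Configure HAProxy SecureBackend (illustrative excerpt)
backend k8s-api-backend
    server k8s-api-1 10.0.0.11:6443 check
    server k8s-api-2 10.0.0.12:6443 check

# Configure HAProxy Fabric Backend (illustrative excerpt)
backend fabric-backend
    server fabric-1 10.0.0.11:30200 check
    server fabric-2 10.0.0.12:30200 check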

Setting up Cluster

A cluster can be set up with a minimum of one master. To improve failure tolerance and ensure cluster availability, it is recommended to have an odd number (minimum three) of master nodes. To start the cluster setup, choose one node as the starting point of the installation; this should be a master node.

  1. Download the kony-fabric-containers-onprem_9.0.0.0_GA.zip from the Download Link and extract it.
    sudo unzip KonyFabricContainersOnPrem-9.0.0.0_GA.zip -d KonyFabricContainerOnPrem-9.0.0.0_GA
  2. Navigate to the Fabric container artifact folder.
    cd KonyFabricContainerOnPrem-9.0.0.0_GA
  3. Run the following command to set up the master. It downloads and installs cluster packages, initializes kubeadm, sets up the Weave CNI, generates a token, and waits for the worker nodes to join the master before proceeding further.
    sudo ./install-fabric.sh config.properties
  4. NOTE: You must provide execute permissions (for example, by running sudo chmod +x install-fabric.sh) to run the install-fabric.sh file on Linux.

  5. To join other nodes as masters/workers, copy the setup-node.zip file, which is available in the Fabric container artifact folder (KonyFabricContainerOnPrem-9.0.0.0_GA), to the node that is to join as a master/worker, and then perform the following steps.
    1. Extract the setup-node.zip file.
      sudo unzip setup-node.zip -d setup-node
    2. Navigate to the setup-node folder.
      cd setup-node
    3. Execute the following command.

      For Master:

      sudo ./setup-node.sh control-plane

      For Worker:

      sudo ./setup-node.sh
  6. After the cluster setup is done, use the following command to get the list of nodes in the cluster.
    kubectl get nodes

Resuming Installation In Master Node

After joining the master and worker nodes, you must press Enter in the master node terminal to resume the installation. The installation then proceeds with the following:

  1. Kubernetes Dashboard and DB Migrations

  2. Fabric and Ingress configuration

  3. EFK, Prometheus, and Grafana monitoring tool configuration.

  4. Fabric healthcheck

  5. Run the following command to check the status of the pods (a wider verification sketch follows this list).
    sudo kubectl get pods
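
As a quick verification, all pods should eventually report Running (or Completed, in the case of the Database pod, which exits after the database scripts finish). A wider view across namespaces can help spot issues:

sudo kubectl get pods --all-namespaces -o wide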

Known Issues and Limitations

Quantum Fabric Container Cluster Solution has the following known issues and limitations:

  • Support for SPA / Desktop Web is only available for zipped SPA apps, not for WARs.