Configuration and Setup for Fabric on OpenShift

Extract Fabric Image Tars

Quantum Fabric images are no longer publicly available. Download the image tar files and extract the images.

For further information, refer to Extracting Images from tar files and pushing into private registry.
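As a sketch of that workflow (the registry URL is a placeholder, and the commands are printed for review rather than executed), loading each image tar and retagging it for a private registry might look like:

```shell
# Sketch: load the Fabric image tars and retag them for a private registry.
# REGISTRY is a placeholder -- substitute your own registry and repository.
REGISTRY="registry.example.com/fabric"

# Compose the private-registry name for a loaded image (pure string logic).
retag_target() {
  echo "$REGISTRY/${1##*/}"
}

# For each tar: load it, then tag and push to the private registry.
# 'echo' prints the tag/push commands for review; remove it to execute them.
for tar in *.tar; do
  image=$(docker load -i "$tar" 2>/dev/null | awk -F': ' '/Loaded image/ {print $2}')
  [ -n "$image" ] || continue
  echo docker tag "$image" "$(retag_target "$image")"
  echo docker push "$(retag_target "$image")"
done
```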

Configure the properties file

  1. Extract the provided FabricOpenShift.zip file. The OpenShift Fabric artifacts generation zip is organized as follows:
    • lib: Contains dependent jars and helper bash functions.
    • templates: Contains the following files:
      • fabric-app-tmpl.yml: YAML template for Fabric deployments
      • fabric-db-tmpl.yml: YAML template for Fabric Database schema creation
      • fabric-services.yml: YAML template for Fabric services
    • config.properties: Contains inputs that you must configure for the installation
    • generate-kube-artifacts.sh: A user script that is used to generate required artifacts
    • helm_charts [V9SP6 or later]: Contains the following packages, which can be used to deploy Quantum Fabric
      • fabric_app: Helm charts that are used to deploy Fabric
      • fabric_db: Helm charts that are used to create a database and insert tables into the Fabric database server

    As a result of executing the generate-kube-artifacts.sh script, the artifacts folder is created containing YAML configuration files. The YAML configuration files are generated based on the config.properties file, and they must be applied later to deploy Quantum Fabric on the OpenShift cluster.

  2. Update the config.properties file with relevant information.

For more information about the properties, refer to the following section.
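As a rough sketch of the file's shape (only INSTALL_ENV_NAME and SERVER_DOMAIN_NAME are keys named in this guide; the section headers follow the ones referenced in this document, and elided entries are marked with ...):

```properties
INSTALL_ENV_NAME=fabrictest

## Install Components ###
# Component toggles go here -- use the keys listed in the extracted file.
...

## Application Server Details ##
SERVER_DOMAIN_NAME=quantum-fabric.domain

## Database Details ##
...
```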

Deploy Fabric on OpenShift

After you update the config.properties file, follow these steps to deploy Quantum Fabric on OpenShift:

  1. Create a project from the oc command line or from the OpenShift console.

    For example:
    oc new-project fabrictest
  2. Generate the Fabric services by running the following command:
    ./generate-kube-artifacts.sh config.properties svcs


    To generate the services configuration, you only need to fill the INSTALL_ENV_NAME property and the ## Install Components ### section in the config.properties file.



    IMPORTANT: For V9SP6 (or later), you can use Helm charts to install and deploy Quantum Fabric on OpenShift. For more information, refer to Deploy Fabric using Helm Charts.

  3. Create the services by running the following command:
    oc apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-services.yml
    oc apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-db-secret.yml
    oc apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-app-secret.yml


    The <INSTALL_ENV_NAME> is the install environment name that you provided in the config.properties file.
  4. Follow one of the following approaches to expose Fabric publicly:
    • For development or proof of concept, using HTTP (non-SSL) routes and OpenShift generated hostnames is a quick way to get started.

      Create the routes from the OpenShift console or by running the following command:




      oc expose service/<fabric-service> --path=<context_path> --name=<route_name>


      For <fabric-service> and <context_path>, refer to Context paths for Fabric components. The <fabric-service> is one of the Fabric services that were created earlier.



      Executing the oc expose service command assigns a unique OpenShift generated host name to the route that is created. The assigned host name can be identified by executing the following command:
      oc describe route <fabric-route>


    • For production, Temenos recommends that you use a custom domain name and terminate your SSL connection at the public load balancer. You need to obtain a custom domain name from a DNS provider, and SSL certificate and keys from a certificate authority. For more information, refer to Exposing apps with routes in OpenShift 4.

      NOTE: To configure a Passthrough route type, refer to Configure Passthrough Routes.
      The Reencrypt route type is not supported.


      For a sample configuration, refer to the following screenshot.

Configure Passthrough Routes

To configure a passthrough route, you need to create a secure route for the Fabric component. To do so, follow these steps:

  1. On the Create Route page, configure details for the Fabric component.

    NOTE: The details in the screenshot are specific to the API Portal component and use a specific domain. Make sure that you change the details for other routes.
    For more information, refer to Context paths and Service Names for Fabric components.

  2. Under Security, select the Secure Route check box.
  3. From the TLS termination list, select Passthrough.
  4. Provide the required values in the Application Server Details section of the config.properties file.

Context paths and Service Names for Fabric components

NOTE:
  • Make sure that you use the same host name for all the Fabric routes that you plan to create.
  • The Fabric components for accounts, mfconsole, and workspace share the same deployment and service. Therefore, while creating the routes, make sure that you map these routes to the same service: kony-fabric-console.

Make sure that the format of the route location is as follows:

<scheme>://<common_domain_name>/<fabric_context_path>

For example, https://quantum-fabric.domain/mfconsole

Reference table for the mapping of paths and service names:

Fabric Component   Fabric Service Name       Context Path
mfconsole          kony-fabric-console       /mfconsole
workspace          kony-fabric-console       /workspace
accounts           kony-fabric-console       /accounts
Identity           kony-fabric-identity      /authService
Integration        kony-fabric-integration   /admin
Services           kony-fabric-integration   /services
apps               kony-fabric-integration   /apps
Engagement         kony-fabric-engagement    /kpns
ApiPortal          kony-fabric-apiportal     /apiportal
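The table above can be turned into a small script. This sketch only prints one oc expose command per component; the route names are illustrative, derived from each context path, and you can substitute your own:

```shell
# Sketch: print an 'oc expose' command for every Fabric component listed in
# the table above. Each entry is "service:context-path"; the derived route
# names are illustrative only.
mappings="
kony-fabric-console:/mfconsole
kony-fabric-console:/workspace
kony-fabric-console:/accounts
kony-fabric-identity:/authService
kony-fabric-integration:/admin
kony-fabric-integration:/services
kony-fabric-integration:/apps
kony-fabric-engagement:/kpns
kony-fabric-apiportal:/apiportal
"

commands=$(
  for entry in $mappings; do
    svc="${entry%%:*}"
    path="${entry#*:}"
    echo "oc expose service/$svc --path=$path --name=${path#/}"
  done
)
echo "$commands"   # printed for review; pipe to 'sh' to execute
```

Note that the three console routes (mfconsole, workspace, and accounts) all point at the same kony-fabric-console service, as required by the note above.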

Additional Resources

You can also use automatic certificate management for your cluster. To automate with OpenShift routes, you can use the open-source project openshift-acme. To automate with Kubernetes Ingress, you can use cert-manager.

Alternatively, if you are using a managed OpenShift service, the public cloud vendor might offer a key manager service, such as Certificate Manager on IBM Cloud.

Deploy Kubernetes artifacts

After deploying Fabric and creating routes for the components, follow these steps to deploy the remaining Fabric Kubernetes artifacts:

  1. In the config.properties file, in the SERVER_DOMAIN_NAME field, add the custom host name or the host name that was generated while creating the routes.
  2. In the config.properties file, update the Database Details section with appropriate information.
  3. Generate the Fabric application and database configuration files by executing the following command.
    ./generate-kube-artifacts.sh config.properties
     
  4. Edit the privileged security context by running the following command, and then add the service account that corresponds to your project.
    oc edit scc privileged
     
    For example: In the following screenshot, fabrictest is the project and default is the service account that is being used.

    NOTE: Adding the service account to the privileged security context is only needed to create the database schema in the following step. After the execution of the Fabric database job is completed, you can remove the service account from the privileged security context.

  5. Create the database schema by executing the following command.


    oc apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-db.yml

    The <INSTALL_ENV_NAME> is the name of the install environment that you provided in the config.properties file.
  6. After the database schema creation is completed, verify the completion by executing the following command.
    oc get jobs

  7. Create the Fabric deployments by executing the following command.
    
    
    oc apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-app.yml

    Based on the default replica count that is provided in the config.properties file, one deployment of every Fabric component is created. Based on your requirements, the Fabric deployments can be scaled up from the OpenShift Console, or from the command line.
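Parts of steps 4-7 can also be done non-interactively. The following sketch only composes and prints the commands; the project name (fabrictest) and service account (default) come from the example above, and the deployment name is a placeholder:

```shell
# Sketch: non-interactive equivalents for parts of the steps above.
# Substitute your own project, service account, and deployment names.
project="fabrictest"   # example project from step 1
sa="default"           # example service account used by the Fabric pods

# Step 4 alternative: grant (and later revoke) the privileged SCC without
# editing it interactively.
grant="oc adm policy add-scc-to-user privileged -z $sa -n $project"
revoke="oc adm policy remove-scc-from-user privileged -z $sa -n $project"

# Step 6: block until the database job completes instead of polling.
wait_cmd="oc wait --for=condition=complete --timeout=600s job --all -n $project"

# Scale a Fabric deployment from the command line (name is a placeholder).
scale_cmd="oc scale deployment/<fabric-deployment> --replicas=3 -n $project"

# Printed for review; run them directly against your cluster.
printf '%s\n' "$grant" "$wait_cmd" "$scale_cmd" "$revoke"
```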

Deploy Fabric using Helm Charts

To deploy Fabric by using Helm charts, make sure that you have installed Helm, and then follow these steps:

  1. Open a terminal console and navigate to the extracted folder.
  2. Generate the Fabric services by running the following command:


    ./generate-kube-artifacts.sh config.properties svcs

    To generate the services configuration, you only need to fill the INSTALL_ENV_NAME property and the ## Install Components ### section in the config.properties file.
  3. Navigate to the helm_charts folder by executing the following command.
    cd helm_charts
  4. Create the fabricdb and fabricapp Helm charts by executing the following commands.
    helm package fabric_db/
    helm package fabric_app/
  5. Install the fabricdb Helm chart by executing the following command.
    helm install fabricdb fabric-db-<Version>.tgz
  6. Install the fabricapp Helm chart by executing the following command.
    helm install fabricapp fabric-app-<Version>.tgz
IMPORTANT:
  • Make sure that you generate the artifacts (generate-kube-artifacts.sh) before creating the Helm charts.
  • Make sure that the fabricdb Helm chart installation is complete before installing the fabricapp Helm chart.
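Steps 4-6 can be collected into one ordered sketch. The wait between the two installs enforces the ordering constraint above; <Version> is the version in the generated chart filenames and <fabric-db-job> is a placeholder for the job name shown by oc get jobs:

```shell
# Sketch: package and install the charts in order. The commands are
# collected as strings and printed for review; run them one by one
# against your cluster after substituting the placeholders.
steps="
helm package fabric_db/
helm package fabric_app/
helm install fabricdb fabric-db-<Version>.tgz
oc wait --for=condition=complete job/<fabric-db-job>
helm install fabricapp fabric-app-<Version>.tgz
"
echo "$steps"
```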

Launch the Fabric Console

  1. After all the Fabric services are up and running, launch the Fabric console by using the following URL.
    <scheme>://<fabric-hostname>/mfconsole

    The <scheme> is http or https based on your OpenShift cluster. The <fabric-hostname> is the custom host name or the host name that was generated while creating routes for the Fabric components.
  2. After you launch the Fabric Console, create an administrator account by providing the appropriate details.

After you create an administrator account, you can sign in to the Fabric Console by using the credentials that you provided.
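Before opening the console in a browser, you can check that the URL responds. This sketch only composes and prints the check; <fabric-hostname> is a placeholder for your route's host name:

```shell
# Sketch: check that the console URL responds before logging in.
# '<fabric-hostname>' is a placeholder.
url="https://<fabric-hostname>/mfconsole"
cmd="curl -k -sS -o /dev/null -w %{http_code} $url"
echo "$cmd"   # printed for review; run it directly to see the HTTP status code
```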

Logging Considerations

All Fabric application logs are streamed to standard output (stdout). You can view the logs by using the kubectl logs command. For more details, refer to Interacting with running Pods.
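For example (on OpenShift, oc logs accepts the same arguments as kubectl logs; the deployment name below is assumed to match the console service name from the routes table, so verify it with oc get deployments):

```shell
# Sketch: stream logs from the Fabric console pods. The deployment name is
# an assumption based on the service names used in this guide.
cmd="oc logs -f deployment/kony-fabric-console"
echo "$cmd"   # printed for review; run it directly against your cluster
```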