Overview

Quantum Fabric has multiple components, such as Identity, Integration, and Engagement Services. These features can be accessed through a common, centralized Quantum Fabric Console. This document helps you install and configure Quantum Fabric (Console, Identity Services, Integration Services, and Engagement Services) on Azure Cloud. For more information about Quantum Fabric, refer to Quantum Fabric User Guide.

The Quantum Fabric Containers on Azure Solution is designed to run Quantum Fabric on Azure for both Production and Non-Production needs. The setup requires minimal manual intervention and leverages the following technologies:

  • Docker - To package the components as portable container images (with all the required binaries and libraries)
  • Kubernetes - To orchestrate and maintain the running containers, and to provide features such as auto-scaling, secrets management, and deployment upgrades and rollbacks
  • Azure - To provision the underlying infrastructure

From Fabric V9 SP1 onwards, the installation creates a separate, dedicated cluster for Integration, referred to as the INT cluster, while the rest of the Fabric components reside in a single cluster referred to as the SHARED cluster.

NOTE:
  • For versions V9 ServicePack 5 or later, containers for the Fabric components run on the Red Hat Universal Base Image (UBI).
  • For versions V9 ServicePack 4 or earlier, containers for the Fabric components run on a Debian image.

There are two installation modes, Production and Non-Production. The Production mode is equipped to handle Production-level traffic; the Non-Production mode is intended for testing.

During the installation, you are prompted to choose between a Production instance and a Non-Production instance. In general, for a given Azure subscription, an organization installs one Production instance and multiple Non-Production instances (for example, dev, qa, and uat).

  • When you select a Non-Production instance, each subsequent run to set up another environment (such as qa or uat) creates only a new Integration (INT) cluster, while the rest of the Fabric components are shared across all the Non-Production instances within the subscription.
  • When you select a Production instance, each installation is treated as a fresh installation: both the Shared cluster and the Integration cluster are created anew.

NOTE: For a Non-Production setup, information about the shared resources required for installing new environments is stored as installation metadata in Azure Cosmos DB and Azure Key Vault.

Salient Features

The Quantum Fabric Containers on Azure Solution has the following features:

  • Creates a secure, multi-layer architecture with an Application Gateway inside a Virtual Network.
  • Supports a custom DNS name and SSL certificates for secure communication, and includes a Jumpbox for DevOps activities.

Overview for the Production and Non-Production Installations

Non-Production Mode

In a given Azure subscription, the first Non-Production installation creates a complete Quantum Fabric with both clusters (Shared and Integration). Each subsequent run to set up another Non-Production instance (such as dev, qa, or uat) creates only a new Integration (INT) cluster, while the rest of the Fabric components are shared across all the Non-Production instances within the same subscription. The new Integration instance is registered with the Fabric Console created as part of the first installation.

Production Mode

When you set up a Production instance, each installation is treated as a fresh installation: both the Shared cluster and the Integration cluster are created for every install.

Prerequisites

  1. Azure Account - The setup script creates all the resources in this account. Your account must have the required permissions in your Azure subscription (see the User Defined Properties section).
  2. Prerequisite packages - The Quantum Fabric Containers on Azure Solution does not need any additional software to be pre-installed, as the prerequisite packages are downloaded as a part of the setup scripts.

    The packages that are installed as part of the install scripts are: azure-cli, kubectl, jq, and sponge.

    NOTE: To make sure that the script is able to download all the necessary software, you might need to open outbound connections to the respective sites. For more information, refer to the Appendices section of this document.

  3. Generate an SSH Public Key – Using the SSH protocol, you can connect and authenticate to remote servers and services. Quantum Fabric setup expects an SSH key pair for authentication. The SSH public key is used for creating the Azure Virtual Machine, and for installing the Quantum Fabric Setup. You need to specify the SSH Public Key in the properties file.

    On an Ubuntu terminal, use the ssh-keygen command to generate the SSH public and private key files, which are created by default in the ~/.ssh directory. Run this command from your local (Ubuntu) machine:

    ssh-keygen -t rsa -b 2048

    This command generates an SSH key pair using RSA with a key length of 2048 bits. Name the generated key id_rsa.

    You must create and configure a key pair, as you need to provide it in the Configuration settings.

    NOTE:
    • The SSH keys need to be placed in the sshkeys folder.
    • You must leave the passphrase empty while generating the SSH key. VM logins are protected by other features, such as Google Authenticator.
  4. Domain Name - You can use a Domain Name for the solution, which you can purchase from any third-party provider, such as GoDaddy, and a DNS record that you map to the public DNS of the Application Gateway. Refer to the Appendices section for more details.
  5. SSL certs - To secure communication, acquire SSL certificates (Azure Application Gateway requires certificates in .pfx format) and provide them during the installation. These certificates must be associated with the Domain Name that you procured. Refer to the Appendices section for SSL cert .pfx format conversion.
    • Place the SSL certificate (in a .pfx file format) in the ssl-cert folder, and then provide the Server Domain Name and AppGateway SSL Cert Password.
    • To enable HTTPS on the back end of the Application Gateway, perform the following steps:
      1. The SSL certificates with the cert data and key data should be in separate files (both in a .pem file format).
      2. Save the Cert file as ingress.pem.
      3. Save the Key file as ingress_key.pem.
      4. Place both ingress.pem and ingress_key.pem files in the certs folder of the installation directory.
  6. NOTE:
    • Temenos recommends using a CA-signed SSL cert to avoid any errors.
    • To execute the installation scripts, you must use Bash version 4 or later.
    • If you are using Marketing Catalog Microservice or Campaign Microservice, refer to the prerequisites later in the document.
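
The key and certificate prerequisites (steps 3 and 5) can be sketched as follows. This is only an illustration: the domain, password, and folder locations are placeholder assumptions, and the self-signed certificate stands in for the CA-signed cert that Temenos recommends.

```shell
# Create the folders the installer expects (a sketch; adjust the paths
# to your installation directory layout)
mkdir -p sshkeys certs ssl-cert

# Step 3: RSA key pair named id_rsa with an empty passphrase
ssh-keygen -q -t rsa -b 2048 -N "" -f sshkeys/id_rsa

# Step 5: cert data and key data as separate .pem files in the certs folder
# (self-signed here purely for illustration; use your CA-signed cert)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=kfazure-console.konylabs.net" \
  -keyout certs/ingress_key.pem -out certs/ingress.pem

# The same certificate packaged as .pfx for the Application Gateway,
# protected by the APPGATEWAY_SSL_CERT_PASSWORD value
openssl pkcs12 -export -passout pass:Test@1234 \
  -inkey certs/ingress_key.pem -in certs/ingress.pem \
  -out ssl-cert/fabric.pfx
```

The .pfx password chosen here must match the APPGATEWAY_SSL_CERT_PASSWORD value you later provide in the properties file.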

VM Setup

You need an Azure Virtual Machine to download the artifacts, and execute the setup scripts from the VM to install Quantum Fabric on Azure.

Pre-Installation Tasks

Configuration

Edit the input parameters in the following file based on the type of solution you want to create.

  • <Installation Directory>/conf/enterprise.properties
    For more information, refer to the sample.properties (zip) file.

There are two types of input properties:

  • User Defined Parameters: These are the mandatory inputs for which you must specify values.
  • Advanced Properties: These are either populated by default or populated by the script during installation. Therefore, these generally do not need to be updated by you. These are available in the sample.properties (zip) file.

List of properties under the User Defined Section

You need to provide the following parameters during Installation:

NOTE: None of the values for parameters in the properties file should contain quotes.

Each entry below lists the parameter name, its description, and an example value.

SERVER_DOMAIN_NAME

This is the domain name for the Fabric Console component. This is not required if the Shared cluster was already created in a previous installation.

kfazure-console.konylabs.net

AZURE_SUBSCRIPTION_ID

This is the Azure subscription ID. Steps to find it are in the next section.

abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3

AZURE_LOCATION

The region in which the AKS cluster should be created. For more details, refer to Features and Supported Regions.

eastus

SERVICE_PRINCIPAL_CLIENT_ID

Application ID of the service principal created for the Azure installation.

abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3

SERVICE_PRINCIPAL_CLIENT_SECRET

Password of the Service Principal.

abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3

SERVICE_PRINCIPAL_OBJECT_ID

The Service Principal object ID. Steps to find it are in the next section.

abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3

TENANT_ID

The Tenant ID of the Azure account. Steps to find it are in the next section.

abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3

SSH_PUBLIC_KEY

This is required for creating the AKS cluster. Steps to create the SSH key are provided in the Prerequisites section.

NOTE: You cannot use the same SSH key for all the Non-Production installations within a subscription.

ssh-rsa ABCD123PQR456...

AZURE_RESOURCE_GROUP

Specify the name of the resource group to be created for the cluster. Enter a name that does not conflict with any existing resource group. This is not required for Non-Production if the Shared cluster is already installed in the same subscription.

FabricXYZ

DATABASE_TYPE

Specify the database type you want to use for hosting Quantum Fabric on Azure.

The Quantum Fabric Containers on Azure Solution supports the MS SQL and MySQL Server Databases.

mysql

DATABASE_USER_NAME

Specify a username for the database that is used by the Fabric components. This is not required in a Non-Production installation if a shared cluster was already created in a previous installation.

dbclient

DATABASE_PASSWORD

Specify a password for the database that is used by the Fabric components.

NOTE: The following special characters are not supported in the DB password: $, *, &, !, (, ), \.

Test#/123

INTEGRATION_DATABASE_USER_NAME

Specify a username for your INT database. This is required only if the selected database type is sqlserver. This is not required in a Non-Production installation if a shared cluster was already created in a previous installation.

dbclient

INTEGRATION_DATABASE_PASSWORD

Specify a password for your Integration database. This is required only if the selected database type is sqlserver. This is not required in a Non-Production installation if a shared cluster was already created in a previous installation.

NOTE: If you are using Quantum Fabric 9.0.0.1 GA Docker images, you must specify the same value for SHARED_DATABASE_PASSWORD and INTEGRATION_DATABASE_PASSWORD, because the Integration Docker image does not support multiple passwords.
You can use different passwords for your databases from Quantum Fabric V9.0.1.0 GA onwards.
The following special characters are not supported in the DB password: $, *, &, !, (, ), \.

FabR^c123

ENVIRONMENT_DB_USER

Specify a username for your environment database. This is required only if the selected database is of the type mysql and the installation type is Non-Production.

dbclient

ENVIRONMENT_DB_PASSWORD

Specify a password for your environment database. This is required only if the selected database is of the type mysql and the installation type is Non-Production.

FabR^c123

ENABLE_REDIS_SSL

Specifies whether an SSL connection must be enabled between the Authentication Service and REDIS.

true

(Supported values are true and false)

APPGATEWAY_SSL_CERT_PASSWORD

Password of the SSL PFX certificate. This password is used to retrieve the PFX key for SSL offloading.

Test@1234

Automatic Registration Details:

Auto-registration inputs are not required for Non-Production if the Shared cluster was already created in a previous installation.

 

AUTO_REGISTRATION_USER_ID

The email ID used for Quantum Fabric Registration. After the installation is complete, you can add more users from the Fabric console.

micheal.doe@xyz.com

AUTO_REGISTRATION_PASSWORD

The password used for Quantum Fabric Registration.

Test@5264!

AUTO_REGISTRATION_FIRST_NAME

The first name used for Quantum Fabric Registration.

Micheal

AUTO_REGISTRATION_LAST_NAME

The last name used for Quantum Fabric Registration.

Doe

AZURE_CDN_ENABLED

Specify whether to install CDN for the apps component. If you have chosen a region where the CDN Profile is not supported, CDN will not be configured. For more details, refer to Features and Supported Regions.

true

Supported values include: true or false.

ALERT_NOTIFICATION_ENABLED

Specify whether to enable notifications for alerts, for example, sending an alert when CPU usage crosses 90% on an Integration pod.

true

Supported values include: true or false.

AZURE_ACTION_GROUP_NAME

Specify the name of the action group.

admin

USER_EMAIL_ID

Specify the default email ID to which alert notification should be sent.

micheal.doe@xyz.com

ARRAY_TO_WHITELIST_IPS_TO_ACCESS_FILE_SHARE

Specify the public IP of the machine where the installation script will be running. To specify multiple IPs, use space-separated values, such as ("w1.x1.y1.z1" "w2.x2.y2.z2" "w3.x3.y3.z3").

("103.140.124.130")

COMMON_RESOURCE_GROUP

Specify the name of the resource group where a few subscription-level shared resources are created. Use separate values for Production and Non-Production. This is the resource group under which the PCI-related function app is created.

KonyInfra

MARKETING_CATALOG_MS

Specify whether to enable Marketing Catalog Microservice or not.

false

Supported values include: true or false.

MARKETING_CATALOG_RESOURCE_GROUP

Specify the name of the resource group where the marketing catalog microservice resources would be created.

marketingcatalog-ms

MONGO_PATH

Enter the path where Mongo Shell is installed.

/opt/mongodb-linux-x86_64-ubuntu1804-4.2.10/bin

MONGO_CONNECTION

Enter the connection string for Mongo Shell from Atlas.

mongodb+srv://mongo-cluster-1.jonig.mongodb.net

MONGO_USER_NAME

Enter the Mongo database username that you created earlier.

mongodbuser

MONGO_PASSWORD

Enter the Mongo database password that you set while creating the MongoDB user.

Root@123

MONGODB_DBNAME

Enter the preferred Database name.

marketingcatalogdb

MONGODB_CONNECTIONSTR

Provide the MongoDB connection string.

NOTE: Ensure that you provide URL-encoded MongoDB username and password values if they contain special characters such as *, @, :, ?, or .

It can be formed using the following template:

mongodb://<MONGO_USER_NAME>:<MONGO_PASSWORD>@<cluster-shard-0>:<port-number>,<cluster-shard-1>:<port-number>,<cluster-shard-2>:<port-number>/<MONGODB_DBNAME>?ssl=true&replicaSet=<value>&authSource=admin&retryWrites=true&w=majority

mongodb://mongodbuser:Root%40123@mongo-cluster-1-shard-00-01.jonig.mongodb.net:27017,mongo-cluster-1-shard-00-02.jonig.mongodb.net:27017,mongo-cluster-1-shard-00-00.jonig.mongodb.net:27017/marketingcatalogdb?ssl=true&replicaSet=atlas-qxcm8s-shard-0&authSource=admin&retryWrites=true&w=majority

EVENT_HUB_NAME_SPACE

Enter the preferred Event hub namespace.

marketingcatalogkafkaapp

CAMPAIGN_MS

Specify whether to enable Campaign Microservice or not.

false

Supported values include: true or false.

CAMPAIGN_RESOURCE_GROUP

Specify the name of the resource group where campaign microservice resources would be created.

nas-ssl-campaign

CAMPAIGN_DB_SERVER_NAME

Specify the server name for your MySQL Database.

sqlcampaigndb

CAMPAIGN_DB_USER_NAME

Enter the preferred Database username.

campaign

CAMPAIGN_DB_PASSWORD

Enter the preferred Database password.

It must be a string of at least 8 characters, containing a combination of alphanumeric and non-alphanumeric characters.

root@123

ARRAY_TO_WHITELIST_IPS_TO_ACCESS_CAMPAIGN_DB

Specify the outbound public IP of the machine where the installation script will be running. To specify multiple IPs, use space-separated values, such as ("w1.x1.y1.z1" "w2.x2.y2.z2" "w3.x3.y3.z3").

("103.140.124.130")

New Relic Monitoring details:

New Relic Monitoring for the AKS cluster. Enable New Relic monitoring for infrastructure monitoring. Ensure that a valid New Relic subscription is available before enabling this feature.

 

NEW_RELIC_INFRA_MONITORING_ENABLED

Set to true if you have a New Relic license key and New Relic infrastructure monitoring needs to be enabled.

This monitors node-level performance metrics such as CPU and memory.

true

Supported values include: true or false.

NEW_RELIC_APP_PERF_MONITORING_ENABLED

Set to true if you have a New Relic license and pod-level monitoring needs to be enabled.

This monitors pod-level metrics.

true

Supported values include: true or false.

NEW_RELIC_LICENSE_KEY

Specify the New Relic license key.

ab1cd2112ef34g123a45c678ab0cd1ef234

AKS_NODE_COUNT

Specify the number of worker nodes in the cluster.

This is the minimum number of nodes to be created in the Shared AKS agent pool.

2

AKS_MAX_NODE_COUNT

Specify the maximum number of worker nodes that can be provisioned by Autoscaling.

The maximum number of nodes the Shared AKS cluster can scale up to as load increases.

6

AKS_NODE_SIZE_NON_PROD

This is for a Non-Production instance. The type of the Azure instance created for the Shared cluster, in which the pods of the Fabric components (Console, Identity, Engagement) are created. Refer to the Azure documentation for instance sizes and codes.

Standard_D4_v3

AKS_NODE_SIZE_PROD

This is for a Production instance. The type of the Azure instance created for the Shared cluster, in which the pods of the Fabric components (Console, Identity, Engagement) are created. Refer to the Azure documentation for instance sizes and codes.

Standard_D4_v3

INTEGRATION_NODE_SIZE

Type of the Azure instance to be created for the Integration Components.

Standard_B2MS

AKS_MASTER_NODE_COUNT

Specify the number of AKS master nodes to create.

1

INTEGRATION_AKS_NODE_COUNT

The minimum number of nodes to be created in the Integration AKS agent/node pool.

2

INTEGRATION_AKS_MAX_NODE_COUNT

The maximum number of nodes to be created in the Integration AKS agent/node pool.

6
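
Put together, a minimal User Defined section of enterprise.properties might look like the sketch below. Every value is a placeholder taken from the example column above; per the earlier note, values must not contain quotes.

```shell
# Hypothetical excerpt of conf/enterprise.properties; replace every value
# with your own before running the installer
SERVER_DOMAIN_NAME=kfazure-console.konylabs.net
AZURE_SUBSCRIPTION_ID=abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3
AZURE_LOCATION=eastus
SERVICE_PRINCIPAL_CLIENT_ID=abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3
TENANT_ID=abcd1234ef-22ab-33cd-444f-xyz7890p1q2r3
AZURE_RESOURCE_GROUP=FabricXYZ
DATABASE_TYPE=mysql
DATABASE_USER_NAME=dbclient
ENABLE_REDIS_SSL=true
AZURE_CDN_ENABLED=true
AKS_NODE_COUNT=2
AKS_MAX_NODE_COUNT=6
```

The remaining parameters (passwords, SSH public key, microservice settings, and so on) follow the same key=value pattern described in the table above.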

User Defined Properties

Azure Subscription ID, Azure Service Principal ID Name, Azure Service Principal ID Secret, Service Principal Object ID, and Tenant ID - The Azure Subscription ID is a GUID that uniquely identifies your subscription for using Azure services. The application needs the Service Principal to access or configure resources through Azure Resource Manager (ARM).

You must have an Azure account with Global Administrator permissions and the User role. Without these privileges, you cannot create the AKS clusters or other resources.

The following sections describe fetching Azure Subscription ID, generating Azure Service Principal ID Name, Azure Service Principal ID Secret, Service Principal Object ID, and Tenant ID.

Installation

Steps to Install Quantum Fabric on MS Azure

  1. Create an Ubuntu VM, or use an existing VM. For more details, refer to VM Setup.
  2. Extract the downloaded installation zip on that VM. For more details, refer to Pre-Installation Tasks.
  3. Modify/Enter the required properties in the <Installation Directory>/conf/enterprise.properties file. For more details, refer to Configuration.
  4. Run the installation using the command bash install-fabric.sh.
  5. Choose between the Non-Production and Production installation modes. In the following prompt, Non-Production mode is chosen; you can choose Production mode and follow a similar flow to complete the installation.
  6. Specify a label for the Non-Production installation environment. A label is a simple string, such as dev, qa, or uat; it is also appended to the names of the Azure resources created for this environment so that they can be identified easily.

    NOTE: This step is specific to Non-Production mode.

  7. The installation begins and towards the end of the installation the following prompt will be displayed.
  8. Once the domains are mapped to the IPs/CNAMEs as shown in the prompt, validate the mappings and continue the installation by re-running install-fabric.sh. The CDN configuration is then performed, and the following prompt is displayed to complete the manual steps of the CDN configuration.

    NOTE: Ensure that the domain mapping resolves correctly from the script client box by using nslookup.

  9. As shown in the prompt, follow the steps in Setting up Azure Content Delivery Network (CDN) to perform a custom domain HTTPS configuration.
  10. Re-run install-fabric.sh to complete the post-installation tasks.
  11. At the end of the installation, a message containing the Fabric Console URL is displayed. Copy the displayed link into a browser and press Enter. The Fabric Console is displayed.
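
The domain validation mentioned in step 8 can also be scripted. The sketch below uses getent as a non-interactive stand-in for nslookup (an assumption: a Linux client box where getent is available; the domain is a placeholder):

```shell
# check_dns: succeeds when the given host name resolves from the machine
# running the installation script
check_dns() { getent hosts "$1" > /dev/null; }

# For example: check_dns kfazure-console.konylabs.net && echo "mapping OK"
check_dns localhost && echo "localhost resolves"
```

Run the check for each mapped domain before re-running install-fabric.sh.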

Post Installation

After the installation and CDN configuration is complete, you can access the Fabric Console using the link displayed on the screen.

  1. Take a backup of the installation logs from the <Installation Directory>/logs directory.
  2. Copy and paste the displayed link into a browser. Press Enter. The Fabric Console is displayed.
  3. Log in with the registration credentials provided in the properties file and navigate across the various pages. The following is a screenshot of the Environments tab from an Azure test setup after installing 3 environments, namely DEV, UAT2, and QA4.

You can now publish apps and check the runtime services.

Configuring Visualizer to Connect to Quantum Fabric on Azure

For details about connecting to Quantum Fabric Console through Visualizer, refer to Connecting to Quantum Fabric.

Multiple Node Pools in Azure Kubernetes Service

In Azure Kubernetes Service (AKS), nodes that have the same configuration are grouped into node pools, which contain virtual machines that run your apps. To support apps that have different computing or storage requirements, you can use multiple node pools in a single cluster. This feature enables higher control over how you can create and manage multiple node pools.

Currently, the Fabric installation uses an AKS cluster with a single node pool to deploy applications. This single node pool contains all the application components such as identity, integration, console, kpns, and the api portal. For multiple environments, Fabric creates multiple AKS clusters with all the resources.

To optimize the effort of operating and maintaining multiple clusters, Fabric on Azure now uses the multi node pool architecture. In this architecture, multiple node pools are created in a single AKS Cluster. This AKS cluster contains a node pool named Shared, which contains components that are shared by all environments, such as console, identity, and the api portal. In addition to the shared components, every node pool in the cluster contains the kpns and integration components. An Ingress controller is configured to implement host-based routing to the respective node pool.

Updating the Azure Kubernetes Service Cluster configuration

To update the AKS cluster configuration, you need the current configuration files. Once the installation is complete, take a backup of the unzipped directory where you extracted kony-fabric-containers-azure.zip, so that you can perform further updates to the AKS clusters.

IMPORTANT: If you do not have a backup of the unzipped directory, updating the AKS cluster is difficult.

Setting up Azure Content Delivery Network (CDN)

A Content Delivery Network (CDN) is a distributed network of servers that can efficiently deliver web content to users. To minimize latency, CDNs store cached content on edge servers in Point-Of-Presence (POP) locations that are close to end users.

Azure Content Delivery Network (CDN) offers a global solution for developers to rapidly deliver high-bandwidth content to users by caching the content at strategically placed physical nodes across the world.

NOTE: Azure CDN SKU for AKS cluster is set to Premium Verizon, which supports configuration of cache rules for Quantum Fabric Apps.

After you create the CDN profile and endpoint, you need to manually configure the CDN settings in the Azure portal.

Configuring Caching Rules

  1. Go to the CDN profile from the list of resources available in the Azure Resource Group (the one containing the created AKS cluster).
  2. Click Manage from the top navigation bar.

  3. Configure all the rules in the CDN Manage Console.

  4. From the HTTP Large list, select Cache Settings -> Query String Caching.
  5. Select no-cache as the query string caching behavior and click Update.
  6. From the HTTP Large list, select Rules Engine.
  7. Configure all the required rules.
    • Rule 1
    • Rule 2
    • Rule 3
    • Rule 4
    • Rule 5
    • Rule 6
    • Rule 7
    • Rule 8
    • Rule 9
    • Rules Order

Configuring Clam AntiVirus for Azure Virtual Machines

Clam AntiVirus (ClamAV) is an open-source anti-virus software toolkit. You can choose to install ClamAV on the Azure Virtual Machines in the Kubernetes cluster. ClamAV is designed as an on-demand scanner and runs only when invoked.

A cron job is configured to run ClamAV on Azure Virtual Machines based on the frequency specified in the properties file. ClamAV scans all the files in the virtual machine and pushes the logs (clamscan.log and freshclam.log) present in /var/log/clamav directory to the clamavlogs container in the storage account of the Azure Resource Group in which the AKS cluster is created.

Provide the following inputs in the properties file to enable ClamAV.

Install ClamAV on Azure Virtual Machine

Follow these steps to install ClamAV on your Azure Virtual Machine:

  1. Set the value of INSTALL_CLAMAV to true in the properties file (trial.properties/enterprise.properties).
  2. Place your SSH private key and SSH public key in the sshkeys folder with names id_rsa and id_rsa.pub respectively.
  3. Set the frequency of the cron job to start the ClamAV scan and push the generated logs to the storage account.
  4. NOTE: Use the following format to set the frequency values of clamscan_cron_schedule and clamscanlogpush_cron_schedule in the properties file (trial.properties/enterprise.properties).
     * * * * *
     | | | | |
     | | | | +---- Day of the Week   (range: 1-7, 1 standing for Monday)
     | | | +------ Month of the Year (range: 1-12)
     | | +-------- Day of the Month  (range: 1-31)
     | +---------- Hour (range: 0-23)
     +------------ Minute (range: 0-59)
     * = any value

    For example, if you configure the crontab timing as 00 16 * * *, this indicates that the crontab runs every day at 16:00:00 (UTC).

    • You must maintain a minimum gap of 2 hours 30 minutes between clamscan_cron_schedule and clamscanlogpush_cron_schedule.
    • All cron job timings follow the UTC timezone.
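
For example, a hypothetical pair of schedules that follows this format and keeps the required 2:30 hrs gap (shell-style quoting shown only for the sketch; both values are UTC):

```shell
# Scan daily at 16:00 UTC; push the logs at 18:30 UTC (2h30m later)
clamscan_cron_schedule="00 16 * * *"
clamscanlogpush_cron_schedule="30 18 * * *"
```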

Access ClamAV logfiles in the Virtual Machine

Follow these steps to access the ClamAV logs in the Virtual Machine.

  • Log on to Jumpbox using the following ssh command:
    ssh devops@<Jumpbox_Public_IP_Address>
  • Log on to the Virtual Machine from Jumpbox using the following SSH command:
    ssh -i id_rsa azureuser@<Virtual_Machine_IP_Address>
  • Using the following command, go to /var/log/clamav directory:
    cd /var/log/clamav
  • This directory contains the clamscan.log and freshclam.log files that are generated by ClamAV after scanning the Virtual Machine.

Another cron job is configured to push the generated logs into the Azure Storage Account.

Edit the cron job

  • Use the following command to view existing cron jobs on the VM.
    crontab -l

  • Use the crontab -e command to open the crontab list in edit mode.

Edit the ClamAV conf file

The freshclam.conf file configures the ClamAV Database Updater.

  • Go to the /etc/clamav/ directory.
  • Open the freshclam.conf file in any editor and make the required changes.

Accessing Logs in the Azure Storage Account

All the logs generated by ClamAV are pushed to the Azure Storage Account.

  1. Log on to the Azure portal (portal.azure.com) using your Microsoft account credentials.

  2. Select Resource Groups from the left navigation pane.

  3. All existing resource groups appear.
    Select the resource group in which the AKS Cluster is created.

  4. Select the storage account from the list of resources available in the resource group.
  5. Click Blobs to see all the containers available in the storage account.

  6. Select the clamavlogs container from the list of containers available in the storage account.

    This shows all the log files pushed by ClamAV from the virtual machine.

  7. Click Download to download the log files.

  8. Unzip the downloaded .zip file and extract the contents.

    You can now view all the logs that are pushed by ClamAV from the virtual machine.

Configuring OSSEC Intrusion Detection

OSSEC is an open-source, host-based intrusion detection system (HIDS). It has a powerful correlation and analysis engine, and can perform log analysis, file integrity checking, Windows registry monitoring, centralized policy enforcement, rootkit detection, real-time alerting, and active response.

OSSEC runs as a daemon process and notifies you through alert logs when intrusion attacks occur. These alert logs are pushed to the osseclogs container in the storage account of the Azure Resource Group in which the AKS cluster is created.

Provide the following inputs in the properties file to enable OSSEC.

OSSEC Installation steps

  1. Enable the INSTALL_OSSEC flag in the properties file (trial.properties /enterprise.properties):

  2. As OSSEC is a daemon process, it continuously detects intrusion activities and stores alerts in the alerts.log file. A cron job is configured to push the alerts from /var/ossec/logs/alerts/alerts.log to the Azure Storage Account.
  3. NOTE: Configure the osseclogpush_cron_schedule cron job in the properties file (trial.properties/enterprise.properties) to set its frequency. Configure the crontab timing in the following format:
     * * * * *
     | | | | |
     | | | | +---- Day of the Week   (range: 1-7, 1 standing for Monday)
     | | | +------ Month of the Year (range: 1-12)
     | | +-------- Day of the Month  (range: 1-31)
     | +---------- Hour (range: 0-23)
     +------------ Minute (range: 0-59)
     * = any value
    

    For example, if you configure the crontab timing as 00 16 * * *, this indicates that the crontab runs every day at 16:00:00 (UTC).

    • You can modify the default values of the cron job, if required.
    • All cron job timings follow UTC timezone.
  4. Place your SSH private key in the sshkeys folder with name id_rsa.
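
For example, a hypothetical osseclogpush_cron_schedule value in this format, matching the 16:00 UTC example above:

```shell
# Push OSSEC alerts to the storage account daily at 16:00 UTC
osseclogpush_cron_schedule="00 16 * * *"
```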

Access log files of OSSEC in Virtual Machine

Follow these steps to access OSSEC logs in the Virtual Machine.

  1. Use the following SSH command to log on to Jumpbox

    $ ssh devops@<Jumpbox_Public_IP_Address>

  2. Use the following SSH command to log on to the Virtual Machine from the Jumpbox

    $ ssh -i id_rsa azureuser@<Virtual_Machine_IP_Address>

  3. Execute the following command to login as a root user.

    $ sudo su

  4. You can view the syslogs at /var/log/syslog.

    $ cd /var/log/

    $ cat syslog

  5. Logs created by OSSEC daemons are stored in the sub directories of /var/ossec/logs.

    • You can view the OSSEC logs at /var/ossec/logs/ossec.log

      $ cd /var/ossec/logs/

      $ cat ossec.log

    • You can view the OSSEC alerts at /var/ossec/logs/alerts/alerts.log.

      $ cd /var/ossec/logs/alerts/

      $ cat alerts.log
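Each entry in alerts.log carries a Rule: header with the rule ID and severity level. To skim just those fields, a grep/awk one-liner can help. The sketch below runs against a sample file created in /tmp so it works anywhere; on the VM, point it at /var/ossec/logs/alerts/alerts.log instead:

```shell
# Create a sample alert in OSSEC's standard alert format
# (on the VM, read /var/ossec/logs/alerts/alerts.log instead).
cat > /tmp/alerts_sample.log <<'EOF'
** Alert 1597240793.1234: - syslog,sshd,authentication_failed,
2020 Aug 12 13:59:53 vm-node-1->/var/log/auth.log
Rule: 5716 (level 5) -> 'sshd: authentication failed.'
Src IP: 203.0.113.7
EOF

# Print the rule ID and severity level of each alert.
grep '^Rule:' /tmp/alerts_sample.log | awk '{print $2, $4}' | tr -d ')'
# → 5716 5
```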

Edit cron jobs

  • To view the existing cron jobs, use the following command on the VM.

    $ crontab -l

  • Use the crontab -e command to open the crontab list in edit mode.

Accessing Logs in Azure Storage Account

All the logs generated by OSSEC are pushed to the Azure Storage Account.

  1. Log on to the Azure portal (portal.azure.com) using your Microsoft account credentials.

  2. Select Resource groups from the left navigation pane.

    All existing resource groups appear.
    Select the resource group in which the AKS Cluster is created.

  3. Select the storage account from the list of resources available in the resource group.
  4. Click Blobs to see all the containers available in the storage account.

  5. Select the osseclogs container from the list of containers available in the storage account.

    The page displays the log files pushed by OSSEC from Virtual Machine.

  6. Click Download to view the log files.

    You can now view all the logs that are pushed by OSSEC from the virtual machine.

    NOTE: To provide customized inputs (other than the defaults) for the OSSEC installation, edit the preloaded-vars.conf file as follows:
    1. Log in to the specific Virtual Machine (node) using SSH keys in the terminal.
    2. Go to the /home/azureuser/ossec-hids-2.9.0/etc/ directory using the following command.

      $ cd /home/azureuser/ossec-hids-2.9.0/etc/

    3. Open the preloaded-vars.conf file and edit it as required.
    NOTE: To change the existing OSSEC configuration, edit the ossec.conf file as follows:
    1. Log in to the specific Virtual Machine (node) using SSH keys in the terminal.
    2. Go to the /var/ossec/etc directory using the following command.

      $ cd /var/ossec/etc

    3. Open the ossec.conf file and edit it as required.
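For orientation, ossec.conf is an XML file. For example, the file-integrity scan interval is controlled by the <frequency> element (in seconds) under <syscheck>. A minimal fragment, with an illustrative value:

```xml
<ossec_config>
  <syscheck>
    <!-- Run the file-integrity scan every 2 hours (7200 seconds);
         the value shown here is illustrative, not a recommended default. -->
    <frequency>7200</frequency>
  </syscheck>
</ossec_config>
```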

Marketing Catalog Microservice

The Marketing Catalog Microservice is a comprehensive, system-independent source of product details and marketing content across all of the bank's products, and it can also consume products and services from outside the bank. Examples include products offered by FinTechs (such as payment devices) or real banking products that are sold and marketed on behalf of other banks. Banks can choose to ingest product details from an existing core or product management system, or create and manage them directly in the Marketing Catalog microservice.

Prerequisites

  • Run the installation on a fresh virtual machine.

    IMPORTANT: If you execute the file on an existing virtual machine, you must ensure that either Java is not installed on it or that the machine points to Java 8.

  • Create a cluster in your Atlas account and allow access from anywhere in the IP access list, which is present under Network Access. Then, create a database user for your cluster and connect to your cluster. Refer to Get Started with Atlas for the exact procedure for this step.

    IMPORTANT: You must allow access from anywhere in the IP access list of the cluster. To do so, log in to your cluster, go to IP Access List under Network Access, click Edit, select Allow access from anywhere, and then save.

  • Install a mongo shell of the same version as your cluster. For example, if your cluster version is 4.2.10, you must install mongo shell version 4.2.10.

Deployment

  • If you want to create the marketing catalog microservice resources along with the end-to-end installation of Quantum Fabric on Azure cloud, do the following:
    1. Provide the required inputs in the conf/enterprise.properties file.
    2. Execute the install-fabric.sh file.
  • If you want to create only the marketing catalog microservice resources, do the following:
    1. Copy the marketing_catalogs folder from the KonyFabricContainersAzure-_/lib/microservices/ path.
    2. Provide the required inputs in the conf/marketing_catalog_ms.properties file.
    3. Execute the install_marketing_catalog_ms.sh file present under the marketing_catalogs folder.
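The standalone steps above amount to a copy, configure, and run sequence. The sketch below simulates it with a stub bundle under /tmp so it can run anywhere; in a real deployment the source folder is the extracted KonyFabricContainersAzure archive, and the property key shown is a hypothetical placeholder for the inputs your properties file actually requires:

```shell
# Stand-in for the extracted bundle (real path:
# KonyFabricContainersAzure-_/lib/microservices/marketing_catalogs).
SRC=/tmp/bundle/lib/microservices/marketing_catalogs
mkdir -p "$SRC/conf"
printf '#!/bin/sh\necho marketing-catalog-install-started\n' \
  > "$SRC/install_marketing_catalog_ms.sh"
chmod +x "$SRC/install_marketing_catalog_ms.sh"

# 1. Copy the marketing_catalogs folder out of the bundle.
rm -rf /tmp/marketing_catalogs && cp -r "$SRC" /tmp/marketing_catalogs
# 2. Provide the required inputs in conf/marketing_catalog_ms.properties
#    (the key below is a hypothetical placeholder).
echo 'SOME_REQUIRED_INPUT=value' \
  > /tmp/marketing_catalogs/conf/marketing_catalog_ms.properties
# 3. Execute the install script from inside the folder.
cd /tmp/marketing_catalogs && ./install_marketing_catalog_ms.sh
```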

Campaign Microservice

The Campaign microservice allows the branch user to create and store Campaign definition details.

Prerequisite

  • Run the installation on a fresh virtual machine.
  • IMPORTANT: If you execute the file on an existing virtual machine, you must ensure that either Java is not installed on it or that the machine points to Java 8.

Deployment

  • If you want to create Campaign microservice resources along with the end-to-end installation of Quantum Fabric on Azure cloud, do the following:
    1. Provide the required inputs in the conf/enterprise.properties file.
    2. Execute the install-fabric.sh file.
  • If you want to create only the Campaign microservice resources, do the following:
    1. Copy the campaigns folder from the KonyFabricContainersAzure-_/lib/microservices/ path.
    2. Provide the required inputs in the conf/campaign_ms.properties file.
    3. Execute the install_campaign_ms.sh file present under the campaigns folder.