Configuration and Setup for Fabric on Amazon EKS

Extracting Fabric Image Tars

Quantum Fabric images are no longer available to the public. Ensure that you download the tar files and extract the images.

For further information, refer to Extracting Images from tar files and pushing into private registry.

Configure the properties file

  1. Extract the FabricKube.zip file. The zip file is organized as follows:
    • lib: Contains dependent jars and helper bash functions
    • samples: Contains sample Fabric deployments
    • templates: Contains the following files
      • fabric-app-tmpl.yml: YAML template for Fabric deployments
      • fabric-db-tmpl.yml: YAML template for Fabric Database schema creation
      • fabric-services.yml: YAML template for Fabric services
      • fabric-ingress.yml: YAML template for Ingress configuration
    • config.properties: Contains inputs that you must configure for the installation
    • generate-kube-artifacts.sh: A user script that is used to generate required artifacts
    • helm_charts [V9SP6 or later]: Contains the following packages, which can be used to deploy Quantum Fabric
      • fabric_app: Helm charts that are used to deploy Fabric
      • fabric_db: Helm charts that are used to create a database and insert tables into the Fabric database server

    Running the generate-kube-artifacts.sh script creates the artifacts folder, which contains YAML configuration files. These files are generated based on the config.properties file and must be applied later to deploy Quantum Fabric on EKS.

  2. Update the config.properties file with relevant information.

For more information about the properties, refer to the following section.
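
For illustration only, the following is a hedged sketch of what a few entries in config.properties might look like. INSTALL_ENV_NAME and the ## Install Components ### section header are referenced later in this guide; the component property names and values shown here are placeholders and may not match the actual keys in your file.

    # Install environment name, used to name the generated artifacts folder
    INSTALL_ENV_NAME=dev

    ## Install Components ###
    # Placeholder property names, for illustration only
    INSTALL_CONSOLE=true
    INSTALL_IDENTITY=true
    INSTALL_INTEGRATION=true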

Deploy Fabric on Amazon EKS

After you update the config.properties file, follow these steps to deploy Quantum Fabric on Amazon EKS:

  1. Generate the Fabric services by running the following command:
    ./generate-kube-artifacts.sh config.properties

    IMPORTANT: For V9SP6 (or later), you can use Helm charts to install and deploy Quantum Fabric on Amazon EKS. For more information, refer to Deploy Fabric using Helm Charts.

  2. Create the services by running the following command:
    
    
    kubectl apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-services.yml

    <INSTALL_ENV_NAME> is the install environment name input that you provided in the config.properties file.
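
    To confirm that the services were created, you can list them; the service names from the reference table later in this section (for example, kony-fabric-console) should appear in the output:
    kubectl get services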

  3. Create the ingress controller and the internet-facing Application Load Balancer (ALB). For more information, refer to the following blog post: Kubernetes Ingress with AWS ALB Ingress Controller.
    Use the fabric-ingress.yml file, which maps the created services to the load balancer paths.
    
    
    kubectl apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-ingress.yml

    This process generates a load balancer domain name. Temenos recommends that you use a custom domain name and terminate your SSL connection at the public load balancer. You need to obtain a custom domain name from a DNS provider, as well as an SSL certificate for that domain. To generate a certificate for your custom domain name and the corresponding ARN (Amazon Resource Name), refer to Requesting a public certificate.

    The generated ARN needs to be added to the annotations section of the fabric-ingress.yml file, as shown in the following example.
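
    The following sketch assumes the annotation names used by the AWS ALB Ingress Controller and is not the exact generated file; the certificate ARN value is a placeholder:

    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>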

NOTE:
  • All fabric services are exposed by using a single ALB.
  • You need to create a security group rule to allow traffic from the ALB to the EC2 managed nodes on port 8080, which is the listen port for the Fabric services (see the example command after this note).
  • The Fabric components for accounts, mfconsole, and workspace share the same deployment and service. Therefore, while creating the ingress objects for accounts, mfconsole, and workspace, the paths are mapped to the same service: kony-fabric-console.
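
For example, such a security group rule can be added with the AWS CLI; this is a sketch, and the security group IDs below are placeholders for your node group and ALB security groups:

    aws ec2 authorize-security-group-ingress \
        --group-id <node-security-group-id> \
        --protocol tcp \
        --port 8080 \
        --source-group <alb-security-group-id>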


Make sure that the format of the route location is as follows:

<scheme>://<common_domain_name>/<fabric_context_path>

For example, https://quantum-fabric.domain/mfconsole

Reference table for the mapping of paths and service names:

Fabric Component    Fabric Service Name         Context Path
mfconsole           kony-fabric-console         /mfconsole
workspace           kony-fabric-console         /workspace
accounts            kony-fabric-console         /accounts
Identity            kony-fabric-identity        /authService
Integration         kony-fabric-integration     /admin
Services            kony-fabric-integration     /services
apps                kony-fabric-integration     /apps
Engagement          kony-fabric-engagement      /kpns
ApiPortal           kony-fabric-apiportal       /apiportal
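
The following is a hedged sketch of how a few of these mappings might appear in fabric-ingress.yml, assuming the older serviceName/servicePort backend syntax and that the services listen on port 8080 as noted above; the generated fabric-ingress.yml file is the authoritative source:

    spec:
      rules:
        - http:
            paths:
              - path: /mfconsole/*
                backend:
                  serviceName: kony-fabric-console
                  servicePort: 8080
              - path: /authService/*
                backend:
                  serviceName: kony-fabric-identity
                  servicePort: 8080
              - path: /services/*
                backend:
                  serviceName: kony-fabric-integration
                  servicePort: 8080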

 

Deploy Kubernetes artifacts

After you create the Fabric services and ingress on EKS, follow these steps to deploy the remaining Fabric Kubernetes artifacts:

  1. Create the database schema by executing the following command.
    
    
    kubectl apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-db.yml

    The <INSTALL_ENV_NAME> is the name of the install environment that you provided in the config.properties file.

  2. The previous step executes a job that is responsible for creating the schemas. Verify the completion of the job by executing the following command.
    kubectl get job
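
    Optionally, you can block until the job completes; the job name below is a placeholder, so use the name shown in the kubectl get job output:
    kubectl wait --for=condition=complete --timeout=300s job/<db-job-name>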

  3. Create the Fabric deployments by executing the following command.
    
    
    kubectl apply -f ./artifacts/<INSTALL_ENV_NAME>/fabric-app.yml

    Based on the default replica count that is provided in the config.properties file, one deployment of every Fabric component is created. The Fabric deployments can be scaled up later as required.
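
For example, to scale a Fabric deployment up later, you can use kubectl scale; the deployment name below is a placeholder, so use the names shown by kubectl get deployments:

    kubectl scale deployment <fabric-deployment-name> --replicas=2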

Deploy Fabric using Helm Charts

To deploy Fabric by using Helm charts, make sure that you have installed Helm, and then follow these steps:

  1. Open a terminal console and navigate to the extracted folder.
  2. Generate the Fabric services by running the following command:
    ./generate-kube-artifacts.sh config.properties svcs

    To generate the services configuration, you only need to fill in the INSTALL_ENV_NAME property and the ## Install Components ### section in the config.properties file.
  3. Navigate to the helm_charts folder by executing the following command.
    cd helm_charts
  4. Create the fabricdb and fabricapp Helm charts by executing the following commands.
    helm package fabric_db/
    helm package fabric_app/
  5. Install the fabricdb Helm chart by executing the following command.
    helm install fabricdb  fabric-db-<Version>.tgz
  6. Install the fabricapp Helm chart by executing the following command.
    helm install fabricapp  fabric-app-<Version>.tgz
IMPORTANT:
  • Make sure that you generate the artifacts (generate-kube-artifacts.sh) before creating the Helm charts.
  • Make sure that the fabricdb Helm chart installation is complete before installing the fabricapp Helm chart.
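
For example, you can confirm that the fabricdb release is deployed and that its database-initialization job has completed before installing fabricapp by using standard Helm and kubectl commands; the exact job name depends on the chart:

    helm status fabricdb
    kubectl get jobs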

Custom CA Certificates Support in Helm Chart

The Helm charts include enhanced support for incorporating custom CA certificates, which streamlines the previously manual configuration process. To enable the use of a custom CA certificate in the deployment, follow these steps:

  1. Set the enabled parameter to true under the cacert section in the values.yaml file located in the fabric-dbinit directory, as illustrated below:

cacert:
  # Set to true to enable custom CA certificate
  enabled: true

  2. Update the values.yaml file under the fabric directory as follows:

cacert:
  # Set to true to enable custom CA certificate
  enabled: true
  # Provide the relative path to the cacerts file within the Helm directory to establish a custom truststore in the pods
  # Ensure that the cacerts file is located within the fabric Helm chart directory
  path: "cacerts"

The cacerts file that you provide must be present within the fabric directory, and the relative path to that file must be specified in the path property.

Launch the Fabric Console

  1. After all the Fabric services are up and running, launch the Fabric console by using the following URL.
    <scheme>://<fabric-hostname>/mfconsole

    The <scheme> is http or https based on your domain. The <fabric-hostname> is the host name of your publicly accessible Fabric domain.
  2. After you launch the Fabric Console, create an administrator account by providing the appropriate details.

After you create an administrator account, you can sign in to the Fabric Console by using the credentials that you provided.

 

Data Plane Configuration Options

AWS Fargate is a compute engine for containers. With AWS Fargate, you do not need to provision or manage servers, and you pay for resources on a per-application basis. Because all applications are isolated in Fargate, you also get improved security.

With Amazon EKS, you can create a data plane that consists of managed EC2 instances only, a Fargate profile only, or a combination of both EC2 instances and a Fargate profile.

For the advantages and disadvantages of the various data plane options, refer to the following comparison:

Managed EC2
  Advantages:
    • Better control and visibility into the infrastructure that is being used.
    • Allows Auto Scaling, which needs to be configured.
  Disadvantages:
    • You need to manage both the infrastructure and the applications.
    • Node capacity can lie unused; you are responsible for packing the maximum number of containers onto each node.

AWS Fargate
  Advantages:
    • Less complexity: you can focus on deploying and managing applications rather than managing infrastructure.
    • Security: you are responsible only for application-level security.
    • Scaling: Auto Scaling is built in. You do not need to set up and scale the cluster capacity, and the cluster can handle sudden spikes in traffic very well.
    • Lower costs: you are charged for the duration of container workload usage and not for the duration for which a VM instance is running.
  Disadvantages:
    • Some features of Kubernetes, such as DaemonSets and hostPath volumes, are not available.

Mixed Mode
  Advantages:
    • If you have acquired EC2 instances at a reasonable cost (Spot or Reserved), you can run the fixed component of the workload on EC2 and the variable component of the workload on Fargate.
  Disadvantages:
    • Additional planning is required to decide how the workload must be split between managed EC2 and Fargate.
    • Additional manual configuration changes are required.


Choose Managed EC2, Fargate, or Mixed Mode based on the advantages and disadvantages highlighted in the comparison above. For more information, refer to the Amazon documentation, especially the section on Fargate Pricing.

Steps to set up a Fargate data plane

  1. Create a Fargate profile, and select default as the namespace while creating the profile (see the example command after these steps).

    NOTE: All the generated fabric artifacts are configured with the default namespace. Unless you want to deploy the artifacts to a different namespace, no further configuration is needed.

  2. Deploy the Ingress controller to the Fargate profile by following the steps in the following blog post: How do I set up the AWS Load Balancer Controller on an Amazon EKS cluster for Fargate?
  3. Deploy the Fabric artifacts as described in the earlier sections. For more information, refer to Deploying Fabric on Amazon EKS.
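
The following is a hedged sketch of creating a Fargate profile with eksctl; the cluster and profile names are placeholders, and you can also create the profile from the AWS console:

    eksctl create fargateprofile \
        --cluster <cluster-name> \
        --name <fargate-profile-name> \
        --namespace default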

Steps to set up a Mixed data plane

  1. Extra planning is required to decide which components must be deployed to the EC2 managed data plane and which components must be deployed to the Fargate profile.
  2. Based on the planning, while creating a Fargate profile, you need to specify the namespace in which the Fabric components need to be deployed.
  3. The generated Fabric artifact YAML files need to be edited to specify the namespace in which the deployment must occur.
  4. An Ingress object in one namespace cannot reference services in another namespace. To work around this issue, you need to create Ingress objects in each namespace that correspond to the services deployed in that namespace. The alb.ingress.kubernetes.io/group.name annotation can be used to group the Ingress objects together so that Amazon provisions a single ALB (see the sketch after this list).
  5. The fabric-common-secrets secret needs to be duplicated in both namespaces so that the deployments in both namespaces can access it.
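
The following is a hedged sketch of how the group.name annotation might be applied to the two Ingress objects, assuming your AWS Load Balancer Controller version supports it; the group name fabric-group is a placeholder:

    # Ingress in the default namespace (Console, API Portal, Engagement)
    metadata:
      namespace: default
      annotations:
        alb.ingress.kubernetes.io/group.name: fabric-group

    # Ingress in the fabric-runtime namespace (Identity, Integration)
    metadata:
      namespace: fabric-runtime
      annotations:
        alb.ingress.kubernetes.io/group.name: fabric-group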

NOTE: You can refer to the Samples/mixedDataPlane folder in the FabricKube.zip file for a sample configuration where the Identity and Integration components are deployed to the fabric-runtime namespace in a Fargate data plane; and the rest of the components (Console, API Portal, and Engagement) are deployed in the EC2 managed data plane.