Workflow

Overview

A workflow is a visual representation of a business process. It lets you build processes visually with low-code, drag-and-drop functionality. A workflow consists of execution blocks that can be configured as required to accomplish your business use case. Execution blocks offer a wide range of functionality, such as designing complex business logic, invoking services, and triggering emails.

The Workflow service was introduced in Quantum Fabric in the V9 release. Those workflow services were designed to be triggered (instantiated) from an object service invocation. From V9 SP2, a new trigger type for the Workflow service, called Event triggered workflow, was introduced. It is independent of an object invocation, and this type of workflow can be started through events on the Fabric bus. In other words, similar to other Fabric services, a workflow service can now act as an event listener and be invoked by firing events.

From V9 SP3 onwards, Fabric workflows can be invoked directly from external clients as an Orchestration API. This is achieved through a new type called “Integration Service Triggered Workflow”. In effect, Fabric workflows can now replace the existing Fabric Orchestration services, because workflows provide a visual orchestration designer along with data mapping, persistence in the workflow context, and conditional logic.

Supported Workflow Types

The following types of Workflow are supported:

  • Object model triggered workflow
  • Event triggered workflow
  • Integration Service triggered workflow

Let’s take a detailed look at each workflow type.

  • Object model triggered workflow: This type of workflow is tightly coupled to an object model and is triggered through state transitions on the linked object’s workflowstate field.

    You can link a Workflow service with an object either while creating the workflow or through the object service. The workflow is linked to a “state” field in the object, and whenever the state field is changed by a create or update on the object (a PUT or POST call, or any custom verb based on PUT/POST), the workflow is triggered. The workflow then executes all subsequent tasks defined within it and completes the entire backend process related to the linked object. To get a quick overview of workflow, refer to Workflow - An Overview.

  • Event triggered workflow: This type of workflow, as the name suggests, is triggered by listening to events on Fabric. You can configure the Workflow service to listen to an existing server-event topic of integration/object service verb invocations, or to events raised through custom code. An event triggered workflow is initialized through the Signal Start node, which listens to a server event raised on the Fabric event bus.

    When to use event triggered workflows:

    • If workflows need to start or resume based on listening to Events on Fabric.
    • If workflows need not be triggered through object model state transitions, that is, if your workflow is not a state-managed workflow. (State-managed workflows can be configured by using object-linked workflows.)

    • If workflows need to execute in the background when some other Fabric service is invoked. You can configure Fabric services to raise events for particular topics, which can then be subscribed to by event workflows.
    • In addition to acting as event listeners, event triggered workflows can also raise events back to Fabric at any point during workflow execution by using the Throw Signal node. This way, other Fabric services can be notified of intermediate events raised from a workflow.

    How event triggered workflows are different from object triggered workflows:

    • Event workflows are not associated with any object.

    • Event workflows are not state-managed workflows. Therefore, they do not need to link to any object to maintain their state.

    • Event workflows do not contain a Start task. They have a Signal Start node that listens to an event topic and starts the workflow.

    • Event workflows do not contain user tasks, which in object-linked workflows are coupled to state field transitions. However, event triggered workflows can be paused by using a Catch Signal node and resumed by subscribing to an event topic.

    • Event workflows can be invoked from other Fabric services by using server events directly on the server side, whereas object-linked workflows are invoked through object verb invocations from an external client.

  • Integration Service triggered workflow: This type of Workflow service, called the Integration Service (Sync/Async) workflow type, runs based on the new Fabric Workflow adapter (integration service). Integration Service triggered workflows should be created by linking them to a new integration service of the type Fabric Workflow in the Fabric Console.

    This type of workflow, as the name suggests, is triggered or invoked from an integration service. Unlike the previous two types of workflows, it allows you to call the Fabric workflow directly as an orchestration service. By leveraging the workflow canvas, developers can visually design their service orchestrations, map data from one service to another in a sequential flow, and execute decisions through business rules or the built-in Exclusive Gateway.

    In order to use a Fabric workflow as an orchestration service, a new trigger type called “Integration Service” has been introduced in the workflow designer. Workflows can thus be given an API façade by mapping them to a Fabric integration service, choosing the connector type “Fabric Workflow” under the Integration tab. These workflows can be called directly from clients through the Fabric SDK function that invokes an integration service (a purely illustrative sketch follows), or from other backends, microservices, or third-party systems as an API endpoint.
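    The following is a minimal, illustrative sketch (not taken from the product reference) of how a Quantum Visualizer client might call such a workflow-backed integration service through the Fabric client SDK. The service name "WorkflowOrchestration" and the operation name "processLoan" are hypothetical placeholders for whatever you configure in the Fabric Console.

    function invokeWorkflowOrchestration() {
        // Obtain the initialized Fabric client instance.
        var client = kony.sdk.getCurrentInstance();

        // "WorkflowOrchestration" is a hypothetical integration service of the
        // connector type "Fabric Workflow"; "processLoan" is its operation.
        var workflowService = client.getIntegrationService("WorkflowOrchestration");

        var requestData = { "loanId": "L-1001", "amount": 25000 };
        var headers = {};

        workflowService.invokeOperation("processLoan", headers, requestData,
            function (response) {
                // Sync: the final workflow result (with a workflow instance_id).
                // Async: an immediate acknowledgement; final results arrive as
                // WebSocket messages on the subscribed topic.
                kony.print("Workflow response: " + JSON.stringify(response));
            },
            function (error) {
                kony.print("Workflow invocation failed: " + JSON.stringify(error));
            });
    }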

    When to use integration service triggered workflows:

    • If you want to invoke workflows directly from clients/backends as an API endpoint
    • If you want to visually design an orchestration service
    • If you want to execute conditional logic/business rules in your orchestration
    • If you want the flexibility to map data from one service to another (without the limitation of keeping the same attribute/parameter names)
    • If you want to send only the relevant response parameters from your orchestration result (without the limitation of sending an aggregate of all response parameters to the client)
    • If you want to map a workflow/orchestration to an Object service (by selecting the endpoint type Integration & Orchestration service while creating an object service)

    This new category of workflows can send responses to clients either synchronously or asynchronously. Synchronous workflows are essentially complex service orchestrations or straight-through-processing workflows in which the calling integration service waits until the workflow responds with the final result. Asynchronous workflows are better suited for long-running processes that must wait for an update from back-end systems. These workflows immediately respond to the calling integration service with an “Acknowledgement”, and the final result(s) from the workflow are sent to the client as WebSocket messages.

    The following response types are supported for the Integration workflow trigger type:

    • Sync (Synchronous): The calling integration service sends the response to the client after the workflow execution is complete. This type of response is primarily used for straight-through processing, when the workflow is expected to complete quickly.

      • For Sync, the integration service response is blocked until the workflow execution is complete, and the final response is sent with a workflow instance_id.
      • Sync workflows cannot contain Timer, User Task, Signal Start, Catch Signal, or Throw Signal nodes.
      • Sync workflows should have Start and End nodes.
      • Each output path should have one End node with an output mapper. The End node’s output response mapping is sent back to the client through the integration service response.

      The following flow diagram details how the Sync Workflow works:

    • Async (Asynchronous): The calling integration service sends an acknowledgment that the workflow has been triggered. This type of response is primarily used for long-running workflows. Responses to clients/devices are sent as events that are received over a WebSocket.

      • The integration service gets an immediate acknowledgement response from the workflow engine with a workflow instance_id.
      • Server-broadcast response: The Server-broadcast event type allows you to raise a server event that can be listened to by other Fabric services.
      • Private client response: The Client-only event type allows you to raise a private event on the configured Topic Name over a WebSocket channel associated with a specific client.

        IMPORTANT: Only clients subscribed to the Topic Name over a WebSocket channel will receive private responses to the workflow execution.
        For more details, refer to Server Event APIs SDKs - Quantum Visualizer.

      • Each output path may have a Throw Signal with an output response mapper. The WebSocket message event will contain the output response mapped through the Throw Signal mapper. However, configuring a Throw Signal and sending an async response to clients over a WebSocket is optional and depends on the business use case.

      The following flow diagram details how the Async Workflow works:

Working with Workflows and Nodes

There are different kinds of nodes in a workflow, and each node represents a specific task or event. These nodes are connected to each other using sequence flows. When you add these nodes to a workflow and connect them as per the required logic, the complete workflow is executed automatically whenever it is invoked.

Different types of Workflow Nodes/Tasks are as follows:

NOTE: The set of supported nodes/tasks varies by workflow type (Object, Event, and Integration Service Sync/Async).
  • Start – It is an event that represents where the workflow starts. The Start event has one outgoing flow.

IMPORTANT: For Integration Service triggered workflow > Sync, the Process Incoming Payload configuration is mandatory in the Start node.
Refer to Start node for Integration Service workflow > Sync.

IMPORTANT: For Integration Service triggered workflow > Async, the Process Incoming Payload configuration, along with the Correlation ID configuration, is mandatory in the Start node.
Refer to Start node for Integration Service workflow > Async.

  • User Task – It is used to represent that a user action is required in a workflow. For example: submitting a loan application, or a manager approving an expense.
  • Signal Start - The Signal Start node is used to start a workflow by listening to an event on the Fabric bus.
  • Catch Signal - The Catch Signal node is used to pause a workflow's execution and resume it based on an event it is listening to on the Fabric bus.
  • Throw Signal - The Throw Signal node is used to raise (publish) events from the workflow during or at the end of workflow execution.
  • Service Task – A Service task is used to invoke a pre-configured service (Integration, Orchestration, Object and Rules Services) available in the Fabric App.
  • Timer - A Timer is used to create a delay in the workflow to prevent the immediate triggering of a subsequent event/activity. Whenever a Timer node is encountered, the workflow execution is paused for the configured number of minutes/hours and resumes when the delay elapses.
  • Message Task – It represents an intermediate event through which you can send notifications to the required recipients. The recipient can be an end user or the manager of the concerned department. For example, if the loan approval is pending from the branch manager, then you can use the Message task to send the notification to the manager.
  • Script Task - You can use this task to execute business logic in the workflow. You can select a pre-configured JavaScript service to execute the business logic from this type of task.
  • Business Rule Task - You can use this task to execute a set of rules in the workflow. You can select a pre-configured Rules service available in the Fabric app.
  • Exclusive Gateway – Exclusive Gateways are used to model decisions in a process through an exclusive (XOR) disjunction. An exclusive gateway can have multiple outgoing sequence flows, and each outgoing sequence flow has its own decision condition. The gateway evaluates the decision condition of each outgoing sequence flow, and the first sequence flow whose condition matches is executed. (A conceptual sketch of this first-match evaluation follows this list.)
  • Parallel Gateway - You can use this node to model concurrency in a process. The Parallel Gateway node can act as either Fork or Join, which allows forking into multiple paths of execution or joining multiple incoming paths of execution.
    From V9 SP4 GA onwards, you can create a sequence of parallel tasks (in multiple paths) using the Parallel Gateway.
  • End – It is an event that represents the end of a workflow.

IMPORTANT: For Integration Service triggered workflow > Sync, the Response Output configuration is mandatory in the End node.
Refer to End node for Integration Service workflow > Sync.

IMPORTANT: For Integration Service triggered workflow > Async, the End event represents the end of the workflow.
Refer to End node for Integration Service workflow > Async.
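The following is a conceptual JavaScript sketch, not the Workflow Engine's actual implementation, of the first-match evaluation that the Exclusive Gateway performs over its outgoing sequence flows. The condition expressions and path names are hypothetical examples of what you might configure in the designer.

// Conceptual sketch only: the gateway evaluates the decision condition of each
// outgoing sequence flow and follows the first flow whose condition matches.
// "context" stands for the workflow data (for example, FABRIC_WORKFLOW_CONTEXT).
function chooseOutgoingFlow(context) {
    if (context.loanAmount <= 10000) {
        return "autoApprovalFlow";       // first matching condition wins
    } else if (context.loanAmount <= 100000) {
        return "managerApprovalFlow";
    } else {
        return "committeeReviewFlow";    // remaining/default path
    }
}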

To go to the Workflow tab from the Quantum Fabric Console dashboard, click Add New or select any existing Quantum Fabric app, and click the Workflow tab. The Workflow tab landing page appears.

You can do the following from the Workflow landing page:

Create a New Workflow

Nodes/Tasks

After the nodes/tasks are placed in the canvas area, you can manage the properties of a node from the Properties pane. These properties vary for each type of node. Defining these properties is crucial because the behavior of each node depends on them when a workflow is triggered.

Namespaces

The following namespaces are available in a workflow for managing workflow data:

  • IDENTITY: It denotes that the data is from the user or security attributes from the identity service response.
  • DEVICE_REQUEST: It represents the data that is available in the client request through a User Task in the case of an object-linked workflow. This data is persisted for use in the workflow until the next Device Request payload comes from the client.

    In the case of an event triggered workflow, this scope represents the data that is available in the incoming event payload that started or resumed the workflow. This data is persisted for use in the workflow until the next event payload is copied into DEVICE_REQUEST at the next Catch Signal node.

  • FABRIC_WORKFLOW_CONTEXT: It indicates that the data is available in the persistent store of the current workflow instance. You can use it to store required output responses from any service tasks that will be needed later in the workflow, as well as to copy data from incoming requests for further processing.
  • BACKEND_RESPONSE: It indicates that the data is available from the backend response of an object service. It could be from a PUT/POST/custom verb call on the linked object while invoking a user task. This namespace is available only in the case of object-linked workflows.
    NOTE: The BACKEND_RESPONSE namespace is applicable only for the Object workflow trigger type.
  • SESSION: It indicates that the data is available in the session data.
  • None: You can use it to pass static values. When you select None as the namespace, the value entered in the text-box is passed as a string.

    NOTE: If you have selected None in Namespace list, the data provided in the Value column will be considered as the data for the respective parameter.

The following are namespaces available when Looping is enabled:

NOTE: For more details, refer to Advanced Configuration > Enable Looping.

  • LOOP OUTPUT VARIABLE (LOOP_OUTPUT_VAR)

    When the associated service is executed within a sequential loop, the response of each iteration is populated into the loop output variable so that it can be used for configuring the exit criteria. This is applicable only for the Sequential loop execution type. The loop output variable holds the service response for the current iteration.

  • LOOP INPUT VARIABLE (LOOP_INPUT_VAR)

    During iteration, the current object under iteration is assigned to the loop input variable. The loop input variable is used to supply the input values to the associated service.

    For example, let us look into the following scenario:

    1. A workflow is configured with a service task, which is enabled for looping.
      • The following data is available for the looping expression, configured as FABRIC_WORKFLOW_CONTEXT.Users.
        User Details in FABRIC_WORKFLOW_CONTEXT (the Input Configuration for the Service Task maps the Userid from each item):
        {"Users":[
              {
                 "Userid":"123"
              },
              {
                 "Userid":"234"
              },
              {
                 "Userid":"345"
              }
           ]
        }
        NOTE: The Userid is extracted during each iteration and assigned to the input parameter of the associated integration service.
    2. The response of the integration service for each user ID is associated with the loop output variable.
      • Sample response of the integration service, which is populated into the loop output variable on each iteration:
        {
           "id":"123",
           "firstName":"John",
           "lastName":"Matthew",
           "isActive":true
        }
    3. The Loop exit criteria can be optionally configured to break the loop when the condition is met.

      Loop Exit Condition: LOOP_OUTPUT_VAR.isActive == false

      Service response from iteration 1:
      { "id":"123", "firstName":"John", "lastName":"Matthew", "isActive":true }

      Service response from iteration 2:
      { "id":"234", "firstName":"James", "lastName":"Smith", "isActive":false }

      Here, the loop exits after iteration 2 because the loop exit condition (LOOP_OUTPUT_VAR.isActive == false) is met.
      IMPORTANT: The Loop exit criteria is optional and is applicable only for Sequential looping.

      Finally, after the loop completes, the individual user data from each integration service execution is combined into the configured output parameter, as shown below. (A conceptual code sketch of the whole loop follows this example.)

      Final Response in FABRIC_WORKFLOW_CONTEXT (combined into the configured Output Parameters):
      {
         "usersDetails":[
            {
               "id":"123",
               "firstName":"John",
               "lastName":"Matthew",
               "isActive":true
            },
            {
               "id":"234",
               "firstName":"James",
               "lastName":"Smith",
               "isActive":false   }
         ]
      }
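      The following conceptual JavaScript sketch mirrors this sequential-looping example. It is not how the Workflow Engine is implemented; it only illustrates how LOOP_INPUT_VAR, LOOP_OUTPUT_VAR, the optional exit criteria, and the combined output parameter relate to each other. getUserDetails is a hypothetical stand-in for the associated integration service.

      // Conceptual sketch of sequential looping over FABRIC_WORKFLOW_CONTEXT.Users.
      function runSequentialLoop(FABRIC_WORKFLOW_CONTEXT, getUserDetails) {
          var usersDetails = [];                                        // configured output parameter

          for (var i = 0; i < FABRIC_WORKFLOW_CONTEXT.Users.length; i++) {
              var LOOP_INPUT_VAR = FABRIC_WORKFLOW_CONTEXT.Users[i];    // current item under iteration
              var LOOP_OUTPUT_VAR = getUserDetails(LOOP_INPUT_VAR.Userid); // response of this iteration

              usersDetails.push(LOOP_OUTPUT_VAR);

              // Optional loop exit criteria (Sequential looping only),
              // for example: LOOP_OUTPUT_VAR.isActive == false
              if (LOOP_OUTPUT_VAR.isActive === false) {
                  break;
              }
          }

          // After the loop completes, the combined responses are placed
          // into the configured output parameter.
          FABRIC_WORKFLOW_CONTEXT.usersDetails = usersDetails;
          return FABRIC_WORKFLOW_CONTEXT;
      }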

NOTE: For more information on using the best practices for workflow, refer to Workflow Best Practices.

Advanced Configurations - Workflow

You can perform the following advanced configurations while creating a Workflow Service.

Enable Looping

From V9 SP4, support for the Looping feature is available for Service tasks, Script tasks, and Business Rule tasks.

Looping allows invoking a service for a collection of items and combining all the invoked responses.

You can use the Looping feature to specify whether the task is looped sequentially or in parallel. For a use case where, for each input value, the associated service must be processed in order, choose Sequential looping.

For other scenarios, you can choose Parallel to process each input value simultaneously. For example, for a use case where account details are fetched for a set of user IDs, the Parallel execution type can be used. This runs the associated services in parallel, independently for each user ID.
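As a rough, conceptual illustration of the difference between the two execution types (and not of the engine's actual implementation), consider the sketch below. fetchAccountDetails is a hypothetical stand-in for the associated service.

// Hypothetical stub representing the associated service (for illustration only).
function fetchAccountDetails(userId) {
    return Promise.resolve({ userId: userId, balance: 0 });
}

// Sequential: each iteration waits for the previous one to finish, which keeps
// the processing order and allows an exit criteria to break out of the loop early.
async function runSequentially(userIds) {
    var results = [];
    for (var i = 0; i < userIds.length; i++) {
        results.push(await fetchAccountDetails(userIds[i]));
    }
    return results;
}

// Parallel: all iterations start independently and their results are combined.
async function runInParallel(userIds) {
    return Promise.all(userIds.map(function (id) {
        return fetchAccountDetails(id);
    }));
}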

How to enable looping for Workflow services:

  1. Create a Workflow service.
  2. Select a Service task, Script task, or Business Rule Task node.
  3. Click the Properties pane to expand it.
  4. Configure Service Type, Linked services and Operations.
  5. While configuring the Input Parameters, configure the LOOP_INPUT_VAR namespace.
    IMPORTANT: For Looping, two namespaces are introduced: LOOP_INPUT_VAR and LOOP_OUTPUT_VAR. These namespaces are visible only when Looping is enabled. Refer to Namespaces for more details.
  6. Go to the Advanced section in the Properties pane.
  7. Select the Enable Looping check box.

    The Loop Execution Type list appears. It specifies whether the task must be looped sequentially or in parallel. If, for each input value, the associated service must be processed in order, choose Sequential. Otherwise, choose Parallel to execute the iterations simultaneously.

  8. Select the Sequential or the Parallel option from the Loop Execution Type list.

    After you select the Loop Execution Type option, the Loop Condition Type list appears. This helps the Workflow Engine decide how to loop a particular task. The list provides two options, Loop Counter and Loop Collection, which define how many times the loop executes.

  9. Select one of the options from the Loop Condition Type list:
    • The Loop Counter option indicates that the task is iterated based on a specified count. The loop counter expects a static number or an expression, which is evaluated at runtime.
      • If you select Loop Counter from the Loop Condition Type list, the Count and Expression options are displayed:

        1. Configure one of the following:
          • Count: Select the option and specify the number of iterations as a numeric value.

            Or

          • Expression: Select the option and specify the number of iterations as evaluated from the provided expression.

            The Expression can be configured by using any Workflow namespace.

          NOTE: The loop count text input can be a static value or an expression, for example workflow_context.count. This is similar to the Timer node expression. For more details, refer to Timer Node.

    • Loop Collection indicates that the task is iterated based on a specified input collection or array object.

      If you select Loop Collection from the Loop Condition Type list, the Loop Collection Expression field is displayed.

      1. Specify the collection expression from workflow namespace(s) for iteration of the associated service.

        Example: Fabric_workflow_context.userIDs

  10. This step is applicable only if you have selected the Sequential loop execution type. Optionally configure Loop Exit Criteria to specify a condition that (if met) breaks the execution out of the loop and proceeds to the next workflow step.

    • Loop Exit Criteria: If you want to break the loop based on certain conditions, you can use the Loop Exit Criteria.
      For example, if you want to stop looping once the account balance reaches at least 10,000, you can configure a logical expression such as BACKEND_RESPONSE.balance >= 10,000.
      To create an expression, follow these steps (a conceptual sketch of how conditions and groups combine follows these steps):
      • Click ADD. The Loop Exit Criteria dialog appears.
      • Click Add Condition to configure a logical expression. You can also click Add Group and configure a group that can contain multiple logical expressions.
      • Each condition or group is associated with a logical operator (AND, OR). Select the required operator to determine how the condition or group is evaluated. If you select the Not check box, the selected condition is inverted.
      • Select the namespace from the list, and then add the related parameter in the Value field. For example, if you select BACKEND_RESPONSE from the namespace list and add balance in the Value field, it is read as BACKEND_RESPONSE.balance.
      • Select a comparison operator. The available options are ==, == null, >, <, !=, != null, >=, and <=. For example, >=.
      • Select the namespace for the right-hand side from the list, and then add the related value in the Value field. For example, select None from the namespace list and add 10,000 in the Value field.
      • NOTE: Based on this example, the complete condition reads as BACKEND_RESPONSE.balance >= 10,000.

      • IMPORTANT: For Looping, two namespaces are introduced: LOOP_INPUT_VAR and LOOP_OUTPUT_VAR.
        Refer to Namespaces for more details.
      • Click SAVE.
  11. Enter a description for the Loop execution type.
  12. Click SAVE to save these changes for the task.
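To make the condition builder more concrete, here is a conceptual JavaScript sketch (not the engine's implementation) of how conditions, groups, and the AND/OR/Not operators combine into a single exit criteria expression. The balance condition comes from the example above; the isActive and retryCount parameters are hypothetical.

// Conceptual equivalent of an exit criteria built in the Loop Exit Criteria dialog:
//   Condition:  BACKEND_RESPONSE.balance >= 10000
//   AND Group:  (LOOP_OUTPUT_VAR.isActive == false  OR  NOT (LOOP_OUTPUT_VAR.retryCount < 3))
function exitCriteriaMet(BACKEND_RESPONSE, LOOP_OUTPUT_VAR) {
    return (BACKEND_RESPONSE.balance >= 10000) &&
           ((LOOP_OUTPUT_VAR.isActive === false) || !(LOOP_OUTPUT_VAR.retryCount < 3));
}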

Use Existing Workflow Service

This feature helps you use an existing workflow in your Fabric app. You can either clone or add an existing workflow and modify it as required.

To use an existing workflow service, perform the following steps:

  • In the Workflow service tab, click Use Existing or in the left pane click the “+” icon and select Use Existing. The Existing services screen is displayed.

  • Select the required services from the Existing services screen and click Clone or Add. The Clone Service or Add Service status screen appears.

    • Clone: It creates a duplicate of the selected service. The changes made to the duplicate service will not affect the original service.

    • Add: It adds the selected service to the new Quantum Fabric app. The changes made to the service will affect all the apps using the service.
      If the service is part of any published app, you must unpublish the service to rename it.

    NOTE: If the list is long, you can search for the required service by using the Search option.

  • After the Clone or Add process is complete, the service is added to the Workflow services list.

  • Click the newly added service or open the Contextual menu and click Edit to configure the details of the service. For more information on configuring the details, refer to Create a Workflow.

Manage Workflows

You can manage the details of a service from the Contextual menu available adjacent to each service. The following options are available in the Contextual Menu:

  • Edit – Click to edit the details of a selected service. After you edit a service, save and republish all the apps that use this service to apply the changes.

  • Clone - Duplicates an existing service. Clone a service to create a different version of the same service. Changes made to a cloned service will not affect the original service. The name of a cloned service indicates that it is a copy of an existing service. You can edit the name of a cloned service.
  • Unlink From App - Use this option to unlink the required workflow from the linked Quantum Fabric app.
  • Unlink from Associated Object - Use this option to unlink the required workflow from the linked object of the Object service.

    NOTE: This option is disabled for event triggered workflow services.

  • Delete - Deletes a selected Workflow from Quantum Fabric Console. You cannot delete a service if the service is in use. You must do the following to delete the service:
    • Unpublish the fabric app.
    • Unlink the linked Object.
    • Unlink the service with the Fabric app.
    • Navigate to API Management > Workflow and delete the service from there.

      NOTE: When you delete a service that has multiple versions, only the active version is deleted.

  • Console Access Control – You can manage the users who can access this service from here. To know more, refer to Console Access Control.
  • Export as XML - Exports the current version of a service in the form of an XML file.
  • Export – Exports the service details in the form of a zip file. You can import this zip file to another Quantum Fabric app and use it. For more information, refer to Export and Import an Application.

NOTE: To view the use case related to Object Triggered Workflow and its implementation, refer to Object Triggered Workflow Implementation.

NOTE: To view the use case related to Event Triggered Workflow and its implementation, refer to Event Triggered Workflow Implementation.

NOTE: To view the execution status of a workflow service by using Quantum App Services Console, refer to Quantum App Services Console > Workflow Services section.