Transaction-Processing Framework

A transaction-processing solution serves as a routing and translation layer between internal applications and the formats (X12, EDIFACT, XML, CSV, and others) and transport protocols (AS2, FTP/S, and others) commonly used in electronic transactions between companies. Because a company typically cannot dictate that all of its trading partners adhere to a single configuration, it often must develop a custom solution for each trading partner. When custom software development is required for each partner, the effort and time needed to onboard new trading partners increases dramatically, limiting the business relationships a company can pursue.

A significant feature of the PortX Transaction Processing Framework is the Integration Hub Routing Engine. The Routing Engine is a Mule application that can process a wide range of messages from many different partners by dynamically applying rules and configuration data stored in Integration Hub.

Thus, non-programmers such as data analysts can use IHub to onboard new trading partner relationships and support new transaction types without having to develop, test, or deploy new components.

This page identifies high-level framework components and their roles in end-to-end transaction processing. For more detailed information about how IHub works, see Actors, Relationships and Artifacts.

Functional Architecture

B2B transaction processing typically proceeds through some combination of the stages shown in Figure 1, Functional Architecture.

Processing stages and their sequence may vary from implementation to implementation.
Figure 1. Functional Architecture


Receive

Each incoming transaction from a partner or internal system is delivered through a transport protocol Endpoint configured to receive the message.


Resolve

Using data that may be inside the document or in incoming metadata (such as HTTP headers or filenames), this stage identifies the partner and the transaction type, then uses these to retrieve the corresponding processing details from IHub. These details include:

  • Endpoint to which the transformed message is to be delivered

  • Schema to use for validation

  • Map to transform the message into the required target format
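The resolve lookup described above can be sketched as a keyed configuration lookup. This is an illustrative sketch only: the `TPM_CONFIG` structure, the `resolve_config` function, and all file names are assumptions, not the actual TPM API.

```python
# Hypothetical configuration keyed by (partner, transaction type),
# standing in for the processing details stored in Integration Hub.
TPM_CONFIG = {
    ("ACME", "850"): {
        "endpoint": "https://orders.internal.example/api/purchase-orders",
        "schema": "schemas/850-purchase-order.json",
        "map": "maps/x12-850-to-canonical-order.dwl",
    },
}

def resolve_config(metadata: dict) -> dict:
    """Identify the partner and transaction type from inbound metadata,
    then return the processing details stored for that pair."""
    key = (metadata["partner"], metadata["transaction_type"])
    try:
        return TPM_CONFIG[key]
    except KeyError:
        raise LookupError(f"No TPM configuration for {key}")
```

The important design point is that the pipeline code never hard-codes partner behavior; it only carries the resolved endpoint, schema, and map forward as context for later stages.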


Transform

Each incoming message is translated into a message format appropriate for the target service being invoked. This translation is performed dynamically using a Map retrieved from the Trading Partner Management (TPM) API.

Parse and Validate

Ensures that the required data is present in the required format. This validation is performed dynamically using a Schema retrieved from the TPM API.


Deliver

Using endpoint configuration data retrieved from the TPM API, the flow dynamically delivers the message to the Business Service endpoint address.

Technical Architecture

Figure 2, Conceptual View of Transaction Processing Framework, shows the primary components of the framework and how they interact to process B2B transactions. Component definitions follow the diagram.

Figure 2. Conceptual View of Transaction Processing Framework



PortX is the user interface of the B2B Transaction Processing Framework.

Trading Partner Management (TPM) API

The Trading Partner Management (TPM) API manages storage and retrieval of configuration data for partners, including the details for processing their transactions.

Tracking API

The Tracking API manages storage and retrieval of metadata from processed transactions.
For example:

  • Sender

  • Receiver

  • Time stamps

  • Message type

  • Acknowledgement status

This includes, but is not limited to, correlation logic for matching acknowledgements to original messages and for identifying duplicate messages.
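The correlation and duplicate-detection logic can be sketched as follows. The `TrackingStore` class and its methods are illustrative assumptions; the real Tracking API's interface is not shown here.

```python
# Illustrative sketch: correlate acknowledgements to original messages by
# control number, and flag duplicates by (sender, control number).
class TrackingStore:
    def __init__(self):
        self._messages = {}   # control number -> message metadata
        self._seen = set()    # (sender, control number) pairs already recorded

    def record(self, sender: str, control_number: str, metadata: dict) -> bool:
        """Record a message; return False if it is a duplicate."""
        key = (sender, control_number)
        if key in self._seen:
            return False
        self._seen.add(key)
        self._messages[control_number] = metadata
        return True

    def correlate_ack(self, ack_control_number: str):
        """Match an acknowledgement to its original message, if any."""
        return self._messages.get(ack_control_number)
```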

Integration Hub Connector

The Integration Hub Connector runs inside the ESB and coordinates all interaction with the PortX APIs (TPM and Tracking).

Transaction-Processing Stages

PortX provides a routing engine designed to meet the needs of most customer scenarios, while also allowing for extensibility where needed.

At this time, the flows identified below must be developed by the customer. PortX may provide these components as part of the product in the future; however, to preserve full extensibility and customization, customers will retain the option to provide their own implementations.

Receive Stages

Each receive endpoint corresponds to a component that consists of the appropriate protocol connector and the appropriate endpoint configuration. After receiving a message over a particular protocol, each receive flow:

  • Tracks the message using the Integration Hub Connector in order to persist a copy of the message as it was received from the partner.

  • Places a message on the Resolve queue, with headers populated with any relevant metadata from the inbound protocol, such as transport headers and filenames.

Receive flows are activated dynamically by a Receive Endpoint listener flow, which polls the TPM system for the list of endpoints that should be active. For each configured endpoint, this flow creates a receive flow from a template for the required transport protocol, then dynamically instantiates that flow into the ESB and starts it, so that the required connector endpoint is active and listening for messages.
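The poll-and-instantiate pattern can be sketched as a reconciliation step between the desired endpoint list (from TPM) and the currently active listeners. The function name and registry shape below are assumptions for illustration, not the real TPM or Mule APIs.

```python
# Minimal sketch: start listeners for newly configured endpoints and stop
# listeners whose configuration was removed from the registry.
def reconcile_endpoints(desired: dict, active: dict, start, stop) -> dict:
    """Bring the set of active listeners in line with the desired set;
    return the updated active set."""
    for name, cfg in desired.items():
        if name not in active:
            # e.g. bind an AS2 or SFTP listener from a protocol template
            active[name] = start(name, cfg)
    for name in list(active):
        if name not in desired:
            stop(active.pop(name))
    return active
```

Each polling cycle calls this with the latest endpoint list from TPM, so adding a partner endpoint in IHub activates a listener without a redeploy.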

Resolve Stage

  • Gathers, from the message and any transport headers, the metadata fields needed to identify the specific document type.

  • Passes the metadata fields to the TPM service to look up the document type and associated configuration settings (Map, Schema, target Endpoint) and adds this information to the context headers that travel with the message to be used by later stages.

  • Passes the message to the next processing stage.

Transform Stage

  • Dynamically applies the configured mapping script from the context header to translate the message into the canonical format for the target Business Service.

  • Does any necessary data translation, such as resolving partner values to your company’s values using lookup tables, functions, and flows.

  • Uses the Integration Hub Connector to track the mapped, canonical version of the message.

  • Passes the updated message body to the next processing stage.
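The Transform steps above can be sketched as a map applied by name from the context headers, with a lookup table translating partner values into internal values. All names here (`MAPS`, `LOOKUP`, the field names) are illustrative assumptions, not the real mapping scripts.

```python
# Hypothetical lookup table resolving a partner's unit-of-measure codes
# to this company's internal values.
LOOKUP = {"uom": {"EA": "EACH", "CS": "CASE"}}

# Hypothetical registry of mapping scripts, keyed by the map name that the
# Resolve stage placed in the context headers.
MAPS = {
    "x12-850-to-order": lambda msg: {
        "order_id": msg["PO_NUMBER"],
        "unit": LOOKUP["uom"].get(msg["UOM"], msg["UOM"]),
    },
}

def transform(context: dict, message: dict) -> dict:
    """Apply the mapping named in the context headers to produce the
    canonical message for the target Business Service."""
    return MAPS[context["map"]](message)
```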

Validate Stage

  • Dynamically applies the configured schema script to validate that the message is in the required format.

  • Uses the Integration Hub Connector to track the validation result for the message.

  • Passes the message to the next processing stage.
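A minimal sketch of dynamic validation follows. Here the "schema" is a plain required-fields description rather than a real X12/EDIFACT schema, and all names are assumptions for illustration.

```python
# Hypothetical schema registry, keyed by the schema name that the Resolve
# stage placed in the context headers.
SCHEMAS = {
    "canonical-order": {"required": ["order_id", "unit"]},
}

def validate(context: dict, message: dict) -> list:
    """Return a list of validation errors (empty if the message is valid)."""
    schema = SCHEMAS[context["schema"]]
    return [f"missing field: {f}" for f in schema["required"] if f not in message]
```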

Deliver Stage

  • Invokes the target service by passing the transformed message to the configured transport endpoint.

  • Uses the Integration Hub Connector to track the result from the target service.
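The Deliver steps can be sketched as follows, with `send` standing in for a real transport connector and `track` standing in for the Integration Hub Connector; both are assumptions for illustration.

```python
def deliver(context: dict, message: dict, send, track) -> bool:
    """Send the transformed message to the configured endpoint and track
    the outcome; return True on success."""
    try:
        response = send(context["endpoint"], message)
        track({"endpoint": context["endpoint"], "status": response})
        return True
    except Exception as exc:
        # Track the failure so it is visible (and replayable) later.
        track({"endpoint": context["endpoint"], "error": str(exc)})
        return False
```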

Message Payload Persistence Stage

This is an optional flow that can be implemented to store message payloads at various stages. It receives a message from the Integration Hub Connector, persists that message payload to the desired data store, and returns a URL that can be used to retrieve the message later through the Message Payload Retrieval Stage. The URL is saved with the related tracking data in the Tracking API in PortX and displayed to the user in the context of the transaction. Clicking this link invokes the Message Payload Retrieval Stage and displays the message payload in a pop-up window.
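The persist-and-return-a-URL contract can be sketched as below. The in-memory store and the `/payloads/{transactionId}` URL format are assumptions; a real implementation would use a durable data store.

```python
import uuid

# Stand-in for a durable payload store (database, object storage, etc.).
STORE = {}

def persist_payload(payload: bytes) -> str:
    """Store the payload and return a retrieval URL keyed by transactionId."""
    transaction_id = uuid.uuid4().hex
    STORE[transaction_id] = payload
    return f"/payloads/{transaction_id}"

def retrieve_payload(url: str) -> bytes:
    """Look a payload back up from its retrieval URL."""
    return STORE[url.rsplit("/", 1)[-1]]
```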

Message Payload Retrieval API

The Message Payload Retrieval API is used to retrieve a message payload from a URL (which contains the specific transactionId of the message to be retrieved).

Business Service APIs

For each target internal service, there is typically a component that exposes a REST-based API and communicates with the backend system using the appropriate connector or connectors. These Business Service APIs are not technically part of the B2B system, but are often part of the overall solution.

Replay Stage

The replay flow coordinates replaying transactions. It polls the Tracking service for transactions that have been marked for replay. When it finds transactions that need to be replayed, it:

  1. Pulls the original message body and headers from the Tracking API and the Message Payload Retrieval API.

  2. Constructs a new message with the original payload and headers and passes it to the Resolve flow to reprocess the transaction.

  3. Tracks the fact that the transaction has been replayed.

  4. Updates the TPM service to indicate that the replay is complete.
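The four replay steps above can be sketched as a single loop. Tracking, payload retrieval, Resolve, and the TPM update are passed in as callables because their real interfaces are not shown in this document; all names are illustrative.

```python
def replay_pending(pending: list, get_payload, resolve, track, mark_done):
    """Rebuild each flagged transaction and resubmit it to Resolve."""
    for txn in pending:
        payload = get_payload(txn["payload_url"])              # 1. original body
        resolve({"headers": txn["headers"], "body": payload})  # 2. reprocess via Resolve
        track(txn["id"], "replayed")                           # 3. record the replay
        mark_done(txn["id"])                                   # 4. tell TPM it is complete
```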