This page applies to developers. It provides examples of how to prepare data from different data sources so it's ready for process analysis.
Appian's data fabric gives you the power to unify data from different systems so you can connect to process data, regardless of where the data lives, and prepare it for process insights.
Each of the following sections provides an example of a different way to connect to process data and then prepare it for process insights.
- If you're connecting to a database, check out Example: Record events, Example: One database table, Example: Database with CDTs, or Example: Multiple databases.
- If you're connecting to a web service, check out Example: Web service. For Salesforce web services specifically, check out Example: Salesforce opportunities.
- If you've captured data as comma-separated values (CSV), check out Example: Comma-separated values.
## Example: Record events

Configuring record events is the easiest way to capture new case and event data in a database. In this example, the Elliot Corporation wants to capture events for a financial client onboarding application.
As part of application development, the developer has already created synced record types for the core data in this app: Accounts, Customers, and Regions. The developer has also added relationships between these record types.
For the purposes of gathering case and event data, the Accounts record type is the case record type and has a one-to-one relationship with Customers. Customers has a many-to-one relationship with Regions. Through these relationships, a data steward can include information from the Customers and Regions record types in a process, providing a richer context of case data.
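Once these relationships exist, fields from the related record types can be referenced directly. As a rough sketch (the record type, relationship, and field names here are assumptions, not taken from the app), an Account query could pull in related Customer and Region fields like this:

```
/* Sketch: query Account case data along with related Customer and Region
   fields through record type relationships. All names are assumed. */
a!queryRecordType(
  recordType: recordType!Accounts,
  fields: {
    recordType!Accounts.fields.id,
    /* One-to-one: each account has one customer */
    recordType!Accounts.relationships.customer.fields.fullName,
    /* Many-to-one: each customer belongs to one region */
    recordType!Accounts.relationships.customer.relationships.region.fields.name
  },
  pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 50)
).data
```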
However, the application is currently missing an event history record type to capture and store events about accounts.
To capture event data in this application:

1. The developer creates an event history record type, Account Event History, to store these events.
2. The developer sets up a process model, where they configure a Write Records node to write case and event data at the same time. For example, the node writes an event whenever a key onboarding action occurs.

As the process model runs, the node writes case data to the Account record type and event data to the Account Event History record type, as sketched below.
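As a rough illustration of what the node writes, an event row could be constructed with a record type constructor like the following sketch (the Account Event History field names and the `pv!account` process variable are assumptions, not part of the original app):

```
/* Sketch: one event row for the Write Records node. Field names and the
   pv!account process variable are assumed. */
recordType!Account Event History(
  /* Ties the event back to its case record */
  recordType!Account Event History.fields.accountId:
    pv!account[recordType!Account.fields.id],
  /* The activity name and start time that process insights will analyze */
  recordType!Account Event History.fields.activity: "Account Created",
  recordType!Account Event History.fields.startTime: now()
)
```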
## Example: One database table

In this example, the Appian Retail company currently stores its event data in a single database table, Retail Events. This table includes data from multiple processes, like order fulfillment, customer management, and employee management, all managed within the Appian Retail application.
For example, one row in the table might record an order fulfillment event, while the next records a customer management or employee management event.
Now, though, Appian Retail wants to analyze just their process for fulfilling customer orders. Their event data includes the required fields, so with some data preparation, they can use the data for process analysis. To focus on order events only, they can use part of the data in the Retail Events table as the source of a new Order Event History record type.
In this example, the developer has already created a synced record type for Orders. For the purposes of process analysis, the Orders record type is the case record type.
To connect to the existing Retail Events table and prepare it for process analysis:

1. The developer creates a new event history record type, Order Event History, using the Retail Events table as the source.
2. While configuring the source, the developer configures sync filters to only sync events that are relevant to an order. In this example, the table includes a column (`processId`) that specifies which business process an event relates to. The sync filter limits events in the Order Event History record type to events with a `processId` equal to `3` (which represents the order fulfillment process), as sketched below.
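Expressed as a query filter, this sync filter is roughly equivalent to the following sketch:

```
/* Sketch: keep only order fulfillment events (processId = 3) in the sync */
a!queryFilter(
  field: "processId",
  operator: "=",
  value: 3
)
```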
## Example: Database with CDTs

In this example, the Acme Technical Services company uses a support case app to manage their support case data and the related events. Because their case and event data exceeds the row limit for synced record types, they use data store entities (DSEs) and custom data types (CDTs) to connect to the source database.

However, to analyze data using process insights, the data must be available in synced record types. Instead of refactoring the entire support case app, they can simply create synced record types to connect to the data they need, while leaving their current data structure intact.

To connect to the existing database and prepare the case and event data, the developer creates a synced case record type and a synced event history record type that use the existing support case tables as their source.
## Example: Multiple databases

In this example, the Bennett Manufacturing company stores its data in multiple databases. Data for its employee management app is stored in an Oracle database, while data for its vendor management app is stored in a Snowflake database.

By leveraging Appian's data fabric, they can connect to process data from any business process involving these applications and prepare it for process analysis, regardless of which database the data lives in.
Tip: Looking for help connecting to Salesforce specifically? See the Salesforce opportunities example.
## Example: Web service

In this example, the Fitzwilliam Software Solutions company uses Jira to track its internal software development lifecycle. Now they'd like to optimize their workflows using the data collected in Jira.

To prepare case and event data from this web service, the developer connects to Jira as a data source and syncs the relevant data into case and event history record types.
## Example: Salesforce opportunities

Salesforce opportunities allow users to track and manage potential sales deals. In this example, the Global Corporation has enabled field history tracking in their Salesforce instance, so they can easily capture case and event data for those deals.
The Opportunity table stores the case data they need for process analysis.
The OpportunityFieldHistory table stores the history of changes to the values in the fields of each opportunity. Specifically, the `field` column tracks which fields in the Opportunity table were updated. To extract an event history from this larger history, we only want to sync data where the `field` value is `StageName` or `created`.
To prepare case and event data from Salesforce opportunities, the developer configures the following sync filter to only sync data from the OpportunityFieldHistory table where the `field` value is `StageName` or `created`:
| Property | Value |
|---|---|
| Field | `field` |
| Condition | in |
| Value | `{"StageName", "created"}` |
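For reference, the same filter written as a query filter expression would look roughly like this sketch:

```
/* Sketch: sync only rows where the field column is StageName or created */
a!queryFilter(
  field: "field",
  operator: "in",
  value: {"StageName", "created"}
)
```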
On the Opportunity Event History record type, the developer creates a sync-time custom record field to extract the name of the opportunity stage that changed. They call the field `ActivityName` and use the following expression:

```
if(
  rv!record[recordType!Opportunity Event History.fields.field] = "created",
  "Created",
  rv!record[recordType!Opportunity Event History.fields.newValue]
)
```

This expression:

- Sets `ActivityName` to `Created` whenever the `created` field is changed.
- Otherwise, sets `ActivityName` to the value in the `newValue` field.

The result is a single custom record field with values like `Created`, `Prospecting`, `Qualification`, `Needs Analysis`, etc.
The developer creates a relationship between the Opportunity and Opportunity Event History record types, selecting the `Id` field from the Opportunity record type and the `OpportunityId` field from the Opportunity Event History record type as the common fields for the relationship.

For event data, the developer maps the following required fields:

| Field | Map To |
|---|---|
| OpportunityId | Case ID |
| ActivityName | Activity |
| CreatedDate | Start |
## Example: Comma-separated values

In this example, the Peak Properties company stores data about the corporate and retail properties it manages in a legacy system that does not support API connections.

To work with that data, they export it from the legacy system to multiple CSV files. The `properties.csv` file stores data about the properties (the case data), while the `propertyEvents.csv` file stores data about management actions completed for those properties (the event history data).
Note: To keep the recipe simple, we will not deploy these changes to a different environment; instead, we'll simply import the data into a development environment and access Process HQ from that environment.
To prepare case and event data from these CSV files:
The developer reviews the `properties.csv` and `propertyEvents.csv` files. If a file does not have a unique ID column, the developer adds a blank column to the file. This blank column will be used as the ID field in the record type.

The developer creates a case record type (Property) and an event history record type (Property Event History).
The developer creates two document objects, one for the `properties.csv` file and one for the `propertyEvents.csv` file. During this step, they automatically generate a constant for each document.
The developer creates a process model to import the `properties.csv` values into the database, using a smart service that imports CSV data into a database table.

On the Data tab for the smart service, they set the inputs as follows:
| Input | Value Set To |
|---|---|
| CSV Document | The name of the generated constant. |
| Database Source Name | The name of the database. In this example, Peak Properties is using the Appian Cloud database, so this value is `"jdbc/Appian"`. |
| Delimiter | A comma (`","`). |
| File Has Header | `"true"` |
| Table Name | The name of the database table generated from the Property record type. |
On the Data tab, they set the outputs as follows to allow for debugging:
| Output | Target Set To |
|---|---|
| Error Message | Save to a new process variable. |
| Success | Save to a new process variable. |
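With these outputs saved, a subsequent XOR gateway can branch on the result. A minimal sketch, assuming the Success output was saved to a Boolean process variable named `success`:

```
/* Sketch: gateway condition that takes the error path when the import fails */
not(pv!success)
```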
The developer creates a second process model to import the `propertyEvents.csv` values into the database, using the same steps as they did for the `properties.csv` file.

This example involves a single environment. If you're deploying to another environment, you would need to perform additional steps, such as deploying these objects to the target environment and importing the CSV data there.