Appian Recommendations

Overview

This page covers Appian recommendations in the Appian Designer, including what they are, where they may appear, and how to address them.

Appian recommendations are best practice design patterns that should be implemented in your objects. Applying these design recommendations helps improve the performance of your applications and objects, and reduces the likelihood of runtime problems and maintainability issues. During development, Appian will alert you in real time if it detects a risky design pattern that goes against one of these recommendations.

Note that Appian recommendations are suggestions based on Appian's best practices. Depending on your designs and use cases, some recommendations may not apply. See Active vs. dismissed status below for more information about dismissing recommendations in these cases.

Appearance in objects and monitoring view

Recommendations can appear in two general areas in Appian Designer: within objects, including custom data types and process models, and on the Health Dashboard located under the Monitoring View.

Developers must have at least Viewer permissions to an object to see its recommendations.

See custom data types to learn more about the appearance of recommendations in data types. For the full list, see Data type recommendations below.

See process model validations to learn more about the appearance of recommendations in the process modeler. For the full list, see Process model recommendations below.

Health dashboard

All recommendations for an environment or an application are visible in a single, centralized place on the Health Dashboard located under the Monitoring View. See the object recommendations grid of the Health Dashboard to learn more.

Process models must be published in order for their recommendations to appear on the Health Dashboard.

Active vs. dismissed status

Recommendations can either be active or dismissed.

When a recommendation is first triggered, it is immediately active, meaning that it is visible both within the object and on the Health Dashboard. A recommendation remains active and visible until it is either addressed by making the relevant change to the object's design, or dismissed by a developer from within the object. (See data types and process models to learn more about dismissing recommendations from within these object types.)

Once dismissed, recommendations are no longer visible within objects. You can see dismissed recommendations for objects at any time on the Health Dashboard by using the Include dismissed recommendations filter.

Dismissed recommendations remain dismissed indefinitely unless one of the following occurs:

  1. The recommendation is addressed with a change to the object's design, which removes the recommendation.
  2. A new instance of the recommendation is triggered within the object, which reactivates the recommendation.
    • For example, if a data type is linked to a data store and contains one array of a primitive type, a recommendation appears. You decide to dismiss this recommendation because of your use case. The recommendation remains dismissed until you add a second array of a primitive type to the data type, at which point the recommendation reappears.

Developers must have at least Editor permissions to an object to be able to dismiss its recommendations.

Export and import

Recommendation dismissals are promoted across environments as part of manual or direct deployments. When an object's recommendation is dismissed on the source environment, it will remain dismissed on the target environment after that object is deployed. If an error is encountered when importing or exporting recommendation dismissals, the related object will not be impacted.

All recommendations by object type

Data type recommendations

The following table lists the different custom data type recommendations that may be shown within data types or on the Health Dashboard in the Object Recommendations grid.

The recommendations listed in the table below do not apply to data types created from a web service or plug-in.

Missing primary key
  Description within object: Missing primary key. Add a primary key to ensure that each row in the database is uniquely identifiable and to avoid errors when querying and writing data.
  Additional information: This recommendation only applies to custom data types that are mapped to a data store AND that are missing a primary key. It is a best practice to define primary keys for your data types to ensure that data is properly written to and queried from the database.

Too many fields
  Description within object: More than 100 fields detected. Consider moving additional fields into a new, related data type. Appian recommends that data types have fewer than 100 fields for better maintenance and performance.
  Additional information: It is a best practice to keep data types small so that they are easier to maintain over time, perform better when queried, and consume less memory. To learn more about custom data type relationships and how to implement them when breaking a larger data type into smaller related ones, see CDT Design Guidance.

Multiple levels of nesting
  Description within object: Multiple levels of nested data types detected. Consider creating a flat data type relationship instead to avoid nesting data types more than one level deep. Multiple levels of nested data types can complicate data access at child levels and reduce query performance.
  Additional information: This recommendation is triggered when a custom data type contains fields of data types with more than one level of nesting (for example, the Company data type has a field of type Employee, and Employee has a field of type Address). Highly nested data types create complex many-to-many relationships that are hard to maintain and reduce query performance. Instead of nesting your data types, use flat data type relationships. For example, add a companyId field to the Employee data type instead of having a field of type Employee on Company. This allows you to query employees by company directly.

Primitive type array
  Description within object: Array of primitive type detected. Consider replacing this field with a new custom data type. Data in primitive array fields cannot be updated or interacted with directly, which complicates data management.
  Additional information: This recommendation only applies to custom data types that are mapped to a data store AND that have one or more fields of primitive type arrays. It is a best practice to avoid arrays of primitive types because Appian treats the values in these arrays as one-to-many relationships and the values cannot be updated (only new values can be inserted). This increases the likelihood of duplicate data and can cause queries to return unexpected results. Instead of using a primitive type array in your custom data type (Type A), create a new custom data type (Type B) to store that information. Then, add an array field of Type B to Type A; this creates a flat one-to-many relationship.

Outdated data type reference
  Description within object: N/A
  Additional information: This recommendation alerts you when a custom data type has one or more fields that reference an outdated data type (denoted with a ^ symbol). When you open this data type, validations appear on the affected fields, and you must address them before you can save a new version of the data type.
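Several of the data type recommendations above come together in the XSD that backs a custom data type. The following is a minimal sketch, assuming Appian's documented appian.jpa annotation convention for CDT primary keys; the namespace, type names, and fields are illustrative, not taken from this page:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="urn:com:example:types"
            xmlns:tns="urn:com:example:types">
  <!-- Hypothetical Employee CDT. It references its parent Company
       through a flat companyId field rather than Company holding a
       nested Employee field, which keeps nesting to one level. -->
  <xsd:complexType name="Employee">
    <xsd:sequence>
      <xsd:element name="id" type="xsd:int" minOccurs="0">
        <xsd:annotation>
          <!-- JPA-style annotations mark the primary key, addressing
               the "Missing primary key" recommendation. -->
          <xsd:appinfo source="appian.jpa">@Id @GeneratedValue</xsd:appinfo>
        </xsd:annotation>
      </xsd:element>
      <xsd:element name="companyId" type="xsd:int" minOccurs="0"/>
      <xsd:element name="name" type="xsd:string" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>
```

Because Employee points back to Company with a scalar companyId field, the relationship stays flat and avoids both the multiple levels of nesting and primitive type array patterns described above.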

Process model recommendations

The following table lists the different process model recommendations that may be shown within the process modeler or on the Health Dashboard in the Object Recommendations grid. Recommendations that are specific to process nodes will list the affected nodes at the end of their description.

Too many nodes
  Description within object: X process nodes detected. Consider splitting this process into smaller subprocesses. Having more than 50 nodes can complicate maintainability and lead to higher memory consumption.
  Additional information: This recommendation is triggered when a process model has more than 50 nodes. It is a best practice to keep process models small so that they are easier to understand and maintain, and occupy less memory. Reducing the number of nodes in a process model also helps to reduce its completion time. To reduce the number of nodes in your process model, combine nodes where possible or split the model into smaller subprocesses.

Too many process variables
  Description within object: X process variables detected. Consider whether process variables should be activity class parameters, or split the process into smaller subprocesses. Having more than 100 process variables can complicate maintainability and lead to higher memory consumption.
  Additional information: This recommendation is triggered when a process model has more than 100 process variables. It is a best practice to minimize the number of process variables used in a process model for memory and maintainability reasons. To reduce the number of process variables in a process, convert your process variables into activity class parameters where applicable, or break your process into smaller subprocesses.

Gateway nodes with multiple incoming flows
  Description within object: Gateway nodes in a loop have more than one incoming flow. Place a script task in front of each of these gateway nodes to merge incoming flows.
  Additional information: This recommendation applies to AND, OR, XOR, and Complex gateway nodes that are used in a loop. Gateway nodes with multiple incoming flows allow the first flow through, but wait for all incoming flows to arrive before executing any subsequent flows, which can cause processes to wait indefinitely. For this reason, it is a best practice to merge incoming flows using a script task in front of gateway nodes that are used in a loop.

Multiple node instances (MNI) with activity chaining
  Description within object: Unattended nodes configured to run multiple instances (MNI) have incoming activity chaining. Consider redesigning your process to make this MNI activity a bulk or asynchronous operation. Chaining through these nodes can cause a poor user experience or performance issues.
  Additional information: This recommendation is triggered for unattended nodes that are configured to run multiple instances AND that have incoming activity chaining. Activity chaining through multiple node instances makes it more likely to exceed the 50-node chaining limit. It can also cause performance issues that impact the user experience, such as users seeing long wait times between chained forms. If the activity can be a bulk operation, redesign your process to achieve the same result in fewer steps. For example, pass an array of values into a Write to Data Store Entity node, or use looping functions to perform the operations in a single script task.

Data types passed by reference
  Description within object: Subprocess nodes pass custom data types by reference. Consider not passing the data type by reference, and instead updating the value with an output variable. Passing data types by reference can cause issues with long-lived processes.
  Additional information: This recommendation is triggered when a custom data type process variable is passed by reference into a subprocess. When a new version of a custom data type is created, active processes are not updated; they continue to use the version of the data type that existed when the process started. However, subprocesses always start using the latest version of a data type. Therefore, a parent process and its subprocess model could reference different versions of a data type, depending on when the data type is updated. If this occurs AND the data type is passed by reference, the parent process breaks when it reaches the subprocess node. Instead of passing custom data type variables by reference, pass the data into and out of the subprocess using input and output variables.
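As a sketch of the bulk-operation alternative to MNI with activity chaining: instead of chaining through one node instance per item, a single script task can build the full array of updated records with a!forEach, and the result can be passed to one Write to Data Store Entity node. The Employee type, its fields, and the pv!employees variable are illustrative assumptions, not part of this page:

```
/* Build updated records for every item in one expression,
   rather than one node instance per item. */
a!forEach(
  items: pv!employees,
  expression: type!Employee(
    id: fv!item.id,
    name: fv!item.name,
    /* illustrative field update applied to every record at once */
    status: "APPROVED"
  )
)
```

The resulting array can then be written in a single operation, which keeps chained flows short and avoids the per-instance overhead described above.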
