Performance Tab

Overview

The Performance tab provides detailed performance information about the expressions in an object. From this tab, you can view live performance results of the expression or historical trends of the performance over time.

This page explains how to access the Performance tab in different objects and how to understand the different performance information.

Tip:  To learn more about the performance of the queries to your record types, see the Query Performance tab of the Monitor view.

Access the Performance tab

You can access live performance details and historical performance trends in the following objects:

| Object | To access live performance details… | To access historical performance trends… |
| --- | --- | --- |
| Interface | Click Performance in the live view. | Click > Performance Trends. |
| Expression rule | Click View Performance in the Ad Hoc Test pane. | Click > Performance Trends. |
| Record type | Go to the Performance page of the record type. | N/A |

You can also view performance trends for all interfaces or expression rules in the system from the Rule Performance page of the Admin Console.

Live performance details

When you open the Performance tab, your expression is reevaluated and live performance details are displayed.

For interfaces, the live performance details are displayed in the following sub-tabs:

  • Evaluation Metrics
  • Save Metrics
  • Query Metrics

For record types and expression rules, the live performance details are displayed in the following sections:

  • Parameters and direct children
  • Descendant functions and queries
  • Descendant rules

Initially, measurements for the top-level rule or function in the expression are displayed.

Click links or charts to drill down to a particular function, rule, or parameter. As you drill down, breadcrumbs appear at the top and can be clicked to navigate back to a higher level.

The blue bar beneath the breadcrumbs shows how much the current part of the expression contributes to the overall evaluation time.

Evaluation metrics

When you open the Performance tab in an interface, the Evaluation Metrics sub-tab appears by default. This tab displays the most recent evaluation of your expression.

It also provides details on total evaluation time, as well as how each part of your expression contributes to that time. This allows you to identify and address performance bottlenecks and to understand the impact of particular rules and functions.

This sub-tab contains the following sections:

  • Parameters and direct children
  • Descendant functions and queries
  • Descendant rules

Parameters and direct children

This section displays information about the current function, rule, or parameter.

The grid displays the following columns:

| Column | Description |
| --- | --- |
| Name | The name of the function, rule, or parameter. |
| Type | The role of the row in the evaluation. See the list of values below. |
| Time (ms) | The time (in milliseconds) spent evaluating the function, rule, or parameter. |
| Percent | The percentage of the total time spent evaluating the function, rule, or parameter. |

The Type column can have the following values:

  • Evaluation: The current function, rule, or parameter being evaluated.
  • Save: The current function, rule, or parameter being saved. This value only appears in the Save Metrics sub-tab in an interface.
  • Parameter: A parameter of the current function or rule.
  • Child: A rule or function that is part of the evaluation.

The pie chart visually displays the data described in the Percent column. You can click a linked name in the grid or a section of the pie chart to drill into that function, rule, or parameter.

Tip:  For interfaces and record types, the measured time is often shorter than the time you observe while waiting for the object to load. This is because what is measured is specifically the time spent evaluating your expression and does not include application server overhead, network transmission, or client rendering.

Descendant functions and queries

This section displays each function and query that contributed to the overall evaluation time of the current function, rule, or parameter.

By default, the grid displays all the functions and queries invoked while evaluating the entire expression. When you drill down, the grid is filtered to show only those functions and queries that contributed to the evaluation time of the current function, rule, or parameter.

The grid displays the following columns:

| Column | Description |
| --- | --- |
| Function | The name of the function or query. |
| Count | The number of times the function or query was invoked while evaluating the current function, rule, or parameter. |
| Total Time (ms) | The cumulative time (in milliseconds) spent evaluating all invocations of this function or query. |
| Percent | The percentage of the total time spent evaluating all invocations of this function or query. |
| Minimum Time (ms) | The shortest evaluation time (in milliseconds) of the function or query. |
| Maximum Time (ms) | The longest evaluation time (in milliseconds) of the function or query. |

Descendant rules

This section displays each interface or expression rule that contributed to the overall evaluation time of the current function, rule, or parameter.

By default, the grid displays all interfaces or expression rules invoked while evaluating the entire expression. When you drill down, the grid is filtered to show only those interfaces and expression rules that contributed to the evaluation time of the current function, rule, or parameter.

The grid displays the following columns:

| Column | Description |
| --- | --- |
| Rule | The name of the interface or expression rule. |
| Count | The number of times the rule was invoked during the evaluation. |
| Total Time (ms) | The cumulative time (in milliseconds) spent evaluating all invocations of this rule. |
| Percent | The percentage of the total time spent evaluating all invocations of this rule. |
| Minimum Time (ms) | The shortest evaluation time (in milliseconds) of the rule. |
| Maximum Time (ms) | The longest evaluation time (in milliseconds) of the rule. |

Note:  While function and query evaluation times are independent, rule evaluation times often overlap. If rule!a calls rule!b, then rule!a's measured time will include the time spent evaluating rule!b. This means that adding all the percentages from the Descendant Rules grid often results in a total greater than 100%.
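For example, consider this minimal sketch, where rule!formatCase, rule!formatDate, and all timings are hypothetical:

```
/* Hypothetical: rule!formatCase takes 40 ms in total, 30 ms of which
   is spent evaluating the nested call to rule!formatDate. */
=rule!formatCase(
  /* rule!formatDate is reported at 30 ms (75% of the 40 ms evaluation),
     but that time is already included in rule!formatCase's 40 ms (100%),
     so the Descendant Rules percentages sum to 175%. */
  dateLabel: rule!formatDate(value: today())
)
```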

Parallel evaluation

If your interface contains multiple queries, parts of the expression may evaluate in parallel. This means that the total time the interface took to evaluate may be less than the sum of the evaluation times of the individual functions it calls. Because of this, percentages may add up to more than 100%.
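As a sketch of how this can happen, the two queries below have no dependency on each other, so they may evaluate in parallel. The record types and timings here are hypothetical:

```
=a!localVariables(
  /* Because these queries are independent, Appian may run them in
     parallel. If each takes ~200 ms, the interface can finish in
     ~200 ms rather than ~400 ms, so each query's percentage of the
     total evaluation time can approach 100%. */
  local!cases: a!queryRecordType(
    recordType: recordType!Case,   /* hypothetical record type */
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 10)
  ),
  local!tasks: a!queryRecordType(
    recordType: recordType!Task,   /* hypothetical record type */
    pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 10)
  ),
  /* ...components that display local!cases and local!tasks... */
  {}
)
```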

When analyzing the performance of an interface that evaluates in parallel, the Performance tab still helps you identify bottlenecks by highlighting functions or rules that are outliers. As you make changes to the expression, however, you may not see a one-to-one improvement between the part you just improved and the overall evaluation time.

See Parallel Evaluation of Expressions for an example.

Save metrics

When the server receives a change to a component value, it must first evaluate the expression to locate and execute the component's save configuration (the saveInto parameter). Save metrics show the performance of the component's save configuration. Then, the expression is evaluated again to display the new interface context; these metrics are shown on the Evaluation Metrics sub-tab.

Only complex saves appear on the Save Metrics sub-tab: saves that execute expressions (including rules) within the value parameter of an a!save() function. For example, the following expression would appear in the sub-tab:

```
=a!localVariables(
  local!text,
  a!textField(
    value: local!text,
    saveInto: a!save(local!text, upper(save!value))
  )
)
```
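Here, upper(save!value) is an expression evaluated during the save (save!value holds the value the user just entered into the component), which is what makes this a complex save that is measured in the sub-tab.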

Simple variable assignments, such as setting the saveInto parameter directly to a variable, do not appear in the Save Metrics sub-tab because their evaluation cost is negligible. For example, the following expression would not appear in the sub-tab:

```
=a!localVariables(
  local!text,
  local!number,
  a!textField(
    value: local!text,
    saveInto: {
      local!text,
      a!save(local!number, 1)
    }
  )
)
```

Query metrics

The Query Metrics sub-tab is available on any interface. It displays a grid of record type queries run during the interface's evaluation.

Tip:  To see detailed information about the record type queries run throughout your applications, go to the Query Performance tab in the Monitor view.

The grid displays the following columns:

| Column | Description |
| --- | --- |
| Query UID | A unique identifier for the query. Click the link to see an expression that illustrates the record types, fields, filters, and functions used in the query. |
| Execution Time | The time (in milliseconds) the query took to wait for resources and run. |
| Wait Time | The time (in milliseconds) the query waited for resources. |
| Expression Rule | The design object used to run the query, usually an expression rule. This column may be blank if the query was run directly from a record view or record list. Click the link to open the identified design object. |
| Record Type | The record type used in the query. Click the link to open the identified record type object. |
| Component | The component used to run the query. In some instances, this column may contain a function, record view, or record action, depending on how the interface or expression rule is configured. |
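For instance, a query like the following would produce a row in this grid when the interface evaluates. The record type, field, and containing expression rule (say, rule!getOpenCases) are hypothetical:

```
/* If an interface calls a hypothetical rule!getOpenCases containing
   this query, the Query Metrics grid shows one row: the rule in the
   Expression Rule column and the record type in Record Type. */
=a!queryRecordType(
  recordType: recordType!Case,              /* hypothetical record type */
  filters: a!queryFilter(
    field: recordType!Case.fields.status,   /* hypothetical field */
    operator: "=",
    value: "Open"
  ),
  pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 25)
)
```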

Historical performance trends

The historical performance trends sub-view of the Performance tab shows how the interface or expression rule has performed in the past.

A moving window of thirty days of performance metrics is gathered and stored as end users interact with the interface or expression rule. The data in this view can help you understand how your expression performs under real usage by showing overall trends over time.

The historical performance trends are always for the top-level expression. They do not show the historical performance of a particular function, rule, or parameter within the expression.

Like the live performance details, this historical view captures only the time spent evaluating the expression. It does not capture network transmission or client rendering time, so the values shown in this view are always slightly less than the load time experienced by end users of your interface.

Recorded evaluation times do not include evaluations performed while testing the expression in the interface or expression rule designers. The exception is when the interface or expression rule is embedded within a different expression and that expression is evaluated or tested in those designers.

Four aggregation levels are offered for analyzing the historical performance of the interface or expression rule.

Minute

The per-minute aggregation level of the historical performance trends shows the performance of the interface or expression rule on a minute-by-minute basis. It is the highest-granularity view of the performance data and is most suitable for analyzing performance changes during iterative interface and rule design. In this view, you can further filter the results by start date, start time, end date, and end time.

The times are displayed in the time zone of the user viewing the grid. Two graphs and a grid are shown when this aggregation is selected:

  • Evaluation Time by Minute (Graph): The minimum, average, and maximum time recorded across all evaluations each minute, in milliseconds.
  • Executions by Minute (Graph): The number of times the expression was executed each minute.
  • Executions by Minute (Grid): The count, minimum, average, and maximum evaluation time recorded in each minute, in milliseconds.

Hour

The hourly aggregation analyzes the performance on an hourly basis, providing insight into how the performance changes hour by hour. It is the default aggregation level of the historical performance trends. In this view, you can further filter the results by start date, start time, end date, and end time.

When viewing the hourly aggregation, the hours are displayed in the time zone of the user viewing the grid. The averages are calculated as the unweighted average of the per-minute averages within the hour; a worked example follows the list below. Two graphs and a grid are shown when this aggregation is selected:

  • Evaluation Time by Hour (Graph): The minimum, average, and maximum time recorded across all evaluations each hour, in milliseconds.
  • Executions by Hour (Graph): The number of times the expression was executed each hour.
  • Executions by Hour (Grid): The count, minimum, average, and maximum evaluation time recorded in each hour, in milliseconds.
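As a worked example of the unweighted average, with hypothetical values: suppose an hour contains three active minutes whose per-minute average evaluation times are 100 ms, 200 ms, and 300 ms.

```
/* Each minute counts equally, no matter how many executions it had:
   a minute with 1 execution averaging 100 ms weighs the same as a
   minute with 100 executions averaging 300 ms. */
=avg(100, 200, 300)  /* returns 200, the hourly average in ms */
```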

Day

The daily aggregation analyzes the performance on a daily basis. Day boundaries are determined by the viewing user's time zone (midnight to midnight in that time zone). The averages are calculated as the unweighted average of the per-minute averages within the day. Like the hourly aggregation, two graphs and a grid are shown when this aggregation is selected:

  • Evaluation Time by Day (Graph): The minimum, average, and maximum time recorded across all evaluations each day, in milliseconds.
  • Executions by Day (Graph): The number of times the expression was executed each day.
  • Executions by Day (Grid): The count, minimum, average, and maximum evaluation time recorded in each day, in milliseconds.

Week

The weekly aggregation analyzes the performance on a week-to-week basis. Week boundaries are determined by the viewing user's time zone, with weeks starting on Sunday (Sunday through Saturday). The averages are calculated as the unweighted average of the per-minute averages within the week. Like the hourly and daily aggregations, two graphs and a grid are shown when this aggregation is selected:

  • Evaluation Time by Week (Graph): The minimum, average, and maximum time recorded across all evaluations each week, in milliseconds.
  • Executions by Week (Graph): The number of times the expression was executed each week.
  • Executions by Week (Grid): The count, minimum, average, and maximum evaluation time recorded in each week, in milliseconds.

Next steps: review interface performance best practices

Review Interface performance best practices to learn the most efficient ways to build fast interfaces.
