We recommend testing your reports in a staging environment to help you understand the impact of your report design and the approximate execution speed with different volumes of data.
From the report, move the pointer over the top-right corner of the toolbar.
If you open the Performance Details for a report that has not been calculated, an empty table is displayed.
Unsorted report rows may be returned in a different order when the same report is run repeatedly, due to distributed calculations.
|The name of the operation or function applied to the report data.
|The category of operation performed.
|The application component that performed the operation.
Process-analytics#: The operation was performed by the analytics engine listed (numbered 0-15). One analytics engine exists for each process execution engine configured in the system. The standard configuration uses three numbered analytics engines, starting with zero.
Master: The operation was performed by object code running on the application server.
|The elapsed time to perform each operation. The report is sorted by this column.
|The number of report rows changed by an operation.
|The sequence in which the report operations are performed.
All Appian reports are run in real-time and are not cached. The execution time for each report is a critical factor in user experience and site operation.
Best Practice: While the occasional slow report isn't a concern, reports that are run frequently should be tuned to maintain system performance.
The practices below allow you to minimize the calculation times for your reports.
The data set that must be rendered by the reporting engines becomes larger as users drill deeper into the report results, so rendering the last page of a report can be much slower than rendering the first page. It is better (for usability as well as performance) to provide users with several reports that apply different filters to the same data set than to create a single monolithic report that users must page through.
Filter operations work best with date and numeric data types. Text operations are generally slower, and operations that require case-insensitivity are the slowest.
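As an illustrative sketch (the field names are hypothetical), a filter on a numeric or date field is cheaper to evaluate than a text match:

```
/* Faster: numeric and date comparisons */
pv!priority = 1
pv!createdOn >= date(2024, 1, 1)

/* Slowest: text comparison, which is case-insensitive by default */
pv!status = "closed"
```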
Avoid using drilldown paths that evaluate expressions. For example, if you have aggregated columns that drill down to other reports with explicitly defined filter parameters (as expressions), the drill paths are evaluated for all items, not just those shown on the current page. Strictly speaking, each drill path is evaluated (page size × number of engines) times.
Sorting is the final operation applied to the rows that remain after grouping.
In a report with grouping and aggregation, sorting by a grouped column has a minimal performance impact, whereas sorting by a non-grouped (aggregated) column has a significant negative impact on performance. Avoid sorting aggregated columns of data in such reports.
The unsorted columns in a report are only evaluated when displayed on the report page. For example, 10,000 rows might be evaluated for the sort column, while the other columns are only evaluated for the first 25 rows. As such, the expressions used in columns not involved in filtering, grouping, or sorting are significantly less expensive to calculate, and can include complex calculations without adversely impacting performance.
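For example (the expression and variable names are illustrative), a display-only column such as the following is evaluated only for the rows on the current page, so its complexity has little effect on overall report time:

```
/* Display-only column: not used for filtering, grouping, or sorting */
concat(pv!lastName, ", ", pv!firstName)
```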
Aggregation (and grouping) applies to all report rows that remain after filtering.
The columns used when performing aggregation functions should be simple report variables, whenever possible.
Keep in mind that any functions applied to your report variables must be evaluated within each aggregated row, in addition to the visible rows.
Text data incurs a performance penalty when aggregated, when compared against strictly numerical aggregations. Additional processing is required to handle text-to-number and number-to-text conversion. Report display-formatting does not impact performance.
Avoid using expressions for aggregation (grouping) or sorting in long-running reports, because such expressions must be applied to the entire data set returned by the report.
Whenever possible, only use expressions with variables after running all filters, aggregations, and sorting operations, and only on the results that are actually displayed to the user.
When creating a Task Report or a report that spans process models, ensure that the process variable data you want to display in the report is stored in a process variable that has the same name in all process models.
For example, if you want to display an ItemID in a Task Report, make sure the process variable is named "ItemID" in all processes that generate tasks that are included in the report.
Reporting down through subprocesses is more difficult than reporting on a single model. To include data from a subprocess in your reports, incorporate process variables by reference.
A numeric comparison is calculated very quickly, whereas comparing text with the equality operator must apply relatively expensive internationalization functions. Status columns should use numbers (integers) instead of text.
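A sketch of this guideline, using hypothetical status codes (1 = Open, 2 = Closed) and variable names:

```
/* Faster: integer status comparison */
pv!statusCode = 2

/* Slower: text comparison applies internationalization handling */
pv!status = "Closed"
```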
If you do not require case-insensitivity in your text (for example, you do not need "hello" = "HELLO" to return true; with the = operator, both "HELLO" = "HELLO" and "hello" = "HELLO" return true), then use the exact() function rather than the = operator to improve performance. For example, to add a data column that results in true or false, use an expression such as exact(pv!status, "Closed") instead of pv!status = "Closed" (the variable name here is illustrative).
In limited circumstances, rather than calculating a value each time the report is run, you might want to use an expression in your Process Model to run the calculation during process execution. The value can then be quickly read by your report without additional processing.
This would most likely be used if a report uses an expensive calculation for filtering or sorting many rows. The downside is the additional data storage and configuration involved.
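For example (the variable and expression are illustrative), a script task in the process model could save the expensive result once, so the report reads a stored value instead of recalculating it for every row:

```
/* Script task output expression, saved into a process variable such as pv!daysOpen: */
tointeger(now() - pv!createdOn)

/* The report then filters or sorts on pv!daysOpen directly. */
```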
Quick Tasks should be used for forms that collect user input. They should not be used to build read-only interfaces.
Avoid placing a slow report on a frequently used page such as the homepage. Each hit runs the report, which must complete before the page is displayed. This adds substantial load to the system as well as significant delay for the user.
If a report times out, perform the steps below to enable your report to display within the allotted time.
View the Report Performance Details page to identify the report operations that take the longest time to execute.
Examine the number of rows affected by each operation listed on the Report Performance Details page. Identify any long-running (non-system) operations that are applied to large numbers of rows, such as long-running sort operations.
Optimize your report by refactoring slow operations, keeping the optimization guidelines in mind.
Use filters to reduce the number of report rows being processed.
Identify operations that might be best performed within a process and written to a variable for use in your report. This avoids both timeout errors and the overhead of having the system recalculate the result each time the report is viewed.
Each of the functions and properties used in an expression has some impact on the time it takes to generate your report; some have a greater impact than others.
View the report performance details information to observe which operations are consuming the most resources.
The following table lists the top 20 functions and properties available to reports that can involve complex calculations, which historically bear the highest relative cost in terms of report performance.
After optimizing your report design, you may want to consider adding a process execution and process analytics engine pair, or configuring additional system resources, to increase the throughput of the system.
The factors to consider regarding adding system resources are:
Report processing distributes the report generation load across the available process analytics engines. In a standard configuration, the limit for an exported, printed, or Java API-driven report is 10,000 rows; for displayed reports, the number of report pages displayed is limited to approximately 10,000 rows' worth of pages.
When this limit is exceeded, the following error message is displayed:
The result cannot be returned. The number of rows requested would exceed the configured limit per engine.
This error can be reported by the application server (when the report master divides the report for processing by the process analytics engines) or by the process analytics engine that is calculating its share of the report.
Reports that would exceed the maximum number of rows are still generated. Only pages that remain within the configured limit are rendered.
When your reports are configured to return large data sets, we recommend the following report design approaches to help you view the desired data within the rows returned:
Sort the report: Sort the report using a column that returns the desired data in earlier pages of the report.
For example, if you need to view information for December in a report that spans an entire year, sort the date column in descending order, rather than trying to access the last page(s) of the report.
Decrease the Rows per Page value: If you configure your report's Rows per Page to be more than the maximum report rows (configured by your server administrator), you receive a "The result cannot be returned..." error.
Raise the Configured Limit: The maximum report rows value can be raised by your server administrator to return more rows for a report. Before making such a change, we recommend testing the increased value in an environment that includes the same (or comparable) data sets to determine whether your available JVM memory allocation is sufficient.
Analytics Java APIs that return a ResultPage (such as getReportPage and getAcceptedTasks) are also constrained by the maximum number of rows.
If you need to contact Technical Support for assistance, gather the following information beforehand:
db_PAX_XXXX-XX-XX.log and the timestamp of when the issue was reproduced
Report Performance Details