Appian runs Elasticsearch to provide search and data retrieval capabilities. In the product and documentation, this is referred to as the "search server."
The search server can be configured as a single instance or in a cluster for data redundancy and high availability.
|Search Server Instances||Data Redundancy||Automatic Failover||Example appian-topology.xml|
The appian-topology.xml file must be placed in both <APPIAN_HOME>/conf/ and <APPIAN_HOME>/search-server/conf/ on all application servers.
The appian-topology.xml examples above also demonstrate using the port attribute to change the port used by each search server node from the default of 9300. The port can be different for each search server instance, as in the second example, or the same for every server in the cluster, as in the third example. If two or more search servers are configured on the same host (not recommended), the ports used by each will be offset, starting from the port configured for the cluster.
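As an illustration of the port attribute, a three-node topology might look like the sketch below. Only the <search-server> element and its port attribute are named in this document; the surrounding elements and the hostnames are assumptions for illustration, so refer to the examples in the table above for the authoritative format.

```xml
<topology>
  <search-cluster>
    <!-- Each node on its own host, each with an explicit, distinct port -->
    <search-server host="ss1.domain.tld" port="9301"/>
    <search-server host="ss2.domain.tld" port="9302"/>
    <search-server host="ss3.domain.tld" port="9303"/>
  </search-cluster>
</topology>
```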
If hostnames are used instead of IP addresses, the hostname on each machine must resolve to a non-loopback IP address that other machines can use to contact the host. For example, ss1.domain.tld must not resolve to 127.0.0.1 on that machine, since the search server broadcasts both the hostname and the IP address, as resolved on that machine, when establishing a cluster with other nodes.
The index data for the search server is located in <APPIAN_HOME>/search-server/data/. This directory should not be shared between application servers. Since access to search server data is latency-sensitive, host the search server data locally on the machine rather than on a shared or external drive such as network-attached storage (NAS). This applies in High Availability (HA) topologies as well, since each search server node stores its own copy of the data.
You cannot back up search server data by simply copying the data directory listed above. Instead, you must use the Elasticsearch snapshot and restore APIs. You can automate snapshots by configuring an Elasticsearch snapshot lifecycle management (SLM) policy.
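As a sketch, the standard Elasticsearch request sequence for this looks like the following. The repository name, filesystem path, schedule, and index name are placeholders, not values from this document; consult the Elasticsearch documentation for your version before using them.

```
# Register a filesystem snapshot repository (name and path are examples)
PUT _snapshot/appian_backups
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/search-server" }
}

# Take a one-off snapshot of the cluster
PUT _snapshot/appian_backups/snapshot_1?wait_for_completion=true

# Or automate snapshots with an SLM policy (nightly at 01:30 here)
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "appian_backups"
}

# Restore specific indices from a snapshot (index name is a placeholder)
POST _snapshot/appian_backups/snapshot_1/_restore
{ "indices": "your-index-name" }
```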
When restoring search server data from a snapshot, only the following indices should be restored:
If your site is not using the Document Extraction Suite, backing up Elasticsearch data is not necessary.
The search server's use of disk and memory resources scales with the number of design objects, user activity, document extraction mappings, and rule executions. The search server stores at most six entries per rule per minute, per application server. Although capped, rule execution metrics are the largest factor, so they are used for the sizing estimate. Additionally, rule metrics are only stored for 30 days, so the maximum disk usage can be calculated from the number of minutes in 30 days using the following equation:
Max Disk Space = Number of Unique Rules in the System
               x Number of Application Servers
               x 43,200 (the number of minutes in 30 days)
               x 6 (the maximum number of rule metric entries per minute)
               x 1 KB (the approximate size of a rule metric entry)
For example, if your system has a sustained rate of ten unique rule executions every ten seconds for 30 days, you would expect to use ~2.6GB of disk for a single application server and ~7.8GB of disk for a three application server system.
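Plugging the example numbers into the formula (ten unique rules, each saturating the six-entries-per-minute cap) reproduces those estimates. This is a minimal sketch; the function name and the decimal KB-to-GB conversion are choices made here, not part of the product.

```python
MINUTES_IN_30_DAYS = 30 * 24 * 60   # 43,200
MAX_ENTRIES_PER_MINUTE = 6          # per rule, per application server
ENTRY_SIZE_KB = 1                   # approximate size of a rule metric entry

def max_disk_gb(unique_rules: int, app_servers: int) -> float:
    """Upper bound on rule-metric disk usage over 30 days, in decimal GB."""
    kb = (unique_rules * app_servers * MINUTES_IN_30_DAYS
          * MAX_ENTRIES_PER_MINUTE * ENTRY_SIZE_KB)
    return kb / 1_000_000  # KB -> GB (decimal)

print(round(max_disk_gb(10, 1), 1))  # ~2.6 GB for one application server
print(round(max_disk_gb(10, 3), 1))  # ~7.8 GB for three application servers
```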
To determine the number of rules in the system, review the "Rules and Constants" column in the content.csv file in the <APPIAN_HOME>/logs/data-metrics directory. To determine the number of unique rule executions that occur on your site in a given time period, review the expressions_details.csv file.
Below are instructions for starting and stopping the search server on Linux or Windows using the provided scripts. Note that on Windows, a search server process started with the script will stop when the user who started it logs out. To avoid this, the search server can instead be installed as a Windows service and started and stopped using the Windows service management console. For instructions on controlling the search server as a Windows service, see Installing Search Server as a Windows Service.
To start the search server, run the start.sh script (start.bat on Windows).
The search server should be started before starting the application server(s).
By default, the search server starts with its minimum and maximum memory usage (JVM heap) each set to 1024 MB (1 GB). To modify the memory settings to a custom value:

1. Copy the start.conf.example file to a file named start.conf (start.conf.bat on Windows) and place it in the same directory.
2. Find the SS_MEM_ARGS variable in the new file and modify the values to the desired memory settings. Do not exceed 30 GB or half of the system memory. Do not set it lower than 256 MB.

To add custom Java options to the startup command, follow the same steps with the SS_CUSTOM_OPTS variable. In general, additional custom Java options are not necessary unless instructed by Appian Support.
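As a sketch, assuming SS_MEM_ARGS takes standard JVM heap flags (the format in start.conf.example is authoritative), a start.conf raising the heap to 2 GB might contain:

```
SS_MEM_ARGS="-Xms2048m -Xmx2048m"
```

Setting -Xms and -Xmx to the same value keeps the heap size fixed, which is the usual practice for JVM-based search servers.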
To maintain these settings when upgrading to the next version of Appian, copy the start.conf (start.conf.bat on Windows) file to the new installation.
To stop the search server, run the stop.sh script (stop.bat on Windows).
The search server should be stopped after stopping the application server(s).
Logs for the search server are located in the <APPIAN_HOME>/logs/search-server/ directory. Log levels can be controlled by editing the logging configuration. <APPIAN_HOME>/logs/search-server/search-server.log prints a message similar to the following each time the cluster state changes. The same information is also printed every five minutes to the search server cluster metrics log.
Cluster health changed for [cluster name=appian-search-cluster]. Status changed from [RED] to [GREEN]. Current cluster information: [status=GREEN], [timed out=false], [nodes=1], [data nodes=1], [active primary shards=1], [active shards=1], [relocating shards=0], [initializing shards=0], [unassigned shards=0]
The table below describes the meaning of the various cluster status levels and recovery procedures, if applicable.
|GREEN||All configured search server nodes are part of the cluster and operational||N/A|
|YELLOW||At least one search server node is down, but a majority are still available. The cluster remains operational, accepting both reads and writes.||Recover the down node(s) to the same host and port configured in appian-topology.xml|
|RED||Fewer than a majority of search server nodes are available. The cluster is only partially operational, accepting only reads; writes are rejected.||Recover the down node(s) to the same host and port configured in appian-topology.xml|
The changes to appian-topology.xml described in the recovery column are hot-reloaded and do not require downtime.
There are a few important things to understand about modifying appian-topology.xml. Changes to appian-topology.xml are detected as soon as they are saved. Only one <search-server> element can be added, removed, or replaced in the appian-topology.xml at one time. When adding or removing several nodes, do each as a separate step.
In search-server.log you will see messages with "relocating shards" increasing from 0 to 1 or greater while the system is resynchronizing data, and then a subsequent message with "relocating shards" returning to 0 once the relocation is complete.
Other Error Scenarios
Should all of the nodes in the search server cluster fail during operation, both read and write calls will be rejected. Here are the features that will be affected:
If the application server running the Appian EAR is started before the search server is running, the application server will log the following error message on startup:
The search server cannot be reached. Failed to connect to server at [host:port]. Check that the search server is started. If running multiple application servers, check that appian-topology.xml is properly configured with the search cluster details. The appian-topology.xml file must be distributed to each <APPIAN_HOME>/conf/ and <APPIAN_HOME>/search-server/conf/ directory. See documentation for details. (APNX-1-4274-001)
See also Search Server Cluster Metrics Log