This page provides instructions on how to set up a recommended, highly available system for self-managed Appian customers. Cloud customers should see High Availability for Appian Cloud.
Because high availability is a configuration of distributed Appian systems, you should first be familiar with the concepts and caveats of distributing Appian. See High Availability and Distributed Systems.
There are five main steps, covered in order below.
These instructions provide the necessary steps to set up a highly available system with the recommended levels of redundancy. The diagram below represents this configuration.
While the following instructions are specific to running a copy of all of Appian's components on every server, you can modify the instructions where necessary to match your actual desired configuration so long as all major components of your Appian system are replicated in structure and content in the redundant systems. If your Appian system is distributed among multiple servers, you will need to recreate that configuration for your redundant systems as well. This means if you have Appian distributed across two different servers, each redundant system will need two servers to mirror that distribution (for a total of six servers).
Install Appian on three Linux servers
Be sure to install the same version of Appian on all three servers, including any hotfixes for that version.
Update the appian-topology.xml files on each server to include the other servers using the example below, replacing the listed machine names with the hostnames of your servers. The topology files must be identical across all servers.
<topology port="5000">
  <server host="machine1.example.com">
    <engine name="forums"/>
    <engine name="notify"/>
    <engine name="notify-email"/>
    <engine name="channels"/>
    <engine name="content"/>
    <engine name="collaboration-statistics"/>
    <engine name="personalization"/>
    <engine name="portal"/>
    <engine name="process-design"/>
    <engine name="process-analytics0"/>
    <engine name="process-analytics1"/>
    <engine name="process-analytics2"/>
    <engine name="process-execution0"/>
    <engine name="process-execution1"/>
    <engine name="process-execution2"/>
  </server>
  <server host="machine2.example.com">
    <engine name="forums"/>
    <engine name="notify"/>
    <engine name="notify-email"/>
    <engine name="channels"/>
    <engine name="content"/>
    <engine name="collaboration-statistics"/>
    <engine name="personalization"/>
    <engine name="portal"/>
    <engine name="process-design"/>
    <engine name="process-analytics0"/>
    <engine name="process-analytics1"/>
    <engine name="process-analytics2"/>
    <engine name="process-execution0"/>
    <engine name="process-execution1"/>
    <engine name="process-execution2"/>
  </server>
  <server host="machine3.example.com">
    <engine name="forums"/>
    <engine name="notify"/>
    <engine name="notify-email"/>
    <engine name="channels"/>
    <engine name="content"/>
    <engine name="collaboration-statistics"/>
    <engine name="personalization"/>
    <engine name="portal"/>
    <engine name="process-design"/>
    <engine name="process-analytics0"/>
    <engine name="process-analytics1"/>
    <engine name="process-analytics2"/>
    <engine name="process-execution0"/>
    <engine name="process-execution1"/>
    <engine name="process-execution2"/>
  </server>
  <search-cluster>
    <search-server host="machine1.example.com"/>
    <search-server host="machine2.example.com"/>
    <search-server host="machine3.example.com"/>
  </search-cluster>
  <data-server-cluster>
    <data-server host="machine1.example.com" port="5400" rts-count="2"/>
    <data-server host="machine2.example.com" port="5400" rts-count="2"/>
    <data-server host="machine3.example.com" port="5400" rts-count="2"/>
  </data-server-cluster>
  <kafkaCluster>
    <broker host="machine1.example.com" port="9092"/>
    <broker host="machine2.example.com" port="9092"/>
    <broker host="machine3.example.com" port="9092"/>
  </kafkaCluster>
  <zookeeperCluster>
    <zookeeper host="machine1.example.com" port="2181"/>
    <zookeeper host="machine2.example.com" port="2181"/>
    <zookeeper host="machine3.example.com" port="2181"/>
  </zookeeperCluster>
</topology>
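Since the topology files must be identical across all servers, it is worth verifying them after they have been distributed. A minimal sketch, assuming SSH access from an administrative host and an install path of /usr/local/appian (adjust to your APPIAN_HOME):

# Sketch only; all three checksums should match.
for h in machine1.example.com machine2.example.com machine3.example.com; do
  ssh "$h" md5sum /usr/local/appian/conf/appian-topology.xml
done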
When changing the number of Kafka brokers on a site, as you are here, you must also delete the data stored in <APPIAN_HOME>/services/data/zookeeper/ on every server.
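A minimal sketch of that cleanup, assuming no Appian components are running yet and that APPIAN_HOME is set in the shell; repeat it on every server:

# Sketch only; removes the embedded ZooKeeper data so the new broker count takes effect.
rm -rf "$APPIAN_HOME/services/data/zookeeper/"*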
Remove any checkpoint scheduling configurations you might have made in custom.properties. In a high-availability configuration, the default checkpointing settings are recommended.
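To find any such overrides before removing them, a quick search of custom.properties is usually enough. The sketch below only lists candidate lines and does not change the file; APPIAN_HOME is assumed to be set in the shell:

# Sketch only; review the matches and delete any checkpoint scheduling overrides.
grep -in checkpoint "$APPIAN_HOME/conf/custom.properties"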
Copy the following configuration files from the first server to the same location on the other two servers so that they are identical everywhere:
<APPIAN_HOME>/conf/appian.sec
<APPIAN_HOME>/conf/appian-topology.xml
<APPIAN_HOME>/conf/custom.properties
<APPIAN_HOME>/services/conf/service_manager.conf
<APPIAN_HOME>/data-server/conf/appian-topology.xml
<APPIAN_HOME>/data-server/conf/data-server-sec.properties
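A minimal sketch of copying these files from the first server with rsync, assuming SSH access between the servers and that Appian is installed at the same path on every server (shown here as /usr/local/appian; adjust to your APPIAN_HOME):

# Sketch only; run from the first server and adjust the path and hostnames.
AH=/usr/local/appian
for h in machine2.example.com machine3.example.com; do
  rsync -a "$AH/conf/appian.sec" "$AH/conf/appian-topology.xml" "$AH/conf/custom.properties" "$h:$AH/conf/"
  rsync -a "$AH/services/conf/service_manager.conf" "$h:$AH/services/conf/"
  rsync -a "$AH/data-server/conf/appian-topology.xml" "$AH/data-server/conf/data-server-sec.properties" "$h:$AH/data-server/conf/"
done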
The following directories must be replicated in structure and content across all three servers:
APPIAN_HOME/_admin/accdocs1/
APPIAN_HOME/_admin/accdocs2/
APPIAN_HOME/_admin/accdocs3/
APPIAN_HOME/_admin/mini/
APPIAN_HOME/_admin/models/
APPIAN_HOME/_admin/plugins/
APPIAN_HOME/_admin/process_notes/
APPIAN_HOME/_admin/shared/
APPIAN_HOME/server/archived-process/
APPIAN_HOME/server/channels/gw1/
APPIAN_HOME/server/collaboration/gw1/
APPIAN_HOME/server/forums/gw1/
APPIAN_HOME/server/msg/
APPIAN_HOME/server/notifications/gw1/
APPIAN_HOME/server/personalization/gw1/
APPIAN_HOME/server/portal/gw1/
APPIAN_HOME/server/process/analytics/0000/gw1/
APPIAN_HOME/server/process/analytics/0001/gw1/
APPIAN_HOME/server/process/analytics/0002/gw1/
APPIAN_HOME/server/process/design/gw1/
APPIAN_HOME/server/process/exec/00/gw1/
APPIAN_HOME/server/process/exec/01/gw1/
APPIAN_HOME/server/process/exec/02/gw1/
Remove the <APPIAN_HOME>/data-server/node/election directory from all the servers, if it's present.
Copy the <APPIAN_HOME>/data-server/data directory to all the servers.
Create the following directories on the shared network storage:
APPIAN_HOME/shared-logs/machine1.example.com/
APPIAN_HOME/shared-logs/machine2.example.com/
APPIAN_HOME/shared-logs/machine3.example.com/
On each server, link the APPIAN_HOME/logs/ directory to the corresponding network storage directory from the previous step:
Server #1: APPIAN_HOME/shared-logs/machine1.example.com/
Server #2: APPIAN_HOME/shared-logs/machine2.example.com/
Server #3: APPIAN_HOME/shared-logs/machine3.example.com/
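A minimal sketch of that linking step on Server #1, assuming the shared network storage is already mounted at APPIAN_HOME/shared-logs and no Appian components are running yet; repeat it with the matching hostname on the other two servers:

# Sketch only; run on Server #1 and change HOST on servers 2 and 3.
HOST=machine1.example.com
mkdir -p "$APPIAN_HOME/shared-logs/$HOST"
# Preserve any logs already written locally, then replace logs/ with a symlink.
if [ -d "$APPIAN_HOME/logs" ] && [ ! -L "$APPIAN_HOME/logs" ]; then
  cp -a "$APPIAN_HOME/logs/." "$APPIAN_HOME/shared-logs/$HOST/"
  rm -rf "$APPIAN_HOME/logs"
fi
ln -s "$APPIAN_HOME/shared-logs/$HOST" "$APPIAN_HOME/logs"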
Following the directions in Starting and Stopping Appian, start each instance of a component before moving on to the next component.
Run APPIAN_HOME/services/bin/start.sh -p <password> -s all on Server #1
Run APPIAN_HOME/services/bin/start.sh -p <password> -s all on Server #2
Run APPIAN_HOME/services/bin/start.sh -p <password> -s all on Server #3
Do not wait for the start script to complete on the first server before running it on servers 2 and 3. The first script will not finish until at least two servers have been started.
Run APPIAN_HOME/data-server/bin/start.sh on Server #1
Run APPIAN_HOME/data-server/bin/start.sh on Server #2
Run APPIAN_HOME/data-server/bin/start.sh on Server #3
Run APPIAN_HOME/search-server/bin/start.sh on Server #1
Run APPIAN_HOME/search-server/bin/start.sh on Server #2
Run APPIAN_HOME/search-server/bin/start.sh on Server #3
Run APPIAN_HOME/tomcat/apache-tomcat/bin/start-appserver.sh on Server #1
Run APPIAN_HOME/tomcat/apache-tomcat/bin/start-appserver.sh on Server #2
Run APPIAN_HOME/tomcat/apache-tomcat/bin/start-appserver.sh on Server #3
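For reference, a minimal sketch of driving this start order from a separate administrative host. It assumes passwordless SSH to all three servers, that each start script returns after launching its component, an install path of /usr/local/appian (adjust to your APPIAN_HOME), and that the service-manager password has been placed in a shell variable named APPIAN_SM_PASSWORD, a name chosen here only for illustration:

# Sketch only; adjust hosts, install path, and password handling for your environment.
AH=/usr/local/appian
SERVERS="machine1.example.com machine2.example.com machine3.example.com"

# Appian services: launch on all three servers without waiting in between,
# because the first start script does not finish until at least two servers are up.
for h in $SERVERS; do
  ssh "$h" "$AH/services/bin/start.sh -p '$APPIAN_SM_PASSWORD' -s all" &
done
wait

# Then bring up each remaining component on all servers before moving to the next one.
for h in $SERVERS; do ssh "$h" "$AH/data-server/bin/start.sh"; done
for h in $SERVERS; do ssh "$h" "$AH/search-server/bin/start.sh"; done
for h in $SERVERS; do ssh "$h" "$AH/tomcat/apache-tomcat/bin/start-appserver.sh"; done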