Live Backup Procedure

This page outlines the steps to back up all data of an Appian installation at the application level. It does not replace or eliminate the need to back up the server at the drive and operating-system level.

A live backup can be performed on a running system to replicate the state of its data to external storage, from which the system can be restored should a catastrophic failure occur on the primary system.

Configuring replication

A live backup of Appian involves three different types of data that require three different replication mechanisms.

File system replication

The locations of application data listed in the following table should be replicated to your backup location using your standard file replication mechanism, such as rsync, disk snapshotting, or similar.

Component Name Folder Location
Application Server <APPIAN_HOME>/_admin/accdocs1/
Application Server <APPIAN_HOME>/_admin/accdocs2/
Application Server <APPIAN_HOME>/_admin/accdocs3/
Application Server <APPIAN_HOME>/_admin/mini/
Application Server <APPIAN_HOME>/_admin/models/
Application Server <APPIAN_HOME>/_admin/process_notes/
Application Server <APPIAN_HOME>/_admin/shared/
Application Server <APPIAN_HOME>/server/archived-process/
Application Server <APPIAN_HOME>/server/msg/
Search Server <APPIAN_HOME>/search-server/data/
Channels Engine <APPIAN_HOME>/server/channels/gw1/
Content and Collaboration Statistics Engines <APPIAN_HOME>/server/collaboration/gw1/
Forums Engine <APPIAN_HOME>/server/forums/gw1/
Notifications and Notifications Email Engines <APPIAN_HOME>/server/notifications/gw1/
Personalization Engine <APPIAN_HOME>/server/personalization/gw1/
Portal Engine <APPIAN_HOME>/server/portal/gw1/
Process-design Engine <APPIAN_HOME>/server/process/design/gw1/
Process-analytics Engine (0000) <APPIAN_HOME>/server/process/analytics/0000/gw1/
Process-analytics Engine (0001) <APPIAN_HOME>/server/process/analytics/0001/gw1/
Process-analytics Engine (0002) <APPIAN_HOME>/server/process/analytics/0002/gw1/
Process-execution Engine (00) <APPIAN_HOME>/server/process/exec/00/gw1/
Process-execution Engine (01) <APPIAN_HOME>/server/process/exec/01/gw1/
Process-execution Engine (02) <APPIAN_HOME>/server/process/exec/02/gw1/

If you have more than the default three shards of Process Execution and Process Analytics, the gw1/ directories for those shards must be backed up as well.
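
As a sketch, replicating these directories might look like the loop below. APPIAN_HOME and BACKUP_ROOT are hypothetical local paths, and sample data is created so the snippet runs standalone; in production, point BACKUP_ROOT at your backup storage and list every directory from the table.

```shell
#!/bin/sh
# Sketch only: mirror two of the directories from the table into a backup root.
# The sample paths and demo keystore exist solely so this runs standalone.
APPIAN_HOME=${APPIAN_HOME:-/tmp/appian-src}
BACKUP_ROOT=${BACKUP_ROOT:-/tmp/appian-backup}

mkdir -p "$APPIAN_HOME/_admin/shared" "$APPIAN_HOME/server/msg"
echo demo > "$APPIAN_HOME/_admin/shared/appian.keystore"

# rsync -a is the usual tool for this; cp -pR is a fallback so the demo
# needs nothing beyond a POSIX shell.
sync_dir() {
  if command -v rsync >/dev/null 2>&1; then
    rsync -a "$1/" "$2/"
  else
    mkdir -p "$2" && cp -pR "$1/." "$2/"
  fi
}

for d in _admin/shared server/msg; do
  mkdir -p "$BACKUP_ROOT/$d"
  sync_dir "$APPIAN_HOME/$d" "$BACKUP_ROOT/$d"
done
```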

After performing an upgrade or restoration, it is particularly important to verify that your keystore file (located in _admin/shared/) and your Appian data source data are present and in a consistent state. If either is missing or inconsistent, the system will not start and will log the following ERROR message to the application server log:

The internal encryption module is in an inconsistent state. The appian.keystore file is missing or cannot be read. If migrating or restoring from a backup, ensure that _admin/shared/appian.keystore is in place. (APNX-1-4210-003)
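
A pre-flight check along these lines can catch the problem before startup. This is a sketch, not part of the product: APPIAN_HOME defaults to a demo path and a placeholder keystore is created so the snippet runs standalone; drop those setup lines in real use.

```shell
#!/bin/sh
# Sketch: confirm the restored keystore exists and is readable before
# starting Appian. The demo path and placeholder file below are assumptions
# so the check runs standalone.
APPIAN_HOME=${APPIAN_HOME:-/tmp/appian-demo}
mkdir -p "$APPIAN_HOME/_admin/shared"
: > "$APPIAN_HOME/_admin/shared/appian.keystore"   # simulate a restored keystore

KEYSTORE="$APPIAN_HOME/_admin/shared/appian.keystore"
if [ -r "$KEYSTORE" ]; then
  echo "keystore present: $KEYSTORE"
else
  echo "keystore missing or unreadable: $KEYSTORE (expect APNX-1-4210-003)" >&2
fi
```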

When running in a system with multiple application servers, some of these directories are shared between servers. In those cases, the data only needs to be backed up once from one of the servers. For those directories that are not shared between servers, the data needs to be backed up from each of the servers. See High Availability and Distributed Installations for a list of directories that are shared when configuring a multiple server setup.

Internal Messaging Service replication

The Internal Messaging Service can replicate its data to a backup location. Appian must be installed on the backup server, but only the Internal Messaging Service needs to be running there for replication to take place.

Topology

In conf/appian-topology.xml in the backup server, add sourceKafkaCluster and mirrorMakerCluster elements as in the examples below. In these examples, the backup system runs on backupserver1.example.com and the system to be replicated from runs on primaryserver1.example.com.

Example: Single-node

<topology port="5000">
  <server host="backupserver1.example.com">
    <engine name="forums"/>
    <engine name="notify"/>
    <engine name="notify-email"/>
    <engine name="channels"/>
    <engine name="content"/>
    <engine name="collaboration-statistics"/>
    <engine name="personalization"/>
    <engine name="portal"/>
    <engine name="process-design"/>
    <engine name="process-analytics0"/>
    <engine name="process-analytics1"/>
    <engine name="process-analytics2"/>
    <engine name="process-execution0"/>
    <engine name="process-execution1"/>
    <engine name="process-execution2"/>
  </server>
  <search-cluster port="9300">
    <search-server/>
  </search-cluster>
  <kafkaCluster>
    <broker host="backupserver1.example.com" port="9092"/>
  </kafkaCluster>
  <zookeeperCluster>
    <zookeeper host="backupserver1.example.com" port="2181"/>
  </zookeeperCluster>
  <sourceKafkaCluster>
    <broker host="primaryserver1.example.com" port="9092"/>
  </sourceKafkaCluster>
  <mirrorMakerCluster>
    <instance host="backupserver1.example.com"/>
  </mirrorMakerCluster>
  <data-server-cluster>
    <data-server host="backupserver1.example.com" port="5400" rts-count="2"/>
  </data-server-cluster>
</topology>

Example: High Availability

<topology port="5000">
  <server host="backupserver1.example.com">
    <engine name="forums"/>
    <engine name="notify"/>
    <engine name="notify-email"/>
    <engine name="channels"/>
    <engine name="content"/>
    <engine name="collaboration-statistics"/>
    <engine name="personalization"/>
    <engine name="portal"/>
    <engine name="process-design"/>
    <engine name="process-analytics0"/>
    <engine name="process-analytics1"/>
    <engine name="process-analytics2"/>
    <engine name="process-execution0"/>
    <engine name="process-execution1"/>
    <engine name="process-execution2"/>
  </server>
  <server host="backupserver2.example.com">
    <engine name="forums"/>
    <engine name="notify"/>
    <engine name="notify-email"/>
    <engine name="channels"/>
    <engine name="content"/>
    <engine name="collaboration-statistics"/>
    <engine name="personalization"/>
    <engine name="portal"/>
    <engine name="process-design"/>
    <engine name="process-analytics0"/>
    <engine name="process-analytics1"/>
    <engine name="process-analytics2"/>
    <engine name="process-execution0"/>
    <engine name="process-execution1"/>
    <engine name="process-execution2"/>
  </server>
  <server host="backupserver3.example.com">
    <engine name="forums"/>
    <engine name="notify"/>
    <engine name="notify-email"/>
    <engine name="channels"/>
    <engine name="content"/>
    <engine name="collaboration-statistics"/>
    <engine name="personalization"/>
    <engine name="portal"/>
    <engine name="process-design"/>
    <engine name="process-analytics0"/>
    <engine name="process-analytics1"/>
    <engine name="process-analytics2"/>
    <engine name="process-execution0"/>
    <engine name="process-execution1"/>
    <engine name="process-execution2"/>
  </server>
  <search-cluster>
    <search-server host="backupserver1.example.com"/>
    <search-server host="backupserver2.example.com"/>
    <search-server host="backupserver3.example.com"/>
  </search-cluster>
  <data-server-cluster>
    <data-server host="backupserver1.example.com" port="5400" rts-count="2"/>
    <data-server host="backupserver2.example.com" port="5400" rts-count="2"/>
    <data-server host="backupserver3.example.com" port="5400" rts-count="2"/>
  </data-server-cluster>
  <kafkaCluster>
    <broker host="backupserver1.example.com" port="9092"/>
    <broker host="backupserver2.example.com" port="9092"/>
    <broker host="backupserver3.example.com" port="9092"/>
  </kafkaCluster>
  <zookeeperCluster>
    <zookeeper host="backupserver1.example.com" port="2181"/>
    <zookeeper host="backupserver2.example.com" port="2181"/>
    <zookeeper host="backupserver3.example.com" port="2181"/>
  </zookeeperCluster>
  <sourceKafkaCluster>
    <broker host="primaryserver1.example.com" port="9092"/>
    <broker host="primaryserver2.example.com" port="9092"/>
    <broker host="primaryserver3.example.com" port="9092"/>
  </sourceKafkaCluster>
  <mirrorMakerCluster>
    <instance host="backupserver1.example.com"/>
  </mirrorMakerCluster>
</topology>

The host attribute of each broker element inside sourceKafkaCluster should point to the main site's Kafka cluster (the one you want to replicate data from).

The number of broker elements inside sourceKafkaCluster must match the number of broker elements inside kafkaCluster as well as the number of brokers in the primary site's kafkaCluster.
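
That constraint can be sanity-checked with a small script. The sketch below uses sed and grep rather than a real XML parser, and generates a one-broker sample topology in /tmp purely so it runs standalone; in practice, point TOPOLOGY at conf/appian-topology.xml and remove the sample-generation block.

```shell
#!/bin/sh
# Sketch: verify that sourceKafkaCluster and kafkaCluster declare the same
# number of brokers. Sample file and path are assumptions for the demo.
TOPOLOGY=/tmp/appian-topology-sample.xml   # in practice: conf/appian-topology.xml
cat > "$TOPOLOGY" <<'EOF'
<topology port="5000">
  <kafkaCluster>
    <broker host="backupserver1.example.com" port="9092"/>
  </kafkaCluster>
  <sourceKafkaCluster>
    <broker host="primaryserver1.example.com" port="9092"/>
  </sourceKafkaCluster>
</topology>
EOF

# Count <broker> elements within each cluster element (element names differ
# in case, so the literal patterns do not collide).
src=$(sed -n '/<sourceKafkaCluster>/,/<\/sourceKafkaCluster>/p' "$TOPOLOGY" | grep -c '<broker')
main=$(sed -n '/<kafkaCluster>/,/<\/kafkaCluster>/p' "$TOPOLOGY" | grep -c '<broker')

if [ "$src" -eq "$main" ]; then
  echo "broker counts match: $src"
else
  echo "broker count mismatch: $src (source) vs $main (backup)"
fi
```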

The mirrorMakerCluster element controls which server runs the replication process. Only one instance can be configured, and it must be one of the servers listed in kafkaCluster.

Starting the backup process

1. Start ZooKeeper

On the servers that host the backup site's ZooKeeper cluster, run:

<APPIAN_HOME>/services/bin/start.sh -p <password> -s zookeeper

2. Start Kafka

On the servers that host the backup site's Kafka cluster, run:

<APPIAN_HOME>/services/bin/start.sh -p <password> -s kafka

3. Start Mirror Maker

Now that the Kafka instances are running on the backup site, start the process that watches the main site's Kafka cluster for messages and replicates them into the backup site's Kafka cluster.

On the server that matches the hostname in the mirrorMakerCluster element of the backup site's appian-topology.xml file, run:

<APPIAN_HOME>/services/bin/start.sh -p <password> -s mirror-maker
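
The three start-up steps above can be sketched as a single ordered loop. This is a dry run: RUN=echo prints each command instead of executing it (clear RUN on a real backup server), and the password placeholder is left as-is.

```shell
#!/bin/sh
# Dry-run sketch of the backup-site start order: ZooKeeper, then Kafka,
# then Mirror Maker. RUN=echo is an assumption for safe demonstration.
APPIAN_HOME=${APPIAN_HOME:-/usr/local/appian}
RUN=${RUN:-echo}

started=""
for svc in zookeeper kafka mirror-maker; do
  $RUN "$APPIAN_HOME/services/bin/start.sh" -p '<password>' -s "$svc"
  started="$started $svc"
done
```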

Failover procedure

When failing over from your primary site to your backup site you must:

1. Stop the mirror maker processes

On the server that matches the hostname in the mirrorMakerCluster element of the backup site's appian-topology.xml file, run:

<APPIAN_HOME>/services/bin/stop.sh -p <password> -s mirror-maker

2. Start the backup Appian instance as normal

Start all Appian services.

3. Sync your records

Perform a manual sync for all of your synced records.

RDBMS replication

Use your preferred backup mechanism for the specific RDBMS(s) used by your Appian environment.

Built: Mon, Dec 06, 2021 (04:19:37 PM)