Installation Guide
This content applies solely to Process Mining, which must be purchased separately from the Appian base platform.

This page applies to self-managed installations only.

This page provides instructions for setting up self-managed installations of Mining Prep and Process Mining.

Before you begin

Appian Process Mining comprises two separate modules: Mining Prep and Process Mining. Mining Prep is optional and relies on the Process Mining installation. If you want to set up a self-managed installation of Mining Prep, you must first install the Process Mining module.

If you're curious about the high-level architecture of Process Mining and Mining Prep, see Platform architecture.

Installation requirements

This section describes the requirements to set up and monitor Mining Prep and Process Mining.

Self-managed deployments run in Docker containers on a host. Each deployment and its host system should be monitored to respond to potential issues in a timely manner. The requirements for self-managed deployments depend on the choices made when integrating Mining Prep and Process Mining into your IT ecosystem.

For an efficient rollout, Appian may require infrastructure, certificates, domain(s), and credentials to be provisioned before a deployment is rolled out.

Review system requirements before proceeding with the installation.

DNS settings and SSL certificate

Mining Prep and Process Mining require DNS entries for all hosts to be in place before the system is used.

To ensure encrypted communication between the browser and the system, create SSL certificates for the URL where you'll host the installation. For example, Mining Prep requires an SSL certificate, but Process Mining can run on localhost:9100 as a demo.
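For a demo or staging setup, a self-signed certificate can be generated with openssl; the file names and domain below are hypothetical placeholders, and production installations should use a certificate issued by a trusted CA instead:

```shell
# Generate a self-signed certificate and key (placeholder names and domain)
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout certs/server.key -out certs/server.crt \
  -days 365 -subj "/CN=mining.example.com"

# Confirm the certificate and key belong together by comparing public keys
openssl x509 -in certs/server.crt -pubkey -noout > certs/crt.pub
openssl pkey -in certs/server.key -pubout > certs/key.pub
cmp certs/crt.pub certs/key.pub && echo "certificate and key match"
```

The public-key comparison is a quick sanity check that the CERT and CERT_KEY files you later configure actually form a pair.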

DevOps engineers need access to the SSL certificates in order to correctly configure the load balancers on the hosts.

Your action items:

  1. Create a DNS entry for each host.
  2. Create an SSL certificate for the URL where you'll host the installation.

Configure Red Hat Enterprise Linux to use Docker

If you're setting up on Red Hat Enterprise Linux (RHEL), you'll need to complete some additional steps to use the Docker container. Before you begin, make sure you have Docker and Docker Compose installed.

  1. In the terminal, install Red Hat's container tools (especially Podman) with the commands:

     sudo yum module enable container-tools
     sudo yum install -y yum-utils container-selinux
  2. Start the docker service using the command:

    sudo systemctl start docker
  3. Allow rootless use of docker commands by creating a docker group and adding your user to it:

     sudo groupadd docker
     sudo usermod -aG docker $USER

Monitoring recommendations

The deployment includes basic monitoring of services, which you can enable by editing the .env file as described in Monitoring configuration. However, it is recommended to also have additional systems in place to manage and monitor Mining Prep and Process Mining.

To ensure the health and optimal configuration, we recommend you monitor the following metrics and endpoints at a minimum:

  • Disk usage of Docker
  • Disk usage of the data volume(s) used by the deployment
  • Memory usage of individual services
  • CPU usage of individual services
  • HTTP endpoints of the various services (a list will be provided upon request)
  • Error rates of individual services
  • SSL certificate expiration

Download the Mining Prep and Process Mining packages

You need to download the packages for Process Mining and Mining Prep before you start the installation steps. Here's an overview of the main packages:

  • Process Mining Application image archive: process-mining-application-5.6.0.tar.gz.
  • Process Mining Utility image archive: process-mining-utility-5.6.0.tar.gz.
  • Mining Prep Application image archive: mining-prep-5.6.0.tar.gz.
  • Mining Prep Release archive: mining-prep-5.6.0-configs.tar.gz.
  • Mining Prep License file: rukh_license.key.

To download the packages:

  1. In MyAppian, navigate to SUPPORT.
  2. Click DOWNLOADS.
  3. Click PLATFORM.
  4. Select the relevant or latest Process Mining version that is available.
  5. Select a file in the Downloads list.
  6. Click the file name to download the file.
  7. Click FINISH.
  8. Repeat steps 5-7 until all files are downloaded.

Install Process Mining

Demo/staging installation

This section describes how to set up Process Mining for testing purposes only. Please note that this Process Mining setup can't connect to Mining Prep.

Demo mode starts Process Mining with port 9100 open for unencrypted traffic. HTTPS access, backups, and monitoring are disabled.

Follow these steps to start Process Mining in demo mode:

  1. Copy the application image archive and utility image archive from Appian to the host you want to install Process Mining on. For example:
    • Application image archive: process-mining-application-5.6.0.tar.gz. This archive contains images of services needed for the setup and the install image needed to install the setup.
    • Utility image archive: process-mining-utility-5.6.0.tar.gz. This archive contains the images for backup, monitoring, and alerts.
  2. If the tar.gz archive is split into two parts, combine them with the command: cat application-part-1.tar.gz application-part-2.tar.gz > application.tar.gz.
  3. Load the docker images from the downloaded tar.gz file with the command: docker load -i <path to tar.gz file>.
  4. Run a command similar to the following example to extract the Process Mining setup files:

    Replace the image tag in the first command with the version you downloaded. In the following example, it is v5.6.0.

     docker create --name pm-install lanalabs/lana-install:v5.6.0
     docker cp pm-install:/home/lanalabs/lana-on-premise .
     docker rm pm-install
  5. Switch to the lana-on-premise folder.
  6. Modify the following mandatory parameters in the .env file:
    • ORGANIZATION: The name you choose is used as the realm that must be entered to log in to the Process Mining frontend.
  7. Optionally, modify any other parameters in the .env file; each parameter is documented in the file.
  8. Run the bash script with the up -d parameter to start the Process Mining server: ./ up -d
  9. Process Mining is now running at http://localhost:9100.
  10. Sign in as the administrator with the value from USER_MAIL as username and value from USER_PASS as password.

    It takes a few seconds for the identity and access management service to provision a user. You may receive a 404 error if you sign in before this process completes.

  11. Accept the terms and conditions, and privacy statement.

You're done!
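The split-archive concatenation from step 2 can be sanity-checked locally before loading any images; the archive and part names below are stand-ins created just for the demonstration:

```shell
# Create a sample archive, split it into two parts, and recombine them
echo "sample payload" > payload.txt
tar czf application.tar.gz payload.txt
split -n 2 -d application.tar.gz application-part-   # yields ...-00 and ...-01

# Recombine the parts in order, exactly as in the installation step
cat application-part-00 application-part-01 > recombined.tar.gz

# The recombined archive is byte-identical to the original
cmp application.tar.gz recombined.tar.gz && echo "archives match"
```

The order of the parts matters: concatenating them out of order produces a corrupt archive that docker load will reject.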

Production installation

Production installations should run with SSL certificates, and optionally monitoring and backups.

Follow these steps to start Process Mining in production mode:

  1. Copy the image archive, release archive, and license file you downloaded from Appian to the host you want to install Process Mining on.
  2. Load the docker images from the downloaded tar.gz file with the command: docker load -i <path to tar.gz file>.
  3. Run the following commands to extract the Process Mining setup files:

    Replace the image tag in the first command with the version you downloaded. In the following example, it is v5.6.0.

     docker create --name pm-install lanalabs/lana-install:v5.6.0
     docker cp pm-install:/home/lanalabs/lana-on-premise .
     docker rm pm-install
  4. Switch to the lana-on-premise folder.
  5. Modify the following mandatory parameters in the .env-prod file:
    • COMPOSE_FILE: Compose files to apply to your installation. Default janus.yml:ssl.yml
    • SERVER_NAME: Set your server name; it can be a DNS name or an IP address.
    • ORGANIZATION: The name you choose is used as the realm that must be entered to log in to the Process Mining frontend.
    • USER_PASS: Set USER_PASS to the initial password for your administrative user. You will be asked to reset it after the first login.
    • SECRET_KEY_BASE: The encryption key for the application; must be at least 16 characters long.
    • KEYCLOAK_USER: The username for the Keycloak administrator that can modify and update the realm and users.
    • KEYCLOAK_PASSWORD: The password for the Keycloak administrator.
    • MEMORY: RAM used by Process Mining.
    • BACKEND_AVAILABLE_PROCESSORS: Defaults to 1; set a value between 1 and the maximum number of processors Process Mining can use.
    • RESTART: Set the restart policy of the Docker containers. For production environments, we suggest always so Process Mining restarts automatically after a server reboot.
    • DB_USER: Set a user for your DB.
    • DB_PASSWORD: Set a password for your DB.
    • RABBITMQ_USER: Set a user for RabbitMQ.
    • RABBITMQ_PASSWORD: Set a password for RabbitMQ.
  6. Modify the following mandatory parameters in the .env file:
    • CERT_PATH: Folder where certificates are located.
    • CERT: Certificate name.
    • CERT_KEY: Certificate key name.
  7. Run the bash script with the use-prod-config parameter to install Process Mining with .env-prod variables: ./ use-prod-config
  8. Run the bash script with the up -d parameter to run Process Mining server: ./ up -d
  9. Now Process Mining will be running at http://SERVER_NAME. This is the URL you need to specify for miningBaseUrl while configuring Mining Prep.
  10. Sign in as the administrator with the value from USER_MAIL as username and value inside the USER_PASS file as password.
  11. Accept the terms and conditions, and privacy statement.

You're done!
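As a sketch only, a filled-in .env-prod covering the mandatory parameters above might look like the following; every value is an illustrative placeholder, and the exact value formats (for example for MEMORY) should be taken from the comments in the file itself:

```ini
# Hypothetical .env-prod values; replace each one before running use-prod-config
COMPOSE_FILE=janus.yml:ssl.yml
SERVER_NAME=mining.example.com
ORGANIZATION=my-organization
USER_PASS=initial-admin-password
SECRET_KEY_BASE=change-me-to-at-least-16-chars
KEYCLOAK_USER=keycloak-admin
KEYCLOAK_PASSWORD=change-me
MEMORY=8g
BACKEND_AVAILABLE_PROCESSORS=4
RESTART=always
DB_USER=lana
DB_PASSWORD=change-me
RABBITMQ_USER=lana
RABBITMQ_PASSWORD=change-me
```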

Configuration options

The configuration of the setup can be modified to fit the environment. For example:

Set up email server

After you create a user in the application, they will need to verify their email and set a new password. You will need to configure the email server so the Process Mining Keycloak service can automatically send out this email to new users.

To configure the email server, set the following environment variables in the janus.yml file, under the "environment" field for the "lana-janus" service:

  • KEYCLOAK_SMTP_ENABLED - Set to "true" to enable sending the emails.
  • KEYCLOAK_SMTP_AUTHENTICATION - Set to "true" if your email server requires authentication.
  • KEYCLOAK_SMTP_START_TLS - Set to "true" if your email server uses STARTTLS.
  • KEYCLOAK_SMTP_SSL - Set to "true" if your email server uses SSL encryption.
  • KEYCLOAK_SMTP_HOST - Set the hostname of your email server.
  • KEYCLOAK_SMTP_PORT - Set the port of your email server.
  • KEYCLOAK_SMTP_USERNAME - Set the username that the email server needs for authentication.
  • KEYCLOAK_SMTP_PASSWORD - Set the password that the email server needs for authentication.
  • KEYCLOAK_SMTP_ENVELOPE_FROM - Set the address which the email should be sent from.
  • KEYCLOAK_SMTP_FROM_DISPLAY_NAME - Set the name shown in the "from" field.
  • KEYCLOAK_SMTP_FROM - Set the email address shown in the "from" field.
  • KEYCLOAK_SMTP_REPLY_TO_DISPLAY_NAME - Set the name for the "reply-to" field.
  • KEYCLOAK_SMTP_REPLY_TO - Set the email address for the "reply-to" field.
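A sketch of how a few of these variables might sit in janus.yml; the services/environment nesting assumes a standard Docker Compose layout, and all hostnames, credentials, and addresses are placeholders:

```yaml
# Hypothetical excerpt of janus.yml (Docker Compose layout assumed)
services:
  lana-janus:
    environment:
      KEYCLOAK_SMTP_ENABLED: "true"
      KEYCLOAK_SMTP_AUTHENTICATION: "true"
      KEYCLOAK_SMTP_START_TLS: "true"
      KEYCLOAK_SMTP_SSL: "false"
      KEYCLOAK_SMTP_HOST: smtp.example.com
      KEYCLOAK_SMTP_PORT: "587"
      KEYCLOAK_SMTP_USERNAME: mailer@example.com
      KEYCLOAK_SMTP_PASSWORD: change-me
      KEYCLOAK_SMTP_ENVELOPE_FROM: no-reply@example.com
      KEYCLOAK_SMTP_FROM: no-reply@example.com
      KEYCLOAK_SMTP_FROM_DISPLAY_NAME: Process Mining
```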

Configure production setup

Apply configuration as necessary for those features and make sure the COMPOSE_FILE list contains janus.yml:ssl.yml:backup.yml before starting the services.

Modify maximum upload size for event logs

By default, an upload can't be larger than 10 GB. You can change this limit by defining the NGINX_MAX_UPLOAD variable in the .env file. A restart of NGINX (./ up -d nginx) is required to apply the change.
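For example, assuming the variable accepts nginx-style size suffixes, raising the limit to 20 GB would be a one-line change in the .env file; the value shown is illustrative:

```ini
# Hypothetical example value; confirm the accepted format in the .env comments
NGINX_MAX_UPLOAD=20G
```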

Change the default open ports

The only open port of the setup is defined by NGINX_PORT in the .env-prod file. A restart of all services is necessary to apply the change. Run the following command to restart all services: ./ clean-restart.

Add an SSL certificate

Adding an SSL certificate will enable encrypted traffic. You will need to place the server key and certificate in a location readable by Docker and make the following changes to the .env-prod file:

  • Make sure both the key and certificate files are PEM-encoded.
  • Make sure the certificate file contains a certificate chain with all certificates in the chain in the right order.
  • Ensure CERT_PATH points to the folder containing the key and certificate files.
  • Ensure CERT_KEY contains only the file name of the key file.
  • Ensure CERT contains only the file name of the certificate (chain).
  • Verify that NGINX_PORT is unset or configured with the correct port numbers of the allowed open ports.
  • Replace no-ssl.yml with ssl.yml in the list of the COMPOSE_FILE variable.
  • If you are using monitoring, ensure that ENDPOINT_SCHEME is set to https.
  • Verify read permissions for "other" users on both the files and the folder. To set them accordingly, you can use the command: chmod -R o+r <PATHTOCERT>
  • Restart the reverse-proxy-nginx and lana-fe services to apply the changes. Run the following command: ./ up -d reverse-proxy-nginx lana-fe
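The read-permission requirement above can be verified with stat; the certs folder and file names here are hypothetical stand-ins for your real CERT_PATH contents:

```shell
# Create stand-in certificate files so the snippet is self-contained
mkdir -p certs
touch certs/server.crt certs/server.key

# Grant "other" users read access, as the installation requires
chmod -R o+r certs

# The last permission triad should start with "r", e.g. -rw-r--r--
stat -c '%A' certs/server.crt
stat -c '%A' certs/server.key
```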

Add Mining Prep authentication redirect

You need to set up the correct redirect URL for when users sign in to Mining Prep. This requires you to add the base URL of Mining Prep to the list of allowed redirect URLs.

To set up the Mining Prep redirect URL, configure the PREP_FRONTEND_BASE_URLS environment variable that's located in the janus.yml file under the environment field for the lana-janus service.

PREP_FRONTEND_BASE_URLS accepts a comma-separated list of base URLs for the Mining Prep installation(s).

After you add your base URLs to the environment variable, run ./ up -d to apply the changes.

Upgrade Process Mining

To upgrade Process Mining to a new version, you need to copy the latest configuration file changes into your installation directory:

  1. Download the new Process Mining packages.
  2. Replace the version number in the following commands with the version you're upgrading to, and execute them to copy the new configuration files into your installation directory.
    docker tag lanalabs/lana-install:v5.6.0 lanalabs/lana-install:latest
    ./ update
  3. Apply the changes from the files marked with a .new suffix to the corresponding files in your installation directory.

    It may be helpful to use a diff command to identify the new changes. For example, if you're using Vim, you can use vimdiff FILE

  4. Restart Process Mining:
    ./ clean-restart
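To surface all the .new files from step 3 in one pass, a small loop can run diff over each pair; the file names and contents below are fabricated purely for the demonstration:

```shell
# Fabricated sample pair standing in for a real config file and its .new copy
printf 'OLD=1\n' > .env
printf 'OLD=1\nNEW=2\n' > .env.new

# Diff each *.new file against its counterpart
for f in *.new; do
  echo "### ${f%.new} -> $f ###"
  diff -u "${f%.new}" "$f" || true   # diff exits 1 when files differ
done
```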

Install Mining Prep

Before you start installing Mining Prep, make sure you've already set up SSL certificates and installed Process Mining.

Installation instructions

  1. Copy the application image archive, release archive, and license file you downloaded from Appian to the host you want to install Mining Prep on. For example:
    • Application image archive: mining-prep-5.6.0.tar.gz.
    • Release archive: mining-prep-5.6.0-configs.tar.gz
    • License file: rukh_license.key
  2. Load the docker images from the image archive with the command: docker load -i <path to image archive file>
  3. Extract the release archive with the command: tar xzvf <file path of release archive file>. This will create the production/ folder. For example, the file for the 5.6.0 release is mining-prep-5.6.0-configs.tar.gz.
  4. Put the license key file in the production/ folder.
  5. In the production/ folder, create the configuration YAML files with the command: make create-example-config.
  6. Amend the configuration in the YAML files and rename them to prep_config.yaml and prep_secrets.yaml respectively.
  7. Create a file called .postgres.env and add the environment variables to it as described in Configuration options.
  8. Run the script to initialize the domain and certificates for the server. See more under Domain setup and SSL certificates.
  9. Amend the variables miningRedirectUrl, miningBaseUrl, and keycloak in the file frontend.json to point to the base URL of your Process Mining installation.
  10. Amend line 24 in the file nginx/conf.d/nginx.conf (for example, add_header Content-Security-Policy...) by changing the localhost towards the end of the line to the domain of your Process Mining installation.
  11. Run docker-compose up -d to start the services.
  12. Open a bash shell and set up the initial user on the database service with the command: docker-compose exec mariadb bash /

Before you sign in to Mining Prep, make sure you've set up the redirect URL as described in Add Mining Prep authentication redirect.

Domain setup and SSL certificates

We provide a helper script that asks for the domain the installation will run under. Call it with the following command: ./ When you run the command, you'll be prompted to Enter domain: and then asked Use private CA [yes/no]:. Depending on your inputs, the script creates self-signed certificates, creates Let's Encrypt certificates, or just initializes the folders so you can add certificates from your own certificate authority (CA):

Domain      Private CA?  Certificate
localhost   no           Self-signed certificate
Any domain  no           Let's Encrypt certificate
Any domain  yes          Initializes folders and returns the path to you

If you're using Let's Encrypt, keep in mind that it has Staging and Production APIs. By default, the Staging API is enabled in the script. Additionally, Let's Encrypt requires connectivity to the public internet and reverse domain name resolution to the server hosting Mining Prep.

The Let's Encrypt Production API has strict rate limits; if you reach them, you will have to wait a week. Use the Production API only once you have decided on the final domain.

If a user's browser doesn't support the latest TLS cipher, you can change it in the file /certbot/conf/options-ssl-nginx.conf. Keep in mind that the renewal container (certbot) will show an error message and recreate the original file with every renewal so it can supply security updates.

If OCSP stapling is not working, you can uncomment "resolver" in the file /certbot/conf/options-ssl-nginx.conf so that nginx uses the Google DNS servers for stapling requests. This improves performance, but could lead to issues if external DNS servers are blocked in your network.

If Process Mining uses certificates that Mining Prep doesn't trust, you can change the line trust_unknown_ca: false in the file prep_config.yaml and set it to true.

If you decide to use your own certificates, move or copy the following files into the corresponding certbot/conf/live/DOMAIN folder:

Filename       Description
fullchain.pem  The certificate in PEM format, bundled together with any intermediary certificates
privkey.pem    The private key in PEM format

Configuration options

Depending on the needs of your Process Mining environment, you can configure various options in the following files:


  • POSTGRES_USER: The field postgres.username from the prep_config.yaml.
  • POSTGRES_PASSWORD: The field postgres.password from the prep_secrets.yaml.


  • serverUrl: The endpoint for the installation. Should be automatically configured by the script. Change the protocol to http:// if you are running the installation without SSL encryption.
  • webSocketUrl: The websocket endpoint for the installation. Should be automatically configured by the script. Change the protocol to ws:// if you are running the installation without SSL encryption.
  • defaultLanguage: The default language for every new user.
  • miningRedirectUrl: The URL for the Process Mining installation.
  • miningBaseUrl : The URL where your Process Mining instance is running. This defines the connection between Mining Prep and Process Mining environments, including the navigation menu link and the Mining Prep transform and load Success message link.
  • keycloak: An object including the baseUrl for the Process Mining Installation and the clientId for Mining Prep (this is "frontend" by default).


  • endpoint:
    • secret_key_base: The base key for signing and encrypting cookies and other connection-related items (64 bytes).
    • session_signing_salt: Salt for signing the session cookies (6 bytes).
    • liveview_signing_salt: Salt for signing Phoenix LiveView sessions (24 bytes).
  • postgres:
    • encryption_key: Encryption key for storage of datasource credentials in the database.
    • password: Password for the PostgreSQL metadata database.
  • mariadb:
    • password: Password for the MariaDB dataset database.


  • mining:
    • instances: A list of named Process Mining instances; an example is given in the file.
      • name and url: The Process Mining instances that should be used as targets for uploading transformations.
    • trust_unknown_ca: Flag to trust unknown CAs; should only be set to true for testing.
  • postgres: The username, hostname, and port for the PostgreSQL metadata database.
  • mariadb: The username, hostname, and port for the MariaDB dataset database.
  • oidc: The OpenID Connect provider configuration.
    • mining_url: The base URL for the Process Mining installation.
    • realm: The Keycloak realm defined by the ORGANIZATION variable in the Process Mining installation.


  • RUKH_LICENSE_KEY: The file path to the license key for Mining Prep.
  • COMPOSE_FILE: If you configured a Let's Encrypt certificate via the init script, make sure this line does not start with a # so the automatic renewal service is enabled.
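As an illustration only, a prep_config.yaml covering the fields described above might look like this; every value is a placeholder, and the example files generated by make create-example-config are authoritative for the exact layout:

```yaml
# Hypothetical prep_config.yaml sketch; replace all values with your own
mining:
  instances:
    - name: production
      url: https://mining.example.com
  trust_unknown_ca: false   # only set to true for testing
postgres:
  username: prep
  hostname: postgres
  port: 5432
mariadb:
  username: prep
  hostname: mariadb
  port: 3306
oidc:
  mining_url: https://mining.example.com
  realm: my-organization    # the ORGANIZATION realm from Process Mining
```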

Upgrade Mining Prep

Upgrades are provided through new tar.gz files. Since Mining Prep currently doesn't provide update scripts, follow these steps:

  1. Download the new Mining Prep packages.
  2. Extract the new tar.gz into a different folder. Possible commands depending on your Linux distribution:

     mkdir new_mining_prep && tar xzf release.tar.gz -C new_mining_prep --strip-components=1


     tar xzf release.tar.gz && mv production new_mining_prep
  3. Backup the old configuration files using the following commands:
     cd production/
     cp .env .env.bkp
     cp frontend.json frontend.json.bkp
     cp prep_config.yaml prep_config.yaml.bkp
     cp prep_secrets.yaml prep_secrets.yaml.bkp
     cp .postgres.env .postgres.env.bkp
     cd ..
  4. Move to the parent directory for the production/ folder with cd ...
  5. Compare the new configuration files with the old ones and amend them; use the following command to compare the files with vimdiff: vimdiff new_mining_prep/.env production/.env.

    Often the most important change is the updated tags for the images in the .env file. For example, RUKH_IMAGE.

  6. Load the new docker images from the image archive with the command: docker load -i <path to image archive file>
  7. Start the installation using the following commands:
     cd production
     docker-compose up -d 

For some version updates we'll provide additional guidance and migration notes in the tar.gz.
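The backup step of the upgrade procedure (step 3) can also be written as a single loop; the snippet creates stand-in files so it is self-contained:

```shell
# Create stand-ins for the real configuration files, then back them up
mkdir -p production && cd production
for f in .env frontend.json prep_config.yaml prep_secrets.yaml .postgres.env; do
  touch "$f"          # stand-in for your real config file
  cp "$f" "$f.bkp"    # same .bkp naming as the documented commands
done
cd ..
```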


The Process Mining monitoring stack contains:

  • Prometheus: metric collector, monitoring and alerting application.
  • Grafana: metrics visualization.
  • Alertmanager: alert manager/sender.
  • Exporters & cAdvisor: export metrics of the database, RabbitMQ, hosts, and Docker to Prometheus.

Monitoring configuration

The monitoring stack can be enabled by editing the .env file and adding the appropriate variables, which we describe in the following sections.

The main environment variable for enabling the stack is COMPOSE_FILE; you need to add monitoring.yml to it.


For Grafana, it's only necessary to set credentials for the administrator.



Alertmanager offers a wide range of notification channels. Edit the configuration in config/monitoring/alertmanager/alertmanager.yml. The current configuration is a template for mail alerting using an SMTP server.

Visit the Prometheus docs for configuration templates for other notification channels (Slack, webhooks, etc.).
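As a sketch of the mail template's shape, a minimal alertmanager.yml for SMTP alerting might look like the following; all hosts, addresses, and credentials are placeholders, and the file shipped with your deployment may differ:

```yaml
# Hypothetical alertmanager.yml following the standard Alertmanager schema
global:
  smtp_smarthost: 'smtp.example.com:587'
  smtp_from: 'alerts@example.com'
  smtp_auth_username: 'alerts@example.com'
  smtp_auth_password: 'change-me'
route:
  receiver: 'ops-email'
receivers:
  - name: 'ops-email'
    email_configs:
      - to: 'ops@example.com'
```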


The following security recommendations provide an extra layer of security for Process Mining.

Encryption at rest

Data encryption at rest should be applied to any kind of stored data, and especially to the Process Mining application, where sensitive data is stored. Volume/snapshot encryption is therefore highly recommended. Cloud providers such as AWS, GCP, and Azure offer step-by-step guidelines for encryption at rest.

Docker hardening

Docker daemon

Docker containers are managed by the dockerd process (the Docker daemon). The daemon can be configured globally to apply rules and configuration to all containers by creating a daemon.json file in /etc/docker. Ownership of daemon.json should be set to the root user.

Ensure live restore is enabled

Enable live restore in your daemon.json file to keep containers running if the Docker daemon becomes unavailable, crashes, or goes down.

  "live-restore": true 

Ensure Userland proxy is disabled

The Docker daemon offers two mechanisms for forwarding ports from the host to containers: hairpin NAT and the userland proxy. In most cases (unless the Linux kernel is old), hairpin NAT is used because it provides better performance.

Disabling the userland proxy (only if it is not needed) is recommended, since it reduces the attack surface.

  "userland-proxy": false 

Ensure centralized and remote logging is configured

By default, all Docker container logs are saved in a plain JSON file on the host machine where Docker is running. Since important and valuable data might be stored in the logs, it is important to keep this data in a centralized and safe logging location. We highly recommend that you review the logging drivers supported by Docker.

Ensure a separate partition for containers has been created

One requirement for installing Process Mining is creating and mounting a volume dedicated to Docker data, which is stored under /var/lib/docker by default. Mounting the volume at the default Docker path is enough to secure (via volume encryption), isolate, and protect the Process Mining data.

Ensure that incoming container traffic is bound to a specific host interface

By default, containers using bridge network drivers accept connections on exposed ports on any network interface. To avoid security issues and networking conflicts with other interfaces and services, you should bind traffic to only the network interface intended for Docker.

In the Process Mining .env file, you can assign the network interface intended for Docker traffic. By default, connections are accepted on any network interface. Change it to another interface, e.g. NET_IP=
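As a labeled-hypothetical illustration, binding Docker traffic to a single internal interface in the .env file could look like the following; the address is a placeholder for your own interface's IP:

```ini
# Hypothetical value: the IP of the host interface reserved for Docker traffic
NET_IP=10.0.0.5
```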

Ensure that security updates have been performed on host machines

Security updates for the host machine are the customer's responsibility. OS patches should be checked and applied regularly. Each company may have a different patching policy for regular security updates; if there is no patching policy, we recommend applying security updates to the host operating system at least quarterly.


Process Mining

These are the services that constitute a Process Mining Docker installation.


  • The application users interact with through their web browser.
  • Serves JavaScript components.
  • Written using Angular/TypeScript.
  • Very low resource requirements.
  • Can be scaled horizontally without impact.

Janus - Elixir Middleware

  • Middleware that handles incoming requests, checks user permissions, and stores and manages the resources to be processed.
  • Runs on the BEAM virtual machine.
  • Uses the Phoenix web framework.
  • Moderate resource requirements.
  • Can be scaled horizontally without impact.

Algernon - Process Mining Engine

  • Core process mining engine. Due to architectural design, this component is not able to scale horizontally. Vertical scaling requires an increase in MEMORY and CPU.
  • JVM as virtual machine.
  • Written in Scala.
  • Resource consumption dependent on size of processed data, can be substantial.

Python and Shiny server

  • Serves client-specific or experimental data visualizations that are not fully realized in our dashboarding components yet.
  • Shiny as the framework for building and serving visualizations.
  • Requests data from the core components; authentication via tokens injected in the frontend.


  • Used for storing the application's relational data. No tailored configuration; scaling is possible and should be based on the scale of the Algernon components to handle the extra load from connections and queries. Not currently considered a performance bottleneck.


  • Used for exchanging messages between the various components. No tailored configuration; scaling is possible.
  • Moderate resource requirements (heavy data passing is done via shared volumes).


  • Used as a reverse proxy. Depending on the deployment, this might also be handled by other implementations of Kubernetes ingresses.


  • Used for identity and access management (IAM) via OpenID Connect (OIDC) authentication.

Mining Prep

Mining Prep uses these components:

Service name  Image                 Description
nginx         lanalabs/tf-frontend  Nginx proxy webserver that routes requests to the correct services and serves frontend static files
rukh.local    lanalabs/rukh         The core application that handles the transformations and data sources
MariaDB       mariadb/columnstore   Database storing the uploaded data
postgres      postgres:13.2-alpine  Database for data source metadata


This section describes how to fix common issues when installing or upgrading Mining Prep and Process Mining.

Known issues

  • If you cannot load Process Mining at localhost:XX, you may be trying to reach an IP that differs from the Docker container's. Run docker-machine ip to find the IP used by the lana-janus container, then go to IP:XX.
  • If you don't shut down the server properly, you may leave zombie containers and services running. To kill the remaining Process Mining containers, run docker kill <CONTAINERID> and then docker rm <CONTAINERID>.
  • If you are running docker-toolbox and increase the memory in the docker-compose.yml beyond 1G, you also need to increase the memory of the VirtualBox machine. Otherwise, Process Mining will fail to allocate the memory. To increase the memory of the virtual machine, stop Process Mining and open the VirtualBox management. There, stop the default virtual machine that docker-toolbox created. Once the machine is stopped, you can change the memory settings.


First, check whether all Process Mining services are working as expected by running ./ healthchecks.


Docker information

Check whether your Docker installation is configured as expected by running docker info.

Important configurations to check are memory and CPU used by Docker: docker info | grep -E 'CPUs|Memory'.

Docker images

Check if your docker containers are using the expected images declared in .lana_images file. Run the following command to output the docker images used by your containers: ./ images

Docker running containers

Check your running containers by executing the following command: ./ ps. Check which services are unhealthy or exited and analyze their logs.

Docker logs

To analyze container logs, there are different approaches.

  • Check all logs of a container: ./ logs -t CONTAINER-NAME
  • Follow logs of a container in real time: ./ logs -t -f CONTAINER-NAME
  • Check the last N lines of logs: ./ logs -t --tail=N CONTAINER-NAME
  • Save container logs to a file: ./ logs -t CONTAINER-NAME > CONTAINER-NAME.log

Environment variables

Check if your environment variables set in .env are correctly picked up by your container. Run the following command:

for container in $(./ ps --services); do echo "### CONTAINER: $container ###"; docker inspect -f '{{range $index, $value := .Config.Env}}{{println $value}}{{end}}' $container;done 

Container statistics

For more detailed information regarding resources consumption in your running containers, you can run: docker stats

Restart containers

When you want to update your environment variables or restart an unhealthy/stuck service you can do it in two ways:

  • Restart a single service: ./ restart [SERVICE]
  • Restart all services: ./ clean-restart

Disk full

Check how much disk space Docker is using: docker system df

If you run out of disk space, increase the size of the volume mounted at /var/lib/docker. Also check the backup volume; if it is running out of space, increase its size as well.
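If enlarging the volume isn't possible right away, Docker's standard cleanup commands can often reclaim space. These are plain Docker CLI commands, not part of the Process Mining tooling, and prune deletes data permanently, so review what will be removed first:

```shell
# Break down Docker's disk usage by images, containers, volumes, and build cache.
docker system df

# Remove stopped containers, dangling images, unused networks, and build cache.
docker system prune

# More aggressive: also remove all unused images and unused volumes.
# Only run this if you are sure no needed volume is currently unused.
docker system prune --all --volumes
```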

Docker export logs

If you need support from the Process Mining team, send us the file generated by running: ./ export_docker_logs

Avoid running Process Mining on the same server as other applications

Application performance might degrade if Process Mining can't use its assigned resources to the full extent. This usually happens when other software contends for the same resources. Process analysis can cause spiky CPU and memory usage. Do not overprovision the system where Process Mining is running in a way that might lead to resource contention.


Port closed


You can't open the Process Mining application, and after a period of time you get a connection timeout error.


Verify the state of the server ports. Assuming the Process Mining application is configured to listen on port 443, you can check the port state in several ways:

nc -zv SERVER 443 
nmap SERVER -p 443 
telnet SERVER 443 


If the state of the port is closed, we recommend contacting the network engineer in your organization.

DNS entry missing or pointing to the wrong address


The browser shows a warning that the server IP address could not be found; the request times out.


Determine if the DNS cache of your machine or browser is causing the problem.

  • Flush the browser's DNS cache
  • Flush the operating system's DNS cache
  • Try to reach Process Mining again

If the problem still occurs, the DNS entry is missing or misconfigured. Verify that the domain resolves with nslookup (Windows), dig (Mac, Linux), or ping. If the command returns an IP address, verify that it matches the expected IP address.
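As a sketch with placeholder values (mining.example.com and 203.0.113.10 are hypothetical; substitute your real domain and expected address):

```shell
#!/bin/sh
# Placeholders -- substitute your real domain and expected IP address.
DOMAIN=mining.example.com
EXPECTED_IP=203.0.113.10

# dig +short prints only the resolved address(es); take the first one.
RESOLVED=$(dig +short "$DOMAIN" | head -n 1)

if [ "$RESOLVED" = "$EXPECTED_IP" ]; then
  echo "DNS OK: $DOMAIN -> $RESOLVED"
else
  echo "DNS missing or mismatched: got '$RESOLVED', expected $EXPECTED_IP"
fi
```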


A missing or misconfigured DNS entry requires action by your network administrator.

Certificate Key with passphrase


Nginx asks for the certificate key passphrase on every boot. This is critical if the container restarts: until someone enters the passphrase again, no one can reach the application.


Verify the problem by running: ./ logs reverse-proxy-nginx

You should see the nginx server prompting for the passphrase:

Enter PEM pass phrase: 


You can remove the passphrase from your SSL key.
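A common way to do this with OpenSSL (assuming an RSA key; server.key and server-nopass.key are placeholder filenames) is:

```shell
# Write a copy of the private key with the passphrase removed.
# OpenSSL prompts for the current passphrase once.
openssl rsa -in server.key -out server-nopass.key

# The unencrypted key must be readable only by its owner.
chmod 600 server-nopass.key
```

Point nginx's ssl_certificate_key directive at the new file and restart the reverse proxy. Keep the unencrypted key's file permissions tight, since it is no longer protected by a passphrase.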


Connection Refused


You can't open the Process Mining application because you get a "refused to connect" error. The container throwing the error will be reverse-proxy-nginx.


Run the following command to get more information about the error: ./ logs reverse-proxy-nginx

One common cause is that one or more services were unable to start properly, leading to a start failure on the reverse-proxy-nginx container as well.

You can confirm the error by comparing your output with the one below.

2021/01/01 10:30:15 [emerg] 1#1: host not found in upstream "shiny-server:3838" in /etc/nginx/conf.d/default.conf:10 
nginx: [emerg] host not found in upstream "shiny-server:3838" in /etc/nginx/conf.d/default.conf:10 


In this case, the only solution is to inspect the logs of the failing containers and understand why they are unable to start.

Bad Gateway


You receive a bad gateway error. This happens because one of your services went down or stopped after startup, and reverse-proxy-nginx can't reach it anymore.


Run the following command to get more information about running containers: ./ ps


Get the logs from the exited and unhealthy containers. From there, you can understand what's wrong with your setup.


Using default settings


The application was installed with default variables instead of production values.


Check whether your environment variables are set correctly. Run the following command:

for container in $(./ ps --services); do
  echo "### CONTAINER: $container ###"
  docker inspect -f '{{range $index, $value := .Config.Env}}{{println $value}}{{end}}' "$container"
done


Replace the default values with production values and start the Process Mining server by running ./ up -d.

Verify that the environment variables are set correctly by rerunning the same command used in the analysis.

Next steps

Following installation, connect Mining Prep and Process Mining to transfer data more easily.

Built: Thu, Dec 01, 2022 (03:25:20 PM)
