Process Mining is deprecated with Appian 24.2 and will no longer be available in an upcoming release. Instead, we encourage customers to use Process HQ to explore and analyze business processes and data.
Note: This page applies to self-managed installations only.
This page provides you with instructions on how to set up self-managed installations of Mining Prep and Process Mining.
Appian Process Mining consists of two separate modules: Mining Prep and Process Mining. Mining Prep is optional and relies on the Process Mining installation. If you want to set up a self-managed installation of Mining Prep, you must first install the Process Mining module.
If you're curious about the high-level architecture of Process Mining and Mining Prep, see Platform architecture.
This section describes the requirements to set up and monitor Mining Prep and Process Mining.
Self-managed deployments run in Docker containers on a host. Each deployment and its host system should be monitored to respond to potential issues in a timely manner. The requirements for self-managed deployments depend on the choices made when integrating Mining Prep and Process Mining into your IT ecosystem.
For optimal efficiency, Appian may require infrastructure, certificates, domain(s), and credentials to be provisioned before a deployment is rolled out.
Note: Review system requirements before proceeding with the installation.
Mining Prep and Process Mining require DNS entries for all hosts to be in place before the system is used.
To ensure encrypted communication between the browser and the system, create SSL certificates for the URL where you'll host the installation, for example appian.yourintranet.com. Mining Prep requires an SSL certificate, but Process Mining can run on localhost:9100 as a demo.
DevOps engineers need access to the SSL certificates in order to correctly configure the load balancers on the hosts.
Your action items:

- Create DNS entries for all hosts, for example appian.yourintranet.com.
- Create SSL certificates for the installation URL, for example appian.yourintranet.com.

If you're setting up on Red Hat Enterprise Linux (RHEL), you'll need to complete some additional steps to use the Docker container. Before you begin, make sure you have Docker and Docker Compose installed.
In the terminal, install Red Hat's container tools (especially Podman) with the commands:
sudo yum module enable container-tools
sudo yum install -y yum-utils container-selinux
Start the Docker service using the command:
sudo systemctl start docker
Allow root-less usage of the docker commands by adding a group and your user to that group:
sudo groupadd docker
sudo usermod -aG docker $USER

Log out and back in for the group membership to take effect.
The deployment includes basic monitoring of services, which you can enable by editing the .env
file as described in Monitoring configuration. However, it is recommended to also have additional systems in place to manage and monitor Mining Prep and Process Mining.
To ensure system health and optimal configuration, we recommend you monitor the following metrics and endpoints at a minimum:
You need to download the packages for Process Mining and Mining Prep before you start the installation steps. Here's an overview of the main packages:
- process-mining-application-5.9.0.tar.gz
- process-mining-utility-5.9.0.tar.gz
- mining-prep-5.9.0.tar.gz
- mining-prep-5.9.0-configs.tar.gz
- rukh_license.key

To download the packages:
Caution: This section describes how to set up Process Mining for testing purposes only. Note that this Process Mining setup can't connect to Mining Prep.
Demo mode starts Process Mining with port 9100 open for unencrypted traffic. HTTPS access, backups, and monitoring are disabled.
Follow these steps to start Process Mining in demo mode:
1. Download process-mining-application-5.9.0.tar.gz. This archive contains the images of the services needed for the setup and the install image needed to install the setup.
2. Download process-mining-utility-5.9.0.tar.gz. This archive contains the images for backup, monitoring, and alerts.
3. Concatenate the application parts: cat application-part-1.tar.gz application-part-2.tar.gz > application.tar.gz
4. Load each archive into Docker: docker load -i <path to tar.gz file>
Caution: Replace the latest image name in the first command with your correct version. As shown in the following example, it will look similar to v5.9.0.
docker create --name pm-install lanalabs/lana-install:v5.9.0
docker cp pm-install:/home/lanalabs/lana-on-premise .
docker rm pm-install
Navigate to the lana-on-premise folder and set the following variables in the .env file:

- ORGANIZATION: The name you choose will be used as the realm that has to be entered to log in to the Process Mining frontend.
- ENCRYPTION_KEY: A random, base64-encoded string with at least 32 bytes. One approach to generate this key is to use the command: head -c 32 /dev/urandom | base64

Use the up -d parameter to run the Process Mining server: ./pm.sh up -d
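The key generation step above can be sketched as a one-liner that prints a ready-to-paste line; the variable name matches the .env entry, but recording it this way is our choice:

```shell
# Generate a base64-encoded 32-byte value suitable for ENCRYPTION_KEY
key=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
echo "ENCRYPTION_KEY=$key"
```

Append the printed line to your .env file, or paste the value in place.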
Open http://localhost:9100. Sign in with the value of USER_MAIL as the username and the value of USER_PASS as the password. See Password security for specific password requirements.
Note: It takes a few seconds for the identity and access management service to provision a user. You may receive a 404 error if you sign in before this process completes.
You're done!
Production installations should run with SSL certificates, and optionally monitoring and backups.
Follow these steps to start Process Mining in production mode:
Load each archive into Docker: docker load -i <path to tar.gz file>
Caution: Replace the latest image name in the first command with your correct version. As shown in the following example, it will look similar to v5.9.0.
docker create --name pm-install lanalabs/lana-install:v5.9.0
docker cp pm-install:/home/lanalabs/lana-on-premise .
docker rm pm-install
Navigate to the lana-on-premise folder and set the following variables in the .env-prod file:

- COMPOSE_FILE: Compose files to apply to your installation. Default: janus.yml:ssl.yml
- SERVER_NAME: Set your server name, which can be either a DNS name or an IP address. Note: This parameter only supports alphanumeric characters, periods, or minus signs.
- ORGANIZATION: The name you choose will be used as the realm that has to be entered to log in to the Process Mining frontend.
- ENCRYPTION_KEY: A random, base64-encoded string with at least 32 bytes. One approach to generate this key is to use the command: head -c 32 /dev/urandom | base64
- USER_MAIL: The username for the administrative user.
- USER_PASS: The initial password for your administrative user. You will be asked to reset it after the first login. See Password security for specific password requirements.
- SECRET_KEY_BASE: The encryption key for the application; it needs to be at least 16 characters long.
- KEYCLOAK_USER: The username for the Keycloak administrator that can modify and update the realm and users.
- KEYCLOAK_PASSWORD: The password for the Keycloak administrator.
- MEMORY: RAM used by Process Mining.
- BACKEND_AVAILABLE_PROCESSORS: Defaults to 1. Set this value between 1 and the maximum number of processors Process Mining can use.
- RESTART: Set the restart policy of the Docker containers. For production environments, we suggest setting it to always so that Process Mining starts automatically after a server reboot.
- DB_USER: Set a user for your database.
- DB_PASSWORD: Set a password for your database.
- RABBITMQ_USER: Set a user for RabbitMQ.
- RABBITMQ_PASSWORD: Set a password for RabbitMQ.
- CERT_PATH: Folder where the certificates are located.
- CERT: Certificate name.
- CERT_KEY: Certificate key name.

Use the use-prod-config parameter to install Process Mining with the .env-prod variables: ./pm.sh use-prod-config

Use the up -d parameter to run the Process Mining server: ./pm.sh up -d
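Several of the variables above are secrets. A hedged sketch for generating them in one pass; the helper function and the value lengths are our choices, not Appian requirements:

```shell
# Generate random values for the secret .env-prod entries.
# gen N prints N random alphanumeric characters.
gen() { head -c 64 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c "$1"; }

echo "ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)"  # 32 random bytes, base64-encoded
echo "SECRET_KEY_BASE=$(gen 32)"                           # needs at least 16 characters
echo "DB_PASSWORD=$(gen 24)"
echo "RABBITMQ_PASSWORD=$(gen 24)"
echo "KEYCLOAK_PASSWORD=$(gen 24)"
```

Copy the printed lines into .env-prod, replacing the default values.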
Open http://SERVER_NAME. This is the URL you need to specify for miningBaseUrl while configuring Mining Prep. Sign in with the value of USER_MAIL as the username and the value of USER_PASS as the password.

You're done!
The configuration of the setup can be modified to fit the environment. For example:
After you create a user in the application, they will need to verify their email address and set a new password. You need to configure the email server so the Process Mining Keycloak service can automatically send this email to new users.
To configure the email server, set the following environment variables:
These should be set in the file janus.yml
under the field "environment" for the service "lana-janus":
- KEYCLOAK_SMTP_ENABLED: Set to "true" to enable sending the emails.
- KEYCLOAK_SMTP_AUTHENTICATION: Set to "true" if your email server requires authentication.
- KEYCLOAK_SMTP_START_TLS: Set to "true" if your email server uses STARTTLS.
- KEYCLOAK_SMTP_SSL: Set to "true" if your email server uses SSL encryption.
- KEYCLOAK_SMTP_HOST: Set the hostname of your email server.
- KEYCLOAK_SMTP_PORT: Set the port of your email server.
- KEYCLOAK_SMTP_USERNAME: Set the username that the email server needs for authentication.
- KEYCLOAK_SMTP_PASSWORD: Set the password that the email server needs for authentication.
- KEYCLOAK_SMTP_ENVELOPE_FROM: Set the address which the email should be sent from.
- KEYCLOAK_SMTP_FROM_DISPLAY_NAME: Set the name shown in the "from" field.
- KEYCLOAK_SMTP_FROM: Set the email address shown in the "from" field.
- KEYCLOAK_SMTP_REPLY_TO_DISPLAY_NAME: Set the name for the "reply-to" field.
- KEYCLOAK_SMTP_REPLY_TO: Set the email address for the "reply-to" field.

Apply configuration as necessary for those features and make sure the COMPOSE_FILE list contains janus.yml:ssl.yml:backup.yml before starting the services.
By default, an upload can't be larger than 10 GB. This limit can be changed by defining the NGINX_MAX_UPLOAD variable in the .env file. A restart of NGINX (./pm.sh up -d nginx) is required to apply the change.
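For example, to raise the limit to 20 GB; the value format here is an assumption based on nginx size units, so verify it against your installation:

```shell
# .env (fragment, illustrative)
NGINX_MAX_UPLOAD=20G
```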
The only open port of the setup is defined by NGINX_PORT in the .env-prod file. A restart of all services is necessary to apply the change. Run the following command to restart all services: ./pm.sh clean-restart
Adding an SSL certificate will enable encrypted traffic. You will need to place the server key and certificate in a location readable by Docker and make the following changes to the .env-prod file:
- CERT_PATH points to the folder containing the key and certificate files.
- CERT_KEY only contains the file name of the key file.
- CERT only contains the file name of the certificate (chain).
- NGINX_PORT is unset or configured with the correct port numbers of the allowed open ports.
- Replace no-ssl.yml with ssl.yml in the list of the COMPOSE_FILE variable.
- Make the certificate files readable by Docker: chmod -R o+r <PATHTOCERT>
- Restart the reverse-proxy-nginx and lana-fe services to apply the changes. Run the following command: pm.sh up -d reverse-proxy-nginx lana-fe
You need to set up the correct redirect URL for when users sign in to Mining Prep. This requires you to add the base URL of Mining Prep to the list of allowed redirect URLs.
To set up the Mining Prep redirect URL, configure the PREP_FRONTEND_BASE_URLS environment variable, located in the janus.yml file under the environment field for the lana-janus service.
PREP_FRONTEND_BASE_URLS accepts a comma-separated list of base URLs for the Mining Prep installation(s).
After you add your base URLs to the environment variable, run ./pm.sh up -d
to apply the changes.
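An illustrative fragment; the URLs are placeholders, and the services/environment nesting is an assumption about the compose file layout:

```yaml
# janus.yml (fragment) -- environment for the lana-janus service
services:
  lana-janus:
    environment:
      PREP_FRONTEND_BASE_URLS: "https://prep.example.com,https://prep-test.example.com"
```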
To upgrade Process Mining to a new version, you need to copy the latest configuration file changes into your installation directory:
docker tag lanalabs/lana-install:v5.9.0 lanalabs/lana-install:latest
./pm.sh update
The updated configuration files are copied with a .new suffix into your installation directory.
Tip: It may be helpful to use a diff command to identify the new changes. For example, if you're using Vim, you can use vimdiff FILE FILE.new.
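If you prefer a non-interactive overview, the .new files can be diffed in one pass; a sketch, noting that diff exits non-zero when files differ, hence the || true:

```shell
# Show the differences between each updated config file and the current one
for f in *.new; do
  echo "### ${f%.new}"
  diff -u "${f%.new}" "$f" || true
done
```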
./pm.sh clean-restart
Before you start installing Mining Prep, make sure you've already set up SSL certificates and installed Process Mining.
Download the following packages:

- mining-prep-5.9.0.tar.gz
- mining-prep-5.9.0-configs.tar.gz
- rukh_license.key

Then follow these steps:

1. Load the images into Docker: docker load -i <path to image archive file>
2. Extract the configs archive: tar xzvf <file path of release archive file>. This will create the production/ folder. For example, the file for the 5.9.0 release is mining-prep-5.9.0-configs.tar.gz.
3. Place the rukh_license.key file in the production/ folder.
4. In the production/ folder, create the configuration YAML files with the command: make create-example-config.
5. Rename the generated example files to prep_config.yaml and prep_secrets.yaml respectively.
6. Create the file .postgres.env and add the environment variables to it as described in Configuration options.
7. Run the init-certs.sh script to initialize the domain and certificates for the server. See more under Domain setup and SSL certificates.
8. Change miningRedirectUrl, miningBaseUrl, and keycloak in the file frontend.json to point to the base URL of your Process Mining installation.
9. Update nginx/conf.d/nginx.conf (for example, add_header Content-Security-Policy...) by changing the localhost towards the end of the line to the domain of your Process Mining installation.
10. Run docker-compose up -d to start the services.
11. Initialize the MariaDB ColumnStore setup: docker-compose exec mariadb bash /mariadb_cs_setup.sh

Note: Before you sign in to Mining Prep, make sure you've set up the redirect URL as described in Add Mining Prep authentication redirect.
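The nginx.conf edit for the Content-Security-Policy header can be scripted; a hedged sketch, where the domain is a placeholder and we assume localhost only needs replacing on that header line:

```shell
# Replace localhost with your domain on the Content-Security-Policy line
DOMAIN=mining.example.com   # placeholder domain
sed -i "/Content-Security-Policy/ s/localhost/${DOMAIN}/g" nginx/conf.d/nginx.conf
```

Review the file afterwards to confirm no other occurrences of localhost need changing.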
We provide a helper script that asks you to provide the domain the installation will run under. It can be called with the following command: ./init-certs.sh. When you run the command, you'll be prompted to Enter domain: and then asked Use private CA [yes/no]:.
Note: Domains only support alphanumeric characters, periods, or minus signs.
Depending on your inputs, you'll choose to create self-signed certificates, Let's Encrypt certificates, or to just initialize the folders and add certificates from your own certificate authority (CA):
Domain | Private CA? | Certificate |
---|---|---|
localhost | no | Self-signed certificate |
Any domain | no | Let's Encrypt certificate |
Any domain | yes | Initializes folders and returns path to you |
If you're using Let's Encrypt, keep in mind that it has Staging and Production APIs. By default, the Staging API is enabled in the script. Additionally, this requires connectivity to the public internet and a reverse domain name resolution to the server hosting Mining Prep.
Caution: The Let's Encrypt Production API has strict limits; if you reach them, you will have to wait a week. Use that API only once you have decided on the final domain.
If a user's browser doesn't support the latest TLS cipher, you can change it in the file /certbot/conf/options-ssl-nginx.conf. Keep in mind that the renewal container (certbot) will show an error message and recreate the original file with every renewal in order to supply security updates.
If OCSP stapling is not working, "resolver" can be uncommented in the file /certbot/conf/options-ssl-nginx.conf to enable nginx to use the Google DNS servers for the stapling requests. This will increase performance, but could lead to issues if external DNS servers are blocked in your network.
If you need Mining Prep to trust Process Mining certificates issued by an unknown CA, find the line trust_unknown_ca: false in the file prep_config.yaml and set it to true.
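The resulting fragment looks like this, based on the prep_config.yaml structure described under Configuration options:

```yaml
# prep_config.yaml (fragment)
mining:
  trust_unknown_ca: true   # trust unknown CAs (testing only)
```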
If you decide to use your own certificates, you will need to move or copy the following files into the corresponding certbot/conf/live/DOMAIN folder:
Filename | Description |
---|---|
fullchain.pem | The certificate in PEM format bundled together with any intermediary certificates |
privkey.pem | The private key in PEM format |
Depending on the needs of your Process Mining environment, you can configure various options in the following files:
.postgres.env:

- POSTGRES_USER: The field postgres.username from prep_config.yaml.
- POSTGRES_PASSWORD: The field postgres.password from prep_secrets.yaml.

frontend.json:

- serverUrl: The endpoint for the installation. Should be automatically configured by the init-certs.sh script. Change the protocol to http:// if you are running the installation without SSL encryption.
- webSocketUrl: The websocket endpoint for the installation. Should be automatically configured by the init-certs.sh script. Change the protocol to ws:// if you are running the installation without SSL encryption.
- defaultLanguage: The default language for every new user.
- miningRedirectUrl: The URL for the Process Mining installation.
- miningBaseUrl: The URL where your Process Mining instance is running. This defines the connection between the Mining Prep and Process Mining environments, including the navigation menu link and the Mining Prep transform and load Success message link.
- keycloak: An object including the baseUrl for the Process Mining installation and the clientId for Mining Prep (this is "frontend" by default).

prep_secrets.yaml:

- endpoint:
  - secret_key_base: The base key for signing and encrypting cookies and other connection-related items (64 bytes).
  - session_signing_salt: Salt for signing the session cookies (6 bytes).
  - liveview_signing_salt: Salt for signing Phoenix LiveView sessions (24 bytes).
- postgres:
  - encryption_key: Encryption key for storage of data source credentials in the database.
  - password: Password for the PostgreSQL metadata database.
- mariadb:
  - password: Password for the MariaDB dataset database.

prep_config.yaml:

- mining:
  - instances: A list of named Process Mining instances; an example is given in the file. Each entry has a name and url for a Process Mining instance that should be used as a target for uploading transformations.
  - trust_unknown_ca: Flag to trust unknown CAs; should only be set to true for testing.
- postgres: The username, hostname, and port for the PostgreSQL metadata database.
- mariadb: The username, hostname, and port for the MariaDB dataset database.
- oidc: The OpenID provider configuration.
  - mining_url: The base URL for the Process Mining installation.
  - realm: The Keycloak realm defined by the ORGANIZATION variable in the Process Mining installation.

.env:

- RUKH_LICENSE_KEY: The file path to the license key for Mining Prep.
- COMPOSE_FILE: If you configured a Let's Encrypt certificate via the init-certs.sh script, make sure this line does not have a # at the start, so that the automatic renewal service is enabled.

Upgrades are provided through new tar.gz files. Since Mining Prep currently doesn't provide update scripts, follow these steps:
Extract the new tar.gz into a different folder. Possible commands depending on your Linux distribution:
mkdir new_mining_prep && tar xzf release.tar.gz -C new_mining_prep --strip-components=1
or
tar xzf release.tar.gz && mv production new_mining_prep
cd production/
cp .env .env.bkp
cp frontend.json frontend.json.bkp
cp prep_config.yaml prep_config.yaml.bkp
cp prep_secrets.yaml prep_secrets.yaml.bkp
cp .postgres.env .postgres.env.bkp
cd ..
Leave the production/ folder with cd ..

Compare the new configuration files with the old ones and amend them; use the following command to compare the files with vimdiff: vimdiff new_mining_prep/.env production/.env

Tip: Often the most important change is the updated tags for the images in the .env file. For example, RUKH_IMAGE.
Load the new images into Docker: docker load -i <path to image archive file>
cd production
docker-compose up -d
For some version updates we'll provide additional guidance and migration notes in the tar.gz.
The Process Mining monitoring stack will contain:
The monitoring stack can be enabled by editing the .env file and adding the appropriate variables, which we describe in the following sections.
The main environment variable for enabling the stack is COMPOSE_FILE. You need to add monitoring.yml to it.
For Grafana, it's only necessary to set credentials for the administrator.
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=randompassword
Alertmanager offers a wide range of notification channels. The configuration should be edited in config/monitoring/alertmanager/alertmanager.yml. The current configuration is a template for mail alerting using an SMTP server.
Visit the Prometheus docs for configuration templates for other notification channels (Slack, webhooks, etc.).
The following security recommendations add an extra layer of security to Process Mining.
Data encryption at rest should be done for any kind of stored data, especially for the Process Mining application, where sensitive data is stored. Therefore, volume/snapshot encryption is highly recommended. Cloud providers such as AWS, GCP, and Azure offer step-by-step guidelines for encryption at rest.
Docker containers are managed by the dockerd process (the Docker daemon). The Docker daemon can be configured globally to apply rules and configurations to all containers. It is configured by creating a daemon.json file in the path /etc/docker. Ownership of daemon.json should be set to the root user.
Enable live restore in your daemon.json file to keep containers alive if the Docker daemon becomes unavailable, crashes, or goes down.
{
"live-restore": true
}
The Docker daemon offers two mechanisms (hairpin NAT and the userland proxy) for forwarding ports from the host to containers. In most cases (unless the Linux kernel is old), hairpin NAT is used because it provides better performance.
Disabling the userland proxy (only if it is not needed) is recommended since it reduces the attack surface.
{
"userland-proxy": false
}
By default, all Docker container logs are saved in a plain JSON file on the host machine where Docker is running. Since important and valuable data might be stored in the logs, it is important to keep this data in a centralized and safe logging location. We highly recommend checking the logging drivers supported by Docker.
One requirement for installing Process Mining is the creation and mounting of a volume dedicated to Docker files. By default, Docker stores its data under /var/lib/docker. Mounting the volume at the default Docker path is enough to secure (via volume encryption), isolate, and protect the Process Mining data.
By default, containers using bridge network drivers accept connections on exposed ports on any network interface. For security, and to avoid networking conflicts with other interfaces and services, you should bind only the network interface intended for Docker traffic.
In the Process Mining .env file you can assign the network interface intended for Docker traffic. By default, 0.0.0.0 accepts connections on any network interface. Change it to another interface, e.g. NET_IP=10.0.1.25.
Security updates for the host machine are the customer's responsibility. OS patches should be checked and applied regularly. Each company may have a different patching policy for regular security updates. If there is no patching policy, we recommend performing security updates on the host operating system at least quarterly.
These are the services that constitute a Process Mining Docker installation.
Mining Prep uses these components:
Service name | Image | Description |
---|---|---|
nginx | lanalabs/tf-frontend | Nginx Proxy Webserver that proxies requests to the correct services. Serves frontend static files. |
rukh.local | lanalabs/rukh | The core application that handles the transformations and data sources |
MariaDB | mariadb/columnstore | Database storing the uploaded data |
postgres | postgres:13.2-alpine | Database for data source metadata |
This section describes how to fix common issues when installing or upgrading Mining Prep and Process Mining.
- If you can't reach the application at localhost:XX, it may be that you are trying to reach an IP that is different from the Docker container's. Run docker-machine ip to find the IP used by the lana-janus container and then go to IP:XX.
- To remove a stuck container, run docker kill <CONTAINERID> and then docker rm <CONTAINERID>.
- If you increase the memory in docker-compose.yml beyond 1G, you also need to increase the memory of the VirtualBox machine. Otherwise, Process Mining will fail to allocate the memory. To increase the memory of the virtual machine, stop Process Mining and open the VirtualBox management. There you need to stop the default virtual machine that docker-toolbox created. Once the machine is stopped, you can change the memory settings.

First, check if all Process Mining services are working as expected by running ./pm.sh healthchecks.
Check if your Docker installation configuration is set as expected by running docker info.
Important configurations to check are the memory and CPUs used by Docker: docker info | grep -E 'CPUs|Memory'
Check if your Docker containers are using the expected images declared in the .lana_images file. Run the following command to output the Docker images used by your containers: ./pm.sh images
Check your running containers by executing the following command: ./pm.sh ps. Check which services are unhealthy or exited and analyze their logs.
To analyze container logs, there are different approaches.
- Show logs with timestamps: ./pm.sh logs -t CONTAINER-NAME
- Follow the logs: ./pm.sh logs -t -f CONTAINER-NAME
- Show only the last N lines: ./pm.sh logs -t --tail=N CONTAINER-NAME
- Save the logs to a file: ./pm.sh logs -t CONTAINER_NAME > CONTAINER_NAME.log
Check if your environment variables set in .env are correctly picked up by your container. Run the following command:
for container in $(./pm.sh ps --services); do echo "### CONTAINER: $container ###"; docker inspect -f '{{range $index, $value := .Config.Env}}{{println $value}}{{end}}' $container;done
For more detailed information about resource consumption in your running containers, you can run: docker stats
When you want to update your environment variables or restart an unhealthy/stuck service you can do it in two ways:
./pm.sh restart [SERVICE]
./pm.sh clean-restart
Check how much disk space is used by Docker with the command: docker system df
If you run out of disk space, you should increase the size of the volume mounted on /var/lib/docker. Also check the backup volume disk; if you run out of disk space on the backup volume, increase the size of that volume.
If you need support from the Process Mining team, send us the file docker-logs.zip, generated by running: ./pm.sh export_docker_logs
Application performance might degrade if Process Mining isn't able to use its assigned resources to the full extent. This usually happens when other software contends for resources. Process analysis can result in spiky usage of CPU and memory resources. Do not overprovision the system where Process Mining is running in a way that might lead to resource contention.
Problem
You can't open Process Mining application and after a period of time, you get a connection timeout error.
Analysis
Verify the state of the server ports. Assuming the Process Mining application is configured to listen on port 443, you can check the port state in different ways:
nc -zv SERVER 443
nmap SERVER -p 443
telnet SERVER 443
Solution
If the state of the port is closed, we recommend contacting the network engineer in your organization.
Problem
The browser shows a warning that the server IP address could not be found; the request times out.
Analysis
Determine if the DNS cache of your machine or browser is causing the problem.
If the problem still occurs, then the DNS entry is missing or misconfigured. Verify the domain resolves with nslookup (Windows), dig (Mac, Linux), or ping. If the command returns an IP address, verify that it matches the expected IP address.
Solution
A missing or misconfigured DNS entry requires action by your network administrator.
Problem
Nginx asks for the certificate key passphrase on every boot. This is critical if the container restarts: until someone enters the passphrase again, no one can reach the application.
Analysis
Please verify the problem by running: ./pm.sh logs reverse-proxy-nginx
You should see the following passphrase request from the nginx server:
Enter PEM pass phrase:
Solution
You can remove the passphrase from your SSL key by running:
openssl rsa -in KEY-WITH-PASSPHRASE.key -out KEY-WITHOUT-PASSPHRASE.key
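You can confirm the result no longer carries a passphrase using a throwaway key; a sketch where all file names are placeholders:

```shell
# Create an encrypted test key, strip its passphrase, and load the result
# without being prompted for a passphrase
openssl genrsa -aes256 -passout pass:secret -out with-pass.key 2048
openssl rsa -in with-pass.key -passin pass:secret -out no-pass.key
openssl rsa -in no-pass.key -noout -check    # succeeds with no passphrase prompt
```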
Problem
You can't open the Process Mining application because you get a refused to connect error. The container throwing the error will be reverse-proxy-nginx.
Analysis
Run the following command to get more information about the error: ./pm.sh logs reverse-proxy-nginx
One common error is that one or more services were unable to start properly, leading to a start failure on the reverse-proxy-nginx container as well.
You can confirm the error by comparing your output with the one below.
2021/01/01 10:30:15 [emerg] 1#1: host not found in upstream "shiny-server:3838" in /etc/nginx/conf.d/default.conf:10
nginx: [emerg] host not found in upstream "shiny-server:3838" in /etc/nginx/conf.d/default.conf:10
Solution
In this case, the solution is to inspect the logs of the failing containers and understand why they are not able to start.
Problem
You receive a bad gateway error. This happens because one of your services went down or stopped after startup and reverse-proxy-nginx can't reach it anymore.
Analysis
Run the following command to get more information about running containers: ./pm.sh ps
Solution
Get the logs from the exited and unhealthy containers. From there, you can understand what's wrong with your setup.
Problem
The application was installed with default variables instead of production variables.
Analysis
Check if your environment variables are set correctly. Run command:
for container in $(./pm.sh ps --services); do echo "### CONTAINER: $container ###"; docker inspect -f '{{range $index, $value := .Config.Env}}{{println $value}}{{end}}' $container;done
Solution
Replace the default values with production values and start the Process Mining server by running ./pm.sh up -d.
Verify that the environment variables are set correctly by running the same command used in the analysis.
Following installation, connect Mining Prep and Process Mining to transfer data more easily.