The Appian RPA platform allows you to use automation techniques like Microsoft UI Automation.
Microsoft UI Automation is an accessibility framework for Windows applications. It is typically used by assistive technologies, such as screen readers, but it can be used in Appian RPA to detect the attributes of elements on a screen and then act on them.
The Appian RPA API includes a component called ui-automation, which allows you to use the Microsoft UI Automation framework from Java. You do not need to declare any additional dependencies in the pom.xml to use this module, since all necessary dependencies are included in the core module.
To use this framework, you must use a Windows system. According to Microsoft, this framework should run on almost all versions of Windows, from XP onwards. Appian has verified its use in systems based on Windows 7 and Windows 10.
When automating an application, you must know how to reference the components of the user interface, such as the text fields, buttons, checkboxes, and lists. To easily identify the components of the interface, Appian recommends using Microsoft's Inspect tool.
To install Inspect, download the latest version of the Windows SDK and run the Inspect.exe utility on your resource. You do not have to install all components. The only required components are Windows SDK Signing Tools for Desktop Apps and Windows SDK for UWP Managed Apps.
Once Inspect is installed and executed, you can hover over elements on your window to view their properties. These properties allow the UI Automation framework to locate elements on a window. The properties you need for your robotic process will vary depending on the application you are trying to automate. Common properties used in robotic processes include the element's name, automation ID, and control type.
There are three objects involved in using the UI Automation framework:
The UIAutomation object allows you to access the elements of the UI Automation framework. The following code snippet instantiates the object:
IUIAutomation automation = UIAutomation.getInstance();
Once the object is instantiated, you can launch a new program or hook the instance to an open application. For example, the following code snippet opens the Calculator application:
String CALC_EXECUTABLE = "calc.exe";
Application calcApp = automation.launchOrAttach(CALC_EXECUTABLE);
calcApp.waitForInputIdle(30);
The Window object represents a visible window of an open application. This window represents the root component where the robotic process's actions will occur.
For example, say your robotic process has opened the Calculator application. You now need to obtain the window of the Calculator using the following code:
String CALC_TITLE_REGEX = "Calculator";
Window calcWindow = automation.getDesktopWindow(Pattern.compile(CALC_TITLE_REGEX));
calcWindow.focus();
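Note that the window title is matched as a regular expression, so a robotic process can find windows whose titles vary, such as "Calculator" versus a suffixed title. A minimal sketch of the matching itself, using only java.util.regex (no UI Automation dependencies; the pattern is illustrative):

```java
import java.util.regex.Pattern;

public class TitleMatch {
    public static void main(String[] args) {
        // A pattern that tolerates a suffix after the application name,
        // e.g. "Calculator" or "Calculator - Scientific".
        Pattern title = Pattern.compile("Calculator.*");

        System.out.println(title.matcher("Calculator").matches());              // true
        System.out.println(title.matcher("Calculator - Scientific").matches()); // true
        System.out.println(title.matcher("Notepad").matches());                 // false
    }
}
```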
Once you've identified the Window, you can determine what type of controls are available. Controls are UI elements like buttons, menus, hyperlinks, checkboxes, etc. They have specific capabilities that your robotic process can interact with.
There are several methods you can use to obtain controls. For example, the Window object provides methods such as getButton() to retrieve a button that matches certain properties. The framework supports many other controls, each with its own class that extends the AutomationBase class. See the ui-automation Developer documentation for a list of all supported controls.
Alternatively, you can use the Search class to locate controls. For example:
Button button1 = calcWindow.getButton(Search.getBuilder().automationId("btnOK").build());
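Controls are not always available the instant a window gains focus, so a lookup like the one above can fail intermittently. One common pattern is to retry the lookup for a short period. The sketch below uses only standard Java and abstracts the control lookup as a Supplier; the helper name and timings are illustrative, not part of the Appian RPA or ui-automation API:

```java
import java.util.function.Supplier;

public class RetryLookup {
    /**
     * Repeatedly invokes the lookup until it returns a non-null result
     * or the attempts are exhausted. Returns null if nothing is found.
     * Assumes the lookup returns null (rather than throwing) on a miss.
     */
    public static <T> T waitFor(Supplier<T> lookup, int attempts, long delayMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            T result = lookup.get();
            if (result != null) {
                return result;
            }
            Thread.sleep(delayMillis);
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated lookup: misses twice, then finds the control.
        int[] calls = {0};
        Supplier<String> lookup = () -> ++calls[0] < 3 ? null : "btnOK";

        String control = waitFor(lookup, 5, 10);
        System.out.println(control); // btnOK
    }
}
```

In a robotic process, the Supplier would wrap the actual control lookup, such as the getButton() call shown above.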
See the UI Automation Template for steps to create your own robotic process that can automate the Calculator application on a Windows machine.
For a guided tutorial with hands-on practice, see Tutorial: Build a Bot with UIAutomation.