Generative artificial intelligence (AI) is highly adaptable to many use cases in your business processes. With such a vast set of possibilities, it can be hard to know where to start.
This page provides some sample prompt patterns you can use with the prompt builder AI skill. As you experiment with customizing the prompt, you'll learn more about what to include to get just what you need from the skill.
Tip: When executed in a process, any input text is appended to the prompt you created with the prompt builder AI skill. This means that the prompt text is the first thing the model begins to analyze and interpret. Keep this in mind as you phrase the prompt.
For example, the model can make sense of "Summarize the following text" in the prompt when it's followed by the input text. Alternate phrasing, such as "Summarize the previous text," might confuse the model and lead to low-quality results.
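To make the ordering concrete, here is a minimal sketch (in Python, with hypothetical prompt and input strings, not actual Appian internals) of how the saved prompt and the input text combine:

```python
# Illustrative only: the input text is appended to the saved prompt,
# per the behavior described above. Both strings are hypothetical.
saved_prompt = "Summarize the following text in two sentences."
input_text = "Case 4512 was opened on Monday after the customer reported a leak."

# The prompt comes first, so instructions like "the following text"
# correctly point forward at the appended input.
text_sent_to_model = saved_prompt + "\n\n" + input_text
print(text_sent_to_model.startswith(saved_prompt))
```

Because the prompt always leads, forward-pointing phrasing ("the following text") lines up with where the input actually lands.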
What makes a good prompt?
First, let's recap what a prompt is. In the context of AI, a prompt is how we communicate with the large language model (LLM) about what we want it to do. A well-crafted prompt helps you communicate with the model more effectively and reduces the amount of time you spend tweaking or troubleshooting.
Crafting a good prompt can improve:
precision and relevance of the model output,
efficiency by getting the desired response in one question,
safety by excluding undesired output, and
customization with your own relevant data.
If you're looking to build a prompt from scratch, you might want to consider the following aspects:
Define your use case: What do you want the model to do?
Tell the model what input to expect (defining the input data): If the prompt will always include content for a certain type or format, it may be helpful to tell the model this in the prompt. For example, "I want you to summarize this email text and provide a suggested response."
Tell the model what output you want (defining the output data): LLMs are not deterministic, so it is possible that the model won't always return data in the structure you specify. However, you can be specific about what you want the model to return. For example, you can tell the model you want the response to be in JSON, HTML, a bulleted list, or any other format. You can also tell the model to include its response between tags, to help keep the answer more focused.
Gather examples: You'll want to provide examples to help the model produce the answers you're looking for. Examples help provide the model with context, so it will produce output that's more specific to your request.
Consider length, tone, and audience: Another thing to consider when specifying the output is how it will be used and who will see it. Does it need to be a certain length, perhaps 3 paragraphs? Should it be written for a certain audience, such as customers? If you help define the context for the model, you'll get results that are more in line with what you need.
Remember token limits: The model can process a set number of tokens, or units of analysis. The model will process your prompt and the input data. Keep this in mind when you craft the prompt, because the longer the prompt is, the smaller the input can be.
Tip: Put the most important part of your prompt (such as output format or a specific question to answer) at the end of your prompt so the model is clear on what your ultimate request is.
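As a sketch of the tag technique mentioned above: if your prompt instructs the model to wrap its answer in tags (the tag name here is a hypothetical choice), downstream logic can pull out just the focused answer. For example, in Python:

```python
import re

# Hypothetical model response in which the answer was requested
# between <answer> tags, as suggested above.
model_response = (
    "Sure, here is the summary you asked for.\n"
    "<answer>The customer reports hail damage and requests an inspection.</answer>"
)

def extract_tagged_answer(response: str, tag: str = "answer") -> str:
    """Return the text between <tag>...</tag>, or the whole response
    if the model did not follow the tag instruction (LLMs are not
    deterministic, so always handle the missing-tag case)."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

print(extract_tagged_answer(model_response))
```

The fallback branch matters: because the model won't always return data in the structure you specify, code that consumes the response should tolerate a missing or malformed tag.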
What is a token?
In the context of machine learning, a token is a unit of analysis. A model breaks down data into meaningful parts—tokens—before it begins processing that data. Language processing models break down paragraphs of text into their smallest units, but these units aren't always equivalent to words. Instead, a token could be part of a word. Therefore, the number of tokens won't always equal the number of words in the text analyzed.
As a rough estimate, one token corresponds to about six characters of text.
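Using that rough six-characters-per-token estimate, you can sanity-check whether a prompt plus its input will fit within a model's limit. A minimal sketch (real tokenizers vary, so treat this only as a ballpark figure):

```python
import math

CHARS_PER_TOKEN = 6  # rough estimate from above; actual tokenizers differ

def estimate_tokens(text: str) -> int:
    """Rough token estimate: about one token per six characters."""
    return math.ceil(len(text) / CHARS_PER_TOKEN)

prompt = "Summarize the following text in two sentences."
print(estimate_tokens(prompt))  # 46 characters -> about 8 tokens
```

A quick estimate like this helps you budget: the longer your prompt, the fewer tokens remain for the input data.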
Summarize long or complex content
If your business uses Appian for case management, it is often essential to surface crucial information in the form of summary updates. A single case could include hundreds of updates, making it difficult and time-consuming for a person to read them all and identify the key information. Instead, you can ask an AI model to do this for you.
Prompt
Example inputs
In this example, you can use data from a record field as input text for the Execute Prompt smart service. An automated process model can routinely summarize recent updates to the records you choose, eliminating the need for a person to manually ask the AI model for a summary.
Example outputs
Generate content for legal or marketing purposes
To promote a product or service, your business needs to create a lot of written materials. Maybe you want to interact with your customers by posting on social media, writing blog posts, drafting catchy taglines, or assembling a list of frequently asked questions. Because this information is based on what the product can do for your intended customers, you can use the prompt builder AI skill to help generate additional content about your new product or service.
Prompt
In this scenario, the input text will be a description of your product and a request for a certain type of marketing collateral, such as a blog post. As a starting point, you'll formulate a prompt that sets the scene for the model:
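As an illustrative starting point (your own wording may differ), a scene-setting prompt for this use case might look like:

```
You are a marketing copywriter for our company. Using the product
description provided below, write the requested piece of marketing
collateral. Match the tone to our brand: friendly, confident, and
clear. Keep the output focused on what the product does for the
customer.
```

Note that the prompt names a role and a tone but leaves the collateral type to the input text, which is what keeps it reusable across formats.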
This prompt is open-ended enough that your business could also use it to generate content for social media, blog posts, and more.
Example inputs
Example outputs
Because this prompt is generating creative content, you may want to experiment with adjusting the temperature to see the variety of language the model returns.
Identify personally identifiable information (PII)
Suppose your business intakes and manages form submissions with sensitive information, like a person's social security number or medical history. Before you route those forms to the right person or group, it's important to check whether the form contains information that those people are privileged to see.
Tip: Remember that generative AI models sometimes produce responses that are inaccurate, inappropriate, or undesirable. Users should verify information in these responses for accuracy.
Prompt
You can craft a prompt that asks the AI model to provide a list of potentially sensitive information within any document. For example:
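An illustrative version of such a prompt (adapt the categories to your own compliance requirements) might be:

```
Review the following document and list any personally identifiable
information (PII) it contains, such as names, addresses, phone
numbers, social security numbers, or medical history. Return the
results as a bulleted list. If no PII is found, respond with "None".
```

Specifying a response for the no-PII case gives downstream logic a predictable value to check, rather than free-form prose.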
Classify and route unstructured text, such as a customer support request submitted through a Portal
Your business might use portals to present information to and interact with your customers. Say a customer submits a support request via your portal. If the form submission is left to be reviewed manually, a person might not see it for a day or two, leading to slower response times and a poor customer experience. You can use generative AI to analyze these form submissions, determine the request type, extract key information, and route each one to the appropriate team for triage and response.
Prompt
You can ask the model to return answers to a standard set of questions:
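For example (an illustrative set of questions; tailor them to your own triage process), the prompt might read:

```
Read the following customer support request and answer these questions:
1. What type of request is this (billing, technical issue, account access, or other)?
2. How urgent is the request (low, medium, or high)?
3. What is the customer's name, if provided?
4. Summarize the request in one sentence.
Return your answers in JSON with the keys "type", "urgency", "name", and "summary".
```

Requesting JSON with fixed keys makes the response easier to map onto routing logic in a process model, though as noted earlier, you should still validate the structure before relying on it.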
Example inputs
Example outputs
Extract data from unstructured text, such as emails or long Records fields
An unstructured document, such as an email, might contain important information that you'll use elsewhere in Appian. For example, if you're an insurance adjuster and receive many emails from customers with information about new or existing claims, it can be time-consuming to review and extract the relevant details from each one. You can use AI to help you get started.
Prompt
Tell the model you want specific information from the emails you receive:
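An illustrative prompt for this scenario (the field names match the example outputs shown below) might be:

```
Extract the following details from the email below and return them
as a list of field,value pairs: policy number, customer name, date
of incident, type of damage, and estimated cost. If a detail is not
mentioned in the email, write "Not provided" for that field.
```

Asking for an explicit "Not provided" placeholder keeps the output structure stable even when an email omits a detail.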
Example inputs
Example outputs
Alternatively, if you asked the model to output the data as a CSV instead of a list, you'd get:
Policy #,ABC123
Name,Sarah Thompson
Date of Incident,January 25th
Type of Damage,"Hail damage to roof causing leaks in multiple rooms, damaging ceiling, walls, and flooring"
Estimated Cost,"$8,500"
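Once the model returns field,value pairs like these, downstream logic can parse them. A minimal Python sketch using the standard csv module (the response string mirrors the example output above, and assumes the model was asked to quote fields that contain commas so the CSV is valid):

```python
import csv
import io

# Example model output as valid CSV: fields containing commas are quoted.
model_csv = '''Policy #,ABC123
Name,Sarah Thompson
Date of Incident,January 25th
Type of Damage,"Hail damage to roof causing leaks in multiple rooms, damaging ceiling, walls, and flooring"
Estimated Cost,"$8,500"
'''

# Parse each row into a field -> value mapping for use elsewhere.
claim = {row[0]: row[1] for row in csv.reader(io.StringIO(model_csv))}
print(claim["Policy #"])        # ABC123
print(claim["Estimated Cost"])  # $8,500
```

Using a real CSV parser (rather than splitting on commas) is what makes the comma-laden "Type of Damage" value survive intact.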