AI Tools
This article refers to features currently in Beta-release - please get in touch if you’d like to take part in the early access trial.
In this article, we’ll explore the use of AI Tools available as actions in Gnatta Workflow, and some example use cases for a contact centre.
This article is explicitly about AI in Workflow - for information about the AI Assistant available to your agents as they handle interactions, check this article: Using AI Assistant
Action types
The following AI functions are currently available:
Voice Transcription
This will add an inbound message to telephony interactions containing the transcript at the end of a call.
Quality Assessment
This action will use AI to provide a qualitative assessment of the interaction, writing a short but detailed report. See the use case below for more information!
Language Detect
This action will use AI to detect the language of the content provided, returning the name and two-letter ISO code of the language, i.e. Name: English and IsoCode: en.
Translate
This action, often paired with the previous one, will translate a body of text into the given language.
Sentiment Detect
This action will use AI to evaluate the sentiment of the interaction, analysing the last 20 messages of the current conversation. The sentiment will be summarised as one of Positive/Neutral/Negative/Mixed.
Suggest Response
Much like the agent-facing AI Assistant, this action will evaluate the interaction so far and suggest a possible response.
Summarise
Like the AI Assistant, this action will provide a succinct summary of the interaction so far based on the last 20 messages of the current conversation, and any non-numerical data available in custom fields.
Note that Voice Transcription is currently an account-level feature, not an action in Workflow, and must be switched on for any given telephony account. Please get in touch to try it out!
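As the use cases below illustrate, each Workflow action stores its results under an Action Output label, and later actions can reference those values using double-brace placeholders. For example, using the default labels that appear later in this article, a note template could pull in values like:
Language Detected: {{DetectedLanguage.Output.Name}}
QA Rating: {{Quality.Output.Rating}}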
Use Case: QA Completed Chats
In this use case, we’ll put together a Workflow that will:
Be triggered on a Chat Ended event
Qualitatively assess the interaction, and append that report to a note
Update a selection of relevant fields with QA data, i.e. Rating and Escalation
To get started, you’ll need to create (or identify) a Chat Ended event. To do that, navigate to Configuration > Workflow > Events > Add event and select Chat Ended from the options. If none are available, that means you’ve already created the event!
Next, you’ll need to create a few new custom fields in which to store your QA data. The action output of a Quality Assessment (by default, Quality.Output) contains all of the below additional pieces of data:
.Overall - Overall Assessment (i.e. Excellent, Very Good, Good, Poor etc.)
.Rating - Rating (expressed as a number out of 100)
.Escalate - Escalation (true or false)
.Confidence - Confidence (expressed as a number out of 100)
.Explanation - Explanation (a short paragraph explaining the AI’s decision)
And also the following additional ‘details’:
.Detailed.Solution.Found - Solution (true or false)
.Detailed.Empathy.Found - Empathy (true or false)
.Detailed.Introduction.Found - Introduction (true or false)
.Detailed.Tone - Tone Rating (expressed as a number out of 5)
.Detailed.Spelling - Spelling Rating (expressed as a number out of 5)
.Detailed.Grammar - Grammar Rating (expressed as a number out of 5)
.Detailed.Punctuation - Punctuation Rating (expressed as a number out of 5)
As most of this information is going to be added as a note on the interaction, we’re only going to store in data fields the values that might be useful for making other decisions (or as filters in reporting!) - in our case, Overall, Rating and Escalation.
Navigate to Configuration > Advanced > Dynamic Data to add the data fields you’d like. We’d recommend setting them up as String Editors, with the Reportable toggle switched on.
With these precursor steps complete, it’s time to enter the flow builder at last! Navigate to Configuration > Workflow > Builder and start out by giving your flow a name (so you can save your progress as you go).
The first action in your flow will be the Quality Assessment itself - so let’s go ahead and add that. Edit the parameters as required, and tell the AI about your brand tone.
We’re going to leave the Action Output as Quality.Output - we’ll be referencing this in our next actions! This output is where the entire contents of the QA report will be stored when the flow is triggered.
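If you’d like to sanity-check those output references while you build, one option is a temporary JavaScript action that simply logs them. The snippet below is an illustrative sketch only: it assumes the Workflow JavaScript context exposes a context.Get method for reading flow data (only context.Set and context.Log appear in this article), so adjust it to whatever your context object actually provides:
// Illustrative sketch only - assumes context.Get exists for reading flow data
var overall = context.Get("Quality.Output.Overall");   // e.g. "Good"
var rating = context.Get("Quality.Output.Rating");     // a number out of 100
var escalate = context.Get("Quality.Output.Escalate"); // true or false
// Log the values so you can confirm the references resolve as expected
context.Log("QA Overall: " + overall);
context.Log("QA Rating: " + rating);
context.Log("QA Escalate: " + escalate);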
Next, let’s add the output of the QA report to a note on the interaction. Here’s a suggested template for your note (assuming you left your Action Output label as Quality.Output):
QA Feedback
Overall: {{Quality.Output.Overall}}
Rating: {{Quality.Output.Rating}}
Escalation: {{Quality.Output.Escalate}}
Confidence: {{Quality.Output.Confidence}}
Summary: {{Quality.Output.Explanation}}
Additional details:
Solution: {{Quality.Output.Detailed.Solution.Found}}
Empathy: {{Quality.Output.Detailed.Empathy.Found}}
Introduction: {{Quality.Output.Detailed.Introduction.Found}}
Tone: {{Quality.Output.Detailed.Tone}}
Spelling: {{Quality.Output.Detailed.Spelling}}
Grammar: {{Quality.Output.Detailed.Grammar}}
Punctuation: {{Quality.Output.Detailed.Punctuation}}
Next, you’ll also want to store some of those QA outputs in your chosen data fields. To do that you’ll need to add an Update Interaction action, and then preset the relevant data fields with the output (check the note template above for specific field references!). For example, we’ll update our QA Overall field with Quality.Output.Overall.
With that done, all that remains is to File>Save and File>Publish your flow, then append it to your Chat Ended event. As soon as the flow is triggered, it’ll automatically generate a QA report, add a note and update your data fields.
Use Case: Detect and Translate
In this use case, we’ll put together a Workflow that will:
Be triggered on a New Message Received or Response Received event
Detect the language of the incoming message
If that language is not English, generate a translation and add it to a note
Update a language detected field with the language name
To get started, you’ll need to create (or identify) a New Message Received or Response Received event. To do that, navigate to Configuration > Workflow > Events > Add event and select the relevant event from the options. If none are available, that means you’ve already created the event!
In our example, we’re going to be adding our flow to the New Message Received event for our Testing Email account, so the flow will be triggered whenever a new email thread is sent to that account.
Next, you’ll need to create a new data field in which to store the language the AI detects - do this by navigating to Configuration > Advanced > Dynamic Data. We’d recommend setting it up as a String Editor, with the Reportable toggle switched on.
With these precursor steps complete, it’s time to enter the flow builder at last! Navigate to Configuration > Workflow > Builder and start out by giving your flow a name (so you can save your progress as you go).
The first action we’ll be adding is Detect Language. You’ll need to select a body of text for the AI to detect the language from - we’re going to use the Message (Echo.Body). We’re going to leave the output label as the default, to keep things simple.
Next, we’ll need to split the flow with a Decision action. We’re going to be checking whether the output of that Detect Language action identified the language as English. To do that, we’ll set the condition to check if DetectedLanguage.Output.IsoCode is equal to en.
When your Decision action is complete, it should look a little like this:
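If you want to see what the Detect Language action actually returned while you’re testing, you could also log it from a JavaScript action. This is a sketch only, and it assumes the Workflow JavaScript context exposes a context.Get method for reading flow data, which this article doesn’t confirm:
// Illustrative sketch only - assumes context.Get exists for reading flow data
var languageName = context.Get("DetectedLanguage.Output.Name");
var isoCode = context.Get("DetectedLanguage.Output.IsoCode");
// Log the detected language for debugging purposes
context.Log("Detected language: " + languageName + " (" + isoCode + ")");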
Next, we’ll be adding a super quick JavaScript action right below the Not English branch to set a custom context variable for the language we want to translate into. We’ll then be able to insert that variable as the output language in the Translate action after it. To do that, insert a new JavaScript action and copy and paste the following code snippet:
// Setting the variable Var.Translate to "en"
context.Set("Var.Translate", "en");
// Log the action for debugging purposes
context.Log("Set Var.Translate to 'en'");
The next step is to add a Translate action to translate the identified text into English. That means our source text for this action is still the Message (Echo.Body) that we’ve identified as Not English in our Decision, and the language code we want to translate into is Var.Translate (the ‘en’ variable we set in the JS action).
Now that we’ve translated it, we can add that translation text to a note on the interaction so the agent can read it. We’re going to add the following to our note:
Language Detected: {{DetectedLanguage.Output.Name}}
AI Translation:
{{Translate.Output.Translation}}
It’s important to include .Translation on the end of your Translate action output!
Next, we’re going to update our AI Language Detected custom field with the detected language using an Update Interaction action. Simply select your custom field and insert your Detect Language output, with .Name appended to capture the language name in that field. In our case, that means adding DetectedLanguage.Output.Name to the field.
Finally, it’s best practice to drag that hanging ‘English’ branch right down to join up with your Update Interaction action - whilst you didn’t carry out a translation, it’s still useful to jot down the language detected in your custom field to make sure the data is surfaced in your reports.
Then you’re ready to hit File>Save and File>Publish, and attach your flow to your event. If all is working as expected, inbound messages the AI determines to be Not English will be automatically translated and added as a note!