Thank you very much for your patience. It took a while to get access to this new Appian feature, which is part of the 23.4 release.
In this post, I want to share the experience I had chatting with my data, the greatness and weirdness of large language models, and some thoughts on possible use cases.
Let’s get into it!
It Is
To make use of a large language model (LLM), Appian integrates with the Bedrock service provided by Amazon. In terms of data privacy, this means that there is one more service provider involved. This is the current state of development and may change in future iterations.
Chat-With-Records uses a general-purpose LLM, similar to OpenAI's GPT. Appian instructs the model to act as a helpful assistant. When asked for its instructions, I got the following answer.

I had a very nice conversation with Appian product management and learned that the current focus of this feature is supporting case management scenarios. As an example, a user asks for a summary of the case, the latest activities, and recommended next steps. Users typically spend a lot of time gathering this overview just to decide what to do next, and AI can really help with this.
Just like with many other features, Appian brings a focused first implementation to market and then refines it over time based on client feedback. If the current version does not fit your specific use case, get in contact with Appian and share your ideas and requirements.
It Isn’t
AI is still in its infancy, so I think we need to do some expectation management.
Chat-With-Records is not meant to be a general LLM implementation that you can tune and tailor to your specific needs and requirements. The current version also does not support direct access to text in documents or custom instructions.
There is also no way to trigger any activities or record actions from the model, and the conversation with the model is not saved to an audit trail.
All of this might or might not change in future iterations. To repeat: contact Appian with your specific use case to inspire product development.
My Test Case
For some basic testing, I created a simple data model using synced records. It contains data about countries, regions, languages, and statistics.

I then created a simple interface and configured the recordsChatField component to give it access to the base record and some additional data.
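For orientation, the interface configuration looked roughly like the SAIL sketch below. Note that the parameter names here are illustrative assumptions, not the component's documented interface; check the Appian documentation for a!recordsChatField before copying anything.

```sail
/* Illustrative sketch only: the parameter names below are assumptions */
/* and may differ from the real a!recordsChatField interface.          */
a!recordsChatField(
  label: "Chat with this country",
  /* the base record the chat should reason about */
  recordType: recordType!Country,
  identifier: ri!countryId,
  /* guidance shown to the user before the first question */
  firstMessage: "Ask me about this country, its region, languages, or statistics.",
  suggestedQuestions: {
    "Summarize this country.",
    "Which languages are spoken here?"
  }
)
```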

In my test scenario, I do not have any user comments, documents, or an audit trail to feed into the data context. Comments and an audit trail would just be additional records I could easily provide. Documents are not a real option as of now: we cannot pass documents to the model directly, and just extracting the text to the database will not work, as text fields in synced records are currently limited to 4,000 characters.
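If you still wanted to store extracted document text in a record, you would have to truncate it to fit the field limit. A minimal SAIL sketch, assuming a hypothetical rule that extracts the text:

```sail
a!localVariables(
  /* rule!MY_ExtractDocumentText is a hypothetical helper, not an Appian OOTB rule */
  local!extractedText: rule!MY_ExtractDocumentText(ri!document),
  /* cut the text down to the 4,000-character limit of synced-record text fields */
  if(
    len(local!extractedText) > 4000,
    left(local!extractedText, 4000),
    local!extractedText
  )
)
```

Of course, truncation loses information, which is exactly why documents are not a real option yet.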
The LLM has a limit of 100,000 tokens, which is large enough to pass substantial background data. This could also include static data, such as individual prompt instructions or corporate guidelines, as long as you store them as records and connect them to the base record.
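To get a feel for how much background data fits, you can roughly estimate the token count before wiring up the data context. A common rule of thumb is about four characters per token; the sketch below applies that heuristic in SAIL (the variable names are my own, not part of the feature):

```sail
a!localVariables(
  /* ri!backgroundTexts: a list of text values pulled from related records */
  local!contextText: joinarray(ri!backgroundTexts, " "),
  /* rough heuristic: ~4 characters per token */
  local!estimatedTokens: len(local!contextText) / 4,
  /* stay safely below the 100,000-token limit */
  local!estimatedTokens < 100000
)
```

This is only an estimate; real tokenizers count differently, but it is good enough to spot data that is clearly too large.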
I use the initial message and the suggested questions to help users understand what types of answers, and what level of detail, they can expect. I consider this an important aspect of a well-designed user interaction with such AI capabilities.
When asked for a summary, I got a very nice answer.

This answer includes some facts that were not in the data I provided. This happens because the model tries to give a good answer, but my data is sparse, so it mixes in facts it learned during its training phase.
When I asked the model to use only the data I provided, I got this.

I also tried to make the model forget all the data, or to convince it that Aruba lies in Asia. Just like any other LLM, it is quite naive and trusted me. This will become interesting once we feed in more user-generated content that could try to outsmart the model. LLM security is an active research area, and I expect more robustness against malicious actors in the near future.
Now What?
I have been working with LLMs, mostly Azure AI and OpenAI GPT, in various use cases, without being an AI expert. What I see here is a very focused and incredibly easy-to-use approach: exactly what I expect from a platform that uses low-code to implement digital processes.
As of now, Chat-With-Records is not generally available and must be requested from Appian. I expect this to change in future releases as the feature matures. I also have no information about the cost structure yet.
If I wanted to tune an LLM and train it to act within very specific boundaries, such as medical or insurance domains, I might have to look for an alternative solution.
If you have any questions, please let me know in the comments below.
Cheers!

Update on February 5th, 2024: Find my separate post on this feature here: Chat With Records