Only one version left this year. I am voting for automatic database schema management for records. Or maybe 64-bit integers, lambdas for looping functions, and a flux capacitor …
But let’s save the future for another time and focus on version 23.3.
The big highlight is the AI Copilot that automatically translates PDF forms into Appian interfaces. That is, if you are OK with sending your data to the Azure OpenAI service, which you need to set up and pay for yourself. And if you have a use case for it.
So, I will focus on the updates that make me rethink my usual solution approach and refine my collection of best practices. Please make sure to read the release notes in detail and draw your own conclusions.
Interfaces
Grid Cell Background Color

Read-only grids can now have custom background colors per individual cell. No more tricks with icons or tags to highlight cells, or whole columns or rows.
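As a minimal sketch of how this could look, assuming the new backgroundColor parameter on a!gridColumn is evaluated per row, and using a hypothetical Case record type:

a!gridColumn(
  label: "Status",
  value: fv!row['recordType!Case.fields.status'],
  /* Hypothetical highlighting rule: tint overdue cases red */
  backgroundColor: if(
    fv!row['recordType!Case.fields.status'] = "OVERDUE",
    "#FFCDD2",
    null
  )
)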
Site Navigation

Not much to write here, but I can finally retire my custom-built second-level navigation menu. You will not be missed …
Portal URL Parameters
Now that’s a great addition to portals. I can send a user a link including encrypted parameters and evaluate these values on the portal page in a secure way. The implementation is simple, as these parameters become rule inputs on the generated interface. I can then query the specific data I need to display to the user.
I did something similar before, but then the user had to enter some code manually.
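Since each parameter surfaces as a rule input, the portal page can use it directly. A minimal sketch, assuming a hypothetical Order record type and an orderId parameter passed via the portal link:

a!localVariables(
  /* ri!orderId arrives decrypted from the portal URL parameter */
  local!order: a!queryRecordByIdentifier(
    recordType: 'recordType!Order',
    identifier: ri!orderId
  ),
  a!headerContentLayout(
    ...

Because the parameter travels encrypted, the user cannot simply tamper with the identifier in the URL to browse other records.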
Records
Querying Records
For me, an important best practice is to create use case-specific query expressions of two different kinds. Find the details in this older blog post.
To query an individual item, I can now change my code from
if(
  a!isNullOrEmpty(ri!id),
  error("Primary key must not be null!"),
  index(
    a!queryRecordType(
      recordType: 'recordType!Example Record',
      filters: a!queryFilter(
        field: 'recordType!Example Record.fields.id',
        operator: "=",
        value: ri!id
      ),
      pagingInfo: a!pagingInfo(1, 1)
    ).data,
    1,
    'recordType!Example Record'()
  )
)
to
a!queryRecordByIdentifier(
  recordType: 'recordType!Example Record',
  identifier: ri!id
)
and get the exact same behaviour, including the NULL value error.
Then I thought, why not skip that habit altogether and stop using these expression rules? Hmm … let’s see … my old code looks like this
a!localVariables(
  local!user: loggedInUser(),
  local!now: today(),
  local!case: rule!TST_Q_GetCaseById(id: ri!caseId),
  a!headerContentLayout(
    ...
Dropping that practice would change the code to this
a!localVariables(
  local!user: loggedInUser(),
  local!now: today(),
  local!case: a!queryRecordByIdentifier(
    recordType: 'recordType!Case',
    identifier: ri!caseId
  ),
  a!headerContentLayout(
    ...
I asked myself two questions:
- What version is more expressive?
- Are there other drawbacks or benefits?
So, I will stick to my best practice. First, it’s much more readable and speakable, and thus cheaper to maintain and extend later on. Second, I can easily look up all the dependents of my expression rule to see where that record is queried.
And then, there is also something I am going to change. And that is the code in the expression itself.
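Concretely, inside the TST_Q_GetCaseById rule from above (rule input: id), the whole if/index construct collapses to a single call while all callers stay untouched:

a!queryRecordByIdentifier(
  recordType: 'recordType!Case',
  identifier: ri!id
)

The explicit NULL check can go, since a!queryRecordByIdentifier raises the same error itself.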
Writing Records
Writing records became a whole different beast with this new version. Mostly for the better, but there are some edge cases that might lead to issues that are very difficult to identify. But let’s start at the beginning.
Using Appian Records, we can do partial updates to data in the record data source. To just modify the status of a case, I only populate the primary key identifier and the status field. When writing, any other fields will stay untouched, and the return value of the smart service will contain these two fields only. That is pretty simple.
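As a sketch with a hypothetical Case record type, such a partial status update looks like this; only the two populated fields are written, everything else stays untouched:

'recordType!Case'(
  'recordType!Case.fields.id': ri!caseId,
  'recordType!Case.fields.status': "CLOSED"
)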
With the new version, Appian allows me to not only write a flat record item, but also include related records in 1:1 or 1:m relationships. In the data structure, related records are represented just like normal fields. Take a look at line 3 in my example below.
'recordType!Author'(
  'recordType!Author.fields.id': 1,
  'recordType!Author.relationships.sshCaseAudit': {
    'recordType!Case Audit'(
      'recordType!Case Audit.fields.createdBy': loggedInUser()
    ),
    'recordType!Case Audit'(
      'recordType!Case Audit.fields.createdAt': now()
    )
  }
)
Appian will then take care of populating the foreign key fields in the related records and return all the data you pass plus the populated identifiers and foreign key values. What a great feature!
But, what’s the problem with it?
Think of an expression or a process model that expects a record as an input. It will then perform some logic using the data in that record. Now, what do you think might happen, when you pass a partially populated record into that expression or process model?
Yes, you have zero control over the data passed in and fully depend on the other side to provide a fully populated record. That calls for serious trouble!
Think of my code example from above. I write the data and directly pass the returned case audit records into some logic. That might somehow work, or at least not throw an error message. Then there are other spots in the application also working with partially populated records. Have you ever tried to debug such issues? In production!
So what should we do about it? Well, that’s a real challenge. “Let’s write a best practice!” you might say. OK, but what would that be? What are we actually trying to prevent, and what is the recommendation? And how can we make sure that everyone follows it?
For process models, I follow the pattern of passing identifiers only, never data structures. The model then has to query the data fresh from the data source and thus has full control over the data it needs.
In general, I think we should avoid directly using the data returned from write records.
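As a sketch of that recommendation (the record type, constant, and process parameter are hypothetical): instead of handing the returned record data onward, pass only the primary key and let the receiver query fresh data.

a!writeRecords(
  records: local!case,
  onSuccess: {
    /* Pass only the identifier; the process queries the full record itself */
    a!startProcess(
      processModel: cons!TST_CASE_PROCESS_MODEL,
      processParameters: {
        caseId: fv!recordsUpdated[1]['recordType!Case.fields.id']
      }
    )
  }
)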
What are your thoughts? Do you have a better idea? Please let me know!
Record Events
Record events were introduced earlier but got an enhancement. This whole feature seems to have been developed with the idea of feeding event data into process mining. To be honest, I advise my clients to build applications that automate the flow and operation of the process and avoid leaving low-level decision-making to the users. Drive the process and assign tasks to users. For such applications, I do not see a huge case for mining. But that is only my perspective. Back to the enhancement!
When recording events, there is a new field that allows you to define the automation type of that event. Available values are:
- None (User): User Input Tasks like submitting a form
- RPA: Execute Robotic Task smart service, robotic process automation (RPA) plug-ins
- AI: AI Skills smart services (Classify Documents, Extract from Document, Classify Emails), AI plug-ins
- Integration: Call Integration smart service, Call Web Service smart service, Invoke SAP BAPI smart service
- Other: Expression Rules, Decisions, process orchestration, other smart services
That’s really cool and supports any process mining activities a lot.
I wanted to mention this feature because of some remarkable implementation details. The automation type is stored on the record as an integer with values 1–5. But there is no database table that these values relate to. So the Appian product team decided to introduce two additional functions to map numbers to names and vice versa.

a!automationId() accepts the automation type name and returns the identifier, while a!automationType() takes the identifier and returns the name.
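A quick round trip as a sketch, assuming the type names from the list above are the accepted values:

a!localVariables(
  local!typeId: a!automationId("RPA"),         /* integer identifier stored on the record */
  local!typeName: a!automationType(local!typeId), /* back to the name */
  ...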
Summary
While I am touching on only a few of the changes, there is much more to discover. Ever wanted more than two million rows in a synced record? Check! More powerful Excel manipulation using RPA? Check! Exports triggered by API? Check!
Let me know your thoughts about how that new version changes your way of rocking the world with Appian.
Hi Stefan,
I am very curious about the AI Copilot and I’d like to test it on my Appian Community environment. I was wondering if a free Azure OpenAI Service account (https://azure.microsoft.com/en-us/free/) can be used for this purpose. Did you try it?
I did not try it with a free account. When you do, please leave a comment here so others can benefit.