In the last few versions, Appian has learned to handle partially populated records when reading and writing data. We can now query only the data we actually need, which reduces the amount of data fetched from an external system, saving precious CPU cycles and memory. And when writing data back to a database, we have control over which fields are actually modified.
Cool, but why write a post about it!?
When looking closer, and considering that software is typically maintained for years by ever-changing teams, there are a few risks I want to make you aware of.
Process Data
I am designing a process model that deals with a record with many fields. I only need a few of them, so I decide to fetch only these fields instead of the whole record. Design, test, works.
Then, for a new requirement, you take over, adding more logic to that model. And you require additional values from that record. How long will it take you to find out that you are working with a partially populated record?
Then you pass that data to another process, developed by another team member. That developer tested the model successfully, but when it is called from our model, there are no obvious errors, yet some test cases fail. After some debugging, we find out that the other model uses a boolean field in an XOR gateway, and our model does not fetch this field. So the condition evaluates to false, at least most of the time.
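To make the failure mode concrete, here is a minimal sketch in Python (not Appian SAIL; the record and field names are invented for illustration) of how a missing boolean in a partially populated record silently flips a gateway condition instead of raising an error:

```python
# Hypothetical sketch: a partially populated record silently breaking
# a boolean condition. Field names are invented for illustration.

def fetch_record(fields):
    """Simulates querying only the requested fields of a record."""
    full_record = {"id": 17, "name": "Order-17", "isApproved": True}
    return {f: full_record[f] for f in fields}

def xor_gateway(record):
    """Simulates an XOR gateway routing on a boolean field.
    A missing field behaves like null, which is not true."""
    return record.get("isApproved") is True

full = fetch_record(["id", "name", "isApproved"])
partial = fetch_record(["id", "name"])  # the caller "optimized" the query

print(xor_gateway(full))     # True  -> expected path
print(xor_gateway(partial))  # False -> silently takes the wrong path
```

Note that the partial record produces no error at all; the process simply takes the wrong branch, which is exactly why such bugs only surface in test cases, or worse, in production.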
A very similar issue can arise when passing partially populated records into user interfaces.
Record Actions
A record action is a process model that runs in the context of a selected record and typically uses the process start form feature. Until recently, we had to perform a separate query in the record action configuration to get the data we needed. Now, in the record configuration, we can simply use rv!record to pass record data to that model.
This great feature makes life a lot easier, right? Well, I tend to disagree.
The problem is that Appian identifies the fields to query solely by analyzing the interface used for the start form. It picks up not only fields of the base record, but also any referenced fields in related records. If we need any other record field later in the process, that field is empty. Not cool!
This becomes even worse when we consider that requirements change, and we might remove a field from the interface that we still require for some logic in the process. Do you have enough test cases to cover such a scenario?
Writing Records
When writing data back to a system, the smart service “Write Records” returns the record data with only the primary key field populated. If you require other fields for further processing down the process, you need to perform another query.
That is not only a big change in behaviour from the old-style CDTs and data stores; we now also have to deal with two variants of the “Write Records” smart service. Process models designed before the change keep their old behaviour and return the fully populated record. A newly added “Write Records” node follows the new style. And, as of now, there is no way to visually distinguish one from the other.
Not cool, especially for new designers who learn only the new way and then have to deal with an existing application.
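The new-style behaviour and the required follow-up query can be sketched as follows (plain Python, not Appian SAIL; the in-memory "database" and field names are stand-ins for illustration):

```python
# Hypothetical sketch: a new-style "Write Records" that persists the data
# but returns only the primary key, forcing a follow-up query.

DATABASE = {}

def write_records(record):
    """Simulates the new-style smart service: the record is saved,
    but only the primary key field comes back."""
    DATABASE[record["id"]] = dict(record)
    return {"id": record["id"]}

def query_record(record_id):
    """Simulates the extra query needed to repopulate the record."""
    return dict(DATABASE[record_id])

result = write_records({"id": 42, "status": "APPROVED", "total": 99.5})
print(result)  # {'id': 42} -- 'status' and 'total' are gone

repopulated = query_record(result["id"])
print(repopulated["status"])  # 'APPROVED' again, at the cost of a query
```

Any node after the write that assumes the record is still fully populated will read empty fields, which is why the extra query belongs directly after the write.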
Record Grids
When loading record data into a read-only grid, the grid automatically detects which fields are used and loads only those. That is great for performance and a good design decision. But when you select an item and use fv!selectedRows to store a copy of the row in a local variable, it will contain only the fields used in grid columns, not the fully populated record.
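The same effect, sketched in Python (not Appian SAIL; the rows and column names are invented): the "selected row" is only as complete as the columns the grid happened to display.

```python
# Hypothetical sketch: a read-only grid that fetches only displayed
# columns, so the selected row is a partial copy of the record.

FULL_ROWS = [
    {"id": 1, "name": "Alice", "email": "alice@example.com", "active": True},
    {"id": 2, "name": "Bob", "email": "bob@example.com", "active": False},
]

def grid_rows(columns):
    """Simulates the grid querying only the fields bound to columns."""
    return [{c: row[c] for c in columns} for row in FULL_ROWS]

rows = grid_rows(["id", "name"])  # the grid shows just two columns
selected = rows[1]                # stands in for fv!selectedRows
print(selected)                   # {'id': 2, 'name': 'Bob'}
print("active" in selected)       # False -- the flag never arrived
```

Any downstream logic that expects the "active" flag on the selected row will see a missing value, even though the record in the database is complete.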
Considerations
In my personal opinion, using partial records makes my life more difficult than it was before, for no added benefit. It adds a whole new class of possible mistakes and bugs that I need to mitigate by developing new best practices, trainings, code review checklists, etc.
And yes, I understand that there are scenarios that benefit from reading or writing only a subset of the fields of a record.
I try to design my applications in a way that makes things predictable, so that a developer is not surprised by their behaviour. This is my current idea of how to avoid most of the issues described above:
- Keep doing a separate query when configuring record actions to pass the full record.
- Pass only the ID of a record to another process and let it query the data it needs.
- Pass only fully populated records to interfaces, typically ignoring related record data.
- Take extra care when managing and tracking record data in a process. Add extra queries after a “Write Records” node.
- Give code and objects that deliberately rely on partial records a clear warning.
Summary
All of a sudden, the wish, or the need, for more flexibility raises a new class of risks we did not have to deal with before. I highly recommend doing your own research, discussing this in your team, and introducing new risk mitigation activities to your development process.
The line between simplicity and functionality can become very thin, and a misstep can be painful. From my perspective, Appian mostly makes superb design decisions.
I had a conversation with the product development team, from which I learned that they are very well aware of the described issues. Achieving better performance and providing more granular control over reading and writing record data requires these changes.
Now, we have more power to design better apps and the responsibility to keep delivering top-notch quality, just like Spider-Man. I am curious to see how the platform and records will continue to evolve.
Thanks for your attention. Keep rocking the world of digital processes!
