Episode 32: Digital Process Ethics

In this episode of Appian Rocks, Stefan, Sandro, and Marcel dive into a conversation that starts with the seemingly dull topic of software reviews but quickly evolves into a deep and thought-provoking discussion about ethics in digital process automation. Initially, they touch on the typical components of a code review—adherence to best practices, syntax, node counts in processes, and test cases. However, they challenge the narrow scope of this approach, questioning whether technical correctness alone is sufficient, especially when the software influences real-world decisions in complex environments.

The conversation shifts to the broader context in which applications operate, especially in public sector projects. The team notes that stakeholders such as the funding agency, the users, and the beneficiaries are often different entities, each with distinct priorities. This creates a tension where developers can find themselves caught in the middle. While developers are typically not policy makers, the code they write can enforce rules and decisions that significantly affect people’s lives. This leads to a central theme of the episode: software is not neutral. It embodies decisions, and those decisions can have ethical consequences.

They explore how public sector automation transforms discretionary, human-driven processes into rigid, rule-based systems. This transition, while increasing efficiency, risks stripping away the nuance and empathy that experienced civil servants once applied. For example, decisions about child support or eligibility for government aid, which were previously made by humans considering context and individual circumstances, are now reduced to logic gates and business rules. The trio argues that this change demands new layers of oversight—beyond testing whether a process works, teams must ask whether it works *fairly* and *justly*.

A particularly striking point raised is the lack of ethical audits in most software development projects. Stefan admits he’s never performed one, and the group collectively questions why such audits aren’t standard practice. Is it because they were never needed? Or is it because ethical responsibility was previously embedded in human roles and not in the tools themselves? They agree that developers, especially solution designers and business analysts, have a duty to consider the broader impacts of their implementations.

The discussion also touches on traceability and transparency. Marcel introduces the concept of traceability as a critical requirement, particularly in government software. Every feature in an application should be traceable back to a signed-off requirement to ensure accountability. This is essential not only for auditing but also for safeguarding citizens’ rights when decisions are automated. Transparency, too, is highlighted as a core value—systems should provide users with understandable explanations for decisions, such as why a child support claim was denied.

As the episode closes, the hosts underline the need for ethical codes within development teams. Guidelines alone aren’t enough; teams must establish practical escalation paths and support for developers who encounter ethical red flags. Developers should feel empowered to say no to unethical requests and escalate questionable requirements. Ethical responsibility, they stress, belongs to everyone involved—not just legal or compliance departments.

Ultimately, this episode calls for a shift in mindset. In an era where software often replaces human discretion, ethics must become a first-class concern in digital process design. Developers, architects, and analysts need to see themselves not just as implementers of logic, but as stewards of values that impact real lives.

Episode 31: Dealing with External Data Models

In this episode of Appian Rocks, hosts Stefan, Sandro, and Marcel discussed managing external data models in Appian. They focused on Data Transfer Objects (DTOs) for abstracting and transferring data between incompatible systems. Marcel, a solution architect, highlighted the challenge of integrating external data, whether from microservices or legacy systems, and questioned forcing a single business object model across an enterprise.

The conversation explored communication methods and the common scenario of Appian performing internal data transformations. Stefan emphasized that Appian often needs only a subset of external data. Marcel explained that a central translation layer for DTOs could consolidate logic, preventing widespread changes if a DTO evolves. They also mentioned API composition and anti-corruption layers (ACLs), which facilitate communication between systems using their own data models, with translation in the middle. Marcel likened DTOs to “DHL packages” for data, while ACLs help reduce transferred information, adhering to the “need-to-know” principle.
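As a rough illustration of the central translation layer the hosts describe (this sketch is not from the episode, and the rule and field names, rule!DTO_TranslateCustomer, customer_uuid, and status_code, are hypothetical), a DTO-to-internal mapping in Appian's expression language might look like:

```
/* Hypothetical translation rule: rule!DTO_TranslateCustomer
 * Input: ri!externalCustomer, a map returned by an integration.
 * Maps the external DTO to only the fields Appian actually needs,
 * per the "need-to-know" principle; if the external DTO evolves,
 * only this one rule has to change. */
a!map(
  id:       ri!externalCustomer.customer_uuid,
  fullName: ri!externalCustomer.given_name & " " & ri!externalCustomer.family_name,
  isActive: ri!externalCustomer.status_code = "ACT"
)
```

Because every consumer goes through this one rule, the translation logic is consolidated in a single place, which is exactly the anti-corruption-layer idea of keeping the external model from leaking into the rest of the application.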

Stefan pointed out the fundamental difference between process-driven Appian systems and data-storing backends. Marcel added that highly normalized external data might require denormalization for Appian UI performance. They also covered various dimensions of coupling: data format, interaction style, semantics, order of operations, network location, temporal behavior, and network topology. Stefan shared an anecdote about time zone issues causing data discrepancies.

Sandro presented a “war story” about enriching read-only external customer data. Stefan immediately suggested Appian’s sync records as a solution for creating cached local copies and enhancing query speed. Marcel agreed, comparing it to a materialized view. When Sandro revealed that API-based integrations across multiple unreliable source systems led to instability, Marcel proposed an API Composer service with caching and retry mechanisms. Stefan countered that Appian’s synced records can now handle unsuccessful or partial syncs.

They concluded that data duplication is a pragmatic approach, especially for low-priority reference data or when sensitive data shouldn’t reside directly in Appian. While reliable software is costly, local data duplication can be a cost-effective solution for individual applications. The crucial factor for data duplication is ensuring awareness of changes to keep the cached data current. Marcel, despite his skepticism, acknowledged that synced records effectively solve common problems in an approachable way, aligning with Appian’s platform philosophy.

Episode 30: AI Contextualized

In this episode of Appian Rocks, Stefan, Sandro, and Marcel tackle the controversial role of artificial intelligence in process implementation projects. While acknowledging AI’s impressive capabilities, they warn against the industry’s tendency to treat it as a universal solution. What demos well in sales meetings often falls short in practice, producing answers that only sound competent. The hosts argue that uncritical adoption leads to laziness, outsourcing of judgment, and a dangerous decline in deep problem-solving skills.

Marcel frames the issue as the “hammer and nail” problem: with AI marketed as the hammer, everything starts looking like a nail. This obsession can stifle thoughtful analysis and push teams to skip the hard work of understanding processes. Stefan illustrates this with a client case where rethinking and simplifying steps—without AI—halved the workload. The real benefit came not from automation but from owning the thinking and redesign. If a team relies on a chatbot instead, it risks losing both control and learning.

Still, the hosts emphasize that AI has valuable use cases, particularly where input is noisy or unstructured. Summarizing long documents, extracting fields from messy scans, or parsing communication are areas where probabilistic language models excel. But when data is already structured and clear, adding AI can actually reduce quality. As Stefan puts it, “the best part is no part”—if a step adds no value, eliminate it rather than overengineering with AI.

The conversation then broadens to the societal and environmental costs of AI overuse. Marcel highlights the immense energy and water consumption of data centers, noting that a single AI query is vastly more resource-hungry than a standard Google search. Sandro compares the phenomenon to refrigerators: once they became widespread, people stopped considering older preservation methods and even began misusing fridges for foods that spoil faster inside them. Likewise, if developers only learn to solve problems through AI, they may never develop alternative methods, filling the industry with people who know no tools beyond the “fridge.”

The panel also warns about economic risks. Current AI feels cheap because of heavy investment subsidies, but providers will eventually move to value-based pricing, charging for “man-hours saved.” This could trap organizations in costly dependencies once AI is deeply integrated into core processes. Consultants, they argue, must therefore frame adoption not only around use-case justification but also total cost of ownership, including volatile token-based pricing.

In closing, the hosts underline that AI should be one tool among many. Its convenience is undeniable, but convenience alone is no justification. In low-code environments like Appian, the temptation to lean on AI for speed is strong, yet true transformation still requires creativity, critical analysis, and ownership of solutions. Overuse risks fragile systems and a loss of craft. For now, they agree: AI is powerful and promising, but it must be applied sparingly, thoughtfully, and only where it adds real value.

Episode 29: Expressions

Intro
In this episode of Appian Rocks, Stefan turns the spotlight on one of the most fundamental aspects of Appian development: expressions. Though they often operate behind the scenes, expressions power nearly every part of an Appian application—from interfaces to process models, decision logic to integrations. With the right approach, expressions can elevate a project’s maintainability, performance, and developer experience. But when misused, they can quickly become a source of confusion and technical debt.

TL;DL
Expressions are the lifeblood of Appian applications. In this episode, Stefan explains how to write clean, reusable, and performant expressions, shares practical tips for improving readability and maintainability, and discusses common mistakes that Appian developers should avoid.

On the role of expressions in Appian
Expressions in Appian are not just scripting snippets—they’re integral to building dynamic and flexible applications. Stefan emphasizes the importance of understanding the typed expression language deeply, especially when working with complex data structures. Expressions are used across every layer of an application, which makes writing clean and modular logic not just a best practice, but a necessity for scalability and collaboration.

Writing reusable expression rules
A major theme of the episode is the value of modularity. Stefan encourages developers to think of expression rules like functions: small, focused, and parameterized. Avoiding hardcoded logic and opting for reusable rules makes applications easier to update and test. Clear parameter naming, and avoiding generic rule inputs like ri!input, is also highlighted as critical for long-term maintainability.
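A minimal sketch of what such a small, parameterized rule could look like in SAIL (the rule name, naming prefix, and inputs, rule!CMN_IsOverdue, ri!dueDate, and ri!graceDays, are invented for illustration):

```
/* Hypothetical expression rule: rule!CMN_IsOverdue
 * Small, focused, and parameterized, with descriptive rule inputs
 * (ri!dueDate, ri!graceDays) rather than a generic ri!input. */
a!localVariables(
  /* Date plus integer yields a date in SAIL */
  local!cutoff: ri!dueDate + tointeger(ri!graceDays),
  and(
    not(isnull(ri!dueDate)),
    today() > local!cutoff
  )
)
```

A caller would invoke it as, for example, rule!CMN_IsOverdue(dueDate: local!dueDate, graceDays: 5), keeping the date logic in one testable place instead of duplicating it inline.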

Design and performance best practices
Stefan discusses how poor design choices—like deeply nested logic or repeated inline expressions—can quickly degrade both the performance and readability of applications. Instead of duplicating logic, developers should extract reusable patterns into separate expression rules. He also stresses the importance of minimizing rule chaining and understanding how and when expressions are recalculated, especially in interface contexts where performance can be affected by unnecessary re-evaluation.

Making expressions readable and maintainable
Readability is another key theme. Stefan suggests using tools like a!localVariables() to better structure logic in interfaces and avoid clutter. He cautions against overusing if() when constructs like a!match() or choose() would be clearer and more concise. Commenting logic is encouraged—especially for nested or non-obvious sections—to help both current and future developers navigate the application more effectively.
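As a small illustrative sketch of that advice (the status codes and labels are hypothetical, not from the episode), the contrast between nested if() and a!match() might look like:

```
/* Nested if() chains like this get hard to scan: */
if(ri!status = "NEW", "Open",
  if(ri!status = "IN_REVIEW", "Open",
    if(ri!status = "DONE", "Closed", "Unknown")))

/* The same logic reads more clearly with a!match(): */
a!match(
  value: ri!status,
  equals: "NEW",       then: "Open",
  equals: "IN_REVIEW", then: "Open",
  equals: "DONE",      then: "Closed",
  default: "Unknown"
)
```

The a!match() version makes the mapping from input to result visible at a glance and has an explicit default, which is exactly the kind of self-documenting structure the episode advocates.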

Collaboration and team alignment
Since expressions are touched by many developers over the course of a project, Stefan advocates for team-wide standards and code reviews specific to expression logic. Naming conventions, centralized utility rules, and internal documentation all contribute to making shared codebases more understandable. He emphasizes that expressions are not just technical elements—they’re collaborative artifacts that should reflect collective understanding and intentional design.

Avoiding common pitfalls
The episode wraps up with a discussion of mistakes Stefan frequently sees: expression rules that try to do too much, hardcoded assumptions that limit reuse, and dynamic evaluation bugs caused by lack of context awareness. His advice: keep logic modular, test thoroughly, and never underestimate the power of a well-named rule and a thoughtful comment.