Update: Only a few weeks after this post, Appian released version 23.4 with a built-in translation feature. While this post might still be interesting because of some remarkable implementation details, I suggest using translation sets instead.
According to the story, a united human race speaking a single language and migrating eastward comes to the land of Shinar. There they agree to build a city and a tower with its top in the sky. Yahweh, observing their city and tower, confounds their speech so that they can no longer understand each other, and scatters them around the world.
Supporting multiple languages in software is generally a twofold story. The first part, internationalisation, is about the language used for labels and other text. The second part, localisation, is about adjusting formats such as dates and numbers to the locale of the user.
The Appian platform supports localisation for 22 languages (as of version 23.3) and adjusts input fields and the visual representation of dates and numbers automatically. This includes the reading direction, both left-to-right and right-to-left.
But what about all the labels and other text? This is where you are on your own: there is zero support built into the Appian platform. Let’s take a look at the common solution approaches and an idea I came up with.
To store translated strings in a database, we need a table that holds a unique text key, the ISO language code, and the translated text.
To display such a value, you need a query expression that fetches the text for a given key and ISO code. The ISO code can be derived from the locale stored in the account of the logged-in user.
For each text that you display on a user interface, you need to define a unique key and call that query expression with it.
As this means a separate query for each text, you might instead preload a list of translations into a local variable and fetch the values from there. This reduces the number of queries to one.
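The preloading idea can be sketched in Python for illustration (in Appian you would hold the query result in a local variable instead). Here, `rows` stands in for the result of that single database query; the key names and sample data are made up:

```python
# Sketch of the preloading approach: one query returns all translations
# for the user's language, then every label is resolved in memory.
rows = [
    {"key": "app.title", "iso": "de", "text": "Meine Anwendung"},
    {"key": "app.save", "iso": "de", "text": "Speichern"},
]

# Build a dictionary once per interface load ...
translations = {row["key"]: row["text"] for row in rows}

# ... and resolve each label locally instead of querying again.
def tl(key, default=""):
    """Return the translated text for `key`, or a fallback."""
    return translations.get(key, default)

print(tl("app.save"))          # Speichern
print(tl("app.unknown", "?"))  # ?
```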
An alternative is to store translations in documents instead of the database. You use one file for each language, and the files are simple resource files you already know from the import customisation files used during deployments.
You load the file based on user locale using the “Load Resource Bundle” plugin. Then use the index() function to fetch the text for a key.
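For illustration, here is roughly what the file-based lookup does, again sketched in Python. The file content mimics the simple key=value resource format; `bundle_de` stands in for a document loaded via the plugin, and the keys are invented:

```python
# Sketch of the file-based approach: one resource file per language,
# in the key=value format known from import customisation files.
bundle_de = """
# German labels
app.title=Meine Anwendung
app.save=Speichern
"""

def load_bundle(content):
    """Parse key=value lines into a dictionary, skipping blanks and comments."""
    entries = {}
    for line in content.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            entries[key.strip()] = value.strip()
    return entries

bundle = load_bundle(bundle_de)
print(bundle["app.save"])  # Speichern
```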
From my perspective, there are a few problems with both approaches.
One is adding and maintaining translations. For a larger application, this quickly becomes a serious issue, simply because of the number of required texts multiplied by the number of supported languages, and on top of that there is constant change. With the file-based approach, the files grow over time, so you start to partition them and end up loading multiple files in your interfaces.
And you have to make sure that people create keys in a consistent way. Without tight quality assurance, this quickly becomes a giant mess.
Doing it Right
“But Stefan, you tell me that all approaches are bad! What do I do now?”
In my applications, I translate any text to any language using the following code snippet:
rule!PFX_TL("This is the text I want to get translated")
That’s it! Really! I do not maintain a database table or a file, and I do not query a table or load a file into a local variable. Nothing!
“Any sufficiently advanced technology is indistinguishable from magic.” Arthur C. Clarke
4 simple steps
- Create a unique key combining the user locale, the application prefix and the hash value of the text.
- Try to find that key in the in-memory cache provided by the text cache plugin.
- Try to find that key in a database table.
- Ask a translation web service (Google, Azure, DeepL, OpenAI) for a translation.
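The four steps above can be sketched as follows. This is an illustrative Python model, not the actual Appian implementation: the two dictionaries stand in for the text cache plugin and the database table, and `call_translation_service` is a hypothetical stand-in for the Google/Azure/DeepL/OpenAI call.

```python
import hashlib

cache = {}     # stands in for the in-memory text cache plugin
database = {}  # stands in for the translation database table

def call_translation_service(text, locale):
    """Hypothetical web-service call; returns a dummy translation."""
    return f"[{locale}] {text}"

def make_key(locale, prefix, text):
    """Step 1: unique key from user locale, application prefix and text hash."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return f"{locale}:{prefix}:{digest}"

def translate(text, locale, prefix="PFX"):
    key = make_key(locale, prefix, text)
    if key in cache:               # step 2: in-memory cache
        return cache[key]
    if key in database:            # step 3: database table
        cache[key] = database[key]
        return cache[key]
    translated = call_translation_service(text, locale)  # step 4
    database[key] = translated     # in Appian: written via stored procedure
    cache[key] = translated
    return translated

print(translate("Save", "de"))  # [de] Save
```

After the first call, every further request for the same text and locale is served from the cache without touching the database or the web service.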
When I introduce new text in my application, the translation web service creates the required translation, which is then stored in the database and in the cache. To store the values in the database, I use a stored procedure as a trick to be able to modify data during expression evaluation.
Sorry, no magic here. But Arthur wrote a few highly recommended books like “2001: A Space Odyssey” and “Rendezvous with Rama”.
In my performance test on a base-level cloud environment, I see around 2 milliseconds per translated text when fetched from the cache. I consider this to be the norm, as multiple users share the same translations.
Automatic translations are not perfect. So I created a maintenance interface.
Just search for your text and change it for the better.
I created an export process that dumps all translations into a file, and an import process that takes such files and writes their content back to the database.
I built this in a way so that it can support any number of applications in the same environment. The application prefix is part of the key and added to the database table for easy export/import of translations per application.
When combining static text with dynamic values, we use placeholders that are replaced by the actual values after translation, because sentence structure varies between languages.
The translation services I tested leave placeholder values like “[%1]” untouched. My translation expression can therefore take a list of dynamic values and replace the numbered placeholders with their corresponding values.
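A minimal sketch of that substitution step in Python: `translate` below is just a dummy that leaves the text unchanged, standing in for the actual translation expression, and the example sentence is invented.

```python
def translate(text, locale):
    """Dummy stand-in for the translation step; placeholders
    like [%1] pass through translation services untouched."""
    return text

def tl(text, locale, values=()):
    """Translate first, then replace the numbered placeholders,
    because word order differs between languages."""
    translated = translate(text, locale)
    for i, value in enumerate(values, start=1):
        translated = translated.replace(f"[%{i}]", str(value))
    return translated

print(tl("Order [%1] was shipped on [%2].", "de", ("4711", "2023-11-24")))
# Order 4711 was shipped on 2023-11-24.
```

Translating the whole sentence with placeholders intact, and substituting only afterwards, is what keeps the dynamic values in the right position regardless of the target language's word order.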
You ask for code? Sorry, this time I cannot share the code with you. But I included all the tricks and details to get this up and running. And when you start developing something similar, I am happy to support you and share experiences.
With localisation and internationalisation both finally solved, let’s make our clients happy. Did you know that Switzerland officially has four languages? And to get into business with the public sector there, you need to support all of them.
Please let me know your thoughts.
Keep rocking the digital transformation! In any language!