QlikView has established itself as one of the most comprehensive business discovery and analytics platforms on the modern BI market. With its in-memory processing and associative engine, it allows developers to build robust, interactive dashboards for their end-users. However, a poorly performing application, with slow chart load times or an unclear user interface, will often leave end-users frustrated and cause them to quickly lose interest in the product. Below I have listed a handful of performance and optimisation best practices for Qlik developers to keep in mind when building applications.
Deciding whether to keep the data model normalised (eliminate redundant data) or de-normalised (introduce redundancy by joining or grouping data) is a decision that can greatly affect the performance of an application. More often than not, a star schema is the most memory- and CPU-efficient design, and the one that maximises the performance of Qlik apps.
However, there are cases where it may be best to join tables in the script (i.e. de-normalise). One example is when chart expressions contain fields from multiple tables: Qlik then has to perform the join in memory and generate a virtual table over which to calculate the expression, which can slow down performance. Another is when you have two very large fact tables, e.g. Order Headers and Order Details, which are often better combined into a single fact table in the script.
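As a sketch, the two fact tables mentioned above could be combined in the load script with a join. The table names, field names, and QVD paths here are hypothetical:

```
// Load the order headers as the base fact table.
OrderFacts:
LOAD OrderID,
     CustomerID,
     OrderDate
FROM OrderHeaders.qvd (qvd);

// Join the line-level detail onto it in the script, so the join
// is done once at reload time rather than at chart-calculation time.
LEFT JOIN (OrderFacts)
LOAD OrderID,
     ProductID,
     Quantity,
     UnitPrice
FROM OrderDetails.qvd (qvd);
```

Note that joining header-level values onto detail rows duplicates them, so expressions such as sums of header-level amounts may need to account for that.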
A developer has to weigh the pros and cons of each approach, considering both how the data will be calculated and how it will be presented in the front-end of the application.
Synthetic keys are Qlik’s way of managing composite keys (tables linked by more than one shared field) and are often a symptom of a poorly designed data model. Qlik may or may not handle them gracefully, and it may use an excessive amount of time and/or memory attempting to do so. Developers should therefore work to remove them; their removal is one part of ensuring a good data model is built.
There are several methods of removing synthetic keys: renaming or dropping the duplicated fields so that only the intended key field links the tables, concatenating the shared fields into a single composite key, or qualifying field names so that unintended links are broken.
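The composite-key approach can be sketched as follows. Here two hypothetical tables share both OrderID and LineNo, which would otherwise produce a synthetic key; a single concatenated key replaces the pair of links:

```
// Build one composite key from the two shared fields.
// AutoNumber() converts the string key to a compact integer.
Orders:
LOAD AutoNumber(OrderID & '|' & LineNo) AS %OrderLineKey,
     OrderID,
     LineNo,
     Amount
FROM Orders.qvd (qvd);

// The second table keeps only the composite key, not the
// original OrderID and LineNo fields, so no synthetic key forms.
Shipments:
LOAD AutoNumber(OrderID & '|' & LineNo) AS %OrderLineKey,
     ShipDate,
     Carrier
FROM Shipments.qvd (qvd);
```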
Circular references create ambiguous results. QlikView deals with them by marking one of the tables as loosely coupled, which breaks the loop between the tables. In the dashboard, this prevents selections made in one field from propagating through to the fields of the loosely coupled table, causing confusion for the end-user. Circular references can be removed by renaming one of the fields that form the loop.
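Renaming is done with AS in the load statement. In this hypothetical sketch, a Budget table shares both Region and Month with other tables and closes a loop; renaming one of the fields breaks it:

```
// Rename Region so that Budget links to the rest of the model
// through Month only, removing the circular reference.
Budget:
LOAD Region AS BudgetRegion,
     Month,
     BudgetAmount
FROM Budget.qvd (qvd);
```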
If you have a straight table in your dashboard that could potentially contain millions of rows of data, you should consider applying a calculation condition that requires users to make selections before the chart is rendered, limiting the row count. After applying a calculation condition, be sure to set an error message that explains why the chart has not loaded, to avoid confusing the user.
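A calculation condition is an expression entered in the chart's properties that must evaluate to true before the chart is calculated. A typical condition might look like this (the field name and threshold are hypothetical):

```
// Calculation condition: only render the table once selections
// have reduced the data to fewer than 100,000 distinct orders.
Count(DISTINCT OrderID) < 100000
```

The custom error message shown while the condition is unfulfilled can then say something like "Please make selections to reduce the data below 100,000 orders."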
QlikView applications can perform slowly if chart expressions transform data away from the way it was loaded, for example with nested If() statements or string manipulation, because QlikView has to recalculate those transformations every time a user makes a selection. To speed up the application, such transformations should be moved into the script.
In certain situations, applying these changes in the script may not be possible. In these cases, the developer can utilise set analysis instead: create the necessary flag fields in the script, then refer to them in set analysis within chart expressions.
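The flag-field pattern can be sketched like this, with hypothetical table and field names. The flag is computed once at reload time, and the set analysis expression then filters on it cheaply:

```
// In the script: pre-compute a 1/0 flag instead of evaluating
// an If() inside every chart expression.
Sales:
LOAD *,
     If(Year(OrderDate) = Year(Today()), 1, 0) AS IsCurrentYearFlag
FROM Sales.qvd (qvd);

// In a chart expression: use the flag in set analysis to
// restrict the aggregation to current-year rows.
// Sum({<IsCurrentYearFlag = {1}>} SalesAmount)
```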
Again, I have only covered a small selection of performance best practices in this post. Please comment below with your own suggestions for optimising performance of QlikView applications.