Building Confidence in Customer Journey Data with Quilliup

By Ben Simonds, Thu 06 July 2017, in category Business intelligence

Data Validation, Quilliup

  

Context

Following on from the development of a fairly large and complex QlikView application for analysing customer journeys (see this previous blog post), the MI team at a global private bank faced a growing challenge: ensuring consistency throughout the application, its ETL layers, and across different server environments.

Issues

Previously, most of the team's testing took the form of manual inspection, comparing the reports in the QlikView dashboards against the source data. Whilst this worked for smaller applications, as the customer journey app grew it became increasingly time-consuming to test new features whilst ensuring existing functionality remained unaffected. Without a change in methods, testing would become a bottleneck holding up further development.


Enter Quilliup

With these issues in mind, a Proof of Concept implementation was built with Quilliup to see how it could reduce the burden of manual testing on the team. We focused on ensuring that data was consistent when the application was migrated across different environments (dev, UAT, prod). This was a particular area of interest because of the large number of ETL scripts for different data sources being worked on by different team members. By automating the comparison of our data across the different environments we could spot when one component had been missed out of a migration, or when the front-end of the app hadn’t been updated to reflect new requirements.
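Quilliup handles these comparisons through its own interface, but the underlying idea is easy to illustrate. Below is a minimal Python sketch of the same kind of check, with entirely hypothetical file, column, and KPI names: aggregate a measure per group in each environment, and flag any group where the totals disagree.

```python
import pandas as pd

# Hypothetical extracts of the same fact table from two environments;
# in practice Quilliup queries each source directly.
dev = pd.read_csv("journeys_dev.csv")
uat = pd.read_csv("journeys_uat.csv")

def kpi_mismatches(a, b, group_col, measure_col):
    """Aggregate a measure per group in each environment and
    return only the groups where the totals disagree."""
    left = a.groupby(group_col)[measure_col].sum().rename("dev")
    right = b.groupby(group_col)[measure_col].sum().rename("uat")
    merged = pd.concat([left, right], axis=1).fillna(0)
    return merged[merged["dev"] != merged["uat"]]

# Example: total journey counts per channel should match after a migration.
mismatches = kpi_mismatches(dev, uat, "channel", "journey_count")
if not mismatches.empty:
    print("Environment mismatch detected:")
    print(mismatches)
```

A mismatch in a high-level aggregate like this is usually the first visible symptom that an ETL script was missed during a migration, which is exactly the failure mode we wanted to catch early.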


Tests

For our Proof of Concept we built a small set of around 20 tests, mostly focusing on front-end objects within the QlikView application, plus a few more that performed data integrity checks on our QVD layer. The tests were run after each migration to ensure that the main objects in the QlikView application appeared as expected. Multiple tests were bundled into execution flows so they could be run together automatically. By focusing on high-level KPIs that depended on a broad range of data, relatively few tests could give a good indication of when the data didn't appear as expected.
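For a rough flavour of what the QVD-layer integrity checks and execution flows looked like, here is a sketch in plain Python rather than Quilliup itself, with illustrative table and column names throughout:

```python
import pandas as pd

def no_null_keys(df):
    """Key fields should never be null after the ETL has run."""
    return df["customer_id"].notna().all()

def count_within_tolerance(df, expected, tolerance=0.05):
    """Row counts should stay within 5% of the expected volume."""
    return abs(len(df) - expected) <= expected * tolerance

def no_future_dates(df):
    """Journey dates should never be in the future."""
    return (pd.to_datetime(df["journey_date"]) <= pd.Timestamp.now()).all()

def run_flow(df, checks):
    """A rough analogue of a Quilliup execution flow: run every
    check in sequence and report the failures together."""
    return [name for name, check in checks if not check(df)]

journeys = pd.read_csv("journeys_extract.csv")  # hypothetical QVD export
checks = [
    ("no null customer keys", no_null_keys),
    ("row count within tolerance", lambda df: count_within_tolerance(df, expected=1_000_000)),
    ("no future journey dates", no_future_dates),
]
failed = run_flow(journeys, checks)
print("Failed checks:", failed or "none")
```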

Outcome

By automating much of our post-migration testing, we saved on average a couple of hours per migration that were previously spent manually checking charts and data. This led to earlier and more efficient discovery of bugs and migration errors, as well as increased confidence in our data from users. The ability to export the data behind failed tests from Quilliup also aids faster root cause analysis when errors are found, and gives an easy way to share problems with data providers and other developers.
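Quilliup provides that export natively; in the hypothetical comparison sketch from earlier, the equivalent would be just a line or two:

```python
# Continuing the earlier sketch: if the cross-environment comparison
# failed, persist just the mismatching groups so they can be attached
# to a ticket or sent straight to the data provider.
if not mismatches.empty:
    mismatches.to_csv("failed_test_channel_totals.csv")
    print(f"Exported {len(mismatches)} mismatching groups for analysis.")
```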

To learn more, join us on the 26th of July for a Roundtable breakfast at Tower 42, where we will discuss the Single customer journey and the importance of effectively testing your data in gaining a view you can trust.