The client is a world-leading supplier of fast-moving consumer goods with instantly recognisable and widely trusted brands. Its 2012 turnover exceeded 50,000 million Euros and it employs 172,000 people globally. The client has made major acquisitions in the food industry and is adapting to consumers' increasing desire for healthy living. It is committed to a major sustainable living programme and to reducing the environmental impact of this sustained growth.
To support the sustainable living programme and reduce the environmental impact of IT, the business was centralising all e-mail archiving to a UK-based data centre. Centralisation would also allow the client to meet regulatory obligations and would significantly reduce costs.
Integrating this archiving with a large SharePoint instance posed a performance risk, as the impact was unknown. Furthermore, a large number of third parties were to be involved in the project.
Having already worked for the client, ROQ was engaged as a trusted supplier to provide performance testing expertise and capability.
The project utilised the ROQ Performance Testing framework. Based around the four Ds of Define, Design, Develop and Deploy, it is supported by appropriate test governance. An initial scoping exercise provided a view of the timescales, costs, resources and tools required, along with a suggested format for the engagement.
The team included a WAN emulation specialist and performance testers using the EggPlant performance tool. Planning focused on defining a realistic Transaction Volume Model and identifying the WAN characteristics appropriate to the solution deployment.
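As an illustration of how a Transaction Volume Model feeds load-test design, Little's Law relates the number of concurrent virtual users to the target throughput and the time each scripted iteration takes. The figures and function below are hypothetical, not the client's actual volumes:

```python
import math

# Hypothetical sketch: sizing virtual users from a Transaction Volume Model
# using Little's Law (concurrency = throughput x time per iteration).
def required_vusers(tx_per_hour: float, avg_response_s: float, think_time_s: float) -> int:
    """Virtual users needed to sustain tx_per_hour, given per-iteration time."""
    tx_per_second = tx_per_hour / 3600.0
    iteration_time = avg_response_s + think_time_s  # response + pacing/think time
    return math.ceil(tx_per_second * iteration_time)

# e.g. 9,000 archive retrievals per hour, 2 s response, 30 s think time
print(required_vusers(9000, 2.0, 30.0))  # -> 80
```

In practice the model would list each of the scenario transactions with its own normal and peak hourly volume, and the tool's pacing settings would be derived the same way.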
A key concern was the potentially poor response times that global users might experience due to issues such as latency and bandwidth, so a WAN emulation tool (Shunra) was selected to emulate the network conditions found worldwide. This became the focus of the testing engagement.
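A small piece of arithmetic shows why latency was such a concern: an operation that requires several application-level round trips pays the round-trip time on each one, so a transaction that feels instant on a LAN can take seconds over an intercontinental link. The round-trip counts and latencies below are illustrative, not measured values from the project:

```python
# Illustrative only: the latency component of a response time for a "chatty"
# operation that needs several application-level round trips.
def latency_cost_s(round_trips: int, rtt_ms: float) -> float:
    """Seconds spent purely on network round trips (ignores bandwidth/server time)."""
    return round_trips * rtt_ms / 1000.0

# The same 20-round-trip operation on a LAN (1 ms RTT) versus a
# hypothetical intercontinental link (300 ms RTT):
print(latency_cost_s(20, 1))    # -> 0.02
print(latency_cost_s(20, 300))  # -> 6.0
```

WAN emulation makes this visible before deployment by injecting such latency (and bandwidth limits) into the test network path.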
Seventeen test scenarios were identified and scripts were developed to simulate them, covering for example PST ingestion, auto-archiving, journaling and initial synchronisation. These were used first in isolation, then as part of normal-workload and finally peak-workload test scenarios. In addition, utility scripts to monitor mailbox sizes and the backlog of journalled e-mails were developed, and these form part of the legacy for future testing.
The project overcame several challenges:
1. The mismatch of environment sizes (Exchange, SharePoint and the archiving solution): the transaction volumes generated directly had to be selected carefully. In some instances, alternative methods were used to generate the required load, bypassing the undersized application environment.
2. Delay in the availability of any scaled test environment: in the absence of physical environments, cloud-based environments were developed so that scripting could commence earlier.
3. Meeting aggressive timescales after upstream project delays: development and testing of load-generation scripts were completed in parallel with some execution tasks. A series of normal- and peak-load tests was then completed, with tuning and defect resolution between cycles.
4. Answers to key questions about how the solution had performed under load were scattered across many log files: an additional tool (Splunk) was employed to enable analysis of these files.
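To illustrate the kind of cross-log aggregation that Splunk was used for, the sketch below does the same job in plain Python: it extracts response times scattered across log lines and summarises them per transaction. The log format and field names here are invented for the example, not taken from the project:

```python
import re
from collections import defaultdict

# Invented log format for illustration: "... tx=<name> duration_ms=<int> ..."
LINE = re.compile(r"tx=(?P<tx>\w+)\s+duration_ms=(?P<ms>\d+)")

def summarise(log_lines):
    """Collect durations per transaction across many log lines and
    return the mean duration (ms) for each transaction name."""
    durations = defaultdict(list)
    for line in log_lines:
        m = LINE.search(line)
        if m:
            durations[m.group("tx")].append(int(m.group("ms")))
    return {tx: sum(v) / len(v) for tx, v in durations.items()}

logs = [
    "2013-04-01 10:01:02 tx=archive duration_ms=120",
    "2013-04-01 10:01:05 tx=retrieve duration_ms=300",
    "2013-04-01 10:01:09 tx=archive duration_ms=180",
]
print(summarise(logs))  # -> {'archive': 150.0, 'retrieve': 300.0}
```

The advantage of a tool such as Splunk over ad-hoc scripts is that it indexes the many heterogeneous log files once and then lets testers run this style of aggregation interactively across all of them.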
A key deliverable was a detailed test report covering the completed testing cycles, defects raised and resolved, and details of the system's behaviour under load, along with the risks that had been mitigated and those that remained.
ROQ enabled the client to properly understand the performance risks faced by the project and then mitigated them successfully within the constraints of the test environment. The remaining risks were described clearly to allow informed decision-making. The iterative nature of the testing enabled the client and the application team to fix defects and re-test, ensuring a complete picture of performance under load was available.
Additionally, the information provided about the new application's behaviour under load, and its impact on existing systems, will be vital to the ongoing support and monitoring of the system. Effective and proactive management will significantly reduce costs in the long run.
Importantly, the WAN emulation tool gave an early view on expected response times for the client’s global users with a variety of different WAN characteristics. This approach was new to the client and now gives a performance baseline for future deployments.
The performance testing engagement left a legacy of re-usable performance test and utility/monitoring scripts that can be used for future regression testing when application or database upgrades are planned.