Why we shouldn’t leave Performance Testing until the last minute

By Samuel Ferguson 30/04/2019

Because it will cost the organisation a fortune!

‘Shift Left’ – test early, test often, fail fast, fail early, fix fast, fix cheaply.

As Performance Test Engineers we have encountered many organisations that wait to Performance Test until just before go-live, or just before the end of a Sprint.

Why this is a bad idea:

  • Performance Defects found late in the SDLC are more expensive to fix
  • Performance Defects found late in the SDLC can take a long time to fix (especially those root-caused to the architecture)
  • Delayed go-live dates create reputational risk
  • Load Models get cut back, reducing coverage
  • The quality of the Performance Tests can be compromised

To reach a Definition of 'DONE' in the Agile world, the delivered software needs to be both Functionally and Non-Functionally tested. Without this we are cheating the Agile methodology and losing the key benefits it provides.

But why do Organisations wait until the last minute for Performance testing…

  • Product Owners and Scrum Masters aren’t on-board with early Performance Testing
  • An ineffective Performance Testing Approach is used
  • The Test Tool selection is not time-efficient for early Performance Testing

How do we resolve this issue?

1. Gather the Non-Functional Requirements at the Product Backlog stage, during high-level analysis of the User Stories, then apply them to the User Stories' Acceptance Criteria; for example, 'search results return within two seconds at the 95th percentile under the expected peak of 500 concurrent users'.

2. Ensure that the Performance Test Approach fits seamlessly into the Agile methodology and finds performance defects early. Below are some performance testing activities within an Agile life-cycle.

Product Backlog Phase

  • Drive Performance Acceptance Criteria (NFRs) from high-level User Stories
  • Develop a Performance Test Plan

Active Sprint

  • Define Response Times within User Stories
  • Recommend best practice for performance
  • Build and execute API tests to ensure that single-user response times meet the Acceptance Criteria
  • Build and execute API tests to ensure that response times at load meet the Acceptance Criteria (see the sketch after this list)
  • Build / update GUI Performance Scripts in line with development, rather than correlation / network-layer scripts, which are not Agile-friendly
  • Execute the Performance Tests as soon as the Functional tester has signed off
  • Analyse the execution report against the Performance Acceptance Criteria
  • Execute integration and regression Performance Tests to ensure Performance Defects have not been introduced into the wider system
  • Feed back to the team on the success or failure of the Performance Tests and the lessons learnt
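
To make the API bullets above concrete, here is a minimal sketch in plain Java (11+) that asserts a single-user response time and then the 95th percentile under concurrent load. The endpoint URL, the 2-second criterion, the 50-user load and the percentile measure are illustrative assumptions rather than figures from this article; in practice a dedicated load tool would generate far more realistic traffic.

    // Illustrative sketch only: endpoint, 2s SLA, 50 users and p95 are assumptions.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ApiResponseTimeCheck {

        private static final URI ENDPOINT = URI.create("https://example.com/api/search"); // assumed endpoint
        private static final Duration SLA = Duration.ofSeconds(2);                        // assumed Acceptance Criterion

        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        // Time a single request in milliseconds, failing on a non-200 status.
        static long timeOneRequest() throws Exception {
            HttpRequest request = HttpRequest.newBuilder(ENDPOINT).GET().build();
            long start = System.nanoTime();
            HttpResponse<Void> response = CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (response.statusCode() != 200) {
                throw new AssertionError("Unexpected status: " + response.statusCode());
            }
            return elapsedMs;
        }

        public static void main(String[] args) throws Exception {
            // Single-user check: one request must meet the Acceptance Criterion.
            long singleUserMs = timeOneRequest();
            System.out.println("Single user: " + singleUserMs + " ms");
            if (singleUserMs > SLA.toMillis()) {
                throw new AssertionError("Single-user response time breached the SLA");
            }

            // Load check: 50 concurrent users, assert the 95th percentile.
            int users = 50;
            ExecutorService pool = Executors.newFixedThreadPool(users);
            List<Future<Long>> futures = new ArrayList<>();
            for (int i = 0; i < users; i++) {
                futures.add(pool.submit(ApiResponseTimeCheck::timeOneRequest));
            }
            List<Long> timings = new ArrayList<>();
            for (Future<Long> f : futures) {
                timings.add(f.get());
            }
            pool.shutdown();

            Collections.sort(timings);
            long p95 = timings.get((int) Math.ceil(timings.size() * 0.95) - 1); // nearest-rank p95
            System.out.println("95th percentile under load: " + p95 + " ms");
            if (p95 > SLA.toMillis()) {
                throw new AssertionError("95th-percentile response time breached the SLA at load");
            }
        }
    }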

3. Drive load testing at the API layer and unit test layer as early as possible. Where this is not possible, or to supplement it, a useful 'Shift Left' method is to re-use automation test scripts to drive load via the GUI (if the tool selection allows it). This saves a lot of time writing additional scripts, and it exercises the client side for memory, CPU or data leaks that are usually missed in correlation / network-layer execution, as sketched below.
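
As one sketch of this re-use idea, assuming a Java / Selenium stack (the URL and iteration count are hypothetical): a functional journey is repeated while true client-side load times are read from the browser's Navigation Timing API, and the JS heap is watched for a leak trend. Note that performance.memory is a non-standard, Chrome-only API, and a real leak check would compare many more iterations.

    // Illustrative sketch only: URL and iteration count are assumptions;
    // requires chromedriver on the PATH.
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class GuiClientSideCheck {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            JavascriptExecutor js = (JavascriptExecutor) driver;
            try {
                for (int i = 1; i <= 10; i++) {            // repeat the functional journey
                    driver.get("https://example.com/app"); // assumed application URL

                    // True client-side page load time from the Navigation Timing API.
                    long loadMs = (Long) js.executeScript(
                        "return performance.timing.loadEventEnd - performance.timing.navigationStart;");

                    // JS heap size (non-standard, Chrome-only): a steadily rising value
                    // across identical iterations suggests a client-side memory leak.
                    Number heapBytes = (Number) js.executeScript(
                        "return performance.memory ? performance.memory.usedJSHeapSize : 0;");

                    System.out.printf("Iteration %d: load %d ms, JS heap %d bytes%n",
                            i, loadMs, heapBytes.longValue());
                }
            } finally {
                driver.quit();
            }
        }
    }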

4. Save time by running the tests automatically, which means the suite must be tightly integrated into the CI/CD process. After each code check-in, performance tests are then executed in local environments alongside the functional and unit tests, as in the sketch below.
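
As one way to wire this in, assuming JUnit 5 (the class name, the timed call and the 2-second budget are hypothetical), performance assertions can be tagged so the CI pipeline selects them on every check-in alongside the unit tests, for example via a Maven Surefire or Gradle filter on the "performance" tag:

    // Illustrative sketch only: names and budget are assumptions.
    import static org.junit.jupiter.api.Assertions.assertTimeout;

    import java.time.Duration;
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class CheckoutPerformanceTest {

        @Test
        @Tag("performance") // lets CI select or skip performance tests per pipeline stage
        void checkoutCompletesWithinBudget() {
            // assertTimeout fails the build if the block exceeds the budget,
            // surfacing a performance defect at check-in time rather than at go-live.
            assertTimeout(Duration.ofSeconds(2), () -> {
                // Hypothetical call into the code under test:
                // new CheckoutService().placeOrder(sampleBasket());
            });
        }
    }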

Turbo* offers a viable solution for early performance testing, with the following key features:

  • Concurrent Functional and Performance testing for Mobile (TurboAppium), Web Browser (TurboSelenium), Desktop Application (TurboWinium), API (TurboTestAssured) and Citrix (TurboCitrix)
  • Functional scripts can be executed at load, giving re-usability and time savings
  • Drives the execution from the GUI to find leaks and provide true client-side response times