Why there must be a change in performance testing practice to survive ever-shortening delivery windows in the current IT landscape.
With the advent of agile, DevOps and continuous delivery, we are dealing with more frequent releases and less time between release cycles. Functional testing has adapted, and the rise in adoption of functional automation has provided the backbone and capability for this. Performance testing and Non-Functional Testing (NFT) have struggled to keep pace. Performance and NFT engagement still tends to occur late in delivery, and execution is inevitably out of sprint, carried out in a big-bang style at the end of the development life-cycle. Certainly not agile!
If the same approach continues, the value of performance and load testing will diminish, as the return on investment is already limited and shrinking. We certainly don't want that. We want to deliver maximum value to our clients and organisations, whose customers have more demanding expectations than ever before.
At worst, poor performance can cripple and crash your system, leading to commercial and reputational damage. Millions of pounds or dollars are lost each year through the poor performance of external-facing channels: even a 0.5-second degradation in response time can significantly damage revenue. Slow performance of internal applications can be just as debilitating for your business operations and will impact your bottom line.
There are two things that will elicit performance improvement:
- Mindset change: Scrum masters, programme managers and teams need to be educated, informed and coaxed into including performance and non-functional requirements in the software delivery from the beginning. Shift left and decrease go-live risk. This change in thinking needs to be understood and advocated by the management of each organisation or project***. I will go into the reasons for this in another blog post.
- Performance and load test tool technology improvement: Whilst API-level tools have improved, traditional e2e (UI/HTTP-type) load testing still relies heavily on HTTP and 20-year-old technology that is not conducive to fast-turnaround change. Things have moved on, and client-side performance and behaviour are more important than ever before. We cannot rely on just hitting the server and gathering server-side performance statistics. I will go into the reasons for this in another blog post. We now need UI-type load testing tools that can script and execute in-sprint. Ideally, we would be able to re-use the script collateral created by the other testing activities too.
*** A Quality Assurance Framework not just for testing but for the organisation, operations and processes will go hand-in-hand with this approach.
We need to move away from viewing performance testing as an end in itself and focus on Performance Engineering. How do we do that? Below is an initial check-list of what you can do to bring performance engineering to the fore in your organisation or project:
- Static design review: The solution architect often becomes a friendly face at this stage. Your review of the design for NFT purposes will highlight flaws (every design review I have carried out has led to improvements in the design). You raise these with the architect, who is then eternally grateful for your quiet and thoughtful intervention.
- User story engagement and uplift: If a user story says, "I would like actor A to do task B so that action C happens", why not extend the thought process that bit further to include NFRs? How long should task B take so that C happens? Beyond this, at Feature and Epic level, you can also easily include other non-functional aspects: Security, Resilience (HA and DR), Compatibility, etc.
- Performance focused unit testing: Is the SQL or code optimised? Have explain plans or code analysis been run? Has it been peer reviewed and does it follow best practice? Have configurations been reviewed?
- Component-level performance testing: Performance and load testing at the API level (web services, REST APIs and SQL/stored procedures) before any UI is available. This stage also helps drive out the further development of stubs and mock services.
- Performance testing on small-scale environments: This is not load testing, but it starts providing information about the e2e behaviour of the application alongside the other testing activity that takes place. Ideally we would have trend analysis capability and the potential to build out the continuous delivery pipeline to automate deployment to pre-prod.
- e2e load testing: This is the single activity that many non-performance engineers equate with performance or load testing. Ideally, it would take place during each sprint and not just at the end, with each new sprint adding to the previous collateral. Testing in-sprint makes it much easier to pinpoint bottlenecks and performance issues.
- Load testing at the platform level (or higher): What happens if the delivery runs on a shared platform or infrastructure? Wouldn't it be great to drop load testing collateral onto the platform after a sprint and run it against the existing platform load, or alongside other projects running at the same time, to understand the bigger picture and the impact on the wider platform?
- Monitoring in Production: How do we monitor that customer experience at the UI is not being impacted by performance issues?
- Continuous Improvement: What can we do better next time?
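To make the performance-focused unit testing item concrete, here is a minimal sketch in Python of a latency-budget check that could sit alongside ordinary unit tests. The `build_report` function and the 50 ms budget are hypothetical stand-ins for whatever code and threshold apply in your delivery.

```python
import statistics
import time

def build_report(rows):
    # Hypothetical unit under test: stands in for whatever function
    # you want to keep inside a latency budget.
    return sorted(rows, key=lambda r: r["id"])

def median_latency_ms(fn, args, runs=50):
    """Run fn repeatedly and return the median latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

rows = [{"id": i} for i in range(1000, 0, -1)]
median_ms = median_latency_ms(build_report, (rows,))
# Assumed budget of 50 ms: fail the build if the unit regresses past it.
assert median_ms < 50.0, f"latency budget exceeded: {median_ms:.2f} ms"
```

Run in CI, a check like this turns "is the code optimised?" from a review question into an automated gate.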
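The component-level performance testing item can be sketched in a similar way. The example below drives concurrent requests against a stubbed service and reports a 95th-percentile latency; in a real component test, `call_service` would make an HTTP request to the web service or REST API under test (the names and numbers here are illustrative).

```python
import concurrent.futures
import time

def call_service(payload):
    # Stand-in for the component under test; in a real component test
    # this would be an HTTP call to the web service or REST API.
    time.sleep(0.01)  # simulate ~10 ms of server-side work
    return {"status": 200, "echo": payload}

def run_load(fn, users=20, requests_per_user=5):
    """Fire concurrent requests and return the sorted latencies in ms."""
    def one_call(i):
        start = time.perf_counter()
        response = fn(i)
        assert response["status"] == 200
        return (time.perf_counter() - start) * 1000.0

    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(one_call, i)
                   for i in range(users * requests_per_user)]
        latencies = sorted(f.result() for f in futures)
    return latencies

lats = run_load(call_service)
p95 = lats[int(len(lats) * 0.95) - 1]
print(f"p95 latency: {p95:.1f} ms over {len(lats)} requests")
```

Because it needs no UI, a harness like this can run in-sprint as soon as the API exists, and the stub doubles as the mock service the checklist mentions.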
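For the trend-analysis capability mentioned under small-scale environment testing, a simple baseline comparison is often enough to start with. This sketch flags any transaction whose p95 latency has regressed beyond a tolerance against a stored baseline; the transaction names, timings and 10% tolerance are assumptions for illustration.

```python
# Hypothetical baseline figures: in practice these would be persisted
# between CI runs (e.g. a file or database keyed by build number).
baseline_p95_ms = {"login": 180.0, "search": 240.0, "checkout": 310.0}

def regressions(current_p95_ms, baseline, tolerance=0.10):
    """Return transactions whose p95 grew more than `tolerance` vs baseline."""
    flagged = {}
    for name, current in current_p95_ms.items():
        previous = baseline.get(name)
        if previous is not None and current > previous * (1 + tolerance):
            flagged[name] = (previous, current)
    return flagged

# Figures from the latest sprint's run (illustrative):
current = {"login": 175.0, "search": 290.0, "checkout": 315.0}
print(regressions(current, baseline_p95_ms))
```

With 240 ms as the search baseline, a current p95 of 290 ms exceeds the 10% tolerance and is flagged, while checkout's smaller drift passes; wiring this into the pipeline gives the sprint-over-sprint trend visibility the checklist calls for.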
A change of mindset is required to fully adopt the practices above. Tool and technology limitations still constrain the latter items on the list: API load test tools can fit into agile, but traditional UI/HTTP-type load test tools still struggle to deliver within each sprint. I have been a strong advocate of raising the profile of Performance Engineering for many years; now that agile-friendly UI-type load testing tool capability is available, we need to grasp it and truly start delivering full value as performance engineers.
Baz Sahathevan is a co-founder at ButlerThing, a company that enables quality for great customer experience by putting performance back into testing.