Debunking the Myths in Performance Testing (PT)
In my next couple of blogs, I will cover the key myths and challenges of performance testing before moving on to deeper topics. This post focuses specifically on debunking the myths in performance testing.
MYTH: Performance problems can usually be fixed by simply plugging in extra hardware.
FACT: Performance issues are not always hardware-related. Performance degradation can happen for a variety of reasons, such as improperly configured servers, badly written application code, or an unscalable product design and architecture. For example, if the maximum pool size of the database connection pool is set to 10, then no more than 10 simultaneous connections to the database are possible, no matter how powerful the server is.
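A minimal sketch of that bottleneck, using a hypothetical in-process pool (the class, names, and numbers are illustrative, not any real driver's API): however many worker threads the hardware can run, at most 10 connections are ever in use at once.

```python
import threading
import time

class ConnectionPool:
    """Illustrative pool capped at max_size concurrent connections."""

    def __init__(self, max_size):
        self._slots = threading.Semaphore(max_size)  # caps concurrency
        self._lock = threading.Lock()
        self._in_use = 0
        self.peak_in_use = 0

    def acquire(self):
        self._slots.acquire()  # blocks once the pool is exhausted
        with self._lock:
            self._in_use += 1
            self.peak_in_use = max(self.peak_in_use, self._in_use)

    def release(self):
        with self._lock:
            self._in_use -= 1
        self._slots.release()

def worker(pool):
    pool.acquire()
    time.sleep(0.05)  # simulate a short database query
    pool.release()

pool = ConnectionPool(max_size=10)
threads = [threading.Thread(target=worker, args=(pool,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(pool.peak_in_use)  # never exceeds 10, regardless of thread count
```

Fifty threads compete for the pool, yet the semaphore guarantees the peak never goes above 10; bigger hardware would only let the excess threads wait faster.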
MYTH: PT is all about learning and using a load testing tool.
FACT: The load testing tool, though important, is only one component of the PT life cycle. Other critical aspects include:
* Defining clear objectives of the PT exercise.
* Selecting scenarios that are critical to the business.
* Creating correct test data.
* Setting up the right test environment that is representative of your production topology.
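For instance, the first two aspects above can be made concrete by writing objectives down as measurable pass/fail criteria per business-critical scenario, before any tool is involved. This is an illustrative sketch; the scenario names, weights, and thresholds are hypothetical:

```python
# Hypothetical PT objectives: each business-critical scenario gets a
# traffic weight and measurable targets (p95 latency, error rate).
test_plan = {
    "login":    {"weight": 0.2, "max_p95_ms": 800,  "max_error_rate": 0.01},
    "search":   {"weight": 0.5, "max_p95_ms": 1200, "max_error_rate": 0.01},
    "checkout": {"weight": 0.3, "max_p95_ms": 2000, "max_error_rate": 0.005},
}

def evaluate(scenario, p95_ms, error_rate):
    """Compare measured results against the scenario's stated objectives."""
    goal = test_plan[scenario]
    return p95_ms <= goal["max_p95_ms"] and error_rate <= goal["max_error_rate"]

print(evaluate("search", 950, 0.004))   # within objectives -> True
print(evaluate("checkout", 2600, 0.0))  # p95 too slow -> False
```

With objectives written this way, the tool merely generates load; whether the exercise passed or failed is decided by the plan.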
The effort to execute a PT engagement is no different from that of any other testing engagement. PT is complex, and it needs specialized skill sets and “performance aware” engineers.
MYTH: PT is just a subset of functional testing.
FACT: PT requires a different mindset, as the main objective is not to attain maximum application coverage and file bugs, but to test for response time, stability, and scalability. The desired skill set of a PT engineer is likewise unique and specialized. It is very important to educate all stakeholders of a PT engagement on its objectives and not treat it as part of functional testing. This education will help avoid wrong expectations of the PT exercise and the PT team.
MYTH: PT can ONLY be done towards the end of the testing life cycle.
FACT: PT should be conducted once the application is functionally stable, but it should not be left until the end of the release, since performance bugs found late can be quite costly to fix. Rather, it should be done incrementally: establish a baseline upfront, then run incremental tests and compare them against the baseline to see whether performance is improving or deteriorating.
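A simple sketch of that incremental comparison, with hypothetical baseline numbers and a 10% tolerance: each new run's p95 response times are checked against the stored baseline, and any scenario that has degraded beyond the tolerance is flagged for investigation.

```python
# Hypothetical baseline p95 response times (ms) captured upfront.
baseline_p95_ms = {"login": 700, "search": 1000, "checkout": 1800}

def regressions(current_p95_ms, tolerance=0.10):
    """Return scenarios whose p95 degraded more than `tolerance` vs baseline."""
    flagged = {}
    for scenario, base in baseline_p95_ms.items():
        current = current_p95_ms[scenario]
        if current > base * (1 + tolerance):
            flagged[scenario] = (base, current)
    return flagged

# A run after a new build: search has become noticeably slower.
run = {"login": 720, "search": 1400, "checkout": 1750}
print(regressions(run))  # {'search': (1000, 1400)}
```

Running this after every build turns performance into a tracked trend rather than a last-minute surprise.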
I have tried to cover some key myths here. Please do share the myths you have seen in your experience.
In my next blog, I will move on to the challenges associated with PT, including those specific to n-tier web applications and SaaS-based applications.