- Evaluate against performance criteria
- Assess production readiness
- Find the source of performance problems
- Compare performance characteristics of multiple systems or system configurations
- Support system tuning
- Find throughput levels (a minimal measurement sketch follows this list)
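As a concrete illustration of the last item, here is a minimal sketch of measuring sustained throughput against an endpoint. The URL, worker count, and test duration are hypothetical values chosen for illustration only:

```python
# Minimal throughput-measurement sketch. TARGET_URL, WORKERS, and
# DURATION_SECS are hypothetical values chosen for illustration.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://test-env.example.com/health"  # hypothetical endpoint
WORKERS = 10           # concurrent virtual users
DURATION_SECS = 30     # length of the measurement window

def hit_endpoint() -> bool:
    """Issue one request; count it only if it succeeds."""
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def measure_throughput() -> float:
    """Drive the endpoint with WORKERS concurrent requests and return
    the sustained rate of successful requests per second."""
    deadline = time.monotonic() + DURATION_SECS
    successes = 0
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        while time.monotonic() < deadline:
            batch = [pool.submit(hit_endpoint) for _ in range(WORKERS)]
            successes += sum(f.result() for f in batch)
    return successes / DURATION_SECS

if __name__ == "__main__":
    print(f"Sustained throughput: {measure_throughput():.1f} requests/sec")
```

A dedicated load-testing tool would generate and report this for you, but even a sketch like this turns "find throughput levels" into a concrete, measurable objective.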
Key objectives for performance testing
Key performance-testing objectives should be defined early in the project – preferably shortly after kickoff, while detailed requirements are being defined and finalized. But it is also important – just as with all key project requirements – to revisit these objectives periodically and ask team members and the customer whether any new performance-testing objectives should be added.
It’s a good idea – and one I strongly recommend based on my own past experience – to keep the following high-level considerations in mind when determining performance-testing objectives:
- Performance-testing objectives represent the starting point of performance validation and verification activities.
- Performance-testing objectives start with the business entry points: business volume, future growth, planned changes in the short-term, obvious risks to performance, etc.
- Performance-testing objectives correlate to business needs and therefore should represent real-world business scenarios involving real customers. Don’t view testing through rose-colored glasses.
- After you have determined the high-level objectives, you can refine them to map more specifically to the technology (one way to make that mapping concrete is sketched below).
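For example, a high-level objective like "checkout must stay responsive at peak business volume" can be refined into measurable, technology-specific targets that each test run is checked against. The following is a minimal sketch; every metric name and number is a hypothetical example, not a recommendation:

```python
# Minimal sketch of mapping high-level objectives to measurable,
# technology-specific targets. All names and numbers are hypothetical.
PERFORMANCE_OBJECTIVES = {
    # metric name: (target, "max" = must not exceed, "min" = must reach)
    "checkout_response_time_p95_secs": (2.0, "max"),
    "peak_throughput_orders_per_min": (500, "min"),
    "app_server_cpu_utilization": (0.75, "max"),
}

def unmet_objectives(measured: dict) -> list:
    """Return the objectives a test run failed to meet."""
    failures = []
    for name, (target, kind) in PERFORMANCE_OBJECTIVES.items():
        value = measured.get(name)
        if value is None:
            failures.append(name)  # a missing measurement is itself a failure
        elif kind == "max" and value > target:
            failures.append(name)
        elif kind == "min" and value < target:
            failures.append(name)
    return failures

# Example: results collected from one (hypothetical) test run.
print(unmet_objectives({
    "checkout_response_time_p95_secs": 2.4,   # misses the 2.0s target
    "peak_throughput_orders_per_min": 520,
    "app_server_cpu_utilization": 0.71,
}))
# -> ['checkout_response_time_p95_secs']
```

Keeping the targets in one explicit, reviewable structure like this also makes it easier to revisit them with the customer as the project evolves.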
Test vs. Production
The goal is for the test environment to mirror the planned final production environment as closely as possible. Typically, any differences between the test and production environments are noted and accounted for while designing tests.
Before running tests, validate that the test environment matches the configuration you expected and designed your tests for. If the test environment differs even slightly from that configuration, there is a high probability that your tests will not work at all – or worse, that they will run but produce misleading data.
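One practical way to reduce that risk is to automate the environment check before every test run, so a mismatch fails loudly instead of quietly skewing results. Here is a minimal sketch; the expected values are hypothetical and would come from the production configuration your tests were designed against:

```python
# Minimal pre-test environment check. The EXPECTED values are
# hypothetical; in practice they come from the production
# configuration your tests were designed against.
import os
import platform

EXPECTED = {
    "os": "Linux",
    "cpu_cores": 8,
    "python_version": "3.11",
}

def gather_actual() -> dict:
    """Probe the machine the tests are about to run on."""
    return {
        "os": platform.system(),
        "cpu_cores": os.cpu_count(),
        "python_version": ".".join(platform.python_version_tuple()[:2]),
    }

def mismatches() -> list:
    actual = gather_actual()
    return [f"{key}: expected {want!r}, found {actual[key]!r}"
            for key, want in EXPECTED.items() if actual[key] != want]

if __name__ == "__main__":
    problems = mismatches()
    if problems:
        # Fail loudly instead of producing misleading results.
        raise SystemExit("Environment mismatch:\n" + "\n".join(problems))
    print("Test environment matches the expected configuration.")
```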
I’ve personally had projects stall because of differences between the test and final production environments that we – as a delivery team – thought were inconsequential. Cover your bases thoroughly to avoid painful and costly re-work and re-testing. Nothing wipes out customer satisfaction faster than failed testing, budget overruns, and going back to square one.
Summary
The bottom line: you want your final solution to be the right solution – and not just in terms of meeting the functionality requirements. Most IT solutions must also meet a certain performance level, and if you can’t meet it, you still aren’t rolling out a solution that meets your client’s needs. I’ve been through this – sometimes those requirements are hard to meet, and they are even harder to meet without a good performance-testing tool at your disposal. Software quality and application performance are too important to your clients to leave to chance.