Do you have a Salesforce test strategy? A solid test strategy nails down the big decisions so that everyone involved in testing can make choices that are aligned with each other and with the common goals. The more you test software, the more you'll benefit from a high-level plan that outlines the principles guiding the more detailed test planning and decision-making.
But what if you don’t know if your software team is testing effectively? This post explores the concept of test strategy and provides some practical advice on how to establish one.
What Is a Test Strategy?
A test strategy defines the principles of applying testing to assure quality. This starts from the definition of quality, and may apply to an application, a larger system, or even to an organization.
Quality criteria are a lot like values:
- They don’t change frequently.
- They are not meant to be compromised.
- They guide our choices.
We value some things more than others, like delivering on time or being defect-free, appreciating the needs of all users equally or serving a particular user role better, aiming at military-grade security or perfect ease of use, and so on.
All real strategies are constrained by time, money, skills, and technology. Sometimes, such constraints overrule our quality criteria – it’s unfortunate but quite common. The more explicit we are about the quality criteria and the constraints, the better we understand what we are giving away if we compromise any of them.
Clarify Your Scope and Focus
Test scope defines the boundaries of the testing – meaning what is in and what is out. If we are testing a mobile application for Salesforce, we can scope out the Salesforce platform and just assume it works correctly. To balance quality and cost, we may want to test on the few most popular mobile devices only.
It is a good practice to define the most important user personas including their tasks, behaviors, and preferences as a part of the scope. Give them names and describe them like real human beings. It helps maintain the scope, justify the quality criteria, and even come up with relevant test cases (“What would Karen do?”).
Test focus guides the allocation of effort within the scope. Ask what kind of defects in the software would cause the biggest damage, and where in the software such defects are likely to be. Use your quality criteria to guide your decisions, but apply other heuristics, too: new functionality is likely to break more than old, features that were buggy earlier are likely to be buggy now, and so on.
The Right Mix of Test Levels and Types
Different types of tests reveal different kinds of defects. Applying the right mix of test levels and test types yields the best balance of quality, time, and cost.
Developers typically test at the level of code and modules. This is known as unit testing. Once the modules are known to work independently, they are tested together in integration testing. And once they seem to work together it’s time to change the perspective to application testing or system testing.
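To make the unit level concrete, here is a minimal sketch in Python: a hypothetical pricing rule tested in isolation, before it is ever combined with other modules in integration testing. The function and its rules are invented for illustration, not taken from any real application.

```python
# A hypothetical module-level function and its unit tests.
# Unit tests like these check one piece of logic in isolation.

def apply_discount(amount: float, percent: float) -> float:
    """Return the amount after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(amount * (1 - percent / 100), 2)

def test_apply_discount_basic():
    # The "happy path": a 10% discount on 200 should give 180.
    assert apply_discount(200.0, 10) == 180.0

def test_apply_discount_rejects_invalid_percent():
    # The guard clause: discounts over 100% must be refused.
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_apply_discount_basic()
test_apply_discount_rejects_invalid_percent()
print("unit tests passed")
```

Only once such per-module checks pass does it make sense to move up a level and test the modules together.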
The importance of testing inter-application dependencies has grown in recent years, because digital business processes do not respect application boundaries. This is called end-to-end testing or business process testing. (Acceptance testing is often mentioned in this context, but it is really a signoff process rather than a separate test level.)
In Salesforce, the platform implements a good share of the application process, and the application itself is a collection of customized tasks and workflows. Some of those customized flows may interact with other systems, introducing dependencies and complexity into the process. This is why Salesforce testers tend to emphasize unit tests and business process tests.
The dominant testing type is functional testing – testing that the software does what it is supposed to and doesn’t do what it’s not supposed to. It is accompanied by various types of non-functional tests, such as how the software is protected against malicious use, how fast it can respond to the users, and how it behaves under heavy load and recovers from failures. These are known as security testing, performance testing, and load testing. As most software applications are used by a large number of different people, the importance of usability testing has constantly increased.
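One simple non-functional check from the list above is a response-time budget. The sketch below shows the idea in Python; the operation and the budget are illustrative stand-ins, not a real endpoint or a recommended threshold.

```python
# Minimal sketch of a performance check: run an operation and
# verify it finishes within an agreed time budget.
import time

def slow_operation():
    time.sleep(0.05)  # stand-in for a real request or transaction
    return "ok"

def check_response_time(fn, budget_seconds: float) -> bool:
    """Return True if fn() completes within budget_seconds."""
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) <= budget_seconds

print(check_response_time(slow_operation, 0.5))  # within budget -> True
```

Real performance and load tests add concurrency, realistic data volumes, and percentile-based thresholds, but the pass/fail principle is the same.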
Test Architecture Puts Testing in a Context
Testing professionals call the thing being tested the SUT, or “system under test”. Test architecture simply means a description of how the SUT, any co-dependent systems, and the various test environments and tools work together to enable the testing.
Draw a picture of the test architecture to illustrate the context for the rest of the strategy. When you need to explain how new software release candidates proceed through the test levels and types, or where the test data comes from, it’s practical to refer to this illustration.
Design the Testing Flow to Minimize Waiting
Testing, even if automated, consumes a lot of expensive human capacity as well as computing capacity.
Many businesses invite the intended users of the application to conduct end-to-end tests and acceptance tests. They understand best how the application will be used but their feedback is likely to come too late. It would probably be wiser to involve them in the application design than in the testing. If you involve them in testing, make sure to design the process so that they don’t need to sit idle and wait. The same applies to professional testers and test environments.
The key to being fast is not to work as much as possible, but to design and manage the process to minimize the time people sit idle waiting for something to test, or the tests sit in a queue waiting for an environment where they can run. If you have an automated DevOps pipeline – as you should – make sure testing is an integral part of it so that buggy code stops early in the pipeline.
Principles of Test Methodology
Some organizations prefer to have consistent methodologies in all teams so that it is easy to move work and people among teams. Others prefer to give the teams a free hand to optimize their work. The strategy should at least cover methods dictated by regulatory requirements and company policies, as well as methods that matter for the overall performance.
Typical methodology questions to nail down in the test strategy are:
- How are test cases and test data defined and stored?
- Which tests will be manual and which will be automated?
- How are test results reported?
- What do the flow and entry/exit criteria from lower-level tests to higher-level tests look like?
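One common answer to the first two questions is to keep test cases and test data as plain data records, separate from the code that executes them, so they are easy to store, review, and automate. The sketch below illustrates the pattern; the case IDs, fields, and discount rules are invented for the example.

```python
# Data-driven testing sketch: test cases stored as data,
# executed by a small generic runner.

TEST_CASES = [
    {"id": "TC-001", "input": {"amount": 100, "tier": "gold"},   "expected": 80},
    {"id": "TC-002", "input": {"amount": 100, "tier": "silver"}, "expected": 90},
    {"id": "TC-003", "input": {"amount": 100, "tier": "none"},   "expected": 100},
]

# Hypothetical business rule under test.
DISCOUNTS = {"gold": 0.20, "silver": 0.10, "none": 0.0}

def price_after_discount(amount, tier):
    return amount * (1 - DISCOUNTS[tier])

def run_cases(cases):
    """Execute every stored case and report PASS/FAIL per case ID."""
    results = {}
    for case in cases:
        actual = price_after_discount(**case["input"])
        results[case["id"]] = "PASS" if actual == case["expected"] else "FAIL"
    return results

print(run_cases(TEST_CASES))
```

Because the cases are data, the same records can feed manual test scripts, automated runs, and result reporting alike.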
Tool selections may also be dictated by the test strategy, at least for tools that represent a significant investment or a long-term commitment.
Nowadays test strategies discuss test data and related methods more than ever, and this is particularly true for platforms such as Salesforce that were built around data. As amounts of data in applications keep growing, data exchange, synchronization, and migration are frequent sources of problems. Moreover, modern privacy regulations effectively prevent the use of production data in testing and make many test teams use synthetic or anonymized data.
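When production data is off limits, one option is to generate synthetic records. A minimal sketch, assuming nothing about any real schema – the field names below merely resemble typical CRM contact fields:

```python
# Generate synthetic test data so no real personal data
# enters the test environment.
import random
import string

random.seed(42)  # fixed seed makes test runs reproducible

def synthetic_email() -> str:
    user = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{user}@example.com"  # reserved domain, never a real address

def synthetic_contact(i: int) -> dict:
    return {
        "FirstName": f"Test{i}",
        "LastName": f"User{i}",
        "Email": synthetic_email(),
        "Phone": "555-" + "".join(random.choices(string.digits, k=4)),
    }

contacts = [synthetic_contact(i) for i in range(3)]
for c in contacts:
    print(c["Email"])
```

Anonymization of production extracts is the other common route; either way, the strategy should state which approach each test level uses.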
The methodology should contain a regression testing policy: what shall be done to ensure that what worked before the latest changes will work after them, too? We can usually trust that Salesforce has tested its new platform version thoroughly, but we should not assume that changes in the platform will not change the behavior of our own application.
Create Metrics-Based Visibility
Good testing creates real-time visibility into the progress and completion of the software. For testers and developers, visibility means test cases and defect reports. For everyone else it means statistics, trends, and verbal analysis. They want to know how they are affected, if they should be worried, and what they are supposed to do.
The test strategy should define the mandatory metrics that cover both the outcome of the testing and the testing process itself, and how the information is delivered to those who need it. A strong array of testing metrics should contain defect accumulation, defect density, efficiency of defect detection, and effectiveness of defect detection.
- Defect accumulation and defect density: Measure when, where, and how defects were found and when they were corrected.
- Defect accumulation: Indicates the quality of the software and the completeness of the testing.
- Defect density: Helps sharpen the test focus and thus improve test efficiency and effectiveness.
- Test efficiency: Measures how much time and effort is needed for detecting a defect.
- Effectiveness metrics: Measure how well the testing actually prevented failures in production.
For all quality metrics, the trend over time matters more than the current value.
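Two of these metrics can be sketched directly, under common textbook definitions: density as defects per unit of size, and effectiveness as the share of all known defects caught before production (often called defect detection percentage). The numbers below are made up for illustration.

```python
# Illustrative metric calculations using common definitions.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (or any agreed size unit)."""
    return defects_found / size_kloc

def detection_effectiveness(found_in_test: int, escaped_to_prod: int) -> float:
    """Share of all known defects caught before production."""
    total = found_in_test + escaped_to_prod
    return found_in_test / total if total else 1.0

print(defect_density(24, 12.0))        # 24 defects in 12 KLOC -> 2.0 per KLOC
print(detection_effectiveness(45, 5))  # 45 caught, 5 escaped -> 0.9
```

Tracked release after release, a falling effectiveness or a density spike in one module is exactly the kind of trend the strategy should make visible.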
If you feel overwhelmed, it’s intentional. Testing is complex, time-pressured, and surprisingly multi-faceted. It tends to be slow and expensive, too. The way you plan, organize, and lead your testing makes a huge difference.
A solid test strategy is a key tool for nailing down the policies, principles, and big decisions so that the people involved in testing can effectively plan, lead, and make decisions, on their own, yet aligned with each other and the common goals.