In an ideal agile web or mobile software development project, a sprint ends on time, QA’s work is recognized and appreciated as a normal part of the process, and life goes on. But in my experience, 60 to 75 percent of sprints incur some sort of delay. When delays occur, the burden to “catch up” often falls on those in the last stage, which in development means QA. In order to do our jobs well and be ready for the next sprint, we in QA will accommodate whatever twists and turns come with the project, but it can be stressful to be the last one to get the baton in a relay race, especially if you are already behind and have a specific time to beat in order for the whole team to win.
Test automation helps many teams shorten test cycles and relieve some of the pressure on QA, but even automation doesn’t eliminate most of these pain points completely. However, the stress can be reduced with a little open communication and up-front planning. QA leads can establish mitigation strategies with stakeholders early in the development process to avoid scrambling when the inevitable delays occur. We’ll review those strategies shortly, but first let’s look at the kinds of delays that can get QA off to a late start.
- Incorrect sprint-planning estimates by the dev/test team: Before every sprint, an estimate is made for how long the story will take to accomplish. Typically, development and QA both have a say in that estimate, but sometimes the estimate is incorrect. In many cases, when development takes longer than predicted, it results in less time for QA.
- Dev/Test environment instability: Sometimes the environment is down, blocking QA for a period of time. I worked on a project recently where the test environment was down for an entire week because of defects caused by ongoing API development for other projects in the same environment. These other projects had higher priority than ours, so we had to carry over all of the stories from the previous sprint to the current one.
- Unanticipated design delays: Normally, design is working ahead of development and QA, but sometimes their work is not completed and approved for use by the development team on time. With a normal amount of time slotted for development, this means less time for QA at the end.
- Unanticipated development delays: Even if everything else goes well, the team might lose a developer or a tester, or the project might stall while waiting on approvals, prioritization, or changes in scope, among other unforeseen circumstances.
Given that some delays are inevitable, how can QA safeguard itself from automatically being backed into a corner? The answer is communication and prioritization.
- Communicate Accurate Workload Needs with Stakeholders – QA should empower stakeholders and the rest of the team up front with the right information to make educated decisions around QA resources and testing scope for all phases of product development. This includes establishing rational time estimates for test cycles and proactively discussing risk-assessment strategies for reducing scope. Having these conversations helps set expectations and prepares the team for the challenging situations that will almost certainly arise.
- Make Sure Estimates Factor in the Resources at Your Disposal – Enterprise QA workload estimates should always factor in the number of resources available for testing to accommodate fluctuations in team size, for example when someone goes on vacation or leaves a project unexpectedly. This is especially important for teams where QA resources are shared among multiple projects as it is not uncommon for QA resources to be pulled to a higher priority project, leaving you with fewer resources than you assumed you would have.
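A resource-aware estimate like this comes down to simple arithmetic. Here is a minimal sketch; every figure in it is illustrative, not a recommendation, and you would substitute your own team's numbers:

```python
# Rough test-pass estimate. All values below are hypothetical placeholders.
test_case_count = 120
minutes_per_case = 15          # average manual execution time per case
testers_available = 2          # after accounting for vacations / shared projects
productive_hours_per_day = 6   # leave slack for triage, reporting, meetings

total_hours = test_case_count * minutes_per_case / 60
days_needed = total_hours / (testers_available * productive_hours_per_day)

# 120 cases * 15 min = 30 hours of execution.
# 30 hours / (2 testers * 6 hours/day) = 2.5 days for the pass.
print(f"{total_hours} hours, ~{days_needed} days")
```

Re-running the same arithmetic when a tester is pulled to another project (testers_available = 1 doubles the estimate to 5 days) makes the impact of shared resources concrete for stakeholders.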
- Prioritize the Work According to Necessity – Assign priority to your test cases so you have an easy way of assembling test passes after your initial user story verification testing. When creating test cases, give them all Priority 1, as all of them must pass for the story to be considered done. After completing user story verification, go back and reprioritize, reassigning test cases that cover less common user scenarios to Priority 2 or 3. Pre-release or regression testing should always cover Priority 1 test cases, while the lower-priority cases can be added when a particular feature needs extra attention. Since you’ve already executed these test cases when validating stories, you should have a good idea of the level of effort involved once you’ve reprioritized them.
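The prioritization scheme above can be sketched in a few lines. This is a minimal illustration, assuming test cases are tracked as simple records with a numeric priority field; the case IDs, titles, and priorities are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    id: str
    title: str
    priority: int  # 1 = must pass for "done"; 2-3 = less common scenarios

# Hypothetical inventory after reprioritization following story verification.
test_cases = [
    TestCase("TC-101", "User can log in with valid credentials", 1),
    TestCase("TC-102", "Password reset email is sent", 1),
    TestCase("TC-103", "Login form remembers username", 2),
    TestCase("TC-104", "Login page renders in right-to-left locales", 3),
]

def build_test_pass(cases, max_priority=1):
    """Select cases to run: Priority 1 always, lower tiers as time allows."""
    return [tc for tc in cases if tc.priority <= max_priority]

regression_pass = build_test_pass(test_cases)                 # Priority 1 only
extended_pass = build_test_pass(test_cases, max_priority=2)   # add Priority 2
```

The point of the structure is that when the crunch hits, the regression pass is already defined: you run `build_test_pass` at the highest priority you have time for instead of debating scope case by case.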
When Estimates Cannot Be Met
I’ve never worked on an agile project where every sprint came in on time. Here are three options you have for those inevitable delays:
Option 1: Authorize overtime
Some delays can be made up easily by having QA work a few extra hours of overtime, with organizational approval. How sticky of an issue this is depends on the company’s budget and the priority level of the project. If I know a test pass needs to take a week but I only have four days to accomplish it and I’m down a tester, the first thing I’d try for is overtime. The longer the delay, the less viable this option usually becomes.
Option 2: Assign more QA resources to the team
Transferring resources from other projects is the primary way to do this; in large organizations, QA resources can be shared. A controversial alternative is having other members of your scrum team help out with testing. This approach is controversial because some QA professionals feel their craft is very specialized and only experienced QA engineers should perform testing. There are highly technical, specialized testing disciplines where this won’t work, but for most manual testing I find team members are more than capable of running well-defined test cases.
Option 3: Delay launch if possible (last resort)
This depends on the company and its priorities. For example, if a company’s flagship mobile app is one of its premier brand experiences, it can’t afford to ship a poor user experience to its customers. The more important it is to deliver an immediately satisfying user experience, the more likely there will be tolerance for slipping the release date in the interest of shipping a polished, pleasing product.
However, for big products, companies might have to adhere to an announced launch date. In this situation, you may need to do whatever it takes behind the scenes to get it out that day. Theoretically, if you’re running an agile project, you can stick to a cycle and release a working product, knowing there will be another sprint and release to fix issues.
If none of these options are possible, consider mitigation strategies that introduce risk with the lowest potential impact. It is important to note that agreeing on mitigation strategies for the QA crunch up front will help facilitate a better relationship amongst QA, the rest of the development team, and project stakeholders.
Strategy 1: Limit the number of browsers/devices included in the testing scope
For many development projects, you can easily remove from your test coverage the older browsers, OS versions, and devices that represent very low risk.
Strategy 2: Prioritize the remaining browsers and test devices
Create high/medium/low priority groups based on analytics data so it’s easy to moderate test coverage as needed. Take the deepest dive into analytics at the beginning of a project and then re-check them every month. This is the best way to keep up with trends and easily trim test cases.
Browsers and devices can move from high priority to medium priority from one month to another, especially if there is a new device on the market. For example, you might move a Samsung Galaxy S5 to medium priority when the S6 reaches a certain saturation point in the marketplace.
This is even easier with Apple products, whose adoption rates are always huge, both for new hardware models and OS upgrades. With new iOS releases, the adoption rate reaches 50% within the first four days, and within six months it is much higher. With Android, OS releases are controlled by device manufacturers, who stagger their updates. Because OS adoption is far more fragmented on Android, you have to maintain much broader test coverage than with iOS.
With browsers, use analytics to look at the versions in use to determine how far back you need to test on a particular browser. It is important to use data specific to your company and products. For example, on a recent project I was surprised to see Firefox was a distant third in usage behind Chrome and Internet Explorer for desktop users. If I hadn’t checked first, I probably would have assumed it should be a high-priority browser and committed more resources to testing it than I should have. This also highlights the importance of checking analytics regularly.
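Turning analytics data into priority tiers can be as simple as applying usage-share cutoffs. In this sketch, both the usage numbers and the thresholds are hypothetical; you would pull real shares from your own analytics and pick cutoffs that fit your traffic:

```python
# Hypothetical usage shares from an analytics report (percent of sessions).
usage_share = {
    "Chrome": 48.0,
    "Internet Explorer": 27.0,
    "Firefox": 9.0,
    "Safari": 8.5,
    "Edge": 5.0,
    "Other": 2.5,
}

def priority_tier(share, high=20.0, medium=5.0):
    """Map a usage share to a test-coverage tier. Cutoffs are illustrative."""
    if share >= high:
        return "high"
    if share >= medium:
        return "medium"
    return "low"

tiers = {browser: priority_tier(share) for browser, share in usage_share.items()}
```

Re-running this against fresh analytics each month surfaces the moves automatically: a browser (or device) that slips below the medium cutoff drops out of the standard pass without anyone having to argue the case from memory.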
Strategy 3: Limit the scope of the test pass to features that are most likely to be impacted by recent development.
Typically, features developed early on have been tested more extensively than recently developed ones. Consider running a basic sanity test (also referred to as a smoke test) on those mature features while focusing more extensive testing on the recently developed ones.
Although none of these strategies is ideal, they are worth considering up front, as rarely does everything go according to plan for any development team. Also, consider documenting and archiving the options you’ve worked out for dealing with potential time crunches, including processes, device matrices, prioritizations, and lengths of test passes; this is very helpful to organizations and to QA leads for future planning.
Final Thoughts: Great Communication Brings Great Results
Many QA professionals have been repeatedly crunched because they are last in line in the development process. When QA proactively communicates its role and needs at the start of and throughout a project, better relationships grow between QA and stakeholders, and QA has a more visible and appreciated role on the team. More importantly, this prevents an adversarial relationship from forming with the rest of the team. Deadlines are more easily met when expectations are laid out from the beginning, and there’s less scrambling and pressure for QA, resulting in better working products. And at the end of the day, that’s why we work in QA.