Software development is a continuous process. We constantly revise, improve, and add new features, all in an effort to bring additional value to the user.
This continuity is necessary, but it has its share of drawbacks. Chief among them: it can be difficult to assess the consequences a single change may have for the system.
Even with the most thorough testing, certain features or parts of the software can be skipped or tested insufficiently. If the latest feature introduced bugs in these areas, those bugs would remain undetected. To avoid this situation, software impact analysis should be used.
Impact analysis is a technique that we at Apriorit have integrated into our testing process and used to great success. We created this article to share our experience with you. First, we will answer the question of what impact analysis is and give you a general understanding of the technique. Then we will share our practical experience of using impact analysis in software testing and introducing it in our company.
Defining impact analysis
Before we start discussing impact analysis, we should first define it.
Impact analysis can be described as a way to assess all the risks that arise with the introduction of a particular change to the product. There are several definitions of impact analysis, each putting emphasis on a different aspect of it. Looking at these definitions will give you a broader picture of what impact analysis is.
Firstly, impact analysis is often defined as a way to detect the potential consequences of introducing changes to the software. This definition focuses on analysing individual changes made to the product.
Another widespread definition of impact analysis is that it is a way to estimate the potential risks associated with a particular change, such as how the change affects the resources, schedule, and performance of the software being developed. This is a broader definition that considers changes in the context of the development process as a whole.
The ISTQB glossary gives its own definition of impact analysis: an estimation of the consequences a change has on all levels of testing and development documentation, including registering those consequences in the corresponding requests. This definition draws our attention to a more practical side of impact analysis – changes should always be documented.
Understanding what impact analysis really is, we can now see when it should be used. Impact analysis should be employed in the following situations:
- Changes are introduced to requirements
- A request for product changes is submitted
- New, planned functionality is introduced
- Existing modules and product features are changed
Why do we need to use impact analysis?
Impact analysis helps us:
- Determine how the changes will affect existing modules and features, and which modules are affected.
- Determine the new test cases required for testing new modules or features.
- Examine how the testing process will be affected by the changes and whether the existing process should be corrected.
- Determine what effect these changes will have on the budget.
We can distinguish three different types of software impact analysis:
- Dependency impact analysis – best covered by our first definition.
- Experiential impact analysis – best covered by our second definition.
- Traceability impact analysis – best covered by our third definition.
Impact analysis in development
Now let us look at how impact analysis applies to developers. What skills should a developer have, and what steps should they take to conduct impact analysis correctly?
Several requirements need to be met in order to conduct successful impact analysis. The developer should:
- Study relationships between different modules in great detail
- Keep shared resources in mind
- Update all documentation regarding analysis
If a developer keeps these points in mind and applies them when doing impact analysis, positive results will emerge immediately.
This way, the developer is:
- Thinking about the product as a whole, not about a single feature or module.
- Understanding the architecture of the software and tracking relationships between modules.
- Decreasing the risk of bugs surfacing during testing in previously skipped modules.
The only drawback is that resources need to be specifically allocated to keeping the analysis documentation up to date.
Impact analysis in testing
Impact analysis is immensely beneficial for QA. Understanding the relationships between changes helps testers:
- Avoid wasting time on testing parts of the project that weren’t affected
- Focus on the actual changes
- Consider what other parts of the project could potentially be affected
If a testing specialist does not employ impact analysis, they will inevitably use test cases that do not completely cover the latest changes. They may also spend time needlessly testing parts of the project that stayed the same.
Impact analysis allows the QA team to focus their attention where it is needed most, optimising the time spent on testing and making testing more cost-effective and efficient.
Experience of implementing impact analysis at Apriorit
Before we started using impact analysis in testing here at Apriorit, communication between our developers and testers was inefficient at times. A testing request was usually sent after a new version was built. Apart from the link to the new version, the request also included the list of all bugs fixed up to that point.
Most of the time, several developers work on a project simultaneously, each completing their own tasks. The new version is created by merging the results those developers produced. However, the testing request is written by the single developer who built this particular version, and this developer is only aware of the bugs that he or she fixed.
Therefore, before impact analysis was introduced, testing requests lacked the necessary information about the modules influenced by the changes. Even when such information was provided, it could hardly be considered complete or reliable, especially regarding parts of the project that other developers were working on at the time. Testing requests often contained a brief description of the changes made by the single developer and his or her thoughts on testing only those changes. Better testing requests with more detailed information were written when major new features were introduced.
At that point, testers decided the necessary volume and order of tests for the new version, making decisions based on their own experience and knowledge of the solution in question.
For example, testers may know that features A and B are related. Therefore, it is easy to assume that changes to feature A may affect feature B, so feature B also requires testing. However, the tester may not be aware of the relationship between features A and C and will skip testing feature C, which may also be affected by the changes. In certain cases, when communication is lacking, the tester may not know that a feature was changed at all, and may even mistake a new bug for a previously found one because the behavior is similar.
Under these conditions, it is also very hard to define testing priorities accurately. Since time and resources are often limited, it is necessary to direct them where they are needed most. Certain features can be covered by smoke testing, others require only acceptance testing, while still others need to be tested fully.
Therefore, under this approach, our testers faced two major difficulties:
- After the introduction of changes, not all affected features were fully and thoroughly tested.
- Time and human resources were lost on testing features that were not affected by the new changes.
Impact Analysis in our projects
Now you are familiar with our previous workflow and the problems we had before introducing impact analysis. Impact analysis was already a rather popular technique in the QA world, and we decided to try it as a means of solving these issues.
After a number of internal discussions, we decided that introducing impact analysis would help us solve those problems and increase the quality of our products. Many members of our staff were already engaged in impact analysis on some rudimentary level, albeit without proper formal organisation of the process.
The strategy for implementing impact analysis at Apriorit was decided at a meeting held by the initiative group. There we also appointed the people responsible for this process on their respective projects and set a deadline for developing the principles of conducting impact analysis within each project.
Next, each team had its own meeting where both the broader goals and the specific details of using impact analysis were discussed. Each team member’s opinion was heard, and eventually a solution that satisfied everyone was reached. Through our joint efforts, we decided how we would perform impact analysis, store data, and handle results for each project.
We decided to use standard Excel spreadsheets as our tool of choice. For each individual project, our QA specialists designed a specific spreadsheet containing all of its features and modules. The developer then used this spreadsheet to mark all changes made and all elements that were or could potentially be affected by those changes.
Projects of different sizes require completely different approaches to organising the information. We will show you how we do it and provide examples of spreadsheets for both small and large projects.
The example below shows how our impact analysis table looks for a smaller project without a lot of features:
For small projects, we create a matrix that includes all major components of the product. Every module or feature, such as updates, uninstaller, menu bar, options, and hot keys, is added as both a row and a column. Rows show the part that was changed, while columns show the parts potentially influenced by that change. Cells at the intersection of a row and column with the same feature are marked pre-emptively.
While working with the spreadsheet, the developer chooses the row with the feature that was changed and then marks all the cells in that row belonging to columns with features affected by the change. The developer leaves any necessary notes and comments for the feature in this row. This process is repeated for each feature whenever new changes are introduced.
The additional value of this spreadsheet is that it allows developers to double-check themselves and make sure they did not forget anything important.
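To make the workflow concrete, here is a minimal sketch of such a matrix as a data structure. The feature names are hypothetical, and the real process uses an Excel spreadsheet rather than code:

```python
# A sketch of the small-project impact matrix: each module or feature
# appears as both a row and a column; a mark in (row, column) means
# a change to the row's feature may affect the column's feature.
FEATURES = ["updates", "uninstaller", "menu bar", "options", "hot keys"]

# Diagonal cells are marked pre-emptively: a changed feature
# always affects itself.
matrix = {row: {col: row == col for col in FEATURES} for row in FEATURES}

def mark_impact(changed: str, affected: str) -> None:
    """The developer marks a cell in the row of the changed feature."""
    matrix[changed][affected] = True

# Example: a change to "options" may affect "hot keys" and "menu bar".
mark_impact("options", "hot keys")
mark_impact("options", "menu bar")

# QA reads the row to see which features need testing.
to_test = [col for col, hit in matrix["options"].items() if hit]
print(to_test)  # ['menu bar', 'options', 'hot keys']
```

Each changed feature gets its row filled in the same way, so the sheet accumulates one row of marks per change.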
We also decided to use color-coding to provide better information about the degree to which a feature can potentially be affected by the changes:
- Red shows potentially strong impact
- Yellow shows potentially moderate impact
- Green shows potentially weak impact
The same information can be represented using numbers:
- 3 for strong impact
- 2 for moderate impact
- 1 for weak impact
The final spreadsheet will look like this:
This spreadsheet allows QA specialists to prioritise tasks more efficiently and develop better test plans. This is particularly helpful when time for product testing is strictly limited.
In this example, it is immediately apparent that features 1, 4, and 6 need to be tested first and most thoroughly, followed by features 3 and 5, while feature 2 can be tested less rigorously. This detailed planning allows us to manage risks when testing must be done within strict time constraints: critical parts of the software are tested first and more rigorously than everything else. It also keeps us from testing features that were not impacted by the changes and lets us spend less time on features that experienced only minor impact.
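Continuing the sketch above, the prioritisation step boils down to sorting features by their impact score. The scores below are hypothetical and mirror the example (features 1, 4, and 6 strongly impacted, 3 and 5 moderately, 2 weakly):

```python
# Hypothetical impact scores: 3 = strong, 2 = moderate, 1 = weak.
impact = {
    "Feature1": 3, "Feature2": 1, "Feature3": 2,
    "Feature4": 3, "Feature5": 2, "Feature6": 3,
}

# Features untouched by the change would simply be absent from the dict.
test_order = sorted(impact, key=impact.get, reverse=True)
print(test_order)
# ['Feature1', 'Feature4', 'Feature6', 'Feature3', 'Feature5', 'Feature2']
```

The resulting order is exactly the testing sequence described above: strongly impacted features first, weakly impacted ones last.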
On certain projects, a scale from 1 to 3 is not sufficient to describe the level of impact a change has on other features. In that case, we use a scale from 1 to 5:
- 5 for very strong impact
- 4 for strong impact
- 3 for moderate impact
- 2 for weak impact
- 1 for very weak impact
Large-scale projects
Now let us look at a large-scale project. Such projects often have a large number of features, each consisting of various sub-features. Using the matrix from the previous example for such projects is not feasible, as it leads to a huge, very cramped table that is almost impossible to read.
If we apply a project with 40 features, each with 15 sub-features, to the previous spreadsheet, we get a header row like this:

| Changes/Impact | Main Feature1 | Sub-Feature1 | Sub-Feature2 | … | Main Feature2 | Sub-Feature1 | Sub-Feature2 | … | Main Feature3 | … |
|---|---|---|---|---|---|---|---|---|---|---|
This prompted us to develop a separate spreadsheet for large projects. Its rows contain all the main features of the project, while the sub-features are included as columns.
Here is an example of such a spreadsheet:
Instead of starting by marking the feature that was changed, with this spreadsheet the developer immediately marks the features that were affected by the change.
When changes have an impact on a specific sub-feature, it does not necessarily mean that other sub-features under the same feature will also be affected. In our example above, only sub-features 1, 3, and 4 are affected by the changes. Even when a change impacts all sub-features under a certain feature, the strength of that impact is not necessarily the same.
This spreadsheet therefore provides all the necessary detailed information in a simple, easy-to-read form, avoiding unnecessary clutter.
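As a rough illustration, the large-project sheet can be thought of as a nested mapping from main features to their affected sub-features. The names and scores below are hypothetical, matching the example where only sub-features 1, 3, and 4 are affected:

```python
# Main features as rows; only the affected sub-features are marked,
# each with its own impact score (3 = strong, 2 = moderate, 1 = weak).
impact = {
    "Main Feature1": {"Sub-Feature1": 3, "Sub-Feature3": 1, "Sub-Feature4": 2},
    "Main Feature2": {},  # untouched by this change
}

for feature, subs in impact.items():
    for sub, score in subs.items():
        print(f"{feature} / {sub}: impact {score}")
```

Keeping per-sub-feature scores captures the point above: even sub-features under the same main feature can be impacted to different degrees.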
Any additional information relevant to the testing process should be mentioned in the corresponding cells of the spreadsheet.
This information can include:
- The configuration with which the feature should definitely be checked
- Previously existing problems that testers should keep in mind
- Related products in which the given change needs to be tested
- Other necessary information
Our final example is a large enterprise project with more than 40 large features. On such projects, developers are often tasked with:
- Implementation of additional sub-features
All of these actions impact one or several major features. When marking these features in the spreadsheet, the developer should also include the following information:
Affected configuration. Sometimes changes are tied to a specific environment. In this case, the developer should clearly indicate the operating system and all other necessary information about the environment that should be used in testing.
Developer comments. This is additional information the developer feels the need to provide. More often than not, it is a simple link to the bug-tracking system; in other cases, it can be a comment in plain text. Developers state their own recommendations, possible bug predictions, and other information they deem necessary. This is often the most important part for testers.
Importance. The developer estimates how strong the potential impact of the changes is. This can be indicated by color or as a score, as shown in the previous examples.
Plans for this feature. Previously, we had instances when QA staff fully tested a certain feature only to later discover that new changes were being made at the same time, requiring 10–16 more hours of additional testing. Knowing the developer’s plans for the feature allows the testing team to assess the necessary scope of testing much more accurately.
A filled-in spreadsheet will therefore look as follows:

| Features | Affected configuration | Developer’s comments | Importance | Plans |
|---|---|---|---|---|
| Main Feature3 | x64 | Bug report #1111 | 4 | – |
| Main Feature5 | all | Bug report #1112 | 2 | Bug report #1001 |
This spreadsheet is relatively simple and easy to read.
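One way to picture a row of this spreadsheet is as a small record type. This is an illustrative sketch in code, not part of our actual tooling; the field names mirror the columns above, and the sample values come from the example table:

```python
from dataclasses import dataclass

@dataclass
class ImpactRow:
    feature: str
    affected_configuration: str  # e.g. "x64" or "all"
    developer_comments: str      # usually a link to the bug tracker
    importance: int              # 1 (very weak) to 5 (very strong)
    plans: str                   # upcoming work on the feature, if any

rows = [
    ImpactRow("Main Feature3", "x64", "Bug report #1111", 4, "-"),
    ImpactRow("Main Feature5", "all", "Bug report #1112", 2, "Bug report #1001"),
]

# QA starts with the most strongly impacted features.
for row in sorted(rows, key=lambda r: r.importance, reverse=True):
    print(row.feature, row.importance)
```

Sorting by the importance column gives QA the testing order directly, just as with the color-coded matrix for small projects.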
The three examples above describe how we perform impact analysis at Apriorit for both large and small-scale projects. We hope you can find something useful in them for your own practice.
Now we will describe how our general, formalised system of conducting impact analysis works. It is a set of specific steps for each team, which gives every team member a clear understanding of what they need to do and what they can expect from others.
In our company, we employ two schemes for conducting impact analysis, both of them fairly efficient. The first scheme is used for projects where builds are made manually, while the other is designed for projects with an auto-build system. In the first case, the impact analysis spreadsheet is attached to the testing request, while in the second it is stored together with the prepared version on the server.
Let us discuss the process without an auto-build system in more detail.
The developer should perform the following steps:
- Do the necessary work on the task
- After the task is complete, open the impact analysis spreadsheet and mark all the features that could be influenced by the changes, set the expected level of severity of the influence, and write down any additional information he or she deems necessary
- Create a testing request using the template used by the team
- Add all the information he or she deems necessary for testing, such as bug reports, checkpoints, or their own advice and opinions
It is important to note that the impact analysis spreadsheet is used as additional information to complement the testing request, not to replace it.
After the testing request is complete, the developer attaches the impact analysis spreadsheet to it and sends it via email. If the changes had no impact on other features whatsoever, he or she may simply state that and skip filling in the spreadsheet.
The QA specialist should then perform the following steps:
- Read all the information included in the testing request and review the impact analysis spreadsheet
- Create a testing plan and prioritise tasks according to the spreadsheet
- Test every feature marked in the impact analysis spreadsheet
- Write a testing report using the standard form approved by the team
- Mark the state of each feature specified in the impact analysis in the testing response
The second process is used with an auto-build system.
The developer should perform the following steps:
- Work on a copy of the code without changing the main repository
- When the task is finished, add the new changes to the main repository
- As soon as the changes are added to the main repository, fill in the impact analysis spreadsheet stored there, marking all the features that may be affected by the changes
- Show the level of severity to which the changes affect certain features (with color or using a scoring system)
- Include all the necessary additional information
- Save the changes to the impact analysis spreadsheet and proceed to the next task
This approach works for several reasons. The developer has just finished working on the task and can provide the necessary information to the fullest extent, and the spreadsheet acts as a checklist, making it easy to remember to add everything.
No additional action is required from the person who builds the project. With an auto-build system, a responsible person makes a copy of the existing spreadsheet and clears all the data from the original, making it ready for the next version’s round of changes.
When a new version has been built, the system copies the spreadsheet into the corresponding version folder. This ensures that the information in the spreadsheet is always relevant to the corresponding version, saving developers’ time.
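The copy-and-reset step can be sketched in a few lines of code. This is a hypothetical illustration of what such an auto-build step might do; the file names and folder layout are invented, not Apriorit’s actual setup:

```python
import shutil
import tempfile
from pathlib import Path

def archive_impact_sheet(repo: Path, version: str) -> Path:
    """Copy the filled-in sheet into the version folder, then reset the original."""
    sheet = repo / "impact_analysis.xlsx"
    version_dir = repo / "builds" / version
    version_dir.mkdir(parents=True, exist_ok=True)
    archived = version_dir / sheet.name
    shutil.copy(sheet, archived)                                # keep with the build
    shutil.copy(repo / "impact_analysis_template.xlsx", sheet)  # clear for the next round
    return archived

# Demonstration in a throwaway directory with stand-in files.
repo = Path(tempfile.mkdtemp())
(repo / "impact_analysis.xlsx").write_text("filled-in data")
(repo / "impact_analysis_template.xlsx").write_text("")
archived = archive_impact_sheet(repo, "1.2.0")
print(archived.read_text())                         # the archived copy keeps the data
print((repo / "impact_analysis.xlsx").read_text())  # the original is now empty
```

The key property is the same as in our process: each version folder keeps the spreadsheet exactly as it was when that version was built, while the working copy is reset for the next round of changes.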
A testing request is then written, including all the information that developers deem necessary for testing.
Upon receiving the testing request, the QA specialist follows the same procedure as in the previous example.
These two procedures are designed for different kinds of projects, but both have proved their efficiency and effectiveness in everyday use. They allow QA specialists to reliably receive all the necessary information for thorough testing where it counts most.
Specifics of introducing impact analysis
Now let us look at how impact analysis was introduced into the QA process in our company, how exactly it was done, and what specific and unique challenges we faced along the way. The purpose of impact analysis is to create a formal, organised way for developers to provide testers with all the necessary information for defining the level, scope, and sequence of testing.
But how can the necessity of such information be explained to developers?
The main arguments developers express against impact analysis go as follows:
- Why do we need it? It’s additional work for us.
- QA specialists have the necessary knowledge to do it themselves.
- Numerous objections to the procedure and structure of the spreadsheet.
QA staff presented the following counterarguments:
- QA absolutely requires this information in order to properly estimate the testing scope.
- This information allows testing to start with the parts of the system that are most likely to have critical bugs. This argument works wonders on programmers, who hate having to fix critical bugs at the last moment before release.
- QA staff do not know the relationships between features and modules as well as developers do.
- Any objections about the procedure and structure of the spreadsheets were resolved through meetings and discussions.
After hearing all the initial opinions on the matter, another meeting was held to discuss the pros and cons of impact analysis. Everybody agreed on the usefulness of impact analysis and the improvements it should bring. Difficulties arose when discussing what should constitute a feature, how big or small these features should be, how to show that they are connected, and so on. Feature lists for all projects were discussed for approximately a week.
Eventually, the following rules for introducing a change to the process were worked out:
- Start with an unresolved problem. Explain the gist of the problem to developers and make sure they understand it.
- The solution you present should solve said problem. You need to explain how it works and how the problem will be solved.
- Remember not to fix what is not broken. Never introduce a change if there are no problems to solve.
- Get developers on your side. Even direct instructions from a manager will prove ineffective if developers don’t know why something should be done.
- Provide developers with relevant information from third-party sources. If the new process has already been implemented in another team, you could bring people from there to talk about it.
- It is good practice to write down all the pros and cons of the new solution.
- Emphasise the benefits for developers and the final product.
- Allow developers who support you to persuade their peers. Their peers will come to an understanding much faster.
- Look at the proposed changes from the developer’s point of view and imagine how they will affect his or her daily routine.
- Provide a demonstration of the process in action.
- Developers will present you with many questions and arguments. The only way to answer them is to know clearly what you are doing, how, and why.
- Developers are often afraid that innovations will lead to a huge increase in workload. You should reassure them.
- Do not expect them to immediately give in to your ideas.
- Make sure you take the wishes and remarks of developers into account.
- In the beginning, work under the new process should be thoroughly managed.
- After a set period of time, provide statistics and evidence that the innovation is working. A demonstrated increase in efficiency will motivate developers to stick with it.
- Do not forget to show developers your appreciation. If the new process makes work on a project more efficient, they will thank you as well.
Benefits of introducing impact analysis at Apriorit
To further prove the benefit of impact analysis, we want to provide a practical example showing how useful this procedure is in our work at Apriorit. The example itself is fairly simple. What makes it special is that it was brought up by one of our developers.
In our everyday work, we use a Quality Level (QL) metric to assess our work on a project. It is calculated as follows:
Quality Level = (Passed / (Total − No Run)) × 100%
- Total – all performed test cases
- Passed – test cases that passed successfully
- Failed – test cases that failed
- No Run – test cases that were not run
- N/A – test cases that are currently impossible to run
Below you can see an example of calculating QL:
| No Run = | 25 | 6.9% |
| Quality Level = | 92.3% | |
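The calculation can be checked with a few lines of code. The Total and Passed values below are assumed for illustration, since the full table is not reproduced here; they are consistent with the No Run count of 25 and the resulting 92.3%:

```python
def quality_level(passed: int, total: int, no_run: int) -> float:
    """Quality Level = Passed / (Total - No Run) * 100."""
    return passed / (total - no_run) * 100

# Assumed figures: 363 test cases total, 312 passed, 25 not run.
total, passed, no_run = 363, 312, 25
print(f"No Run share:  {no_run / total * 100:.1f} %")                  # 6.9 %
print(f"Quality Level: {quality_level(passed, total, no_run):.1f} %")  # 92.3 %
```

Note that test cases that were not run are excluded from the denominator, so QL reflects only the quality of what was actually tested.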
We will use Quality Level in our example in order to compare a project before and after introducing impact analysis.
Let’s look at a testing request from before impact analysis was introduced. When the request was first received, the specified feature was tested and no bugs were found. In this case, the QL mentioned in the testing response was 100 percent.
Everything seemed to work fine, but it later became apparent that the changes to this feature blocked another, connected feature.
Who is responsible for this situation? The developer, who failed to check the influence the new changes had on a related feature, or the QA specialist, who did not think to check the feature related to the one mentioned in the original request? Ultimately, this situation negatively impacted the quality of the product that both developer and tester were working on, and that is the most important takeaway.
The developer who brought up this example believes the situation would not have been avoided even if information on the relationships between features had been included in the initial request. In the absence of a checklist, it would not have been clear whether the developer had actually analysed the possible impact of the new change on other features thoroughly. Moreover, this information was not considered important enough by either developers or testers. Therefore, even if the tester had checked related features and found some problems, the developer could have overlooked this information in the test response, especially considering that it would have been written at the very bottom.
How is the process performed now, after the impact analysis was introduced?
Now, when submitting a testing request, the developer analyses the impact a change to a feature may have on other features. The impact analysis spreadsheet, with its detailed list of all features, guarantees that the developer will not forget anything. The results of testing all features potentially affected by the change are also added to the QL value. This incentivises the developer to state all the necessary information about testing potentially affected features and ensures that they will definitely see all the bugs found in those features.
Including impact analysis in the QL gives us a better and more accurate picture of the actual state of the project. Developers are no longer deceived by a false 100 percent QL. They also save time searching testing responses for the necessary information.
The introduction of impact analysis also helped us solve other issues with our workflow. Creating standard impact analysis spreadsheets for each project gave us the idea to also standardise testing requests and responses. We went ahead with the idea and implemented it.
We worked out the specifics of how testing requests and responses should be standardised together with developers. They now have a homogeneous structure and design, which has improved our internal communication significantly. Such standardisation improves the readability of information and saves time when looking for the necessary data, for both developers and testers.
In this article, we looked at the importance of the impact analysis process at Apriorit. Testing became faster: testers no longer spend time guessing which features need to be tested, and they do not test features unnecessarily.
Testing requests and responses became standardised thanks to the standard spreadsheets created for impact analysis. Needing to analyse which parts of the project could be affected by a new change, developers now feel more responsible for the testing requests they send.
Testers, in turn, have the added responsibility of checking the impact analysis spreadsheet and taking the provided information into account in their work. It is much easier for testers to assign correct priorities to test tasks based on the level of impact specified in the impact analysis spreadsheet.
Moreover, introducing impact analysis allowed us to solve problems with our QA process and make it more efficient. The improved work process is easier to follow and yields better results.