
Testing Strategies that Improve System Implementation ROI

Updated: Apr 11, 2022



Anybody who has ever been involved in the implementation of a new technology platform already knows the risks and pitfalls that come with such an undertaking. Whether the implementation is a new loan origination system, consumer portal, enterprise web site, vendor integration, or CRM platform, all are subject to the influences that drive over 50 percent of these projects to fail. However, I am more interested in discussing the 50 percent that do not fail. My experience is that nearly every system implementation project that actually makes it to launch does so with some level of compromise and, yes, even disappointment. The compromises come in the form of missing features and functionality, incomplete integrations, and poor adoption. The disappointments come from delayed timelines and lost opportunity costs. As soon as the system is launched, a new lifecycle begins: defining manual workarounds and customized development projects aimed at solving all of the process issues created by a system that went into production without delivering on its initial promises. This “patch and fix” process makes calculating a realistic ROI difficult and makes actually achieving the ROI goals more aspirational than attainable.

My experience also tells me that there are activities that improve an organization’s chances for overall satisfaction and make ROI a more realistic expectation. The simple truth: an implementation project with a proper testing approach embedded within it will improve overall implementation satisfaction and help achieve stated ROI objectives. We all know that testing is a necessary step in any IT implementation project. The value it delivers depends on whether that testing is approached as a chore or as an opportunity. Some of you reading this are not convinced that the simple act of testing can deliver the kind of impact I am referring to. Before you put this blog down, let me explain some of the strategies that will hopefully have you re-thinking your next project.

Develop a detailed, methodical test plan - One of my favorite sayings from my days as a developer is, “weeks of coding can save hours of planning.” The same is true for implementation testing, and the theory is applied in a startling number of implementation projects. If you subscribe to this strategy, you simply start testing something obvious and eventually stumble onto a testing plan. I call this approach the “middle-out” strategy of test plans, and while it is possible to test a system this way, it is incredibly difficult to determine what has not been tested. It is these unknowns that lead to poor satisfaction.

• The first step to improving ROI through testing is to design the overall implementation project so that the testing plan is embedded within the project rather than something you do once all the configurations are done.


• Take the time to think about what will be tested, why, and by whom. Pay attention to the variable data you need to execute the tests, and do not forget to define the proper pass/fail criteria.


• What is the risk appetite of your organization? Knowing this and accounting for it within the test plan can help ensure you are testing the right things with the proper number of variations. Vendor integrations tend to be one of the stickier pieces of an implementation, and, surprisingly, few vendors provide test cases or test plans with their recommended approach. Each aspect of each product from each vendor must be accounted for within the master test plan. The test plan must cover the project use cases, test cases, test scripts, and testing effort estimates; a simple sketch of a structured test case record appears after this list.


• Finally, a valid and thorough test plan requires that the testing team be represented in every configuration, development, and training meeting scheduled for the implementation. It is hard to define test cases when you do not know the decisions driving configuration and feature standards. Spending the necessary time to design and develop a comprehensive, integrated testing plan can bring predictability to established ROI metrics by adding clarity to what works and what does not during the implementation, when this information has the most value. Workflow and process improvements whose purpose is to improve accuracy or reduce time can be vetted and confirmed, or cut, during the testing cycles rather than after the project launches.
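
To make the idea of a structured, embedded test plan concrete, here is a minimal sketch of what a single test case record might look like, written in Python. The field names (case_id, owner, variable_data, pass_criteria, estimated_effort_hours) and the sample vendor integration case are illustrative assumptions, not taken from any particular test management tool.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a structured test case record; the field names are
# assumptions for this example, not drawn from any specific tool.
@dataclass
class TestCase:
    case_id: str                 # unique identifier within the master test plan
    use_case: str                # the project use case this test supports
    description: str             # what is being tested and why
    owner: str                   # who executes the test (role or name)
    variable_data: dict = field(default_factory=dict)  # inputs needed to run the script
    pass_criteria: str = ""      # explicit, pre-agreed definition of "pass"
    estimated_effort_hours: float = 0.0                # feeds the testing effort estimate
    depends_on: list = field(default_factory=list)      # vendor integrations or prerequisite cases

# Hypothetical entry for a vendor integration test.
credit_pull = TestCase(
    case_id="TC-031",
    use_case="Order credit report during application intake",
    description="Verify tri-merge credit order and response mapping",
    owner="Underwriting SME",
    variable_data={"borrower_profile": "joint", "bureau": "all-three"},
    pass_criteria="Scores and tradelines populate the correct fields with no manual re-entry",
    estimated_effort_hours=1.5,
    depends_on=["Credit vendor sandbox credentials"],
)
```

Whatever form the record takes, the point is that the test, its owner, its data, and its pass/fail criteria are defined while configuration decisions are still being made, not improvised once testing starts.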

Establish a working base, then iterate – Validating the functionality, viability, and accuracy of a loan production system is a daunting effort. It touches nearly every department and system within the organization, talks to nearly every vendor, and supports vastly diverse work streams. To be blunt, you cannot eat an elephant in a single bite. Many implementations suffer during testing because the test plan does not have a predictable, resilient starting point. The goal here is to define end-to-end functionality in its simplest form and then iteratively add features and functionality to the plan. This approach should not be confused with the “middle-out” strategy called out above. Establishing a working base is premised on the fact that you have already defined all the layers, tiers, features, functions, and integrations of your project. Nor is this a “fail fast, fail often” effort. Instead, it is based on established iterative approaches to software delivery, with every iteration aimed at delivering specific objectives and features. This iterative approach, when designed and managed properly, will improve the overall velocity of your testing efforts. How you approach what to add after the core system is often dictated by outside influences. If a primary project objective is workflow automation, it may make sense to define the use cases and test cases for the various workflows that will benefit from this automation. Likewise, some third-party services are more complex and time consuming to configure and test; if multiple vendors fall into this category, it may make sense to tackle them once core system testing is complete.

Because testing velocity is improved, ROI metrics are more attainable with this approach. There are other benefits that improve ROI and overall engagement and satisfaction as well. This iterative approach allows you to better schedule the use of internal lender subject matter experts. Workflow and process issues are discovered more quickly with this process, and because of this they can be handled before they begin to affect timelines. Regression testing is also made simpler, since changes, enhancements, or patches can be re-tested via only the use cases and test cases they affect. A sketch of this kind of grouping and regression selection follows.
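
As a rough illustration of the iterate-from-a-working-base idea, the sketch below groups hypothetical test cases into a working base plus feature iterations and selects only the affected cases when a change arrives. The iteration names, change identifiers, and test case numbers are all assumptions made for the example.

```python
# Hypothetical grouping of test cases: the simplest end-to-end flow first,
# then feature and vendor iterations layered on top of it.
test_plan = {
    "iteration-0-working-base": ["TC-001", "TC-002", "TC-003"],
    "iteration-1-workflow-automation": ["TC-010", "TC-011", "TC-012"],
    "iteration-2-credit-vendor": ["TC-030", "TC-031"],
    "iteration-3-doc-prep-vendor": ["TC-040", "TC-041", "TC-042"],
}

# Map each change, enhancement, or patch to the iterations it touches, so the
# regression scope is only the affected cases rather than the entire plan.
change_impact = {
    "CFG-2041 loan officer workflow tweak": ["iteration-1-workflow-automation"],
    "PATCH-17 credit vendor response mapping": ["iteration-2-credit-vendor"],
}

def regression_scope(change_id: str) -> list:
    """Return the test cases that need to be re-run for a given change."""
    cases = []
    for iteration in change_impact.get(change_id, []):
        cases.extend(test_plan[iteration])
    return cases

print(regression_scope("PATCH-17 credit vendor response mapping"))  # ['TC-030', 'TC-031']
```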

Test Result Tracking – Tracking testing results has often been treated as more of an art than a science. I have seen multiple projects where tracking only the “pass/fail” result of individual test cases was required. Most testing managers can cite a fairly robust list of test case attributes that have value in managing the overall testing process. The problem is that tracking these attributes requires time and effort and is one of the first casualties when timelines begin to tighten. To help protect ROI objectives and overall project engagement, there are a select few attributes that should not be overlooked.

• Time is one of the biggest enemies in the test plan world. There are countless events that impact the time and timing required for the testing process. Configuration changes, updates, and development projects may be delivered late or in multiple iterations, requiring testing to adapt. To protect your testing schedule, you need to understand how long the completed tests took and the resources they required. Using this information, you can extrapolate how long the remaining tests may take and the resources they will require. Letting the project manager know early that additional testing resources are needed is always a good plan.

• Testing is meant to uncover discrepancies in workflow or user experience as well as deficiencies and flaws in the software. As these issues are discovered, the testing team must be able to share the information with the developers or the configuration team. This makes tracking the state of each test case imperative. Being able to freeze the state of an individual test case can be invaluable for the development or configuration team and can reduce how long it takes to resolve the issue and return to testing. This is a simple and small way to maintain testing velocity and keep focused on the project ROI objectives.

• Lastly, understanding and tracking the actual test data should be a requirement. Avoid testing plans that randomize all the variable input data. Systems react differently to different combinations of variable data, and to troubleshoot properly, your test plan should track the variable data used for every test script iteration. A minimal sketch of this kind of tracking record follows this list.
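
Below is a minimal sketch of the kind of tracking record these points argue for: state, tester, elapsed time, and the exact variable data for each execution, plus a rough extrapolation of remaining effort from completed results. The attribute names and figures are assumptions for illustration only.

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative per-execution tracking record; capture more than "pass/fail".
@dataclass
class TestResult:
    case_id: str
    state: str             # e.g. "passed", "failed", "blocked - with dev team"
    tester: str
    duration_hours: float  # actual time spent executing the script
    variable_data: dict    # exact inputs used for this iteration of the script

completed = [
    TestResult("TC-001", "passed", "Ops SME", 0.5, {"loan_type": "conventional"}),
    TestResult("TC-002", "failed", "Ops SME", 1.25, {"loan_type": "FHA"}),
    TestResult("TC-003", "passed", "Underwriting SME", 0.75, {"loan_type": "VA"}),
]

# Rough extrapolation of remaining effort from the tests already completed,
# so the project manager hears about resource gaps early rather than late.
remaining_cases = 42
avg_hours = mean(r.duration_hours for r in completed)
print(f"Estimated remaining testing effort: {remaining_cases * avg_hours:.1f} hours")
```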

Adopt a hybrid strategy – You can find plenty of project participants who will argue the virtues of automated testing over manual testing, or vice versa. However, too much of one thing is rarely a good idea. At least part of the project’s ROI objectives will be based not only on timely delivery but also on the delivered product working with few exceptions and even fewer workarounds, which requires depth of testing. Automated testing has been around for several years but has typically seen the most success in a software development environment. The testing objectives are quite different for implementation and integration projects, so the same tools do not provide the same benefits. Consequently, if automated testing is part of your overall plan, it will mean developing and testing the testing engine itself. When you do go down this road, it will require thinking about test scripts in a slightly different way.

• First, automated tests must produce a binary result to be effective. I have seen automated test plans that spit out a host of information indicating that some activities passed and some failed. That approach requires a business resource or test engineer to analyze and interpret the results, which defeats the purpose of automated testing in the first place. Variable data for the test scripts and the ability to preserve the transactional state for both passing and failing tests need to be considered as well; a sketch of such a binary, state-preserving test appears after this list.

• Implementations are a team sport. It is impossible to configure, test and deploy any enterprise system in a vacuum. The project testing plan provides one of the best opportunities to build a strong team of technologists, subject matter experts and business people. Manual testing focused on user experience by engaged key resources is critical to overall adoption of the end solution.
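
For the automated side of a hybrid plan, here is a minimal sketch of a test that reduces its outcome to a single pass/fail result while preserving the inputs and transactional state for later review. The submit_test_transaction driver and the response fields are hypothetical placeholders for whatever actually exercises the system under test in your environment.

```python
import json
from pathlib import Path

def submit_test_transaction(variable_data: dict) -> dict:
    """Placeholder driver for the system under test; replace with the real call."""
    return {"status": "approved", "errors": []}

def run_automated_case(case_id: str, variable_data: dict) -> bool:
    """Execute one scripted case and reduce the outcome to a single boolean result."""
    response = submit_test_transaction(variable_data)
    passed = response.get("status") == "approved" and not response.get("errors")

    # Preserve the inputs and the transactional state for both passing and failing
    # runs, so a person only digs in when the binary result says something broke.
    Path("test_artifacts").mkdir(exist_ok=True)
    Path(f"test_artifacts/{case_id}.json").write_text(
        json.dumps({"inputs": variable_data, "response": response, "passed": passed}, indent=2)
    )
    return passed

print(run_automated_case("TC-031", {"borrower_profile": "joint", "bureau": "all-three"}))
```

The stored artifacts are what let a developer or configuration analyst pick up a failed case without having to re-run it, which keeps the manual testers focused on user experience rather than triage.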

This is a sampling of the testing strategies I have found to have the biggest impact on preserving project ROI and defined success metrics. There are countless other activities and tactics that can improve ROI depending on the organization and the technology. The key point is that testing during an implementation should not be treated as a necessary evil but rather as an opportunity to improve overall ROI.
