When using applications to solve business needs, organizations have two options: design and build the software in-house, or purchase, configure, and customize existing software. As these platforms become more capable and easier to configure and customize, Commercial Off-The-Shelf (COTS) software, spanning on-premises and cloud platforms, is now often the more efficient way to meet those business needs.

 

How do you define efficiency? Faster time to a minimum viable product (MVP)? Scalability? Integration with other systems? Perhaps regulatory compliance, such as the General Data Protection Regulation (GDPR), or timely Common Vulnerabilities and Exposures (CVE) fixes? What defines efficiency is particular to each organization; nonetheless, certain common assumptions can trip up organizations as they implement and maintain their business solutions.

 

Three common assumptions: 

  1. “Happy path” testing will be enough; testing should be straightforward and trouble-free.
  2. Testing is one and done; if it worked before and we didn’t touch that area, it won’t break.
  3. Configuration is not programming.

 

However, these assumptions fail to account for a crucial factor: the complexities and dependencies that arise during and after configuring, customizing, turning features on and off, and integrating new COTS software into an organization’s increasingly connected IT environment.

Why It Matters

Historically, user acceptance testing (UAT) was often the final step of software testing. The test plan was usually driven by a new set of features required by the organization, sometimes sprinkled with a few infrastructure items.

 

In this COTS and increasingly cloud-platform world, we have retrofitted many of the practices and gates of full design-build development. This isn’t a round-peg, square-hole scenario; it’s more like a round peg in an oval hole: there are gaps. As risk and security teams have become a significant part of the approval process to “go live” with a specific solution, we see a conceptual shift toward regression and acceptance categories. This shift in terminology acknowledges that some solutions will go live with no new organization-driven features, merely to stay current with security requirements or a vendor-driven upgrade.

 

Today, with more frequent releases and patch cycles, the overall user experience is built on multiple applications: data elements are pulled from various sources through APIs, processes are chained across multiple systems, and those systems have varying levels of correctness, responsiveness, and release cadence.

 

But what does reframing UAT and system integration testing (SIT) as regression and acceptance testing accomplish, and why should you consider it?

 

Regression testing widens the scope of test cases, incorporating knowledge of previously solved issues and known trouble spots. It also covers scenarios that worked in the past, providing a benchmark for the current iteration of the solution, and it protects us against platform changes we may not even be aware of.
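To make this concrete, here is a minimal regression-check sketch in Python (pytest plus requests). The endpoint, baseline file, and field names are hypothetical placeholders rather than any particular COTS product’s API; the idea is simply to compare a response captured from a known-good release against the current release.

```python
# Minimal regression-check sketch. BASE_URL, the baseline file, and all
# field names are hypothetical placeholders; adapt them to your platform.
import json

import requests

BASE_URL = "https://cots.example.com/api"          # hypothetical COTS API
BASELINE_FILE = "baselines/customer_lookup.json"   # captured from a known-good release


def test_customer_lookup_matches_baseline():
    """A scenario that worked in a prior release should still work:
    same status code, same response shape, same key values."""
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)

    response = requests.get(f"{BASE_URL}/customers/42", timeout=10)
    assert response.status_code == 200

    current = response.json()
    # Comparing the full field set surfaces silent, vendor-driven platform
    # changes that no one on the project team asked for.
    assert set(current) == set(baseline), "response shape drifted"
    assert current["status"] == baseline["status"]
```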

 

Acceptance testing enables several stakeholders (risk, security, infrastructure, and users) to detail what must be working in a specific release before it is allowed to go live. This might cover specific data masking, role enforcement, performance, and functionality. It puts these teams on a common playing field where, collectively, they must approve the system for release.
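Acceptance criteria like these can even be expressed as executable checks owned by each stakeholder. The sketch below is illustrative only: the pytest markers, role header, endpoints, and thresholds are hypothetical assumptions, not a prescription.

```python
# Illustrative stakeholder-owned acceptance checks. The markers, role
# header, endpoint, and thresholds are hypothetical; treat this as a sketch.
import pytest
import requests

BASE_URL = "https://cots.example.com/api"  # hypothetical COTS API


@pytest.mark.security
def test_pii_is_masked_for_support_role():
    """Security's go-live gate: a support user must never see a raw SSN."""
    r = requests.get(
        f"{BASE_URL}/customers/42",
        headers={"X-Role": "support"},  # hypothetical role header
        timeout=10,
    )
    assert r.json().get("ssn", "").startswith("***-**-"), "PII masking not enforced"


@pytest.mark.infrastructure
def test_search_meets_response_time_budget():
    """Infrastructure's go-live gate: search answers within the agreed budget."""
    r = requests.get(f"{BASE_URL}/search", params={"q": "smith"}, timeout=10)
    assert r.status_code == 200
    assert r.elapsed.total_seconds() < 2.0, "response-time budget exceeded"
```

Each team owns its own set of checks, and the release is approved only when every stakeholder’s suite passes.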

As codependent guardians of the system, each has a perspective that needs to be heard and considered.  

We are not saying these teams are superheroes, but together they might be the superpower behind a quality solution. Of course, there is plenty of data on how a robust, high-quality system decreases operational costs, but that is for another post!

Takeaways  

  • Establish clear acceptance criteria that are agreed upon by all stakeholders.
  • Start small and build out; revisit often, and don’t be afraid to remove items that are stale.
  • Projects can go live with known errors if there is agreement among stakeholders.
  • Maintain detailed records of test plans, cases, and results for effective issue tracking and future reference. This helps manage the acceptance and regression feedback loops.
 
 
MCANTA helps companies achieve competitive advantage through sensible digital transformation, automated software quality assurance, and business process automation. Contact Us for a free analysis of your unique needs.

By Patrick Grant | November 2nd, 2023