
Should UAT Be a Four-Letter Word?

In the fast-paced world of retail, meeting customer demands while keeping costs in check is a constant challenge. Especially after COVID, customers expect ever-faster delivery, and they're not being particularly patient about it. For example, McKinsey research showed that when delivery times are too long, almost half of omnichannel consumers will shop elsewhere.

These consumer pressures can lead to a myriad of problems for retailers. How do retailers give customers what they want in a timely yet cost-effective way? How can the shopping experience be made even more convenient? How can changes be implemented quickly without things falling through the cracks? Companies need to adapt quickly to a changing landscape, yet testing those changes adds significant cycles to every improvement.

OMS practitioners know that testing an end-to-end implementation is far more complicated than it is for other systems. An OMS is complex: it handles a variety of order processes, manages inventory, and often integrates with many surrounding systems. This makes User Acceptance Testing (UAT) correspondingly more challenging to execute.

UAT plays a critical role in the successful implementation of any OMS. It provides an opportunity to uncover issues that slipped through the cracks during earlier phases. Certain integrations always seem to be missing from the testing environment, lack good data, or involve edge cases that are incredibly difficult to reproduce in test, and the list goes on. Sadly, everyone knows this, yet we continue to see teams underestimate the time, effort and complexity it takes to do quality testing in the OMS domain.

On top of this, the dreaded UAT must be executed by the business, which invariably has not set aside enough time for it, may even wonder why it needs to do it at all, or, in the best case, wants to do it but finds that the development support required has already moved on to the next project!

We can't solve all the problems retailers face in an ever-changing environment, but we can offer some insight into keeping back-end testing efforts from dropping the ball. Let's take a look at how to avoid the common pitfalls we see in QA testing for any large OMS implementation.

The Pitfalls

QA testing is underappreciated in almost every dimension.

  • Effort underestimated - by a whopping 50% on average, leading to challenges with stakeholder expectation setting and resource allocation planning.
  • Integrations - surrounding systems don't plan for the support or development cycles required, and the resources needed to support them are unavailable or overburdened. Other project timelines then suffer.
  • Environment syncing is not accounted for; test data setup across systems takes time and effort that is rarely planned appropriately. The overall project timeline suffers as a result.
  • Point-to-point integration testing and end-to-end testing are the most often overlooked. Point-to-point integration testing facilitates a smooth transition to full system integration testing, while end-to-end testing ensures scenarios work end to end before moving to production. Both testing cycles are critical to a smooth test effort and project launch; without them, testing can feel disjointed, with bugs popping up or integrations failing outright.
  • UAT is not planned and ends up being the business looking at test case results. When end users are not hands-on testing the system, they lack the familiarity needed to use it successfully, which can also hurt user adoption. Hypercare, training, and project-team support of end users are extended as a result.
  • No dedicated testing team and no testing lead. A lack of testing ownership and testing expertise is inefficient: project managers wind up leading the testing effort and tests are executed by developers. These resources become overloaded with testing on top of their other responsibilities, and things fall through the cracks.
  • The OMS implementation provider doesn't push back on the client's QA plans. Client teams are typically overly optimistic about testing timelines and pass/fail rates (a 70% pass rate on the first test cycle is the industry standard). Test plans don't account for the anticipated 30% defect rate and the fix-and-retest cycles it implies; a rough sizing sketch follows this list.
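
To make that last point concrete, here is a minimal, hypothetical sizing sketch in Python. The function name, the 60% per-cycle fix rate, and the exit threshold are illustrative assumptions on our part, not industry figures; only the roughly 70% first-cycle pass rate comes from the discussion above.

    # Illustrative sketch: rough-sizing the fix-and-retest cycles a test plan
    # should budget for, assuming ~70% of cases pass on the first cycle and a
    # fixed share of open defects is fixed and verified in each later cycle.

    def estimate_test_cycles(total_cases: int,
                             first_pass_rate: float = 0.70,
                             fix_rate_per_cycle: float = 0.60,
                             exit_threshold: int = 5) -> list[tuple[int, int]]:
        """Return (cycle_number, failing_cases_entering_cycle) pairs until the
        backlog of failing cases drops to the exit threshold or below."""
        open_defects = round(total_cases * (1 - first_pass_rate))
        cycles, cycle = [], 1
        while open_defects > exit_threshold:
            cycles.append((cycle, open_defects))
            # Only part of the backlog is fixed and verified each cycle;
            # the remainder rolls forward into the next cycle.
            open_defects = round(open_defects * (1 - fix_rate_per_cycle))
            cycle += 1
        cycles.append((cycle, open_defects))
        return cycles

    if __name__ == "__main__":
        for cycle, backlog in estimate_test_cycles(total_cases=400):
            print(f"Cycle {cycle}: {backlog} failing cases to fix and retest")

With 400 test cases, this toy model suggests budgeting roughly five test cycles before the failing-case backlog becomes negligible - far more than the single pass many plans assume.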

Our in-house expert came up with a list of OMS QA Best Practices:

1. The cheapest bugs to fix are the ones found in discovery and design, not during actual testing. This means making sure the appropriate upstream and downstream teams are involved in those sessions.

2. Let discovery guide your QA estimates and have QA representation in discovery to inform them. If historical data on testing timelines is available, use it (if similar projects historically required 8 weeks of testing, the next one will likely require 8 weeks, not 2).

3. Bring low-fidelity wireframes of the OMS user experience into the project as early as possible. A wireframe makes the assumptions the design team has made visible to users, so they can be corrected before they reach development.

4. Make it incredibly clear who owns end-to-end testing. Is the OMS project team doing the end-to-end OMS testing, or is it the client receiving the output of development and unit testing?

a. Often end-to-end testing is planned without ownership defined. Once the time to test arrives, everyone says "I thought your team owned that". Ultimately, end-to-end testing needs to be an effort across teams, with clear leadership to direct them and facilitate handoffs between systems.

5. Is that downstream system owner concerned about your design? Listen to them!

a. In one example, our OMS team implemented real-time serialized inventory calls to POS. After development, but prior to testing, the POS system owner raised red flags that the OMS was continuously making inventory calls and that they were failing. When the time came for integration testing, we realized the alarms he had raised were very much valid, and the functionality had to be redeveloped.

6. Plan for the worst, not the best. Budget time for the unknown challenges that will arise.

7. Someone needs to be accountable for UAT - and supported! UAT is a test and dev activity just like QA. A 30% defect rate (industry standard) should be anticipated and planned for.

8. All stakeholders, including clients (tech and business) and vendors, should approve testing plans before engaging in the work.



As you navigate the ever-evolving world of OMS, we hope these lessons learned are helpful. QA and UAT are critical phases for ensuring that the OMS functions as intended and meets the needs of the business and its users before it goes live in production. They help uncover and resolve issues, reduce the risk of post-launch problems, and ultimately contribute to the success of the system. With proper planning and resources, speed to market can still be achieved. It's critical not only to test but to plan for testing. Building a testing plan around the suggestions above will make for a smoother rollout, which in the end translates to happier customers.

Never forget: in the world of OMS testing, it's not just about avoiding pitfalls; it's also about building bridges to success. If you would like to learn more about Nextuple's experience with UAT and QA, and how we can help you, visit our Resources Page.

Have an OMS project that needs world-class delivery? Reach out today.

FAQs

How much additional time and resources typically need to be allocated for robust OMS testing compared to a less rigorous approach? 

Testing efforts are frequently underestimated, leading to resource allocation issues and unmet expectations. While the upfront investment in comprehensive testing might seem high, the potential savings are significant: catching and fixing bugs during the testing phase, when they are cheapest to address, costs far less than post-launch fixes that can disrupt the business. A well-tested system with minimal bugs also fosters user adoption, leading to smoother transitions and higher user satisfaction. Extensive testing can even speed time to market by avoiding delays caused by fixing bugs after launch. Ultimately, deciding how much to invest in testing is a cost-benefit exercise; this post highlights the risks of skimping on testing and suggests that the upfront investment is offset by long-term savings.

How does extensive QA testing impact ongoing business operations? For instance, if the testing phase is significantly extended, could it delay the launch of new features or functionalities that could benefit the business?

A key concern is the potential delay in launching new features. If extensive QA testing significantly extends the testing phase, it could indeed hold up the release of functionalities that could benefit the business. This can be a double-edged sword. While robust testing helps iron out bugs and ensure a smoother launch, a delayed release might mean missing out on potential revenue or losing the competitive edge of being first to market with a new feature. Finding the right balance is crucial. Companies need to weigh the potential risks of launching a buggy system against the potential drawbacks of a delayed launch.

How can companies determine if their OMS QA testing is truly effective? Are there specific benchmarks to aim for in terms of bug identification rates or time to resolution?

While industry standards offer a baseline, companies should define success for their OMS QA testing by tracking metrics like bug identification rate, time to resolution, and retest rate. These provide a clearer picture of how effectively testing catches and fixes issues, considering factors like project complexity and risk tolerance.
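
For teams that want to start tracking these numbers, here is a minimal, hypothetical Python sketch. The defect-log fields, the example records, and the exact metric definitions are illustrative assumptions, not an established standard.

    # Hypothetical sketch: computing the QA metrics mentioned above from a
    # simple defect log. Field names and example records are illustrative only.
    from datetime import date
    from statistics import mean

    defects = [
        # (found_in_phase, opened, resolved, reopen_count)
        ("integration_test", date(2024, 3, 1), date(2024, 3, 4), 0),
        ("uat",              date(2024, 3, 10), date(2024, 3, 18), 1),
        ("production",       date(2024, 4, 2), date(2024, 4, 9), 0),
    ]

    # Bug identification rate: share of defects caught before production.
    identification_rate = sum(1 for d in defects if d[0] != "production") / len(defects)

    # Time to resolution: average days from open to resolved.
    time_to_resolution = mean((resolved - opened).days for _, opened, resolved, _ in defects)

    # Retest rate: share of defects that failed verification at least once.
    retest_rate = sum(1 for d in defects if d[3] > 0) / len(defects)

    print(f"Identified before production: {identification_rate:.0%}")
    print(f"Average time to resolution: {time_to_resolution:.1f} days")
    print(f"Retest rate: {retest_rate:.0%}")

Trending these figures across test cycles, rather than looking at any single snapshot, gives a clearer picture of whether testing effectiveness is improving.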


Chap Achen

Retail omni fulfillment leader with 25 years of experience. Chap drives product strategy at Nextuple to help retailers optimize fulfillment.
