Double Checking Our Work

To illustrate part of what we mean when we say that MCANTA is a “data-driven company”, we’d like to share three recent client interactions. We think these cases show how our data-driven approach helps teams move past distractions and align on their common goals.

Case 1: Instrumentation Overhead

We were testing an Android-based device, provided by a third-party supplier, for one of our clients. When we reported some bugs, the device supplier expressed a concern that the bugs might be artifacts introduced by the test configuration, specifically by the scrcpy tool we used as part of the test automation.

In response, we conducted a series of experiments, monitoring memory and CPU use on the Android system under test. We took measurements both with the system idle and while performing a test sequence. The test sequence was performed manually for the uninstrumented case (without scrcpy) and under script control for the instrumented case.
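
For readers curious about the mechanics, here is a minimal sketch (not our actual harness) of how system-wide memory and CPU samples can be collected from an Android device over adb; the device serial, sampling cadence, and output format are placeholder assumptions for the example.

```python
"""Minimal sketch: sample system-wide memory and CPU load on an Android
device over adb, so that runs with and without scrcpy can be compared.
The device serial and sampling interval are illustrative assumptions."""
import subprocess
import time

DEVICE = "emulator-5554"   # hypothetical device serial; adjust to your SUT


def adb_shell(cmd: str) -> str:
    """Run a shell command on the device and return its stdout."""
    return subprocess.run(
        ["adb", "-s", DEVICE, "shell", cmd],
        capture_output=True, text=True, check=True,
    ).stdout


def memory_used_kb() -> int:
    """System-wide used memory (MemTotal - MemAvailable) from /proc/meminfo."""
    fields = {}
    for line in adb_shell("cat /proc/meminfo").splitlines():
        key, value = line.split(":", 1)
        fields[key] = int(value.strip().split()[0])   # values are reported in kB
    return fields["MemTotal"] - fields["MemAvailable"]


def cpu_busy_fraction(interval_s: float = 1.0) -> float:
    """Fraction of CPU time spent non-idle over a short interval,
    computed from two snapshots of the aggregate line in /proc/stat.
    (Readability of /proc/stat over adb may vary by Android version.)"""
    def snapshot():
        ticks = [int(t) for t in adb_shell("cat /proc/stat").splitlines()[0].split()[1:]]
        idle = ticks[3] + ticks[4]          # idle + iowait
        return idle, sum(ticks)

    idle0, total0 = snapshot()
    time.sleep(interval_s)
    idle1, total1 = snapshot()
    total = total1 - total0
    return 1.0 - (idle1 - idle0) / total if total else 0.0


if __name__ == "__main__":
    # Poll a few times; in practice the samples would be logged for the idle
    # and busy phases, with and without scrcpy attached, and compared offline.
    for _ in range(5):
        print(f"mem_used_kB={memory_used_kb()}  cpu_busy={cpu_busy_fraction():.0%}")
```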

The key statistics are:

  • Idle SUT (System Under Test)
      ◦ 76% more memory used while scrcpy is active
      ◦ 33% more CPU usage while scrcpy is active
  • Busy SUT
      ◦ 75% more memory used while scrcpy is active
      ◦ 15% less CPU usage while scrcpy is active

Pictorially:

[Figure: graphs of scrcpy performance]

The client and the third-party supplier were reassured of the validity of the test results, and the bugs were subsequently diagnosed and fixed.

Case 2: Test Repeatability

A client expressed concern that some sub-par performance test results might have been due to sampling effects rather than a true measure of the SUT. We therefore repeated several of the test cases five times to assess the repeatability of the results. Here’s an example graph from our analysis of the data, showing that all five test runs agree:

[Figure: test repeatability across five runs]
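
As a rough illustration of how run-to-run agreement can be quantified, here is a minimal sketch using placeholder response-time samples; the numbers and run names are assumptions for the example, not our client’s data.

```python
"""Minimal sketch: quantify run-to-run agreement for repeated performance
test runs. The response-time samples below are placeholders; in the real
analysis they came from the recorded test results."""
from statistics import mean, stdev

# Hypothetical data: response times (ms) from five repeated runs of one test case.
runs = {
    "run1": [112, 118, 121, 115, 119],
    "run2": [114, 117, 122, 113, 120],
    "run3": [111, 119, 123, 116, 118],
    "run4": [113, 116, 120, 115, 121],
    "run5": [112, 118, 124, 114, 119],
}

# Per-run mean response time.
run_means = {name: mean(samples) for name, samples in runs.items()}

# Coefficient of variation across the run means: a small value indicates
# the runs agree, i.e. the result is not a sampling artifact.
overall = list(run_means.values())
cv = stdev(overall) / mean(overall)

for name, m in run_means.items():
    print(f"{name}: mean = {m:.1f} ms")
print(f"between-run coefficient of variation = {cv:.1%}")
```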

With this data in hand, the development team did a root cause analysis, diagnosed a faulty database driver, and corrected the problem.

Case 3: Data Quality

In the course of monitoring the test data for a performance test project, we noticed that some statistics of the test data were diverging from those of the production data. Specifically, because some test accounts were being re-used, a handful of financial instruments were accumulating an unrealistically large number of subscribers.

Here is a histogram that shows the disparity:

[Figure: subscriber-count histogram, test vs. production data]
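
As an illustration of the kind of check involved, here is a minimal sketch that compares the subscribers-per-instrument distribution in test data against production; the counts, the example query mentioned in the comment, and the divergence threshold are all placeholder assumptions.

```python
"""Minimal sketch: compare the distribution of subscribers per financial
instrument in test data against production. The counts below are
placeholders, and the example query and threshold are assumptions."""
from statistics import mean, quantiles


def summarize(counts):
    """Summary statistics of subscribers-per-instrument."""
    q = quantiles(counts, n=100)
    return {"mean": mean(counts), "p95": q[94], "max": max(counts)}


# Hypothetical subscriber counts per instrument, e.g. pulled with something like
# SELECT instrument_id, COUNT(*) FROM subscriptions GROUP BY instrument_id
production = [3, 5, 2, 8, 4, 6, 3, 7, 5, 4]
test       = [3, 5, 2, 8, 4, 6, 3, 7, 240, 310]   # re-used accounts inflate a few instruments

prod_stats, test_stats = summarize(production), summarize(test)
print("production:", prod_stats)
print("test:      ", test_stats)

# Flag divergence when the tail of the test distribution is far beyond production's.
if test_stats["max"] > 5 * prod_stats["p95"]:
    print("WARNING: test data diverges from production (inflated subscriber counts)")
```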

We reported this result to the QA team and initiated a process to correct the test data, coordinating with the DB administrator for the test environment.

Want this for your applications?

We’re in the business of answering QA questions by providing data, whether those questions are about the SUT or about the test process. If you’d like this kind of attention to accuracy in your QA efforts, please contact us; we’d be happy to consult with, coach, or work as an outsourced partner alongside your QA team.

Your Organizational Needs

Those who have implemented RPA seamlessly and without issues are in the minority, and they are doing a great job with their automation implementation. For everyone else who is seeing the early signs of an increase in tech debt, consult with our team at MCANTA to understand the automation solutions we offer our clients to keep their digital ecosystems scalable, resilient, and free of potential risks.
