Benefits of minimal to optimal test automation for Salesforce B2C Commerce site

Tania Karakasheva

QA Engineer at ZaUtre

Abstract

This white paper explores the advantages of tailored test automation for Salesforce Commerce Cloud (SFCC) sites and discusses the need for strategic decision-making in automation initiatives. A real-world case demonstrates that minimal to optimal automation of core workflows significantly reduces manual testing time, resulting in an ROI of 3.67. The paper underscores the importance of well-scoped automation for improving efficiency and productivity and for reducing costs in SFCC projects.

Introduction

The software development process is evolving at a fast pace. To meet a dynamic, demanding marketplace, the pressure to deliver high-quality software in ever-shorter time frames keeps increasing. E-commerce sites, more than ever, need to keep up the pace, or even be faster, as it is crucial to offer customers not only an attractive design and a user-friendly experience but also a bug-free and safe site.

Organizations are realizing the importance of software testing and the role it plays in developing quality code with speed and agility. Software quality can be achieved using various principles, techniques, and practices. Depending on project parameters and complexity, and in relation to team knowledge and size, test automation often becomes a central focus. Automated testing promises results that add value to the whole project by offering wider software validation, a faster QA process, and increased testing coverage.

The effort is not without its challenges. A full automation set includes suites of verification, functional, regression, and performance tests that run with little or no manual input. Such a set must be carefully designed and implemented, and, most importantly, it should first be determined whether it is needed in its full capacity.

In certain situations, when a full E2E suite is not an acceptable solution, a minimal to optimal automation suite (OAS) can prove advantageous. This strategy offers benefits that outweigh the investment required for its implementation, and it is explored in detail below.

Background

Test automation benefits are well known, but it is important to understand that test automation is not magic. Several factors can compromise the integrity of a test automation effort, and there are situations where the benefit is no longer proven. Before implementing a test automation framework for a project, the team and the business must answer a list of questions:

Does the organization need an end-to-end automation suite? 

What is the optimum amount of automation? 

How is it to be achieved?

Does the organization need an end-to-end automation suite? 

Generally speaking, the answer depends on project complexity, time frame, budget, and resource planning. In short, it seems that every organization needs an E2E (end-to-end) automation test suite: a well-planned and well-implemented ATS (automation test suite) will bring advantages and benefits.

But whether an organization actually needs an end-to-end automation suite depends on various factors, including its specific goals, processes, resources, and current operational challenges. Here are some considerations to help determine if an end-to-end automation suite is necessary:

Complexity of Operations: If your organization has complex and interconnected processes that involve multiple departments or systems, an end-to-end automation suite can streamline these operations by automating tasks and workflows across the entire process chain.

Efficiency and Productivity: Automation can significantly improve efficiency and productivity by reducing manual and repetitive tasks. Consider whether your organization can benefit from these improvements, such as faster order processing, reduced errors, and quicker response times.

Cost Reduction: Assess whether automation can lead to cost savings. While there may be upfront costs to implement an automation suite, it can save money in the long run by reducing labor costs, minimizing errors, and optimizing resource utilization.

Regulatory Compliance: In industries with strict regulatory requirements, automation can help ensure compliance by consistently following predefined processes and generating audit trails.

Competitive Advantage: Evaluate whether automation can give your organization a competitive advantage by enabling quicker innovation, faster time-to-market, or better customer service.

ROI (Return on Investment): Conduct a cost-benefit analysis to determine whether the expected return on investment justifies the implementation of an automation suite. Calculate the potential savings and revenue increases against the upfront and ongoing costs.

While test automation offers numerous benefits, it also has its downsides and challenges:

Initial Investment: Setting up test automation can be expensive in terms of tools, infrastructure, and skilled personnel. The initial cost of automation can be a significant barrier for small or budget-constrained teams.

Maintenance Overhead: Automated test scripts require ongoing maintenance to keep them up to date with changes in the application under test. This maintenance can be time-consuming and costly.

Test Environment and Test Data Issues: Test automation may be sensitive to the test environment and test data. This can lead to inconsistencies or issues that will result in false test results. Ensuring a stable and consistent test environment can be a challenge.

Limited Test Coverage: Test automation is not suitable for all types of testing; usability, exploratory, and other judgment-driven checks, for example, remain largely manual.

Over-automation: Automating too many tests or scenarios can lead to over-automation, where maintaining and executing tests becomes inefficient. 

Lack of Human Judgment: Automated tests lack human intuition and judgment. They cannot easily detect certain types of issues that require human insight.

In conclusion, the decision to implement an end-to-end automation suite should be based on a thorough assessment of your organization’s unique circumstances and objectives. 

What is the optimum amount of automation? 

The optimum amount of test automation for a Salesforce Commerce Cloud (SFCC) site depends on several factors, including your project’s size, complexity, resources, and goals. Striking the right balance between manual and automated testing is essential for ensuring a robust and efficient testing process. 

Test Coverage: Automate tests for critical and frequently executed scenarios that are integral to your e-commerce site’s functionality. For SFCC sites, such scenarios include the checkout flow, the customer registration flow, and customer account flows.

Regression Testing: Well-planned test coverage plays a key role in regression testing. Owning automated regression tests lets teams quickly identify and fix issues introduced by code changes or updates.

Cross-Browser and Cross-Device Testing: Already-automated scripts can be executed across different browsers, devices, and screen sizes to ensure a consistent user experience for your diverse customer base. For an e-commerce site it is crucial to offer customers the same user experience across different browsers and devices (see the configuration sketch after this list).

API Testing and Data Validation: Automate API and data validation tests to verify that data is being transferred correctly between your e-commerce site and external systems or third-party services.

Exploratory and Usability Testing: Some aspects of testing, like usability and exploratory testing, are better suited for manual testing by experienced testers who can mimic real user behavior and provide subjective feedback.
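Returning to the cross-browser point above: the same test scripts can be reused across engines and device profiles through configuration alone. Below is a minimal configuration sketch for Playwright (the tool adopted later in this paper); the test directory, worker count, and project names are illustrative assumptions, not a prescribed setup.

```ts
// playwright.config.ts — a minimal sketch; paths and project names are illustrative.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests', // where the spec files live (assumed layout)
  workers: 4,         // parallel workers; overall execution time scales with this
  projects: [
    // One project per browser engine, all reusing the same test scripts.
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    // A mobile profile to cover small-screen customers.
    { name: 'mobile-chrome', use: { ...devices['Pixel 5'] } },
  ],
});
```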

It’s essential to consistently evaluate and fine-tune the extent of automation to align with the changing requirements of your SFCC site and development procedures. Aim for a balance that maximizes test coverage, quality, and efficiency while taking into account the distinctive attributes of your e-commerce venture.

How is it to be achieved?

Once the requirement for an automated test suite has been determined, and the extent of automation is well-defined, the development team can start the process of strategizing and constructing a customized test automation suite. 

Challenge

The challenge was to implement a test strategy for a mostly backend SFCC project. The project combined complex and interconnected processes involving multiple integrations and dependencies. It was planned in several sprints, each containing new and extensive functionality.

Meeting quality objectives was a challenge due to tight resource constraints, further compounded by strict time constraints for each development sprint. The team comprised four developers and one QA specialist, and the project had a fixed development timeline of six months.

In such situations the default course of action would be an MTS (manual testing strategy) only. Implementing a full E2E ATS was not an acceptable option, because it would have required at least one additional experienced automation QA to create the full set of needed scripts and maintain them on a weekly basis. A full E2E ATS was ruled out by the time frame and the budget of the project.

Solution

During requirement analysis, based on the resources and complexity of the project, a minimal to optimal automation suite was chosen as a suitable and also feasible strategy, because:

  • Manual testing alone would require minimal time at the beginning of the project, but in the middle and end stages it would become more time-consuming than acceptable.
  • A full E2E ATS was not possible due to the time frame and budget.

In this particular project there were four major and resource-consuming SIT (system integration testing) sprints. Each of them contained more than 100 test cases that required a successful checkout resulting in a completed order. Once orders were created, their details were collected and passed to a QA member from another team, who confirmed that the created orders were visible, valid, and correct in the integrated systems (CMS, OMS (order management system), etc.). After a successful check, the order data would be complemented with additional details, and orders would be replaced, returned, canceled, and so on (Fig. 1). Time for order creation was therefore essential to the timeline of the test sprint. This process would often require an additional manual tester to help with order creation for the duration of the SIT session, in order to prevent any delay in the initial stages, which could delay the whole SIT timeline.

Fig. 1. Order flow process: general overview

Apart from the SIT test sprints mentioned above, sanity, smoke, regression, functional, performance, and other tests were, as for any other project, a must. As mentioned, this was mostly a backend project, so many of the tests that confirm storefront elements and functions were not in our team’s testing focus.

Typically, while planning test cases, QAs use the software functional specification (requirements specification) to determine the aims and flows to be tested, and they hand-craft test cases covering each specification. A portion of the manual test cases generated this way was later used to develop test scripts for automated execution.

To get a clear view of our ATS goal, during test planning we marked only the essential areas that would benefit most from automation, based on their importance to the end product and on scenario reuse (how many times the same scenario would have to be executed during manual testing).

Several scenarios were in focus:

– Storefront Checkout as Guest/Registered customer

– Storefront Customer creation 

– Storefront Customer login/logout

– Storefront Customer account modification

– Checkout via CSC (Customer service center) in BM (Business manager) of SFCC as Guest/Registered customer

– Customer creation via CSC in BM of SFCC

– Customer account modification via CSC in BM of SFCC

Since it was mostly a backend project, checkout and customer creation flows would be executed multiple times, not only to confirm correct operation but also to confirm or adapt payloads for other parties. Some orders would be created only to serve as input for other test scenarios, such as returns, replacements, exchanges, and so on. The situation with customer creation was the same: accounts would be created not only to confirm correct operation but also to guarantee correct integration, payloads, and workflow from the CSC site in relation to the Storefront site.

From the listed scenarios we excluded the sets related to Storefront customer account modification and customer account modification via CSC in BM, because they would have required a larger volume of script implementation work and involved numerous test cases. In our project these specific scenarios did not justify the investment in automation, considering both the time frame and the budget constraints.

At this point, the time cost was not significantly different from the time that would have been needed to achieve similar results had the project been tested manually only.

To implement the ATS for the chosen test scenarios we used VS Code (Visual Studio Code), Playwright with JS/TS (JavaScript/TypeScript), Jenkins, and Git. Playwright was chosen because it is fast, reliable, and easy to use; offers good cross-browser testing and excellent scalability; does not require additional tools to run and report tests; and performs a range of actionability checks on elements before acting on them. Already implemented cases can be adapted quickly.

To start working on the ATS with the selected tools we needed to establish our working environment. Git was already in use; the Jenkins and Playwright installation and setup took two hours. Playwright is indeed known for its easy installation and setup.

Implementing test scripts for the chosen scenarios took 24 working hours. Storefront elements were well designed and easy to manage, without complicated or overly long locators. The scripts were kept simple because their only aim was to confirm certain workflows. The main goal was to avoid multi-language problems in the scripts, to use one test script regardless of Storefront locale or language, and to achieve a high level of code reusability.
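As a minimal sketch of this locale-agnostic approach, scripts can locate elements by stable attributes instead of translated button labels. The data-testid hooks below are hypothetical, not the project’s actual markup:

```ts
import { test, expect, type Page } from '@playwright/test';

// Locators keyed to stable attributes rather than translated labels,
// so one script runs against any Storefront locale. The data-testid
// values are hypothetical.
const addToBasket = (page: Page) => page.locator('[data-testid="add-to-basket"]');
const miniCart = (page: Page) => page.locator('[data-testid="mini-cart"]');

test('add to basket works in any locale', async ({ page }) => {
  await page.goto('/some-product.html'); // illustrative PDP URL, relative to a configured baseURL
  await addToBasket(page).click();
  await expect(miniCart(page)).toContainText('1'); // assert on quantity, not on translated text
});
```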

Test data was grouped in JSON files, and the scripts were called with a chosen JSON file containing the selected test data. This allowed QAs to quickly modify test data to achieve different results and to adapt when needed.
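A short sketch of how such data-driven calls might look; the file name and JSON fields are illustrative assumptions:

```ts
import { test } from '@playwright/test';
import * as fs from 'fs';

// Swapping the file name switches the dataset the script runs with,
// e.g. { "productId": "12345", "shipping": { "city": "Berlin" }, "payment": { "method": "card" } }
const data = JSON.parse(fs.readFileSync('testdata/checkout-guest.json', 'utf-8'));

test('guest checkout driven by external test data', async ({ page }) => {
  await page.goto(`/p/${data.productId}.html`); // illustrative URL scheme
  // ... the remaining steps consume fields from the same JSON object
});
```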

Example

“Example test” would pass if an order is created successfully. It would not focus on Storefront details, only on the core checkout flow (a Playwright sketch of this flow follows the step list).

Test steps and expected results per step:

  1. Open the PDP (product details page) for the chosen product -> page is visible
  2. Click “Add to basket” -> product is available, button is active
  3. Click “Go to basket” -> mini cart contains the product, “Go to basket” is active
  4. Click “Proceed to checkout” -> product is in the basket, “Proceed to checkout” is active
  5. Fill shipping data: name fields, address fields
  6. Choose delivery method
  7. Check for the calculated estimated delivery date
  8. Proceed to payment -> data from steps 5-7 is valid and the “Proceed” button is active
  9. Fill billing data
  10. Select payment method
  11. Add payment data
  12. Confirm order -> data from steps 9-11 is valid and the “Proceed” button is active
  13. Confirm payment
  14. Await the order confirmation page -> record order number/locale/language/item/payment method
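A condensed Playwright sketch of “Example test” under the data-driven setup described above. All selectors, labels, URLs, and file names are illustrative assumptions rather than the project’s actual code:

```ts
import { test, expect } from '@playwright/test';
import * as fs from 'fs';

// Hypothetical dataset; the real suite selected a JSON file per run.
const data = JSON.parse(fs.readFileSync('testdata/checkout-guest.json', 'utf-8'));

test('Example test: guest checkout completes an order', async ({ page }) => {
  // Steps 1-4: open the PDP, add to basket, proceed to checkout.
  await page.goto(`/p/${data.productId}.html`); // relative to a configured baseURL
  await page.locator('[data-testid="add-to-basket"]').click();
  await page.locator('[data-testid="go-to-basket"]').click();
  await page.locator('[data-testid="proceed-to-checkout"]').click();

  // Steps 5-7: shipping data and delivery method.
  await page.getByLabel('First name').fill(data.shipping.firstName);
  await page.getByLabel('Address').fill(data.shipping.address);
  await page.locator(`[data-testid="delivery-${data.shipping.method}"]`).check(); // delivery option assumed to be a radio input
  await expect(page.locator('[data-testid="est-delivery-date"]')).toBeVisible();

  // Steps 8-13: billing, payment method, confirmation.
  await page.locator('[data-testid="proceed-to-payment"]').click();
  await page.getByLabel('Card number').fill(data.payment.cardNumber);
  await page.locator('[data-testid="confirm-order"]').click();

  // Step 14: await the confirmation page and record order details
  // for the SIT hand-over to the other team.
  const orderNo = await page.locator('[data-testid="order-number"]').innerText();
  fs.appendFileSync('orders.csv', `${orderNo},${data.locale},${data.payment.method}\n`);
});
```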

The same flow had to be executed as a registered customer as well. In that case, a manual QA had to perform a login before the test, whereas the script simply called an already implemented login function first and then performed “Example test”.

Measured time needed to perform the “Example test” checkout scenario:

| Test | Time (seconds) |
| --- | --- |
| Manual / guest | 240-300 (best probable time result) |
| Manual / registered customer | 300-360 (best probable time result) |
| Auto / guest | 30-33 |
| Auto / registered customer | 32-35 |

It can be stated with a high level of confidence that the auto-executed script produced results roughly ten times faster: 10 manual orders took around 30 minutes, while 10 automated orders took 3 to 4 minutes, depending on execution settings.

Benefit

During SIT sprints, if done manually by one person, the creation of 100 orders alone would cost 8 QA hours at best. Note that a human is not a machine and needs breaks between orders, time to check test data, time to prepare for the next order, and so on; mistakes in execution are probable at some point because of the numerous repetitive actions involved. Recording the results for each order and passing them to other team members would cost an additional 1-2 hours. The best calculated time for the whole process is about 12 hours of manual testing, so for 4 SIT sprints the best manual testing time would be 48 hours.

In this project, all test cases for each SIT session were executed via the ATS, taking one hour to execute, including the reporting of results to QAs from other teams. Test data preparation took 30-60 minutes. For 4 SIT sprints, time consumed was approximately 8 hrs. 

During these 4 SIT sprints, the ATS therefore saved at least 40 hours of QA time.

Given that the project’s ATS contained scripts suitable for regression testing, it was used for this purpose as well. The regression testing suite was configured to start automatically twice weekly and to log results via Jenkins. Its aim was to confirm successful checkout, login, and user creation. With minimal labor effort this provided well-established benefits: ensuring software stability and reliability, detecting and preventing frequent defects, validating the impact of code modifications, and mitigating the risks associated with software upgrades. If done manually, each regression test session would cost approximately 2 hours, so the automated regression runs saved 4 hours per week. Over 22 weeks (regression was not set up in the initial project stage), the time saved by the ATS was around 88 hours of QA time.
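One plausible way to wire up such a twice-weekly run (the tag name and cron schedule below are assumptions; selecting tests with Playwright’s --grep filter and triggering builds with a Jenkins cron expression are both standard features):

```ts
import { test, expect } from '@playwright/test';

// Tests carrying the @regression tag in their title are selected for
// the scheduled run with:  npx playwright test --grep @regression
// In Jenkins, that command can sit behind a cron trigger such as
// "H 6 * * 1,4" (two mornings per week), with results logged per build.
test('checkout entry point is reachable @regression', async ({ page }) => {
  await page.goto('/'); // storefront home, relative to a configured baseURL
  await expect(page.locator('[data-testid="mini-cart"]')).toBeVisible(); // hypothetical hook
});
```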

The scripts were also useful during development for on-demand sanity tests, retests, and the like (approximately 40 hours saved). Whenever necessary, all scripts were modified to align with new requirements, which took 8-10 hours in total. As our automation suite was primarily designed for core workflows, the time required to adjust the scripts was kept to a minimum.

The recorded QA time spent on the project can be compared to the estimated time that would have been required under a solely manual testing strategy. Even with an OAS incorporating some manual testing, there is a noteworthy contrast in the time needed to achieve the quality goals compared to a strategy relying exclusively on manual testing, as illustrated in Fig. 2.

As mentioned, at the project’s outset OAS initially consumed more time than the hypothetical scenario in which MTS was chosen. However, as the project progressed and additional functionalities were incorporated, the advantages of OAS became apparent. By the fifth week the QA hours required under OAS had dropped to the level expected with MTS, and then below it; from week 5 onward until the project’s conclusion the anticipated trend held consistently.

Fig. 2. QA working hours by week

Conclusion

Did the organization need a minimal to optimal automation suite?

In evaluating the necessity of implementing a minimal to optimal automation suite for our project, several key factors were taken into consideration. The central observation was that any modifications to the backend had a significant effect, impacting a wide range of functionalities across the project. This insight prompted a strategic decision to automate core workflows, leading to remarkable improvements in efficiency and productivity. Notably, this resulted in a substantial reduction in manual and repetitive tasks, offering immediate benefits in terms of cost reduction.

By focusing on automating these essential workflows, we not only realized cost savings, but also achieved a competitive advantage. The streamlined processes enabled quicker development and reduced testing time during various project stages. This led to faster integration test sessions, minimizing the emergence of unexpected issues during UAT (User Acceptance Testing).

The ROI (Return on Investment) analysis speaks to the success of this approach, with an approximate ROI of 3.67. 

ROI = (Benefit – Cost) / Cost

Cost – The initial cost of setting up and maintaining the test automation suite, including tools and infrastructure, was 2 + 24 + 10 = 36 hrs.

Benefit:

  • time saved during System Integration Testing (SIT) sprints: 40 hrs.
  • time saved during regression testing for the whole project time: 88 hrs. 
  • time saved during on demand testing: 40 hrs.
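Plugging the numbers into the formula above:

Benefit = 40 + 88 + 40 = 168 hrs

ROI = (168 − 36) / 36 = 132 / 36 ≈ 3.67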

For every hour invested in setting up and maintaining the test automation suite, a net 3.67 hours were saved during testing and other critical activities. This strong return on investment demonstrates the clear benefit of the automation strategy we adopted.

In summary, the decision to implement a minimal to optimal automation suite was validated by the project’s outcomes. The approach not only facilitated efficient testing and error reduction but also contributed to a competitive edge and financial gains. This case illustrates the potential for substantial returns when investing in automation tailored to core workflows, and it reinforces the importance of strategic decision-making in test automation initiatives.

At present, the team that originally developed this project will be responsible for its support and maintenance, with an allocation of 100 hours per month covering both QA and developer working hours.

The current QA approach involves maintaining the existing automation suite throughout the support phase to ensure ongoing benefits and introduce new features. With the solution now operational, the chances of bugs causing losses for our client are minimized through regular regression testing sessions. Looking forward, the sole expected expenditures in the near future will be related to maintenance, emphasizing a proactive approach to sustaining the developed solution.