Why your organization should also run non-digital A/B tests
Running service-based and non-digital A/B tests will help you learn faster, scale more rapidly, and encourage data-driven decision-making. Companies may fail to leverage the full extent of their data departments' capabilities by only running A/B tests on conventional use cases (e.g. marketing, e-commerce, pricing).
An A/B test is a randomized experiment that helps you compare two or more variations of something. Suppose the Android Developers team would like to test two variations of their phone app's interface.
They can then compare usage, conversions, lead time, or other relevant KPIs between the two versions using statistical techniques, and make a data-driven decision on which version better suits their consumers' needs or other hypotheses.
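For a conversion-style KPI, the statistical comparison can be as simple as a two-proportion z-test. Here is a minimal, stdlib-only sketch; the conversion counts are invented for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of variants A and B.

    Returns the z statistic and a two-sided p-value under a
    normal approximation (reasonable for large samples).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant A converts 120/1000 users, variant B 150/1000.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the two variants is unlikely to be pure chance; with the hypothetical numbers above, the result sits right around that threshold, which is exactly when sample size and test duration start to matter.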
More examples of A/B tests include changing a website's color scheme, button sizes, marketing campaigns, and payment options on shopping sites.
The depth, complexity, and statistical rigor applied to A/B tests can overwhelm and deter stakeholders. A data scientist or analyst may not always be available, and your decision may not be able to wait for a 3-month experiment. That is OK; implementing A/B testing practices for service-based use cases can still benefit your organization. A few of these practices include scoping out KPIs and proxy metrics relevant to your hypothesis, having a control group, and holding other variables constant for the duration of your test.
However, you can still run non-digital A/B tests. In fact, you should at the very least adopt A/B testing best practices.
Here are two value-adding use cases that each organization can adapt to its needs.
- Operations: Did your team recently start using JIRA to work in sprints? How is this affecting their efficiency and the quality of their work?
Managers who are hesitant about agile ways of working (for the right or wrong reasons) can test their hypothesis using an A/B test: have one team work in JIRA sprints and another team continue with their regular way of working.
Scope out KPIs such as employee efficiency and let the results drive your management strategy. Try to control for employee tenure, seniority levels, and project complexity between your two test groups; it is OK if you cannot control them all, but increase the likelihood of running a statistically fair A/B test as much as you can.
- Reward/People Teams: Has your organization recently changed performance reviews from annual to quarterly? Would you expect this to increase your employee NPS?
The above is tricky and sensitive to test on people, and that is OK: track the change in employee NPS (eNPS) over a 6-month window, with one variant being historical data from annual performance reviews and the other being current data under quarterly performance reviews.
Consider events that may impact the fairness of comparing the two variants; e.g. did employees receive a bonus last year but not this year? That may affect their eNPS. Perhaps this means you should change your KPI from eNPS to "reason for leaving" on leaver tickets. See what happened here?
You are most probably not running a statistically significant A/B test above, but at least you are re-scoping KPIs and forming methodologies that help you drive better campaigns and changes, which should slowly compound into your team being data-driven rather than purely instinct-driven.
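Returning to the operations use case: once you have a KPI per employee or per ticket for each group, you do not need a statistics library to check whether the gap looks real. A permutation test just reshuffles the pooled observations and asks how often chance alone produces a gap as large as the one you observed. A minimal sketch, with team names and cycle-time numbers invented for illustration:

```python
import random

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference in group means.

    Shuffles the pooled observations and counts how often a random
    split produces a mean gap at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        gap = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if gap >= observed:
            hits += 1
    return hits / n_iter  # fraction of shuffles beating the observed gap

# Hypothetical ticket cycle times (days) for a sprint team vs. a control team.
sprint_team  = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3, 3.7, 3.0]
control_team = [4.2, 3.9, 5.1, 4.5, 4.8, 4.0, 4.4, 4.6]
print(f"p = {permutation_test(sprint_team, control_team):.3f}")
```

The same sketch works for the eNPS comparison: the two "groups" become the historical annual-review scores and the current quarterly-review scores. It does not remove the confounders discussed above (bonuses, tenure, project complexity), but it does tell you whether the gap is even worth explaining.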
The opportunity cost of NOT leveraging A/B testing practices in service-based domains (e.g. operations, sales, hospitality, management, reward) outweighs the comfort of running your business as usual.
This post is not meant to incline you to base your next big decision on an unfair, statistically insignificant A/B test; rather, implement A/B testing practices to shape decisions, and the compounding effects of doing so will help your organization in the long term.