An understanding of product analytics, metrics and A/B tests is a critical skill for data science consultants. While I have had some experience with these areas through previous projects, it is a skill set I am actively researching and studying to prepare for future engagements. This article provides an overview of all three areas.
For metrics, we will discuss what makes a good metric, commonly encountered technical and product metrics, understanding the different audiences in a marketplace, and metric frameworks.
In the product analytics section, we will discuss the product lifecycle, including initial product ideas, opportunity sizing, experimental design, and measurement and launch decisions. We will highlight some common questions used to understand and assess product health, such as diagnosing problems, setting goals for a product, measuring success, and deciding whether or not to launch.
For A/B tests, we will discuss experimentation broadly as it relates to statistical practice and design, along with common challenges to A/B tests and how to deal with them.
The aim of this guide is to reinforce my own learning in these areas and to prepare for discussions with companies and clients. I found many useful resources when collating this information, most notably Trustworthy Online Controlled Experiments by Kohavi, Tang, and Xu, and the Data Interview Pro channel on YouTube.
When creating metrics, it helps to remember that good metrics tend to have three characteristics:
When discussing metrics, it is important to have a good understanding of who the audience or end user is. This is particularly important for businesses with multiple pillars, some examples of which are…
Technical metrics relate more to website or app performance. These are often ‘hygiene factors’, for which we expect a minimum acceptable level of behaviour. They can also form good ‘guardrail metrics’, in that we want our experiments to have no major impact on, say, load times. Some common technical metrics include:
Delving into product-specific metrics, we will often have an organizational ‘North Star’, or ‘overall evaluation criterion’ (OEC), metric. For companies like Meta this is daily active users (DAU).
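As a minimal sketch of how a DAU-style metric might be computed, assuming a raw events table with hypothetical user_id and timestamp columns, one approach is simply to count distinct users per day:

```python
import pandas as pd

# Hypothetical events table: one row per user action (column names are assumptions).
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 2],
    "timestamp": pd.to_datetime([
        "2024-03-01 09:12", "2024-03-01 17:40", "2024-03-01 11:05",
        "2024-03-02 08:30", "2024-03-02 21:15", "2024-03-02 10:00",
    ]),
})

# DAU: number of distinct users active on each calendar day.
dau = (
    events.assign(day=events["timestamp"].dt.date)
          .groupby("day")["user_id"]
          .nunique()
)
print(dau)
```

Weekly or monthly active users (WAU/MAU) follow the same pattern with a coarser date grouping.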
When discussing products and experiments, we usually want to dig deeper and find more tactical metrics to measure experiment impact. Some examples of common product metrics (by audience type) include:
Metric frameworks can often be a useful way to understand many parts of a product and its processes.
For example, when I worked in market research, we often used the ‘marketing funnel’. This allowed us to measure and understand how consumers perceived our clients’ brands across the steps of engagement. Question wording would often be either attitudinal (‘would you consider…?’) or behavioural (‘did you shop there within the last three months?’).
Marketing funnel:
This is similar to the ‘Growth metrics’ funnel framework (AARRR).
Growth metrics AARRR:
Other frameworks take more of an ‘input and output’ form, such as click-through rate and fraud detection.
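To make the funnel and input/output ideas concrete, here is a small sketch with invented counts that computes step-by-step and overall conversion rates for an AARRR-style funnel, alongside a simple click-through rate:

```python
import pandas as pd

# Invented AARRR-style funnel counts for one period.
funnel = pd.DataFrame({
    "stage": ["Acquisition", "Activation", "Retention", "Referral", "Revenue"],
    "users": [10_000, 4_200, 1_800, 450, 300],
})

# Step conversion: share of users carried over from the previous stage (NaN for the first stage).
funnel["step_conversion"] = funnel["users"] / funnel["users"].shift(1)
# Overall conversion: share of the original audience remaining at each stage.
funnel["overall_conversion"] = funnel["users"] / funnel["users"].iloc[0]
print(funnel)

# Click-through rate as a simple input/output metric.
impressions, clicks = 50_000, 1_250
print(f"CTR: {clicks / impressions:.2%}")
```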
Product analytics helps us drive product improvement through experimentation. The book mentioned above (Trustworthy Online Controlled Experiments by Kohavi, Tang, and Xu) discusses in detail why organisations should consider a product analytics strategy and how to implement one. In this section we will focus primarily on the role of a data science consultant interacting with the product analytics process.
The product analytics lifecycle includes four key stages:
Data scientists will often be required to make assessments of a product. These may include diagnosing a problem, measuring success, setting goals, and deciding whether to launch. The following section discusses some strategies for framing, exploring and executing on these tasks.
Diagnosing a problem. I have had first-hand experience with diagnosing problems on a government infrastructure end-user research project. The following is a recommended framework along with items for discussion.
Measuring success and setting goals for a product can usually reuse much of the above framework and thinking. Questions may take the form of ‘How would you measure the success of a product?’, ‘How would you measure the health of Mentions?’, or ‘How would you set goals for WhatsApp, Instagram or Messenger?’. A framework to help with these sorts of questions includes:
Data scientists will often help product teams assess whether or not to launch a product. While I was at Boston Consulting Group, many projects involved launch / no-launch decisions, either through an initial pilot or through a series of shorter experiments, such as in personalization work. We may be asked questions like ‘How would you test a product idea or a feature launch?’, ‘How would you set up an experiment to understand a change in Instagram Stories?’, or ‘How would you decide whether to launch if engagement within a specific cohort decreased?’.
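As a rough illustration of that last question, the sketch below (using invented per-user results and hypothetical column names) compares treatment and control engagement within each cohort, since an overall lift can hide a decline in a specific segment:

```python
import pandas as pd

# Invented experiment results: one row per user with variant, cohort, and an engagement score.
results = pd.DataFrame({
    "variant":    ["control", "treatment"] * 4,
    "cohort":     ["new", "new", "new", "new",
                   "existing", "existing", "existing", "existing"],
    "engagement": [3.1, 3.4, 2.9, 3.3, 5.2, 4.6, 5.0, 4.5],
})

# Mean engagement by cohort and variant, with the relative lift of treatment over control.
summary = (
    results.groupby(["cohort", "variant"])["engagement"]
           .mean()
           .unstack("variant")
)
summary["lift"] = summary["treatment"] / summary["control"] - 1
print(summary)
```

A decline in one cohort (here, existing users in the invented data) would prompt further investigation of why, and a weighing of that cohort's importance, before a launch decision.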
While this is not a full guide to A/B tests and experimentation, the following are some important design considerations.
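One such consideration is statistical power. The sketch below, assuming an invented baseline conversion rate of 10% and a minimum detectable lift to 10.5%, estimates the required sample size per variant for a two-proportion test using statsmodels:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # assumed current conversion rate
target = 0.105    # smallest lift worth detecting

# Cohen's h effect size for the difference between two proportions.
effect_size = proportion_effectsize(target, baseline)

# Sample size per variant for 80% power at a 5% significance level.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Required users per variant: {n_per_group:,.0f}")
```

Under-powered tests are a common source of inconclusive or misleading results, so this calculation is worth doing before any experiment is run.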
Depending on the business context, a number of challenges may arise, some examples of which include:
In summary, metrics, product analytics frameworks and A/B tests are important topics for an effective data scientist to be able to discuss in order to provide impact to companies and clients. It is an area in which I am keen to continue growing my expertise.