Question on quasi-experimental approach for product feature change measurement
I work in ecommerce analytics and my team runs dozens of traditional, "clean" online A/B tests each year. That said, I'm far from an expert in the domain: I'm still working through a part-time master's degree, and I've only been doing experimentation (without any real training) for the last 2.5 years.
One of my product partners wants to run a learning test to help with user-flow optimization, but because of some engineering architecture limitations we can't run a normal randomized experiment. Here are some details:
- Desired outcome: understand the impact of removing the (outdated) new-user onboarding flow in our app.
- Proposed approach: release a new app version without the onboarding flow and compare engagement, purchase, and retention outcomes between versions (rough sketch of the comparison after this list).
- "Control" group: users on the previous app version who did experience the onboarding flow.
- "Treatment" group: users on the new app version who would have gotten the onboarding flow had it not been removed.
One major thing throwing me off is how to handle the shifted time series: the 4 weeks of data I'll look at for each group will cover different calendar periods. Another issue is the lack of randomization, but that can't be helped.
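One idea I had for at least quantifying the calendar-time problem is to compute the same 4-week outcome for each weekly install cohort in the months *before* the release. If that series is flat, the shifted windows worry me less; if it trends or is seasonal, the naive comparison is confounded. Again a sketch with made-up names (and assuming ISO-formatted week strings):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Same hypothetical table as above. Plot the mean 4-week outcome
# by weekly install cohort over the pre-release period to see how
# much calendar time alone moves the metric.
users = pd.read_csv("new_user_cohorts.csv")
pre = users[users["install_week"] < "2024-06-03"]  # release week (placeholder)
trend = pre.groupby("install_week")["sessions_4w"].mean()

trend.plot(marker="o", title="Mean 4-week sessions by install-week cohort")
plt.show()
```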
Given these constraints, I'm curious what might be the best way to approach this type of "test". My initial thought was to use difference-in-differences, but I don't think it applies given that neither group has a "before" period.
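To spell out why I think DiD breaks down here: the canonical two-group, two-period estimator is

$$\hat{\tau}_{\text{DiD}} = \left(\bar{Y}_{\text{treat}}^{\text{post}} - \bar{Y}_{\text{treat}}^{\text{pre}}\right) - \left(\bar{Y}_{\text{ctrl}}^{\text{post}} - \bar{Y}_{\text{ctrl}}^{\text{pre}}\right)$$

and since everyone in both groups is a brand-new user, the pre-period means $\bar{Y}^{\text{pre}}$ simply don't exist at the user level. Is there a variant or an alternative quasi-experimental design that fits this setup?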