This is the first talk in the upcoming series on Search and Matching at Airbnb, where we’ll introduce our experimentation infrastructure for running A/B tests in the context of conversion optimization and learning about user behavior on the site and our mobile apps. We will also cover the unique challenges of running A/B tests at Airbnb and discuss best practices for setting up proper experiments.
Controlled experiments are the way to go if you want to learn something about the world. They are also very useful for informing product development and design decisions. However, the complexity of our ecosystem has led us to consider a range of technical and conceptual issues that go beyond the vanilla A/B-testing paradigm.
On the analytical side, we’ll talk about the importance of choosing the right unit of analysis and segmenting users into meaningful cohorts. Employing a good stopping criterion is also essential. Finally, we’ll discuss why understanding potential biases is crucial for running experiments effectively and soundly.
Will Moss is an engineer on the Data Infrastructure team at Airbnb, where he has been for the past 8 months. Before that, he was an engineer at Bump Technologies where he worked on all parts of the server stack. When not computering, Will likes to spend his time outdoors, hiking, biking, surfing and playing ultimate frisbee.
Jan Overgoor is a Data Scientist at Airbnb. He works on the search algorithm and the experiment framework used throughout the company. Previously he completed an MSc in Symbolic Systems at Stanford University and wrote a thesis on trust in online communities.