A/B testing is a well-understood tool for causal inference at web companies. However, it is not a panacea: it often fails when sample sizes are small, measurement lags are long, and the treatment space you want to explore is large. At Opendoor, we face all of these problems.
American homes represent a $25 trillion asset class, with very little liquidity. Selling a home on the market takes months of hassle and uncertainty. Opendoor offers to buy houses from sellers, charging a fee for this service. Opendoor bears the risk in reselling the house and needs to understand the effectiveness of different liquidity models. Key metrics and resale outcomes can take many months to measure, suggesting that A/B testing may not be the best tool.
In this talk we'll cover the ingredients of simulation-based inference -- from defining a good data-generating process to building user models -- and walk through a case study in residential real estate. We'll discuss how simulation obviated the need for certain A/B tests and made us more efficient at designing the necessary ones.
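To make the idea concrete, here is a minimal sketch of a simulation-based approach, assuming a toy data-generating process and seller model (all distributions, parameters, and the fee-sweep below are illustrative inventions, not Opendoor's actual models): each simulated seller accepts an instant offer only when it beats their expected open-market proceeds net of hassle costs, which lets us sweep a fee grid far wider than any A/B test could cover.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fee(fee, n_sellers=100_000):
    """Toy data-generating process: draw home values and each seller's
    private cost of an open-market sale, then apply a simple user model
    (accept the instant offer iff it beats expected net proceeds)."""
    value = rng.lognormal(mean=12.5, sigma=0.3, size=n_sellers)    # home values ($)
    hassle = rng.normal(0.06, 0.02, size=n_sellers) * value        # seller's cost of listing
    offer = value * (1 - fee)                                      # instant offer net of fee
    open_market = value - hassle                                   # expected net proceeds
    accept = offer > open_market                                   # user model: take the better deal
    conversion = accept.mean()
    fee_revenue = (fee * value * accept).sum()
    return conversion, fee_revenue

# Sweep a treatment space that would be slow and costly to A/B test,
# since resale outcomes take months to observe.
for fee in [0.05, 0.06, 0.07, 0.08]:
    conv, rev = simulate_fee(fee)
    print(f"fee={fee:.0%}  conversion={conv:.1%}  fee revenue=${rev / 1e6:.1f}M")
```

Under this toy model, higher fees lower conversion but raise revenue per accepted offer; the simulation surfaces that trade-off across the whole fee grid in seconds, so a live experiment is only needed to validate the most promising candidates.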
Nelson manages the Risk Science group at Opendoor in San Francisco. His team is responsible for per-home liquidity estimation and developing responsive risk models. Prior to joining Opendoor, Nelson was a data scientist at Google and a software engineer at Metamarkets. He holds a BS in mathematics and an MS and PhD in statistics from Stanford University.
Data Council, PO Box 2087, Wilson, WY 83014, USA - Phone: +1 (415) 800-4938 - Email: community (at) datacouncil.ai