With hosts and guests around the globe, Airbnb aspires to provide a frictionless payments experience where our guests can pay in their local currency via a familiar payment method, and our hosts can receive money via convenient means in their preferred currency. For example, in Brazil the currency is the Brazilian Real, and people are familiar with Boleto as a payment method. Both are quite different from what we use in the US; now imagine this problem spread across the 190 countries Airbnb serves.

In order to achieve this, our Payments team has built a world-class payments platform that is secure and easy to use. The team’s responsibilities include support of guest payments and host payouts, new payment experiences like gift cards, and assisting in financial reconciliation, to name a few.

Because Airbnb operates in 190 countries, we support a great number of currencies and processors. Most of the time, our system functions without incident, but we do encounter some hiccups where a certain currency cannot be processed or a certain payment gateway is inaccessible. In order to catch these interruptions as quickly as possible, the Data Science team has built an anomaly detection system that can identify problems in real time as they develop. This helps the product team detect issues and opportunities quickly while freeing up data scientists’ time to work on A/B testing (new payment methods, new product launches), statistical analysis (impact of price on bookings, forecasting), and building machine learning models to personalize user experience.

To give you a look at the anomaly detection tool we built to detect outliers in our payments dataset, in this blog post I will use a few sets of mock data to demonstrate how the model works. I will pretend to run an ecommerce shop in the summer of 2020 that sells three items popular with hackers: Monitors, Keyboards, and Mouses. I also have two suppliers: Lima and Hackberry.


The primary objective of the anomaly detection system is to find outliers in our time series dataset. Sometimes a high-level view is sufficient, but most of the time we need to cut the data to decipher underlying trends. Let's consider the case below, where we are monitoring imports of Monitors.


The overall numbers for Monitors look quite normal. We then take a look at imports of Monitors from our two suppliers: Lima and Hackberry.


Here we can see that Lima, our major supplier for Monitors, stopped delivering the expected amount for around three days starting on August 18, 2020. We automatically fell back on our secondary supplier, Hackberry, during this time. If we had only looked at the high-level data, we wouldn't have detected the issue; looking a few levels deeper gives us the information to target the right problem.

The model

Simple regression model

An intuitive model is to run a simple Ordinary Least Square regression with dummy variables indicating day of the week. The model takes the following form:

\( y = at + b + \sum_{i=1}^{7} a_i \, {I_\textrm{day}}_i + e \)

where y is the amount we want to track, t is the time variable, \({I_\textrm{day}}_i\) is the indicator variable that denotes if today is the \(i^\textrm{th}\) day of the week, and e is the error term. This model is pretty simple, and it generally does a good job identifying the trend. However, there are several drawbacks:

  • The growth predictor is linear. If we experience exponential growth, it’s not good at modeling the trend.
  • There is a strong assumption that the time series only shows weekly seasonality. It cannot deal with products with other seasonality patterns.
  • Too many dummy variables require a bigger sample size for the coefficients to achieve the desirable significance level.

Even though we can observe the pattern of the metrics we want to track, and manually change the form of the model (i.e. we can add additional dummy variables when observing a strong monthly or yearly seasonality), the process is not scalable. An automated way to identify seasonality helps us to avoid our own bias and enables us to use this technique in data sets beyond payments.
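As a minimal sketch of the regression model above, here is what fitting it might look like with NumPy and pandas. The mock data and column construction are my own illustration, not Airbnb's production code; note that we drop one day-of-week dummy to avoid collinearity with the intercept.

```python
import numpy as np
import pandas as pd

# Mock daily series: linear trend + a weekend bump + noise
rng = np.random.default_rng(0)
dates = pd.date_range("2020-06-01", periods=90, freq="D")
y = 100 + 2 * np.arange(90) + 15 * (dates.dayofweek >= 5) + rng.normal(0, 5, 90)

# Design matrix: time index, intercept, and day-of-week dummies
# (drop one dummy so the intercept is not collinear with the dummies)
t = np.arange(90)
dummies = pd.get_dummies(dates.dayofweek, drop_first=True)
X = np.column_stack([t, np.ones(90), dummies.to_numpy()])

# Ordinary least squares: fit coefficients, then compute residuals e = y - X @ beta
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Flag days whose residual is unusually far from zero
threshold = 3 * residuals.std()
anomalies = dates[np.abs(residuals) > threshold]
```

On clean mock data like this, `beta[0]` recovers the daily growth rate and the residuals center on zero; the anomaly flagging at the end previews the alerting idea developed later in the post.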

Fast Fourier Transform model

When modeling a time series with both trend and seasonality, it's common practice to use a model of the following form:

Y = S + T + e

where Y is the metric; S represents the seasonality; T represents the trend; e is the error term. For example, in our simple regression model, S is represented by the summation of the indicator functions, and T is represented by at+b.

In this section, we will develop new methods to detect trend and seasonality, incorporating the knowledge we gained from the previous section. We will use the sales of the two imaginary products, Keyboards and Mouses, to demonstrate how the model works. The two products’ sales values are depicted in the following graph:


As shown above, Keyboards, our major product, launched in September 2016, and Mouses were introduced in August 2017. We will model the seasonalities and trends, and try to find anomalies where the error terms are too far from the average.


To detect the seasonality, we will use the Fast Fourier Transform (FFT). In the simple linear regression model, we assumed weekly seasonality. As we can see above, there is no strong weekly pattern in Mouses, so blindly assuming one can hurt the model by introducing unnecessary dummy variables. In general, FFT is a good tool for detecting seasonal patterns when we have a reasonable amount of historical data. After applying FFT to both time series, we get the following graph:


where season_day is the period of the cosine wave in the transformation. In FFT, we usually select only the periods with peak amplitudes to represent the seasonality and treat all other periods as noise. In this case, for Keyboards, we see two big peaks at 7 and 3.5 days and another two smaller peaks at 45 and 60. For Mouses, we see a significant peak at 7 days, and some smaller peaks at 35, 60, and 80. The seasonalities of Keyboards and Mouses generated from the FFT are represented in the following graph:
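The peak-selection idea can be sketched with NumPy's FFT routines. This is an illustration on synthetic data, not the exact production pipeline; the 5x-mean amplitude threshold is an assumption I chose for the demo.

```python
import numpy as np

# Synthetic daily series: a weekly cycle plus noise (364 days = 52 full weeks)
rng = np.random.default_rng(1)
n = 364
t = np.arange(n)
y = 10 * np.cos(2 * np.pi * t / 7) + rng.normal(0, 1, n)

# FFT of the de-meaned series; rfft keeps only the non-negative frequencies
spectrum = np.fft.rfft(y - y.mean())
freqs = np.fft.rfftfreq(n, d=1.0)        # cycles per day
amplitudes = np.abs(spectrum)

# Convert each frequency to a period in days and keep only peak amplitudes,
# treating everything below the threshold as noise
periods = 1.0 / freqs[1:]                 # skip the zero frequency
is_peak = amplitudes[1:] > 5 * amplitudes[1:].mean()
dominant_periods = periods[is_peak]
```

Here `dominant_periods` recovers the 7-day cycle; on real data several peaks typically survive the threshold, as in the Keyboards and Mouses spectra described above.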


As we can see, the amplitude of the seasonality for Keyboards grows with time, and it contains a major weekly component, whereas Mouses shows significant seasonality both weekly and over a roughly 40-day period.


Here we will use the rolling median as the trend of the time series. The assumption is that growth is not very significant over a short period of time. For example, for a certain day, we can use the previous 7-day rolling median as the trend level for that day. The advantage of using the median instead of the mean is better stability in the presence of outliers. For example, if a value suddenly jumps to 10x for a day or two, the median-based trend is unaffected, whereas the mean-based trend is not. In this demonstration, we use a 14-day rolling median as our trend, shown in the graph below:
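The median-versus-mean robustness argument is easy to verify with pandas rolling windows. The series below is mock data of my own, with a two-day 10x spike of the kind described above:

```python
import numpy as np
import pandas as pd

# Daily sales with a two-day 10x spike, the kind of outlier described above
rng = np.random.default_rng(2)
dates = pd.date_range("2020-06-01", periods=60, freq="D")
sales = pd.Series(100 + rng.normal(0, 5, 60), index=dates)
sales.iloc[30:32] *= 10   # simulated anomaly

# 14-day rolling trend: the median shrugs off the spike, the mean does not
trend_median = sales.rolling(14).median()
trend_mean = sales.rolling(14).mean()
```

For windows containing the spike, the rolling median stays near the baseline of 100 while the rolling mean is pulled far above it, which is exactly why the median makes the safer trend estimate here.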



After obtaining the seasonality and trend, we evaluate the error term, which we will use to determine whether the time series contains an anomaly. Subtracting the combined trend and seasonality from the original sales data gives us the error term, plotted in the following graph:


As we can see, we have some spikes in the error term, which represent anomalies in the time series data. Depending on how many false positives we can tolerate, we can choose how many standard deviations away from 0 we allow. Here we use 4 standard deviations to get a reasonable number of alerts.
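Putting the pieces together, the alerting step might look like the sketch below. The decomposition here is mocked up directly (in practice, trend and seasonality would come from the rolling median and FFT steps above), and the injected anomaly is my own illustration:

```python
import numpy as np
import pandas as pd

# Mock decomposition: trend + seasonality + noise, with one injected anomaly
rng = np.random.default_rng(3)
dates = pd.date_range("2020-06-01", periods=120, freq="D")
t = np.arange(120)
trend = pd.Series(100 + 0.5 * t, index=dates)
seasonality = pd.Series(10 * np.cos(2 * np.pi * t / 7), index=dates)
sales = trend + seasonality + pd.Series(rng.normal(0, 2, 120), index=dates)
sales.iloc[60] += 40      # injected anomaly

# Error term: what remains after removing trend and seasonality
error = sales - trend - seasonality

# Alert when the error is more than 4 standard deviations from zero
threshold = 4 * error.std()
alerts = error[error.abs() > threshold].index
```

Lowering the multiplier from 4 catches subtler anomalies at the cost of more false positives; that trade-off is the main tuning knob of the system.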


As we see above, the alert system does a good job identifying most of the spikes in the error term, which correspond to anomalies in the data. Note that some of the anomalies we detected are not obvious to the human eye, but they are real anomalies once the seasonality pattern is taken into account.

Overall, based on our internal tests, this model performs well in identifying the anomalies while making minimal assumptions about the data.


Hopefully this blog post provides some insight into how to build a model to detect anomalies. Most anomaly detection models involve modeling seasonality and trend. One key element of modeling is to make as few assumptions as possible; this keeps the model general enough to suit many more situations. However, if certain assumptions greatly simplify your modeling process, don't shy away from adopting them.

Want to work with us? We're hiring!