Ahh, the holidays — a time to enjoy the company of those closest to you and to reflect on the great experiences of the past year. For users of Airbnb, those great experiences often include a trip where they got to see a new place through a local’s eyes, or a chance to meet someone new by hosting. That’s why every year since our inception in 2008, we’ve mailed holiday cards to our community, to thank them for making Airbnb great.

The design of these cards evolved each year, and so did the quantity. This year, reaching so many mailboxes was a daunting challenge, so we got creative and came up with a new idea that would let our guests and hosts share their own season’s greetings with the people who made their year special. We decided to let guests and hosts email greeting cards to one another, with custom art from our awesome designers and a personalized message from the sender about their experience.

Our engineers worked with our designers and content team to figure out the ideal user experience for sending holiday cards.  For instance, we knew that about half of our users open email on mobile devices, so it was important that we practice responsive design and make our page usable at any screen size.


But when it finally came down to putting pen to paper (or finger to keyboard, as it were), we were faced with a question that often comes up for an application of our scale: where should we stick the code?

Let’s back up a little and give some context to our dilemma.  Like many other tech startups, Airbnb started its life as a Ruby on Rails web application, and as the scope of the product has increased, this Rails application has grown as well — to a point that’s way beyond what is healthy or reasonable.  We have a lot of features, from search to messaging to payments to Wish Lists to Neighborhoods, and more.  And each of these features has its own infrastructure dependencies, ranging from data stores like MySQL and Redis to services like ZooKeeper, Memcached, Resque, and email/SMS delivery.  To mix these all up in one application environment introduces incidental complexity, where the stability of one feature can be compromised by the failure of something completely unrelated!

To cure ourselves of our Monorail (Monolithic Rails), we’ve started the arduous work of shifting to a Service-Oriented Architecture.  Now, most of the time when people talk about SOA, they think of one web application talking to several backend API services that deliver data in a serialized format like JSON — for instance, a user search service or a listing search service.  We take this approach to SOA as well, but we also like to think of each of our major features as its own web service.  When you browse Neighborhoods looking for a bohemian area with great nightlife, that has nothing to do with payments or messaging.  And when you’re sending a holiday card, that is a completely separate concern from our core booking flow.

So we made Holidays its own Rails application. This isn’t the first time we’ve taken this approach with a new feature. As I hinted before, our Neighborhoods product is a separate application, as is our mobile website. After building a few of these applications, we’ve developed some patterns that we’d like to share. They are far from ideal solutions, but they serve as good intermediate steps that enable a more service-oriented architecture while we build out the infrastructure for true SOA. Our hope is that other teams looking to shift away from their own Monorails can use some of these techniques in their transition.

Sharing Code via Gems

As you write more applications, you’ll discover more and more pieces of your original Monorail that need to be shared between applications. For example, all of our web services make use of the roles functionality in our User model for authentication and personalization. While this functionality will soon be abstracted behind an internal service with a well-defined API, one intermediate solution that has worked for us is refactoring pieces of model functionality into gems that can be shared across web services. For the user roles case, we’ve taken the code that deals with user roles and refactored it into a module that lives inside one such internal gem. When a web service needs access to that functionality, we mix the module into that application’s User class. Other use cases for shared code in gems include logging, cache observers, and internationalization helpers. Internationalization in particular was very important for our Holidays app because so many of our trips cross international borders.
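The mixin pattern described above can be sketched in a few lines of Ruby. This is a simplified illustration, not our actual roles code: the module and method names are hypothetical, and a real model would persist roles in the database rather than in memory.

```ruby
# Hypothetical sketch of a roles module extracted into an internal gem.
# In a real app this would live in the gem's lib/ directory and the
# roles would be backed by a database column, not an instance variable.
module UserRoles
  def add_role(role)
    roles << role.to_sym unless has_role?(role)
  end

  def has_role?(role)
    roles.include?(role.to_sym)
  end

  def roles
    @roles ||= []
  end
end

# Each web service mixes the shared behavior into its own User class:
class User
  include UserRoles
end

user = User.new
user.add_role(:host)
user.has_role?(:host)  # => true
```

Because the module carries no application-specific assumptions, each service can include it in whatever its local User class looks like, and the behavior stays consistent across services until a proper roles API service exists.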

We’ve also found the gem pattern useful for sharing our styles.  We maintain an internal Bootstrap-like library for standardizing styles and components across our applications, making it easy for front-end developers to quickly achieve a recognizable Airbnb look and feel.  This also eases the collaboration process between designers and developers because designers know what components are available for easy reuse across projects.  We’re also currently using this gem to share assets but have run into some problems with asset paths and are in the process of writing a service to handle asset sharing.

Proxying Requests to Multiple Applications

Although we are running multiple applications, we don’t want to make that apparent to visitors; the holidays app should be accessible at a nice clean URL like https://www.airbnb.com/holidays.  Our current solution is to use Nginx’s HttpUpstreamModule and HttpProxyModule.  We use the `upstream` directive to define a set of servers (e.g. the Holidays app instances), then use the `proxy_pass` directive to pass requests that match a certain pattern (e.g. “^/holidays”) to that upstream location.
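A minimal sketch of that configuration, with illustrative hostnames and ports standing in for our real instances:

```nginx
# Illustrative only — real server addresses and counts will differ.
upstream holidays_app {
  server holidays1.internal.example.com:3000;
  server holidays2.internal.example.com:3000;
}

server {
  listen 80;
  server_name www.airbnb.com;

  # Requests matching /holidays are proxied to the Holidays instances;
  # everything else falls through to the main application.
  location ~ ^/holidays {
    proxy_pass http://holidays_app;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```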

Although it is a straightforward and fast way to begin routing requests to different web services, using Nginx as a proxy isn’t actually a great solution. The upstream directive requires a static list of servers, so rolling servers requires manually updating the configuration and restarting Nginx. We avoid this problem by pointing to an Amazon Elastic Load Balancer, but this means getting a user to the Holidays application requires multiple round trips. Worse, though, ELBs have dynamic IP addresses, which is problematic because the version of Nginx we run caches the IP addresses of destination servers. So when Amazon had some trouble with ELBs over New Year’s, the Holidays load balancer was assigned a new IP — and our Nginxen were left forwarding requests to some other random instance in the Amazon cloud. We’re currently replacing Nginx proxying with an in-house routing service.
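For those stuck with Nginx-to-ELB proxying in the meantime, one commonly used workaround for the IP-caching problem is to put the backend hostname in a variable: when proxy_pass receives a variable, Nginx re-resolves the name at request time via the resolver directive instead of caching the IP at startup. A sketch, with an illustrative DNS server address and ELB hostname:

```nginx
# Using a variable forces Nginx to re-resolve the ELB's hostname
# per request (subject to the resolver's TTL), so a changed ELB IP
# is picked up without a restart. Addresses here are illustrative.
resolver 172.16.0.23 valid=10s;

location ~ ^/holidays {
  set $holidays_backend "holidays-elb.us-east-1.elb.amazonaws.com";
  proxy_pass http://$holidays_backend;
}
```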

Service Discovery

While I’ve mostly talked about our web services thus far, we also have several backend services that all our web services depend on, such as our listing search service and our logging service.  Our web applications need some way to discover the hosts of these services, a problem that is known as service discovery.  We currently use Apache ZooKeeper for this: instances running a given service (e.g. an instance running search) register themselves with ZooKeeper, and client services ask ZooKeeper for the current list of instances providing that service.  This is a fairly common usage of ZooKeeper that is well documented by others.
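The register-and-lookup pattern can be sketched with a small in-memory stand-in for ZooKeeper. To be clear, this is illustrative only: in a real deployment, register would create an ephemeral znode under a path like /services/search via a ZooKeeper client, so entries vanish automatically when an instance dies, and clients would watch the node for changes.

```ruby
# Illustrative in-memory stand-in for a ZooKeeper-backed registry.
class ServiceRegistry
  def initialize
    @services = Hash.new { |hash, key| hash[key] = [] }
  end

  # An instance running a service announces its host:port.
  # (With ZooKeeper, this would be an ephemeral znode.)
  def register(service, host_port)
    @services[service] << host_port unless @services[service].include?(host_port)
  end

  # When an instance goes away, its entry is removed — ZooKeeper's
  # ephemeral nodes do this automatically on session loss.
  def deregister(service, host_port)
    @services[service].delete(host_port)
  end

  # Client services ask for the current list of instances.
  def instances(service)
    @services[service].dup
  end
end

registry = ServiceRegistry.new
registry.register("search", "10.0.1.5:9200")
registry.register("search", "10.0.1.6:9200")
registry.instances("search")  # => ["10.0.1.5:9200", "10.0.1.6:9200"]
```

The value of the pattern is that clients never hardcode hosts: they ask the registry at connection time, so instances can come and go without configuration changes in every dependent application.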

One problem we’ve run into though is that each application has to embed and maintain its own service discovery layer.  This can pose integration problems, and is just not very feasible for some applications.  Our mobile website, for instance, uses Node.js on the server, but JavaScript ZooKeeper clients are not yet as mature as we’d like for production use.  To ease the process of service discovery, we wrote an in-house system that handles all the hard work of service discovery for you; applications need only connect to this process in order to find and contact a given service.  We’re currently rolling this out as a replacement for our old service discovery methods, and hope to open source it in the future.

Move fast, don’t break things


Because of the practices we’ve adopted around separating web services, we were able to develop a completely separate Holidays application in just five days with all the standard Airbnb batteries included — authentication, styling, logging, and internationalization.  Best of all, the Holidays team was able to work separately from everyone else, releasing and deploying new builds on their own time without fear of threatening other features’ stability.

The end result?  Our community shared over 300,000 holiday cards with the people who made their year more meaningful.  It was great to get cards with personal notes from hosts I’ve stayed with wishing me well and hoping that I’ll come visit again.  Check out the site, and although it’s a bit late now, consider sending a card if you stayed on Airbnb last year!

Does creating a scalable and reliable service-oriented architecture for the world’s fastest-growing marketplace for space sound like a fun challenge to you?  Check out our jobs page!

Follow the adventures of the Airbnb engineers at @AirbnbNerds.
Follow the misadventures of Topher, this post’s author, at @clizzin.
