Scaling buddycloud with Fanout

Buddycloud May 08 2013

We want the buddycloud social network stack to be useful for everyone, from hobbyists to large service providers. To help address the scaling needs at the higher end of that spectrum, we’ve added support for Fanout in buddycloud’s HTTP API component. Fanout is a cloud service that handles realtime data delivery. It’s similar to Pusher and PubNub, but with an emphasis on APIs.

Normally, the buddycloud HTTP API component sends realtime updates to clients directly. To scale up to very large numbers of connections, it becomes necessary to distribute the connection pooling across multiple servers. This is where Fanout can help: rather than adding extra servers of your own to handle high load, you can route the pushes through the Fanout CDN instead. This way, you get effectively limitless scaling of realtime pushes without having to manage any additional servers.

To use it, just put your Fanout realm and key in the gripProxies section of the buddycloud HTTP API’s config.js:

exports.production = {
    ...
    gripProxies: [
        ...
        {
            key: new Buffer('{key}', 'base64'),
            controlUri: '{realm}',
            controlIss: '{realm}'
        }
    ]
}

Then, make sure your API domain is registered within Fanout and set as a CNAME record:

    IN CNAME httpfront-{id}

That’s all it takes. If you’re using buddycloud as part of a large service then you can rest easy knowing that realtime updates will never be a bottleneck. Thousands of people could have a channel page open and see updates instantly. We’ve got this enabled on our demo site as well.

How we did it

Some months back, Fanout’s founder, Justin Karneges, approached us about a possible integration. At first we were skeptical. We understood the value of a CDN, but it was important to us that buddycloud always have the ability to run standalone and not depend on a third-party service. Therefore, the feature would have to be optional. Additionally, we didn’t want this to break our API. We already had a long-polling API defined, and we didn’t want the API to be different depending on whether Fanout routing was enabled.

Fanout uses a concept called “GRIP” proxying that addresses all of these concerns. The approach dictates that a special proxy server sit in front of our webserver, handling the work of pushing out lots of data to clients. Because it acts as an HTTP proxy, we’re able to retain our API. Our webserver then speaks the open GRIP protocol to the proxy. This is the same protocol that the open source Pushpin proxy uses. What’s neat is that buddycloud only needs to support the GRIP protocol, and it can be fronted by a local Pushpin instance, the Fanout cloud, or any future GRIP-supporting proxy. This means we have a single code path, and the code isn’t even Fanout-specific. We like standards.

Below is a diagram of how a standalone (non-Fanout) server setup looks:

In this case, a local Pushpin instance sits on the same physical server as the buddycloud HTTP API component, so that the server itself is capable of performing its own realtime deliveries. This means the server maintains all the open connections with any clients. The end result is more or less the same as before we converted the HTTP API code to use GRIP, although we hope the code will be more maintainable this way, too.

Now for the fun part. If you configure buddycloud to use the Fanout cloud service, then you essentially replace Pushpin in the scheme:

Here the Fanout CDN represents Fanout’s global cluster of servers. With this setup, the buddycloud server doesn’t need to maintain any of the open connections, and scaling concerns are handled by Fanout.


There were two main changes we had to make to get this to work. First, at any place where we were holding an HTTP request open as a long-poll, we changed the code to reply immediately with a special GRIP response instead. This special response tells the proxy service to hold the connection open on our behalf. The held connection is bound to a channel identifier so that we can later push to it using that channel. Second, whenever new data is available we now publish it to the GRIP service. What’s nice is we publish the data regardless of whether or not there are any held connections. This means we no longer need to deal with connection management.
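To make that “special GRIP response” concrete, here is a sketch (in plain Node, no libraries) of the instruct body the proxy receives. The field names follow the GRIP protocol; the channel name, prev-id, and timeout response below are made-up illustration values, and in our real code the grip library builds this for us.

```javascript
// Build an application/grip-instruct body telling the proxy to hold the
// request open on our behalf. Values here are illustrative placeholders.
function buildHoldInstruct(channel, prevId, timeoutResponse) {
    return JSON.stringify({
        hold: {
            mode: 'response', // long-poll: release the hold on the next publish
            channels: [{ name: channel, 'prev-id': prevId }]
        },
        // what the proxy should send if the hold times out with no publish
        response: timeoutResponse
    });
}

var instruct = buildHoldInstruct('np-anonymous-json', '1', {
    code: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ last_cursor: '1', items: [] })
});
console.log(instruct);
```

Because the webserver replies immediately with this small document, it never holds the client connection itself; the proxy does.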

The buddycloud HTTP API component is written in Node with Express, so we used the Nodegrip library for our GRIP support. Below we define some utility methods.

First, the sendHoldResponse() method sends an HTTP response with GRIP instructions. It specifies the channel to bind to and defines the HTTP response to use in case of a timeout. We use this method wherever we need to cause a request to long-poll.

var griplib = require('grip');

exports.sendHoldResponse = function(req, res, channelBase, prevId) {
    var origin = req.header('Origin', '*');
    if (origin == 'null') {
        origin = '*';
    }
    var headers = {};
    headers['Access-Control-Allow-Origin'] = origin;
    headers['Access-Control-Allow-Credentials'] = 'true';
    headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE';
    headers['Access-Control-Allow-Headers'] = 'Authorization, Content-Type, X-Requested-With, X-Session-Id';
    headers['Access-Control-Expose-Headers'] = 'Location, X-Session-Id';

    var channel;
    var contentType;
    var body;
    if (req.accepts('application/atom+xml')) {
        channel = channelBase + '-atom';
        contentType = 'application/atom+xml';
        body = '<?xml version="1.0" encoding="utf-8"?>\n' +
            '<feed xmlns=""/>\n';
    } else if (req.accepts('application/json')) {
        channel = channelBase + '-json';
        contentType = 'application/json';
        body = {};
        body['last_cursor'] = prevId;
        body['items'] = [];
        body = JSON.stringify(body);
    } else {
        res.send(406);
        return;
    }

    var channelObj = new griplib.Channel(channel, prevId);
    headers['Content-Type'] = contentType;
    var response = new griplib.Response({'headers': headers, 'body': body});
    var instruct = griplib.createHoldResponse(channelObj, response);
    console.log('grip: sending hold for channel ' + channel);
    res.send(instruct, {'Content-Type': 'application/grip-instruct'});
};

The code’s a bit large since we want our timeout response to support CORS along with both Atom and JSON encodings. The actual GRIP stuff is in the last few lines.

Next, the publishAtomResponse() method pushes the HTTP response that we want to send to any connected clients. As we want to support both Atom and JSON formats, this is done by having a channel for each format (suffixed with “-atom” or “-json”). We bind to one or the other channel based on the Accept header in a request, and whenever we publish we send to both channels.

exports.publishAtomResponse = function(origin, channelBase, doc, id, prevId) {
    if (origin == 'null') {
        origin = '*';
    }
    var headers = {};
    headers['Access-Control-Allow-Origin'] = origin;
    headers['Access-Control-Allow-Credentials'] = 'true';
    headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE';
    headers['Access-Control-Allow-Headers'] = 'Authorization, Content-Type, X-Requested-With, X-Session-Id';
    headers['Access-Control-Expose-Headers'] = 'Location, X-Session-Id';
    headers['Content-Type'] = 'application/atom+xml';

    var response = doc.toString();
    var channel = channelBase + '-atom';
    console.log('grip: publishing on channel ' + channel);
    grip.publish(channel, id, prevId, headers, response);

    headers['Content-Type'] = 'application/json';
    if (id != null) {
        response = {};
        response['last_cursor'] = id;
        response['items'] = atom.toJSON(doc);
        response = JSON.stringify(response);
    } else {
        response = JSON.stringify(atom.toJSON(doc));
    }
    channel = channelBase + '-json';
    console.log('grip: publishing on channel ' + channel);
    grip.publish(channel, id, prevId, headers, response);
};

The grip.publish() method seen above is another convenience method. It handles the GRIP payload formatting, and also supports publishing to multiple GRIP proxies at once:

// Cached PubControl clients, built lazily on the first publish.
var pubs = null;

exports.publish = function(channel, id, prevId, rheaders, rbody, sbody) {
    if (!config.gripProxies || config.gripProxies.length < 1) {
        return;
    }
    if (!pubs) {
        pubs = [];
        for (var i = 0; i < config.gripProxies.length; ++i) {
            var gripProxy = config.gripProxies[i];
            var auth = null;
            if (gripProxy.controlIss) {
                auth = new pubcontrol.Auth.AuthJwt({'iss': gripProxy.controlIss}, gripProxy.key);
            }
            pubs.push(new pubcontrol.PubControl(gripProxy.controlUri, auth));
        }
    }
    var formats = [];
    if (rbody != null) {
        formats.push(new griplib.HttpResponseFormat(200, 'OK', rheaders, rbody));
    }
    if (sbody != null) {
        formats.push(new griplib.HttpStreamFormat(sbody));
    }
    var item = new pubcontrol.Item(formats, id, prevId);
    for (var i = 0; i < config.gripProxies.length; ++i) {
        (function() {
            var gripProxy = config.gripProxies[i];
            pubs[i].publish(channel, item, function(success, message) {
                if (!success) {
                    console.log('grip: failed to publish to ' + gripProxy.controlUri + ', reason: ' + message);
                }
            });
        }());
    }
};

With these convenience methods in place, implementing our /notifications/posts endpoint was easy. When a request is received but we have no items to deliver, we call sendHoldResponse() to cause the request to stay open. Whenever a new item becomes available, we call publishAtomResponse() to have it delivered down any open connections.
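As a rough sketch of that flow (this is not the actual buddycloud route; itemQueue, req.jid, and the stub objects below are simplified stand-ins for illustration):

```javascript
// Sketch of the /notifications/posts wiring: answer immediately when items
// are waiting, otherwise hand the request to the GRIP proxy via a hold.
function makePostsHandler(itemQueue, sendHoldResponse) {
    return function(req, res) {
        var items = itemQueue.itemsSince(req.query.since);
        if (items.length > 0) {
            // data is already waiting: answer the long-poll immediately
            res.send({ last_cursor: itemQueue.cursor, items: items });
        } else {
            // nothing to deliver yet: let the proxy hold the open request
            sendHoldResponse(req, res, 'np-' + req.jid, req.query.since);
        }
    };
}

// Exercise the "no items yet" path with stub req/res objects.
var held = [];
var handler = makePostsHandler(
    { cursor: '2', itemsSince: function() { return []; } },
    function(req, res, channelBase, prevId) { held.push(channelBase); }
);
handler({ jid: 'anon@example.org', query: { since: '2' } }, {});
console.log(held[0]); // prints "np-anon@example.org"
```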

The buddycloud HTTP API maintains an item queue per logged-in user, with each one backed by a persistent XMPP session. The GRIP channels are therefore unique per user, using the form “np-{jid}” (“np” standing for “notifications/posts”). Requests by unauthenticated users share a single anonymous JID, and that’s where the big scaling win is. If there are 1000 anonymous browsers viewing a channel, the GRIP service will deliver updates to all of them with a single push from the buddycloud HTTP API. In a future revision we will look into how to better scale updates for logged-in users.
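The anonymous fan-out described above can be sketched as a toy model (the JID below is a made-up placeholder; real JIDs come from the XMPP session):

```javascript
// Per-user channel scheme: "np-{jid}".
function notificationsChannel(jid) {
    return 'np-' + jid; // "np" = notifications/posts
}

var anonJid = 'anon@example.org';
// 1000 anonymous viewers all long-poll bound to the same channel...
var channels = [];
for (var i = 0; i < 1000; ++i) {
    channels.push(notificationsChannel(anonJid));
}
// ...so a single publish to that one channel reaches every one of them.
var unique = {};
channels.forEach(function(c) { unique[c] = true; });
console.log(Object.keys(unique).length); // prints 1
```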


With our changes in place, what does an API call look like?

GET /notifications/posts?since=cursor:1
Accept: application/json

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Authorization, Content-Type, X-Requested-With, X-Session-Id
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Expose-Headers: Location, X-Session-Id
Access-Control-Allow-Origin:
Content-Length: 250
Content-Type: application/json

{"last_cursor":"2","items":[{"id":"02befca2-5b5f-4ae1-801f-5d2113bf8ba5","source":"","author":"","published":"2013-05-13T20:17:56.537Z","updated":"2013-05-13T20:17:56.537Z","content":"test post"}]}

Answer: it looks exactly the same as it did before this change. That’s the whole point. There may be a proxy service in the middle, but to the outside world it looks like buddycloud’s API.

Multiple GRIP configurations

The buddycloud HTTP API can be configured to use multiple GRIP proxies at once. This makes it possible for the component to work with both a local Pushpin instance and the Fanout service simultaneously. To do this, keep Pushpin in the network path, and set upstream_key in pushpin.conf with your Fanout key:
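The original snippet didn’t survive here; assuming Pushpin’s INI-style pushpin.conf, the setting presumably looks something like the following sketch (the section placement is a guess, and {key} is the same placeholder as in config.js):

```ini
# pushpin.conf (sketch; exact section may differ in your Pushpin version)
[proxy]
upstream_key={key}
```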


The resulting topology looks like this:

Why would you want to do this? The main reason is that it helps with transitioning from a non-Fanout setup to a Fanout setup. With this configuration, the buddycloud HTTP API publishes messages to both services, such that either one is capable of handling realtime connections at any moment. This can be very useful right when you set your CNAME domain to point at Fanout. While you’re waiting for the DNS change to propagate, realtime pushes will still work via the local Pushpin instance. In fact, turning Fanout on and off at this point is just a matter of a DNS change; no additional configuration or restarts of buddycloud are required. You can even point your /etc/hosts file at Fanout within your test environment to confirm everything works before making the official DNS change. This greatly reduces the risk involved in configuring and activating Fanout.
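As an illustration, such an /etc/hosts override might look like this, where both the address and the domain are placeholders (you’d use your real API domain and the address your httpfront-{id} CNAME target resolves to):

```
# /etc/hosts on a test machine; placeholder address and domain
203.0.113.10    api.example.com
```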


We are really impressed with Fanout’s approach to scaling realtime. Of all the similar services available, it’s the only one we could ever see ourselves using, and we love the very open way it goes about things. buddycloud will soon be offering a hosted service for large businesses, and it’s nice to know that the scalability of realtime deliveries is one less thing we’ll have to worry about.