
When it comes to running MongoDB at scale, the entire stack matters


The one thing you need to know about successfully scaling MongoDB? The entire stack matters. We could write several books on optimizing your clusters at scale, but we’ll start with a couple of blogs first. In this blog, we’ll cover issues with scaling your network and scaling your app.

Let’s start with some definitions:

Scalability is the ability of a system to function within a pre-defined acceptable level as its workload increases. The pre-defined acceptable level varies per application. For example, on a social app you may want your users to get a response within a few milliseconds. However, for an application that books airfare, having the end user wait 10 seconds before finalizing the booking is usually acceptable.

Workload, in this blog and in most cases, refers to the number of active end users.

Entire Stack refers to the application, the network, and (of course) the database.

First, we’ll talk about considerations for the network.

Scaling the Network

Latency
It's very important to know the network latency between your application tier and the database tier. Network latency is an expression of how much time it takes for a packet of data to get from one designated point to another. The lower the latency, the better.

In order to improve latency, your application tier and your database servers must be as close together as possible, if not in the same datacenter.

At this point you may be wondering, "My servers are in AWS, so ObjectRocket is not a good option latency-wise." Wrong (or at least not completely true). If you are using ObjectRocket's IAD (Virginia) and LON (London) zones and your application is in US-EAST or EU-LON, you can utilize a no-to-low-latency link between ObjectRocket and AWS called Direct Connect. Additionally, we have expanded our presence to other cloud providers, like Azure, to help you take full advantage of our services while keeping latency low. (We've also begun testing our Next Generation Platform on the AWS cloud.)

Bandwidth
Bandwidth is defined as the amount of data that can be transmitted in a fixed amount of time. It's usually measured in bps (bits per second), with most applications seeing units of megabits or gigabits per second.

Insufficient bandwidth can have a negative effect on your performance by delaying response times. It's very important to know and monitor the network capacity between the database and the application tier. At ObjectRocket, we monitor your network capacity in order to prevent bad customer experiences.

Bandwidth best practices
From the application side, here are some best practices to follow in order to minimize the bandwidth requirements:

Projections
Whenever an application issues a read request (query), MongoDB sends back the entire matching document(s). If your app only needs a portion of the document and not the whole thing, you can add a projection to the query so that only the fields you need are transferred.

Projection example
A user searches a product catalog for “air conditioning unit”.
There are more than 100 specs (fields) associated with "air conditioning unit", but it's not practical to display all 100, as that would make it hard for the user to view and compare the results.

Instead, you could fetch just the price, the brand name, the BTU rating, and the stock availability. If the customer is interested in a particular model, they can click to get the full details. On the initial search, the application can craft the query to use a projection:

db.products.find({ type: "air conditioning unit" }, { product_name: 1, price: 1, btu: 1, availability: 1 })

If the entire document is a few KB, you may end up transferring only a few bytes over the wire with this technique.

Shorten field names
As described above, a read request transfers entire documents over the wire. A document consists of field names and their values. Making the field names shorter saves bandwidth because it decreases each document's size. For example, in the product catalog described above, instead of "product_name" the field can be named "n_p", and the price field can simply be named "p".
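
As a rough sketch (the field names and values here are hypothetical, not from a real catalog), the same document before and after shortening might look like this:

// Descriptive field names, repeated in every document:
{ product_name: "ACME Chill 9000", price: 499, btu: 9000, availability: true }

// Shortened field names: the same data in fewer bytes per document:
{ n_p: "ACME Chill 9000", p: 499, b: 9000, a: true }

The trade-off is readability, so keep a mapping of short names to their human-readable equivalents in your application code or documentation.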

Network compression
If you are using MongoDB 3.4 or higher, there's a new option available in the configuration file named "net.compression.compressors". This setting enables/disables compression at the network layer. However, it's only effective when both the mongod process and the driver have compression enabled.

The driver must support OP_COMPRESSED in order for compression on the network layer to be functional. (View more details here.) The driver and the mongod process then exchange compressed messages, which means less bandwidth is needed during the transfer. The penalty comes on the CPU side.

Compressing/decompressing requires more CPU cycles. The best practice is to start with the snappy compression algorithm and, if the CPUs can handle it, move to the zlib compression algorithm. Zlib achieves a better compression ratio but needs more CPU.
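
As a minimal sketch (the hostname and credentials are placeholders, and exact compressor support varies by MongoDB version and driver), the server side is enabled in the mongod configuration file and the application side via the compressors option in the connection string:

# mongod.conf
net:
  compression:
    compressors: snappy,zlib

// Application side: request the same compressors in the connection string
mongodb://appuser:secret@db.example.com:27017/catalog?compressors=snappy,zlib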

Scaling the Application

Drivers
Always use the latest drivers. The latest drivers give you the option to upgrade to the latest version of MongoDB so you can make the most of new capabilities, and they also provide improved performance through bug fixes. For example, if you don't have one of the latest drivers, your application won't be able to use the OP_COMPRESSED feature described in the previous section.

Driver settings are also important. The defaults work well most of the time, but to maximize performance you may have to tweak some driver parameters. The most common tweaks are to the timeout and connection pool settings.
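
For example, here's a sketch using the Node.js driver (the hostname, database name, and option values are illustrative, not recommendations; check your driver's documentation for the exact option names it supports):

const { MongoClient } = require("mongodb");

// Tune the pool and timeouts explicitly instead of relying on defaults.
const client = new MongoClient("mongodb://db.example.com:27017", {
  maxPoolSize: 100,          // upper bound on concurrent connections in the pool
  connectTimeoutMS: 10000,   // fail fast if the server is unreachable
  socketTimeoutMS: 30000     // abort operations that stall on the socket
});

async function main() {
  await client.connect();
  const products = client.db("catalog").collection("products");
  console.log(await products.countDocuments());
  await client.close();
}

main().catch(console.error);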

Throttling
You may be thinking: "Wait, are you telling us to slow down our application?!" Yes and no. Throttling doesn't make sense for some applications. However, a throttling mechanism is a good idea when you need something in place to cope with traffic bursts. Using our example of the product catalog, let's assume you log all visits on the website to build the user experience. Throttling those events on the database won't be catastrophic for your business. Note that throttling events requires a queuing mechanism; either Redis or Kafka is a decent option to implement this for your application.
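
Here is a minimal sketch of the pattern (the collection names, batch size, and interval are made up, and the queue is an in-process array purely for illustration; a production setup would back it with Redis or Kafka as mentioned above):

const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://db.example.com:27017");
const queue = [];                      // stand-in for a Redis list or Kafka topic

// The web tier only enqueues; it never writes to MongoDB directly.
function logVisit(userId, page) {
  queue.push({ userId, page, ts: new Date() });
}

// A background worker drains the queue at a controlled rate.
async function flush() {
  if (queue.length === 0) return;
  const batch = queue.splice(0, 500);  // cap the writes sent per interval
  await client.db("analytics").collection("visits").insertMany(batch);
}

async function start() {
  await client.connect();
  setInterval(() => flush().catch(console.error), 1000);
}

start().catch(console.error);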

Caching
The vast majority of modern applications can use caching to their advantage. By caching, we mean temporary fast storage, usually an in-memory database like Redis. The role of caching is to keep frequently accessed data as close to the application as possible in order to minimize hits on the permanent storage (in this case, MongoDB). Using our product catalog example: since I know that it's summer and my users are frequently searching for AC units, I can probably cache the AC unit search results in a Redis database and minimize the hits on MongoDB. Caching is a good fit for static data or data that doesn't change frequently. It can easily be extended to frequently altered data under some assumptions, sacrificing accuracy for speed.
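
Here's a rough cache-aside sketch (the key name, TTL, and connection details are assumptions; it uses the ioredis and mongodb Node.js packages):

const Redis = require("ioredis");
const { MongoClient } = require("mongodb");

const redis = new Redis();             // in-memory cache
const mongo = new MongoClient("mongodb://db.example.com:27017");

async function searchProducts(type) {
  const key = `search:${type}`;

  // 1. Try the cache first.
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // 2. On a miss, query MongoDB (reusing the projection from earlier).
  const results = await mongo.db("catalog").collection("products")
    .find({ type }, { projection: { product_name: 1, price: 1, btu: 1, availability: 1 } })
    .toArray();

  // 3. Cache the results with a short TTL so stale entries age out.
  await redis.set(key, JSON.stringify(results), "EX", 300);
  return results;
}

// Usage:
mongo.connect()
  .then(() => searchProducts("air conditioning unit"))
  .then(console.log)
  .catch(console.error);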

Summing it up

We've highlighted the most important areas to consider, but there are many more items to cover: firewalls, connection handling, and bulk operators, as well as application design and code audits. This could easily turn into a book.

That's why DBaaS is so important. We're here to help you optimize your entire stack. If you're already a customer, all you have to do is open a ticket with support@objectrocket.com and let us help you improve your performance. If you're not a customer yet and are interested in our services, contact us at sales@objectrocket.com.