Speed Up Your Cloud with Memcached

Is your website running into performance bottlenecks? Does the database or backend feel like a really expensive resource, even though you’ve got a huge cluster set up to improve parallel processing? Read on to find out why you should include Memcached, a distributed cache, in your cloud-based application.

Caching is a concept that almost all developers use in some form or another in their applications. It is essentially about storing a piece of information in memory so that it can be retrieved quickly later, thus speeding up your application. Caching is mostly used for data that is accessed repeatedly: instead of recalculating it or fetching it from disk every time, which is slow, we look it up directly in the cache, which is much faster.
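At its simplest, the idea looks like this. Here is a minimal in-process sketch in Python, where expensive_query is a hypothetical stand-in for any slow computation or database read:

import time

cache = {}

def expensive_query(key):
    # Hypothetical stand-in for a slow database lookup or computation.
    time.sleep(1)
    return key.upper()

def get_value(key):
    if key in cache:                # cache hit: served from memory
        return cache[key]
    value = expensive_query(key)    # cache miss: do the slow work once
    cache[key] = value              # remember it for next time
    return value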

In the context of Web applications, for example, a dynamic Web page can be cached on the server, so that it does not have to be generated again when a new request comes in, provided the dynamic data in the page hasn’t changed in the meantime. That is the easy part; caches can be used at multiple places in the application stack, so you have quite a few options when it comes to choosing where to cache, what to cache and how to cache.

Without getting into too much jargon, here are some of the techniques for how and where you might cache data:

  • Browser caching: As Web developers will be aware, some data, such as images, can be cached on the client side in the browser, so that it is reused automatically when repeated requests for that resource are made.
  • Content delivery network (CDN): A CDN is a network of servers that are geographically dispersed; users closest to a particular server in the CDN are then served by that server, reducing data transfer time from far-off servers, and also taking the load off the primary servers.
  • Reverse caching proxies: A reverse caching proxy is another architectural option, where a proxy server sits between the client and the server. Here the client requests resources from the proxy server, which in turn either retrieves the resources on the client’s behalf, or returns the data present in its cache. So, the client feels as if the proxy server is the source of information.
  • Server-side caching: Data or objects can alternatively be cached on the server-side itself. This can either be a local server cache, a centralised caching server or a distributed cache.
  • Local database query cache: A good database caches queries or data internally, improving both the speed of lookups and the overall performance of the database.

You may choose to implement a cache in one way or another, or you might use a combination of more than one technique to cache different types of data at separate levels. But more importantly, it is helpful to know whether you even need caching in the particular application/use-case you are thinking about.

Many people who implement a cache actually lose out because it is implemented wrongly, so the cache ends up slowing the application down instead of speeding it up. Getting fancy software with fancy features doesn’t always make sense; using even the modest ones in the right way does.

Why you need Memcached

This discussion assumes that you have set up a cluster, and you want to implement caching. In this case, what happens if you start caching on each node independently? You will see that some nodes face memory issues, while others have quite a bit of memory left. Moreover, most of the data stored in their individual caches is redundant.

This calls for a centralised caching mechanism that makes sense in the cloud domain, one in which the data cached across a cluster is evenly distributed and stored only once for the whole cluster. And yes, the answer is Memcached.

Memcached is that piece of the puzzle without which your cloud implementation does not even make sense. It provides a solution in which the available memory in the cache is the sum of that on all nodes on which the Memcached instance is running. So if, for example, you have 10 nodes, with each being allocated 1 GB of memory for caching, you get a total of 10 GB of cache available for the whole cluster. Here are some features in Memcached that might lure you into using it within the context of your application:

  • Easy scalability: This feature is applicable for almost any software with the tag of “distributed”, but still, it is worth noting that Memcached needs minimal configuration to add a new node, with almost no special interconnect requirements. Your available memory in the cache just increases on the fly.
  • Hidden complexity: Memcached hides all the complexity of storing and retrieving data from a cache spread across a cluster. All we need to provide is the key associated with the data; the whole task of determining which node to store the data on, or retrieve it from, is handled by the Memcached client itself.
  • Minimal impact of a node failure: Even if a particular Memcached node does fail, it has almost no impact on the overall cache other than reducing the available memory, and a minor increase in the number of cache misses.
  • Flexible architecture: Memcached does not impose a restriction on all nodes to have a uniform cache size. So, some of your nodes with less physical memory can be set up to contribute perhaps only 512 MB to the cluster, while others may have 2 GB of memory dedicated for the Memcached instance. Apart from this, you can even run more than one instance of Memcached on a single node.
  • Multiple clients available: Memcached has client APIs available for various languages like PHP, C++, Java, Python, Ruby, Perl, .NET, Erlang, ColdFusion and even more.
  • Cross-platform: Memcached is available for a wide variety of platforms including Linux, BSD and Windows.
  • Multi-fetch: With the help of this feature, we can request values for more than one key at once, instead of querying them in a loop, one by one, which costs a lot of network round-trips (see the sketch after this list).
  • Constant-time functions: An operation in memory takes the same amount of time whether it involves a single key or a hundred. Together with the multi-fetch feature discussed above, this keeps bulk lookups cheap.
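To make the last two features concrete, here is a sketch using the python-memcached client, one of the many clients listed above; the server addresses are placeholders:

import memcache

# The client is populated with every server in the cluster; it decides,
# per key, which server to talk to -- the application never has to.
mc = memcache.Client(['10.0.0.1:11211', '10.0.0.2:11211', '10.0.0.3:11211'])

mc.set('user:42', {'name': 'Asha'}, time=300)   # stored on exactly one node

# Multi-fetch: one batched request instead of a loop of single gets.
values = mc.get_multi(['user:42', 'user:43', 'user:44'])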

There are many more features that I may have missed, but I am just skimming the surface at the architectural level.

How it works

Memcached works on some very strong and unique principles, which you, as a developer, should know about in order to develop effective applications with it. Listed below are some of the major principles behind how Memcached handles data under the hood.

Consistent hashing

When Memcached receives a request, it must decide which node to store the data on, or retrieve it from, irrespective of how many Memcached nodes are active. Rather than choosing a random node, or load-balancing across all the nodes round-robin, the client hashes each incoming key, and the hash determines the node on which the data is stored. The hash function distributes the keys evenly among all the nodes.

The hashing scheme is designed to cope with nodes failing or joining: with consistent hashing, only a small fraction of the keys map to a different node when the total number of nodes changes, so there is no ambiguity while retrieving data. The application does have to be aware of all the nodes in the Memcached cluster, so you need to populate the client with a list of all the IPs and ports on which the Memcached servers are active.
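Here is a minimal sketch of the idea in Python, assuming a bare hash ring; real clients place many points per server on the ring (so-called virtual nodes) for a smoother distribution:

import bisect
import hashlib

def point(value):
    # Map any string to a position on the ring (a large integer).
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers):
        # One point per server, sorted so we can binary-search.
        self.points = sorted((point(s), s) for s in servers)

    def node_for(self, key):
        # Walk clockwise from the key's position to the next server point.
        idx = bisect.bisect(self.points, (point(key),))
        if idx == len(self.points):     # wrap around the ring
            idx = 0
        return self.points[idx][1]

ring = HashRing(['10.0.0.1:11211', '10.0.0.2:11211', '10.0.0.3:11211'])
print(ring.node_for('user:42'))   # always the same node for the same key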

To know more about this algorithm, read the article “Consistent Hashing in memcache-client.”

LRU algorithm

Memcached works within a predefined memory limit, so if you ask it to use a maximum of 1 GB of RAM, it will try not to use memory beyond that. But what happens if a request comes in to store data, and there is no memory left? Memcached uses the LRU (Least Recently Used) algorithm to determine which data in memory has been least active, and discards it to make space for the new data.

This is where the main concept behind caching lies: you cannot rely on Memcached to keep a piece of data just because it has been asked to store it in memory. Whether it survives depends on the amount of traffic that the particular key has received, and this is a very efficient way of managing memory. So, while developing your application, remember that you must have a fall-back mechanism to retrieve the data when it is not found on the Memcached server.
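In code, that fall-back is the classic cache-aside pattern. Here is a sketch with the python-memcached client, where fetch_from_db is a hypothetical stand-in for your authoritative data source:

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def fetch_from_db(key):
    # Hypothetical stand-in for the real, slower data source.
    return 'value-for-' + key

def get_with_fallback(key):
    value = mc.get(key)
    if value is None:                  # miss: evicted, expired or never set
        value = fetch_from_db(key)     # fall back to the real source
        mc.set(key, value, time=300)   # repopulate for the next caller
    return value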

Memory allocation

Memcached is written in the C programming language, so it has to manage its own memory allocation. One might expect it simply to malloc free memory from the RAM, but that results in major address-space fragmentation once you start allocating large chunks of memory, and in degraded performance once the application has been running for a fair amount of time and has been through many allocate-free cycles.

To avoid such a scenario, Memcached uses a slab memory allocator, and to optimise the cache usage of your application, you need to know how this slab allocation works. Basically, memory is divided into a number of pages, and each page is assigned a particular slab class. The slab class determines the size of the chunks into which the page is divided: each page holds a finite number of equal-sized chunks.

Which page a particular item is assigned to depends on the total size of the item, key and value included: the item is given a chunk in a page of the slab class with the smallest chunks that it fits into. For example, a 1 MB page belonging to a slab class with 128 KB chunks holds eight chunks; a key-value pair that is 109 KB in size would be assigned one of those 128 KB chunks (yes, 19 KB is wasted, but that’s the trade-off).
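Here is a small sketch of how the chunk sizes and the resulting waste might look, assuming, for illustration, a 96-byte minimum chunk and Memcached’s default growth factor of 1.25; the exact class sizes depend on the version and configuration:

def slab_classes(min_chunk=96, growth=1.25, page_size=1024 * 1024):
    # Each slab class has chunks roughly 1.25x the size of the previous one.
    size = min_chunk
    while size <= page_size:
        yield size
        size = int(size * growth)

def chunk_for(item_size):
    # An item lands in the smallest chunk that can hold it.
    for chunk in slab_classes():
        if item_size <= chunk:
            return chunk
    raise ValueError('item larger than a page')

item = 109 * 1024                 # a 109 KB key-value pair
chunk = chunk_for(item)
print(chunk, chunk - item)        # chosen chunk size and wasted bytes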

Now, it is important to note that the LRU algorithm discussed earlier works on a per-slab-class basis, not globally. So, a key is likely to be evicted earlier if the slab class it belongs to is under more pressure than the others.

Installing Memcached

To install Memcached, we need the daemon and a client library. Memcached also has a dependency on libevent, which must be installed first. On most major Linux distributions, though, the whole process is as easy as sudo apt-get install memcached or yum install memcached.

Alternatively, you might want to compile from source if you want a more recent version. In that case, you can get the libevent dependency from its website, and Memcached from memcached.org. You will, of course, also need all the build dependencies.

Since the Memcached servers in a cluster do not need to know about each other, there is almost no other configuration required to get up and running. To start a Memcached instance, run the following:

memcached -d -m 1024 -u memcache -l 127.0.0.1 -p 11211

These options run Memcached as a daemon (-d), with a maximum memory consumption of 1 GB (-m 1024), as the memcache user (-u), listening on the IP address 127.0.0.1 (-l) and port 11211 (-p). Alternatively, you might want to try running it in the foreground with the -vv flag, to see what’s happening behind the scenes:

memcached -vv -m 1024 -u memcache -l 127.0.0.1 -p 11211

Installing the Memcached client and using it largely depends on the language/environment you are working in, so look up the documentation for your particular environment if you need help. For now, we will just connect with a telnet session and look at some statistics with the stats command. I’d rather not go into programming, and will keep this discussion largely language-agnostic.

$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
stats

After you enter the stats command as shown above, you will see a long list of statistics related to the cache, which include uptime, cache hits and cache misses, among others. This will resemble what is shown below:

STAT pid 1233
STAT uptime 23530
….
STAT get_hits 0
STAT get_misses 0
STAT delete_misses 0
….
END
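That said, if you do want these statistics programmatically, most clients expose them. For instance, with the python-memcached library (just one of the many clients mentioned earlier):

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

# get_stats() returns one (server, stats-dictionary) pair per server.
for server, stats in mc.get_stats():
    print(server, stats['get_hits'], stats['get_misses'])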

Important functions

A Memcached client mostly has the following functions available, whatever programming environment it is in (a short usage sketch follows the list):

  • get(key) — Retrieve the value associated with the specified key.
  • set(key, value, expiry) — Add or replace the given key’s value, along with the given expiry time.
  • add(key, value, expiry) — Add a value associated with a new key in Memcached; if the key already exists, it returns an error.
  • append(key, value) — Append the given data to the end of the value already associated with the key.
  • prepend(key, value) — Prepend the given data before the value already associated with the key.
  • delete(key) — Delete or invalidate the key.
  • replace(key, value) — Replace the value of the existing key in Memcached. Returns an error if the associated key does not already exist.
  • flush_all() — Invalidates all keys in Memcached memory.
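Here is a brief sketch of these calls using the python-memcached client; the method names map closely to the protocol commands, though other clients differ slightly:

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

mc.set('greeting', 'hello', time=60)    # add or replace; expires in 60 s
mc.add('greeting', 'hi')                # fails: the key already exists
mc.append('greeting', ' world')         # value is now 'hello world'
mc.prepend('greeting', '>> ')           # value is now '>> hello world'
print(mc.get('greeting'))
mc.replace('greeting', 'goodbye')       # fails if the key does not exist
mc.delete('greeting')                   # invalidate this key
mc.flush_all()                          # invalidate every key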

The ten commandments for good cache performance

Here are some pointers that I’d like you to keep in mind while working with Memcached.

  1. Take care of stale data and expiry times. Make sure that when a value is updated in the backend database, the new value is also updated in the cache (see the sketch after this list). Also, the data expiry time should be optimal; making it too long will unnecessarily take up more memory.
  2. Always keep in mind that Memcached is not a database. Do not think that a value, once stored, will stay there until it is deleted.
  3. Avoid storing frequently updated data in Memcached. There is no point in storing something that becomes stale frequently and has to be repeatedly retrieved from the database anyway.
  4. Memcached clients in multiple languages can work at the same time against the same server, but take care with compatibility and with features that a particular client might not support. For example, some clients might not support compression, or might compress data in a different manner.
  5. Use weights to configure multiple servers with varying amounts of memory available. Give more weight to servers with more memory, so that a larger share of the overall data is stored on them. Weights can be assigned while populating the client with the list of available servers.
  6. One important aspect of Memcached is that it has no built-in security mechanisms. It is your responsibility to set up a firewall, so that the Memcached interface is not publicly exposed.
  7. It is recommended that you warm up the cache with a separate program or script (loading the frequently accessed data up front) before publicly deploying your Web application, so that you hit optimum performance levels immediately.
  8. Opening a new connection to the Memcached server for every get or set operation is wasteful, so it is much better to predefine a fixed number of persistent connections in a connection pool.
  9. While using Memcached on Linux, never allow it to use swap, and never assign it more memory than the physical RAM actually available. Otherwise, performance becomes so bad that there is no point in having a cache at all.
  10. Always try to cache complex processed data like objects or even HTML snippets, rather than raw data coming straight from the database.
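For the first commandment above, here is what keeping the cache consistent on writes might look like with the python-memcached client; update_db is a hypothetical stand-in for the real database write:

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def update_db(user_id, data):
    # Hypothetical stand-in for the real backend write.
    pass

def save_user(user_id, data):
    update_db(user_id, data)            # write to the backend first
    # Either refresh the cached copy in place...
    mc.set('user:%d' % user_id, data, time=600)
    # ...or simply invalidate it and let the next read repopulate it:
    # mc.delete('user:%d' % user_id)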

To conclude, setting up a Memcached cluster is a very good way of improving performance if you have some spare resources available. If you are working in the cloud, it is a must-have component, since it can give you a significant performance boost and help you get rid of most of the bottlenecks in your application.
