
Facebook Gooses Performance on Open Source Flashcache 

Facebook likes to share some of the techniques it has developed to speed up the performance of its Web applications, and one of its key homegrown tools is called Flashcache. A new version of the open source Flashcache software has just been released, and it offers a significant performance boost over the prior versions of the software.

With over a billion users, Facebook has no economical way to build its site without caching the content stored in the relational databases, NoSQL data stores, and object stores that make up the social network. The lessons that Facebook has learned about using caches based on memory, flash, and disk are applicable to any data center running at extreme scale that is pumping out lots of content to large numbers of users.

Facebook has a bunch of different memory-based caching programs running out in front of its databases – Memcached is the prominent one, but there is another cache called Tao that is used specifically for the social graph functions recently added to Facebook. Flashcache sits inside the database servers, which are behind the memory-based caches that in turn sit behind the Web servers that compose each Facebook page from dozens of different elements. Technically, Flashcache is a write-back block cache implemented as a Linux kernel module, and it is built using Device Mapper, which is part of the Linux storage stack. The software-based RAID disk driver and the Logical Volume Manager (LVM) are built on Device Mapper and run as Linux kernel modules, too.
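For those who have not bumped into the term, a write-back cache acknowledges writes as soon as they land in the fast tier and flushes the dirty blocks down to the slow tier later, in the background. The Python sketch below is purely illustrative – it says nothing about how the Flashcache kernel module is actually implemented – but it captures the basic write-back mechanics of dirty tracking and deferred flushes.

# Illustrative write-back cache: the fast tier absorbs writes, and dirty blocks
# are flushed to the slow tier later. Conceptual sketch only; this is not a
# reflection of the Flashcache kernel module's internals.

class WriteBackCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # slow tier (a dict standing in for disk)
        self.cache = {}                     # fast tier: block number -> data
        self.dirty = set()                  # blocks written to cache but not yet on disk

    def write(self, block, data):
        # A write hits the fast tier and returns immediately; disk is updated later.
        self.cache[block] = data
        self.dirty.add(block)

    def read(self, block):
        # Serve from the fast tier on a hit; fall back to the slow tier on a miss.
        if block in self.cache:
            return self.cache[block]
        data = self.backing_store.get(block)
        if data is not None:
            self.cache[block] = data        # populate the cache for future reads
        return data

    def flush(self):
        # Background writeback: push dirty blocks down to the slow tier.
        for block in list(self.dirty):
            self.backing_store[block] = self.cache[block]
            self.dirty.discard(block)


disk = {}                      # stand-in for the spinning disk
cache = WriteBackCache(disk)
cache.write(7, b"hello")       # acknowledged without touching the disk
print(cache.read(7))           # served from the flash-speed tier
cache.flush()                  # dirty block is now persisted to the slow tier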

Domas Mituzas, whose title is small data engineer at Facebook, explains in a blog post that Flashcache was born out of frustration with the performance and pricing gaps between flash-based drives and disk drives with SATA or SAS interfaces. Every data center running at scale has exactly the same issue.

Three years ago, when Facebook created Flashcache, flash-based drives were too expensive even though they offered rapid access to data. SATA disks were relatively cheap in terms of cost per gigabyte, but they were slow. SAS disks were peppier than SATA drives, but they had lower capacities and higher prices. Flashcache is used across a mix of flash and disk devices to try to optimize performance and cost at the same time.
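To see why that mix is attractive, consider some back-of-the-envelope numbers. The figures in the short Python calculation below are hypothetical – they are not Facebook's – but they show how a modest flash tier with a decent hit rate pulls average read latency toward flash speeds while the blended cost per gigabyte stays close to disk.

# Hypothetical back-of-the-envelope numbers; none of these figures come from Facebook.
flash_gb, disk_gb = 400, 4000                        # assumed capacities: small flash tier, big disk tier
flash_cost_per_gb, disk_cost_per_gb = 1.00, 0.05     # assumed $/GB for each tier
flash_latency_ms, disk_latency_ms = 0.1, 8.0         # assumed random-read latencies
hit_rate = 0.80                                      # the hit rate Flashcache 3.0 reportedly reaches

blended_cost = (flash_gb * flash_cost_per_gb + disk_gb * disk_cost_per_gb) / (flash_gb + disk_gb)
avg_latency = hit_rate * flash_latency_ms + (1 - hit_rate) * disk_latency_ms

print(f"blended cost: ${blended_cost:.3f}/GB vs ${flash_cost_per_gb:.2f}/GB for all-flash")
print(f"average read latency: {avg_latency:.2f} ms vs {disk_latency_ms:.1f} ms for all-disk")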

With the new Flashcache 3.0 release, the company is seeing its hit rate on flash-based arrays move from 60 percent up to 80 percent while at the same time cutting disk I/O operations in half. The arithmetic follows directly: raising the hit rate from 60 to 80 percent cuts the miss rate from 40 percent of reads to 20 percent, halving the traffic that falls through to disk.

"Flashcache has become a building block in the Facebook stack," says Mituzas." We're already running thousands of servers on the new branch, with much-improved performance over flashcache-1.x. Our busiest systems got 40 percent read I/O reduction and 75 percent write I/O reduction, allowing us to perform more efficiently for more than a billion users with a flip of a kernel module."

While flash storage prices have been coming down, Mituzas says in the post that flash is still too expensive, and having to use a mix of flash and disk in its storage systems adds more complexity than Facebook would like. (We take that as a thumbs down.) He adds that future releases of Flashcache will be tuned to run on systems with multiple terabytes of flash and tens of terabytes of disk capacity. That gives you an idea of the servers that Facebook will be deploying in its clusters.

The source code for Flashcache 3.0 is available on GitHub, and Facebook is eager for others to give it a whirl.
