BLOG UPDATE: The final analysis of the Linux file system fsck test results is in and the article has been published - see the link below.
Over the past several months I’ve been part of an interesting article series published at Enterprise Storage Forum that discusses and tests some of the limitations of the Linux file system. The idea was conceived over a dinner discussion I had with Instrumental Inc.’s Henry Newman last summer. We were concerned that Linux file systems had rather low supported capacities – around 100TB in the case of Red Hat’s support for XFS, for example. After some discussion of possible reasons for this, we agreed on two problem areas: (1) metadata scalability and (2) sustained large-block performance.
We decided to explore the first problem using fsck (file system check), because sustained large-block performance isn’t important without scalable metadata.
So a plan was devised to develop a four-part series on the topic, all four of which have now been published:
1. The State of File Systems Technology, Problem Statement: lays out the problem case for today’s file systems in greater detail.
2. Test Plan for Linux File System Fsck Testing: outlines our plan to test fairly large file systems, such as might be run on large systems, to understand the state of file system check (fsck) performance.
3. Linux File System Fsck Testing – The Results Are In: reviews some of the raw data results and outlines some planned test changes.
4. Linux File System – Analyzing the Fsck Test Results: the final analysis of the results.
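The core idea behind the test plan above is that fsck time is dominated by metadata (inode) count rather than raw capacity. As a toy illustration of that idea – not the actual test harness used in the series – the sketch below populates a directory tree with many small files and times a full metadata walk, a rough stand-in for fsck’s inode scan phase (the function names and counts here are illustrative assumptions):

```python
import os
import tempfile
import time

def populate(root, dirs=50, files_per_dir=200):
    """Create a directory tree of many tiny files (a toy metadata load)."""
    for d in range(dirs):
        path = os.path.join(root, f"dir{d:04d}")
        os.makedirs(path, exist_ok=True)
        for f in range(files_per_dir):
            with open(os.path.join(path, f"f{f:05d}"), "w") as fh:
                fh.write("x")

def walk_metadata(root):
    """Stat every entry in the tree - roughly the work fsck's scan must do."""
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.lstat(os.path.join(dirpath, name))
            count += 1
    return count

with tempfile.TemporaryDirectory() as root:
    populate(root)
    start = time.perf_counter()
    inodes = walk_metadata(root)
    elapsed = time.perf_counter() - start
    print(f"touched {inodes} inodes in {elapsed:.2f}s")
```

Scaling the `dirs` and `files_per_dir` parameters shows why a file system with hundreds of millions of inodes takes so long to check: the scan cost grows with the number of metadata objects, not with the bytes of file data.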
DataDirect Networks contributed the hardware that allowed us to perform this testing. The articles have been a big hit, stirring discussion within the Linux community; the third article, featuring the testing results, received more than 100,000 hits in its first day.
There have also been some great reader comments throughout the series. Please enjoy, and let us know what you think by leaving a comment.