The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity
hardware. It has many similarities with existing distributed file systems. However, the differences
from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed
to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is
suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable
streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch
web search engine project. HDFS is now an Apache Hadoop subproject.
Hardware Failure
Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or
thousands of server machines, each storing part of the file system’s data. The fact that there are a huge
number of components and that each component has a non-trivial probability of failure means that some
component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery
from them is a core architectural goal of HDFS.
Streaming Data Access
Applications that run on HDFS need streaming access to their data sets. They are not general-purpose
applications that typically run on general-purpose file systems. HDFS is designed more for batch
processing than for interactive use by users. The emphasis is on high throughput of data access
rather than low latency of data access. POSIX imposes many hard requirements that are not needed for
applications targeted at HDFS. POSIX semantics in a few key areas have been traded to increase data
throughput rates.
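As a rough illustration of this access pattern, the sketch below uses Hadoop's FileSystem API to scan a file sequentially from start to end. It assumes a reachable HDFS cluster configured through the usual core-site.xml/hdfs-site.xml, and the path shown is hypothetical.

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class StreamingReadSketch {
        public static void main(String[] args) throws Exception {
            // Assumes cluster settings are picked up from the classpath configuration.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // "/data/logs/part-00000" is a hypothetical path used only for illustration.
            try (InputStream in = fs.open(new Path("/data/logs/part-00000"))) {
                // Stream the file sequentially to stdout. This start-to-end scan is the
                // throughput-oriented pattern HDFS targets, not low-latency random reads.
                IOUtils.copyBytes(in, System.out, 4096, false);
            }
        }
    }

A sequential copy like this lets the DataNodes serve data at full bandwidth, which is what the batch-processing workloads described above rely on.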
Large Data Sets
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes
in size.
Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale
to hundreds of nodes in a single cluster. It should support tens of millions of files in a single
instance.
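One way this tuning surfaces to applications is through per-file block size and replication. The following is a minimal sketch, assuming a configured cluster; the output path, block size, and payload are illustrative values, not recommendations.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LargeFileCreateSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical path and sizes, for illustration only.
            Path out = new Path("/data/output/large-file.bin");
            long blockSize = 256L * 1024 * 1024; // request 256 MB blocks for a very large file
            short replication = 3;               // replicate each block three times
            int bufferSize = conf.getInt("io.file.buffer.size", 4096);

            // Create the file with an explicit block size and replication factor.
            try (FSDataOutputStream os = fs.create(out, true, bufferSize, replication, blockSize)) {
                os.write(new byte[bufferSize]); // placeholder payload
            }
        }
    }

Large blocks keep the number of blocks per file (and thus NameNode metadata) manageable even when individual files run to gigabytes or terabytes.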
Simple Coherency Model
HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model. There is a plan to support appending-writes to files in the future.
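A minimal sketch of the write-once-read-many pattern, assuming a configured cluster and a hypothetical path: the file is created, written, and closed exactly once, and is then opened for reading as many times as needed without being modified in place.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteOnceReadManySketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path p = new Path("/data/events/2024-01-01.txt"); // hypothetical path

            // Write once: create the file, write its contents, and close it.
            try (FSDataOutputStream out = fs.create(p, true)) {
                out.write("event-record\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read many: the closed file is read repeatedly, never changed in place.
            for (int i = 0; i < 2; i++) {
                try (FSDataInputStream in = fs.open(p)) {
                    byte[] buf = new byte[(int) fs.getFileStatus(p).getLen()];
                    in.readFully(buf);
                    System.out.print(new String(buf, StandardCharsets.UTF_8));
                }
            }
        }
    }

Because closed files are never rewritten, readers such as MapReduce tasks or crawler indexers can consume them concurrently without coordinating on coherency.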