Recently, Hadoop has attracted much attention from engineers and researchers as an emerging and effective framework for processing Big Data. HDFS (Hadoop Distributed File System) can manage a huge amount of data with high performance and reliability using only commodity hardware. However, HDFS requires a single master node, called the NameNode, to manage the entire namespace (i.e., all the i-nodes) of the file system. This causes the SPOF (Single Point Of Failure) problem, because the file system becomes inaccessible when the NameNode fails. It also creates a performance bottleneck, since every access request to the file system must contact the NameNode. Hadoop 2.0 resolves the SPOF problem by introducing manual failover between two NameNodes, Active and Standby. However, the performance bottleneck remains, since all access requests must still contact the Active NameNode during normal operation. It may also forfeit the advantage of using commodity hardware, since the two NameNodes must share a highly reliable, sophisticated storage device. In this paper, we propose a new HDFS architecture that resolves all the problems mentioned above.
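The Hadoop 2.0 HA arrangement described above can be illustrated by a configuration sketch. This is a minimal, hypothetical `hdfs-site.xml` fragment (the nameservice name "mycluster", host names, and the NFS mount path are illustrative, not from the paper); it shows both the shared edits directory that ties the two NameNodes to one reliable storage device and the client failover proxy through which all requests reach the current Active NameNode.

```xml
<!-- Sketch of a Hadoop 2.0 HA pair with manual failover. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <!-- Two NameNodes: one becomes Active, the other Standby. -->
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>host1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>host2.example.com:8020</value>
  </property>
  <property>
    <!-- Shared edit log: both NameNodes must see the same highly
         reliable storage (here an NFS mount), which is the
         dependency on sophisticated hardware noted above. -->
    <name>dfs.namenode.shared.edits.dir</name>
    <value>file:///mnt/shared/ha-edits</value>
  </property>
  <property>
    <!-- Clients locate the current Active via this proxy provider,
         so all requests still funnel through a single NameNode. -->
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

Because `ConfiguredFailoverProxyProvider` always routes client RPCs to whichever NameNode is currently Active, failover removes the SPOF but not the single-node bottleneck.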