hadoop-hdfs-user mailing list archives

From Sesha Kumar <sesha...@gmail.com>
Subject Regarding design of HDFS
Date Thu, 25 Aug 2011 08:04:43 GMT
Hi all,
I am trying to get a good understanding of how Hadoop works for my
undergraduate project. I have the following questions/doubts:

1. Why does the namenode keep the blockmap (the block-to-datanode mapping)
in main memory for all files, even those that are not being accessed?

2. Why can't the namenode move part of the blockmap from main memory to a
secondary storage device when free memory becomes scarce (due to a large
number of files)?

3. Why can't the blockmap entries for a file be constructed when a client
requests that file, and then be cached for later accesses?
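For context on what question 1 is asking about: conceptually, the blockmap is an
in-memory map from each block ID to the set of datanodes holding a replica of
that block. The toy sketch below (plain Java, not Hadoop's actual internal
BlocksMap structure; class and method names are invented for illustration)
shows the idea that every block of every file is resident in the namenode's
heap, whether or not the file is in use:

```java
import java.util.*;

// Toy illustration of a block-to-datanode mapping. Real HDFS uses a
// specialized internal structure; this HashMap version only conveys
// the idea: one entry per block, all held in memory at once.
public class ToyBlockMap {
    // block ID -> datanodes holding a replica of that block
    private final Map<Long, Set<String>> blockToNodes = new HashMap<>();

    public void addReplica(long blockId, String datanode) {
        blockToNodes.computeIfAbsent(blockId, k -> new HashSet<>())
                    .add(datanode);
    }

    public Set<String> locations(long blockId) {
        return blockToNodes.getOrDefault(blockId, Collections.emptySet());
    }

    public static void main(String[] args) {
        ToyBlockMap map = new ToyBlockMap();
        // Block 1001 is replicated on two datanodes, block 1002 on one.
        map.addReplica(1001L, "dn1");
        map.addReplica(1001L, "dn2");
        map.addReplica(1002L, "dn3");
        System.out.println(map.locations(1001L).size());
    }
}
```

Because a lookup here is a plain in-memory hash probe, clients can be given
block locations without any disk I/O on the namenode side, which is part of
why the mapping is kept memory-resident.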
