hadoop-common-dev mailing list archives

From Jim Kellerman <...@powerset.com>
Subject Is there a way for a client program to determine if the DFS is alive?
Date Tue, 04 Sep 2007 22:11:04 GMT
Is there a way for a client application to determine if the DFS is
alive?
 
We've seen a couple of cases (both in the build environment and on our
cluster) in which the entire HDFS cluster was shut down out from under
HBase, or in which the namenode failed to respond. What we'd like to do
in that case is shut HBase down as gracefully as possible, but we don't
really have a good way to say:
 
boolean isDFSalive(FileSystem fs) {
  if (fs instanceof DistributedFileSystem) {
    return ((DistributedFileSystem) fs).isAlive();
  } else if (fs instanceof LocalFileSystem) {
    return true;
  }
  return false;
}

The closest I could think of would be to run a daemon thread that would
periodically do something like:
 
boolean isDFSAlive(FileSystem fs) {
  if (fs instanceof DistributedFileSystem) {
    try {
      // An empty report means no datanodes are up; an IOException
      // means we could not reach the namenode at all.
      DatanodeInfo[] datanodes =
          ((DistributedFileSystem) fs).getDataNodeStats();
      return datanodes.length > 0;
    } catch (IOException e) {
      LOG.error("unable to contact namenode:", e);
      return false;
    }
  } else if ....
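The surrounding daemon thread might look something like the sketch
below. This is only an illustration, not working hbase code: the class
and method names are hypothetical, and the BooleanSupplier stands in for
the getDataNodeStats probe above so the sketch compiles on its own.

```java
import java.util.function.BooleanSupplier;

/** Hypothetical sketch of a liveness-monitor daemon thread. */
public class DfsMonitor {
  /**
   * Polls isDFSAlive every intervalMillis; runs shutdownHook (e.g. a
   * graceful shutdown) the first time the check fails.
   */
  public static Thread startMonitor(BooleanSupplier isDFSAlive,
                                    Runnable shutdownHook,
                                    long intervalMillis) {
    Thread t = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        if (!isDFSAlive.getAsBoolean()) {
          shutdownHook.run();
          return;
        }
        try {
          Thread.sleep(intervalMillis);
        } catch (InterruptedException e) {
          return;  // asked to stop; exit quietly
        }
      }
    });
    t.setDaemon(true);  // don't keep the JVM alive on our account
    t.start();
    return t;
  }

  public static void main(String[] args) throws Exception {
    final boolean[] shutDown = {false};
    final int[] polls = {0};
    // Simulate a DFS that reports alive once, then dead.
    Thread m = startMonitor(() -> ++polls[0] < 2,
                            () -> shutDown[0] = true,
                            10);
    m.join(1000);
    System.out.println("shutdown triggered: " + shutDown[0]);
  }
}
```

The main() here just simulates a failing check to show the wiring; in
practice the supplier would wrap the isDFSAlive(fs) call above.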

Does this approach make sense?
How expensive is getDataNodeStats?
Would it be possible to add an isAlive() method to the ClientProtocol
without breaking it? If so, a) does it make sense / would it be useful?
b) would it be any less expensive?
Looking at the code in FSNameSystem.datanodeReport(), it doesn't appear
to be expensive aside from serializing all of the datanode data instead
of just a boolean.
 
What do you think?

Thanks!
 
-- 
Jim Kellerman, Senior Engineer; Powerset
jim@powerset.com
