hbase-user mailing list archives

From Michael Dagaev <michael.dag...@gmail.com>
Subject Hbase Exceptions
Date Tue, 03 Feb 2009 09:58:31 GMT
Hi, all

     We ran an HBase cluster of 1 master/name node and 3 region
server/data nodes.
We raised the open-file limit per process, increased the heap size of the
region servers and data nodes to 2 GB, and set
dfs.datanode.socket.write.timeout=0.
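For reference, a sketch of where that tuning typically lives in a
Hadoop/HBase deployment of this era. The file names and the ulimit value
are illustrative assumptions, not taken from the poster's setup; only the
2 GB heap and the dfs.datanode.socket.write.timeout=0 setting come from
the message above.

```shell
# 1. Raise the per-process open-file limit before starting the daemons
#    (e.g. in /etc/security/limits.conf or the init script); the value
#    here is a hypothetical example:
ulimit -n 32768

# 2. conf/hbase-env.sh -- 2 GB heap for the region servers (value in MB):
export HBASE_HEAPSIZE=2048

# 3. conf/hadoop-env.sh -- 2 GB heap for the datanodes:
export HADOOP_HEAPSIZE=2048

# 4. conf/hadoop-site.xml -- disable the datanode socket write timeout:
#    <property>
#      <name>dfs.datanode.socket.write.timeout</name>
#      <value>0</value>
#    </property>
```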

The cluster seems to run OK, but HBase logs exceptions at INFO/DEBUG level.
For instance:

    org.apache.hadoop.dfs.DFSClient: Could not obtain block <block name>
    from any node: java.io.IOException: No live nodes contain current block

    org.apache.hadoop.dfs.DFSClient: Failed to connect to <host name>:50010:
    java.io.IOException: Got error in response to OP_READ_BLOCK for
    file <file name>

Does anybody know what these exceptions mean and how to fix them?

Thank you for your cooperation,
