hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Hbase Exceptions
Date Tue, 03 Feb 2009 10:09:38 GMT
Try upping your xcievers to 2047 or thereabouts.  I had to do that with a
cluster of your size.
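
For reference, the xcievers limit is set on the datanode side. A minimal sketch of what that might look like in the Hadoop config (hadoop-site.xml on the 0.18/0.19-era releases this thread is about, hdfs-site.xml on later ones) -- the 2047 value is just the suggestion above, tune to your cluster:

```xml
<!-- Raise the per-datanode transceiver thread cap; HBase region servers
     keep many HDFS files open, so the 256 default is easily exhausted.
     Note the property name really is misspelled "xcievers". -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2047</value>
</property>
```

A restart of the datanodes is needed for the new limit to take effect.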

Were there any errors on the datanode side that you could find?

-ryan

On Tue, Feb 3, 2009 at 1:58 AM, Michael Dagaev <michael.dagaev@gmail.com>wrote:

> Hi, all
>
>     We run an HBase cluster of 1 master/name node and 3 region
> server/data nodes.
> We upped the number of open files per process, increased the heap
> size of the region servers and data nodes to 2G, and set
> dfs.datanode.socket.write.timeout=0 and
> dfs.datanode.max.xcievers=1023.
>
> The cluster seems to run OK, but HBase logs exceptions at INFO/DEBUG
> level.
> For instance
>
>    org.apache.hadoop.dfs.DFSClient: Could not obtain block <block name>
>    from any node:  java.io.IOException: No live nodes contain current block
>
>   org.apache.hadoop.dfs.DFSClient: Failed to connect to <host name>:50010:
>   java.io.IOException: Got error in response to OP_READ_BLOCK for
> file <file name>
>
> Does anybody know what these exceptions mean and how to fix them?
>
> Thank you for your cooperation,
> M.
>
