hbase-user mailing list archives

From: Michael Dagaev <michael.dag...@gmail.com>
Subject: Re: Hbase Exceptions
Date: Tue, 03 Feb 2009 10:53:45 GMT
Andrew,

   We are not out of disk space. Where can I find the kernel logs to
look for file system errors?
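I assume something like the following would show them on a typical
Linux box, e.g.:

    dmesg | tail -n 50               # recent kernel messages
    grep -i error /var/log/messages  # or /var/log/syslog, depending on the distribution

but please correct me if there is a better place to look.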

M.

On Tue, Feb 3, 2009 at 12:45 PM, Andrew Purtell <apurtell@apache.org> wrote:
> Hi Michael,
>
> This is not an xceivers-exceeded problem; if it were, the error
> message in the datanode log would have mentioned it explicitly.
>
> It appears to be some error underneath the datanode, on the
> filesystem of the local OS. Are you out of disk space on the
> datanode, or is there some kind of error message in the
> kernel log?
>
>   - Andy
>
>> From: Michael Dagaev
>>
>> Yes, there are a lot of errors like this one:
>>
>> ERROR org.apache.hadoop.dfs.DataNode:
>> DatanodeRegistration(<host name>:50010,
>> storageID=DS-82848092-10.249.205.203-50010-1233235946210,
>> infoPort=50075, ipcPort=50020):
>> DataXceiver: java.io.IOException: Block
>> blk_-8920990077351707601_666766 is valid, and cannot be
>> written to.
>>
>> M.
>>
>> On Tue, Feb 3, 2009 at 12:09 PM, Ryan Rawson wrote:
>> > Try upping your xcievers to 2047 or thereabouts.  I
>> > had to do that with a cluster of your size.
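>> >
>> > For reference, I believe the relevant setting is
>> > dfs.datanode.max.xcievers (note Hadoop's own misspelling of
>> > "xceivers") in the datanode's hadoop-site.xml, or
>> > hdfs-site.xml on newer Hadoop versions, e.g.:
>> >
>> >   <property>
>> >     <name>dfs.datanode.max.xcievers</name>
>> >     <value>2047</value>
>> >   </property>
>> >
>> > and the datanodes need a restart for it to take effect.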
>> >
>> > Were there any errors on the datanode side you could
>> > find?
>> >
>> > On Tue, Feb 3, 2009 at 1:58 AM, Michael Dagaev wrote:
> [...]
>> > >   org.apache.hadoop.dfs.DFSClient: Could not
>> > >   obtain block <block name> from any node:
>> > >   java.io.IOException: No live nodes contain current block
>> > >
>> > >   org.apache.hadoop.dfs.DFSClient: Failed to
>> > >   connect to <host name>:50010:
>> > >   java.io.IOException: Got error in response to
>> > >   OP_READ_BLOCK for file <file name>
>> > >
> [...]
