hadoop-common-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: HDFS losing blocks or connection error
Date Fri, 23 Jan 2009 17:34:14 GMT
Richard,

This happens when the datanodes are too slow and eventually all replicas for
a single block are tagged as "bad".  What kind of instances are you using?
How many of them?
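
To confirm how the NameNode currently views those replicas, the standard `fsck` and `dfsadmin` tools can be run from any node that has the cluster configuration on its classpath. This is a sketch, not part of the original reply; `/stats.txt` is the path taken from the error log above.

```shell
# Report per-block health (replica count, locations) for the affected file.
hadoop fsck /stats.txt -files -blocks -locations

# Show how many datanodes the NameNode considers live vs. dead.
hadoop dfsadmin -report
```

If `fsck` reports the file's blocks as corrupt or missing and `dfsadmin -report` shows datanodes flapping between live and dead, that points to the slow-datanode scenario described above rather than actual disk loss.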

J-D

On Fri, Jan 23, 2009 at 12:13 PM, Zak, Richard [USA] <zak_richard@bah.com> wrote:

>  Might there be a reason for why this seems to routinely happen to me when
> using Hadoop 0.19.0 on Amazon EC2?
>
> 09/01/23 11:45:52 INFO hdfs.DFSClient: Could not obtain block
> blk_-1757733438820764312_6736 from any node:  java.io.IOException: No live
> nodes contain current block
> 09/01/23 11:45:55 INFO hdfs.DFSClient: Could not obtain block
> blk_-1757733438820764312_6736 from any node:  java.io.IOException: No live
> nodes contain current block
> 09/01/23 11:45:58 INFO hdfs.DFSClient: Could not obtain block
> blk_-1757733438820764312_6736 from any node:  java.io.IOException: No live
> nodes contain current block
> 09/01/23 11:46:01 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could
> not obtain block: blk_-1757733438820764312_6736 file=/stats.txt
>
> It seems HDFS isn't as robust or reliable as the website says, and/or I have
> a configuration issue.
>
>
>  Richard J. Zak
>
