hadoop-common-user mailing list archives

From Lohit <lohit.vijayar...@yahoo.com>
Subject Re: NotYetReplicated exceptions when pushing large files into HDFS
Date Tue, 23 Sep 2008 14:47:57 GMT
As of now, no. It's fixed at 3 retries. The idea is, if your dfs -put fails even after three
retries, then something is wrong and needs to be looked into.
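If more attempts are really needed, one workaround is to retry the whole command at the shell
level; a minimal sketch, with the paths, attempt count, and sleep interval purely illustrative:

  #!/bin/sh
  # Retry the entire put from the shell, since the client-side retry count is fixed.
  SRC=/local/data/bigfile.log       # illustrative local path
  DST=/user/ryan/bigfile.log        # illustrative HDFS path
  for attempt in 1 2 3 4 5; do
    bin/hadoop dfs -put "$SRC" "$DST" && exit 0
    echo "put attempt $attempt failed, cleaning up and retrying" >&2
    bin/hadoop dfs -rm "$DST" >/dev/null 2>&1   # drop any partial file before the next try
    sleep 30
  done
  exit 1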

Lohit

On Sep 23, 2008, at 4:24 AM, "Ryan LeCompte" <lecompte@gmail.com> wrote:

Thanks. Is there a way to increase the number of retries?

Ryan

On Mon, Sep 22, 2008 at 8:21 PM, lohit <lohit_bv@yahoo.com> wrote:
Yes, these are warnings unless they fail three times, in which case your dfs -put command
would fail with a stack trace.
Thanks,
Lohit



----- Original Message ----
From: Ryan LeCompte <lecompte@gmail.com>
To: "core-user@hadoop.apache.org" <core-user@hadoop.apache.org>
Sent: Monday, September 22, 2008 5:18:01 PM
Subject: Re: NotYetReplicated exceptions when pushing large files into HDFS

I've noticed that although I get a few of these exceptions, the file
is ultimately uploaded to the HDFS cluster. Does this mean that my
file ended up getting there in 1 piece? The exceptions are just logged
at the WARN level and indicate retry attempts.
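For what it's worth, one way to double-check would be to compare sizes and run fsck on the
uploaded file; a sketch with purely illustrative paths:

  # Compare the local size with what HDFS reports, then check the blocks.
  ls -l /local/data/bigfile.log
  bin/hadoop dfs -ls /user/ryan/bigfile.log
  bin/hadoop fsck /user/ryan/bigfile.log -files -blocks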

Thanks,
Ryan


On Mon, Sep 22, 2008 at 11:08 AM, Ryan LeCompte <lecompte@gmail.com> wrote:
Hello all,

I'd love to be able to upload very large files (e.g., 8 or 10 GB) into HDFS, but it seems
like my only option is to chop up the file into smaller pieces. Otherwise, after a while I
get NotYetReplicated exceptions while the transfer is in progress. I'm using 0.18.1. Is
there any way I can do this? Perhaps use something else besides
bin/hadoop dfs -put input output?
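For the chop-it-up route, a minimal sketch assuming GNU split is available, with the file
names and the 1 GB piece size only as examples:

  # Split the large file into 1 GB pieces locally, then push each piece.
  split -b 1024m bigfile.log bigfile.part.
  for f in bigfile.part.*; do
    bin/hadoop dfs -put "$f" /user/ryan/input/"$f"
  done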

Thanks,
Ryan




