hadoop-hdfs-user mailing list archives

From Rita <rmorgan...@gmail.com>
Subject Re: Retry question
Date Sun, 18 Mar 2012 20:44:18 GMT
In libhdfs, how can I throttle the number of retries?


On Sun, Mar 18, 2012 at 1:12 PM, Marcos Ortiz <mlortiz@uci.cu> wrote:

> HDFS is built precisely with these concerns in mind.
> If you are reading a 60 GB file and a rack goes down, the system
> will transparently present you with another copy, based on your
> replication factor.
> A block can also be unavailable due to corruption; in that case,
> it can be re-replicated to other live machines, and the error can
> be diagnosed and fixed with the fsck utility.
>
> Regards
>
>
> On 3/18/2012 9:46 AM, Rita wrote:
>
>> My replication factor is 3. If I were reading data through libhdfs
>> using C, is there a retry method? I am reading a 60 GB file; what
>> would happen if a rack goes down and the next block isn't available?
>> Will the API retry? Is there a way to configure this option?
>>
>>
>> --
>> --- Get your facts first, then you can distort them as you please.--
>>
>
> --
> Marcos Luis Ortíz Valmaseda (@marcosluis2186)
>  Data Engineer at UCI
> http://marcosluis2186.posterous.com
>
> 10th ANNIVERSARY OF THE FOUNDING OF THE UNIVERSIDAD DE LAS CIENCIAS
> INFORMATICAS...
> CONNECTED TO THE FUTURE, CONNECTED TO THE REVOLUTION
>
> http://www.uci.cu
> http://www.facebook.com/universidad.uci
> http://www.flickr.com/photos/universidad_uci
>



-- 
--- Get your facts first, then you can distort them as you please.--
