hadoop-hdfs-dev mailing list archives

From Lipeng Wan <lipengwa...@gmail.com>
Subject Re: Does hdfs support the configuration that different blocks can have different number of replicas?
Date Wed, 04 Mar 2015 20:18:01 GMT
Hi Andrew,

By using the -setrep command, can we change the replication factor of
existing files on the fly? If so, how much data movement overhead
does that incur?
Thanks!

Lipeng
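
For anyone reading this thread later: -setrep does apply to existing
files, and the -w flag makes the command block until re-replication
finishes. A minimal sketch (the paths below are hypothetical):

```shell
# Raise the replication factor of an existing file to 5 and wait
# until the new copies have been created on additional DataNodes.
hdfs dfs -setrep -w 5 /user/lipeng/data.txt

# Apply a lower factor recursively to a directory; excess replicas
# are scheduled for deletion by the NameNode, so this is cheap.
hdfs dfs -setrep -R 2 /user/lipeng/dataset
```

Raising the factor does move data (each block is copied to more
DataNodes), while lowering it only schedules deletions.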

On Tue, Mar 3, 2015 at 2:57 PM, Andrew Wang <andrew.wang@cloudera.com> wrote:
> Yup, definitely. Check out the -setrep command:
>
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#setrep
>
> HTH,
> Andrew
>
> On Tue, Mar 3, 2015 at 11:49 AM, Lipeng Wan <lipengwan86@gmail.com> wrote:
>
>> Hi Andrew,
>>
>> Thanks for your reply!
>> Then is it possible for us to specify different replication factors
>> for different files?
>>
>> Lipeng
>>
>> On Tue, Mar 3, 2015 at 2:38 PM, Andrew Wang <andrew.wang@cloudera.com>
>> wrote:
>> > Hi Lipeng,
>> >
>> > Right now that is unsupported, replication is set on a per-file basis,
>> not
>> > per-block.
>> >
>> > Andrew
>> >
>> > On Tue, Mar 3, 2015 at 11:23 AM, Lipeng Wan <lipengwan86@gmail.com>
>> wrote:
>> >
>> >> Hi devs,
>> >>
>> >> By default, hdfs creates the same number of replicas for each block. Is
>> >> possible for us to create more replicas for some of the blocks?
>> >> Thanks!
>> >>
>> >> L. W.
>> >>
>>
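
To close the loop on the per-file question: the replication factor can
also be chosen at write time by overriding dfs.replication on the
client, rather than changing it afterwards with -setrep. A sketch with
illustrative paths:

```shell
# Write one file with replication factor 2 instead of the cluster
# default; only this file is affected, not the rest of the cluster.
hdfs dfs -D dfs.replication=2 -put local.txt /user/lipeng/local.txt
```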
