hadoop-hdfs-user mailing list archives

From Steve Cohen <mail4st...@gmail.com>
Subject Re: replicating existing blocks?
Date Thu, 19 May 2011 00:07:21 GMT
Thanks for the answer. Earlier, I asked why I get occasional "not replicated yet" errors.
But I had dfs.replication set to one, so what replication could it have been doing? Did
those error messages actually mean that the file couldn't be created in the cluster?

Thanks,
Steve Cohen
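
One quick way to see the replication factor HDFS has actually recorded for a
file is sketched below; the path /user/steve/file.txt is illustrative:

    hadoop fs -stat %r /user/steve/file.txt
    hadoop fsck /user/steve/file.txt -files -blocks

The first prints the file's stored replication factor; the second lists each
block and how many replicas of it currently exist.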



On May 18, 2011, at 6:39 PM, Todd Lipcon <todd@cloudera.com> wrote:

> Tried to send this, but apparently SpamAssassin finds emails about
> "replicas" to be spammy. This time with less rich text :)
> 
> On Wed, May 18, 2011 at 3:35 PM, Todd Lipcon <todd@cloudera.com> wrote:
>> 
>> Hi Steve,
>> Running setrep will indeed change those files. Changing "dfs.replication" just changes
>> the default replication value for files created in the future. Replication level is a
>> file-specific property.
>> Thanks
>> -Todd
>> 
>> On Wed, May 18, 2011 at 3:32 PM, Steve Cohen <mail4steve@gmail.com> wrote:
>>> 
>>> Say I add a datanode to a pseudo cluster and I want to change the
>>> replication factor to 2. I see that I can either run hadoop fs -setrep
>>> or change the hdfs-site.xml value for dfs.replication. But does either
>>> of these cause the existing blocks to replicate?
>>> 
>>> Thanks,
>>> Steve Cohen
>> 
>> 
>> 
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
> 
> 
> 
> --
> Todd Lipcon
> Software Engineer, Cloudera
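
For reference, a minimal sketch of the two approaches discussed above; the
target factor 2 and the path / are illustrative, and exact flags can vary by
Hadoop version:

    # Change the replication factor of files that already exist
    # (-R recurses into directories; many versions also accept -w
    # to wait until each file reaches the target factor).
    hadoop fs -setrep -R 2 /

    <!-- hdfs-site.xml: the default factor for files created from now
         on; files that already exist keep their current factor. -->
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>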
