hadoop-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Can we declare some HDFS nodes "primary"
Date Tue, 11 Dec 2012 13:33:39 GMT
Rack awareness with a replication factor of 3 on files will help.

You could declare two racks: one carrying these 10 nodes, and the default
rack for the rest of them. The rack-aware default block placement policy
will take care of the rest, since with only two racks it always spreads
each block's replicas across both, so every block keeps at least one copy
on the 10-node rack.
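
For concreteness, here is a minimal sketch of a topology script that would
implement that split. The hostnames (stable-node-01 through stable-node-10)
are placeholders for your 10 on-demand machines. You point the NameNode at
the script via the topology.script.file.name property in core-site.xml
(net.topology.script.file.name on Hadoop 2.x); Hadoop then invokes it with
one or more hostnames/IPs as arguments and reads one rack path per argument
back on stdout:

    #!/usr/bin/env python
    # Sketch of a Hadoop topology script: maps the 10 on-demand
    # ("primary") nodes to one rack and everything else (the spot
    # instances) to the default rack.
    import sys

    # Placeholder hostnames -- substitute the real hostnames or IPs
    # of your stable on-demand nodes.
    STABLE_NODES = set("stable-node-%02d" % i for i in range(1, 11))

    # Hadoop passes hostnames/IPs as arguments and expects one rack
    # path per argument, one per line, on stdout.
    for host in sys.argv[1:]:
        if host in STABLE_NODES:
            print("/rack-stable")   # rack holding the on-demand nodes
        else:
            print("/default-rack")  # everything else: spot instances

One caveat to keep in mind: since at least one replica of every block lands
on /rack-stable, those 10 nodes need enough disk between them to hold at
least one full copy of the data set.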
On Dec 11, 2012 5:10 PM, "David Parks" <davidparks21@yahoo.com> wrote:

> Assume for a moment that you have a large cluster of 500 AWS *spot
> instance* servers running. And you want to keep the bid price low, so at
> some point it’s likely that the whole cluster will get axed until the spot
> price comes down some.
>
> In order to maintain HDFS continuity I’d want say 10 servers running as
> normal instances, and I’d want to ensure that HDFS is replicating 100% of
> data to those 10 that don’t run the risk of group elimination.
>
> Is it possible for HDFS to ensure replication to these “primary” nodes?
>
