hadoop-common-user mailing list archives

From Henry Junyoung Kim <henry.jy...@gmail.com>
Subject Re: are we able to decommission multi nodes at one time?
Date Tue, 02 Apr 2013 09:07:43 GMT
One more question:

Currently, our cluster is in the middle of decommissioning.

Without any safe-stop steps, could I forcibly do the downtime work now?
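
For what it's worth, an in-flight decommission can normally be cancelled by recommissioning the nodes rather than by force. A minimal sketch, assuming the standard dfs.hosts.exclude flow (the excludes path and hostnames below are placeholders):

    # Remove the hosts you want back from the excludes file referenced by
    # dfs.hosts.exclude, then have the NameNode re-read it; the DataNodes
    # return to normal service and their pending drain work is dropped.
    vi /etc/hadoop/conf/excludes          # delete dn01, dn02, ... from the list
    hadoop dfsadmin -refreshNodes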

On Apr 2, 2013, at 5:37 PM, Harsh J <harsh@cloudera.com> wrote:

> Yes, you can do the downtime work in steps of 2 DNs at a time,
> especially since you mentioned the total work would be only ~30mins at
> most.
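
A minimal sketch of that rolling procedure, assuming hadoop-daemon.sh is available on each node and dn01/dn02 are placeholder hostnames:

    # Round 1: stop two DataNodes, do the maintenance, bring them back.
    for h in dn01 dn02; do
      ssh "$h" hadoop-daemon.sh stop datanode
    done
    # ... perform the ~30 min downtime work on dn01 and dn02 ...
    for h in dn01 dn02; do
      ssh "$h" hadoop-daemon.sh start datanode
    done
    # Confirm both report in as live again before starting the next pair.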
> 
> On Tue, Apr 2, 2013 at 1:46 PM, Henry Junyoung Kim
> <henry.jykim@gmail.com> wrote:
>> The remaining (alive) nodes have enough free space to store the data.
>> 
>> About this part that you mentioned:
>>> its easier to do so in a rolling manner without need of a
>>> decommission.
>> 
>> To check my understanding: just shut down 2 of them, then 2 more, then 2 more, without decommissioning.
>> 
>> is this correct?
>> 
>> 
>> On Apr 2, 2013, at 4:54 PM, Harsh J <harsh@cloudera.com> wrote:
>> 
>>> Note though that it's only possible to decommission 7 nodes at the same
>>> time and expect it to finish if (and only if) the remaining 8 nodes have
>>> adequate free space for the excess replicas.
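
For context, a decommission is driven by the excludes file; a minimal sketch, assuming dfs.hosts.exclude already points at /etc/hadoop/conf/excludes (a placeholder path) and dn09..dn15 are placeholder names for the hosts to retire:

    # 1. Add all 7 hosts to the excludes file in one go.
    printf '%s\n' dn09 dn10 dn11 dn12 dn13 dn14 dn15 >> /etc/hadoop/conf/excludes
    # 2. Tell the NameNode to re-read the file and start draining the nodes.
    hadoop dfsadmin -refreshNodes
    # 3. Sanity-check that the 8 survivors have room for the excess replicas.
    hadoop dfsadmin -report | grep -E 'DFS Remaining|Decommission Status'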
>>> 
>>> If you're just going to take them down for a short while (a few mins
>>> each), it's easier to do so in a rolling manner without needing a
>>> decommission. You can take up to two down at a time with a replication
>>> factor of 3 or more, and put them back in later without too much data
>>> movement impact.
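
Between rounds it is worth confirming nothing went missing; a quick check with fsck (a standard HDFS tool, with no assumptions beyond the path):

    # With replication 3 and only 2 nodes down, every block should still have
    # at least one live replica; fsck summarizes any corrupt/missing blocks.
    hadoop fsck /
    # Look for "Corrupt blocks: 0" and the final "... is HEALTHY" line before
    # taking the next pair down.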
>>> 
>>> On Tue, Apr 2, 2013 at 1:06 PM, Yanbo Liang <yanbohappy@gmail.com> wrote:
>>>> It's reasonable to decommission 7 nodes at the same time,
>>>> but it may still take a long time to finish, because all the
>>>> replicas on those 7 nodes need to be copied to the remaining 8
>>>> nodes. The amount of data to transfer is the same as what those
>>>> 7 nodes currently hold.
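
A back-of-envelope on the volume involved, using the numbers from this thread (52 TB used across 15 nodes):

    52 TB / 15 nodes              ~ 3.5 TB held per node
    7 nodes drained x 3.5 TB      ~ 24 TB of replicas to re-create
    24 TB / 8 remaining nodes     ~ 3 TB of extra data per surviving node

So each of the 8 remaining nodes must receive and store roughly 3 TB more, which is why the free-space caveat above matters.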
>>>> 
>>>> 
>>>> 2013/4/2 Henry Junyoung Kim <henry.jykim@gmail.com>
>>>>> 
>>>>> :)
>>>>> 
>>>>> Currently, I have 15 datanodes.
>>>>> For some tests, I am trying to decommission down to 8 nodes.
>>>>> 
>>>>> Right now the total DFS used size is 52 TB, including all replicated
>>>>> blocks. Going from 15 to 8 node by node, the total time spent is
>>>>> almost 4 days. ;(
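
Progress of a long decommission like this can be watched from the NameNode; one quick way, assuming standard tooling:

    # Each draining node reports "Decommission in progress" until all of its
    # blocks are re-replicated, then flips to "Decommissioned".
    hadoop dfsadmin -report | grep -c 'Decommission in progress'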
>>>>> 
>>>>> Someone mentioned that I don't need to decommission node by node.
>>>>> In that case, are there any problems if I decommission 7 nodes at the
>>>>> same time?
>>>>> 
>>>>> 
>>>>> On Apr 2, 2013, at 12:14 PM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>>>>> 
>>>>> I can translate it to native English: how many nodes do you want to
>>>>> decommission?
>>>>> 
>>>>> 
>>>>> On Tue, Apr 2, 2013 at 11:01 AM, Yanbo Liang <yanbohappy@gmail.com> wrote:
>>>>>> 
>>>>>> You want to decommission how many nodes?
>>>>>> 
>>>>>> 
>>>>>> 2013/4/2 Henry JunYoung KIM <henry.jykim@gmail.com>
>>>>>>> 
>>>>>>> 15 datanodes, and the replication factor is 3.
>>>>>>> 
>>>>>>> On Apr 1, 2013, at 3:23 PM, varun kumar <varun.uid@gmail.com> wrote:
>>>>>>> 
>>>>>>>> How many nodes do you have, and what is the replication factor?
>>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Harsh J
>> 
> 
> 
> 
> -- 
> Harsh J

