hadoop-common-user mailing list archives

From Johan Oskarsson <jo...@oskarsson.nu>
Subject Re: Decommission of datanodes
Date Mon, 30 Apr 2007 09:53:21 GMT
I was under the impression that the only way to decommission nodes in 
version 0.12.3 is to specify the nodes in a file and then point
dfs.hosts.exclude to that file.
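
For reference, a rough sketch of what that might look like in
hadoop-site.xml (the property name is the one discussed here; the file
path and hostnames below are only placeholders, and whether 0.12.3 picks
up changes via a namenode restart or a dfsadmin refresh I'm not certain):

  <property>
    <name>dfs.hosts.exclude</name>
    <value>/path/to/exclude-file</value>
    <description>File listing the datanodes to be decommissioned,
    one hostname per line.</description>
  </property>

The exclude file itself would just be a plain list of hostnames, e.g.:

  datanode11.example.com
  datanode12.example.com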

/Johan

Timothy Chklovski wrote:
> Which commands are you issuing to decommission the nodes?
> 
> On 4/29/07, Johan Oskarsson <johan@oskarsson.nu> wrote:
>>
>> Hi.
>>
>> I'm trying to decommission 10 of the 35 datanodes in our cluster. The
>> process has been running for a couple of days,
>> but only one node has finished. Perhaps I should have tried to
>> decommission one at a time?
>> I was afraid that would lead to unnecessary transfers, as the node being
>> decommissioned would probably have copied data to other nodes
>> that I was going to decommission later.
>>
>> Is there any way of seeing how far the process has come?
>>
>> The logs contain a lot of these:
>>
>> 2007-04-29 16:56:56,411 WARN org.apache.hadoop.fs.FSNamesystem: Not able
>> to place enough replicas, still in need of 1
>>
>> Is that related to the decommission process?
>>
>> /Johan
>>
> 

