hadoop-common-user mailing list archives

From Serge Blazhiyevskyy <Serge.Blazhiyevs...@nice.com>
Subject Re: decommissioning datanodes
Date Fri, 08 Jun 2012 18:56:29 GMT
Your nodes need to be in both the include file and the exclude file at the same time.

Do you use both files?
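Something like this in hdfs-site.xml, as a sketch (the include file path is an assumption; adjust it for your conf directory):

```xml
<!-- dfs.hosts lists every datanode allowed to connect; dfs.hosts.exclude
     lists the ones to retire. A node present in BOTH files is decommissioned
     gracefully (blocks re-replicated first); a node only in the exclude
     file, or in neither file, is simply refused and shows up as dead. -->
<property>
  <name>dfs.hosts</name>
  <value>/opt/hadoop/hadoop-1.0.0/conf/include</value> <!-- assumed path -->
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>
```

After editing both files, 'hadoop dfsadmin -refreshNodes' makes the namenode re-read them.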

On 6/8/12 11:46 AM, "Chris Grier" <grier@imchris.org> wrote:

>I'm trying to figure out how to decommission data nodes. Here's what
>I do:
>In hdfs-site.xml I have:
>    <property>
>      <name>dfs.hosts.exclude</name>
>      <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
>    </property>
>Then I add the two hosts (host1 and host2) to the exclude file.
>Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
>nodes now appear in both the 'Live Nodes' and the 'Dead Nodes' lists (but
>there's nothing in the 'Decommissioning Nodes' list). If I look at the
>datanode logs on host1 or host2, I still see blocks being copied in, and it
>does not appear that any additional replication is happening.
>What am I missing during the decommission process?
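For what it's worth, once the include/exclude files are set up you can watch the transition from the command line as well as the web UI. A sketch, not verified against your setup:

```shell
# Tell the namenode to re-read dfs.hosts and dfs.hosts.exclude
hadoop dfsadmin -refreshNodes

# The per-datanode report shows a "Decommission Status" field, which
# should move from "Decommission in progress" to "Decommissioned"
# once the node's blocks have been re-replicated elsewhere.
hadoop dfsadmin -report
```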
