hadoop-common-user mailing list archives

From Chris Grier <gr...@imchris.org>
Subject decommissioning datanodes
Date Fri, 08 Jun 2012 18:46:47 GMT
Hello,

I'm trying to figure out how to decommission datanodes. Here's what
I do:

In hdfs-site.xml I have:

<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>

Add to exclude file:

host1
host2

Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
there's nothing in the 'Decommissioning Nodes' list). If I look at the
datanode logs on host1 or host2, I still see blocks being copied in, and
it does not appear that any additional replication is happening.
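For reference, here is the whole sequence condensed into shell commands (paths match my install under /opt/hadoop/hadoop-1.0.0; the exclude file is the one named by dfs.hosts.exclude above):

```shell
# Populate the exclude file referenced by dfs.hosts.exclude in hdfs-site.xml
echo host1 >  /opt/hadoop/hadoop-1.0.0/conf/exclude
echo host2 >> /opt/hadoop/hadoop-1.0.0/conf/exclude

# Tell the namenode to re-read its include/exclude lists
hadoop dfsadmin -refreshNodes

# Check cluster state (node status should show "Decommission in progress")
hadoop dfsadmin -report
```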

What am I missing during the decommission process?

-Chris
