hadoop-hdfs-user mailing list archives

From Tanping Wang <tanp...@yahoo-inc.com>
Subject RE: datanode down alert
Date Fri, 25 Feb 2011 19:18:02 GMT
Maybe grep for

2011-02-25 18:47:05,564 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Decommission
complete for node 102.1.1.1:50010

in the namenode log, to see whether the decommission has completed?
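For what it's worth, that check can be scripted. A minimal sketch follows; the log path and the canned sample line are assumptions for illustration only, so on a real cluster point NN_LOG at the actual namenode log instead:

```shell
# Sketch: look for the "Decommission complete" message in the namenode log.
# NN_LOG and the echoed sample line are stand-ins so the sketch runs end to
# end; replace them with your real namenode log file.
NN_LOG=namenode.log.sample
NODE="102.1.1.1:50010"

# Stand-in log content (an assumption, not real cluster output).
echo "2011-02-25 18:47:05,564 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Decommission complete for node $NODE" > "$NN_LOG"

if grep -q "Decommission complete for node $NODE" "$NN_LOG"; then
  echo "decommission finished for $NODE"
else
  echo "no completion record yet for $NODE"
fi
```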

I remember a similar problem was reported just a few days ago (in the attachment) by James Litton.
   According to James, no blocks were missing after the node was removed; however, it was unclear
when, or if, the decommission process finished.
From: Rita [mailto:rmorgan466@gmail.com]
Sent: Thursday, February 24, 2011 5:59 AM
To: hdfs-user@hadoop.apache.org
Cc: Harsh J
Subject: Re: datanode down alert

Thanks for the response.

I am asking because of the following issue, https://issues.apache.org/jira/browse/HDFS-694

When I decommission a datanode it shows up in the "Dead" list on the web GUI, but at the same time
it also shows up in the "Live" nodes.


I want to make sure this node is fully decommissioned before I remove it from the cluster.


On Tue, Feb 15, 2011 at 9:13 AM, Harsh J <qwertymaniac@gmail.com>
wrote:
I know of one way, but I'm not sure it's what you're looking for:

DFSClient.datanodeReport(DataNodeReportType.DEAD) should give you a
list of all DEAD data nodes as per the NameNode.

These reports are expensive, though, so do not request them often (each call is an RPC to the NameNode).
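From the shell, similar information can be scraped out of `hadoop dfsadmin -report` (which also hits the NameNode, so the same cost caveat applies). A hedged sketch, using canned sample text because the exact per-node report layout varies by Hadoop version:

```shell
# Sketch: pull dead-node names out of a dfsadmin-style report. The two-line
# "Name:" / "State:" layout below is a simplified assumption, NOT the literal
# output of `hadoop dfsadmin -report`; adapt the awk patterns to your version.
report="Name: 10.0.0.1:50010
State: Live
Name: 102.1.1.1:50010
State: Dead"

# Remember the most recent node name; print it when its state line says Dead.
echo "$report" | awk '/^Name:/ {node=$2} /^State: Dead/ {print node}'
```

On a live cluster you would pipe the real command into the same filter, i.e. `hadoop dfsadmin -report | awk ...`, with the patterns matched to that version's actual output.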

On Tue, Feb 15, 2011 at 6:51 PM, Rita <rmorgan466@gmail.com>
wrote:
> Is there a programmatic way to determine if a datanode is down?
>
>
>
> --
> --- Get your facts first, then you can distort them as you please.--
>


--
Harsh J
www.harshj.com



--
--- Get your facts first, then you can distort them as you please.--
