hadoop-common-dev mailing list archives

From "eric baldeschwieler (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-124) don't permit two datanodes to run from same dfs.data.dir
Date Tue, 09 May 2006 05:17:22 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-124?page=comments#action_12378570 ] 

eric baldeschwieler commented on HADOOP-124:
--------------------------------------------

I believe we have three problems to address:

1) The namenode needs to know to purge old, identical entries when a new datanode registers.
Otherwise we get rot.  See Doug's suggestions above.

2) You could have two or more datanodes on one server.  They always need to be unique.  We should
assign a unique ID to each datanode home directory and make sure the datanode is started with
a valid home directory as well.  I like the idea of assigning a unique ID to each datanode
home.

3) Your concern that two daemons might run on the same data dir.

We should address all of these concerns.  Your suggestion of a startup lock and a unique ID, plus
the hello method, should together handle all three.
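To make that concrete, here is a rough sketch of how the startup lock plus a persisted unique
storage ID could work.  This is illustrative Java, not the actual DataNode code; the class name
DataDirGuard and the file names in_use.lock and storage_id are made up for the example, and it
assumes a modern JDK for brevity.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class DataDirGuard {
    private final Path dataDir;
    private FileLock lock;  // held for the lifetime of the daemon

    public DataDirGuard(Path dataDir) {
        this.dataDir = dataDir;
    }

    // Take an exclusive lock on the data dir; fail fast if another daemon already holds it.
    public void lockDataDir() throws IOException {
        RandomAccessFile raf =
            new RandomAccessFile(dataDir.resolve("in_use.lock").toFile(), "rw");
        lock = raf.getChannel().tryLock();
        if (lock == null) {
            raf.close();
            throw new IOException("dfs.data.dir " + dataDir
                + " is already in use by another datanode");
        }
    }

    // Load the storage ID written on a previous run, or mint and persist a new one,
    // so the same directory always registers with the namenode under the same ID.
    public String loadOrCreateStorageId() throws IOException {
        Path idFile = dataDir.resolve("storage_id");
        if (Files.exists(idFile)) {
            return Files.readString(idFile).trim();
        }
        String id = "DS-" + UUID.randomUUID();
        Files.writeString(idFile, id);
        return id;
    }
}

With something along these lines, a second daemon pointed at the same dfs.data.dir fails
immediately at startup, and the same directory always re-registers under the same ID, so the
namenode can safely purge the stale entry when the datanode comes back.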


> don't permit two datanodes to run from same dfs.data.dir
> --------------------------------------------------------
>
>          Key: HADOOP-124
>          URL: http://issues.apache.org/jira/browse/HADOOP-124
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.2
>  Environment: ~30 node cluster
>     Reporter: Bryan Pendleton
>     Assignee: Konstantin Shvachko
>     Priority: Critical
>      Fix For: 0.3
>
> DFS files are still rotting.
> I suspect that there's a problem with block accounting / detecting identical hosts in the
> namenode. I have 30 physical nodes, with various numbers of local disks, meaning that my current
> 'bin/hadoop dfs -report' shows 80 nodes after a full restart. However, when I discovered the
> problem (which resulted in losing about 500 GB worth of temporary data because of missing
> blocks in some of the larger chunks), -report showed 96 nodes. I suspect somehow there were
> extra datanodes running against the same paths, and that the namenode was counting those as
> replicated instances, which then showed up over-replicated, and one of them was told to delete
> its local block, leading to the block actually getting lost.
> I will debug it more the next time the situation arises. This is at least the 5th time
> I've had a large amount of file data "rot" in DFS since January.


