hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-124) don't permit two datanodes to run from same dfs.data.dir
Date Wed, 31 May 2006 18:10:31 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-124?page=comments#action_12414118 ] 

Doug Cutting commented on HADOOP-124:
-------------------------------------

This adds a number of new public classes that I'm not certain should be public.  Should user
code ever need to access a DataStorage, DatanodeID, or DatanodeRegistration, or are these only
used internally?  Also, several of these exceptions appear to be used only internally, but
I'm not certain about all of them.  Would you object if I simply made all of these new classes
package-private?  Then we can make more of them public later as the need arises.
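
For illustration, the proposal amounts to dropping the public modifier so the classes are
visible only within their own package. A minimal sketch follows; the package name and the
empty class bodies are placeholders for illustration, not the contents of the attached patch:

    package org.apache.hadoop.dfs;

    // public: any user code, in any package, can reference this class,
    // so it becomes part of the supported API surface.
    public class DatanodeRegistration { }

    // Package-private (no access modifier): only classes in the same
    // package can reference it, so it remains an internal detail.
    class DatanodeID { }

Widening visibility later is backward-compatible, while narrowing it after a release breaks
user code, which is the argument for starting package-private.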

> don't permit two datanodes to run from same dfs.data.dir
> --------------------------------------------------------
>
>          Key: HADOOP-124
>          URL: http://issues.apache.org/jira/browse/HADOOP-124
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.2
>  Environment: ~30 node cluster
>     Reporter: Bryan Pendleton
>     Assignee: Konstantin Shvachko
>     Priority: Critical
>      Fix For: 0.3
>  Attachments: DatanodeRegister.txt, Hadoop-124-v3.patch, Hadoop-124.patch
>
> DFS files are still rotting.
> I suspect there's a problem with block accounting / detecting identical hosts in the
> namenode. I have 30 physical nodes with varying numbers of local disks, so my current
> 'bin/hadoop dfs -report' shows 80 nodes after a full restart. However, when I discovered
> the problem (which cost me about 500 GB of temporary data because of missing blocks in
> some of the larger chunks), -report showed 96 nodes. I suspect extra datanodes were
> somehow running against the same paths, and the namenode was counting them as replicated
> instances. Those blocks then looked over-replicated, so one of the duplicate datanodes
> was told to delete its local copy, and the block was actually lost.
> I will debug it further the next time the situation arises. This is at least the fifth
> time I've had a large amount of file data "rot" in DFS since January.
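
The failure mode Bryan describes (two datanode processes sharing one dfs.data.dir, so the
namenode sees phantom replicas and deletes a "surplus" copy) is typically prevented by
taking an exclusive lock on the storage directory at startup. A minimal sketch of that idea
in Java follows; the in_use.lock file name, the class name, and the placement are
illustrative assumptions, not taken from the attached patches:

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileLock;

    // Sketch: a datanode refuses to start if another process already
    // holds its data directory. The lock is held for the lifetime of
    // the process and released by the OS when the process exits.
    class StorageLock {
        static FileLock lockStorage(File dataDir) throws IOException {
            // Hypothetical lock-file name, for illustration only.
            File lockFile = new File(dataDir, "in_use.lock");
            RandomAccessFile file = new RandomAccessFile(lockFile, "rws");
            FileLock lock = file.getChannel().tryLock();
            if (lock == null) {
                file.close();
                throw new IOException("Cannot lock storage directory "
                    + dataDir + ": already in use by another datanode");
            }
            return lock;
        }
    }

With a guard like this in place, a second datanode misconfigured to point at the same
dfs.data.dir fails fast at startup instead of double-registering its blocks with the
namenode.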

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

