hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-94) disallow more than one datanode running on one computer sharing the same data directory
Date Mon, 01 May 2006 20:20:47 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-94?page=comments#action_12377277 ] 

Doug Cutting commented on HADOOP-94:
------------------------------------

I agree that we should put some sort of lock file in the data directory, but I don't think
we should move the existing pid file, since that is managed by the generic daemon start/stop
code. Rather, we could use the NIO file-locking code to create an exclusive lock on a file.
The kernel will automatically release this lock if/when the JVM exits.
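A minimal sketch of that approach, using `java.nio.channels.FileChannel.tryLock()` (the lock-file name `in_use.lock` and the class/method names here are illustrative, not anything from the Hadoop code):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class DataDirLock {
    /**
     * Try to take an exclusive lock on a file inside the data directory.
     * If another datanode process already holds the lock, tryLock()
     * returns null and we refuse to start. The OS releases the lock
     * automatically when the holding JVM exits, so no stale-pid-file
     * cleanup is needed.
     */
    public static FileLock tryLockDataDir(File dataDir) throws IOException {
        File lockFile = new File(dataDir, "in_use.lock"); // hypothetical name
        RandomAccessFile raf = new RandomAccessFile(lockFile, "rws");
        FileLock lock = raf.getChannel().tryLock();
        if (lock == null) {
            raf.close();
            throw new IOException("Data directory " + dataDir
                    + " appears to be in use by another datanode");
        }
        return lock; // hold for the lifetime of the process
    }
}
```

Note that within a single JVM a second overlapping `tryLock()` throws `OverlappingFileLockException` rather than returning null; the null return covers the cross-process case described in this issue.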

> disallow more than one datanode running on one computer sharing the same data directory
> ---------------------------------------------------------------------------------------
>
>          Key: HADOOP-94
>          URL: http://issues.apache.org/jira/browse/HADOOP-94
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.2
>     Reporter: Hairong Kuang
>      Fix For: 0.3
>
> Currently dfs disallows more than one datanode running on the same computer if they are
> started up using the same hadoop conf dir. However, this does not prevent more than one
> datanode from being started, each using a different conf dir (strictly speaking, a
> different pid file). If every machine has two such datanodes running, the namenode will be
> kept busy deleting and replicating blocks, which may eventually lead to block loss.
> Suggested solution: put the pid file in the data directory and disallow configuring its location.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

