hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Moved: (HADOOP-19) Datanode corruption
Date Mon, 06 Feb 2006 18:49:57 GMT
     [ http://issues.apache.org/jira/browse/HADOOP-19?page=all ]

Doug Cutting moved NUTCH-106 to HADOOP-19:
------------------------------------------

    Project: Hadoop  (was: Nutch)
        Key: HADOOP-19  (was: NUTCH-106)
    Version:     (was: 0.8-dev)

> Datanode corruption
> -------------------
>
>          Key: HADOOP-19
>          URL: http://issues.apache.org/jira/browse/HADOOP-19
>      Project: Hadoop
>         Type: Bug
>     Reporter: Rod Taylor
>     Priority: Critical

>
> Our admins accidentally started a second nutch datanode pointing to the same directories
> as one already running (same machine), which in turn caused the entire contents of the
> datanode to disappear.
> This happened because the locking was based on the username (since fixed in our start
> scripts) and the two daemons were started as different users.
> The ndfs.name.dir and ndfs.data.dir directories were both completely devoid of content,
> where they had about 150GB not all that much earlier.
> I think the solution is improved interlocking within the data directory itself (a file
> locked with flock or something similar).
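
The interlocking proposed above can be sketched as follows: the daemon takes an exclusive, non-blocking flock on a lock file kept inside the data directory, so a second daemon pointed at the same directory fails fast at startup instead of corrupting the blocks. This is a minimal illustration, not the actual datanode code; the lock file name `in_use.lock` and the function name are illustrative.

```python
import fcntl
import os

def acquire_datanode_lock(data_dir):
    """Hold an exclusive flock on a lock file inside data_dir.

    A second process (or a second start under a different user) calling
    this against the same directory fails immediately, regardless of
    which username it runs as, because the lock lives with the data.
    """
    # "in_use.lock" is an illustrative name, not taken from the source.
    lock_path = os.path.join(data_dir, "in_use.lock")
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        # LOCK_NB makes the attempt fail instead of blocking.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        os.close(fd)
        raise RuntimeError(
            "%s appears to be in use by another datanode" % data_dir
        )
    # Keep the fd open for the daemon's lifetime; the kernel releases
    # the lock automatically when the process exits or crashes.
    return fd
```

Because flock follows the open file description, the lock is released even on an unclean exit, so no stale-lockfile cleanup is needed.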

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

