hadoop-common-dev mailing list archives

From "Sameer Paranjpye (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-19) Datanode corruption
Date Fri, 24 Mar 2006 21:24:24 GMT
     [ http://issues.apache.org/jira/browse/HADOOP-19?page=all ]

Sameer Paranjpye updated HADOOP-19:
-----------------------------------

    Fix Version: 0.1
        Version: 0.1

> Datanode corruption
> -------------------
>
>          Key: HADOOP-19
>          URL: http://issues.apache.org/jira/browse/HADOOP-19
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.1
>     Reporter: Rod Taylor
>     Assignee: Doug Cutting
>     Priority: Critical
>      Fix For: 0.1

>
> Our admins accidentally started a second Nutch datanode pointing to the same directories as one already running (on the same machine), which in turn caused the entire contents of the datanode to disappear.
> This happened because the locking was based on the username (since fixed in our start scripts), and the two datanodes were started as two different users.
> The ndfs.name.dir and ndfs.data.dir directories were both completely empty, though they had held about 150GB not long before.
> I think the solution is improved interlocking within the data directory itself (a lock file held with flock or something similar).
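A minimal sketch of the suggested fix, in Java: each datanode tries to take an exclusive OS-level lock on a lock file inside the data directory itself, and refuses to start if the lock is already held. The names here (`in_use.lock`, `DataDirLock`, `tryLockDir`) are illustrative assumptions, not the actual Hadoop implementation:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

public class DataDirLock {

    // Try to take an exclusive lock on <dir>/in_use.lock.
    // Returns the lock if acquired, or null if another datanode
    // already holds it. FileChannel.tryLock() maps to an OS-level
    // lock (flock/fcntl on Unix), so it works across users and
    // processes, unlike a username-based scheme.
    static FileLock tryLockDir(File dir) throws Exception {
        File lockFile = new File(dir, "in_use.lock");
        RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
        FileChannel channel = raf.getChannel();
        try {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                raf.close();  // lock held by another process
            }
            return lock;
        } catch (OverlappingFileLockException e) {
            raf.close();      // lock already held within this JVM
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            "ndfs-data-demo");
        dir.mkdirs();
        FileLock first = tryLockDir(dir);   // first datanode starts
        FileLock second = tryLockDir(dir);  // second one must be refused
        System.out.println("first acquired:  " + (first != null));
        System.out.println("second acquired: " + (second != null));
        if (first != null) {
            first.release();
            first.channel().close();
        }
    }
}
```

The key design point is that the lock lives in the data directory, so any process pointed at the same directory contends for the same lock regardless of which user started it.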

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

