hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4480) data node process should not die if one dir goes bad
Date Wed, 22 Oct 2008 17:29:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641914#action_12641914 ]

dhruba borthakur commented on HADOOP-4480:
------------------------------------------

I think a solution to this problem is needed, especially since you are seeing it occur
quite often. Is it caused by bad disks or buggy ext3 software? Any ideas on whether XFS on
Linux avoids this problem of disks going read-only?



> data node process should not die if one dir goes bad
> ----------------------------------------------------
>
>                 Key: HADOOP-4480
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4480
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Allen Wittenauer
>
> When multiple directories are configured for the data node process to use to store blocks,
> it currently exits when one of them is not writable. Instead, it should either completely
> ignore that directory, or attempt to continue reading from it and mark it unusable if reads
> fail.
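
A minimal sketch of the proposed behaviour (class and method names here are hypothetical, not the actual DataNode code): keep only the usable storage directories and abort only if none remain.

{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Illustrative sketch of the behaviour proposed in HADOOP-4480:
 * instead of exiting when one configured storage directory is
 * unusable, drop that directory and keep serving from the rest.
 */
public class VolumeCheckSketch {

    /** Returns the subset of configured directories that are usable. */
    static List<File> filterUsableDirs(List<File> configuredDirs) {
        List<File> usable = new ArrayList<File>();
        for (File dir : configuredDirs) {
            if (dir.isDirectory() && dir.canRead() && dir.canWrite()) {
                usable.add(dir);
            } else {
                // Current behaviour: the data node exits here.
                // Proposed behaviour: log and skip the bad directory.
                System.err.println("Skipping unusable storage dir: " + dir);
            }
        }
        return usable;
    }

    public static void main(String[] args) {
        List<File> configured = Arrays.asList(
                new File("/data/1/dfs/data"),
                new File("/data/2/dfs/data"));
        List<File> usable = filterUsableDirs(configured);
        if (usable.isEmpty()) {
            // Only give up when no directory at all is usable.
            throw new IllegalStateException("All storage directories failed");
        }
        System.out.println("Serving blocks from: " + usable);
    }
}
{code}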

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

