hadoop-mapreduce-issues mailing list archives

From "Rodrigo Schmidt (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted
Date Tue, 02 Mar 2010 20:47:31 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12840317#action_12840317 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:
--------------------------------------------

As in my local test run, the only test that failed on Hudson was org.apache.hadoop.mapred.TestMiniMRLocalFS.testWithLocal,
which is unrelated to this patch and is already broken in trunk.

> RAID should regenerate parity files if they get deleted
> -------------------------------------------------------
>
>                 Key: MAPREDUCE-1510
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1510
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: contrib/raid
>            Reporter: Rodrigo Schmidt
>            Assignee: Rodrigo Schmidt
>         Attachments: MAPREDUCE-1510.1.patch, MAPREDUCE-1510.2.patch, MAPREDUCE-1510.patch
>
>
> Currently, if a source file has a replication factor lower than or equal to that expected
> by RAID, the file is skipped and no parity file is generated. I don't think this is good
> behavior, since parity files can get wrongly deleted, leaving the source file with a low
> replication factor. In that case, RAID should be able to recreate the parity file.
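The policy change described in the issue can be sketched as follows. This is a hedged simplification, not the actual contrib/raid code: the function names, parameters, and the `parity_exists` flag are invented for illustration only.

```python
def should_raid_old(src_replication: int, target_replication: int) -> bool:
    """Old behavior: skip any file whose replication is already at or below
    the level RAID expects, assuming it has already been raided -- even if
    its parity file was deleted."""
    return src_replication > target_replication


def should_raid_proposed(src_replication: int, target_replication: int,
                         parity_exists: bool) -> bool:
    """Proposed behavior: also (re)generate the parity file whenever it is
    missing, so a wrongly deleted parity file gets recreated instead of
    leaving the source under-protected."""
    return src_replication > target_replication or not parity_exists


# A file at the target replication with a deleted parity file:
# the old check skips it, the proposed check repairs it.
print(should_raid_old(2, 3))                  # skipped
print(should_raid_proposed(2, 3, False))      # regenerated
```

The key difference is the extra `parity_exists` condition: low replication alone is no longer treated as proof that a valid parity file exists.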

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

