hadoop-mapreduce-issues mailing list archives

From "Rodrigo Schmidt (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted
Date Tue, 02 Mar 2010 01:34:05 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839952#action_12839952 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:
--------------------------------------------

Passed all unit tests except

    [junit] Test org.apache.hadoop.mapred.TestMiniMRLocalFS FAILED

But that test is already broken in trunk, and my patch doesn't touch anything related to it, so it doesn't count.

Now I'm running the contrib tests.

> RAID should regenerate parity files if they get deleted
> -------------------------------------------------------
>
>                 Key: MAPREDUCE-1510
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1510
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: contrib/raid
>            Reporter: Rodrigo Schmidt
>            Assignee: Rodrigo Schmidt
>         Attachments: MAPREDUCE-1510.1.patch, MAPREDUCE-1510.2.patch, MAPREDUCE-1510.patch
>
>
> Currently, if a source file has a replication factor lower than or equal to the one expected
> by RAID, the file is skipped and no parity file is generated. I don't think this is good
> behavior, since parity files can be wrongly deleted, leaving the source file with a low
> replication factor. In that case, RAID should be able to recreate the parity file.
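
A minimal sketch of the decision change described above. All names here (`shouldRaidOld`, `shouldRaidNew`, `parityExists`) are hypothetical illustrations of the idea, not code from the actual RaidNode or the attached patches:

```java
// Sketch of the file-selection logic described in the issue.
// All identifiers are hypothetical, not from the real contrib/raid code.
public class RaidPolicySketch {

    // Old behavior: skip any source file whose replication factor is
    // already at or below the level RAID would reduce it to.
    static boolean shouldRaidOld(int srcReplication, int targetReplication) {
        return srcReplication > targetReplication;
    }

    // Proposed behavior: also raid the file when its parity file is
    // missing, so a wrongly deleted parity file gets regenerated even
    // though the source file already has low replication.
    static boolean shouldRaidNew(int srcReplication, int targetReplication,
                                 boolean parityExists) {
        return srcReplication > targetReplication || !parityExists;
    }

    public static void main(String[] args) {
        // A file already at the reduced replication whose parity was
        // deleted: the old logic skips it, the new logic raids it again.
        System.out.println(shouldRaidOld(2, 2));
        System.out.println(shouldRaidNew(2, 2, false));
    }
}
```

With the extra `parityExists` check, the file above is no longer skipped and its parity file is regenerated.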

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

