hadoop-mapreduce-issues mailing list archives

From "Rodrigo Schmidt (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1510) RAID should regenerate parity files if they get deleted
Date Tue, 02 Mar 2010 02:06:05 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839968#action_12839968 ]

Rodrigo Schmidt commented on MAPREDUCE-1510:
--------------------------------------------

Passed all contrib unit tests.

I also checked the logs and confirmed that the RaidNode was binding to randomly chosen free ports rather than the default one.

This patch should be ready to commit, pending human review.

> RAID should regenerate parity files if they get deleted
> -------------------------------------------------------
>
>                 Key: MAPREDUCE-1510
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1510
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: contrib/raid
>            Reporter: Rodrigo Schmidt
>            Assignee: Rodrigo Schmidt
>         Attachments: MAPREDUCE-1510.1.patch, MAPREDUCE-1510.2.patch, MAPREDUCE-1510.patch
>
>
> Currently, if a source file has a replication factor lower than or equal to the one expected
> by RAID, the file is skipped and no parity file is generated. I don't think this is good
> behavior, since parity files can be wrongly deleted, leaving the source file with a low
> replication factor and no parity. In that case, RAID should be able to recreate the parity file.
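
The skip logic described above can be sketched as follows. This is a hypothetical illustration, not the actual RaidNode code; the class and method names are invented for the example:

```java
// Hypothetical sketch of the decision the issue describes.
// ParityPolicy and shouldGenerateParity are illustrative names,
// not identifiers from contrib/raid.
public class ParityPolicy {
    /**
     * Decide whether a parity file should be generated for a source file.
     *
     * @param srcReplication    current replication factor of the source file
     * @param targetReplication replication factor RAID expects after raiding
     * @param parityExists      whether a parity file currently exists
     */
    public static boolean shouldGenerateParity(int srcReplication,
                                               int targetReplication,
                                               boolean parityExists) {
        if (parityExists) {
            // Parity is already in place; nothing to regenerate.
            return false;
        }
        // The old behavior would also skip here when
        // srcReplication <= targetReplication, leaving a source file
        // whose parity was deleted with low replication and no parity.
        // The proposed behavior: a missing parity file always triggers
        // (re)generation, regardless of the current replication factor.
        return true;
    }
}
```

With this policy, a source file already sitting at or below the target replication but missing its parity file would be picked up again, which is the regeneration behavior the issue asks for.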

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

