hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-703) Replace current fault injection implementation with one from Common
Date Fri, 06 Nov 2009 17:55:32 GMT

    [ https://issues.apache.org/jira/browse/HDFS-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12774337#action_12774337 ]

Todd Lipcon commented on HDFS-703:
----------------------------------

I agree the code duplication is annoying, but post-split I think it makes sense. If Common
makes a change, we expect them to test it in Common, but they might not notice if the change
breaks MapReduce and HDFS. Down the line the build processes may diverge further, in which
case having separate copies is good.

Thanks for checking in the file. My trunk builds again :)

> Replace current fault injection implementation with one from Common
> -------------------------------------------------------------------
>
>                 Key: HDFS-703
>                 URL: https://issues.apache.org/jira/browse/HDFS-703
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 0.22.0
>            Reporter: Konstantin Boudnik
>            Assignee: Konstantin Boudnik
>             Fix For: 0.22.0
>
>         Attachments: HDFS-703.patch, HDFS-703.patch, HDFS-703.patch
>
>
> After HADOOP-6204 has been implemented, HDFS no longer needs its own separate implementation
> of the fault injection framework. Instead, it should reuse the one from Common.
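
For context, the fault injection framework being consolidated here is AspectJ based: aspects
woven into the test build throw artificial failures at selected join points. As a rough sketch
of the idea only (the aspect name, pointcut, and probability constant below are illustrative,
not the actual HADOOP-6204 API):

    // Illustrative fault-injection aspect; names and pointcut target are hypothetical.
    // The real framework derives injection probabilities from configuration rather
    // than a hard-coded constant.
    package org.apache.hadoop.fi;

    import java.io.IOException;
    import java.util.Random;

    public aspect ExampleFaultInjectAspect {

      private static final Random RAND = new Random();
      // Chance of injecting a fault at each matched join point.
      private static final float FAULT_PROBABILITY = 0.1f;

      // Match public datanode methods that already declare IOException.
      pointcut faultPoint():
        execution(public * org.apache.hadoop.hdfs.server.datanode..*.*(..) throws IOException);

      // Before the intercepted call runs, randomly throw to simulate a disk or network failure.
      before() throws IOException : faultPoint() {
        if (RAND.nextFloat() < FAULT_PROBABILITY) {
          throw new IOException("Injected fault for testing");
        }
      }
    }

Aspects like this are typically woven only into the fault-injection test build, so production
jars are unaffected.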

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

