hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5257) Export namenode/datanode functionality through a pluggable RPC layer
Date Mon, 30 Mar 2009 21:19:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12693909#action_12693909 ]

Hadoop QA commented on HADOOP-5257:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12403497/HADOOP-5257-v8.patch
  against trunk revision 759932.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit.  The applied patch does not increase the total number of release audit
warnings.

    +1 core tests.  The patch passed core unit tests.

    -1 contrib tests.  The patch failed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/79/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/79/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/79/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/79/console

This message is automatically generated.

> Export namenode/datanode functionality through a pluggable RPC layer
> --------------------------------------------------------------------
>
>                 Key: HADOOP-5257
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5257
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Carlos Valiente
>            Priority: Minor
>         Attachments: HADOOP-5257-v2.patch, HADOOP-5257-v3.patch, HADOOP-5257-v4.patch,
HADOOP-5257-v5.patch, HADOOP-5257-v6.patch, HADOOP-5257-v7.patch, HADOOP-5257-v8.patch, HADOOP-5257.patch
>
>
> Adding support for pluggable components would allow exporting DFS functionality using arbitrary protocols, like Thrift or Protocol Buffers. I'm opening this issue on Dhruba's suggestion in HADOOP-4707.
> Plug-in implementations would extend this base class:
> {code}abstract class Plugin {
>     public abstract void datanodeStarted(DataNode datanode);
>     public abstract void datanodeStopping();
>     public abstract void namenodeStarted(NameNode namenode);
>     public abstract void namenodeStopping();
> }{code}
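> As an illustration only, a concrete plug-in might look like the sketch below, in the same abbreviated style as the proposal; the comments mark where a real Thrift or Protocol Buffers server would be started and stopped, which is left entirely open here:
> {code}// Sketch of a hypothetical plug-in: it records the node reference and
> // marks where an exporting RPC server would be started/stopped.
> public class MyRpcPlugin extends Plugin {
>     private NameNode namenode;
>     private DataNode datanode;
>
>     public void namenodeStarted(NameNode namenode) {
>         this.namenode = namenode;
>         // start the exporting server here, e.g. a Thrift service
>         // that delegates calls to the namenode
>     }
>
>     public void namenodeStopping() {
>         // stop the server started in namenodeStarted()
>         namenode = null;
>     }
>
>     public void datanodeStarted(DataNode datanode) {
>         this.datanode = datanode;
>         // start the datanode-side exporting server here
>     }
>
>     public void datanodeStopping() {
>         // stop the server started in datanodeStarted()
>         datanode = null;
>     }
> }{code}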
> Name node instances would then start the plug-ins according to a configuration object,
and would also shut them down when the node goes down:
> {code}public class NameNode {
>     // [..]
>     private List<Plugin> plugins;
>
>     private void initialize(Configuration conf) {
>         // [...]
>         plugins = PluginManager.loadPlugins(conf);
>         for (Plugin p : plugins)
>             p.namenodeStarted(this);
>     }
>     // [..]
>     public void stop() {
>         if (stopRequested)
>             return;
>         stopRequested = true;
>         for (Plugin p : plugins)
>             p.namenodeStopping();
>         // [..]
>     }
>     // [..]
> }{code}
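> One possible shape for {{PluginManager.loadPlugins(conf)}} is sketched below; the configuration key name ({{dfs.plugins}}) and the use of {{Configuration.getClasses}} / {{ReflectionUtils.newInstance}} are assumptions for illustration, not part of the proposal:
> {code}import java.util.ArrayList;
> import java.util.List;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.util.ReflectionUtils;
>
> // Sketch: loads the Plugin subclasses named by a (hypothetical)
> // comma-separated "dfs.plugins" configuration key.
> public class PluginManager {
>     public static List<Plugin> loadPlugins(Configuration conf) {
>         List<Plugin> plugins = new ArrayList<Plugin>();
>         for (Class<?> cls : conf.getClasses("dfs.plugins", new Class<?>[0])) {
>             // ReflectionUtils instantiates the class and hands it the
>             // configuration if the class implements Configurable.
>             plugins.add((Plugin) ReflectionUtils.newInstance(cls, conf));
>         }
>         return plugins;
>     }
> }{code}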
> Data nodes would do a similar thing in {{DataNode.startDatanode()}} and {{DataNode.shutdown()}}, as sketched below.
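> A rough sketch of that datanode-side wiring, mirroring the namenode example above (method names follow the text; the exact insertion points in the real {{DataNode}} code are left open):
> {code}public class DataNode {
>     // [..]
>     private List<Plugin> plugins;
>
>     void startDatanode(Configuration conf) {
>         // [...]
>         plugins = PluginManager.loadPlugins(conf);
>         for (Plugin p : plugins)
>             p.datanodeStarted(this);
>     }
>     // [..]
>     public void shutdown() {
>         for (Plugin p : plugins)
>             p.datanodeStopping();
>         // [..]
>     }
> }{code}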

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

