hadoop-common-dev mailing list archives

From "Carlos Valiente (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5257) Export namenode/datanode functionality through a pluggable RPC layer
Date Tue, 17 Mar 2009 08:06:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12682589#action_12682589 ]

Carlos Valiente commented on HADOOP-5257:
-----------------------------------------

bq. Moving Plugin to core means that you will not be able to call HDFS specific methods in
unit tests you will probably write later for testing.

My idea was to move just the {{Plugin}} interface to core, and keep the implementation classes
in HADOOP-4707 (and their unit tests) under the appropriate HDFS packages --- but perhaps
it's just simpler to leave {{Plugin}} where it lives now. Any suggestions?
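
To make that split concrete, here's a rough sketch of how an implementation class could
live under the HDFS packages (next to its unit tests) while only the {{Plugin}} base class
quoted below moves to core. Class and package names here are placeholders, not part of
the attached patches:

{code}
// Illustrative only -- not part of the attached patches.
// Because the implementation lives under the HDFS tree, it (and its unit
// tests) can keep using HDFS-specific types such as DataNode and NameNode.
package org.apache.hadoop.hdfs.server.datanode;

import org.apache.hadoop.hdfs.server.namenode.NameNode;
// Import of Plugin omitted on purpose: which package it ends up in is
// exactly the open question here.

public class ExampleDatanodePlugin extends Plugin {
    private DataNode datanode;

    public void datanodeStarted(DataNode datanode) {
        this.datanode = datanode;   // HDFS-specific type, fine inside hdfs.*
    }

    public void datanodeStopping() {
        this.datanode = null;
    }

    public void namenodeStarted(NameNode namenode) {
        // A data-node-only plug-in ignores name node events.
    }

    public void namenodeStopping() {
    }
}
{code}

A unit test for such a class would sit next to it in the HDFS test tree, so the concern
quoted above about calling HDFS-specific methods would only affect the {{Plugin}} class
itself, not the implementations.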

> Export namenode/datanode functionality through a pluggable RPC layer
> --------------------------------------------------------------------
>
>                 Key: HADOOP-5257
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5257
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Carlos Valiente
>            Priority: Minor
>         Attachments: HADOOP-5257-v2.patch, HADOOP-5257-v3.patch, HADOOP-5257-v4.patch,
HADOOP-5257-v5.patch, HADOOP-5257.patch
>
>
> Adding support for pluggable components would allow exporting DFS functionality using
arbitrary protocols, like Thrift or Protocol Buffers. I'm opening this issue on Dhruba's suggestion
in HADOOP-4707.
> Plug-in implementations would extend this base class:
> {code}abstract class Plugin {
>     public abstract datanodeStarted(DataNode datanode);
>     public abstract datanodeStopping();
>     public abstract namenodeStarted(NameNode namenode);
>     public abstract namenodeStopping();
> }{code}
> Name node instances would then start the plug-ins according to a configuration object,
and would also shut them down when the node goes down:
> {code}public class NameNode {
>     // [..]
>     private List<Plugin> plugins;
>     private void initialize(Configuration conf) {
>         // [...]
>         plugins = PluginManager.loadPlugins(conf);
>         for (Plugin p : plugins)
>             p.namenodeStarted(this);
>     }
>     // [..]
>     public void stop() {
>         if (stopRequested)
>             return;
>         stopRequested = true;
>         for (Plugin p : plugins)
>             p.namenodeStopping();
>         // [..]
>     }
>     // [..]
> }{code}
> Data nodes would do a similar thing in {{DataNode.startDatanode()}} and {{DataNode.shutdown()}}.
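
For reference, the {{PluginManager.loadPlugins(conf)}} call in the snippet above could be
implemented roughly as follows. The class, its package and the configuration key are
placeholders, not part of the attached patches:

{code}
package org.apache.hadoop.hdfs;

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Import of Plugin omitted; see the packaging discussion above.

public class PluginManager {
    /** Comma-separated list of plug-in class names (illustrative key). */
    public static final String PLUGINS_KEY = "dfs.plugins";

    /** Instantiates every plug-in class listed under PLUGINS_KEY. */
    public static List<Plugin> loadPlugins(Configuration conf) {
        List<Plugin> plugins = new ArrayList<Plugin>();
        String[] classnames = conf.getStrings(PLUGINS_KEY);
        if (classnames == null) {
            return plugins;              // No plug-ins configured.
        }
        for (String classname : classnames) {
            try {
                Class<?> cls = conf.getClassByName(classname);
                plugins.add((Plugin) ReflectionUtils.newInstance(cls, conf));
            } catch (ClassNotFoundException e) {
                throw new RuntimeException("Unable to load plugin class "
                                           + classname, e);
            }
        }
        return plugins;
    }
}
{code}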

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

